Active NameNode TransferFsImage and EditLogFileOutputStream Stuck In FileChannel Force/Truncate
We're in the process of upgrading our Hadoop cluster from 2.2.1 to 2.7.2 and are currently testing 2.7.2 in our pre-prod/backup cluster. We're seeing a lot of active NameNode failovers (sometimes as often as every 30 minutes), especially while running DistCp to copy data from our production cluster for users to test with. We saw similar failovers occasionally on 2.2.1, but not nearly as often (once every month or two). We haven't been able to verify that it was the same root cause on 2.2.1, since the files/logs have rolled over since the last time it happened.
So here's the chain of events we've found so far. We're hoping someone can provide further direction.
The standby NameNode's checkpointing process succeeds locally and issues the image PUT request in TransferFsImage.uploadImage. The active NameNode finishes downloading the fsimage.ckpt file, but when it issues the fos.getChannel().force(true) call in TransferFsImage.receiveFile, it appears to get stuck in native code. The standby NameNode then gets a SocketTimeoutException; the timeout occurs 60 seconds after the last modification time shown in the "stat" output for the fsimage.ckpt file that the active NameNode pulled down.
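For anyone not near the source, the call we're stuck in is just the standard write-then-fsync idiom on a FileChannel. Below is a minimal sketch of the shape of that step (hypothetical names like ReceiveFileSketch/receiveToFile, not the actual TransferFsImage code):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.channels.FileChannel;

    public class ReceiveFileSketch {
        // Roughly the shape of the download-then-fsync step: stream the image
        // into a local file, then force data and metadata to disk. The
        // force(true) call is where our active NameNode thread sits in native code.
        static void receiveToFile(InputStream in, File dst) throws IOException {
            try (FileOutputStream fos = new FileOutputStream(dst)) {
                byte[] buf = new byte[64 * 1024];
                int n;
                while ((n = in.read(buf)) > 0) {
                    fos.write(buf, 0, n);
                }
                FileChannel fc = fos.getChannel();
                fc.force(true);   // fsync; blocks until the kernel completes the flush
            }
        }
    }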
Right after this happens (~30 seconds after the last modification to the fsimage.ckpt file) we see a similar issue with the edit log roll. The standby NameNode's EditLogTailer triggers a roll of the edit log on the active NameNode. We see the active NameNode enter its rollEditLog process, and then either the endCurrentLogSegment call gets stuck in EditLogFileOutputStream.close on the fc.truncate(fc.position()) call, or the startLogSegment call gets stuck in EditLogFileOutputStream.flushAndSync on the fc.force(true) call. In both cases the thread is stuck in native code. Comparing against the last modification time in the "stat" output for the edits file, the standby NameNode's RPC call times out 20 seconds later.
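Similarly, the two edit-log call sites boil down to a force(true) on flush and a truncate(position()) on close. A simplified sketch of that pattern, again with hypothetical names rather than the real EditLogFileOutputStream source:

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;

    public class EditLogSegmentSketch {
        private final RandomAccessFile raf;
        private final FileChannel fc;

        EditLogSegmentSketch(File editsFile) throws IOException {
            raf = new RandomAccessFile(editsFile, "rw");
            fc = raf.getChannel();
        }

        // flushAndSync-style step: our startLogSegment hang is in the force(true) here.
        void flushAndSync() throws IOException {
            fc.force(true);   // blocks in native fsync until the flush completes
        }

        // close-style step: our endCurrentLogSegment hang is in the truncate() here,
        // which cuts the file back to the last byte actually written.
        void closeSegment() throws IOException {
            fc.truncate(fc.position());
            fc.close();
            raf.close();
        }
    }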
The rollEditLog call ends up holding the FSNamesystem's write lock on fsLock, which causes all other RPC calls to pile up waiting for read locks until ZKFC's health monitor times out and signals for the NameNode to be killed. We patched the SshFenceByTcpPort code to issue a kill -3 so we could get a thread dump before it kills the active NameNode.
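The pile-up itself looks like ordinary read/write-lock starvation: one writer blocked in native I/O while holding the write lock, and every read-locked RPC handler queuing behind it. A tiny standalone sketch of that behavior (hypothetical class and thread names, and a plain ReentrantReadWriteLock rather than Hadoop's actual lock wrapper):

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class LockPileUpSketch {
        private static final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock(true);

        public static void main(String[] args) {
            // "rollEditLog" thread: takes the write lock, then blocks indefinitely,
            // standing in for the stuck fc.force()/fc.truncate() call.
            new Thread(() -> {
                fsLock.writeLock().lock();
                try {
                    Thread.sleep(Long.MAX_VALUE);
                } catch (InterruptedException ignored) {
                    // interrupted; fall through to unlock
                } finally {
                    fsLock.writeLock().unlock();
                }
            }, "rollEditLog").start();

            // RPC handler threads: every read-locked operation queues behind the
            // writer, which is what the ZKFC health check eventually times out on.
            for (int i = 0; i < 4; i++) {
                new Thread(() -> {
                    fsLock.readLock().lock();
                    try {
                        // a normal read-locked RPC would run here
                    } finally {
                        fsLock.readLock().unlock();
                    }
                }, "IPC Server handler " + i).start();
            }
        }
    }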
We're running on CentOS 6 (kernel 2.6.32) with ext4 filesystems mounted with noatime. The fsimage file is typically ~7.2GB and the edits files are typically ~1-2MB. The cluster running 2.7.2 has 256 nodes. We're on JDK 1.8.0_92 (and compiled against it as well, with a few JDK 8-specific patches).
See below for the relevant stack traces showing the FileChannel code stuck in native code. I can also provide the full thread dumps and any relevant configs if needed.
We've looked in JIRA and searched online but haven't found anything directly related. Any insight as to whether this is a bug in Hadoop or a side effect of something else? When the cluster is mostly idle, everything seems fine. Our dev/test clusters haven't had any issues with the upgrade, but they're 10 nodes or fewer and carry little load.