Build failed in Hudson: Hadoop-Nightly #366



hudson-6
See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/366/changes

Changes:

[tomwhite] HADOOP-2532.  Add to MapFile a getClosest method that returns the key that comes just before if the key is not present.  Contributed by stack.
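The getClosest method added by HADOOP-2532 returns the exact key when present, and otherwise the key that sorts just before the requested one. A minimal sketch of that lookup semantics over a sorted key list (illustration only — the function name and use of a plain Python list are assumptions, not the actual MapFile.Reader API):

```python
import bisect

def get_closest_before(sorted_keys, key):
    """Return `key` if present, else the largest key sorting before it.

    Returns None when every stored key sorts after `key`,
    i.e. there is no key "just before" the requested one.
    """
    # bisect_right finds the insertion point to the right of any
    # existing equal key, so sorted_keys[i - 1] is either the exact
    # match or the closest smaller key.
    i = bisect.bisect_right(sorted_keys, key)
    if i == 0:
        return None
    return sorted_keys[i - 1]
```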

[tomwhite] HADOOP-2494.  Set +x on contrib/*/bin/* in packaged tar bundle.  Contributed by stack.

[stack] HADOOP-2598 Remove chatty debug logging from 2443 patch

[stack] HADOOP-2589 Change class/package names from shell to hql

[shv] HADOOP-2601. Start name-node on a free port for TestNNThroughputBenchmark. Contributed by Konstantin Shvachko

[jimk] HADOOP-2558 modified CHANGES.txt to indicate that this is an incompatible change.

[dhruba] HADOOP-2481. NNBench reports its progress periodically.
(Hairong Kuang via dhruba)

[dhruba] HADOOP-2398. Additional instrumentation for NameNode and RPC server.
Add support for accessing instrumentation statistics via JMX.
(Sanjay Radia via dhruba)

[omalley] HADOOP-2516.  Moved message for HADOOP-1819 to incompatible changes in change
log. (omalley)

[acmurthy] HADOOP-2574. Fixed mapred_tutorial.xml to correct minor errors with the WordCount examples.

[acmurthy] HADOOP-1876. Updating hadoop-default.html to reflect changes to hadoop-default.xml.

[acmurthy] HADOOP-2077. Added version and build information to STARTUP_MSG for all hadoop daemons to aid error-reporting, debugging etc.

------------------------------------------
[...truncated 87686 lines...]
    [junit] at java.lang.Thread.run(Thread.java:595)

    [junit] 2008-01-15 14:25:56,048 INFO  [main] hbase.StaticTestEnvironment(142): Shutting down Mini DFS
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 0
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 381.875 sec
    [junit] 2008-01-15 14:25:58,184 INFO  [main] hbase.HRegionServer$ShutdownThread(156): Starting shutdown thread.
    [junit] 2008-01-15 14:25:58,185 INFO  [main] hbase.HRegionServer$ShutdownThread(161): Shutdown thread complete
    [junit] Running org.apache.hadoop.hbase.util.TestBase64

    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.279 sec
    [junit] Running org.apache.hadoop.hbase.util.TestKeying
    [junit] Original url http://abc:bcd@.../index.html?query=something#middle, Transformed url r:http://abc:bcd@.../index.html?query=something#middle
    [junit] Original url file:///usr/bin/java, Transformed url file:///usr/bin/java
    [junit] Original url dns:www.powerset.com, Transformed url dns:www.powerset.com
    [junit] Original url dns://dns.powerset.com/www.powerset.com, Transformed url r:dns://com.powerset.dns/www.powerset.com
    [junit] Original url http://one.two.three/index.html, Transformed url r:http://three.two.one/index.html
    [junit] Original url https://one.two.three:9443/index.html, Transformed url r:https://three.two.one:9443/index.html
    [junit] Original url ftp://one.two.three/index.html, Transformed url r:ftp://three.two.one/index.html
    [junit] Original url filename, Transformed url filename
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.127 sec
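The TestKeying output above exercises a host-reversal transform: URLs with a `//authority` part get their dotted hostname reversed and an `r:` prefix, while plain files, `dns:` forms without an authority, and bare filenames pass through unchanged. A minimal Python sketch reproducing the visible behavior (the function name and the use of urllib are illustrative assumptions, not the actual org.apache.hadoop.hbase.util.Keying implementation):

```python
from urllib.parse import urlsplit, urlunsplit

def transform(url):
    """Reverse the dotted host of a URL and prefix 'r:' (sketch).

    URLs with no '//authority' component are returned unchanged,
    matching the pass-through cases in the test log above.
    """
    parts = urlsplit(url)
    if not parts.netloc:
        return url
    # Split off an optional port, reverse the dotted host components.
    host, _, port = parts.netloc.partition(':')
    reversed_host = '.'.join(reversed(host.split('.')))
    netloc = reversed_host + (':' + port if port else '')
    return 'r:' + urlunsplit(
        (parts.scheme, netloc, parts.path, parts.query, parts.fragment))
```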
    [junit] Running org.apache.hadoop.hbase.util.TestMigrate
    [junit] Starting DataNode 0 with dfs.data.dir: /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data1,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data2
    [junit] Starting DataNode 1 with dfs.data.dir: /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data3,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data4
    [junit] d hregion_1028785192
    [junit] d hregion_1028785192/compaction.dir
    [junit] d hregion_1028785192/compaction.dir/hregion_1028785192
    [junit] d hregion_1028785192/compaction.dir/hregion_1028785192/info
    [junit] f hregion_1028785192/compaction.dir/hregion_1028785192/info/done size=0
    [junit] d hregion_1028785192/compaction.dir/hregion_1028785192/info/info
    [junit] d hregion_1028785192/compaction.dir/hregion_1028785192/info/mapfiles
    [junit] f hregion_1028785192/compaction.dir/hregion_1028785192/info/toreplace size=72
    [junit] d hregion_1028785192/info
    [junit] d hregion_1028785192/info/info
    [junit] f hregion_1028785192/info/info/5273171824992064091 size=9
    [junit] d hregion_1028785192/info/mapfiles
    [junit] d hregion_1028785192/info/mapfiles/5273171824992064091
    [junit] f hregion_1028785192/info/mapfiles/5273171824992064091/data size=1710
    [junit] f hregion_1028785192/info/mapfiles/5273171824992064091/index size=249
    [junit] d hregion_1396626490
    [junit] d hregion_1396626490/column_a
    [junit] d hregion_1396626490/column_a/info
    [junit] f hregion_1396626490/column_a/info/7048898707195909278 size=9
    [junit] d hregion_1396626490/column_a/mapfiles
    [junit] d hregion_1396626490/column_a/mapfiles/7048898707195909278
    [junit] f hregion_1396626490/column_a/mapfiles/7048898707195909278/data size=1685790
    [junit] f hregion_1396626490/column_a/mapfiles/7048898707195909278/index size=1578
    [junit] d hregion_1396626490/column_b
    [junit] d hregion_1396626490/column_b/info
    [junit] f hregion_1396626490/column_b/info/4973609345075242702 size=9
    [junit] d hregion_1396626490/column_b/mapfiles
    [junit] d hregion_1396626490/column_b/mapfiles/4973609345075242702
    [junit] f hregion_1396626490/column_b/mapfiles/4973609345075242702/data size=1685790
    [junit] f hregion_1396626490/column_b/mapfiles/4973609345075242702/index size=1582
    [junit] d hregion_1396626490/compaction.dir
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490/column_a
    [junit] f hregion_1396626490/compaction.dir/hregion_1396626490/column_a/done size=0
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490/column_a/info
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490/column_a/mapfiles
    [junit] f hregion_1396626490/compaction.dir/hregion_1396626490/column_a/toreplace size=80
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490/column_b
    [junit] f hregion_1396626490/compaction.dir/hregion_1396626490/column_b/done size=0
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490/column_b/info
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490/column_b/mapfiles
    [junit] f hregion_1396626490/compaction.dir/hregion_1396626490/column_b/toreplace size=80
    [junit] d hregion_1971203659
    [junit] d hregion_1971203659/column_a
    [junit] d hregion_1971203659/column_a/info
    [junit] f hregion_1971203659/column_a/info/3526482879590887371.1396626490 size=63
    [junit] d hregion_1971203659/column_a/mapfiles
    [junit] f hregion_1971203659/column_a/mapfiles/3526482879590887371.1396626490 size=0
    [junit] d hregion_1971203659/column_b
    [junit] d hregion_1971203659/column_b/info
    [junit] f hregion_1971203659/column_b/info/209479190043547321.1396626490 size=63
    [junit] d hregion_1971203659/column_b/mapfiles
    [junit] f hregion_1971203659/column_b/mapfiles/209479190043547321.1396626490 size=0
    [junit] d hregion_341377241
    [junit] d hregion_341377241/column_a
    [junit] d hregion_341377241/column_a/info
    [junit] f hregion_341377241/column_a/info/4514508232435632076.1396626490 size=63
    [junit] d hregion_341377241/column_a/mapfiles
    [junit] f hregion_341377241/column_a/mapfiles/4514508232435632076.1396626490 size=0
    [junit] d hregion_341377241/column_b
    [junit] d hregion_341377241/column_b/info
    [junit] f hregion_341377241/column_b/info/2547853154428391603.1396626490 size=63
    [junit] d hregion_341377241/column_b/mapfiles
    [junit] f hregion_341377241/column_b/mapfiles/2547853154428391603.1396626490 size=0
    [junit] d hregion_70236052
    [junit] d hregion_70236052/compaction.dir
    [junit] d hregion_70236052/compaction.dir/hregion_70236052
    [junit] d hregion_70236052/compaction.dir/hregion_70236052/info
    [junit] f hregion_70236052/compaction.dir/hregion_70236052/info/done size=0
    [junit] d hregion_70236052/compaction.dir/hregion_70236052/info/info
    [junit] d hregion_70236052/compaction.dir/hregion_70236052/info/mapfiles
    [junit] f hregion_70236052/compaction.dir/hregion_70236052/info/toreplace size=68
    [junit] d hregion_70236052/info
    [junit] d hregion_70236052/info/info
    [junit] f hregion_70236052/info/info/7214912435301412040 size=9
    [junit] d hregion_70236052/info/mapfiles
    [junit] d hregion_70236052/info/mapfiles/7214912435301412040
    [junit] f hregion_70236052/info/mapfiles/7214912435301412040/data size=332
    [junit] f hregion_70236052/info/mapfiles/7214912435301412040/index size=232
    [junit] 2008-01-15 14:26:52,654 INFO  [main] hbase.HLog(232): new log writer created at hdfs://localhost:34992/hbase/log/hlog.dat.000
    [junit] 2008-01-15 14:26:52,771 DEBUG [main] hbase.HStore(697): starting 70236052/info (no reconstruction log)
    [junit] 2008-01-15 14:26:52,772 DEBUG [main] hbase.HStore(856): infodir: hdfs://localhost:34992/hbase/-ROOT-/70236052/info/info mapdir: hdfs://localhost:34992/hbase/-ROOT-/70236052/info/mapfiles
    [junit] 2008-01-15 14:26:52,809 DEBUG [main] hbase.HStore(723): maximum sequence id for hstore 70236052/info is 7
    [junit] 2008-01-15 14:26:52,864 WARN  [main] util.NativeCodeLoader(51): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    [junit] 2008-01-15 14:26:52,872 DEBUG [main] hbase.HRegion(293): Next sequence id for region -ROOT-,,0 is 8
    [junit] 2008-01-15 14:26:52,890 INFO  [main] hbase.HRegion(321): region -ROOT-,,0 available
    [junit] 2008-01-15 14:26:53,633 DEBUG [main] hbase.HStore(697): starting 1028785192/info (no reconstruction log)
    [junit] 2008-01-15 14:26:53,634 DEBUG [main] hbase.HStore(856): infodir: hdfs://localhost:34992/hbase/.META./1028785192/info/info mapdir: hdfs://localhost:34992/hbase/.META./1028785192/info/mapfiles
    [junit] 2008-01-15 14:26:53,648 DEBUG [main] hbase.HStore(723): maximum sequence id for hstore 1028785192/info is 35173
    [junit] 2008-01-15 14:26:53,658 DEBUG [main] hbase.HRegion(293): Next sequence id for region .META.,,1 is 35174
    [junit] 2008-01-15 14:26:53,662 INFO  [main] hbase.HRegion(321): region .META.,,1 available
    [junit] 2008-01-15 14:26:54,746 DEBUG [main] hbase.HStore(1047): closed 1028785192/info
    [junit] 2008-01-15 14:26:54,748 INFO  [main] hbase.HRegion(439): closed .META.,,1
    [junit] 2008-01-15 14:26:54,750 DEBUG [main] hbase.HStore(1047): closed 70236052/info
    [junit] 2008-01-15 14:26:54,752 INFO  [main] hbase.HRegion(439): closed -ROOT-,,0
    [junit] 2008-01-15 14:26:54,753 DEBUG [main] hbase.HLog(318): closing log writer in hdfs://localhost:34992/hbase/log
    [junit] d -ROOT-
    [junit] d -ROOT-/70236052
    [junit] d -ROOT-/70236052/compaction.dir
    [junit] d -ROOT-/70236052/compaction.dir/70236052
    [junit] d -ROOT-/70236052/compaction.dir/70236052/info
    [junit] f -ROOT-/70236052/compaction.dir/70236052/info/done size=0
    [junit] d -ROOT-/70236052/compaction.dir/70236052/info/info
    [junit] d -ROOT-/70236052/compaction.dir/70236052/info/mapfiles
    [junit] f -ROOT-/70236052/compaction.dir/70236052/info/toreplace size=68
    [junit] d -ROOT-/70236052/info
    [junit] d -ROOT-/70236052/info/info
    [junit] f -ROOT-/70236052/info/info/7214912435301412040 size=9
    [junit] d -ROOT-/70236052/info/mapfiles
    [junit] d -ROOT-/70236052/info/mapfiles/7214912435301412040
    [junit] f -ROOT-/70236052/info/mapfiles/7214912435301412040/data size=332
    [junit] f -ROOT-/70236052/info/mapfiles/7214912435301412040/index size=232
    [junit] d .META.
    [junit] d .META./1028785192
    [junit] d .META./1028785192/compaction.dir
    [junit] d .META./1028785192/compaction.dir/1028785192
    [junit] d .META./1028785192/compaction.dir/1028785192/info
    [junit] f .META./1028785192/compaction.dir/1028785192/info/done size=0
    [junit] d .META./1028785192/compaction.dir/1028785192/info/info
    [junit] d .META./1028785192/compaction.dir/1028785192/info/mapfiles
    [junit] f .META./1028785192/compaction.dir/1028785192/info/toreplace size=72
    [junit] d .META./1028785192/info
    [junit] d .META./1028785192/info/info
    [junit] f .META./1028785192/info/info/5273171824992064091 size=9
    [junit] d .META./1028785192/info/mapfiles
    [junit] d .META./1028785192/info/mapfiles/5273171824992064091
    [junit] f .META./1028785192/info/mapfiles/5273171824992064091/data size=1710
    [junit] f .META./1028785192/info/mapfiles/5273171824992064091/index size=249
    [junit] d TestUpgrade
    [junit] d TestUpgrade/1396626490
    [junit] d TestUpgrade/1396626490/column_a
    [junit] d TestUpgrade/1396626490/column_a/info
    [junit] f TestUpgrade/1396626490/column_a/info/7048898707195909278 size=9
    [junit] d TestUpgrade/1396626490/column_a/mapfiles
    [junit] d TestUpgrade/1396626490/column_a/mapfiles/7048898707195909278
    [junit] f TestUpgrade/1396626490/column_a/mapfiles/7048898707195909278/data size=1685790
    [junit] f TestUpgrade/1396626490/column_a/mapfiles/7048898707195909278/index size=1578
    [junit] d TestUpgrade/1396626490/column_b
    [junit] d TestUpgrade/1396626490/column_b/info
    [junit] f TestUpgrade/1396626490/column_b/info/4973609345075242702 size=9
    [junit] d TestUpgrade/1396626490/column_b/mapfiles
    [junit] d TestUpgrade/1396626490/column_b/mapfiles/4973609345075242702
    [junit] f TestUpgrade/1396626490/column_b/mapfiles/4973609345075242702/data size=1685790
    [junit] f TestUpgrade/1396626490/column_b/mapfiles/4973609345075242702/index size=1582
    [junit] d TestUpgrade/1396626490/compaction.dir
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_a
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_a/done size=0
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_a/info
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_a/mapfiles
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_a/toreplace size=80
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_b
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_b/done size=0
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_b/info
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_b/mapfiles
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_b/toreplace size=80
    [junit] d TestUpgrade/1971203659
    [junit] d TestUpgrade/1971203659/column_a
    [junit] d TestUpgrade/1971203659/column_a/info
    [junit] f TestUpgrade/1971203659/column_a/info/3526482879590887371.1396626490 size=63
    [junit] d TestUpgrade/1971203659/column_a/mapfiles
    [junit] f TestUpgrade/1971203659/column_a/mapfiles/3526482879590887371.1396626490 size=0
    [junit] d TestUpgrade/1971203659/column_b
    [junit] d TestUpgrade/1971203659/column_b/info
    [junit] f TestUpgrade/1971203659/column_b/info/209479190043547321.1396626490 size=63
    [junit] d TestUpgrade/1971203659/column_b/mapfiles
    [junit] f TestUpgrade/1971203659/column_b/mapfiles/209479190043547321.1396626490 size=0
    [junit] d TestUpgrade/341377241
    [junit] d TestUpgrade/341377241/column_a
    [junit] d TestUpgrade/341377241/column_a/info
    [junit] f TestUpgrade/341377241/column_a/info/4514508232435632076.1396626490 size=63
    [junit] d TestUpgrade/341377241/column_a/mapfiles
    [junit] f TestUpgrade/341377241/column_a/mapfiles/4514508232435632076.1396626490 size=0
    [junit] d TestUpgrade/341377241/column_b
    [junit] d TestUpgrade/341377241/column_b/info
    [junit] f TestUpgrade/341377241/column_b/info/2547853154428391603.1396626490 size=63
    [junit] d TestUpgrade/341377241/column_b/mapfiles
    [junit] f TestUpgrade/341377241/column_b/mapfiles/2547853154428391603.1396626490 size=0
    [junit] 2008-01-15 14:26:55,155 INFO  [main] hbase.StaticTestEnvironment(135): Shutting down FileSystem
    [junit] 2008-01-15 14:26:55,247 INFO  [main] hbase.StaticTestEnvironment(142): Shutting down Mini DFS
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] Shutting down DataNode 0
    [junit] 2008-01-15 14:26:56,038 WARN  [DataNode: [/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data1,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data2]]  dfs.DataNode(658): java.io.InterruptedIOException
    [junit] at java.net.SocketOutputStream.socketWrite0(Native Method)
    [junit] at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
    [junit] at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    [junit] at org.apache.hadoop.ipc.Client$Connection$2.write(Client.java:199)
    [junit] at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    [junit] at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    [junit] at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    [junit] at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:344)
    [junit] at org.apache.hadoop.ipc.Client.call(Client.java:501)
    [junit] at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
    [junit] at org.apache.hadoop.dfs.$Proxy1.sendHeartbeat(Unknown Source)
    [junit] at org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:562)
    [junit] at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1736)
    [junit] at java.lang.Thread.run(Thread.java:595)

    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 55.351 sec
    [junit] Running org.onelab.test.TestFilter
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.059 sec

BUILD FAILED
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build.xml:516: The following error occurred while executing this line:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/src/contrib/build.xml:31: The following error occurred while executing this line:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/src/contrib/build-contrib.xml:206: Tests failed!

Total time: 181 minutes 7 seconds
Recording fingerprints
Publishing Javadoc
Recording test results
Updating HADOOP-2574
Updating HADOOP-1876
Updating HADOOP-2077
Updating HADOOP-2598
Updating HADOOP-2589
Updating HADOOP-2516
Updating HADOOP-2601
Updating HADOOP-2481
Updating HADOOP-1819
Updating HADOOP-2532
Updating HADOOP-2558
Updating HADOOP-2398
Updating HADOOP-2494


Build failed in Hudson: Hadoop-Nightly #367

hudson-6
See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/367/changes

Changes:

[stack] HADOOP-2592 Scanning, a region can let out a row that it's not supposed to have

[nigel] Preparing for release 0.15.3

[cutting] HADOOP-2514.  Change trash feature to keep per-user trash directory.

[shv] HADOOP-2605. Remove bogus slash in task-tracker report bindAddress. Contributed by Konstantin Shvachko

[jimk] HADOOP-2587 Splits blocked by compactions cause region to be offline for duration of compaction.
Patch verified by Billy Pearson

[stack] HADOOP-2579 initializing a new HTable object against a nonexistent table
throws a NoServerForRegionException instead of a TableNotFoundException
when a different table has been created previously

------------------------------------------
[...truncated 71302 lines...]
    [junit] at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    [junit] at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    [junit] at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:344)
    [junit] at org.apache.hadoop.ipc.Client.call(Client.java:501)
    [junit] at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
    [junit] at org.apache.hadoop.dfs.$Proxy1.sendHeartbeat(Unknown Source)
    [junit] at org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:562)
    [junit] at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1736)
    [junit] at java.lang.Thread.run(Thread.java:595)

    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 204.009 sec
    [junit] 2008-01-16 17:28:21,276 INFO  [main] hbase.HRegionServer$ShutdownThread(156): Starting shutdown thread.
    [junit] 2008-01-16 17:28:21,277 INFO  [main] hbase.HRegionServer$ShutdownThread(161): Shutdown thread complete
    [junit] Running org.apache.hadoop.hbase.util.TestBase64

    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.299 sec
    [junit] Running org.apache.hadoop.hbase.util.TestKeying
    [junit] Original url http://abc:bcd@.../index.html?query=something#middle, Transformed url r:http://abc:bcd@.../index.html?query=something#middle
    [junit] Original url file:///usr/bin/java, Transformed url file:///usr/bin/java
    [junit] Original url dns:www.powerset.com, Transformed url dns:www.powerset.com
    [junit] Original url dns://dns.powerset.com/www.powerset.com, Transformed url r:dns://com.powerset.dns/www.powerset.com
    [junit] Original url http://one.two.three/index.html, Transformed url r:http://three.two.one/index.html
    [junit] Original url https://one.two.three:9443/index.html, Transformed url r:https://three.two.one:9443/index.html
    [junit] Original url ftp://one.two.three/index.html, Transformed url r:ftp://three.two.one/index.html
    [junit] Original url filename, Transformed url filename
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.073 sec
    [junit] Running org.apache.hadoop.hbase.util.TestMigrate
    [junit] Starting DataNode 0 with dfs.data.dir: /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data1,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data2
    [junit] Starting DataNode 1 with dfs.data.dir: /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data3,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data4
    [junit] d hregion_1028785192
    [junit] d hregion_1028785192/compaction.dir
    [junit] d hregion_1028785192/compaction.dir/hregion_1028785192
    [junit] d hregion_1028785192/compaction.dir/hregion_1028785192/info
    [junit] f hregion_1028785192/compaction.dir/hregion_1028785192/info/done size=0
    [junit] d hregion_1028785192/compaction.dir/hregion_1028785192/info/info
    [junit] d hregion_1028785192/compaction.dir/hregion_1028785192/info/mapfiles
    [junit] f hregion_1028785192/compaction.dir/hregion_1028785192/info/toreplace size=72
    [junit] d hregion_1028785192/info
    [junit] d hregion_1028785192/info/info
    [junit] f hregion_1028785192/info/info/5273171824992064091 size=9
    [junit] d hregion_1028785192/info/mapfiles
    [junit] d hregion_1028785192/info/mapfiles/5273171824992064091
    [junit] f hregion_1028785192/info/mapfiles/5273171824992064091/data size=1710
    [junit] f hregion_1028785192/info/mapfiles/5273171824992064091/index size=249
    [junit] d hregion_1396626490
    [junit] d hregion_1396626490/column_a
    [junit] d hregion_1396626490/column_a/info
    [junit] f hregion_1396626490/column_a/info/7048898707195909278 size=9
    [junit] d hregion_1396626490/column_a/mapfiles
    [junit] d hregion_1396626490/column_a/mapfiles/7048898707195909278
    [junit] f hregion_1396626490/column_a/mapfiles/7048898707195909278/data size=1685790
    [junit] f hregion_1396626490/column_a/mapfiles/7048898707195909278/index size=1578
    [junit] d hregion_1396626490/column_b
    [junit] d hregion_1396626490/column_b/info
    [junit] f hregion_1396626490/column_b/info/4973609345075242702 size=9
    [junit] d hregion_1396626490/column_b/mapfiles
    [junit] d hregion_1396626490/column_b/mapfiles/4973609345075242702
    [junit] f hregion_1396626490/column_b/mapfiles/4973609345075242702/data size=1685790
    [junit] f hregion_1396626490/column_b/mapfiles/4973609345075242702/index size=1582
    [junit] d hregion_1396626490/compaction.dir
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490/column_a
    [junit] f hregion_1396626490/compaction.dir/hregion_1396626490/column_a/done size=0
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490/column_a/info
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490/column_a/mapfiles
    [junit] f hregion_1396626490/compaction.dir/hregion_1396626490/column_a/toreplace size=80
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490/column_b
    [junit] f hregion_1396626490/compaction.dir/hregion_1396626490/column_b/done size=0
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490/column_b/info
    [junit] d hregion_1396626490/compaction.dir/hregion_1396626490/column_b/mapfiles
    [junit] f hregion_1396626490/compaction.dir/hregion_1396626490/column_b/toreplace size=80
    [junit] d hregion_1971203659
    [junit] d hregion_1971203659/column_a
    [junit] d hregion_1971203659/column_a/info
    [junit] f hregion_1971203659/column_a/info/3526482879590887371.1396626490 size=63
    [junit] d hregion_1971203659/column_a/mapfiles
    [junit] f hregion_1971203659/column_a/mapfiles/3526482879590887371.1396626490 size=0
    [junit] d hregion_1971203659/column_b
    [junit] d hregion_1971203659/column_b/info
    [junit] f hregion_1971203659/column_b/info/209479190043547321.1396626490 size=63
    [junit] d hregion_1971203659/column_b/mapfiles
    [junit] f hregion_1971203659/column_b/mapfiles/209479190043547321.1396626490 size=0
    [junit] d hregion_341377241
    [junit] d hregion_341377241/column_a
    [junit] d hregion_341377241/column_a/info
    [junit] f hregion_341377241/column_a/info/4514508232435632076.1396626490 size=63
    [junit] d hregion_341377241/column_a/mapfiles
    [junit] f hregion_341377241/column_a/mapfiles/4514508232435632076.1396626490 size=0
    [junit] d hregion_341377241/column_b
    [junit] d hregion_341377241/column_b/info
    [junit] f hregion_341377241/column_b/info/2547853154428391603.1396626490 size=63
    [junit] d hregion_341377241/column_b/mapfiles
    [junit] f hregion_341377241/column_b/mapfiles/2547853154428391603.1396626490 size=0
    [junit] d hregion_70236052
    [junit] d hregion_70236052/compaction.dir
    [junit] d hregion_70236052/compaction.dir/hregion_70236052
    [junit] d hregion_70236052/compaction.dir/hregion_70236052/info
    [junit] f hregion_70236052/compaction.dir/hregion_70236052/info/done size=0
    [junit] d hregion_70236052/compaction.dir/hregion_70236052/info/info
    [junit] d hregion_70236052/compaction.dir/hregion_70236052/info/mapfiles
    [junit] f hregion_70236052/compaction.dir/hregion_70236052/info/toreplace size=68
    [junit] d hregion_70236052/info
    [junit] d hregion_70236052/info/info
    [junit] f hregion_70236052/info/info/7214912435301412040 size=9
    [junit] d hregion_70236052/info/mapfiles
    [junit] d hregion_70236052/info/mapfiles/7214912435301412040
    [junit] f hregion_70236052/info/mapfiles/7214912435301412040/data size=332
    [junit] f hregion_70236052/info/mapfiles/7214912435301412040/index size=232
    [junit] 2008-01-16 17:28:51,605 INFO  [main] hbase.HLog(232): new log writer created at hdfs://localhost:59952/hbase/log/hlog.dat.000
    [junit] 2008-01-16 17:28:51,641 DEBUG [main] hbase.HStore(697): starting 70236052/info (no reconstruction log)
    [junit] 2008-01-16 17:28:51,642 DEBUG [main] hbase.HStore(856): infodir: hdfs://localhost:59952/hbase/-ROOT-/70236052/info/info mapdir: hdfs://localhost:59952/hbase/-ROOT-/70236052/info/mapfiles
    [junit] 2008-01-16 17:28:51,673 DEBUG [main] hbase.HStore(723): maximum sequence id for hstore 70236052/info is 7
    [junit] 2008-01-16 17:28:51,742 WARN  [main] util.NativeCodeLoader(51): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    [junit] 2008-01-16 17:28:51,750 DEBUG [main] hbase.HRegion(293): Next sequence id for region -ROOT-,,0 is 8
    [junit] 2008-01-16 17:28:51,753 INFO  [main] hbase.HRegion(321): region -ROOT-,,0 available
    [junit] 2008-01-16 17:28:51,956 DEBUG [main] hbase.HStore(697): starting 1028785192/info (no reconstruction log)
    [junit] 2008-01-16 17:28:51,957 DEBUG [main] hbase.HStore(856): infodir: hdfs://localhost:59952/hbase/.META./1028785192/info/info mapdir: hdfs://localhost:59952/hbase/.META./1028785192/info/mapfiles
    [junit] 2008-01-16 17:28:51,978 DEBUG [main] hbase.HStore(723): maximum sequence id for hstore 1028785192/info is 35173
    [junit] 2008-01-16 17:28:52,025 DEBUG [main] hbase.HRegion(293): Next sequence id for region .META.,,1 is 35174
    [junit] 2008-01-16 17:28:52,029 INFO  [main] hbase.HRegion(321): region .META.,,1 available
    [junit] 2008-01-16 17:28:52,406 DEBUG [main] hbase.HRegion(400): compactions and cache flushes disabled for region .META.,,1
    [junit] 2008-01-16 17:28:52,408 DEBUG [main] hbase.HRegion(404): new updates and scanners for region .META.,,1 disabled
    [junit] 2008-01-16 17:28:52,409 DEBUG [main] hbase.HRegion(423): no more active scanners for region .META.,,1
    [junit] 2008-01-16 17:28:52,410 DEBUG [main] hbase.HRegion(429): no more write locks outstanding on region .META.,,1
    [junit] 2008-01-16 17:28:52,412 DEBUG [main] hbase.HStore(1047): closed 1028785192/info
    [junit] 2008-01-16 17:28:52,413 INFO  [main] hbase.HRegion(455): closed .META.,,1
    [junit] 2008-01-16 17:28:52,414 DEBUG [main] hbase.HRegion(400): compactions and cache flushes disabled for region -ROOT-,,0
    [junit] 2008-01-16 17:28:52,415 DEBUG [main] hbase.HRegion(404): new updates and scanners for region -ROOT-,,0 disabled
    [junit] 2008-01-16 17:28:52,416 DEBUG [main] hbase.HRegion(423): no more active scanners for region -ROOT-,,0
    [junit] 2008-01-16 17:28:52,417 DEBUG [main] hbase.HRegion(429): no more write locks outstanding on region -ROOT-,,0
    [junit] 2008-01-16 17:28:52,418 DEBUG [main] hbase.HStore(1047): closed 70236052/info
    [junit] 2008-01-16 17:28:52,419 INFO  [main] hbase.HRegion(455): closed -ROOT-,,0
    [junit] 2008-01-16 17:28:52,420 DEBUG [main] hbase.HLog(318): closing log writer in hdfs://localhost:59952/hbase/log
    [junit] d -ROOT-
    [junit] d -ROOT-/70236052
    [junit] d -ROOT-/70236052/compaction.dir
    [junit] d -ROOT-/70236052/compaction.dir/70236052
    [junit] d -ROOT-/70236052/compaction.dir/70236052/info
    [junit] f -ROOT-/70236052/compaction.dir/70236052/info/done size=0
    [junit] d -ROOT-/70236052/compaction.dir/70236052/info/info
    [junit] d -ROOT-/70236052/compaction.dir/70236052/info/mapfiles
    [junit] f -ROOT-/70236052/compaction.dir/70236052/info/toreplace size=68
    [junit] d -ROOT-/70236052/info
    [junit] d -ROOT-/70236052/info/info
    [junit] f -ROOT-/70236052/info/info/7214912435301412040 size=9
    [junit] d -ROOT-/70236052/info/mapfiles
    [junit] d -ROOT-/70236052/info/mapfiles/7214912435301412040
    [junit] f -ROOT-/70236052/info/mapfiles/7214912435301412040/data size=332
    [junit] f -ROOT-/70236052/info/mapfiles/7214912435301412040/index size=232
    [junit] d .META.
    [junit] d .META./1028785192
    [junit] d .META./1028785192/compaction.dir
    [junit] d .META./1028785192/compaction.dir/1028785192
    [junit] d .META./1028785192/compaction.dir/1028785192/info
    [junit] f .META./1028785192/compaction.dir/1028785192/info/done size=0
    [junit] d .META./1028785192/compaction.dir/1028785192/info/info
    [junit] d .META./1028785192/compaction.dir/1028785192/info/mapfiles
    [junit] f .META./1028785192/compaction.dir/1028785192/info/toreplace size=72
    [junit] d .META./1028785192/info
    [junit] d .META./1028785192/info/info
    [junit] f .META./1028785192/info/info/5273171824992064091 size=9
    [junit] d .META./1028785192/info/mapfiles
    [junit] d .META./1028785192/info/mapfiles/5273171824992064091
    [junit] f .META./1028785192/info/mapfiles/5273171824992064091/data size=1710
    [junit] f .META./1028785192/info/mapfiles/5273171824992064091/index size=249
    [junit] d TestUpgrade
    [junit] d TestUpgrade/1396626490
    [junit] d TestUpgrade/1396626490/column_a
    [junit] d TestUpgrade/1396626490/column_a/info
    [junit] f TestUpgrade/1396626490/column_a/info/7048898707195909278 size=9
    [junit] d TestUpgrade/1396626490/column_a/mapfiles
    [junit] d TestUpgrade/1396626490/column_a/mapfiles/7048898707195909278
    [junit] f TestUpgrade/1396626490/column_a/mapfiles/7048898707195909278/data size=1685790
    [junit] f TestUpgrade/1396626490/column_a/mapfiles/7048898707195909278/index size=1578
    [junit] d TestUpgrade/1396626490/column_b
    [junit] d TestUpgrade/1396626490/column_b/info
    [junit] f TestUpgrade/1396626490/column_b/info/4973609345075242702 size=9
    [junit] d TestUpgrade/1396626490/column_b/mapfiles
    [junit] d TestUpgrade/1396626490/column_b/mapfiles/4973609345075242702
    [junit] f TestUpgrade/1396626490/column_b/mapfiles/4973609345075242702/data size=1685790
    [junit] f TestUpgrade/1396626490/column_b/mapfiles/4973609345075242702/index size=1582
    [junit] d TestUpgrade/1396626490/compaction.dir
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_a
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_a/done size=0
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_a/info
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_a/mapfiles
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_a/toreplace size=80
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_b
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_b/done size=0
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_b/info
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_b/mapfiles
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_b/toreplace size=80
    [junit] d TestUpgrade/1971203659
    [junit] d TestUpgrade/1971203659/column_a
    [junit] d TestUpgrade/1971203659/column_a/info
    [junit] f TestUpgrade/1971203659/column_a/info/3526482879590887371.1396626490 size=63
    [junit] d TestUpgrade/1971203659/column_a/mapfiles
    [junit] f TestUpgrade/1971203659/column_a/mapfiles/3526482879590887371.1396626490 size=0
    [junit] d TestUpgrade/1971203659/column_b
    [junit] d TestUpgrade/1971203659/column_b/info
    [junit] f TestUpgrade/1971203659/column_b/info/209479190043547321.1396626490 size=63
    [junit] d TestUpgrade/1971203659/column_b/mapfiles
    [junit] f TestUpgrade/1971203659/column_b/mapfiles/209479190043547321.1396626490 size=0
    [junit] d TestUpgrade/341377241
    [junit] d TestUpgrade/341377241/column_a
    [junit] d TestUpgrade/341377241/column_a/info
    [junit] f TestUpgrade/341377241/column_a/info/4514508232435632076.1396626490 size=63
    [junit] d TestUpgrade/341377241/column_a/mapfiles
    [junit] f TestUpgrade/341377241/column_a/mapfiles/4514508232435632076.1396626490 size=0
    [junit] d TestUpgrade/341377241/column_b
    [junit] d TestUpgrade/341377241/column_b/info
    [junit] f TestUpgrade/341377241/column_b/info/2547853154428391603.1396626490 size=63
    [junit] d TestUpgrade/341377241/column_b/mapfiles
    [junit] f TestUpgrade/341377241/column_b/mapfiles/2547853154428391603.1396626490 size=0
    [junit] 2008-01-16 17:28:52,648 INFO  [main] hbase.StaticTestEnvironment(135): Shutting down FileSystem
    [junit] 2008-01-16 17:28:52,801 INFO  [main] hbase.StaticTestEnvironment(142): Shutting down Mini DFS
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] Shutting down DataNode 0
    [junit] 2008-01-16 17:28:53,529 ERROR [DataNode: [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/contrib/hbase/test/data/dfs/data/data1,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data2]]  dfs.DataNode(1738): Exception: java.lang.reflect.UndeclaredThrowableException
    [junit] at org.apache.hadoop.dfs.$Proxy1.sendHeartbeat(Unknown Source)
    [junit] at org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:562)
    [junit] at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1736)
    [junit] at java.lang.Thread.run(Thread.java:595)
    [junit] Caused by: java.lang.InterruptedException
    [junit] at java.lang.Object.wait(Native Method)
    [junit] at org.apache.hadoop.ipc.Client.call(Client.java:504)
    [junit] at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
    [junit] ... 4 more

    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 30.244 sec
    [junit] Running org.onelab.test.TestFilter
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.067 sec

BUILD FAILED
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build.xml:516: The following error occurred while executing this line:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/src/contrib/build.xml:31: The following error occurred while executing this line:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/src/contrib/build-contrib.xml:206: Tests failed!

Total time: 199 minutes 9 seconds
Recording fingerprints
Publishing Javadoc
Recording test results
Updating HADOOP-2587
Updating HADOOP-2514
Updating HADOOP-2605
Updating HADOOP-2579
Updating HADOOP-2592


Build failed in Hudson: Hadoop-Nightly #368

hudson-6
See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/368/changes

------------------------------------------
started
ERROR: svn: PROPFIND request failed on '/repos/asf/lucene/hadoop/trunk'
svn: Connection refused
org.tmatesoft.svn.core.SVNException: svn: PROPFIND request failed on '/repos/asf/lucene/hadoop/trunk'
svn: Connection refused
        at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:49)
        at org.tmatesoft.svn.core.internal.io.dav.DAVUtil.findStartingProperties(DAVUtil.java:124)
        at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.fetchRepositoryUUID(DAVConnection.java:88)
        at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.testConnection(DAVRepository.java:85)
        at hudson.scm.SubversionSCM$DescriptorImpl.checkRepositoryPath(SubversionSCM.java:1134)
        at hudson.scm.SubversionSCM.repositoryLocationsExist(SubversionSCM.java:1195)
        at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:335)
        at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:292)
        at hudson.model.AbstractProject.checkout(AbstractProject.java:541)
        at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:223)
        at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:189)
        at hudson.model.Run.run(Run.java:649)
        at hudson.model.Build.run(Build.java:102)
        at hudson.model.ResourceController.execute(ResourceController.java:70)
        at hudson.model.Executor.run(Executor.java:64)
Recording fingerprints
Publishing Javadoc
Recording test results


Build failed in Hudson: Hadoop-Nightly #369

hudson-6
See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/369/changes

------------------------------------------
[...truncated 71768 lines...]
    [junit] at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:849)
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:195)
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:134)
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:106)
    [junit] at org.apache.hadoop.security.TestPermission.testFilePermision(TestPermission.java:111)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at junit.framework.TestCase.runTest(TestCase.java:154)
    [junit] at junit.framework.TestCase.runBare(TestCase.java:127)
    [junit] at junit.framework.TestResult$1.protect(TestResult.java:106)
    [junit] 2008-01-18 15:36:45,782 INFO  dfs.NameNodeMetrics (NameNodeMetrics.java:<init>(74)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
    [junit] at junit.framework.TestResult.runProtected(TestResult.java:124)
    [junit] at junit.framework.TestResult.run(TestResult.java:109)
    [junit] at junit.framework.TestCase.run(TestCase.java:118)
    [junit] at junit.framework.TestSuite.runTest(TestSuite.java:208)
    [junit] at junit.framework.TestSuite.run(TestSuite.java:203)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:297)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:672)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:567)
    [junit] 2008-01-18 15:36:45,810 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(321)) - fsOwner=hudson,other
    [junit] 2008-01-18 15:36:45,812 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(325)) - supergroup=supergroup
    [junit] 2008-01-18 15:36:45,812 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(326)) - isPermissionEnabled=true
    [junit] 2008-01-18 15:36:45,829 INFO  fs.FSNamesystem (FSNamesystem.java:initialize(248)) - Finished loading FSImage in 45 msecs
    [junit] 2008-01-18 15:36:45,831 INFO  fs.FSNamesystem (FSNamesystem.java:leave(3554)) - Leaving safemode after 47 msecs
    [junit] 2008-01-18 15:36:45,832 INFO  dfs.StateChange (FSNamesystem.java:leave(3563)) - STATE* Network topology has 0 racks and 0 datanodes
    [junit] 2008-01-18 15:36:45,832 INFO  dfs.StateChange (FSNamesystem.java:leave(3566)) - STATE* UnderReplicatedBlocks has 0 blocks
    [junit] 2008-01-18 15:36:46,004 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-18 15:36:46,171 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@10980e7
    [junit] 2008-01-18 15:36:46,180 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-18 15:36:46,181 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-18 15:36:46,182 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-18 15:36:46,184 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:49801
    [junit] 2008-01-18 15:36:46,185 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@4c71d2
    [junit] 2008-01-18 15:36:46,186 INFO  fs.FSNamesystem (FSNamesystem.java:initialize(287)) - Web-server up at: 0.0.0.0:49801
    [junit] 2008-01-18 15:36:46,188 INFO  ipc.Server (Server.java:run(470)) - IPC Server Responder: starting
    [junit] 2008-01-18 15:36:46,189 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 1 on 49800: starting
    [junit] 2008-01-18 15:36:46,190 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 4 on 49800: starting
    [junit] 2008-01-18 15:36:46,191 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 5 on 49800: starting
    [junit] 2008-01-18 15:36:46,192 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 6 on 49800: starting
    [junit] 2008-01-18 15:36:46,192 INFO  ipc.Server (Server.java:run(317)) - IPC Server listener on 49800: starting
    [junit] 2008-01-18 15:36:46,192 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 3 on 49800: starting
    [junit] 2008-01-18 15:36:46,191 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 9 on 49800: starting
    [junit] 2008-01-18 15:36:46,191 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 8 on 49800: starting
    [junit] 2008-01-18 15:36:46,191 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 7 on 49800: starting
    [junit] 2008-01-18 15:36:46,190 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 2 on 49800: starting
    [junit] 2008-01-18 15:36:46,189 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 0 on 49800: starting
    [junit] Starting DataNode 0 with dfs.data.dir: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data1,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2 
    [junit] 2008-01-18 15:36:46,258 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-18 15:36:46,264 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data1  is not formatted.
    [junit] 2008-01-18 15:36:46,265 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-18 15:36:48,107 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data2  is not formatted.
    [junit] 2008-01-18 15:36:48,108 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-18 15:36:50,600 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 49805
    [junit] 2008-01-18 15:36:50,601 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-18 15:36:50,746 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-18 15:36:50,950 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@1972e3a
    [junit] 2008-01-18 15:36:50,977 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-18 15:36:50,978 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-18 15:36:50,979 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-18 15:36:50,980 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:49806
    [junit] 2008-01-18 15:36:50,981 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@c24193
    [junit] 2008-01-18 15:36:50,985 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:49805 storage DS-763672180-140.211.11.75-49805-1200670610983
    [junit] 2008-01-18 15:36:50,986 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:49805
    [junit] 2008-01-18 15:36:51,437 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-763672180-140.211.11.75-49805-1200670610983 is assigned to data-node 127.0.0.1:49805
    [junit] 2008-01-18 15:36:51,439 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:49805In DataNode.run, data = FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data1/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 2008-01-18 15:36:51,440 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3508737msec Initial delay: 0msec
    [junit] Starting DataNode 1 with dfs.data.dir: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data3,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4 
    [junit] 2008-01-18 15:36:51,597 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-18 15:36:51,694 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data3  is not formatted.
    [junit] 2008-01-18 15:36:51,698 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-18 15:36:51,841 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 3 msecs
    [junit] 2008-01-18 15:36:54,696 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data4  is not formatted.
    [junit] 2008-01-18 15:36:54,697 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-18 15:36:58,381 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 49810
    [junit] 2008-01-18 15:36:58,383 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-18 15:36:58,397 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-18 15:36:58,556 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@19b4748
    [junit] 2008-01-18 15:36:58,609 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-18 15:36:58,611 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-18 15:36:58,613 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-18 15:36:58,616 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:49811
    [junit] 2008-01-18 15:36:58,617 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@89dd
    [junit] 2008-01-18 15:36:58,622 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:49810 storage DS-908038220-140.211.11.75-49810-1200670618621
    [junit] 2008-01-18 15:36:58,623 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:49810
    [junit] 2008-01-18 15:36:59,065 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-908038220-140.211.11.75-49810-1200670618621 is assigned to data-node 127.0.0.1:49810
    [junit] Starting DataNode 2 with dfs.data.dir: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data5,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6 
    [junit] 2008-01-18 15:36:59,070 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:49810In DataNode.run, data = FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data3/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4/current'}
    [junit] 2008-01-18 15:36:59,071 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-18 15:36:59,072 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3302827msec Initial delay: 0msec
    [junit] 2008-01-18 15:36:59,286 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data5  is not formatted.
    [junit] 2008-01-18 15:36:59,287 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-18 15:36:59,770 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 3 msecs
    [junit] 2008-01-18 15:37:01,053 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data6  is not formatted.
    [junit] 2008-01-18 15:37:01,054 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-18 15:37:03,268 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 49814
    [junit] 2008-01-18 15:37:03,269 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-18 15:37:03,344 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-18 15:37:03,493 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@d647d8
    [junit] 2008-01-18 15:37:03,669 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-18 15:37:03,670 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-18 15:37:03,671 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-18 15:37:03,675 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:49818
    [junit] 2008-01-18 15:37:03,676 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@1a41cc7
    [junit] 2008-01-18 15:37:03,680 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:49814 storage DS-1639333971-140.211.11.75-49814-1200670623679
    [junit] 2008-01-18 15:37:03,681 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:49814
    [junit] 2008-01-18 15:37:04,139 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-1639333971-140.211.11.75-49814-1200670623679 is assigned to data-node 127.0.0.1:49814
    [junit] 2008-01-18 15:37:04,142 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:49814In DataNode.run, data = FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data5/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6/current'}
    [junit] 2008-01-18 15:37:04,144 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3386847msec Initial delay: 0msec
    [junit] 2008-01-18 15:37:04,379 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 5 msecs
    [junit] 2008-01-18 15:37:05,654 INFO  fs.DFSClient (DFSClient.java:run(1592)) - Allocating new block
    [junit] 2008-01-18 15:37:05,657 INFO  dfs.StateChange (FSNamesystem.java:allocateBlock(1274)) - BLOCK* NameSystem.allocateBlock: /data/file1. blk_6638807983918515008
    [junit] 2008-01-18 15:37:05,660 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:49805
    [junit] 2008-01-18 15:37:05,661 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:49810
    [junit] 2008-01-18 15:37:05,661 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:49814
    [junit] 2008-01-18 15:37:05,662 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1985)) - Connecting to 127.0.0.1:49805
    [junit] 2008-01-18 15:37:05,666 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_6638807983918515008 from /127.0.0.1
    [junit] 2008-01-18 15:37:05,723 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_6638807983918515008 from /127.0.0.1
    [junit] 2008-01-18 15:37:05,861 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_6638807983918515008 from /127.0.0.1
    [junit] 2008-01-18 15:37:05,972 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 0 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-18 15:37:05,973 INFO  dfs.DataNode (DataNode.java:writeBlock(1150)) - Datanode 1 got response for connect ack  from downstream datanode with firstbadlink as
    [junit] 2008-01-18 15:37:05,974 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 1 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-18 15:37:05,974 INFO  dfs.DataNode (DataNode.java:writeBlock(1150)) - Datanode 2 got response for connect ack  from downstream datanode with firstbadlink as
    [junit] 2008-01-18 15:37:05,975 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 2 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-18 15:37:06,227 INFO  dfs.DataNode (DataNode.java:lastDataNodeRun(1802)) - Received block blk_6638807983918515008 of size 100 from /127.0.0.1
    [junit] 2008-01-18 15:37:06,228 INFO  dfs.DataNode (DataNode.java:lastDataNodeRun(1819)) - PacketResponder 0 for block blk_6638807983918515008 terminating
    [junit] 2008-01-18 15:37:06,228 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:49814 is added to blk_6638807983918515008 size 100
    [junit] 2008-01-18 15:37:06,359 INFO  dfs.DataNode (DataNode.java:run(1886)) - Received block blk_6638807983918515008 of size 100 from /127.0.0.1
    [junit] 2008-01-18 15:37:06,361 INFO  dfs.DataNode (DataNode.java:run(1944)) - PacketResponder 1 for block blk_6638807983918515008 terminating
    [junit] 2008-01-18 15:37:06,361 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:49810 is added to blk_6638807983918515008 size 100
    [junit] 2008-01-18 15:37:06,527 INFO  dfs.DataNode (DataNode.java:run(1886)) - Received block blk_6638807983918515008 of size 100 from /127.0.0.1
    [junit] 2008-01-18 15:37:06,527 INFO  dfs.DataNode (DataNode.java:run(1944)) - PacketResponder 2 for block blk_6638807983918515008 terminating
    [junit] 2008-01-18 15:37:06,528 INFO  fs.DFSClient (DFSClient.java:run(1653)) - Closing old block blk_6638807983918515008
    [junit] 2008-01-18 15:37:06,530 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:49805 is added to blk_6638807983918515008 size 100
    [junit] 2008-01-18 15:37:07,222 INFO  dfs.DataNode (DataNode.java:readBlock(1051)) - 127.0.0.1:49810 Served block blk_6638807983918515008 to /127.0.0.1
    [junit] 2008-01-18 15:37:07,467 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 8 on 49800, call mkdirs(/data/web2, rwxr-xr-x) from 127.0.0.1:49825: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:152)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:100)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:3934)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.mkdirsInternal(FSNamesystem.java:1541)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.mkdirs(FSNamesystem.java:1524)
    [junit] at org.apache.hadoop.dfs.NameNode.mkdirs(NameNode.java:413)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
    [junit] 2008-01-18 15:37:07,492 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 7 on 49800, call create(/data/file2, rwxr-xr-x, DFSClient_-2020994479, true, 3, 67108864) from 127.0.0.1:49825: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:152)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:100)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:3934)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.startFileInternal(FSNamesystem.java:940)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:915)
    [junit] at org.apache.hadoop.dfs.NameNode.create(NameNode.java:273)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
    [junit] 2008-01-18 15:37:07,497 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 2 on 49800, call open(/data/file1, 0, 671088640) from 127.0.0.1:49825: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=READ, inode="file1":hudson:supergroup:rw-------
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=READ, inode="file1":hudson:supergroup:rw-------
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:106)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPathAccess(FSNamesystem.java:3924)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.getBlockLocations(FSNamesystem.java:732)
    [junit] at org.apache.hadoop.dfs.NameNode.getBlockLocations(NameNode.java:246)
    [junit] at org.apache.hadoop.dfs.NameNode.open(NameNode.java:233)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2008-01-18 15:37:08,445 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:49818
    [junit] 2008-01-18 15:37:08,447 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@d647d8
    [junit] 2008-01-18 15:37:08,781 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-18 15:37:08,890 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-18 15:37:09,006 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-18 15:37:09,007 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@1a41cc7
    [junit] 2008-01-18 15:37:09,008 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-18 15:37:09,262 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-18 15:37:10,012 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2008-01-18 15:37:10,013 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:49814:Finishing DataNode in: FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data5/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6/current'}
    [junit] Shutting down DataNode 1
    [junit] 2008-01-18 15:37:10,017 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:49811
    [junit] 2008-01-18 15:37:10,018 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@19b4748
    [junit] 2008-01-18 15:37:10,133 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-18 15:37:10,237 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-18 15:37:10,333 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-18 15:37:10,334 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@89dd
    [junit] 2008-01-18 15:37:10,335 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-18 15:37:10,336 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-18 15:37:10,336 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:49810:Finishing DataNode in: FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data3/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4/current'}
    [junit] Shutting down DataNode 0
    [junit] 2008-01-18 15:37:10,338 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:49806
    [junit] 2008-01-18 15:37:10,339 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@1972e3a
    [junit] 2008-01-18 15:37:10,439 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-18 15:37:10,536 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-18 15:37:10,633 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-18 15:37:10,633 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@c24193
    [junit] 2008-01-18 15:37:10,634 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-18 15:37:10,792 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-18 15:37:11,642 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2008-01-18 15:37:11,643 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:49805:Finishing DataNode in: FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data1/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 2008-01-18 15:37:11,644 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:49801
    [junit] 2008-01-18 15:37:11,645 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@10980e7
    [junit] 2008-01-18 15:37:11,742 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-18 15:37:11,839 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-18 15:37:11,933 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-18 15:37:11,933 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@4c71d2
    [junit] 2008-01-18 15:37:11,934 INFO  fs.FSNamesystem (FSEditLog.java:printStatistics(772)) - Number of transactions: 6 Total time for transactions(ms): 0 Number of syncs: 4 SyncTimes(ms): 457 592
    [junit] 2008-01-18 15:37:12,027 INFO  ipc.Server (Server.java:stop(999)) - Stopping server on 49800
    [junit] 2008-01-18 15:37:12,028 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 1 on 49800: exiting
    [junit] 2008-01-18 15:37:12,028 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 49800: exiting
    [junit] 2008-01-18 15:37:12,030 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 7 on 49800: exiting
    [junit] 2008-01-18 15:37:12,031 INFO  ipc.Server (Server.java:run(525)) - Stopping IPC Server Responder
    [junit] 2008-01-18 15:37:12,030 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 3 on 49800: exiting
    [junit] 2008-01-18 15:37:12,030 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 5 on 49800: exiting
    [junit] 2008-01-18 15:37:12,030 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 6 on 49800: exiting
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 58.865 sec
    [junit] 2008-01-18 15:37:12,030 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 8 on 49800: exiting
    [junit] 2008-01-18 15:37:12,029 INFO  ipc.Server (Server.java:run(353)) - Stopping IPC Server listener on 49800
    [junit] 2008-01-18 15:37:12,029 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 9 on 49800: exiting
    [junit] 2008-01-18 15:37:12,029 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 4 on 49800: exiting
    [junit] 2008-01-18 15:37:12,029 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 2 on 49800: exiting
    [junit] Running org.apache.hadoop.security.TestUnixUserGroupInformation
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.574 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 1.021 sec

BUILD FAILED
/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build.xml:532: Tests failed!

Total time: 224 minutes 50 seconds
Recording fingerprints
Publishing Javadoc
Recording test results


Build failed in Hudson: Hadoop-Nightly #370

hudson-6
See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/370/changes

Changes:

[cdouglas] HADOOP-2582. Prevent 'bin/hadoop fs -copyToLocal' from creating zero-length
files when the src does not exist. Contributed by Lohit Vijayarenu.

[nigel] parameterized clover jar location

[nigel] parameterized nightly build script

------------------------------------------
[...truncated 75123 lines...]
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:134)
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:106)
    [junit] at org.apache.hadoop.security.TestPermission.testFilePermision(TestPermission.java:111)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] 2008-01-19 13:51:58,601 INFO  dfs.NameNodeMetrics (NameNodeMetrics.java:<init>(74)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at junit.framework.TestCase.runTest(TestCase.java:154)
    [junit] at junit.framework.TestCase.runBare(TestCase.java:127)
    [junit] at junit.framework.TestResult$1.protect(TestResult.java:106)
    [junit] at junit.framework.TestResult.runProtected(TestResult.java:124)
    [junit] at junit.framework.TestResult.run(TestResult.java:109)
    [junit] at junit.framework.TestCase.run(TestCase.java:118)
    [junit] at junit.framework.TestSuite.runTest(TestSuite.java:208)
    [junit] at junit.framework.TestSuite.run(TestSuite.java:203)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:297)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:672)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:567)
    [junit] 2008-01-19 13:51:58,633 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(321)) - fsOwner=hudson,other
    [junit] 2008-01-19 13:51:58,636 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(325)) - supergroup=supergroup
    [junit] 2008-01-19 13:51:58,636 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(326)) - isPermissionEnabled=true
    [junit] 2008-01-19 13:51:58,655 INFO  fs.FSNamesystem (FSNamesystem.java:initialize(248)) - Finished loading FSImage in 52 msecs
    [junit] 2008-01-19 13:51:58,656 INFO  fs.FSNamesystem (FSNamesystem.java:leave(3554)) - Leaving safemode after 53 msecs
    [junit] 2008-01-19 13:51:58,657 INFO  dfs.StateChange (FSNamesystem.java:leave(3563)) - STATE* Network topology has 0 racks and 0 datanodes
    [junit] 2008-01-19 13:51:58,657 INFO  dfs.StateChange (FSNamesystem.java:leave(3566)) - STATE* UnderReplicatedBlocks has 0 blocks
    [junit] 2008-01-19 13:51:58,664 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-19 13:51:58,821 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@edbe39
    [junit] 2008-01-19 13:51:58,826 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-19 13:51:58,828 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-19 13:51:58,829 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-19 13:51:58,832 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:53228
    [junit] 2008-01-19 13:51:58,834 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@c4bc34
    [junit] 2008-01-19 13:51:58,836 INFO  fs.FSNamesystem (FSNamesystem.java:initialize(287)) - Web-server up at: 0.0.0.0:53228
    [junit] 2008-01-19 13:51:58,838 INFO  ipc.Server (Server.java:run(470)) - IPC Server Responder: starting
    [junit] 2008-01-19 13:51:58,839 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 0 on 53227: starting
    [junit] 2008-01-19 13:51:58,840 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 2 on 53227: starting
    [junit] 2008-01-19 13:51:58,842 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 6 on 53227: starting
    [junit] 2008-01-19 13:51:58,842 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 7 on 53227: starting
    [junit] 2008-01-19 13:51:58,843 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 8 on 53227: starting
    [junit] 2008-01-19 13:51:58,843 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 9 on 53227: starting
    [junit] 2008-01-19 13:51:58,841 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 5 on 53227: starting
    [junit] 2008-01-19 13:51:58,841 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 4 on 53227: starting
    [junit] 2008-01-19 13:51:58,841 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 3 on 53227: starting
    [junit] 2008-01-19 13:51:58,840 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 1 on 53227: starting
    [junit] 2008-01-19 13:51:58,838 INFO  ipc.Server (Server.java:run(317)) - IPC Server listener on 53227: starting
    [junit] Starting DataNode 0 with dfs.data.dir: /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data1,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2
    [junit] 2008-01-19 13:51:58,895 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-19 13:51:58,905 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data1 is not formatted.
    [junit] 2008-01-19 13:51:58,906 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-19 13:51:59,423 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2 is not formatted.
    [junit] 2008-01-19 13:51:59,424 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-19 13:51:59,974 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 53231
    [junit] 2008-01-19 13:51:59,976 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-19 13:51:59,981 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-19 13:52:00,161 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@1b6235b
    [junit] 2008-01-19 13:52:00,166 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-19 13:52:00,167 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-19 13:52:00,168 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-19 13:52:00,169 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:53232
    [junit] 2008-01-19 13:52:00,170 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@4d41e2
    [junit] 2008-01-19 13:52:00,174 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:53231 storage DS-749940427-140.211.11.75-53231-1200750720172
    [junit] 2008-01-19 13:52:00,176 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:53231
    [junit] 2008-01-19 13:52:00,326 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-749940427-140.211.11.75-53231-1200750720172 is assigned to data-node 127.0.0.1:53231
    [junit] 2008-01-19 13:52:00,328 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:53231In DataNode.run, data = FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data1/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 2008-01-19 13:52:00,329 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3473877msec Initial delay: 0msec
    [junit] Starting DataNode 1 with dfs.data.dir: /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data3,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4
    [junit] 2008-01-19 13:52:00,388 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-19 13:52:00,446 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data3 is not formatted.
    [junit] 2008-01-19 13:52:00,447 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-19 13:52:00,629 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2008-01-19 13:52:00,968 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4 is not formatted.
    [junit] 2008-01-19 13:52:00,969 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-19 13:52:01,662 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 53237
    [junit] 2008-01-19 13:52:01,664 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-19 13:52:01,669 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-19 13:52:01,809 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@1df2964
    [junit] 2008-01-19 13:52:01,813 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-19 13:52:01,815 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-19 13:52:01,816 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-19 13:52:01,819 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:53238
    [junit] 2008-01-19 13:52:01,820 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@1d840d9
    [junit] 2008-01-19 13:52:01,825 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:53237 storage DS-1737614705-140.211.11.75-53237-1200750721824
    [junit] 2008-01-19 13:52:01,826 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:53237
    [junit] 2008-01-19 13:52:01,950 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-1737614705-140.211.11.75-53237-1200750721824 is assigned to data-node 127.0.0.1:53237
    [junit] Starting DataNode 2 with dfs.data.dir: /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data5,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6
    [junit] 2008-01-19 13:52:01,952 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:53237In DataNode.run, data = FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data3/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4/current'}
    [junit] 2008-01-19 13:52:01,954 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3421312msec Initial delay: 0msec
    [junit] 2008-01-19 13:52:01,955 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-19 13:52:02,003 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data5 is not formatted.
    [junit] 2008-01-19 13:52:02,004 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-19 13:52:02,256 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 2 msecs
    [junit] 2008-01-19 13:52:02,543 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6 is not formatted.
    [junit] 2008-01-19 13:52:02,544 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-19 13:52:03,780 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 53239
    [junit] 2008-01-19 13:52:03,781 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-19 13:52:03,787 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-19 13:52:03,919 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@8beff2
    [junit] 2008-01-19 13:52:03,924 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-19 13:52:03,925 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-19 13:52:03,926 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-19 13:52:03,930 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:53240
    [junit] 2008-01-19 13:52:03,931 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@cc7439
    [junit] 2008-01-19 13:52:03,937 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:53239 storage DS-744006028-140.211.11.75-53239-1200750723935
    [junit] 2008-01-19 13:52:03,938 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:53239
    [junit] 2008-01-19 13:52:04,244 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-744006028-140.211.11.75-53239-1200750723935 is assigned to data-node 127.0.0.1:53239
    [junit] 2008-01-19 13:52:04,246 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:53239In DataNode.run, data = FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data5/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6/current'}
    [junit] 2008-01-19 13:52:04,247 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3589598msec Initial delay: 0msec
    [junit] 2008-01-19 13:52:04,505 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 3 msecs
    [junit] 2008-01-19 13:52:05,941 INFO  fs.DFSClient (DFSClient.java:run(1592)) - Allocating new block
    [junit] 2008-01-19 13:52:05,944 INFO  dfs.StateChange (FSNamesystem.java:allocateBlock(1274)) - BLOCK* NameSystem.allocateBlock: /data/file1. blk_8355597249011503480
    [junit] 2008-01-19 13:52:05,947 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:53237
    [junit] 2008-01-19 13:52:05,948 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:53231
    [junit] 2008-01-19 13:52:05,948 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:53239
    [junit] 2008-01-19 13:52:05,948 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1985)) - Connecting to 127.0.0.1:53237
    [junit] 2008-01-19 13:52:05,952 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_8355597249011503480 from /127.0.0.1
    [junit] 2008-01-19 13:52:05,956 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_8355597249011503480 from /127.0.0.1
    [junit] 2008-01-19 13:52:05,958 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_8355597249011503480 from /127.0.0.1
    [junit] 2008-01-19 13:52:05,960 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 0 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-19 13:52:05,960 INFO  dfs.DataNode (DataNode.java:writeBlock(1150)) - Datanode 1 got response for connect ack  from downstream datanode with firstbadlink as
    [junit] 2008-01-19 13:52:05,961 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 1 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-19 13:52:05,961 INFO  dfs.DataNode (DataNode.java:writeBlock(1150)) - Datanode 2 got response for connect ack  from downstream datanode with firstbadlink as
    [junit] 2008-01-19 13:52:05,962 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 2 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-19 13:52:06,011 INFO  dfs.DataNode (DataNode.java:lastDataNodeRun(1802)) - Received block blk_8355597249011503480 of size 100 from /127.0.0.1
    [junit] 2008-01-19 13:52:06,013 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:53239 is added to blk_8355597249011503480 size 100
    [junit] 2008-01-19 13:52:06,014 INFO  dfs.DataNode (DataNode.java:lastDataNodeRun(1819)) - PacketResponder 0 for block blk_8355597249011503480 terminating
    [junit] 2008-01-19 13:52:06,463 INFO  dfs.DataNode (DataNode.java:run(1886)) - Received block blk_8355597249011503480 of size 100 from /127.0.0.1
    [junit] 2008-01-19 13:52:06,464 INFO  dfs.DataNode (DataNode.java:run(1944)) - PacketResponder 1 for block blk_8355597249011503480 terminating
    [junit] 2008-01-19 13:52:06,467 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:53231 is added to blk_8355597249011503480 size 100
    [junit] 2008-01-19 13:52:06,555 INFO  dfs.DataNode (DataNode.java:run(1886)) - Received block blk_8355597249011503480 of size 100 from /127.0.0.1
    [junit] 2008-01-19 13:52:06,556 INFO  dfs.DataNode (DataNode.java:run(1944)) - PacketResponder 2 for block blk_8355597249011503480 terminating
    [junit] 2008-01-19 13:52:06,556 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:53237 is added to blk_8355597249011503480 size 100
    [junit] 2008-01-19 13:52:06,556 INFO  fs.DFSClient (DFSClient.java:run(1653)) - Closing old block blk_8355597249011503480
    [junit] 2008-01-19 13:52:07,392 INFO  dfs.DataNode (DataNode.java:readBlock(1051)) - 127.0.0.1:53231 Served block blk_8355597249011503480 to /127.0.0.1
    [junit] 2008-01-19 13:52:08,011 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 7 on 53227, call mkdirs(/data/web2, rwxr-xr-x) from 127.0.0.1:53246: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:152)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:100)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:3934)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.mkdirsInternal(FSNamesystem.java:1541)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.mkdirs(FSNamesystem.java:1524)
    [junit] at org.apache.hadoop.dfs.NameNode.mkdirs(NameNode.java:413)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
    [junit] 2008-01-19 13:52:08,037 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 8 on 53227, call create(/data/file2, rwxr-xr-x, DFSClient_-2146738807, true, 3, 67108864) from 127.0.0.1:53246: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:152)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:100)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:3934)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.startFileInternal(FSNamesystem.java:940)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:915)
    [junit] at org.apache.hadoop.dfs.NameNode.create(NameNode.java:273)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
    [junit] 2008-01-19 13:52:08,042 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 9 on 53227, call open(/data/file1, 0, 671088640) from 127.0.0.1:53246: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=READ, inode="file1":hudson:supergroup:rw-------
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=READ, inode="file1":hudson:supergroup:rw-------
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:106)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPathAccess(FSNamesystem.java:3924)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.getBlockLocations(FSNamesystem.java:732)
    [junit] at org.apache.hadoop.dfs.NameNode.getBlockLocations(NameNode.java:246)
    [junit] at org.apache.hadoop.dfs.NameNode.open(NameNode.java:233)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2008-01-19 13:52:08,984 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:53240
    [junit] 2008-01-19 13:52:08,986 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@8beff2
    [junit] 2008-01-19 13:52:09,266 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-19 13:52:09,369 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-19 13:52:09,486 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-19 13:52:09,487 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@cc7439
    [junit] 2008-01-19 13:52:09,488 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-19 13:52:10,262 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:53239:Finishing DataNode in: FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data5/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6/current'}
    [junit] 2008-01-19 13:52:10,372 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-19 13:52:10,492 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 2008-01-19 13:52:10,493 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:53238
    [junit] 2008-01-19 13:52:10,494 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@1df2964
    [junit] 2008-01-19 13:52:10,596 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-19 13:52:10,693 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-19 13:52:10,791 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-19 13:52:10,791 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@1d840d9
    [junit] 2008-01-19 13:52:10,793 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-19 13:52:10,982 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:53237:Finishing DataNode in: FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data3/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4/current'}
    [junit] 2008-01-19 13:52:11,282 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-19 13:52:11,801 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2008-01-19 13:52:11,803 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:53232
    [junit] 2008-01-19 13:52:11,804 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@1b6235b
    [junit] 2008-01-19 13:52:11,910 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-19 13:52:12,003 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-19 13:52:12,092 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-19 13:52:12,093 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@4d41e2
    [junit] 2008-01-19 13:52:12,094 INFO  dfs.DataNode (DataNode.java:run(926)) - 127.0.0.1:53231:Exiting DataXceiveServer due to java.net.SocketException: Socket closed
    [junit] 2008-01-19 13:52:12,095 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2008-01-19 13:52:12,096 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-19 13:52:12,096 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:53231:Finishing DataNode in: FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data1/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 2008-01-19 13:52:12,097 INFO  util.ThreadedServer (ThreadedServer.java:run(656)) - Stopping Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=53228]
    [junit] 2008-01-19 13:52:12,102 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:53228
    [junit] 2008-01-19 13:52:12,103 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@edbe39
    [junit] 2008-01-19 13:52:12,192 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-19 13:52:12,284 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-19 13:52:12,373 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-19 13:52:12,374 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@c4bc34
    [junit] 2008-01-19 13:52:12,375 INFO  fs.FSNamesystem (FSEditLog.java:printStatistics(772)) - Number of transactions: 6 Total time for transactions(ms): 3 Number of syncs: 4 SyncTimes(ms): 758 648
    [junit] 2008-01-19 13:52:12,420 INFO  ipc.Server (Server.java:stop(999)) - Stopping server on 53227
    [junit] 2008-01-19 13:52:12,422 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 2 on 53227: exiting
    [junit] 2008-01-19 13:52:12,423 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 3 on 53227: exiting
    [junit] 2008-01-19 13:52:12,423 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 4 on 53227: exiting
    [junit] 2008-01-19 13:52:12,424 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 53227: exiting
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 33.311 sec
    [junit] 2008-01-19 13:52:12,423 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 1 on 53227: exiting
    [junit] 2008-01-19 13:52:12,423 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 6 on 53227: exiting
    [junit] 2008-01-19 13:52:12,423 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 5 on 53227: exiting
    [junit] 2008-01-19 13:52:12,423 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 7 on 53227: exiting
    [junit] 2008-01-19 13:52:12,422 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 8 on 53227: exiting
    [junit] 2008-01-19 13:52:12,428 INFO  ipc.Server (Server.java:run(525)) - Stopping IPC Server Responder
    [junit] 2008-01-19 13:52:12,422 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 9 on 53227: exiting
    [junit] Running org.apache.hadoop.security.TestUnixUserGroupInformation
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.532 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.948 sec

BUILD FAILED
/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build.xml:532: Tests failed!

Total time: 144 minutes 59 seconds
Recording fingerprints
Publishing Javadoc
Recording test results
Updating HADOOP-2582

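The AccessControlException traces above all follow the same POSIX-style rule: the caller "Who" is neither the inode owner ("hudson") nor in its group ("supergroup"), so only the "other" permission bits of rwxr-xr-x apply, which allows READ but denies WRITE. A minimal, self-contained model of that class-selection logic (not Hadoop's actual PermissionChecker, just an illustration of the rule the test exercises):

```java
// Model of the owner/group/other permission check behind the
// "Permission denied: user=Who, access=WRITE" log lines above.
public class PermissionModel {
    static final int READ = 4, WRITE = 2, EXECUTE = 1;

    /** mode is the 9-bit rwxrwxrwx value, e.g. 0755 for rwxr-xr-x. */
    static boolean permitted(String user, String[] groups,
                             String owner, String group,
                             int mode, int access) {
        int bits;
        if (user.equals(owner)) {
            bits = (mode >> 6) & 7;   // owner class
        } else if (contains(groups, group)) {
            bits = (mode >> 3) & 7;   // group class
        } else {
            bits = mode & 7;          // other class
        }
        return (bits & access) == access;
    }

    static boolean contains(String[] a, String s) {
        for (String x : a) if (x.equals(s)) return true;
        return false;
    }

    public static void main(String[] args) {
        // inode "data":hudson:supergroup:rwxr-xr-x, caller "Who"
        boolean w = permitted("Who", new String[]{"users"},
                              "hudson", "supergroup", 0755, WRITE);
        boolean r = permitted("Who", new String[]{"users"},
                              "hudson", "supergroup", 0755, READ);
        System.out.println("WRITE=" + w + " READ=" + r);
        // prints: WRITE=false READ=true
    }
}
```

The rw------- case on file1 is the same rule with mode 0600: the "other" class has no bits set, so even READ is denied, matching the open(/data/file1, ...) failure in the trace.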

Build failed in Hudson: Hadoop-Nightly #371

hudson-6
See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/371/changes

Changes:

[cdouglas] HADOOP-2367. Add ability to profile a subset of map/reduce tasks and fetch the
result to the local filesystem of the submitting application. Also includes a
general IntegerRanges extension to Configuration for setting positive, ranged
parameters. Contributed by Owen O'Malley.

[jimk] HADOOP-2643 Make migration tool smarter.

------------------------------------------
[...truncated 74600 lines...]
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:195)
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:134)
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:106)
    [junit] at org.apache.hadoop.security.TestPermission.testFilePermision(TestPermission.java:111)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at junit.framework.TestCase.runTest(TestCase.java:154)
    [junit] at junit.framework.TestCase.runBare(TestCase.java:127)
    [junit] at junit.framework.TestResult$1.protect(TestResult.java:106)
    [junit] at junit.framework.TestResult.runProtected(TestResult.java:124)
    [junit] at junit.framework.TestResult.run(TestResult.java:109)
    [junit] at junit.framework.TestCase.run(TestCase.java:118)
    [junit] at junit.framework.TestSuite.runTest(TestSuite.java:208)
    [junit] at junit.framework.TestSuite.run(TestSuite.java:203)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:297)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:672)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:567)
    [junit] 2008-01-20 17:29:29,224 INFO  dfs.NameNodeMetrics (NameNodeMetrics.java:<init>(74)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
    [junit] 2008-01-20 17:29:29,259 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(321)) - fsOwner=hudson,other
    [junit] 2008-01-20 17:29:29,261 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(325)) - supergroup=supergroup
    [junit] 2008-01-20 17:29:29,262 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(326)) - isPermissionEnabled=true
    [junit] 2008-01-20 17:29:29,267 INFO  fs.FSNamesystem (FSNamesystem.java:initialize(248)) - Finished loading FSImage in 43 msecs
    [junit] 2008-01-20 17:29:29,268 INFO  fs.FSNamesystem (FSNamesystem.java:leave(3554)) - Leaving safemode after 44 msecs
    [junit] 2008-01-20 17:29:29,268 INFO  dfs.StateChange (FSNamesystem.java:leave(3563)) - STATE* Network topology has 0 racks and 0 datanodes
    [junit] 2008-01-20 17:29:29,269 INFO  dfs.StateChange (FSNamesystem.java:leave(3566)) - STATE* UnderReplicatedBlocks has 0 blocks
    [junit] 2008-01-20 17:29:29,273 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-20 17:29:29,379 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@edbe39
    [junit] 2008-01-20 17:29:29,381 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-20 17:29:29,382 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-20 17:29:29,382 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-20 17:29:29,384 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:36721
    [junit] 2008-01-20 17:29:29,384 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@c4bc34
    [junit] 2008-01-20 17:29:29,384 INFO  fs.FSNamesystem (FSNamesystem.java:initialize(287)) - Web-server up at: 0.0.0.0:36721
    [junit] 2008-01-20 17:29:29,385 INFO  ipc.Server (Server.java:run(470)) - IPC Server Responder: starting
    [junit] 2008-01-20 17:29:29,385 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 0 on 36720: starting
    [junit] 2008-01-20 17:29:29,386 INFO  ipc.Server (Server.java:run(317)) - IPC Server listener on 36720: starting
    [junit] 2008-01-20 17:29:29,387 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 1 on 36720: starting
    [junit] 2008-01-20 17:29:29,387 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 5 on 36720: starting
    [junit] 2008-01-20 17:29:29,387 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 6 on 36720: starting
    [junit] 2008-01-20 17:29:29,388 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 9 on 36720: starting
    [junit] 2008-01-20 17:29:29,388 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 3 on 36720: starting
    [junit] 2008-01-20 17:29:29,388 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 7 on 36720: starting
    [junit] 2008-01-20 17:29:29,388 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 8 on 36720: starting
    [junit] 2008-01-20 17:29:29,388 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 4 on 36720: starting
    [junit] 2008-01-20 17:29:29,386 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 2 on 36720: starting
    [junit] Starting DataNode 0 with dfs.data.dir: /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data1,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2
    [junit] 2008-01-20 17:29:29,424 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-20 17:29:29,428 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data1 is not formatted.
    [junit] 2008-01-20 17:29:29,429 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-20 17:29:29,589 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2 is not formatted.
    [junit] 2008-01-20 17:29:29,590 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-20 17:29:29,715 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 36724
    [junit] 2008-01-20 17:29:29,723 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-20 17:29:29,725 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-20 17:29:29,804 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@bfed5a
    [junit] 2008-01-20 17:29:29,806 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-20 17:29:29,807 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-20 17:29:29,807 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-20 17:29:29,808 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:36725
    [junit] 2008-01-20 17:29:29,809 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@4d41e2
    [junit] 2008-01-20 17:29:29,810 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:36724 storage DS-1922225288-140.211.11.75-36724-1200850169809
    [junit] 2008-01-20 17:29:29,811 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:36724
    [junit] 2008-01-20 17:29:29,831 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-1922225288-140.211.11.75-36724-1200850169809 is assigned to data-node 127.0.0.1:36724
    [junit] Starting DataNode 1 with dfs.data.dir: /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data3,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4
    [junit] 2008-01-20 17:29:29,832 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:36724In DataNode.run, data = FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data1/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 2008-01-20 17:29:29,832 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-20 17:29:29,833 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3332577msec Initial delay: 0msec
    [junit] 2008-01-20 17:29:29,879 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data3 is not formatted.
    [junit] 2008-01-20 17:29:29,880 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-20 17:29:30,050 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2008-01-20 17:29:30,140 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4 is not formatted.
    [junit] 2008-01-20 17:29:30,140 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-20 17:29:30,339 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 36726
    [junit] 2008-01-20 17:29:30,340 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-20 17:29:30,342 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-20 17:29:30,420 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@174aa60
    [junit] 2008-01-20 17:29:30,421 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-20 17:29:30,422 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-20 17:29:30,422 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-20 17:29:30,424 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:36727
    [junit] 2008-01-20 17:29:30,424 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@8f2ca6
    [junit] 2008-01-20 17:29:30,426 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:36726 storage DS-25598367-140.211.11.75-36726-1200850170425
    [junit] 2008-01-20 17:29:30,426 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:36726
    [junit] 2008-01-20 17:29:30,455 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-25598367-140.211.11.75-36726-1200850170425 is assigned to data-node 127.0.0.1:36726
    [junit] 2008-01-20 17:29:30,456 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:36726In DataNode.run, data = FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data3/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4/current'}
    [junit] Starting DataNode 2 with dfs.data.dir: /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data5,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6
    [junit] 2008-01-20 17:29:30,457 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3454231msec Initial delay: 0msec
    [junit] 2008-01-20 17:29:30,457 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-20 17:29:30,491 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data5 is not formatted.
    [junit] 2008-01-20 17:29:30,492 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-20 17:29:30,634 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2008-01-20 17:29:30,681 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory /export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6 is not formatted.
    [junit] 2008-01-20 17:29:30,682 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-20 17:29:30,849 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 36728
    [junit] 2008-01-20 17:29:30,850 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-20 17:29:30,853 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-20 17:29:30,928 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@1bb326c
    [junit] 2008-01-20 17:29:30,930 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-20 17:29:30,930 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-20 17:29:30,931 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-20 17:29:30,933 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:36729
    [junit] 2008-01-20 17:29:30,933 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@1ab600f
    [junit] 2008-01-20 17:29:30,935 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:36728 storage DS-339789329-140.211.11.75-36728-1200850170934
    [junit] 2008-01-20 17:29:30,935 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:36728
    [junit] 2008-01-20 17:29:31,107 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-339789329-140.211.11.75-36728-1200850170934 is assigned to data-node 127.0.0.1:36728
    [junit] 2008-01-20 17:29:31,108 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:36728In DataNode.run, data = FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data5/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6/current'}
    [junit] 2008-01-20 17:29:31,138 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3520751msec Initial delay: 0msec
    [junit] 2008-01-20 17:29:31,324 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2008-01-20 17:29:32,465 INFO  fs.DFSClient (DFSClient.java:run(1592)) - Allocating new block
    [junit] 2008-01-20 17:29:32,466 INFO  dfs.StateChange (FSNamesystem.java:allocateBlock(1274)) - BLOCK* NameSystem.allocateBlock: /data/file1. blk_-6601888446323993104
    [junit] 2008-01-20 17:29:32,468 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:36724
    [junit] 2008-01-20 17:29:32,468 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:36726
    [junit] 2008-01-20 17:29:32,468 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:36728
    [junit] 2008-01-20 17:29:32,468 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1985)) - Connecting to 127.0.0.1:36724
    [junit] 2008-01-20 17:29:32,470 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_-6601888446323993104 from /127.0.0.1
    [junit] 2008-01-20 17:29:32,472 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_-6601888446323993104 from /127.0.0.1
    [junit] 2008-01-20 17:29:32,473 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_-6601888446323993104 from /127.0.0.1
    [junit] 2008-01-20 17:29:32,474 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 0 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-20 17:29:32,474 INFO  dfs.DataNode (DataNode.java:writeBlock(1150)) - Datanode 1 got response for connect ack  from downstream datanode with firstbadlink as
    [junit] 2008-01-20 17:29:32,474 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 1 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-20 17:29:32,475 INFO  dfs.DataNode (DataNode.java:writeBlock(1150)) - Datanode 2 got response for connect ack  from downstream datanode with firstbadlink as
    [junit] 2008-01-20 17:29:32,475 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 2 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-20 17:29:32,550 INFO  dfs.DataNode (DataNode.java:lastDataNodeRun(1802)) - Received block blk_-6601888446323993104 of size 100 from /127.0.0.1
    [junit] 2008-01-20 17:29:32,551 INFO  dfs.DataNode (DataNode.java:lastDataNodeRun(1819)) - PacketResponder 0 for block blk_-6601888446323993104 terminating
    [junit] 2008-01-20 17:29:32,552 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:36728 is added to blk_-6601888446323993104 size 100
    [junit] 2008-01-20 17:29:32,700 INFO  dfs.DataNode (DataNode.java:run(1886)) - Received block blk_-6601888446323993104 of size 100 from /127.0.0.1
    [junit] 2008-01-20 17:29:32,700 INFO  dfs.DataNode (DataNode.java:run(1944)) - PacketResponder 1 for block blk_-6601888446323993104 terminating
    [junit] 2008-01-20 17:29:32,701 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:36726 is added to blk_-6601888446323993104 size 100
    [junit] 2008-01-20 17:29:32,874 INFO  dfs.DataNode (DataNode.java:run(1886)) - Received block blk_-6601888446323993104 of size 100 from /127.0.0.1
    [junit] 2008-01-20 17:29:32,875 INFO  dfs.DataNode (DataNode.java:run(1944)) - PacketResponder 2 for block blk_-6601888446323993104 terminating
    [junit] 2008-01-20 17:29:32,875 INFO  fs.DFSClient (DFSClient.java:run(1653)) - Closing old block blk_-6601888446323993104
    [junit] 2008-01-20 17:29:32,877 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:36724 is added to blk_-6601888446323993104 size 100
    [junit] 2008-01-20 17:29:33,031 INFO  dfs.DataNode (DataNode.java:readBlock(1051)) - 127.0.0.1:36726 Served block blk_-6601888446323993104 to /127.0.0.1
    [junit] 2008-01-20 17:29:33,479 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 0 on 36720, call mkdirs(/data/web2, rwxr-xr-x) from 127.0.0.1:36736: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:152)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:100)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:3934)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.mkdirsInternal(FSNamesystem.java:1541)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.mkdirs(FSNamesystem.java:1524)
    [junit] at org.apache.hadoop.dfs.NameNode.mkdirs(NameNode.java:413)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
    [junit] 2008-01-20 17:29:33,495 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 1 on 36720, call create(/data/file2, rwxr-xr-x, DFSClient_858921736, true, 3, 67108864) from 127.0.0.1:36736: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:152)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:100)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:3934)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.startFileInternal(FSNamesystem.java:940)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:915)
    [junit] at org.apache.hadoop.dfs.NameNode.create(NameNode.java:273)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
    [junit] 2008-01-20 17:29:33,497 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 5 on 36720, call open(/data/file1, 0, 671088640) from 127.0.0.1:36736: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=READ, inode="file1":hudson:supergroup:rw-------
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=READ, inode="file1":hudson:supergroup:rw-------
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:106)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPathAccess(FSNamesystem.java:3924)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.getBlockLocations(FSNamesystem.java:732)
    [junit] at org.apache.hadoop.dfs.NameNode.getBlockLocations(NameNode.java:246)
    [junit] at org.apache.hadoop.dfs.NameNode.open(NameNode.java:233)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
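The three `AccessControlException` traces above all come down to an owner/group/other mode check against inode modes like `rwxr-xr-x` and `rw-------`: the test user `Who` is neither the owner `hudson` nor in `supergroup`, so only the "other" bits apply. The following is a minimal self-contained sketch of that POSIX-style check; the class and method names are illustrative, not Hadoop's actual `PermissionChecker` API.

```java
// Minimal sketch of a POSIX-style mode check mirroring the semantics
// the HDFS PermissionChecker applies to inode modes such as "rwxr-xr-x".
// Class and method names here are hypothetical, not Hadoop's.
public class ModeCheck {
    // access: 'r', 'w', or 'x'; mode: 9-char string like "rwxr-xr-x"
    public static boolean permitted(String user, String group,
                                    String owner, String ownerGroup,
                                    String mode, char access) {
        int offset;
        if (user.equals(owner)) {
            offset = 0;            // owner bits
        } else if (group.equals(ownerGroup)) {
            offset = 3;            // group bits
        } else {
            offset = 6;            // other bits
        }
        int pos = offset + "rwx".indexOf(access);
        return mode.charAt(pos) == access;
    }

    public static void main(String[] args) {
        // user "Who" (not owner "hudson", not in "supergroup") asks WRITE
        // on inode "data":hudson:supergroup:rwxr-xr-x -> denied, as logged
        System.out.println(permitted("Who", "users", "hudson", "supergroup",
                                     "rwxr-xr-x", 'w'));
    }
}
```

With this check, the `open(/data/file1, ...)` failure follows the same way: `rw-------` grants "other" users no READ bit, hence the second exception.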
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2008-01-20 17:29:34,472 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:36729
    [junit] 2008-01-20 17:29:34,472 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@1bb326c
    [junit] 2008-01-20 17:29:34,596 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-20 17:29:34,704 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-20 17:29:34,809 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-20 17:29:34,810 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@1ab600f
    [junit] 2008-01-20 17:29:34,811 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-20 17:29:34,812 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-20 17:29:34,812 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:36728:Finishing DataNode in: FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data5/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6/current'}
    [junit] Shutting down DataNode 1
    [junit] 2008-01-20 17:29:34,814 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:36727
    [junit] 2008-01-20 17:29:34,814 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@174aa60
    [junit] 2008-01-20 17:29:34,914 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-20 17:29:35,012 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-20 17:29:35,109 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-20 17:29:35,109 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@8f2ca6
    [junit] 2008-01-20 17:29:35,110 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-20 17:29:35,600 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-20 17:29:36,120 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2008-01-20 17:29:36,121 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:36726:Finishing DataNode in: FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data3/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4/current'}
    [junit] Shutting down DataNode 0
    [junit] 2008-01-20 17:29:36,122 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:36725
    [junit] 2008-01-20 17:29:36,123 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@bfed5a
    [junit] 2008-01-20 17:29:36,220 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-20 17:29:36,312 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-20 17:29:36,404 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-20 17:29:36,405 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@4d41e2
    [junit] 2008-01-20 17:29:36,406 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-20 17:29:36,407 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-20 17:29:36,407 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:36724:Finishing DataNode in: FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data1/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 2008-01-20 17:29:36,409 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:36721
    [junit] 2008-01-20 17:29:36,409 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@edbe39
    [junit] 2008-01-20 17:29:36,504 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-20 17:29:36,592 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-20 17:29:36,681 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-20 17:29:36,682 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@c4bc34
    [junit] 2008-01-20 17:29:36,683 INFO  fs.FSNamesystem (FSEditLog.java:printStatistics(772)) - Number of transactions: 6 Total time for transactions(ms): 1 Number of syncs: 4 SyncTimes(ms): 228 175
    [junit] 2008-01-20 17:29:36,723 INFO  ipc.Server (Server.java:stop(999)) - Stopping server on 36720
    [junit] 2008-01-20 17:29:36,724 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 1 on 36720: exiting
    [junit] 2008-01-20 17:29:36,724 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 8 on 36720: exiting
    [junit] 2008-01-20 17:29:36,724 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 36720: exiting
    [junit] 2008-01-20 17:29:36,725 INFO  ipc.Server (Server.java:run(525)) - Stopping IPC Server Responder
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 19.376 sec
    [junit] 2008-01-20 17:29:36,725 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 2 on 36720: exiting
    [junit] 2008-01-20 17:29:36,725 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 6 on 36720: exiting
    [junit] 2008-01-20 17:29:36,725 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 5 on 36720: exiting
    [junit] 2008-01-20 17:29:36,724 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 4 on 36720: exiting
    [junit] 2008-01-20 17:29:36,724 INFO  ipc.Server (Server.java:run(353)) - Stopping IPC Server listener on 36720
    [junit] 2008-01-20 17:29:36,724 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 9 on 36720: exiting
    [junit] 2008-01-20 17:29:36,724 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 7 on 36720: exiting
    [junit] 2008-01-20 17:29:36,724 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 3 on 36720: exiting
    [junit] Running org.apache.hadoop.security.TestUnixUserGroupInformation
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.393 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.632 sec

BUILD FAILED
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build.xml:532: Tests failed!

Total time: 165 minutes 2 seconds
Recording fingerprints
Publishing Javadoc
Recording test results
Updating HADOOP-2367
Updating HADOOP-2643


Build failed in Hudson: Hadoop-Nightly #372

hudson-6
See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/372/changes

------------------------------------------
[...truncated 72163 lines...]
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:195)
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:134)
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:106)
    [junit] at org.apache.hadoop.security.TestPermission.testFilePermision(TestPermission.java:111)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at junit.framework.TestCase.runTest(TestCase.java:154)
    [junit] at junit.framework.TestCase.runBare(TestCase.java:127)
    [junit] at junit.framework.TestResult$1.protect(TestResult.java:106)
    [junit] at junit.framework.TestResult.runProtected(TestResult.java:124)
    [junit] at junit.framework.TestResult.run(TestResult.java:109)
    [junit] at junit.framework.TestCase.run(TestCase.java:118)
    [junit] at junit.framework.TestSuite.runTest(TestSuite.java:208)
    [junit] at junit.framework.TestSuite.run(TestSuite.java:203)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:297)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:672)
    [junit] 2008-01-21 13:51:59,816 INFO  dfs.NameNodeMetrics (NameNodeMetrics.java:<init>(74)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:567)
    [junit] 2008-01-21 13:51:59,853 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(321)) - fsOwner=hudson,hudson
    [junit] 2008-01-21 13:51:59,856 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(325)) - supergroup=supergroup
    [junit] 2008-01-21 13:51:59,857 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(326)) - isPermissionEnabled=true
    [junit] 2008-01-21 13:51:59,863 INFO  fs.FSNamesystem (FSNamesystem.java:initialize(248)) - Finished loading FSImage in 47 msecs
    [junit] 2008-01-21 13:51:59,864 INFO  fs.FSNamesystem (FSNamesystem.java:leave(3554)) - Leaving safemode after 48 msecs
    [junit] 2008-01-21 13:51:59,865 INFO  dfs.StateChange (FSNamesystem.java:leave(3563)) - STATE* Network topology has 0 racks and 0 datanodes
    [junit] 2008-01-21 13:51:59,865 INFO  dfs.StateChange (FSNamesystem.java:leave(3566)) - STATE* UnderReplicatedBlocks has 0 blocks
    [junit] 2008-01-21 13:51:59,870 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-21 13:51:59,968 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@cbd8dc
    [junit] 2008-01-21 13:52:00,137 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-21 13:52:00,137 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-21 13:52:00,138 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-21 13:52:00,139 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:54547
    [junit] 2008-01-21 13:52:00,140 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@4b82d2
    [junit] 2008-01-21 13:52:00,141 INFO  fs.FSNamesystem (FSNamesystem.java:initialize(287)) - Web-server up at: 0.0.0.0:54547
    [junit] 2008-01-21 13:52:00,141 INFO  ipc.Server (Server.java:run(470)) - IPC Server Responder: starting
    [junit] 2008-01-21 13:52:00,141 INFO  ipc.Server (Server.java:run(317)) - IPC Server listener on 54546: starting
    [junit] 2008-01-21 13:52:00,142 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 0 on 54546: starting
    [junit] 2008-01-21 13:52:00,142 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 2 on 54546: starting
    [junit] 2008-01-21 13:52:00,142 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 1 on 54546: starting
    [junit] 2008-01-21 13:52:00,142 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 3 on 54546: starting
    [junit] 2008-01-21 13:52:00,143 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 4 on 54546: starting
    [junit] 2008-01-21 13:52:00,143 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 6 on 54546: starting
    [junit] 2008-01-21 13:52:00,143 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 7 on 54546: starting
    [junit] 2008-01-21 13:52:00,144 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 8 on 54546: starting
    [junit] 2008-01-21 13:52:00,143 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 5 on 54546: starting
    [junit] 2008-01-21 13:52:00,144 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 9 on 54546: starting
    [junit] Starting DataNode 0 with dfs.data.dir: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data1,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2 
    [junit] 2008-01-21 13:52:00,185 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-21 13:52:00,189 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data1  is not formatted.
    [junit] 2008-01-21 13:52:00,190 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-21 13:52:04,549 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data2  is not formatted.
    [junit] 2008-01-21 13:52:04,549 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-21 13:52:09,874 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 54558
    [junit] 2008-01-21 13:52:09,875 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-21 13:52:10,239 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-21 13:52:10,341 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@1972e3a
    [junit] 2008-01-21 13:52:10,630 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-21 13:52:10,631 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-21 13:52:10,631 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-21 13:52:10,633 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:54559
    [junit] 2008-01-21 13:52:10,633 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@15f4a7f
    [junit] 2008-01-21 13:52:10,636 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:54558 storage DS-1780115034-140.211.11.75-54558-1200923530634
    [junit] 2008-01-21 13:52:10,636 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:54558
    [junit] 2008-01-21 13:52:11,451 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-1780115034-140.211.11.75-54558-1200923530634 is assigned to data-node 127.0.0.1:54558
    [junit] 2008-01-21 13:52:11,452 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:54558In DataNode.run, data = FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data1/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 2008-01-21 13:52:11,453 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3331398msec Initial delay: 0msec
    [junit] Starting DataNode 1 with dfs.data.dir: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data3,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4 
    [junit] 2008-01-21 13:52:11,529 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-21 13:52:11,534 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data3  is not formatted.
    [junit] 2008-01-21 13:52:11,535 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-21 13:52:12,554 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2008-01-21 13:52:15,114 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data4  is not formatted.
    [junit] 2008-01-21 13:52:15,114 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-21 13:52:18,332 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 54566
    [junit] 2008-01-21 13:52:18,332 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-21 13:52:18,336 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-21 13:52:18,419 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@363068
    [junit] 2008-01-21 13:52:18,422 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-21 13:52:18,422 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-21 13:52:18,423 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-21 13:52:18,424 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:54567
    [junit] 2008-01-21 13:52:18,424 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@b05236
    [junit] 2008-01-21 13:52:18,427 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:54566 storage DS-335295636-140.211.11.75-54566-1200923538426
    [junit] 2008-01-21 13:52:18,427 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:54566
    [junit] 2008-01-21 13:52:19,123 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-335295636-140.211.11.75-54566-1200923538426 is assigned to data-node 127.0.0.1:54566
    [junit] 2008-01-21 13:52:19,124 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:54566In DataNode.run, data = FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data3/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4/current'}
    [junit] 2008-01-21 13:52:19,125 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3329944msec Initial delay: 0msec
    [junit] Starting DataNode 2 with dfs.data.dir: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data5,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6 
    [junit] 2008-01-21 13:52:19,422 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-21 13:52:19,432 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data5  is not formatted.
    [junit] 2008-01-21 13:52:19,432 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-21 13:52:20,275 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2008-01-21 13:52:22,145 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data6  is not formatted.
    [junit] 2008-01-21 13:52:22,145 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-21 13:52:25,612 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 54583
    [junit] 2008-01-21 13:52:25,613 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-21 13:52:25,616 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-21 13:52:25,691 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@2da5a6
    [junit] 2008-01-21 13:52:25,693 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-21 13:52:25,694 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-21 13:52:25,695 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-21 13:52:25,697 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:54584
    [junit] 2008-01-21 13:52:25,698 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@dac21
    [junit] 2008-01-21 13:52:25,700 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:54583 storage DS-1471444427-140.211.11.75-54583-1200923545699
    [junit] 2008-01-21 13:52:25,700 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:54583
    [junit] 2008-01-21 13:52:26,411 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-1471444427-140.211.11.75-54583-1200923545699 is assigned to data-node 127.0.0.1:54583
    [junit] 2008-01-21 13:52:26,412 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:54583In DataNode.run, data = FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data5/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6/current'}
    [junit] 2008-01-21 13:52:26,441 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3562839msec Initial delay: 0msec
    [junit] 2008-01-21 13:52:26,624 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 1 msecs
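Note that the three datanodes log slightly different `BLOCKREPORT_INTERVAL` values (3331398, 3329944, 3562839 msec). A plausible reading, sketched below under that assumption, is that each datanode jitters its report interval around a common base period so the reports don't all hit the namenode at once; the names and the exact jitter formula here are illustrative, not Hadoop's.

```java
import java.util.Random;

// Sketch: randomizing a block-report interval around a base period so
// that datanodes starting together do not all report simultaneously.
// Assumed behavior for illustration; not Hadoop's actual code.
public class ReportJitter {
    // returns a value uniform in [base - base*fraction, base + base*fraction]
    public static long jitteredInterval(long baseMsec, double fraction,
                                        Random rng) {
        long delta = (long) (baseMsec * fraction);
        return baseMsec - delta + (long) (rng.nextDouble() * 2 * delta);
    }

    public static void main(String[] args) {
        Random rng = new Random();
        // e.g. base of one hour with +/-10% jitter, one draw per datanode
        for (int i = 0; i < 3; i++) {
            System.out.println(jitteredInterval(3_600_000L, 0.1, rng));
        }
    }
}
```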
    [junit] 2008-01-21 13:52:28,754 INFO  fs.DFSClient (DFSClient.java:run(1592)) - Allocating new block
    [junit] 2008-01-21 13:52:28,755 INFO  dfs.StateChange (FSNamesystem.java:allocateBlock(1274)) - BLOCK* NameSystem.allocateBlock: /data/file1. blk_3949052223839208120
    [junit] 2008-01-21 13:52:28,757 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:54558
    [junit] 2008-01-21 13:52:28,757 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:54566
    [junit] 2008-01-21 13:52:28,757 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:54583
    [junit] 2008-01-21 13:52:28,758 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1985)) - Connecting to 127.0.0.1:54558
    [junit] 2008-01-21 13:52:28,759 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_3949052223839208120 from /127.0.0.1
    [junit] 2008-01-21 13:52:28,761 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_3949052223839208120 from /127.0.0.1
    [junit] 2008-01-21 13:52:28,767 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_3949052223839208120 from /127.0.0.1
    [junit] 2008-01-21 13:52:28,768 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 0 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-21 13:52:28,768 INFO  dfs.DataNode (DataNode.java:writeBlock(1150)) - Datanode 1 got response for connect ack  from downstream datanode with firstbadlink as
    [junit] 2008-01-21 13:52:28,769 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 1 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-21 13:52:28,769 INFO  dfs.DataNode (DataNode.java:writeBlock(1150)) - Datanode 2 got response for connect ack  from downstream datanode with firstbadlink as
    [junit] 2008-01-21 13:52:28,769 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 2 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-21 13:52:28,971 INFO  dfs.DataNode (DataNode.java:lastDataNodeRun(1802)) - Received block blk_3949052223839208120 of size 100 from /127.0.0.1
    [junit] 2008-01-21 13:52:28,972 INFO  dfs.DataNode (DataNode.java:lastDataNodeRun(1819)) - PacketResponder 0 for block blk_3949052223839208120 terminating
    [junit] 2008-01-21 13:52:28,974 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:54583 is added to blk_3949052223839208120 size 100
    [junit] 2008-01-21 13:52:29,380 INFO  dfs.DataNode (DataNode.java:run(1886)) - Received block blk_3949052223839208120 of size 100 from /127.0.0.1
    [junit] 2008-01-21 13:52:29,380 INFO  dfs.DataNode (DataNode.java:run(1944)) - PacketResponder 1 for block blk_3949052223839208120 terminating
    [junit] 2008-01-21 13:52:29,381 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:54566 is added to blk_3949052223839208120 size 100
    [junit] 2008-01-21 13:52:29,875 INFO  dfs.DataNode (DataNode.java:run(1886)) - Received block blk_3949052223839208120 of size 100 from /127.0.0.1
    [junit] 2008-01-21 13:52:29,876 INFO  dfs.DataNode (DataNode.java:run(1944)) - PacketResponder 2 for block blk_3949052223839208120 terminating
    [junit] 2008-01-21 13:52:29,876 INFO  fs.DFSClient (DFSClient.java:run(1653)) - Closing old block blk_3949052223839208120
    [junit] 2008-01-21 13:52:29,877 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:54558 is added to blk_3949052223839208120 size 100
    [junit] 2008-01-21 13:52:30,753 INFO  dfs.DataNode (DataNode.java:readBlock(1051)) - 127.0.0.1:54583 Served block blk_3949052223839208120 to /127.0.0.1
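The `writeBlock` lines above ("Datanode N got response for connect ack ... with firstbadlink as", with an empty value) trace the write pipeline's connection handshake: each datanode contacts its downstream neighbour and the address of the first unreachable node, or an empty string on success, is relayed back upstream to the client. A minimal self-contained sketch of that ack logic, with illustrative names rather than Hadoop's:

```java
import java.util.List;

// Sketch of the "firstbadlink" connect-ack idea from the DataNode
// writeBlock log lines: report the first unreachable pipeline node
// upstream; an empty string means the whole pipeline connected.
// Names are illustrative, not Hadoop's.
public class PipelineAck {
    public static String firstBadLink(List<String> nodes,
                                      List<Boolean> reachable) {
        // reachable.get(i) says whether pipeline node i could be contacted
        for (int i = 0; i < nodes.size(); i++) {
            if (!reachable.get(i)) {
                return nodes.get(i);   // relayed upstream to the client
            }
        }
        return "";                     // empty ack: pipeline is healthy
    }

    public static void main(String[] args) {
        List<String> pipeline =
            List.of("127.0.0.1:54558", "127.0.0.1:54566", "127.0.0.1:54583");
        // all three nodes reachable, as in the successful write above
        System.out.println(firstBadLink(pipeline,
                                        List.of(true, true, true)));
    }
}
```

In the log, every `firstbadlink` printed is empty, so the client proceeds to stream the 100-byte block through all three replicas.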
    [junit] 2008-01-21 13:52:31,611 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 9 on 54546, call mkdirs(/data/web2, rwxr-xr-x) from 127.0.0.1:54593: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:152)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:100)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:3934)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.mkdirsInternal(FSNamesystem.java:1541)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.mkdirs(FSNamesystem.java:1524)
    [junit] at org.apache.hadoop.dfs.NameNode.mkdirs(NameNode.java:413)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
    [junit] 2008-01-21 13:52:31,622 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 0 on 54546, call create(/data/file2, rwxr-xr-x, DFSClient_1744378956, true, 3, 67108864) from 127.0.0.1:54593: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:152)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:100)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:3934)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.startFileInternal(FSNamesystem.java:940)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:915)
    [junit] at org.apache.hadoop.dfs.NameNode.create(NameNode.java:273)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
    [junit] 2008-01-21 13:52:31,624 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 2 on 54546, call open(/data/file1, 0, 671088640) from 127.0.0.1:54593: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=READ, inode="file1":hudson:supergroup:rw-------
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=READ, inode="file1":hudson:supergroup:rw-------
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:106)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPathAccess(FSNamesystem.java:3924)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.getBlockLocations(FSNamesystem.java:732)
    [junit] at org.apache.hadoop.dfs.NameNode.getBlockLocations(NameNode.java:246)
    [junit] at org.apache.hadoop.dfs.NameNode.open(NameNode.java:233)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
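The AccessControlException entries above are the expected behavior under test: user `Who` is neither the inode owner (`hudson`) nor a member of `supergroup`, so the "other" mode-bit triple decides access — `rwxr-xr-x` denies WRITE and `rw-------` denies READ. As a minimal, self-contained sketch of that POSIX-style check (class and method names here are illustrative, not Hadoop's actual `PermissionChecker` API):

```java
// Illustrative sketch of a POSIX-style mode check like the one the
// logged PermissionChecker performs; names are hypothetical, not Hadoop API.
public class ModeCheckSketch {
    static final int READ = 4, WRITE = 2, EXECUTE = 1;

    // mode is a 9-bit octal value such as 0755 (rwxr-xr-x) or 0600 (rw-------)
    static boolean permitted(String user, String owner, boolean inGroup,
                             int mode, int access) {
        int bits;
        if (user.equals(owner)) {
            bits = (mode >> 6) & 7;      // owner triple
        } else if (inGroup) {
            bits = (mode >> 3) & 7;      // group triple
        } else {
            bits = mode & 7;             // other triple
        }
        return (bits & access) == access;
    }

    public static void main(String[] args) {
        // inode "data":hudson:supergroup:rwxr-xr-x, user=Who, access=WRITE
        System.out.println(permitted("Who", "hudson", false, 0755, WRITE));   // false: denied
        // inode "file1":hudson:supergroup:rw-------, user=Who, access=READ
        System.out.println(permitted("Who", "hudson", false, 0600, READ));    // false: denied
        // the owner itself may write under rwxr-xr-x
        System.out.println(permitted("hudson", "hudson", false, 0755, WRITE)); // true
    }
}
```

Both denials in the log match this reduction, which is why the test counts them as passes (Failures: 0) rather than errors.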
    [junit] 2008-01-21 13:52:32,377 INFO  dfs.DataBlockScanner (DataBlockScanner.java:verifyBlock(387)) - Verification succeeded for blk_3949052223839208120
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2008-01-21 13:52:32,498 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:54584
    [junit] 2008-01-21 13:52:32,498 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@2da5a6
    [junit] 2008-01-21 13:52:32,624 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-21 13:52:32,744 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-21 13:52:32,846 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-21 13:52:32,847 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@dac21
    [junit] 2008-01-21 13:52:32,848 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-21 13:52:32,850 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-21 13:52:32,850 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:54583:Finishing DataNode in: FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data5/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6/current'}
    [junit] Shutting down DataNode 1
    [junit] 2008-01-21 13:52:32,851 INFO  util.ThreadedServer (ThreadedServer.java:run(656)) - Stopping Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=54567]
    [junit] 2008-01-21 13:52:32,851 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:54567
    [junit] 2008-01-21 13:52:32,852 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@363068
    [junit] 2008-01-21 13:52:32,951 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-21 13:52:33,051 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-21 13:52:33,154 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-21 13:52:33,154 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@b05236
    [junit] 2008-01-21 13:52:33,155 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-21 13:52:33,157 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:54566:Finishing DataNode in: FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data3/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4/current'}
    [junit] Shutting down DataNode 0
    [junit] 2008-01-21 13:52:33,157 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-21 13:52:33,158 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:54559
    [junit] 2008-01-21 13:52:33,159 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@1972e3a
    [junit] 2008-01-21 13:52:33,258 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-21 13:52:33,351 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-21 13:52:33,440 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-21 13:52:33,441 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@15f4a7f
    [junit] 2008-01-21 13:52:33,442 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-21 13:52:33,746 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-21 13:52:34,446 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2008-01-21 13:52:34,447 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:54558:Finishing DataNode in: FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data1/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 2008-01-21 13:52:34,449 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:54547
    [junit] 2008-01-21 13:52:34,450 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@cbd8dc
    [junit] 2008-01-21 13:52:34,548 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-21 13:52:34,641 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-21 13:52:34,730 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-21 13:52:34,731 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@4b82d2
    [junit] 2008-01-21 13:52:34,731 INFO  fs.FSNamesystem (FSEditLog.java:printStatistics(772)) - Number of transactions: 6 Total time for transactions(ms): 0 Number of syncs: 4 SyncTimes(ms): 984 1091
    [junit] 2008-01-21 13:52:35,127 INFO  ipc.Server (Server.java:stop(999)) - Stopping server on 54546
    [junit] 2008-01-21 13:52:35,128 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 54546: exiting
    [junit] 2008-01-21 13:52:35,129 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 7 on 54546: exiting
    [junit] 2008-01-21 13:52:35,129 INFO  ipc.Server (Server.java:run(353)) - Stopping IPC Server listener on 54546
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 127.354 sec
    [junit] 2008-01-21 13:52:35,130 INFO  ipc.Server (Server.java:run(525)) - Stopping IPC Server Responder
    [junit] 2008-01-21 13:52:35,129 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 1 on 54546: exiting
    [junit] 2008-01-21 13:52:35,129 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 3 on 54546: exiting
    [junit] 2008-01-21 13:52:35,129 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 4 on 54546: exiting
    [junit] 2008-01-21 13:52:35,129 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 8 on 54546: exiting
    [junit] 2008-01-21 13:52:35,129 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 9 on 54546: exiting
    [junit] 2008-01-21 13:52:35,129 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 5 on 54546: exiting
    [junit] 2008-01-21 13:52:35,129 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 6 on 54546: exiting
    [junit] 2008-01-21 13:52:35,128 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 2 on 54546: exiting
    [junit] Running org.apache.hadoop.security.TestUnixUserGroupInformation
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.422 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.733 sec

BUILD FAILED
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build.xml:532: Tests failed!

Total time: 155 minutes 56 seconds
Recording fingerprints
Publishing Javadoc
Recording test results


Build failed in Hudson: Hadoop-Nightly #373

hudson-6
See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/373/changes

Changes:

[stack] HADOOP-2650 Remove Writables.clone and use WritableUtils.clone from hadoop instead
        HADOOP-2584 Web UI displays an IOException instead of the Tables

------------------------------------------
[...truncated 68520 lines...]
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:134)
    [junit] at org.apache.hadoop.dfs.MiniDFSCluster.<init>(MiniDFSCluster.java:106)
    [junit] at org.apache.hadoop.security.TestPermission.testFilePermision(TestPermission.java:111)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at junit.framework.TestCase.runTest(TestCase.java:154)
    [junit] at junit.framework.TestCase.runBare(TestCase.java:127)
    [junit] at junit.framework.TestResult$1.protect(TestResult.java:106)
    [junit] at junit.framework.TestResult.runProtected(TestResult.java:124)
    [junit] at junit.framework.TestResult.run(TestResult.java:109)
    [junit] at junit.framework.TestCase.run(TestCase.java:118)
    [junit] at junit.framework.TestSuite.runTest(TestSuite.java:208)
    [junit] at junit.framework.TestSuite.run(TestSuite.java:203)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:297)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:672)
    [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:567)
    [junit] 2008-01-22 14:15:49,088 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(321)) - fsOwner=hudson,hudson
    [junit] 2008-01-22 14:15:49,090 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(325)) - supergroup=supergroup
    [junit] 2008-01-22 14:15:49,090 INFO  fs.FSNamesystem (FSNamesystem.java:setConfigurationParameters(326)) - isPermissionEnabled=true
    [junit] 2008-01-22 14:15:49,097 INFO  fs.FSNamesystem (FSNamesystem.java:initialize(248)) - Finished loading FSImage in 37 msecs
    [junit] 2008-01-22 14:15:49,098 INFO  fs.FSNamesystem (FSNamesystem.java:leave(3554)) - Leaving safemode after 38 msecs
    [junit] 2008-01-22 14:15:49,098 INFO  dfs.StateChange (FSNamesystem.java:leave(3563)) - STATE* Network topology has 0 racks and 0 datanodes
    [junit] 2008-01-22 14:15:49,099 INFO  dfs.StateChange (FSNamesystem.java:leave(3566)) - STATE* UnderReplicatedBlocks has 0 blocks
    [junit] 2008-01-22 14:15:49,103 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-22 14:15:49,231 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@278e83
    [junit] 2008-01-22 14:15:49,233 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-22 14:15:49,234 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-22 14:15:49,235 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-22 14:15:49,236 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:51277
    [junit] 2008-01-22 14:15:49,237 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@1aacd5f
    [junit] 2008-01-22 14:15:49,237 INFO  fs.FSNamesystem (FSNamesystem.java:initialize(287)) - Web-server up at: 0.0.0.0:51277
    [junit] 2008-01-22 14:15:49,238 INFO  ipc.Server (Server.java:run(470)) - IPC Server Responder: starting
    [junit] 2008-01-22 14:15:49,239 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 1 on 51276: starting
    [junit] 2008-01-22 14:15:49,240 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 0 on 51276: starting
    [junit] 2008-01-22 14:15:49,241 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 3 on 51276: starting
    [junit] 2008-01-22 14:15:49,241 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 9 on 51276: starting
    [junit] 2008-01-22 14:15:49,241 INFO  ipc.Server (Server.java:run(317)) - IPC Server listener on 51276: starting
    [junit] 2008-01-22 14:15:49,241 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 8 on 51276: starting
    [junit] 2008-01-22 14:15:49,241 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 7 on 51276: starting
    [junit] 2008-01-22 14:15:49,241 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 6 on 51276: starting
    [junit] 2008-01-22 14:15:49,241 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 5 on 51276: starting
    [junit] 2008-01-22 14:15:49,240 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 4 on 51276: starting
    [junit] 2008-01-22 14:15:49,240 INFO  ipc.Server (Server.java:run(872)) - IPC Server handler 2 on 51276: starting
    [junit] Starting DataNode 0 with dfs.data.dir: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data1,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2 
    [junit] 2008-01-22 14:15:49,297 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-22 14:15:49,302 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data1  is not formatted.
    [junit] 2008-01-22 14:15:49,303 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-22 14:15:50,985 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data2  is not formatted.
    [junit] 2008-01-22 14:15:50,986 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-22 14:15:52,757 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 51281
    [junit] 2008-01-22 14:15:52,758 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-22 14:15:52,763 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-22 14:15:52,904 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@1c09624
    [junit] 2008-01-22 14:15:52,908 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-22 14:15:52,909 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-22 14:15:52,910 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-22 14:15:52,912 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:51283
    [junit] 2008-01-22 14:15:52,913 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@10980e7
    [junit] 2008-01-22 14:15:52,917 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:51281 storage DS-1994754611-140.211.11.75-51281-1201011352915
    [junit] 2008-01-22 14:15:52,917 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:51281
    [junit] 2008-01-22 14:15:53,206 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-1994754611-140.211.11.75-51281-1201011352915 is assigned to data-node 127.0.0.1:51281
    [junit] Starting DataNode 1 with dfs.data.dir: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data3,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4 
    [junit] 2008-01-22 14:15:53,208 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:51281In DataNode.run, data = FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data1/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 2008-01-22 14:15:53,210 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-22 14:15:53,211 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3318479msec Initial delay: 0msec
    [junit] 2008-01-22 14:15:53,258 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data3  is not formatted.
    [junit] 2008-01-22 14:15:53,259 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-22 14:15:53,695 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2008-01-22 14:15:54,599 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data4  is not formatted.
    [junit] 2008-01-22 14:15:54,600 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-22 14:15:56,447 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 51298
    [junit] 2008-01-22 14:15:56,448 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-22 14:15:56,475 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-22 14:15:56,554 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@76db09
    [junit] 2008-01-22 14:15:56,556 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-22 14:15:56,556 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-22 14:15:56,557 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-22 14:15:56,558 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:51299
    [junit] 2008-01-22 14:15:56,558 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@bfed5a
    [junit] 2008-01-22 14:15:56,560 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:51298 storage DS-1108117939-140.211.11.75-51298-1201011356559
    [junit] 2008-01-22 14:15:56,560 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:51298
    [junit] 2008-01-22 14:15:57,162 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-1108117939-140.211.11.75-51298-1201011356559 is assigned to data-node 127.0.0.1:51298
    [junit] 2008-01-22 14:15:57,163 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:51298In DataNode.run, data = FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data3/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4/current'}
    [junit] Starting DataNode 2 with dfs.data.dir: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data5,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6 
    [junit] 2008-01-22 14:15:57,164 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3592883msec Initial delay: 0msec
    [junit] 2008-01-22 14:15:57,164 INFO  jvm.JvmMetrics (JvmMetrics.java:init(51)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2008-01-22 14:15:57,208 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data5  is not formatted.
    [junit] 2008-01-22 14:15:57,209 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-22 14:15:57,751 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 2 msecs
    [junit] 2008-01-22 14:15:58,923 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(117)) - Storage directory http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data6  is not formatted.
    [junit] 2008-01-22 14:15:58,924 INFO  dfs.Storage (DataStorage.java:recoverTransitionRead(118)) - Formatting ...
    [junit] 2008-01-22 14:16:01,113 INFO  dfs.DataNode (DataNode.java:startDataNode(318)) - Opened server at 51308
    [junit] 2008-01-22 14:16:01,115 INFO  dfs.DataNode (DataNode.java:startDataNode(340)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2008-01-22 14:16:01,121 INFO  http.HttpServer (HttpServer.java:doStart(729)) - Version Jetty/5.1.4
    [junit] 2008-01-22 14:16:01,234 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.servlet.WebApplicationHandler@f42160
    [junit] 2008-01-22 14:16:01,403 INFO  util.Container (Container.java:start(74)) - Started WebApplicationContext[/,/]
    [junit] 2008-01-22 14:16:01,404 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/logs,/logs]
    [junit] 2008-01-22 14:16:01,404 INFO  util.Container (Container.java:start(74)) - Started HttpContext[/static,/static]
    [junit] 2008-01-22 14:16:01,406 INFO  http.SocketListener (SocketListener.java:start(204)) - Started SocketListener on 0.0.0.0:51311
    [junit] 2008-01-22 14:16:01,406 INFO  util.Container (Container.java:start(74)) - Started org.mortbay.jetty.Server@5d855f
    [junit] 2008-01-22 14:16:01,408 INFO  dfs.StateChange (FSNamesystem.java:registerDatanode(1850)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:51308 storage DS-2245781-140.211.11.75-51308-1201011361407
    [junit] 2008-01-22 14:16:01,409 INFO  net.NetworkTopology (NetworkTopology.java:add(320)) - Adding a new node: /default-rack/127.0.0.1:51308
    [junit] 2008-01-22 14:16:01,924 INFO  dfs.DataNode (DataNode.java:register(500)) - New storage id DS-2245781-140.211.11.75-51308-1201011361407 is assigned to data-node 127.0.0.1:51308
    [junit] 2008-01-22 14:16:01,925 INFO  dfs.DataNode (DataNode.java:run(2432)) - 127.0.0.1:51308In DataNode.run, data = FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/test/data/dfs/data/data5/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6/current'}
    [junit] 2008-01-22 14:16:01,969 INFO  dfs.DataNode (DataNode.java:offerService(622)) - using BLOCKREPORT_INTERVAL of 3493743msec Initial delay: 0msec
    [junit] 2008-01-22 14:16:02,211 INFO  dfs.DataNode (DataNode.java:offerService(701)) - BlockReport of 0 blocks got processed in 4 msecs
    [junit] 2008-01-22 14:16:03,533 INFO  fs.DFSClient (DFSClient.java:run(1592)) - Allocating new block
    [junit] 2008-01-22 14:16:03,536 INFO  dfs.StateChange (FSNamesystem.java:allocateBlock(1274)) - BLOCK* NameSystem.allocateBlock: /data/file1. blk_-126555033965283514
    [junit] 2008-01-22 14:16:03,539 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:51281
    [junit] 2008-01-22 14:16:03,539 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:51308
    [junit] 2008-01-22 14:16:03,540 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1982)) - pipeline = 127.0.0.1:51298
    [junit] 2008-01-22 14:16:03,540 INFO  fs.DFSClient (DFSClient.java:createBlockOutputStream(1985)) - Connecting to 127.0.0.1:51281
    [junit] 2008-01-22 14:16:03,543 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_-126555033965283514 from /127.0.0.1
    [junit] 2008-01-22 14:16:03,546 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_-126555033965283514 from /127.0.0.1
    [junit] 2008-01-22 14:16:03,549 INFO  dfs.DataNode (DataNode.java:writeBlock(1084)) - Receiving block blk_-126555033965283514 from /127.0.0.1
    [junit] 2008-01-22 14:16:03,550 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 0 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-22 14:16:03,551 INFO  dfs.DataNode (DataNode.java:writeBlock(1150)) - Datanode 1 got response for connect ack  from downstream datanode with firstbadlink as
    [junit] 2008-01-22 14:16:03,551 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 1 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-22 14:16:03,552 INFO  dfs.DataNode (DataNode.java:writeBlock(1150)) - Datanode 2 got response for connect ack  from downstream datanode with firstbadlink as
    [junit] 2008-01-22 14:16:03,553 INFO  dfs.DataNode (DataNode.java:writeBlock(1169)) - Datanode 2 forwarding connect ack to upstream firstbadlink is
    [junit] 2008-01-22 14:16:03,661 INFO  dfs.DataNode (DataNode.java:lastDataNodeRun(1802)) - Received block blk_-126555033965283514 of size 100 from /127.0.0.1
    [junit] 2008-01-22 14:16:03,661 INFO  dfs.DataNode (DataNode.java:lastDataNodeRun(1819)) - PacketResponder 0 for block blk_-126555033965283514 terminating
    [junit] 2008-01-22 14:16:03,665 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:51298 is added to blk_-126555033965283514 size 100
    [junit] 2008-01-22 14:16:03,778 INFO  dfs.DataNode (DataNode.java:run(1886)) - Received block blk_-126555033965283514 of size 100 from /127.0.0.1
    [junit] 2008-01-22 14:16:03,778 INFO  dfs.DataNode (DataNode.java:run(1944)) - PacketResponder 1 for block blk_-126555033965283514 terminating
    [junit] 2008-01-22 14:16:03,780 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:51308 is added to blk_-126555033965283514 size 100
    [junit] 2008-01-22 14:16:03,985 INFO  dfs.DataNode (DataNode.java:run(1886)) - Received block blk_-126555033965283514 of size 100 from /127.0.0.1
    [junit] 2008-01-22 14:16:03,986 INFO  dfs.DataNode (DataNode.java:run(1944)) - PacketResponder 2 for block blk_-126555033965283514 terminating
    [junit] 2008-01-22 14:16:03,986 INFO  fs.DFSClient (DFSClient.java:run(1653)) - Closing old block blk_-126555033965283514
    [junit] 2008-01-22 14:16:03,987 INFO  dfs.StateChange (FSNamesystem.java:addStoredBlock(2467)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:51281 is added to blk_-126555033965283514 size 100
    [junit] 2008-01-22 14:16:04,318 INFO  dfs.DataBlockScanner (DataBlockScanner.java:verifyBlock(387)) - Verification succeeded for blk_-126555033965283514
    [junit] 2008-01-22 14:16:04,802 INFO  dfs.DataNode (DataNode.java:readBlock(1051)) - 127.0.0.1:51298 Served block blk_-126555033965283514 to /127.0.0.1
    [junit] 2008-01-22 14:16:05,500 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 7 on 51276, call mkdirs(/data/web2, rwxr-xr-x) from 127.0.0.1:51323: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:152)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:100)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:3934)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.mkdirsInternal(FSNamesystem.java:1541)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.mkdirs(FSNamesystem.java:1524)
    [junit] at org.apache.hadoop.dfs.NameNode.mkdirs(NameNode.java:413)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
    [junit] 2008-01-22 14:16:05,522 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 6 on 51276, call create(/data/file2, rwxr-xr-x, DFSClient_1736588364, true, 3, 67108864) from 127.0.0.1:51323: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=WRITE, inode="data":hudson:supergroup:rwxr-xr-x
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:152)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:100)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:3934)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.startFileInternal(FSNamesystem.java:940)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:915)
    [junit] at org.apache.hadoop.dfs.NameNode.create(NameNode.java:273)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
    [junit] 2008-01-22 14:16:05,525 INFO  ipc.Server (Server.java:run(910)) - IPC Server handler 5 on 51276, call open(/data/file1, 0, 671088640) from 127.0.0.1:51323: error: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=READ, inode="file1":hudson:supergroup:rw-------
    [junit] org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=Who, access=READ, inode="file1":hudson:supergroup:rw-------
    [junit] at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:171)
    [junit] at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:106)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:3954)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.checkPathAccess(FSNamesystem.java:3924)
    [junit] at org.apache.hadoop.dfs.FSNamesystem.getBlockLocations(FSNamesystem.java:732)
    [junit] at org.apache.hadoop.dfs.NameNode.getBlockLocations(NameNode.java:246)
    [junit] at org.apache.hadoop.dfs.NameNode.open(NameNode.java:233)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] at java.lang.reflect.Method.invoke(Method.java:585)
    [junit] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:908)
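All three AccessControlException traces above come from the same POSIX-style mode check: user=Who is neither the inode owner (hudson) nor in its group (supergroup), so only the "other" permission bits apply — r-x on "data":rwxr-xr-x denies WRITE, and --- on "file1":rw------- denies READ. A minimal, self-contained sketch of that rwx check (an illustration only, not Hadoop's actual dfs.PermissionChecker implementation; all names here are hypothetical):

```java
// Minimal sketch of a POSIX-style rwx permission check. Illustrative only;
// not Hadoop's real PermissionChecker. Mode is a 9-bit value, e.g. 0755
// for rwxr-xr-x or 0600 for rw-------.
public class PermissionSketch {
    public static final int READ = 4, WRITE = 2, EXECUTE = 1;

    public static boolean check(String user, String[] groups,
                                String owner, String group,
                                int mode, int access) {
        int bits;
        if (user.equals(owner)) {
            bits = (mode >> 6) & 7;   // owner bits
        } else if (java.util.Arrays.asList(groups).contains(group)) {
            bits = (mode >> 3) & 7;   // group bits
        } else {
            bits = mode & 7;          // "other" bits
        }
        return (bits & access) == access;
    }

    public static void main(String[] args) {
        // "Who" is neither hudson nor in supergroup, so "other" bits apply.
        // WRITE on "data" (rwxr-xr-x): other bits r-x lack w.
        System.out.println(check("Who", new String[] {"users"},
                                 "hudson", "supergroup", 0755, WRITE)); // prints false
        // READ on "file1" (rw-------): other bits are empty.
        System.out.println(check("Who", new String[] {"users"},
                                 "hudson", "supergroup", 0600, READ));  // prints false
    }
}
```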
    [junit] 2008-01-22 14:16:06,002 INFO  dfs.DataBlockScanner (DataBlockScanner.java:verifyBlock(387)) - Verification succeeded for blk_-126555033965283514
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2008-01-22 14:16:06,494 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:51311
    [junit] 2008-01-22 14:16:06,496 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@f42160
    [junit] 2008-01-22 14:16:06,748 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-22 14:16:06,845 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-22 14:16:06,950 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-22 14:16:06,951 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@5d855f
    [junit] 2008-01-22 14:16:06,952 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-22 14:16:06,953 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-22 14:16:06,954 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:51308:Finishing DataNode in: FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data5/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data6/current'}
    [junit] Shutting down DataNode 1
    [junit] 2008-01-22 14:16:06,956 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:51299
    [junit] 2008-01-22 14:16:06,957 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@76db09
    [junit] 2008-01-22 14:16:07,057 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-22 14:16:07,155 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-22 14:16:07,261 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-22 14:16:07,262 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@bfed5a
    [junit] 2008-01-22 14:16:07,263 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-22 14:16:07,265 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-22 14:16:07,265 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:51298:Finishing DataNode in: FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data3/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data4/current'}
    [junit] Shutting down DataNode 0
    [junit] 2008-01-22 14:16:07,267 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:51283
    [junit] 2008-01-22 14:16:07,267 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@1c09624
    [junit] 2008-01-22 14:16:07,377 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-22 14:16:07,461 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-22 14:16:07,554 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-22 14:16:07,555 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@10980e7
    [junit] 2008-01-22 14:16:07,556 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2008-01-22 14:16:08,261 INFO  dfs.DataNode (DataNode.java:run(2463)) - 127.0.0.1:51281:Finishing DataNode in: FSDataset{dirpath='/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data1/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 2008-01-22 14:16:08,461 INFO  dfs.DataBlockScanner (DataBlockScanner.java:run(561)) - Exiting DataBlockScanner thread.
    [junit] 2008-01-22 14:16:08,561 INFO  dfs.DataNode (DataNode.java:shutdown(540)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2008-01-22 14:16:08,564 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:51277
    [junit] 2008-01-22 14:16:08,564 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@278e83
    [junit] 2008-01-22 14:16:08,565 INFO  util.ThreadedServer (ThreadedServer.java:run(656)) - Stopping Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=51277]
    [junit] 2008-01-22 14:16:08,659 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
    [junit] 2008-01-22 14:16:08,755 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
    [junit] 2008-01-22 14:16:08,848 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
    [junit] 2008-01-22 14:16:08,848 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@1aacd5f
    [junit] 2008-01-22 14:16:08,849 INFO  fs.FSNamesystem (FSEditLog.java:printStatistics(772)) - Number of transactions: 6 Total time for transactions(ms): 1 Number of syncs: 4 SyncTimes(ms): 621 665
    [junit] 2008-01-22 14:16:08,978 INFO  ipc.Server (Server.java:stop(999)) - Stopping server on 51276
    [junit] 2008-01-22 14:16:08,979 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 51276: exiting
    [junit] 2008-01-22 14:16:08,980 INFO  ipc.Server (Server.java:run(525)) - Stopping IPC Server Responder
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 56.21 sec
    [junit] 2008-01-22 14:16:08,980 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 1 on 51276: exiting
    [junit] 2008-01-22 14:16:08,980 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 8 on 51276: exiting
    [junit] 2008-01-22 14:16:08,982 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 7 on 51276: exiting
    [junit] 2008-01-22 14:16:08,982 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 2 on 51276: exiting
    [junit] 2008-01-22 14:16:08,980 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 4 on 51276: exiting
    [junit] 2008-01-22 14:16:08,980 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 5 on 51276: exiting
    [junit] 2008-01-22 14:16:08,980 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 3 on 51276: exiting
    [junit] 2008-01-22 14:16:08,979 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 6 on 51276: exiting
    [junit] 2008-01-22 14:16:08,979 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 9 on 51276: exiting
    [junit] Running org.apache.hadoop.security.TestUnixUserGroupInformation
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.423 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.684 sec

BUILD FAILED
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build.xml:532: Tests failed!

Total time: 150 minutes 11 seconds
Recording fingerprints
Publishing Javadoc
Recording test results
Updating HADOOP-2584
Updating HADOOP-2650


Build failed in Hudson: Hadoop-Nightly #374

hudson-6
See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/374/changes

Changes:

[jimk] HADOOP-2668 Documentation and improved logging so fact that hbase now requires migration comes as less of a surprise

[shv] HADOOP-2659. Introduce superuser permissions for admin operations. Contributed by Tsz Wo (Nicholas), SZE

[shv] HADOOP-2549. Correct detection of a full disk for data-nodes. Contributed by Hairong Kuang.

[dhruba] HADOOP-2649. The NameNode periodically computes replication work for
the datanodes. The periodicity of this computation is now configurable.
(dhruba)

[ddas] HADOOP-2284. Reduce the number of progress updates during the sorting in the map task. Contributed by Amar Kamat.

[ddas] HADOOP-2189. Incrementing user counters should count as progress. Contributed by Devaraj Das.

------------------------------------------
[...truncated 105513 lines...]
    [junit] d hregion_70236052/compaction.dir
    [junit] d hregion_70236052/compaction.dir/hregion_70236052
    [junit] d hregion_70236052/compaction.dir/hregion_70236052/info
    [junit] f hregion_70236052/compaction.dir/hregion_70236052/info/done size=0
    [junit] d hregion_70236052/compaction.dir/hregion_70236052/info/info
    [junit] d hregion_70236052/compaction.dir/hregion_70236052/info/mapfiles
    [junit] f hregion_70236052/compaction.dir/hregion_70236052/info/toreplace size=68
    [junit] d hregion_70236052/info
    [junit] d hregion_70236052/info/info
    [junit] f hregion_70236052/info/info/7214912435301412040 size=9
    [junit] d hregion_70236052/info/mapfiles
    [junit] d hregion_70236052/info/mapfiles/7214912435301412040
    [junit] f hregion_70236052/info/mapfiles/7214912435301412040/data size=332
    [junit] f hregion_70236052/info/mapfiles/7214912435301412040/index size=232
    [junit] 2008-01-23 15:36:07,950 INFO  [main] util.Migrate(132): Verifying that file system is available...
    [junit] 2008-01-23 15:36:07,952 INFO  [main] util.Migrate(138): Verifying that HBase is not running...
    [junit] 2008-01-23 15:36:07,954 INFO  [main] util.Migrate(149): Starting upgrade
    [junit] 2008-01-23 15:36:09,729 INFO  [main] hbase.HLog(232): new log writer created at hdfs://localhost:60539/hbase/log/hlog.dat.000
    [junit] 2008-01-23 15:36:09,935 DEBUG [main] hbase.HStore(697): starting 70236052/info (no reconstruction log)
    [junit] 2008-01-23 15:36:09,936 DEBUG [main] hbase.HStore(856): infodir: hdfs://localhost:60539/hbase/-ROOT-/70236052/info/info mapdir: hdfs://localhost:60539/hbase/-ROOT-/70236052/info/mapfiles
    [junit] 2008-01-23 15:36:10,063 DEBUG [main] hbase.HStore(723): maximum sequence id for hstore 70236052/info is 7
    [junit] 2008-01-23 15:36:10,432 WARN  [main] util.NativeCodeLoader(51): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    [junit] 2008-01-23 15:36:10,482 DEBUG [main] hbase.HRegion(298): Next sequence id for region -ROOT-,,0 is 8
    [junit] 2008-01-23 15:36:10,487 INFO  [main] hbase.HRegion(326): region -ROOT-,,0 available
    [junit] 2008-01-23 15:36:11,691 DEBUG [main] hbase.HStore(697): starting 1028785192/info (no reconstruction log)
    [junit] 2008-01-23 15:36:11,692 DEBUG [main] hbase.HStore(856): infodir: hdfs://localhost:60539/hbase/.META./1028785192/info/info mapdir: hdfs://localhost:60539/hbase/.META./1028785192/info/mapfiles
    [junit] 2008-01-23 15:36:11,702 DEBUG [main] hbase.HStore(723): maximum sequence id for hstore 1028785192/info is 35173
    [junit] 2008-01-23 15:36:11,713 DEBUG [main] hbase.HRegion(298): Next sequence id for region .META.,,1 is 35174
    [junit] 2008-01-23 15:36:11,716 INFO  [main] hbase.HRegion(326): region .META.,,1 available
    [junit] 2008-01-23 15:36:13,406 DEBUG [main] hbase.HRegion(405): compactions and cache flushes disabled for region .META.,,1
    [junit] 2008-01-23 15:36:13,409 DEBUG [main] hbase.HRegion(409): new updates and scanners for region .META.,,1 disabled
    [junit] 2008-01-23 15:36:13,409 DEBUG [main] hbase.HRegion(428): no more active scanners for region .META.,,1
    [junit] 2008-01-23 15:36:13,410 DEBUG [main] hbase.HRegion(434): no more row locks outstanding on region .META.,,1
    [junit] 2008-01-23 15:36:13,423 DEBUG [main] hbase.HStore(1047): closed 1028785192/info
    [junit] 2008-01-23 15:36:13,424 INFO  [main] hbase.HRegion(460): closed .META.,,1
    [junit] 2008-01-23 15:36:13,424 DEBUG [main] hbase.HRegion(405): compactions and cache flushes disabled for region -ROOT-,,0
    [junit] 2008-01-23 15:36:13,425 DEBUG [main] hbase.HRegion(409): new updates and scanners for region -ROOT-,,0 disabled
    [junit] 2008-01-23 15:36:13,426 DEBUG [main] hbase.HRegion(428): no more active scanners for region -ROOT-,,0
    [junit] 2008-01-23 15:36:13,426 DEBUG [main] hbase.HRegion(434): no more row locks outstanding on region -ROOT-,,0
    [junit] 2008-01-23 15:36:13,427 DEBUG [main] hbase.HStore(1047): closed 70236052/info
    [junit] 2008-01-23 15:36:13,428 INFO  [main] hbase.HRegion(460): closed -ROOT-,,0
    [junit] 2008-01-23 15:36:13,428 DEBUG [main] hbase.HLog(318): closing log writer in hdfs://localhost:60539/hbase/log
    [junit] 2008-01-23 15:36:14,442 INFO  [main] util.Migrate(197): Setting file system version.
    [junit] 2008-01-23 15:36:15,645 INFO  [main] util.Migrate(199): Upgrade successful.
    [junit] d -ROOT-
    [junit] d -ROOT-/70236052
    [junit] d -ROOT-/70236052/compaction.dir
    [junit] d -ROOT-/70236052/compaction.dir/70236052
    [junit] d -ROOT-/70236052/compaction.dir/70236052/info
    [junit] f -ROOT-/70236052/compaction.dir/70236052/info/done size=0
    [junit] d -ROOT-/70236052/compaction.dir/70236052/info/info
    [junit] d -ROOT-/70236052/compaction.dir/70236052/info/mapfiles
    [junit] f -ROOT-/70236052/compaction.dir/70236052/info/toreplace size=68
    [junit] d -ROOT-/70236052/info
    [junit] d -ROOT-/70236052/info/info
    [junit] f -ROOT-/70236052/info/info/7214912435301412040 size=9
    [junit] d -ROOT-/70236052/info/mapfiles
    [junit] d -ROOT-/70236052/info/mapfiles/7214912435301412040
    [junit] f -ROOT-/70236052/info/mapfiles/7214912435301412040/data size=332
    [junit] f -ROOT-/70236052/info/mapfiles/7214912435301412040/index size=232
    [junit] d .META.
    [junit] d .META./1028785192
    [junit] d .META./1028785192/compaction.dir
    [junit] d .META./1028785192/compaction.dir/1028785192
    [junit] d .META./1028785192/compaction.dir/1028785192/info
    [junit] f .META./1028785192/compaction.dir/1028785192/info/done size=0
    [junit] d .META./1028785192/compaction.dir/1028785192/info/info
    [junit] d .META./1028785192/compaction.dir/1028785192/info/mapfiles
    [junit] f .META./1028785192/compaction.dir/1028785192/info/toreplace size=72
    [junit] d .META./1028785192/info
    [junit] d .META./1028785192/info/info
    [junit] f .META./1028785192/info/info/5273171824992064091 size=9
    [junit] d .META./1028785192/info/mapfiles
    [junit] d .META./1028785192/info/mapfiles/5273171824992064091
    [junit] f .META./1028785192/info/mapfiles/5273171824992064091/data size=1710
    [junit] f .META./1028785192/info/mapfiles/5273171824992064091/index size=249
    [junit] d TestUpgrade
    [junit] d TestUpgrade/1396626490
    [junit] d TestUpgrade/1396626490/column_a
    [junit] d TestUpgrade/1396626490/column_a/info
    [junit] f TestUpgrade/1396626490/column_a/info/7048898707195909278 size=9
    [junit] d TestUpgrade/1396626490/column_a/mapfiles
    [junit] d TestUpgrade/1396626490/column_a/mapfiles/7048898707195909278
    [junit] f TestUpgrade/1396626490/column_a/mapfiles/7048898707195909278/data size=1685790
    [junit] f TestUpgrade/1396626490/column_a/mapfiles/7048898707195909278/index size=1578
    [junit] d TestUpgrade/1396626490/column_b
    [junit] d TestUpgrade/1396626490/column_b/info
    [junit] f TestUpgrade/1396626490/column_b/info/4973609345075242702 size=9
    [junit] d TestUpgrade/1396626490/column_b/mapfiles
    [junit] d TestUpgrade/1396626490/column_b/mapfiles/4973609345075242702
    [junit] f TestUpgrade/1396626490/column_b/mapfiles/4973609345075242702/data size=1685790
    [junit] f TestUpgrade/1396626490/column_b/mapfiles/4973609345075242702/index size=1582
    [junit] d TestUpgrade/1396626490/compaction.dir
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_a
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_a/done size=0
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_a/info
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_a/mapfiles
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_a/toreplace size=80
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_b
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_b/done size=0
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_b/info
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_b/mapfiles
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_b/toreplace size=80
    [junit] d TestUpgrade/1971203659
    [junit] d TestUpgrade/1971203659/column_a
    [junit] d TestUpgrade/1971203659/column_a/info
    [junit] f TestUpgrade/1971203659/column_a/info/3526482879590887371.1396626490 size=63
    [junit] d TestUpgrade/1971203659/column_a/mapfiles
    [junit] f TestUpgrade/1971203659/column_a/mapfiles/3526482879590887371.1396626490 size=0
    [junit] d TestUpgrade/1971203659/column_b
    [junit] d TestUpgrade/1971203659/column_b/info
    [junit] f TestUpgrade/1971203659/column_b/info/209479190043547321.1396626490 size=63
    [junit] d TestUpgrade/1971203659/column_b/mapfiles
    [junit] f TestUpgrade/1971203659/column_b/mapfiles/209479190043547321.1396626490 size=0
    [junit] d TestUpgrade/341377241
    [junit] d TestUpgrade/341377241/column_a
    [junit] d TestUpgrade/341377241/column_a/info
    [junit] f TestUpgrade/341377241/column_a/info/4514508232435632076.1396626490 size=63
    [junit] d TestUpgrade/341377241/column_a/mapfiles
    [junit] f TestUpgrade/341377241/column_a/mapfiles/4514508232435632076.1396626490 size=0
    [junit] d TestUpgrade/341377241/column_b
    [junit] d TestUpgrade/341377241/column_b/info
    [junit] f TestUpgrade/341377241/column_b/info/2547853154428391603.1396626490 size=63
    [junit] d TestUpgrade/341377241/column_b/mapfiles
    [junit] f TestUpgrade/341377241/column_b/mapfiles/2547853154428391603.1396626490 size=0
    [junit] f hbase.version size=5
    [junit] 2008-01-23 15:36:16,010 INFO  [main] util.Migrate(132): Verifying that file system is available...
    [junit] 2008-01-23 15:36:16,013 INFO  [main] util.Migrate(138): Verifying that HBase is not running...
    [junit] 2008-01-23 15:36:16,014 INFO  [main] util.Migrate(149): Starting upgrade
    [junit] 2008-01-23 15:36:16,018 INFO  [main] util.Migrate(197): Setting file system version.
    [junit] 2008-01-23 15:36:17,211 INFO  [main] util.Migrate(199): Upgrade successful.
    [junit] d -ROOT-
    [junit] d -ROOT-/70236052
    [junit] d -ROOT-/70236052/compaction.dir
    [junit] d -ROOT-/70236052/compaction.dir/70236052
    [junit] d -ROOT-/70236052/compaction.dir/70236052/info
    [junit] f -ROOT-/70236052/compaction.dir/70236052/info/done size=0
    [junit] d -ROOT-/70236052/compaction.dir/70236052/info/info
    [junit] d -ROOT-/70236052/compaction.dir/70236052/info/mapfiles
    [junit] f -ROOT-/70236052/compaction.dir/70236052/info/toreplace size=68
    [junit] d -ROOT-/70236052/info
    [junit] d -ROOT-/70236052/info/info
    [junit] f -ROOT-/70236052/info/info/7214912435301412040 size=9
    [junit] d -ROOT-/70236052/info/mapfiles
    [junit] d -ROOT-/70236052/info/mapfiles/7214912435301412040
    [junit] f -ROOT-/70236052/info/mapfiles/7214912435301412040/data size=332
    [junit] f -ROOT-/70236052/info/mapfiles/7214912435301412040/index size=232
    [junit] d .META.
    [junit] d .META./1028785192
    [junit] d .META./1028785192/compaction.dir
    [junit] d .META./1028785192/compaction.dir/1028785192
    [junit] d .META./1028785192/compaction.dir/1028785192/info
    [junit] f .META./1028785192/compaction.dir/1028785192/info/done size=0
    [junit] d .META./1028785192/compaction.dir/1028785192/info/info
    [junit] d .META./1028785192/compaction.dir/1028785192/info/mapfiles
    [junit] f .META./1028785192/compaction.dir/1028785192/info/toreplace size=72
    [junit] d .META./1028785192/info
    [junit] d .META./1028785192/info/info
    [junit] f .META./1028785192/info/info/5273171824992064091 size=9
    [junit] d .META./1028785192/info/mapfiles
    [junit] d .META./1028785192/info/mapfiles/5273171824992064091
    [junit] f .META./1028785192/info/mapfiles/5273171824992064091/data size=1710
    [junit] f .META./1028785192/info/mapfiles/5273171824992064091/index size=249
    [junit] d TestUpgrade
    [junit] d TestUpgrade/1396626490
    [junit] d TestUpgrade/1396626490/column_a
    [junit] d TestUpgrade/1396626490/column_a/info
    [junit] f TestUpgrade/1396626490/column_a/info/7048898707195909278 size=9
    [junit] d TestUpgrade/1396626490/column_a/mapfiles
    [junit] d TestUpgrade/1396626490/column_a/mapfiles/7048898707195909278
    [junit] f TestUpgrade/1396626490/column_a/mapfiles/7048898707195909278/data size=1685790
    [junit] f TestUpgrade/1396626490/column_a/mapfiles/7048898707195909278/index size=1578
    [junit] d TestUpgrade/1396626490/column_b
    [junit] d TestUpgrade/1396626490/column_b/info
    [junit] f TestUpgrade/1396626490/column_b/info/4973609345075242702 size=9
    [junit] d TestUpgrade/1396626490/column_b/mapfiles
    [junit] d TestUpgrade/1396626490/column_b/mapfiles/4973609345075242702
    [junit] f TestUpgrade/1396626490/column_b/mapfiles/4973609345075242702/data size=1685790
    [junit] f TestUpgrade/1396626490/column_b/mapfiles/4973609345075242702/index size=1582
    [junit] d TestUpgrade/1396626490/compaction.dir
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_a
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_a/done size=0
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_a/info
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_a/mapfiles
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_a/toreplace size=80
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_b
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_b/done size=0
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_b/info
    [junit] d TestUpgrade/1396626490/compaction.dir/1396626490/column_b/mapfiles
    [junit] f TestUpgrade/1396626490/compaction.dir/1396626490/column_b/toreplace size=80
    [junit] d TestUpgrade/1971203659
    [junit] d TestUpgrade/1971203659/column_a
    [junit] d TestUpgrade/1971203659/column_a/info
    [junit] f TestUpgrade/1971203659/column_a/info/3526482879590887371.1396626490 size=63
    [junit] d TestUpgrade/1971203659/column_a/mapfiles
    [junit] f TestUpgrade/1971203659/column_a/mapfiles/3526482879590887371.1396626490 size=0
    [junit] d TestUpgrade/1971203659/column_b
    [junit] d TestUpgrade/1971203659/column_b/info
    [junit] f TestUpgrade/1971203659/column_b/info/209479190043547321.1396626490 size=63
    [junit] d TestUpgrade/1971203659/column_b/mapfiles
    [junit] f TestUpgrade/1971203659/column_b/mapfiles/209479190043547321.1396626490 size=0
    [junit] d TestUpgrade/341377241
    [junit] d TestUpgrade/341377241/column_a
    [junit] d TestUpgrade/341377241/column_a/info
    [junit] f TestUpgrade/341377241/column_a/info/4514508232435632076.1396626490 size=63
    [junit] d TestUpgrade/341377241/column_a/mapfiles
    [junit] f TestUpgrade/341377241/column_a/mapfiles/4514508232435632076.1396626490 size=0
    [junit] d TestUpgrade/341377241/column_b
    [junit] d TestUpgrade/341377241/column_b/info
    [junit] f TestUpgrade/341377241/column_b/info/2547853154428391603.1396626490 size=63
    [junit] d TestUpgrade/341377241/column_b/mapfiles
    [junit] f TestUpgrade/341377241/column_b/mapfiles/2547853154428391603.1396626490 size=0
    [junit] f hbase.version size=5
    [junit] 2008-01-23 15:36:17,296 INFO  [main] util.Migrate(132): Verifying that file system is available...
    [junit] 2008-01-23 15:36:17,298 INFO  [main] util.Migrate(138): Verifying that HBase is not running...
    [junit] 2008-01-23 15:36:17,299 INFO  [main] util.Migrate(149): Starting upgrade check
    [junit] 2008-01-23 15:36:17,304 INFO  [main] util.Migrate(162): No upgrade necessary.
    [junit] 2008-01-23 15:36:17,306 INFO  [main] util.Migrate(132): Verifying that file system is available...
    [junit] 2008-01-23 15:36:17,308 INFO  [main] util.Migrate(138): Verifying that HBase is not running...
    [junit] 2008-01-23 15:36:17,308 INFO  [main] util.Migrate(149): Starting upgrade
    [junit] 2008-01-23 15:36:17,316 INFO  [main] util.Migrate(162): No upgrade necessary.
    [junit] 2008-01-23 15:36:17,317 INFO  [main] hbase.StaticTestEnvironment(135): Shutting down FileSystem
    [junit] 2008-01-23 15:36:17,787 INFO  [main] hbase.StaticTestEnvironment(142): Shutting down Mini DFS
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] Shutting down DataNode 0
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 75.373 sec
    [junit] Running org.onelab.test.TestFilter
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.065 sec

BUILD FAILED
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build.xml:536: The following error occurred while executing this line:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/src/contrib/build.xml:31: The following error occurred while executing this line:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/src/contrib/build-contrib.xml:206: Tests failed!

Total time: 206 minutes 41 seconds
Recording fingerprints
Publishing Javadoc
Recording test results
Updating HADOOP-2549
Updating HADOOP-2189
Updating HADOOP-2649
Updating HADOOP-2668
Updating HADOOP-2284
Updating HADOOP-2659
Updating HADOOP-2584
Updating HADOOP-2650