[jira] Created: (HADOOP-2090) [hbase] Inexplicable ArrayIndexOutOfBounds in BloomFilter appending data


[jira] Created: (HADOOP-2090) [hbase] Inexplicable ArrayIndexOutOfBounds in BloomFilter appending data

JIRA jira@apache.org
[hbase] Inexplicable ArrayIndexOutOfBounds in BloomFilter appending data
------------------------------------------------------------------------

                 Key: HADOOP-2090
                 URL: https://issues.apache.org/jira/browse/HADOOP-2090
             Project: Hadoop
          Issue Type: Bug
          Components: contrib/hbase
            Reporter: stack
            Priority: Minor


From the list this morning, via Josh Wills:

Date: Mon, 22 Oct 2007 12:04:01 -0500
From: "Josh Wills" <[hidden email]>
To: [hidden email]
Subject: Re: A basic question on HBase

...

> >
> > 2)  I was running one of these batch-style uploads last night on an
> > HTable that I configured w/BloomFilters on a couple of my column
> > families.  During one of the compaction operations, I got the
> > following exception--
> >
> > FATAL org.apache.hadoop.hbase.HRegionServer: Set stop flag in
> > regionserver/0:0:0:0:0:0:0:0:60020.splitOrCompactChecker
> > java.lang.ArrayIndexOutOfBoundsException
> >         at java.lang.System.arraycopy(Native Method)
> >         at sun.security.provider.DigestBase.engineUpdate(DigestBase.java:102)
> >         at sun.security.provider.SHA.implDigest(SHA.java:94)
> >         at sun.security.provider.DigestBase.engineDigest(DigestBase.java:161)
> >         at sun.security.provider.DigestBase.engineDigest(DigestBase.java:140)
> >         at java.security.MessageDigest$Delegate.engineDigest(MessageDigest.java:531)
> >         at java.security.MessageDigest.digest(MessageDigest.java:309)
> >         at org.onelab.filter.HashFunction.hash(HashFunction.java:125)
> >         at org.onelab.filter.BloomFilter.add(BloomFilter.java:99)
> >         at org.apache.hadoop.hbase.HStoreFile$BloomFilterMapFile$Writer.append(HStoreFile.java:895)
> >         at org.apache.hadoop.hbase.HStore.compact(HStore.java:899)
> >         at org.apache.hadoop.hbase.HStore.compact(HStore.java:728)
> >         at org.apache.hadoop.hbase.HStore.compactHelper(HStore.java:632)
> >         at org.apache.hadoop.hbase.HStore.compactHelper(HStore.java:564)
> >         at org.apache.hadoop.hbase.HStore.compact(HStore.java:559)
> >         at org.apache.hadoop.hbase.HRegion.compactStores(HRegion.java:717)
> >         at org.apache.hadoop.hbase.HRegionServer$SplitOrCompactChecker.checkForSplitsOrCompactions(HRegionServer.java:198)
> >         at org.apache.hadoop.hbase.HRegionServer$SplitOrCompactChecker.chore(HRegionServer.java:188)
> >         at org.apache.hadoop.hbase.Chore.run(Chore.java:58)
> >
> > Note that this wasn't the first compaction that was run (there were
> > others before it that ran successfully) and that the region hadn't
> > been split at this point.  I defined the BloomFilterType.BLOOMFILTER
> > on a couple of the columnfamilies, w/the largest one having ~100000
> > distinct entries.  I don't know which of these caused the failure, but
> > I noticed that 100000 is quite a bit larger than the # of entries used
> > in the testcases, so I'm wondering if that might be the problem.
...

Poking around, this could be a concurrency issue -- see http://forum.java.sun.com/thread.jspa?threadID=700440&messageID=4117706 -- but chatting with Jim, neither of us can figure out how, since only one thread should be running at compaction time.

The plan is to try to reproduce this on a local cluster.
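For context on the concurrency hypothesis: java.security.MessageDigest instances are not thread-safe, and two threads interleaving update()/digest() calls on a shared instance can corrupt its internal buffer offset, surfacing exactly this kind of ArrayIndexOutOfBoundsException inside System.arraycopy. A minimal sketch of the per-thread pattern that avoids sharing (the class and method names here are illustrative, not HBase code):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PerThreadDigest {
    // One MessageDigest per thread: the class keeps mutable buffering
    // state that is unsafe to share across threads.
    private static final ThreadLocal<MessageDigest> SHA =
        ThreadLocal.withInitial(() -> {
            try {
                return MessageDigest.getInstance("SHA-1");
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException(e);
            }
        });

    public static byte[] hash(byte[] key) {
        MessageDigest md = SHA.get();
        md.reset();  // clear any state left by a prior call on this thread
        return md.digest(key);
    }
}
```

If the onelab HashFunction were holding one digest per filter instance while two threads appended, this would explain the trace; it does not yet explain it if compaction is truly single-threaded.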

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-2090) [hbase] Inexplicable ArrayIndexOutOfBounds in BloomFilter appending data

JIRA jira@apache.org

     [ https://issues.apache.org/jira/browse/HADOOP-2090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HADOOP-2090:
--------------------------

    Description:
From the list this morning, via Josh Wills:

{code}
Date: Mon, 22 Oct 2007 12:04:01 -0500
From: "Josh Wills" ....
To: [hidden email]
Subject: Re: A basic question on HBase

...

> >
> > 2)  I was running one of these batch-style uploads last night on an
> > HTable that I configured w/BloomFilters on a couple of my column
> > families.  During one of the compaction operations, I got the
> > following exception--
> >
> > FATAL org.apache.hadoop.hbase.HRegionServer: Set stop flag in
> > regionserver/0:0:0:0:0:0:0:0:60020.splitOrCompactChecker
> > java.lang.ArrayIndexOutOfBoundsException
> >         at java.lang.System.arraycopy(Native Method)
> >         at sun.security.provider.DigestBase.engineUpdate(DigestBase.java:102)
> >         at sun.security.provider.SHA.implDigest(SHA.java:94)
> >         at sun.security.provider.DigestBase.engineDigest(DigestBase.java:161)
> >         at sun.security.provider.DigestBase.engineDigest(DigestBase.java:140)
> >         at java.security.MessageDigest$Delegate.engineDigest(MessageDigest.java:531)
> >         at java.security.MessageDigest.digest(MessageDigest.java:309)
> >         at org.onelab.filter.HashFunction.hash(HashFunction.java:125)
> >         at org.onelab.filter.BloomFilter.add(BloomFilter.java:99)
> >         at org.apache.hadoop.hbase.HStoreFile$BloomFilterMapFile$Writer.append(HStoreFile.java:895)
> >         at org.apache.hadoop.hbase.HStore.compact(HStore.java:899)
> >         at org.apache.hadoop.hbase.HStore.compact(HStore.java:728)
> >         at org.apache.hadoop.hbase.HStore.compactHelper(HStore.java:632)
> >         at org.apache.hadoop.hbase.HStore.compactHelper(HStore.java:564)
> >         at org.apache.hadoop.hbase.HStore.compact(HStore.java:559)
> >         at org.apache.hadoop.hbase.HRegion.compactStores(HRegion.java:717)
> >         at org.apache.hadoop.hbase.HRegionServer$SplitOrCompactChecker.checkForSplitsOrCompactions(HRegionServer.java:198)
> >         at org.apache.hadoop.hbase.HRegionServer$SplitOrCompactChecker.chore(HRegionServer.java:188)
> >         at org.apache.hadoop.hbase.Chore.run(Chore.java:58)
> >
> > Note that this wasn't the first compaction that was run (there were
> > others before it that ran successfully) and that the region hadn't
> > been split at this point.  I defined the BloomFilterType.BLOOMFILTER
> > on a couple of the columnfamilies, w/the largest one having ~100000
> > distinct entries.  I don't know which of these caused the failure, but
> > I noticed that 100000 is quite a bit larger than the # of entries used
> > in the testcases, so I'm wondering if that might be the problem.
...
{code}

Poking around, this could be a concurrency issue -- see http://forum.java.sun.com/thread.jspa?threadID=700440&messageID=4117706 -- but chatting with Jim, neither of us can figure out how, since only one thread should be running at compaction time.

The plan is to try to reproduce this on a local cluster.




[jira] Commented: (HADOOP-2090) [hbase] Inexplicable ArrayIndexOutOfBounds in BloomFilter appending data

JIRA jira@apache.org

    [ https://issues.apache.org/jira/browse/HADOOP-2090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12536849 ]

stack commented on HADOOP-2090:
-------------------------------

More from Josh:
{code}
I was using the BloomFilterDescriptor constructor defined on/around line 85 (takes a BloomFilterType and an int numberOfEntries), with BloomFilterType.BLOOMFILTER and numberOfEntries = 100000.
{code}
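The sizing concern is plausible: a Bloom filter's bit-vector length and hash count have to be derived from the expected entry count, and a filter tuned for test-scale inputs is driven far past its design point by 100000 keys. The standard sizing formulas (generic Bloom filter math, not the org.onelab.filter code) are m = -n*ln(p)/(ln 2)^2 bits and k = (m/n)*ln 2 hash functions:

```java
public class BloomSizing {
    /** Bits needed for n entries at target false-positive rate p. */
    static long bits(long n, double p) {
        return (long) Math.ceil(-n * Math.log(p) / (Math.log(2) * Math.log(2)));
    }

    /** Optimal number of hash functions for n entries in m bits. */
    static int hashes(long m, long n) {
        return Math.max(1, (int) Math.round((double) m / n * Math.log(2)));
    }

    public static void main(String[] args) {
        long n = 100_000;        // Josh's numberOfEntries
        long m = bits(n, 0.01);  // roughly 9.6 bits per key at 1% false positives
        int k = hashes(m, n);
        System.out.println(m + " bits, " + k + " hash functions");
    }
}
```

An undersized filter degrades accuracy rather than throwing, though, so sizing alone would not produce an ArrayIndexOutOfBoundsException unless an index computation overflows.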




[jira] Commented: (HADOOP-2090) [hbase] Inexplicable ArrayIndexOutOfBounds in BloomFilter appending data

JIRA jira@apache.org

    [ https://issues.apache.org/jira/browse/HADOOP-2090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12550434 ]

Edward Yoon commented on HADOOP-2090:
-------------------------------------

If you are OK with it, I'll take this one.




[jira] Commented: (HADOOP-2090) [hbase] Inexplicable ArrayIndexOutOfBounds in BloomFilter appending data

JIRA jira@apache.org

    [ https://issues.apache.org/jira/browse/HADOOP-2090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12550641 ]

Jim Kellerman commented on HADOOP-2090:
---------------------------------------

Edward,

If you can figure out why it is happening, go ahead and fix it.




[jira] Resolved: (HADOOP-2090) [hbase] Inexplicable ArrayIndexOutOfBounds in BloomFilter appending data

JIRA jira@apache.org

     [ https://issues.apache.org/jira/browse/HADOOP-2090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jim Kellerman resolved HADOOP-2090.
-----------------------------------

    Resolution: Won't Fix

Since we no longer use SHA as the hash function, closing this issue as Won't Fix. However, there is another ArrayIndexOutOfBoundsException being tracked in HADOOP-2414.
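For reference on why dropping SHA removes the failure mode: a non-cryptographic hash is a pure function of its input, so there is no shared digest state left to corrupt under concurrent appends. A sketch in the style of the Jenkins one-at-a-time hash (illustrative only, not the exact function HBase adopted):

```java
public class OneAtATimeHash {
    /** Jenkins one-at-a-time hash: stateless, so trivially thread-safe. */
    static int hash(byte[] data) {
        int h = 0;
        for (byte b : data) {
            h += (b & 0xff);
            h += (h << 10);
            h ^= (h >>> 6);
        }
        h += (h << 3);
        h ^= (h >>> 11);
        h += (h << 15);
        return h;
    }

    /** Map a key to a bit position in an m-bit Bloom filter vector. */
    static int bitIndex(byte[] key, int m) {
        return (hash(key) >>> 1) % m;  // drop the sign bit before the modulo
    }
}
```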

