[jira] Created: (HADOOP-1134) Block level CRCs in HDFS

Block level CRCs in HDFS
------------------------

                 Key: HADOOP-1134
                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
             Project: Hadoop
          Issue Type: New Feature
          Components: dfs
            Reporter: Raghu Angadi
         Assigned To: Raghu Angadi



Currently CRCs are handled at the FileSystem level and are transparent to core HDFS. See the recent improvement HADOOP-928 (which can add checksums to a given filesystem) for more about it. Though this has served us well, there are a few disadvantages:

1) It doubles the namespace in HDFS (or other filesystem implementations). In many cases, it nearly doubles the number of blocks. Taking the namenode out of CRCs would nearly double namespace performance, both in terms of CPU and memory.

2) Since CRCs are transparent to HDFS, it cannot actively detect corrupted blocks. With block-level CRCs, the datanode could periodically verify checksums and report corruptions to the namenode so that new replicas can be created.

We propose to maintain CRCs for all HDFS data in much the same way as GFS does. I will update this jira with detailed requirements and a design. It will include the same guarantees provided by the current implementation, and an upgrade of current data.



 



[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482264 ]

Doug Cutting commented on HADOOP-1134:
--------------------------------------

Sub-block checksums are useful, as they permit efficient, checksummed random access without scanning the entire block.
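To make the random-access point concrete, here is a minimal sketch of the chunk arithmetic (the 64k chunk size, 8-byte header, and names are illustrative assumptions, not the actual HDFS layout): to serve a read at offset pos, only the enclosing chunk and its one CRC need to be fetched, not the whole block.

    // Illustrative only: locate the checksum chunk covering a read at 'pos',
    // assuming fixed-size chunks with one 4-byte CRC each.
    class ChunkMath {
        static final int BYTES_PER_CHECKSUM = 64 * 1024; // assumed chunk size
        static final int CRC_SIZE = 4;                   // CRC32 is 4 bytes
        static final int HEADER_SIZE = 8;                // assumed side-file header

        // First byte of the chunk that must be read to verify a read at 'pos'.
        static long chunkStart(long pos) {
            return (pos / BYTES_PER_CHECKSUM) * BYTES_PER_CHECKSUM;
        }

        // Offset of that chunk's CRC within the checksum side file.
        static long checksumOffset(long pos) {
            return HEADER_SIZE + (pos / BYTES_PER_CHECKSUM) * CRC_SIZE;
        }
    }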



[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482268 ]

Sameer Paranjpye commented on HADOOP-1134:
------------------------------------------

+1 on sub-block checksums. HDFS ought to be able to support small random reads; we could go to larger chunks than 512 bytes, but not by very much. We should seriously consider storing checksums inline with the block data. This makes the upgrade harder, but it lets us get data with just 1 seek vs. 2 if the checksums are stored in a separate file.
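For illustration, the logical-to-physical offset translation an inline layout implies, assuming a hypothetical [64KB data][4B CRC] repeating layout (this mapping is the cost weighed against the saved seek):

    // Hypothetical inline layout: [64KB data][4B crc][64KB data][4B crc]...
    // A logical offset (what the client asks for) must be translated into a
    // physical offset in the block file by skipping the CRCs of all
    // preceding chunks.
    class InlineOffsets {
        static final int CHUNK = 64 * 1024;
        static final int CRC = 4;

        static long physicalOffset(long logicalOffset) {
            long fullChunks = logicalOffset / CHUNK;
            return logicalOffset + fullChunks * CRC;
        }
    }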



[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482279 ]

Raghu Angadi commented on HADOOP-1134:
--------------------------------------


This is surely going to be sub-block, with a checksum for every 64k bytes or so. "Block level CRCs" is probably a misleading title; suggestions are welcome.

Inline CRCs sound good to me:

1) I am not sure of any disadvantages of storing them inline, since how blocks are stored is totally internal to datanodes.

2) It does save a seek if we are reading small chunks of data. In many cases we would be reading many megabytes serially, so the number of seeks saved may not be great, but every seek saved helps.

3) It also saves a file open and close for each block accessed.

4) It also matches how I was thinking of sending CRCs to clients and peers: inline on the same data connection instead of over a separate channel (see the sketch below).
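A rough sketch of that framing, purely illustrative (this is not the actual HDFS transfer protocol): each packet carries a chunk of data immediately followed by its CRC on the same connection.

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.zip.CRC32;

    // Illustrative framing: [length][chunk data][chunk CRC] per packet,
    // so checksums travel inline with the data instead of over a
    // separate channel.
    class InlineCrcWriter {
        static void writeChunk(DataOutputStream out, byte[] chunk, int len)
                throws IOException {
            CRC32 crc = new CRC32();
            crc.update(chunk, 0, len);
            out.writeInt(len);                   // chunk length
            out.write(chunk, 0, len);            // chunk data
            out.writeInt((int) crc.getValue());  // chunk CRC, inline
        }
    }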









[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482494 ]

Raghu Angadi commented on HADOOP-1134:
--------------------------------------

Regarding datanode upgrade: online or offline?

Offline: When datanodes restart with the new version, they essentially go offline for a couple of hours (for 100+ GB of data) and come back up with new, shiny blocks.
     Pros:
           1) Simpler code. The upgrade code does not need to be maintained in future versions; users with a very old dfs could upgrade iteratively.
           2) No change in block filenames or other maintenance is required.
     Cons:
           1) The cluster would be inaccessible for a couple of hours, which implies a lot of one-time work, especially for admins.
           2) Upgrading to future versions from current versions requires multiple upgrades.

Online: Datanodes would immediately start serving existing data after restart and upgrade the blocks in the background.
       Pros:
            1) No extra downtime.
      Cons:
            1) It increases the code, and this extra code would be spread around the datanode code.
            2) The code to handle mixed (old and new) block formats may not be removable anytime in the near future.

My (selfish) preference is to upgrade offline :). It does not seem unreasonable before the 1.0 release.
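For a sense of scale, the offline pass could be as simple as the following sketch (the ".meta" naming, chunk size, and absence of error handling are assumptions for illustration):

    import java.io.*;
    import java.util.zip.CRC32;

    // Hypothetical offline upgrade step: for each block file, compute one
    // CRC per 64KB chunk and write a side file next to it. Runs while the
    // datanode is down; no namenode involvement is needed.
    class OfflineCrcUpgrade {
        static final int CHUNK = 64 * 1024;

        static void generateChecksumFile(File blockFile) throws IOException {
            File crcFile = new File(blockFile.getPath() + ".meta"); // assumed name
            try (InputStream in = new BufferedInputStream(new FileInputStream(blockFile));
                 DataOutputStream out = new DataOutputStream(
                         new BufferedOutputStream(new FileOutputStream(crcFile)))) {
                byte[] buf = new byte[CHUNK];
                int n;
                while ((n = readChunk(in, buf)) > 0) {
                    CRC32 crc = new CRC32();
                    crc.update(buf, 0, n);
                    out.writeInt((int) crc.getValue()); // one CRC per chunk
                }
            }
        }

        // Fill 'buf' as fully as possible; returns bytes read (0 at EOF).
        static int readChunk(InputStream in, byte[] buf) throws IOException {
            int total = 0;
            while (total < buf.length) {
                int n = in.read(buf, total, buf.length - total);
                if (n < 0) break;
                total += n;
            }
            return total;
        }
    }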






[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482501 ]

Doug Cutting commented on HADOOP-1134:
--------------------------------------

I like the offline approach: smaller, simpler code should be more reliable and easier to maintain long-term, which outweighs a one-time downtime.

It'd be nice not to have to rewrite all block files, which argues for non-interleaved checksums.  Ideally we could even reuse existing checksum files, so that all blocks would not need to be read.  Then the upgrade would primarily consist of migrating checksum data from HDFS to local files on the datanode beside each block file.  That should be pretty fast.



[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482507 ]

Owen O'Malley commented on HADOOP-1134:
---------------------------------------

I think inline CRCs are too problematic. They would add a mapping between logical and physical offsets into the block that would touch a fair amount of code. If the side file is opened with a 4k buffer, it only takes 2 reads of the side file to handle the entire block (assuming a 4B CRC per 64KB and 128MB blocks). It is also much, much easier to handle the upgrade.
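Spelled out (assuming a 4-byte CRC per 64KB chunk and a 128MB block):

    128 MB / 64 KB = 2048 chunks
    2048 chunks x 4 B = 8 KB of CRC data per block
    ceil(8 KB / 4 KB buffer) = 2 reads of the side file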



[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482518 ]

Doug Cutting commented on HADOOP-1134:
--------------------------------------

I'd also vote for using a format like the existing CRC files, with a header that includes a version and bytes/crc.  That way we can switch algorithms to CRC64 or MD5, and also perhaps reuse existing CRCs, even if we change the default bytes/crc for new files.
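A sketch of such a header; the exact field layout here is an assumption, the point is that version, algorithm, and bytes/crc are self-describing:

    import java.io.DataOutputStream;
    import java.io.IOException;

    // Illustrative self-describing side-file header: a version field plus
    // algorithm id and bytes-per-checksum lets readers handle old files even
    // if the defaults change later (CRC64, MD5, different chunk sizes).
    class ChecksumHeader {
        static void write(DataOutputStream out) throws IOException {
            out.writeShort(1);        // header format version
            out.writeByte(1);         // checksum algorithm id: 1 = CRC32 (assumed)
            out.writeInt(64 * 1024);  // bytes per checksum
        }
    }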



[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482534 ]

Sameer Paranjpye commented on HADOOP-1134:
------------------------------------------

+1 for offline upgrades.

>> Owen O'Malley [20/Mar/07 12:59 PM]: I think inline CRCs are too problematic. They would add a mapping between logical and physical offsets into the block
>> that would touch a fair amount of code. If the side file is opened with a 4k buffer, it only takes 2 reads of the side file to handle the entire block
>> (assuming a 4B CRC per 64KB and 128MB blocks). It is also much, much easier to handle the upgrade.

It takes only 2 reads to handle the entire block, which is good. But it takes those same 2 reads to handle a tiny fraction of the block as well, which is where the downside appears. It's quite clear that doing inline checksums makes the upgrade process a lot harder. The question is whether taking the hit of a difficult upgrade and complicating the data access code is a reasonable price to pay for halving the number of seeks in the system for good. It feels like it is. Thoughts?










[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482536 ]

Raghu Angadi commented on HADOOP-1134:
--------------------------------------


I don't think using existing CRCs helps much. First, it would increase the upgrade code by quite a bit:
  The datanode needs to contact the namenode to fetch the filename for each block.
  Then it needs to get the blocks of the corresponding CRC file.
On top of this, during the upgrade most datanodes are down, and the namenode does not know where blocks are located.




[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482541 ]

Raghu Angadi commented on HADOOP-1134:
--------------------------------------

I do agree that a separate checksum file looks 'cleaner'. Also, when we combine the version upgrade feature (HADOOP-702) with an offline CRC upgrade, datanodes would need to be able to store each block twice if we want inline CRCs. That might be unacceptable in practice.

> It's quite clear that doing inline checksums makes the upgrade process a lot harder.

I am not sure inline CRCs increase upgrade complexity much. The upgrade would certainly take less time without them, but more like 1 hour instead of 2-3 hours, which is not a big issue.




[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482609 ]

Doug Cutting commented on HADOOP-1134:
--------------------------------------

Another thing to think about is reverting if the upgrade doesn't work.  If the upgrade purely adds new files next to the block files, then reverting is easy until you remove the old CRC files.  So removal of the old CRC files should probably be a separate step, performed only after the rest of the upgrade has been shown to be satisfactory.

> I don't think using existing CRCs helps much.

I suspect it would greatly speed up the upgrade.  Yes, the filesystem would need to be brought up in a read-only mode so that the old CRC files could be read.  But note that the old CRCs were computed on the client as the data was created (as new CRCs should be).  If a block has been corrupted, simply CRCing its data on the datanode would hide that.  So the old CRCs are what we want for correctness too.




[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482619 ]

Raghu Angadi commented on HADOOP-1134:
--------------------------------------

> Another thing to think about is reverting if the upgrade doesn't work.

The version upgrade feature lets us go back to the old state even if we removed the old CRC files. Taking advantage of it would require that we not modify block files (i.e. no inline CRCs; otherwise the datanode needs to be able to store two copies). As such, I hadn't planned on removing old CRC files as part of the upgrade. We could use an external script to delete all .crc files after the system has been running well.





[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482620 ]

Raghu Angadi commented on HADOOP-1134:
--------------------------------------

>> I don't think using existing CRCs helps much.
> I suspect it would greatly speed up the upgrade.

Do you mean the upgrade would not actually read the blocks if we use the old CRCs?




[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482839 ]

Sameer Paranjpye commented on HADOOP-1134:
------------------------------------------

If we did CRCs for 32k or 64k chunks, we could keep the CRCs in side files and cache them in RAM on the datanodes with only a small amount of overhead. If we did a CRC for every 64k, then a 128MB block would have a 4k CRC file. With as many as 3000 such blocks on a node (3TB), we'd only have 24MB of CRC data, which could easily be kept in RAM. This would let us work with side files and eliminate the extra seeks as well.



[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482843 ]

Raghu Angadi commented on HADOOP-1134:
--------------------------------------

Good idea. If required, we could limit the amount of memory used for this and treat it as an LRU cache.
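A minimal sketch of such a cache, using the standard LinkedHashMap access-order eviction (the Long block-id key and byte[] CRC array are assumptions):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal LRU cache for per-block CRC arrays, capped by entry count.
    // LinkedHashMap with accessOrder=true plus removeEldestEntry gives
    // least-recently-used eviction for free.
    class CrcCache extends LinkedHashMap<Long, byte[]> {
        private final int maxEntries;

        CrcCache(int maxEntries) {
            super(16, 0.75f, true);  // accessOrder = true
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
            return size() > maxEntries;
        }
    }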




[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482847 ]

Raghu Angadi commented on HADOOP-1134:
--------------------------------------

Correction: it is 64k of CRC memory per GB => 192 MB for 3 TB.
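Spelled out (assuming a 4-byte CRC per 64KB chunk):

    1 GB / 64 KB = 16384 chunks; 16384 x 4 B = 64 KB of CRC data per GB
    3 TB x 64 KB per GB = 192 MB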




[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482861 ]

Doug Cutting commented on HADOOP-1134:
--------------------------------------

> Do you mean the upgrade would not actually read the blocks if we use the old CRCs?

Yes.  And, as I noted above, strictly speaking this is required for correctness.  We cannot simply checksum the blocks, assuming they're not corrupt.  The existing checksum data was created on the client and is much more trustworthy.  So if we're going to re-compute checksums, we should first validate the data against the old checksums.  But it might be easier to simply re-use the existing checksums.

I'm all for moving to something like 64k bytes/checksum for new files (and old files, if they're validated), although we ought to benchmark the cost of transferring and checksumming 64k before we do this, since that cost is added to every seek.  We should verify that seek performance does not suffer significantly.  Note that the entire 64k chunk must be transferred to the client for checksumming, so the added cost per seek is not just computation and disk time, but network bandwidth too.



[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482872 ]

Raghu Angadi commented on HADOOP-1134:
--------------------------------------


Correctness is a big advantage of using the existing CRCs. Still, we could choose to recompute 64k checksums during the upgrade, which implies reading all of the blocks and the associated increase in downtime. Whether to recreate checksums or not depends on how far off 512-byte CRCs are from our (gu)estimated optimal value. Another option to gain more trust after the upgrade is to compare checksums across replicas and choose the majority. For the current design, we can assume we compare with the old CRCs.

If a client asks for 2k of data, we are saying we should send the whole 64k (or 128k) region where that 2k is located to the client. Why not send only the 2k and send a newly calculated CRC for it? The datanode would still verify all the 64k chunks involved in the read, but then recalculate. One argument against this is that it is a weaker guarantee than sending full chunks for verification. But the datanode would calculate the new CRC right next to where the on-disk CRC is verified, minimizing the window for other corruption. I feel this compromise is probably worth it, though I guess many will disagree. Requiring whole chunks would also increase overhead when we support appends in the future.
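A sketch of that read path (illustrative only, not actual HDFS code): the datanode verifies the stored CRC of the whole chunk, then computes a fresh CRC over just the requested sub-range right next to where the on-disk CRC was verified.

    import java.io.IOException;
    import java.util.zip.CRC32;

    // Verify the whole 64KB chunk against its stored CRC, then return a
    // freshly computed CRC for just the sub-range the client asked for.
    class SubRangeRead {
        static int readSubRange(byte[] chunk, int chunkLen, int storedCrc,
                                int off, int len, byte[] dest) throws IOException {
            CRC32 crc = new CRC32();
            crc.update(chunk, 0, chunkLen);
            if ((int) crc.getValue() != storedCrc) {
                throw new IOException("corrupt chunk"); // would be reported upstream
            }
            System.arraycopy(chunk, off, dest, 0, len);
            crc.reset();
            crc.update(dest, 0, len);
            return (int) crc.getValue(); // new CRC sent along with the 'len' bytes
        }
    }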

How do we benchmark for a good CRC-chunk size? It heavily depends on the workload. I will find out more about a typical MapReduce load.







[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS


    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482883 ]

Doug Cutting commented on HADOOP-1134:
--------------------------------------

> Why not send only the 2k and send a newly calculated CRC for it?

Perhaps that could work.  We'd still need to transfer 64k off disk and checksum it, but once it's validated we could re-checksum the 2k we send.  The re-checksumming should happen before validation completes, so that the re-checksummed data is guaranteed to be the data that was validated.  On the other hand, the simplicity of the end-to-end checksum makes it more certain that we've implemented things correctly and will properly detect corruptions.  +0

> How do we benchmark for a good CRC-chunk size?

Can we push that to a separate issue?  This issue is about removing checksum files from the HDFS namespace.  We can then optimize things in other ways later.

