map-reduce takes too long before/after fetching

AJ Chen-2
I'm using nutch 0.9-dev to crawl the web on one Linux server. With the
default hadoop configuration (local file system, no distributed crawling),
the Generator and Fetcher spend a disproportionate amount of time on
map-reduce operations. For example:

2006-11-01 20:32:44,074 INFO  crawl.Generator - Generator: segment:
crawl/segments/20061101203244
... (doing map and reduce for 2 hours)
2006-11-01 22:28:11,102 INFO  fetcher.Fetcher - Fetcher: segment:
crawl/segments/20061101203244
... (fetching for 12 hours)
2006-11-02 11:15:10,590 INFO  mapred.LocalJobRunner - 175383 pages, 16583
errors, 3.8 pages/s, 687 kb/s,
2006-11-02 11:17:24,039 INFO  mapred.LocalJobRunner - reduce > sort
... (but doing reduce > sort and reduce > reduce for 8 hours)
2006-11-02 19:13:38,882 INFO  crawl.CrawlDb - CrawlDb update: segment:
crawl/segments/20061101203244

Is there any configuration that can be set so that the time spent in
map-reduce can be reduced? I have to improve the crawl performance, and
would appreciate any suggestions on how to optimize Nutch running on a
single server.
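
To be concrete about what I mean by configuration: I assume the place to
override things is conf/hadoop-site.xml (on top of hadoop-default.xml), and
the snippet below shows the kind of sort-related properties I have in mind.
The property names come from hadoop-default.xml; the values are only
untested placeholders to illustrate the question:

  <!-- conf/hadoop-site.xml: untested placeholder values, just to show
       the kind of settings I am asking about -->
  <property>
    <name>io.sort.mb</name>
    <!-- buffer memory (MB) used while sorting map output; raising it
         should mean fewer on-disk spills during reduce > sort -->
    <value>200</value>
  </property>
  <property>
    <name>io.sort.factor</name>
    <!-- number of streams merged at once while sorting; a larger value
         means fewer merge passes at the cost of more open file handles -->
    <value>50</value>
  </property>

If these are not the right knobs for the reduce > sort phase, pointers to
the ones that do matter would be just as helpful.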

Thanks,
--
AJ Chen, PhD
http://web2express.org