Problems Indexing


Amitabha Banerjee
I am a new user of Nutch. I tried to set everything up following the
official documentation.
I observe that URLs within the main domain that are linked from the main
page do not get indexed. For example, my start URL is
http://www.knowmydestination.com . Both
http://www.knowmydestination.com and
http://blogs.knowmydestination.com appear in the search results, but a
link such as
http://www.knowmydestination.com/?locationId=1&op=locationDetail does not
appear in the search results.

My crawl filter file looks like the following. All the other files are
unchanged.

[amitabha@thumper conf]$ cat crawl-urlfilter.txt
# The url filter file used by the crawl command.

# Better for intranet crawling.
# Be sure to change MY.DOMAIN.NAME to your domain name.

# Each non-comment, non-blank line contains a regular expression
# prefixed by '+' or '-'.  The first matching pattern in the file
# determines whether a URL is included or ignored.  If no pattern
# matches, the URL is ignored.

# skip file:, ftp:, & mailto: urls
-^(file|ftp|mailto):

# skip image and other suffixes we can't yet parse
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|sit|eps|wmf|zip|ppt|mpg|xls|gz|rpm|tgz|mov|MOV|exe|jpeg|JPEG|bmp|BMP)$

# skip URLs containing certain characters as probable queries, etc.
#-[?*!@=]

# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
#-.*(/.+?)/.*?\1/.*?\1/

# accept hosts in MY.DOMAIN.NAME
+^http://([a-z0-9]*\.)*knowmydestination.com/([a-z0-9A-Z?&!=/])*

# skip everything else
-.
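As a sanity check, I approximated the filter's first-match logic in a small Python script, using the `re` module as a stand-in for the Java regex engine Nutch actually uses (so this is only a rough check, not exactly what Nutch does):

```python
import re

# The patterns below are copied from my crawl-urlfilter.txt above.
# Nutch applies the first rule whose regex matches; '+' means accept,
# '-' means reject.
rules = [
    ('-', re.compile(r'^(file|ftp|mailto):')),
    ('-', re.compile(r'\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|sit|eps|wmf'
                     r'|zip|ppt|mpg|xls|gz|rpm|tgz|mov|MOV|exe|jpeg|JPEG'
                     r'|bmp|BMP)$')),
    ('+', re.compile(r'^http://([a-z0-9]*\.)*knowmydestination.com/'
                     r'([a-z0-9A-Z?&!=/])*')),
    ('-', re.compile(r'.')),
]

def verdict(url):
    # Return the sign of the first matching rule (approximating the
    # find() semantics of Nutch's RegexURLFilter).
    for sign, rx in rules:
        if rx.search(url):
            return sign
    return '-'

for url in ['http://www.knowmydestination.com/',
            'http://www.knowmydestination.com/?locationId=1&op=locationDetail']:
    print(url, verdict(url))
# both URLs come back '+' with these patterns
```

By this check both URLs should be accepted by the filter, which makes me think the filter file itself may not be the problem.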

I have tried changing the accept-hosts line to:

http://([a-z0-9]*\.)*knowmydestination.com/

but that did not help.

Any suggestions?

Thanks,
/Amitab