[jira] Updated: (LUCENE-2167) Implement StandardTokenizer with the UAX#29 Standard



Tim Allison (Jira)

     [ https://issues.apache.org/jira/browse/LUCENE-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steven Rowe updated LUCENE-2167:
--------------------------------

    Attachment: LUCENE-2167.patch

Attached patch includes a Perl script that generates a test from Unicode.org's WordBreakTest.txt UAX#29 test sequences, along with the Java source generated by the Perl script.  Both UAX29Tokenizer and StandardTokenizerImpl are tested, and all Lucene and Solr tests pass.  I added a note to modules/analyzer/NOTICE.txt about the Unicode.org data files used in creating the test class.
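The WordBreakTest.txt format marks each allowed word break with "÷" and each forbidden one with "×", with code points given in hex.  A minimal sketch of turning one such line into its expected tokens (the class and method names here are illustrative, not the patch's actual generator, which is a Perl script):

```java
import java.util.ArrayList;
import java.util.List;

public class WordBreakTestLine {
    // Parses one line of Unicode's WordBreakTest.txt into the expected
    // segments. "÷" (U+00F7) marks an allowed break, "×" (U+00D7) marks a
    // forbidden one; everything else on the line is a hex code point.
    static List<String> expectedTokens(String line) {
        int hash = line.indexOf('#');
        if (hash >= 0) line = line.substring(0, hash);  // strip trailing comment
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (String part : line.trim().split("\\s+")) {
            if (part.equals("\u00F7")) {              // break opportunity
                if (current.length() > 0) {
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            } else if (!part.equals("\u00D7")) {      // hex code point
                current.appendCodePoint(Integer.parseInt(part, 16));
            }
        }
        if (current.length() > 0) tokens.add(current.toString());
        return tokens;
    }

    public static void main(String[] args) {
        // ÷ a × b ÷ c ÷  =>  segments "ab" and "c"
        System.out.println(expectedTokens("\u00F7 0061 \u00D7 0062 \u00F7 0063 \u00F7"));
    }
}
```

Note that a tokenizer additionally discards segments consisting only of non-token characters (e.g. whitespace), so a generated test has to filter the raw segments before comparing.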

This test suite turned up a problem in both tested grammars: the WORD_TYPE rule could match zero characters, so in certain cases involving underscores it returned a zero-length token instead of signaling end-of-stream.  I fixed the issue by changing the rule in both grammars to require at least one character for a match to succeed.  All test sequences are now tokenized successfully.
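The pitfall has a simple regex analogue (a sketch only; the patterns below are stand-ins, not the actual JFlex rules from the patch):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ZeroLengthMatchDemo {
    public static void main(String[] args) {
        // When every sub-pattern of a rule is optional, the rule as a whole
        // can match the empty string.
        Pattern allOptional = Pattern.compile("[a-z]*(?:_[a-z]*)*");
        // Requiring at least one character makes a zero-length match impossible.
        Pattern atLeastOne = Pattern.compile("[a-z]+(?:_[a-z]+)*");

        Matcher m = allOptional.matcher("!");
        // Succeeds by matching zero characters at position 0 -- a scanner loop
        // driven by such a rule emits an empty token instead of advancing
        // toward end-of-stream.
        System.out.println(m.lookingAt() + " length=" + m.end());  // true length=0

        // The tightened rule simply fails to match, so the scanner moves on.
        System.out.println(atLeastOne.matcher("!").lookingAt());   // false
    }
}
```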

I also attempted to test ICUAnalyzer, but since it lowercases its output, the expected tokens are incorrect in some cases.  I didn't pursue it further.

I ran the best-of-25-rounds/20k docs benchmark, and the grammar change has not noticeably affected the results.


> Implement StandardTokenizer with the UAX#29 Standard
> ----------------------------------------------------
>
>                 Key: LUCENE-2167
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2167
>             Project: Lucene - Java
>          Issue Type: New Feature
>          Components: contrib/analyzers
>    Affects Versions: 3.1
>            Reporter: Shyamal Prasad
>            Assignee: Robert Muir
>            Priority: Minor
>         Attachments: LUCENE-2167-jflex-tld-macro-gen.patch, LUCENE-2167-jflex-tld-macro-gen.patch, LUCENE-2167-jflex-tld-macro-gen.patch, LUCENE-2167-lucene-buildhelper-maven-plugin.patch, LUCENE-2167.benchmark.patch, LUCENE-2167.benchmark.patch, LUCENE-2167.benchmark.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, standard.zip, StandardTokenizerImpl.jflex
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> It would be really nice for StandardTokenizer to adhere to the standard as closely as we can with JFlex. Then its name would actually make sense.
> Such a transition would involve renaming the old StandardTokenizer to EuropeanTokenizer, as its javadoc claims:
> bq. This should be a good tokenizer for most European-language documents
> The new StandardTokenizer could then say
> bq. This should be a good tokenizer for most languages.
> All the English/Euro-centric stuff like the acronym/company/apostrophe handling can stay with that EuropeanTokenizer, and it could be used by the European analyzers.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

