[jira] Commented: (LUCENE-1799) Unicode compression


Markus Jelsma (Jira)

    [ https://issues.apache.org/jira/browse/LUCENE-1799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12893280#action_12893280 ]

Yonik Seeley commented on LUCENE-1799:
--------------------------------------

I took a stab at benchmarking just the encoding speed with some different languages.
I encoded one word at a time (which is what happens at indexing time).
I used text from Wikipedia in English, German, French, Spanish, and Chinese.
I used WhitespaceAnalyzer for the first four and StandardAnalyzer for Chinese (but analysis speed is not included in the measurements).

|encoding|english|german|french|spanish|chinese|
|UTF-8 size|1888|4550|4875|5123|4497|
|BOCU-1 size|1888|4610|4995|5249|4497|
|BOCU slowdown|29%|39%|47%|61%|80%|

I suspect that StandardAnalyzer is spitting out individual CJK chars, which would explain why BOCU-1 and UTF-8 come out the same size for Chinese.
I'll see if I can get SmartChineseAnalyzer working and re-run the Chinese test.
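For anyone wanting to reproduce the size side of this comparison, here is a minimal sketch that tallies per-token UTF-8 byte counts the same way (one word at a time). The token lists are made-up samples, not the Wikipedia text used above, and BOCU-1 is omitted since it needs ICU; this only shows the UTF-8 baseline:

```java
import java.nio.charset.StandardCharsets;

public class TokenByteSize {
    // Sum the UTF-8 encoded size of each token, mirroring per-word
    // encoding at indexing time. Token lists here are illustrative only.
    static int utf8Size(String[] tokens) {
        int total = 0;
        for (String t : tokens) {
            total += t.getBytes(StandardCharsets.UTF_8).length;
        }
        return total;
    }

    public static void main(String[] args) {
        String[] english = {"the", "quick", "brown", "fox"};
        String[] chinese = {"中", "文"}; // single CJK chars: 3 bytes each in UTF-8
        System.out.println(utf8Size(english)); // 16
        System.out.println(utf8Size(chinese)); // 6
    }
}
```

Since BOCU-1 encodes small deltas between consecutive code points in one byte, single isolated CJK tokens give it no run to compress, which is consistent with the identical Chinese sizes in the table.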


> Unicode compression
> -------------------
>
>                 Key: LUCENE-1799
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1799
>             Project: Lucene - Java
>          Issue Type: New Feature
>          Components: Store
>    Affects Versions: 2.4.1
>            Reporter: DM Smith
>            Priority: Minor
>         Attachments: LUCENE-1779.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799_big.patch
>
>
> In LUCENE-1793, there is the off-topic suggestion to provide compression of Unicode data. The motivation was a custom encoding in a Russian analyzer. The original supposition was that it provided a more compact index.
> This led to the comment that a different or compressed encoding would be a generally useful feature.
> BOCU-1 was suggested as a possibility. This is a patented algorithm by IBM with an implementation in ICU. If Lucene provides its own implementation, a freely available, royalty-free license would need to be obtained.
> SCSU is another Unicode compression algorithm that could be used.
> An advantage of these methods is that they work on the whole of Unicode. If that is not needed, an encoding such as ISO-8859-1 (or whatever covers the input) could be used.
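To illustrate the description's last point, when the input fits in a single-byte charset such as ISO-8859-1, that encoding is already more compact than UTF-8 without any compression scheme. A small sketch (the sample string is made up):

```java
import java.nio.charset.StandardCharsets;

public class EncodingSize {
    public static void main(String[] args) {
        // Each accented Latin-1 char costs 2 bytes in UTF-8
        // but only 1 byte in ISO-8859-1.
        String s = "déjà vu café"; // 12 chars, 3 of them accented
        int utf8 = s.getBytes(StandardCharsets.UTF_8).length;
        int latin1 = s.getBytes(StandardCharsets.ISO_8859_1).length;
        System.out.println(utf8);   // 15
        System.out.println(latin1); // 12
    }
}
```

The trade-off is exactly the one the description names: a single-byte charset cannot represent text outside its repertoire, whereas BOCU-1 and SCSU cover all of Unicode.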

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: [hidden email]
For additional commands, e-mail: [hidden email]