Andrzej Bialecki updated SOLR-1535:
Patch updated to the latest trunk. This still uses the custom serialization format. Please weigh in with suggestions about how to proceed - I see the following options:
* keep the custom format as is (it's compact and easy to produce)
* use JSON instead (easy to produce, but more verbose, and binary values would have to be base64-encoded)
* use Avro (compact, backward- and forward-compatible, and self-describing, but it adds dependencies and is not that easy to construct by hand)
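For illustration, a JSON encoding along the lines of the second option might represent a token stream roughly like this (the field names here are hypothetical, not taken from the patch):

```json
{
  "str": "the quick fox",
  "tokens": [
    {"term": "quick", "start": 4, "end": 9, "posIncr": 2, "type": "word"},
    {"term": "fox", "start": 10, "end": 13, "posIncr": 1, "payload": "AQID"}
  ]
}
```

The payload value would carry base64-encoded bytes, which is where most of the extra verbosity relative to the custom format would come from.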
> Pre-analyzed field type
> Key: SOLR-1535
> URL: https://issues.apache.org/jira/browse/SOLR-1535
> Project: Solr
> Issue Type: New Feature
> Affects Versions: 1.5
> Reporter: Andrzej Bialecki
> Fix For: 4.0
> Attachments: SOLR-1535.patch, SOLR-1535.patch, preanalyzed.patch, preanalyzed.patch
> PreAnalyzedFieldType provides the ability to index (and optionally store) content that has already been processed and split into tokens by some external processing chain. This implementation defines a serialization format for sending tokens with any currently supported Attributes (e.g. type, posIncr, payload, ...). This data is deserialized into a regular TokenStream that is returned in Field.tokenStreamValue() and thus added to the index as index terms; optionally, a stored part is returned in Field.stringValue() and added as the stored value of the field.
> This field type is useful for integrating Solr with existing text-processing pipelines, such as third-party NLP systems.
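To make the deserialization step concrete, here is a minimal, self-contained sketch of parsing a serialized token stream into token objects. The pipe-delimited format and the class names are invented for illustration only; they are not the actual format used by the attached patch, and a real implementation would feed the parsed attributes into a Lucene TokenStream rather than a plain list.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: a tiny parser for a hypothetical custom token
// serialization of the form "term|startOffset|endOffset|posIncr", with
// tokens separated by whitespace.
public class PreAnalyzedSketch {
    static final class Token {
        final String term;
        final int start, end, posIncr;
        Token(String term, int start, int end, int posIncr) {
            this.term = term;
            this.start = start;
            this.end = end;
            this.posIncr = posIncr;
        }
    }

    // Deserialize a whitespace-separated list of pipe-delimited tokens.
    static List<Token> parse(String serialized) {
        List<Token> tokens = new ArrayList<>();
        for (String t : serialized.trim().split("\\s+")) {
            String[] f = t.split("\\|");
            tokens.add(new Token(f[0],
                                 Integer.parseInt(f[1]),
                                 Integer.parseInt(f[2]),
                                 Integer.parseInt(f[3])));
        }
        return tokens;
    }

    public static void main(String[] args) {
        List<Token> toks = parse("quick|0|5|1 brown|6|11|1");
        System.out.println(toks.size() + " " + toks.get(1).term);
    }
}
```

Whatever format is chosen, the parsing stays this mechanical; the trade-offs in the options above are mostly about verbosity, binary-value handling, and schema evolution rather than parsing complexity.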