Releases: explosion/spaCy
v1.5.0: Alpha support for Swedish and Hungarian
✨ Major features and improvements
- NEW: Alpha support for Swedish tokenization.
- NEW: Alpha support for Hungarian tokenization.
- Update language data for Spanish tokenization.
- Speed up tokenization when no data is preloaded by caching the first 10,000 vocabulary items seen.
🔴 Bug fixes
- List the `language_data` package in `setup.py`.
- Fix missing `vec_path` declaration that was failing if `add_vectors` was set.
- Allow `Vocab` to load without `serializer_freqs`.
📖 Documentation and examples
- NEW: spaCy Jupyter notebooks repo: ongoing collection of easy-to-run spaCy examples and tutorials.
- Fix issue #657: Generalise dependency parsing annotation specs beyond English.
- Fix various typos and inconsistencies.
👥 Contributors
Thanks to @oroszgy, @magnusburton, @jmizgajski, @aikramer2, @fnorf and @bhargavvader for the pull requests!
v1.4.0: Improved language data and alpha Dutch support
✨ Major features and improvements
- NEW: Alpha support for Dutch tokenization.
- Reorganise and improve format of language data.
- Add shared tag map, entity rules, emoticons and punctuation to language data.
- Convert entity rules, morphological rules and lemmatization rules from JSON to Python.
- Update language data for English, German, Spanish, French, Italian and Portuguese.
🔴 Bug fixes
- Fix issue #649: Update and reorganise stop lists.
- Fix issue #672: Make `token.ent_iob_` return unicode.
- Fix issue #674: Add missing lemmas for contracted forms of "be" to `TOKENIZER_EXCEPTIONS`.
- Fix issue #683: `Morphology` class now supplies tag map value for the special space tag if it's missing.
- Fix issue #684: Ensure `spacy.en.English()` loads the GloVe vector data if available. Previously this was inconsistent with the behaviour of `spacy.load('en')`.
- Fix issue #685: Expand `TOKENIZER_EXCEPTIONS` with unicode apostrophe (’).
- Fix issue #689: Correct typo in `STOP_WORDS`.
- Fix issue #691: Add tokenizer exceptions for "gonna" and "Gonna".
⚠️ Backwards incompatibilities
No changes to the public, documented API, but the previously undocumented language data and model initialisation processes have been refactored and reorganised. If you were relying on the `bin/init_model.py` script, see the new spaCy Developer Resources repo. Code that references internals of the `spacy.en` or `spacy.de` packages should also be reviewed before updating to this version.
📖 Documentation and examples
- NEW: "Adding languages" workflow.
- NEW: "Part-of-speech tagging" workflow.
- NEW: spaCy Developer Resources repo – scripts, tools and resources for developing spaCy.
- Fix various typos and inconsistencies.
👥 Contributors
Thanks to @dafnevk, @jvdzwaan, @RvanNieuwpoort, @wrvhage, @jaspb, @savvopoulos and @davedwards for the pull requests!
v1.3.0: Improve API consistency
✨ Major features and improvements
- Add `Span.sentiment` attribute.
- #658: Add `Span.noun_chunks` iterator (thanks @pokey).
- #642: Let `--data-path` be specified when running download.py scripts (thanks @ExplodingCabbage).
- #638: Add German stopwords (thanks @souravsingh).
- #614: Fix `PhraseMatcher` to work with new `Matcher` (thanks @sadovnychyi).
🔴 Bug fixes
- Fix issue #605: `accept` argument to `Matcher` now rejects matches as expected.
- Fix issue #617: `Vocab.load()` now works with string paths, as well as `Path` objects.
- Fix issue #639: Stop words in `Language` class now used as expected.
- Fix issues #656, #624: `Tokenizer` special-case rules now support arbitrary token attributes.
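The shape of such special-case rules can be sketched as follows. This is a hypothetical illustration of the idea: the `SPECIAL_CASES` table and the attribute names `orth` and `lemma` are stand-ins, not spaCy's actual rule format:

```python
# Hypothetical special-case table: each entry splits one string into
# predefined tokens, and each token dict may carry arbitrary attributes.
SPECIAL_CASES = {
    "gonna": [
        {"orth": "gon", "lemma": "go"},
        {"orth": "na", "lemma": "to"},
    ],
}

def split_special(word):
    """Return the token dicts for a special case, or the word unchanged."""
    return SPECIAL_CASES.get(word, [{"orth": word}])
```

Letting each token dict carry arbitrary keys is what makes the rules extensible: new attributes need no change to the lookup logic.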
📖 Documentation and examples
- Add "Customizing the tokenizer" workflow.
- Add "Training the tagger, parser and entity recognizer" workflow.
- Add "Entity recognition" workflow.
- Fix various typos and inconsistencies.
👥 Contributors
Thanks to @pokey, @ExplodingCabbage, @souravsingh, @sadovnychyi, @manojsakhwar, @TiagoMRodrigues, @savkov, @pspiegelhalter, @chenb67, @kylepjohnson, @YanhaoYang, @tjrileywisc, @dechov, @wjt, @jsmootiv and @blarghmatey for the pull requests!
v1.2.0: Alpha tokenizers for Chinese, French, Spanish, Italian and Portuguese
✨ Major features and improvements
- NEW: Support Chinese tokenization, via Jieba.
- NEW: Alpha support for French, Spanish, Italian and Portuguese tokenization.
🔴 Bug fixes
- Fix issue #376: POS tags for "and/or" are now correct.
- Fix issue #578: `--force` argument on download command now operates correctly.
- Fix issue #595: Lemmatization corrected for some base forms.
- Fix issue #588: `Matcher` now rejects empty patterns.
- Fix issue #592: Added exception rule for tokenization of "Ph.D."
- Fix issue #599: Empty documents now considered tagged and parsed.
- Fix issue #600: Add missing `token.tag` and `token.tag_` setters.
- Fix issue #596: Added missing unicode import when compiling regexes that led to incorrect tokenization.
- Fix issue #587: Resolved bug that caused `Matcher` to sometimes segfault.
- Fix issue #429: Ensure missing entity types are added to the entity recognizer.
v1.1.0: Bug fixes and adjustments
✨ Major features and improvements
- Rename new `pipeline` keyword argument of `spacy.load()` to `create_pipeline`.
- Rename new `vectors` keyword argument of `spacy.load()` to `add_vectors`.
🔴 Bug fixes
- Fix issue #544: Add `vocab.resize_vectors()` method, to support changing to vectors of different dimensionality.
- Fix issue #536: Default probability was incorrect for OOV words.
- Fix issue #539: Unspecified encoding when opening some JSON files.
- Fix issue #541: GloVe vectors were being loaded incorrectly.
- Fix issue #522: Similarities and vector norms were calculated incorrectly.
- Fix issue #461: `ent_iob` attribute was incorrect after setting entities via `doc.ents`.
- Fix issue #459: Deserialiser failed on empty doc.
- Fix issue #514: Serialization failed after adding a new entity label.
v1.0.0: Support for deep learning workflows and entity-aware rule matcher
✨ Major features and improvements
- NEW: custom processing pipelines, to support deep learning workflows
- NEW: Rule matcher now supports entity IDs and attributes
- NEW: Official/documented training APIs and `GoldParse` class
- Download and use GloVe vectors by default
- Make it easier to load and unload word vectors
- Improved rule matching functionality
- Move basic data into the code, rather than the JSON files. This makes it simpler to use the tokenizer without the models installed, and makes adding new languages much easier.
- Replace file-system strings with `Path` objects. You can now load resources over your network, or do similar trickery, by passing any object that supports the `Path` protocol.
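The custom-pipeline idea above can be sketched as a list of callables applied in order. This is a simplified illustration operating on plain strings; spaCy's real pipeline components operate on `Doc` objects, and the component names here are invented:

```python
def strip(text):
    # A pipeline component: takes a document, returns a document.
    return text.strip()

def lowercase(text):
    return text.lower()

def process(text, pipeline):
    """Run each component in order, feeding each output to the next."""
    for component in pipeline:
        text = component(text)
    return text

result = process("  Hello World  ", [strip, lowercase])
```

Because the pipeline is just an ordered list of functions, components (including custom deep learning models) can be inserted, replaced, or removed without touching the others.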
⚠️ Backwards incompatibilities
- The `data_dir` keyword argument of `Language.__init__` (and its subclasses `English.__init__` and `German.__init__`) has been renamed to `path`.
- Details of how the `Language` base class and its subclasses are loaded, and how defaults are accessed, have been heavily changed. If you have your own subclasses, you should review the changes.
- The deprecated `token.repvec` name has been removed.
- The `.train()` method of `Tagger` and `Parser` has been renamed to `.update()`.
- The previously undocumented `GoldParse` class has a new `__init__()` method. The old method has been preserved in `GoldParse.from_annot_tuples()`.
- Previously undocumented details of the `Parser` class have changed.
- The previously undocumented `get_package` and `get_package_by_name` helper functions have been moved into a new module, `spacy.deprecated`, in case you still need them while you update.
🔴 Bug fixes
- Fix `get_lang_class` bug when GloVe vectors are used.
- Fix issue #411: `doc.sents` raised IndexError on empty string.
- Fix issue #455: Correct lemmatization logic.
- Fix issue #371: Make `Lexeme` objects hashable.
- Fix issue #469: Make `noun_chunks` detect root NPs.
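Making an object hashable, as in the #371 fix, amounts to giving it consistent `__hash__` and `__eq__` methods. A minimal sketch of the pattern (illustrative only, not spaCy's actual `Lexeme` implementation; the `orth` attribute here is a stand-in for an integer ID):

```python
class Lexeme:
    """Toy lexeme, hashable by its orth ID so it can live in sets and dicts."""

    def __init__(self, orth):
        self.orth = orth

    def __eq__(self, other):
        return isinstance(other, Lexeme) and self.orth == other.orth

    def __hash__(self):
        # Objects that compare equal must hash equal.
        return hash(self.orth)

seen = {Lexeme(1), Lexeme(2), Lexeme(1)}
```

With both methods defined, duplicate lexemes collapse in a set and lexemes can be used as dictionary keys.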
👥 Contributors
Thanks to @daylen, @RahulKulhari, @stared, @adamhadani, @izeye and @crawfordcomeaux for the pull requests!