Search results

703 record(s) found

  • The CLASSLA-Stanza model for named entity recognition of standard Slovenian 2.2

    This model for named entity recognition of standard Slovenian was built with the CLASSLA-Stanza tool (https://github.com/clarinsi/classla) by training on the SUK training corpus (http://hdl.handle.net/11356/1959) and using the CLARIN.SI-embed.sl 2.0 word embeddings (http://hdl.handle.net/11356/1791). The difference from the previous version is that the model was trained on the SUK training corpus and uses the new embeddings.
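    A minimal usage sketch, assuming the classla Python package is installed (pip install classla) and that classla.download() fetches the current standard Slovenian models, of which this NER model is assumed to be part:

    ```python
    import classla

    # Assumption: the default download includes this NER model version.
    classla.download('sl')
    nlp = classla.Pipeline('sl', processors='tokenize,ner')
    doc = nlp("France Prešeren se je rodil v Vrbi.")
    for ent in doc.ents:
        print(ent.text, ent.type)  # named entity span and its label (e.g. PER, LOC)
    ```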
  • The CLASSLA-StanfordNLP model for lemmatisation of standard Serbian

    The model for lemmatisation of standard Serbian was built with the CLASSLA-StanfordNLP tool (https://github.com/clarinsi/classla-stanfordnlp) by training on the SETimes.SR training corpus (http://hdl.handle.net/11356/1200) and using the srLex inflectional lexicon (http://hdl.handle.net/11356/1233). The estimated F1 of the lemma annotations is ~97.9.
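    A similar sketch for lemmatisation, assuming the model is run through the current classla package (the successor of classla-stanfordnlp); the lemma processor requires the pos processor upstream:

    ```python
    import classla

    classla.download('sr')  # standard Serbian models
    nlp = classla.Pipeline('sr', processors='tokenize,pos,lemma')
    doc = nlp("Deca se igraju u parku.")
    for word in doc.sentences[0].words:
        print(word.text, word.lemma)  # surface form and predicted lemma
    ```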
  • Fine-tuned models for extractive question answering in the Slovenian language

    Six fine-tuned Transformer-based models that solve the downstream task of extractive question answering in Slovenian. The fine-tuned models included are: bert-base-cased-squad2-SLO, bert-base-multilingual-cased-squad2-SLO, electra-base-squad2-SLO, roberta-base-squad2-SLO, sloberta-squad2-SLO and xlm-roberta-base-squad2-SLO. The models were trained and evaluated using the Slovene translation of the SQuAD2.0 dataset (https://www.clarin.si/repository/xmlui/handle/11356/1756). The models achieve the following metric values: sloberta-squad2-SLO: EM=67.1, F1=73.56; xlm-roberta-base-squad2-SLO: EM=62.52, F1=69.51; bert-base-multilingual-cased-squad2-SLO: EM=61.37, F1=68.1; roberta-base-squad2-SLO: EM=58.23, F1=64.62; bert-base-cased-squad2-SLO: EM=55.12, F1=60.52; electra-base-squad2-SLO: EM=53.69, F1=60.85.
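    A loading sketch using the transformers question-answering pipeline; "path/to/sloberta-squad2-SLO" is a placeholder for the directory containing the unpacked model from this record, not a published model id:

    ```python
    from transformers import pipeline

    # Placeholder path: point this at the unpacked model directory.
    qa = pipeline("question-answering", model="path/to/sloberta-squad2-SLO")
    result = qa(
        question="Kdo je napisal Zdravljico?",
        context="Zdravljico je napisal France Prešeren leta 1844.",
    )
    print(result["answer"], result["score"])  # extracted span and confidence
    ```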
  • The CLASSLA-StanfordNLP model for lemmatisation of non-standard Serbian 1.0

    The model for lemmatisation of non-standard Serbian was built with the CLASSLA-StanfordNLP tool (https://github.com/clarinsi/classla-stanfordnlp) by training on the SETimes.SR training corpus (http://hdl.handle.net/11356/1200), the ReLDI-NormTagNER-sr corpus (http://hdl.handle.net/11356/1240), the ReLDI-NormTagNER-hr corpus (http://hdl.handle.net/11356/1241), the hr500k training corpus (http://hdl.handle.net/11356/1210) and the RAPUT corpus (https://www.aclweb.org/anthology/L16-1513/), using the srLex inflectional lexicon (http://hdl.handle.net/11356/1233). These corpora were additionally augmented for handling missing diacritics by repeating parts of the corpora with diacritics removed. The estimated F1 of the lemma annotations is ~97.62.
  • Korektor 2

    Korektor is a statistical spell-checker and (occasionally) grammar-checker, released under the 2-Clause BSD license (http://opensource.org/licenses/BSD-2-Clause). Korektor originated in Michal Richter's diploma thesis, Advanced Czech Spellchecker (https://redmine.ms.mff.cuni.cz/documents/1), but is being developed further. There are two versions: a command-line utility (tested on Linux, Windows and OS X) and a REST service with a publicly available API (http://lindat.mff.cuni.cz/services/korektor/api-reference.php) and an HTML front end (https://lindat.mff.cuni.cz/services/korektor/).
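    A hedged sketch of calling the REST service from Python; the method name (correct) and the data parameter are assumptions to be verified against the API reference linked above:

    ```python
    import requests

    # Assumed endpoint and parameter names; consult the API reference for the
    # authoritative method names, parameters and response format.
    resp = requests.get(
        "https://lindat.mff.cuni.cz/services/korektor/api/correct",
        params={"data": "Přílyš žluťoučky kůň"},  # text to spell-check
    )
    resp.raise_for_status()
    print(resp.json())  # expected to contain the corrected text
    ```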
  • The CLASSLA-StanfordNLP model for UD dependency parsing of standard Croatian

    The model for UD dependency parsing of standard Croatian was built with the CLASSLA-StanfordNLP tool (https://github.com/clarinsi/classla-stanfordnlp) by training on the UD-parsed portion of the hr500k training corpus (http://hdl.handle.net/11356/1183) and using the CLARIN.SI-embed.hr word embeddings (http://hdl.handle.net/11356/1205). The estimated LAS of the parser is ~85.9.
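    A usage sketch, again assuming the current classla package; the depparse processor depends on the pos and lemma processors:

    ```python
    import classla

    classla.download('hr')  # standard Croatian models
    nlp = classla.Pipeline('hr', processors='tokenize,pos,lemma,depparse')
    doc = nlp("Zagreb je glavni grad Hrvatske.")
    for word in doc.sentences[0].words:
        print(word.text, word.head, word.deprel)  # token, head index, UD relation
    ```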
  • Byte-Level Neural Error Correction Model for Icelandic - Yfirlestur (23.12)

    This Byte-Level Neural Error Correction Model for Icelandic is a fine-tuned byT5-base Transformer model for error correction in natural language. It acts as a machine translation model in that it “translates” from deficient Icelandic to correct Icelandic. The model is an improved version of a previous model, accessible here: http://hdl.handle.net/20.500.12537/255. The improved model is trained on contextual and domain-tagged data, with additional span-masking pre-training and a wider variety of text genres. The model is trained on span-masked data, parallel synthetic error data and real error data. The span-masking pre-training step consisted of 30 million training examples from a wide variety of texts, including forums and texts from the Icelandic Gigaword Corpus (IGC, http://hdl.handle.net/20.500.12537/254). The synthetic error data consisted of 8.5 million training examples taken from different texts, obtained e.g. from the IGC (data which was excluded from the span-masked data), MÍM (http://hdl.handle.net/20.500.12537/113), student essays and educational material; this data was scrambled to simulate real grammatical and typographical errors. The fine-tuning data consisted of data from the Icelandic Error Corpus (IceEC, http://hdl.handle.net/20.500.12537/73) and the three specialised error corpora (L2: http://hdl.handle.net/20.500.12537/131, dyslexia: http://hdl.handle.net/20.500.12537/132, child language: http://hdl.handle.net/20.500.12537/133). The model can correct a variety of textual errors, even in texts containing many errors, such as those written by people with dyslexia. Measured on the Grammatical Error Correction Test Set, the model scores 0.918975 on the GLEU metric (a BLEU variant adapted for grammatical error correction) and 0.06% in TER (translation error rate).
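    A minimal correction sketch with the transformers library; "path/to/byt5-yfirlestur" is a placeholder for the unpacked model from this record, and whether the model expects a task prefix is an assumption to check against the model's documentation:

    ```python
    from transformers import AutoTokenizer, T5ForConditionalGeneration

    # Placeholder path; download and unpack the model from this record first.
    path = "path/to/byt5-yfirlestur"
    tokenizer = AutoTokenizer.from_pretrained(path)
    model = T5ForConditionalGeneration.from_pretrained(path)

    inputs = tokenizer("Þesi setníng er røng.", return_tensors="pt")  # deliberately misspelled input
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # corrected text
    ```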
  • Pretrained models for recognising sex education concepts SemSEX 1.0

    Pretrained language models for detecting and classifying the presence of sex education concepts in Slovene curriculum documents. The models are PyTorch neural network models, intended for use with the HuggingFace transformers library (https://github.com/huggingface/transformers). The models are based on the Slovenian RoBERTa contextual embeddings model SloBERTa 2.0 (http://hdl.handle.net/11356/1397) and on the CroSloEngual BERT model (http://hdl.handle.net/11356/1330). The source code and example usage are available in the GitHub repository https://github.com/TimotejK/SemSex. The models and tokenizers can be loaded using the AutoModelForSequenceClassification.from_pretrained() and AutoTokenizer.from_pretrained() functions from the transformers library; an example of such usage is available at https://github.com/TimotejK/SemSex/blob/main/Concept%20detection/Classifiers/full_pipeline.py. The corpus on which these models were trained is available at http://hdl.handle.net/11356/1895.
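    A loading sketch following the functions named above; the model directory and the input sentence are placeholders:

    ```python
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    path = "path/to/semsex-model"  # placeholder: a downloaded SemSEX model directory
    tokenizer = AutoTokenizer.from_pretrained(path)
    model = AutoModelForSequenceClassification.from_pretrained(path)

    inputs = tokenizer("Besedilo iz učnega načrta.", return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    print(logits.argmax(dim=-1).item())  # predicted concept class id
    ```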
  • MUSCIMarker

    MUSCIMarker is an open-source tool for annotating visual objects and their relationships in binary images. It is implemented in Python, known to run on Windows, Linux and OS X, and supports working offline. MUSCIMarker is being used to create a dataset of musical notation symbols, but it can support any object set. The online user documentation is currently (12.2016) incomplete, as it is continually changing to reflect annotators' comments and incorporate new features. This version of the software is *not* the final one; it is under continuous development (we're currently working on adding grayscale image support with auto-binarization, and Android support for touch-based annotation). However, the current version (1.1) has already been used to annotate more than 100 pages of sheet music across all the major desktop OSes, and I believe it is already in a state where it can be useful beyond my immediate music notation data gathering use case.
  • The CLASSLA-StanfordNLP model for lemmatisation of standard Serbian 1.2

    The model for lemmatisation of standard Serbian was built with the CLASSLA-StanfordNLP tool (https://github.com/clarinsi/classla-stanfordnlp) by training on the SETimes.SR training corpus (http://hdl.handle.net/11356/1200) and using the srLex inflectional lexicon (http://hdl.handle.net/11356/1233). The estimated F1 of the lemma annotations is ~97.9. The difference from the previous version is that the model now relies solely on XPOS annotations, and not on a combination of UPOS, FEATS (lexicon lookup) and XPOS (lemma prediction) annotations.