Active filters:

  • Metadata provider: DSpace
419 records found

Search results

  • DigiLing e-Learning Hub: e-Courses for Digital Linguistics

    The files represent exported e-learning resources created within the DigiLing project, www.digiling.eu. We have identified seven core subjects in Digital Linguistics and built seven corresponding courses:

      - Introduction to Text Processing and Analysis
      - Introduction to Python for Linguists
      - Computational Lexicology and Lexicography
      - Localization Tools and Workflows
      - Post-Editing Machine Translation
      - Mining and Managing Multilingual Terminology
      - Variability of Languages in Time and Space

    The data format is .mbz, a compressed archive compatible with any e-learning environment running Moodle.
  • ForFun 1.0

    ForFun is a database of linguistic forms and their syntactic functions, built using the multi-layer annotated corpora of Czech, the Prague Dependency Treebanks. The purpose of the Prague Database of Forms and Functions (ForFun) is to help linguists study the form-function relation, which we assume to be one of the principal tasks of both theoretical linguistics and natural language processing. Prototypical questions are "What purposes does the preposition 'po' serve?" and "What linguistic means in a sentence can express the meaning 'destination of an action'?". There are almost 1,500 distinct forms (besides the preposition 'po') and 65 distinct functions (besides 'destination').
  • Universal Dependencies 2.0 Models for UDPipe (2017-08-01)

    Tokenizer, POS tagger, lemmatizer and parser models for all 50 languages of the Universal Dependencies 2.0 treebanks, created solely using UD 2.0 data (http://hdl.handle.net/11234/1-1983). The model documentation, including performance figures, can be found at http://ufal.mff.cuni.cz/udpipe/users-manual#universal_dependencies_20_models . To use these models, you need the UDPipe binary, version 1.2 or later, which you can download from http://ufal.mff.cuni.cz/udpipe (see the UDPipe usage sketch after this list). In addition to the models themselves, all additional data and the hyperparameter values used for training are available in a second archive, allowing reproducible training.
  • The CLASSLA-StanfordNLP model for lemmatisation of standard Slovenian 1.2

    The model for lemmatisation of standard Slovenian was built with the CLASSLA-StanfordNLP tool (https://github.com/clarinsi/classla-stanfordnlp) by training on the ssj500k training corpus (http://hdl.handle.net/11356/1210) and using the Sloleks inflectional lexicon (http://hdl.handle.net/11356/1230). The estimated F1 of the lemma annotations is ~99.0. The difference from the previous version is that the model now relies solely on XPOS annotations, rather than on a combination of UPOS, FEATS (lexicon lookup) and XPOS (lemma prediction) annotations. (See the lemmatisation sketch after this list.)
  • Translation Models (en-de) (v1.0)

    En-De translation models, exported via TensorFlow Serving and available in the Lindat translation service (https://lindat.mff.cuni.cz/services/translation/); see the translation sketch after this list. The models are compatible with Tensor2tensor version 1.6.6. For details about the model training (data, model hyper-parameters), please contact the archive maintainer. Evaluation on newstest2020 (BLEU): en->de 25.9, de->en 33.4, evaluated using multeval (https://github.com/jhclark/multeval).
  • The CLASSLA-StanfordNLP model for morphosyntactic annotation of standard Croatian 1.1

    The model for morphosyntactic annotation of standard Croatian was built with the CLASSLA-StanfordNLP tool (https://github.com/clarinsi/classla-stanfordnlp) by training on the hr500k training corpus (http://hdl.handle.net/11356/1183) and using the CLARIN.SI-embed.hr word embeddings (http://hdl.handle.net/11356/1205). The model produces UPOS, FEATS and XPOS (MULTEXT-East) labels simultaneously. The estimated F1 of the XPOS annotations is ~94.1. The difference from the previous version of the model is that the whole XPOS tag is now predicted, rather than individual characters, as was the case in stanfordnlp, which resulted in illegal XPOS tags (and slightly decreased performance).
  • HaskPL

    HaskPL is a Polish phraseological database designed for language professionals, including linguists, language teachers, lexicographers, language materials developers and translators. Query results can be visualised and exported as spreadsheets. A complementary tool is HaskProof (http://pelcra.clarin-pl.eu:9894/#/lang/pl), which identifies potential collocations in any text submitted by the user.
  • Universal Dependencies 2.5 Models for UDPipe (2019-12-06)

    Tokenizer, POS tagger, lemmatizer and parser models for 94 treebanks of 61 languages of the Universal Dependencies 2.5 treebanks, created solely using UD 2.5 data (http://hdl.handle.net/11234/1-3105). The model documentation, including performance figures, can be found at http://ufal.mff.cuni.cz/udpipe/models#universal_dependencies_25_models . To use these models, you need the UDPipe binary, version 1.2 or later, which you can download from http://ufal.mff.cuni.cz/udpipe (the UDPipe usage sketch after this list applies to these models as well). In addition to the models themselves, all additional data and the hyperparameter values used for training are available in a second archive, allowing reproducible training.
  • Multi-speaker GlowTTS model for Talrómur 2 (prerelease) (22.10)

    This release includes a partially trained multi-speaker model using the GlowTTS architecture in the Coqui TTS library [1]. The model is trained on all of the speakers in the Talrómur 2 [2] corpus. The release includes the model, the training log, the model configuration file and the recipe used to train the model. The model included here is the best model available during training at the time of publication. At run time, any of the Talrómur 2 voices can be selected to produce a similar-sounding synthesized voice (see the synthesis sketch below). [1] https://github.com/cadia-lvl/coqui-ai-TTS/releases/tag/M9 [2] http://hdl.handle.net/20.500.12537/167
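
For the two UDPipe model records above (UD 2.0 and UD 2.5), the following is a minimal sketch of running a downloaded model over raw text with the ufal.udpipe Python bindings (version 1.2 or later); the model filename is illustrative and should be replaced with a file from the respective archive.

    from ufal.udpipe import Model, Pipeline, ProcessingError

    # Load a model file downloaded from the archive (filename is illustrative).
    model = Model.load("english-ud-2.0-170801.udpipe")
    if model is None:
        raise RuntimeError("Cannot load the UDPipe model")

    # Tokenize, tag and parse plain text, producing CoNLL-U output.
    pipeline = Pipeline(model, "tokenize", Pipeline.DEFAULT, Pipeline.DEFAULT, "conllu")
    error = ProcessingError()
    conllu = pipeline.process("UDPipe models are easy to use.", error)
    if error.occurred():
        raise RuntimeError(error.message)
    print(conllu)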
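
For the CLASSLA-StanfordNLP lemmatisation record for standard Slovenian, here is a sketch of obtaining lemmas with the classla Python package (the successor of classla-stanfordnlp); it assumes that the default downloadable Slovenian models correspond to the ones described in the record, which may differ from the exact archived version.

    import classla

    classla.download("sl")                   # fetch the standard Slovenian models
    nlp = classla.Pipeline("sl", processors="tokenize,pos,lemma")

    doc = nlp("To je preprost primer.")
    for word in doc.sentences[0].words:
        print(word.text, word.xpos, word.lemma)

An analogous pipeline (with "hr" in place of "sl" and processors="tokenize,pos") applies to the Croatian morphosyntactic annotation model listed above.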
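
For the En-De translation model record, the following is a hypothetical sketch of querying the Lindat translation service that hosts the models; the endpoint path and the input_text parameter name are assumptions and should be checked against the service documentation.

    import requests

    # Assumed endpoint and parameter name; verify against the service documentation.
    URL = "https://lindat.mff.cuni.cz/services/translation/api/v2/models/en-de"

    resp = requests.post(URL, data={"input_text": "Machine translation is useful."})
    resp.raise_for_status()
    print(resp.text)  # translated text; the exact response format depends on the service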
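
For the multi-speaker GlowTTS record for Talrómur 2, here is a sketch of synthesising speech with the Coqui TTS command-line tool; the checkpoint and configuration filenames and the speaker identifier are illustrative, and the flag names follow mainline Coqui TTS, so they may differ in the cadia-lvl fork referenced in [1].

    import subprocess

    # Synthesize one utterance with a chosen Talrómur 2 voice.
    # Filenames and the speaker id are illustrative placeholders.
    subprocess.run(
        [
            "tts",
            "--text", "Halló heimur.",
            "--model_path", "best_model.pth",   # checkpoint from this release
            "--config_path", "config.json",     # configuration file from this release
            "--speaker_idx", "speaker_01",      # hypothetical speaker id
            "--out_path", "output.wav",
        ],
        check=True,
    )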