
Optimal Size-Performance Tradeoffs: Weighing PoS Tagger Models

Improvements in machine learning-based NLP performance are often achieved with bigger models and more complex code. This presents a trade-off: better scores come at the cost of larger tools, and bigger models tend to require more during training and …

Set-to-Sequence Methods in Machine Learning: a Review

Machine learning that maps sets to sequential output is an important and ubiquitous task, with applications ranging from language modelling and meta-learning to multi-agent strategy games and power grid optimization. Combining elements of …

Power Consumption Variation over Activation Functions

The power that machine learning models consume when making predictions can be affected by a model's architecture. This paper presents various estimates of power consumption for a range of different activation functions, a core factor in neural …

The Danish Gigaword Project

Fake News Detection using Stance Classification: A Survey

This paper surveys recent academic work carried out within the field of stance classification and fake news detection. Echo chambers and the model organism problem are examples of challenges to acquiring high-quality data, due …

Simple Natural Language Processing Tools for Danish