We investigate the association between task difficulty and crowdworker performance.
We cooperate with the Institute of Community Medicine, University Medicine Greifswald.
- Assessing the difficulty of annotating medical data in crowdworking with help of experiments. PLOS ONE, (16)7:1-26, Public Library of Science, July 2021.
- Assessing the Difficulty of Labelling an Instance in Crowdworking. 2nd Workshop on Evaluation and Experimental Design in Data Mining and Machine Learning @ ECML PKDD 2020, 2020.
- Predicting worker disagreement for more effective crowd labeling. 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 179-188, 2018.
- How do annotators label short texts? Toward understanding the temporal dynamics of tweet labeling. Information Sciences, (457-458):29-47, 2018.
- A framework for validating the merit of properties that predict the influence of a Twitter user. Expert Systems with Applications, (42)5:2824-2834, 2015.
Experiments on human learning
Cooperation with Leibniz Institute of Neurobiology Magdeburg
Together with the Leibniz Institute of Neurobiology Magdeburg, we analyze experiments on human learning. We also cooperate within the study profile "Learning Systems" of the bachelor's degree program Informatik.
- Machine learning identifies the dynamics and influencing factors in an auditory category learning experiment. Scientific Reports, (10)1:1-12, Nature Publishing Group, 2020.