
An experimental annotation task to investigate annotators’ subjectivity in a Misogyny dataset

  • Alice Tontodimamma
  • Stefano Anzani
  • Marco Antonio Stranisci
  • Valerio Basile
  • Elisa Ignazzi
  • Lara Fontanella

In recent years, hatred directed against women has spread exponentially, especially on online social media. Although this alarming phenomenon has given rise to many studies, both from the viewpoint of computational linguistics and from that of machine learning, less effort has been devoted to analysing whether models for the detection of misogyny are affected by bias. An emerging topic that challenges traditional approaches to corpus creation is the presence of social bias in natural language processing (NLP). Many NLP tasks are subjective, in the sense that a variety of valid beliefs exist about what the correct data labels should be; some tasks, for example misogyny detection, are highly subjective, as different people have very different views about what should or should not be labelled as misogynous. An increasing number of scholars have proposed strategies for assessing the subjectivity of annotators, in order to reduce bias both in computational resources and in NLP models. In this work, we present two corpora: a corpus of messages posted on Twitter after the liberation of Silvia Romano on 9 May 2020, and a corpus of comments built from Facebook posts containing misogyny, developed through an experimental annotation task, to explore annotators’ subjectivity. For a given comment, the annotation procedure consists in selecting one or more chunks of text regarded as misogynistic and establishing whether a gender stereotype is present. Each comment is annotated by at least three annotators in order to better analyse their subjectivity. The annotation process was carried out by trainees engaged in an internship programme. We propose a qualitative-quantitative analysis of the resulting corpus, which may include non-harmonised annotations.
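Not part of the chapter itself, but as an illustration of the disagreement analysis the abstract describes: with each comment labelled by at least three annotators, a common starting point is a chance-corrected agreement coefficient such as Fleiss’ kappa (see Fleiss et al. 1969 and Landis & Koch 1977 in the references below). A minimal self-contained sketch, using hypothetical binary labels rather than the actual corpus:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of items, each rated by the same
    number of annotators; each item is a list of category labels."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    categories = sorted({label for item in ratings for label in item})
    # count matrix: one row per item, one column per category
    counts = [[Counter(item)[c] for c in categories] for item in ratings]
    # per-item observed agreement P_i
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    P_bar = sum(P_i) / n_items
    # expected chance agreement from marginal category proportions
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters)
           for j in range(len(categories))]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# hypothetical labels: 1 = misogynous, 0 = not; three annotators per comment
annotations = [
    [1, 1, 1],
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 0],
]
print(round(fleiss_kappa(annotations), 3))  # → 0.333
```

On the Landis & Koch scale cited below, a value around 0.33 would indicate only “fair” agreement, which is exactly the kind of signal a subjectivity-oriented analysis would inspect item by item rather than aggregate away.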

  • Keywords: subjectivity, misogyny, disagreement, social bias

Alice Tontodimamma

University of Chieti-Pescara G. D'Annunzio, Italy

Stefano Anzani

University of Chieti-Pescara G. D'Annunzio, Italy - ORCID: 0009-0000-5408-0104

Marco Antonio Stranisci

University of Turin, Italy - ORCID: 0000-0001-9337-7250

Valerio Basile

University of Turin, Italy - ORCID: 0000-0001-8110-6832

Elisa Ignazzi

University of Chieti-Pescara G. D'Annunzio, Italy

Lara Fontanella

University of Chieti-Pescara G. D'Annunzio, Italy - ORCID: 0000-0002-5441-0035

  1. Basile, V. (2020). It’s the end of the gold standard as we know it: On the impact of pre-aggregation on the evaluation of highly subjective tasks. In 2020 AIxIA Discussion Papers Workshop, AIxIA 2020 DP (Vol. 2776, pp. 31-40). CEUR-WS.
  2. Basile, V., Fell, M., Fornaciari, T., Hovy, D., Paun, S., Plank, B., ... & Uma, A. (2021). We need to consider disagreement in evaluation. In 1st Workshop on Benchmarking: Past, Present and Future (pp. 15-21). Association for Computational Linguistics.
  3. Beigman Klebanov, B., Beigman, E., & Diermeier, D. (2008). Analyzing disagreements. In Coling 2008: Proceedings of the Workshop on Human Judgements in Computational Linguistics (pp. 2-7). Coling 2008 Organizing Committee.
  4. Bowman, S. R., & Dahl, G. E. (2021). What will it take to fix benchmarking in natural language understanding? arXiv preprint arXiv:2104.02145.
  5. Davani, A. M., Díaz, M., & Prabhakaran, V. (2022). Dealing with disagreements: Looking beyond the majority vote in subjective annotations. Transactions of the Association for Computational Linguistics, 10, 92-110.
  6. Fleiss, J. L., Cohen, J., & Everitt, B. S. (1969). Large sample standard errors of kappa and weighted kappa. Psychological Bulletin, 72(5), 323.
  7. Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159-174.
  8. Lehnert, W., Cardie, C., Fisher, D., McCarthy, J., Riloff, E., & Soderland, S. (1992). University of Massachusetts: MUC-4 test results and analysis. In Fourth Message Understanding Conference (MUC-4): Proceedings of a Conference Held in McLean, Virginia.
  9. Nozza, D., Volpetti, C., & Fersini, E. (2019, October). Unintended bias in misogyny detection. In IEEE/WIC/ACM International Conference on Web Intelligence (pp. 149-155).
  10. Pavlopoulos, J., Sorensen, J., Laugier, L., & Androutsopoulos, I. (2021, August). SemEval-2021 Task 5: Toxic spans detection. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021) (pp. 59-69).
  11. Uma, A., Fornaciari, T., Dumitrache, A., Miller, T., Chamberlain, J., Plank, B., ... & Poesio, M. (2021). SemEval-2021 Task 12: Learning with disagreements. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021) (pp. 338-3
  12. Tontodimamma, A., Fontanella, L., Anzani, S., & Basile, V. (2022). An Italian lexical resource for incivility detection in online discourses. Quality & Quantity. DOI: 10.1007/s11135-022-01494-7
PDF
  • Year of publication: 2023
  • Pages: 281-286

XML
  • Year of publication: 2023
Chapter information

Chapter title

An experimental annotation task to investigate annotators’ subjectivity in a Misogyny dataset

Authors

Alice Tontodimamma, Stefano Anzani, Marco Antonio Stranisci, Valerio Basile, Elisa Ignazzi, Lara Fontanella

Language

English

DOI

10.36253/979-12-215-0106-3.49

Peer-reviewed work

Year of publication

2023

Copyright

© 2023 Author(s)

License

CC BY 4.0

Metadata license

CC0 1.0

Bibliographic information

Book title

ASA 2022 Data-Driven Decision Making

Book subtitle

Book of short papers

Editors

Enrico di Bella, Luigi Fabbris, Corrado Lagazio

Peer-reviewed work

Year of publication

2023

Copyright

© 2023 Author(s)

License

CC BY 4.0

Metadata license

CC0 1.0

Publisher

Firenze University Press, Genova University Press

DOI

10.36253/979-12-215-0106-3

eISBN (pdf)

979-12-215-0106-3

eISBN (xml)

979-12-215-0107-0

Series

Proceedings e report

Series ISSN

2704-601X

Series e-ISSN

2704-5846
