Fr. 179.00

Transfer Learning for Harmful Content Detection

English · Hardcover

Usually ships within 6 to 7 weeks

Description

This book provides an in-depth exploration of the effectiveness of transfer learning approaches in detecting deceptive content (i.e., fake news) and inappropriate content (i.e., hate speech). The author first addresses the issue of insufficient labeled data by reusing knowledge gained from other natural language processing (NLP) tasks, such as language modeling. He then examines the connection between harmful content and emotional signals in text, integrating emotional cues into the classification models to evaluate their impact on performance. Additionally, since pre-processing plays an essential role in NLP tasks by enriching raw data, and is especially critical for tasks with limited data such as fake news detection, the book analyzes various pre-processing strategies in a transfer learning context to improve the detection of fake stories online. Optimal settings for transferring knowledge from pre-trained models across subtasks, including claim extraction and check-worthiness assessment, are also investigated. The findings indicate that incorporating these features into check-worthy claim models can improve overall performance, whereas integrating emotional signals did not significantly affect classifier results. Finally, the experiments highlight the importance of pre-processing for enhancing input text, particularly in social media contexts where content is often ambiguous and lacks context, leading to notable performance improvements.
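To make the transfer learning setup described above concrete, here is a minimal sketch of fine-tuning a pre-trained language model for binary harmful-content classification. It assumes the Hugging Face transformers library and an illustrative bert-base-uncased checkpoint; the model name, label scheme, and example texts are placeholders for illustration and are not taken from the book.

```python
# Minimal transfer-learning sketch: reuse a pre-trained encoder and
# fine-tune it on a small labeled set for harmful-content detection.
# All names and data below are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumed base model, not necessarily the one used in the book

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

texts = ["Breaking: miracle cure discovered overnight!",
         "The city council meets again on Tuesday."]
labels = torch.tensor([1, 0])  # toy labels: 1 = harmful/fake, 0 = benign

# Tokenize and run a single fine-tuning step; gradients flow back
# into the pre-trained encoder, transferring its language knowledge
# to the low-resource classification task.
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()
print(float(outputs.loss))
```

In practice this loop would be wrapped in a proper training schedule over a labeled dataset; the point of the sketch is only that the classifier starts from pre-trained weights rather than from scratch.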

Table of contents

Introduction
Background
The Impact of Pre-processing on Fake News Detection
Transfer Learning for Harmful Content Detection
Sentiment Analysis and Fake News Detection
Outlook
Conclusion

About the author

Salar Mohtaj is a Research Scientist at the German Research Center for Artificial Intelligence (DFKI) and a postdoctoral researcher in the Speech & Language Technology group. He completed his PhD at Technische Universität Berlin, focusing on fake news and hate speech detection, and holds a Master's degree in Information Technology from Tehran Polytechnic (Amirkabir University of Technology), specializing in natural language processing. Previously, he led the development of a Persian plagiarism detection system at the ICT Research Institute in Tehran. With over 40 publications in journals and conferences, Salar has contributed to a range of natural language processing tasks, publishing research and creating datasets for tasks ranging from plagiarism detection and German text readability assessment to fake news detection.

