The Use of Artificial Intelligence for Fake News Detection
Session
Computer Science and Communication Engineering
Description
The phenomenon of fake news has become a global concern, directly influencing democratic processes, social perceptions, and public security. Detecting fake news represents a critical challenge for computer science, where Artificial Intelligence (AI) offers advanced solutions through Natural Language Processing (NLP) and text classification techniques. This paper analyzes and compares the main approaches used in this field, ranging from traditional machine learning models such as Naive Bayes and Support Vector Machines (SVM) to modern deep learning architectures like Long Short-Term Memory (LSTM), BERT, and GPT-based models. Furthermore, it discusses widely adopted datasets such as FakeNewsNet and LIAR, which serve as essential benchmarks for training and evaluating detection algorithms. The study highlights the strengths and limitations of existing approaches and emphasizes the need for more transparent and context-aware AI models that can better interpret semantic nuances and reduce algorithmic bias. Findings from recent literature suggest that combining semantic and contextual analysis provides the most promising results for accurate and reliable fake news identification, ultimately contributing to more trustworthy and explainable AI-based media ecosystems.
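To make the traditional baseline concrete, the sketch below shows a minimal TF-IDF plus linear SVM text classifier of the kind the abstract contrasts with deep learning architectures. The texts and labels are illustrative placeholders, not the LIAR or FakeNewsNet data discussed in the paper; a real evaluation would load one of those benchmarks instead.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus -- placeholder headlines, NOT the LIAR or
# FakeNewsNet datasets referenced in the abstract; 0 = real, 1 = fake.
texts = [
    "Central bank raises interest rates by a quarter point.",
    "Miracle cure doctors don't want you to know about.",
    "City council approves new public transport budget.",
    "Shocking proof the moon landing was staged, insider says.",
]
labels = [0, 1, 0, 1]

# TF-IDF maps each article to a sparse word/bigram vector; a linear SVM
# then learns a separating hyperplane -- the classic machine learning
# baseline that BERT- and GPT-based detectors are measured against.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

print(model.predict(["Experts stunned: pill melts fat overnight."]))
```

In the deep learning approaches the paper surveys, this pipeline would be replaced by a fine-tuned transformer (e.g., BERT), which captures the semantic and contextual nuances that bag-of-words features miss.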
Keywords:
Fake news, Artificial intelligence, Natural language processing, Text classification, BERT, FakeNewsNet
Proceedings Editor
Edmond Hajrizi
ISBN
978-9951-982-41-2
Location
UBT Kampus, Lipjan
Start Date
25-10-2025 9:00 AM
End Date
26-10-2025 6:00 PM
DOI
10.33107/ubt-ic.2025.85
Recommended Citation
Salihu, Altina and Berisha, Diellza, "The Use of Artificial Intelligence for Fake News Detection" (2025). UBT International Conference. 17.
https://knowledgecenter.ubt-uni.net/conference/2025UBTIC/CS/17
