Robotics and Semantic Systems

Computer Science | LTH | Lund University



CS BSc Thesis Presentation 15 September 2023


From: 2023-09-15 14:15 to 15:00
Place: E:4130 (Lucas)
Contact: birger [dot] swahn [at] cs [dot] lth [dot] se

One Computer Science BSc thesis to be presented on 15 September

On Friday, 15 September, a bachelor's thesis in Computer Science will be presented at Lund University, Faculty of Engineering.

The presentation will take place in room E:4130 (Lucas).

Note to potential opponents: Register as an opponent to the presentation of your choice by sending an email to the examiner for that presentation. Do not forget to specify which presentation you are registering for! Note that the number of opponents may be limited (often to two), so you may have to choose another presentation if you register too late. Registrations are individual, just as the oppositions are. More instructions are found on this page.

14:15-15:00 in E:4130 (Lucas)

Presenter: Erik Kolterjahn Kjellberg
Title: Classifying Swedish Political Speeches by Party Affiliation using Large Language Models
Examiner: Pierre Nugues
Supervisor: Marcus Klang (LTH)

Analyzing political language and distinguishing between different narratives and opinions is a difficult task. Over the last decade, however, large language models and large amounts of data have made it possible to analyze the semantics of such texts computationally, giving a well-founded picture of general patterns within different narratives. In this report, political speeches in the Swedish parliament are classified by party affiliation, using models based on BERT language models combined with recurrent neural networks. The most advanced model tested, KB-BERT-HAN, combines Swedish BERT word embeddings with a Hierarchical Attention Network (HAN). This network can attend to different parts of a speech to different degrees, both at the word level and at the sentence level. Overall, KB-BERT-HAN performs vastly better than a baseline model based on tf-idf, achieving a macro F1 score of 68% compared to 38% for the baseline. Thanks to its hierarchical structure, the model can also be visualized through its attention weights at the word and sentence levels. Because of its deep and hierarchical structure, it can, beyond merely making predictions, provide useful information about how narratives belonging to different political affiliations relate to each other.
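To illustrate the hierarchical attention idea described in the abstract, the sketch below shows two-level attention pooling in plain Python: word vectors are pooled into sentence vectors, and sentence vectors into a document vector, with softmax weights at each level. This is only a toy illustration, not the thesis's implementation; in the actual system the word embeddings come from KB-BERT, the attention queries are learned during training, and recurrent layers sit between the two levels.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(vectors, query):
    """Attention pooling: score each vector against a query vector,
    softmax the scores, and return the weighted sum plus the weights."""
    scores = [sum(v_i * q_i for v_i, q_i in zip(v, query)) for v in vectors]
    weights = softmax(scores)
    pooled = [sum(w * v[d] for w, v in zip(weights, vectors))
              for d in range(len(query))]
    return pooled, weights

def han_encode(doc, word_query, sent_query):
    """Two-level (hierarchical) encoding: pool word embeddings into
    sentence vectors, then pool those into one document vector.
    Returns the attention weights, which is what makes the model
    inspectable at both levels."""
    sent_vecs, word_attn = [], []
    for sentence in doc:  # sentence: list of word-embedding vectors
        vec, w = attend(sentence, word_query)
        sent_vecs.append(vec)
        word_attn.append(w)
    doc_vec, sent_attn = attend(sent_vecs, sent_query)
    return doc_vec, word_attn, sent_attn

# Toy example: 2-dimensional "embeddings", a two-sentence "speech".
doc = [
    [[1.0, 0.0], [0.0, 1.0]],               # sentence 1: two words
    [[0.5, 0.5], [1.0, 1.0], [0.0, 0.0]],   # sentence 2: three words
]
doc_vec, word_attn, sent_attn = han_encode(doc, [1.0, 0.0], [0.0, 1.0])
```

The returned `word_attn` and `sent_attn` weights are exactly what the abstract refers to when it says the model can be visualized: they indicate which words and sentences the classifier attended to most.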

Link to popular science summary: To be uploaded