
Robotics and Semantic Systems

Computer Science | LTH | Lund University



CS MSc Thesis Presentation 29 May 2024

Lecture

From: 2024-05-29 13:15 to 14:00
Place: E:4130 (Lucas)
Contact: birger [dot] swahn [at] cs [dot] lth [dot] se


One Computer Science MSc thesis to be presented on 29 May

On Wednesday, 29 May, there will be a Master's thesis presentation in Computer Science at Lund University, Faculty of Engineering.

The presentation will take place in E:4130 (Lucas).

Note to potential opponents: Register as an opponent for the presentation of your choice by sending an email to the examiner for that presentation (firstname.lastname@cs.lth.se). Do not forget to specify which presentation you are registering for! Note that the number of opponents may be limited (often to two), so you may have to choose another presentation if you register too late. Registrations are individual, just as the oppositions are! More instructions can be found on this page.


13:15-14:00 in E:4130 (Lucas). N.B.: No more opponents for this presentation.

Presenters: Amanda Axelsson, Allis Rodvaldr
Title: Building a Framework for Evaluation of Small Language Models: A Study of NLP Task Performances and Swedish Comprehension
Examiner: Elin A. Topp
Supervisors: Maj Stenmark (LTH), Henrik Tingström (Scholaro)

Today, computational resources are among the biggest limitations on the development and deployment of natural language processing (NLP) models in organizations. For Scholaro, a company that aims to build a scholarship-matching platform driven by NLP models, the extensive use of large language models (LLMs) is becoming difficult to justify financially. This thesis therefore explores whether small language models (SLMs) could offer a more cost-effective alternative to LLMs, and if so, to what extent. We study the effectiveness of five SLMs in both bilingual (English and Swedish) and task-specific applications, including classification, summarization, and translation. Furthermore, our study employs a framework that introduces a variety of human and automatic evaluation metrics suitable for measuring the performance of these SLMs. Our findings suggest that SLMs can, given the right combination of model, prompt, and task, serve as viable alternatives to LLMs, particularly in scenarios where resource efficiency and task specificity are vital.
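As an illustration of the kind of automatic evaluation metric such a framework might include (the abstract does not name specific metrics, so the choice of a ROUGE-1 overlap score for summarization here is an assumption, and the example texts are invented), a minimal self-contained sketch in Python:

from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """ROUGE-1 F-score: unigram overlap between a reference
    summary and a model-generated candidate summary."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    if not ref_counts or not cand_counts:
        return 0.0
    # Clipped overlap: each candidate token counts at most as
    # often as it appears in the reference.
    overlap = sum((ref_counts & cand_counts).values())
    recall = overlap / sum(ref_counts.values())
    precision = overlap / sum(cand_counts.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical usage: score one SLM summary against a human reference.
reference = "Scholarships are matched to students based on eligibility."
candidate = "The platform matches students to scholarships they are eligible for."
print(f"ROUGE-1 F: {rouge1_f(reference, candidate):.3f}")

In practice, n-gram scores like this are typically complemented by human judgments, consistent with the abstract's mention of both human and automatic metrics.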

Link to popular science summary: To be uploaded