Robotics and Semantic Systems

Computer Science | LTH | Lund University



Extra CS MSc Thesis Presentation Day June 2 2023


From: 2023-06-02 10:15 to 16:00
Place: See information for each presentation
Contact: birger [dot] swahn [at] cs [dot] lth [dot] se

Five MSc theses to be presented on Friday June 2, 2023

Friday June 2 is an extra day for coordinated master thesis presentations in Computer Science at Lund University, Faculty of Engineering. Five MSc theses will be presented.

You will find information about how to follow along under each presentation. There will be presentations in two different rooms: E:4130 (Lucas) and E:2405 (Glasburen). A preliminary schedule follows.

Please note that there will also be eight thesis presentations on Thursday June 1, schedule at:

Note to potential opponents: Register as an opponent to the presentation of your choice by sending an email to the examiner for that presentation. Do not forget to specify the presentation you register for! Note that the number of opponents may be limited (often to two), so you might be forced to choose another presentation if you register too late. Registrations are individual, just as the oppositions are! More instructions are found on this page.


E:4130 (Lucas)


10:15-11:00 in E:4130 LUCAS

Presenter: Nils Olén
Title: Enhancing Satellite Images Using Super-Resolution
Examiner: Jacek Malec
Supervisors: Michael Doggett (LTH), Jonas Bondesson (Tactel AB), Tobias Leksell (Tactel AB)

Super-resolution refers to upscaling images using neural networks. This project investigates the use of super-resolution to upscale satellite images for an interactive map application displayed in airplanes. The purpose is to save storage space by storing lower-quality satellite images and upscaling them, instead of storing the high-resolution originals. The research evaluates different super-resolution implementations, as well as a conventional bicubic image interpolation algorithm, to determine which works best in terms of image quality (PSNR and SSIM), upscaling speed and storage space saved. The findings show that some of the deeper network architectures produce good image quality but upscale too slowly, while the simpler networks are faster but give lower image quality, so a balance has to be struck between image quality and upscaling speed.
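PSNR, one of the quality metrics mentioned in the abstract, can be computed directly from pixel-wise errors. A minimal pure-Python sketch, using toy 2x2 grayscale "images" rather than actual satellite data (SSIM is omitted for brevity):

```python
import math

def psnr(original, upscaled, max_value=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel intensities. Higher is better;
    identical images give infinity."""
    mse = sum((o - u) ** 2 for o, u in zip(original, upscaled)) / len(original)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_value ** 2 / mse)

# Toy 2x2 grayscale images: the upscaled image is off by 5 at every pixel.
reference = [100, 150, 200, 250]
candidate = [105, 145, 205, 245]
print(round(psnr(reference, candidate), 2))  # → 34.15
```

In a benchmark like the one described, this score would be computed between the original high-resolution image and each upscaled candidate, and averaged over a test set.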

Link to popular science summary:

11:15-12:00 in E:4130 LUCAS and in Zoom (see link below)

Presenters: Martin Roback Lyrå, Marcus Nagy
Title: Intelligent Robotic Systems for Quality Control
Examiner: Jacek Malec
Supervisor: Volker Krueger (LTH)

This thesis investigates the implementation of a picking skill for intelligent robotic systems, intended for quality control. Picking an object is crucial for quality control yet non-trivial in robotics, requiring both sensing and actuation. Using ArUco markers, we established a baseline for accurate pose estimation.

The study emphasizes the necessity of a robust hand-eye calibration for precision in vision-based tasks, and implements a picking skill accordingly.

The proposed picking skill, which relies on adequately sized fiducial markers such as ArUco, demonstrated substantial efficacy and, with the suggested improvements, potential for enhanced reliability and versatility. It allowed successful picking of containers of various sizes and orientations. Most of the information required for the skill implementation was stored in the robot's knowledge base.
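The pose chain behind such a skill can be illustrated with homogeneous transforms: the hand-eye calibration supplies the camera pose in the gripper frame, and the marker detector supplies the marker pose in the camera frame. A minimal sketch with hypothetical, translation-only transforms (a real system would also include rotations and would obtain the marker pose from a detection library such as OpenCV's ArUco module):

```python
def matmul4(a, b):
    """Multiply two 4x4 homogeneous transforms (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """Pure translation as a 4x4 homogeneous transform."""
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

# Hypothetical transforms: chaining gripper pose (forward kinematics),
# camera pose (hand-eye calibration) and marker pose (ArUco detection)
# yields the marker pose in the robot base frame, where the pick is planned.
T_base_gripper = translation(0.5, 0.0, 0.3)
T_gripper_camera = translation(0.0, 0.05, 0.1)
T_camera_marker = translation(0.2, -0.05, 0.4)

T_base_marker = matmul4(matmul4(T_base_gripper, T_gripper_camera),
                        T_camera_marker)
# Marker position in the base frame:
print([round(row[3], 6) for row in T_base_marker[:3]])  # → [0.7, 0.0, 0.8]
```

An inaccurate hand-eye calibration corrupts the middle factor of this chain, which is why the thesis stresses its robustness for vision-based picking.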

Link to popular science summary: To be uploaded

Link to Zoom presentation:

13:15-14:00 in E:4130 LUCAS

Presenters: Yoonjoo Park, Joel Ehnroth
Title: Correction of Grammatical Errors in Swedish
Examiner: Jacek Malec
Supervisors: Pierre Nugues (LTH), Filip Bolling (NordAxon AB)

This thesis examines the use of transformer-based models for detecting and correcting grammatical errors in Swedish. We explore several pre-trained large language models, including both monolingual and multilingual variants, which we fine-tune on the task-specific dataset DaLAJ-GED. We evaluate the models' ability to classify sentences as correct or incorrect, identify errors, classify error types, and generate corrected sentences. The results demonstrate that transformer-based models outperform traditional rule-based methods on error detection and correction. In addition, we propose combining the limited human-annotated data with synthetic data to improve model performance on these tasks. Our work contributes to the field by showing the potential of transformer-based models and proposing several directions for the future development of these systems.
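Evaluating the binary correct/incorrect classification typically comes down to precision, recall and F1. A small illustrative sketch with made-up labels (not the DaLAJ-GED data or the thesis's actual scores):

```python
def detection_scores(gold, predicted):
    """Precision, recall and F1 for binary error detection, where True
    means 'sentence contains a grammatical error'."""
    tp = sum(g and p for g, p in zip(gold, predicted))
    fp = sum((not g) and p for g, p in zip(gold, predicted))
    fn = sum(g and (not p) for g, p in zip(gold, predicted))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy labels for six sentences: four contain errors; the model finds
# three of them and raises one false alarm.
gold      = [True, True, True, True, False, False]
predicted = [True, True, True, False, True, False]
p, r, f1 = detection_scores(gold, predicted)
print(round(p, 2), round(r, 2), round(f1, 2))  # → 0.75 0.75 0.75
```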

Link to popular science summary:

14:15-15:00 in E:4130 LUCAS

Presenters: Alice Berggren, Linnea Palmblad
Title: Using Transformers To Improve Search Functions
Examiner: Jacek Malec
Supervisors: Pierre Nugues (LTH), Jesper Gunnarsson (Telavox)

Following the explosion of information available on the internet and in digital databases, search functions have become an essential tool for efficient Information Retrieval (IR). Transformers have become increasingly popular in recent years for building search functions due to their ability to handle natural language processing tasks with remarkable accuracy. Yet, many applications and companies still do not utilize this technology and instead use keyword search methods. This can lead to irrelevant or incomplete search results that force organizations to spend time manually searching for relevant results. To this end, we investigated which Transformer architecture performs best on the sentence-pair tasks from the GLUE benchmark, representing different linguistic properties. Additionally, we evaluated the best-performing models on the IR benchmark CISI, as well as on chat messages and queries taken from a chat application. Based on the results, we propose an SBERT encoder model, which outperformed the other models. Furthermore, we developed a web application to measure the performance of the SBERT model in combination with a Cross-Encoder, and found that the combination of a pre-trained Cross-Encoder and an SBERT model further increases IR performance.
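The bi-encoder retrieval stage described above can be sketched with cosine similarity over sentence embeddings. The three-dimensional vectors below are toy stand-ins for SBERT embeddings; in a real system a pre-trained bi-encoder would produce them, and a Cross-Encoder would then rescore the top-ranked query-document pairs jointly:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return dot / norm

# Toy embeddings standing in for SBERT sentence vectors.
corpus = {
    "How do I reset my password?": [0.9, 0.1, 0.0],
    "Opening hours for support":   [0.1, 0.8, 0.2],
    "Forgot login credentials":    [0.8, 0.2, 0.1],
}
query_vec = [0.85, 0.15, 0.05]  # hypothetical embedding of "password reset help"

# Bi-encoder stage: rank all documents by cosine similarity to the query.
ranked = sorted(corpus, key=lambda doc: cosine(query_vec, corpus[doc]),
                reverse=True)
print(ranked[0])  # → How do I reset my password?
```

Because document embeddings can be precomputed, this stage scales to large corpora; the more expensive Cross-Encoder is then applied only to the few top-ranked candidates.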

Link to popular science summary: To be uploaded



E:2405 (Glasburen)


15:15-16:00 in E:2405 (Glasburen) N.B. No more opponents for this presentation

Presenters: Nils Romanus, Simon Danielsson
Title: Question-Answering Systems in the Financial Domain
Examiner: Flavius Gruian
Supervisors: Pierre Nugues (LTH), Amin Ahmadi (SEB), Rahul Biswas (SEB)

Question-answering systems have gained significant attention for their ability to provide specific information to users through a natural language interface. While recent advancements in large language models and neural architectures have resulted in highly competitive state-of-the-art performance, the application of question-answering systems in real-life scenarios remains a challenge. This master's thesis, conducted in collaboration with the Swedish bank SEB, demonstrates the feasibility of constructing effective open-retrieval question-answering systems for banking applications. Our systems achieve close to state-of-the-art performance on benchmark datasets but are on average 30% worse in the banking setting. We also find a weak correlation between the performance of a given system on benchmark datasets and its performance on banking-specific data. Lastly, we find that lexical evaluation metrics are inadequate for assessing the performance of generative models on banking data.
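The inadequacy of lexical metrics for generative answers can be illustrated with token-overlap F1, a common lexical metric in extractive QA. In this hypothetical example, a paraphrased but correct answer scores far below a near-verbatim one:

```python
def token_f1(prediction, reference):
    """Token-overlap F1, a common lexical metric for QA answers."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    ref_counts = {}
    for t in ref_tokens:
        ref_counts[t] = ref_counts.get(t, 0) + 1
    common = 0
    for t in pred_tokens:
        if ref_counts.get(t, 0) > 0:
            common += 1
            ref_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold answer and two model answers that are both correct:
reference = "the interest rate is 4.5 percent"
print(round(token_f1("the rate is 4.5 percent", reference), 2))  # → 0.91
print(round(token_f1("currently it stands at four point five percent",
                     reference), 2))  # → 0.14
```

A generative model that rewords the answer is heavily penalized even when the content is right, which is one way lexical metrics can mislead on banking data.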

Link to popular science summary: