Tue, Oct 21 · 5:30 PM CEST
## Details
Large language models (LLMs) may seem powerful, but they can be tricked into leaking secrets, making poor decisions, or even bypassing their own rules. Weak prompts, inadequate safeguards, or biased data can open the door to manipulation and unexpected risks.
In this talk, Maryia will use the OWASP Top 10 as a guide to show how these vulnerabilities appear in AI systems. Drawing on real-world examples from the news and her work at Stockholm’s public service, she will demonstrate how attackers exploit these weaknesses, and how familiar testing techniques such as exploratory testing, risk analysis, and monitoring can help prevent them.
Meanwhile, Ida will provide an overview of the challenges, methods, and technologies involved in using deep learning models to segment brain tumour regions in MRI scans, a task that radiologists currently perform manually and that is both time-consuming and resource-intensive. She trained, compared, and combined different models to evaluate their segmentation performance. In this presentation, she will highlight the challenges, the methods, and what this technology could mean for the future of healthcare.
By the end of the evening, you will leave with practical strategies for securing LLMs, insight into deep learning methods for tumour segmentation, and a few laughs along the way.
We look forward to an engaging discussion with all of you.
Welcome!
Agenda:
17:30 - Doors open (food and drinks will be served) 🍜 🍺
18:00 - Deep Learning for Brain Tumor Segmentation in MRI Scans
18:10 - Securing LLMs: Insights into the OWASP Top 10
19:00 - Mingle and networking 🍸 👋
20:00 - Thank you
Speakers:
Ida Kols - Consultant - HIQ
Maryia Tuleika - Quality Engineering Leader - Regent AB
Seats are limited, so make sure to register ASAP! :)