Practical information

Do you work with sensitive text data (e.g., research proposals, participant or patient data, HR data...) and therefore hesitate to use AI tools such as ChatGPT? Are you unsure about what really happens to your data and concerned about privacy and security? And most importantly, are you looking for a safer alternative?

In this course, consisting of a lecture and a workshop, you’ll explore open-source AI tools for handling sensitive text data in a safer way. You’ll compare their capabilities, potential risks and performance with those of the large commercial tools. Finally, you’ll work through case studies whose approaches you can then apply to your own data.

Note that this course can now be booked in two different versions, depending on the audience's prior knowledge and technical skills:

- technical version: for people with (basic) programming skills

- non-technical version: no prerequisites

(This session was previously offered under a slightly different title: "Processing Sensitive Text Data Using Open-Source Large Language Models". The contents of the session have not changed!)