The use and development of (generative) AI has become a constant in scientific research. Notwithstanding the many benefits Artificial Intelligence can offer, these applications also come with limitations and potential hazards, and it is important to remain critical. For researchers it is crucial to learn how to use Artificial Intelligence in a responsible way, and we offer some basic principles to guide you.

We differentiate between the use of AI in research and the development of AI applications. Different guidelines apply to each.

1. Use of AI in research

Key concept: the more responsibility is placed on the AI system, the more human oversight is required afterwards. The responsibility for the correctness and robustness of information ALWAYS lies with the researcher.

2. Development of AI applications

At the University of Antwerp, new AI applications are also being developed. When developing these new technologies, one should be aware of several potential risks, such as the risk of misuse or military application.

ACRAI

UAntwerp has its own research group working actively on 'responsible AI'. ACRAI stands for the Antwerp Centre on Responsible AI and consists of an interdisciplinary team of scientists, legal experts, philosophers, sociologists, and others.

On its website, ACRAI has made several presentations freely accessible, and you can also find a programme of upcoming lectures.

Guidelines & Useful links