For an intelligent algorithm or computational system to perform effectively, the data it receives as input should be mathematically and computationally convenient for its internal processes. To address this, we conduct research along four axes on algorithms and methods for improving the input data and internal representations used by these intelligent systems.
The first axis focuses on designing intelligible models and algorithms with high interpretability and the capability to explain their decisions. This is achieved by exploiting causal relationships and by deeply analysing their internal representations.
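As a simple illustration of what explaining a model's decisions can look like, the sketch below computes permutation importance, which measures how much each input feature contributes to a trained model's predictions. The dataset and the random-forest model are illustrative placeholders, not the methods developed in this axis.

```python
# Minimal sketch of one basic model-explanation tool: permutation importance.
# The dataset and model are placeholders for illustration only.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much the held-out score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:10s} {importance:.3f}")
```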
The second axis focuses on the design of algorithms for learning from synthetic data and simulated environments. Such environments can be used not only to monitor real-time data but also to simulate future scenarios.
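The sketch below illustrates this idea in its simplest form: a model is trained on data produced by a simulator and then evaluated on a simulated future scenario it has not seen. The toy sine-wave simulator, the window size, and the regressor are hypothetical stand-ins for any domain-specific simulation environment and learning method.

```python
# Minimal sketch, assuming a toy simulator: train on synthetic data,
# then check behaviour on a simulated future scenario with unseen drift.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def simulate_sensor(n_steps, drift=0.0):
    """Hypothetical simulator: a noisy sine wave with an optional drift term."""
    t = np.arange(n_steps)
    return np.sin(0.1 * t) + drift * t + rng.normal(0.0, 0.1, n_steps)

def to_supervised(series, window=10):
    """Turn a series into (lag-window, next-value) training pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

# Train on simulated "normal" behaviour.
X_train, y_train = to_supervised(simulate_sensor(1000))
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

# Evaluate on a simulated future scenario with drift the model never saw.
X_future, y_future = to_supervised(simulate_sensor(200, drift=0.002))
print("R^2 on simulated future scenario:", model.score(X_future, y_future))
```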
The third axis focuses on algorithms capable of taking their context, and its changes over time, into account in their decision-making process. Towards this goal, we research sensor fusion and transfer learning methods for time series data.
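The sketch below combines the two ingredients named above in their most basic form: sensor fusion by concatenating windows from two sensor streams, and transfer learning by pre-training on a data-rich source context and fine-tuning on a small sample from a shifted target context. The sensor streams, the contexts, and the warm-started neural network are illustrative assumptions, not the specific methods under study.

```python
# Minimal sketch, assuming two hypothetical sensor streams and two contexts:
# (1) early fusion of the streams, (2) transfer by pre-train + fine-tune.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def windows(series, w=16):
    return np.array([series[i:i + w] for i in range(len(series) - w)])

def make_context(n, shift=0.0):
    """Two correlated sensor streams plus a target signal for one context."""
    t = np.arange(n)
    s1 = np.sin(0.05 * t + shift) + rng.normal(0, 0.05, n)
    s2 = np.cos(0.05 * t + shift) + rng.normal(0, 0.05, n)
    y = s1 * s2                                  # quantity to predict
    X = np.hstack([windows(s1), windows(s2)])    # early fusion of both sensors
    return X, y[16:]

# Pre-train on the source context, where data is plentiful.
X_src, y_src = make_context(2000)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=300,
                     warm_start=True, random_state=0)
model.fit(X_src, y_src)

# Fine-tune on a small sample from the target context (shifted dynamics);
# warm_start keeps the weights learned on the source context.
X_tgt, y_tgt = make_context(200, shift=0.8)
model.set_params(max_iter=50)
model.fit(X_tgt[:100], y_tgt[:100])
print("R^2 on unseen target data:", model.score(X_tgt[100:], y_tgt[100:]))
```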
Complementing the three axes above, the fourth axis is oriented towards designing methods that process the data fed to a machine learning algorithm in order to produce engineered features with improved characteristics, helping the models achieve better performance.
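A minimal sketch of this idea follows: engineered features (here, simple rolling statistics) are derived from a raw signal before it is fed to a model. The raw signal, the window length, and the chosen statistics are hypothetical placeholders for the feature-construction methods investigated on this axis.

```python
# Minimal sketch, assuming a hypothetical raw signal: derive rolling-statistic
# features that summarise recent behaviour in a form many models exploit
# more easily than the raw samples.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
raw = pd.Series(np.sin(0.1 * np.arange(500)) + rng.normal(0, 0.2, 500),
                name="raw_signal")

features = pd.DataFrame({
    "mean_16": raw.rolling(16).mean(),
    "std_16": raw.rolling(16).std(),
    "min_16": raw.rolling(16).min(),
    "max_16": raw.rolling(16).max(),
    "diff_1": raw.diff(),
}).dropna()

print(features.head())
```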