Who we are
The ELP group develops agile technology for the analysis, transformation, verification, debugging, validation, synthesis, certification, and automatic optimization of programs, models, and data. Its most recent achievements include the analysis of properties of cryptographic programs and protocols, techniques and tools for software analysis and verification, the evaluation and calibration of data classifiers, and the diagnosis and automatic repair of complex systems.
- Industrial Formal Methods. Formal Methods help eradicate software failures that might otherwise lead to security vulnerabilities. We develop formal techniques and automated tools to guarantee software quality and trustworthiness.
- Logic for Automated Reasoning and Systems Verification. Logic provides an expressive framework for specifying computational systems and analyzing their properties, formulated as logical statements. We automate deductive techniques and use logical models to represent computer systems and analyze their correct behavior efficiently.
- Semantic-based Analysis. Semantic-based analysis and verification techniques are used to reason about complex systems, for instance to formally prove their safety and correctness or to synthesize contracts from the program code. These techniques can be defined for programming languages of any paradigm and for different application domains (reactive systems, concurrent systems, protocols, transformational/functional systems, etc.).
- Cryptographic Protocol Analysis. The design and analysis of security protocols is critical to verifying the security of communications. We have developed the state-of-the-art protocol analyzer Maude-NPA in collaboration with the Naval Research Laboratory in Washington, D.C., and the University of Illinois at Urbana-Champaign. We have added support for cryptographic properties such as exclusive-or, elliptic curves, homomorphisms, and bilinear pairing, and we have expanded the modelling capabilities with message travel time and spatial location information. We have also developed many analysis optimization techniques as well as protocol transformations. Finally, we work on the analysis of new quantum-resistant protocols that will replace the current standards, which will become obsolete as quantum computers become more common.
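What makes properties like exclusive-or challenging for protocol analysis is their algebraic identities (self-cancellation, `x ^ x = 0`, and identity, `x ^ 0 = x`), which an attacker can exploit. The following is a minimal illustrative sketch in Python (not Maude-NPA itself, and all names are ours) of how these identities let key reuse leak information:

```python
# Illustrative only: the algebraic identities of XOR (x ^ x = 0, x ^ 0 = x)
# are the kind of equational properties a protocol analyzer must reason about.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise exclusive-or of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

key = b"\x13\x37\x42\x99\x00\xfe\xca\xfe\xba\xbe\x00\x01\x02"
msg = b"ATTACK AT ONE"           # 13 bytes, same length as the key
ct = xor_bytes(msg, key)

# Self-cancellation: XORing with the same key again recovers the message.
assert xor_bytes(ct, key) == msg

# Key reuse: XORing two ciphertexts cancels the key entirely, leaving the
# XOR of the two plaintexts -- the attacker learns this without the key.
msg2 = b"RETREAT NOW!!"
ct2 = xor_bytes(msg2, key)
assert xor_bytes(ct, ct2) == xor_bytes(msg, msg2)
```

Equational reasoning of this kind, performed symbolically over all possible messages rather than on concrete byte strings, is what distinguishes analysis modulo algebraic theories from plain free-term analysis.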
- Robust Evaluation of AI Capabilities. Evaluation frameworks can be used to anticipate how well a system will behave in new cases and situations, which has important practical applications for safety, ethics, and governance (e.g., spam/fraud detection, credit scoring models, facial recognition systems, and network anomaly detection). We develop robust evaluation frameworks that identify a capability profile of an AI system rather than just its average performance. We also work on testing platforms designed to facilitate progress on the key cognitive abilities that are currently missing in even the most advanced AI systems, from agents to language models.
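The difference between a capability profile and an average score can be made concrete with a small sketch (ours, not the group's framework): two systems with identical average accuracy can behave very differently as item difficulty grows, and only the per-difficulty profile reveals it.

```python
# Hypothetical sketch: a capability profile reports performance as a
# function of item difficulty, not as a single average score.
from collections import defaultdict

def capability_profile(results):
    """results: iterable of (difficulty_bin, correct) -> per-bin accuracy."""
    hits, totals = defaultdict(int), defaultdict(int)
    for d, ok in results:
        totals[d] += 1
        hits[d] += int(ok)
    return {d: hits[d] / totals[d] for d in sorted(totals)}

# Two systems with the same 0.5 average accuracy but different profiles:
sys_a = [(d, d <= 2) for d in (1, 2, 3, 4) for _ in range(10)]       # masters easy items
sys_b = [(d, i % 2 == 0) for d in (1, 2, 3, 4) for i in range(10)]   # coin-flips everywhere

avg = lambda r: sum(ok for _, ok in r) / len(r)
assert avg(sys_a) == avg(sys_b) == 0.5
print(capability_profile(sys_a))  # {1: 1.0, 2: 1.0, 3: 0.0, 4: 0.0}
print(capability_profile(sys_b))  # {1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5}
```

A profile like `sys_a`'s supports extrapolation (the system is reliable up to difficulty 2), whereas `sys_b`'s flat 0.5 suggests guessing; the averages alone cannot distinguish them.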
- Metamodels for Adversarial Machine Learning. We aim to reconcile transparency and privacy in Machine Learning. We extract intrinsic (meta-)characteristics of black-box Machine Learning models in order to identify weaknesses, vulnerabilities, or gaps, and we improve system security and privacy by providing appropriate countermeasures.
- Visual Artificial Intelligence. We study and analyze the use of Deep Learning (Convolutional Neural Networks and Transformers) in Digital Image Processing and Analysis. We also work on the use of Explainable Artificial Intelligence (XAI) to allow models to explain their predictions.
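One simple model-agnostic XAI technique, occlusion-based importance, can be sketched in a few lines (an illustrative toy of ours, not the group's method): mask each input feature in turn and attribute importance by how much the model's score drops.

```python
# Hypothetical sketch: occlusion-based feature importance.
# Mask one feature at a time and measure the drop in the model's score.

def occlusion_importance(predict, x, baseline=0.0):
    """Return, per feature, the score drop when that feature is occluded."""
    base_score = predict(x)
    importances = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline          # replace the feature with a neutral value
        importances.append(base_score - predict(occluded))
    return importances

# Toy linear "model" in which feature 1 dominates the prediction:
predict = lambda x: 0.1 * x[0] + 0.8 * x[1] + 0.1 * x[2]
imp = occlusion_importance(predict, [1.0, 1.0, 1.0])
assert imp.index(max(imp)) == 1  # the explanation singles out feature 1
```

For images, the same idea slides an occluding patch over pixel regions instead of single features, producing the heatmaps commonly used to show which parts of an image drove a CNN's prediction.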
- Data Science Methodologies and Automation. We work on data science methodologies that can lead to more efficient data science projects, through the notion of data science trajectories and the integration of automation (through mechanization, composition, or assistance) into the data science pipeline. We apply these methodologies to real data science projects in different domains, from understanding customer behavior to analyzing urban segregation.