The Human Insight + AI (HI+AI) Lab explores the synergy between human understanding and artificial intelligence to develop intelligent systems that are adaptive,
interpretable, and socially responsible. Our mission is to design AI that not only performs well but also aligns with human values, cognitive processes,
and real-world needs.
We focus on building AI systems that can understand, model, and support complex human behavior, with a research agenda that spans machine learning,
user modeling, explainable AI (XAI), cognitive science, and data-driven assessment.
Our work is particularly oriented toward real-world, high-stakes domains (such as education, healthcare, and social systems) where trust, transparency,
and fairness are essential. We are especially interested in developing interpretable and adaptive AI models that can inform and enhance human decision-making in these contexts.
The HI+AI Lab aims to push the boundaries of
what AI can achieve, while remaining firmly grounded in ethical, cognitive, and societal considerations.
This research project investigates the integration of Artificial Intelligence (AI) into educational environments to enhance personalized learning and improve student outcomes. Positioned at the intersection of Learning Science and AI in Education, the project explores how intelligent systems can adaptively tailor instructional content and learning strategies to individual learner needs. The core of the project involves designing and implementing machine learning models that analyze learner data (such as students' performance, engagement patterns, and learning behaviors) to generate personalized learning pathways. These pathways will support timely, data-driven interventions aimed at maximizing learning efficiency and effectiveness. By leveraging predictive analytics, the project seeks to anticipate learner challenges and proactively guide students through customized instructional experiences. By bridging Learning Science with AI innovation, this project aims to transform the learning experience through scalable, intelligent systems that promote equity, adaptability, and success in diverse educational settings.
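As a concrete illustration of this kind of predictive analytics (a minimal sketch, not the lab's actual pipeline), the example below trains a simple classifier on hypothetical learner features such as past accuracy, time-on-task, and attempts per item, in order to flag students at risk of struggling on upcoming material. The feature set, the synthetic data, and the choice of logistic regression are all assumptions made purely for demonstration.

# Illustrative sketch only: a minimal predictive model that flags students
# likely to struggle, based on hypothetical learner features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical learner data: [past accuracy, time-on-task (hrs/week), attempts per item]
X = np.array([
    [0.92, 5.0, 1.2],
    [0.45, 1.5, 3.4],
    [0.78, 3.2, 1.8],
    [0.30, 0.8, 4.1],
    [0.85, 4.5, 1.5],
    [0.55, 2.0, 2.9],
    [0.66, 2.8, 2.2],
    [0.40, 1.2, 3.8],
])
# Label: 1 = struggled on the next unit, 0 = did not (synthetic labels for illustration)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Predicted risk scores could then trigger timely, data-driven interventions
# (e.g., recommending review material before the next unit).
risk = model.predict_proba(X_test)[:, 1]
print("Predicted struggle risk:", risk)

In a real system the risk scores would feed back into the learning pathway, but any such integration is beyond this toy example.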
This research project focuses on providing interpretable explanations for complex AI model decisions on tabular data by leveraging Bayesian Networks, a probabilistic graphical modeling framework well-suited for capturing dependencies and causal relationships among variables, together with the Markov Blanket. Tabular data is ubiquitous across domains such as healthcare, education, and finance; however, the lack of transparency in many high-performing machine learning models presents a major barrier to their use in decision-critical applications. Bayesian Networks offer a principled approach to modeling conditional dependencies, enabling transparent and structured reasoning. A key strength of this approach lies in the use of the Markov Blanket, which identifies the minimal and most relevant subset of features that directly influence a target variable. This not only enhances model interpretability but also improves computational efficiency by focusing on the most informative variables. These models are particularly effective in uncovering causal and probabilistic relationships in data, making them a robust foundation for explainable decision-support systems. By developing methodologies that allow users to trace, understand, and trust the reasoning behind model predictions, this project seeks to bridge the gap between high predictive performance and interpretability.
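To make the Markov Blanket idea concrete, the following minimal sketch computes the Markov blanket of a target node, that is, its parents, its children, and its children's other parents, from a Bayesian-network structure given as a parent list. The network below (variables, edges, and domain) is hypothetical and chosen only for illustration; it is not taken from the project's data.

# Illustrative sketch: the Markov blanket of a node in a Bayesian-network DAG
# consists of its parents, its children, and the children's other parents.
def markov_blanket(dag, target):
    """dag maps each node to the list of its parent nodes."""
    parents = set(dag.get(target, []))
    children = {node for node, pars in dag.items() if target in pars}
    spouses = {p for child in children for p in dag[child] if p != target}
    return parents | children | spouses

# Hypothetical DAG over tabular variables: node -> list of parents
dag = {
    "Age": [],
    "Smoking": [],
    "Diet": [],
    "Exercise": ["Age"],
    "BloodPressure": ["Age", "Exercise", "Diet"],
    "HeartDisease": ["BloodPressure", "Smoking"],
}

# Only these variables are needed to reason about BloodPressure;
# all others are conditionally independent of it given this set.
print(sorted(markov_blanket(dag, "BloodPressure")))
# -> ['Age', 'Diet', 'Exercise', 'HeartDisease', 'Smoking']

Restricting an explanation to this minimal set is what gives the approach both its interpretability and its computational efficiency: features outside the blanket add no information about the target once the blanket is known.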
Feel free to reach out via email:
sein.minn.cs@gmail.com