About the lab

The Human Insight + AI (HI+AI) Lab, led by Sein Minn, explores the synergy between human understanding and artificial intelligence to develop intelligent systems that are adaptive, interpretable, and socially responsible. Our mission is to design AI that not only performs well but also aligns with human values, cognitive processes, and real-world needs. We build AI systems that can understand, model, and support complex human behavior, with a research agenda spanning machine learning, user modeling, explainable AI (XAI), cognitive science, and data-driven assessment. Our work is oriented toward real-world, high-stakes domains (such as education, healthcare, and social systems) where trust, transparency, and fairness are essential. We are especially interested in developing interpretable and adaptive AI models that inform and enhance human decision-making in these contexts. The HI+AI Lab aims to push the boundaries of what AI can achieve while remaining firmly grounded in ethical, cognitive, and societal considerations.

Projects

Advancing Personalized Learning through AI-Driven Educational Technologies

This research project investigates the integration of Artificial Intelligence (AI) into educational environments to enhance personalized learning and improve student outcomes. Positioned at the intersection of Learning Science and AI in Education, the project explores how intelligent systems can adaptively tailor instructional content and learning strategies to individual learner needs. At its core, the project designs and implements machine learning models that analyze learner data (such as students' performance, engagement patterns, and learning behaviors) to generate personalized learning pathways. These pathways support timely, data-driven interventions aimed at maximizing learning efficiency and effectiveness. Leveraging predictive analytics, the project anticipates learner challenges and proactively guides students through customized instructional experiences; a minimal modeling sketch follows below. By bridging Learning Science with AI innovation, this project aims to transform the learning experience through scalable, intelligent systems that promote equity, adaptability, and success in diverse educational settings.
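As one concrete illustration of the kind of learner modeling involved, the sketch below implements classic Bayesian Knowledge Tracing (BKT), the probabilistic model that several of the publications listed below build on (e.g., BKT-LSTM, IKT, DKT-DSC). It is a minimal sketch, not a lab implementation: the parameter values are illustrative assumptions, and in practice they would be fitted from student log data.

    from dataclasses import dataclass

    @dataclass
    class BKT:
        """Classic Bayesian Knowledge Tracing for a single skill.

        Parameter values below are hypothetical; real systems fit them
        from observed response logs.
        """
        p_know: float = 0.2    # P(L0): prior probability the skill is already mastered
        p_learn: float = 0.15  # P(T): probability of learning at each practice step
        p_slip: float = 0.1    # P(S): probability of answering wrong despite mastery
        p_guess: float = 0.25  # P(G): probability of answering right without mastery

        def predict_correct(self) -> float:
            # P(correct) = P(L)(1 - S) + (1 - P(L)) G
            return self.p_know * (1 - self.p_slip) + (1 - self.p_know) * self.p_guess

        def update(self, correct: bool) -> None:
            # Bayesian posterior on mastery given the observed response
            if correct:
                num = self.p_know * (1 - self.p_slip)
                denom = num + (1 - self.p_know) * self.p_guess
            else:
                num = self.p_know * self.p_slip
                denom = num + (1 - self.p_know) * (1 - self.p_guess)
            posterior = num / denom
            # Account for learning during this practice opportunity
            self.p_know = posterior + (1 - posterior) * self.p_learn

    # Example: trace estimated mastery over a short response sequence
    # (1 = correct, 0 = incorrect)
    bkt = BKT()
    for response in [0, 1, 1, 0, 1, 1]:
        print(f"P(correct next) = {bkt.predict_correct():.3f}")
        bkt.update(bool(response))
    print(f"Final P(mastery) = {bkt.p_know:.3f}")

In a typical deployment, one such model is maintained per skill, and the predicted probability of a correct response is what drives the data-driven interventions described above, e.g., selecting the next item or flagging a student for support.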

Relevant Publications

  1. EIKT, AIED, 2025 paper
  2. Privacy-Preserving Synthetic Data Generation, EC-TEL, 2018 paper
  3. IKT, AAAI, 2022 paper, code
  4. BKT-LSTM, arXiv, 2021 paper, code
  5. DSCMN, PAKDD, 2019 paper, code
  6. DKT-DSC, ICDM, 2018 paper, code
  7. KT, ICDM, 2018 paper
  8. Q-matrix Refinement, EC-TEL, 2016 paper
  9. University Library Recommender System, e-learning, 2013 paper

Interpretable Explanations for Complex AI Models on Tabular Data Using Bayesian Networks and Markov Blankets

This research project focuses on providing interpretable explanations for the decisions of complex AI models on tabular data by leveraging Bayesian Networks, a probabilistic graphical modeling framework well-suited for capturing dependencies and causal relationships among variables, together with the Markov Blanket. Tabular data is ubiquitous across domains such as healthcare, education, and finance; however, the lack of transparency in many high-performing machine learning models is a major barrier to their use in decision-critical applications. Bayesian Networks offer a principled approach to modeling conditional dependencies, enabling transparent and structured reasoning. A key strength of this approach lies in the Markov Blanket, which identifies the minimal, most relevant subset of features that directly influence a target variable: its parents, its children, and its children's other parents in the network (see the sketch below). This not only enhances model interpretability but also improves computational efficiency by focusing on the most informative variables. These models are particularly effective at uncovering causal and probabilistic relationships in data, making them a robust foundation for explainable decision-support systems. By developing methodologies that allow users to trace, understand, and trust the reasoning behind model predictions, this project seeks to bridge the gap between high predictive performance and interpretability.
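As a minimal sketch of the core idea, the toy example below extracts the Markov blanket of a target node from a Bayesian network given as a parent map: the blanket is the union of the node's parents, its children, and its children's other parents (spouses). The network structure and variable names are hypothetical, chosen only to mirror the tabular-health setting mentioned above.

    # Minimal sketch (hypothetical network): Markov blanket extraction
    # from a Bayesian network represented as a parent map.

    def markov_blanket(parents: dict[str, set[str]], target: str) -> set[str]:
        """Return the Markov blanket of `target`: its parents, its
        children, and the other parents of its children (spouses)."""
        blanket = set(parents.get(target, set()))              # parents
        children = {v for v, pa in parents.items() if target in pa}
        blanket |= children                                    # children
        for child in children:
            blanket |= parents[child] - {target}               # spouses
        return blanket

    # Hypothetical tabular-health example: each key maps a node to its parents.
    net = {
        "Age":        set(),
        "Smoking":    set(),
        "Disease":    {"Age", "Smoking"},
        "TestResult": {"Disease"},
        "Genetics":   set(),
        "Symptom":    {"Disease", "Genetics"},
    }

    print(sorted(markov_blanket(net, "Disease")))
    # -> ['Age', 'Genetics', 'Smoking', 'Symptom', 'TestResult']

Conditioned on its Markov blanket, the target variable is independent of every other variable in the network, which is why the blanket is exactly the feature subset an explanation of the target's prediction needs to consider.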

Relevant Publications

  1. LAPLACE, arXiv, 2023 paper, code
  2. FSBN & SSBN (CXAI), ICDM, 2023 paper
  3. GBNC, DSAA, 2014 paper
  4. GMBNC, MLDM, 2014 paper
  5. MBNC, Computer Science and Application, 2014 paper (in Chinese)

Funding

We would like to thank the following sponsors and funding agencies for supporting our research: the Kyoto University-Inria associate team grant (Inria, France), the Natural Sciences and Engineering Research Council of Canada (NSERC, Canada), the National Natural Science Foundation of China (NSFC, China), a National Institute of Informatics visiting research grant (Japan), a Singapore Management University visiting research grant (Singapore), the Science and Technology Foundation of Xiamen (China), and the Science Foundation of Huaqiao University (China).

Contact

Feel free to reach out via email:

sein.minn.cs @ gmail.com