Lab Mission

We design, study, and responsibly deploy AI systems to improve human decision-making in engineering and education.

Our work advances theory and practice through interdisciplinary, evidence-based frameworks, usable tools, and open resources that help people learn, design, and govern complex socio-technical systems.

Research Thrusts

    As noted in the About page, our research is currently organized around four thrusts:
    1. AI‑Powered Pedagogy: Generative AI for learning, tutoring, feedback, and assessment; design patterns and practices that move beyond “prompting”.
    2. Decision Architectures: Modeling and supporting decisions in complex engineering systems; human–AI teaming and allocation of cognitive work.
    3. Cognitive Modeling: Leveraging NLP and LLMs to study and improve cognitive and psychological processes; understanding and improving the mental models of educators, students, and researchers who use AI; examining how beliefs and incentives shape adoption and outcomes.
    4. Sustainable Systems: Data‑driven approaches that integrate technical, social, and ecological constraints.

Approach and Methods

  • Mixed methods: qualitative studies, surveys, rapid experiments, and causal/observational analyses.
  • NLP/LLM engineering for research and instruction (evaluation, reliability, safety).
  • Responsible AI practices: risk assessment, documentation, and guardrails.
  • Open science: reproducible code/data, transparent reporting, and shareable teaching materials.

Outputs and Impact

  • Frameworks and guides for responsible AI integration in education and research.
  • Reusable datasets, instruments, and code.
  • Classroom examples, case studies, and templates for instructors.
  • Peer‑reviewed publications, practitioner workshops, and public talks/panels.

Lab Culture and Values

  • Rigor with empathy: high standards, supportive mentoring, and psychological safety.
  • Responsibility: privacy, fairness, and safety are first‑order requirements.
  • Reproducibility by default: version control, data provenance, and documented workflows.
  • Inclusion and respect: diverse perspectives strengthen our science and our impact.
  • Growth mindset: frequent feedback, reflection, and continuous improvement.

How We Work

  • Project lifecycle: seed → active study → completed; short written plan, milestones, and defined deliverables at each stage.
  • Collaboration: weekly project check‑ins, concise async updates; shared repos and briefs; single DRI (directly responsible individual) per deliverable.
  • Writing pipeline: outline → figure plan → methods → results → intro/discussion; early and iterative drafting.
  • Authorship: contribution‑based, discussed at project start and revisited transparently.
  • Tools: GitHub for code/issues; lightweight docs for project briefs; structured datasets with data cards.

Mentoring Commitments

  • Individual development plans with goals for skills, publications, and portfolios.
  • Regular 1:1s focused on feedback and career development.
  • Opportunities to lead: papers, talks, workshops, and open‑source releases.

Expectations

  • Be proactive: communicate risks early, propose next steps, and ask for help when blocked.
  • Write things down: decisions, assumptions, and procedures.
  • Practice responsible AI: evaluate failures, document limitations, and avoid overclaiming.
  • Support one another: we win as a team; professionalism is non‑negotiable.

Collaborations

We collaborate with educators, researchers, and practitioners to co‑design studies, evaluate real deployments, and translate results into usable resources (courses, tools, and policy guidance).

Joining the Lab

We welcome curious, mission‑driven students and collaborators across disciplines. See the Join and Contact pages for current openings and how to get in touch.