Assessment Framework for AI in Engineering Education
A comprehensive framework for assessing student learning in AI-integrated engineering courses
Last updated: 2025-04-08
Introduction
This framework provides a structured approach to assessment in engineering courses that incorporate generative AI tools. It addresses the challenges of measuring learning outcomes when AI tools are part of the educational environment and offers concrete strategies for designing appropriate assessments.
Table of Contents
- Assessment Challenges in a Generative AI Environment
- Guiding Principles for Generative AI-Aware Assessment
- The Assessment Alignment Dimension
- Assessment Strategy Matrix
- AI Literacy Rubric Framework
- Output Evaluation Framework
- Documentation and Process Assessment
- Student Self-Assessment Tools
- Academic Integrity Considerations
- Implementation Strategies
- Common Assessment Pitfalls
- Assessment Examples by Discipline
- Ready-to-Use Assessment Templates
- Ensuring Assessment Equity
1. Assessment Challenges in a Generative AI Environment
Engineering educators face several challenges when assessing student learning in courses that incorporate generative AI tools:
Challenge 1: Determining Authorship and Contribution
Problem: When students use AI tools, it becomes difficult to distinguish between student-generated and AI-generated work.
Assessment Implications: Traditional assessment methods that assume independent student work may no longer be valid.
Challenge 2: Balancing Foundational Skills and AI-Enhanced Capabilities
Problem: While AI tools can enhance student capabilities, foundational engineering knowledge remains essential.
Assessment Implications: Need for assessment approaches that can measure both fundamental skills and the ability to effectively use AI tools.
Challenge 3: Evaluating Process vs. Product
Problem: The final product may not reveal the quality of the student's process in working with AI.
Assessment Implications: Need for assessment methods that evaluate process and critical thinking, not just final outputs.
Challenge 4: Addressing Varying Levels of AI Access and Proficiency
Problem: Students may have different levels of access to and proficiency with AI tools.
Assessment Implications: Need for equitable assessment approaches that account for these differences.
2. Guiding Principles for Generative AI-Aware Assessment
These principles provide a foundation for designing assessment in the age of generative AI:
Principle 1: Focus on Higher-Order Thinking
Assessment should emphasize critical thinking, evaluation, and judgment rather than information recall or routine procedures that generative AI can easily perform.
Principle 2: Balance AI-Restricted and AI-Enhanced Components
Effective assessment strategies include both components where AI use is restricted (to assess foundational knowledge) and components where AI use is encouraged (to assess AI-enhanced capabilities).
Principle 3: Assess Process Documentation and Reflection
Students should document their process of using AI tools, including prompting strategies, evaluation of outputs, and modifications made.
Principle 4: Include Comparative Analysis
Ask students to compare AI-generated approaches with traditional methods, evaluating strengths and limitations of each.
Principle 5: Emphasize Application to Novel Contexts
Assessment should require students to apply concepts to new situations not covered directly in course materials or easily solved through basic AI prompting.
Visual Summary of Assessment Principles
┌───────────────────────────────────────────────────────────────┐
│                 AI-AWARE ASSESSMENT PRINCIPLES                 │
└───────────────────────────────────────────────────────────────┘
          │                     │                     │
          ▼                     ▼                     ▼
┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│  WHAT to Assess  │  │  HOW to Assess   │  │ WHERE to Assess  │
└──────────────────┘  └──────────────────┘  └──────────────────┘
          │                     │                     │
          ▼                     ▼                     ▼
┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│   Higher-Order   │  │     Document     │  │  Novel Contexts  │
│     Thinking     │  │     Process      │  │                  │
└──────────────────┘  └──────────────────┘  └──────────────────┘
          │                     │                     │
          ▼                     ▼                     ▼
┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│     Balance      │  │     Compare      │  │   Professional   │
│   Restricted/    │  │    Approaches    │  │    Scenarios     │
│    Enhanced      │  │                  │  │                  │
└──────────────────┘  └──────────────────┘  └──────────────────┘
3. The Assessment Alignment Dimension
The Assessment Alignment dimension of the Generative AI Integration Taxonomy includes five approaches that can be combined and adapted for different learning contexts:
Process Documentation
- Definition: Evaluating how students use generative AI in their engineering workflow
- Assessment Methods: generative AI interaction logs, process journals, prompt development documentation
- Key Metrics: Quality of prompting, iteration strategies, workflow efficiency
Comparative Analysis
- Definition: Assessing students' ability to evaluate generative AI outputs against alternatives
- Assessment Methods: Comparative reports, side-by-side analysis, evaluation matrices
- Key Metrics: Critical evaluation skills, recognition of strengths/limitations, justification quality
Critical Evaluation
- Definition: Measuring how students verify and refine generative AI contributions
- Assessment Methods: Verification reports, error analysis, enhancement documentation
- Key Metrics: Verification strategies, error detection accuracy, quality of refinements
Meta-Learning
- Definition: Assessing students' reflection on how generative AI affects their engineering learning process
- Assessment Methods: Reflective essays, learning journals, self-assessments
- Key Metrics: Metacognitive awareness, learning strategy adaptation, transfer of learning
Generative AI-Restricted Components
- Definition: Maintaining some assessment components that prohibit generative AI use
- Assessment Methods: Proctored exams, in-class activities, specialized assignments
- Key Metrics: Foundational knowledge, independent problem-solving ability, core competencies
Assessment Alignment Decision Tree
                      ┌────────────────────────────┐
                      │   What are your primary    │
                      │   assessment objectives?   │
                      └────────────────────────────┘
                                   │
        ┌───────────────┬──────────┴──────┬─────────────────┐
        ▼               ▼                 ▼                 ▼
┌───────────────┐ ┌─────────────┐ ┌───────────────┐ ┌───────────────┐
│ Assess        │ │ Evaluate AI │ │ Measure       │ │ Develop       │
│ fundamental   │ │ literacy    │ │ process       │ │ metacognitive │
│ knowledge     │ │             │ │ quality       │ │ skills        │
└───────────────┘ └─────────────┘ └───────────────┘ └───────────────┘
        │               │                 │                 │
        ▼               ▼                 ▼                 ▼
┌───────────────┐ ┌─────────────┐ ┌───────────────┐ ┌───────────────┐
│ AI-RESTRICTED │ │ COMPARATIVE │ │ PROCESS       │ │ META-LEARNING │
│ COMPONENTS    │ │ ANALYSIS    │ │ DOCUMENTATION │ │ ASSESSMENT    │
└───────────────┘ └─────────────┘ └───────────────┘ └───────────────┘
4. Assessment Strategy Matrix
This matrix provides guidance on which assessment approaches are most appropriate based on the integration depth and student agency dimensions of your AI implementation:
| Student Agency ↓ / Integration Depth → | Supplemental Resource | Guided Integration | Embedded Practice | Transformative Redesign |
|---|---|---|---|---|
| Instructor-Directed | • Traditional assessments with optional AI components • Basic AI literacy measures | • Guided AI tasks with specific evaluation criteria • Process documentation for structured AI activities | • Comparative analysis between AI and traditional approaches • Focused assessment of AI augmentation skills | • Comprehensive AI workflow assessment • New skill evaluation frameworks |
| Scaffolded Autonomy | • Progressive AI skill assessment • Foundational + optional AI components | • Structured documentation of increasingly complex AI use • Guided reflection on AI contributions | • Process + outcome assessment • Evaluation of prompt refinement strategies | • Assessment of complex AI integration • Adaptive assessment pathways |
| Guided Exploration | • Self-selected AI application assessment • Documentation of exploration process | • Critical evaluation of AI outputs in domain contexts • Assessment of exploration boundaries | • Project-based assessment with AI components • Creative application evaluation | • Novel problem-solving with AI • Assessment of AI tool selection strategy |
| Full Autonomy | • Student-designed AI integration assessment • Focus on decision-making justification | • Comprehensive documentation of autonomous AI use • Assessment of AI strategy development | • Portfolio assessment of AI-enhanced work • Evaluation of tool orchestration | • Professional practice simulation • Assessment of AI enhancement of engineering judgment |
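For instructors who manage assessment plans programmatically (for example, in a course-planning script), the matrix can be encoded as a simple lookup table keyed by the two dimensions. The sketch below is illustrative only; all names (`STRATEGY_MATRIX`, `suggest_assessments`) are hypothetical, and only two cells are filled in.

```python
# Illustrative encoding of the assessment strategy matrix as a lookup
# table keyed by (student agency, integration depth). Hypothetical names.

INTEGRATION_DEPTHS = [
    "Supplemental Resource", "Guided Integration",
    "Embedded Practice", "Transformative Redesign",
]
AGENCY_LEVELS = [
    "Instructor-Directed", "Scaffolded Autonomy",
    "Guided Exploration", "Full Autonomy",
]

STRATEGY_MATRIX = {
    ("Instructor-Directed", "Supplemental Resource"):
        ["Traditional assessments with optional AI components",
         "Basic AI literacy measures"],
    ("Scaffolded Autonomy", "Guided Integration"):
        ["Structured documentation of increasingly complex AI use",
         "Guided reflection on AI contributions"],
    # ... remaining cells follow the table above ...
}

def suggest_assessments(agency: str, depth: str) -> list[str]:
    """Return the assessment strategies for one cell of the matrix."""
    return STRATEGY_MATRIX.get((agency, depth), [])

print(suggest_assessments("Instructor-Directed", "Supplemental Resource"))
```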
5. AI Literacy Rubric Framework
This framework provides a structure for assessing students' AI literacy in engineering contexts across five key competency areas. Each competency can be assessed on a scale from Novice to Expert.
AI Tool Operation
The ability to effectively use generative AI tools for engineering tasks
| Level | Descriptor | Observable Behaviors |
|-------|------------|----------------------|
| Novice | Basic operational understanding | • Uses basic prompts with AI tools • Follows prescribed interaction patterns • May struggle with tool limitations |
| Developing | Functional operation with guidance | • Formulates clear prompts for routine tasks • Navigates basic tool features • Recognizes common error patterns |
| Proficient | Independent effective operation | • Creates well-structured prompts for complex tasks • Utilizes advanced features appropriately • Troubleshoots common issues independently |
| Expert | Sophisticated tool orchestration | • Develops strategic prompt sequences • Combines multiple AI tools effectively • Creates innovative workflows • Anticipates and mitigates tool limitations |
Critical Evaluation
The ability to critically evaluate AI-generated content for engineering applications
| Level | Descriptor | Observable Behaviors |
|-------|------------|----------------------|
| Novice | Basic recognition of obvious errors | • Identifies clear factual errors • May accept plausible-sounding but incorrect information • Limited verification strategies |
| Developing | Structured evaluation approach | • Verifies key calculations or claims • Recognizes common AI limitations in engineering contexts • Uses prescribed verification methods |
| Proficient | Comprehensive evaluation | • Systematically verifies outputs against established knowledge • Identifies subtle errors or limitations • Evaluates appropriateness for specific engineering contexts |
| Expert | Sophisticated critical analysis | • Employs multiple verification strategies • Identifies boundary conditions where AI may fail • Evaluates conceptual frameworks and assumptions • Conducts rigorous analysis of AI outputs |
Prompt Engineering
The ability to craft effective prompts for engineering tasks
| Level | Descriptor | Observable Behaviors |
|-------|------------|----------------------|
| Novice | Basic prompting | • Uses simple, direct prompts • Minimal context or specifications provided • Limited iteration on prompts |
| Developing | Structured prompting | • Includes relevant technical parameters • Provides necessary context • Iterates prompts based on initial results |
| Proficient | Strategic prompting | • Crafts prompts with precise technical specifications • Includes constraint parameters and boundary conditions • Employs systematic iteration strategies • Specifies output format and detail level |
| Expert | Advanced prompting techniques | • Develops multi-step prompting strategies • Effectively communicates complex engineering requirements • Anticipates AI limitations in prompt design • Employs domain-specific terminology effectively |
Output Enhancement
The ability to refine, extend, and improve AI-generated content
| Level | Descriptor | Observable Behaviors |
|-------|------------|----------------------|
| Novice | Basic editing | • Makes simple corrections to obvious errors • Minimal modifications to AI output • May use output with little enhancement |
| Developing | Functional enhancement | • Corrects technical errors consistently • Enhances explanations for clarity • Adapts formatting for improved communication |
| Proficient | Comprehensive refinement | • Integrates multiple AI outputs cohesively • Significantly extends analysis beyond AI suggestions • Applies engineering judgment to improve solutions • Adapts outputs for specific audiences |
| Expert | Transformative enhancement | • Uses AI output as foundation for innovative solutions • Synthesizes AI suggestions with advanced domain knowledge • Creates novel approaches by building on AI contributions • Produces output that significantly exceeds original AI quality |
Meta-Cognitive Awareness
The ability to reflect on and regulate AI use in engineering learning
| Level | Descriptor | Observable Behaviors |
|-------|------------|----------------------|
| Novice | Limited awareness | • Basic recognition of when AI was helpful • Minimal reflection on AI impact on learning • May over-rely on or under-utilize AI tools |
| Developing | Developing awareness | • Identifies specific ways AI affects understanding • Recognizes some personal tendencies in AI use • Articulates basic learning strategies with AI |
| Proficient | Strategic awareness | • Analyzes impact of AI on engineering problem-solving approaches • Deliberately varies AI use based on learning needs • Balances AI assistance with independent work • Articulates personal AI learning strategies |
| Expert | Sophisticated metacognition | • Develops personalized frameworks for effective AI learning • Precisely articulates how AI transforms engineering thinking • Creates strategies to address potential cognitive dependencies • Continuously adapts approach based on reflective analysis |
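Instructors who record rubric ratings electronically can treat the four levels and five competencies as data, which keeps ratings consistent across assignments. The following is a minimal sketch, assuming a 1-4 numeric mapping of the levels; the function and variable names are illustrative, not part of any existing tool.

```python
# Hypothetical encoding of the AI literacy rubric: five competencies
# rated on four levels, converted to 1-4 scores for record-keeping.

LEVELS = ["Novice", "Developing", "Proficient", "Expert"]

COMPETENCIES = [
    "AI Tool Operation",
    "Critical Evaluation",
    "Prompt Engineering",
    "Output Enhancement",
    "Meta-Cognitive Awareness",
]

def rate(ratings: dict[str, str]) -> dict[str, int]:
    """Convert level names to 1-4 scores (raises KeyError on gaps)."""
    return {c: LEVELS.index(ratings[c]) + 1 for c in COMPETENCIES}

example = rate({
    "AI Tool Operation": "Proficient",
    "Critical Evaluation": "Developing",
    "Prompt Engineering": "Proficient",
    "Output Enhancement": "Novice",
    "Meta-Cognitive Awareness": "Developing",
})
print(example)  # {'AI Tool Operation': 3, 'Critical Evaluation': 2, ...}
```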
6. Output Evaluation Framework
This framework provides criteria for evaluating AI-generated outputs in engineering contexts, which can be used both by instructors assessing student work and by students evaluating AI outputs.
Technical Accuracy
The correctness of technical content from an engineering perspective
Assessment Questions:
- Are calculations, equations, and numerical values correct?
- Are physical principles and scientific concepts accurately represented?
- Is the technical terminology used appropriately and consistently?
- Are engineering standards and codes correctly applied?
- Are limitations and assumptions clearly stated and valid?
Methodological Soundness
The appropriateness of methods and approaches used
Assessment Questions:
- Is the problem-solving approach appropriate for the engineering context?
- Are analysis methods aligned with accepted engineering practice?
- Is the level of approximation appropriate for the problem context?
- Does the approach consider relevant constraints and requirements?
- Are alternative methods considered when appropriate?
Communicative Clarity
The effectiveness of communication for the intended audience
Assessment Questions:
- Is the information organized logically for engineering communication?
- Are visual elements (diagrams, graphs) clear and properly labeled?
- Is technical language balanced with accessibility for the intended audience?
- Are complex concepts explained with appropriate supporting details?
- Does the communication follow domain-specific conventions?
Critical Analysis
The depth of critical thinking demonstrated
Assessment Questions:
- Are limitations of the analysis explicitly acknowledged?
- Are assumptions clearly stated and justified?
- Is uncertainty quantified or discussed when appropriate?
- Are multiple perspectives or approaches considered?
- Is there evidence of verification or validation of results?
Engineering Judgment
The application of professional judgment to technical decisions
Assessment Questions:
- Do safety considerations appropriately influence the analysis?
- Are trade-offs between competing objectives explicitly addressed?
- Is the solution contextually appropriate for real-world constraints?
- Does the analysis consider broader impacts (environmental, social, etc.)?
- Are engineering ethics and professional standards upheld?
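When many submissions are evaluated against these criteria, it can help to encode each criterion with its assessment questions as data and tally the answers. The sketch below is hypothetical (the `Criterion` class and `summarize` helper are invented for illustration), and shows only two criteria with abbreviated question lists.

```python
# Illustrative structure for the output evaluation framework: each
# criterion carries its assessment questions; an evaluation is a
# list of yes/no answers per criterion. All identifiers hypothetical.

from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    questions: list[str]

CRITERIA = [
    Criterion("Technical Accuracy", [
        "Are calculations, equations, and numerical values correct?",
        "Are physical principles accurately represented?",
    ]),
    Criterion("Methodological Soundness", [
        "Is the problem-solving approach appropriate for the context?",
    ]),
    # ... remaining criteria follow the framework above ...
]

def summarize(answers: dict[str, list[bool]]) -> dict[str, str]:
    """Report, per criterion, how many questions were answered 'yes'."""
    return {
        c.name: f"{sum(answers.get(c.name, []))}/{len(c.questions)} met"
        for c in CRITERIA
    }

print(summarize({"Technical Accuracy": [True, False]}))
```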
7. Documentation and Process Assessment
Effective assessment of AI use requires examining not just final products but the process students followed. This section provides frameworks for assessing AI documentation and engineering processes.
AI Consultation Documentation Rubric
| Criteria | Beginning | Developing | Accomplished | Exemplary |
|----------|-----------|------------|--------------|-----------|
| Prompt Quality | Basic prompts with minimal context or specificity | Structured prompts with some context and technical parameters | Well-crafted prompts with appropriate context, parameters, and specifications | Sophisticated prompts showing strategic thinking, anticipation of AI limitations, and iterative refinement |
| Verification Strategy | Minimal or no verification of AI outputs | Basic verification of key points using reference materials | Systematic verification using multiple sources and engineering principles | Comprehensive verification strategy with multiple methods and critical analysis of boundary conditions |
| Critical Analysis | Accepts AI output with minimal evaluation | Identifies obvious errors or limitations in AI output | Thoroughly analyzes AI output for technical accuracy, assumptions, and appropriateness | Provides sophisticated analysis of conceptual frameworks, implicit assumptions, and contextual appropriateness |
| Enhancement Approach | Minimal modification of AI output | Makes necessary corrections and some improvements to AI output | Significantly enhances AI output with additional analysis and engineering judgment | Transforms AI output through substantial additions, alternative approaches, and innovative extensions |
| Reflective Insight | Limited reflection on AI's role or impact | Basic reflection on how AI affected the engineering process | Thoughtful analysis of AI's influence on problem-solving approach and learning | Sophisticated metacognitive analysis of how AI interaction shaped understanding and engineering judgment |
Engineering Process Portfolio Assessment
For extended projects, a portfolio approach can assess the engineering process with AI integration:
Portfolio Components:
- Problem definition and requirements analysis
- Initial approach planning (with rationale for AI use decisions)
- AI interaction documentation (prompts, outputs, evaluations)
- Verification and validation evidence
- Iteration history showing development of solutions
- Final solution with critical evaluation
- Reflective analysis of engineering process
Assessment Dimensions:
- Process rigor and documentation quality
- Engineering judgment in AI integration
- Evidence-based decision making
- Iterative improvement
- Technical communication effectiveness
8. Student Self-Assessment Tools
Developing students' ability to self-assess their AI use is critical for long-term professional development. These tools help students reflect on and improve their AI practices.
AI Interaction Self-Assessment Questionnaire
This questionnaire can be provided to students after AI-integrated assignments:
AI Interaction Self-Assessment
For the assignment you just completed with AI assistance, please reflect on the following:
1. PREPARATION
a. How clearly did I define the engineering problem before using AI? (1-5 scale)
b. What background knowledge did I bring to the interaction?
c. What specific goals did I have for the AI interaction?
2. PROMPTING
a. How effective were my initial prompts? (1-5 scale)
b. How did my prompts evolve during the interaction?
c. What specific elements made my prompts effective or ineffective?
3. EVALUATION
a. What methods did I use to verify the AI's output?
b. What errors or limitations did I identify in the AI's responses?
c. How confident am I in distinguishing accurate from inaccurate information? (1-5 scale)
4. ENHANCEMENT
a. How significantly did I modify or extend the AI output? (1-5 scale)
b. What value did I add beyond what the AI provided?
c. What engineering judgment did I apply to improve the solution?
5. LEARNING
a. What did I learn about the subject matter through this AI interaction?
b. What did I learn about effective AI use for engineering tasks?
c. How might I approach a similar task differently next time?
AI Literacy Progress Tracker
This tool helps students track their development of AI literacy skills over time:
| Competency Area | Beginning of Term | Mid-Term | End of Term | Evidence of Growth |
|-----------------|-------------------|----------|-------------|--------------------|
| AI Tool Operation | Self-rating (1-4) | Self-rating (1-4) | Self-rating (1-4) | Specific examples demonstrating growth |
| Critical Evaluation | Self-rating (1-4) | Self-rating (1-4) | Self-rating (1-4) | Specific examples demonstrating growth |
| Prompt Engineering | Self-rating (1-4) | Self-rating (1-4) | Self-rating (1-4) | Specific examples demonstrating growth |
| Output Enhancement | Self-rating (1-4) | Self-rating (1-4) | Self-rating (1-4) | Specific examples demonstrating growth |
| Meta-Cognitive Awareness | Self-rating (1-4) | Self-rating (1-4) | Self-rating (1-4) | Specific examples demonstrating growth |
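The tracker also lends itself to a quick growth calculation across the term. A minimal sketch, assuming 1-4 ratings recorded at the three checkpoints; the identifiers and example ratings are illustrative only.

```python
# Hypothetical helper for the progress tracker: compute each
# competency's change from the beginning-of-term baseline.

CHECKPOINTS = ["Beginning of Term", "Mid-Term", "End of Term"]

def growth(tracker: dict[str, list[int]]) -> dict[str, int]:
    """Return end-of-term rating minus baseline for each competency."""
    return {skill: ratings[-1] - ratings[0]
            for skill, ratings in tracker.items()}

student = {
    "AI Tool Operation":        [2, 3, 4],
    "Critical Evaluation":      [1, 2, 3],
    "Prompt Engineering":       [2, 2, 3],
    "Output Enhancement":       [1, 2, 2],
    "Meta-Cognitive Awareness": [1, 1, 3],
}
print(growth(student))  # {'AI Tool Operation': 2, 'Critical Evaluation': 2, ...}
```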
Peer Assessment Protocol
Students can use this protocol to provide feedback on each other's AI-integrated work:
Peer Assessment for AI-Enhanced Engineering Work
Review your peer's work and provide constructive feedback:
1. STRENGTHS
• What aspects of AI use were particularly effective?
• Which parts of the work show strong engineering judgment?
• How well did your peer enhance or extend the AI output?
2. AREAS FOR DEVELOPMENT
• What verification methods might strengthen the work?
• How might the prompting strategy be improved?
• What additional engineering analysis would improve the solution?
3. QUESTIONS TO CONSIDER
• Have you considered alternative approaches to [specific aspect]?
• How did you verify [specific calculation or claim]?
• What was your reasoning for [specific decision]?
4. PROFESSIONAL PRACTICE CONNECTION
• How does this approach compare to professional engineering practice?
• What aspects of this work demonstrate professional engineering judgment?
9. Academic Integrity Considerations
Maintaining academic integrity while incorporating AI requires rethinking traditional approaches and developing new frameworks.
Shifting from Detection to Documentation
Rather than focusing primarily on detecting unauthorized AI use, shift toward:
- Clear documentation requirements for AI interactions
- Transparent guidelines about appropriate AI use contexts
- Process-oriented assessment that values critical thinking
Documentation-Based Integrity Framework
| Type of Assignment | Documentation Requirement | Assessment Focus |
|--------------------|---------------------------|------------------|
| AI-Restricted | Statement of compliance with AI restrictions | Independent demonstration of core competencies |
| AI-Allowed | Complete AI consultation documentation | Critical evaluation and enhancement of AI outputs |
| AI-Enhanced | Comprehensive workflow documentation | Strategic AI integration and engineering judgment |
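Courses that manage assignments in a learning management system could encode this framework so that each assignment type automatically carries its documentation requirement. The sketch below is one hypothetical encoding; the enum and function names are invented for illustration.

```python
# Hypothetical encoding of the documentation-based integrity framework,
# e.g. for attaching the right documentation form to an assignment.

from enum import Enum

class AIPolicy(Enum):
    RESTRICTED = "AI-Restricted"
    ALLOWED = "AI-Allowed"
    ENHANCED = "AI-Enhanced"

REQUIREMENTS = {
    AIPolicy.RESTRICTED: "Statement of compliance with AI restrictions",
    AIPolicy.ALLOWED: "Complete AI consultation documentation",
    AIPolicy.ENHANCED: "Comprehensive workflow documentation",
}

def documentation_for(policy: AIPolicy) -> str:
    """Return the documentation requirement for an assignment type."""
    return REQUIREMENTS[policy]

print(documentation_for(AIPolicy.ALLOWED))
```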
Integrity Policy Language Template
Academic Integrity and AI Tools
This course adopts a documentation-based approach to academic integrity with AI tools:
1. For all assignments, you must clearly document any AI assistance according to the course documentation guidelines.
2. Undocumented use of AI tools is considered an academic integrity violation equivalent to plagiarism.
3. For AI-restricted assignments or components, use of AI tools is not permitted. These components are designed to assess your independent mastery of fundamental concepts.
4. For AI-allowed or AI-enhanced assignments, appropriate use with documentation is encouraged as part of developing professional engineering skills.
The focus is not on restricting tool use, but on developing professional responsibility, transparency, and critical thinking.
10. Implementation Strategies
Implementing effective assessment in AI-integrated courses requires careful planning and clear communication.
Phased Implementation Approach
Phase 1: Preparation
- Audit existing assessments for AI vulnerability
- Identify core competencies requiring direct assessment
- Develop initial documentation requirements
- Create clear guidelines for students
Phase 2: Pilot Implementation
- Implement AI-aware assessments in selected assignments
- Collect feedback on documentation process
- Evaluate effectiveness of assessment methods
- Refine rubrics based on initial results
Phase 3: Comprehensive Implementation
- Integrate AI-aware assessment throughout the course
- Develop scaffolded approach to building AI literacy
- Implement portfolio or continuous assessment methods
- Connect assessment to professional practice scenarios
Communication Strategies
Effective communication with students about assessment in an AI environment is essential:
- Explicit Rationale: Clearly explain why AI-aware assessment methods are being used and how they connect to professional practice
- Process Transparency: Make assessment criteria and documentation requirements explicit from the beginning
- Focus on Learning: Emphasize that the goal is effective learning, not restricting tool use
- Professional Relevance: Connect assessment approaches to emerging professional practices in engineering fields
- Feedback Loops: Create opportunities for students to provide input on assessment methods
11. Common Assessment Pitfalls
Be aware of these common challenges when implementing AI-aware assessment, along with strategies to address them.
Pitfall 1: Over-reliance on Detection Tools
Problem: Focusing predominantly on detecting AI use rather than evaluating the quality of work and learning.
Solution: Shift to process documentation and critical thinking assessment. Design "AI-proof" assessments that focus on higher-order thinking and application rather than attempting to prevent AI use.
Pitfall 2: Binary AI/No-AI Thinking
Problem: Treating AI use as an all-or-nothing proposition rather than recognizing the spectrum of appropriate uses.
Solution: Develop nuanced guidelines about appropriate AI use in different contexts. Create assignment components with different AI policies rather than blanket restrictions.
Pitfall 3: Inequitable Assessment Impact
Problem: Assessment approaches that inadvertently advantage students with better AI access or prior AI experience.
Solution: Provide in-class time for AI tool use, offer equitable access to AI tools, provide baseline AI training, and offer multiple assessment pathways.
Pitfall 4: Excessive Documentation Burden
Problem: Creating documentation requirements so onerous that they detract from learning objectives.
Solution: Scale documentation requirements to assignment complexity and weight. Focus on quality of reflection rather than quantity of documentation. Provide templates to streamline the process.
Pitfall 5: Failing to Adapt Rubrics
Problem: Using traditional assessment rubrics that don't account for AI integration skills.
Solution: Develop specific rubric components for AI literacy skills. Weight process and enhancement over final product alone. Include metacognitive assessment components.
Pitfall 6: Assessment Stagnation
Problem: Failing to update assessment strategies as AI capabilities evolve.
Solution: Regularly review and update assessment approaches. Monitor AI capabilities and adjust assessment complexity accordingly. Form faculty learning communities around evolving practices.
Pitfall 7: Missing Learning Opportunities
Problem: Treating AI-integrated assessment only as evaluation rather than as a learning opportunity.
Solution: Use assessment debrief sessions to discuss effective AI strategies. Provide model examples of effective AI use. Allow revision after feedback on AI integration approach.
12. Assessment Examples by Discipline
These examples demonstrate how to implement AI-aware assessment across different engineering disciplines.
Mechanical Engineering: Thermodynamics
Traditional Assessment: Exam with thermodynamic cycle analysis problems
AI-Aware Assessment Alternative:
- Part 1 (AI-Restricted): In-class fundamental concept questions and basic cycle calculations
- Part 2 (AI-Enhanced): Take-home comprehensive cycle optimization with required documentation:
- Initial system analysis and approach planning
- AI consultation with prompt documentation
- Verification calculations for critical parameters
- Enhancement of AI solution with engineering judgment
- Reflection on how AI affected problem-solving approach
Electrical Engineering: Circuit Analysis
Traditional Assessment: Circuit design and analysis problem sets
AI-Aware Assessment Alternative:
- Comparative Analysis Assignment:
- Solve assigned circuit problems using traditional methods
- Solve the same problems using AI assistance
- Create analysis comparing approaches, addressing:
- Efficiency of solution methods
- Insight gained from each approach
- Errors or limitations in AI-generated solutions
- How AI tools could be improved for circuit analysis
- When each approach is most appropriate
Civil Engineering: Structural Analysis
Traditional Assessment: Beam and truss analysis projects
AI-Aware Assessment Alternative:
- Structural Analysis Portfolio:
- Component 1: Manual calculations of basic structural elements (AI-restricted)
- Component 2: AI-assisted analysis of complex structures with documentation
- Component 3: Critical comparison of computer software, AI, and manual approaches
- Component 4: Novel structural challenge requiring synthesis of approaches
- Component 5: Reflection on workflow efficiency and engineering judgment
Computer Science: Algorithm Design
Traditional Assessment: Algorithm implementation assignments
AI-Aware Assessment Alternative:
- Algorithm Enhancement Project:
- Obtain initial algorithm implementation using AI
- Document prompt engineering process and iterations
- Critically evaluate AI-generated code for efficiency and correctness
- Enhance the implementation with optimizations beyond AI suggestions
- Provide test cases demonstrating algorithm correctness
- Analyze time and space complexity with verification
Chemical Engineering: Process Design
Traditional Assessment: Process flow calculations and design report
AI-Aware Assessment Alternative:
- Layered Process Design Assessment:
- Layer 1: Core manual calculations of critical parameters (AI-restricted)
- Layer 2: AI-assisted process flow development with documentation
- Layer 3: Safety and optimization analysis with AI consultation
- Layer 4: Comparative analysis of design alternatives using multiple approaches
- Layer 5: Metacognitive reflection on engineering decision-making process
13. Ready-to-Use Assessment Templates
These templates can be adapted for various engineering courses and assessment needs.
Template 1: AI Documentation Form
AI CONSULTATION DOCUMENTATION FORM
Student Name: _____________________ Course: _____________________
Assignment: _____________________ Date: _____________________
PART 1: PREPARATION
• Engineering problem/task I used AI for: _____________________
• My initial approach/plan: _____________________
• Specific goals for AI consultation: _____________________
PART 2: INTERACTION DOCUMENTATION
• AI tool(s) used: _____________________
• Prompt 1: _____________________
• Response 1 (summarize or attach): _____________________
• Evaluation of Response 1: _____________________
• Prompt 2 (if applicable): _____________________
• Response 2 (summarize or attach): _____________________
• Evaluation of Response 2: _____________________
• [Continue as needed]
PART 3: VERIFICATION & ENHANCEMENT
• Methods used to verify AI information: _____________________
• Errors/limitations identified: _____________________
• How I enhanced or modified the AI output: _____________________
• Engineering judgment applied: _____________________
PART 4: REFLECTION
• How AI affected my approach to this problem: _____________________
• What I learned about the subject through this interaction: _____________________
• How my AI use strategy could improve next time: _____________________
Student Signature: _____________________ Date: _____________________
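If the form is collected electronically rather than on paper, it maps cleanly onto a structured record. The dataclass sketch below is one possible encoding under that assumption; the field names mirror the form but are hypothetical, and the filled-in values are placeholders.

```python
# Hypothetical structured version of the AI Consultation Documentation
# Form, for electronic submission. Field names mirror the paper form.

from dataclasses import dataclass, field

@dataclass
class PromptExchange:
    prompt: str
    response_summary: str
    evaluation: str

@dataclass
class AIConsultationRecord:
    student: str
    course: str
    assignment: str
    task_description: str        # Part 1: problem/task AI was used for
    goals: str                   # Part 1: goals for the consultation
    tools_used: list[str]        # Part 2: AI tool(s) used
    exchanges: list[PromptExchange] = field(default_factory=list)
    verification_methods: str = ""   # Part 3
    errors_identified: str = ""
    enhancements: str = ""
    reflection: str = ""             # Part 4

record = AIConsultationRecord(
    student="A. Student", course="ME 301", assignment="HW 4",
    task_description="Rankine cycle efficiency estimate",
    goals="Check my hand calculation against an AI-generated one",
    tools_used=["(any course-approved tool)"],
)
record.exchanges.append(PromptExchange(
    prompt="Estimate thermal efficiency for ...",
    response_summary="Gave efficiency formula and a worked value",
    evaluation="Formula correct; recomputed the value by hand to verify",
))
```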
Template 2: Comparative Analysis Assignment
COMPARATIVE ANALYSIS ASSIGNMENT
In this assignment, you will solve an engineering problem using both traditional methods and AI assistance, then analyze the differences between approaches.
PART 1: TRADITIONAL APPROACH
• Solve the attached problem using traditional engineering methods.
• Document your solution process step-by-step.
• Note any challenges encountered and how you addressed them.
PART 2: AI-ASSISTED APPROACH
• Solve the same problem using AI assistance.
• Document your AI interaction using the course documentation form.
• Include your prompts, the AI responses, and your evaluation.
PART 3: COMPARATIVE ANALYSIS (2-3 pages)
Compare and contrast the two approaches, addressing:
1. ACCURACY & CORRECTNESS
• Were both approaches technically correct?
• Did either approach make simplifying assumptions?
• Were there differences in precision or detail?
2. EFFICIENCY & PROCESS
• Which approach was more time-efficient?
• Which approach provided better insight into the underlying concepts?
• How did the problem-solving process differ between approaches?
3. STRENGTHS & LIMITATIONS
• What were the strengths of each approach?
• What were the limitations of each approach?
• In what situations would each approach be preferable?
4. LEARNING VALUE
• What did you learn from each approach?
• How did the comparative exercise deepen your understanding?
• How might this experience inform your future problem-solving strategies?
GRADING CRITERIA:
• Traditional solution correctness (20%)
• AI-assisted solution and documentation quality (20%)
• Depth of comparative analysis (30%)
• Critical thinking about approaches (20%)
• Reflection on learning and implications (10%)
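Because the criteria are fixed percentages, the final grade reduces to a weighted average. A worked sketch, assuming each component is scored on a 0-100 scale; the example scores are invented.

```python
# Worked example of the weighted grading criteria above.
# Weights come from the assignment; the component scores are made up.

WEIGHTS = {
    "Traditional solution correctness": 0.20,
    "AI-assisted solution and documentation quality": 0.20,
    "Depth of comparative analysis": 0.30,
    "Critical thinking about approaches": 0.20,
    "Reflection on learning and implications": 0.10,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights total 100%

def final_grade(scores: dict[str, float]) -> float:
    """Weighted average of per-criterion scores on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

print(final_grade({
    "Traditional solution correctness": 90,
    "AI-assisted solution and documentation quality": 85,
    "Depth of comparative analysis": 78,
    "Critical thinking about approaches": 88,
    "Reflection on learning and implications": 95,
}))  # -> 85.5
```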
Template 3: Layered Assessment Design
LAYERED ASSESSMENT MODEL
This assignment uses a layered approach to assess both fundamental skills and AI-enhanced capabilities.
LAYER 1: CORE CONCEPTS (AI-RESTRICTED)
• In-class assessment of fundamental principles
• Short problems focused on key concepts
• No AI tools permitted; calculators only
LAYER 2: APPLIED ANALYSIS (AI-ALLOWED WITH DOCUMENTATION)
• Take-home component with more complex problems
• AI use permitted with complete documentation
• Focus on critical evaluation and enhancement of AI outputs
LAYER 3: SYNTHESIS & REFLECTION (AI-ENHANCED)
• Integration of concepts into comprehensive solution
• Strategic use of AI encouraged with documentation
• Reflection on how AI affected approach and understanding
SUBMISSION REQUIREMENTS:
• Layer 1: Completed in-class worksheet
• Layer 2: Solutions with AI documentation form
• Layer 3: Final report with integrated solution and reflection
EVALUATION CRITERIA:
• Layer 1 (30%): Accuracy, conceptual understanding
• Layer 2 (40%): Analysis quality, AI critical evaluation, enhancement
• Layer 3 (30%): Integration, reflection depth, professional communication
Template 4: AI Literacy Development Portfolio
AI LITERACY DEVELOPMENT PORTFOLIO
This semester-long portfolio assessment will document your development of AI literacy skills in engineering contexts.
PORTFOLIO COMPONENTS:
1. BASELINE ASSESSMENT (Week 1-2)
• Initial AI interaction documentation
• Self-assessment of starting AI literacy
• Goals for AI skill development
2. GUIDED PRACTICE EVIDENCE (Weeks 3-8)
• Documentation from 3 structured AI interactions
• Analysis of prompt effectiveness
• Verification strategies applied
• Enhancement of AI outputs
3. INDEPENDENT APPLICATION (Weeks 9-12)
• Documentation from 2 self-directed AI applications
• Strategic approach to AI integration
• Critical evaluation of AI contributions
• Enhancement and professional judgment
4. SYNTHESIS & REFLECTION (Weeks 13-15)
• Comprehensive reflection on AI literacy development
• Before/after examples demonstrating growth
• Personal framework for effective AI use in engineering
• Professional ethics statement on AI use
EVALUATION CRITERIA:
• Documentation quality and completeness (20%)
• Progressive development of AI literacy skills (30%)
• Critical thinking and engineering judgment (25%)
• Metacognitive awareness and reflection depth (25%)
Specific rubrics for each component will be provided.
14. Ensuring Assessment Equity
Equitable assessment practices are essential when incorporating AI into engineering education. These strategies help ensure all students have fair opportunities to demonstrate their learning.
Access Equity Strategies
1. Provide In-Class AI Access Time
- Schedule lab sessions where all students have equal access to AI tools
- Create assignments that can be completed during these sessions
- Ensure technical support is available during these sessions
2. Tiered Tool Requirements
- Design assessments that work with free versions of AI tools
- When premium features are beneficial, provide alternatives or access paths
- Consider institutional licenses for key AI tools
3. Technical Infrastructure Support
- Provide clear guides for accessing AI tools on various devices
- Create backup submission methods for technical failures
- Establish support channels for technical troubleshooting
Proficiency Equity Strategies
1. Baseline Training for All Students
- Provide foundational AI literacy training before assessments
- Create scaffolded practice opportunities
- Develop reference guides for essential AI operations
2. Varied Entry Points
- Design assignments with multiple starting points based on AI proficiency
- Provide prompt templates for less experienced students
- Create advanced challenges for more experienced students
3. Emphasize Growth Over Prior Experience
- Assess improvement from individual baselines
- Value critical thinking over technical sophistication
- Create opportunities to revise and improve based on feedback
Cultural and Contextual Equity Strategies
1. Diverse Engineering Contexts
- Include problems from various cultural and geographical contexts
- Acknowledge how AI training data may reflect cultural biases
- Encourage critical evaluation of AI outputs for cultural assumptions
2. Multilingual Considerations
- Provide guidance for non-native English speakers using AI tools trained primarily on English text
- Allow documentation of language barriers encountered
- Consider translation resources when appropriate
3. Disability Accommodations
- Ensure AI tools used are compatible with assistive technologies
- Provide alternative assessment formats when needed
- Consider accessibility in documentation requirements
Equity-Centered Assessment Design Checklist
- [ ] All required AI tools are accessible to all students regardless of financial resources
- [ ] Assessment timing accommodates varied access to technology
- [ ] Baseline AI literacy training is provided before assessment
- [ ] Multiple paths to success are available for different learning approaches
- [ ] Documentation requirements are clear and scaffolded appropriately
- [ ] Cultural and contextual diversity is represented in assessment problems
- [ ] Language requirements consider multilingual student populations
- [ ] Accessibility needs are addressed in assessment design
- [ ] Feedback mechanisms emphasize growth and development
- [ ] Prior AI experience does not create insurmountable advantages
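Teams reviewing an assessment design could run the checklist as a short script that flags unmet items before an assessment is released. A minimal sketch with paraphrased checklist items; all names are illustrative.

```python
# Illustrative helper: report which equity checklist items remain
# unmet during an assessment design review. Items are paraphrased.

CHECKLIST = [
    "Required AI tools accessible regardless of financial resources",
    "Assessment timing accommodates varied access to technology",
    "Baseline AI literacy training provided before assessment",
    "Multiple paths to success for different learning approaches",
    "Documentation requirements clear and appropriately scaffolded",
    "Cultural and contextual diversity represented in problems",
    "Language requirements consider multilingual students",
    "Accessibility needs addressed in assessment design",
    "Feedback mechanisms emphasize growth and development",
    "Prior AI experience does not create insurmountable advantages",
]

def unmet(checked: set[str]) -> list[str]:
    """Return checklist items not yet satisfied."""
    return [item for item in CHECKLIST if item not in checked]

for item in unmet({CHECKLIST[0], CHECKLIST[2]}):
    print("TODO:", item)
```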
This assessment framework was developed as part of the IDEEAS Lab materials for integrating generative AI into engineering education. For questions or additional resources, please contact [[email protected]].