This application generates learning objectives and multiple-choice questions for AI course materials based on uploaded content files. It uses OpenAI's language models to create high-quality educational assessments that adhere to specified quality standards.
## Features
- Upload course materials in various formats (.vtt, .srt, .ipynb)
- Generate customizable number of learning objectives
- Create multiple-choice questions based on learning objectives
- Evaluate question quality using an LLM judge
- Save assessments to JSON format
- Track source references for each learning objective and question
## Installation
- Clone this repository
- Install the required dependencies:
  ```bash
  pip install -r requirements.txt
  ```
- Create a `.env` file in the project root with your OpenAI API key:
  ```
  OPENAI_API_KEY=your_api_key_here
  ```
## Usage
- Run the application:
  ```bash
  python app.py
  ```
- Open the Gradio interface in your web browser (typically at http://127.0.0.1:7860)
- Upload your course materials (.vtt, .srt, .ipynb files)
- Specify the number of learning objectives to generate
- Select the OpenAI model to use
- Generate learning objectives
- Review and provide feedback on the generated objectives
- Generate multiple-choice questions based on the approved objectives
- Review the generated questions and their quality assessments
- The final assessment will be saved as `assessment.json` in the project directory
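The exact schema of the saved file depends on the Pydantic models in `models/`; a hypothetical `assessment.json` might look like:

```json
{
  "learning_objectives": [
    {"id": 1, "text": "Explain how gradient descent updates model weights", "source_files": ["lecture1.vtt"]}
  ],
  "questions": [
    {
      "learning_objective_id": 1,
      "question": "Which statement best describes gradient descent?",
      "options": [
        {"text": "It follows the negative gradient of the loss", "is_correct": true, "feedback": "Correct: updates move against the gradient."},
        {"text": "It always finds the global minimum", "is_correct": false, "feedback": "Not quite: convergence to a global minimum is not guaranteed."}
      ]
    }
  ]
}
```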
## Project Structure
- `app.py`: Entry point for the application
- `models/`: Pydantic data models
  - `__init__.py`: Exports all models
  - `learning_objectives.py`: Learning objective data models
  - `questions.py`: Question and option data models
  - `assessment.py`: Assessment data models
- `ui/`: User interface components
  - `__init__.py`: Package initialization
  - `app.py`: Gradio UI implementation
  - `content_processor.py`: Processes uploaded files and extracts content
  - `objective_handlers.py`: Handlers for learning objective generation
  - `question_handlers.py`: Handlers for question generation
  - `feedback_handlers.py`: Handlers for feedback and regeneration
  - `formatting.py`: Formatting utilities for UI display
  - `state.py`: State management for the UI
- `quiz_generator/`: Quiz generation components
  - `__init__.py`: Package initialization
  - `generator.py`: Main `QuizGenerator` class
  - `assessment.py`: Assessment generation logic
  - `question_generation.py`: Question generation logic
  - `question_improvement.py`: Question quality improvement logic
  - `question_ranking.py`: Question ranking and grouping logic
  - `feedback_questions.py`: Feedback-based question generation
- `learning_objective_generator/`: Learning objective generation components
  - `__init__.py`: Package initialization
  - `generator.py`: Main generator class
  - `base_generation.py`: Base generation logic
  - `enhancement.py`: Enhancement logic
  - `grouping_and_ranking.py`: Grouping and ranking logic
- `prompts/`: Prompt templates and components
  - `questions.py`: Question generation prompts
  - `incorrect_answers.py`: Incorrect answer generation prompts
  - `learning_objectives.py`: Learning objective generation prompts
- `obsolete/`: Deprecated files (not used in the current implementation)
- `specs.md`: Project specifications
- `project_flow.md`: Detailed description of the project architecture and workflow
## Requirements
- Python 3.8+
- Gradio 4.19.2+
- Pydantic 2.8.0+
- OpenAI 1.52.0+
- nbformat 5.9.2+
- instructor 1.7.9+
- python-dotenv 1.0.0+
Install dependencies using uv (recommended):

```bash
uv venv -p 3.12
source .venv/bin/activate  # On Windows use: .venv\Scripts\activate
uv pip install -r requirements.txt
```
## Notes
- The application uses XML-style source tags to track which file each piece of content comes from
- Questions are evaluated against quality standards to ensure they meet educational requirements
- Each question includes feedback for both correct and incorrect answers
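The README does not show the exact tag format, so the following is a minimal sketch of what XML-style source tagging could look like; the `<source file="...">` shape is an assumption:

```python
# Hypothetical sketch of source tagging: the tag name and attribute are
# assumptions, not the project's documented format.
def tag_content(filename: str, text: str) -> str:
    """Wrap extracted text in an XML-style tag recording its origin file."""
    return f'<source file="{filename}">\n{text}\n</source>'

# Combine several tagged files into one block of LLM input.
combined = "\n\n".join(
    tag_content(name, text)
    for name, text in [("lecture1.vtt", "Welcome to the course..."),
                       ("lab1.ipynb", "# Training loop example")]
)
print(combined)
```

Because every chunk carries its origin, the model can be asked to cite the source file for each objective or question it generates.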
## Prompt System
The application's prompt system (originally a single `prompts.py`) has been refactored into modular components for better maintainability:
- `GENERAL_QUALITY_STANDARDS`: Overall quality standards for all generated content
- `QUESTION_SPECIFIC_QUALITY_STANDARDS`: Standards specific to question generation
- `CORRECT_ANSWER_SPECIFIC_QUALITY_STANDARDS`: Standards for correct answer options
- `INCORRECT_ANSWER_SPECIFIC_QUALITY_STANDARDS`: Standards for creating plausible incorrect answers
- `EXAMPLE_QUESTIONS`: A collection of high-quality example questions for model guidance
- `MULTIPLE_CHOICE_STANDARDS`: Standards specific to the multiple-choice question format
- `BLOOMS_TAXONOMY_LEVELS`: Educational taxonomy for different levels of learning
- `ANSWER_FEEDBACK_QUALITY_STANDARDS`: Standards for providing helpful feedback
- `LEARNING_OBJECTIVES_PROMPT`: Template for generating learning objectives
- `LEARNING_OBJECTIVE_EXAMPLES`: Examples of well-formulated learning objectives
These components are imported and combined in the `quiz_generator/` modules to create comprehensive prompts for different generation tasks. This modular approach makes it easier to:
- Update individual aspects of the prompt without affecting others
- Reuse common standards across different generation tasks
- Maintain consistent quality across all generated content
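The assembly pattern can be sketched as plain string composition; the component texts below are placeholders, not the project's actual prompt strings:

```python
# Sketch of modular prompt assembly. The constant contents are illustrative
# placeholders; only the component names come from the project.
GENERAL_QUALITY_STANDARDS = "Questions must be clear, unambiguous, and self-contained."
MULTIPLE_CHOICE_STANDARDS = "Provide exactly one correct option and three plausible distractors."
BLOOMS_TAXONOMY_LEVELS = "Target the 'apply' and 'analyze' levels where possible."

def build_question_prompt(objective: str, content: str) -> str:
    """Combine shared standards with task-specific context into one prompt."""
    return "\n\n".join([
        GENERAL_QUALITY_STANDARDS,
        MULTIPLE_CHOICE_STANDARDS,
        BLOOMS_TAXONOMY_LEVELS,
        f"Learning objective:\n{objective}",
        f"Course content:\n{content}",
    ])
```

Swapping or editing one constant changes every prompt that includes it, which is what makes the components reusable across generation tasks.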
## Project Flow
This section provides a more detailed look at how the various components of the system work together to generate educational assessments.
### Core Components
- Content Processing: Handles ingestion of course materials from different file formats
- Learning Objective Generation: Creates learning objectives from the processed content
- Question Generation: Produces multiple-choice questions for each learning objective
- Quality Assessment: Evaluates the generated questions for quality
- UI Interface: Provides a Gradio-based web interface for user interaction
### `app.py`
- Serves as the entry point for the application
- Loads environment variables (including the OpenAI API key)
- Creates and launches the Gradio UI
### UI Components (`ui/`)
- Creates the Gradio interface for user interaction
- Organizes functionality into tabs:
  - File upload and learning objective generation
  - Question generation
  - Preview and export
- Key components:
  - `app.py`: Creates the Gradio interface and defines the UI layout
  - `objective_handlers.py`: Handles learning objective generation and regeneration
  - `question_handlers.py`: Handles question generation and regeneration
  - `feedback_handlers.py`: Handles user feedback and custom question generation
  - `formatting.py`: Formats quiz data for UI display
  - `state.py`: Manages state between UI components
### Content Processing (`ui/content_processor.py`)
- The `ContentProcessor` class processes different file types:
  - `.vtt` and `.srt` subtitle files
  - `.ipynb` Jupyter notebook files
- For each file, adds XML source tags to track the origin of content
- Returns structured content for further processing
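A stdlib-only sketch of the extraction step is shown below; the real `ContentProcessor` uses `nbformat` and lives in `ui/content_processor.py`, so the function names and parsing details here are illustrative:

```python
import json
import re

# Illustrative extraction helpers (the project's actual ContentProcessor
# class may parse these formats differently).
_TIMESTAMP = re.compile(r"\d{2}:\d{2}:\d{2}[.,]\d{3}\s*-->")

def extract_subtitle_text(raw: str) -> str:
    """Keep only spoken lines from a .vtt/.srt file, dropping cue numbers,
    timestamps, and the WEBVTT header."""
    lines = []
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.isdigit() or line == "WEBVTT" or _TIMESTAMP.search(line):
            continue
        lines.append(line)
    return " ".join(lines)

def extract_notebook_text(raw: str) -> str:
    """Concatenate markdown and code cell sources from a .ipynb JSON file."""
    nb = json.loads(raw)
    return "\n".join("".join(cell["source"]) for cell in nb.get("cells", []))
```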
### Quiz Generation (`quiz_generator/`)
- The `QuizGenerator` class is the central component that:
  - Generates learning objectives from processed content
  - Creates multiple-choice questions for each objective
  - Judges question quality
  - Saves assessments to JSON
### Learning Objective Generation
- Takes processed file contents as input
- Combines the content into a prompt built from the modular components in the `prompts/` package
- Uses OpenAI's API with `instructor` to generate learning objectives
- Returns structured `LearningObjective` objects
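The `instructor` call pattern can be sketched as follows; the model name, prompt wording, and field names are assumptions rather than the project's exact code:

```python
# Sketch of structured LLM output with `instructor` (assumed usage; the
# project's actual prompts, model choice, and field names may differ).
from typing import List
from pydantic import BaseModel

class LearningObjective(BaseModel):
    id: int
    text: str
    source_files: List[str]  # assumed field for source-reference tracking

class ObjectiveList(BaseModel):
    objectives: List[LearningObjective]

def generate_objectives(content: str, n: int, model: str = "gpt-4o") -> List[LearningObjective]:
    """Have instructor parse the LLM response straight into Pydantic objects."""
    import instructor
    from openai import OpenAI

    client = instructor.from_openai(OpenAI())
    result = client.chat.completions.create(
        model=model,
        response_model=ObjectiveList,
        messages=[{"role": "user",
                   "content": f"Write {n} learning objectives for this material:\n{content}"}],
    )
    return result.objectives
```

Using a `response_model` means validation failures surface as Pydantic errors instead of silently malformed JSON.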
### Question Generation
- For each learning objective:
  - Retrieves relevant content from the source files
  - Builds a prompt from the modular components in the `prompts/` package
  - Generates a multiple-choice question with feedback for each option
  - Returns a structured `MultipleChoiceQuestion` object
### Data Models (`models/`)
Defines the data structures used throughout the application:
- `LearningObjective`: Represents a learning objective with ID, text, and source references
- `MultipleChoiceOption`: Represents an answer option with text, correctness flag, and feedback
- `MultipleChoiceQuestion`: Represents a complete question with options, linked to learning objectives
- `RankedMultipleChoiceQuestion`: Extends `MultipleChoiceQuestion` with ranking information
- `GroupedMultipleChoiceQuestion`: Extends `RankedMultipleChoiceQuestion` with grouping information
- `Assessment`: Collection of learning objectives and questions
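A minimal Pydantic sketch of the question models is shown below; the field names are assumptions based on the descriptions above, not the project's exact definitions:

```python
# Illustrative Pydantic sketches of the question data models; field names
# are assumed from the README's descriptions.
from typing import List
from pydantic import BaseModel

class MultipleChoiceOption(BaseModel):
    text: str
    is_correct: bool
    feedback: str  # shown for both correct and incorrect choices

class MultipleChoiceQuestion(BaseModel):
    learning_objective_id: int
    question: str
    options: List[MultipleChoiceOption]

class RankedMultipleChoiceQuestion(MultipleChoiceQuestion):
    rank: int  # assumed ranking field

class GroupedMultipleChoiceQuestion(RankedMultipleChoiceQuestion):
    group: str  # assumed grouping field
```

Inheritance keeps the ranked and grouped variants interchangeable with plain questions wherever only the base fields are needed.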
### Prompt Assembly
The modular prompt components in the `prompts/` directory are imported into the quiz generation modules and assembled into complete prompts as needed:
- Learning Objective Generation:
  - Components like `LEARNING_OBJECTIVES_PROMPT`, `LEARNING_OBJECTIVE_EXAMPLES`, and `BLOOMS_TAXONOMY_LEVELS` are combined with course content
  - This creates a comprehensive prompt that guides the LLM in generating relevant and well-structured learning objectives
- Question Generation:
  - Components like `GENERAL_QUALITY_STANDARDS`, `MULTIPLE_CHOICE_STANDARDS`, `QUESTION_SPECIFIC_QUALITY_STANDARDS`, etc. are combined
  - Along with the learning objective and course content, these form a detailed prompt that ensures high-quality question generation
### Workflow
- User uploads content files (notebooks, subtitles) through the UI
- System processes files and extracts content with source references
- LLM generates learning objectives based on content
- User reviews and approves learning objectives
- System generates multiple-choice questions for each approved objective
- Questions are presented to the user for review and export
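The workflow above can be sketched end to end with stubbed generation steps; every function and field name here is illustrative, and the LLM calls are replaced by canned values to show only the data flow:

```python
# End-to-end sketch of the workflow with stubbed generation steps
# (the real system calls an LLM where the stubs return canned values).
import json
import re

def process_files(files: dict) -> str:
    """Steps 1-2: combine file contents, tagging each with its source."""
    return "\n".join(f'<source file="{name}">{text}</source>'
                     for name, text in files.items())

def generate_objectives(content: str) -> list:
    """Step 3 (stub): would call the LLM; returns one canned objective."""
    source_files = re.findall(r'<source file="([^"]+)">', content)
    return [{"id": 1, "text": "Explain the main concept", "source_files": source_files}]

def generate_questions(objectives: list) -> list:
    """Step 5 (stub): would call the LLM once per approved objective."""
    return [{"learning_objective_id": o["id"],
             "question": "Which statement is true?",
             "options": []} for o in objectives]

files = {"lecture1.vtt": "Welcome to the course...", "lab1.ipynb": "print('hello')"}
content = process_files(files)
objectives = generate_objectives(content)          # step 4 (user review) omitted
assessment = {"learning_objectives": objectives,
              "questions": generate_questions(objectives)}
with open("assessment.json", "w") as f:            # step 6: export
    json.dump(assessment, f, indent=2)
```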
This modular approach makes it easier to maintain, update, and experiment with different prompt components without disrupting the overall system. Any change to the components in the `prompts/` package affects how learning objectives and questions are generated, potentially altering the style, format, and quality of the output.