61 changes: 61 additions & 0 deletions docs/decisions/0020-assessment-criteria-location.rst
20. Where in the codebase should CBE assessment criteria go?
============================================================

Context
-------
Competency Based Education (CBE) requires that the LMS be able to track learners' mastery of competencies by means of assessment criteria. For example, in order to demonstrate that I have mastered the Multiplication competency, I need to have earned 75% or higher on Assignment 1 or Assignment 2. The association of the competency, the threshold, the assignments, and the logical OR operator together makes up the assessment criteria for the competency. Course Authors and Platform Administrators need a way to set up these associations in Studio so that outcomes can be calculated as learners complete their materials. This is an important prerequisite for displaying competency progress dashboards to learners and staff, and for making Open edX the platform of choice for institutions using the CBE model.

Decisions
---------
CBE Assessment Criteria, Student Assessment Criteria Status, and Student Competency Status values should go in the openedx-learning repository. This aligns with the broader architectural goal of refactoring as much code as possible out of the edx-platform repository and into the openedx-learning repository, where it can be designed so that plugin developers can use it easily.

More specifically, all code related to adding Assessment Criteria to Open edX will live in openedx-learning/openedx_learning/apps/assessment_criteria. The exception is a small app in edx-platform to receive grading signals, invoke the openedx-learning evaluation logic to perform calculations, and persist results in openedx-learning. This is a pragmatic integration until grading events move out of edx-platform and into openedx-events; it is acknowledged technical debt to keep grading signal access in edx-platform for now.
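
A minimal sketch of what that shim app's signal receiver could look like, assuming a course-grade signal in edx-platform and a hypothetical ``evaluate_for_user`` entry point in openedx-learning; none of the import paths or function names below are confirmed APIs.

.. code-block:: python

    # handlers.py in a hypothetical edx-platform shim app; all names here are
    # illustrative assumptions rather than existing APIs.
    from django.dispatch import receiver

    # Assumption: edx-platform publishes a course grade signal along these lines;
    # the exact import path and payload would need to be confirmed.
    from openedx.core.djangoapps.signals.signals import COURSE_GRADE_CHANGED

    # Assumption: openedx-learning would expose an evaluation entry point like this.
    from openedx_learning.apps.assessment_criteria.api import evaluate_for_user


    @receiver(COURSE_GRADE_CHANGED)
    def handle_course_grade_changed(sender, user, course_grade, course_key, **kwargs):
        """Re-evaluate assessment criteria whenever a learner's course grade changes."""
        # Delegate evaluation and persistence to openedx-learning so this shim stays
        # thin and can be removed once grading events move to openedx-events.
        evaluate_for_user(user_id=user.id, course_key=str(course_key))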

This keeps a single cohesive Django app for authoring the criteria and for storing learner status derived from those criteria, which reduces cross-app dependencies and simplifies migrations and APIs. It also keeps Open edX-specific models (users, course identifiers, LMS/Studio workflows) out of the standalone ``openedx_tagging`` package and avoids forcing the authoring app to depend on learner runtime data. The tradeoff is that authoring and runtime concerns live in the same app; if learner status needs to scale differently or be owned separately in the future, a split into a dedicated status app can be revisited. Alternatives that externalize runtime status to analytics/services or split repos introduce operational and coordination overhead that is not justified at this stage.

Rejected Alternatives
---------------------
1. edx-platform repository
- Pros: This is where all data currently associated with students is stored, so it would match the existing pattern and reduce integration work for the LMS.
- Cons: The intention is to move core learning concepts out of edx-platform (see `0001-purpose-of-this-repo.rst <0001-purpose-of-this-repo.rst>`_), and keeping it there makes reuse and pluggability harder.
2. All code related to adding Assessment Criteria to Open edX goes in openedx-learning/openedx_learning/apps/authoring/assessment_criteria
- Pros:
- Tagging and assessment criteria are part of content authoring workflows, as is all of the other code in this directory.
- All other elements using the Publishable Framework are in this directory.
- Cons:
- We want each package of code to be independent, and this would separate assessment criteria from the tags that they are dependent on.
- This feature also includes learner status and runtime evaluation, which do not fit cleanly in the authoring app.
- The learner status models in this feature would have a ForeignKey to settings.AUTH_USER_MODEL, which is a runtime/learner concern. If those models lived under the authoring app, then the authoring app would have to import and depend on the user model, forcing an authoring-only package to carry learner/runtime dependencies. This may create unwanted coupling.
3. New Assessment Criteria Content tables will go in openedx-learning/openedx_learning/openedx_tagging/core/assessment_criteria. New Student Status tables will go in openedx-learning/student_status.
- Pros:
- Keeps assessment criteria in the same package as the tags that they are dependent on.
- Cons:
- ``openedx_tagging`` is intended to be a standalone library without Open edX-specific dependencies (see `0007-tagging-app.rst <0007-tagging-app.rst>`_); adding assessment criteria there would violate that boundary.
- Splitting Assessment Criteria and Student Statuses into two apps would require cross-app foreign keys (e.g., status rows pointing at criteria/tag rows in another app), migration ordering and dependency declarations to ensure tables exist in the right order, and shared business logic or APIs for computing/updating status that now must live in one app but reference models in the other.
4. Split assessment criteria and learner statuses into two apps inside openedx-learning/openedx_learning/apps (e.g., assessment_criteria and learner_status)
- Pros:
- Clear separation between authoring configuration and computed learner state.
- Could allow different storage or scaling strategies for status data.
- Cons:
- Still introduces cross-app dependency and coordination for a single feature set.
- May be premature for the POC; adds overhead without proven need.
5. Store learner status in a separate service
- Pros:
- Scales independently and avoids write-heavy tables in the core app database.
- Could potentially reuse existing infrastructure for grades.
- Cons:
- Introduces eventual consistency and more integration complexity for LMS/Studio views.
- Requires additional infrastructure and operational ownership.
6. Split authoring and runtime into separate repos/packages
- Pros:
- Clear ownership boundaries and independent release cycles.
- Cons:
- Adds packaging and versioning overhead for a tightly coupled domain.
- Increases coordination cost for migrations and API changes.
7. Migrate grading signals to openedx-events now and have openedx-learning consume events directly
- Pros:
- Aligns with the long-term direction of moving events out of edx-platform.
- Avoids a shim app in edx-platform and reduces tech debt.
- Cons:
- Requires cross-repo coordination and work beyond the current scope.
- Depends on changes to openedx-events that are not yet scheduled or ready.
47 changes: 47 additions & 0 deletions docs/decisions/0021-assessment-criteria-versioning.rst
21. How should versioning be handled for CBE assessment criteria?
=================================================================

Context
-------
Course Authors and/or Platform Administrators will be entering the assessment criteria rules in Studio that learners are required to meet in order to demonstrate competencies. Depending on the institution, these Course Authors or Platform Administrators may have a variety of job titles, including Instructional Designer, Curriculum Designer, Instructor, LMS Administrator, Faculty, or other Staff.

Typically, only one person would be responsible for entering assessment criteria rules in Studio for each course, though this person may change over time. However, entire programs could have many different Course Authors or Platform Administrators with this responsibility.

Typically, institutions and instructional designers do not change the mastery requirements (assessment criteria) for their competencies frequently over time. However, the ability to do historical audit logging of changes within Studio can be a valuable feature to those who have mistakenly made changes and want to revert or those who want to experiment with new approaches.

Currently, Open edX always displays the latest edited version of content in the Studio UI and always shows the latest published version of content in the LMS UI, despite having more robust version tracking on the backend (Publishable Entities). Publishable Entities for Libraries is currently inefficient for large nested structures because all children are copied any time an update is made to a parent.

Authoring data (criteria definitions) and runtime learner data (status) have different governance needs: the former is long-lived and typically non-PII, while the latter is user-specific, can be large (learners x criteria/competencies x time), and may require stricter retention and access controls. These differing lifecycles can make deep coupling of authoring and runtime data harder to manage at scale. Performance is also a consideration: computing or resolving versioned criteria for large courses could add overhead in Studio authoring screens or LMS views.

Decision
--------
Defer assessment criteria versioning for the initial implementation. Store only the latest authored criteria and expose the latest published state in the LMS, consistent with current Studio/LMS behavior. This keeps the initial implementation lightweight and avoids the publishable framework's known inefficiencies for large nested structures. The tradeoff is that there is no built-in rollback or audit history; adding versioning later will require data migration and careful choices about draft vs published defaults.
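
As a rough illustration of what "store only the latest authored criteria" could mean at the model level, a criteria model would carry only audit timestamps and no version or draft/published rows; the class and field names below are hypothetical.

.. code-block:: python

    from django.db import models


    class AssessmentCriteriaGroup(models.Model):
        """Hypothetical sketch: only the latest authored state is stored."""
        name = models.CharField(max_length=255)
        logic_operator = models.CharField(max_length=3, null=True)  # "AND" or "OR"
        # Audit timestamps only; no version table and no draft/published rows, so
        # rollback and change history are unavailable until versioning is added.
        created = models.DateTimeField(auto_now_add=True)
        modified = models.DateTimeField(auto_now=True)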

Rejected Alternatives
---------------------

1. Each model includes version, status, and audit fields
- Pros:
- Simple and familiar pattern (version + status + created/updated metadata)
- Straightforward queries for the current published state
- Can support rollback by marking an earlier version as published
- Stable identifiers (original_ids) can anchor versions and ease potential future migrations
- Cons:
- Requires custom conventions for versioning across related tables and nested groups
- Lacks shared draft/publish APIs and immutable version objects that other authoring apps can reuse
- Not necessarily consistent with existing patterns in the codebase (though those patterns are themselves not fully consistent).
2. Publishable framework in openedx-learning
- Pros:
- First-class draft/published semantics with immutable historical versions
- Consistent APIs and patterns shared across other authoring apps
- Cons:
- Inefficient for large nested structures because all children are copied for each new parent version
- Requires modeling criteria/groups as publishable entities and wiring Studio/LMS workflows to versioning APIs
- Adds schema and migration complexity for a feature that does not yet require full versioning
3. Append-only audit log table (event history)
- Pros:
- Lightweight way to capture who changed what and when
- Enables basic rollback by replaying or reversing events
- Cons:
- Requires custom tooling to reconstruct past versions
- Does not align with existing publishable versioning patterns
96 changes: 96 additions & 0 deletions docs/decisions/0022-assessment-criteria-model.rst
22. How should CBE assessment criteria be modeled in the database?
==================================================================

Context
-------
Competency Based Education (CBE) requires that the LMS be able to track learners' mastery of competencies by means of assessment criteria. For example, in order to demonstrate that I have mastered the Multiplication competency, I need to have earned 75% or higher on Assignment 1 or Assignment 2. The association of the competency, the threshold, the assignments, and the logical OR operator together makes up the assessment criteria for the competency. Course Authors and Platform Administrators need a way to set up these associations in Studio so that outcomes can be calculated as learners complete their materials. This is an important prerequisite for displaying competency progress dashboards to learners and staff, and for making Open edX the platform of choice for institutions using the CBE model.

In order to support these use cases, we need to model these rules (assessment criteria), their association to the tag/competency to be demonstrated, and the object (course, subsection, unit, etc.) or objects used as the means of assessing competency mastery. We also need to leave flexibility for a variety of types as well as groupings, so that different combinations of objects can form multiple pathways by which learners demonstrate mastery of a competency.

Additionally, we need to be able to track each learner's progress towards competency demonstration as they begin receiving results for their work on objects associated with the competency via assessment criteria.

Decision
--------

1. Update `oel_tagging_taxonomy` with a new `taxonomy_type` column whose value can be "Competency" or "Tag".
2. Add a new database table for `oel_assessment_criteria_group` with these columns (see the model sketch after this list):
1. `id`: unique primary key
2. `parent_id`: The `oel_assessment_criteria_group.id` of the group that is the parent to this one.
3. `oel_tagging_tag_id`: The `oel_tagging_tag.id` of the tag that represents the competency that is mastered when the assessment criteria in this group are demonstrated.
4. `course_id`: The nullable `course_id` to which all of the child assessment criteria's associated objects belong.
5. `name`: string
6. `ordering`: The evaluation sequence number for this criteria group among its siblings; siblings are evaluated in this order, which enables short-circuit evaluation.
7. `logic_operator`: "AND", "OR", or null; determines how the children of this group node are combined.

Example: A root group uses "OR" with two child groups.
- Child group A (`ordering=1`) requires "AND" across Assignment 1 and Assignment 2.
- Child group B (`ordering=2`) requires "AND" across Final Exam and viewing prerequisites.
- If group A evaluates to true, group B is not evaluated.
3. Add a new database table for `oel_assessment_criteria` with these columns:
1. `id`: unique primary key
2. `assessment_criteria_group_id`: foreign key to Assessment Criteria Group id
3. `oel_tagging_objecttag_id`: Tag/Object Association id
4. `oel_tagging_tag_id`: The `oel_tagging_tag.id` of the tag that represents the competency that is mastered when this assessment criteria is demonstrated.
5. `object_id`: The `object_id` found with `oel_tagging_objecttag_id` which is included here to maximize query efficiency. It points to the course, subsection, unit, or other content that is used to assess mastery of the competency.
6. `course_id`: The nullable `course_id` to which the object associated with the tag belongs.
7. `rule_type`: "View", "Grade", or "MasteryLevel" (only "Grade" will be supported for now)
8. `rule_payload`: JSON payload keyed by `rule_type` to avoid freeform strings. Examples:
1. `Grade`: `{"op": "gte", "value": 75, "scale": "percent"}`
2. `MasteryLevel`: `{"op": "gte", "level": "Proficient"}`
4. Add constraints and indexes to keep denormalized values aligned and queries fast.
1. Enforce that `oel_assessment_criteria.oel_tagging_tag_id` matches the `oel_assessment_criteria_group.oel_tagging_tag_id` for its `assessment_criteria_group_id`.
2. Enforce that `oel_assessment_criteria.object_id` matches the `object_id` referenced by `oel_tagging_objecttag_id`.
3. Add indexes for common lookups:
1. `oel_assessment_criteria_group(oel_tagging_tag_id, course_id)`
2. `oel_assessment_criteria(assessment_criteria_group_id)`
3. `oel_assessment_criteria(oel_tagging_objecttag_id, object_id)`
4. `student_assessmentcriteriastatus(user_id, assessment_criteria_id)`
5. `student_assessmentcriteriagroupstatus(user_id, assessment_criteria_group_id)`
6. `student_competencystatus(user_id, oel_tagging_tag_id)`
5. When a completion event (graded, completed, mastered, etc.) occurs for the object, determine and track the learner's progress toward earning this competency (a sketch of this evaluation follows the diagram below). To reduce how often calculations need to run, the following tables hold the results at each level.
1. Add a new database table for `student_assessmentcriteriastatus` with these columns:
1. `id`: unique primary key
2. `assessment_criteria_id`: Foreign key pointing to assessment criteria id
3. `user_id`: Foreign key to the user in the `auth_user` table (typically the learner, although it appears staff can receive grades as well)
4. `status`: “Demonstrated”, “AttemptedNotDemonstrated”, “PartiallyAttempted”
5. `timestamp`: The timestamp at which the student's assessment criteria status was set.
2. Add a new database table for `student_assessmentcriteriagroupstatus` with these columns:
1. `id`: unique primary key
2. `assessment_criteria_group_id`: Foreign key pointing to assessment criteria group id
3. `user_id`: Foreign key to the user in the `auth_user` table (typically the learner, although it appears staff can receive grades as well)
4. `status`: “Demonstrated”, “AttemptedNotDemonstrated”, “PartiallyAttempted”
5. `timestamp`: The timestamp at which the student's assessment criteria status was set.
3. Add a new database table for `student_competencystatus` with these columns:
1. `id`: unique primary key
2. `oel_tagging_tag_id`: Foreign key pointing to Tag id
3. `user_id`: Foreign key to the user in the `auth_user` table (typically the learner, although it appears staff can receive grades as well)
4. `status`: “Demonstrated” or “PartiallyAttempted”
5. `timestamp`: The timestamp at which the student's competency status was set.
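
A minimal Django sketch of the core tables above, assuming standard Django model conventions; the class names, app labels, and foreign-key targets are illustrative assumptions, and only one of the three learner status tables is shown.

.. code-block:: python

    # Illustrative sketch only; class names, app labels, and FK targets are assumptions.
    from django.conf import settings
    from django.db import models


    class AssessmentCriteriaGroup(models.Model):
        """Backs the oel_assessment_criteria_group table."""
        parent = models.ForeignKey("self", null=True, on_delete=models.CASCADE)
        tag = models.ForeignKey("oel_tagging.Tag", on_delete=models.CASCADE)  # competency tag
        course_id = models.CharField(max_length=255, null=True)
        name = models.CharField(max_length=255)
        ordering = models.PositiveIntegerField()  # sibling evaluation order
        logic_operator = models.CharField(max_length=3, null=True)  # "AND" or "OR"

        class Meta:
            indexes = [models.Index(fields=["tag", "course_id"])]


    class AssessmentCriteria(models.Model):
        """Backs the oel_assessment_criteria table."""
        assessment_criteria_group = models.ForeignKey(AssessmentCriteriaGroup, on_delete=models.CASCADE)
        object_tag = models.ForeignKey("oel_tagging.ObjectTag", on_delete=models.CASCADE)
        tag = models.ForeignKey("oel_tagging.Tag", on_delete=models.CASCADE)
        object_id = models.CharField(max_length=255)  # denormalized from object_tag for query efficiency
        course_id = models.CharField(max_length=255, null=True)
        rule_type = models.CharField(max_length=32)  # only "Grade" initially
        rule_payload = models.JSONField()  # e.g. {"op": "gte", "value": 75, "scale": "percent"}

        class Meta:
            indexes = [models.Index(fields=["object_tag", "object_id"])]


    class StudentAssessmentCriteriaStatus(models.Model):
        """Backs the student_assessmentcriteriastatus table."""
        assessment_criteria = models.ForeignKey(AssessmentCriteria, on_delete=models.CASCADE)
        user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        status = models.CharField(max_length=40)  # "Demonstrated", "AttemptedNotDemonstrated", "PartiallyAttempted"
        timestamp = models.DateTimeField()

        class Meta:
            indexes = [models.Index(fields=["user", "assessment_criteria"])]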

.. image:: images/AssessmentCriteriaModel.png
:alt: Assessment Criteria Model
:width: 80%
:align: center
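
To make the ``ordering``/short-circuit behavior described above concrete, here is a sketch of how a group's status might be evaluated for a learner; the ``children()`` and ``is_group`` helpers and the grades mapping are assumptions layered on top of the model sketch above.

.. code-block:: python

    # Illustrative evaluation sketch; the helpers used here are assumptions.
    def criterion_met(criterion, grades):
        """Check a single "Grade" criterion against a learner's grades ({object_id: percent})."""
        payload = criterion.rule_payload  # e.g. {"op": "gte", "value": 75, "scale": "percent"}
        grade = grades.get(criterion.object_id)
        return grade is not None and payload["op"] == "gte" and grade >= payload["value"]


    def group_demonstrated(group, grades):
        """Evaluate a group's children in ordering order, short-circuiting on AND/OR."""
        for child in group.children():  # assumed helper: child groups and criteria, sorted by ordering
            met = (
                group_demonstrated(child, grades)
                if child.is_group  # assumed flag distinguishing nested groups from leaf criteria
                else criterion_met(child, grades)
            )
            if group.logic_operator == "OR" and met:
                return True  # e.g. child group A passed, so group B is never evaluated
            if group.logic_operator == "AND" and not met:
                return False
        # No short-circuit: an OR group found no satisfied child, while an AND
        # group (or a single-child group) found every child satisfied.
        return group.logic_operator != "OR"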


Rejected Alternatives
---------------------

1. Add a generic `oel_tagging_objecttag_metadata` table to support a pluggable metadata concept. This table would have foreign keys to each metadata table (currently only `assessment_criteria_group` and `assessment_criteria`) as well as a type field to indicate which metadata table is being pointed to.
1. Pros
1. Centrally organizes metadata associations in one place
2. Cons
1. Adds extra lookup overhead when retrieving specific metadata

.. image:: images/AssessmentCriteriaModelAlternative.png
:alt: Assessment Criteria Model Alternative
:width: 80%
:align: center

2. Split rule storage into per-type tables (for example, `assessment_criteria_grade_rule` and `assessment_criteria_mastery_rule`) instead of a single JSON payload.
1. Pros
1. Provides stricter schemas and validation per rule type
2. Cons
1. Increases table count and join complexity as new rule types are added

