This feature allows course creators to collect real-time feedback from students, providing valuable insights throughout the learning process. Unlike traditional feedback forms or questionnaires, which are typically administered after an activity has ended, the Rating element enables continuous evaluation. This feedback supports adaptive learning by influencing the course path based on students’ responses.
With the Rating feature, course creators can:
- Gather feedback during various stages of the course.
- Integrate ratings seamlessly with adaptive learning paths, or trigger Pulse notifications based on specific responses.
- Make feedback comparable across multiple courses, allowing for consistent reporting.
Course creators can choose between numeric scales and Moodle-defined scales, the latter building on Moodle’s customizable scale options. Custom scales can be created easily and integrated into the Rating element, offering the flexibility to match the course’s needs.
Settings for the rating element are fully customizable, including:
- Whether ratings can be modified after submission.
- How results are displayed after submission, for example hiding aggregated results, or showing average scores for numeric scales and response counts for non-numeric scales (see the sketch after this list).
- Whether submitting a rating is mandatory for students.
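The display logic can be summarized as follows. This is a minimal sketch in Python, assuming ratings arrive as a simple list of responses; the function name `summarize_ratings` is hypothetical, not part of the product:

```python
from collections import Counter
from statistics import mean

def summarize_ratings(responses, numeric):
    """Aggregate submitted ratings for display: an average for
    numeric scales, per-option counts for non-numeric scales."""
    if numeric:
        return {"average": round(mean(responses), 2)}
    return {"counts": dict(Counter(responses))}

# Numeric scale (e.g. 1-5): the average score is shown.
print(summarize_ratings([4, 5, 3, 5, 4], numeric=True))
# -> {'average': 4.2}

# Non-numeric scale (e.g. Likert labels): counts per response are shown.
print(summarize_ratings(["Agree", "Agree", "Neutral"], numeric=False))
# -> {'counts': {'Agree': 2, 'Neutral': 1}}
```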
Variables
Each rating element can be linked to a variable. Admins can create, manage, and assign variables to course categories, streamlining reporting across multiple courses. Variables can be either active or archived: active variables collect new data, while archived ones preserve historical data without accepting further input. This system enables cross-course evaluations, allowing ratings from different courses to be analyzed based on shared variables. As a result, course creators gain a broader understanding of trends and feedback across the curriculum.
The Rating element also allows course creators to map evaluation schemas using Variable/Type logic. Each rating item is linked to an underlying construct (variable) and grouped into a broader thematic category (type). This structured approach makes it possible to build pedagogically meaningful evaluation instruments while also enabling empirical analysis of the resulting reporting data.
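To make the Variable/Type logic concrete, the sketch below groups rating scores by their shared variable and type across courses; the record layout and field names are assumptions made for illustration, not the product’s actual reporting schema:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical reporting records: each rating is linked to a variable
# (the underlying construct) and a type (the broader thematic category).
ratings = [
    {"course": "Course A", "type": "Content",  "variable": "clarity",   "score": 4},
    {"course": "Course A", "type": "Content",  "variable": "relevance", "score": 5},
    {"course": "Course B", "type": "Content",  "variable": "clarity",   "score": 3},
    {"course": "Course B", "type": "Delivery", "variable": "pacing",    "score": 4},
]

# Group scores by (type, variable) across all courses, then report averages.
# This grouping is what makes cross-course comparison on shared variables possible.
grouped = defaultdict(list)
for r in ratings:
    grouped[(r["type"], r["variable"])].append(r["score"])

for (rtype, variable), scores in sorted(grouped.items()):
    print(f"{rtype} / {variable}: average {mean(scores):.2f} from {len(scores)} response(s)")
```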

This course evaluation example shows rating items linked to specific variables. Once students submit their answers, the most common responses from other participants are revealed.
Ratings can be embedded within a learning activity, allowing them to refer directly to its content.

In this example, the rating elements use classic Likert scales, allowing learners to agree or disagree with given statements.
In this example, the rating implements a Net Promoter Score (NPS) survey, asking how likely learners are to recommend the course.
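For reference, NPS is computed from 0–10 responses by subtracting the percentage of detractors (scores 0–6) from the percentage of promoters (scores 9–10); the snippet below illustrates that standard calculation and is not tied to the product’s implementation:

```python
def net_promoter_score(responses):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count toward the total but toward neither group."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 -> NPS of 30.
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 6]))  # 30.0
```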

Visual scales using only emojis can also be created. Depending on the target audience or the tone of the course, this can be a more casual way to engage learners.