MDL-62192: Display accuracy of models more prominently and with more detail


    • Type: Improvement
    • Resolution: Won't Do
    • Priority: Minor
    • Fix Version/s: None
    • Affects Version/s: 3.5
    • Component/s: Analytics
    • Labels: None

      Currently, the accuracy of models is reported only in the Log action menu. Model accuracy is a key aspect of learning analytics and needs to be displayed more prominently. Accuracy varies with the time-splitting method, but only one time-splitting method can be active for a model at a time, so the accuracy level for the currently used time-splitting method should be displayed on the list of models. More detail about the accuracy of each model should also be available.

      One simple way to display this information is a ROC (receiver operating characteristic) curve, which plots the true positive rate of the model's predictions against the false positive rate: https://en.wikipedia.org/wiki/Receiver_operating_characteristic
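
      As an illustration, here is a minimal sketch (plain Python, not actual Moodle code) of how the ROC points and the area under the curve could be computed from per-sample prediction scores and true labels; all names and sample data are hypothetical:

      {code:python}
      # Hypothetical sketch: ROC points and AUC for a binary prediction model.
      def roc_points(scores, labels):
          """Return (FPR, TPR) pairs for every score threshold."""
          pos = sum(labels)              # condition positive n
          neg = len(labels) - pos        # condition negative n
          points = [(0.0, 0.0)]
          for threshold in sorted(set(scores), reverse=True):
              tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
              fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
              points.append((fp / neg, tp / pos))   # (fall-out FPR, recall TPR)
          return points + [(1.0, 1.0)]

      def auc(points):
          """Area under the ROC curve by the trapezoidal rule."""
          pts = sorted(points)
          return sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

      # Example with made-up predictions:
      scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
      labels = [1, 1, 0, 1, 0, 1, 0, 0]
      print(auc(roc_points(scores, labels)))  # 0.8125 for this toy data
      {code}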

      Display the following details for each model:

      • Contingency distribution (graph)
      • ROC curve (graph)
      • Area under curve (number)
      • Contingency table and diagnostics (note that most of these are simple calculations from a small number of basic values; see the sketch after this list):
        • True positive n
        • Power
        • Recall/Sensitivity (TPR)
        • False positive n
        • Type I error
        • Fall-out (FPR)
        • False negative n
        • Type II error
        • Miss rate (FNR)
        • True negative n
        • Specificity (TNR)
        • Predicted positive n
        • Prevalence
        • Precision (PPV)
        • Positive likelihood ratio (LR+)
        • Predicted negative n
        • False omission rate (FOR)
        • Negative likelihood ratio (LR-)
        • Condition positive n
        • Condition negative n
        • Total n
      • Diagnostics:
        • Accuracy
        • False discovery rate
        • Negative predictive value (NPV)
        • Diagnostic odds ratio
        • F1 score
      • Detail of indicators:
        • Selected indicator ROC curves (graph) with control to select how many indicators to display
        • Model effect size (standardized)
        • Indicator graphs:
          • Indicator significance (p-value)
          • Standardized effect sizes with confidence intervals for all indicators
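
      As noted above, most of the contingency-table diagnostics are simple ratios of the four basic counts. A minimal sketch of the calculations (plain Python, not actual Moodle code; names and example counts are illustrative only):

      {code:python}
      # Hypothetical sketch: deriving the listed diagnostics from the four
      # basic contingency-table counts (no zero-division guards for brevity).
      def diagnostics(tp, fp, fn, tn):
          total = tp + fp + fn + tn
          cond_pos, cond_neg = tp + fn, fp + tn    # condition positive/negative n
          pred_pos, pred_neg = tp + fp, fn + tn    # predicted positive/negative n
          tpr = tp / cond_pos                      # recall / sensitivity / power
          fpr = fp / cond_neg                      # fall-out (Type I error rate)
          fnr = fn / cond_pos                      # miss rate (Type II error rate)
          tnr = tn / cond_neg                      # specificity
          ppv = tp / pred_pos                      # precision
          return {
              'prevalence': cond_pos / total,
              'accuracy': (tp + tn) / total,
              'recall (TPR)': tpr,
              'fall-out (FPR)': fpr,
              'miss rate (FNR)': fnr,
              'specificity (TNR)': tnr,
              'precision (PPV)': ppv,
              'false discovery rate (FDR)': fp / pred_pos,
              'false omission rate (FOR)': fn / pred_neg,
              'negative predictive value (NPV)': tn / pred_neg,
              'positive likelihood ratio (LR+)': tpr / fpr,
              'negative likelihood ratio (LR-)': fnr / tnr,
              'diagnostic odds ratio (DOR)': (tpr / fpr) / (fnr / tnr),
              'F1 score': 2 * ppv * tpr / (ppv + tpr),
          }

      # Example with made-up counts:
      print(diagnostics(tp=40, fp=10, fn=20, tn=30))
      {code}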

      Alternative charts, such as the detection error tradeoff (DET) graph or zROC, could be used as the header graph instead of a ROC curve. Usability studies should determine which kind of display would be easiest for most users to understand and use in decision-making.
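
      For reference, a DET graph can be derived from the same ROC points by plotting the miss rate against the false positive rate, with both axes passed through the inverse normal CDF (probit scale). A minimal sketch, assuming the hypothetical roc_points() helper from the earlier example:

      {code:python}
      # Hypothetical sketch: transforming ROC points into DET coordinates.
      from statistics import NormalDist

      def det_points(roc_pts):
          probit = NormalDist().inv_cdf                 # inverse normal CDF
          det = []
          for fpr, tpr in roc_pts:
              fnr = 1.0 - tpr                           # miss rate
              if 0.0 < fpr < 1.0 and 0.0 < fnr < 1.0:   # probit undefined at 0 and 1
                  det.append((probit(fpr), probit(fnr)))
          return det
      {code}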

      Note that the ROC curve is only applicable to binary predictions, but at present these are the only prediction models supported by Moodle learning analytics. This could be extended to REC or RROC curves once support for other types of predictions is added.

            Assignee: Unassigned
            Reporter: Elizabeth Dalton (emdalton1)
            Votes: 0
            Watchers: 1

