Overpromised or underdelivered?

Evaluating the Performance of Australia’s National Cyber Security Strategy

This article compares the Australian Government’s own assessment of its progress towards implementing the National Cyber Security Strategy with the Australian Strategic Policy Institute’s (ASPI) evaluation of the same progress.

Published in April 2016, the Strategy provided a plan of 33 actions the Government would take between publication and 2020 to achieve its stated objectives. A year after original publication, the Government published an Annual Update to the Strategy, appraising its progress against the action plan. Each action was assessed on a scale of four ratings: Not scheduled to have commenced, Progress, Strong progress, and Completed. Out of the 33 actions, 2 were rated as Not scheduled, 14 as Progress, 11 as Strong progress, and 6 as Completed.

Not taking this appraisal at face value, ASPI published its own evaluation of the Government’s implementation in May 2017. This was largely optimistic, though it also criticised the Strategy’s action plan for its lack of clarity, including missing timeframes and the unmeasurable character of several actions. Similar to the Government’s own effort, ASPI provided a scale, albeit a more fine-grained one of six ratings: Unmeasurable, Outcome-dependent, Not started, Underway, Significant progress, and Achieved. ASPI were also more thorough in their assessment, rating not just the 33 actions but the individual sub-points within them. Accordingly, there were 11 Unmeasurable, 12 Outcome-dependent, 14 Not started, 22 Underway, 19 Significant progress, and 4 Achieved ratings across these sub-points.

With these basic metrics, comparative analysis can be conducted to evaluate whether the Government has so far overpromised or underdelivered on the National Cyber Security Strategy.

Method

Because the Government and ASPI assessments used different rating scales, it was necessary to normalise them on a common quantitative scale before they could be compared. This was done by assigning a score of 1 to 4 to each rating, with higher scores reflecting better ratings, as detailed in the table below. The Unmeasurable and Outcome-dependent ratings were both assigned 1 to neutralise them, and a score of 0 was assigned where ASPI had rated an action as Not started even though it should already have commenced.
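As an illustration, here is a minimal sketch of the scoring step in Python. The score table referenced above is not reproduced in this extract, so the exact assignments below are an assumption based on the description (higher ratings receive higher scores, Unmeasurable and Outcome-dependent are neutralised at 1, and an overdue Not started drops to 0).

```python
# Assumed rating-to-score maps, inferred from the description above;
# the actual score table referenced in the text is not reproduced here.
GOV_SCORES = {
    "Not scheduled to have commenced": 1,
    "Progress": 2,
    "Strong progress": 3,
    "Completed": 4,
}

ASPI_SCORES = {
    "Unmeasurable": 1,        # neutralised
    "Outcome-dependent": 1,   # neutralised
    "Not started": 1,         # scored 0 instead if the action was already due to start
    "Underway": 2,
    "Significant progress": 3,
    "Achieved": 4,
}

def aspi_score(rating: str, overdue: bool = False) -> int:
    """Return the numeric score for an ASPI rating, dropping to 0 when an
    action rated 'Not started' should already have commenced."""
    if rating == "Not started" and overdue:
        return 0
    return ASPI_SCORES[rating]
```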

The scores were then assigned to each action as defined by the Government. Where ASPI had rated multiple sub-points within an action, the sub-point scores were averaged to produce a single score for the action.
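A sketch of that aggregation step, continuing the mapping above; the sub-rating structure and the example data are illustrative, not ASPI's actual ratings.

```python
from statistics import mean

def action_score(sub_ratings: list[tuple[str, bool]]) -> float:
    """Average the sub-point scores for one action.

    Each sub-rating is an (ASPI rating, overdue) pair; an action with a
    single rating simply returns that rating's score.  Uses aspi_score()
    from the previous sketch.
    """
    return mean(aspi_score(rating, overdue) for rating, overdue in sub_ratings)

# Illustrative example: an action with three ASPI-rated sub-points
example = [("Underway", False), ("Significant progress", False), ("Not started", True)]
print(action_score(example))  # (2 + 3 + 0) / 3 ≈ 1.67
```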

The end result of applying this method was two scores for each action: one reflecting the Government rating and one the ASPI rating. A comparison of these scores could be used to determine whether the Government had over- or underestimated its performance for each action, and the deviation between the scores indicated the extent of the over- or underestimation. There was also a total score out of 132 [33 actions × 4 (highest possible score)] indicating overall performance.
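Finally, a sketch of the comparison itself: the per-action deviation between the two scores and the totals out of 132. The function and its inputs are hypothetical; the dictionaries would hold the scored Government and ASPI ratings, not data shown in this extract.

```python
def compare(gov_scores: dict[str, int], aspi_scores: dict[str, float]) -> None:
    """Print per-action deviations and the total scores out of 132 (33 x 4)."""
    max_total = 33 * 4
    for action in gov_scores:
        deviation = gov_scores[action] - aspi_scores[action]
        verdict = ("overestimated" if deviation > 0
                   else "underestimated" if deviation < 0
                   else "agreed")
        print(f"{action}: Gov {gov_scores[action]}, ASPI {aspi_scores[action]:.2f} "
              f"-> {verdict} by {abs(deviation):.2f}")
    print(f"Government total: {sum(gov_scores.values())}/{max_total}")
    print(f"ASPI total: {sum(aspi_scores.values()):.1f}/{max_total}")
```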
