We have been debating the frequency of our critical risk reviews. Firstly, should they be based on the inherent (raw) risk score or on the residual score with controls in place? Secondly, what should the frequency be for low, medium or high risk?
This is what we have in place now:
Low risk - yearly
Medium risk - 6-monthly
High risk - 3-monthly
We work on the initial risk score. Our current standard is:
Low - 3-yearly
Medium - 2-yearly
High - annually
Significant - management review following assessment.
However, I believe the context of your company's operations must be taken into account when deciding the review frequency.
This is where the amount of exposure comes into it as well. There is no real point doing 6-monthly reviews for something like winter seasonal work.
We have high-risk work such as frost flying for helicopters, and we review those risks at the beginning of the frost season. It serves both as a risk review and as recurrency training for pilots and the crew working with them.
Inherent risk is, I think, old-school thinking - sorry to be negative. We should focus on current risk, since we already have controls in place. Why think about a risk without its controls? The Swiss cheese model is a prime example supporting the current-risk view. I know this is off topic, but current risk plus improvement plans is the way I look at it, and it makes sense to me. This should be underpinned by approaching hazard identification systematically first.
Back on topic - I agree with Tracey: look at your risk profile and controls to determine the frequency, and of course consultation with the people who understand the risk is important.
Ensuring the people doing the assessment are competent and confident is important, as is having a devil's advocate (if you have one) to keep you on track.
I was recently talking about risk with the HR Manager, who referred to the worst-case scenario - another mistake (in my view) when managing residual risk. Again, the Swiss cheese model applies here, where there are controls in place. As Dr Conklin would say, "failing safely".
Re: "old school thinking" - does the Plan-Do-Check-Act cycle that many management systems (and standards) are built around exacerbate this thinking? It keeps the focus on a set frequency for management reviews (e.g. monthly / 6-monthly / annually, which may or may not be adjusted to a unique frequency for each identified risk, as in the OP), rather than prompting reviews when there is a change to the risk or to how effectively the risk is being managed.
I've commonly seen this materialise in "XYZ period" board / senior management risk review meetings, which (after reading through the risk register for the nth time) end up simply re-approving the status quo without any deeper thought: everything seems alright on the surface, there haven't been any major incidents relating to the risk, and nothing prompted the review except that it came up for its "XYZ period" slot. More frequent reviews can actually make the problem worse - after all, if nothing has changed but a risk must still be reviewed at every monthly meeting, a tick-and-flick mentality can easily develop.
However, if we look to the regulations for guidance instead, they actually set out a reasonably well-defined list of prompts for when to review your risks and control measures, in Reg 8 (of the General Risk and Workplace Management Regulations). In summary:
1. if the control measure no longer controls the risk
2. when there is a change to the workplace or to how work is to be done
3. if a new hazard/risk is identified
4. if a health monitoring report indicates that there has been an unsafe exposure to a worker
5. if exposure monitoring indicates that there has been a breach of a Workplace Exposure Standard
6. due to consultation with the workers, H&S representative or H&S committee
None of the above is set by a defined time period, so (technically) if you are reviewing your risks only on the basis of a set frequency you could unintentionally run afoul of the law.
#1 and #3 are probably the "hardest" to figure out, since working out whether they are actually prompting a review basically requires you to review the risks in order to know whether you should review the risks.
#1 is where clarification of the Check aspect of the PDCA cycle is needed. Check shouldn't be (re)checking the Plan, i.e. it isn't periodically reviewing the risks and control measures on the risk register (as is commonplace in most senior management risk review meetings). Rather, it should be checking that the plan is working, i.e. monitoring that the control measures are in place and are still effectively managing their related risks. Effective monitoring programs are key to ensuring the relevant information filters up to management to inform their decisions on allocating resources.
#3 is very reliant on the other prompts - generally hazards/risks don't materialise out of nowhere, but rather when there is a change. Having a process to identify and manage these changes is key, which relies on the other points, specifically #2 and #6 (with #4 and #5 essentially being among the means of monitoring control measures mentioned earlier for #1).
So back to the question in the OP - my thoughts: start steering our boards, senior managers, etc. to re-frame the question from "how frequently should we review critical risks?" to "how confident are we that we are managing critical risks?", and lead them towards establishing that confidence by reviewing the information that comes from monitoring the control measures.
This, however, requires effective monitoring of the control measures to confirm they are in place as expected and are managing the risk as expected. Essentially this is a change from a typical KPI regime focused on lead and lag indicators to one based on verification and validation. As an example: if you identify a risk of process upsets due to miscommunication of process conditions during shift changeover, and you implement a 15-minute shift-handover debrief meeting between each pair of shifts to manage this risk, then you might verify the control is in place by recording the percentage of shift changeovers at which these meetings are held each week/month, and validate that it is actually working by recording the number of production upsets that occur each week/month (that could have been avoided if the shift had additional information from the previous shift). Report to management whenever either of these indicates the control measure isn't working as intended, so management can review and allocate additional resources as required (either to strengthen the control measure or to implement an alternative).
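The shift-handover example above can be sketched in code. This is a minimal, hypothetical illustration only - the function names, 95% verification threshold, and zero-upset tolerance are my assumptions, not anything prescribed in the discussion or a standard:

```python
# Hypothetical verification/validation check for a single control measure
# (the shift-handover debrief meeting). Thresholds are illustrative only.

def verification_rate(meetings_held: int, shift_changeovers: int) -> float:
    """Verification: percentage of shift changeovers at which the
    handover meeting (the control measure) was actually held."""
    if shift_changeovers == 0:
        return 100.0
    return 100.0 * meetings_held / shift_changeovers

def control_flags(meetings_held: int, shift_changeovers: int,
                  avoidable_upsets: int,
                  min_verification_pct: float = 95.0,
                  max_upsets: int = 0) -> list:
    """Return the issues (if any) to report to management: the control
    is either not in place (low verification) or not working
    (avoidable upsets occurred despite it)."""
    flags = []
    rate = verification_rate(meetings_held, shift_changeovers)
    if rate < min_verification_pct:
        flags.append(
            f"verification: meetings held for only {rate:.0f}% of changeovers")
    if avoidable_upsets > max_upsets:
        flags.append(
            f"validation: {avoidable_upsets} upset(s) avoidable with better handover")
    return flags

# Example month: 56 of 60 changeovers had a meeting, 2 avoidable upsets,
# so both a verification flag and a validation flag go to management.
print(control_flags(56, 60, 2))
```

The point of the split is that verification and validation can fail independently: the meetings might all be held yet upsets still occur (the control isn't effective), or upsets might be absent while the meetings quietly lapse (the control isn't in place).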
Sarah and others, the question and some of the answers suggest the use of a 5x5 matrix.
Use of a matrix assumes that (a) the matrix in question was actually designed for the business or undertaking AND for the specific activity within the business or undertaking; (b) the context of the business activity is clearly understood; (c) the matrix was used without fudging the results; (d) the effectiveness of any controls is clearly understood.
If you could look a District Court judge in the eye while under oath and confirm all of the above you are in an exceptional position and should consider writing the definitive text on risk assessment and management.
Which raises another question: what do you mean by "risk"? The Act gives no definition, and ISO 45001 gives two definitions. I prefer the definition in ISO 31000, Risk management - Guidelines, because it makes you think. There, risk is the "effect of uncertainty on objectives". All readers, think about that definition and the words effect, uncertainty and objectives for at least 48 hours before bursting into print.
This sort of conundrum is part of the paper I am teaching in trimester 2 at Victoria University of Wellington. Please don't think about enrolling in the paper - just enrol.