Developing a New Tool to Measure Psychosocial Stress Risk for Remote Workers
For those reading about this work for the first time, you may find it helpful to start with the background to the project here.
This work aligns with the Health & Safety Executive’s approach to managing occupational stress risk – the Management Standards. The Management Standards consist of a series of survey questions (or ‘items’) categorised across seven domains of work stressors, drawn from existing theory about work stress: Control, Demands, Support (separated into Peer Support and Management Support), Role Clarity, Relationships, and Change Management. One of the key outputs of this project is a survey tool, which consists of the 35 items from the Management Standards Indicator Tool plus a series of demographic and bespoke questions relating to the recent changes to working practices. This week’s blog post provides a little insight into the process of going from qualitative data and findings to the bespoke items in the survey.
Devising the Items
Qualitative data from the 32 focus groups that were conducted were thematically analysed.
On the basis of the qualitative data and themes extracted from it, we identified potential stress risks specifically associated with remote working. For each risk identified, we created a question (or item) that would encapsulate that risk and provide an opportunity for the respondent to think about the extent to which that item was a risk for them.
This process resulted in 73 items, of which 57 aligned closely with the Management Standards domains. The remaining items did not fit meaningfully within the existing domains, so they were categorised into new ones. The items were then reviewed and discussed with a panel of three experts to check for overlap and coherence.
Each item needs to tap into a specific issue and stand as a discrete item in and of itself; but to be categorised into a domain, it also needs to align with the existing items while adding a new dimension to that area of work stress. For example:
“I feel trusted by my line manager to make good decisions about my remote working practices”
“I feel that my line manager recognises and values the work I do remotely”
Our view was that both of these items fitted into the domain ‘Management Support’ but address different aspects of the sort of support a manager provides, specifically in the context of remote working.
This review process significantly reduced the total number of items so that the final draft version of the tool contained 44 items.
A particularly interesting part of this process was deciding what information to collect about participant demographics, so that the data could be usefully explored to understand stress risks. We were interested in aspects such as:
- What proportion of their time they work from home
- Where they work when not homeworking
- Whether they work full or part time
- How they worked before lockdown
The Pilot Survey
The next step was to test these items with a relevant group of people who had been working remotely within local authorities. The aim was to test our assumptions about how the newly devised items fitted together, using statistical methods. 51 people across the four local authorities completed a pilot version of the survey.

We were then able to use this data to carry out a technical process called ‘Reliability Analysis’. This analysis provides evidence of the extent to which each item fits within its category, known as ‘internal consistency’. It is judged by a score called Cronbach’s Alpha, which we calculated using the statistical package SPSS; a reliable, internally consistent subscale should achieve a Cronbach’s Alpha score of between 0.60 and 0.90.

All of the subscales were within this range, with the exception of one (Relationships), which did not meet the criteria because it contained only two items, making it difficult to gain enough breadth for a consistent scale. A further domain had only one item and was therefore not appropriate for testing internal consistency. The remaining subscales were all consistent, and in fact most produced high scores. This means that the bespoke items in each subscale do relate to one another sufficiently to be evaluated as a unit and produce meaningful results.
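For readers curious about what this analysis involves under the hood, below is a minimal sketch of how Cronbach’s Alpha can be computed. This is not the project’s actual analysis (that was carried out in SPSS on the pilot data); it uses Python with NumPy and entirely made-up Likert-scale ratings, purely to illustrate the formula: alpha rises as items within a subscale vary together rather than independently.

```python
import numpy as np

def cronbachs_alpha(scores):
    """Cronbach's Alpha for a (respondents x items) matrix of scale scores."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 6 respondents answering 3 items on a 1-5 Likert scale.
# Rows are respondents, columns are items within one subscale.
ratings = np.array([
    [4, 5, 4],
    [3, 3, 3],
    [5, 5, 4],
    [2, 2, 3],
    [4, 4, 5],
    [1, 2, 2],
])
alpha = cronbachs_alpha(ratings)
```

Because these invented respondents answer the three items in a very similar way, the resulting alpha is high; with real pilot data, a value between 0.60 and 0.90 would indicate an internally consistent subscale, as described above.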
Why is this important?
In general, the processes that psychologists undertake to develop these types of tools are to ensure that the tool does the job it is supposed to – it measures what it sets out to measure.
If you produce a scale that is reliable, you can look at that domain as a whole: the individual items provide the detail, while the scale as a unit provides an overview of overall performance in that area.
Once data has been collected from a broader sample, further work can be done to strengthen the evidence supporting the use of the tool; this will be explored in the next phase of the project.