Category: Social Policy

Disabled people are sanctioned more than other people, according to research


A study has found that people with disabilities who claim social security support are 26-53 per cent more likely to be sanctioned than people who are not disabled. According to the research, the main reason behind this is a “culture of disbelief” among jobcentre staff, who fail to take sufficient account of the impact of people’s disabilities on their capacity to meet strict welfare conditionality criteria.

This implies that welfare conditionality involves inbuilt discrimination, since it disproportionately affects people according to their characteristics.

Such discrimination violates the Equality Act 2010.

Ahead of the release of a Demos report by Ben Baumberg Geiger on the Work Capability Assessment on Tuesday, the headline findings on benefits conditionality were featured today in the Observer: ‘More than a million benefit sanctions imposed on disabled people since 2010’.

Ben is a Senior Lecturer in Sociology and Social Policy at the School of Social Policy, Sociology and Social Research (SSPSSR) at the University of Kent. The figures on benefits sanctions can be found in Ben’s 2017 paper ‘Benefits conditionality for disabled people: stylised facts from a review of international evidence and practice’, published (open access) here (p109-111), and the appendices that provide the source for the UK benefit sanctions data are here.

The article in the Guardian also briefly mentions new polling on the public’s attitudes to sanctioning disabled benefit claimants. However, full details of this will be available in the report to be released on Tuesday. 

The recent Work and Pensions Committee inquiry into Employment and Support Allowance (ESA) and Personal Independence Payment (PIP) assessments highlights how disability benefits are not a ‘safe place’ for disabled people, despite ministers using language that implies they are. Warnings from Iain Duncan Smith about “up to a million people ‘languishing’ on sickness benefits, who could be ‘put back to work’ with the right ‘help’”, or descriptions in policy papers of disabled people being “parked” on benefits, mislead the public.

It is through such political definitions that groups become restricted, face boundaries, become oppressed. Over the last seven years, disabled people have somehow lost the right to self-determination and to express our own group identity. The Government have redefined us and radically rewritten the terms and conditions of the social contract more generally, removing state obligations and duties towards citizens. The Conservative settlement – a fusion of economic neoliberalism with state and social authoritarianism – openly demonstrates an aversion to any notion of social equality and justice.  

Sanctions – the cutting or withholding of lifeline benefits – are applied as a punishment when citizens infringe the conditions of their welfare support by, say, missing an appointment, being late, or failing to apply for enough jobs.

The sanctions regime has been championed by the Government as a means of imposing ‘behavioural change’ on claimants, as they believe that people are unemployed because they need ‘incentives to work’. However, rather than addressing low pay, insecure employment and poor working conditions, the Government has instead decided that unemployed people and welfare itself are the problem: welfare is seen as a ‘perverse incentive’ that prevents people from looking for employment.

Sanctions and wider welfare conditionality were introduced to significantly reduce the basic security and material comfort of people needing social security, in order to push them back into the labour market. This behaviourist turn has transformed a system that was designed to ensure that all citizens could meet their basic survival needs into one that punishes people for non-compliance with politically imposed conditionality criteria, comprised of what the Conservatives regard as acceptable ‘job seeking behaviours’. The Conservatives claim that this makes people more likely to gain employment.

However, unsurprisingly, most of the experts consulted as part of the Demos project have concluded that welfare conditionality has little or no effect on improving employment outcomes for disabled people, and often has a negative impact, to the point where disabled people subjected to these state impositions were even less likely to find employment than those who were not. There was also widespread anecdotal evidence that the threat of sanctions can cause anxiety and have a wider impact on people’s health.

Polly Mackenzie, director of Demos, said it was now clear that the benefits system isn’t working for disabled people: “Conditionality is important in any benefits system, but when disabled people are so much more likely to be sanctioned, something is going wrong. Jobcentre advisers and capability assessors too often have a culture of disbelief about disability, especially mental illness, that leads them to sanction claimants who genuinely could not do the job they are being bullied into applying for.

“We need to think again about how we assess work capability. Employers also need to be better at adapting to disabled people’s needs so that more jobs can be done by people with fluctuating conditions.”

A damning research report by the National Audit Office (NAO) in 2016 also found that there was no evidence that sanctions were working. It also said there was a failure to measure whether money was being saved, and that the application of sanctions varied from one jobcentre to another.

The 2017 Demos study uncovered that more than 900,000 JSA claimants who report a disability have been sanctioned since May 2010. People who claim ESA and have been placed in a work-related activity group – which requires them to attend jobcentre interviews and complete work-related activities – can also be sanctioned. The research found that more than 110,000 ESA sanctions have been applied since May 2010.

Mark Atkinson, chief executive at disability charity Scope, said: “Punitive sanctions can be extremely harmful to disabled people, who already face the financial penalty of higher living costs. There is no clear evidence that cutting disabled people’s benefits supports them to get into and stay in work.

“Sanctions are likely to cause unnecessary stress, pushing the very people that the government aims to support into work further away from the jobs market.”

The Work Capability Assessment (WCA) was introduced in part to bolster neoliberal imperatives related to the supply of labour. The political focus on these economic concerns fails to prioritise the wellbeing of disabled people. Another reason for the introduction of the WCA was to cut costs. This intention was evident in the ‘scrounger’ and ‘fraud’ narrative that seeped into political and media discourse. Disability welfare is portrayed as ‘unsustainable’, with the Government claiming that resources need to be ‘targeted’ at those ‘most in need’.

However, it is evident from the recent Work and Pensions inquiry into ESA and PIP assessments that many of those most in need are being catastrophically let down by the current system.

The Guardian reports: Polling for the Demos project found that while the public often supported the imposition of sanctions for disabled people, they did not back the way in which they were applied in practice.

A majority thought that disabled people’s benefits should be cut if they do not take a job they can do, but they were less supportive of sanctioning for minor non-compliance, such as sometimes turning up late for meetings. Even those who supported sanctions preferred a much less punitive approach than the government currently imposes.

The sanctions are taking place in a context where the number of unemployed disabled people being supported with specialist help to find work has actually been halved, according to the companies running the government’s Work and Health Programme.

Kirsty McHugh, chief executive of the Employment Related Services Association (Ersa), which represents the employment support sector, said: “The size of the new Work and Health Programme means only one in eight disabled people who want to work will have specialist help to do so. As a society, we have an obligation to ensure appropriate support is available and the report shows that we are in danger of failing disabled people and their families.” 

The analysis shows that funding is to be cut from £750m in 2013-14 to less than £130m in 2017 – a cut of more than 80 per cent. Ersa says that this will severely hamper the Government in its goal of getting more than 1.2 million more disabled people into work. It seems that the Government is relying instead on punitive and coercive measures, such as the threat and use of sanctions, to achieve its goal. Disabled people are not permitted to have goals that don’t align with state-defined neoliberal ones.

The collaborative Demos researchers recommend a reduction in the use of so-called “benefit conditionality” for disabled people and a strengthening of the safeguards to ensure disabled people are not unfairly punished. However, despite the growing numbers of campaigners, charity groups and academic researchers calling for the Government to introduce less aggressive sanctions, the Government remains disinclined to do so.

The theories of ‘behaviour change’ underpinning conditionality have been questioned by commentators, particularly with respect to the assumed ‘rationality’ of citizens’ responses to financial sanctions.

Concerns have been raised that welfare conditionality leads to a range of unintended effects, including distancing people from support, causing hardship and even destitution. There is also ample evidence that social groups with complex needs – such as disabled people, young people with chaotic lifestyles and homeless people – have been disproportionately affected by the intensification of welfare conditionality under successive Conservative governments. Research implies that there are differential impacts based on citizens’ characteristics.

This observation is also consistent with international evidence, especially from the US, that the most potentially vulnerable claimants are at greatest disadvantage within highly conditional social security systems, for example, those with mental health problems, those with long term illnesses and disabled people more generally.

Welfare ensures that people are able to meet their basic needs. Welfare covers the costs of food, fuel and shelter. It’s a safeguard to prevent absolute poverty. That was its original purpose when it was introduced. It is difficult to imagine how removing the means that people have of meeting their basic survival needs can possibly motivate them to find work. Comprehensive historical research shows that when people cannot meet their basic biological needs, their pressing cognitive priority is simply survival.

In other words, when people are hungry and facing destitution, addressing those fundamental needs becomes a significant barrier to addressing their psychosocial needs such as seeking employment.

For disabled people, who already face additional barriers to meeting their fundamental needs, welfare sanctions have created injustices, caused fear, and inflicted considerable distress and harm.

 


I don’t make any money from my work. I am disabled because of illness and have a very limited income. But you can help by making a donation to help me continue to research and write informative, insightful and independent articles, and to provide support to others. The smallest amount is much appreciated – thank you.



Government quietly scraps plans to introduce softer approach to benefit sanctions


Last October, the Department for Work and Pensions (DWP) agreed to trial a less aggressive approach to sanctions, which included the issuing of warnings instead of immediate benefit sanctions when a claimant breaches the conditions imposed on them for the first time. Iain Duncan Smith had proposed the idea in response to sustained criticism that sanctions are often applied unfairly, that they ultimately cause severe hardship, they are a barrier to employment rather than providing an incentive for work, and are costing more to administer than they actually save. 

Last year, David Gauke admitted at the Conservatives’ annual conference that the system of benefit sanctions often fails to work and can cause harm. He said he would try to find a way to make the sanctions system less damaging to people, particularly those with mental health conditions. The announcement of the trial soon afterwards seemed to demonstrate the DWP’s commitment to learning from feedback and using evidence to make positive changes.

However, the Department’s commitment to the trial is now being called into question, following Esther McVey’s appointment as Gauke’s successor.

Some of the widely criticised sanction decisions include people being sanctioned for missing jobcentre appointments because they are ill, or had to attend a job interview, or people sanctioned for not looking for work because they had already secured a job due to start in a week’s time. In one case, a man with heart problems was sanctioned because he had a heart attack during a disability benefits assessment and so failed to complete the assessment.

Welfare was originally designed to safeguard people experiencing hardship from absolute poverty. Now the Government uses sanctions to create hardship as a punishment for non-compliance with rigid conditionality criteria that don’t permit mitigation for someone experiencing a heart attack, or for someone being late for a meeting with a job coach.

Last March, the Work and Pensions Committee called for an independent inquiry into the way that sanctions operate – the second time in a year that it had done so. The committee report at the time had warned that the sanctions regime appeared to be “purely punitive”.

In August 2015, the DWP was caught making up quotes from supposed “benefit claimants” saying that sanctions had actually helped them. The Department later admitted the quotes were fabricated and withdrew the leaflet, claiming they were for “illustrative purposes only”.

This deceit came to light because of a response to a Freedom of Information (FoI) request from Welfare Weekly, which led the DWP to withdraw the leaflet featuring fictional case studies. It’s particularly damning that the Department can present no real case studies supporting the use of sanctions and its claims that they are effective and necessary.

Sanctioning a claimant who is single and without dependants can often have implications for other family members, causing hardship for others – for example younger siblings of JSA claimants who are living in their parental home. It is under-acknowledged that when a claimant is sanctioned, the loss of benefits may affect low-income families rather than individuals alone. 

It was hoped that the change proposed by Duncan Smith and Gauke would soften some of the severe hardship caused by sanctions. Although Conservative ministers have claimed that sanctions ensure that people are compliant in their commitment to look for work, in practice a very high proportion of benefit sanctions challenged at independent appeal are overturned, because they have been unfairly or unreasonably applied. In 2014 the DWP released figures which showed that 58 per cent of people seeking to overturn sanctions were successful – up from 20 per cent before 2010.

The introduction of less aggressive sanctions – which involves a system of warnings and a period of dialogue between claimant and the DWP to ascertain reasons for possible breaches to the claimant commitment, exploring possible mitigating circumstances – was also one of five recommendations made in last February’s report by the public accounts committee (PAC) on benefits sanctions, all of which have been accepted by ministers, according to a document sent by the Treasury to the committee earlier this month.

Concerns expressed in the report are that benefit sanctions affect a large number of people, leading to hardship and undermining efforts to find work. Around a quarter of people on Jobseeker’s Allowance between 2010 and 2015 had at least one sanction imposed on them. Suspending people’s benefit payments can lead to rent arrears and homelessness. The consequences of sanctions on people can be serious so they should be used “very carefully”. However, sanctions are imposed for “honest mistakes”. Citizens Advice (CAB) highlighted the need for flexibility for people who are trying their best.

Other concerns stated in the report are that sanctions are imposed inconsistently on claimants by different jobcentres and providers, the Department does not understand the wider effects of sanctions and the Department’s data systems are not good enough to provide routine understanding of what effect sanctions have on claimants’ employment prospects.  In other words, it’s a policy applied without adequate justification or evidence of its efficacy. 

This echoes much of what the National Audit Office (NAO) said in their report on benefit sanctions in 2016. Their report, which has also been cited as a source by the PAC, said the DWP is not doing enough to find out how sanctions affect people on benefits, and concluded that it is likely that management focus and local work coach discretion have had a substantial influence on whether or not people are sanctioned.

The NAO report recommended that the DWP carries out a wide-ranging review of benefit sanctions, particularly as it introduces further changes to labour market support such as Universal Credit. The NAO found that the previous government increased the scope and severity of sanctions in 2012 and recognised that these changes would affect claimants’ behaviour in ways that were “difficult to predict.”

Benefits ensure that people are able to meet their basic needs. Welfare covers the costs of food, fuel and shelter. It’s a safeguard to prevent absolute poverty. That was its original purpose when it was introduced. It is difficult to imagine how removing the means that people have of meeting their basic survival needs can possibly motivate them to find work. Comprehensive historical research shows that when people cannot meet their basic biological needs, their pressing cognitive priority is simply survival. In other words, when people are hungry and facing destitution, addressing those fundamental needs becomes a significant barrier to addressing their psychosocial needs such as seeking employment.

Welfare rights advisers on the rightsnet online forum, and from Buckinghamshire Disability Service, have voiced their concerns that the DWP has decided not to carry out the less aggressive sanctions warning trial after all, because of “competing priorities in the Parliamentary timetable”. This decision was included on page 139 of the latest Treasury Minutes Progress Report, published last month, which describes progress on implementing those PAC recommendations that have been accepted by the government. There was no public announcement of the government’s intentions.

The progress report is dated 25 January; nonetheless, a DWP spokeswoman has insisted that the decision to abandon the sanctions trial had been taken before the appointment of Esther McVey as the new work and pensions secretary on 8 January.

She said: “The decision not to undertake a trial was taken at the end of 2017 – before Esther McVey took up her position as secretary of state.

“As you have read, introducing the trial through legislative change cannot be secured within a reasonable timescale.

“But we are keeping the spirit of the recommendation in mind in our thinking around future sanctions policy.

“To keep the sanctions system clear, fair and effective we keep the policies and processes under continuous review.”

The decision last October to trial handing out warnings prior to implementing sanctions was welcomed by many campaigners, disabled activists, academics and anti-austerity protesters. 

It had come only weeks after the UN’s committee on the rights of persons with disabilities (UNCRPD) published their inquiry report, which found that the UK government’s welfare reforms “systematically” violate the rights of disabled persons.

The UN committee recommended that the government review “the conditionality and sanction regimes” linked to employment and support allowance (ESA), the out-of-work disability benefit, and “tackle the negative consequences on the mental health and situation” of disabled people.

Gauke had previously acknowledged that sanctions cause harm, and had voiced a commitment to amend the severity of welfare sanctions. The change in direction by the Government is thought by some campaigners to be directly linked to the return of Esther McVey as a Department for Work and Pensions minister.

A PAC spokesperson said: “The committee has not yet considered its course of action.”

However, sanctions are not compatible with our human rights framework or democracy: “A legal right to a basic income necessary to live with dignity is rooted in inalienable human rights. These rights should be properly enshrined in UK constitutional laws and systems of governance. Currently the poorest 10% of families (about 6 million people) live on £40 per week after tax. It is utterly unacceptable to further reduce this tiny income to zero for any reason. As it stands, [welfare] conditionality has opened the door to injustice and cruelty” (Dr Simon Duffy, Centre for Welfare Reform, 2010).

 

Related

Benefit Sanctions Can’t Possibly ‘Incentivise’ People To Work – And Here’s Why

Benefit Sanctions Lead To Hunger, Debt And Destitution, Report Says

This post was written for Welfare Weekly, which is a socially responsible and ethical news provider, specialising in social welfare related news and opinion.


 

I don’t make any money from my work. But you can support Politics and Insights and contribute by making a donation which will help me continue to research and write informative, insightful and independent articles, and to provide support to others. The smallest amount is much appreciated, and helps to keep my articles free and accessible to all – thank you. 


The PIP & ESA inquiry report from the Work and Pensions Select Committee – main recommendations


Yesterday, the Work and Pensions Select Committee published its latest report on PIP and ESA assessments. It is an utterly damning report, highlighting a lack of quality, consistency, transparency, objectivity and fairness in the government’s incredibly expensive outsourced PIP and ESA assessment regimes.

The report highlights failures by the private contractors, Atos and Maximus, to conduct accurate assessments, and substantial failures in the DWP’s decision-making – both the initial decisions about benefit awards and mandatory reviews were all too often found to be lacking in facts and accuracy. 

The report document says: “We heard many reports of errors appearing in assessment reports. Such experiences serve to undermine confidence amongst claimants. So does the proportion of DWP decisions overturned at appeal. At worst, there is an unsubstantiated belief among some claimants and their advisers that assessors are encouraged to misrepresent assessments deliberately in a way that leads to claimants being denied benefits.

“All three contractors carry out assessments using non-specialist assessors,” it adds. “Without good use of expert evidence to supplement their analysis, the Department will struggle to convince sceptical claimants that the decision on their entitlement is an informed one… It is extraordinary that basic deficiencies in the accessibility of PIP and ESA assessments remain, five and ten years respectively after their introduction.”

The committee concludes: “Claimants of PIP and ESA should be able to rely on assessments for those benefits being efficient, fair and consistent. Failings in the processes – from application, to assessment, to decision-making and to challenge mechanisms – have contributed to a lack of trust in both benefits. This risks undermining their entire operation.”

Meanwhile, the private contractors have made massive profits, despite the government’s own quality targets having been universally missed.

The report continues: “The Government has also spent hundreds of millions of pounds more checking and defending the Department’s decisions.”

While ultimately recommending that the assessments might be better conducted by ‘in-house’ assessors, the Work and Pensions Committee has in the meantime called for new conditions to be put in place for transparency in the process.

Conclusions and recommendations made:

The importance of trust

1. For most claimants, PIP and ESA assessments go smoothly. But in a sizeable minority of cases, things go very wrong indeed. For at least 290,000 claimants of PIP and ESA—6% of all those assessed—the right decision on entitlement was not made first time. Those cases, set alongside other problems throughout the application and assessment process, fuel a lack of trust amongst claimants of both benefits. The consequences—human and financial—can be enormous.

Our recommendations aim to correct the worst of these problems and rebuild claimant trust. Properly implemented, they will bring real improvements for claimants going through the system now and in the near future. The question of whether a more fundamental overhaul of welfare support for disabled people is necessary remains open. We do not intend this to be the end of our work on PIP and ESA. (Paragraph 12)

Before the assessment

2. Applying for PIP or ESA can be daunting. The Department has so far only made limited efforts to provide support and guidance in a variety of clear, accessible formats. It should not rely on already stretched third sector organisations to explain the Department’s own processes. A concerted effort from the Department to help with applications would be both reassuring to claimants, and of great practical benefit. 

We recommend the Department co-design, with expert stakeholders, guidance in a range of accessible formats on filling in forms and preparing for assessment. This should include accessible information on the descriptors for each benefit, to be sent out or signposted alongside application forms. We also recommend the Department makes clear to claimants being reassessed that they should not assume information from their previous assessment will be re-used, and should be prepared to re-submit any supporting evidence already provided. (Paragraph 18)

3. Many PIP and ESA claimants have multiple health conditions that bring with them severe limitations. Focusing on what they are able to do is a common coping strategy—one that is often incompatible with filling in PIP and ESA application forms. It is impossible to draw a causal link from application to claimant health. The Department should demonstrate, however, that it is alert to the risk to mental health posed by parts of the application processes and seek to offset this. (Paragraph 20)

4. We recommend that the Department commission and publish independent research on the impact of application and assessment for PIP and ESA on claimant health. This should focus initially on improvements to the application forms, identifying how they can be made more claimant-friendly and less distressing for claimants to fill in. The Department should set out a timescale for carrying out this work in response to our Report. (Paragraph 21)

5. As a result of their health conditions, many PIP and ESA claimants require communications in a specific format. The Department’s resistance to meeting even some of the most basic of these needs makes applying for PIP and ESA unnecessarily challenging for some claimants. Its failure to provide a widely-used, accessible alternative to telephone calls, and Easy Read communications, is extraordinary. 

We recommend that the Department enables claimants with hearing impairments to apply for PIP and ESA via email, ensuring this service is appropriately resourced to prevent delays to claims. In the longer term, it should look to offer this option to all claimants. It should also ensure key forms and communications—especially the PIP2, appointment and decision letters—are available in Easy Read format, allowing claimants to register this as a communication preference at the start of their claim. (Paragraph 25)

6. Home visits are an important option for claimants whose health conditions make attending an assessment centre difficult. Contractors interpret the Department’s guidance on home visits differently. They take varying approaches to granting them and require different standards of supporting evidence. This leads to inconsistencies between the benefits and between contractors. It can also place additional burdens on claimants and the NHS. (Paragraph 30)

7. We recommend the Department issue new guidance to PIP and ESA assessors on the procedure for determining whether claimants receive a home visit. This should specify that GP letters are not required where other forms of evidence and substantiation are available. This should include evidence from the claimant, as well as from carers, support workers and other health professionals. To ensure guidance is being followed, we recommend contractors be required to gather evidence and the Department audit requests made and granted for home visits, as well as reasons for refusal. (Paragraph 31)

The assessment

8. Atos, Capita and Maximus all use a generalist assessor model. They pay no regard to the specialist expertise of individual assessors in assigning cases. They therefore assess claimants with the full gamut of conditions. The success of this model depends on a consistent supply of high quality, relevant expert evidence. There is ongoing confusion amongst claimants and those supporting them alike about what constitutes “good evidence” for functional purposes.

We recommend that the Department sets out in response to this Report its approach to improving understanding amongst health and social care professionals and claimants of what constitutes good evidence for PIP and ESA claims. This should include setting out how it will measure, monitor and report on the supply of evidence into PIP and ESA assessments. (Paragraph 39)

9. Successive evidence-based reviews conducted on behalf of the Department have identified a pervasive culture of mistrust around PIP and ESA processes. This culminates in fear of the face-to-face assessments. This has implications far beyond the minority of claimants who directly experience poor decision-making. It can add to claimant anxiety even among those for whom the process works fairly. While that culture prevails, assessors risk being viewed as, at best, lacking in competence and, at worst, actively deceitful. Addressing this is a vital step in restoring confidence in PIP and ESA. 

The case for improving trust through implementing default audio recording of assessments has been strongly made. We recommend the Department implement this measure for both benefits without delay. In the longer term, the Department should look to provide video recording for all assessments. (Paragraph 44)

10. Some claimants may be unable or embarrassed to explain the full implications of their condition to their assessor. Companions can help them to articulate these and support claimants during a potentially stressful process. Their role in assessments is vital. The Department’s recognition of this in its guidance to contractors is welcome. We are concerned, however, that this guidance is not consistently followed.

There is no reference to companions in the Department’s auditing or contractor training programmes. That none of the contractors could even reliably tell us how many claimants are accompanied to assessment suggests this is not a priority. (Paragraph 49)

11. We recommend that the Department develop detailed guidance on the role of companions, including case studies demonstrating when and how to use their evidence. Contractors should also incorporate specific training on companions into their standard assessor training. After implementing default recording of assessments, a sample of assessments where claimants are accompanied should be audited on a regular basis to ensure guidance is being followed. (Paragraph 50)

The report and initial decision

12. DWP decisions on PIP and ESA claims are often opaque, even when decisions are correctly made. Ensuring claimants can see what is being written about them during assessment, and providing a copy of the assessor’s report by default would prove invaluable in helping claimants understand the reasoning behind the Department’s decisions. Both steps would increase transparency and ensure claimants are able to make informed decisions about whether to challenge a decision. In turn, many tribunals could be avoided, the workload of Decision Makers at Mandatory Reconsideration reduced, and overall costs lowered. 

We recommend the Department proceed without delay in sending a copy of the assessor’s report by default to all claimants, alongside their initial decision. We also recommend it issue instructions to contractors on ensuring claimants are able to see what is being written about them during assessment, and allowing their input if they feel this is incorrect or misleading. This should include, for example, emphasising to contractors that rooms should be configured by default to allow the claimant to sit next to the assessor or be able to see their computer screen. (Paragraph 55)

13. Claimants often go to considerable efforts to collect additional evidence for their claim, providing important information for generalist HCPs. Contractors and the Department should ensure that it is clear to claimants how and when this evidence is used. Without doing so, they will struggle to convince sceptical claimants that the decision on their entitlement to benefits is an informed one. Knowing how their evidence has been used will further empower claimants to understand the Department’s decisions, and to decide whether an MR is necessary. (Paragraph 60)

14. We recommend that the Department introduce a checklist system, requiring HCPs to confirm whether and how they have used each piece of supporting evidence supplied in compiling their report. Decisions not to use particular pieces of evidence should also be noted and justified. This information should be supplied to Decision Makers so they can clearly see whether and how supporting evidence has been used, making it easier to query reports with contractors. It should also be supplied to the claimant along with a copy of their report. (Paragraph 61)

Disputed decisions

15. Mandatory Reconsideration should function as a genuine check, not an administrative hurdle for claimants to clear. Improving the quality of assessments and reports will ensure fewer claimants have to go to MR, but disputes will always happen. The Department deserves credit for a renewed emphasis on MR quality. MR decision-making has not always been characterised by thoroughness, consistency and an emphasis on quality, however. Not all claimants who have, perhaps wrongly, been turned down at MR will have had the strength and resources to appeal. (Paragraph 66)

16. We recommend the Department review a representative sample of MRs conducted between 2013 and December 2017, when it dropped its aspiration to uphold 80% of MRs, to establish if adverse incorrect decisions were made and, if so, whether there were common factors associated with those decisions. It should set out its findings and any proposed next steps in response to this report. (Paragraph 67)

17. The Department argues that the high rate of decisions overturned at appeal is driven by the emergence of new evidence that was not available at initial or MR stage. It has displayed a lack of determination in exploring why it takes until that stage for evidence to come to light. In almost half of cases the “new evidence” presented was oral evidence from claimants. It is difficult to understand why this information was not, or could not have been, elicited and reported by the assessor. The Department’s argument does not absolve it of responsibility.

Its feedback to and quality control over contractors is weak. Addressing these fundamental shortcomings would not only ensure a fairer system for claimants. It would also reduce the cost to the public purse of correcting poor decision-making further down the line. (Paragraph 72)

18. The Department must learn from overturned decisions at appeal in a much more systematic and consistent fashion. We recommend it uses recording of assessments to start auditing and quality assuring the whole assessment process.

When a decision is overturned, the Department should also ensure that the HCP who carried out the initial assessment is identified and that an individual review of how the assessment was carried out is conducted. Given what we know about reasons for overturn, this should focus on improving questioning techniques and ensuring claimants’ statements are given due weight.

We also recommend the Department lead regular feedback meetings with contractors and organisations that support claimants. These should keep the Department informed of emerging concerns and ensure that swift action is taken to rectify them. (Paragraph 73)

Incentives and contracting

19. The Department’s quality standards for PIP and ESA set a low bar for what are considered acceptable reports. The definition of “acceptable” leaves ample room for reports to be riddled with obvious errors and omissions. Despite this, all three contractors have failed to meet key performance targets in any given period. It is difficult not to conclude that this regime contributes to a lack of confidence amongst claimants. (Paragraph 87)

20. The Department’s use of contractual levers to improve performance has not led to consistent improvements in assessment quality, especially in relation to PIP. Large sums of money have been paid to contractors despite quality targets having been universally missed. (Paragraph 88)

21. The PIP and ESA contracts are drawing to a close. In both cases, the decision to contract out assessments in the first instance was driven by a perceived need to introduce efficient, consistent and objective tests for benefit eligibility. It is hard to see how these objectives have been met. None of the providers has ever hit the quality performance targets required of them, and many claimants experience a great deal of anxiety over assessments.

The Department will need to consider whether the market is capable of delivering assessments at the required level and of rebuilding claimant trust. If it cannot—as already floundering market interest may suggest—the Department may well conclude assessments are better delivered in-house. (Paragraph 94).

While the above recommendations will help make some improvements, they alone are not sufficient to fix the fundamental lack of trust in the current assessment system. Many of us have been through more than one assessment for PIP and ESA – assessments that have too often been ordeals – and face more of them in the future. It’s a relentless process, and some of us have been forced to challenge Kafkaesque decisions more than once or twice.

The assessment is itself a challenge; after that, many of us face mandatory reconsideration, and sometimes a formal complaint is also appropriate. With no more than 18% of mandatory reconsiderations resulting in a reversal of unreasonable and often profoundly unfair decisions, we are then forced to go through an appeal. Often, within three months of winning an appeal, we face a reassessment – I did.

Many of us also need to claim PIP. My experiences of the ESA assessments were so distressing and damaging to my health, exacerbating my illness, that I put off claiming PIP in 2011. In fact I only claimed last year, and that was with a huge amount of support from my local council’s occupational therapy and welfare support teams. Despite my illness being progressive, I will be reassessed in 2020. 

Urgent reform of PIP and ESA is needed to ensure that disabled people are treated humanely, fairly and may maintain their dignity. It’s needed to ensure assessments are accurate, transparent and fair, and lead to disabled people getting the lifeline support that they need and are entitled to. 

“Independent” assessments were introduced to reduce successful disability benefit claims, to save money. That was a clearly stated objective. However, they have cost much more than they were intended to save.

__

Read the report summary

Read the conclusions and recommendations

Read the full report: PIP and ESA assessments

 


I don’t make any money from my work. But you can support Politics and Insights and contribute by making a donation which will help me continue to research and write informative, insightful and independent articles, and to provide support to others. The smallest amount is much appreciated, and helps to keep my articles free and accessible to all – thank you. 


DWP spent £100m on disability benefit appeals over 2 year period


Part 1 of this article is from the Press Association and part 2 is written by me.

 

The Department for Work and Pensions (DWP) has spent more than £100m in just over two years on administering reviews and appeals against disability benefits, figures show. 

Tens of millions of pounds a year are also spent by the Ministry of Justice on the appeals, about two-thirds of which were won by claimants in the past 12 months. 

The costs were described as “staggering” and a former Conservative minister said “something is seriously wrong with the system”. 

The DWP said a small proportion of decisions were overturned and most employment and support allowance and personal independence payment claimants were happy with their assessments. (However, please see: Summary of key problems with the DWP’s recent survey of claimant satisfaction.)

But the department is facing questions from the Work and Pensions Committee over the figures, following claims that the committee was not given similar information for its inquiry into PIP and ESA.

Figures obtained through a freedom of information request show the DWP has spent £108.1m on direct staffing costs for ESA and PIP appeals since October 2015. 

“Thousands of disabled individuals have had to fight to receive support to which they are legally entitled.”  

Since October 2015, 87,500 PIP claimants had their decision changed at mandatory reconsideration, while 91,587 others won their appeals at tribunal.

In the first half of 2017-18, 66% of 42,741 PIP appeals went in the claimant’s favour. 

The figures for ESA since October 2015 show 47,000 people had decisions revised at mandatory reconsideration and 82,219 appeals went in the claimant’s favour. 

So far in 2017-18, 68% of 35,452 ESA appeals have gone in favour of the claimant.

Ros Altmann, a Conservative peer and former DWP minister, said the money could be spent on benefits for those who need them, rather than the costs of fighting claims. 

Figures released to the select committee inquiry show further costs to taxpayers. 

The Ministry of Justice spent £103.1m on social security and child support tribunals in 2016-17, up from £92.6m the year before. 

In a letter to the committee, the then justice minister Dominic Raab said the average cost of an appeal had more than doubled to £579 in 2014-15 because PIP cases “now comprise a much larger proportion of the caseload” and require more members on the tribunal.

The MPs are due to publish the results of their inquiry on Wednesday. 

Frank Field, the committee chairman, has written to Esther McVey, the work and pensions secretary, to ask why MPs were not given the information. 

The DWP gave the committee the average cost of a mandatory reconsideration and appeal for PIP and ESA, but Field said it was unable to work out the full cost because information on whether PIP appeals were from new claimants or those being reassessed, which have different costs, was not available.

“That this data was provided in response to an FoI request, but not for our report, is doubly regrettable, since the key theme of our report is the need to introduce much greater trust and transparency into the PIP and ESA systems,” Field wrote.

A DWP spokeswoman said it was working to improve the process, including recruiting about 190 officers who will attend PIP and ESA appeals to provide feedback on decisions.

“We’ve already commissioned five independent reviews of the work capability assessment, implementing more than 100 of their recommendations, and two independent reviews of PIP assessments,” she said. 

“Meanwhile, we continue to spend more than £50bn a year on supporting people with disabilities and health conditions.”


Part 2

I’ll add to this, however, that according to the Office for National Statistics (ONS), spending on sickness and disability combined with social care costs was £53.275bn for 2016/17. Sickness and disability benefit spending was £43.545bn, and personal social services spending was £9.73bn.  

The National Audit Office (NAO) scrutinises public spending for Parliament and is independent of government. An audit report in 2016 concluded that the Department for Work and Pensions’ spending on contracts for disability benefit assessments was expected to double in 2016/17 compared with 2014/15. The government’s flagship welfare-cut scheme will actually be spending more money on the assessments themselves than it is saving in reductions to the benefits bill – as Frances Ryan pointed out in the Guardian, it’s the political equivalent of burning bundles of £50 notes.

The report also states that only half of all the doctors and nurses hired by Maximus – the US outsourcing company brought in by the Department for Work and Pensions to carry out the assessments – had even completed their training.

The NAO report summarises:

  • 5.5 million assessments completed in the five years up to March 2015
  • 65% estimated increase in cost per ESA assessment after the transfer of the service in 2015, based on published information (from £115 to £190)
  • 84% estimated increase in healthcare professionals across contracts, from 2,200 in May 2015 to 4,050 in November 2016
  • £1.6 billion estimated cost of contracted-out health and disability assessments over three years, 2015 to 2018
  • £0.4 billion latest expected reduction in annual disability benefit spending
  • 13% proportion of ESA and PIP assessment report quality targets met against the contractual standard (September 2014 to August 2015)

This summary reflects staggering economic incompetence and a flagrant, politically motivated waste of taxpayers’ money. Even worse, the higher spending has not created a competent or ethical assessment framework, nor is it improving the lives of sick and disabled people. 

The National Audit Office (NAO) found last year that the number of completed ESA assessments was below target, despite an expected doubling of the cost to the taxpayer of the contracts for disability benefit assessments, to £579m a year in 2016/17 compared with 2014/15.

The NAO said that nearly one in 10 of the reports on disabled people claiming support were rejected as below standard by the government. This compares with around one in 25 before Atos left its contract. 

The provider was not on track to complete the number of assessments expected last year and has also missed assessment report quality targets. 

Atos abandoned its contract early following mounting evidence that hundreds of thousands of ill and disabled people have been wrongly judged to be fit for work and ineligible for government support. 

The proportion of Capita PIP tests deemed unacceptable reached a peak of 56% in the three months to April 2015.

For Atos, the peak was 29.1% for one lot in June 2014. 

More than 2.7 million people have had a DWP decision regarding PIP since the benefit launched in 2013 – which suggests that tens of thousands went through an ‘unacceptable’ assessment.

The PCS union, which represents lower paid workers at the Department for Work and Pensions (DWP), told MPs during the Work and Pensions Committee inquiry: “We do not believe that there is any real quality control.

“Our belief is that delivering the assessments in-house is the only effective way for DWP to guarantee the level of quality that is required.” 

In evidence submitted to the Work and Pensions Committee, Capita said 95% of assessments are now deemed acceptable – giving the figure for the past year. The company said:

“This represents a significant improvement from previous years and producing quality reports for the DWP remains a top priority within Capita.”

“Additionally, we use a range of intelligence as indicators, to identify disability assessors who may not be operating at the high quality output levels we expect.

“This includes data from audit activity, coaching and monitoring.

“This enables us to continually monitor performance, and take appropriate internal actions… where necessary to ensure we continue to deliver a quality service.”

Atos claimed that 95.4% of tests are now acceptable, and said more work was needed to ensure the auditing process itself is “consistent”, adding: “We strive to deliver fair and accurate assessment reports 100% of the time.”

It also emerged that Atos and Capita employ just FOUR doctors between them. Most employees within the companies are nurses, paramedics, physiotherapists or occupational therapists. Capita’s chief medical officer Dr Ian Gargan confessed he was just one of two doctors at the firm’s PIP division, which has 1,500 staff.

He told the Commons Work and Pensions Committee: “Two thirds of our professionals have a nursing background and the remainder are from occupational therapy, physiotherapy and paramedicine.”

Dr Barrie McKillop, clinical director of Atos’ PIP division, admitted they too only had two doctors among their staff. 

Frank Field said: “You’ve got two doctors each, mega workload – maybe there’s a lot of doctors out there who would long for some part-time work.” 

“You haven’t sought them out to raise your game, have you?”

However Dr McKillop insisted Atos’ current model “is a strong one” and people “bring clinical experience in different areas”.

You can listen to this submission to Work and Pensions Committee’s PIP and ESA evidence session here. 

The witnesses are: Simon Freeman, Managing Director, Capita Personal Independence Payments, Dr Ian Gargan, Chief Medical Officer, Capita Personal Independence Payments, David Haley, Chief Executive, Atos Independent Assessment Services and Dr Barrie McKillop, Clinical Director, Atos Independent Assessment Services.

You can access the written evidence here.

Many of us have been campaigning for reforms to this failing system – complaints about PIP rose by nearly 880 per cent last year – and the Work and Pensions Committee’s inquiry report adds more pressure on the government to address a system that is failing so many people.

Since 2013 there have been 170,000 PIP appeals taken to the Tribunal: Claimants won in 108,000 cases – 63%. In the same time, there have been 53,000 ESA appeals. Claimants won in 32,000 – or 60% – of those cases.

Ministers have been citing statistics from a recent survey about satisfaction with Department for Work and Pensions services. However, I have critiqued the survey, and in particular, I faulted it because those claimants whose benefit had been disallowed by the Department were excluded. This means that the people most likely to register their dissatisfaction with the Department were not allowed to participate in the survey.

I also found some statistics that are not fully or adequately discussed in the survey report, and certainly not cited by Government ministers – they were tucked away in the Excel data tables referenced at the end of the report. These concern the problems and difficulties with the Department for Work and Pensions that arose for some claimants. 

It’s worrying that 51 per cent of all respondents across all types of benefits who experienced difficulties or problems in their dealings with the Department for Work and Pensions did not see them resolved. A further 4 per cent saw only a partial resolution, and 3 per cent didn’t know if there had been any resolution.

[Table: whether claimants’ problems or difficulties with the DWP were resolved, by benefit type. ‘–’ means the sample size is less than 40.]

In the Employment and Support Allowance (ESA) group, 50 per cent had unresolved problems with the Department, and in the Personal Independence Payment (PIP) group, 57 per cent of claimants had ongoing problems with the Department, while only 33 per cent saw their problems resolved. 

It is time that the Government stopped glossing over the fundamental problems with a system of assessment and decision-making for disability benefits that costs so much to administer, causes distress and hardship and, sometimes, costs people their lives. Fake statistics and PR-designed surveys don’t hide the mounting evidence of the catastrophic impact that the Conservative reforms have had on many people.

The impact of the welfare reforms on disabled people has been brutal. More than a third of those who have had their benefit cut say they’re struggling to pay for food, rent and bills, while 40% say they’ve become more isolated. Over 50,000 disabled people have lost access to Motability vehicles.

To the government’s utter shame, they have claimed that this state of affairs is acceptable for the past 4 years.  It never was, and it needs to change.

 


 

I don’t make any money from my work. I am disabled because of illness and have a very limited income. But you can help by making a donation to help me continue to research and write informative, insightful and independent articles, and to provide support to others. The smallest amount is much appreciated – thank you.


Summary of key problems with the DWP’s recent survey of claimant satisfaction

The Department for Work and Pensions Claimant Service and Experience Survey (CSES) is described as “an ongoing cross-sectional study with quarterly bursts of interviewing. The survey is designed to monitor customers’ satisfaction with the service offered by DWP and enable customer views to be fed into operational and policy development.”

The survey measures levels of satisfaction in a defined group of ‘customers’ who have had contact with the Department for Work and Pensions within the three-month period prior to the survey. The research was commissioned by the Department for Work and Pensions and conducted by Kantar Public UK, who undertake market research and social surveys, and specialise in consultancy, public opinion data, and policy and economy polling, among other things.

One problem with the aim of the survey is that satisfaction is an elusive concept – a subjective experience that is not easily definable, accessible or open to precise quantitative measurement. The selection of responses available to participants, and how these are measured and presented, also affects the survey outcome.

For example, two categories of response were conflated in the main report, with ‘satisfied’ and ‘fairly satisfied’ presented as a single category – which gives the impression that people are fully satisfied. However, a ‘fairly satisfied’ response indicates satisfaction only to some degree or extent – not fully, very or extremely satisfied. The presented findings, therefore, don’t distinguish between those who are fully satisfied with their interaction with the Department and those satisfied only to a moderate extent. Conflating these responses doesn’t provide the accurate ‘measurement’ of claimant satisfaction that the report claims. 
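The effect of collapsing categories is easy to demonstrate. The distribution below is entirely hypothetical – it is not taken from the CSES data – but it shows how a modest share of fully satisfied respondents can be presented as a much larger headline figure:

```python
# Illustration, with made-up numbers, of how conflating response categories
# changes the headline figure. This distribution is hypothetical, not CSES data.
responses = {
    "very satisfied": 30,
    "fairly satisfied": 46,
    "fairly dissatisfied": 12,
    "very dissatisfied": 12,
}
total = sum(responses.values())

# The conflated figure a press release might quote:
headline = (responses["very satisfied"] + responses["fairly satisfied"]) / total * 100
# The share who are actually fully satisfied:
fully = responses["very satisfied"] / total * 100

print(f"Conflated 'satisfied' figure: {headline:.0f}%")
print(f"Fully satisfied only:         {fully:.0f}%")
```

In this illustrative case a 76 per cent headline rests on fewer than a third of respondents being fully satisfied; the same conflation is what makes the published figures hard to interpret.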

Furthermore, there are statistics that are not fully or adequately discussed in the survey report, and certainly not cited by Government ministers – they were tucked away in the Excel data tables referenced at the end of the report. These concern the problems and difficulties with the Department for Work and Pensions that arose for some claimants. 

It’s worrying that 51 per cent of all respondents across all types of benefits who experienced difficulties or problems in their dealings with the Department for Work and Pensions did not see them resolved. A further 4 per cent saw only a partial resolution, and 3 per cent didn’t know if there had been any resolution.

In the job seeker’s allowance (JSA) category, some 53 per cent had unresolved problems with the Department and only 39 per cent had seen their problems resolved. In the Employment and Support Allowance (ESA) group, 50 per cent had unresolved problems with the Department, and in the Personal Independence Payment (PIP) group, 57 per cent of claimants had ongoing problems with the Department, while only 33 per cent saw their problems resolved. 

[Table: whether claimants’ problems or difficulties with the DWP were resolved, by benefit type. ‘–’ means the sample size is less than 40.]

Government officials have tended to select one set of statistics from the whole survey: “The latest official research shows that 76% of PIP claimants and 83% of ESA claimants are satisfied with their overall experience.” (Spokesperson for the Department for Work and Pensions.)

The first problem with this is that it overlooks the issues outlined above, giving the impression that people don’t have any problems with the Department. The second is that the survey conflates two sets of responses to arrive at the overall percentages.

The positive categories for responses are “satisfied” and “fairly satisfied”. Given the problem of interpreting and precisely expressing subjective states like satisfaction, there is also the problem of measuring degrees of subjective states. There is some difficulty with “fairly satisfied” responses, as they may simply indicate that people experienced some difficulties, but these were handled politely by the Department. There may be varied reasons why people chose this category.

Some people are more likely to try to see situations positively. It tells us nothing about outcomes for those people. The questionnaires were closed – meaning responses were limited to a small number of simple response categories. So the responses don’t have a particularly helpful context of meaning to help us understand them. 

Some basic problems with using closed questions in a survey:

  • It imposes a limited framework of responses on respondents
  • The survey may not have the exact answer the respondent wants to give
  • The questions lead and limit the scope of responses 
  • Respondents may select answers which are simply the most similar to their “true” response – the one they want to give but can’t because it isn’t in the response options – even though it is different
  • The options presented may confuse the respondent
  • Respondents with no opinion may answer anyway
  • They do not tell us whether the respondent actually understood the question being asked, or whether the response options provided accurately capture and reflect the respondents’ views.

Another problem which is not restricted to the use of surveys in research is the Hawthorne effect. This is a well-documented phenomenon that affects many areas of research and experiment in social sciences. It is the process where human subjects taking part in research change or modify their behaviour, simply because they are being studied. This is one of the most difficult inbuilt biases to eliminate or account for in research design. This was a survey conducted mostly over the telephone, which again introduces the risk of an element of ‘observer bias.’

Furthermore, the respondents in this survey had active, open benefit claims or had registered a claim. This may have had some effect on their responses, since they may have felt they were being scrutinised by the Department for Work and Pensions. Social relationships between the observer and the observed ought to be assessed when performing any type of social analysis and especially when there may be a perceived imbalanced power relationship between an organisation and the respondents, in any research that they conduct or commission.

Given the punitive nature of welfare policies, it is very difficult to determine the extent to which fear of reprisal may have influenced peoples’ responses, regardless of how many reassurances participants were given regarding anonymity in advance. 

The important bit about sampling practices: the changed sampling criteria impacted the results

The report states clearly: “The proportion of Personal Independence Payment customers who were ‘very dissatisfied’ fell from 19 per cent to 12 per cent over the same period.” 

Then comes the killer: “This is likely to be partly explained by the inclusion in the 2014/15 sample of PIP customers who had a new claim disallowed, who have not been sampled for the study since 2015/16. This brings PIP sampling into line with sampling practises for other benefits in the survey.”

In other words, those people with the greatest reason to be very dissatisfied with their contact with the Department for Work and Pensions  – those who haven’t been awarded PIP or ESA, for example – are not included in the survey. 

This introduces a problem in the survey called sampling bias. Sampling bias undermines the external validity of a survey (the capacity for its results to be accurately generalised to the entire population, in this case, of those claiming PIP and ESA). Given that people who are not awarded PIP and ESA make up a significant proportion of the PIP customer population who have registered for a claim, this will skew the survey result, slanting it towards positive responses.
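A minimal sketch of that skew, using the 46 per cent new-claim award rate cited below and two assumed satisfaction rates (the 85 per cent and 20 per cent figures are illustrative assumptions, not survey data):

```python
# Sketch of sampling bias: how dropping disallowed claimants from the
# sampling frame inflates the measured satisfaction rate.
award_rate = 0.46            # share of new PIP claims awarded (DWP figure)
sat_if_awarded = 0.85        # ASSUMED satisfaction among awarded claimants
sat_if_disallowed = 0.20     # ASSUMED satisfaction among disallowed claimants

# Satisfaction across ALL claimants who registered a claim:
true_sat = award_rate * sat_if_awarded + (1 - award_rate) * sat_if_disallowed

# What the survey measures once disallowed claims are excluded from the
# sampling frame: only the awarded group remains.
surveyed_sat = sat_if_awarded

print(f"All claimants:      {true_sat:.0%}")
print(f"Survey sample only: {surveyed_sat:.0%}")
```

Under these illustrative assumptions, satisfaction across the full claimant population would be around 50 per cent, while the survey, sampling only awarded claimants, would report 85 per cent – and no reweighting of the surveyed group can recover the excluded population’s views.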

Award rates for PIP (under normal rules, excluding withdrawn claims) for new claims are 46 per cent. However, they are higher for one group –  73 per cent for Disability Living Allowance (DLA) reassessment claims. This covers PIP awards made between April 2013 and October 2016. Nearly all special rules (for those people who are terminally ill) claimants are found eligible for PIP. 

If an entire section of the PIP claimant population are excluded from the sample, then there are no adjustments that can produce estimates that are representative of the entire population of PIP claimants.

The same is true of the other groups of claimants. If those who have had a new claim disallowed are excluded (and again, bearing in mind that only 46 per cent of new claims for PIP resulted in an award), then a considerable proportion of claimants across all benefit types – those most likely to have registered a lower level of satisfaction with the Department because their claim was disallowed – is left out. This means the survey cannot be used to accurately track the overall performance of the Department, or to monitor whether it is fulfilling its customer charter commitments. The survey excludes the possibility of monitoring and scrutinising the Department’s decision-making and claimant outcomes when the decision reached isn’t in the claimant’s favour.

The report clearly states: “There was a revision to sample eligibility criteria in 2014/15. Prior to this date the survey included customers who had contacted DWP within the past 6 months. From 2014/15 onwards this was shortened to a 3 month window. This may also have impacted on trend data.” 

We have no way of knowing why those peoples’ claim was disallowed. We have no way of knowing if this is due to error or poor administrative procedures within the Department. If the purpose of a survey like this is to produce a valid account of levels of ‘customer satisfaction’ with the Department, then it must include a representative sample of all of those ‘customers’, and include those whose experiences have been negative.

Otherwise the survey is reduced to little more than a PR exercise for the Department. 

The sampling procedure therefore permits only an unrepresentative sample of people to participate in the survey: those likeliest to produce the most positive responses, because their experiences within the survey time frame have largely ended in a positive outcome. If those who have been sanctioned are also excluded across the sample, this will further hide the experiences and comments of those most adversely affected by the Department’s policies, decisions and administrative procedures – again, the claimants likeliest to register their dissatisfaction in the survey.

Measurement error occurs when a survey respondent’s answer to a survey question is inaccurate, imprecise, or cannot be compared in any useful way to other respondents’ answers. This type of error results from poor question wording and questionnaire construction. Closed and directed questions may also contribute to measurement error, along with faulty assumptions and imperfect scales. The kind of questions asked may also have limited the scope of the research.

For example, there’s a fundamental difference between asking questions like “Was the advisor polite on the telephone?” and “Did the decision-maker make the correct decision about your claim?”. The former generates responses that are relatively simplistic and superficial, while the latter is rather more informative, telling us much more about how well the DWP fulfils one of its key functions, rather than demonstrating only how politely staff go about discussing claim details with claimants.

This survey is not going to produce a valid range of accounts or permit a reliable generalisation regarding the wider population’s experiences with the Department for Work and Pensions. Nor can the limited results provide meaningful conclusions to inform a genuine learning opportunity and support a commitment to improvement for the Department.

With regard to the department’s Customer Charter, this survey does not include valid feedback and information regarding this section in particular:

Getting it right

We will:
• Provide you with the correct decision, information or payment
• Explain things clearly if the outcome is not what you’d hoped for
• Say sorry and put it right if we make a mistake 
• Use your feedback to improve how we do things

One other issue with the sampling is that the Employment and Support Allowance (ESA) and Job Seeker’s Allowance (JSA) groups were overrepresented in the cohort. 

The sample was intentionally designed to overrepresent these groups in order to allow “robust quarterly analysis of these benefits”, according to the report. However, because a proportion of the cohort – those having their benefit disallowed – was excluded in the latest survey but not the previous one, cross-comparison and establishing trends over time are problematic.

Kantar do say: “When reading the report, bear in mind the fact that customers’ satisfaction levels are likely to be impacted by the nature of the benefit they are claiming. As such, it is more informative to look at trends over time for each benefit rather than making in-year comparisons between benefits.” 

With regard to my previous point, Kantar also say: “Please also note that there was a methodological change to the way that Attendance Allowance, Disability Living Allowance and Personal Independence Payment customers were sampled in 2015/16 which means that for these benefits results for 2015/16 are not directly comparable with previous years.” 

And: “As well as collecting satisfaction at an overall level, the survey also collects data on customers’ satisfaction with specific transactions such as ‘making a claim’, ‘reporting a change in circumstances’ and ‘appealing a decision’ (along with a number of other transactions) covering the remaining aspects of the DWP Customer Charter. These are not covered in this report, but the data are presented in the accompanying data tabulations.”

The survey also covered only those who had been in touch with DWP over a three month period shortly prior to the start of fieldwork. As such it is a survey of contacting customers rather than all benefits customers.

Again it is problematic to make inferences and generalisations about the levels of satisfaction among the wider population of claimants, based on a sample selected by using such a narrow range of characteristics.

The report also says: “Parts of the interview focus on a specific transaction which respondents had engaged in (for example making a claim or reporting a change in circumstances). In cases where a respondent had been involved in more than one transaction, the questionnaire prioritised less common or more complex transactions. As such, transaction-specific measures are not representative of ALL transactions conducted by DWP”.

And regarding subgroups: “When looking at data for specific benefits, the base sizes for benefits such as Employment and Support Allowance and Jobseeker’s Allowance (circa 5,500) are much larger than those for benefits such as Carer’s Allowance and Attendance Allowance (circa 450). As such, the margins of error for Employment and Support Allowance and Jobseeker’s Allowance are smaller than those of other benefits and it is therefore possible to identify relatively small changes as being statistically significant.”

Results from surveys are estimates and there is a margin of error associated with each figure quoted in this report. The smaller the sample size, the greater the uncertainty.
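To give a rough, illustrative sense of scale (assuming simple random sampling, which the survey’s quota-based fieldwork only approximates), the 95 per cent margin of error for a reported proportion can be sketched as 1.96 × √(p(1−p)/n). The base sizes below are those quoted in the report; the 76 per cent figure is the headline PIP satisfaction rate:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p
    with base size n, assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Base sizes quoted in the report: circa 5,500 for ESA/JSA,
# circa 450 for Carer's Allowance / Attendance Allowance.
for n in (5_500, 450):
    moe = margin_of_error(0.76, n)  # e.g. a 76% satisfaction figure
    print(f"n = {n:>5}: 76% plus or minus {moe:.1%}")
```

Roughly a ±1 percentage point margin at the larger base size against ±4 points at the smaller – which is why small changes can register as statistically significant for ESA and JSA but not for the smaller benefit groups.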

In fairness, the report does state: “In the interest of avoiding misinterpretation, data with a base size of less than 100 are omitted from the charts in this report.” 

On non-sampling error, the report says: “Surveys depend on the responses given by participants. Some participants may answer questions inaccurately and some groups of respondents may be more likely to refuse to take part altogether. This can introduce biases and errors. Nonsampling error is minimised by the application of rigorous questionnaire design, the use of skilled and experienced interviewers who work under close supervision  and rigorous quality assurance of the data.

Differing response rates amongst key sub-groups are addressed through weighting. Nevertheless, it is not possible to eliminate non-sampling error altogether and its impact cannot be reliably quantified.”

As I have pointed out, sampling error in a statistical analysis may also arise from the unrepresentativeness of the sample taken. 

The survey response rates were not discussed either. In the methodological report, it says: “In 2015/16 DWP set targets each quarter for the required number of interviews  for each benefit group to either produce a representative proportion of the benefit group in the eventual survey or a higher number of interviews for sub-group analysis where required. It is therefore not strictly appropriate to report response rates as fieldwork for a benefit group ceased if a target was reached.” 

The Government says: “This research monitors claimants’ satisfaction with DWP services and ensures their views are considered in operational and policy planning.”

Again, it doesn’t include those claimants whose benefit support has been disallowed. There is considerable controversy around disability benefit award decisions (and sanctioning) in particular, yet the survey does not address this important issue, since those experiencing negative outcomes are excluded from the survey sample. We know that there is a problem with the PIP and ESA benefits award decision-making processes, since a significant proportion of those people who go on to appeal DWP decisions are subsequently awarded their benefit.

The DWP, however, don’t seem to have any interest in genuine feedback from this group that may contribute to an improvement in both performance and decision-making processes, leading to improved outcomes for disabled people.

Last year, between April and June, judges ruled that 14,077 people should be given PIP, overturning the Government’s decisions not to award it – 65 per cent of all cases heard. The figure is higher still when it comes to ESA (68 per cent). PIP and ESA claimants accounted for some 85 per cent of all benefit appeals.

The system, also criticised by the United Nations because it “systematically violates the rights of disabled persons”, seems to have been deliberately set up in a way that tends towards disallowing support awards. The survey excluded the voices of those people affected by this government’s absolute callousness or simple bureaucratic incompetence. The net effect, consequent distress and hardship caused to sick and disabled people is the same regardless of which it is.

Given that only 18 per cent of PIP decisions to disallow a claim are reversed at mandatory reconsideration, I’m inclined to think that this isn’t just a case of bureaucratic incompetence: the opportunity for the DWP to rectify its mistakes doesn’t, in the majority of cases, result in a subsequent correct decision for those refused an award.

Without an urgent overhaul of the assessment process by the Government, the benefit system will continue to work against disabled people, instead of for them.

The Government claim: “The objectives of this research are to:

  • capture the views and experiences of DWP’s service from claimants, or their representatives, who used their services recently
  • identify differences in the views and experiences of people claiming different benefits
  • use claimants’ views of the service to measure the department’s performance against its customer charter”

The commissioned survey does not genuinely meet those objectives.

                                         

“There is an alternative reality being presented by the other side. The use of figures diminishes disabled peoples’ experiences.”

You can read my full analysis of the survey here: A critique of the government’s claimant satisfaction survey

 


 

I don’t make any money from my work. I am disabled because of illness and have a very limited income. But you can help by making a donation to help me continue to research and write informative, insightful and independent articles, and to provide support to others. The smallest amount is much appreciated – thank you.


A critique of the government’s claimant satisfaction survey

“An official survey shows that 76% of people in the [PIP] system responded to say that they were satisfied. That itself is not a happy position, but it shows that her representation of people’s average experience as wholly negative on the basis of a Twitter appeal does not reflect the results of a scientific survey.”  Stephen Kerr, (Conservative and Unionist MP for Stirling), Personal Independence Payments debate, Hansard, Volume 635, Column 342WH, 31 January 2018 

“The latest official research shows that 76% of PIP claimants and 83% of ESA claimants are satisfied with their overall experience.” Spokesperson for the Department for Work and Pensions.

The Department for Work and Pensions Claimant Service and Experience Survey (CSES) is described as “an ongoing cross-sectional study with quarterly bursts of interviewing. The survey is designed to monitor customers’ satisfaction with the service offered by DWP and enable customer views to be fed into operational and policy development.”

The survey measures levels of satisfaction in a defined group of ‘customers’ who have had contact with the Department for Work and Pensions within a three-month period prior to the survey.

One problem with the aim of the survey is that satisfaction is an elusive concept – a subjective experience that is not easily definable, accessible or open to precise quantitative measurement. 

Furthermore, some statistics are not fully or adequately discussed in the survey report – they were tucked away in the Excel data tables referenced at the end of the report – and are certainly not cited by Government ministers: in particular, those concerning the problems and difficulties with the Department for Work and Pensions that arose for some claimants.

It’s worrying that 51 per cent of all respondents across all types of benefits who experienced difficulties or problems in their dealings with the Department for Work and Pensions did not see them resolved. A further 4 per cent saw only a partial resolution, and 3 per cent didn’t know if there had been any resolution.

In the Jobseeker’s Allowance (JSA) category, some 53 per cent had unresolved problems with the Department and only 39 per cent had seen their problems resolved. In the Employment and Support Allowance (ESA) group, 50 per cent had unresolved problems with the Department, and in the Personal Independence Payment (PIP) group, 57 per cent of claimants had ongoing problems with the Department, while only 33 per cent had seen their problems resolved.

[Table: proportion of claimants reporting problems resolved, partly resolved or unresolved, by benefit type]

–  means the sample size is less than 40. 

A brief philosophical analysis

The survey powerfully reminded me of Jeremy Bentham’s Hedonistic Calculus, which was an algorithm designed to measure pleasure and pain, as Bentham believed the moral rightness or wrongness of an action to be a function of the amount of pleasure or pain that it produced.

Bentham discussed at length some of the ways that moral investigations are a ‘science’. There is an inherent contradiction in Bentham’s work between his positivism, which is founded on the principle of verification – this says that a sentence is strictly meaningful only if it expresses something that can be confirmed or disconfirmed by empirical observation (establishing facts, which are descriptive) – and his utilitarianism, which concerns normative ethics (values, which are prescriptive). Bentham conflates the fact-value distinction when it suits his purpose, as do the current Government.

The recent rise in ‘happiness’, ‘wellbeing’ and ‘satisfaction’ surveys is linked with Bentham’s utilitarian ideas, and with a Conservative endorsement of entrenched social practices as a consequence of this broadly functionalist approach. It’s not only a reflection of the government’s simplistic, reductionist view of citizens; it’s also a reflection of the reduced functioning and increasing rational incoherence of a neoliberal state.

As we have witnessed over recent years, utilitarian ideologues in power tend to impose their vision of the ‘greatest happiness for the greatest number’, which may entail negative consequences for minorities and socially marginalised groups. For example, the design of a disciplinarian, coercive and punitive welfare system to make ‘the taxpayer’ or ‘hard-working families’ happy (both groups being perceived as the majority). The happiness of those people who don’t currently conform to a politically defined norm doesn’t seem to matter to the Government. Of course, people claiming welfare support pay tax, and more often than not, paid tax before needing support.

Nonetheless, those in circumstances of poverty are regarded as acceptable collateral damage in the war for the totalising neoliberal terms and conditions of the ‘greater good’ of society, sacrificed for the greatest happiness of others. As a consequence, we live in a country where tax avoidance is considered more acceptable behaviour than being late for a job centre appointment. Tax avoidance and offshore banking is considered more ‘sustainable’ than welfare support for disabled people. 

This utilitarian problem, arising because of a belief that a state’s imposed paradigm of  competitive socioeconomic organisation is the way to bring about the greatest happiness of the greatest number, also causes the greatest misery for some social groups. This is a problem that raises issues with profound implications for democracy, socioeconomic inclusion, citizenship and human rights. 

My point is that the very nature and subject choice of the research is a reflection of a distinctive political ideology, which is problematic, especially when the survey is passed off as ‘objective’ and ‘value-neutral’.

There are certain underpinning and recognisable assumptions drawn from the doctrine of utilitarianism, which became a positivist pseudoscience in the late nineteenth century. The idea that human behaviour should be mathematised in order to turn the study of humans into a science proper strips humans down to the simplest, most basic motivational structures, in an attempt to reduce human behaviour to a formula. To be predictable in this way, behaviour must also be determined.

Yet we have a raft of behavioural economists complaining of everyone else’s ‘cognitive bias’, who have decided to go about helping the population to make decisions in their own and society’s best interests. These best interests are defined by behavioural economists. The theory that people make faulty decisions somehow exempts the theorists from their own theory, of course. However, if decisions and behaviours are determined, so are the theories about decisions and behaviours. Behavioural science itself isn’t value-neutral, being founded on a collection of ideas called libertarian paternalism, which is itself a political doctrine.

The Government have embraced these ideas, which are based on controversial assumptions. 

The current government formulates many policies with ‘behavioural science’ theory and experimental methodology behind them, which speaks in a distinct language of individual and social group ‘incentives’, ‘optimising decision-making’ and all for the greater ‘good of society’ (where poor citizens tend to get the cheap policy package of thrifty incentives, which entail austerity measures and having their income reduced, whereas wealthy citizens get the deluxe package, with generous financial rewards and free gifts.) 

There are problems with trying to objectively measure a subjectively experienced phenomenon. There are major contradictions in the ideas that underpin the motive to do so. There is also a problem with using satisfaction surveys as a measure of the success or efficacy of government policies and practices.

A little about the company commissioned to undertake the survey

The research was commissioned by the Department for Work and Pensions and conducted by Kantar Public UK – who undertake marketing research, social surveys, and also specialise in consultancy, public opinion data, policy and economy polling, with, it seems, multi-tasking fingers in several other lucrative pies.

Kantar Public “Works with clients in government, the public sector, global institutions, NGOs and commercial businesses to advise in the delivery of public policy, public services and public communications.” 

“Kantar Public will deliver global best practice through local, expert teams; will synthesise innovations in marketing science, data analytics with the best of classic social research approaches; and will build on a long history of methodological innovation to deliver public value. It includes consulting, research and analytical capabilities.” (A touch of PR and technocracy.)

Eric Salama, Kantar CEO, commented on the launch of this branch of Kantar Public in 2016: “We are proud of the work that we do in this sector, which is growing fast. Its increasing importance in stimulating behavioural change in many aspects of societies requires the kind of expert resource and investment that Kantar Public will provide.”

The world seems to be filling up with self-appointed, utilitarian choice architects. Who needs to live in a democracy when we have so many people who say they’re not only looking out for our ‘best interests’, but defining them, and also helping us all to make “optimum choices” (whatever they may be)? All of these flourishing technocratic businesses are of course operating without a shred of cognitive bias or self-consciousness of their own. Apparently, the whopping profit motive isn’t a bias at all. It’s only everyone else that is cognitively flawed.

Based on those assumptions, what could possibly go wrong, right?

I digress. 

The nitty-gritty

Ok, so having set the table, I’m going to nibble at the served dish: Kantar’s survey, commissioned by the Government and cited – in the opening quotes – by the Government. The quotes have been repeated in the media, in a Commons debate, and even presented as evidence in a Commons Committee inquiry into disability support (Personal Independence Payments and Employment and Support Allowance).

It seems that no-one has examined the validity and reliability of the survey cited; it has simply been taken at face value. It’s assumed that the methodology, interpretation and underlying motives are neutral, value-free and ‘objective’. In fact, the survey has been described as ‘scientific’ by at least one Conservative MP.

There are a couple of problems, however, with that. My first point is a general one about quantitative surveys, especially those using closed questions. This survey was conducted mostly by telephone, and most of the questions in the questionnaire were closed.

Some basic problems with using closed questions in a survey:

  • They impose a limited framework of responses on respondents
  • The survey may not offer the exact answer a respondent wants to give
  • The questions lead respondents and limit the scope of their responses
  • Respondents may select the answer that is simply the most similar to their “true” response – the one they want to give but can’t, because it isn’t among the options – even though it is different
  • The options presented may confuse the respondent
  • Respondents with no opinion may answer anyway
  • They do not tell us whether the respondent actually understood the question being asked, or whether the response options provided accurately capture and reflect the respondent’s views

Another problem which is not restricted to the use of surveys in research is the Hawthorne effect. The respondents in this survey had active, open benefit claims or had registered a claim. This may have had some effect on their responses, since they may have felt scrutinised by the Department for Work and Pensions. Social relationships between the observer and the observed ought to be assessed when performing any type of social analysis and especially when there may be a perceived imbalanced power relationship between an organisation and the respondents in any research that they conduct or commission.

Given the punitive nature of welfare policies, it is very difficult to determine the extent to which fear of reprisal may have influenced peoples’ responses, regardless of how many reassurances participants were given regarding anonymity in advance.

The respondents in a survey may not be aware that their responses are to some extent influenced because of their relationship with the researcher (or those commissioning the research); they may subconsciously change their behaviour to fit the expected results of the survey, partly because of the context in which the research is being conducted.

The Hawthorne Effect is a well-documented phenomenon that affects many areas of research and experiment in social sciences. It is the process where human subjects taking part in research change or modify their behaviour, simply because they are being studied. This is one of the hardest inbuilt biases to eliminate or factor into research design. This was a survey conducted over the telephone, which again introduces the risk of an element of ‘observer bias.’

Methodological issues

On a personal level, I don’t believe declared objectivity in research means that positivism and quantitative research methodology have an exclusive stranglehold on ‘truth’. I don’t believe there is a universally objective, external vantage point that we can reach from within the confines of our own human subjectivity, nor can we escape an intersubjectively experienced social, cultural, political and economic context.

There is debate around verificationism, not least because the verification principle itself is unverifiable. The positivist approach more generally treats human subjects as objects of interest and research – much like phenomena studied in the natural sciences. As such, it has an inbuilt tendency to dehumanise the people being researched. Much human meaning and experience gets lost in the translation of responses into quantified data, since the chief goal of statistical analysis is to identify trends.

An example of the employment of ‘objective’ and ‘value-neutral’ methods resulting in dehumanisation is some of the inappropriate questions asked during assessment for disability benefits. The Work and Pensions Select Committee received nearly 4,000 submissions – the most received by a select committee inquiry – after calling for evidence on the assessments for personal independence payment (PIP) and Employment and Support Allowance (ESA). 

The recent committee report highlighted people with Down’s syndrome being asked when they ‘caught’ it. Assessors have asked insulting and irrelevant questions, such as when someone with a progressive condition will recover, and what level of education they have.

This said, my own degree and Master’s, undertaken in the 1990s, and my profession up until 2010, when I became too ill to work, were actually used as an indication that I have “no cognitive problems” in 2017, after some 7 years of being unable to work because of the symptoms of a progressive illness that is known to cause cognitive problems. My driving licence in 2003 was also used as evidence of my cognitive functioning.

Yet I had explained that I have been unable to drive since 2004 because of my sensitivity to flickering (lamp posts, trees and telegraph poles have a strobe-light effect on me as the car moves), which triggers vertigo, nausea, severe coordination difficulties, scintillating scotoma and subsequent loss of vision, slurred and incoherent speech, severe drowsiness, muscle rigidity and uncontrollable jerking in my legs. I usually get an incapacitating headache, too. I’m sensitive to flashing or flickering lights and to certain patterns – ripples on a pond, some black and white stripe patterns – and even walking past railings on an overcast day completely incapacitates me.

The PIP assessment framework is claimed to be ‘independent, unbiased and objective’. Central to the process is the use of ‘descriptors’, which are a limited set of criteria used to ‘measure’ the impact of the day-to-day level of disability that a person experiences. Assessors use objective methods such as “examination techniques, collecting robust evidence, selecting the correct descriptor as to the claimant’s level of ability in each of the 10 activities of daily living and two mobility activities, and report writing.” They speak the language of positivism with fluency.

However, positivism does not accommodate human complexity, vulnerability and context very well. In an assessment situation, the assessor is a stranger to the person undergoing the assessment. How appropriate is it that a stranger assessing ‘functional capacity’ asks disabled people why they have not killed themselves? Alice Kirby is one of many people this happened to.

She says: “In this setting it’s not safe to ask questions like these because assessors have neither the time or skills to support us, and there’s no consideration of the impact it could have on our mental health.

The questions were also completely unnecessary, they were barely mentioned in my report and had no impact on my award.”

So, not only an extremely insensitive and potentially risk-laden question but an apparently pointless one. 

It may be argued that some universal ‘truths’ such as the importance of ‘impartiality’, or ‘objectivity’ are little more than misleading myths which allow practitioners and researchers alike to claim, and convince themselves, that they behave in a manner that is morally robust and ethically defensible.

A brief discussion of the methodological debate  


Social phenomena cannot always be studied in the same way as natural phenomena, because human beings are subjective, intentional and have a degree of free will. One problem with quantitative research is that it tends to impose theoretical frameworks on those being studied, and it limits responses from those participating in the study. Quantitative surveys tend not to capture or generate understanding about the lived, meaningful experiences of real people in context.

There are also distinctions to be made between facts, values and meanings. Qualitative researchers are concerned with generating explanations and extending understanding  rather than simply describing and measuring social phenomena and attempting to establish basic cause and effect relationships.

Qualitative research tends to be exploratory, potentially illuminating underlying intentions, responses, beliefs, reasons, opinions, and motivations to human behaviours. This type of analysis often provides insights into social problems, helps to develop ideas and establish explanations, and may also be used to formulate hypotheses for further quantitative research.

The dichotomy between quantitative and qualitative methodological approaches, theoretical structuralism (macro-level perspectives) and interpretivism (micro-level perspectives) in sociology, for example, is not nearly so clear as it once was, however, with many social researchers recognising the value of both means of data and evidence collection and employing methodological triangulation, reflecting a commitment to methodological and epistemological pluralism.

Qualitative methods of research tend to be much more inclusive, detailed and expansive than quantitative analysis, lending participants a dialogic, democratic and first hand voice regarding their own experiences.

The current government has tended to dismiss qualitative evidence from first hand witnesses of the negative impacts of their policies – presented case studies, individual accounts and ethnographies – as ‘anecdotal’. This presents a problem in that it stifles legitimate feedback. An emphasis on positivism reflects a very authoritarian approach to social administration, and it needs to be challenged.

A qualitative approach to research is open and democratic. It potentially provides insight, depth and richly detailed accounts. The evidence collected is much more coherent and comprehensive, because it explores beneath surface appearances, reaching beyond simple causal relationships and delving much deeper than an analysis of ranks, categories and counts. It provides a reliable and rather more authentic record of experiences, attitudes, feelings and behaviours; it prompts openness and is expansive, whereas quantitative methods tend to limit responses and are somewhat reductive.

Qualitative research methods encourage people to expand on their responses and may then open up new issues and topic areas not initially considered by researchers.

Government ministers like to hear facts, figures and statistics all the time. What we need to bring to the equation is a real, live human perspective. We need to let ministers know how the policies they are implementing directly impact on their own constituents and social groups more widely.

Another advantage of qualitative methods is that they are prefigurative and bypass problems regarding potential power imbalances between the researcher and the subjects of research, by permitting participation (as opposed to respondents being acted upon) and creating space for genuine dialogue and reasoned discussions to take place. Research regarding political issues and policy impacts must surely engage citizens on a democratic, equal basis and permit participation in decision-making, to ensure an appropriate balance of power between citizens and the state.

Quantitative research draws on surveys and experimental research designs which limit the interaction between the investigator and those being investigated. Systematic sampling techniques are used in order to control the risk of bias. However, not everyone agrees that this method is an adequate safeguard against bias.

Kantar say in their published survey report: “As the Personal Independence Payment has become more established and its customer base increased, there has been an increase in overall satisfaction from 68 per cent in 2014/15 to 76 per cent in 2015/16. This increase is driven by an increase in the proportion of customers reporting that they were ‘very satisfied’ which rose from 25 per cent in 2014/15 to 35 per cent in 2015/16.”

Sampling practices

The report states clearly: “The proportion of Personal Independence Payment customers who were ‘very dissatisfied’ fell from 19 per cent to 12 per cent over the same period.”

Then comes the killer: “This is likely to be partly explained by the inclusion in the 2014/15 sample of PIP customers who had a new claim disallowed who have not been sampled for the study since 2015/16. This brings PIP sampling into line with sampling practises for other benefits in the survey.”

In other words, those people with the greatest reason to be very dissatisfied with their contact with the Department for Work and Pensions – those who haven’t been awarded PIP, for example – are not included in the survey.

This introduces a problem in the survey called sampling bias. Sampling bias undermines the external validity of a survey (the capacity for its results to be accurately generalised to the entire population, in this case, of those claiming PIP). Given that people who are not awarded PIP make up a significant proportion of the PIP customer population who have registered for a claim, this will skew the survey result, slanting it towards positive responses.
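The effect of this kind of exclusion on a headline satisfaction figure can be sketched with a few lines of arithmetic. The figures below are invented for illustration only – they are not from the Kantar survey – but the mechanism is the same: drop the subgroup with the lowest satisfaction and the overall figure rises.

```python
# Hypothetical illustration of sampling bias; the satisfaction rates here are
# assumed for the sake of the example, not taken from the survey.
# Suppose 46% of new PIP claims are awarded and 54% are disallowed, and that
# the disallowed group is far less satisfied with the Department.

def mean_satisfaction(groups):
    """Weighted mean satisfaction across (population_share, satisfaction_rate) groups."""
    total_share = sum(share for share, _ in groups)
    return sum(share * rate for share, rate in groups) / total_share

awarded = (0.46, 0.80)     # assumed: 80% of awarded claimants report satisfaction
disallowed = (0.54, 0.20)  # assumed: 20% of disallowed claimants report satisfaction

full_population = mean_satisfaction([awarded, disallowed])
sampled_only = mean_satisfaction([awarded])  # disallowed claimants excluded from sample

print(f"Full claimant population: {full_population:.0%}")  # prints 48%
print(f"Awarded claimants only:   {sampled_only:.0%}")     # prints 80%
```

On these assumed figures, excluding disallowed claimants inflates measured satisfaction from 48 per cent to 80 per cent – and no reweighting of the remaining sample can recover the missing group’s views.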

Award rates for PIP (under normal rules, excluding withdrawn claims) for new claims are 46 per cent. However, they are at 73 per cent for Disability Living Allowance (DLA) reassessment claims. This covers PIP awards made between April 2013 and October 2016. Nearly all special rules (for those people who are terminally ill) claimants are found eligible for PIP. 

If an entire segment of the PIP claimant population are excluded from the sample, then there are no adjustments that can produce estimates that are representative of the entire population of PIP claimants.

The same is true of the other groups of claimants. If those who have had a new claim disallowed are excluded (and, again, bearing in mind that only 46 per cent of new claims for PIP resulted in an award), then a considerable proportion of claimants registering across all types of benefits – those likely to have reported lower satisfaction precisely because their claim was disallowed – are missing. This means the survey cannot be used to accurately track the Department’s overall performance, or to monitor whether it is fulfilling its customer charter commitments.

The report clearly states: “There was a revision to sample eligibility criteria in 2014/15. Prior to this date the survey included customers who had contacted DWP within the past 6 months. From 2014/15 onwards this was shortened to a 3 month window. This may also have impacted on trend data.” 

We have no way of knowing why those people’s claims were disallowed. We have no way of knowing whether this is due to error or poor administrative procedures within the Department. If the purpose of a survey like this is to produce a valid account of levels of ‘customer satisfaction’ with the Department, then it must include a representative sample of all of those ‘customers’, including those whose experiences have been negative.

Otherwise the survey is reduced to little more than a PR exercise for the Department. 

The sampling procedure therefore permits only an unrepresentative sample of people to participate in the survey – those likeliest to give the most positive responses, because their experiences within the survey time frame have had largely positive outcomes. If those who have been sanctioned are also excluded across the sample, this will additionally hide the experiences and comments of those most adversely affected by the Department’s policies and administrative procedures – again, the claimants likeliest to register dissatisfaction in the survey.

Measurement error occurs when a survey respondent’s answer to a survey question is inaccurate, imprecise, or cannot be usefully compared with other respondents’ answers. This type of error results from poor question wording and questionnaire construction. Closed and directed questions may also contribute to measurement error, along with faulty assumptions and imperfect scales. The kinds of questions asked may also have limited the scope of the research.

For example, there’s a fundamental difference between asking “Was the advisor polite on the telephone?” and “Did the decision-maker make the correct decision about your claim?”. The former generates responses that are relatively simplistic and superficial; the latter is rather more informative, telling us much more about how well the DWP fulfils one of its key functions, rather than demonstrating only how politely staff discuss claim details with claimants.

This survey is not going to produce a valid range of accounts or permit reliable generalisation about the wider population’s experiences with the Department for Work and Pensions. Nor can it provide a template for a genuine learning opportunity and commitment to improvement for the Department.

With regard to the department’s Customer Charter, this survey does not include valid feedback and information regarding this section in particular:

Getting it right

We will:
• Provide you with the correct decision, information or payment
• Explain things clearly if the outcome is not what you’d hoped for
• Say sorry and put it right if we make a mistake 
• Use your feedback to improve how we do things

One other issue with the sampling is that the Employment and Support Allowance (ESA) and Jobseeker’s Allowance (JSA) groups were overrepresented in the cohort.

Kantar do say: “When reading the report, bear in mind the fact that customers’ satisfaction levels are likely to be impacted by the nature of the benefit they are claiming. As such, it is more informative to look at trends over time for each benefit rather than making in-year comparisons between benefits.” 

The sample was intentionally designed to overrepresent these groups in order to allow “robust quarterly analysis of these benefits”, according to the report. However, because a proportion of the cohort – those who had their benefit disallowed – were excluded from the latest survey but not the previous one, cross-comparison and establishing trends over time are problematic.


With regard to my previous point: “Please also note that there was a methodological change to the way that Attendance Allowance, Disability Living Allowance and Personal Independence Payment customers were sampled in 2015/16 which means that for these benefits results for 2015/16 are not directly comparable with previous years.” 

And: “As well as collecting satisfaction at an overall level, the survey also collects data on customers’ satisfaction with specific transactions such as ‘making a claim’, ‘reporting a change in circumstances’ and ‘appealing a decision’ (along with a number of other transactions) covering the remaining aspects of the DWP Customer Charter. These are not covered in this report, but the data are presented in the accompanying data tabulations.”

The survey also covered only those who had been in touch with DWP over a three month period shortly prior to the start of fieldwork. As such it is a survey of contacting customers rather than all benefits customers.

Again it is problematic to make inferences and generalisations about the levels of satisfaction among the wider population of claimants, based on a sample selected by using such a narrow range of characteristics.

The report also says: “Parts of the interview focus on a specific transaction which respondents had engaged in (for example making a claim or reporting a change in circumstances). In cases where a respondent had been involved in more than one transaction, the questionnaire prioritised less common or more complex transactions. As such, transaction-specific measures are not representative of ALL transactions conducted by DWP.”

And regarding subgroups: “When looking at data for specific benefits, the base sizes for benefits such as Employment and Support Allowance and Jobseeker’s Allowance (circa 5,500) are much larger than those for benefits such as Carer’s Allowance and Attendance Allowance (circa 450). As such, the margins of error for Employment and Support Allowance and Jobseeker’s Allowance are smaller than those of other benefits and it is therefore possible to identify relatively small changes as being statistically significant.”

Results from surveys are estimates and there is a margin of error associated with each figure quoted in this report. The smaller the sample size, the greater the uncertainty.
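The relationship between base size and margin of error can be made concrete with the standard formula for a proportion’s confidence interval. The base sizes below are the ones the report mentions; the formula (95 per cent confidence, worst-case p = 0.5) is the textbook calculation, not necessarily Kantar’s exact methodology.

```python
# Sketch of the standard margin-of-error calculation for a proportion,
# at 95% confidence with the conservative worst case p = 0.5.
# Base sizes are those quoted in the report; the rest is the textbook formula.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for benefit, n in [("ESA/JSA", 5500), ("Carer's/Attendance Allowance", 450)]:
    print(f"{benefit}: n = {n}, margin of error ≈ ±{margin_of_error(n):.1%}")
# prints:
# ESA/JSA: n = 5500, margin of error ≈ ±1.3%
# Carer's/Attendance Allowance: n = 450, margin of error ≈ ±4.6%
```

This is why, as the report notes, a small year-on-year shift can be statistically significant for ESA and JSA (±1.3 points) while a much larger shift is needed before anything can be concluded for the smaller benefit groups (±4.6 points).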

In fairness, the report does state: “In the interest of avoiding misinterpretation, data with a base size of less than 100 are omitted from the charts in this report.” 

On non-sampling error, the report says: “Surveys depend on the responses given by participants. Some participants may answer questions inaccurately and some groups of respondents may be more likely to refuse to take part altogether. This can introduce biases and errors. Nonsampling error is minimised by the application of rigorous questionnaire design, the use of skilled and experienced interviewers who work under close supervision and rigorous quality assurance of the data.

Differing response rates amongst key sub-groups are addressed through weighting. Nevertheless, it is not possible to eliminate non-sampling error altogether and its impact cannot be reliably quantified.”

As I have pointed out, sampling error in a statistical analysis may also arise from the unrepresentativeness of the sample taken. 

The survey response rates were not discussed either. In the methodological report, it says: “In 2015/16 DWP set targets each quarter for the required number of interviews for each benefit group to either produce a representative proportion of the benefit group in the eventual survey or a higher number of interviews for sub-group analysis where required. It is therefore not strictly appropriate to report response rates as fieldwork for a benefit group ceased if a target was reached.”

The Government says: “This research monitors claimants’ satisfaction with DWP services and ensures their views are considered in operational and policy planning.” 

Again, it doesn’t include those claimants whose benefit support has been disallowed. There is considerable controversy around disability benefit award decisions (and sanctioning) in particular, yet the survey does not address this important issue, since those experiencing negative outcomes are excluded from the survey sample. We know that there is a problem with the PIP and ESA benefits award decision-making processes, since a significant proportion of those people who go on to appeal DWP decisions are subsequently awarded their benefit.

The DWP, however, don’t seem to have any interest in genuine feedback from this group – feedback that could contribute to improvements in both performance and decision-making processes, leading to better outcomes for disabled people.

Last year, judges ruled that 14,077 people should be given PIP against the government’s decision not to award it, between April and June – 65 per cent of all cases heard. The figure is higher still for ESA (68 per cent). PIP and ESA claimants accounted for some 85 per cent of all benefit appeals.

The system, also criticised by the United Nations because it “systematically violates the rights of disabled persons”, seems to have been deliberately set up in a way that tends towards disallowing support awards. The survey excluded the voices of those people affected by this government’s absolute callousness or simple bureaucratic incompetence. The net effect, consequent distress and hardship caused to sick and disabled people is the same regardless of which it is.

Given that only 18 per cent of PIP decisions to disallow a claim are reversed at mandatory reconsideration, I’m inclined to think that this isn’t just a case of bureaucratic incompetence: the opportunity for the DWP to rectify its mistakes doesn’t result in subsequent correct decisions for the majority of those refused an award.

Without an urgent overhaul of the assessment process by the Government, the benefit system will continue to work against disabled people, instead of for them.

The Government claim: “The objectives of this research are to:

  • capture the views and experiences of DWP’s service from claimants, or their representatives, who used their services recently
  • identify differences in the views and experiences of people claiming different benefits
  • use claimants’ views of the service to measure the department’s performance against its customer charter”

The commissioned survey does not genuinely meet those objectives.

Related

DWP splash out more than £100m trying to deny disabled people vital benefits

Inquiry into disability benefits ‘deluged’ by tales of despair

The importance of citizens’ qualitative accounts in democratic inclusion and political participation

Thousands of disability assessments deemed ‘unacceptable’ under the government’s own quality control scheme

Government guidelines for PIP assessment: a political redefinition of the word ‘objective’

PIP and ESA Assessments Inquiry – Work and Pensions Committee

 

There is an alternative reality being presented by the other side. The use of figures diminishes disabled people’s experiences.

 


 

I don’t make any money from my work. I am disabled because of illness and have a very limited income. But you can help by making a donation to help me continue to research and write informative, insightful and independent articles, and to provide support to others. The smallest amount is much appreciated – thank you.

DonatenowButton

 

Thousands of disability assessments deemed ‘unacceptable’ under the government’s own quality control scheme

Infographic from The Centre for Welfare Reform

New figures released by the Government indicate that neither Atos nor Capita – the private companies contracted, and paid more than £500m, by the government to assess people for Personal Independence Payments (PIP) – is actually meeting the target of 97% of assessments conforming to standards.

The government have released the data to the Commons Work and Pensions Committee, which was due to take evidence from Atos and Capita regarding the assessments yesterday.

While private companies carry out the assessments, it is the Department for Work and Pensions (DWP) that makes the final decision on whether to award people financial support. However, those decisions are informed by the contents of reports that privately contracted ‘health professionals’ write during the assessment process.

Latest audits show that 6.4% of PIP assessments were deemed “unacceptable” in the three months leading up to October 2017.

Furthermore, the two companies have never once met that target under the government’s current method of quality control and performance measurement for PIP assessments.

Audits of 4,200 PIP assessments take place every three months and are split between three ‘lots’ managed by the contracted companies.

Lot 1 is assigned to Atos trading as ‘Independent Assessment Services’ (IAS). The Department for Work and Pensions (DWP) said 6.2% of its assessments were “unacceptable” in the three months to October 2017.

Lot 3 is also assigned to IAS. The DWP said 5.7% of the assessments were “unacceptable” in the three months to October 2017.

Lot 2 is assigned to Capita. The DWP said 7.3% of its assessments were “unacceptable” in the three months to October 2017.

The government’s own figures on the rate of ‘unacceptable’ PIP tests (Image: Department for Work and Pensions)

The current performance measure – which sees an independent team pick cases at random – was launched in March 2016. Under the previous method, the private providers audited assessments themselves. 

The National Audit Office (NAO) found last year that the number of completed ESA assessments was below target, despite an expected doubling of the cost to the taxpayer of the contracts for disability benefit assessments – to £579m a year in 2016/17, compared with 2014/15.

The NAO said that nearly one in 10 of the reports on disabled people claiming support were rejected as below standard by the government. This compares with around one in 25 before Atos left its contract.

The provider was not on track to complete the number of assessments expected last year and has also missed assessment report quality targets. 

Atos abandoned its contract early following mounting evidence that hundreds of thousands of ill and disabled people have been wrongly judged to be fit for work and ineligible for government support. 

The proportion of Capita PIP tests deemed unacceptable reached a peak of 56% in the three months to April 2015.

For Atos, the peak was 29.1% for one lot in June 2014. 

More than 2.7 million people have had a DWP decision regarding PIP since the benefit launched in 2013 – suggesting that tens of thousands went through an ‘unacceptable’ assessment.

The PCS union, which represents lower paid workers at the Department for Work and Pensions (DWP), told MPs during the Work and Pensions Committee inquiry: “We do not believe that there is any real quality control.

“Our belief is that delivering the assessments in-house is the only effective way for DWP to guarantee the level of quality that is required.” 

In evidence submitted to the Work and Pensions Committee, Capita said 95% of assessments are now deemed acceptable – giving the figure for the past year. The company said:

“This represents a significant improvement from previous years and producing quality reports for the DWP remains a top priority within Capita.”

“Additionally, we use a range of intelligence as indicators, to identify disability assessors who may not be operating at the high quality output levels we expect.

“This includes data from audit activity, coaching and monitoring.

“This enables us to continually monitor performance, and take appropriate internal actions… where necessary to ensure we continue to deliver a quality service.”

Atos said 95.4% of tests are now acceptable and more work was needed to ensure the auditing process itself is “consistent”, adding: “We strive to deliver fair and accurate assessment reports 100% of the time.”

However, many disabled people would beg to differ. See for example: Essential Information for ESA claims, assessments and appeals. The comments section alone highlights just how unfair and inaccurate Atos assessments commonly are.

It also emerged that Atos and Capita employ just FOUR doctors between them. Most employees within the companies are nurses, paramedics, physiotherapists or occupational therapists. Capita’s chief medical officer Dr Ian Gargan confessed he was just one of two doctors at the firm’s PIP division, which has 1,500 staff.

He told the Commons Work and Pensions Committee: “Two thirds of our professionals have a nursing background and the remainder are from occupational therapy, physiotherapy and paramedicine.”

Dr Barrie McKillop, clinical director of Atos’ PIP division, admitted they too only had two doctors among their staff. 

Frank Field said: “You’ve got two doctors each, mega workload – maybe there’s a lot of doctors out there who would long for some part-time work.” 

“You haven’t sought them out to raise your game, have you?”

However Dr McKillop insisted Atos’ current model “is a strong one” and people “bring clinical experience in different areas”.

You can listen to yesterday’s Work and Pensions Committee’s PIP and ESA evidence session here. 

The witnesses are: Simon Freeman, Managing Director, Capita Personal Independence Payments, Dr Ian Gargan, Chief Medical Officer, Capita Personal Independence Payments, David Haley, Chief Executive, Atos Independent Assessment Services and Dr Barrie McKillop, Clinical Director, Atos Independent Assessment Services.

You can access the written evidence here.

You can access the written evidence and watch the session online from the previous session here from 22 November.

The inquiry is ongoing. The Committee is interested in receiving recommendations for change both on the assessment process for each benefit individually, and on common lessons that can be learned from the two processes. 

 

Related 

Government guidelines for PIP assessment: a political redefinition of the word ‘objective’

 


