My Lords, I should say first that this report was undertaken during the excellent chairmanship of my predecessor in that role, my noble friend Lord Filkin.
Our report was prompted by earlier reports and inquiries that the committee had undertaken, not least one on the cumulative impact of statutory instruments on schools, which identified post-implementation review as an area of weakness. Rather than keep saying "post-implementation review", I shall for the most part use "evaluation". It still appears to be the case that government does not place as much emphasis as it might on reviewing existing secondary legislation. Cabinet Office guidance published in July last year referred to a new process but said: "The new process applies only to primary legislation, in the sense that there is no separate process for post-legislative review of individual statutory instruments. However … the preliminary assessment of the Act would cover how the principal delegated legislation under the Act has worked in practice".
That is a debatable assertion: statutory instruments are likely to continue to appear after the review of the Act in question; they continue to appear under Acts passed some years ago; and secondary legislation that implements EU directives would not be covered by the new process referred to. It is crucial that there should be robust systems for the review of secondary legislation as well as primary legislation.
Government departments provide the Merits Committee with Explanatory Memoranda and impact assessments in relation to the secondary legislation that we consider, and the quality and accuracy of the information provided in those documents is crucial to the effectiveness of our scrutiny work. We wanted to find out the extent to which government departments subsequently check up on secondary legislation to see whether it is having the impact anticipated, is achieving its objectives, is having unexpected consequences, and whether the original estimates and assumptions are proving correct.
If post-implementation review of secondary legislation is not undertaken there is no effective way of establishing whether the policy change is achieving the desired results, or whether costs and benefits are in line with expectations. As a result, vital information which could inform and improve future policy development, improve delivery methods to achieve the best results and develop the techniques used to assess the impact of policy interventions is not available, ultimately to the detriment of us all.
We were extremely grateful to have the invaluable support and assistance of the National Audit Office, which provided us with solid evidence on which to base our findings through conducting a quantitative survey of a sample of statutory instruments from 2005 (18 per cent of the 1,282 statutory instruments considered by the committee that year) to see how many had been reviewed. It is accepted good practice to review the implementation of a policy, usually three years after it has taken effect.
We also selected a few statutory instruments for more detailed review, seeking feedback from the relevant department and from those affected by the regulations, and those case studies and the key messages from the evidence are published in full in an appendix to our report. The instruments were drawn from our earlier reports or had attracted media attention at the time they were presented.
The Merits Committee's view is that, for every statutory instrument reviewed, the following criteria should be met. First, even if conducted as part of a broader review, the impact of each SI should be clearly identified and assessed. Secondly, the review should assess the extent to which the SI has achieved its objectives. Thirdly, the review should examine how the outcome compares with the success criteria set out in the impact assessment. Fourthly, the review should assess the costs and benefits compared with original estimates. Finally, the review should identify whether there have been any unintended consequences.
The survey by the National Audit Office found that 46 per cent of the sample of significant statutory instruments had not been subject to any degree of follow-up evaluation at all, and only 29 per cent to full post-implementation review. Although in the case of 45 per cent of the 229 statutory instruments there had been a commitment to conduct a review, only half of them had been the subject of some sort of review four years later. In total, some sort of evaluation work, ranging from simple statistics to full post-implementation review, had been carried out in 54 per cent of cases.
In their response, the Government have been broadly positive about the recommendations from the Merits Committee, acknowledging that better co-ordination and clearer instructions for departments are required. Evidence from the National Audit Office survey revealed that there is no clear methodology for conducting post-implementation reviews. The Government in their response have set out three principles for the evaluation of secondary legislation: namely, that it should be integrated, transparent and proportionate. I am sure that those principles would have wide support, but of course the key issue then is how they are going to be interpreted. That is where some problems start to arise.
In looking at the principle of proportionality, we were somewhat surprised to see that the Government in their response felt that a full reconsideration of the impact assessment and public consultation exercise would be appropriate only for statutory instruments that implemented policies imposing burdens of more than £50 million per year. Frankly, if the qualifying bar for a proper evaluation is to be placed that high, that seems to be a move designed more to water down the Merits Committee’s recommendations than to implement them. Subsequent correspondence has indicated that the Government may have intended a sliding scale of evaluation and that forthcoming guidance, which we await with interest, would set this out. I hope my noble friend will address that point when he comes to reply, because I am sure that the Committee, and I hope the House, would expect much smaller-scale projects than those of £50 million to be included in meaningful evaluation and validation of the estimated costs and benefits.
The Government also appeared to reject a recommendation that we never made. They rejected our recommendation 7 on the grounds that it would be too resource-intensive to conduct a full review of all statutory instruments. What the recommendation actually proposed was that in all cases the Explanatory Memorandum should include an explanation of the department’s plans to review the statutory instrument. A response from the department that it did not intend to review the instrument because it was simply a technical amendment, or that the instrument would be reviewed with, say, two or three others related to the same policy or scheme, would be perfectly acceptable.
It is not a case of seeking a Rolls-Royce standard of review on all occasions. The objective is to see a more transparent approach to evaluation, with departments considering and publishing their plans for reviewing the effectiveness of a policy at the time when it is being devised. Nor are we suggesting a one-size-fits-all approach. One of our case studies, on the human fertilisation regulations, involved both short-term and very long-term outcomes, which demonstrates clearly that a suitable approach must be judged on a case-by-case basis.
The Government say that Select Committees such as ours, the National Audit Office and others should hold departments accountable for their performance. That will be difficult unless the Government ensure that more than the current 15 per cent of evaluations are published, and unless the Government show greater enthusiasm for consulting stakeholders. Surely the Government should be taking steps to keep their own departments up to scratch. Guidance is issued, but there does not appear to be any evaluation of whether that guidance is proving effective. The official line is that it is for departments to take responsibility for the quality of their own legislation. However, this hands-off approach does not appear to be working; our report shows that there is wide variation between departments. The Department of Health had done some sort of evaluation on 63 per cent of its instruments in the sample, and Defra had completed the highest proportion, 33 per cent, of formal post-implementation reviews, but other departments seemed less enthusiastic about following the guidance. There is no point having guidance unless it is implemented and evaluated.
One of the things that our investigation showed was the different interpretations of what constitutes success. The responses to our case studies suggested that departments often focused on the aspirational side of the policy, whereas the public focused much more closely on the practical aspects of the policy’s delivery and how it could be improved. Together, those two aspects make a very good basis for evaluation.
The Government clearly hold the view that evaluation is resource-intensive and can be done only selectively to avoid wasting taxpayers’ money. In principle, in relation to minor legislation, that is a not unreasonable stance. However, the Government need to bear in mind as well that poorly targeted legislation wastes resources, and ineffective systems waste money and cause a lot of frustration. If something is not working properly, it is definitely in the taxpayers’ interests to have a mechanism in place to identify and fix the problem promptly.
Evaluation of secondary legislation is not just an academic exercise. If the Government continue to allow some departments to regard it as an unnecessary and unwelcome burden, to be carried out as infrequently as possible and in the quickest possible time, the report we are discussing today might just as well not have been written. Our previous study of the cumulative impact of statutory instruments on schools showed teachers struggling under the constant stream of instruments. They complained that the department was not waiting to see whether a policy had bedded in before changing it, and that no one appeared to be evaluating what had been achieved before, metaphorically speaking, throwing the baby out with the bath-water.
I hope that when my noble friend responds he will address the concerns raised in our report and be able to give us not just some warm words but some hard, specific, concrete evidence that the Government’s approach to evaluation and post-implementation reviews of secondary legislation has changed; and that they do not regard the issuing of guidance as the end of their involvement but intend to take firm action to ensure that departments consistently and rigorously adhere to that guidance. After all, it is not much good introducing major changes in policy if evaluating the effectiveness of their implementation, and whether the intended benefits have been delivered within the projected costs, is regarded as simply a burden rather than as an invaluable way of accurately assessing outcomes against objectives and enabling steps to be taken to improve the quality and effectiveness of legislation, and the legislative process, in the future. I beg to move.
Merits of Statutory Instruments Committee: Post-implementation Reviews
Proceeding contribution from Lord Rosser (Labour) in the House of Lords on Wednesday, 24 February 2010. It occurred during Debates on select committee report on Merits of Statutory Instruments Committee: Post-implementation Reviews.
Type: Proceeding contribution
Reference: 717 c293-6GC
Session: 2009-10
Chamber / Committee: House of Lords Grand Committee