My Lords, we have heard some powerful concerns on this group already. This clause is in one of the most significant parts of the Bill for the future. The Government’s AI policy is of long standing. They started it many years ago, then had a National AI Strategy in 2021, followed by a road map, a White Paper and a consultation response to the White Paper. Yet this part of the Bill, which is overtly about artificial intelligence and automated decision-making, does not seem to be woven into their thinking at all.
6.45 pm
All the Government have decided to do is to water down the original Article 22 provisions in the GDPR. I find that somewhat baffling, particularly as we have heard about the importance of artificial intelligence and of the algorithmic and automated tools that are increasingly being used, across the public sector in particular. The noble Baroness, Lady Bennett, talked about how it impacts on the welfare system, healthcare, policing, immigration and so on—many sensitive areas of individuals’ lives—and in the private sector, in really sensitive areas such as financial services.
I do not think that, nowadays, any of us is unaware of the problems that arise from decisions made solely by automated means. I think we are all aware, and not just in the context of lethal autonomous weapons, that human oversight can help guard against a machine’s errors, mitigate risks such as encoded bias, and ensure that there are robust and rational reasons behind a decision.
It is important that we hold at the front of our minds that we are trying to ensure that core ethical duties are in place when we encounter artificial intelligence. We now know that this is going to become even more important following the recent Budget, with considerable capital expenditure promised to public services. The Secretary of State for DSIT has stated that the Government intend to revolutionise our public services using AI. That makes this kind of provision, and the changes being made to Article 22, of even greater significance. As the noble Lord, Lord Bassam, mentioned, the Post Office Horizon scandal has demonstrated the disastrous consequences that can occur when faulty technology is not used responsibly and safely by the humans involved. I was very interested in the example from Australia raised by the noble Baroness, Lady Bennett.
The Government’s data consultation response acknowledged that,
“the right to human review of an automated decision was a key safeguard”.
Currently, Sections 49 and 50 of the DPA and Article 22 of the UK GDPR provide a right not to be subject to a decision based solely on automated processing, with some narrow exemptions. Even that is pretty limited; it applies only to decisions made “solely” by automated means, and obviously we are going to have a bit more argument about whether it should be wider than that. Despite that, Clause 14 would reverse the presumption against solely automated decision-making and would permit ADM—as I think we should call it—in a much wider range of contexts. This is being done without any evidence that the current prohibition on ADM is not working as intended, when we should instead be enhancing rights.
Clause 14 would mean that solely automated decision-making would be allowed unless it is a significant decision based on special categories of personal data, in which case specified conditions must be met: the automated decision-making must be required or authorised by law, or the data subject must have explicitly consented. As part of this change, solely automated decisions that do not involve sensitive personal data become permissible, which is quite a shift. The burden now falls on the individual to complain, with controllers required to provide information and to have measures in place enabling individuals to contest decisions and make representations to a human.
Automated decisions can have significant effects on people’s lives without involving sensitive personal data. Examples include decisions concerning access to financial products, educational decisions such as the A-level algorithm scandal, and the SyRI case in the Netherlands, where innocuous datasets such as household water usage were used to accuse individuals of benefit fraud.
It is also unclear what will meet the threshold of a “significant decision”. Big Brother Watch has identified local authorities that use predictive models to identify children deemed at high risk of committing crimes and include them on a database. Whether a decision to include someone on a database meets the threshold of a significant decision is simply not known, leading to uncertainty for decision-makers and data subjects.
These changes would mean that solely automated decision-making is permitted in a much wider range of contexts. That is especially concerning given that many high-impact algorithmic decisions do not involve the processing of special categories of personal data, which is a narrow and specific category. Further, the proposed changes would mean that Article 22 would no longer be cast as a right not to be subject to solely automated decision-making but rather as a restriction on solely automated decision-making.
There are quite a number of amendments in this group. Many of them speak for themselves, but the whole idea, particularly of the amendments relating to “meaningful automated processing”, is to try to reverse the way that Clause 14 operates so that, if there is meaningful involvement by automated decision-making, these rights arise under the clause. The amendments seek not only to maintain but to improve the current level of protection, so that public authorities that use automation even partially to make decisions must ensure that safeguards for data subjects’ rights and freedoms are in place.
I do not think that I can read out all the amendments, but a number of them would ensure that those decisions are qualified by this concept of “meaningful” automated processing. The review must not be superficial, and the person performing it must have appropriate training, competency and authority to change the decision.
Amendment 43 seeks to ensure that restrictions on automated decision-making in Clause 14 apply to all categories of personal data, not just sensitive personal data. The amendment would ensure similar levels of protection around automated decision-making to those we currently have. It would do this by widening the scope of the restrictions in new Article 22B so that they restrict automation based on all kinds of personal data, not just automation based on special category data.
We very much support Amendments 36 and 37, proposed by the noble Baroness, Lady Jones, on profiling. My 10 minutes are running out very quickly so, sadly, I must leave it there.