Springbok AI response to the US Blueprint for an AI Bill of Rights

10 January 2023


Here at Springbok, one of our mantras is that AI can be harnessed as a force for good, and respect for data privacy sits at the core of our operations. However, it would be naive to deny that tech giants routinely neglect fundamental ethical principles; the saga of Facebook and Google being chastised for putting profit ahead of ethics rolls on. We therefore applaud the Biden administration's initiative to set out the steps needed to protect the rights of the American public.

The challenge, however, is that the Blueprint is not legally binding. Until its principles are enshrined in legislation, it cannot compel tangible change in the tech industry; and even with legislation, reform is far from easy. This thought piece scrutinises some of the principles laid out in the Blueprint.

For context, public awareness of surveillance capitalism is finally catching up with the era-defining digital revolution of the 21st century. In other words, it is neither a secret nor a surprise that, as users, our data trail pays for our “free” services; that our behaviour is manipulated, our weaknesses exploited, and our attention hooked so that we leave behind still more data. This is the lucrative business model of the internet. The Blueprint is a long-anticipated first step towards regulating AI and curbing the associated dangers.

To recap, the five principles outlined in the Blueprint are:

1. Safe and effective systems

You should be protected from unsafe or ineffective systems.

2. Algorithmic discrimination protections

You should not face discrimination by algorithms and systems should be used and designed in an equitable way.

3. Data privacy

You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.

4. Notice and explanation

You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.

5. Human alternatives, consideration & fallback

You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

The way the Blueprint is written clearly manifests its advisory nature: each principle is framed with the suggestive “should” rather than a binding “must” or “shall”. It would be easy to attribute the White House's hesitancy to impose stricter rules to an inclination towards free-market economics and light-touch regulation. However, the challenge is felt not only by those lobbying for change, but also by technologists themselves, who must work out how to fix the algorithms to make them ethical.

Principles 2, 3, and 5 have particular relevance to Springbok, as they specifically address algorithms, data privacy, and the role of humans: areas we carefully consider in each project to ensure that solutions reflect our core company values. In the next section, we discuss each of these principles in greater depth, along with our recommendations for best practice.


Principle 2: Algorithmic discrimination protections

The second principle aims to counter algorithmic discrimination. This is a critical issue: because AI makes predictions based on existing data sets, it mirrors and magnifies bias already prevalent in society. Bias creeps into the recruitment of new employees, banks' predictions of “creditworthiness”, and the allocation of organ transplants, penalising people for their race or gender. Amazon, for example, scrapped a recruitment tool after it was found to be biased against women.

However, this is not a straightforward issue to fix, especially with a merely advisory Blueprint. One problem is the variety of competing definitions of fairness, and the difficulty of encoding any one of them into a mathematical system (illustrated in the sketch below). Another is identifying where the bias arose. Amazon's recruitment tool, for instance, was re-coded to ignore explicitly gendered words; this did not solve the problem, because the model still detected implicitly gendered vocabulary used disproportionately more by one sex than the other. And yet there remains value in educating tech companies and encouraging them to comply with best practices.
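To make the definitional problem concrete, the minimal Python sketch below computes two common fairness metrics for a hypothetical binary classifier on purely synthetic data. The data, rates, and metric choices are illustrative assumptions, not a description of any real system.

```python
# A minimal sketch of why "fairness" has competing mathematical definitions.
# All data here is synthetic and illustrative, not from any real system.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary classifier outputs for two groups:
# y_true = actual outcome, y_pred = model's decision (1 = positive).
group = rng.integers(0, 2, size=1000)            # 0 = group A, 1 = group B
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(group == 0,
                  rng.random(1000) < 0.40,       # group A approved ~40% of the time
                  rng.random(1000) < 0.30)       # group B approved ~30% of the time

def demographic_parity_gap(y_pred, group):
    """Difference in positive-decision rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates (recall) between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# The two criteria measure different quantities, and in general a model
# can satisfy one while violating the other, so "programming fairness"
# forces an explicit choice between definitions.
print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"equal opportunity gap:  {equal_opportunity_gap(y_true, y_pred, group):.3f}")
```

Satisfying one of these criteria does not imply satisfying the other, so any claim that a system is “fair” embodies an explicit, contestable choice of definition.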

By the nature of our work, we at Springbok are regularly confronted with commercial, technical and UX design choices, and that is where having a strong code of company ethics and values has been imperative. Whether it is turning down engagements (e.g. westernising offshore call centre accents, or building a catfishing chatbot for a German dating website), regularly carrying out internal design and code reviews, or building with accessibility in mind, there are huge tangible and intangible benefits to making value-based decisions in your business, particularly as your company grows.

Principle 3: Privacy

Privacy is the most fundamental pillar of data ethics. The scope for harm is strongly curtailed by effective legislation limiting what data can be collected, how it can be used, and what control users have over it. GDPR in the EU and UK, and state consumer-privacy laws in California and elsewhere (e.g. Colorado, Virginia), are moves in the right direction, but they remain challenging to enforce. From the perspective of a tech company, regulatory alignment is crucial: a patchwork of standards across geographies is a burden on anyone working internationally.

Companies easily fall into the trap of collecting data simply because they can, without properly weighing the benefits against the risks. Instead, assess from first principles whether working with Personally Identifiable Information (PII) is strictly necessary for the use case, and avoid storing PII wherever possible (see the sketch below). Emphasising user input, human-in-the-loop training, and extensive testing and monitoring in your development also fosters AI that is both ethical and useful.
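As one illustration of the “avoid storing PII” guidance, the sketch below redacts obvious identifiers before any text is persisted. The regexes are illustrative assumptions only; a production system would rely on a vetted PII-detection library or a trained NER model rather than hand-rolled patterns.

```python
# A minimal sketch of redacting obvious PII before logs or training data
# are stored. The regexes are illustrative only; real deployments should
# use a vetted PII-detection library or NER model instead.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d(?:[\s-]?\d){6,13}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label.upper()}>", text)
    return text

# Only the redacted form is ever persisted.
raw = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
print(redact_pii(raw))
# -> Contact Jane at <EMAIL> or <PHONE>.
```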

Principle 5: Human alternatives, consideration & fallback

The fifth principle is especially pertinent to one of our niches: conversational AI (chatbots). Modern chatbots can elevate customer experience (CX) to new heights. With their 24/7 availability, rapid responses, and infinite patience for tedious conversations, they can often outperform a human team. However, it would be dubious to claim that automation can ever be completely reliable. Chatbots are not infallible conversationalists or troubleshooters, so human escalation and fallback procedures are essential, particularly for major decisions about users and in instances where flawed information may cause harm.

AI systems should not immediately be put into production with complete autonomy; rather, they should first be deployed as assistive tools with a human in the loop. Once in place, the performance and biases of the AI's recommendations should be closely monitored. Depending on the domain, control can then gradually be handed over to the AI, with an emphasis on continuous monitoring, and only after consulting both the human in the loop who is ceding control and anyone affected by the decisions. In domains such as criminal justice, employment, and healthcare, users should always have recourse to an actual human, which needs clear signposting and should not be hidden behind arcane escalation procedures. As such, every intelligent assistant (chatbot) should have some form of fallback: ideally a direct handoff to a human agent, or failing that an asynchronous route via email or a ticketing system. A minimal fallback policy of this kind is sketched below.
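This sketch assumes a hypothetical `model` callable that returns an answer with a self-reported confidence score; the interface, threshold, and messages are illustrative assumptions, not a production design. The bot answers only when its confidence clears the threshold, and hands off to a human after repeated low-confidence turns.

```python
# A minimal sketch of a chatbot fallback policy. The `model` interface,
# threshold, and messages are hypothetical and illustrative.
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # model's self-reported confidence in [0, 1]

CONFIDENCE_THRESHOLD = 0.75   # below this, the bot does not answer directly
MAX_FAILED_TURNS = 2          # after this many low-confidence turns, hand off

def respond(user_message: str, failed_turns: int, model) -> tuple[str, int]:
    """Answer directly when confident; otherwise escalate to a human."""
    reply: BotReply = model(user_message)
    if reply.confidence >= CONFIDENCE_THRESHOLD:
        return reply.text, 0                     # confident answer: reset counter
    if failed_turns + 1 >= MAX_FAILED_TURNS:
        # Hand off: a live agent if one is available, otherwise an
        # asynchronous route such as email or a ticketing system.
        return "Let me connect you with a member of our team.", 0
    return "Sorry, I didn't quite get that. Could you rephrase?", failed_turns + 1
```

The essential design choice is that escalation is explicit and easy to reach, not buried at the end of a long decision tree.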

Overall, along the road to enforcing the principles discussed here, proposals like the EU's AI Liability Directive and the US's Algorithmic Accountability Act are exciting steps in the right direction. We hope that Congress will take further action to enshrine the Blueprint's principles in legislation and thereby promote ethical practices in the tech industry, and that tech companies will take these principles to heart.

To find out more about our practices and what algorithmic accountability means for your business, book a call with our team today.
