How responsible tech safeguards against unintended ethical consequences
Co-written by Saranyah Douse and Isha Sharma
Applications of technologies such as artificial intelligence and machine learning are fundamental to a wide range of start-up business models, in sectors such as recruitment, healthcare diagnostics and credit risk assessment. Whilst there is significant opportunity in addressing real-world challenges at scale, technology designed with the best intentions can sometimes have unintended ethical consequences – with a serious knock-on effect on a company's reputation, and its finances.
Enter: Responsible Tech.
This describes the approach organisations are taking as they seek to design and scale a product in a controlled, sustainable and safe way that works for everyone.
The EU AI Act is looking to address these concerns and, at the same time, large tech firms are now publishing their approaches and processes for responsible AI. As a result, it is only a matter of time before founders are required to think through these issues. Whilst responsible tech touches all our investment focus areas at Octopus Ventures, we thought we’d take a closer look at how some of our B2B Software portfolio companies are approaching the topic – and what lessons in good practice they might hold for founders building new companies in the space.
Fairness by Design
Fairness by design involves founders enabling relevant stakeholders to look under the hood of an underlying AI model, to explore the data used to train it and understand the steps in the decision-making process.
Recruitment is a sector that has been radically overhauled by AI. Where previously it fell to human recruiters to sift through CVs, automation has now become more prevalent to drive speed and eliminate issues associated with human biases. But as Amazon’s well-documented attempt at mechanising the process showed, it’s not always that straightforward. Amazon’s AI hiring model used the company’s own historical data on previously successful applicants as its ground-truth dataset; because most of those past hires were men, the model learned to reject female candidates at a higher rate.
In any industry where AI is being used to select from a mixed group of people, the question of bias is pivotal. In the recruitment space, it’s a point that Octopus Ventures portfolio company Sova has set out to address. Their digital candidate assessment platform helps organisations remove biases in candidate screening. Alan Bourne, CEO, explains: “At Sova, our core methodology is built on delivering fairness by design at every stage in the recruitment process and predicting high performance. Machine learning enables timely monitoring of the data so biases can be promptly flagged, addressed, and prevented from the start of a candidate’s journey. We disaggregate the data and look at a combination of analytics, people and predictive algorithms.”
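To make the idea of disaggregating data concrete, here is a minimal sketch of what monitoring selection rates by group might look like. This is an illustrative example only, not Sova’s actual methodology: the group labels, data and the “four-fifths” threshold are assumptions for the sake of the example.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: s / t for g, (s, t) in counts.items()}

def flag_adverse_impact(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    best-performing group's rate (the 'four-fifths' rule of thumb)."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Toy screening outcomes: (group label, was the candidate shortlisted?)
outcomes = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = selection_rates(outcomes)     # {"A": 0.75, "B": 0.25}
flagged = flag_adverse_impact(rates)  # ["B"]
```

Running a check like this continuously over live screening data – rather than once at launch – is what allows biases to be “promptly flagged, addressed, and prevented” as the quote above describes.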
Human in the loop
Developing tactics to mitigate bias or data deficiencies is a human’s job, so the people implementing the technology are just as important as the technology itself. Bias and irrationality are innate human qualities, which is why it is important to implement processes that result in more inclusive and fairer decisions. A huge part of this is “diversity of thought” and “diversity of testing”. As Alan tells us: “If you don’t have a diverse development community in AI, you don’t have a diverse community applying AI and you may not spot those biases in the datasets you are using.”
The second strand is expertise: do the people involved in the AI or machine learning process have the relevant expertise to classify data? Software engineers have their part to play, but it’s also worth drawing on specialists. Sova’s models were designed by organisational psychologists, and this added layer of human expertise in building, testing and overseeing the algorithms goes a long way towards de-risking the application of datasets.
Think about the impact on all users
The Safeguarding Company, another business in the Octopus Ventures portfolio, builds software to protect children and vulnerable adults. Their mission is driven by law, ethics and emotion, which is why it is so important that the company brings all their users along on the journey when introducing new solutions. To do this, The Safeguarding Company carried out extensive beta testing of the product to ensure that the process and workflow worked for all stakeholders (students, teachers, social services, police and so on).
When it comes to safeguarding, the protection of children is of paramount importance, but teachers and staff also need to feel secure. According to Martin Baker, CEO, “if you don’t provide the staff with appropriate tools to protect the children, you are putting the entire system at risk.”
Education and testing help end users understand the need for the solution, feel equally protected and, as a result, use the product more effectively. As Martin says, “at the very least, it makes people unconsciously competent in what can sometimes be a complex and highly regulated world.”
Encourage a culture of transparency
From a practical perspective, human interactions with technology are critical to the development of any product. According to Tim Bolot, CEO at Tendable, a clinical audit software solution for healthcare professionals, “a lot of what we are trying to do is encourage a culture of transparency. If you take the view that your solution isn’t always right, you will find room for improvement much faster.”
The idea that things aren’t perfect and need to be fixed is a positive culture to embed. How a company deals with difficult hurdles and collaborates to improve will lead to faster and more efficient progress. In this respect, human influence is a crucial part of responsible tech. It is also important to think through the typical use cases and how these vary by customer segment, including those edge cases which are likely to be higher risk situations.
Customers are at the heart of everything Tendable do, and face-to-face time with customers is an important element of every employee’s induction process. Everyone in the company, from the product engineering team to operations, finance to HR, is encouraged to speak with customers regularly to understand their view on the service and highlight their biggest challenges and frustrations.
By taking practical steps and focusing on consistent feedback, the Tendable team are able to closely observe how their service works on the frontline, and deliver a solution that is tailored to the needs of their users.
Respect data and security
In the world of responsible tech, it’s essential not to overlook the basics. A business can have great processes in place, but if it isn’t also focused on security, those processes quickly become irrelevant. Aside from regulation, law and policy, preventing data breaches and reducing the risk of data exposure is an essential component of ethical responsibility. Since the introduction of GDPR this may feel like stating the obvious, but when creating new and rapidly evolving technologies it is essential to keep security front of mind, to avoid consequences that could be reputationally damaging.
Conclusion
Businesses should develop products that are effective, reliable, and as fair as possible before launching to customers. Whilst the positive impact of any new solution can be scaled, the negative can too – if it gets off to the wrong start. Founders should be mindful of their stakeholders (not just customers), keeping laser-focused on fairness and transparency, while maintaining an awareness of risks around data and its application. In a world of agile development and continuous improvement, structured workshops focused on responsible tech need to form part of the development process.
With every problem comes opportunity; the AI market is huge and could soon face stringent regulation for the first time. The newly proposed EU AI Act is the first comprehensive attempt to regulate AI, and could soon make a lack of transparency a problem for European start-ups, especially those operating in more sensitive sectors like finance, health and employment. Companies will need to create new technologies and processes to assist with the documenting, monitoring and training of AI systems. In the same way that dashboard tools like Jira helped developers build software faster, the next iteration needs to help developers build software better.
If you have any thoughts on responsible tech or want to talk more about how to embed good practice in your company culture, please get in touch – we’d love to hear your views! You can reach us at [email protected] or [email protected]