
AI and the Balance of Trust: Why Understanding Matters

Claire Ziebart | Oxford Scholar Programme 2024

The views expressed are solely those of the author(s) and not of Oxford Global Society.

Industries are moving rapidly towards convenience and ease for their customers. The consumerism and materialism prevalent in society make it clear that people tend to prioritize products and services that make life easier and more convenient. As we navigate the complexities of artificial intelligence (AI), a field dedicated to making human tasks easier and more convenient, an important question arises: does it matter whether we understand what truly goes on within an AI system as long as it serves one of its central purposes, making tasks easy and convenient?

At first glance, many might think that convenience and ease are all we need from AI tools and that a complete understanding of their inner workings is unnecessary. However, convenience and ease of use do not inherently imply safety or fairness. Convenience may simplify tasks and free our attention for other important areas, but safety and fairness are foundational principles that must take precedence: convenience is worth little if people are struggling to keep themselves out of danger. Yet the only way to truly ensure safety and fairness is to control and understand what these machines are doing and why they are doing it. As Boaz Barak of OpenAI’s superalignment team puts it, “If you can do amazing things but you can’t really control it, then it’s not so amazing. What good is a car that can drive 300 miles per hour if it has a shaky steering wheel?”

So far, the lack of knowledge about what happens inside AI algorithms has not caused significant problems. But as Lauro Langosco, a technology expert at the European AI Office, argues, “It might be a medium-size problem right now, but it will become a really big problem in the future as models become more powerful.”

This essay explores why understanding AI systems is crucial for our future, not only for developers but also for users, and how this understanding shapes the safety, reliability, and ethical use of technology.

In terms of technology, convenience often translates to automated processes and reduced manual labor. These attributes are desirable because they save time and effort, making our daily lives easier. For example, AI-driven chatbots can handle customer service around the clock, freeing up human agents for more complex tasks. However, convenience does not guarantee that the technology is safe or fair. Consider the use of a chatbot to write a school essay. While this saves the student time and effort, it poses significant dangers to the student’s academic integrity and career, and it is unfair to others who put in the work to write their essays themselves. This example highlights the importance of prioritizing safety and fairness over mere convenience. Ultimately, the goal of AI is to help humans, which means that we must, at some point, trust its ability to give us the best advice or complete a task that will positively affect us.

To achieve maximum reliability of AI, programmers must implement the highest levels of safety and fairness in these systems. This is only possible if they have a clear understanding of the inner workings and algorithms of AI. However, there is a distinct difference between the roles of users and programmers in the development of AI.

While programmers and developers must possess a deep understanding of the technologies they create to ensure the safety, fairness, and reliability of systems, users do not necessarily need to understand the intricate details of how these systems operate. Users interact with technology on a superficial level and do not contribute to the creation of AI technology. Nonetheless, this does not excuse users from understanding other essential aspects of AI. Our role as users is to operate and use the system, which means that it is our responsibility to operate it safely. In simpler terms, programmers are responsible for programming it safely, and users are responsible for using it safely.

To program safely, an intricate understanding of how the system works is required. This includes, but is not limited to, knowing how and why the AI reached its conclusions; understanding the risks associated with different inputs; navigating potentially dangerous or unexpected outputs; identifying what went wrong when errors occur; and learning how to fix these issues. This knowledge underpins AI transparency, which essentially means being able to trace why and how an AI came to its conclusions. “Everything we want to do with them in order to make them better or safer or anything like that seems to me like a ridiculous thing to ask ourselves to do if we don’t understand how they work,” states Ellie Pavlick of Brown University, a researcher of the AI black box phenomenon, which is explained below. Pavlick underscores how unreasonable the rapid advancement of AI seems when a fundamental understanding of how it works is still missing. She suggests that experts are skipping steps in new technological developments and need to backtrack before moving forward with improvements.
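To make this idea of tracing a conclusion more concrete, the sketch below is a hypothetical illustration in Python, not taken from the essay or from any particular product. It trains a small, inherently interpretable model, a shallow decision tree, on invented loan-approval data and prints the exact rules behind its predictions. The feature names and numbers are assumptions made up purely for illustration; the point is that, for a model like this, every decision can be traced back to explicit, human-readable thresholds.

```python
# Minimal sketch of "traceable" AI decisions (illustrative only).
# The data, feature names, and task are invented for this example.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy applicants: [income, existing_debt]; label 1 = approve, 0 = deny.
X = [[55, 5], [30, 20], [80, 10], [25, 30], [60, 25], [40, 8]]
y = [1, 0, 1, 0, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules can be read and audited directly.
print(export_text(tree, feature_names=["income", "existing_debt"]))

# Any individual prediction can be traced back through those rules.
print("decision for [50, 12]:", tree.predict([[50, 12]])[0])
```

A deep neural network offers no comparably readable rule set, and closing that gap is exactly what transparency research is trying to do.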

Transparency is crucial for building trust and accountability in AI systems, as it enables users and regulators to verify the appropriateness of AI actions. Without this level of understanding, making meaningful improvements or ensuring that the technology is safe and reliable is difficult and risky; attempting to change something we do not understand is likely to do more harm than good. Currently, however, even experts lack this level of understanding, a gap best captured by the phenomenon known as the “AI black box.”

An AI black box refers to a situation where the internal workings of an AI system are not fully understood or accessible. This obscurity makes it challenging to know how the AI makes decisions, especially in complex models such as deep learning neural networks. As The Conversation explains, “You can feed them input and get output, but you cannot examine the system’s code or the logic that produced the output.” Until AI experts can decipher what happens within the black box, AI can never truly be reliable. And if it is unreliable, why would humans trust it to make decisions for us?
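The following sketch shows this opacity in miniature. It is a hypothetical illustration with invented numbers rather than a description of any real system: a tiny neural network whose inputs and outputs we can inspect freely, while its learned parameters are nothing more than arrays of numbers that carry no human-readable logic.

```python
# Minimal sketch of the "black box" property (illustrative only).
# The weights below stand in for parameters produced by training;
# to a human reader they are opaque numbers, not explanations.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)   # hidden layer parameters
W2, b2 = rng.normal(size=8), rng.normal()               # output layer parameters

def predict(x):
    hidden = np.maximum(0, x @ W1 + b1)   # ReLU hidden layer
    return float(hidden @ W2 + b2)        # single output score

x = np.array([0.2, -1.3, 0.7, 0.05])      # the input: fully observable
print("input:  ", x)
print("output: ", predict(x))             # the output: fully observable
print("weights:", W1.round(2))            # the "reasoning": numbers with no readable logic
```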

This lack of understanding creates a significant barrier to the responsible use of AI. Some people might argue that we don’t need a deep understanding of something to trust it, citing our everyday reliance on other humans to make decisions for us without fully grasping their biases, thinking processes, and intentions. While this perspective suggests that “understanding” is not always a prerequisite for trust, it highlights a key difference between humans and AI: humans operate within established rights, boundaries, and ethical frameworks, whereas AI, as a creation of our own design, is subject to no such inherent constraints. Given this, it is not only feasible but prudent to strive for a deep understanding of AI systems before deploying them widely, ensuring that we fully comprehend every step of their decision-making processes and can account for how and why decisions are made.

AI has the potential to transform nearly every aspect of our lives, from healthcare and education to transportation and entertainment. However, with this potential comes crucial responsibility. One of the most significant risks associated with AI is not that it will take over or act autonomously in ways we cannot control but that we will use it incorrectly or irresponsibly. This misuse can stem from a lack of understanding, insufficient oversight, or a failure to consider the ethical implications of AI.

To prevent these risks, it is essential that both developers and users are educated about AI. Developers must ensure that AI systems are designed with safety, fairness, and transparency in mind, which means achieving a deep understanding of AI’s inner workings and algorithms. Users, on the other hand, should be aware of the capabilities and limitations of AI, understand how to use it responsibly, and be vigilant about potential risks.

Even if there is a deep understanding of the inner workings of AI among developers, the safety and effectiveness of AI also depend on the behavior of users. Much like cars, the risk associated with AI often comes from the operator’s inability to use the technology safely, responsibly, or with adequate knowledge of how to protect themselves. Therefore, the most realistic risk of AI is misuse by the user. This makes it imperative for users to be educated and informed about how to use AI technologies properly.

For instance, AI-driven systems in healthcare can significantly enhance diagnostic accuracy and treatment plans, but they require medical professionals to understand the limitations and potential biases of these systems to avoid misdiagnosis or inappropriate treatment recommendations. Yet, simply recognizing these biases may not be enough. For example, if an AI system performs less accurately for certain groups due to biased training data, healthcare professionals might need to adjust their approach—perhaps conducting additional diagnoses or relying on their own judgment in specific cases. While this could help address fairness and accuracy, it introduces new challenges, such as higher costs, slower processes, and the risk of diminished diagnostic skills from an over-dependence on AI.

Similarly, in the legal field, AI tools can greatly assist in case analysis and research, but their use must be carefully balanced with human oversight and ethical considerations to mitigate these complexities and unintended consequences.

While convenience and ease of use are essential aspects of technology, they should never come at the expense of safety and fairness. In the age of AI, it is crucial that we prioritize these principles and ensure that both developers and users understand the technologies they interact with. This understanding is essential for building trust, ensuring reliability, and preventing misuse.

As we continue to integrate AI and other advanced technologies into our lives, we must remain cautious about their potential risks and take proactive steps to mitigate them. By fostering a culture of understanding, transparency, and responsibility, we can use AI to its fullest potential while protecting the interests of individuals and society as a whole. Ultimately, the success of AI depends not only on the brilliance of its developers or the wisdom and caution of its users, but also on government regulation and its enforcement. While it is reasonable to say that AI can stay relatively safe as long as users are responsible and aware of potential risks, not everybody chooses to act cautiously; there are always those who act recklessly. Seatbelts, for example, tremendously reduce injuries in car crashes, yet unless their use is enforced, some people will choose not to wear them. The same holds for AI: to hold each user accountable for their own protection and that of others, governmental policies must be put in place to ensure the safe and fair use of AI.

Some regulatory efforts already point in this direction. The proposed U.S. Algorithmic Accountability Act aims to enhance transparency and control over automated decision-making systems by requiring companies to disclose how their AI systems make decisions, what data they use, and what risks they pose to users. The EU, for its part, has adopted the AI Act, which aims to foster trustworthy AI in Europe and beyond by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing the risks posed by very powerful and impactful AI models.

With the combination of user responsibility, government involvement, and programmers’ deep understanding of AI algorithms, the future may not be as bleak as we fear. But achieving this requires, above all, that the people who build and govern these systems genuinely understand the inner workings and algorithms of AI.