
The Sound of Silence:
Public Trust in the Age of AI

An often ignored voice in the development of an AI-driven world is that of the public, an omission with profound consequences.

UNTRUSTWORTHY

Globally, public trust in those building and selling AI stands at 53%, while in the US it is 35%.

Earning public trust begins (but does not end) with these essential considerations:
 

  1. Is the developer trusted?

  2. What is the AI used for?

  3. What reputation precedes the AI for the developer, deployer, and product?

  4. Can the developer, deployer, and product be shown to the public to be trustworthy (that users can trust it, not merely that it passes testing)?

  5. Does the organization have prior successes with cutting-edge technology or innovations?

As noted in the KPMG report, Trust in artificial intelligence: A global study 2023:


"Most people are wary about trusting AI systems and have low or moderate acceptance of AI. Trust and acceptance depend on the AI application.


  • Three in five (61 percent) are wary about trusting AI systems.

  • 67 percent report low to moderate acceptance of AI.

  • AI use in human resources is the least trusted and accepted, while AI use in healthcare is the most trusted and accepted.

  • People in emerging economies are more trusting, accepting and positive about AI than people in other countries" (KPMG Study)

Nuance is provided by the World Economic Forum:


Technology companies have maintained public trust, with a score of 76%. However, only 30% of people accept AI, while 35% reject it. This means that big tech firms (Alphabet, Amazon, Apple, Meta, and Microsoft) are securely trusted, but AI companies see no such benefit.


The risk is that as big tech pivots to AI and increasingly resembles AI companies, maintaining trust under this new identity will be an uphill climb, though the trust these firms earned before AI may help them. Whether that boost proves temporary or lasting depends on their next choices.

In a report commissioned by the US Department of State, two significant themes emerged in the discussion of existential risk:
 

  1. Weaponization
    AI systems could take actions that cause large-scale harms, wars, or attacks, or be used as a weapon of mass destruction (WMD).

  2. Loss of Control
    AI systems could make decisions and take actions beyond those intended or programmed by humans.
     

Either of these could lead to extinction-level events, and these conversations understandably weigh heavily on user trust and confidence in AI.


Other trust topics worthy of exploration and understanding:


  • Lack of Transparency

  • Bias and Discrimination

  • Job Displacement

  • Privacy Concerns

  • Ethical Concerns

  • Lack of Accountability

  • Over-Reliance on Technology

  • Misinformation and Hype

  • Historical Misuse of Technology

  • Lack of Regulation

  • Complexity and Inaccessibility

  • Uncertain Impact on Society

  • Deepfakes

  • Tech Skills Gap
