
Philosophers: The First Critical Thinkers


Philosophy examines how actions affect individuals and society. While it is most commonly associated with good, fair, right, and just actions, it also identifies and explains their opposites: the not-good, unfair, unjust, and wrong. What makes it distinctive is its reliance on logic rather than on simply doing what we want. Ethics in AI means understanding our actions and choosing better ones by applying that logic to these topics:
​
  1. Ethical frameworks

  2. Value alignment

  3. Moral reasoning

  4. Responsibility and accountability

  5. Justice and fairness

  6. Privacy and autonomy

  7. Existential risk

  8. Human-AI relationships

  9. Epistemology and uncertainty

  10. Cross-disciplinary dialogue

Marble statue of the ancient philosopher Socrates

"Teaching AI ethics is like teaching a car to appreciate a sunset—it requires a leap of imagination, a dash of creativity, and a whole lot of patience [and understanding why the sunset matters]." - ChatGPT (italics NextGen Ethics)

Top 10 issues in AI + Philosophy:

​

  1. Ethical implications

  2. Consciousness and personhood

  3. Job displacement and economic inequality

  4. Bias and fairness

  5. Privacy and surveillance

  6. Existential risk

  7. Human-AI relationships

  8. Accountability and responsibility

  9. Intellectual property and ownership

  10. Control and autonomy

Image by rivage

“We need to make sure that communication is done in a manner such that it doesn’t seem like people who are talking about the responsible application [of AI] are gatekeeping, which we are not,” they said. “We are advocating for the safe and sustainable development of products.” - Washington Post

Ethics, AI, and Notable Tech Companies
 

Nearly all notable tech companies have laid off their ethics teams or distributed them throughout the company, diluting their efforts:

  1. Twitch (owned by Amazon)

  2. Twitter

  3. Microsoft

  4. Facebook (owned by Meta)

  5. Google

  6. Snap

​

"The slashing of teams tasked with trust and safety and AI ethics is a sign of how far companies are willing to go to meet Wall Street demands for efficiency, even with the 2024 U.S. election season — and the online chaos that’s expected to ensue — just months away from kickoff. AI ethics and trust and safety are different departments within tech companies but are aligned on goals related to limiting real-life harm that can stem from use of their companies’ products and services."

​

Tech layoffs ravage the teams that fight online misinformation and hate speech >

Ethics Washing

 

“The last few years have seen a proliferation of initiatives on ethics and artificial intelligence (AI). Whether formal or informal, led by companies, governments, international and non-profit organizations, these initiatives have developed a plethora of principles and guidance to support the responsible use of AI systems and algorithmic technologies. Despite these efforts, few have managed to make any real impact in modulating the effects of AI.” Carnegie Council for Ethics in International Affairs >

​

Ethics washing most often looks like:

  1. Surface-level emphasis on ethics

  2. Neglect of deeper ethical issues

  3. Token gestures

  4. Misleading portrayal

  5. Lack of accountability

Trends to watch in philosophy + AI

​

Bringing AI mainstream has come with its share of challenges. Within those challenges, themes have emerged that are worthy of critical thinking and consideration. Those working in AI will be confronting them in the near future - if not today.

​

Some content is not publicly available; however, we're always willing to talk shop, so reach out today to discuss!

Anthropomorphizing AI -
Do we see ourselves in AI?

​

Anthropomorphizing technology means attributing human qualities to it, prompting the question of AI's similarity to humans. While many seek a simple yes or no, a nuanced perspective reveals the answer lies somewhere in between.


Miller's Spectrum View of Anthropomorphization - Contact us to see AI through a new lens >

AI as a Legal Entity -
A Question of Legal Accountability

​

Air Canada argued that its chatbot was a separate legal entity responsible for its own actions, but the tribunal rejected the argument. Legal experts are discussing whether technology can fulfill the requirements to be considered a legal entity, which include the ability to enter contracts and file lawsuits.


Air Canada: A Chatbot as a Separate Legal Entity >

The Intelligence of AI -
Is it able to think like we do?

​

We're striving for AI that mirrors human thought and knowledge. Questions linger about how we'll recognize this achievement, or whether AI has already attained it. Fortunately, parallels in our world offer insights into this dilemma.


AI and the Epistemic Peer Challenge - Contact us to learn how to evaluate the expert status of AI! >

Philosophy + AI Links

​

  1. Markkula Center for Applied Ethics - an amazing site for resources and philosophical dialogue on ethics in AI (and in other areas).

  2. University of Helsinki AI Resources - another great resource for all things AI (and they have free training offers as well - see below).

  3. University of Helsinki has an amazing course on AI ethics, highly rated across every training list - and it's free!

  4. NextGen Ethics Ethics-Informed Vetting Process for career professionals wanting a robust and accurate view of the ethical challenges in AI - moving beyond privacy, trust, and data.

  5. Best Study Programs in Ethics in the World

  6. University of Minho - Applied Ethics Talks and Conference, held in June each year.

  7. NextGen Ethics Ethical AI Pledge for Public Trust aimed at directing businesses, developers, and deployers to turn their attention toward user trust - without ethics washing.

  8. UNESCO Women for Ethical AI - "Women4Ethical AI leverages the knowledge, contribution and networks of leading Artificial Intelligence (AI) experts to advance gender equality in the AI agenda."

Image by KOBU Agency

"Women make up 57% of the overall workforce. Comparatively, women make up only 27% of the workforce in the technology industry. Of the 27% that join the technology industry, more than 50% are likely to quit before the age of 35, and 56% are likely to quit by midcareer." - The Retention Problem
