Hi there, thank you for coming to this site! It's still very much a work in progress and subject to change. I'm also actively seeking feedback on this primer, so just drop me a message at sweekiat [at] stanford [dot] edu! I hope you enjoy the read!

Often, when we first fall in love, the object of our affection seems perfect. But the happy honeymoon is cut short when we realize they are not. Turns out, they've got annoying habits. They wake up with bad breath. They burp. And oh my god their farts smell just as bad as ours.

When you realize the love of your life is human.

In the same way, our honeymoon with artificial intelligence (AI) is quickly giving way to a realization that AI is not perfect. Turns out, AI is not neutral. It is not necessarily right or fair. The recommendations of AI systems can be just as sexist or racist as any human.

When you realize the AI in your life is biased.

There are so many ways that AI can go wrong. There are so many guidelines from governments, companies and non-governmental organizations (NGOs). There are so many new algorithms, datasets and papers on ethical AI. It can all be a bit hard to take in, so this guide is here to help.

Send. Help. Please.

At the moment, the guide is targeted at AI practitioners, mainly researchers and engineers, and assumes some understanding of AI technologies. But it may also be useful for anyone helping to implement or recommend AI solutions.

The current version of the guide focuses on algorithmic bias. Future work will include other AI-related problems such as black boxes, privacy violations, ghost work and misinformation.

Here is a quick overview of what we will cover!

Get a quick introduction to AI ethics, featuring the most important question to ask when implementing AI solutions!

~10-minute read

Learn about qualitative and quantitative ways to define fairness and try your hand at tuning a classifier to fulfill different fairness metrics!

~15-minute read
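To give a flavor of what a quantitative fairness metric looks like, here is a minimal sketch (not taken from the guide itself) of demographic parity, one common definition: a classifier satisfies it when its positive-prediction rate is the same across demographic groups. The group labels and predictions below are made up for illustration.

```python
# Toy illustration of demographic parity, one common fairness metric.
# It asks: does the classifier predict the positive class at the same
# rate for every demographic group?

def positive_rate(predictions):
    """Fraction of binary predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

# Hypothetical predictions a classifier made for two groups.
group_a = [1, 0, 1, 1, 0, 1, 0, 1]  # positive rate = 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # positive rate = 3/8 = 0.375

# Demographic parity difference: 0 means perfectly equal rates.
dp_diff = abs(positive_rate(group_a) - positive_rate(group_b))
print(dp_diff)  # 0.25
```

A difference of 0.25 means the classifier flags members of one group as positive 25 percentage points more often than the other; tuning a classifier to shrink this gap (or to satisfy other metrics, which can conflict with it) is exactly the kind of exercise the chapter walks through.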

Check out some real-world examples of algorithmic bias, including risk scores used in criminal justice and Google Image Search!

~10-minute read

We look at some possible sources of bias across the AI pipeline: data preparation, algorithm design and the actual deployment!

~10-minute read

Finally, we will wrap up with a Summary Checklist and a list of Resources.