AI Ethics Risks and Best Practices
Let me tell you about the moment I realized how easily AI can go wrong.
A few years ago, I was reviewing student submissions for a class on natural language processing. One student had used a text-generation model to summarize recent news articles. The result? Convincing prose, but one of the articles it summarized never existed.
The AI hadn’t misunderstood; it had invented.
The summary referenced real political figures, real locations, even real dates. But the event it described was fiction. Not exaggeration. Not bias. Just… made up.
And the student hadn’t caught it.
That’s when I knew we needed to teach not just how to use AI, but how to use it ethically, cautiously, and with oversight.
In this lesson, we’ll define what “responsible AI” really means, look at the dangers of unchecked tools, and, most importantly, learn how to make AI work for us without causing harm.
Because AI isn't just a productivity tool. It’s a mirror of human judgment. And when it reflects our blind spots? That’s where things get dangerous.
What does “responsible AI” mean?
The term “responsible AI” gets thrown around a lot these days, usually by companies who want to sound like they’re doing the right thing.
But let’s cut through the buzzwords.
Responsible AI means AI that is:
- Ethical: Designed and used in ways that respect people’s rights
- Fair: Does not reinforce or amplify bias or discrimination
- Transparent: Users understand how it works and what its limitations are
- Accountable: Developers and users take responsibility for outcomes
This isn’t theoretical. It’s practical.
If you use AI to summarize an article, generate a hiring email, or create customer-facing content, you’re shaping someone’s experience. And with that comes responsibility.
Common harms and biases in AI models
Let me be blunt: AI models are biased, not because they’re evil, but because they learn from us. From our internet, our articles, our posts, our habits.
When those patterns are toxic, AI absorbs them.
Example:
In 2018, it was reported that Amazon had scrapped an AI hiring tool after discovering it downgraded résumés that included the word “women’s” (as in “women’s chess club captain”) or that came from all-women’s colleges. The model had trained on 10 years of résumés from a male-dominated tech industry. It “learned” that men were the ideal hire, not because of performance, but because of historical bias.
Other examples include:
- Facial recognition systems that misidentify people of color at much higher rates
- AI-driven healthcare tools that under-prioritize patients based on proxies like zip code
- Language models that reinforce gender stereotypes in job descriptions
The point? Bias doesn’t need to be intentional to be harmful.
That’s why your awareness matters—even if you’re “just using it to write emails.”

Privacy and data security with AI tools
Here’s a rule I live by, and tell all my students:
If you wouldn’t copy-paste it into a public forum, don’t give it to an AI model.
AI tools like ChatGPT, Bard, and others process your input through their servers. Even if they say “we don’t store data,” remember: it’s still passing through their systems.
Don’t input:
- Confidential company data
- Personal identification numbers
- Passwords or credentials
- Private emails or legal drafts
- Patient or student records
Do input:
- General questions
- Public info
- Sanitized or anonymized content (see the redaction sketch below)
- Tasks where you remain the decision-maker
Tip: Many organizations are creating internal policies for safe AI use. If your workplace doesn’t have one yet, this is the time to ask.
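To make “sanitized or anonymized content” concrete, here is a minimal sketch of what redaction-before-prompting can look like. It assumes nothing about any particular AI tool; the `sanitize` helper and the regex patterns are illustrative placeholders I wrote for this lesson, not a complete or production-grade scrubber.

```python
import re

# Illustrative patterns only -- real sanitization depends on your data and your policies.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?\d{1,3}[ -]?)?(?:\(\d{3}\)|\d{3})[ -]?\d{3}[ -]?\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Swap obvious identifiers for placeholders before the text leaves your machine."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

draft = "Follow up with jane.doe@acme.com (555-867-5309) about the Q3 contract."
print(sanitize(draft))
# Prints: Follow up with [EMAIL] ([PHONE]) about the Q3 contract.
```

The specific patterns aren’t the point; the habit is. Strip or replace anything identifying before it goes into a prompt, and keep the mapping on your side if you need to restore names afterward.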
Misinformation and AI-generated content
One of the most dangerous flaws in modern LLMs? They sound right, even when they’re wrong.
This is what we call AI hallucination, where the model produces outputs that are:
- Factually incorrect
- Backed by fabricated sources
- Misleading in their conclusions
- “Confidently wrong”
I’ve seen AI generate fake citations in academic papers. I’ve seen it misquote real people. I’ve even seen it invent psychological studies that never happened.
And in the wrong hands, it gets worse: deepfake videos, fake news headlines, voice cloning, and more.
How to protect yourself:
- Always verify any factual claims made by AI.
- Never rely on AI for medical, legal, or financial advice.
- Avoid publishing AI-generated content without your final edit.
Use AI to draft. You do the truth-checking.
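One habit that helps: before you publish, pull out every checkable claim the AI made and verify it yourself. Below is a rough sketch of that habit in code. The patterns are crude heuristics I’m using for illustration (years, percentages, citation-like strings, URLs), not a fact-checker, and the example summary string is invented for the demo.

```python
import re

# Crude, illustrative heuristics for spotting claims that need human verification.
CHECK_PATTERNS = {
    "citation": re.compile(r"\b[A-Z][a-z]+ et al\., \d{4}"),
    "statistic": re.compile(r"\b\d+(?:\.\d+)?%"),
    "url": re.compile(r"https?://\S+"),
    "year": re.compile(r"\b(?:19|20)\d{2}\b"),
}

def flag_for_review(ai_output: str) -> list[tuple[str, str]]:
    """Return (kind, snippet) pairs that a human should verify before publishing."""
    flags = []
    for kind, pattern in CHECK_PATTERNS.items():
        for match in pattern.finditer(ai_output):
            flags.append((kind, match.group()))
    return flags

summary = "A 2019 survey (Smith et al., 2019) found 73% of readers trusted AI summaries."
for kind, snippet in flag_for_review(summary):
    print(f"VERIFY [{kind}]: {snippet}")
```

Even a crude pass like this makes the point: the model can hand you a list of things to check, but the checking is still your job.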
Best practices for ethical, safe use
You don’t need a PhD to use AI responsibly. You just need to follow some practical rules.
Here’s the checklist I use for myself and my students:
Dr. Anderson’s AI Responsibility Rules:
- Disclose when appropriate: If you’re submitting AI-assisted content, say so. Transparency builds trust.
- Fact-check everything: Especially if it looks polished. Don’t confuse fluency with truth.
- Sanitize sensitive data: Use placeholders if needed (“[CLIENT NAME]”)
- Use your judgment: Don’t outsource ethics, tone, or human empathy.
- Stay updated: AI policies, capabilities, and risks change quickly. Read documentation. Ask questions.
Remember: every time you use AI, you’re shaping how the tool learns and interacts. You’re part of the feedback loop now.
When AI goes wrong
Let me give you a real-world example for reflection:
In 2023, an attorney used ChatGPT to draft a court filing. The problem? The model invented six case citations that didn’t exist.
The attorney didn’t verify them. The court couldn’t find them. The lawyer ended up being sanctioned, and the story made global headlines.
Why did this happen? Because the attorney treated the model as an expert, when it’s really just a pattern generator.
Case Reflection Exercise:
Take 5 minutes and reflect on this question:
What three lessons would you take from this example to apply in your own AI use?
Write them down in your notes or journal. Here’s a starter:
- Never use AI-generated outputs in critical settings without verification.
- Understand the tool’s limitations before trusting its results.
- Your name is still on the work—even if the AI helped write it.
Conclusion
Let me leave you with this:
AI is a tool. You are the conscience.
It’s exciting. It’s powerful. But it’s not perfect—and never will be.
Your job is not just to get the job done. It’s to do it well. Responsibly. With integrity.