The Ethical Imperative in AI and Learning
- Anke Sanders
- Sep 20
- 7 min read
- Updated: Sep 21
From effort to inclusion: what accessibility, affect, and social capital teach us about building a future worth enabling
I attended an ATD Houston event for talent development leaders this week. Rob Lauber and Ryan Austin (Cognota) shared insights and answered questions, and it wasn’t surprising that the conversation eventually turned to AI. The questions were familiar: How do we use it? Where do we start? How does it shape L&D? Will it replace us or replace training? What role do we play?
I made a remark to the group that, as L&D or talent professionals, we have a responsibility to think of AI as more than a technical matter. There is an ethical imperative.
Later, as I reflected on our exchange, I was taken back to my early days as a learning designer, when accessibility was too often treated as a compliance checkbox at the end. But we know now that when we build accessibility in from the start, something remarkable happens: learning becomes better for everyone.
I believe the same is true for ethics in AI enablement. If we treat it as an afterthought, we miss the chance to build something stronger.
From Efficiency to Enablement
The first benefit people mention about AI is usually saving time. I understand that appeal, and I’ve experienced it in my own work. Oftentimes, we rush to streamline, to get things out faster, to prove efficiency. But maybe we need to stop, take a deep breath, and think about what we risk when efficiency is our only, or even primary, measure.
The deepest learning in my life hasn’t come from fast and effortless moments. It has come from the ones that stretched me, frustrated me, and called for my persistence. Learning English as a second language, for example, was not efficient. It was awkward, slow, and full of mistakes (and still is!). But it gave me a voice in a new world.
That’s why I really don’t like to think of AI maturity as an end point we should strive for. Instead, I think in terms of AI enablement rooted in ethical considerations and applications, with concentric circles radiating outward from that core.

When the ethical imperative is at the core, literacy gains depth, fluency builds resilience, and enablement creates trust at scale.
Why Ethics Cannot Be Deferred
Like accessibility in learning design, ethics is too often seen as something to revisit later. But later can be too late. Without ethics at the core, each stage of AI enablement falters, and we put ourselves at risk:
Without ethics, literacy is shallow. People learn the terms but not the consequences.
Without ethics, fluency is reckless. Practices spread, but risks scale even faster.
Without ethics, enablement is fragile. Trust evaporates the moment something goes wrong.
Ethics isn’t an add-on. It is the center of gravity.
Literacy, Fluency, and Enablement in Practice
The model of concentric circles is helpful in theory. But what does it look like when organizations put it into practice?
Literacy — Amex GBT
When Amex Global Business Travel launched its two-track AI program, the goal wasn’t mastery. It was something foundational that many skip over: shared language. All employees took an AI 101 foundation course, while some had a deeper track. The real innovation wasn’t just in the content, but in embedding ethical use as non-negotiable from day one.
This is literacy with substance: not just “what is AI?” but “what does responsible AI look like in our context?”
Fluency — Launch Consulting
Fluency is what happens when awareness turns into applied practice. At Launch Consulting, fluency was treated as a culture shift, not a course. Online modules were paired with assessments, an internal AI hub, policies, and change champions. By mid-program, 91% of employees were already applying AI to their daily work.
Fluency here meant confidence with accountability: using AI not recklessly, but responsibly.
Enablement — AstraZeneca
Enablement goes beyond fluency: building the systems that make responsible AI sustainable. At AstraZeneca, this looked like a year-long ethics-based audit of AI systems. Instead of lofty principles, they tested whether ethics could be operationalized and tracked across teams.
The result was a repeatable framework for scaling AI responsibly, ensuring that ethics wasn’t a one-off initiative but part of the organization’s infrastructure.
Ethics as a Multiplier of Human–AI Collaboration
Taken together, these case studies show the ripple effect of enablement: literacy builds shared awareness, fluency embeds practice, and enablement secures trust at scale. But that only works with an ethical imperative.
And what sits beneath all three is something harder to quantify: how much ethics amplifies the results.
Here’s one way to picture it:

This is, of course, a much-simplified, theoretical model and definitely not a prediction. But it illustrates a truth many of us sense:
Low ethics leads to fragile, flat outcomes. Adoption may happen, but collaboration breaks down quickly.
Medium ethics produces steady, incremental gains.
High ethics compounds returns, because trust, transparency, and inclusion scale alongside adoption.
In other words: ethics isn’t just a safeguard but a multiplier. The higher the ethics, the more sustainable and productive human–AI collaboration can become.
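If it helps to make the “multiplier” idea concrete, here is a tiny, purely hypothetical sketch in Python. Everything in it is made up for illustration: the collaboration_value function, the trust_growth parameter, and the numbers themselves. It only shows what “flat,” “steady,” and “compounding” trajectories could look like; it is not a model of any real data.

def collaboration_value(periods, trust_growth):
    """Hypothetical toy model of human-AI collaboration value over time.

    trust_growth < 1.0  -> trust erodes each period (low ethics)
    trust_growth == 1.0 -> trust holds steady (medium ethics)
    trust_growth > 1.0  -> trust compounds each period (high ethics)
    """
    value, trust, history = 0.0, 1.0, []
    for _ in range(periods):
        value += trust            # each period of adoption adds value, scaled by current trust
        trust *= trust_growth     # trust erodes, holds, or compounds
        history.append(round(value, 1))
    return history

# Illustrative parameters only -- not estimates of anything real.
print("low ethics:   ", collaboration_value(8, trust_growth=0.7))
print("medium ethics:", collaboration_value(8, trust_growth=1.0))
print("high ethics:  ", collaboration_value(8, trust_growth=1.1))

The point isn’t the numbers; it’s the shape. Without trust, adoption plateaus quickly; with it, the same adoption keeps paying off.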
And just like accessibility once reshaped learning design for everyone, ethics has a similarly expansive effect: it makes collaboration not only safer, but stronger.
Effort and Affect: The Hidden Side of Learning
But even the strongest systems aren’t enough on their own. Technical skill and governance are only part of the story. To really understand the ethical imperative, we also need to look at the human side of learning, where effort, emotion, and belonging come into play.
Research shows that effort “feels bad”:
Learners underestimate their growth when tasks are difficult (Baars et al., 2024).
Mental effort strongly correlates with negative emotions (Boksem et al., 2024).
When effort goes unrecognized, people disengage (Liu et al., 2024).
Here lies the paradox: the very struggle that makes us doubt ourselves is often what produces the deepest learning.
As an immigrant learning English, I lived that paradox. Every mispronounced word can still feel like evidence that I don’t belong. Yet those moments of discomfort also somehow expand my capacity, resilience, and even my voice.
If AI makes learning feel effortless, we risk creating learners who disengage at the first sign of struggle. That isn’t just a design challenge; it’s a belonging challenge. And once again it brings me back to self-determination theory. (Stay tuned, I think that will be worth a separate, full-fledged discussion.)
AI and the Fragile Fabric of Social Capital
Learning is not only cognitive but relational. It happens in peer exchanges, mentoring conversations, and spontaneous moments of connection.
Now, if AI bypasses those interactions in the name of efficiency, we don’t just save time. We risk hollowing out the social capital that sustains organizations. For example, over-reliance on AI narrows networks and weakens bonds (Bayraktar, 2025), while strong social capital predicts AI readiness and resilience (Ode et al., 2025). In other words: community is not a byproduct. It’s a condition for adaptability. Protecting social capital isn’t optional; it is both an ethical responsibility and a strategic advantage.
The Human Side of the Ethical Imperative
Ethics also shows up in who gets excluded when we don’t design carefully. Consider neurodivergent bias: AI language models often associate autism- or ADHD-related terms with “disease” or “badness,” even in neutral contexts (Pavlopoulos et al., 2024).
For neurodivergent employees, this can mean assistive tools that distort their voice instead of amplifying it.
The research on second-language speakers tells a similar story: AI detectors often misclassify non-native English writing as “AI-generated” (Stanford HAI, 2023), while large models underperform in underrepresented languages (Stanford News, 2025). The result is an invisible linguistic tax on those already navigating difference (Yomu.ai, 2024).
As someone who still feels the weight of her accent, and who corrects the pronunciation of her first name in most introductions, I know how belonging can hang by a thread. AI must widen the circle, not narrow it.
The Ethical Imperative for L&D and Leadership
So what do we do with all this?
If we know that:
Effort feels unpleasant and undermines confidence,
Social capital sustains learning and adaptability, and
AI risks amplifying bias against marginalized groups…
…then the ethical imperative is clear:
Design for struggle. Make space for difficulty and iteration, but pair it with recognition.
Protect informal learning. Guard the spaces where people connect without AI intermediating every interaction.
Measure affect. Ask not just what people learned, but how they felt.
Use AI to enable, not replace. Free us for deeper human connection, not strip it away.
Efficiency matters. But the true measure of AI in learning is whether it strengthens, or erodes, the human connections that make growth and belonging possible.
That is the ethical imperative.
When I think back to the event that triggered this blog post, I realize the questions that were asked are important, but we should start with this one:
What kind of future are we building when we put ethics at the center?
Because in the end, being Future Fluent isn’t about adopting technology faster. It’s about ensuring we arrive in the future together, enabled.
References
Amex Global Business Travel. (2024). Empowering Our Colleagues with AI Fluency. Retrieved from: https://www.amexglobalbusinesstravel.com/blog/empowering-our-colleagues-with-ai-fluency/
Launch Consulting. (2024). AI Fluency as the Next Frontier. Retrieved from: https://www.launchconsulting.com/case-studies/ai-fluency-as-the-next-frontier
AstraZeneca. (2024). Ethics-Based Auditing of AI Systems. Retrieved from: https://arxiv.org/abs/2407.06232
Baars, M. et al. (2024). The Relation Between Perceived Mental Effort, Monitoring Judgments, and Learning Outcomes: A Meta-Analysis. Springer. Retrieved from: https://link.springer.com/article/10.1007/s10648-024-09782-6
Boksem, M. et al. (2024). The Aversiveness of Mental Effort: A Meta-Analysis. Psychological Bulletin. ResearchGate. Retrieved from: https://www.researchgate.net/publication/379234723_The_Aversiveness_of_Mental_Effort_A_Meta-Analysis
Liu, J. et al. (2024). Effort–Reward Imbalance and Learning Engagement. Frontiers in Psychology (PMC). Retrieved from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10748527/
Bayraktar, S. (2025). The Effects of Artificial Intelligence on Social Capital. SSRN. Retrieved from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4950623
Ode, E. et al. (2025). Social Capital and Artificial Intelligence Readiness in SMEs. Information Systems Frontiers (Springer). Retrieved from: https://link.springer.com/article/10.1007/s10796-025-10526-2
Pavlopoulos, J. et al. (2024). Bias Against Neurodivergence in AI Language Models. PubMed. Retrieved from: https://pubmed.ncbi.nlm.nih.gov/38284311/
Stanford HAI. (2023). AI Detectors Biased Against Non-Native English Writers. Retrieved from: https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers
Stanford News. (2025). Digital Divide: AI Excludes Non-English Speakers. Retrieved from: https://news.stanford.edu/stories/2025/05/digital-divide-ai-llms-exclusion-non-english-speakers-research
Yomu.ai. (2024). How AI Paper Writers Assist Non-Native Speakers in Academic Writing. Retrieved from: https://www.yomu.ai/resources/how-ai-paper-writers-are-assisting-non-native-speakers-in-academic-writing