Posture Before Progress: Why AI Adoption Is About Philosophy, Not Just Skill
- Anke Sanders

- Oct 5, 2025
- 7 min read
This summer, my daughter went to a two-week sleepaway camp: no devices, no advanced tech, just connection, courage, and lots of skill building. “Durable skills,” as Camp Director Erec Hillis called them in one of his truly inspiring and insightful newsletters to parents.
The following quote is from the first edition of his newsletter, published in March 2025:
(…) durable skills kids practice at camp include:
- Communication – Getting those ideas out clearly and listening to others.
- Collaboration – Working with different people and perspectives to solve problems.
- Critical Thinking – Making thoughtful and logical decisions.
- Creativity – Finding new and maybe different ways to approach challenges.
- Responsibility – Owning decisions and then owning the repercussions.
- Resilience – Bouncing back up when things don’t go as planned.
- Flexibility – Adapting to change, not resisting it.
That edition deserves its own blog post (stay tuned, I’m working on it!). And while it connects to the broader topic of skills and provides context for this post, here I want to focus on a more recent reflection Erec shared, one that hasn’t left me. It was about a group of teenagers. He asked them a simple question: “How do you use AI tools at home and in school?”
And then, as he described, the floodgates opened.
Below is his full reflection. It’s worth reading in its entirety.
Guest Reflection:
This is Director's Newsletter #18, sharing thoughts from camp every other week during the school year.
“Dear Anke,
This summer, I asked a group of high schoolers about how they use AI tools at home and in school.
It opened a floodgate.
The next hour was stories, frustrations, and feelings about a topic that is clearly weighing on them.
What struck me wasn't just what they said about AI tools, but the sense of relief in having a space where they could talk openly about their experiences.
More than once, it was clear that others were going through the same frustrations. Different schools. Different places. Same feelings.
And it was apparent that these soon-to-be-adults felt they were stuck without an obvious way forward.
The Reality They're Living
To be very (very!) clear, I'm not here to weigh in on the positives, negatives, or neutralities of AI. Different discussion. Different day.
This is simply an attempt to see things through the eyes of teenagers growing up in a world that’s very different from when I was in high school.
AI is changing education faster than schools can keep up. There was talk of confusion, unclear expectations, and acceptance that most high schoolers are “cheating” on homework and tests (their words, not mine).
There was an intense feeling of pressure to keep up, even if it meant using AI unethically (in an academic sense).
When I asked how this situation makes them feel, the responses came quickly:
Sad, mad, lost, frustrated. One even said she felt “embarrassed to try” if others were going to cut corners to get similar grades.
This wasn’t a group celebrating easy shortcuts. They were distressed about a situation with inconsistent rules and unclear expectations.
This stuff is still so new.
We have an opportunity, as adults, to help our teenagers make sense of a rapidly changing world.
After hearing their frustrations and concerns, here is how we began thinking about the ways they might navigate this evolving realm of AI.
Four Approaches
To me (and I’d love feedback on this), it seems like there will be at least four different groups moving forward.
Group 1: The Shortcut Users
AI is primarily used to cut corners and free up time for other activities. As one student put it: "Some kids use AI as a shortcut so they can spend more time on TikTok."
Group 2: The Amplifiers
Here, AI is something that can make us more capable and productive. Amplifiers see AI as a set of tools to enhance creativity and accomplish more of what they want.
Group 3: The Zaggers
A few campers recognized the power of AI, but prioritized developing skills that AI can’t replace. One said he was interested in being a Master Electrician, because the profession suited his interests and would still be around in 30 years.
Group 4: The Wait-and-See Approach
Others were understandably unsure, and preferred a wait-and-see approach. That’s fair to an extent, but to editorialize slightly: AI tools aren’t going anywhere. They’re not a fad. The sooner we get comfortable with them, the sooner we can decide how they will best serve our goals.
There’s one big reason I bring all of this up: Teenagers need a chance to talk openly about this issue so they can make good decisions about how to navigate it moving forward.
When I used the word “floodgate” at the beginning, it wasn’t an exaggeration.
If there was one thing that was clear to me from this conversation, it’s that our Senior Campers hadn’t had enough chances to talk openly with adults about AI.
A Camp Connection
It may seem odd for a tech-free summer camp to write about how kids can navigate a world of AI.
Your campers spent their days this summer swimming, playing games, and making friends around a campfire.
AI won’t change these camp experiences in any meaningful way.
Decades from now, we will still be building flexibility, resilience, and collaboration skills at camp.
Which is good, because kids will still need these core competencies no matter what the future holds (whether we’re colonizing Mars or being chased by T-1000s).
Camp isn't going to solve our AI questions.
But that’s okay.
Camp helps build confidence to navigate the unfamiliar. It reminds us that we can lean on each other, and we can practice making decisions that are aligned with who we want to be.
And honestly, after listening to those high schoolers, I'm more convinced than ever these skills are exactly what this next generation is going to need.
These are durable human skills (that T-1000s will never have).
Happy Friday,
Erec”
What Stuck With Me
When I first read this, two things struck me immediately:
- The relief these teenagers felt in finally having a space to talk openly about AI.
- The patterns in how they were making sense of it: shortcuts, amplification, zagging into durable skills, or waiting on the sidelines.
But underneath their stories was something deeper: an ethical discomfort.
One student admitted she felt “embarrassed to try” if others were cutting corners. That’s not a technical problem. That’s an ethical one.
Floodgates at Work
Like teenagers, most employees are also navigating AI in silence.
- They are trying things quietly.
- They are worried about “getting caught.”
- They are unclear about expectations.
- They feel pressure to keep up, even if it means cutting corners.
And when they’re finally invited into a safe, open conversation? The floodgates open, too.
That’s why the ethical imperative matters so much, as I explained in my previous post. Without clear expectations and dialogue, shortcuts start to feel normal, and trust quietly erodes.
Posture: The Missing Piece in AI Adoption
When we talk about AI adoption, we often default to skills: prompt engineering, workflows, governance.
But skill only tells us what someone can do. It doesn’t tell us what they will do.
That’s where posture comes in.
Posture means the stance or orientation a person (or organization) takes toward AI. It has three parts:
- Attitude: Do I see AI as a shortcut, an amplifier, a threat, or an opportunity?
- Orientation: Do I lean in, hold back, or zag toward something different?
- Ethical stance: Am I using it responsibly, or cutting corners just to keep up?
Skill is what you can do. Posture is how you choose to do it. And without the right posture — anchored in ethics — skills risk being misused or underused.
The Human Dimension of AI Adoption
Erec’s four groups — Shortcut Users, Amplifiers, Zaggers, and Wait-and-See — are powerful because they give us a simple entry point. But each group can show up at different skill levels.
That’s where the 12-box model I developed comes in.

Shortcut Users Show Weak Posture Across the Grid
- Unaware Users (low skill)
- Lazy Dependents (intermediate skill)
- Uncritical Exploiters (advanced skill)
- Clever Cheaters (expert skill)
Posture unites them: shortcuts over responsibility.
Amplifiers Show Strong Posture Across the Grid
- Responsible Experimenters (intermediate skill)
- AI-Human Collaborators (advanced skill)
- Responsible Amplifiers (high skill)
Posture unites them: AI as enhancer, not crutch.
Zaggers Are Anchored in Human Strengths
- Ethical Beginners (low skill)
- Emerging Users (intermediate skill, balancing human-only skills with AI)
Posture unites them: protecting the human edge.
Wait-and-See Users Show Moderate Posture Across the Grid
- Careful Skeptics (low skill)
- Curious Dabblers (basic skill)
- Pragmatic Adopters (intermediate skill)
Posture unites them: uncertainty, watching first.
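To make the grid concrete, here is a minimal sketch of the 12-box model written as a simple lookup table. The group names and skill levels come straight from the lists above; the Python structure and the archetype helper are illustrative choices of mine, not part of any formal tool.

```python
# Illustrative only: the 12-box model as a posture-by-skill lookup.
# All labels come from the lists above; the structure itself is a
# hypothetical sketch, not an official implementation.

POSTURE_GRID = {
    "Shortcut Users": {
        "low": "Unaware Users",
        "intermediate": "Lazy Dependents",
        "advanced": "Uncritical Exploiters",
        "expert": "Clever Cheaters",
    },
    "Amplifiers": {
        "intermediate": "Responsible Experimenters",
        "advanced": "AI-Human Collaborators",
        "high": "Responsible Amplifiers",
    },
    "Zaggers": {
        "low": "Ethical Beginners",
        "intermediate": "Emerging Users",
    },
    "Wait-and-See": {
        "low": "Careful Skeptics",
        "basic": "Curious Dabblers",
        "intermediate": "Pragmatic Adopters",
    },
}

def archetype(posture: str, skill: str) -> str:
    """Return the archetype label for a posture/skill pair, if mapped."""
    return POSTURE_GRID.get(posture, {}).get(skill, "unmapped")

# The same skill level reads very differently depending on posture:
print(archetype("Shortcut Users", "intermediate"))  # Lazy Dependents
print(archetype("Amplifiers", "intermediate"))      # Responsible Experimenters
```

The point of writing it down this way is the key order: skill level alone never determines the label. Posture comes first.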
From Individuals to Organizations
This dynamic doesn’t stop at individuals. Organizations also adopt postures.
BCG recently outlined four archetypes of companies scaling AI:
- The Scaler: productivity by layering AI onto intact structures.
- The Horizon Builder: investing to enable new capabilities while scaling.
- The Reinventor: breaking silos and redesigning teams around AI.
- The Streamliner: collapsing roles for efficiency.
These organizational postures mirror the individual ones:
- Shortcut Users → Scalers (optimize today, risk ethical blind spots).
- Amplifiers → Horizon Builders (expand capability responsibly).
- Zaggers → Reinventors (reimagine systems around human strengths).
- Wait-and-See → Streamliners (cost-driven caution, sidestepping reinvention).
Whether we’re talking about people or organizations, the truth is the same: AI adoption is as much about posture and philosophy as it is about skill or technology. And the philosophy is inseparable from ethics.
Why This Matters
Too often, we reduce AI adoption to tool rollouts, policy documents, and, if we’re lucky, some training programs. All are necessary; none are sufficient.
The questions we need to be asking:
- What postures are people and organizations adopting?
- Are they shortcutting, amplifying, zagging, or waiting?
- Do they preserve the old, or reimagine the model?
- And underneath it all: what ethical anchors guide those choices?
Until we surface those questions, until we make space for the floodgates to open the way Erec did, we’ll mistake activity for adoption. Whether it’s teenagers in classrooms or executives in boardrooms, the challenge of AI isn’t just “what can this tool do?” It’s “what stance will I take in a world where this tool exists?” In the end, AI adoption is posture before progress. And posture without ethics isn’t progress at all.
I’d love your feedback as I work on a leader guide to help assess teams on their stance and support development toward the stance an organization aims to embrace.
What resonated, and what did not?