
Your team, confident with AI.

Practical, no-hype training that gets your existing team working with AI tools in their own roles. Keyboards out, real tasks. Whether they write code or not.

Never used AI: 84% · Tried a free chatbot: 16% · Pays for AI: ~0.3% · Uses coding tools: ~0.04%

Each dot = ~3.2 million people. 2,500 dots = 8.1 billion humans.

Less than 1% of people pay for AI tools.

16% have tried a free chatbot. The remaining 84% have never used AI at all.

Your technical staff are experimenting informally. Nobody owns the process. The non-technical people haven't started. Nobody's speaking the same language about what AI can actually do.

This training closes the gap.

What we cover

Everyone starts from real work.

Where is your team right now? What do they already know? We establish a baseline, then go hands-on with tasks from the room. The difference between a vague chatbot answer and a genuinely useful one becomes obvious fast.

Adoption baseline

Where your team sits. Most are further ahead than they think.

Hands-on with real tasks

Context-giving, prompt iteration, system configs. Not a product demo.

Same language

The whole team leaves understanding what AI can and can't do for the business.

Then we go deeper

Non-technical teams

Ops, finance, marketing, HR. Your team doesn't need to write code to get serious value from AI. They need someone to show them the tools and get out of the way.

Claude Cowork. Claude in Excel.

Tools built for people who don't write code but do real, complex work every day. This is where the biggest unlock happens for non-technical teams.

AI for your actual work

Each person picks a task from their own role and tries it with AI support. Not a canned demo. Real output they can use.

They walk away with

Real tasks done with AI from their own work
Configured tools they can use Monday morning
Understanding of what's possible without code
Data governance checklist in plain language
Confidence. Not theory.

Technical teams

Your devs have probably been experimenting already. The question is whether that experimentation is structured, reviewed, and heading somewhere useful.

Prompt discipline

Prompts are instructions. Instructions need the same discipline as code. What gets lost when someone else runs your prompt? What assumptions don't survive the handoff?
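One way to picture that discipline: keep prompts as versioned files with their required inputs and expected output spelled out, so a handoff loses nothing. A minimal sketch; the file name, directory, and wording are illustrative, not a prescribed format.

```shell
set -e
# Treat a prompt like code: a reviewable file that states its role,
# the context it needs, and the shape of its output.
mkdir -p prompts
cat > prompts/summarize-ticket.md <<'EOF'
Role: support engineer summarizing a ticket for handoff.
Inputs required: ticket text, product version, customer tier.
Output: three bullets, plain language, flag anything uncertain.
EOF
cat prompts/summarize-ticket.md
```

A file like this can go through the same review and version control as any other change, which is exactly what survives the handoff.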

Claude Code, skills, and plugins

Competence with Claude Code for safe AI-assisted development. Skills and plugins for repeatable workflows. How to build with AI, not just use it.

Production safety

PR discipline for AI-generated code. Worktrees and sandboxing. Code review requirements. How to trust what the model produces without blindly shipping it.
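The worktree idea can be sketched in a few commands: AI-assisted changes land on their own branch in a separate checkout, and nothing merges without review. A minimal sketch in a throwaway repo, assuming git 2.28 or later; all paths and branch names are illustrative.

```shell
set -e
# Throwaway repo for the demo
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main repo && cd repo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"
# AI-assisted work goes to its own branch in a separate worktree,
# so the main checkout stays clean for review
git worktree add -q ../ai-sandbox -b ai/experiment
cd ../ai-sandbox
echo "model output" > draft.txt
git add draft.txt
git commit -q -m "AI-assisted draft"
# Back in the main checkout: inspect everything the model changed
cd ../repo
git diff main..ai/experiment --stat
```

From here the change would go through a normal pull request, with the same review bar as hand-written code.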

They walk away with

Shared understanding of when and how to use AI
Prompt skills grounded in real experience
Competence with Claude Code and AI dev tooling
Code review practices for AI-generated output
Production-grade habits, not demo-grade excitement

Built around your team, not a fixed syllabus.

Every team starts from a different place. The training flexes to match: the content, the depth, the format, and the time. An afternoon or a full day. In-person or remote. Technical, non-technical, or both in the same room.

Flexible timing

An afternoon to a full day.

Mixed audiences

Same room, different depth.

In-person or remote

Whatever works for your team.

Want your team confident with AI? Let's talk about what they need.

Or reach me directly: training@lope.works · LinkedIn