Every time an engineer switches tasks, they pay a tax to re-ground the AI: re-pasting context, re-explaining the goal, restarting threads that drifted. Linguist names that cost, measures it in an instrumented sandbox, and trains it down with a published rubric.
Token spend, code review variance, "feels draining" — those are symptoms. The cause is that engineers re-pay the same context tax on every task, every interruption, every paused conversation. A team without shared patterns for re-grounding burns minutes per task, hours per week, weeks per quarter. And nobody measures it because nobody has a name for it.
- **Headline outcome.** Trained engineers re-ground faster and resume paused work cleanly.
- Every completer reaches the Enabled tier: competent, deliberate use of AI for real work.
- Instrumented sandbox plus published rubric: bracket placement before training, measure movement after.
| Component | Per engineer | Notes |
|---|---|---|
| Baseline audit | €0 | Pre-engagement, included. |
| Enabled programme (6 wk) | €800 | Cohort or internal-only. |
| Post-training assessment | €200 | Same rubric, same sandbox. |
| Advanced upgrade | €600 | Engineering teams only. |
Pricing is indicative. Minimum 10 engineers; volume terms above 50. Optional quarterly refresh available post-engagement.
Every completer reaches the Enabled tier as defined by our published rubric, or we extend the programme at no additional cost until they do.
Linguist is the only published rubric for AI-fluency capability; Cyborg is the only instrumented sandbox built around it. Next step: a 30-minute call to scope your team and book the free baseline audit: linguist.vuelolabs.com/teams