The Tier Zero Problem: Why AI Literacy Isn’t One Thing
A Cognitive Security essay on competency tiers, organizational risk, and why treating AI literacy as a single checkbox might be the most expensive mistake your institution makes.
Section: Cognitive Security | The Pattern Field
A note to readers: This is the second Cognitive Security essay. The first -- Push-Button Cybersecurity — examined how abstraction and workforce pressure are producing cybersecurity operators instead of practitioners. This essay extends that pattern to something broader: how we’re approaching AI literacy with the same flawed logic --treating a spectrum as a single point.
Somewhere right now, someone in your organization is pasting confidential data into a public AI model.
They’re not trying to be reckless. They’re trying to be efficient. No one told them the difference between a local deployment and a cloud API. No one explained that the data goes somewhere, that the prompt isn’t a private conversation. They just needed a summary of a long document, and the tool was right there, and it worked. Probably beautifully.
I call this AI Literacy “Tier Zero.” And many organizations are still living here without knowing it.
The Monolithic Fallacy
Over the past two years, I’ve seen many institutions -- universities, enterprises, government agencies -- scramble to address “AI literacy.” The pattern is consistent: someone in leadership declares that the workforce needs to be AI-literate, a committee forms, a training module gets built, everyone clicks through it, and a compliance box gets checked.
Done. AI-literate. Move on.
Except nothing actually changed, because the training treated AI literacy as one thing. One competency. One bar. One size, all roles, all contexts. The CEO and the data scientist got the same module. The adjunct faculty member and the IT director sat through the same webinar. Everyone emerged equally “trained” and equally unprepared, because the training was designed around the question “Do our people know about AI?” when the real question was “Do our people know what they need to know about AI for what they’re actually doing with it?”
That distinction between general awareness and role-specific competency is where a large share of AI literacy efforts quietly fail.
What Tier Zero Actually Looks Like
Before I describe the tiers, I need to name what lives below them. Because the frameworks don’t talk about this. They start at the first rung and assume everyone’s at least standing on the ladder. In my experience, most people are still on the ground. And some are digging.
Tier Zero is not ignorance -- it's often a lack of foundational understanding dressed up as a position.
It’s the staff member who uses ChatGPT daily and has never once questioned its output. It’s the faculty member who bans AI from the classroom but can’t articulate what a large language model actually does. It’s the executive who approved an AI initiative they don’t understand, based on a vendor demo that made everything look simple and sleek.
Tier Zero often manifests in three escalating levels of risk:
Risk to the organization. Confidential materials uploaded to public models, often via Bring-Your-Own-AI. Sensitive strategy documents summarized through tools with unknown data retention policies. Internal communications processed by APIs that may train on your input. This isn’t theoretical. It’s happening now, in organizations that consider themselves sophisticated.
Risk to the individual. Cognitive disengagement. The slow replacement of critical thinking with output consumption. Not “help me think through this” but “do this for me.” The person who stops reading the full report because the AI summary seems good enough, and never realizes the summary is incomplete. Or worse, doesn’t have enough foundational knowledge to catch what’s missing. This is the beginning of what I call the hollowing out problem: organizations using LLMs to produce important materials -- documentation, reports, strategic plans -- without understanding the tool’s failure modes. The output looks professional. It reads well. But key nuances are lost, critical context dropped, complexity smoothed into false simplicity. The document looks complete but the thinking it is meant to represent is merely a veneer.
Risk to society. A population that consumes AI-generated reasoning without the capacity to evaluate it. Something subtler than misinformation: the erosion of the habit of verification itself. When the tool is fast, designed to be confident, and mostly right, the incentive to check begins to dissolve rapidly. Multiply that across an entire workforce, and if we aren’t careful, across an entire generation of students. That’s not a technology problem -- it’s a civilizational one - in a time when we’re obviously still reeling from tech-led societal upheavals like algorithmic content lenses and dopamine loops.
These risks compound through specific failure modes that most AI literacy programs never cover -- the kinds of things a Tier 1 curriculum should include but almost never does:
Lost-in-the-middle failures. LLMs systematically underweight information in the center of long contexts. Feed a model a 50-page report and ask for a summary, and the findings on pages 20-30 may effectively disappear. If you don’t know this, you’re making decisions on incomplete information. And you don’t know it.
Temporal blindness. Models present outdated information with the same confidence as current information. They have no internal sense of "this may have changed." A user asking about a regulation, a vendor policy, or a market condition may receive an answer that was accurate 18 months ago and is wrong today. No flag, no warning, no expiration date on the information.
Sycophancy bias. Models tend to agree with the user’s stated premise rather than challenge it. Ask a leading question and you’ll get a confirming answer. If your assumption is wrong, the AI is likely to reinforce it rather than correct you. For a Tier Zero user who already has misplaced confidence in the tool, this is the failure mode that compounds all the others.
These aren’t edge cases. They’re how the technology works. If your literacy program isn’t covering them, your people are still at Tier Zero.
And this is where AI literacy diverges from every other kind of organizational training. Fire safety, HR compliance, security awareness -- these are comparatively more stable bodies of knowledge. Learn them, refresh annually, move on. AI is different. The capabilities shift every few months. The same tool carries radically different risk depending on what you’re doing with it. And unlike most tools, when AI fails, it fails invisibly -- giving you confident, professional-looking wrong answers that you have no reason to question unless you’ve been trained to.
That’s why achieving real Tier 1 literacy across an organization -- foundational understanding of what these tools are, how they fail, and where the ethical boundaries fall -- is a significant accomplishment. Most organizations haven’t even defined what Tier 1 means for their context, let alone achieved it. And it’s why we can’t stop at Tier 1. The nature of the technology demands contextual, role-specific literacy in a way that most organizational training simply doesn’t.
The Frameworks Already Exist -- Nobody’s Reading Them Together
When I started looking at this seriously, I realized the frameworks for tiered AI literacy already exist. However it appears that few folks are paying them much attention just yet.
UNESCO published competency standards. EDUCAUSE developed institutional readiness frameworks. The University of Florida built a role-based model. Independently, they arrived at similar structures -- three tiers of competency, differentiated by role and context.
When we synthesized them, the convergence was clear:
Tier 1 -- Foundational (Understand & Ethics). The baseline. AI citizenship: what these tools are, what they can and can’t do, where the ethical boundaries fall. Data privacy. Bias and hallucination. Impact assessment. Every person in every role needs this, and achieving it across an organization is a real accomplishment, not a formality.
Tier 2 -- Operational (Apply & Engage). Effective use. Writing prompts that produce useful output. Integrating AI tools into existing workflows. Understanding when AI-assisted output needs human verification and when it can be trusted. This tier looks different for every role: a faculty member at Tier 2 is using AI to enhance pedagogy and research; a staff member at Tier 2 is using it to improve operational efficiency; a student at Tier 2 is using it to deepen learning without replacing it.
Tier 3 -- Strategic (Create & Lead). Designing AI-augmented systems. Setting policy. Making governance decisions. Building new capabilities. Leading organizational AI transformation. This is the smallest population in any institution, and the one that needs the deepest understanding of both the technology and its implications.
The key insight isn’t the tiers themselves. It’s that different roles need to be at different tiers, and the competencies within each tier change based on what you actually do.
A faculty member at Tier 1 needs to understand AI’s impact on learning outcomes. A staff member at Tier 1 needs to understand data privacy and institutional policy. A student at Tier 1 needs to understand AI citizenship and the ethics of academic integrity. Same tier. Completely different content. Treat them identically and you’ve wasted everyone’s time.
“You Must Be This Tall to Ride”
One of my team recently said something that stuck with me: “AI needs to be our first question before we work on something. The question, not the tool. ‘Wait -- how could AI help with this?’”
He’s right. The cultural habit hasn’t formed yet. Most people, even technical people, default to old methods. They don’t think to reach for AI. The reflex isn’t there yet. Building that reflex is important.
But that same reflex without literacy is Tier Zero strapped into a jet pack.
If “how could AI help?” becomes the first question but “what are the risks of using AI for this?” never becomes the second, you’ve just automated your way into the danger zone faster. The habit is necessary. The understanding is what makes it safe.
This is where the “you must be this tall to ride” framing comes in. Different roles, different rides, different minimum heights. Your CISO, CTO, and CAIO need to be at Tier 3 -- they’re setting policy and making governance decisions about AI deployment across the enterprise. Your IT helpdesk staff need solid Tier 1 with movement toward Tier 2 -- they need to understand data handling and use AI tools effectively in their workflows. Your marketing team might need Tier 2 for content generation but Tier 1 awareness on data privacy.
The point isn’t that everyone needs to reach the top tier. Every role has a minimum, and that minimum isn’t the same for everyone. Defining those thresholds, role by role, is the actual work of AI literacy strategy as we move forward.
The Governance Gap
In my graduate work on AI security policy, I spent considerable time examining how different nations approach AI governance. The gap is real.
The European Union, through the AI Act and supporting frameworks, has established structured approaches to AI literacy that account for risk levels and use contexts. They’re thinking in tiers, even if they don’t use that exact language. High-risk AI applications get different scrutiny than low-risk ones. The humans in the loop get different competency requirements based on what they’re overseeing.
The United States, as of this writing, lacks comparable federal coherence. We have useful building blocks -- the NIST AI Risk Management Framework provides excellent guidance on identifying and mitigating AI risks, and several executive orders have pushed in the right direction. But there’s no unified literacy standard that connects technology governance to human competency requirements. We’re building the car and setting speed limits but not requiring a driver’s license. Even the EU AI Act, which mandates AI literacy under Article 4, requires organizations to “take measures to ensure” competency -- but imposes no obligation to actually measure it. You can be legally compliant and functionally illiterate.
And this is where a critical nuance gets lost in most AI literacy conversations: WHAT you’re doing with AI is just as important as HOW you’re using it.
Using AI to draft a meeting agenda is a fundamentally different risk context than using AI to analyze patient data. Using AI to help a student brainstorm essay topics is different from using AI to grade those essays. The literacy required -- the tier, the specific competencies, the governance guardrails -- should reflect that difference. But the monolithic approach treats all AI use as equivalent, which means it prepares people for none of it specifically.
NIST’s AI Risk Management Framework gets this right in principle: risk is contextual, and governance should be proportional. What’s missing is the translation layer -- the bridge from abstract risk categories to concrete human competency requirements, role by role, use case by use case.
The Higher Ed Blind Spot
Universities are in a peculiar position. They’re tasked with preparing students for an AI-transformed workforce while their own faculty and staff are, in many cases, still at Tier 1. Or Tier Zero.
Faculty are wrestling with AI in the classroom -- some embracing it, some banning it, most somewhere in between and unsure. Staff are working through AI tools being introduced into administrative workflows without structured guidance. Students are using AI constantly, with varying degrees of sophistication, and often with more fluency than the people teaching them -- though fluency and literacy are very different things.
The honest assessment: most faculty at most institutions are currently at Tier 1. And that’s not a failure of the individuals. It’s a failure of institutional strategy. If your AI literacy effort is a single workshop or a self-paced online module, you’ve invested in awareness, not competency. Awareness is the start of Tier 1. It’s not the finish.
The deeper issue is that universities often approach this as a technology training problem when it’s actually a cultural transformation. The question isn’t “Can our faculty use AI?” It’s “Do our faculty understand AI well enough to make sound pedagogical decisions about its role in their specific disciplines?” Those are different questions requiring different investments.
Some classes may require deeper AI engagement than others -- a data science course and a creative writing seminar have fundamentally different relationships with generative AI. Treating them the same, through a universal AI policy, is the monolithic fallacy applied to curriculum. The tiered model offers a way out: define the appropriate tier for each context, then build toward it.
The Enterprise Mirror
Everything I’ve described in higher education applies directly to the enterprise -- and if anything, the stakes are higher because the consequences arrive in the form of breached data, regulatory penalties, and competitive failure rather than pedagogical shortcomings.
As a CISO, I’ve observed organizations often make the same mistake: one AI training, one acceptable use policy, one set of guidelines for everyone from the executive suite to the operations floor. The CEO gets the same PDF as the security analyst. The compliance officer gets the same webinar as the software developer.
The result is predictable. The CEO remains at Tier Zero with a certificate that says Tier 1. The developer builds AI-integrated systems without understanding model failure modes. The compliance officer can’t evaluate AI risk because the training never addressed governance at that level. Everyone is “AI-trained.” Nobody is AI-literate for their actual role.
The tiered model gives organizations something they desperately need: a shared vocabulary for AI readiness that isn’t binary. Instead of “AI-literate” or “not AI-literate,” you get a matrix. You know where each role should be, where each person actually is, and what the gap looks like. That’s not just a training plan. It’s a risk map.
And it sets a cultural standard. When an organization adopts tiered AI literacy, it’s making a statement: we take this seriously enough to differentiate. We’re not going to pretend that checking a box makes you ready. We’re going to define what ready means for your specific work and then help you get there.
What Comes Next
I’m not anti-AI -- I use it extensively in my own work, including security operations, research, and teaching. The defensive and creative applications are real and transformative. The tools will keep getting better. The failure modes I’ve described will improve. And that’s precisely when the literacy question becomes most urgent -- because when the tool gets good enough that you can’t tell when it’s wrong, the only thing protecting you is your own understanding.
The frameworks exist. The convergence across UNESCO, EDUCAUSE, and institutional models is clear. The tiered structure works. What’s missing is:
Adoption of tiered thinking. Stop asking “Are our people AI-literate?” Start asking “AI-literate for what?” Define the tier requirement for every role. Make it specific. Make it measurable. Make it matter.
Tier Zero recognition. Acknowledge that most of your organization is probably below Tier 1. That’s not shameful -- it’s an honest starting point. But you can’t move from where you are if you don’t admit where you are.
Role-specific pathways. Begin to orient and build literacy programs that account for the fact that a faculty member’s Tier 2 looks nothing like a system administrator’s Tier 2.
Governance integration. Connect literacy tiers to policy and risk management. If your AI acceptable use policy doesn’t account for competency tiers, it’s a document, not a governance framework. Lean on NIST AI RMF. Learn from the EU’s risk-based approach. Build something that ties human competency to organizational risk tolerance.
Cultural commitment. Make tiered AI literacy part of the organizational identity -- an ongoing commitment, not a one-time training event. The goal isn’t compliance. It’s a shared standard for how seriously your organization takes the most powerful technology to enter the workplace in a generation.
The organizations that get this right -- that treat AI literacy as a spectrum rather than a switch -- will have something their competitors won’t: agility grounded in understanding. They’ll move faster because their people understand the tool, not despite having spent time learning it.
The ones that don’t will keep clicking through modules, checking boxes, and wondering why their AI initiatives keep producing results that look right but aren’t. They’ll have literacy on paper and Tier Zero in practice.
The choice is the same one it’s always been: depth or surface. Understanding or operation.
This essay builds on a synthesis of AI competency frameworks from UNESCO, EDUCAUSE, and the University of Florida, mapped into a unified Tri-Core model. The original matrix is available upon request.
This is the second Cognitive Security essay in The Pattern Field -- exploring how abstraction, automation, and institutional inertia reshape the relationship between humans and the systems they depend on. Different domains. Same pattern: depth rewards those who pursue it.
What’s your institution’s AI literacy strategy? One-size-fits-all training, or something more differentiated? I’d like to hear what’s working — and what isn’t — in the comments.
James Thomas Webb is a CISO, cybersecurity educator, and the builder behind The Pattern Field. He writes about systems, security, and what happens when organizations mistake operation for understanding.
AI transparency Statement: This essay was developed through a form of Socratic dialogue where AI acts as a partner to query and challenge positions; This use is meant to accelerate my own ability to present my own ideas at speed while also managing my many commitments. Most Importantly: These are my own thoughts, experiences, and editorial judgments.
References
UNESCO. “AI Competency Framework for Teachers.” 2023. Framework for integrating AI literacy into educator preparation across foundational, operational, and strategic competencies.
EDUCAUSE. “AI Landscape Study and Framework.” Institutional readiness frameworks for AI integration in higher education, including role-differentiated competency models.
University of Florida. “AI Across the Curriculum” initiative and EDUCAUSE Student Report. Role-based AI competency standards for faculty, students, and staff.
National Institute of Standards and Technology (NIST). “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” January 2023. Risk-based approach to AI governance emphasizing contextual risk assessment and proportional controls.
European Union. “Artificial Intelligence Act.” Regulation (EU) 2024/1689. Risk-tiered regulatory framework establishing differentiated requirements based on AI application context and potential harm.
Liu, Nelson F., et al. “Lost in the Middle: How Language Models Use Long Contexts.” Transactions of the Association for Computational Linguistics, 2024. Demonstrates that LLMs systematically underweight information placed in the middle of long input contexts.
Webb, James Thomas. “AI Security Policy: A National Regulatory Framework.” aisecuritypolicy.org. Graduate policy work proposing risk-based AI governance structures for the United States.
Webb, James Thomas. “Push-Button Cybersecurity: The Dangerous Comfort of Not Knowing How Things Work.” The Pattern Field, 2026. First Cognitive Security essay examining how abstraction and workforce pressure produce operators instead of practitioners.




