1. Framing the Framework
For more than two decades, I’ve been circling this topic—exploring propaganda models, cult psychology, marketing strategies, digital influence, and the ethics of persuasion. Over the years, one pattern kept jumping out at me: the methods of influence and control overlap across wildly different domains of knowledge.
The same basic mechanics that shape a Super Bowl ad also appear—amplified and weaponized—in the sermons of cult leaders like Jim Jones. The tactics that drive a public safety campaign like Smokey Bear’s wildfire prevention or the old DARE anti-drug program are built on the same scaffolding as election interference operations run by nation-states. The difference lies not in the underlying tools, but in the intent, transparency, and ethical boundaries.
Yet in all my reading and research, I’ve never found a single framework that brings it all together—one that spans the entire spectrum from harmless persuasion to coercive control, from methods used for social good to those deployed for manipulation, exploitation, or harm.
That’s what this meta-taxonomy aims to do: to provide an overarching map of influence and control that cuts across psychology, sociology, and technology. It’s not perfect, and it’s not final. Methods evolve, contexts shift, and new tactics emerge faster than we can fully document them.
So consider this a living framework—a working tool designed to help us see the patterns, name the methods, and place them on the spectrum from benign to destructive. I’ll refine it as we go, and I welcome your insights. What’s missing? What’s overstated? Where do you see examples that challenge the categories?
2. The Meta-Taxonomy of Influence and Control
This table is the working “map” we’ll revisit throughout the series. It organizes influence into three primary layers—psychological, sociological, and technological—and a hybrid category for operations that span more than one layer.
Each entry includes the core mechanisms, a contemporary example, and the primary risk or ethical concern.
| Layer | Core Mechanisms | Contemporary Example | Primary Risk |
| --- | --- | --- | --- |
| Psychological | Cognitive biases, framing, repetition effect, emotional priming, anchoring, gaslighting | Viral TikTok “health hacks” with no medical basis | Misinformation shapes beliefs and habits |
| Sociological | Groupthink, charismatic authority, ostracism, ridicule, cultural framing, identity enforcement | Online fandoms enforcing ideological purity through public shaming | Suppression of dissent, narrowing of acceptable views |
| Technological | Algorithmic amplification, microtargeting, synthetic media, deepfakes, surveillance capitalism | AI-generated political robocalls in U.S. primaries | Erosion of trust in democratic processes |
| Hybrid Systems | Coordinated narrative warfare, memetic campaigns, media framing, agenda-setting | State-aligned influencer networks amplifying foreign policy narratives across YouTube, TikTok, and Twitter | Manipulation of public opinion at scale |
This table is a condensed framework—a snapshot rather than a complete atlas. The mechanisms, examples, and risks listed here are not exhaustive; they are simply the tip of a very large iceberg. And this is one iceberg we can’t afford to ignore. Beneath the visible surface lies a vast network of interconnected tactics, hidden influences, and evolving strategies that shape how we think, what we believe, and how we act. The posts that follow will dive deeper into each layer, uncovering more of what’s lurking below the waterline.
How to Use This Framework
- Layers: Think of these as vantage points—psychological (inside the mind), sociological (in the group), technological (in the medium), and hybrid (across layers).
- Mechanisms: The specific tools or patterns used to shape beliefs or behaviors.
- Examples: Chosen for their recency and recognizability without being tied to a single partisan narrative.
- Primary Risk: Why the tactic matters and what’s at stake if left unchecked.
This table isn’t meant to lock each example in a single box. Many real-world influence efforts operate across multiple layers—just as a targeted political ad might combine emotional priming (psychological), identity signaling (sociological), and algorithmic targeting (technological).
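For readers who like to see structure expressed in code, here is a minimal sketch of the framework's four fields as a data model. Everything here (the Layer enum, the Tactic class, the is_hybrid check) is invented for illustration; it is not from any published tool:

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    PSYCHOLOGICAL = "psychological"  # inside the mind
    SOCIOLOGICAL = "sociological"    # in the group
    TECHNOLOGICAL = "technological"  # in the medium

@dataclass
class Tactic:
    name: str
    layers: set[Layer]      # more than one layer = a hybrid operation
    mechanisms: list[str]   # the specific tools or patterns observed
    example: str            # a concrete, recognizable instance
    primary_risk: str       # what is at stake if the tactic goes unchecked

    @property
    def is_hybrid(self) -> bool:
        return len(self.layers) > 1

# The targeted political ad mentioned above, tagged across all three layers:
targeted_ad = Tactic(
    name="Microtargeted political ad",
    layers={Layer.PSYCHOLOGICAL, Layer.SOCIOLOGICAL, Layer.TECHNOLOGICAL},
    mechanisms=["emotional priming", "identity signaling", "algorithmic targeting"],
    example="A campaign ad tailored to individual voter profiles",
    primary_risk="Opaque persuasion at scale",
)

print(targeted_ad.is_hybrid)  # True
```

Modeling layers as a set rather than a single value is the point of the sketch: hybrid isn't a fourth kind of tactic, just a tactic whose set contains more than one layer.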
Side Note: I’ve always loved taking complex subjects and distilling them into a framework. It helps me see the moving parts more clearly, and then I can test the framework again and again as real-life examples come up. I’m especially excited about this one, because there’s an overwhelming amount of influence at work in the world right now, and most of it is far from benign.
3. Worked Example – Scientology Through the Framework
To make this framework more concrete, let’s apply it to a well-documented case: the Church of Scientology. This organization has been studied extensively by cult experts, journalists, and former members. Steven Hassan, PhD—author of Combating Cult Mind Control—analyzes groups like Scientology through the BITE model (Behavior, Information, Thought, and Emotional control), which offers a structured way to identify coercive systems. Leah Remini’s memoir Troublemaker and her Emmy-winning series Scientology and the Aftermath provide vivid, first-hand accounts of how these mechanisms play out in practice.
| Layer | Core Mechanisms in Scientology | Example | Primary Risk |
| --- | --- | --- | --- |
| Psychological | Loaded language, confession auditing (“security checks”), emotional manipulation through fear of “SP” status | Requiring members to confess “transgressions” in detail, often recorded, creating vulnerability and control | Erosion of self-trust, heightened dependency |
| Sociological | Groupthink, ostracism, charismatic authority, isolation | “Disconnection” policy cutting members off from family/friends labeled as “suppressive persons” | Social isolation, loss of external reality checks |
| Technological | Information control, surveillance of members and critics | Internal dossiers compiled on members; alleged monitoring of ex-members’ public activity | Chilling effect, silencing of dissent |
| Hybrid Systems | Coordinated harassment (“Fair Game”), legal pressure, media narrative management | Using lawsuits, PR campaigns, and organized picketing to intimidate defectors and journalists | Suppression of investigative reporting, public fear |
Key Insight
This case demonstrates how influence efforts rarely fit neatly into a single layer. Psychological tools (fear, emotional priming) are reinforced by sociological pressures (isolation, group loyalty) and, at times, technological methods (information control, surveillance). Together, they form a closed loop that makes exit difficult and outside correction nearly impossible.
How This Fits the Framework
This exercise shows how the meta-taxonomy can act as a diagnostic lens. Start by identifying what layer or layers a tactic operates in—does it primarily target the mind (psychological), the group (sociological), the medium (technological), or a combination (hybrid)? Then list the core mechanisms you observe and match them to real-world examples. Finally, note the primary risk—the harm that could result if the tactic goes unchecked.
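As a rough sketch of that three-step checklist, reusing the hypothetical Layer and Tactic definitions from the earlier code block (again, invented names, not a rigorous instrument):

```python
def diagnose(name: str, mechanisms: list[str], example: str, risk: str,
             targets_mind: bool = False, targets_group: bool = False,
             targets_medium: bool = False) -> Tactic:
    """Walk the three diagnostic questions in order:
    1. Which layer(s) does the tactic operate in?
    2. Which core mechanisms are observed?
    3. What is the primary risk if it goes unchecked?
    """
    layers = set()
    if targets_mind:
        layers.add(Layer.PSYCHOLOGICAL)
    if targets_group:
        layers.add(Layer.SOCIOLOGICAL)
    if targets_medium:
        layers.add(Layer.TECHNOLOGICAL)
    return Tactic(name, layers, mechanisms, example, risk)

# The "disconnection" policy from the table above, run through the checklist:
disconnection = diagnose(
    name="Disconnection policy",
    mechanisms=["ostracism", "isolation", "identity enforcement"],
    example="Members cut off from family labeled 'suppressive persons'",
    risk="Social isolation, loss of external reality checks",
    targets_group=True,
)
print(disconnection.is_hybrid)  # False: it operates mainly in one layer
```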
With Scientology, the picture becomes clear: the same mechanisms that can be used for benign community-building are amplified and combined in ways that enforce loyalty, isolate dissenters, and silence criticism. The framework makes it easier to see not just the what, but the how—and that is the first step toward spotting these patterns in other contexts, whether in politics, marketing, or your own social circles.
4. Why Contemporary Examples Matter
It’s easy to spot influence tactics when they’re preserved in history books. It’s much harder—and far more important—to recognize them in the present, where they can still be challenged or countered. Modern platforms and political realities give us clear case studies of how psychological, sociological, and technological mechanisms of influence operate in real time.
| Example | Layer(s) | Mechanisms | Impact/Risk |
| --- | --- | --- | --- |
| TikTok Influence Campaigns | Psychological / Technological | Algorithmic amplification, emotional priming, identity framing, repetition effect | Rapid shaping of youth attitudes and beliefs, sometimes without source transparency or fact-checking |
| Twitter in the Arab Spring | Sociological / Technological | Networked mobilization, peer-to-peer amplification, agenda-setting, hashtag activism | Enabled mass organization and dissent, but also facilitated state surveillance and counter-messaging |
| Election Interference Ops | Hybrid | Coordinated narrative warfare, targeted disinformation, deepfake content, microtargeted advertising | Undermines trust in electoral integrity, polarizes populations, and manipulates voter behavior |
Why They Matter to the Framework
These examples show that:
- The tools evolve, but the levers remain constant—today’s algorithmic feeds and viral memes are simply new delivery systems for age-old persuasion methods.
- Multiple layers interact—TikTok influence blends psychological triggers with technological amplification; the Arab Spring mobilization mixed sociological network effects with digital platforms.
- The stakes scale up—while a misleading ad might change brand preference, coordinated disinformation in an election can alter the trajectory of an entire nation.
Recognizing these patterns while they’re happening is the goal of this series. The sooner we can identify the layer, mechanism, and risk, the sooner we can respond with informed skepticism or active countermeasures.
5. Forward Linkage
This hub is the anchor point for the series—a place you can return to as we explore each layer, mechanism, and example in more detail. Think of it as the map legend. Every future post will link back here so you can see where a specific tactic or case fits into the bigger picture.
The next post, “It’s All Lies (But Not All Equal),” will break down the spectrum of falsehoods—from harmless puffery to deliberate disinformation and malicious misuse of truth. We’ll explore why understanding the type of lie matters as much as catching it in the first place, and how context and intent change the ethical stakes.
Maps don’t prevent danger, but they help you see where the traps are laid. As this framework evolves, it’s your compass for recognizing influence—before it recognizes you.
I want you to test everything—like the Bereans in Acts 17:11, who received the message with eagerness but examined it daily to see if it was true. That applies to this framework, and to everything you encounter: the news you read, the ads you see, the posts in your feed, what people tell you, and what you hear on TV or in conversation. Pay attention not only to what is being said, but how it’s trying to influence you.
You can’t realistically test everything—but you can prioritize scrutiny. The greater the potential impact on your beliefs or behavior, the greater your responsibility to examine it carefully.
Take this framework for a test drive. Apply it, challenge it, and tell me where it works and where it needs refinement. Your feedback will help this living map become a sharper tool for everyone.