Jessica Marie, Founder and CEO of Omnia Strategy Group, leads her company at the intersection of technology, ethics, and impactful leadership. With a focus on “marketing as truth” and a vision of technology serving humanity, Jessica challenges the status quo in tech communication and strategic innovation. In this interview, she shares insights on navigating the complexities of AI, balancing innovation with ethical considerations, and how tech leaders can foster real societal change through bold thought leadership. Read on for a deeper look into her approach to shaping the future of technology.
In your vision, thought leadership plays a transformative role. What are some ways that you believe tech leaders can transcend traditional thought leadership to genuinely inspire societal change and foster more profound public engagement?
Many of today’s so-called “thought leadership” efforts amount to little more than echo chambers: the same ideas recirculated in boring press releases that nobody reads, or three-minute conference presentations. It’s no surprise that most people are tuning out. What they’re drawn to, instead, are platforms like long-form podcasts—two or three hours of real conversation that dives beneath surface-level talking points and addresses complex topics with unvarnished honesty.
Tech leaders who want to transcend this old, superficial model need to let go of the fear of being controversial. By definition, true thought leadership challenges entrenched ideas. If you’re hedging every statement, watering down opinions, or scrambling not to offend anyone, you’re just adding to the noise. Instead, boldness—paired with genuine curiosity and a willingness to learn—is what captures people’s attention. That means being prepared for pushback, for misunderstandings, and occasionally for outright disagreement. But that’s the price of cutting through the fluff and offering something real.
Leaders can also deepen engagement by being unafraid of nuance. We live in a world that craves depth, yet most public statements are bullet points designed to fit a social media post. If you can speak or write in a way that embraces complexity—discussing not just the shiny possibilities of a new technology, but also its limitations, trade-offs, and moral weight—you’ll find an audience hungry for that candor. Long-form discussions reveal what matters: how you arrived at your viewpoint, what you learned from failures, and why your solution could actually improve lives.
Even seemingly simple innovations can drive profound societal shifts. A tech startup that introduces a simpler file-sharing tool is, at some level, challenging the old way of doing things. The difference between commonplace and transformative thought leadership is the willingness to present those changes as part of a bigger story—and to do so with conviction. That might mean spelling out exactly why the current system is broken, how the new approach addresses it, and what it will take to move forward responsibly. Yes, it’s riskier than publishing a polite press release, but it’s the only way to foster the kind of dialogue that leads to real societal impact.
Your philosophy of “marketing as truth” is a powerful and unconventional approach in an industry often driven by buzzwords and surface-level messaging. How did this philosophy evolve, and how do you implement it when guiding companies to craft their narratives authentically?
“Marketing as truth” began as a blunt reaction to the endless stream of generic messaging that’s become normalized in enterprise tech and cybersecurity. Everywhere I looked, companies were messaging to their competitors rather than their customers, or chasing “hacks” to search algorithms instead of engaging real humans with real problems. They seemed more interested in stuffing their communications with acronyms, buzzwords, and platitudes, leaving me wondering, “Is anyone actually reading—or believing—this?” That realization sparked a new direction for me: it was time for a fundamentally different approach.
But championing straight talk is not for the faint of heart; it demands a radical rethinking of risk.
I purposely avoid using the word “authentic,” because even that term has been drained of meaning by overuse. At Omnia Strategy Group, we ask founders and leaders to examine their own appetite for risk. It takes guts to be candid. It takes guts to critique your industry’s sacred cows (and your own) and say something that is actually meaningful. It takes guts to hold a real opinion about what your technology solves—and what it doesn’t. Yet it’s precisely this boldness that separates the companies that genuinely connect with their audiences from the countless others saying the same thing.
When I work with companies, the first step is to banish the idea that we need to use the same terms and sound like everyone else just to “show up in Google search.” Rather than defaulting to the usual talk about “industry-leading” solutions, we dig into the founders’ core motivations, the challenges they’ve faced (professional and personal), and even the failures that shaped their products. We then turn those honest conversations into content that holds a point of view—whether that means admitting a product’s limitations or calling out complacency in the industry. By deliberately taking these risks, leaders prove they have nothing to hide. And that real, transparent approach is what creates the kind of loyalty and credibility that no AI-driven “hack” can replicate.
You speak passionately about technology as a servant to humanity. How do you navigate the balance between innovation and ethical considerations when advising leaders, particularly in fields as complex as AI and cybersecurity?
I believe technology should serve humanity and not the other way around. When advising leaders, I often start with a simple question: “Will we use technology, or will technology use us?” It’s not just theoretical—we’re already seeing technology shape behavior in ways that don’t serve our highest good. We’re seeing a flood of legislation—over 120 AI-related bills in Congress, state-level actions in 45 states, and the EU’s first comprehensive AI Act—but regulation alone can’t capture the deeper societal, psychological, and even spiritual implications of these technologies. How can we effectively regulate what we don’t yet understand?
Too much of our current AI discourse is stuck in a narrow loop—debating jobs automated, money saved, or ethical lines crossed. Yes, these matter, but they barely scratch the surface of what AI and emerging tech mean for society. Underneath the current conversations lies a deeper dimension: societal, psychological, and spiritual considerations. How we handle that bigger conversation will determine whether these innovations ultimately help us evolve and expand or just feed into another wave of hype and confusion.
Sometimes, I wonder if it’s all part of a broader human narrative—one that intersects with our values, our emotional well-being, and even our sense of purpose. This means going beyond safe, predictable “ethics checkboxes.” How might AI change the way we understand ourselves? How might it shape our relationships, our culture, or even our belief systems? These questions directly impact how a company positions its products, trains its workforce, and addresses public concerns.
AI and automation are reshaping business landscapes at an unprecedented pace. What long-term impacts do you foresee on organizational structures and workforce dynamics, and how can companies prepare for this shift without sacrificing human-centered values?
The rush to embrace both AI and automation makes sense—many tools are massively improving how work gets done. But there’s a critical distinction that affects how organizations should prepare: while basic automation follows predetermined rules, AI-powered automation requires high-quality data to make intelligent decisions. Without clean, organized data, even the most sophisticated AI systems will produce nothing more than flashy misfires.
My LinkedIn feed is littered with “2025 will be the year of AI agents” posts, yet it’s more likely to be the year organizations scramble to get their data house in order. That process—establishing clear data strategies, ensuring company-wide data literacy, and advancing AI maturity—will be far more challenging than simply rolling out another chatbot or generative model.
We’re already seeing automation take root everywhere: virtual assistants field routine questions, AI-driven platforms handle outbound sales efforts, and software tools summarize long discussions or schedule entire calendars. This might seem impressive, but for true transformation, companies can’t skip the hard work of clarifying how and why data is collected, stored, and used. “Garbage in, garbage out” remains a universal rule, and you simply can’t spin crap data into AI gold.
And of course, automation has been reshaping work since the Industrial Revolution, from steam-powered looms replacing handweaving to assembly lines transforming manufacturing. Today’s AI and digital automation are simply the latest chapter in this centuries-long story. Like previous waves of automation, they will eliminate some jobs while creating entirely new ones. Many repetitive tasks are prime candidates for AI-driven workflows, but this also opens up opportunities for people to develop new skills that demand deeper analytical thought, contextual awareness, and the kind of human judgment technology still can’t replicate.
For leaders, this ultimately means getting serious about data and people at the same time. Before rolling out advanced AI initiatives, organizations should invest in data integrity, robust training programs, and strategies that align AI developments with genuine human-centered values. That includes reskilling employees so they can thrive in roles where they enhance, rather than compete with, AI. The individuals who are able to 10x their work by becoming power users of AI will be the winners.