Flipside AI (aka: edisyl) — Strategic Overview, 2026

The data is there.
It just doesn't work.

edisyl builds and deploys AI agents across unstructured data sets to deliver decisive actions, increase efficiency, and make humans more powerful.

edisyl — 2026

What this is

edisyl makes unstructured and complex data usable by AI agents. We've built a three-layer system — ingestion, semantic understanding, and coordinated agent deployment — that turns the data enterprises already have into something agents can act on.

"Every enterprise has data. Almost none of it is AI-ready. And the gap between having data and acting on it is where most AI investments die."

01 The Problem: Why AI stalls
02 The Architecture: Three layers
03 Proof: In production
04 Why Now: Arrival, not pivot
05 Team: Practitioners

Time to stop wondering whether your numbers are right or why your AI is 'hallucinating', and start focusing on results.

At a glance
8
Years building data infrastructure at scale
Hundreds
Agents deployed across enterprise data environments
250+
Skills actively orchestrating together
Chapter 01 — The Problem

Data exists.
AI can't use it.

The bottleneck in enterprise AI is not the model. It is everything underneath the model. Enterprises have years of accumulated data — in CRMs, email archives, note fields, document repositories, warehouses — and almost none of it was collected with AI in mind. Unstructured, disconnected, inconsistent. Agents guess at it. They break against it. And AI initiatives stall here before anything meaningful happens.

"Better models do not fix unstructured data. Faster infrastructure does not fix it. A different architectural approach does."

The pattern
Step 01
Data everywhere
CRMs, emails, notes, databases. Collected for years without AI in mind.
Step 02
Agents can't see it
Unstructured, inconsistent. Off-the-shelf agents guess and break.
Result
AI initiatives stall
Not model failure. Data failure. Cloud spend doesn't fix this.
The evidence
34%
of enterprises are truly reimagining their business with AI — the rest are stuck at efficiency gains
Deloitte, 2026 AI Report
20%
are currently growing revenue through AI. 74% plan to. That gap is a data infrastructure problem.
Deloitte, 2026 AI Report
43%
of enterprises evaluating agentic AI in 2026. Most will discover data readiness is the binding constraint.
2026 Market data
Why this matters
  • The data problem is not a storage problem and it is not a model problem. It is a preparation problem. Most enterprise data was collected across many systems, over many years, without any consideration for how an AI agent would need to access or interpret it.
  • Unstructured data is the harder version of this problem — and the less solved one. The majority of an organization's most valuable information lives in notes, emails, documents, and conversations that have no schema, no labeling, and no consistent format.
  • The organizations that solve this first will have a structural advantage. The ones that don't will keep launching AI pilots that stall before they deliver anything measurable.
Chapter 02 — The Architecture

Three layers.
One system.

Each layer depends on the one before it, and only all three together make agents reliable at enterprise scale. Most organizations trying to deploy AI start at layer three, the agents, without having built layers one and two. This is why they fail.

"The architecture is the argument. Agents without a semantic layer are guessing. Agents without clean data underneath are guessing at noise."

The three layers
  • 01
    Data Ingestion

    Extract and standardize data from wherever it lives — structured databases, CRM systems, cloud warehouses, document repositories, email archives, and unstructured note fields. The result is a unified, agent-accessible data layer. This is the step most organizations skip. It is the reason most AI deployments fail.

Structured · Unstructured · CRM · Documents · Warehouses
  • 02
    Semantic Layer

    Encodes how an organization understands its own data — what terms mean, how categories relate, what a high-value signal looks like in context. Agents stop guessing and start interpreting the way a domain expert would. This is what separates an agent that produces outputs from one that produces the right outputs.

Org Logic · Definitions · Context · Scoring
  • 03
    Lattice — Agent Fleet

    Coordinated agents with persistent memory. Unlike off-the-shelf agents that lose context as tasks grow, Lattice saves every completed step. Any agent picks up exactly where the last one left off. This makes multi-day, multi-step enterprise workflows reliable — something that is simply not possible with standard agentic tooling today.

Persistent Memory · Multi-Agent · Long-Running
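Lattice itself is proprietary, so the mechanism behind layer three can only be sketched. The minimal Python below (all names hypothetical) illustrates the core idea of persistent memory: every completed step is written to durable storage, so any agent in the fleet can resume a long-running workflow exactly where the last one stopped, even across restarts.

```python
import json
from pathlib import Path

class StepStore:
    """Illustrative persistent-memory store: each completed step is saved
    to disk so a workflow can be resumed from the last finished step."""

    def __init__(self, path):
        self.path = Path(path)
        # Reload any previously completed steps; start empty otherwise.
        self.steps = json.loads(self.path.read_text()) if self.path.exists() else []

    def record(self, agent, step, output):
        # Persist the step immediately so progress survives a crash.
        self.steps.append({"agent": agent, "step": step, "output": output})
        self.path.write_text(json.dumps(self.steps))

    def resume_point(self):
        # Index of the first step that has not yet been completed.
        return len(self.steps)

def run_workflow(store, agents, steps):
    """Hand each remaining step to an agent; any agent picks up
    exactly where the last one left off."""
    for i in range(store.resume_point(), len(steps)):
        agent = agents[i % len(agents)]  # round-robin stand-in for real routing
        store.record(agent, steps[i], f"{agent} completed {steps[i]}")
    return [s["step"] for s in store.steps]
```

Running `run_workflow` a second time against the same store is a no-op: the resume point already equals the number of steps, which is the property that makes multi-day workflows restartable.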
The flow
Layer 01
Data in
Clean, standardized, unified
Layer 02
Meaning applied
Org logic encoded
Layer 03
Agents deployed
Persistent, coordinated
Result
Outcomes delivered
Measurable, repeatable
Chapter 03 — Proof

Two patterns.
Both in production.

We focus on two validated application lanes. Both draw on the same underlying architecture. Both are in active deployment with enterprise clients today. Not prototypes. Not pilots awaiting sign-off. Working deployments producing measurable outcomes.

"The proof is not that the architecture is clever. The proof is that it works — on real data, for real organizations, in days rather than months."

Pattern one — Intelligence from hidden data
Leading performing arts institution
Turn unstructured data into an intelligence engine.

Years of contact history. Unscored. Unranked. Invisible to any agent without the right preparation underneath it. An edisyl agent fleet ingested 807K records, learned how the institution defined donor value, applied that semantic layer to the entire contact base, and surfaced 17,000 high-priority leads — written back into HubSpot, ready to act on, in six days.

807K
Contacts ingested
17K
High-priority leads surfaced
6
Days from start to live agent
Model accuracy
0.734
Pattern two — Workflow and pipeline efficiency
Major financial institution
Automate what consumes your engineers.

Enterprise data engineering teams spend the majority of their time on work that is complex but not creative: building transformation pipelines, writing boilerplate code, validating outputs at scale. edisyl agents take this on entirely. Given the data environment and a specification, the fleet generates production-ready dbt pipeline code and iterates on quality with a consistency human teams cannot match at volume.
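The fleet's internals are not public, so the loop described above can only be shown in shape. The hypothetical sketch below replaces the generation step with a trivial template (a real system would call a model) and the validation step with a spec check (a real system would compile and run the model, e.g. via `dbt build`), but the generate-validate-iterate structure is the point.

```python
def generate_sql(spec, feedback=None):
    # Stand-in for a model call; in a real loop, `feedback` from the last
    # failed validation would steer regeneration. Names are hypothetical.
    cols = ", ".join(spec["columns"])
    return f"select {cols} from {{{{ ref('{spec['source']}') }}}}"

def validate(sql, spec):
    # Minimal check: every column the spec requires appears in the output.
    # Production validation would compile the model and verify results.
    missing = [c for c in spec["columns"] if c not in sql]
    return (len(missing) == 0, missing)

def generate_pipeline(spec, max_iters=3):
    """Generate, validate, and iterate until the spec is satisfied."""
    feedback = None
    for _ in range(max_iters):
        sql = generate_sql(spec, feedback)
        ok, feedback = validate(sql, spec)
        if ok:
            return sql
    raise RuntimeError(f"failed to satisfy spec: {feedback}")
```

The design choice worth noting is that quality comes from the loop, not from any single generation: an agent that validates its own output and retries is what makes "80%+ runnable" a target rather than a hope.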

80%+
Runnable output target
50%+
Engineering time reduction
7-figure
Results delivered
Chapter 04 — Why Now

Eight years.
AI finally caught up.

For eight years, Flipside built and operated data infrastructure at a scale most enterprise teams never encounter. Working with blockchains (among the most complex data sets imaginable), we maintained 7 trillion rows of data and resolved over 700 million data entities across more than 20 networks. We learned how to make messy, high-volume, partially structured data usable at speed. The enterprise AI market has spent two years discovering that model capability is not the constraint. We have been solving the actual constraint since 2017.

"The companies winning in 2026 are not the ones who deployed the most AI tools. They are the ones who did the hard infrastructure work first."

The timeline
2017 — 2022
Built blockchain data platform at scale. Processed trillions of rows. Resolved 700M+ entities. Built the infrastructure discipline.
2023 — 2024
Developed the AI layer. Lattice orchestration. Skills architecture. Semantic data engine.
2025 — Now
First enterprise deployments live. Major consulting and financial services partners engaged. Market confirmed.
The market signal
171%
Average projected ROI from agentic AI deployments in 2026. Companies that get the data foundation right capture this return.
43%
Of enterprises evaluating agentic AI this year. Most will hit the data wall within their first deployment.
Why this moment
  • The enterprise AI market is at an inflection point where the bottleneck has shifted from model capability to data infrastructure and agent orchestration. This is not a theoretical argument — it is what [Major Consulting Firm], [Major Financial Institution], and others are discovering in their client deployments right now.
  • Flipside's eight years working at scale with complex, high-volume data across more than 20 networks was direct preparation for this moment. The instincts, the tooling, and the team are not borrowed from adjacent fields. They were built specifically for this class of problem.
  • The window for establishing a differentiated position in enterprise AI infrastructure is open. It will not stay open indefinitely. The organizations that move now will define the standard. The ones that wait will find the standard already set.
Chapter 05 — The Team

Practitioners.
Not theorists.

These are people who have shipped things at scale, worked with major enterprises, and are building this from conviction and evidence, not trend-following. The people behind this document are the reason its claims are not hypothetical.

"The best argument for the product is the team that built it. Their track record precedes what they're building now."

The people who built this
Dave Balter
CEO · Co-Founder
Thirty years and seven companies. Executive roles at dunnhumby and Pluralsight before co-founding Flipside. Has run M&A processes, scaled enterprise sales organizations, and built through multiple market cycles. Approaches each transition the same way: find where the real problem is, and build the thing that solves it.
Jim Myers
CTO · Co-Founder
Architected Lattice from the ground up. Has been building Flipside's data infrastructure since day one. The depth of what the system can do reflects eight years of learning what hard data problems actually require. He is the reason the technical claims in this document are not hypothetical.
Eric Stone
Chief Data Scientist · Co-Founder
Forward-deployed. Inside client environments, working through data architectures, navigating organizations, and translating what the platform can do into what a specific client actually needs. The Interlochen deployment is his. The Fidelity pipeline methodology is his. He does not build to demonstrate capability. He builds to deliver outcomes.
Team composition — 34 people · March 2026

The team is built around two deep capability clusters. Combined, they represent the full stack required to deploy AI agents on complex enterprise data — from raw infrastructure to applied intelligence to client outcomes.

AI Architecture
9 engineers · 2 data scientists
  • CTO & Co-Founder: Pluralsight · Smarater
  • COO: dunnhumby (Global Head Product Eng)
  • Director, Engineering: UKG · Brightcove · Hasbro
  • Principal Engineer: Promobase
  • Principal Software Engineer: DraftKings · Mil Crypto Lab
  • Senior Software Engineer: DraftKings · Rainier
  • Software Engineer: DataAssembly · AECOM
  • Senior DevOps Engineer: Financial Recovery Tools (Fin)
  • VP, Data Science (PhD): Good Judgment Inc · TD Bank
  • Senior Data Scientist: Flipside (Data Lead)
Pluralsight · DraftKings · J.P. Morgan · Booz Allen · Accenture · dunnhumby
Quantitative Intelligence
6 analytics · 3 data scientists · 1 GTM eng
  • Chief Data Scientist & Co-Founder: Pluralsight · USAA Statistics
  • VP, Data Analytics: Pluralsight · MBA · Smarater · BotAgent
  • Manager, Data Analytics: McKinsey · Framebridge · Catalant
  • Manager, Analytics Engineering: SSB (BI Manager) · DIRECTV
  • Manager, Analytics Engineering: Razor · Syndicate · Chainlink
  • Senior Analytics Engineer: NF Think · Barokaidi · Couaang
  • Analytics Engineer: Mino Games
  • Junior Analytics Engineer: Purdue (Blockchain Instructor) · Xanga
  • Senior Data Scientist: Keybank (Quantitative Risk)
  • Principal, GTM Engineering: CyCognito · Kinsa · Puppet
3.5 yrs avg tenure  ·  73% engineering/data background
Chapter 06 — Contact

The right conversation
is not a demo.

We forward-deploy AI data specialists to understand your environment and build POCs for enterprises grappling with large, complex data sets where unstructured information holds untapped opportunity.

Let's talk about
the real problem.

If you are a technology or strategy leader who has hit the data wall on an AI initiative, or who faces a data challenge that looks insurmountable, we'll bring expertise to your doorstep and solutions customized to your needs.

[email protected]
What we do
Enterprise AI over unstructured data
What we do
Workflow and pipeline automation at scale
Partnership type
Strategic acquisition or joint venture
Not this
Demos. RFPs. Pipeline fills.
What to expect
  • A first conversation focused on understanding your context — not presenting ours. We want to understand what you're trying to accomplish before we say anything about what we've built.
  • If there's a fit, we'll show you the architecture in depth and walk through the proof points in detail. We can move fast once we understand the problem we're solving together.
  • We are selective about who we spend time with. If this document resonated, that is already a signal worth following up on.