If you’ve deployed an AI assistant for your business, you’ve probably experienced this moment: An executive asks a simple question about customer data, and your LLM confidently delivers an answer that’s completely wrong. It cites customers that don’t exist. It invents product features. It contradicts itself across different conversations. And worst of all, it does this with unwavering confidence. Welcome to the enterprise AI hallucination problem—the single biggest barrier preventing enterprises from trusting AI for critical decisions.
The problem isn’t your LLM. It’s how you’re feeding it information.
In this deep dive, we’ll explore why traditional Retrieval Augmented Generation (RAG) struggles with enterprise data, and how semantic GraphRAG achieves accuracy rates of 95%+—making hallucinations virtually extinct.
Traditional RAG fails for two distinct reasons, depending on whether you’re working with unstructured documents or structured enterprise data. Though the failure modes differ, both lead to the same outcome: an 80% accuracy ceiling, where one in five answers is wrong.
Let’s start with how RAG handles documents—PDFs, contracts, emails, meeting notes. The process seems logical at first glance.
Traditional document RAG begins by breaking your documents into smaller pieces, typically paragraphs or sentences. These chunks are then converted into vectors, which are essentially mathematical representations that capture the semantic meaning of the text. When you ask a question, the system converts your question into a vector as well, then searches for chunks whose vectors are mathematically similar. The system hands those matching chunks to the LLM, which generates an answer based on what it found.
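The chunk-embed-retrieve loop described above can be sketched in a few lines. This is a deliberately simplified illustration: real systems use learned embeddings from a neural model, while here a bag-of-words term-frequency vector stands in so the example stays self-contained; the stopword list and chunk texts are illustrative.

```python
import math
from collections import Counter

STOPWORDS = {"the", "what", "are", "our", "in", "a", "of", "to", "is"}

def embed(text):
    """Toy 'embedding': a term-frequency vector over lowercase tokens."""
    return Counter(t for t in text.lower().split() if t not in STOPWORDS)

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, chunks, k=2):
    """Return the k chunks whose vectors are most similar to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "liability shall not exceed $5M annually",
    "Johnson Industries indemnification provisions",
    "the quarterly marketing budget review",
]
results = retrieve("What are our liability limits in the Johnson Industries contract?", chunks)
print(results)
```

Note what the retriever returns: text that *looks like* the question, with no contract version, entity, or effective-date metadata attached. That gap is exactly where the trouble starts.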
Sounds reasonable, right?
Here’s where it breaks down: vector similarity isn’t the same as semantic accuracy.
Consider a real-world scenario. Your legal team asks: “What are our liability limits in the Johnson Industries contract?”
Vector search finds these chunks with high similarity scores: “…liability shall not exceed $5M annually…” scores 0.94 for similarity. “…Johnson Industries indemnification provisions…” comes in at 0.89. “…under Section 4.2, liability limits apply…” matches at 0.87.
Based on these highly similar chunks, the LLM confidently responds: “Your liability limit with Johnson Industries is $5M annually.”
But here’s what the LLM doesn’t know, and can’t know from isolated chunks. Is this the 2019 contract or the 2023 amendment? Is this for the parent company, Johnson Industries Inc., or the subsidiary, JI Holdings? Is this limit still active or was it superseded by newer terms? Does this apply to product liability, service liability, or both? Are there exceptions buried in Section 8 that modify this limit?
The chunk contains the number, but the LLM lacks the context that makes that number meaningful.
Vector search successfully found mathematically similar text, but it failed to provide the critical metadata that determines accuracy. It couldn’t tell you which contract this came from, which version applies, or what year these terms were negotiated. It missed the cross-references that connect this clause to exceptions, amendments, or related provisions. It didn’t understand the entity relationships between parent companies, subsidiaries, and partners. And it had no way to distinguish between original terms and current terms that may have been modified over time.
The result is predictable: the LLM fills these gaps with statistically probable guesses. Sometimes it gets lucky and guesses correctly. Often, it hallucinates.
Now let’s look at a different problem with the same outcome.
When your data already lives in structured systems—Salesforce, ERP, Zendesk, billing systems—vector search isn’t even the issue. Here, the problem is how relational databases store information without encoding semantic relationships between facts.
Imagine an executive asks: “Which customers are at risk of churning?”
Your data exists in disconnected tables scattered across multiple systems. Salesforce maintains a Customers table with basic information like customer ID 12345 for Acme Corp, noting they’re an Enterprise tier account. Meanwhile, Zendesk stores support tickets in a completely separate table, showing that customer 12345 has submitted tickets T-001 and T-002, both marked as High severity on dates in mid-January. Your billing system tracks invoices in yet another table, revealing that invoice INV-789 for customer 12345 shows Overdue status with a due date of January 1st. The marketing platform maintains its own engagement metrics, recording that customer 12345 has opened zero emails in the past 30 days.
Traditional RAG queries each of these tables separately. It retrieves “Acme Corp, Enterprise tier” from the customers table. It pulls “8 tickets, 5 high severity” from support. It finds “Invoice overdue 15 days” in billing. It discovers “0 email opens in 30 days” from marketing. Then it hands all these disconnected results to the LLM.
Consider what the LLM actually receives: isolated facts from different systems with no explicit relationships between them. There’s no semantic meaning explaining what “High severity” actually indicates about churn risk. There’s no business context connecting overdue payments to support issues. The LLM is left to make educated guesses.
Are these facts even about the same customer, or is it just matching customer IDs to company names? Do payment delays indicate customer dissatisfaction, or are they simply cash flow timing issues? Is 8 tickets high or actually normal for Enterprise customers? Are these signals independent events, or are they correlated indicators of deeper problems? Most critically, what’s the causal relationship between support volume and actual churn?
The LLM responds by making statistically probable inferences based on patterns it learned during training. Sometimes those patterns match your business reality. Sometimes they don’t.
The result is that familiar 80% accuracy ceiling. One in five customers gets flagged incorrectly—either false positives that waste your sales team’s time on healthy accounts, or false negatives that miss real churn risks until it’s too late.
Whether you’re using vector search on documents for unstructured RAG, or SQL queries on databases for structured RAG, you inevitably hit the same ceiling: roughly 80% accuracy.
The root cause is fundamentally the same across both approaches. Traditional RAG treats information as disconnected fragments. Documents get broken into isolated chunks with no broader context. Database tables store isolated facts with no semantic meaning connecting them. LLMs receive these fragments and are forced to guess at the connections between them.
Both approaches are missing the same critical element: explicit semantic relationships.
Knowledge graphs take a fundamentally different approach. Instead of treating your data as disconnected chunks of text, they represent information as a network of explicitly defined relationships.
Think of it this way. Traditional database thinking: customer 12345 is a row in one table, ticket T-001 is a row in another, and the connection between them is a foreign key the LLM never sees. Knowledge graph thinking: Acme Corp is a Customer that submitted ticket T-001, owes invoice INV-789, and has opened zero emails this month. See the difference? Every piece of information exists in context, with explicit connections to everything else.
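As a minimal sketch, here are the same churn facts from the earlier example stored not as rows in four disconnected tables but as explicit subject–predicate–object triples. The `ex:` names are illustrative placeholders, not a real Fluree schema.

```python
# Facts as (subject, predicate, object) triples instead of table rows.
triples = {
    ("ex:AcmeCorp", "rdf:type", "ex:Customer"),
    ("ex:AcmeCorp", "ex:tier", "Enterprise"),
    ("ex:T-001", "ex:submittedBy", "ex:AcmeCorp"),
    ("ex:T-001", "ex:severity", "High"),
    ("ex:INV-789", "ex:billedTo", "ex:AcmeCorp"),
    ("ex:INV-789", "ex:status", "Overdue"),
    ("ex:AcmeCorp", "ex:emailOpensLast30Days", "0"),
}

def about(entity):
    """Everything directly connected to an entity, in either direction."""
    return {(s, p, o) for (s, p, o) in triples if s == entity or o == entity}

# One hop from Acme Corp reaches its tier, tickets, invoices, and engagement
# at once -- no joins across systems, no guessed relationships.
facts = about("ex:AcmeCorp")
for s, p, o in sorted(facts):
    print(s, p, o)
```

A second hop from the returned tickets and invoices would pick up their severity and status, which is exactly the traversal a graph query engine performs for you.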
Here’s where it gets really interesting. Fluree conducted research comparing how LLMs perform when retrieving information from three different data sources: relational databases, centralized knowledge graphs, and decentralized semantic knowledge graphs.
The methodology was straightforward: Give an LLM a series of 20 questions, starting simple and progressively getting harder. The questions required retrieving information from multiple systems—the kind of complex queries businesses ask every day.
With relational databases, the LLM fared worst: zero-shot query generation against raw SQL schemas produced the lowest accuracy of the three. With centralized knowledge graphs, accuracy improved substantially once relationships were explicit. With decentralized semantic knowledge graphs, accuracy exceeded 95%, even on the hardest multi-system questions.
The secret lies in how LLMs actually process information. Here’s something fascinating: LLMs are themselves massive networks of statistical correlations. They understand relationships naturally because they are relationships.
When you give an LLM data structured as a knowledge graph using GraphRAG, you’re speaking its native language.
Traditional RAG converts everything to numbers and matches similar numbers. GraphRAG uses semantic standards like RDF (Resource Description Framework) that explicitly define what things mean.
For example, a vector index can report that “liability limit” and “liability cap” are 94% similar, but an RDF triple such as `ex:Contract2023 ex:supersedes ex:Contract2019` states outright which document governs. The difference? One is pattern matching. The other is understanding.
Consider a revenue question and how GraphRAG handles it differently from traditional RAG:
Query: “How did Q3 revenue perform?”
Traditional RAG: Finds chunks mentioning “Q3” and “revenue,” might return contradictory information
GraphRAG issues actual queries to real data using explicit relationships:
Q3 2024 → Revenue → Actual Amount
Q3 2024 → Revenue Target → Target Amount
Actual vs. Target → 15% increase
Every fact is traceable to its source. No guessing. No hallucinations. Just a series of actual queries to actual data that is unified and integrated.
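The traversal above can be sketched as a tiny graph walk. The figures, node names, and source labels are illustrative; in Fluree this would be a query against live ledger data rather than a hand-built dictionary.

```python
# Minimal graph walk for "How did Q3 revenue perform?".
# Every amount node carries its own provenance.
graph = {
    "Q3-2024": {"revenue": "rev-actual", "revenueTarget": "rev-target"},
    "rev-actual": {"amount": 11_500_000, "source": "ERP ledger, account 4000"},
    "rev-target": {"amount": 10_000_000, "source": "FY2024 board plan"},
}

# Follow the explicit edges: Q3 2024 -> Revenue -> Actual Amount, and
# Q3 2024 -> Revenue Target -> Target Amount.
actual = graph[graph["Q3-2024"]["revenue"]]
target = graph[graph["Q3-2024"]["revenueTarget"]]
change = (actual["amount"] - target["amount"]) / target["amount"]

# The answer is computed from traceable facts, not guessed from text chunks:
print(f"Q3 revenue came in {change:.0%} above target (source: {actual['source']})")
```

Because each amount node names its source, the final answer can cite the exact record it came from rather than asking the reader to trust a similarity score.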
Think of ontologies as a universal language for your business. Instead of having “customer” in your CRM, “client” in your ERP, and “account” in your billing system, an ontology defines that these are all the same concept: a Business Entity that Purchases Products.
When an LLM works with ontology-based data, it knows that “customer,” “client,” and “account” all denote the same Business Entity, how that entity relates to products, invoices, and support tickets, and which business rules apply to it. This provides a level of intelligence and understanding that traditional RAG simply cannot achieve.
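A minimal sketch of that unification: the ontology maps each system-specific label onto one shared concept, and every query resolves through the mapping. The concept and prefix names (`crm:`, `erp:`, `biz:BusinessEntity`) are illustrative, not Fluree’s actual ontology.

```python
# Ontology as data: three system vocabularies, one shared concept.
ONTOLOGY = {
    "crm:customer": "biz:BusinessEntity",
    "erp:client": "biz:BusinessEntity",
    "billing:account": "biz:BusinessEntity",
}

def canonical(term):
    """Resolve a source-system term to its ontology concept (identity if unmapped)."""
    return ONTOLOGY.get(term, term)

# Three systems, three words, one meaning -- encoded once, applied everywhere:
print(canonical("crm:customer"), canonical("erp:client"), canonical("billing:account"))
```

The payoff is that the translation is written down once, in the ontology, instead of being re-guessed by the LLM on every query.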
For GraphRAG to work in production environments—not just research labs—it needs three critical capabilities: universal connectivity into data sources, 100% verifiable accuracy, and embedded security.
Your enterprise data lives everywhere: CRM platforms like Salesforce, ERP systems, support desks like Zendesk, billing systems, document repositories, spreadsheets, and SaaS APIs.
The Reality: Most organizations have 10+ disconnected systems. Traditional RAG requires massive ETL projects to consolidate this data. By the time you finish integrating everything, the requirements have changed.
GraphRAG Solution: Connect to any source where it lives. Use semantic standards to create a unified view without physically moving data. When new sources emerge, connect them to the graph—no full reintegration required.
Here’s what separates real enterprise AI from chatbots: Every answer must be traceable to its source.
In regulated industries—healthcare, finance, legal—you can’t act on information you can’t verify. “The AI said so” isn’t acceptable when auditors come knocking.
GraphRAG provides complete lineage: every fact links back to the system, record, and timestamp it came from.
When an LLM tells you a patient is allergic to penicillin, you need to know that it came from their verified medical record, not from a pattern match on similar patient names.
Traditional RAG has a dangerous assumption: If you can ask a question, you can see all the data needed to answer it.
This creates two terrible options: lock the system down so tightly it can’t answer useful questions, or grant it broad access and risk exposing data users were never entitled to see.
Fluree offers a third way: Embed security and governance rules directly into the data graph.
Example: An HR manager asks, “What’s the average salary in the engineering department?”
Traditional RAG: Retrieves all salary data, returns answer, potentially exposes individual salaries
Fluree GraphRAG with embedded policy: the query returns the aggregate the manager is entitled to see, and individual salary records are filtered out at query time.
The privacy protection happens at the data level, not the application level. The LLM itself never sees data it shouldn’t—even in its context.
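As a minimal sketch of data-level policy enforcement, assuming illustrative role and field names (this is not Fluree’s policy syntax): each record carries an access rule, and filtering happens before anything reaches the LLM’s context.

```python
# Policy lives with the data: each row names the role allowed to see it raw.
salaries = [
    {"employee": "e-101", "dept": "engineering", "salary": 150_000, "policy": "hr-admin-only"},
    {"employee": "e-102", "dept": "engineering", "salary": 130_000, "policy": "hr-admin-only"},
    {"employee": "e-103", "dept": "marketing",   "salary": 110_000, "policy": "hr-admin-only"},
]

def average_salary(dept, role):
    """Answer the aggregate question; redact row-level data for non-admins."""
    rows = [r for r in salaries if r["dept"] == dept]
    avg = sum(r["salary"] for r in rows) / len(rows)
    if role == "hr-admin":
        return {"average": avg, "rows": rows}
    # An HR manager is entitled to the aggregate, never the individual rows:
    return {"average": avg, "rows": "redacted"}

answer = average_salary("engineering", role="hr-manager")
print(answer)  # the LLM context receives the average, not the records
```

The key design point is that `average_salary` is the *only* path to the data, so there is no code path where unredacted rows leak into a prompt.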
Let’s make this concrete. Here’s what happens when you move from 80% accuracy to over 95% accuracy:
At 80% accuracy (1 in 5 answers wrong), every response needs human verification, errors compound across multi-step workflows, and trust never develops. At 95%+ accuracy (at least 19 in 20 answers correct), teams can act on answers directly and reserve human review for flagged edge cases.
That extra 15% is the difference between “interesting experiment” and “business transformation.”
Here’s a fascinating discovery from the research: When you express database schemas as semantic ontologies (using RDF instead of SQL DDL) and ask ChatGPT to generate queries, accuracy jumps 3x immediately—without any training.
Why? Because LLMs were trained on massive amounts of linked data, semantic web standards, and graph-structured information. When you give an LLM data in semantic format, you’re working with its training, not against it.
Even more interesting: ChatGPT and Claude already know how to convert SQL schemas into semantic ontologies. This means the barrier to entry is lower than most organizations think.
You don’t need to rebuild your entire data infrastructure. You can start by creating semantic views over existing systems, letting the graph layer handle translation.
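To make the schema-lifting idea concrete, here is a toy sketch of turning SQL DDL into class and property triples. The regex handles only the simple `CREATE TABLE` shape shown here and the naming convention is an assumption; real DDL needs a proper parser, and in practice you would hand this job to the LLM itself.

```python
import re

ddl = """
CREATE TABLE customers (id INT, name TEXT, tier TEXT);
CREATE TABLE invoices (id INT, customer_id INT, status TEXT);
"""

def ddl_to_triples(ddl):
    """Lift simple CREATE TABLE statements into class/property triples."""
    triples = []
    for table, cols in re.findall(r"CREATE TABLE (\w+)\s*\(([^)]*)\)", ddl):
        cls = table.rstrip("s").capitalize()          # customers -> Customer
        triples.append((f"ex:{cls}", "rdf:type", "owl:Class"))
        for col in cols.split(","):
            name = col.split()[0]                     # "id INT" -> "id"
            triples.append((f"ex:{name}", "rdfs:domain", f"ex:{cls}"))
    return triples

out = ddl_to_triples(ddl)
for t in out:
    print(t)
```

Even this crude translation surfaces information the raw DDL hides from an LLM: that `customers` rows are instances of a concept, and that `customer_id` is a property whose domain is another concept.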
The research showed that decentralized knowledge graphs achieved the highest accuracy (95%+). But what does “decentralized” actually mean, and why does it matter?
Traditional enterprise data warehouses promise a “single source of truth.” But in reality, centralizing all your data is slow, expensive, and risky: every copy creates another synchronization problem, another compliance exposure, another attack surface. Decentralized knowledge graphs flip the model: data stays where it lives, under local ownership and local law, while shared semantic standards make it queryable as a single graph.
Real-world example: Your sales team in Germany needs to answer: “Which US customers have similar profiles to our most profitable EU customers?”
Traditional approach: Move customer data from US to EU (GDPR violation), or vice versa (expensive), or give up on the question.
Decentralized GraphRAG: the query federates across both regions, each side computes over its own data, and only derived profile attributes cross the border, never the underlying customer records.
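A minimal sketch of that federation, with invented company names and fields: the EU side derives an anonymous profile locally, and only that profile is sent to the US side for matching.

```python
# Each region holds its own store; records never leave their region.
EU_STORE = [
    {"name": "Müller GmbH", "industry": "automotive", "tier": "Enterprise", "profit_m": 9.1},
    {"name": "Dupont SA",   "industry": "retail",     "tier": "SMB",        "profit_m": 1.2},
]
US_STORE = [
    {"name": "Acme Corp", "industry": "automotive", "tier": "Enterprise"},
    {"name": "Smallco",   "industry": "retail",     "tier": "SMB"},
]

def eu_top_profile():
    """Runs inside the EU: returns only a derived profile, no customer PII."""
    best = max(EU_STORE, key=lambda c: c["profit_m"])
    return {"industry": best["industry"], "tier": best["tier"]}

def us_matches(profile):
    """Runs inside the US: matches local customers against the shared profile."""
    return [c["name"] for c in US_STORE
            if c["industry"] == profile["industry"] and c["tier"] == profile["tier"]]

matches = us_matches(eu_top_profile())
print(matches)
```

Only the two-field profile crosses the boundary between the functions, which is the whole point: the question gets answered while each region’s records stay home.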
If you’re thinking, “This sounds great, but isn’t building a knowledge graph impossibly complex?” let’s address that directly.
Don’t try to create a universal enterprise ontology on day one. Pick a high-value problem—say, customer churn prediction—and build the graph to solve that specific question. As you expand to new use cases, extend the graph.
Knowledge graphs grow organically, unlike relational databases that require complete schema design upfront.
You don’t rip out your Oracle database or Salesforce instance. The semantic layer creates a unified view while data remains in source systems. Think of it as a smart integration layer that LLMs can query directly.
The W3C (World Wide Web Consortium) established these standards years ago. They’re not bleeding-edge research—they’re the same standards that power major parts of the web, from Google’s Knowledge Graph to Wikipedia’s structured data.
Phase 1: Prove the Concept (2-4 weeks)
Phase 2: Expand Coverage (2-3 months)
Phase 3: Scale Enterprise-Wide (6-12 months)
Let’s talk about ROI. If you’re evaluating GraphRAG, someone will ask: “Is the accuracy improvement worth the investment?”
Here’s a framework for thinking about it:
Low-stakes hallucinations: an internal chatbot garbles a date or misattributes a document; mildly annoying and cheap to correct. Medium-stakes hallucinations: a sales rep quotes the wrong contract terms or a report cites invented figures; money and credibility are now on the line. High-stakes hallucinations: a clinical, financial, or legal decision rests on a fabricated fact; the cost is regulatory exposure, litigation, or real harm. When your AI is accurate enough to trust with critical decisions, the value compounds: decisions accelerate, verification overhead shrinks, and AI moves from pilot projects into core operations.
For most enterprises, the ROI case isn’t marginal—it’s overwhelming.
We touched on embedded security earlier, but this deserves deeper attention because it’s often the dealbreaker for enterprise AI adoption.
When an LLM needs to answer a question, traditional RAG faces a dilemma: retrieve broadly and risk pulling in data the user shouldn’t see, or retrieve narrowly and starve the model of context.
This approach has three failures:
Failure #1: Over-retrieval. You retrieve more data than needed, expanding your attack surface. Even if you filter before the LLM sees it, that data has already moved through your system.
Failure #2: Context leakage. LLMs need context to answer well. Strip too much sensitive data and answers become useless; leave it in and you risk exposure.
Failure #3: Audit trail gaps. When things go wrong, can you prove what data the LLM actually saw? Often, no.
In Fluree, security works differently:
1. Policy lives with the data. Each node in the graph carries its own access policy, stored as data alongside the facts it protects.
2. Query-time enforcement. When an LLM queries the graph, policies are evaluated per node as the query executes, so unauthorized data never enters the result set.
3. Complete lineage. Every query is logged: who asked, what was returned, and which source records supplied each fact.
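A minimal sketch of that lineage log, with invented record identifiers and field names: every fact handed to the LLM is recorded with who asked, when, and which source record supplied it.

```python
from datetime import datetime, timezone

AUDIT_LOG = []

# Each fact carries its source; the penicillin example from earlier.
FACTS = {
    "patient-7:allergy": {"value": "penicillin", "source": "EHR record MR-2231"},
}

def fetch(fact_id, user):
    """Return a fact's value and append a lineage entry to the audit log."""
    fact = FACTS[fact_id]
    AUDIT_LOG.append({
        "user": user,
        "fact": fact_id,
        "source": fact["source"],
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return fact["value"]

value = fetch("patient-7:allergy", user="dr.smith")
# Later, an auditor can prove exactly what the LLM saw and where it came from:
print(AUDIT_LOG[0]["source"])
```

The design choice that matters is logging at the fetch boundary, not in the application: there is no way to read a fact without leaving a lineage entry behind.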
Is a knowledge graph just another kind of vector database? No. Vector databases store mathematical representations of text for similarity search. Knowledge graphs store explicit semantic relationships between entities. They serve different purposes and often work together: vectors for unstructured content, graphs for structured relationships.
Does building one require a team of data scientists? Not primarily. Knowledge graphs are maintained by domain experts (the people who understand the business) rather than data scientists. The semantic model reflects business logic, not statistical models. Your finance team can define how financial concepts relate; your compliance team defines regulatory connections.
Can GraphRAG handle real-time data? It excels with it. Unlike data warehouses that batch-load overnight, semantic graphs can ingest updates continuously. When a customer places an order, that relationship appears in the graph immediately. Your LLM queries live data, not yesterday’s snapshot.
Doesn’t this take longer to set up? Initial setup takes longer than “throw it in a vector store,” yes. But the payoff comes from not having to rebuild when requirements change. Add a new data source? Connect it to the graph. Need to query differently? Extend the ontology. Unlike relational systems that require schema migration and ETL rewrites, graphs evolve incrementally.
What if our data is messy? Welcome to the club: every enterprise has messy data. The advantage of GraphRAG is that semantic standards help you clean as you go. When you discover that “customer,” “client,” and “account” mean the same thing, you encode that once in your ontology. The graph handles translation automatically from that point forward.
While the broader market is just beginning to explore GraphRAG, Fluree has already solved the implementation challenges that keep most enterprises stuck at 80% accuracy. Here’s what sets Fluree apart:
What others require: Custom integration work for each data source, long development cycles to connect legacy systems
What Fluree does: Out-of-the-box connectors to virtually any enterprise data or content system. From day one, data can be discovered and added to the semantic layer without custom development.
Your advantage: Connect Oracle, SAP, Salesforce, SharePoint, PDFs, APIs, and more—immediately. No six-month integration projects. No expensive middleware. Data flows into your knowledge graph as soon as you need it.
What others promise: Centralized data warehouses masquerading as “distributed” systems
What Fluree delivers: Real federated queries across virtually any data store adhering to knowledge graph standards. Queries span multiple systems, multiple clouds, and multiple geographies, resolved in real time at query time.
Your advantage: Achieve the 95%+ accuracy of decentralized knowledge graphs without moving sensitive data across borders. Maintain data sovereignty while enabling global intelligence. Comply with GDPR, data residency requirements, and industry regulations automatically.
What others offer: Static security rules defined once at implementation, requiring code changes to update
What Fluree provides: Advanced logic within the ontology combined with policy-as-data enables dynamic policy updates that adapt automatically to context, risk level, and regulatory changes.
Your advantage: Compliance that evolves with your business. When regulations change, update policies without touching code. When risk profiles shift, security adapts automatically. When new data sources connect, governance extends seamlessly.
If you’re leading enterprise AI initiatives, here’s the strategic takeaway:
The bottleneck isn’t the LLM. It’s the data architecture.
You can use the most advanced language model in the world, but if you’re feeding it fragmented, poorly connected data, you’ll get impressive-sounding hallucinations.
The organizations winning with enterprise AI aren’t necessarily using different LLMs. They’re using better data architecture—specifically, semantic knowledge graphs that give LLMs the structured, explicit relationships they need to reason accurately.
Ask yourself two questions. First: can you verify where every AI answer comes from? If no, you have an accuracy problem waiting to bite you. If yes, you’re ahead of most organizations. Second: how long does it take to connect a new data source? If weeks or months, your architecture is brittle. If days, you’ve achieved the flexibility needed for AI evolution. These questions cut to the heart of whether your AI implementation is truly production-ready and trustworthy enough for critical business decisions.
If semantic GraphRAG resonates with your challenges, here’s how to move forward:
Step 1: Identify Your Hallucination Pain Where are inaccurate AI responses causing the most damage? Focus there first.
Step 2: Map Your Data Sources List the systems that need to connect to answer that question. You probably have 5-10.
Step 3: Define Success Metrics What accuracy rate would make you trust AI with this decision? 85%? 90%? 95%? Be specific.
Step 4: Build a Proof of Concept Connect a subset of your data, implement a basic semantic model, and measure accuracy against your baseline.
Step 5: Measure and Iterate GraphRAG isn’t all-or-nothing. Start with a high-value use case, prove ROI, and expand from there.
Traditional RAG transformed LLMs from interesting demos to useful tools. But “useful” isn’t enough when you’re making million-dollar decisions or operating in regulated industries.
Semantic GraphRAG represents the next evolution: from tools that might be right to systems you can trust with critical business operations.
The research is clear: Knowledge graphs deliver 4x better zero-shot accuracy, and up to 95%+ accuracy with proper implementation. More importantly, every answer is verifiable, traceable, and governed by embedded security policies.
As Gartner recently noted, knowledge graphs have moved from “emerging technology” to “critical enabler” for enterprise AI. Organizations that adopt semantic GraphRAG now are building the foundation for the next decade of AI-driven business transformation.
Join our expert-led session to discover how GraphRAG and Model Context Protocol (MCP) are revolutionizing AI architecture. Learn practical implementation strategies that enhance accuracy, reduce hallucinations, and unlock the full potential of your enterprise data.
Ready to implement Model Context Protocol in your own environment? Access our comprehensive documentation and start building with Fluree’s MCP Server today. Get step-by-step guidance for local setup and integration.
Report: Decentralized Knowledge Graphs Improve RAG Accuracy for Enterprise LLMs
Fluree just completed a report on reducing hallucinations and increasing accuracy for enterprise production Generative AI through the use of Knowledge Graph RAG (Retrieval Augmented Generation). Get your copy by filling out the form below.
Fill out the form below to schedule a call.
Fluree is integrated with AWS, allowing users to build sophisticated applications with increased flexibility, scalability, and reliability.
Semiring’s natural language processing pipeline utilizes knowledge graphs and large language models to bring hidden insights to light.
Industry Knowledge Graph LLC is a company that specializes in creating and utilizing knowledge graphs to unlock insights and connections within complex datasets, aiding businesses in making informed decisions and optimizing processes.
Cobwebb specializes in providing comprehensive communication and networking solutions, empowering businesses with tailored services to enhance efficiency and connectivity.
Deploy and Manage Fluree Nodes on Zeeve’s Cloud Infrastructure.
Visit Partner Site More Details
Sinisana provides food traceability solutions, built with Fluree’s distributed ledger technology.
Lead Semantics provides text-to-knowledge solutions.
TextDistil, powered by Fluree technology, targets the cognitive corner of the technology landscape. It is well-positioned to deliver novel functionality by leveraging the power of Large Language Models combined with the robust methods of Semantic Technology.
Project Logosphere, from Ikigai, is a decentralized knowledge graph that empowers richer data sets and discoveries.
Cibersons develops and invests in new technologies, such as artificial intelligence, robotics, space technology, fintech, blockchain, and others.
Powered by Fluree, AvioChain is an aviation maintenance platform built from the ground up for traceability, security, and interoperability.
Thematix was founded in 2011 to bring together the best minds in semantic technologies, business and information architecture, and traditional software engineering, to uniquely address practical problems in business operations, product development and marketing.
Opening Bell Ventures provides high-impact transformational services to C-level executives to help them shape and successfully execute on their Omni-Channel Digital Strategies.
Datavillage enables organizations to combine sensitive, proprietary, or personal data through transparent governance. AI models are trained and applied in fully confidential environments ensuring that only derived data (insights) is shared.
Vitality Technet has partnered with Fluree to accelerate drug discovery processes and enable ongoing collaboration across internal departments, external partners, and regulatory offices through semantics, knowledge graphs, and digital trust technologies.
SSB Digital is a dynamic and forward-thinking IT company specializing in developing bespoke solutions tailored to meet the unique needs and challenges of clients, ranging from predictive analytics and smart automation to decentralized applications and secure transactions.
Marzex is a bespoke Web3 systems development firm. With the help of Fluree technology, Marzex completed one of the first successful blockchain-based online elections in history.
Semantic Arts delivers data-centric transformation through a model-driven, semantic knowledge graph approach to enterprise data management.
Intigris, a leading Salesforce implementation partner, has partnered with Fluree to help organizations bridge and integrate multiple Salesforce instances.
Follow us on Linkedin
Join our Mailing List
Subscribe to our LinkedIn Newsletter
Subscribe to our YouTube channel
Partner, Analytic Strategy Partners; Frederick H. Rawson Professor in Medicine and Computer Science, University of Chicago and Chief of the Section of Biomedical Data Science in the Department of Medicine
Robert Grossman has been working in the field of data science, machine learning, big data, and distributed computing for over 25 years. He is a faculty member at the University of Chicago, where he is the Jim and Karen Frank Director of the Center for Translational Data Science. He is the Principal Investigator for the Genomic Data Commons, one of the largest collections of harmonized cancer genomics data in the world.
He founded Analytic Strategy Partners in 2016, which helps companies develop analytic strategies, improve their analytic operations, and evaluate potential analytic acquisitions and opportunities. From 2002-2015, he was the Founder and Managing Partner of Open Data Group (now ModelOp), which was one of the pioneers scaling predictive analytics to large datasets and helping companies develop and deploy innovative analytic solutions. From 1996 to 2001, he was the Founder and CEO of Magnify, which is now part of Lexis-Nexis (RELX Group) and provides predictive analytics solutions to the insurance industry.
Robert is also the Chair of the Open Commons Consortium (OCC), which is a not-for-profit that manages and operates cloud computing infrastructure to support scientific, medical, health care and environmental research.
Connect with Robert on Linkedin
Founder, DataStraits Inc., Chief Revenue Officer, 3i Infotech Ltd
Sudeep Nadkarni has decades of experience in scaling managed services and hi-tech product firms. He has driven several new ventures and corporate turnarounds resulting in one IPO and three $1B+ exits. VC/PE firms have entrusted Sudeep with key executive roles that include entering new opportunity areas, leading global sales, scaling operations & post-merger integrations.
Sudeep has broad international experience having worked, lived, and led firms operating in US, UK, Middle East, Asia & Africa. He is passionate about bringing innovative business products to market that leverage web 3.0 technologies and have embedded governance risk and compliance.
Connect with Sudeep on Linkedin
CEO, Data4Real LLC
Julia Bardmesser is a technology, architecture and data strategy executive, board member and advisor. In addition to her role as CEO of Data4Real LLC, she currently serves as Chair of Technology Advisory Council, Women Leaders In Data & AI (WLDA). She is a recognized thought leader in data driven digital transformation with over 30 years of experience in building technology and business capabilities that enable business growth, innovation, and agility. Julia has led transformational initiatives in many financial services companies such as Voya Financial, Deutsche Bank Citi, FINRA, Freddie Mac, and others.
Julia is a much sought-after speaker and mentor in the industry, and she has received recognition across the industry for her significant contributions. She has been named to engatica 2023 list of World’s Top 200 Business and Technology Innovators; received 2022 WLDA Changemaker in AI award; has been named to CDO Magazine’s List of Global Data Power Wdomen three years in the row (2020-2022); named Top 150 Business Transformation Leader by Constellation Research in 2019; and recognized as the Best Data Management Practitioner by A-Team Data Management Insight in 2017.
Connect with Julia on Linkedin
Senior Advisor, Board Member, Strategic Investor
After nine years leading the rescue and turnaround of Banco del Progreso in the Dominican Republic culminating with its acquisition by Scotiabank (for a 2.7x book value multiple), Mark focuses on advisory relationships and Boards of Directors where he brings the breadth of his prior consulting and banking/payments experience.
In 2018, Mark founded Alberdi Advisory Corporation where he is engaged in advisory services for the biotechnology, technology, distribution, and financial services industries. Mark enjoys working with founders of successful businesses as well as start-ups and VC; he serves on several Boards of Directors and Advisory Boards including MPX – Marco Polo Exchange – providing world-class systems and support to interconnect Broker-Dealers and Family Offices around the world and Fluree – focusing on web3 and blockchain. He is actively engaged in strategic advisory with the founder and Executive Committee of the Biotechnology Institute of Spain with over 50 patents and sales of its world-class regenerative therapies in more than 30 countries.
Prior work experience includes leadership positions with MasterCard, IBM/PwC, Kearney, BBVA and Citibank. Mark has worked in over 30 countries – extensively across Europe and the Americas as well as occasional experiences in Asia.
Connect with Mark on Linkedin
Chair of the Board, Enterprise Data Management Council
Peter Serenita was one of the first Chief Data Officers (CDOs) in financial services. He was a 28-year veteran of JPMorgan having held several key positions in business and information technology including the role of Chief Data Officer of the Worldwide Securities division. Subsequently, Peter became HSBC’s first Group Chief Data Officer, focusing on establishing a global data organization and capability to improve data consistency across the firm. More recently, Peter was the Enterprise Chief Data Officer for Scotiabank focused on defining and implementing a data management capability to improve data quality.
Peter is currently the Chairman of the Enterprise Data Management Council, a trade organization advancing data management globally across industries. Peter was a member of the inaugural Financial Research Advisory Committee (under the U.S. Department of Treasury) tasked with improving data quality in regulatory submissions to identify systemic risk.
Connect with Peter on Linkedin
Pawan came to Fluree via its acquisition of ZettaLabs, an AI-based data cleansing and mastering company. His previous experience includes IBM, where he was part of the Strategy, Business Development and Operations team at IBM Watson Health’s Provider business. Prior to that, Pawan spent 10 years with Thomson Reuters in the UK, US, and the Middle East. During his tenure he held executive positions in Finance, Sales, and Corporate Development and Strategy. He is an alumnus of The Georgia Institute of Technology and Georgia State University.
Connect with Pawan on Linkedin
Andrew “Flip” Filipowski is one of the world’s most successful high-tech entrepreneurs, philanthropists and industry visionaries. Mr. Filipowski serves as Co-founder and Co-CEO of Fluree, where he seeks to bring trust, security, and versatility to data.
Mr. Filipowski also serves as co-founder, chairman and chief executive officer of SilkRoad Equity, a global private investment firm, as well as the co-founder, of Tally Capital.
Mr. Filipowski was formerly COO of Cullinet, the largest software company of the 1980s. He founded and served as Chairman and CEO of PLATINUM technology, growing PLATINUM into the 8th largest software company in the world at the time of its $4 billion sale to Computer Associates, the largest such transaction for a software company at the time. Upside Magazine named Mr. Filipowski one of the Top 100 Most Influential People in Information Technology. A recipient of Entrepreneur of the Year Awards from both Ernst & Young and Merrill Lynch, Mr. Filipowski has also been awarded the Young President’s Organization Legacy Award and the Anti-Defamation League’s Torch of Liberty award for his work fighting hate on the Internet.
Mr. Filipowski is or has been a founder, director, or executive of various companies, including Fuel 50, Veriblock, MissionMode, Onramp Branding, House of Blues, Blue Rhino, Littermaid, and dozens of other recognized enterprises.
Connect with Flip on Linkedin
Brian is the Co-founder and Co-CEO of Fluree, PBC, a North Carolina-based Public Benefit Corporation.
Platz was an entrepreneur and executive throughout the early internet days and the SaaS boom, having founded the popular A List Apart web development community along with a host of successful SaaS companies. He now helps companies navigate the complexity of the enterprise data transformation movement.
Previous to establishing Fluree, Brian co-founded SilkRoad Technology which grew to over 2,000 customers and 500 employees in 12 global offices. Brian sits on the board of Fuel50 and Odigia, and is an advisor to Fabric Inc.
Connect with Brian on Linkedin
Eliud Polanco is a seasoned data executive with extensive experience leading global enterprise data transformation and management initiatives. Prior to his current role as President of Fluree, a data collaboration and transformation company, Eliud was Head of Analytics at Scotiabank, Global Head of Analytics and Big Data at HSBC, Head of Anti-Financial Crime Technology Architecture for Deutsche Bank U.S., and Head of Data Innovation at Citi.
In his most recent role as Head of Analytics and Data Standards at Scotiabank, Eliud led a full-spectrum data transformation initiative to implement new tools and technology architecture strategies, both on-premises and in the cloud, for ingesting, analyzing, cleansing, and creating consumption-ready data assets.
Connect with Eliud on Linkedin
Get the right data into the right hands.
Build your Verifiable Credentials/DID solution with Fluree.
Wherever you are in your Knowledge Graph journey, Fluree has the tools and technology to unify data based on universal meaning, answer complex questions that span your business, and democratize insights across your organization.
Build real-time data collaboration that spans internal and external organizational boundaries, with protections and controls to meet evolving data policy and privacy regulations.
Fluree Sense auto-discovers data sitting across applications and data lakes, cleans and formats it into JSON-LD, and loads it into Fluree’s trusted data platform for sharing, analytics, and re-use.
Transform legacy data into linked, semantic knowledge graphs. Fluree Sense automates the data mappings from local formats to a universal ontology and transforms the flat files into RDF.
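The mapping step above can be sketched in a few lines. This is an illustrative example only, not Fluree Sense’s actual output: the field names, the `schema.org` vocabulary, and the `urn:customer:` identifier scheme are all assumptions chosen to show the general shape of a flat-record-to-JSON-LD transformation.

```python
# Minimal sketch: map a flat legacy record onto a shared vocabulary as JSON-LD.
# FIELD_MAP and the schema.org terms are hypothetical, for illustration only.

FIELD_MAP = {
    "cust_id": "@id",       # local key -> global identifier
    "cust_name": "name",    # local column -> ontology term
    "cust_email": "email",
}

def to_jsonld(flat_record, context="https://schema.org"):
    """Transform one flat source record into a JSON-LD document."""
    doc = {"@context": context, "@type": "Person"}
    for src_field, term in FIELD_MAP.items():
        if src_field in flat_record:
            value = flat_record[src_field]
            # Promote the local key to a globally unique IRI-style identifier.
            doc[term] = f"urn:customer:{value}" if term == "@id" else value
    return doc

record = {"cust_id": "C-1001", "cust_name": "Ada Lovelace",
          "cust_email": "ada@example.com"}
print(to_jsonld(record))
```

Because JSON-LD is an RDF serialization, each resulting document is directly interpretable as graph triples, which is what allows records from different silos to link on shared identifiers and ontology terms.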
Whether you are consolidating data silos, migrating your data to a new platform, or building an MDM platform, we can help you build clean, accurate, and reliable golden records.
Our enterprise users receive exclusive support and even more features. Book a call with our sales team to get started.
By downloading and running Fluree you agree to our terms of service (pdf).