LLMs promise to help with everything from predicting wildfires to assisting you while you drive. They keep getting faster and, as DeepSeek recently proved, cheaper. Use cases keep expanding.
The hype conceals an uncomfortable truth: for all the progress enterprises have made in integrating LLMs across large datasets, errors persist. Data quality remains a major headache for organizations, and a major roadblock for AI initiatives. According to VentureBeat, GenAI implementations grew 17% in 2024, yet organizations report that their data quality dropped significantly.
Only after solving the accuracy problem will truly trusted artificial intelligence emerge. Once GenAI agents are able to act upon an enterprise’s proprietary data and knowledge base, LLMs will be able to do much more than predict the next line of code or write decent marketing copy. They’ll be able to extract, validate and analyze data, automate pattern recognition, and speed up labor-intensive processes. Automated underwriting, intelligent claims processing, and personalized financial advice are a few emerging use cases.
The caveat? These systems must be virtually error-free.
LLMs do not naturally get along with structured data, and to a lesser extent unstructured data, a mismatch often called the semantic gap. In the race to become data-centric, enterprises have layered on new data management systems to bridge that gap. Smart and necessary as they are, these systems haven't solved the basic underlying problem: to have an error-free LLM, you need good data, and to have good data at enterprise scale, you need a good ontology.
Enterprises thus find themselves at a crossroads: tackle the ontology, or buy a collection of bespoke LLMs, each digging into structured data in its own siloed platform. The latter repeats the software bloat of the SaaS era. At Fluree, we obviously want you to tackle the ontology, a process from which we've hopefully removed most of the headache. To understand where we're coming from, read on.
Consider how marketing professionals currently use ChatGPT. They leverage its content writing capabilities while supplying the missing pieces—specific product features, differentiators or branding—from their own human memories. In theory, you should be able to connect an LLM directly to an enterprise database and have it populate responses with its own “memory” of accurate data.
In fact, tuning an LLM to query a database works reasonably well for small datasets and simple use cases. The LLM generates a SQL query, which executes against a database full of structured data, and you receive a human-readable response. You never know, however, whether the LLM will reference the correct column names, JOINs, table relationships, or other schema details.
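The failure mode is easy to reproduce. Here is a minimal sketch, using a hypothetical mini-warehouse in SQLite, where a plausibly LLM-generated query guesses a column name that does not exist in the real schema:

```python
import sqlite3

# Hypothetical schema: the real column is "cust_id", not "customer_id".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (cust_id TEXT, total REAL, placed_on TEXT)")
conn.execute("INSERT INTO orders VALUES ('C001', 99.50, '2024-11-02')")

# A plausible query an LLM might generate from "total spend per customer":
llm_sql = "SELECT customer_id, SUM(total) FROM orders GROUP BY customer_id"

try:
    conn.execute(llm_sql)
except sqlite3.OperationalError as e:
    # The guessed column name fails against the real schema.
    print("query failed:", e)

# The corrected query, using the actual column name, succeeds:
rows = conn.execute(
    "SELECT cust_id, SUM(total) FROM orders GROUP BY cust_id"
).fetchall()
print(rows)
```

The LLM had no way to know the warehouse abbreviates `customer_id`; only schema knowledge, not language skill, closes that gap.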
Tuning works for small, simple use cases. A team can catch and correct LLM interpretation problems as they occur. Multiply that by dozens to hundreds of databases, though, and the approach won’t scale. Layer in various types and formats of data, and it’s nearly impossible to replicate what a human might be able to do.
You could also turn to prompt engineering. Prompt engineering is both art and science, and very much a one-off exercise. For instance, prefacing queries with a prompt like "you are a semantic data expert" can suddenly improve accuracy, though the reasons aren't always clear.
That kind of labor doesn’t scale to the complex schemas and sophisticated queries needed for enterprise use. Nor does it work when different departments need different questions answered about different data sets (unless you want to hire a bevy of full-time prompt engineers). The more databases you have, the more tuning and prompt engineering become unviable.
The need to get LLMs to work at enterprise scale is driving adoption of new solutions. One such solution is retrieval-augmented generation (RAG). It’s a way to handle several big data sets at once.
RAG lets LLMs query multiple databases and compose a unified response. To do so, LLMs work with vectors: numerical representations (embeddings) of raw data, which live in a vector database. The LLM searches across vectors, finding the closest matches to ground its answers.
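The "finding likely matches" step is a nearest-neighbor search over those vectors. A minimal sketch, using made-up 3-dimensional vectors in place of the high-dimensional embeddings a real model would produce:

```python
import math

# Toy "embeddings": stand-ins for what an embedding model would output.
vectors = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "return window":  [0.7, 0.3, 0.2],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical embedding of the question "how do refunds work?"
query = [0.9, 0.1, 0.0]
best = max(vectors, key=lambda k: cosine(query, vectors[k]))
print(best)  # refund policy
```

A vector database does exactly this, just at scale and with approximate indexes instead of a brute-force loop.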
This, too, presents a new challenge. Getting an LLM to use big sets of structured- and unstructured data means inserting a vector database between it and enterprise data. While unstructured data works well with a vector database, structured data does not.
Vector databases are designed to handle unstructured data. LLMs were trained on the public internet, which is full of unstructured data. Text, images, video, audio and other forms of unstructured data come in many different formats. Categorically, such data is not consistent enough to follow a pre-defined database schema. It’s generally stored in raw form in a NoSQL database or data lake, using a schema-on-read approach. Before it can be uploaded into a vector database, the raw data needs to be pre-processed into vector embeddings using a machine-learning model that encodes semantic or numerical relationships.
This pre-processing is easy compared to what structured data requires. Structured data must be sourced from its data warehouse or relational database, where it already sits in a schema. It must then be cleaned and serialized into text-based formats like CSV or HTML before being fed into a model. If the data is sensitive, it should also be tokenized (replaced with non-sensitive surrogate values) for security. And because large enterprise data sets often exceed LLM token limits, data engineers need to be precise about which data they use.
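The serialize-then-budget step looks roughly like this. The rows, field names, and context budget below are all hypothetical, and the four-characters-per-token rule of thumb is only a crude stand-in for a real tokenizer:

```python
# Hypothetical rows pulled from a relational source.
rows = [
    {"sku": "A-100", "region": "EMEA", "revenue": 120000},
    {"sku": "A-100", "region": "APAC", "revenue": 95000},
]

# Serialize to CSV text, the form a model can actually consume.
header = ",".join(rows[0].keys())
lines = [header] + [",".join(str(v) for v in r.values()) for r in rows]
serialized = "\n".join(lines)

# Crude token estimate (~4 chars per token); real tokenizers differ.
TOKEN_BUDGET = 4096
est_tokens = len(serialized) // 4
assert est_tokens <= TOKEN_BUDGET, "trim the selection before sending to the LLM"
print(serialized)
```

At enterprise scale, that final assertion fails constantly, which is why selecting the right slice of data becomes an engineering discipline of its own.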
Moreover, structured data, which includes pricing information, sales data, metrics, and so on, changes frequently. This makes it particularly hard for an LLM, which reasons using vectors and probabilities, to give an accurate reply.
Bespoke LLMs like Salesforce Genie and Workday Illuminate exist to solve the challenge of getting an LLM to give accurate replies from structured data. But they don’t work outside of their branded platforms. Such bespoke solutions also add unnecessary software bloat.
To solve the challenge of getting an LLM to read structured data, enterprise data teams are now turning to knowledge graphs, which unify structured data from multiple sources into a single semantic layer.
Knowledge graphs let you identify key entities in a structured database, such as customer IDs, and represent them as nodes. You then map relationships between those nodes (such as purchased product) as edges. As long as you define an ontology for your knowledge graph that matches the entities in your structured databases, your graph can pull from many databases at once to display new relationships and insights.
For example, if you searched your graph for customer IDs, purchased products, and purchase date, you could figure out when the majority of customers purchased a certain kind of product. If an LLM piggybacks upon that effort, you need only enter a natural-language question and the LLM will generate an accurate answer.
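Stripped to its essentials, that graph query is a walk over subject-predicate-object triples. The sketch below uses plain Python tuples as a stand-in for an RDF store, with hypothetical `cust:`/`prod:` identifiers:

```python
from collections import Counter

# Triples (subject, predicate, object): a minimal stand-in for an RDF graph.
graph = [
    ("cust:001", "purchased", "prod:laptop"),
    ("cust:002", "purchased", "prod:laptop"),
    ("cust:003", "purchased", "prod:phone"),
    ("cust:001", "purchaseDate", "2024-11-02"),
    ("cust:002", "purchaseDate", "2024-11-03"),
]

# "Which product did the most customers purchase?"
products = Counter(o for s, p, o in graph if p == "purchased")
print(products.most_common(1))  # [('prod:laptop', 2)]
```

A production knowledge graph would answer the same question with a SPARQL query, but the shape of the computation is identical: filter edges by predicate, then aggregate.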
Or will it? If you’re tired of new technologies creating more problems, join the thousands of enterprise data scientists who feel the same way. As it turns out, the way the LLM synthesizes information from the knowledge graph creates … you guessed it … more inaccuracies.
When a user inputs a natural language query, the LLM interprets it by extracting key entities, relationships, and context. The system then decides whether to pull data from a knowledge graph (structured data), a vector database (unstructured data), or both. The retrieved information is then integrated to enrich the LLM’s context, with additional filtering or refinement to ensure precision. Finally, the LLM synthesizes everything into a clear and coherent response.
The LLM uses probabilities to interpret key entities, relationships, and context. If these are not fixed in a universal ontology, the LLM doesn’t have an overarching schema to guide how sources are integrated. So information gets synthesized differently each time, as the LLM probabilistically infers data sources or relationships.
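The fix the rest of this piece argues for can be previewed in miniature: pin every surface term to one canonical class so resolution stops being probabilistic. The `ex:` namespace and term list below are illustrative, not a real ontology:

```python
# Without a fixed ontology, two passes over the same question may resolve
# "client", "customer", and "account holder" to different graph entities.
# A universal ontology pins each surface term to one canonical class.
ontology = {
    "client": "ex:Customer",
    "customer": "ex:Customer",
    "account holder": "ex:Customer",
    "vendor": "ex:Supplier",
}

def canonicalize(term: str) -> str:
    """Deterministic lookup: same term in, same entity out, every time."""
    return ontology.get(term.lower(), "unmapped")

# Every phrasing now routes to the same entity, query after query:
print(canonicalize("Client"))          # ex:Customer
print(canonicalize("account holder"))  # ex:Customer
```

The point is not the dictionary itself but the determinism: a fixed mapping removes the LLM's freedom to re-infer relationships differently on each pass.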
That’s why digging into a topic yields inconsistent LLM replies. When you query something for the first time, you get a reasonable answer. If you keep digging for more answers, though, the LLM reprocesses the same document in its entirety, thinking “my probabilities say I should try these other vectors instead,” and gives you different answers, even for the same question. For example, John the data analyst asks for payment terms for the Johnson contract. The LLM gives three different answers for the same prompt:
“Net 30 days, with a 2% discount for early payment.”
“Weekly deliveries on Tuesdays, except holidays.”
“Net 30 days from receipt of products or invoice, whichever is later.”
For all the power of knowledge graphs and similarity search, LLMs are still acting like interns when confronted with enterprise data. That won’t change until LLMs gain access to an authoritative master schema within the knowledge graph–that is, they infer from a universal ontology.
At this point, there’s a temptation to continue to Frankenstein together a system, maybe by adding those branded, bespoke LLMs that only operate on one type of structured database. That’s not necessary. Instead, the problem can be fixed by returning to the roots of the knowledge graph–that is, the ontology.
A universal ontology (also known as a Universal Data Model) organizes and tags data in a meaningful, consistent way that both humans and machines can understand. It makes data interoperable across systems, so it’s always in a consistent format, no matter where it originates. The universal ontology lets the knowledge graph do the heavy lifting by consolidating similar data into unified entities. Instead of accessing or reprocessing the same document for related queries, the LLM can work with structured, pre-organized information.
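In practice, "tagged in a way both humans and machines understand" often means JSON-LD: each record carries a `@context` that maps local field names to shared ontology terms. A minimal sketch, with a hypothetical `example.org` ontology namespace:

```python
import json

# A record tagged against a (hypothetical) universal ontology. The @context
# maps local field names to shared ontology IRIs, so any consumer reads
# "name" and "purchased" with the same meaning, regardless of source system.
record = {
    "@context": {
        "name": "http://example.org/ontology#customerName",
        "purchased": "http://example.org/ontology#purchasedProduct",
    },
    "@id": "http://example.org/customers/001",
    "name": "Acme Corp",
    "purchased": "Laptop",
}
print(json.dumps(record, indent=2))
```

Because the meaning travels with the data, the knowledge graph can merge records from different systems into one entity without per-source translation logic.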
Our approach at Fluree is that you need a universal ontology. All of your information should be tagged and organized to this model so that it means something to all humans and machines, including LLMs. Only then will everything else fall into place and be findable, accurate, interoperable and reusable. Data will be freed from its silos, and everyone from HR to those out in the field will be able to view and analyze data through their own lens. LLMs will connect to the knowledge graph, which runs on the universal ontology, and receive close to 100% accurate answers (ChatGPT, by contrast, is about 60% accurate). If the data does not exist, the LLM will say so, instead of making things up. Everyone will be able to do their job better.
The problem with creating a universal ontology is that you can’t do it all at once. You have to integrate data from diverse sources with inconsistent formats and standards. It’s hard to scale a universal ontology to handle more and more datasets while maintaining performance across the organization. Employees must adapt to new workflows and standards.
Fluree has out-of-the-box tooling to create your universal ontology quickly. We use machine learning models that tag data to the ontology, including structured data from Salesforce, SAP and other popular systems.
While we might ultimately want a central AI brain for our entire company, current technology limitations require us to break this brain into optimized pieces. The best path forward is to focus on specific domains rather than attempting to create a single, all-encompassing AI system. Whether you use Fluree or hire an ontologist, you’ll have to implement your universal ontology one domain at a time.
Focus on individual domains first. Let’s say you want the LLM to draw upon information from three different data sets: a structured database sitting in Oracle, a content management system, and an application database. Your LLM should have access to an ontology of concepts to understand what the user is looking for, and then use the knowledge graph to find where the answers would be. Finally, you should put a plan together as to how the LLM will access that data.
Your plan should include:
1) The user's domain/expertise. For example, someone in supply chain logistics might have a different view of terms like "clients" or "partners" than someone in finance would. Specifically, you want to layer multiple ontologies onto the data and route terms appropriately within the context of the user's domain.
2) The question itself. For example, if you ask about a particular shipment, and then ask a follow up question about the supplier, make sure that you promote the knowledge graph as the primary source of truth for the LLM through prompt engineering or other techniques. That way, the LLM will understand that a line of questioning is contextual to that particular shipment.
3) User interactions. Over time, you will develop an understanding of how users from particular domains interact with the data, learning the right context from their questions. This tooling exists both at the model level and at the Fluree level (in terms of ontology layering).
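The first item in that plan, domain-aware term routing, can be sketched as an ontology layered per domain. The namespaces and term mappings below are hypothetical:

```python
# The same word resolves to different ontology classes depending on the
# user's domain. Each domain gets its own layer over the shared graph.
DOMAIN_ONTOLOGIES = {
    "logistics": {"client": "log:ShippingParty", "partner": "log:Carrier"},
    "finance":   {"client": "fin:AccountHolder", "partner": "fin:Counterparty"},
}

def resolve(term: str, domain: str) -> str:
    """Route a user's term through their domain's ontology layer."""
    return DOMAIN_ONTOLOGIES[domain].get(term.lower(), "unmapped")

print(resolve("client", "logistics"))  # log:ShippingParty
print(resolve("client", "finance"))    # fin:AccountHolder
```

A supply-chain analyst and a finance analyst can then ask the same question in the same words and each land on the entities their domain actually means.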
As you walk forward, domain by domain, you will eventually build a huge, distributed graph that comes in all shapes, sizes and formats. It will be universally accessible because you’ve laid the groundwork of a universal ontology (with subsets of domain ontologies) and have mapped data sources to that ontology.
In the broader sphere, once we perfect AI’s ability to access enterprise information, we might witness a dramatic transformation in how businesses operate. Imagine asking AI to show you a table of deals expected to close next month, no Salesforce needed. This could potentially eliminate the need for traditional enterprise applications entirely. All you’d need is a knowledge database and a well-tuned AI system.
Hopefully by now you understand why building a universal ontology is the right choice for reaching that elusive data-centric reality.