It started innocently enough. Sometime in the mid-2000s, everyone and everything began to generate data. Seeking to derive value from the data, companies hired data analysts. The analysts grew tired of pulling data from multiple sources—such as various SaaS apps—before they could analyze it. IT suggested copying all relevant data into a single data warehouse, where it would be easier to pull and analyze.
The analysts were happy. IT looked smart. CEOs received data-driven insights. The data warehouses filled up with copies of data.
Unlike gold bars or vintage cars, however, most data does not appreciate in value. Aside from a few exceptions, such as annual financial statements, the longer you store data, the more it perishes. Outside of the data warehouse, people are constantly interacting with software, and the data updates accordingly. Inside of the warehouse, the old copies grow outdated and useless.
Around the world, acres of data warehouses brim with useless data. Companies pay thousands to millions of dollars in storage and inventory fees under the assumption that the data they save will come in handy someday. Most of it won’t. Executives are simply soothing their nerves about losing potential insights.
Surrounded by data FOMO, it’s easy to miss the fact that you don’t need a data warehouse at all. Data can exist in a network rather than a giant storage building. In fact, data is more useful, fresher, more secure, and more trustworthy that way. Like the people on the TV show Hoarders, we need to let go of old attachments and tidy up how we perceive and use data.
There are three reasons to be wary of the data warehouse.
1. Most of your data is perishable.
2. More data does not lead to better insights.
3. Data security disappears.
To paraphrase Marie Kondo, there is cost-saving magic in tidying your data. A midsized company with a warehouse full of old data might be paying 5% of its profits in data storage while only ever using 2% of that data. If a team takes the time to figure out what data it’s relatively sure it will use and trashes the rest, data warehousing might only cost 1% of profits.
But, you might say, big tech companies like Google and Microsoft keep entire warehouses full of digital exhaust and keep finding new use cases for it, most recently in their AI models. Isn’t there a chance you’ll figure out how to use that old data eventually? Yes, there’s a chance, especially if you’re a tech giant that has been collecting data since the early 2000s and working on AI for about as long. Unless you’re playing in that league, with equivalent resources, your data will probably just continue to perish. Better to work with what you can use and, if AI is a concern, see whose model and data set you can access instead of trying to become a down-market version of Big Tech (or collect the kind of niche data that Big Tech won’t focus on, which also requires thought and deliberation).
Another concern with the data warehouse is that data loses its permissions, and with them its security.
You can configure all the permissions you want in a SaaS app. Once you rip data out of the back end and dump it into a warehouse, however, all those permissions are stripped away; that stripping is effectively a prerequisite for warehouse-style analysis. If someone steals warehouse credentials, sensitive data is exposed. Re-implementing the SaaS permission model in a separate system is an option, but it costs time and money and complicates workflows.
There is an alternative to the data warehouse. It’s called the data network. To understand how it works, it’s worth looking at the manufacturing sector as an analogy.
In the 1980s, traditional manufacturing was upended by just-in-time practices. Manufacturers built small factories that responded to product demand. Instead of mass-producing inventory, storing it in warehouses, and waiting for demand to strike, they could produce responsively and then hand products to a third party for fulfillment.
Similarly, for many use cases, storing big data in a big warehouse makes no sense. If you set up and manage your data strategically, you may not need to move it into a giant warehouse at all.
Instead, you can use decentralized data, which I covered in my latest post at Forbes Tech Column. In short, decentralized data is akin to hyperlinking data the way we currently link websites on the internet. Networks of data are created through these links, which are stored in a semantic knowledge graph database. Whenever you query the database, results come from the data network. The data itself is constantly updated as people interact with the software generating it. Each piece of data is fresh, and there is no need for a warehouse.
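To make the idea concrete, here is a toy sketch in plain Python of how a data network differs from a warehouse: records stay in their source systems, and a small graph of links is all that gets shared. The record IDs, system names, and `resolve` helper are all illustrative inventions, not Fluree’s actual API or a real knowledge graph implementation.

```python
# Two "source systems" that keep owning their data -- nothing is copied
# into a central store.
crm = {"customer:42": {"name": "Acme Corp"}}
billing = {"invoice:7": {"amount": 1200, "billedTo": "customer:42"}}

# The "network" only knows which system holds which kind of record.
sources = {"customer": crm, "invoice": billing}

def resolve(ref):
    """Follow a link back to its system of record -- no copies made."""
    system = sources[ref.split(":")[0]]
    return system[ref]

# A query traverses links across systems and always sees current data.
inv = resolve("invoice:7")
cust = resolve(inv["billedTo"])
print(cust["name"], inv["amount"])  # Acme Corp 1200

# An update in the source system is immediately visible to queries --
# there is no stale warehouse copy to refresh.
crm["customer:42"]["name"] = "Acme Corporation"
print(resolve(inv["billedTo"])["name"])  # Acme Corporation
```

The contrast with a warehouse is the last two lines: because queries follow links to the point of origin, there is no extracted copy that can drift out of date.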
Decentralized data is also secure. It doesn’t all sit in one warehouse, stripped of permissions. Rather, it exists at its point of origin, linked through the knowledge graph database. Nobody needs to make copies. Whoever manages the data can also wrap it in permissions. Anyone who queries the data has to meet those permissions, which reduces security risks. Because data lives at its point of origin, each piece of sensitive data can also come with a history of its own creation and use, so that whoever queries it knows they can trust it.
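The permission model described above can be sketched the same way: the data owner attaches a policy to each record at its point of origin, and every query is checked against that policy before a value is returned. The record names, roles, and `query` function here are hypothetical, chosen only to show the shape of the check.

```python
# Each record carries its own policy, set by whoever manages the data.
records = {
    "salary:9": {"value": 95000, "allowed_roles": {"hr", "finance"}},
    "email:3": {"value": "pat@example.com", "allowed_roles": {"hr", "sales"}},
}

def query(record_id, role):
    """Return the record's value only if the caller's role meets its policy."""
    record = records[record_id]
    if role not in record["allowed_roles"]:
        raise PermissionError(f"role {role!r} may not read {record_id}")
    return record["value"]

print(query("salary:9", "hr"))  # 95000
# query("salary:9", "sales")   # raises PermissionError
```

Because the check travels with the data rather than living in a separate warehouse copy, stolen warehouse credentials stop being a single key to everything.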
I have a dream of the entire internet operating this way, with linked, secure, and trustworthy data. That vision is called Web3, and it is still very much a work in progress. Any organization, however, can begin building data networks right now and grow them over time. Even companies with massive stockpiles of old data in a warehouse can start by auditing their data and moving it into networks. And for anyone considering buying space in a data warehouse, I have this advice: Don’t do it. Take your data seriously. Start with where the internet is today, not where it was 20 years ago. Particularly as AI integrates into almost every workflow, and the stakes for trustworthy, secure data become higher than ever, decentralized data is a bet that will pay off in the long run.