I created this research paper (most of the research was done with AI itself) to support a two-part thesis:

  1. AI is taking over the enterprise, and it's happening faster than we previously thought.
  2. The key to AI unlocking hundreds of millions of dollars of enterprise value is not data, but data context.

Part one of this research paper looks at the current state of AI. Part two explains what data ontologies are and how they provide context to enterprise data. Part three explains how those ontologies unlock enterprise value in the workflow. This is not a technical paper, but it is technical-leaning. I try to call out when we are discussing more technical items, but any leader in any company will be able to grasp the fundamental concepts here.

Part 1: The Rapid Rise of AI Adoption (2023–2025)

AI is being adopted at an unprecedented pace across both enterprises and the general public. This surge is evident from recent data and trends:

AI Adoption in Enterprises (Business Perspective)


Businesses worldwide have rapidly embraced AI. According to surveys, 78% of organizations were using AI in 2024, up from 55% in 2023. This dramatic jump in one year reflects the breakout of generative AI – for example, many companies started experimenting with tools like GPT in 2023. In fact, 60% of organizations that had adopted any AI reported using generative AI by 2023. Enterprise AI investment has also skyrocketed: total corporate investment in AI reached $252 billion in 2024, a 44% increase over the previous year. Likewise, the number of newly funded AI startups nearly tripled in 2024, indicating a booming ecosystem of AI solutions.

This broad adoption is not confined to any single industry – a majority of CEOs across sectors are exploring AI. Common enterprise uses include customer service automation, fraud detection, and product recommendations. Surveys show 72% of businesses had implemented AI in at least one function by 2022, and that climbed toward ~80% by 2025. Companies are also planning to invest even more: over 90% of firms intend to increase AI spending in the next three years. In short, AI has moved from niche to mainstream in the enterprise, with the vast majority of companies either using or actively exploring AI today.

Despite this enthusiasm, many organizations have only scratched the surface of AI’s potential. Surveys find that fewer than one-third of companies use AI in more than one or two business units, a share largely unchanged since 2021. In other words, a typical firm might have a few pilot AI projects, but not a company-wide transformation. Additionally, only about 23% of firms report significant (>5%) bottom-line impact from AI so far. This suggests there is much more value on the table. One major hurdle is data: a recent study noted that 63% of organizations lack proper data management practices for AI, and Gartner predicts 60% of AI projects will be abandoned by 2026 if they don’t have “AI-ready” data. In fact, data silos and poor data quality are often cited as the top reasons AI initiatives falter. As an expert put it, fragmented data makes it “difficult for AI agents to understand context,” and “eliminating silos is essential for AI to…deliver meaningful enterprise value.” In summary, enterprises are investing heavily in AI, but without addressing data organization and context, they struggle to scale initial successes into sustainable value.

AI Adoption by Consumers and Technology Trends (Technical Perspective)

The public’s uptake of AI has been extraordinarily swift. The clearest example is ChatGPT’s explosive growth. OpenAI’s ChatGPT reached 100 million monthly users just two months after its late-2022 launch, making it the fastest-growing consumer application in history. For comparison, it took TikTok about 9 months and Instagram 2.5 years to hit 100 million users. This user adoption curve is virtually unheard of – ChatGPT amassed users at a rate that eclipsed any platform before it. In fact, it garnered a million users in its first 5 days, whereas services like Instagram took 2–3 months to reach that milestone. Such viral growth underscores how rapidly AI can penetrate everyday life when the technology reaches a useful threshold.


Beyond user counts, AI is increasingly embedded in daily tools. By 2023–24, we saw mainstream use of AI in products like voice assistants, translation apps, and image generators. On the technical side, the capabilities of AI systems have leapt forward, which further drives adoption. For example, new large language models and image models in 2023 achieved feats that were science fiction a few years prior – from writing code to generating photorealistic art – making AI appealing to integrate everywhere. The cost of AI technology has also plummeted. One analysis found that the “inference” cost for a model performing at GPT-3.5 level dropped 280× from late 2022 to late 2024, thanks to more efficient models and hardware. This massive cost reduction means it’s far more feasible to deploy AI at scale in 2025 than it was just two years ago. In parallel, academic and R&D output in AI is at record levels – for instance, China now publishes more AI papers and patents than any other country, and the U.S. produced 40 of the top new AI models in 2024 alone. All these trends point to a technology that is maturing fast and spreading widely.

Importantly, consumers have shown they will quickly adopt AI-powered products that deliver value. ChatGPT’s popularity spurred the integration of GPT-based features into office software, search engines, and countless apps, effectively bringing AI to millions more users. We also see AI in everyday services: self-driving car services provided hundreds of thousands of autonomous rides in 2023, and AI-enabled medical devices approved by the FDA jumped from only 6 in 2015 to 223 in 2023, illustrating exponential growth in real-world AI applications. In short, from individual consumers to developers and researchers, the technical adoption of AI has been extremely rapid. The world is embracing AI technologies at all levels – but to truly capitalize on this, organizations must ensure their data foundation is up to par. The next section explains ontologies: a critical approach to organizing and contextualizing data, which many now view as the “secret sauce” for getting real value from AI in the enterprise.

Part 2: What Are Data Ontologies and What Does It Take to Build One?

With AI adoption booming, enterprises are realizing that organizing and contextualizing their data is the key to unlocking AI’s full value. This is where data ontologies come in. An ontology provides a structured, contextual map of data – a sort of “knowledge blueprint” for a domain. Unlike raw data tables or siloed databases, an ontology formally defines the meaning of data: the entities involved, their properties, and how those entities relate to each other.

In simple terms, an ontology is a formal system for modeling concepts and their relationships. It’s essentially a shared vocabulary and structure that describes the things (objects, concepts) in a domain and how they connect. For example, in a retail business ontology, you might define concepts like Product, Customer, Order, and relationships like Customer places Order or Order contains Product. Each concept can have attributes (a Product has a name, price, category, etc.) and relationships (a Product is part of a Category, supplied by a Vendor, and so on). By capturing this information in a single semantic model, an ontology makes the data’s context explicit.
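To make this concrete, here is a minimal sketch of how the retail concepts above could be declared as a machine-readable ontology. It assumes the open-source Python library rdflib; the namespace and the class and property names (Product, Customer, Order, places, contains) are illustrative examples, not a standard schema.

```python
from rdflib import Graph, Namespace, RDF, RDFS

# Hypothetical namespace for an illustrative retail ontology
RETAIL = Namespace("http://example.com/retail#")

g = Graph()
g.bind("retail", RETAIL)

# Concepts (classes): Product, Customer, Order
for cls in (RETAIL.Product, RETAIL.Customer, RETAIL.Order):
    g.add((cls, RDF.type, RDFS.Class))

# Relationship: a Customer places an Order
g.add((RETAIL.places, RDF.type, RDF.Property))
g.add((RETAIL.places, RDFS.domain, RETAIL.Customer))
g.add((RETAIL.places, RDFS.range, RETAIL.Order))

# Relationship: an Order contains a Product
g.add((RETAIL.contains, RDF.type, RDF.Property))
g.add((RETAIL.contains, RDFS.domain, RETAIL.Order))
g.add((RETAIL.contains, RDFS.range, RETAIL.Product))

# Attributes of a Product (name, price) are simply more properties
g.add((RETAIL.name, RDF.type, RDF.Property))
g.add((RETAIL.price, RDF.type, RDF.Property))

print(g.serialize(format="turtle"))
```

The same structure could be authored in OWL with an editor such as Protégé; the point is simply that concepts, attributes, and relationships are declared explicitly rather than implied by table layouts.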


Ontologies are often implemented as knowledge graphs – networks of nodes and links where nodes represent entities (e.g. a specific customer or product) and links represent relationships (e.g. “purchased”, “located in”, “reports to”). Unlike a traditional relational database (which might split information into many tables), an ontology-based knowledge graph can directly connect any piece of data to any other via meaningful relationships. This makes it much easier for both humans and AI systems to traverse the data and understand context. In fact, ontologies were a cornerstone of the Semantic Web idea: to represent web data in a way that computers can “understand” concepts like people, places, and events and how they relate. For example, an ontology would allow a search engine to know that “Paris” can mean a city and that a city has properties like population, tourist sites, etc., so that when you search “Paris” you get relevant facts (location, demographics, attractions) instead of just pages containing the word “Paris”. In essence, ontologies add a layer of meaning and interconnection on top of raw data.

Key characteristics of ontologies include being extensible (you can add new concepts or relationships as your knowledge grows), shareable (the ontology provides a common language for data across systems), and machine-readable (often using standard formats like RDF/OWL so software can readily parse and reason over the data). Crucially, because an ontology encodes relationships, it enables inference: you can derive new facts from known ones. For instance, if your ontology knows “All managers are employees” and “Alice is a manager,” an AI can infer that Alice is an employee even if that wasn’t explicitly stored – the relationship is inferred from the ontology’s logic.
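That manager example can be shown end to end in a few lines. The sketch below, again using rdflib and an invented namespace, stores only two facts and then asks for all employees; a SPARQL property path walks the subclass link, so Alice is returned even though “Alice is an employee” was never written down.

```python
from rdflib import Graph, Namespace, RDF, RDFS

ORG = Namespace("http://example.com/org#")  # illustrative namespace
g = Graph()

# Ontology logic: every Manager is an Employee
g.add((ORG.Manager, RDFS.subClassOf, ORG.Employee))
# Stored fact: Alice is a Manager
g.add((ORG.Alice, RDF.type, ORG.Manager))

# Ask for all Employees; rdf:type/rdfs:subClassOf* follows the class
# hierarchy, so the inferred fact "Alice is an Employee" is returned.
results = g.query(
    """
    PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX org:  <http://example.com/org#>

    SELECT ?person WHERE {
        ?person rdf:type/rdfs:subClassOf* org:Employee .
    }
    """
)
for row in results:
    print(row.person)  # -> http://example.com/org#Alice
```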

Building a data ontology does require effort and planning. It’s both a technical and knowledge-centric exercise. Here are the typical steps and best practices involved in creating an ontology for your data:

  1. Define the Domain and Scope: Clearly identify what domain of knowledge your ontology will cover and how detailed it needs to be. Determine the key entities (concepts) and questions that the ontology should be able to represent. For example, if you’re building an ontology for customer support, will it include just customers and support tickets? Does it also include products, support agents, knowledge base articles, etc.? Defining scope ensures you cover all necessary concepts without boiling the ocean.
  2. Reuse Existing Ontologies (if possible): It’s often beneficial to start from established ontologies or schemas in your industry. Many domains have standard ontologies (for instance, healthcare has ontologies for diseases, finance has FIBO for financial concepts). Reusing these can save time and promote interoperability. You can extend or customize them as needed, but leveraging pre-built vocabularies means you’re aligning with a common language that others understand.
  3. Choose a Representation Language/Format: Ontologies are typically represented in formal languages like OWL (Web Ontology Language) or RDF Schema. These languages allow you to formally define classes (concepts), subclasses, properties, and constraints in a machine-readable way. Choosing a language might depend on the tools you plan to use. For instance, OWL is very expressive and works with reasoning tools; for simpler needs, a lighter schema might suffice. Many ontology developers use tools like Protégé (an ontology editor) to design the ontology structure in OWL/RDF.
  4. Design the Ontology Schema (Concepts & Relationships): This is the core modeling step. You enumerate the classes (also called types or categories) for the key concepts in your domain, arrange them in hierarchies if applicable (e.g. Employee and Customer might both be subclasses of Person), and define the properties and relationships among them. For each class, ask what attributes it has (for a Customer, attributes might be name, customer ID, industry, etc.) and what relationships link it to other classes (a Customer places Orders; a Customer has an Account Rep who is an Employee; etc.). It’s helpful to also define any rules or constraints here (for example, you might specify that “every Order must be placed by exactly one Customer” or “a Project can be in one of the following status states”). These constraints (often called ontology axioms) ensure data consistency and reflect business rules in the ontology. The outcome of this step is a formal schema – essentially a blueprint of your knowledge domain that lists all classes and allowed relationships.
  5. Populate the Ontology with Data (Instances): Once the schema is in place, the next step is to integrate actual data. This means adding individual instances of those classes and filling in their properties/relationships. For example, in the ontology schema you defined Product as a class; now you add all your actual products (Product A, Product B, etc.) as instances/nodes of that class, each with its details and links (Product A is in category “Electronics”, supplied by Vendor X, etc.). This population can be done by converting data from databases or spreadsheets into triples/graph form that the ontology can consume. It often requires writing mappings or scripts to transform raw data into the ontology format (see the sketch after this list). During this step, validation is important – checking that the data conforms to the ontology’s rules (no broken links, required attributes present, no contradictory relationships). Tools can run consistency checks using the ontology’s logic.
  6. Engage Domain Experts: Building a good ontology is not just a technical task; it’s a knowledge modeling task. It’s crucial to involve people who deeply understand the domain (e.g. finance experts for a financial ontology, engineers for an IT configuration ontology). They will help identify the key concepts and relationships that might not be obvious to IT staff alone. Domain experts validate that the ontology’s structure truly captures real-world nuances and doesn’t omit important context. Their input can prevent mistakes like misclassifying concepts or using wrong terminology. Typically, ontology development is an iterative dialogue between knowledge engineers (who know the modeling formalisms) and subject-matter experts (who know the content).
  7. Iterate and Refine: An ontology is a living artifact. After an initial build, it will likely need adjustments as you uncover new requirements or edge cases. Plan for an iterative process of refining the ontology over time. This includes versioning the ontology (to manage changes) and continuously integrating feedback from those using it. For instance, you might start with a limited scope, deploy it in an AI application, then realize you need to add a few more relationships or split a concept into sub-types. Version control and clear documentation of changes are best practices so that everyone knows how the ontology evolves.
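As noted in step 5, here is a minimal sketch of that population-and-validation step in Python with rdflib. The namespace, property names, and sample records are invented for illustration; a production pipeline would map from real databases and would typically enforce constraints with a dedicated language such as SHACL rather than a hand-written check.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

RETAIL = Namespace("http://example.com/retail#")  # illustrative namespace
g = Graph()
g.bind("retail", RETAIL)

# Raw records as they might arrive from a spreadsheet or database export
products = [
    {"id": "product-a", "name": "Product A", "category": "Electronics", "vendor": "vendor-x"},
    {"id": "product-b", "name": "Product B", "category": None, "vendor": "vendor-y"},
]

# Map each record onto the ontology: one node per product, linked to its
# category and vendor through the ontology's relationships.
for row in products:
    node = RETAIL[row["id"]]
    g.add((node, RDF.type, RETAIL.Product))
    g.add((node, RDFS.label, Literal(row["name"])))
    if row["category"]:
        g.add((node, RETAIL.inCategory, RETAIL[row["category"]]))
    g.add((node, RETAIL.suppliedBy, RETAIL[row["vendor"]]))

# Minimal validation pass for the rule "every Product belongs to a Category"
for product in g.subjects(RDF.type, RETAIL.Product):
    if (product, RETAIL.inCategory, None) not in g:
        print(f"Validation warning: {product} has no category")
```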

Throughout this process, maintaining good ontology design principles is important. Aim for clarity (each concept has a clear definition), minimize redundancy (don’t represent the same thing in two different ways), and ensure the ontology actually supports the queries or AI use cases you have in mind (one useful technique is writing “competency questions” – e.g. “Can the ontology answer: Which customers bought Product X in the last 2 months?” – if not, you may need to model something differently). The process can be complex and typically requires specialized tools and skills, but there is ample payoff. A well-built ontology becomes a powerful asset: it is the single source of truth for how your business concepts are defined and connected.
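As a sketch of how a competency question becomes an executable test, the snippet below asks the graph exactly that question. It reuses the illustrative retail names from earlier, and the placedOn property, the sample date, and the cutoff date are all invented; if the ontology cannot express this query, the model needs rework.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

RETAIL = Namespace("http://example.com/retail#")  # illustrative namespace
g = Graph()

# Tiny sample: one customer placed one order containing Product X in May 2025
g.add((RETAIL.CustomerA, RETAIL.places, RETAIL.Order1))
g.add((RETAIL.Order1, RETAIL.contains, RETAIL.ProductX))
g.add((RETAIL.Order1, RETAIL.placedOn, Literal("2025-05-15", datatype=XSD.date)))

# Competency question: "Which customers bought Product X in the last 2 months?"
query = """
PREFIX retail: <http://example.com/retail#>
PREFIX xsd:    <http://www.w3.org/2001/XMLSchema#>

SELECT DISTINCT ?customer WHERE {
    ?customer retail:places   ?order .
    ?order    retail:contains retail:ProductX ;
              retail:placedOn ?date .
    FILTER (?date >= "2025-04-01"^^xsd:date)
}
"""
for row in g.query(query):
    print(row.customer)  # -> http://example.com/retail#CustomerA
```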

It’s worth noting that building an ontology is an upfront investment. It takes planning, coordination, and ongoing governance. However, the benefits far outweigh the costs for data-driven organizations. An ontology-driven approach yields data that is well-structured, consistent, and rich with context. This makes integration of disparate data much easier (since everything maps to a common model), enables re-use of data (different teams can leverage the same knowledge graph for different AI applications), and improves data quality (enforcing standards and relationships reduces errors and duplicates). In effect, an ontology unlocks your data’s potential – turning raw data into a connected knowledge asset that AI systems can truly understand. In the next part, we’ll see how this translates into dramatically better AI outcomes.

Part 3: How Ontologies Supercharge AI Workflows

Why go through the trouble of building an ontology? Simply put, because an ontology can massively enhance the effectiveness of AI in the enterprise. When your AI systems have access to a rich, connected ontology (often via a knowledge graph), they perform better on multiple fronts. Here are the key ways ontologies “supercharge” AI workflows, along with examples:

  1. Providing Context and Reducing “Blind Spots” (Better Accuracy)
    – AI models are only as good as the information given to them. If data is scattered or lacks context, AI may make mistakes or “hallucinate” incorrect answers. An ontology addresses this by giving AI a unified, context-rich view of the domain. For example, consider a customer service chatbot. Without an ontology, it might separately query a customer database, a product FAQ, and an order system, trying to stitch answers together – and it might miss connections (leading to wrong or generic answers). With an ontology, all relevant facts about a customer (profile, purchase history, support tickets, knowledge base articles) are linked in one knowledge graph. The chatbot can pull a complete picture in one go. This drastically reduces errors. In technical terms, strategies like Retrieval Augmented Generation (RAG) use the ontology/knowledge graph to feed real-time factual data into an AI model’s prompts (a minimal sketch of this retrieval pattern follows this list). The result is more accurate, up-to-date responses. In one case, when an LLM (large language model) was enhanced with relevant knowledge graph data, it not only gave a correct answer but was also able to provide the source of its information as a citation. By grounding AI in the ontology’s facts, we greatly reduce hallucinations and increase trust. As experts note, an ontology allows the AI to “explicitly understand the relationships” in the data rather than making statistical guesses. The AI’s output becomes validated and actionable insights rooted in the organization’s data, instead of boilerplate or invented text.
  2. Enabling Complex Reasoning and Insights
    – Ontologies let AI do more than surface individual data points; they enable the AI to reason about the data. Because an ontology encodes rich relationships, AI can traverse those links to answer complex queries or discover non-obvious patterns. A classic example is Google’s Knowledge Graph. Google built an ontology of entities (people, places, things) and their relationships. So if you search for “Barack Obama”, the AI doesn’t just look for the keyword – it knows Obama is a Person, a former President, has a spouse Michelle Obama, served 2009–2017, etc. The search results therefore include an info panel with Obama’s birthdate, office term, family, and related topics. That is ontology-powered AI in action: the system understands your query in context and can retrieve relevant knowledge rather than just matching text. Similarly, recommendation systems use ontologies to improve suggestions. Netflix and Amazon, for instance, leverage knowledge graphs of content and user preferences. If you liked a certain movie, the system doesn’t only note the genre – it also knows about the movie’s director, cast, themes, and even that “people who like this actor often also like that other movie”. This is why sometimes an unexpected recommendation pops up that actually aligns with your interests. The AI is reasoning over a web of connections (movies → actors → other movies, etc.). In fact, Netflix has used knowledge graphs to connect content by attributes and make cross-genre recommendations that a simpler system might miss. In finance, ontologies help AI link entities like accounts, transactions, and regulations, enabling detection of complex fraud rings (by seeing relationships between entities that are not obvious from any single database). Overall, ontologies turn data into a knowledge network that AI can navigate, allowing for deductive and even inductive reasoning. Machines can infer new knowledge (e.g. “if X is part of Y and Y is part of Z, then X is part of Z”) thanks to the ontology’s structure. This leads to deeper insights and more “intelligent” AI behavior – the AI isn’t just a statistical engine, but also a reasoning agent leveraging the organization’s collective knowledge.
  3. Integrating AI into Decision-Making Workflows
    – Perhaps the most strategic benefit is that an ontology acts as a bridge between AI and real business operations. Enterprises often struggle to take AI prototypes and actually embed them in day-to-day processes – largely because of data fragmentation. An ontology solves that by being the unified data layer that both humans and AI interact with. It essentially creates a shared language for people, processes, and AI. One prominent enterprise technology firm described their ontology as “a full-fidelity semantic representation of the enterprise” that “serves as the foundation for powerful AI-driven workflows.” In practice, this means AI systems can plug into the ontology to get whatever information they need (with proper access controls), and can also write back outcomes to the knowledge graph. For example, imagine an AI system in a manufacturing company that optimizes supply chain decisions. With a connected ontology, the AI can see the real-time state of production lines, inventory levels, supplier deliveries, and even financial data – all in one model. It can then suggest decisions (like reallocate stock from warehouse A to B due to a supplier delay), and those decisions are executed and recorded via the ontology (updating the status across all linked systems). This is not theoretical – companies like Palantir use an ontology approach to connect operational data and feed AI recommendations directly into frontline decision-making. The ontology ensures that AI has access to all relevant context (the “nouns” of the business), and can interact through defined operations (the “verbs”). The result is AI-driven automation or decision support that is far more effective because it’s using a live model of the business. In a nutshell, ontologies allow AI to be embedded in enterprise workflows safely and scalably.

    To illustrate, consider a scenario from the InformationWeek article on data silos: a customer-facing AI agent needed info about a client’s contract, support tickets, and forum posts, but those were in three separate systems. Without an ontology, the AI (or a human agent) has to manually gather and reconcile that data, often losing time and context. With an ontology, all those pieces (contract details, support history, customer comments) would be linked under the customer’s profile in a knowledge graph. The AI agent could instantly query the knowledge graph and get a holistic answer (e.g. “the customer has X contract value, currently 2 open support issues about Y product, and recently expressed frustration on the forum which our sentiment analysis flagged”). Armed with this context, the AI (or the human) can respond in a highly informed manner, prioritizing the right issues. This level of insight only comes when your AI can traverse connected data – exactly what ontologies enable. As one CTO summarized, “fragmented datasets make it difficult for AI agents to understand context… eliminating silos [via integration] is essential for AI to scale and deliver value.” Ontologies are the means to eliminate those silos at a semantic level.
  4. Consistency, Compliance, and Knowledge Retention
    – Another often underrated benefit is that ontologies enforce a consistent data vocabulary, which in turn makes AI outputs more consistent and interpretable. In regulated industries, using an ontology can ensure that an AI’s decisions or recommendations are based on formally defined terms (for example, a bank’s ontology might define risk categories and exposure limits – the AI must use those in its calculations, ensuring compliance). It also helps with explainability: if an AI recommendation traces back through a knowledge graph, an analyst can follow the chain of reasoning (why did the AI flag this transaction? It can point to the links in the graph that match a fraud pattern, for instance). By capturing expert knowledge in ontology form, you also retain that know-how even if personnel change – the institutional knowledge lives on in the knowledge graph and continues to fuel AI models and analytics.
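To make the retrieval pattern from point 1 concrete, here is a minimal, hedged sketch of graph-grounded Retrieval Augmented Generation. The graph lookup uses rdflib; call_llm is a hypothetical stand-in for whatever model API an organization actually uses, and every entity and property name is invented for illustration.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

CRM = Namespace("http://example.com/crm#")  # illustrative namespace
g = Graph()

# Facts about one customer, linked together in a single knowledge graph
g.add((CRM.AcmeCorp, RDF.type, CRM.Customer))
g.add((CRM.AcmeCorp, RDFS.label, Literal("Acme Corp")))
g.add((CRM.AcmeCorp, CRM.hasContractValue, Literal(250000)))
g.add((CRM.AcmeCorp, CRM.hasOpenTicket, Literal("Ticket-101: login failures")))
g.add((CRM.AcmeCorp, CRM.hasOpenTicket, Literal("Ticket-102: slow reports")))

def retrieve_context(graph: Graph, entity) -> str:
    """Collect every fact linked to the entity as plain text for the prompt."""
    facts = []
    for predicate, obj in graph.predicate_objects(entity):
        facts.append(f"{predicate.split('#')[-1]}: {obj}")
    return "\n".join(facts)

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM API call."""
    return f"[model response grounded in:\n{prompt}]"

# Retrieval Augmented Generation: ground the model in graph facts
context = retrieve_context(g, CRM.AcmeCorp)
prompt = (
    "Answer the support question using only the facts below.\n"
    f"Facts:\n{context}\n"
    "Question: What should we prioritize for Acme Corp?"
)
print(call_llm(prompt))
```

Because the answer is assembled from facts the graph actually contains, those same links can be surfaced back to the user as citations.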

In sum, data ontologies are the “secret unlock” for enterprise AI. They transform raw data into a connected, contextualized asset that AI can leverage to the fullest. With an ontology in place, AI systems become significantly more powerful: they can access the right information at the right time, understand the significance of that information, and even take or suggest actions based on a holistic view. Companies that have invested in ontologies and knowledge graphs report that their AI initiatives moved from toy experiments to core business drivers. It’s the difference between an AI that gives a one-off insight and an AI that continuously delivers value because it’s wired into your organization’s knowledge fabric. As AI continues to advance, ontologies will play a fundamental role in ensuring that these advances translate into real enterprise value – by bridging human context and machine intelligence in a sustainable, scalable way.

About the Author


Rohit Garewal

CEO

Rohit is a forward-thinking eCommerce evangelist, especially focused on re-energizing the B2B sector and merging the old disciplines with new technology opportunities. He is passionate about delivering profitable growth through people-driven digital transformation. Watch his talk on digital transformation.

