When your organization wants to speed up, nothing will stall progress like a data integration or data analysis challenge. Data silos, database technical debt, data that’s far from being accessible in real time—business and IT leaders know these problems all too well. Developers can’t gain speed when data dilemmas stand in their way, nor can line-of-business teams. And that’s a problem when both groups have ambitious digital transformation goals that require increased speed and agility. Data fabric brings significant new power to solve some of these decades-old data dilemmas.
Let’s take a closer look at what data fabric is and how it can help you increase software development speed.
[ How does data fabric fit into a modern automation strategy? Get the Gartner® Report: Build Your Hyperautomation Portfolio. ]
The traditional world of data warehouses is all about collecting data. Data fabric, on the other hand, is all about connecting data. Data fabric is an architecture layer that uses data virtualization to centralize data from disparate systems. This means that with data fabric, you can keep the data in its source systems, like CRM or ERP applications. You can also access the data in real time and connect it across different systems. The data may be on-premises or in a cloud service such as AWS.
In essence, you can think of data fabric as an abstraction layer for managing your data. If you’re familiar with Kubernetes, the concept probably rings a bell: Kubernetes is an abstraction layer for managing a large number of containers.
Data fabric represents quite a different architecture compared to the traditional model many of you know well: extracting data from the source systems, transforming it to clean and deduplicate it, and loading it into a data warehouse (if it’s structured data) or data lake (if it’s unstructured).
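To make the contrast concrete, here is a minimal sketch of that traditional extract-transform-load flow. Everything here is hypothetical for illustration: the CSV export, the field names, and the warehouse file are stand-ins, not any particular vendor's pipeline.

```python
import csv
import sqlite3

def extract(path):
    """Extract: read raw rows exported from a source system (e.g., a CRM)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Transform: clean up values and deduplicate records before loading."""
    seen, clean = set(), []
    for row in rows:
        key = row["customer_id"]
        if key not in seen:
            seen.add(key)
            clean.append({"customer_id": key,
                          "name": row["name"].strip().title()})
    return clean

def load(rows, db_path):
    """Load: copy the cleaned rows into the warehouse (a duplicate of the source)."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS customers "
                "(customer_id TEXT PRIMARY KEY, name TEXT)")
    con.executemany("INSERT OR REPLACE INTO customers VALUES (:customer_id, :name)",
                    rows)
    con.commit()
    con.close()
```

Note what the load step implies: the warehouse now holds a second copy of the data, and every change in the source has to be re-extracted and re-loaded to stay current. That copy is exactly what the data fabric approach avoids.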
Many aspects of that traditional approach bring challenges that rob teams of speed. Let’s delve into some of the common ones and examine how data fabric changes the equation.
Systems that don’t talk to each other are at the root of many teams’ dashed hopes for improved speed. Large enterprises have a wide variety of systems (ranging from critical systems like CRM and ERP to many legacy systems) that have no native way to connect and share data. That’s where all the data extracting and collecting comes into play.
There was a time when some IT leaders chose to make a big bet on one technology vendor, because at least you knew those systems would talk to each other. But that time has passed. Think of all the different applications in your software portfolio, ranging from legacy technology to cutting-edge additions, that beg for access to enterprise data.

Disruptions ranging from a pandemic to a rival’s product launch may force you to spin up a new application in weeks, not months. You need that data accessible to both legacy and future applications.
Without data fabric, making that data accessible involves a time-consuming data migration process and a large number of developers wrangling database records and views.
How data fabric helps:
With the data fabric approach, an organization skips the data migration step. The data stays in the source applications, and the data fabric layer handles developer access to it. No data migration means developers don’t wait weeks (or longer) for access to that data, which in turn means significantly faster app development. Teams using DevOps or agile approaches can sprint at a whole new speed with this arrangement.
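The "no migration" idea above can be sketched in a few lines. This is a toy illustration, not a real product's API: `DataFabric`, `register_source`, and the connector callables are all hypothetical. The key point is that the fabric stores nothing itself; every query is delegated to the source system at read time.

```python
class DataFabric:
    """Toy virtualization layer: maps logical names to live source connectors."""

    def __init__(self):
        self._sources = {}

    def register_source(self, name, fetch):
        """Register a callable that reads from a source system on demand."""
        self._sources[name] = fetch

    def query(self, name, **filters):
        """Read through to the source in real time; no copy is ever made."""
        rows = self._sources[name]()  # data stays in the source application
        return [r for r in rows
                if all(r.get(k) == v for k, v in filters.items())]

# Pretend these closures call a CRM and an ERP API live.
crm_contacts = lambda: [{"id": 1, "region": "EMEA"}, {"id": 2, "region": "APAC"}]
erp_orders = lambda: [{"order": 100, "contact_id": 1}]

fabric = DataFabric()
fabric.register_source("contacts", crm_contacts)
fabric.register_source("orders", erp_orders)

emea = fabric.query("contacts", region="EMEA")
```

Because the connectors are called at query time, a change in the CRM shows up in the very next `query` call; there is no batch load to wait for.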
How many database admins will you need to ensure that your organization’s high-profile application or project ships on time? That’s another key part of this equation: the human part.
How many hours will those IT pros spend creating reports and views? Do you have strong database talent in-house? If you do, can you retain those people, given the high turnover rates on IT teams in the past few years? How expensive will it be? Or maybe you’ll depend on an outside partner to provide database expertise, knowing that comes with its own set of costs and risks. IT leaders take great care with talent strategy, and for good reason.
How data fabric helps:
The more people you need, the more vulnerable you are to project delays. With the data fabric approach, IT teams are not doing any data migration work. That’s a huge plus, because now you don’t need scores of people working on traditional database ETL work. That frees you up to hire for other skills and lets you point IT talent at more innovative development work.
Data fabric also helps developers and end users by automatically optimizing queries and indexes for speedy performance—tuning work that database admins would otherwise have to do. That means faster queries for users and less work for developers.
What are the other realities of traditional database work? First, the data silos don’t go away, which is bad news for the integrity of the data. In the real world, if you have data in two systems, it is often wrong in both of them. Now multiply that problem by the number of SaaS applications in your organization. For IT teams, trying to keep disparate systems up-to-date becomes a fruitless time sink.
Additionally, as data models change over time throughout the business, those models need updating. However, even minor edits to models used across applications and workflows can take months or even years to complete.
How data fabric helps:
The data virtualization layer means that you are bypassing data migration in favor of real-time access to data in the source systems. This eliminates the question of “which system has the accurate data right now?”
The data virtualization approach delivers clear benefits in terms of data integrity. But IT leaders can realize additional benefits by using a platform that includes a data virtualization layer, codeless data modeling, and record-level security.
Codeless data modeling means that you do not need to know SQL in order to interact with data that lives in applications such as Oracle, Salesforce, or Microsoft SharePoint. It also means that you can use the data in workflows and applications that people create with visual drag-and-drop tools rather than traditional code. Think about what that means for creating reports and dashboards, for example.
Record-level security ensures that people get access only to the records that they need. This has advantages not only for data analysis but also for projects such as customer and partner portals, which can give people outside your organization needed access to up-to-date data, but only the data you specify.
That’s important, because just as every IT leader has talent worries, they also have security and regulatory concerns. Enterprise data spread out among disparate systems creates security risks and holes that can leave confidential data in the hands of people who don’t require it. Plus, regulations for your specific industry and geography can change quickly, necessitating system updates. IT leaders need to help protect their organizations from data breaches and violations that carry financial penalties.
How data fabric helps:
With data fabric keeping all the data in one virtualized data model, you get a complete view of all your different systems. This allows for consistent governance, even when people create new data patterns, because the data is related in the centralized, virtual layer.
As noted earlier, record-level security delivers precise control, which is valuable when people are building reports and workflows using data from multiple applications. You might reference data in your CRM system to determine whether specific rows of data from your ERP should be accessible. A good data fabric helps secure data across many joins and nested relationships. This is important not only inside your organization but also when you’re sharing data with outside groups like franchisees, field technicians, or customers.
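The cross-system rule described above can be sketched simply: an attribute in the CRM (here, the team that owns each account) decides which ERP order rows a given user may see. All of the field names, the sample data, and the `allowed_rows` helper are hypothetical illustrations of the pattern, not any product's security model.

```python
# CRM data: which team owns each account (hypothetical sample data).
crm_accounts = {
    "acme": "north",
    "globex": "south",
}

# ERP data: order rows tied to those accounts.
erp_orders = [
    {"order_id": 1, "account_id": "acme", "amount": 500},
    {"order_id": 2, "account_id": "globex", "amount": 900},
]

def allowed_rows(user_team, orders, accounts):
    """Record-level security: return only the ERP rows whose
    CRM account belongs to the requesting user's team."""
    return [o for o in orders
            if accounts.get(o["account_id"]) == user_team]

# A user on the "north" team sees only Acme's orders.
north_view = allowed_rows("north", erp_orders, crm_accounts)
```

The same filter applies whether the requester is an employee, a franchisee, or a customer in a portal: the fabric evaluates the rule per record at query time, so outside users see only the rows you specify.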
[ Which emerging data integration and automation trends deserve your attention now? Get the Gartner® Hyperautomation 2022 Trends Report. ]