Responsive Data Fabrics and the Business Need
In organizations where pure data analytics drive specific outcomes (tracking sales and purchases of commodities on an exchange, for instance), in-memory databases make logical sense. Low-latency processing, with no cycles wasted on disk I/O, means fast ingest rates are covered, and the same tables can serve a reasonable query load, provided enough memory is available for the required compute.
Now, as anyone who has run an OS entirely from memory on an over-specced desktop machine will tell you, the performance is amazing, but the model has downsides. The obvious one is vulnerability to power loss, yet the primary drawbacks are up-front cost and lack of scalability. The former won't trouble most enterprise in-memory databases, which persist snapshots and logs to disk; but in an IT environment where cloud computing and virtualization offer elastic compute, storage, and networking, an expensive yet finite resource like a fixed number of DIMM slots looks like a glaring anachronism.
There's also the issue of data availability. A decade ago, analysts would be tasked by individual business functions to produce specific data transforms or BI reports. Today, the need for data analysis and processing is far more widespread across the enterprise. In short, everyone wants in on the action: unlikely candidates such as marketing divisions want to present personalized point-of-sale messaging, and HR departments want to predict when recruitment cycles should begin and end.
Data analysis and insight creation are finding new advocates, too, away from the traditional stomping grounds of the medical and finance sectors. Even in "heavy" industries like manufacturing, facilities want to monitor huge arrays of IIoT sensors and adjust equipment in real time according to data processed in milliseconds.
Even advanced in-memory technology wasn't designed for the variety of needs in today's enterprise, and even if it were, overall capacity would be constrained. That's why many organizations are looking for specialist platforms that can handle massive ingest rates while performing complex processing and transformations, and do so without demanding a king's ransom. Perhaps that's too much to ask.
What may come as a surprise, then, is that the InterSystems IRIS data platform outperforms even the fastest IMDBs and, as a data fabric, also turns application and data silos into what is effectively a unified resource from which to draw information. As a hybrid cloud-based service, too, it can scale in step with business growth or, more likely today, adapt to burst demand without missing a beat (or, more accurately, a timestamped data event).
Developers writing applications against the fabric can work in Python, Java, C++, or the .NET languages (to name a few), and the underlying data store can be queried via standard SQL or through native APIs that operate directly on the data. That means a very gentle technical ramp for most teams, and application development with the entire organization's data resources at hand becomes a reality. Extended to every business function across the enterprise, canonical information becomes the freely available asset it should be.
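As a sketch of how gentle that ramp can be, the following Python snippet queries the platform over a plain DB-API connection. It assumes an ODBC driver and a DSN named "IRIS" have already been configured; the credentials and the Demo.SensorReading table are hypothetical, so substitute your own.

```python
import pyodbc

# Connect through a pre-configured ODBC DSN (the DSN name, user,
# and password here are assumptions; use your own environment's values).
conn = pyodbc.connect("DSN=IRIS;UID=_SYSTEM;PWD=secret")
cursor = conn.cursor()

# Ordinary SQL against the fabric's unified view of the data.
# Demo.SensorReading is an invented table for illustration.
cursor.execute(
    """
    SELECT TOP 5 DeviceID, ReadingValue, RecordedAt
    FROM Demo.SensorReading
    ORDER BY RecordedAt DESC
    """
)
for device_id, value, recorded_at in cursor.fetchall():
    print(device_id, value, recorded_at)

conn.close()
```

Any language with an ODBC or JDBC binding can take the same route, which is much of the point: existing skills carry over unchanged.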
Compared with every commonly deployed technology combination pursuing similar goals, InterSystems IRIS comes out ahead in apples-with-apples comparisons, with ingest and retrieval rates frequently hundreds of percent better (the company offers Docker images to pull and run, or documentation on provisioning home-built Kubernetes, however you want to run the tests).
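For a first-pass test of your own, a minimal ingest-rate micro-benchmark might look like the sketch below. It reuses the hypothetical DSN and table from the previous snippet and simply times a batch of parameterized inserts; a serious benchmark would control batch size, parallelism, and hardware far more carefully.

```python
import time
import pyodbc

ROWS = 10_000  # small batch, just a smoke test

conn = pyodbc.connect("DSN=IRIS;UID=_SYSTEM;PWD=secret")  # hypothetical DSN/credentials
cursor = conn.cursor()
cursor.fast_executemany = True  # let pyodbc send parameter sets in bulk

rows = [(f"device-{i % 100}", float(i)) for i in range(ROWS)]

start = time.perf_counter()
cursor.executemany(
    "INSERT INTO Demo.SensorReading (DeviceID, ReadingValue) VALUES (?, ?)",
    rows,
)
conn.commit()
elapsed = time.perf_counter() - start

print(f"{ROWS} rows in {elapsed:.2f}s ({ROWS / elapsed:,.0f} rows/s)")
conn.close()
```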
The InterSystems Cloud Manager (ICM) deploys the platform in containers, and the platform also ships as binaries for Linux on bare metal or VMs. Those options create a malleable architecture that can be provisioned at whatever scale ingest- or process-heavy operations demand: on premises, in cloud clusters, or spread across any topology.
For line-of-business owners, InterSystems IRIS means access to a huge range of applications that increase efficiency and manage risk better. IRIS works as an intelligent data fabric that builds on previous investments, like data lake platforms and siloed application data. There is also plenty of on-fabric capability to draw on: NLP and ML algorithms, business intelligence, trend tracking, and a great deal more. That means existing assets don't get tied up.
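As one illustration of on-fabric ML, InterSystems' IntegratedML feature exposes model building through SQL extensions, which can be driven from the same hypothetical connection used above. The statement syntax below follows the IntegratedML documentation as best recalled, and the table and column names are invented, so treat this as an outline rather than a recipe.

```python
import pyodbc

conn = pyodbc.connect("DSN=IRIS;UID=_SYSTEM;PWD=secret")  # hypothetical DSN/credentials
cursor = conn.cursor()

# IntegratedML extends SQL with model-management statements; the
# Demo.Attrition table and WillLeave column are invented for illustration.
cursor.execute("CREATE MODEL AttritionModel PREDICTING (WillLeave) FROM Demo.Attrition")
cursor.execute("TRAIN MODEL AttritionModel")

# Score new rows with the trained model directly in a query.
cursor.execute("SELECT EmployeeID, PREDICT(AttritionModel) FROM Demo.NewHires")
for employee_id, prediction in cursor.fetchall():
    print(employee_id, prediction)

conn.close()
```

The point for LoB owners is that the modeling happens where the data already lives, rather than in a separate stack that data must be exported to.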
Capabilities extend well beyond applications that use internal data: third-party information can be consumed at huge scale, giving both startup fintechs and established enterprises access to business-changing (and ever-changing) information.
The smart data fabric from InterSystems goes above and beyond data lake and in-memory technologies, giving the data-driven enterprise access to resources it was aware of but could never exploit to the extent needed. The enterprise-wide information bank becomes available without plowing resources into new architectures.
To learn more, head over to the InterSystems site and draw on the decades of experience the company offers in data management and intelligence. Your preferred memory chip vendor need not be disturbed.