Analyst News

The Coming Intersection of HPC and the Enterprise Data Center – Forbes blog by John Webster

Analyst News - Tue, 02/20/2018 - 09:32

High Performance Computing (HPC) has traditionally existed as a separate and distinct discipline from enterprise data center computing. Both use the same basic components—servers, networks, storage arrays—but are optimized for different types of applications. Those within the data center are largely transaction-oriented, while HPC applications crunch numbers and high volumes of data. However, an intersection is emerging, driven more recently by business-oriented analytics that now fall under the general category of artificial intelligence (AI).

Data-driven, customer-facing online services are advancing rapidly in many industries, including financial services (online trading, online banking), healthcare (patient portals, electronic health records), and travel (booking services, travel recommendations). The explosive, global growth of SaaS and online services is leading to major changes in enterprise infrastructure, with new application development methodologies, new database solutions, new infrastructure hardware and software technologies, and new datacenter management paradigms. This growth will only accelerate as emerging Internet of Things (IoT)-enabled technologies like connected health, smart industry, and smart city solutions come online in the form of as-a-service businesses.

Business is now about digital transformation. In the minds of many IT executives, this typically means delivering cloud-like business agility to their user groups—transform, digitize, become more agile. And it is often the case that separate, distinctly new cloud computing environments are stood up alongside traditional IT to accomplish this. Transformational IT can now benefit from a shot of HPC.

HPC paradigms were born from the need to apply sophisticated analytics to large volumes of data gathered from multiple sources. Sound familiar? The Big Data way to say the same thing was “Volume, Variety, Velocity.” With the advent of cloud technologies, HPC applications have leveraged storage and processing delivered from shared, multi-tenant infrastructure. Many of the same challenges addressed by HPC practitioners are now faced by modern enterprise application developers.

As enterprise cloud infrastructures continue to grow in scale while delivering increasingly sophisticated analytics, we will see a move toward new architectures that closely resemble those employed by modern HPC applications. Characteristics of new cloud computing architectures include independently scaling compute and storage resources, continued advancement of commodity hardware platforms, and software-defined datacenter technologies—all of which can benefit from an infusion of HPC technologies. These are now coming from the traditional HPC vendors—HPE, IBM, and Intel with its 3D XPoint, for example—as well as some new names like NVIDIA, the current leader in GPU cards for the AI market.

To extract better economic value from their data, enterprises can now more fully enable machine learning and deep neural networks by integrating HPC technologies. They can merge the performance advantages of HPC with AI applications running on commodity hardware platforms. Instead of reinventing the wheel, the HPC and Big Data compute-intensive paradigms are now coming together to provide organizations with the best of both worlds. HPC is advancing into the enterprise data center and it’s been a long time coming.


The post The Coming Intersection of HPC and the Enterprise Data Center – Forbes blog by John Webster appeared first on Evaluator Group.

Categories: Analyst News

What Comes after CI and HCI? – Composable Infrastructures

Analyst News - Mon, 02/19/2018 - 18:07

There’s an evolution occurring in IT infrastructure that’s providing alternatives to the traditional server-, storage- and SAN-based systems enterprises have used for the past two decades or so. This evolution was first defined by “hyperscalers”, the big public cloud and social media companies, and encompasses multiple technology approaches like Converged, Hyperconverged and now Composable Infrastructures. This blog will discuss these combined “Integrated Infrastructure” approaches and look at how they attempt to address evolving needs of IT organizations.

Hyper-scale Infrastructures

Companies like Google, Facebook and AWS had to address a number of challenges, including huge data sets, unpredictable capacity requirements and dynamic business environments (first public cloud and social media, now IoT, AI, etc.), that stressed the ability of traditional IT to deliver services in a timely fashion. So they created a new model for IT that incorporated internally developed, software-based architectures that ran on standard hardware and provided the needed flexibility, scale and cost containment.

But enterprise IT doesn’t have the expertise to support this kind of do-it-yourself infrastructure, nor the desire to dedicate the resources or take on the risk. In general, enterprises want to use trusted suppliers and have clear systems responsibility. They need much of what hyper-scale systems provided, but with integrated solutions that are simple to operate, quick to deploy and easy to configure and re-configure.


Converged Infrastructure solutions were some of the first attempts at an integrated infrastructure, creating certified stacks of existing servers, storage, networking components and server virtualization that companies bought by the rack. Some were sold as turnkey solutions by the manufacturer and others were sold as reference architectures that VARs or enterprises themselves could implement. They reduced the integration required and gave companies a rack-scale architecture that minimized set up costs and deployment time.

Hyperconverged Infrastructures (HCIs) took this to the next level, actually combining the storage and compute functions into modules that users could deploy themselves. Scaling was easy, too: just add more nodes. At the heart of this technology was a software-defined storage (SDS) layer that virtualized the physical storage on each node and presented it to a hypervisor that ran on each node as well, to support workloads and usually the SDS package itself.

HCIs come in several formats, from a turnkey appliance sold by the HCI manufacturer to a software-only model where the customer chooses their hardware vendor. Some enterprises even put together their own HCI-like solution, running an SDS package on a compatible server chassis and adding the hypervisor.

While Converged and Hyperconverged Infrastructures provide value to the enterprise, they don’t really provide a solution for every use case. HCIs were great as a consolidation play for the lower end, SMB and mid-market companies. Enterprises use them too, but more for independent projects or remote environments that need a turnkey infrastructure solution. In general, they’re not using HCIs for mainstream data center applications because of concerns about creating silos of infrastructure and vendor lock-in, but also a feeling that the technology lacks maturity and isn’t “mission critical” (based on the 2017 Evaluator Group study “HCI in the Enterprise”).

While they’re composed of traditional IT infrastructure components, CIs present a system that’s certainly mature and capable of handling mission-critical workloads. But CIs are also relatively expensive and inflexible, since they’re essentially bundles of legacy servers, storage and networking gear rather than software-defined modules of commodity hardware with a common management platform. They also lack the APIs and programmable aspects that can support automation, agility and cloud connectivity.

Composable Infrastructure

Composable Infrastructure (CPI) is a comprehensive, rack-scale compute solution that combines some characteristics of both Converged and Hyperconverged Infrastructures. CPI disaggregates and then pools physical resources, allocating them at run time for a specific compute job and then returning them to the pool. It provides a comprehensive compute environment that supports applications running in VMs, containers and bare-metal OS instances.

CPI doesn’t use SDS, as HCIs do, to share the storage pool, but supports direct-attachment of storage devices (like drives and SSDs), eliminating the SDS software latency and cost. CPI also doesn’t require a hypervisor to run an SDS layer or workloads. Instead, it creates bare metal server instances that can support containers or a hypervisor if desired, reducing software licensing and hypervisor lock-in.
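The compose-then-release cycle at the heart of CPI can be illustrated with a short sketch. All names here are hypothetical, not any vendor's interface; real composable platforms expose this pattern through management APIs, but the pool, allocate and release pattern is the core idea:

```python
# Hypothetical sketch of the composable model: disaggregated pools of
# compute, storage devices and NICs are combined into a bare-metal server
# instance at run time, then released back to the pool afterwards.
# Nothing here is a real vendor API.

class ResourcePool:
    def __init__(self, **resources):
        self.free = dict(resources)  # e.g. cores, drives, nics

    def compose(self, **req):
        """Allocate resources for one bare-metal instance."""
        if any(self.free.get(k, 0) < v for k, v in req.items()):
            raise RuntimeError("insufficient resources to compose instance")
        for k, v in req.items():
            self.free[k] -= v
        return dict(req)

    def decompose(self, grant):
        """Return an instance's resources to the pool."""
        for k, v in grant.items():
            self.free[k] += v

pool = ResourcePool(cores=64, drives=24, nics=8)
server = pool.compose(cores=16, drives=4, nics=2)
# ... boot a bare-metal OS, run containers or a hypervisor if desired ...
pool.decompose(server)  # resources go back for the next job
```

Because the instance is just an allocation record, the same physical gear can back very different server shapes over the course of a day, which is what distinguishes CPI from fixed CI bundles and node-granular HCI.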

Composable Infrastructures are stateless architectures, meaning they’re assembled at run time, and can be controlled by 3rd-party development platforms and management tools through APIs. This improves agility and makes CPI well suited for automation. For more information see the Technology Insight paper “Composable – the Next Step in Integrated Infrastructures”.


The amount and diversity of technology available in infrastructure products can be overwhelming for those trying to evaluate appropriate solutions. In this blog we discuss pertinent topics to help IT professionals think outside the checkbox of features and functionality.

The post What Comes after CI and HCI? – Composable Infrastructures appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (2/5-2/12)

Analyst News - Thu, 02/15/2018 - 09:48




If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at



2/12 – Escaping the Gravitational Pull of Big-Data

CA Technologies

2/8 – Erica Christensen, CA Technologies, Discusses The Importance of Producing STEM10


2/5 – Global Cloud Index Projects Cloud Traffic to Represent 95 Percent of Total Data Center Traffic by 2021

Dell EMC

2/6 – Dell EMC Expands Server Capabilities for Software-defined, Edge and High-Performance Computing


2/8 – Huawei Releases E2E Cloud VR System Prototype


2/6 – Infinidat Backup Appliance: When the Quality and Cost of Storage Really Matters


2/8 – MapR Simplifies End-to-End Workflow for Data Scientists


2/8 – Five Requirements for Hyper-Converged Infrastructure Software


2/9 – NEC Succeeds in Simultaneous Digital Beamforming That Supports 28 GHz Band For 5G Communications


2/12 – Oracle Cloud Growth Driving Aggressive Global Expansion

Supermicro Computer

2/7 – Supermicro Expands Edge Computing and Network Appliance Portfolio with New High Density SoC Solutions


2/9 – Multicloud Storage Mitigates Risk of Public Cloud Lock-In


2/5 – Automation and The Age of The Self-Driving Data Center

The post Systems and Storage News Roundup (2/5-2/12) appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (1/15-1/22)

Analyst News - Thu, 01/25/2018 - 10:35
Categories: Analyst News

Hybrid Cloud: 4 Top Use Cases

Analyst News - Wed, 01/17/2018 - 12:19

Interop ITX expert Camberley Bates explains which hybrid cloud deployments are most likely to succeed.

In the early days of cloud computing, experts talked a lot about the relative merits of public and private clouds and which would be the better choice for enterprises. These days, most enterprises aren’t deciding between public or private clouds; they have both. Hybrid and multi-cloud environments have become the norm.

However, setting up a true hybrid cloud, with integration between a public cloud and private cloud environment, can be very challenging.

“If the end user does not have specific applications in mind about what they are building [a hybrid cloud] for and what they are doing, we find that they typically fail,” Camberley Bates, managing director and analyst at Evaluator Group, told me in an interview.

So which use cases are best suited to the hybrid cloud? Bates highlighted three scenarios where organizations are experiencing the greatest success with their hybrid cloud initiatives, and one use case that’s popular but more challenging.

1. Disaster recovery and business continuity

Setting up an independent environment for disaster recovery (DR) or business continuity purposes can be a very costly proposition. Using a hybrid cloud setup, where the on-premises data center fails over to a public cloud service in the case of an emergency, is much more affordable. Plus, it can give enterprises access to IT resources in a geographic location far enough away from their primary site that they are unlikely to be affected by the same disaster events.

Bates noted that costs are usually a big driver for choosing hybrid cloud over other DR options. With hybrid cloud, “I have a flexible environment where I’m not paying for all of that infrastructure all the time constantly,” she said. “I have the ability to expand very rapidly if I need to. I have a low-cost environment. So if I combine those pieces, suddenly disaster recovery as an insurance policy environment is cost effective.”
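A rough back-of-the-envelope comparison shows why. The rates and event duration below are invented purely for illustration; the point is that a dedicated DR site bills for idle infrastructure year-round, while the hybrid model pays full price only during an actual failover:

```python
# Back-of-envelope sketch with made-up numbers: a dedicated DR site bills
# for its full duplicate infrastructure all year, while a hybrid cloud DR
# target bills a small standby charge for data at rest plus the full
# compute rate only while a failover is actually running.

HOURS_PER_YEAR = 365 * 24

dedicated_site_rate = 50.0   # $/hour for an idle duplicate site (assumed)
cloud_standby_rate = 2.0     # $/hour to keep replicated data at rest (assumed)
cloud_failover_rate = 60.0   # $/hour while running in the cloud (assumed)
failover_hours = 72          # assume one 3-day DR event per year

dedicated_cost = dedicated_site_rate * HOURS_PER_YEAR
hybrid_cost = (cloud_standby_rate * (HOURS_PER_YEAR - failover_hours)
               + cloud_failover_rate * failover_hours)

print(f"dedicated DR site: ${dedicated_cost:,.0f}/year")
print(f"hybrid cloud DR:   ${hybrid_cost:,.0f}/year")
```

Even with a cloud hourly rate higher than the dedicated site's, the hybrid approach comes out far cheaper in this sketch, because the expensive rate applies for 72 hours rather than 8,760.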

2. Archive

Using a hybrid cloud for archive data has very similar benefits as disaster recovery, and enterprises often undertake DR and archive hybrid cloud efforts simultaneously.

“There’s somewhat of a belief system that some people have that the cloud is cheaper than on-prem, which is not necessarily true,” cautioned Bates. However, she added, “It is really cheap to put data at rest in a hybrid cloud for long periods of time. So if I have data that is truly at rest and I’m not moving it in and out, it’s very cost effective.”

3. DevOps application development

Another area where enterprises are experiencing a lot of success with hybrid clouds is with application development. As organizations have embraced DevOps and agile methodologies, IT teams are looking for ways to speed up the development process.

Bates said, “The DevOps guys are using [public cloud] to set up and do application development.” She explained, “The public cloud is very simple and easy to use. It’s very fast to get going with it.”

But once applications are ready to deploy in production, many enterprises choose to move them back to the on-premises data center, often for data governance or cost reasons, Bates explained. The hybrid cloud model makes it possible for the organization to meet its needs for speed and flexibility in development, as well as its needs for stability, easy management, security, and low costs in production.

4. Cloud bursting

Many organizations are also interested in using a hybrid cloud for “cloud bursting.” That is, they want to run their applications in a private cloud until demand for resources reaches a certain level, at which point they would fail over to a public cloud service.
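The decision logic itself is trivial, which is part of why bursting sounds so attractive. As a sketch, with a made-up utilization threshold:

```python
# Minimal sketch of the bursting decision with a hypothetical threshold.
# The routing decision is the easy part; what this function doesn't show,
# keeping data, machine images and networking replicated to the public
# cloud so the burst target can actually take traffic, is what makes
# bursting hard to set up in practice.

def place_workload(private_utilization, threshold=0.85):
    """Route new demand to the public cloud once private capacity is tight."""
    return "public_cloud" if private_utilization >= threshold else "private_cloud"

print(place_workload(0.60))  # plenty of private headroom
print(place_workload(0.92))  # burst to the public cloud
```

The gap between this one-liner and a working burst setup is exactly the gap the Evaluator Group research found between desire and practice.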

However, Bates said, “Cloud bursting is a desire and a desirable capability, but it is not easy to set up, is what our research found.”

Bates has seen some companies, particularly financial trading companies, be successful with hybrid cloud setups, but this particular use case continues to be very challenging to put into practice.


Read this article here.

The post Hybrid Cloud: 4 Top Use Cases appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (1/8-1/15)

Analyst News - Tue, 01/16/2018 - 11:28
Categories: Analyst News

Big Data Meets Data Fabric and Multi-cloud – Forbes blog by John Webster

Analyst News - Sun, 01/14/2018 - 19:06

As cloud computing progresses forward, an age-old problem for IT is resurfacing—how to integrate and secure data stores that are dispersed geographically and across a growing diversity of applications. Data silos, which have always limited an organization’s ability to extract value from all of its data, become even more isolated. Consider the mainframe, with its stores of critical data going back decades, as the original data silo. Mainframe users still want to leverage this data for other applications such as AI but must overcome accessibility and formatting barriers in order to do so. It’s a task made easier by software vendors like Syncsort, whose Ironstream moves data from the ‘frame in a way that is easily digestible by Splunk applications, for example.

But as cloud computing progresses, the siloed data issue becomes even more apparent as IT executives try to broaden their reach to encompass and leverage their organization’s data stored in multiple public clouds (AWS, Azure, GCP) along with all that they store on site. The cloud-world solution to this problem is what is now becoming known as the Data Fabric.

Data Fabrics—essentially information networks implemented on a grand scale across physical and virtual boundaries—focus on the data aspect of cloud computing as the unifying factor. To conceive of distributed, multi-cloud computing only in terms of infrastructure would miss a fundamental aspect of all computing technology – data. Data is integral and must be woven into multi-cloud computing architectures. The concept that integrates data with distributed and cloud-based computing is the Data Fabric.

The reason for going the way of the Data Fabric, at least on a conceptual basis, is to break down the data silos that are inherent in isolated computing clusters and on- and off-premises clouds. Fabrics allow data to flow and be shared by applications running in both private and public cloud data centers. They move data to where it’s needed at any given point in time. In the context of IoT, for example, they enable analytics to be performed in real time on data being generated by geographically dispersed sensors “on the edge.”

Not surprisingly, the Data Fabric opportunity presents fertile ground for storage vendors. NetApp introduced its conceptual version of it four years ago and has been instantiating various aspects of it ever since. More recently, a number of Cloud Storage Services vendors have put forward their Data Fabric interpretations, which are generally based on global file systems. These include Elastifile, Nasuni, and Avere—recently acquired by Microsoft.

Entries are also coming from other unexpected sources. One is from the ever-evolving Big Data space. MapR’s Converge-X Data Fabric is an exabyte scale, globally distributed data store for managing files, objects, and containers across multiple edge, on-premises, and public cloud environments.

At least two-thirds of all large enterprise IT organizations now see hybrid clouds—which are really multi-clouds—as their long-term IT future. Operating in this multi-cloud world requires new data management processes and policies and most of all, a new data architecture. Enterprise IT will be increasingly called upon to assimilate a widening range of applications directed toward mobile users, data practitioners and outside business partners. In 2018, a growing and increasingly diverse list of vendors will offer their interpretations of the Data Fabric.

The post Big Data Meets Data Fabric and Multi-cloud – Forbes blog by John Webster appeared first on Evaluator Group.

Categories: Analyst News

HCI – What Happened in 2017 and What to Watch for in 2018

Analyst News - Sun, 01/14/2018 - 18:53

The Hyperconverged Infrastructure (HCI) segment continued to grow throughout 2017. Evaluator Group research in this area expanded in 2017 as well, adding products from NetApp, Lenovo and IBM. In this blog we’ll review some of the developments that occurred in 2017 and discuss what to look for in 2018.

2017 saw some consolidation in the HCI market plus new hyperconverged solutions from three big infrastructure companies. Early in the year HPE bought SimpliVity and standardized on this software stack for their HCI offering. IBM joined the HCI market with a Nutanix-based product running on their Power processors and Lenovo added the vSAN-based VX Series to their existing Nutanix-based HX Series HCI solution. NetApp released a new HCI solution, using the architecture from their SolidFire family of scale-out, all-flash arrays.

Going Enterprise?

In 2017 HCI vendors were touting their enterprise credentials, listing their Fortune 100 customers and providing some detail on selected use cases. The strong implication is that “enterprise” customers mean “enterprise” use cases and that HCIs are capable of replacing the tier-1 infrastructures that support mission-critical applications. While there are likely some examples of this occurring, our data suggests this isn’t happening to the extent that HCI vendors are suggesting.

Based on contact with end-user customers and on our HCI in the Enterprise studies over the past two years, our information indicates that enterprises are indeed buying HCIs, but not currently replacing tier-1 infrastructure with these systems. They’re using them for new deployments and to consolidate existing tier-2 and tier-3 applications. However, this doesn’t prove that HCIs are incapable of supporting tier-1 use cases.

More Clouds

Companies want the ability to connect to the public cloud as a backup target, to run workloads, and to do cloud-native development as well. Many HCI vendors have responded with “multi-cloud” solutions that run their software stack on premises and in the cloud, with support for containers and DevOps platforms. Some are including cloud orchestration software that automates backend processes for IT and provides self-service operation for developers and application end users.

NetApp and IBM Release HCI Solutions

In 2017 NetApp released its HCI based on the SolidFire Element OS. The product promises lower TCO, due to the performance of its SolidFire software stack, which was designed for all-flash, and an architecture that separates storage nodes from compute nodes. This allows NetApp HCI clusters to scale storage and compute resources independently.

2017 also saw IBM announce an HCI product using the Nutanix software stack running on IBM Power servers. The company is positioning this offering as a high performance HCI solution for big data, analytics and similar use cases. They’re also touting the lower TCO driven by their ability to support more VMs per node than other leading HCIs, based on IBM testing.

VMware’s vSAN 6.6 release added some features like software-based encryption and more data protection options for stretched clusters, but not the big jump in functionality we saw in 2016 with vSAN 6.2. vSAN came in as the most popular solution in our HCI in the Enterprise study, for the second year in a row. The Ready Nodes program seems to be a big success as the vast majority of vSAN licenses are sold with one of more than a dozen OEM hardware options.

Nutanix kept up its pace of introducing new features, as AOS 5.0 and 5.1 added over two dozen, and announced some interesting cloud capabilities with Xi Cloud Services, Calm, and a new partnership with Google Cloud Services (going up against Microsoft’s Azure and Azure Stack). See the Industry Update for more information.

Nutanix continues to grow its business every quarter but has yet to turn a profit, although the company claims it’s on the road to becoming profitable. The CEO said they are “getting out of the hardware business”, instead emphasizing their software stack and cloud-based services (as evidenced by the Xi and GCS activities). That is fine, as long as they plan to sell only through others, but to date they continue to sell their own systems. They will continue to sell the NX series of HCI appliances using Supermicro servers but won’t recognize revenue from it and have taken it off the comp plan. Given the consolidation we’re seeing in the HCI market, and the fact that server vendors like HPE and Cisco have their own software, this is probably the right move.

What to watch for in 2018

NetApp – see how their HCI does in the market. Their disaggregated technology is unique and seems to address one of the issues HCI vendors have faced, inefficient scaling. While this is a new HCI product, the SolidFire software stack is not new and NetApp has a significant presence in the IT market.

HPE SimpliVity – after some reorganization and acquisition pains, this product may be poised to take off. We have always liked SimpliVity’s technology and adding the resources and server expertise, plus a captive sales organization and an enterprise data center footprint may do the trick. That said, HPE has some catching up to do as Nutanix and the vSAN-based products are dominating the market.

Cisco – the company claims 2,000+ customers in the first 18 months of HyperFlex sales, but a large portion of those deals most likely went to existing UCS customers. The Springpath technology continues to mature, but the question is what kind of success Cisco can have in 2018 as they expand beyond their installed base.

IBM – can they make HCI work in the big data and analytics space? Can they be successful with Nutanix running on a new server platform? We shouldn’t underestimate IBM but this is a new direction for technology that has been based on industry standard hardware and it’s being sold to a new market segment.

Orchestration and Analytics – HCI vendors have been introducing more intelligence into their management platforms, with some providing policy-based storage optimization and data protection, plus monitoring and management of the physical infrastructure. We expect this to continue with the addition of features that leverage data captured from the installed base and analyzed to generate baselines and best practices.

For more information see Evaluator Series Research on Hyperconverged Infrastructures including Comparison Matrix, Product Briefs, Product Analyses and the study “HCI in the Enterprise”.


The post HCI – What Happened in 2017 and What to Watch for in 2018 appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (1/1-1/8)

Analyst News - Fri, 01/12/2018 - 08:30




If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at



1/8 – Actifio Update: Disruptive, Profitable and Growing…


1/8 – Cisco Announces Infinite Broadband Unlocked for cBR-8


1/8 – HPE Provides Customer Guidance in Response to Industrywide Microprocessor Vulnerability


1/7 – New 8th Gen Intel Core Processors with Radeon RX Vega M Graphics Offer 3x Boost in Frames Per Second in Devices as Thin as 17mm


1/4 – Mellanox Ships BlueField System-on-Chip Platforms and SmartNIC Adapters to Leading OEMs and Hyperscale Customers


1/8 – Micron and Intel Announce Update to NAND Memory Joint Development Program


1/8 – Toshiba Unveils Mainstream RC100 NVMe SSD Series at CES 2018


1/8 – Western Digital Unveils New Solutions to Help People Capture, Preserve, Access and Share Their Ever-Growing Collections of Photos and Videos


1/8 – Seagate Teams Up With Industry-Leading Partners To Offer New Mobile Data Storage Solutions At CES 2018


1/8 – SolarWinds Acquires Loggly, Strengthens Portfolio of Cloud Offerings


1/8 – Supermicro Unveils New All-Flash 1U Server That Delivers 300% Better Storage Density with Samsung’s Next Generation Small Form Factor (NGSFF) Storage

The post Systems and Storage News Roundup (1/1-1/8) appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (12/25-1/1)

Analyst News - Thu, 01/04/2018 - 12:32



If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at


CA Technologies

12/28 – Three Ruling Technology Trends



12/29 – China Mobile Chooses Huawei’s CloudFabric Solution for its SDN Data Center Network


The post Systems and Storage News Roundup (12/25-1/1) appeared first on Evaluator Group.

Categories: Analyst News

Who Will Be the Haves and Have Nots? – Forbes blog by John Webster

Analyst News - Wed, 01/03/2018 - 13:16

Tis the season for industry analysts to make stunningly insightful predictions about the coming year. They make interesting reading at best. Rarely do analysts review what they wrote the year before to see if they actually got it right. Years ago, I predicted that a technology called iSCSI would take over the storage networking world. It didn’t.

Today, the second day of the new year, I break with that tradition. Rather than try to peer into the future, I simply want to ask you to do something that I believe will improve the futures of our children.

The rapid advance of technology, an advance that is occurring exponentially, is creating a new gap within American society – those who can harness the power of technological advancement and those who can’t. This widening gap will have lasting economic consequences.

It disturbs me greatly that local educational systems don’t get that. In my home state of New Hampshire, the cover story of a widely distributed local weekly featured an article enumerating 7 ideas for updating NH schools. The authors talked to local school administrators and other leaders in education about what they think will boost student learning and they suggested things like more recess. Learning the ability to manipulate and exploit computing technology did not even make the list. Can they be that blind?

It’s no secret known only to technologists that there are approximately half a million jobs available right now that require some level of computational skills – jobs that could be filled today by young women and men with the right skills. But colleges and universities in the US graduated only 46,000 last year with degrees in computer science. And I would guess that the majority of them were male.

Computer coding is no longer the realm of geeks with advanced math degrees. Programs are now available for kids—girls and boys—starting at the first-grade level. Educators can go to and for classroom-based activities, guidance for teachers on how to teach coding, and suggestions for ways to get the local community involved. You use what you find here to encourage them—as forcefully as you like. has already reached 25% of US students and 10M of them are female. Also, check-out the Hour of Code program and a TEDx talk by Hadi Partovi, founder and CEO of

Unfortunately, local school boards move slowly. It may take years to convince them that learning to speak the language of technology is as important as math and the spoken word. To fill the gap, each of us can get involved by starting with a few kids at a time. You don’t even have to know how to code. All you need is a few PCs or laptops (available used), a space for kids to gather such as a church basement or community center, and someone to supervise. Everything else is available online from and

As we go forward in time, I believe that those who have and have not within any culture will be increasingly defined by an ability to manipulate technology. Vast computing warehouses like Amazon Web Services and Google Cloud Platform are available to any one of us for the swipe of a credit card. All one needs to know is how to use them.

The post Who Will Be the Haves and Have Nots? – Forbes blog by John Webster appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (12/11-12/18)

Analyst News - Fri, 12/22/2017 - 11:07



If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at



12/12 – Moving Beyond Backups to Enterprise-Data-as-a-Service

CA Technologies

12/14 – Predictions 2018: How DevOps, AI Will Impact Security

Catalogic Software

12/18 – Catalogic Software Updates Flagship DPX Data Protection Software

Fujitsu

12/11 – Fujitsu Develops WAN Acceleration Technology Utilizing FPGA Accelerators

IBM

12/13 – IBM and Blue Prism Deliver Digital Workforce Capabilities

NEC

12/12 – NEC Develops Deep Learning Technology to Improve Recognition Accuracy

The post Systems and Storage News Roundup (12/11-12/18) appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (12/4-12/11)

Analyst News - Wed, 12/13/2017 - 11:10
Categories: Analyst News

Data Protection: Research Outlook and Changes in the Market

Analyst News - Fri, 12/08/2017 - 13:07

I’m just a little over thirty days into my role at Evaluator Group covering various aspects of Data Management including Data Protection and Secondary Storage. Even with a good understanding of the technology and what vendors are doing in this space, I’ve been drinking from the proverbial firehose getting caught up on the detailed analysis done by my colleagues on the myriad of Data Protection vendors and solutions we cover. I’m excited to be part of the Evaluator Group team and add my perspectives to the analyses.

There are some important changes happening in the Data Protection market and I believe this segment is as dynamic as ever. The Data Protection requirements of customers have never been wider, and the number and type of solutions have never been greater. Software vendors continue to enhance their solutions by adding new data management capabilities and greater visibility into the data they manage through the backup process. Hardware vendors continue to build on technology advancements to deliver more performance, increased scalability, and enhanced efficiency. Other vendors are evolving secondary storage systems into scale-out architectures that grow as customers’ needs grow, scale without data migration, and often integrate with public clouds. A few vendors are combining all the hardware and software into hyperconverged secondary storage systems that promise fast deployment and ease of use within an enterprise’s data protection activities.

With the explosion of data and the rapid move towards hybrid IT environments using one or more private or public clouds, we see a number of organizations evaluating their Data Protection strategies and solutions and determining they need to make changes to meet evolving business and legal requirements. To help our customers evaluate what these changes mean for them, we will be making some changes and additions to our coverage of this market segment. We’re starting with Data Protection, updating product analyses as needed, adding an Eval(u)Scale™ assessment to product briefs, and thinking about how we can update the Evaluation Matrices to reflect the changes in the industry while maintaining much of the historic information that our users depend on.

Expanding beyond Data Protection, we are planning additional coverage into multiple aspects of Enterprise Data Management. Data Management is about knowing everything about your data, everywhere it is, and deriving more value from that data while still protecting it and optimizing its storage. Many of the Data Management concepts are not new to IT administrators and Evaluator Group analysts have covered the management of data from a number of perspectives. The opportunity here is to organize all this expertise into a comprehensive, integrated approach to Data Management.

We’ll have more information on what’s driving changes in Data Protection in the next blog and details about Data Management coverage as well as our insights in future blogs.

The post Data Protection: Research Outlook and Changes in the Market appeared first on Evaluator Group.

Categories: Analyst News

How does Hyperconverged Fit in the Enterprise? — findings from Evaluator Group Research

Analyst News - Thu, 11/30/2017 - 17:07

Hyperconverged Infrastructures (HCIs) have enjoyed significant success over the past several years, based partly on their ability to reduce the effort and shorten the timeframe required to deploy new infrastructures. HCI’s early success came largely in small and mid-sized companies, but as the technology has matured, adoption is increasingly being seen in enterprise-level companies and, to an extent, in more critical use cases.

Evaluator Group conducted a research study, “HCI in the Enterprise,” to determine how HCIs fit in enterprise environments by examining where companies are deploying or planning to deploy these products, which applications they are supporting, and what benefits they expect to derive from that deployment. The study also explored the expectations of IT personnel involved in the use or evaluation of these products, including where an HCI may not be an appropriate solution. This blog discusses some of the findings from the study, which included an email survey and follow-up interviews with IT professionals from enterprise-level companies (over 1,000 employees).

Seven out of eight enterprises we contacted are using or evaluating HCIs, or plan to in the near future. These companies told us their top IT priorities were to improve efficiency, reduce cost and increase “agility” – the ability to respond quickly to changes in infrastructure driven by their dynamic environments. Most are looking at this technology to help simplify their IT infrastructures as a way to achieve these objectives.

Simplification was a recurring theme, with infrastructure consolidation being the most common use case among the companies surveyed. These organizations are looking to HCIs as a way to reduce the number of systems in their environments and standardize on a single platform for multiple applications. This is the same approach that smaller and mid-market companies have taken for the past several years, but with a major difference.

Where smaller companies have adopted HCIs as a solution for most of their applications, enterprise IT is taking a more measured approach. They’re using HCI, but not for their most mission critical applications. The rationale seems to be that there are plenty of benefits to be had with this technology in tier 2 and tier 3 applications. When we asked why HCIs were not chosen, the reason most often given was maturity of the technology, with some concern over the fact that HCIs vendors are often start-up companies as well.

In the study we asked enterprise IT which products they were using, evaluating, or planning to evaluate. Of the 14 products listed, the solution most often cited was VMware vSAN, with Cisco HyperFlex and Nutanix Enterprise Cloud Platform coming in second and third, respectively.

Another area we asked about was decision factors: which characteristics were the most important when comparing one HCI solution with another. Hyperconverged infrastructure products are generally “feature-rich,” and most HCI vendors have also assembled a variety of models to choose from. But features and models weren’t even among the top criteria enterprise IT folks used in an evaluation. The number one characteristic was actually performance, and number two was economics. What’s interesting is that these two characteristics are closely related.

Cost as a comparative factor is best defined in terms of ability to do useful work. In a hyperconverged cluster that work is measured by the number of virtual machines it can support. Each VM consumes storage and compute resources, more specifically CPU cycles and storage IOPS. This means HCIs with better storage performance and more CPU cores can usually handle more VMs, on a per-node basis. So HCIs with better performance often end up costing less as well. In these situations, it’s imperative that a testing suite designed for hyperconverged infrastructures, such as IOmark, be used in order to capture accurate performance data.
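The arithmetic behind that point can be sketched in a few lines. This is a minimal illustration with made-up node prices and VM densities (not vendor figures, and not from the study) showing how a node with better performance can carry a lower effective cost per VM:

```python
# Hypothetical comparison of two HCI node configurations.
# Prices and VM-per-node capacities below are illustrative assumptions.

def cost_per_vm(node_price, vms_per_node):
    """Effective cost of supporting one VM on a given node type."""
    return node_price / vms_per_node

# Config A: cheaper node, but lower IOPS/CPU means fewer VMs per node.
config_a = cost_per_vm(node_price=25_000, vms_per_node=100)   # 250.0 per VM

# Config B: pricier node, but better performance supports more VMs.
config_b = cost_per_vm(node_price=32_000, vms_per_node=160)   # 200.0 per VM

# The "more expensive" node is cheaper per unit of useful work.
assert config_b < config_a
```

The catch, as the paragraph notes, is that the VMs-per-node input is only as good as the performance data behind it, which is why HCI-specific benchmarking matters.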

This study also contains detailed information on the findings mentioned above, including where and why HCIs are not appropriate, as well as input on networking, hardware platforms, the infrastructure being replaced, etc. More information and report details are available on Evaluator Group website, or contact us.

The amount and diversity of technology available in infrastructure products can be overwhelming for those trying to evaluate appropriate solutions. In this blog we discuss pertinent topics to help IT professionals think outside the checkbox of features and functionality.

The post How does Hyperconverged Fit in the Enterprise? — findings from Evaluator Group Research appeared first on Evaluator Group.

Categories: Analyst News