Planning reliability in IT

How to make your storage infrastructure fit for the future


The challenge: planning reliability in IT

No matter how much buffer IT managers build into their storage planning, it is never enough in the end. Sooner or later they have to replan, postpone, and patch things up.

 

Yet IT has plenty of other to-dos and should not have to deal with storage planning on a permanent basis.

 

In this article we will answer the following questions:

  • How can IT managers plan more confidently?
  • How can they meet the challenges around data growth, interdependencies, and IT skills shortages?
  • In what ways can they save time and money in planning?

3 vs. 30 years – the dilemma

In most organizations, storage planning covers a horizon of three to five years.

 

Data retention goes well beyond this time horizon. Especially in archiving, data must be kept for 7, 15, and sometimes 30 years or more. This data must remain available, immutable, and transferable.

 

The gap between 3 years of planning and 30 years of retention understandably causes headaches in IT: especially when even the 3-year plan quickly becomes a guessing game.

Data growth, lack of time, ransomware, and more

The above example is an extreme case. But the dilemma between storage planning and long-term data retention is only one of many challenges. IT is also struggling with:

  • Data growth
    A new research project generates mountains of unplanned data, a completely new use case is suddenly added, the company grows more than anticipated - and the plan is already obsolete.

  • Unclear requirements
    Estimates of what data needs to be stored and how much space it requires come from the individual departments. But a single unplanned project or miscalculation is enough to eat up even generous buffers.

  • Technological progress
    The IT and storage world moves quickly. For this reason, systems are often planned for only three to five years, and data must therefore be migrated to new systems on a regular basis. This costs time and money - and is sometimes not even feasible if there is a dependency on the existing system (vendor lock-in).

  • IT skills shortage
    Many areas of responsibility, too few people - that sums up the situation in most IT departments. And it is not getting any easier to recruit well-qualified staff in the highly competitive IT job market. The tasks, however, remain, which is why alternative approaches are in demand.

  • IT security and disaster recovery
    To keep an organization running and secure, multiple backups must be kept. To be protected against natural disasters, copies of the data must be stored at geographically separate locations. A best practice against ransomware attacks, among other threats, is the 3-2-1-1-0 rule: three copies of the data on two different media, one of them off-site, one immutable or offline, and zero errors after backup verification (a simple check against this rule is sketched below). This means that many backups and copies come on top of the data actually stored - not a pleasant prospect given growing data volumes.
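To make the 3-2-1-1-0 rule a little more tangible, here is a minimal Python sketch that checks a backup plan against it. The data structure and field names are hypothetical and purely illustrative; they are not tied to any specific backup product.

```python
# Minimal sketch: checking a backup plan against the 3-2-1-1-0 rule.
# The Copy structure and its fields are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Copy:
    media_type: str        # e.g. "disk", "tape", "object storage"
    location: str          # e.g. "primary DC", "secondary DC", "off-site vault"
    immutable: bool        # WORM / object-lock protection enabled
    offline: bool = False  # air-gapped copy

def satisfies_3_2_1_1_0(copies: list[Copy], verified_without_errors: bool) -> bool:
    enough_copies = len(copies) >= 3                               # 3 copies of the data
    two_media     = len({c.media_type for c in copies}) >= 2       # on 2 different media
    one_offsite   = len({c.location for c in copies}) >= 2         # 1 copy off-site
    one_immutable = any(c.immutable or c.offline for c in copies)  # 1 immutable or offline copy
    zero_errors   = verified_without_errors                        # 0 errors after verification
    return all([enough_copies, two_media, one_offsite, one_immutable, zero_errors])

plan = [
    Copy("disk", "primary DC", immutable=False),
    Copy("object storage", "secondary DC", immutable=True),
    Copy("tape", "off-site vault", immutable=False, offline=True),
]
print(satisfies_3_2_1_1_0(plan, verified_without_errors=True))  # True
```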

The solution: How to make your storage infrastructure fit for the future

The basis: independence and security

Even if the storage infrastructure is planned for three years, the underlying concepts must be geared to the long term.

 

IT managers must first be aware of what reliance on a particular vendor, on specific hardware, or on proprietary APIs really means.

 

The cost per gigabyte may look tempting, but if vendor lock-in makes it impossible to change the system later, high costs and effort are incurred in the long run. This can be the case, for example, with hardware-bound systems and proprietary APIs. It can also happen in the cloud if public and private cloud services come from the same provider.

 

The security of the systems must also be ensured from the outset, for example through access restrictions, encryption, and immutability of the data.

 

Software-based storage systems help overcome dependencies and create flexibility. They run on inexpensive standard hardware that can easily be replaced, which simplifies data migrations. They also show clear strengths in terms of cost and effort - more on this below.

The framework: Scale-out

Scale-out storage keeps organizations from simply "filling up" systems and is the answer to data growth across industries.

 

In these horizontally scalable systems, overall capacity and performance are provided by a cluster of many storage server nodes. To the outside, the cluster behaves as a single system, and capacity and performance can be expanded as needed by adding nodes.
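As a simplified illustration of the scale-out principle, the following Python sketch estimates how many nodes a given capacity requirement translates into. All figures are hypothetical and ignore replication or erasure-coding overhead and other real-world factors.

```python
# Simplified scale-out planning sketch: usable capacity and aggregate throughput
# grow roughly linearly with the number of nodes. All figures are hypothetical.
NODE_CAPACITY_TB = 120       # assumed capacity per storage node
NODE_THROUGHPUT_MBS = 800    # assumed throughput contribution per node

def nodes_needed(required_tb: int) -> int:
    """Smallest number of nodes whose combined capacity covers the requirement."""
    return -(-required_tb // NODE_CAPACITY_TB)  # ceiling division

for required in (100, 500, 2000):  # capacity requirements in TB
    n = nodes_needed(required)
    print(f"{required:>5} TB -> {n:>2} nodes, "
          f"{n * NODE_CAPACITY_TB} TB capacity, "
          f"{n * NODE_THROUGHPUT_MBS} MB/s aggregate throughput")
```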

 

However, it is also important that organizations can start small, because not every company moves into the triple-digit terabyte or petabyte range right from the start.

The backbone: simplicity and management

Increasing data volumes + increasing requirements = more effort and complexity?

 

For a future-proof infrastructure, the calculation must look different. Especially as requirements and data volumes grow, IT managers must ensure that their infrastructure remains manageable and that the effort required to run it stays low.

 

One building block is the centralization of storage. Why should backup data, research data, and archive data sit on different systems, each of which has to be maintained, updated, and backed up? By consolidating multiple use cases onto a single storage platform, IT departments reduce both complexity and their workload.

 

Another building block is the outsourcing of routine tasks. Why should the IT department have to deal with many different systems, constantly stay up to date, handle administration, and install updates and security patches? This time is better spent modernizing IT further and supporting the business.

 

With a managed services concept, the manufacturer or a service provider takes over the majority of these tasks, so that, ideally, the systems simply run and the IT department hardly has to touch them. This also increases security and saves resources and nerves.

The operating model: Agility and flexibility

The public cloud has given the IT world a boost over the last decade, and it is hard to imagine IT without it. But a complete switch to the public cloud is difficult and rarely makes sense. Hybrid solutions and as-a-service offerings are therefore a sought-after approach.

 

Organizations do not have to choose between public cloud and on-premises, between black and white. They can bring the advantages of the public cloud - scalability, agility, flexibility - into their own data center and combine them with the strengths of on-premises: data sovereignty, control, and performance.

 

This is made possible by software-based architectures that separate the storage software intelligence from the hardware. IT managers can use cost-effective standard hardware and combine it into a flexibly expandable scale-out cluster.

 

The procurement and payment models of the public cloud can also be transferred to this approach. With solutions such as HPE GreenLake, for example, the storage infrastructure (and much more) can be procured as a service and paid for per use. This allows companies to close the gap between planning and reality and to react agilely to changes.

The calculation: it works out - with a TCO view

All well and good - but what does it cost, then?

 

When it comes to planning and future-proofing, costs are of course one factor - if not the decisive one. If data storage is to be designed for the long term, the costs must be transparent and low.

 

As mentioned above, the pure storage costs are not the only decisive factor. Storage planning must always look at the overall costs: expenses for administration, for building up know-how, and for keeping pace with technological development also play an important role.
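The following Python sketch shows what such an overall view can look like in its simplest form. The cost components and all figures are hypothetical placeholders; a real comparison, such as the ESG study cited below, uses far more detailed models.

```python
# Sketch of a simple TCO view over a multi-year period.
# All cost components and figures are hypothetical placeholders.
YEARS = 5

costs_per_year = {
    "storage capacity":       40_000,  # hardware or cloud capacity
    "administration":         25_000,  # day-to-day operations
    "training / know-how":     5_000,  # building and keeping expertise
    "migrations / refreshes": 10_000,  # technology changes over time
}

tco = YEARS * sum(costs_per_year.values())
print(f"Total cost of ownership over {YEARS} years: {tco:,} EUR")
for item, cost in costs_per_year.items():
    share = YEARS * cost / tco
    print(f"  {item:<24} {share:6.1%} of TCO")
```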

 

Analysts at Enterprise Strategy Group (ESG) compared the total cost of ownership of long-term storage for a leading public cloud storage service with a software-based on-premises solution, looking at a five-year period and one petabyte of storage. The result: the cloud is 53% more expensive than the solution in the company's own data center and requires 61% more staff time. A holistic, long-term view of the total costs saves money and time - and ensures that the calculation and the planning work out in the end.

 

Here you can download the complete total cost comparison.

And how does this work in practice?

The path to more planning reliability for your storage infrastructure does not have to be a marathon or a hurdle race. What matters is that the infrastructure is independent, scalable, simple, and flexible by design - and that the total cost of ownership is taken into account.

 

Would you like to learn how to put these best practices into action?

 

Then get to know the iCAS FS storage platform and arrange a personal consultation to discuss your challenges and possible solutions.
