
Cloud storage is expensive? Are you doing it right?

In my daily job, I speak with many end users, and when it comes to the cloud, there are still many differences between Europe and the US. The European cloud market is much more fragmented than the American one, for several reasons, including slightly different regulations in each country. Cloud adoption is slower in Europe, and many organizations prefer to keep data and infrastructure on their own premises. The European approach is quite pragmatic, and many companies draw on the experiences of similar organizations across the pond. One similarity, however, lies in cloud storage, or more precisely, in cloud storage costs and how organizations respond to them.

The fact that data is growing at an incredible pace everywhere is nothing new, and it is often growing faster than predicted in previous years. At first glance, an all-in cloud strategy looks very compelling: low $/GB, less CAPEX and more OPEX, more flexibility, and so on, until your cloud bill gets out of hand.

As I wrote in one of my recent reports, "Alternatives to Amazon AWS S3", the $/GB price is only the first item on the bill; there are several others, including egress fees, that come afterwards. This aspect is often overlooked at first and has unpleasant consequences later.

There are at least two reasons why a cloud storage bill can get out of control:

  1. The application is not written correctly. Someone has written or migrated an application that was not specifically designed for the cloud and does not use resources efficiently. This is often the case with legacy applications that are migrated without modification. Sometimes the problem is difficult to solve because redeveloping an old application is simply not possible. In other cases, application behavior could be corrected by better understanding the APIs and the mechanisms that regulate the cloud (and how costs are calculated).
  2. There is nothing wrong with the workload; it is simply that data, once created, is read and moved around more than before.
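To make the second point concrete, here is a minimal back-of-the-envelope cost sketch. The rates and the workload numbers are purely illustrative assumptions, not any provider's actual price list; check your provider's current rate card before drawing conclusions.

```python
# Illustrative (hypothetical) prices -- not a real provider's rate card.
STORAGE_PER_GB_MONTH = 0.023   # $/GB/month, hot object tier
EGRESS_PER_GB = 0.09           # $/GB transferred out of the cloud
GET_PER_10K = 0.004            # $ per 10,000 GET requests

def monthly_bill(stored_gb, read_gb, get_requests):
    """Rough monthly cost breakdown: storage is only the first line item."""
    storage = stored_gb * STORAGE_PER_GB_MONTH
    egress = read_gb * EGRESS_PER_GB
    requests = get_requests / 10_000 * GET_PER_10K
    return storage, egress, requests

# 10 TB stored, but the workload reads the full dataset three times a month.
storage, egress, requests = monthly_bill(10_000, 30_000, 5_000_000)
# Egress alone ($2,700) dwarfs the $230 storage line.
```

With these assumed rates, a workload that merely *stores* 10 TB costs a few hundred dollars a month, but one that *reads it back* repeatedly pays an order of magnitude more in egress, which is exactly how a bill gets out of control without anything being "wrong" with the application.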


Start by optimizing the cloud storage infrastructure. Many vendors offer additional storage tiers and automation to support this. In some cases, complexity increases (someone has to manage the new policies and make sure they are working properly). No big deal, but probably no big savings either.
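As a rough illustration of what such an automation policy does, here is a minimal sketch of an age-based demotion rule. The `Obj` class, the 30-day threshold, and the tier names are all hypothetical; real systems express this through vendor-specific lifecycle rules.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Obj:
    key: str
    last_access: datetime
    tier: str = "hot"

def apply_cold_policy(objects, days=30, now=None):
    """Demote objects untouched for `days` days to the cold tier.

    This is the whole idea behind vendor tiering automation; the catch the
    article mentions is that someone still has to verify the policy fires
    as intended.
    """
    now = now or datetime.now(timezone.utc)
    for obj in objects:
        if obj.tier == "hot" and now - obj.last_access > timedelta(days=days):
            obj.tier = "cold"
    return objects
```

Running it over a mix of stale and fresh objects moves only the stale ones to the cold tier, leaving recently accessed data where it is.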

You can also try tweaking the application. However, this is not always easy, especially if you have no control over the code and the application was not written with the intention of running in a cloud environment. This could pay off in the medium to long term, but are you ready to invest in that direction?


One common solution now being adopted by a large number of organizations is data repatriation: bring data back on premises (or to a colocation service provider) and access it locally or from the cloud. Why not?

The larger the infrastructure, the lower the $/GB and, most importantly, there are no additional fees to worry about. If you are thinking about petabytes, there are several ways to optimize and take advantage of scale that can significantly reduce your $/GB: fat nodes with many hard drives, multiple media tiers for performance and cold data, traffic optimization, and more, all translating into low and predictable costs.

If this is not enough, or if you want to maintain a balance between CAPEX and OPEX, go hybrid. Most storage systems on the market now allow you to tier data to S3-compatible storage systems, and I'm not just talking about object storage; NAS and block storage systems can do the same. I've covered this topic in detail in this report, but contact your storage vendor and I'm sure there will be solutions to help.


Another option, one that does not negate what was written above, is the implementation of a multi-cloud storage strategy. Instead of committing to a single cloud storage vendor, abstract the access layer and use what is best for your application, workload, cost, and so on, based on your needs. Multi-cloud data controllers are gaining in importance, major vendors are making their first acquisitions (Red Hat with NooBaa), and the number of solutions is growing steadily. In practice, these products provide a standard front-end interface, typically S3-compatible, and can distribute data across multiple back-end repositories according to user-defined policies. The end user thus has great freedom of choice and flexibility in the placement (or migration) of data while being able to access it transparently, regardless of where it is stored. Last week, for example, I met Leonovus, a convincing solution that combines what I've just described with strong security features.
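A toy sketch can show the idea behind such a data controller: one front-end namespace, several back ends, and placement decided by user-defined policies. The backend names and the policies below are my own invention for illustration, not any real product's behavior.

```python
class MultiCloudRouter:
    """Toy multi-cloud data controller: one S3-style front end, many back ends.

    Placement follows user-defined policies (first match wins) instead of
    hard-wiring a single vendor.
    """
    def __init__(self, policies, default):
        self.policies = policies   # list of (predicate, backend_name) pairs
        self.default = default
        self.placement = {}        # key -> backend the object was routed to

    def put(self, key, size_gb, cold=False):
        meta = {"key": key, "size_gb": size_gb, "cold": cold}
        for predicate, backend in self.policies:
            if predicate(meta):
                self.placement[key] = backend
                return backend
        self.placement[key] = self.default
        return self.default

    def get_backend(self, key):
        # Callers see one namespace, wherever the object actually lives.
        return self.placement[key]

# Hypothetical policies: cold data goes to a cheap archive provider,
# very large objects stay on premises, everything else to the primary cloud.
router = MultiCloudRouter(
    policies=[
        (lambda m: m["cold"], "cheap-archive-cloud"),
        (lambda m: m["size_gb"] > 100, "on-prem-object-store"),
    ],
    default="primary-cloud",
)
```

The point of the abstraction is that changing a policy (or swapping a back end) does not touch the applications, which keep talking to the same front-end interface.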

In terms of cloud storage, there are several alternatives to the major service providers. Some of them focus on better prices and lower or no egress fees, while others aim at high performance as well. As I wrote in another blog last week, going all-in with a single service provider might be an easy decision in the beginning, but it is a big risk in the long term.


Data storage is expensive, and cloud storage is no exception. Those who think they will save money simply by transferring all their data to the cloud are making a big mistake. For example, cold data fits perfectly in the cloud thanks to its low $/GB cost; however, if you access it again and again, costs can rise to an unsustainable level.

To avoid this problem later, it is best to think about the right strategy now. Planning and implementing the right hybrid or multi-cloud strategy can certainly help keep costs under control, while providing the flexibility needed to maintain the IT infrastructure and, therefore, the competitiveness of the business.

More about multi-cloud data controllers, alternatives to AWS S3, and two-tier storage strategies can be found in my GigaOm reports. And subscribe to the Voices in Data Storage podcast to hear the latest news and market and technology trends, with opinions, interviews, and other stories from the data and data storage world.

Originally published on Juku.it
