You’ve got backup – but how safe are you?

We hear from Ian Richardson, Head of Innovation, CSI, about having a reliable back-up strategy in place.

Most businesses have backup facilities in place to help them in the event of a data breach or physical disaster that renders their offices or data unusable. But how many know that they can retrieve that data and have their business up and running again in minutes?  

Server room floods, ransomware, fires – however your data is damaged, lost, or encrypted, do you know how quickly you can retrieve it, or even whether you can? iland found in a recent survey that half of businesses test their disaster recovery (DR) plans only annually or at less frequent intervals, while seven percent do not test their DR at all. Of the organizations testing less frequently, half said their most recent DR test suggested their plan may be inadequate, while 12% encountered issues that would result in sustained downtime. Not a single respondent said that their DR test was completely or moderately successful – everyone reported experiencing issues.

So, with most companies remaining badly behind the curve, what steps are needed to ensure that you can retrieve your data after a data breach or disaster?  

Understanding your data  

Organizations’ datasets are huge, and the ability to retrieve hundreds of terabytes in minutes is like keeping a spare car in your garage just in case your main one breaks down – it’s expensive to have it all waiting on the off chance you need it. And the faster you need it back, the more it costs.

Therefore, a core aspect of a DR strategy is to prioritize the data that is most critical to the business and focus your efforts on protecting that data first. To understand your data, look at your entire estate and define what is critical to your business operations, then prioritize it by how badly its loss would impact customer delivery. This gives you a focus and, in turn, lets you develop measures to minimize data loss in the event of a cyber-attack or disaster. You can also catalog each dataset by how much data you can afford to lose when invoking a recovery – the recovery point objective (RPO) – and how quickly it must be restored – the recovery time objective (RTO).
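To make the idea concrete, here is a minimal Python sketch of such a catalog: a few hypothetical datasets classified by customer impact, RPO, and RTO, then sorted into a recovery order. The system names and the minute figures are invented for illustration, not prescriptions.

```python
# A minimal sketch of cataloging data by priority, RPO and RTO.
# All names and figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    customer_impact: int   # 1 = greatest impact on customer delivery if lost
    rpo_minutes: int       # how much data you can afford to lose on recovery
    rto_minutes: int       # how quickly the data must be usable again

catalog = [
    Dataset("order-processing-db", customer_impact=1, rpo_minutes=15, rto_minutes=30),
    Dataset("crm", customer_impact=2, rpo_minutes=60, rto_minutes=240),
    Dataset("marketing-archive", customer_impact=3, rpo_minutes=1440, rto_minutes=4320),
]

# Recover the most customer-critical, tightest-RTO systems first.
for d in sorted(catalog, key=lambda d: (d.customer_impact, d.rto_minutes)):
    print(f"{d.name}: restore within {d.rto_minutes} min, lose at most {d.rpo_minutes} min of data")
```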

Obviously, there is a cost implication for any backup, and with datasets increasing it can be very expensive to store all your data in multiple, highly available data centers. In some cases, the costs are simply prohibitive. Virtualization tools at the server or storage layer often use cloning or snapshot capabilities that serve as a ‘backup’, but these consume space in your production storage, which is likely to be the most expensive in the environment.

Using one method for priority data backup and another for less important data can reduce costs here. Ideally, mixing disk, tape, and cloud storage strikes the right balance between cost and speed. Archived data could sit happily on cheaper tape, but your essential systems, applications, and databases should be committed to replicated disk. That way, you’ll be ready to restore essential systems rapidly if disaster strikes.
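Continuing the hypothetical catalog above, a simple policy might map each priority level to a storage medium – replicated disk for the essentials, cloud for the middle tier, tape for the archive. The mapping below is only a sketch of that idea, not any particular product’s behavior.

```python
# A minimal sketch of assigning a storage tier per priority level (assumed policy).
TIER_FOR_PRIORITY = {
    1: "replicated disk",       # essential systems, applications and databases
    2: "cloud object storage",  # important but less time-critical data
    3: "tape archive",          # archived data that can tolerate slow restores
}

def storage_tier(customer_impact: int) -> str:
    """Return the cheapest tier that still meets this priority level."""
    return TIER_FOR_PRIORITY.get(customer_impact, "tape archive")

for impact in (1, 2, 3):
    print(f"priority {impact} -> {storage_tier(impact)}")
```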


Protecting backups effectively 

But it’s not just the process of backing up your data that’s important; it’s what happens to it afterwards. Historically, DR processes were slow, and as datasets have grown the emphasis has shifted towards cloud backup rather than disk. There, the backup is prone to the same risk of cyber-attack – meaning someone could get hold of your backup as well as the company data, jeopardizing a full recovery. So how do you ensure this data is safe?

There is no one-size-fits-all when it comes to data backup. Whether on cloud, disk, or tape, it’s critical to protect these backups as you would any other data. If using a physical backup, consider storing it offsite in another location, or at least in a different building – you may need to justify this arrangement in regulatory audits or your own security assessments. A fire or natural disaster could be all it takes to wipe out your data along with your backups.

If storing digitally, use a separate file system or cloud storage service located on a physically or logically separated network. Minimize who has access to the login credentials and keep them in a separate enterprise directory to reduce the risk from a cyber-attack. Keeping your data offline and inaccessible is also an effective way of keeping it out of the hands of cybercriminals – this is known as an ‘air gap’.
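One way to keep yourself honest about offsite and air-gapped copies is a periodic audit. The sketch below checks that every dataset has at least one copy away from the primary site and at least one offline copy; the copy records are invented for the example and would normally be pulled from your backup software’s reporting.

```python
# A minimal sketch of auditing backup copies against a simple policy:
# at least one offsite copy and at least one offline (air-gapped) copy.
# The records below are hypothetical inputs.
copies = {
    "order-processing-db": [
        {"medium": "disk",  "location": "primary-dc",       "offline": False},
        {"medium": "cloud", "location": "secondary-region", "offline": False},
        {"medium": "tape",  "location": "offsite-vault",    "offline": True},
    ],
    "crm": [
        {"medium": "disk", "location": "primary-dc", "offline": False},
    ],
}

for dataset, records in copies.items():
    has_offsite = any(r["location"] != "primary-dc" for r in records)
    has_offline = any(r["offline"] for r in records)
    if not (has_offsite and has_offline):
        print(f"WARNING: {dataset} lacks an offsite and/or air-gapped copy")
```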

Test, test, and test again 

Backup is an insurance policy for your business, but unfortunately the process is often run on a shoestring budget and deprioritized in favor of more visible projects – until it’s needed. Most businesses don’t have a backup strategy and, if they do, the common error is that they’re not testing it. Cyber security frameworks strongly advise regular testing that covers who to tell if there is a loss, where the backups are stored, how long recovery is going to take, and how to ensure the backups themselves are stored safely. Automation technology can also locate new servers and applications that have been added to the network and raise a notification if they don’t appear to be backed up, as sketched below.
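As a rough illustration of that last point, the Python sketch below compares a server inventory with the systems known to a backup schedule and flags anything unprotected. The host names and both lists are invented for the example; in practice they would come from your discovery or CMDB tooling and your backup software’s catalog.

```python
# A minimal sketch of a backup coverage check, assuming two hypothetical
# inputs: a discovered server inventory and the hosts covered by backup jobs.
inventory = {"web-01", "web-02", "db-01", "new-analytics-01"}   # discovered on the network
backed_up = {"web-01", "web-02", "db-01"}                       # known to the backup schedule

# Anything on the network but absent from the schedule is unprotected.
unprotected = inventory - backed_up
for host in sorted(unprotected):
    print(f"ALERT: {host} is on the network but has no backup job configured")
```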

Get to your backups in minutes 

As datasets get bigger and bigger, recovering hundreds of terabytes of data quickly becomes critical to your business continuity – you can’t afford for that recovery to take days or weeks. Brands lose customers instantly when a data breach is reported in the media, and it can take months or years to undo the damage. But retrieving and securing your data within minutes of a breach or physical disaster goes a long way to reducing the negative impact of data loss.

By knowing your data, understanding what is critical to your business, backing that data up securely, and testing your DR processes regularly, you are much less likely to fall victim to a disaster. Proper governance of critical data can maximize revenue, customer satisfaction, and operational cost-efficiency, leaving your business resilient against the threat of data loss.

Ian Richardson, Head of Innovation, CSI

Ian has been with CSI since 2007 holding a number of technical support and solution design roles. He is currently responsible for the pre-sales of global accounts along with the development of some CSI services.
