As more companies put sensitive data in the public cloud, the security threats increase

More organisations are putting their sensitive data in the public cloud – so it comes as no surprise that cloud threats, and mistakes in SaaS, IaaS and PaaS implementations, are at an all-time high.

That is the key finding from a new report by McAfee, which argues that the old bugaboo of shared responsibility continues to trip organisations up when it comes to cloud security.

By James Bourne, 30 October 2018, 0 comments. Categories: Data & Analytics, Data Loss, Infrastructure, Research, Security, Vulnerabilities.

Protecting your company’s crown jewels: Building cloud-based backup and DR into ransomware defence

It’s a sad fact of life that whenever someone owns anything of value, there’s someone else out there who wants to get their hands on it illegally. Today’s corporate crown jewels are the critical data on which organisations depend and the highwaymen are cybercriminals, who have built a lucrative industry from ransomware attacks that disrupt businesses, steal data and aim to extract payment from their victims.

Tackling this scourge is a critical challenge for IT managers on several levels, but...

Companies moving to the cloud without assessing outage possibilities, research argues

Organisations are moving to the cloud without evaluating the impact of a cloud outage, according to the latest study from data management provider Veritas.

The report, titled The Truth in Cloud and put together alongside Vanson Bourne, found that an ‘alarming majority’ of firms shift full responsibility for data protection, compliance and uptime onto cloud service providers.

Three in five (59%) of the 1,200 global business and IT decision makers polled said dealing with cloud service interruptions...

By James Bourne, 16 March 2018, 0 comments. Categories: Data & Analytics, Data Loss, Data Management, Infrastructure, SLA.

Continuing in the face of disaster: Assessing disaster recovery in the cloud age

With 73% of businesses having suffered some type of operational interruption in the last five years, business continuity is becoming a concern for many organisations, especially SMEs. Business continuity incorporates pre-emptive measures such as cyber-defences to minimise risk, proactive tactics such as system backups in case a problem arises, and a reactive strategy, which should include disaster recovery (DR), ready in case the worst...

The glitch economy: Counting the cost of software failures

In today’s increasingly digitalised world, the effect of a software glitch can be dramatic. Take an example from July this year when a glitch caused the stock prices of well-known Nasdaq companies such as Amazon, Apple, Alphabet, eBay and Microsoft to be inaccurately listed on websites well after that day’s closing bell.

Even though the actual prices of the stocks were unchanged, the sites showed some had plummeted in price and others had nearly doubled. Unsurprisingly, many people were fooled and...

By Dalibor Siroky, 30 October 2017, 0 comments. Categories: Data Loss, Data Management, Software, Vulnerabilities.

Mind the gap: User demand and IT delivery not on the same page, says Veeam


More than four in five enterprises globally are facing the dreaded ‘availability gap’ between user demand and what IT can deliver, according to a new report from disaster recovery and backup firm Veeam.

The study, the firm’s sixth annual Veeam Availability Report, polled more than 1,000 senior IT leaders across 24 countries and found that unplanned downtime costs enterprises on average $21.8 million per year, up 36% year on year.
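As a quick sanity check on those figures, the prior-year average implied by the report's numbers can be back-computed (a sketch only; the report itself states just the $21.8 million figure and the 36% rise):

```python
# Back-of-the-envelope check on the Veeam figures quoted above.
# If $21.8m represents a 36% rise, the implied prior-year cost is:
current_cost_m = 21.8   # average annual downtime cost, $ millions
rise = 0.36             # year-on-year increase reported

previous_cost_m = current_cost_m / (1 + rise)
print(f"Implied prior-year cost: ${previous_cost_m:.1f}m")  # roughly $16.0m
```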

More than two thirds...

By James Bourne, 25 April 2017, 0 comments. Categories: Data Loss, Data Management, Disaster Recovery, Infrastructure.

A guide: Using SmartNICs to implement zero-trust cloud security

In an age of zero-trust security, enterprises are looking to secure individual virtual machines (VMs) in their on-premise data centres, cloud or hybrid environments to prevent increasingly sophisticated attacks. The problem is that firewalling individual VMs using tools like software appliance firewalls or connection tracking (Conntrack) is operationally challenging to manage. It performs poorly, restricts VM mobility and consumes many CPU cycles on servers, limiting their ability to process...
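The per-VM, default-deny model the article describes can be sketched in a few lines. This is not Conntrack or any SmartNIC API – just a minimal, illustrative policy check with hypothetical VM names, to show why zero trust means every flow is denied unless a rule explicitly permits it:

```python
# Illustrative zero-trust firewall sketch: per-VM rules, default deny.
# Not a real SmartNIC or Conntrack interface; names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_vm: str   # VM allowed to initiate the connection
    dst_vm: str   # VM it may talk to
    port: int     # destination port permitted

class VmFirewall:
    def __init__(self, rules):
        self.rules = set(rules)

    def allows(self, src_vm, dst_vm, port):
        # Zero trust: a flow is denied unless explicitly allowed.
        return Rule(src_vm, dst_vm, port) in self.rules

fw = VmFirewall([
    Rule("web-vm", "db-vm", 5432),   # web tier may reach the database
])

print(fw.allows("web-vm", "db-vm", 5432))   # True: explicitly allowed
print(fw.allows("web-vm", "db-vm", 22))     # False: default deny
```

The operational pain point in the article is exactly this rule set: maintaining it per VM in software burns CPU cycles, which is the work SmartNICs aim to offload.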

By Abhijeet Prabhune, 23 March 2017, 0 comments. Categories: Data Centres, Data Loss, Infrastructure, Security, Vulnerabilities.

How often should you test your disaster recovery plan?


By Lily Teplow

As a savvy managed service provider (MSP), you know that having an effective backup and disaster recovery (BDR) solution and disaster recovery (DR) plan is a necessity in today’s business landscape – just in case your client opens an umbrella indoors and their whole IT network crashes. However, having these reliable solutions in place is of no value if the processes aren’t...

Why you can’t let disaster recovery slide off your IT budget in 2017


As we welcome in the New Year, we are already seeing multiple blogs prognosticating 2017 trends, setting priorities and suggesting resolutions. We are also rapidly approaching the 2017 budget cycle. I am sure you will read many articles concerning new plans or resolutions for the coming year, but this one will be about an old resolution: IT disaster recovery (DR).

When disaster strikes, organisations need to be able to recover IT systems as quickly as possible. Not having a disaster...

By Monica Brink, 04 January 2017, 3 comments. Categories: Data Loss, Data Management, Disaster Recovery, Infrastructure.

Research argues overconfidence in disaster recovery is ‘common and costly’


A new UK study from cloud disaster recovery provider iland has found that 95% of respondents have faced an outage or data loss in the past year – with 87% of that number saying it triggered a failover.

The survey, conducted by Opinion Matters, which specifically polled 250 UK decision makers responsible for their company’s IT disaster recovery plans, also found that, of the 87% who had executed a failover, 82% said they were confident it would be successful, yet 55%...

By James Bourne, 22 September 2016, 0 comments. Categories: Data Loss, Data Management, Disaster Recovery, Security.

Netskope research shows how cloud malware and ransomware remain issues


According to the latest research from cloud security provider Netskope, almost 44% of malware found in cloud apps has delivered ransomware, while almost 56% of malware-infected files in cloud apps are shared publicly.

The study, which appears in the company’s latest Netskope Cloud Report, found that the number of cloud apps in enterprises keeps going up: 824 on average, up from 777 in the previous quarter. Microsoft continues to beat Google as the most popular cloud app, with...

By James Bourne, 08 September 2016, 0 comments. Categories: Data & Analytics, Data Loss, Data Management, Security, Vulnerabilities.

Cloud data backup: Inexperience and ignorance key fear factors


Organisations’ fears of cloud-based backup are mostly down to inexperience or ignorance of how the systems work rather than technological issues, according to new survey results released by analyst house Clutch.co.

The research, which surveyed more than 300 small to medium US businesses to measure the benefits and challenges of cloud-based backup, found that 87% of respondents said online backup was either equally or more secure than on-premises equivalents. 24% argued it was...

By James Bourne, 07 July 2016, 0 comments. Categories: Data & Analytics, Data Loss, Data Management, Infrastructure, Research.

A disaster recovery plan: What is your IT team keeping from you?


Your disaster recovery program is like a parachute - you don’t want to find yourself in freefall before you discover it won’t open. But amid accelerating development cycles and cost, resource and time pressures, many CIOs are failing to adequately prioritise DR planning and testing.

While IT teams are running to stand still with day-to-day responsibilities, DR efforts tend to be focused solely on infrastructure, hardware and software, neglecting the people and...

Ransomware may be a big culprit for data loss – but it’s the wrong fall guy


With researchers seeing a 3,500% increase in the internet infrastructure used by criminals to run ransomware campaigns, it’s not surprising that ransomware has been making big headlines.

The media laments the growing rings of cyber criminals that launch ransomware threats, but there’s another culprit that tends to slip under the radar: people like you and me. Sure, we’re not...

By Matt Kingswood, 22 June 2016, 0 comments. Categories: Data Loss, Data Management, Enterprise, Infrastructure, Security.

Big data loss: What to do when the almighty cloud fails


If you use the cloud, it's probably for a few main reasons: you can store large amounts of data, you can share your data easily, and you're very unlikely to lose your data. But cloud data loss does happen, and even if the chances are low, it still might happen to you. Protect your data from every angle so that you never lose it, no matter what happens.

Server failure

When you or your business stores something on the cloud, to you it seems like it's backed up in...

By JT Ripton, 17 June 2016, 0 comments. Categories: Big Data, Data & Analytics, Data Loss, Data Management, Disaster Recovery.

AWS EC2 falls over in Sydney for six hours, stormy weather blamed


Amazon Web Services (AWS) took a hit in its Sydney region for six hours over the weekend, according to official status updates, with stormy weather being blamed as the source of the problem.

The alarm was first raised on June 4 at 2247 PDT – or 1547 on June 5 Sydney time – with AWS announcing it was investigating increased connectivity issues for EC2 instances in the AP-SOUTHEAST-2 region. An hour later, a “power event” was cited as the culprit, with power...

By James Bourne, 06 June 2016, 0 comments. Categories: Amazon, Data Loss, Data Management, Disaster Recovery, Infrastructure.

Top 10 UK business disasters revealed: Is your business ready for the worst?


Managed IT services provider IT Specialists (ITS) has put together a list of the top 10 business continuity disasters to hit the UK over the past 12 months, including storms Abigail, Desmond and Katie, a fire at Holborn, and power cuts at the Royal Berkshire Hospital and Heathrow Airport.

The list of disasters encompasses various business situations, from the Forth Road Bridge closure in December, when 80,000 vehicles were diverted daily for 19 days, to the Heathrow power cut which...

Opinion: How to achieve a solid business continuity strategy


Over the last 12 months, the UK has seen floods and fires upend businesses of all sorts – from hospitals, to factories, to recycling centres. Commuters faced their own significant set of challenges, with incidents such as the closure of the Forth Road Bridge and the Heathrow Airport power cut, which diverted more than 130 flights. Meanwhile, the Met Office has had plenty of opportunities to test the new convention of naming storms, with the likes of Katie, Abigail and Desmond...

How watertight is your case for keeping data safe and dry?


By Steve Davis, marketing director, NGD

It is 2016 and Britain is on flood alert - again. The latest terrible flooding suffered by residents and business owners in the North of England was made even worse by happening in the run-up to Christmas and through the New Year period.

Flooding of course is not limited to the North. It is a nationwide phenomenon bearing in mind the dreadful events down in Somerset a couple of years ago, the same winter which also saw the Thames burst its...

By Steve Davis, 11 January 2016, 0 comments. Categories: Data & Analytics, Data Centres, Data Loss, Data Management.

Disaster recovery: How to reduce the business risk at distance


Geographic distance is a necessity because disaster recovery data centres have to be placed outside the circle of disruption.

The meaning of this term depends on the type of disaster. It could be a natural phenomenon such as an earthquake, a volcanic eruption, a flood or a fire. Calamities are caused by human error too, so the definition of the circle of disruption varies. In the past, data centres were on average kept 30 miles apart, as this...
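The "outside the circle of disruption" test above amounts to a simple great-circle distance check. A hedged sketch follows, using the haversine formula; the site coordinates and the 30-mile radius are illustrative assumptions, not figures from the article:

```python
# Sketch: verify a DR site sits outside a "circle of disruption".
# Coordinates (London and Manchester city centres) and the 30-mile
# radius are illustrative only.
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

distance = haversine_miles(51.5074, -0.1278, 53.4808, -2.2426)
print(f"Sites are {distance:.0f} miles apart")
assert distance > 30, "DR site is inside the circle of disruption"
```

A real placement decision would widen the radius to match the hazard (flood plain, seismic zone, shared power grid) rather than a fixed mileage.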

By Graham Jarvis, 18 November 2015, 0 comments. Categories: CIO, Data & Analytics, Data Centres, Data Loss, Data Management, Disaster Recovery.