Protecting Sensitive Data: When “Putting It in the Cloud” Doesn’t Cut It

Have you ever had an executive tell you to just “put it in the cloud?” It’s a deceptively simple command, and an increasingly common one, too. No one wants to break the news to their boss that the task isn’t as easy or inexpensive as it seems, especially when you’re dealing with sensitive data. That’s because it requires planning, strategy, and expert implementation to successfully place that data into a cloud ecosystem.

What is Sensitive Data?

The definition may vary from company to company, but sensitive data is generally considered any information that makes up the most important components of mission-critical business systems. When storing these files, it’s important to consider the following tenets of data security and governance:

  • Data availability
  • Confidentiality
  • Integrity
  • Regulatory and compliance considerations
  • Performance and cost issues

In other words, sensitive data must be accessible quickly and without interruption, yet it must also remain confidential, tightly controlled, and protected from breaches.

How Can Organizations Strategically Protect Sensitive Data?

The leading cloud strategies of today are the result of years of evolution within the industry. Cloud solutions are more agile, secure, and accessible than ever before, and they’re multiplying at a torrid rate, with enterprises across all industries using an average of 1,181 separate cloud services.1 That’s an alarmingly high number that raises several security concerns around the sharing of sensitive data. However, with a more careful approach to your strategy, you can avoid the overuse of cloud services.

Planning Considerations

  1. Think of your journey to the cloud as incremental steps, not an all-or-nothing proposition—especially when it comes to mission-critical applications.
  2. Consider a hybrid environment that offers a combination of public cloud services and a hosted IT infrastructure. This will allow for more private handling of mission-critical workloads and sensitive data that require PCI or HIPAA compliance.
  3. If your data contains sensitive financial or government information, ensure that your cloud solutions offer strict controls to help you comply with country-specific requirements, such as GDPR regulations.
  4. Make note of any legacy applications that may require traditional bare-metal solutions and isolation from other customers.

Isolating Sensitive Data With a DMZ

In cases where sensitive data is maintained in a public cloud, a multi-tier network architecture is typically deployed. This strategy creates a so-called “demilitarized zone,” or DMZ, that isolates sensitive data from public-facing web servers. Alternatively, customers may opt for a “zero-trust” architecture to secure their sensitive data from both external and internal threats. In this scenario, anyone who tries to access data anywhere in the network must verify their identity and permissions each time they request access, whenever they change locations, or whenever their activity crosses predetermined thresholds. This type of architecture becomes even more attractive in a hybrid solution where full control of all endpoints is not possible or preferable.
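To make the zero-trust pattern concrete, below is a minimal sketch (in Python) of a per-request policy check: identity, permission, and location are re-evaluated on every access, and anything that fails a check is denied by default. All of the names here are hypothetical; a real deployment would delegate these checks to an identity provider and a policy engine.

```python
# A minimal zero-trust policy check, illustrating the idea described above.
# All names (Request, is_allowed, etc.) are hypothetical; real deployments
# would delegate these checks to an identity provider and policy service.

from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    token_valid: bool          # identity re-verified for this request
    permissions: set           # permissions attached to the verified identity
    resource: str              # resource being accessed
    location: str              # where the request originates
    last_known_location: str   # location seen on the previous request

def is_allowed(req: Request, required_permission: str) -> bool:
    """Deny by default; every request must re-prove identity and permission."""
    if not req.token_valid:
        return False                      # identity must be verified each time
    if required_permission not in req.permissions:
        return False                      # least privilege: no implicit access
    if req.location != req.last_known_location:
        return False                      # location change forces re-authentication
    return True

# Example: a request from a new location is rejected even with valid credentials.
req = Request("analyst-7", True, {"read:billing"}, "billing-db", "office", "home")
print(is_allowed(req, "read:billing"))    # False -> step-up authentication required
```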

Currently, we’re seeing very few companies store sensitive data solely in the cloud. In fact, only 23% of organizations felt they could completely trust public clouds to keep their data secure in 2017.2 Companies in their infancy might give pure cloud deployments a try, but the reality is that as their business grows, cost, complexity, and security concerns come into play. Wide-scale data breaches have also left customers wary of cloud-based storage, and many brand reputations have been damaged in the wake of these incidents. While these issues are most often due to misconfigurations, they sometimes can be traced to the rapid and uncontrolled sprawl of cloud services, a weak integration strategy, and a failure to adopt a true hybrid cloud architecture that isolates sensitive data while enabling elasticity.

The Case for Hybrid Cloud Services

To keep sensitive data safe from breaches, many enterprises are opting to keep it on dedicated, bare-metal infrastructure. With hybrid cloud services, such as Direct Connect capabilities through their hosting and MSP partners, they can bridge to the public cloud at the same time. This allows organizations to set up virtual private clouds that securely transmit data between an on-premises or dedicated hosted infrastructure and their public cloud resources.

Hybrid cloud adoption increased three-fold between 2016 and 2017,2 likely due to the fact that this strategy allows organizations to:

  • Pay for infrastructure on a monthly basis
  • Enjoy the performance and control of bare-metal infrastructures
  • Avoid large Capex outlays
  • Physically secure their infrastructure

A Promising New Chapter for Sensitive Data

It should go without saying that no two organizations’ needs are the same, especially when it comes to storing sensitive data. But one thing is clear—a rush job isn’t going to cut it. Instead, select a technology partner that offers a custom and targeted approach. Together, you can carefully plan and develop a solution that’s secure, affordable, and customized to meet the needs of your organization.

See the Hybrid Cloud in Action

Ntirety’s global enterprise customers trust our expertise to manage the infrastructure containing their most sensitive data. When Samsung needed to secure its SmartTV application, it was Ntirety that enabled it to become the first smart television app in the world to achieve PCI compliance. Ntirety has the in-depth expertise to design, build, secure, and operate infrastructures containing highly sensitive, mission-critical data, including PCI-, FERPA-, and HIPAA-compliant infrastructures.

Talk to one of our Security Experts, or get a free Security Assessment »

  1. Netskope, February 2018.
  2. McAfee, 2017. Building Trust in a Cloudy Sky: The state of cloud adoption and security.

Where the Business Cloud Weathers the Storm

Ntirety president and CEO Emil Sayegh provides a thought leadership piece on InfoWorld’s website. The column, titled “Where the Business Cloud Weathers the Storm,” discusses disaster preparedness for data centers and the benefits of utilizing cloud technologies to prevent potential interruptions in your mission-critical applications.

This month, Emil reflects on the recent storms from this year’s hurricane season and the impact they have on the productivity, availability, and overall infrastructure of communications operations. He explains how cloud hosting services such as Ntirety’s can provide the assurance and capability to manage and minimize downtime, whether brought on by nature or human error.

Read the full post now: Where the Business Cloud Weathers the Storm

4 Steps to GDPR Compliance in the Cloud

The General Data Protection Regulation (GDPR) goes into effect May 25, 2018. While GDPR is an EU-established protocol, it affects any business worldwide that collects data on EU citizens. Non-European enterprises providing any form of goods or service to European citizens will need to comply with the new mandate.

Is your company GDPR-compliant? While you may think it is, a recent research report shows that only 2% of companies that believe they are compliant actually meet specific GDPR provisions.

Many Companies Unprepared for May 25

The aforementioned research revealed that 48% of companies that stated they were GDPR-ready did not have adequate visibility into personal data loss incidents. Fully 61% of the same group revealed that they have difficulty identifying and reporting an incident within 72 hours of a breach, which is mandatory when there is a risk to data subjects.

Much of this unpreparedness stems from an insufficient understanding of GDPR’s provisions. Penalties for non-compliance are stiff, with fines as high as €20 million (roughly $21 million) or 4% of global annual turnover, whichever is greater. These risks are raising red flags, with a study from Veritas revealing that 86% of companies surveyed worldwide have expressed concerns over non-compliance, both in terms of penalty fees and damage to their brand image.

Steps to GDPR Compliance

The steps outlined below need to be part of a collaborative effort between companies and their cloud service providers (CSPs). While CSPs are also responsible for conforming to GDPR guidelines, it’s a mistake for companies to ignore GDPR and believe that their CSP will completely take the responsibility off their hands.

1. Perform Data Privacy Impact Assessment (DPIA)

Your organization needs to conduct routine DPIAs to identify compliance shortcomings. Customers should understand how their data is protected as it traverses various networks and storage systems.

2. Acquire Data Subject Consent

Companies must have client consent before processing their personal data. Under GDPR, the consent must be voluntary, and clients have the right to revoke it at any time. Consent must also be recorded and stored.
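As a rough illustration, the sketch below shows one way consent decisions could be recorded and revoked with timestamps, assuming a simple in-memory store; a production system would persist these events in an auditable database.

```python
# A minimal sketch of recording and revoking consent, assuming an in-memory
# store; production systems would persist this in an auditable database.

from datetime import datetime, timezone

consent_log = {}  # subject_id -> list of consent events

def record_consent(subject_id: str, purpose: str, granted: bool) -> None:
    """Store every consent decision with a timestamp so it can be evidenced later."""
    consent_log.setdefault(subject_id, []).append({
        "purpose": purpose,
        "granted": granted,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def has_valid_consent(subject_id: str, purpose: str) -> bool:
    """Consent is valid only if the most recent decision for this purpose granted it."""
    events = [e for e in consent_log.get(subject_id, []) if e["purpose"] == purpose]
    return bool(events) and events[-1]["granted"]

record_consent("subject-42", "marketing_email", True)       # voluntary opt-in
record_consent("subject-42", "marketing_email", False)      # later revoked
print(has_valid_consent("subject-42", "marketing_email"))   # False
```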

3. Protect Data Subject Rights

On the CSP’s end, administrators must allow their clients’ customers to access their data upon request. Customers have the right to transfer or make corrections to the data. CSPs must respond to requests within a specified timeframe, usually within 30 days.

4. Satisfy New Obligations

Under new GDPR guidelines, organizations are obligated to inform clients of a data breach within 72 hours. Companies need to coordinate with their CSP if the breach occurred on the latter’s end. GDPR policy states that customers can hold both the company and its CSP liable.

Prepare for GDPR Compliance with HOSTING

With the GDPR deadline looming, it helps to work with compliance experts to ensure you’re taking the right steps. At HOSTING, our team of compliance experts is ready to help your organization build, migrate and manage a compliant cloud environment with offerings and a level of experience that’s unmatched in the industry. Click here to learn more about our offerings.

Safety, Security and Hyperscale Public Cloud

Almost everyone uses hyperscale public cloud, whether they’re aware of it or not. The underlying technologies enabling consumption and interaction with web applications are powered by a number of multibillion dollar giants like Amazon, Google, IBM, and Microsoft that together have created a cloud horsepower and technology arms race.

As a result, a commodity market has emerged that includes consumption-based pricing, geolocation, rapid provisioning, extensive developer tools and resources, rapid storage, and more. The last 10 years have seen the emergence of some exciting high-demand workloads: machine learning, artificial intelligence, massive multiplayer online games, big data, mobile applications, and the Internet of Things, all of which have been fueled by hyperscale public cloud.

Highly Scalable. Capable of handling large data volumes. Distributed. Fast. Cost effective. These are some of the benefits hyperscale platforms deliver. Most people get it – public cloud services provide an immediate advantage in terms of flexibility and ease of use. Unfortunately, what is often overlooked are the security risks that have entered the picture in an increasingly hyperscale world.

The Enemy Within

Tremendous advantages aside, if you’re a business leader, you need to be aware of how hyperscale public cloud solutions can harbor hidden dangers if not properly architected, configured, and managed.

A few very recent examples:

• BroadSoft, a global communications software and service provider, had a massive unintentional data exposure. Cloud-based repositories built on the AWS S3 platform were misconfigured, allowing public access to sensitive data belonging to millions of subscribers.

• In another case, improper configuration of AWS S3 storage and insufficient security solutions exposed records of 14 million Verizon customers.

• TigerSwan, a private military contractor, left “Top Secret” data similarly unprotected on an AWS S3 storage bucket.

The Real Problem

The problem is not with the hyperscale public cloud itself. These breaches show how human error and limited security are the weak links in the information security chain. A single misstep, lapse in process, or misconfiguration can result in a massive exposure of data to the entire world. Organizations that use hyperscale computing can remain at risk from these and other kinds of security incidents because they often utilize bolt-on security solutions, manual security auditing, manual incident remediation, and other legacy practices and tools that threaten overall security posture.
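As a rough illustration of what automated auditing can look like, the sketch below uses boto3 (the AWS SDK for Python) to flag S3 buckets whose ACLs grant access to public groups or that have no public-access block configured. It is illustrative only; a complete audit would also inspect bucket policies and account-level settings.

```python
# A minimal audit sketch using boto3: flag S3 buckets whose ACLs grant access
# to "AllUsers" or "AuthenticatedUsers", or that have no public-access block
# configured. Illustrative only; a real audit would also check bucket policies
# and account-level settings.

import boto3
from botocore.exceptions import ClientError

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # Any ACL grant to a public group means the bucket contents may be exposed.
    acl = s3.get_bucket_acl(Bucket=name)
    publicly_granted = any(
        grant["Grantee"].get("URI") in PUBLIC_GROUPS for grant in acl["Grants"]
    )

    # A missing public-access block is a common misconfiguration.
    try:
        s3.get_public_access_block(Bucket=name)
        has_block = True
    except ClientError:
        has_block = False

    if publicly_granted or not has_block:
        print(f"REVIEW: {name} (public grant: {publicly_granted}, block: {has_block})")
```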

You CAN Hyperscale – Safely

As the cloud story continues to evolve, we will witness more stories about security breaches. But it’s not a story that has to happen to your business or organization.

Today’s issues can be addressed by implementing security best practices and technologies that protect modern data, system configurations, and applications. Constructs such as infrastructure automation, cloud-aware security policies and technologies, and coded system policies are examples of next-level security that minimizes risk, particularly in the cloud. It’s not a simple path (especially without a high level of security expertise), but the rewards are most certainly worth the effort.

Organizations of all sizes have simplified and improved their security posture by working with a trusted managed service provider such as Ntirety to help architect, configure, and manage a comprehensive cloud security solution designed specifically for their unique requirements. Ntirety has both the expertise and broad portfolio of managed security services to help ensure that your cloud solution is properly configured to meet your objectives while protecting your business and customer data.

To learn more about building a highly secure and scalable hyperscale public cloud solution, contact an Ntirety expert at +1.866.680.7556 or chat with us today.

The 6 Stages of a Malicious Cyber Attack

You don’t have to look very far to find an example of a malicious cyberattack. Take, for example, the June 2017 hack of password manager OneLogin: intruders accessed a set of Amazon Web Services (AWS) keys and were able to decrypt data that was assumed to be secure. What makes this breach even scarier is that many people who use a password manager like OneLogin don’t just use it for personal passwords. They use it for work passwords, too.

Knowing that the potential for a breach lies both within your business infrastructure and through employees as a point of access should spur any organization into getting serious about understanding how security is compromised. One of the best places to start is by arming yourself with a baseline understanding of the tactics used by cybercriminals.

The first step in understanding these tactics is educating yourself about the types of attacks that can occur. The two most common are web application compromises (usually seen in the finance, entertainment, and education industries) and distributed denial of service (DDoS) attacks (prevalent across every industry).

The next step is to understand the stages of a breach.  Although the types of compromises can vary, most attacks involve the following stages:

  1. Reconnaissance – Forming the attack strategy.
  2. Scan – Searching for vulnerabilities.
  3. Exploit – Beginning the attack.
  4. Access Maintenance – Gathering as much data as possible.
  5. Exfiltration – Stealing sensitive data.
  6. Identification Prevention – Disguising presence to maintain access.

Hackers and Healthcare Data: Love at First Breach

2016 was a record-breaking year for the healthcare industry, and not in a positive way. Last year saw the most healthcare data breaches in history, and this trend hasn’t slowed in 2017.

HIPAA Journal reported that 2017 is likely to break even more records for the medical sector. More than 16.6 million healthcare records were stolen or in some way compromised in 2016, and the first three months of this year alone exposed more than 1.7 million records.

Why Are Healthcare Records So Prized by Hackers?

In recent years, the healthcare industry has seen more breaches than several other sectors combined, and the reason for all of these attacks is simple – medical care providers require more details from patients than other types of organizations, and thus have more sensitive information on hand.

Patient details, including names, birthdates, addresses, Social Security numbers, medical histories, and payment information, can be used for all types of nefarious purposes. From receiving fraudulent treatments and medications to selling individuals’ personal information in underground marketplaces, this industry represents fertile ground for today’s hackers.

In fact, CNBC reported that stolen data gathered from healthcare breaches has even been used to file fraudulent tax returns. Organized crime rings have been known to connect with specialists who can help streamline malicious schemes.

“You have experts in different fields,” noted Etay Maor, IBM Security executive advisor. “There are those who are great at obtaining information. And then there are other guys who will buy this data and use it to commit fraud.”

High Prices for Stolen Data

One of the main motivations behind any hack is the potential for profit. The market for stolen healthcare data reached a peak in 2015 and 2016, with sources like CNBC reporting in 2016 that medical records could be as much as 60 times more valuable than credit card details. Reuters released similar findings in 2014, noting at the time that health care information was worth 10 times more than credit card numbers.

In 2016, a single medical record earned a hacker $60, as the file contained a wealth of details, including the patient’s name, birth date, address, phone number and employment information. At the same time, one Social Security number fetched only $15.

Times have changed recently, as underground markets became flooded with stolen information garnered from the rash of healthcare data breaches over the last months and years. In fact, CSO reported at the end of 2016 that medical records had dropped to $10 apiece on the black market, with some files selling for as low as $1.50 each.

“The market has become saturated,” CSO contributor Maria Korolov wrote. “With about 112 million records stolen in 2015 alone, the medical info of nearly half of all Americans is already out there.”

However, just because the price per record has fallen doesn’t mean hackers are abandoning the healthcare industry as a top target. There’s still considerable money to be made through these malicious activities, and several notable breaches have already taken place this year. Since the beginning of 2017, the data of more than 900,000 seniors has been stolen and exposed after a former HealthNow Networks employee published a backup database online. ABCD Children’s Pediatrics was also breached, putting over 55,000 patients at risk.

Healthcare data, in short, remains incredibly valuable to cybercriminals, and healthcare organizations must continue to strengthen their protections. One way to do this is to work with a cloud hosting provider with deep expertise serving clients that manage electronic protected health information (ePHI).

With over 19 years of cloud hosting experience, Ntirety is a HIPAA-compliant service provider offering highly secure, BAA-backed, third-party-audited and -approved cloud hosting solutions that support a wide range of requirements and budgets.

To find out more about how Ntirety can help support your compliance goals in the cloud, call +1.866.680.7556 or chat with us today.

Still learning about HIPAA compliance and whether your business needs it? Download our simplified eBook.

A Layered Approach to Ransomware Protection

The latest string of ransomware attacks, plaguing more than 100 countries worldwide, has many researchers scratching their heads as to exactly how they happened and how they spread so quickly.

While the researchers are doing their digging, there’s one thing we do know for sure: there are steps you can be taking right now to limit your risk of exposure. You may be taking some of them already, but if you’re not taking a layered approach to ransomware protection, you may be leaving your company exposed in ways you don’t realize. Read on to learn more.

Some background for those who are newer to the ransomware scene:

Malware is any code, installed maliciously or accidentally, that gives a third party access to data on a computer or server system (virtual or dedicated). Ransomware is a specific type of malware that takes all or a subset of the files on a given system and, typically, applies an encryption algorithm that effectively blocks useful access to the rightful owner of the files until a ransom is paid or some other extortion demand is met.

The most obvious and effective way to limit risk of this type of exposure is to make sure that your system isn’t vulnerable or exposed to the exploit in the first place. Thus, a properly configured Managed Firewall, Managed Patching, and Managed Anti-Virus are the best and most pertinent first lines of defense.

But because of the resourcefulness of the black hat hacker community and malware developers, there will always be a risk of a new zero-day exploit, which firewall, patching, and AV (antivirus) will not be able to stop.

At HOSTING, we believe it’s very important, and most effective, to apply security in a layered approach. The more layers that can be applied to a system, the more protected it will be. Therefore, we recommend data backups and Cloud Recovery Services (CRS) to roll data back to a usable state.

Here are the layers that we provide for our customers to help mitigate their risk and keep them from falling prey to these attacks:

  1. Managed AV (Antivirus)

    a. Benefits – Effective in stopping known exploits and malware for currently supported Operating Systems.

    b. Limitations – Only effective if it’s configured properly and managed. Ensure that your current service provider can effectively manage your AV solution.

  2. Managed Patching

    a. Benefits – Stops known exploits and malware for currently supported Operating Systems.

    b. Limitations – Only effective if patches are applied when released by the OS (operating system) or application vendor. Ensure that you’re applying your patches when they’re released by the OS, or find a provider who will develop a custom SOW and work with you to ensure patching is executed in a timely manner.

  3. Backups

    a. Benefits – Offers a point-in-time recovery option for infected files.

    b. Limitations – In some scenarios, a full system-level restore is not possible, so recovery from malware may be time consuming. This is not a limitation of the service itself, but the more time that passes between the exposure and the restore, the higher the risk of data loss due to RPO (recovery point objective) considerations. If the date of infection falls outside the backup retention period, the file(s) could be unrecoverable (see the sketch after this list).

  4. CRS (Cloud Recovery Services)

    a. Benefits – Offers limited point-in-time recovery option for entire VMs.

    b. Limitations – CRS is available only for virtual machines, not for dedicated servers. The journal length dictates the maximum age of a restore, which is typically much shorter than backup retention periods. CRS is most effective in reversing exposure if the exposure is caught quickly.

  5. Managed Firewall

    a. Benefits – Blocks malicious code that is not destined for a customer/service required port.

    b. Limitations – Outbound traffic isn’t typically blocked via policy, making accidental user-initiated exposure possible. Vulnerabilities that leverage customer/service required ports are also not blocked.

  6. Other Services (Threat Manager, Log Manager, Web Application Firewall or WAF)

    a. Benefits – Threat Manager and Log Manager are effective in identifying and alerting to potentially malicious activity. A WAF blocks a very specific subset of known vulnerabilities at the front-end web tier.

    b. Limitations – Threat Manager and Log Manager can’t block malware activity; they are only effective for identifying and alerting. A WAF is effective for blocking a very specific subset of front-end web exploits and is limited in scope. Zero-day exploits are unlikely to be identified and blocked, as it takes some time for new exploit definitions to be imported into the service.
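To make the backup and CRS trade-offs above concrete, here is a minimal sketch of restore-point selection: given a set of backup timestamps, an infection time, and a retention window, it returns the newest clean backup, or nothing if the infection predates retention. The values are hypothetical and purely illustrative.

```python
# A minimal sketch of picking a restore point, assuming daily backups and a
# fixed retention window. It illustrates the RPO and retention trade-offs
# described above; the names and values are hypothetical.

from datetime import datetime, timedelta

def newest_clean_backup(backups, infection_time, retention, now):
    """Return the newest backup taken before infection and still within retention."""
    candidates = [b for b in backups if b < infection_time and now - b <= retention]
    return max(candidates) if candidates else None

now = datetime(2018, 5, 1)
backups = [now - timedelta(days=d) for d in range(1, 31)]   # 30 daily backups
infection = now - timedelta(days=3)                         # infection 3 days ago

restore = newest_clean_backup(backups, infection, timedelta(days=14), now)
if restore:
    print(f"Restore from {restore:%Y-%m-%d}; data after that point is lost (RPO).")
else:
    print("No clean backup within retention: file(s) unrecoverable.")
```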

-Roland Scarinci, Senior Director of Solution Architecture at HOSTING

Click here to learn more about how HOSTING Unified Cloud Security services keep our customers – and their data – safe and secure.

When Security is Your DR Strategy

Both Disaster Recovery (DR) and Security/Compliance are critical aspects of your business, but many fail to see how these strategies relate to each other. For example, many security/compliance standards have requirements that speak to the state of the backups you’re running – for example: Are they encrypted? How long are they kept?

Beyond this simple example, many of the tools and processes used in one area directly apply to the other, and the best way to achieve your goal – 100% application uptime – is to use them in conjunction with each other.

Let’s take ransomware as another example, one which has been on the rise over the last few years and predicted to continue its rise in 2017. The recent WannaCry attack, impacting over 100 countries, is one such very painful example of ransomware’s capabilities and alarming rise. We won’t recap WannaCry here today, in part because I can almost guarantee that most of the articles covering the attack also focused on common protective measures such as antivirus, intrusion detection, and the importance of regularly updating your OS.

There are alternative mitigation techniques, which take advantage of Disaster Recovery services, that I’d urge you to consider along with these common suggestions:

  1. Regular backups of your data, including the OS – If the ransomware is trying to hold your data hostage, it won’t matter as long as you have other copies of that data. And all the better if you have the ability to perform bare-metal restores as well.
  2. Replication technology with journaling – Some technology used for replicating servers and virtual machines, such as Zerto, has a journaling feature that allows you to fail over to your DR site in a historic state rather than a current one. Similar to using backups, but with potentially much less downtime if you have a complete DR solution that is regularly tested.

In both examples above there is one caveat – you have to accept the fact that you’re not using the most recent version of your data, but potentially something hours (or days, or weeks) old, in order to roll back to a point before the ransomware was installed.

So we’ve seen how DR can be applied to enhance Security – is the converse statement true? Absolutely. While having a DR site and the ability to failover to it is nice, there is always a price to pay. Whether in loss of data (which could be minimal, admittedly) or in the time it takes to complete the failover, which grows in exponential proportion to the complexity of the environment, DR is not likely the whole solution.

Case in point: most companies address DR by considering only potential catastrophes such as natural disasters and by ensuring they only need to fail over in an absolute emergency, e.g. a natural disaster at the primary datacenter. But, you know what’s not a natural disaster? Being hacked, that’s what. And though it’s not natural, it can have the same disastrous potential. That’s where Security, applied with DR, provides some good news – if you’re addressing both as part of your Cloud DR strategy, you’re covering as many bases as you can.

To sum up, both categories of service should go hand in hand, and significant investments in one should lead to similar investments in the other, or else you’re just shifting your risk from one bucket to another one, and hackers don’t care about your buckets. Too often I see companies put artificial limits on their spend based on categories that are meaningless – i.e. “We can spend 10% of revenue on Security, but only 5% on DR”. That’s like choosing whether to leave your front door unlocked or your windows wide open (pardon the pun there). In either case, be prepared to lose some furniture.

-Jamie Price, Account Executive at HOSTING

Shared Security Model – Where Does Your Service Provider Draw the Line?

FISMA… GLBA… SOX… HIPAA… PCI… If this looks like alphabet soup to you, you’re not alone. Many of our customers don’t know the full scope of the requirements that the government or other regulatory bodies hold them responsible for. It’s easy to get lost with the vast amounts of information and jargon in any compliance framework.

When you take the step of moving your compliance-based operation to the cloud, there is one major point to consider: the shared security model. If you are working with a service provider and they say that they can provide a HIPAA or a PCI solution (or whatever alphabet-soup regulation your business is under), proceed VERY cautiously. In most cases you will be engaging in a relationship where the service provider may, and likely will, provide only some of the controls needed to assure the compliance you require.

Simply stated, the shared security model means that different entities will be responsible for providing the security controls to meet all of the requirements of a regulation. This very often includes responsibilities assigned to the customer (you!). This model is not new. From the very early days of cloud hosting, most providers drew the line in terms of security controls to exclude the customer’s application code. This was because, in general, the customer would have a MUCH better view of the development methodologies and be in a better position to assure that proper security-focused development was occurring. Additionally, many regulations require a lot of internal policies and procedures, and documentation thereof. Again, most customers would have greater visibility and control over that type of activity.

Times have changed a bit. There are now certainly organizations that will handle all aspects of your compliance requirements, but that service comes with a hefty price tag. What you want to make sure of is that all aspects of the compliance framework, in terms of responsibility, are clearly defined by the service provider. What you don’t want is to find out, possibly after an infraction, that a control you thought your service provider covered was, in fact, in your realm of responsibility.

This is not an exhaustive list, and you will want to reference your specific regulatory framework to create a comprehensive list of controls, but below are some items you may want to consider from a responsibility standpoint (a simple responsibility-matrix sketch follows the list):

  • Physical Control of Facilities
  • Physical Access Controls of Staff Based on Job Function
  • Encryption of Sensitive Data
  • Network Security, such as:
    • Firewall Configuration and Management
    • Log Management
    • Intrusion Detection Systems
    • Web Application Firewalls
  • File Integrity Monitoring
  • Customer Code Security Review
  • Governance, Risk Management, and Compliance Tracking
  • Network Communication Encryption
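One practical way to keep that line visible is to maintain an explicit responsibility matrix and review it with your provider before an auditor does. The sketch below (in Python) shows the idea; the owner assignments are hypothetical examples, not recommendations.

```python
# A minimal sketch of a shared-responsibility matrix: a plain mapping from
# control to owner that can be reviewed with your provider before signing.
# The assignments below are hypothetical examples, not recommendations.

RESPONSIBILITY = {
    "Physical control of facilities":        "provider",
    "Physical access controls (staff)":      "provider",
    "Encryption of sensitive data":          "shared",
    "Firewall configuration and management": "provider",
    "Log management":                        "shared",
    "Intrusion detection systems":           "provider",
    "Web application firewalls":             "shared",
    "File integrity monitoring":             "provider",
    "Customer code security review":         "customer",
    "GRC tracking":                          "customer",
    "Network communication encryption":      "shared",
}

# Surface anything that lands on your side of the line before an audit does.
for control, owner in RESPONSIBILITY.items():
    if owner != "provider":
        print(f"Customer involvement required: {control} ({owner})")
```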

-Roland Scarinci, Senior Director of Solution Architecture at HOSTING

Make sure you don’t make a mess of your alphabet soup! See how HOSTING can help.

Designing for Resilience…It’s Not About the Platform

Forget about not being able to access critical business applications like Salesforce, Office 365, GoToMeeting, ADP, or GitHub; can you imagine a world (or even 30 minutes) without access to your favorite social media sites like Twitter, Facebook, and Snapchat? (Some of us should consider a break, but that’s a subject for another day.) The recent outage of a portion of the Amazon AWS hyper-scale cloud environment has thrust application availability to the forefront of public discussion once again.

What IT professionals need to remember is that it doesn’t matter what platform your application is running on, you, as the steward of the application, are responsible for designing an architecture that ensures the viability and resilience of your applications and data.

Below are five architectural design best practices to keep in mind when deploying your applications on any of the public, hyper-scale cloud platforms (Amazon AWS, Google Cloud Platform, Microsoft Azure, etc.) or private environments (physical servers, Microsoft HyperV, VMware ESXi, etc.).

1. Provider Redundancy

While all of the major public, hyper-scale cloud platforms have done a great job ensuring redundancy in their cloud platforms, there is always the chance that a software bug, human error or cyber-attack could dramatically affect the performance and/or availability of a large portion of the hyper-scale cloud provider’s environment.

Developing a redundant, provider-agnostic architecture also provides an organization with the flexibility to move any given application to another cloud provider to meet future business and/or IT objectives.

A number of providers have sprung up over the last 12 – 18 months to address the need for platform flexibility.  Some of these solutions, such as the ZERODOWN software developed by ZeroNines, allow for the quick shifting of application workloads from one hyper-scale cloud provider to another without the loss of functionality or data, thus making an outage or severe performance issue with one of the hyper-scale cloud providers a non-event.

2. Geographic Redundancy

Whether an organization decides to develop a multi-provider architecture or not, it is important to have geographic redundancy when it comes to where you host the data and applications. Geographic redundancy is important to guard against regional risks such as:

  • Political turmoil (Ukraine, Middle East, North Korea, etc.)
  • Large natural disasters (think Hurricane Katrina in 2005)
  • Large power outages (think the large power outage that affected the entire Northeast part of the United States in 2003 or the rolling blackouts that have occurred in Southern California)
  • Large-scale telecommunication outages

To adequately guard against geographic uncertainties, a geographically redundant architectural design should provide a minimum of 250 miles of separation between data centers. This amount of separation will typically guard against natural disasters, power-grid failures, and network failures.

Of course, distance alone will not guard against political risk; depending on the stability of the country hosting the applications and data, further separation may be required to protect against any political unrest.
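As a quick sanity check on that guideline, the sketch below uses the haversine formula to estimate the great-circle distance between two candidate data center locations; the coordinates are approximate city centers used purely as an example.

```python
# A quick sanity check for the 250-mile guideline, using the haversine formula
# to estimate the great-circle distance between two candidate data centers.
# Coordinates below are approximate city centers, used purely as an example.

from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))   # Earth's mean radius is roughly 3,958.8 miles

denver = (39.74, -104.99)
dallas = (32.78, -96.80)

distance = miles_between(*denver, *dallas)
print(f"{distance:.0f} miles apart -> {'meets' if distance >= 250 else 'fails'} the 250-mile guideline")
```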

Along with native solutions provided by the hyper-scale cloud providers, organizations can look at solutions such as Zerto’s Virtual Replication in order to replicate applications and data from one geographic region to another.

3. Data Center Redundancy

Data center redundancy takes two forms…

The first is hosting your applications and data within multiple data centers, even if they are both within the same general geographic area (less than 250 miles apart). This physical data center redundancy guards against localized weather events, terrorist attacks, local power grid and network failures, building failures (fire, flood, etc.), and the proverbial “backhoe through the cables”.

In addition to physical data center redundancy, organizations must ensure that there is redundancy built into the individual data centers themselves. This is important because the most successful disaster plan is the one that will not need to be executed in a real-life situation. By ensuring that your data center is highly redundant within its own walls, you dramatically decrease the chances that you will actually have to declare a disaster and move workload requests to another location. Intra-data center redundancy means eliminating single points of failure within the data center for items such as power delivery, distribution and backup power, internal and external network connectivity, fire suppression systems, and cooling systems.

Along with native solutions provided by the hyper-scale cloud providers, organizations can look at solutions like Zerto’s Virtual Replication that easily replicate applications and data within a data center, as well as between data centers.  If a virtualization solution such as VMware’s ESXi or Microsoft’s HyperV solution is used, organizations can take advantage of virtualization-platform specific toolsets to seamlessly migrate workloads within or between data centers.

4. Application Redundancy 

To further protect applications from a failure, it is highly recommended that an organization look at ways to make the applications themselves highly redundant.  An application can fail for a number of reasons, including hardware failure; operating system failure; application bugs; cyber-attack; and unplanned spikes in utilization.

An application architecture that leverages microservices and horizontal scaling, along with a well-defined software development and deployment methodology, will help guard against failures at the application level.

Tools such as application and web load balancers allow you to route work requests to the available resources. In the case of an individual application server failing (whether due to a hardware, software, or network failure), the workloads can be seamlessly routed to the remaining application servers. The load balancer architecture will also protect the performance and availability of the application in the case of unplanned spikes in utilization.
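The sketch below illustrates that routing behavior with a toy round-robin balancer that skips unhealthy servers; a real deployment would rely on a managed load balancer or a dedicated appliance rather than anything like this.

```python
# A toy round-robin balancer with health awareness, sketching the behavior
# described above: requests skip servers marked unhealthy and spread across
# the rest. Real deployments would use a managed load balancer, not this.

from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = servers
        self.healthy = {s: True for s in servers}
        self._order = cycle(servers)

    def mark_down(self, server):
        self.healthy[server] = False     # e.g. a failed health check

    def route(self):
        """Return the next healthy server, or raise if none remain."""
        for _ in range(len(self.servers)):
            server = next(self._order)
            if self.healthy[server]:
                return server
        raise RuntimeError("no healthy application servers available")

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
lb.mark_down("app-2")                    # simulate a failed server
print([lb.route() for _ in range(4)])    # requests flow to app-1 and app-3 only
```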

5. Data Redundancy

Without the data, an application is of no use.  For this reason, organizations should look for ways to replicate their data.  This replication not only protects against hardware and software failures but also human error.

To protect against hardware and software errors, organizations should look to data clustering solutions such as the SIOS DataKeeper technology to replicate their data from one storage device to another.

To protect against intentional and unintentional human error, organizations should look at making backups of their data on a regular basis and keeping old copies of the data around for an agreed-upon and documented period of time. Unlike clustering, where the data is copied from the primary location to the secondary location, backups are done at a scheduled time, making it easy to roll back an inadvertent update to the data.

Conclusion

Whether an organization hosts their applications and data in a public, hyper-scale cloud platform (Amazon AWS, Google Cloud Platform, Microsoft Azure, etc.) or a private environment (physical servers, Microsoft HyperV, VMware ESXi, etc.), it is the responsibility of the IT department to ensure the data center, infrastructure, applications, and data have all been designed to meet the specific resiliency requirements of the business for the specific application and data.  Leveraging a cross-platform consulting and managed services company, such as HOSTING, to design, build, monitor and manage can dramatically increase the resiliency of an organization’s applications and data.

About the Author

Michael McCracken is currently the Vice President of Advanced Solutions at HOSTING. With over 25 years of IT industry experience, Michael has extensive knowledge on infrastructure and application transformation, including solutions in the areas of security & privacy, data center design, business continuity & resiliency, information lifecycle management, storage & server consolidation/virtualization, infrastructure high-availability, LAN/WAN/wireless networks and mobile computing.