Security Gap Gives Hacker Access to 100 Million Bank Customers’ Personal Information

Capital One is the Latest Enterprise to Hit the Headlines Over a Data Breach

On Monday, July 29, 2019, Capital One Financial Corp. announced that more than 100 million of its credit card customers and card applicants in the U.S. and Canada had their personal information hacked in one of the largest data breaches ever.

Paige Thompson, a software engineer in Seattle, is accused of breaking into a Capital One server and gaining access to 140,000 Social Security numbers, 1 million Canadian Social Insurance numbers and 80,000 bank account numbers in addition to an undisclosed number of people’s names, addresses, credit scores, credit limits, balances, and other information. The Justice Department released a statement Monday confirming that Thompson has been arrested and charged with computer fraud and abuse.

As the CISO of a global IT solutions provider, I am always hesitant to comment on these situations because if it can happen to one of the biggest players in the industry, then everyone is at risk. Bad actors have unlimited time, resources and motivations—that’s why advancing a cybersecurity program is critical to every organization’s maturity process. We, the cybersecurity community, must do better collectively.

While the Capital One data breach is staggering with more than 100 million affected, this is just another event in a long list of massive data incidents during recent years, including Equifax, Marriott, Home Depot, Uber, and Target. Adding to the list of compromised information, incidents of improper access or collection of users' data, like Cambridge Analytica or WhatsApp, have also made recent unsettling headlines.

Don’t Wait for Hackers to Find the Vulnerabilities from Within

Court filings in the Capital One case report that a "misconfigured web application firewall" enabled the hacker to gain access to the data. As infrastructures, support structures, and data flows become more complex, the need for security and visibility increases exponentially. Fundamentals like asset management, patching, and role-based user access are critical and cannot be overlooked.

These pillars of protection are achievable with the help of experienced partners, like the managed security experts at Hostway|HOSTING, focused on finding and filling any gap in existing infrastructure and applications.

Learn more about how Hostway|HOSTING’s Managed Security services can be the better shield for your data against hackers. >>

Take Charge on a Personal Level by Using a Passphrase

Even with all the internal work and effort businesses put toward protecting data, consumers should still take precautions and be proactive in protecting their identity. Never give personal information out over the phone—even if the caller appears to be from a reputable organization like Capital One. Phishing scams through calls, emails, and text messages are only increasing. Even offers for IT protection from unvetted parties can be attempts to gather or "fill in" additional information for malicious purposes.

One of the quickest ways to boost protection of your personal information is to change your password to a passphrase. Create a great passphrase in three easy steps:

  • Use a personally *meaningless* passphrase
  • Aim for a pseudo-random mix of at least 15 characters
  • Pick a minimum of four words—RANDOMLY

Simply combining random words (like DECIDE OVAL AND MERRY = Decide0val&andmerry) can build a new passphrase far more secure than “12345” or “password1”.
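
For the technically inclined, a passphrase generator takes only a few lines. The sketch below is a minimal Python example using the standard library's secrets module; the wordlist path is a placeholder assumption, so point it at any large wordlist (such as the EFF diceware list) before running it.

```python
import secrets

# Placeholder path: supply any large wordlist with one word per line,
# such as the EFF diceware list.
WORDLIST_PATH = "wordlist.txt"

def generate_passphrase(word_count: int = 4, separator: str = "-") -> str:
    """Pick words at random with a cryptographically secure RNG and join them."""
    with open(WORDLIST_PATH) as f:
        words = [line.strip() for line in f if line.strip()]
    chosen = [secrets.choice(words) for _ in range(word_count)]
    return separator.join(chosen)

if __name__ == "__main__":
    print(generate_passphrase())  # e.g. "decide-oval-merry-harbor"
```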

Let Partners Provide You Peace of Mind Against Security Threats

While every individual should be an active participant in protecting their identity and personal data, enterprise companies can't ignore the devastating regularity of these hacks and breaches. IT security is a crucial component for any modern business, and equally important is the constant vigilance to keep those security measures validated and updated. Vulnerabilities emerge with every new technological advance, making an experienced partner who keeps a steadfast watch essential, freeing organizations' own IT teams to focus on innovation and business goals.

Hostway|HOSTING's Managed Security services bridge the gaps every company faces as systems, tools, and data grow rapidly. Expert monitoring, risk reduction, and mitigation from trusted IT partners empower internal teams to focus on pushing business forward. Don't trust that your basic security is enough to keep your company out of the hacker headlines—get real peace of mind with cybersecurity experts like Hostway|HOSTING watching your backend systems, infrastructure, and applications.

Schedule a consultation with Hostway|HOSTING today to proactively protect your data from hacker threats and data breaches.

Top 4 Hybrid Cloud Use Cases

For today's enterprise business, a wholesale migration to the cloud likely isn't the right solution. From firsthand experience since the early days of the cloud, we witnessed that a lift-and-shift migration of traditional workloads to cloud services didn't deliver the expected benefits – whether those were reduced cost, better resiliency, or increased performance. Sometimes, a hybrid deployment could improve the outcome where public cloud alone couldn't deliver. We thought this might be a transitional state while cloud technologies matured.

Since those early days, public cloud capabilities have grown by leaps and bounds – features and functionality have improved at near-exponential rates. Even so, the public cloud often doesn't deliver all the benefits companies need or expect. And keeping your IT workloads where they run today certainly isn't going to improve things either. Instead, a hybrid approach to cloud adoption has proven to be the winning strategy across a variety of workloads and use cases—and that's why hybrid cloud has become the new normal.

Let’s explore several use cases to highlight some of the benefits that hybrid cloud can provide.

You’re Looking to Optimize Costs

The rise of cloud adoption has proven that not all cloud options are created—or billed—equally.

The public cloud adopted a pricing model similar to a pay-per-use taxi, making it convenient and relatively low maintenance. Although a taxi is logical for short trips, it isn't cost effective for long-term transportation. For traditional workloads that run all the time, the public cloud functions like a taxi with its meter always running and no way to turn it off. After experiencing the cost of that constantly running meter, more enterprise businesses are exiting the public cloud in search of something more economical.

Leveraging the public cloud to optimize costs requires a bit more work. Applications need to be rearchitected to leverage cloud-native concepts, such as auto-scaling groups, microservices, and application self-healing. This work can't be ignored, but the cost of a complete application rewrite is often prohibitive, whether in time or in know-how. A hybrid approach allows a more gradual upgrade path: thoughtfully choose the application components you'd like to migrate to the cloud piece by piece while keeping more complex or hard-to-rebuild components in their current state. By taking this hybrid approach rather than forklifting all of your applications at once, you can see the benefits of cloud-based services, including a more optimized price tag.

Hybrid environments can also give organizations more control with a custom blend of cloud solutions and dedicated infrastructure. For enterprise businesses, the hybrid cloud often makes better financial sense.

You’re Still Using Legacy Application Components

Well-established enterprise organizations are often weighed down by legacy systems and applications but look hopefully to the cloud for help. Unfortunately, the public cloud cannot be a cure-all in many scenarios, such as when a massive amount of data is too expensive to move, is locked too tightly into existing solutions, or carries specific requirements, such as compliance mandates. Although a full lift-and-shift may seem appealing, the hybrid cloud is often the better solution: support legacy applications on dedicated systems and supplement them with cloud solutions to leverage advanced functionality and services, like machine learning.

However, before making the hybrid cloud shift, companies must dig deeper into current infrastructure to understand the costs and risks of moving elements to the cloud versus simply exposing these applications to cloud services to leverage new functionality. A strategy for each application that considers benefits and risks at this level will prepare a company much better for success in their digital transformation.

Even companies already in the public cloud—especially those that might be questioning whether they moved too fast—can take a similar step back to evaluate their strategy and find ways in which hybrid cloud solutions can reduce IT costs and improve efficiency. Evaluating legacy applications and future IT goals with the guidance and insights of cloud solutions experts is key to making an optimal transformation.

You Need to Support Peak Traffic

The scalability and flexibility the hybrid cloud offers make it ideal for managing the fluctuating traffic levels enterprise companies experience, from high-peak usage to periods of steady plateaus. The holiday shopping season, sales promotions, or other events can cause rapid demand spikes, which may subside just as quickly once the event is over. Traditional solutions required companies to maintain extra resources to accommodate these peak times, which is an extremely costly option. Remember—the public cloud works like a taxi with the meter always running, but hybrid cloud solutions support these bursts while also supporting the baseline more efficiently.

While some components of your application may need to run all the time, others don't. Let's look at an example: databases function best when they are always operational, while web servers or web services may be much easier to scale out as demand spikes. A hybrid approach to this example application (the database remains on a dedicated machine while web services are rapidly added on demand in the public cloud) provides the right match of platform to application function. Pay-per-use spend on the public cloud is optimized around on-demand usage and can scale rapidly to follow demand spikes (both up and down), while database operations benefit from the stability and reliability of dedicated hardware.
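
As a rough illustration of the web-tier half of that split, the sketch below uses the AWS SDK for Python (boto3) and a hypothetical Auto Scaling group name to adjust web capacity as demand rises and falls; the database tier stays on dedicated hardware and never appears in this code.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Hypothetical Auto Scaling group covering only the stateless web tier;
# the database remains on dedicated infrastructure outside the public cloud.
WEB_TIER_ASG = "web-tier-asg"

def scale_web_tier(desired_capacity: int) -> None:
    """Scale the public-cloud web tier up or down to follow demand."""
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=WEB_TIER_ASG,
        DesiredCapacity=desired_capacity,
        HonorCooldown=True,
    )

scale_web_tier(12)  # burst to 12 web servers for a promotion
scale_web_tier(2)   # fall back to the baseline once demand subsides
```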

While peak traffic can be predictable for some industries, enterprise organizations should use insights and analytics to best architect their hybrid cloud solutions. Along with upfront research, consistent testing and evaluation of traffic patterns and demand signals are critical for companies to take full advantage of the hybrid cloud.

You Want More Economical Disaster Recovery Options

A disaster recovery plan is crucial for every enterprise organization, but implementing and maintaining one is easier said than done. Traditional disaster recovery plans require you to move entire applications to a dedicated backup environment—an expensive and tedious process. With such a financial burden attached to traditional methods, it isn't that shocking that 30% of businesses don't have any disaster recovery plan in place at all.

But building better IT resilience can be more affordable with a hybrid cloud approach. Providing businesses with scalable and more agile options to meet their specific disaster recovery needs, the hybrid cloud presents economical options not available through traditional offerings. For example, taking a traditional workload and creating a failover environment in the cloud can save significant costs. Rather than maintaining the failover environment in a running state, it can be built in the cloud, then snapshotted to storage. With proper scripting to ensure rapid re-provisioning of the environment, the cost of maintaining the cloud DR site can be limited to storage costs, saving significantly over the cost of maintaining a full duplicate environment that is always running.
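
To make that concrete, here is a minimal sketch of the snapshot-and-reprovision idea using boto3; the instance ID, image name, and instance type are assumptions for illustration, and a real DR runbook would also cover networking, data replication, and DNS failover.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical template instance for the failover environment.
DR_TEMPLATE_INSTANCE = "i-0123456789abcdef0"

def snapshot_dr_environment() -> str:
    """Capture the failover environment as an image; only storage costs accrue afterward."""
    image = ec2.create_image(
        InstanceId=DR_TEMPLATE_INSTANCE,
        Name="dr-web-tier-golden-image",
        NoReboot=True,
    )
    return image["ImageId"]

def restore_dr_environment(image_id: str, count: int = 2) -> list:
    """Re-provision the failover environment from the stored image when disaster is declared."""
    reservation = ec2.run_instances(
        ImageId=image_id,
        InstanceType="m5.large",
        MinCount=count,
        MaxCount=count,
    )
    return [instance["InstanceId"] for instance in reservation["Instances"]]
```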

Yet just like traditional strategies, the hybrid cloud still requires detailed planning that takes time and expertise. Although some disaster recovery service providers may not be current with best practices in the cloud, finding a capable managed cloud provider is often the key to a more economical disaster recovery plan in the hybrid cloud.

Working Better Together

The hybrid cloud brings multiple platforms together to solve problems—reducing IT costs, optimizing legacy systems, maintaining reliable performance, ensuring resiliency—and working with experienced managed cloud experts allows your business to harness the power of hybrid.

Start exploring how the hybrid cloud can transform your business with a free consultation today.

A Cautionary Tale: Sungard Files for Chapter 11

 

A Complete Failure for the Disaster Recovery Services Provider

Sungard Availability Services (“Sungard AS”), an IT services provider with more than 40 years’ experience specializing in disaster recovery, announced on April 1, 2019 that it would file for Chapter 11 in early May. With an annual revenue of approximately $1.4 billion, the company serves customers around the globe with tailored recovery services but will now file for bankruptcy in an effort to reduce nearly $1.3 billion of accumulated debt. Although Sungard often promoted their ability to help customers “adapt quickly and build resiliency,” it appears as though they were unable to employ those skills for their own business. The recent headlines surrounding the DR services provider can be viewed as a cautionary tale for other well-established IT service providers lagging to adopt new technologies—and their customers.

But what really went wrong? A long-standing company recognized across the industry announcing a prepackaged Chapter 11 with its creditors in agreement is a signal of a significant flaw at Sungard.

Changing the Definition of Value in a New IT Environment

Forty years ago, what defined the value or success of a service differed dramatically from what businesses and individuals seek today, and that's true across industries. As most organizations that were around 40 years ago can tell you, disaster recovery services were often costly, difficult to maintain, reliant on colocation, and complex to execute internally. For over four decades, Sungard provided answers to these DR challenges with expertise and hardware, giving customers a valuable and needed service.

Yet as technology trends took a swift turn, traditional DR services no longer held the same value, and the Sungard business model wasn’t adapting fast enough. Sungard’s own Chief Executive Officer Andrew Stern pointed to the company’s inability to keep up with technology as a main driver of the announcement, stating, “The approach the company had taken to disaster recovery really hadn’t changed in 20 years—and the world had moved on…We had been slow in recognizing the business had to change.”

While Sungard continued to provide the brick-and-mortar infrastructure once necessary for tenable DR services, the introduction of the public cloud offered better control and accessibility for DR plans with significantly less cost. The public cloud brought greater scalability and options to DR services and shifted customers’ perception of value, just as many other industries and providers experienced during their own technology transformations.

“With the advent of cloud-based DRaaS solutions that offered customers more economical and more agile options than legacy approaches, a company like Sungard AS that applied a more traditional model was bound to be challenged,” said Amy DeCarlo, GlobalData's principal analyst of security and data center services. “For Sungard AS to make real progress, the company will need to revisit its core solution set and go-to-market model.” Although outside the organization, DeCarlo surmises that “the company has struggled mightily in recent years” and been increasingly challenged by its own design.

Sungard’s diminished traditional service value combined with more economical public cloud offerings may have resulted in the decision to file Chapter 11, but how the company moves forward after bankruptcy will certainly shape its future viability.

The Best Defense for Businesses and Service Providers

The Sungard story serves as a warning to both service providers and businesses:

  • Service providers should be prepared and invest early to meet customer needs and expectations as they change at the same rapid pace as technology
  • Customers should be aware of how well service providers are actively meeting and anticipating business needs and market trends

Both must continually evaluate how to measure value and proactively adapt to the shifting needs of the industry. To become a laggard in IT, for either customer or service provider, is to plant seeds for new challenges.

Outlooks After Disaster

Although their traditional business model may have been the key driver leading to bankruptcy, Stern stated in the press release, “Our creditors recognize the value in what we’ve built.”

While creditors convey confidence, customers may be asking how Sungard’s Chapter 11 plans will affect them. For IT services dedicated to disaster recovery, any lapse could be catastrophic. Although bankruptcy will restructure the well-established service portfolio familiar to customers, Sungard spokeswoman Karen Wentworth assured, “There will be no interruption to business.”

Maintaining support for current customers is a common pledge from companies after filing Chapter 11, but it can still leave analysts and customers skeptical about what to expect if the process drags on or fails—increased costs, disorganized service sets, or even collapse?

An Open-Ended Outcome

Sungard's story shows that with the widespread adoption of cloud-based solutions, service providers that cannot keep up with technology evolution are at real risk of becoming extinct. While there is no one-size-fits-all for disaster recovery or IT infrastructure, hybrid cloud solutions can address the technology changes rapidly unfolding for businesses across industries. The right hybrid cloud solutions can facilitate the goals of a modern DR strategy: reduce risk, optimize IT spend, and increase business agility. Proactive flexibility and scalability that can change alongside a business' evolving cloud adoption is realized most effectively with a hybrid cloud approach.

Worried about your own disaster recovery plan? Start with a free consultation to see what disaster you could be preventing today.

Updated on May 20, 2019

One month from the April 1 announcement that the tenured disaster recovery service provider would file for Chapter 11 bankruptcy, Sungard emerged with a new CEO at the helm, former Broadview Networks leader Michael K. Robinson. Restructuring for the IT company reduced its debt by $800 million and provided $100 million in new liquidity from its creditors.

Only time will tell if new leadership and a dramatic restructuring will allow Sungard to reestablish itself as a name to be trusted for disaster recovery.

IoT Privacy Threats and the 7 Best Ways to Avoid Them   

Things are getting smarter. From manufacturing to healthcare to the everyday devices in houses and cars, nearly every industry is looking for more ways to integrate the IoT’s remote monitoring and tracking capabilities into their everyday operations. For organizations that haven’t adopted IoT protocols yet, it’s only a matter of time until they do. A recent study projected that more than 24 billion internet-connected devices will be installed worldwide in the next two years. That equates to more than four IoT devices for every human on the planet, prompting new concerns about security and privacy—and rightfully so, because with more connectivity and an increasing amount of data being transferred comes more vulnerability.

What does this mean for end-users and organizations?

Without the right protections in place, a hacker could easily gain access to the network-connected devices that surround you every day, changing the temperature in your house or controlling your car stereo. There’s even the potential for these privacy and safety breaches to go beyond mere annoyances, turning the issues into one of life or death. Imagine, for instance, if criminals could use IoT-enabled home devices to track a family’s comings and goings, or if they found a way to hack into an IoT-enabled insulin pump or pacemaker, taking their victim’s health hostage in the process.

Developers must take these risks into consideration as they build products and software that are IoT enabled. Further, CIOs and CTOs should take note—your risk profile has changed. Any deception—whether executed deliberately or by mistake—will likely be perceived as your fault. All of this means that for society to accept your IoT-enabled devices and software, or for companies to accept IoT-enabled devices into their organizations, you must make privacy and safety your first priority—no exceptions.

What are the most common types of privacy concerns?

This is an experiment, and end-users are part of it. 

Much to the delight of those who want to mine data from consumers for advertising or other more nefarious purposes, the IoT is a jackpot of personal data. Every day, consumers are becoming the subjects of behavioral experiments that they didn’t sign up for. Recently, for example, it was discovered that Roomba was sharing information on its customers’ home dimensions with advertisers without asking permission to do so. And much of the United States was infuriated when it was discovered that Facebook data once thought to be private was sold to a political firm in an effort to influence their behavior.

All of this raises the question—if we can't socialize with our friends online or vacuum the floor without being tracked, what does this mean for the devices in our lives that keep us healthy or safe? Could the biometrics pulled from your fitness tracker be used to determine your fear level, propensity to be intimidated, anxieties related to finances, or more? End-users want to know that when they interact with IoT devices, they won't become guinea pigs.

More endpoints, more problems. 

To the delight of clever cybercriminals, the IoT also offers more endpoints to attack. If a person hacked into a computer or smartphone that controlled other devices, they may also be able to gain control of those secondary devices. In other words, they can attack an entire network of devices by gaining access to just one.

IoT vulnerabilities are already showing. 

Just after its release, a Google Mini was found to be recording everything it heard, and an Amazon Alexa recently recorded and sent a family's private conversation to a random contact without permission. Not long ago, it was found that some Google Home and Chromecast users' locations could be tracked within minutes of clicking on an innocent-looking link received from phishing scammers. Although all of these issues have since been fixed, we know that they are only a few of the many issues out there—what will be next?

Devices are building more public profiles for their users—and planting the seeds for discrimination.

As users interact with IoT devices, the data gathered from each one is compiled into a profile. If that profile goes public for any reason, the user is then at risk of facing discrimination from employers, insurance companies, and other agencies. While there are laws in place to prevent discrimination against protected classes of people, some experts believe that government agencies aren't prepared to handle IoT-based discrimination.

This is especially true when you consider how hard it can be to detect and prosecute the most traditional forms of discrimination. It was recently revealed, for example, that roughly 60% of jobs eliminated by IBM in the 1980s were those held by employees ages 40 and over. As IoT-enabled devices become more prominent, there is a fear that this sort of discrimination will go beyond age, race, and sexual orientation to include buying habits, physical activity, and much more. The possibilities for abuse are limitless.

What about compliance?  

Although government agencies are keeping a close watch on IoT technologies and the potential security concerns they pose, most compliance standards for IoT data security and privacy haven't entirely caught up yet. The European Union now has GDPR, a set of legal regulations that includes guidelines for data collection and personal information processing. Developers should be aware of the ways these new regulations could extend to the IoT, and additional regulations are expected to follow.

Medical devices could be one of the more frightening prospects to consider when it comes to data privacy compliance. Just one IoT breach could involve multiple HIPAA violations. A recent report on penetration and security risks classified the healthcare sector as one of the worst performing sectors when it comes to system security. The FDA has already issued recommendations on how healthcare facilities can ensure their devices stay protected, yet the complexity of monitoring all that data on so many devices is beyond the reach of many individuals and organizations. That’s why protecting user and data privacy is a must for any IoT system to be truly secure and fully accepted.

So, how can developers help protect the privacy of end-users?

1. Build your apps with security in mind.

CTOs and application developers must take a deliberate approach to building data privacy and security into every layer of their app. To best protect data across the board, IoT applications should align with the following principles:

•  Data privacy: A stored data record must not expose undesired properties, such as the identity of a person. This one area is a huge challenge for IoT—and IT in general. It was hard before, and now it’s harder.

•  Anonymity: A single person should not be identifiable as the source of data or an action.

•  Pseudonymity: Link the actions of each person with a pseudonym, or random identifier, rather than an identity. This trades off anonymity with accountability.

•  Unlinkability: This qualifies pseudonymity in the sense that specific actions of the same person must not be linked together, effectively protecting against profiling (a minimal sketch follows this list).
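
One common way to approximate pseudonymity and unlinkability is to derive a stable pseudonym from a user identifier with a keyed hash, scoped to a single context. The sketch below is a minimal Python illustration; the key value and context labels are assumptions, and a real system would keep the key in a vault.

```python
import hashlib
import hmac

# Hypothetical secret key; in production this lives in a key vault, never in code.
PSEUDONYM_KEY = b"replace-with-a-secret-from-your-key-vault"

def pseudonymize(user_id: str, context: str) -> str:
    """Return a stable pseudonym for one user, scoped to one context.

    Different contexts ("telemetry", "billing", ...) yield unrelated pseudonyms,
    so records from separate systems cannot be joined into a single profile.
    """
    message = f"{context}:{user_id}".encode()
    return hmac.new(PSEUDONYM_KEY, message, hashlib.sha256).hexdigest()

# The raw user ID never leaves the ingestion layer:
record = {"device_reading": 97.6, "subject": pseudonymize("user-8812", "telemetry")}
```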


2. Encrypt everything.

Use strong encryption across all of your devices and networks, and never allow users to export data beyond its native application unless they're entitled to do so. Your encryption should include the following (a minimal sketch follows the list):

•  AES-256 symmetric encryption for data stored to disks or archived

•  Bcrypt for one-way hashing of passphrases as needed

•  Mediated access to data classes via capability grants at the user/role level
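
As a minimal sketch of the first two items, the example below uses the widely available cryptography and bcrypt Python packages; the inline key and sample values are for illustration only, and real keys belong in a KMS or HSM.

```python
import os

import bcrypt
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# AES-256-GCM for data at rest (key shown inline only for illustration).
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, b"sensor reading: 120/80", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)

# bcrypt for one-way hashing of passphrases.
hashed = bcrypt.hashpw(b"Decide0val&andmerry", bcrypt.gensalt())
assert bcrypt.checkpw(b"Decide0val&andmerry", hashed)
```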


3. Use role-based authentication and authorization.

User roles should always be defined by capabilities rather than via a structure built on Super Users. Each user in the structure should be anonymized so they are only traceable through event streams with privileged knowledge.
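
One way to express capability-based roles in application code is to check what a role can do rather than who the user is. The sketch below is a minimal Python illustration; the role map and function names are made up for the example.

```python
from functools import wraps

# Hypothetical role-to-capability map; roles are defined by what they may do,
# not by a privileged "super user" identity.
ROLE_CAPABILITIES = {
    "device_operator": {"read_telemetry"},
    "fleet_admin": {"read_telemetry", "update_firmware"},
}

def requires_capability(capability):
    """Reject the call unless the caller's role grants the named capability."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            granted = ROLE_CAPABILITIES.get(user["role"], set())
            if capability not in granted:
                raise PermissionError(f"{user['pseudonym']} lacks {capability}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_capability("update_firmware")
def push_firmware(user, device_id, image):
    ...  # only callers with the update_firmware capability reach this point
```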


4. Set up multiple access layers, then carefully secure and monitor your data.

Every data store should have mandatory access controls, and those for interfaces and web application services should have discretionary access controls. You should also set up the following (a sketch follows the list):

•  Firewalls and web application firewalls (WAFs) to protect your environment from known threats

•  Logins and permissions for presentation-level code

•  SSL/TLS and SSH to protect your network

•  Complex passwords and two-factor authentication where applicable
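
For cloud-hosted components, these layers are increasingly expressed in code. As a hedged example, the boto3 call below (with a hypothetical security group ID) limits SSH to a management network instead of the open internet.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical security group protecting the IoT backend.
SECURITY_GROUP_ID = "sg-0123456789abcdef0"

# Allow SSH only from the management VPN range, never from 0.0.0.0/0.
ec2.authorize_security_group_ingress(
    GroupId=SECURITY_GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "10.20.0.0/16", "Description": "management VPN only"}],
    }],
)
```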


5. Log everything.

The security applications you use should log all security events within the platform in one centralized place, and you should always have access to an audit trail that provides a reconstruction of events. Audit trails should include the following (a minimal sketch follows the list):

•  Timestamps

•  Processes and process interactions

•  Operations attempted and executed

•  Success and failure elements in events
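
Here is a minimal sketch of what one centralized, structured audit event covering those fields might look like; the field names are illustrative rather than a prescribed schema.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_logger = logging.getLogger("audit")

def audit_event(process: str, operation: str, succeeded: bool, detail: str = "") -> None:
    """Emit a single structured audit record to the central log."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "process": process,
        "operation": operation,
        "outcome": "success" if succeeded else "failure",
        "detail": detail,
    }
    audit_logger.info(json.dumps(event))

audit_event("firmware-service", "update_firmware", succeeded=False,
            detail="capability check failed")
```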


6. Be mindful of dependencies.

This is especially important when you are leveraging open source. Don't rely on other people's code to be secure unless you are absolutely sure those individuals can be trusted and you have layered security to protect your organization—and your reputation.


7. Use trusted vendors.

From the collaboration tools your organization uses to the infrastructure your products and services are built on, it's important to partner with vendors that are compliant and put safety first. Healthcare organizations that work with HIPAA-compliant and HITRUST-certified vendors, for instance, can expect to be exposed to fewer risks, thanks to the rigorous and standardized methods of securing protected health information that these vendors must follow.

Ensuring IoT privacy and security can be a big undertaking, especially for smaller companies with limited IT resources that are already spread thin. That's why many successful organizations are partnering with managed hosting providers. These expert teams help keep business data private and secure without adding costly resources, and they allow organizations to transfer some of that risk to a trusted partner and expert.


To learn more about how we can help you protect your customer or patient data with simple, streamlined, and fully-managed hosted solutions, schedule a consultation today.

Driving Compliance with CaaS

If your company is scrambling to meet the European Union’s May 25, 2018 GDPR (General Data Protection Regulation) compliance deadline, you’re not alone. According to a survey by law firm Blake Morgan Research, nine out of ten businesses have not made crucial updates to their privacy policies. A full 23 percent admit to being unaware of the new data protection laws.

Additional results of the Blake Morgan survey reveal an even starker picture of GDPR readiness:

  • Only around one in ten businesses (13 percent) had updated privacy policies, one of the significant requirements of GDPR.
  • Around one in five businesses (21 percent) did not have a senior person in place responsible for data protection.
  • More than three-quarters of businesses (76 percent) had not put in place systems to ensure notifications of data security breaches are in accordance with GDPR requirements.
  • More than three-quarters of businesses (77 percent) had not reviewed their data processing contracts, which will be under greater scrutiny under GDPR.

What is GDPR?

GDPR gives EU citizens control over the privacy of their personal data. Enacted in 2016 with a two-year adoption grace period, GDPR is expected to impact all organizations worldwide that handle the data of EU citizens, not just companies located in the EU.

Understandably, many organizations are concerned about their ability to show they comply with GDPR by May 25. Leaders in IT, legal and privacy roles worry about their companies’ readiness to meet specific GDPR mandates. These include being able to locate and target specific data; having visibility into who is accessing what data and knowing when to delete it; and being able to automate data removal when an EU citizen requests it.
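
For the last of those mandates, a minimal sketch of automating data removal might look like the following; the store names and deletion functions are hypothetical, and each would call a real database or SaaS API in practice.

```python
# Hypothetical deletion steps, one per system that may hold personal data.

def erase_from_crm(subject_id: str) -> None:
    ...  # e.g. DELETE FROM contacts WHERE subject_id = %s

def erase_from_analytics(subject_id: str) -> None:
    ...  # e.g. drop the subject's events from the warehouse

ERASURE_STEPS = {"crm": erase_from_crm, "analytics": erase_from_analytics}

def handle_erasure_request(subject_id: str) -> dict:
    """Run every registered deletion step and return a record for the audit trail."""
    results = {}
    for store, erase in ERASURE_STEPS.items():
        erase(subject_id)
        results[store] = "erased"
    return results

print(handle_erasure_request("eu-subject-1234"))
```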

Given the large number of organizations that admit to being unprepared, the potential for widespread penalties is high. The cost of non-compliance with GDPR is severe, and can be as steep as 4 percent of annual global turnover or €20 million (currently about $25 million), whichever is greater. GDPR guidelines and penalties apply to any member of the supply chain who processes EU citizens’ data.

CaaS Emerges to Relieve Compliance Concerns

This uncertainty and unpreparedness is exactly why organizations are looking at Compliance-as-a-Service (CaaS) as a cost-effective alternative to achieving, managing and sustaining compliance regulations. It takes the guesswork out of complying with complex, ever-changing certification standards and regulations such as GDPR, PCI DSS (Payment Card Industry Data Security Standard) and HIPAA (Health Insurance Portability and Accountability Act) and puts it into the hands of a trusted provider.

HOSTING's Compliance-as-a-Service (CaaS) provides access to the knowledge, tools, and expertise businesses require to meet compliance standards. Ultimately, it reduces the complexity and cost of risk management while helping mature business processes and programs through solutions built on industry standards and best practices. HOSTING's CaaS will reduce the complexities of maintaining a strong risk posture and shift your business's focus back to your core competencies. Contact Ntirety to learn more.

Protecting Sensitive Data: When “Putting It in the Cloud” Doesn’t Cut It

Have you ever had an executive tell you to just “put it in the cloud?” It’s a deceptively simple command, and an increasingly common one, too. No one wants to break the news to their boss that the task isn’t as easy or inexpensive as it seems, especially when you’re dealing with sensitive data. That’s because it requires planning, strategy, and expert implementation to successfully place that data into a cloud ecosystem.

What is Sensitive Data?

The definition may vary from company to company, but sensitive data is generally considered any information that makes up the most important components of mission-critical business systems. When storing these files, it’s important to consider the following tenets of data security and governance:

  • Data availability
  • Confidentiality
  • Integrity
  • Regulatory and compliance considerations
  • Performance and cost issues

In other words, sensitive data must be quickly and efficiently accessible without interruption, yet it must remain protected from breaches and tightly controlled.

How Can Organizations Strategically Protect Sensitive Data?

The leading cloud strategies of today are the result of years of evolution within the industry. Cloud solutions are more agile, secure, and accessible than ever before, and they’re multiplying at a torrid rate, with enterprises across all industries using an average of 1,181 separate cloud services.1 That’s an alarmingly high number that raises several security concerns around the sharing of sensitive data. However, with a more careful approach to your strategy, you can avoid the overuse of cloud services.

Planning Considerations

  1. Think of your journey to the cloud as incremental steps, not an all-or-nothing proposition—especially when it comes to mission-critical applications.
  2. Consider a hybrid environment that offers a combination of public cloud services and a hosted IT infrastructure. This will allow for more private handling of mission-critical workloads and sensitive data that require PCI or HIPAA compliance.
  3. If your data contains sensitive financial or government information, ensure that your cloud solutions offer strict controls to help you comply with country-specific requirements, such as GDPR regulations.
  4. Make note of any legacy applications that may require traditional bare-metal solutions and isolation from other customers.

Isolating Sensitive Data With a DMZ

In cases where sensitive data is maintained in a public cloud, a multi-tier network architecture is typically deployed. This strategy creates a so-called "demilitarized zone," or DMZ, that isolates sensitive data from public-facing web servers. Alternatively, customers may opt for a "zero-trust" architecture to secure their sensitive data from both external and internal threats. In this scenario, anyone who tries to access data anywhere on the network must verify their identity and permissions each time they want access, when they change locations, or when they exceed pre-determined parameters. This type of architecture becomes even more attractive in a hybrid solution where full control of all endpoints is not possible or preferable.
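
As a rough sketch of that per-request verification, the snippet below re-checks both permission and context on every call, regardless of where the request originates; the in-memory permission table and region fields are assumptions standing in for a real identity provider and policy engine.

```python
from dataclasses import dataclass

# Hypothetical permission store; a real deployment would query an identity
# provider and policy engine on every request.
PERMISSIONS = {"analyst-7": {"read:reports"}, "ops-2": {"read:reports", "write:config"}}

@dataclass
class Request:
    subject: str           # authenticated caller identity
    resource: str          # e.g. "write:config"
    source_region: str     # where this request came from
    last_seen_region: str  # where this subject was last seen

def authorize(req: Request) -> str:
    """Zero-trust style decision: verify permission and context every time."""
    if req.resource not in PERMISSIONS.get(req.subject, set()):
        return "deny: not authorized for this resource"
    if req.source_region != req.last_seen_region:
        return "challenge: location changed, re-verify identity"
    return "allow"

print(authorize(Request("ops-2", "write:config", "us-east", "us-east")))  # allow
print(authorize(Request("ops-2", "write:config", "eu-west", "us-east")))  # challenge
```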

Currently, we’re seeing very few companies store sensitive data solely in the cloud. In fact, only 23% of organizations felt they could completely trust public clouds to keep their data secure in 2017.2 Companies in their infancy might give pure cloud deployments a try, but the reality is that as their business grows, cost, complexity, and security concerns come into play. Wide-scale data breaches have also left customers wary of cloud-based storage, and many brand reputations have been damaged in the wake of these incidents. While these issues are most often due to misconfigurations, they sometimes can be traced to the rapid and uncontrolled sprawl of cloud services, a weak integration strategy, and a failure to adopt a true hybrid cloud architecture that isolates sensitive data while enabling elasticity.

The Case for Hybrid Cloud Services

To keep sensitive data safe from breaches, many enterprises are opting to keep it on dedicated, bare-metal infrastructures. With hybrid cloud services, such as Direct Connect capabilities through their hosting and MSP partners, they can bridge to the public cloud at the same time. This allows organizations to set up virtual private clouds that can securely transmit data between an on-premises or dedicated hosted infrastructure and their public cloud resources.

Hybrid cloud adoption increased three-fold between 2016 and 2017,2 likely due to the fact that this strategy allows organizations to:

  • Pay for infrastructure on a monthly basis
  • Enjoy the performance and control of bare-metal infrastructures
  • Avoid large Capex outlays
  • Physically secure their infrastructure

A Promising New Chapter for Sensitive Data

It should go without saying that no two organizations’ needs are the same, especially when it comes to storing sensitive data. But one thing is clear—a rush job isn’t going to cut it. Instead, select a technology partner that offers a custom and targeted approach. Together, you can carefully plan and develop a solution that’s secure, affordable, and customized to meet the needs of your organization.

See the Hybrid Cloud in Action

Ntirety's global enterprise customers trust our expertise to manage the infrastructure containing their most sensitive data. When Samsung needed to secure their SmartTV application, it was Ntirety that enabled them to deliver the first smart television app to be PCI compliant globally. Ntirety has the in-depth expertise to design, build, secure, and operate infrastructures containing highly sensitive, mission-critical data, including PCI-, FERPA-, and HIPAA-compliant infrastructures.

Talk to one of our Security Experts, or get a free Security Assessment »

  1. Netskope, February 2018.
  2. McAfee, 2017. Building Trust in a Cloudy Sky: The state of cloud adoption and security.

Where the Business Cloud Weathers the Storm

Ntirety president and CEO Emil Sayegh provides a thought leadership piece on InfoWorld's website. The column, titled "Where the Business Cloud Weathers the Storm," discusses disaster preparedness for data centers and the benefits of utilizing cloud technologies to prevent potential interruptions in your mission-critical applications.

This month, Emil reflects on the recent storms from this year’s hurricane season and the impact they have on productivity, availability, and overall infrastructure of communications operations. He explains how cloud hosting services such as Ntirety’s can provide the assurance and capability to manage and minimize downtimes whether brought on by nature or human error.

Read the full post now: Where the Business Cloud Weathers the Storm

4 Steps to GDPR Compliance in the Cloud

The General Data Protection Regulation (GDPR) goes into effect May 25, 2018. While GDPR is an EU-established protocol, it affects any business worldwide that collects data on EU citizens. Non-European enterprises providing any form of goods or service to European citizens will need to comply with the new mandate.

Is your company GDPR-compliant? While you may think it is, a recent research report shows that only 2% of companies that believe they are compliant actually are with respect to meeting specific GDPR provisions.

Many Companies Unprepared for May 25

The aforementioned research revealed that 48% of companies that stated they were GDPR-ready did not have adequate visibility over personal data loss incidents. As many as 61% of the same group revealed they have difficulty identifying and reporting an incident within 72 hours of a breach, which is mandatory when there is a risk to data subjects.

Much of this unpreparedness is due to an insufficient understanding of the provisions of GDPR. Penalties for non-compliance are stiff, with fines as high as $21 million or 4% of global annual turnover, whichever is greater. These risks are raising red flags, with a study from Veritas revealing that 86% of companies surveyed worldwide have expressed concerns over non-compliance, both in terms of penalty fees and damage to their brand image.

Steps to GDPR Compliance

The steps outlined below need to be part of a collaborative effort between companies and their cloud service providers (CSPs). While CSPs are also responsible for conforming to GDPR guidelines, it's a mistake for companies to ignore GDPR and believe that their CSP will completely take the responsibility off their hands.

1. Perform Data Privacy Impact Assessment (DPIA)

Your organization needs to conduct routine DPIAs to identify compliance shortcomings. Customers should understand how their data is protected as it traverses various networks and storage systems.

2. Acquire Data Subject Consent

Companies must have client consent before processing their personal data. Under GDPR, the consent must be voluntary, and clients have the right to revoke it at any time. Consent must also be recorded and stored.
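
A minimal sketch of what a recorded, revocable consent entry could look like is shown below; the field names and in-memory store are assumptions for illustration, and production systems would persist these records durably so consent can be demonstrated to a regulator.

```python
from datetime import datetime, timezone

consent_records = {}  # illustrative in-memory store keyed by (subject, purpose)

def record_consent(subject_id: str, purpose: str) -> None:
    consent_records[(subject_id, purpose)] = {
        "granted_at": datetime.now(timezone.utc).isoformat(),
        "revoked_at": None,
    }

def revoke_consent(subject_id: str, purpose: str) -> None:
    record = consent_records.get((subject_id, purpose))
    if record:
        record["revoked_at"] = datetime.now(timezone.utc).isoformat()

def has_consent(subject_id: str, purpose: str) -> bool:
    """Processing must stop as soon as consent is revoked."""
    record = consent_records.get((subject_id, purpose))
    return bool(record and record["revoked_at"] is None)

record_consent("subject-42", "marketing-email")
revoke_consent("subject-42", "marketing-email")
print(has_consent("subject-42", "marketing-email"))  # False
```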

3. Protect Data Subject Rights

On the CSP’s end, administrators must allow their clients’ customers to access their data upon request. Customers have the right to transfer or make corrections to the data. CSPs must respond to requests within a specified timeframe, usually within 30 days.

4. Satisfy New Obligations

Under new GDPR guidelines, organizations are obligated to inform clients of a data breach within 72 hours. Companies need to coordinate with their CSP if the breach occurred on the latter's end. GDPR policy states that customers can hold both the company and its CSP liable.

Prepare for GDPR Compliance with HOSTING

With the GDPR deadline looming, it helps to work with compliance experts to ensure you’re taking the right steps. At HOSTING, our team of compliance experts is ready to help your organization build, migrate and manage a compliant cloud environment with offerings and a level of experience that’s unmatched in the industry. Click here to learn more about our offerings.

Safety, Security and Hyperscale Public Cloud

Almost everyone uses hyperscale public cloud, whether they're aware of it or not. The underlying technologies enabling consumption of and interaction with web applications are powered by a number of multibillion-dollar giants like Amazon, Google, IBM, and Microsoft that together have created a cloud horsepower and technology arms race.

As a result, a commodity market has emerged that includes consumption-based pricing, geolocation, rapid provisioning, extensive developer tools and resources, rapid storage, and more. The last 10 years have seen the emergence of some exciting high-demand workloads: machine learning, artificial intelligence, massive multiplayer online games, big data, mobile applications, and the internet of things, all of which have been fueled by hyperscale public cloud.

Highly Scalable. Capable of handling large data volumes. Distributed. Fast. Cost effective. These are some of the benefits hyperscale platforms deliver. Most people get it – public cloud services provide an immediate advantage in terms of flexibility and ease of use. Unfortunately, what is often overlooked are the security risks that have entered the picture in an increasingly hyperscale world.

The Enemy Within

Tremendous advantages aside, if you’re a business leader, you need to be aware of how hyperscale public cloud solutions can harbor hidden dangers if not properly architected, configured, and managed.

A few very recent examples:

• BroadSoft, a global communications software and service provider, suffered a massive unintentional data exposure. Cloud-based repositories built on the AWS S3 platform were misconfigured, allowing public access to sensitive data belonging to millions of subscribers.

• In another case, improper configuration of AWS S3 storage and insufficient security solutions exposed records of 14 million Verizon customers.

• TigerSwan, a private military contractor, left “Top Secret” data similarly unprotected in an AWS S3 storage bucket.

The Real Problem

The problem is not with the hyperscale public cloud itself. These breaches show how human error and limited security are the weak links in the information security chain. A single misstep, lapse in process, or misconfiguration can result in a massive exposure of data to the entire world. Organizations that use hyperscale computing can remain at risk from these and other kinds of security incidents because they often utilize bolt-on security solutions, manual security auditing, manual incident remediation, and other legacy practices and tools that threaten overall security posture.

You CAN Hyperscale – Safely

As the cloud story continues to evolve, we will witness more stories about security breaches. But it’s not a story that has to happen to your business or organization.

Today’s issues can be addressed by implementing security best practices and technologies that protect modern data, system configurations, and applications. Constructs such as infrastructure automation, cloud-aware security policies and technologies, and coded system policies are examples of next-level security that minimizes risk, particularly in the cloud. It’s not a simple path (especially without a high level of security expertise), but the rewards are most certainly worth the effort.
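
As one small example of a coded security policy, the sketch below uses boto3 to flag S3 buckets with no Public Access Block configuration at all, the kind of misconfiguration behind the exposures above. It is a starting point under stated assumptions, not a complete control.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_missing_public_access_block() -> list:
    """Return buckets that have no Public Access Block configuration."""
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_public_access_block(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)
            else:
                raise
    return flagged

for name in buckets_missing_public_access_block():
    print(f"WARNING: {name} has no public access block configured")
```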

Organizations of all sizes have simplified and improved their security posture by working with a trusted managed service provider such as Ntirety to help architect, configure, and manage a comprehensive cloud security solution designed specifically for their unique requirements. Ntirety has both the expertise and broad portfolio of managed security services to help ensure that your cloud solution is properly configured to meet your objectives while protecting your business and customer data.

To learn more about building a highly secure and scalable hyperscale public cloud solution, contact a Ntirety expert at +1.866.680.7556 or chat with us today.

The 6 Stages of a Malicious Cyber Attack

You don't have to look very far to find an example of a malicious cyberattack. Consider the June 2017 hack of password manager OneLogin: intruders accessed a set of Amazon Web Services (AWS) keys and were able to decrypt data that was assumed to be secure. What makes this breach even scarier is that many people who use a password manager like OneLogin don't just use it for personal passwords. They use it for work passwords, too.

Knowing that the potential for a breach lies both within your business infrastructure and through employees as a point of access should spur any organization into getting serious about understanding how security is compromised. One of the best places to start is by arming yourself with a baseline understanding of the tactics used by cybercriminals.

The first step in understanding these tactics is educating yourself about the types of attacks that can occur. The two most common are web application compromises (usually seen in the finance, entertainment, and education industries) and distributed denial of service (DDoS) attacks (prevalent across every industry).

The next step is to understand the stages of a breach. Although the types of compromises can vary, most attacks involve the following stages:

  1. Reconnaissance – Forming the attack strategy.
  2. Scan – Searching for vulnerabilities.
  3. Exploit – Beginning the attack.
  4. Access Maintenance – Gathering as much data as possible.
  5. Exfiltration – Stealing sensitive data.
  6. Identification Prevention – Disguising presence to maintain access.