Ntirety Achieves FERPA Compliance—What FERPA Means for Your Organization

Ntirety is pleased to announce that we are now FERPA compliant. When combined with our PCI, HIPAA, and other compliance attestations—along with our HITRUST certification—our ability to maintain FERPA compliance makes Ntirety one of the few hosting providers to have a full suite of compliance certifications. Further, this means that educational institutions and agencies can rely on us to store and maintain electronic student education records in accordance with FERPA regulations.

We accomplish this by employing multiple layers of security to guarantee the protection, privacy, and integrity of data. Our policies and processes—including strong authentication controls—keep data safeguarded at all times, both in physical storage and in the cloud.

What is FERPA?

The Family Educational Rights and Privacy Act (FERPA) is a federal law, enacted in 1974, that ensures students’ paper and electronic education records stay private. In 1994, FERPA was amended to strengthen key areas, such as who can review education records and how and when those records can be released to third parties, including parents.

This law applies to all public schools and state or local education agencies that receive federal education funds. Organizations that host and/or develop Integrated Data System (IDS) software must also ensure they are compliant with FERPA.

What counts as an education record?

Schools maintain education records, which include a range of information about a student. Examples include:

  • Medical and health records
  • Emergency contact information
  • Grades, test scores, courses taken, academic specializations, and activities
  • Disciplinary actions, attendance, schools attended, awards conferred, and degrees earned
  • Identification information, such as ID codes, Social Security numbers, and photographs

What DOES NOT count as an education record?

There are a few forms of documentation that seem like they would be classified as student education records, but are not. These exceptions include:

  • Personal notes written by school officials that are not shared with others
  • School or district law enforcement records created and maintained by a school or district’s law enforcement unit
  • Directory information, such as a student’s name, address, telephone number, or photo

How does this apply to those who manage, transfer, and store educational data?

To ensure your educational data stays compliant, you need a security program with the right storage, authentication, and overall data management policies and procedures. Put simply, you must ensure your hosting and storage providers are equipped to keep you compliant.

For institutions and organizations that host and/or develop IDS software, FERPA requires that they ensure strong physical and IT security controls over the system, such as:

  • Clearly written and strictly enforced security policies and procedures covering physical security, network mapping, authentication, layered defense architecture, secure configurations, access controls, firewalls and intrusion detection/prevention systems, automated vulnerability scanning, patch management, incident handling, and audit and compliance monitoring
  • Documented guidelines and justifications for data collection, management, and access
  • An established framework for reviewing and approving individual uses of student data
  • Procedures for the external sharing of data analytics that ensure data is delivered in non-identifiable formats

What happens when an institution or agency violates FERPA?

If an institution or agency violates FERPA, it may lose some or all of its federal funding. To date, this penalty has never actually been imposed; institutions found in violation of the law have avoided losing funding by correcting their practices.

Ultimately, though, complying with FERPA is about more than avoiding penalties. It’s about protecting the data privacy of students so their information doesn’t fall into the wrong hands.

What does a FERPA-compliant infrastructure need?

Whether your data is in the cloud or on-prem, you’ll want to follow these guidelines to stay compliant:

  • Keep your records in the U.S. Transferring PII and education records across international boundaries can be risky. It can be challenging to enforce privacy laws outside of the U.S. and hold non-U.S. entities accountable for violations.
  • Protect your data no matter where it lives. Review all appropriate administrative, physical, and technical safeguards that the provider may use to protect data, including how they destroy it.
  • Partner with expert help. An experienced, compliant hosting provider can help you pass your FERPA audits, enabling you to focus on your job while they handle both your security and compliance management for you.

How Ntirety’s FERPA-Compliant Solutions Can Help Your Organization

“We are an EdTech company that works very closely with top educational institutions and students from around the globe, and we understand the extreme importance of maintaining the integrity of student data. We chose Ntirety as our global managed hosting provider because of their commitment to strong security, strict compliance, and great customer service. Their announcement that they are FERPA compliant now is just another reminder of why we choose them.” — Jamey Vester, CISSP, Chief Technology Officer, IES Abroad

Ntirety helps organizations mitigate risks and gain the edge they need to become more agile while spending less. In addition to safely storing electronic student records, we work with clients to host a wide variety of education applications, such as:

  • Content management systems
  • Digital education
  • On-demand learning materials and webinars
  • Academic research data
  • Digital media for speaking events, sporting events, and fine arts performances

To learn more about how our team of experts can help you ensure your data stays secure and compliant with FERPA, contact us today for a complimentary consultation.

Recapping CloudEXPO 2018: Focus, Focus, Focus

Our team had the pleasure of attending CloudEXPO this year, a conference focused on cloud technology, security, and trends. Our CEO, Emil Sayegh, presented a breakout session titled Economics of the Cloud: Don’t Aim for the Sky on Everything. During this session, Emil covered the current cost challenges related to public cloud and how a hybrid approach enables organizations to decrease their IT spend and increase business agility.

Emil wasn’t alone in offering up helpful insights—there were several outstanding presentations given throughout the conference. Here, we’ve highlighted some of our key takeaways from these exceptional discussions.

Takeaway #1: Focus on Economics

During his session (view the slides here), Emil stressed the importance of building a careful strategy before rushing into the cloud to avoid overspending. With organizations estimating that 30% of their public cloud spend is wasted, and experts estimating an additional 15% in waste on top of that (RightScale, 2017), it’s more important than ever to play your cards wisely when it comes to the cloud.

To do this, Emil suggests taking a healthy audit of your workloads. What are your use cases? What are your bandwidth needs? Do you have special security or compliance challenges? Once you have a better understanding of your organization’s needs, you can then begin to architect an optimized infrastructure that reduces IT spend across the board.

For example, before working with Ntirety, PlayPower’s high-definition renderings of its customized recreation equipment took up to 10 hours to complete. The company needed a solution that could shorten rendering times without putting a strain on IT resources, budgets, or the company’s infrastructure. Ntirety created a powerful managed cloud solution for PlayPower that reduced playscape rendering times to one hour, without hiring additional IT staff or investing in additional CAPEX.

Takeaway #2: Focus on Fixing Your Process

Daniel Jones from EngineerBetter started off his discussion by pointing out that in order to deliver value, you need to plan for six key areas: databases, development, testing, infrastructure, networks, and deployment.

Where many organizations get into trouble, however, is in trying to build broad and deep knowledge in each of these areas, which takes a huge toll on productivity. By automating or outsourcing some of these pieces, such as databases, infrastructure, and networks, your team can instead focus on development, testing, and deployment. Further, organizations that focus their attention on these areas can eventually move to a test-driven development structure that condenses several long phases into a faster, more efficient process.

This methodology has a name, of course: Continuous delivery. As Jones pointed out at CloudEXPO, continuous delivery allows developers to trim the fat off tasks and gives them more time to focus on the individual parts of the process. But remember—implementing this methodology requires organizations to offload management of databases, infrastructure, and/or networks to trusted vendors.

Takeaway #3: Focus on Data Storage  

In a session from CTERA, Chief Architect Saimon Michelson impressed upon the audience the need to move data to the edge. With edge computing, data is processed by local devices, rather than being transmitted to a data center. Michelson argues that doing so makes data more accessible and provides greater business agility, reducing recovery times from days or weeks to mere minutes.

Moving data to the edge is also more cost-efficient, allowing organizations to scale more quickly and grow their dataset more logically. From an availability standpoint, it allows for easier provisioning and makes data more readily available no matter where you are, effectively increasing productivity. Finally, edge computing allows large amounts of data to be processed near the source, creating more personalized experiences that respond in real time and reduce Internet bandwidth usage—another great way to decrease spend.

Following this storage strategy is not without its challenges, however. You’ll want to ensure that no matter where your mission-critical data is stored, it is fully protected. Edge computing requires organizations to add more data-generating devices to their networks, often in more locations. You’ll need to monitor these devices carefully to ensure they don’t become an open door for unwanted threats. Consider relying on a partner to manage the architecture and security of these devices so that you can reduce the complexity of your move to the edge.

Takeaway #4: Focus on Your Differentiator

One of our favorite sessions from CloudEXPO came courtesy of Avoka’s Chris Harrold. As the company’s Director of Developer Relations, Harrold has helped many financial institutions digitally transform over the years. Due to these experiences, he’s curated quite a list of dos and don’ts.

From this list, several insights stood out. First, Harrold cautions against building your own infrastructure and network in house. He urges financial institutions to leverage third-party expertise and resources, as doing so is cheaper and frees up time to focus on the user experience of your transformation. This matters all the more because most competitors have already made the transition in recent years, so simply going digital isn’t enough. It’s crucial to find your organization’s differentiator, Harrold stresses. Back-end details are similar from organization to organization, so instead of spending time and money nailing down those particulars internally, hand them off to a trusted advisor and turn your attention to identifying the things that matter most for your customers, such as the website experience and application features.

Revisit Emil’s presentation

Did you miss Emil’s session or want to learn more about how to decrease spend while moving to the cloud? Explore his presentation slides below:

Ready to implement these takeaways into your organization’s infrastructure to decrease IT spend, increase business agility, and mitigate risk? Contact us now for a free consultation »

6 Must-See Sessions at CloudEXPO 2018 in New York City

From November 12-13, 2018, Ntirety will be attending CloudEXPO in New York City, where tech leaders can gather to learn about current cloud trends, engage in insightful discussions around cloud strategies, and ensure that their enterprises are on the right path to a successful digital transformation.

According to the CloudEXPO team, every Global 2000 enterprise in the world is trying to develop their own unique mix of cloud technologies and services, forming multi-cloud and hybrid cloud architectures and deployments across all major industries.

If your company has a similar goal and you or your colleagues plan on attending CloudEXPO, here are six great sessions we suggest checking out:

1. Notorious B.I.G Data: Wins and Fails When Building a Big Data Platform in the Cloud | Monday, November 12 • 11:00am – 11:40am

Unacast, a company specializing in location data, will dive into the ways cloud architecture and its use cases have evolved in the three short years since the company first migrated. Their first employee and VP of Engineering, Andreas Heim, will share lessons learned from those three years by taking you through the history of Unacast from a data and engineering perspective.

Why you should attend: More than half of all big data projects don’t reach completion.1 This session will cover some of the biggest mistakes to avoid so that your company doesn’t become a statistic, too, including:

  • Non-scalable architectures
  • Systems bursting at their seams
  • Pricing that scales exponentially

Plus, you’ll learn how Unacast:

  • Processes 2 billion rows of data on a daily basis
  • Maintains full transparency in reporting and cost
  • Manages GDPR compliance across all teams
  • Optimizes deployment and developer experience

2. Continuous Delivery is Better for Your Brain | Monday, November 12 • 2:00pm – 2:40pm

Daniel Jones, CTO of EngineerBetter, leads this afternoon discussion on productivity and asks the question: Are we failing to exploit the benefits of modern technology? During his presentation, Jones will explore the concept of Continuous Delivery and how it leverages the findings of cognitive psychology and neuroscience to increase team productivity and happiness.

Why you should attend: Business agility is a top-of-mind priority for many organizations, and yet 71% of companies have low business agility fluency.2 This session will help software teams discover:

  • How to improve willpower and decrease technical debt
  • Whether present bias is real and how you can turn it to your advantage
  • How to increase a team’s effective IQ
  • How DevOps and product teams can increase empathy and measure the impact it has on productivity

3. Cloud-Native: A New Ecosystem for Putting Containers into Production | Monday, November 12 • 3:00pm – 3:40pm

The standardization of container runtimes and images has sparked the creation of an almost overwhelming number of new open source projects that build on and otherwise work with these specifications, including Istio, Jaeger, Prometheus, Grafeas, and Kritis. This presentation will cover the evolution of the cloud-native ecosystem that has been built around these containers and what that means for your organization.

Why you should attend: Making development, production, and/or transactions more efficient is a top priority for many organizations, and plenty rely on various open source tools to keep things running smoothly. This raises the question: How do open source tools fit into your cloud infrastructure? Settle into a chair at A New Ecosystem to gain insights that will help you find the answer.

4. Blockchain Power Panel | Monday, November 12 • 6:00pm – 6:40pm

Like AI, blockchain is a tech word that turns a lot of heads—and leaves others quite confused. During the CloudEXPO Blockchain Power Panel, a group of tech leaders will discuss their perspectives on the challenges presented by blockchain, roadblocks to adoption, and how organizations can avoid those issues.

Why you should attend: According to Gartner, blockchain has just passed the peak of the hype cycle curve3—and we’re still talking about it, which means that blockchain is a technology you’d be remiss in ignoring. However, is blockchain right for your organization? And if so, what does that mean for your infrastructure? With disruption and security being top of mind for many tech executives, this is a can’t-miss panel that will help shed some light on the trend.

5. CIA Triad: 3 Pillars for Securing the Internet of Things | Tuesday, November 13 • 10:00am – 10:40am

Start Day Two by learning a little lesson in IoT security with the CIA triad. No, not that CIA. We’re talking about the three pillars of IoT protection:

  • Confidentiality: For preserving authorized restrictions on access and disclosure
  • Integrity: For guarding against improper information modification or destruction and ensuring information nonrepudiation and authenticity
  • Availability: For ensuring timely and reliable access to the device and information

Why you should attend: The expansion of the IoT shows no signs of slowing down, but security is still a huge concern. As your organization moves to incorporate more IoT technologies, it’s important to understand how interconnection affects your end-users and how you can protect them. This presentation will provide an overview of these three security pillars, plus tips for how to implement them.

6. Economics of the Cloud: Don’t Aim for the Sky on Everything | Tuesday, November 13 • 11:00am – 11:40am

Finally, join Ntirety CEO Emil Sayegh for an insightful presentation on how mastering your IT budget is all about leveraging hybrid cloud environments and carefully planning a thoughtful technology strategy. Further, Emil will dive into IoT solutions and how their rapid growth affects security challenges and other aspects of the cloud landscape.

Why you should attend: As we’ve noted time and time again, 84% of digital transformations fail,4 likely because of the increasingly complex nature of the tech landscape. A successful transformation requires careful planning and a strategy that accounts for risk and allows for agility. Economics of the Cloud will help you start or evolve your plan by covering:

  • Which workloads are most appropriate on-prem, and which are better suited running in a public, private, or hybrid cloud environment
  • Analysis of how end-users approach various cloud offerings
  • What Google, Azure, and Amazon provide that is not otherwise available
  • The agility of the cloud when coupled with tools
  • How to maneuver through regulatory and compliance challenges
  • Consolidating with security and compliance in mind

Come say hello!

In between sessions, be sure to stop by the Exhibit Hall and visit the Ntirety booth (#220). Our team would love to chat with you about the challenges your organization is facing and how evolving your infrastructure could bring about decreased spend, reduced risk, and increased business agility.

Not able to make it to CloudEXPO? Find out where Ntirety is headed next!

Sources:

  1. Infochimps, 2013.
  2. Agilityhealth, 2018.
  3. Gartner, 2018.
  4. Forbes, 2016.


Top Three Reasons Your Connection to the Public Cloud Is Costing You

Hybrid cloud adoption has taken off. Companies have identified efficiency or compliance requirements that call for combining on-prem or hosted deployments with public cloud technology to manage data and workloads.

However, even with a solid hybrid cloud strategy, many enterprises still have an opportunity to improve costs, especially in public cloud deployments. According to the 2018 State of the Cloud Report from RightScale, “optimizing cloud costs” is the top initiative for 58% of respondents.

That’s why Zayo and Ntirety have teamed up to shed some light on the top three reasons why public internet connections to the public cloud are costly.

Reason #1: Higher Data Egress Fees

Cloud service providers, such as AWS, Azure, and Google, recognize that direct connections are more secure and efficient than public internet connections, and therefore reward companies for using them. Companies without a direct connection to their cloud service providers pay significantly more in data egress fees. With a direct connection such as Ntirety Cloud Connect, which is powered by CloudLink by Zayo, companies reduce their data egress fees by about 3x. This translates into significant cost savings for companies utilizing a direct connection.
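To put rough numbers on that claim, here is a back-of-the-envelope comparison in Python. The per-GB rates and monthly volume are purely illustrative assumptions, not actual AWS, Azure, or Google pricing, which varies by region and volume tier:

```python
# Hypothetical per-GB egress rates -- illustrative only, not real
# provider pricing, which varies by region and volume tier.
INTERNET_EGRESS_PER_GB = 0.09  # egress billed over the public internet
DIRECT_EGRESS_PER_GB = 0.03    # discounted rate over a direct connection

monthly_egress_gb = 50_000     # assume 50 TB leaves the cloud each month

internet_cost = monthly_egress_gb * INTERNET_EGRESS_PER_GB
direct_cost = monthly_egress_gb * DIRECT_EGRESS_PER_GB

print(f"Public internet: ${internet_cost:,.2f}/month")
print(f"Direct connect:  ${direct_cost:,.2f}/month")
print(f"Savings factor:  {internet_cost / direct_cost:.1f}x")  # about 3x
```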

Reason #2: Lower Performance

The public internet is the easiest, most widely available way for companies to connect to their public cloud, but it is also the lowest performing. The public internet delivers higher latency and longer data loading times, since traffic from many public cloud services shares a common link. Slow-moving applications and workloads cost companies big money in the form of lost productivity, lower user satisfaction, and limited availability of key features. Ntirety Cloud Connect eliminates the slower performance and higher latency that come with using the public internet to access cloud workloads (Office 365, Salesforce, Google Drive) by offering speeds up to 10Gbps over predictable network routes and an undersubscribed network that provides consistent performance.

Reason #3: Increased Security Risks

Finally, companies utilizing the public internet to connect to their cloud service providers are opening themselves up to the unnecessary security risks inherent to the public internet. Customers transferring data into and out of their public clouds over the public internet expose sensitive customer and user data to potential network hijacking, DDoS attacks, and various other external threats. Ntirety Cloud Connect is a direct, protected solution that eliminates the risks of the public internet, ensuring that business-critical data and workloads migrate to and from the cloud securely. Additionally, this solution meets stringent compliance standards for HIPAA- and GDPR-sensitive materials.

Though the public internet is convenient for many situations, direct cloud connections are more reliable, more secure, and better performing because the flow of data is controlled end-to-end. Together, Zayo and Ntirety provide customers with a direct, private connection to cloud service providers, ensuring that traffic from a Ntirety data center does not traverse the public internet on its way to a public cloud, ultimately resulting in a highly available, more performant customer experience.

Hybrid cloud adoption is here to stay, and today’s enterprises have a plethora of public and private cloud connectivity options. By implementing direct connections, savvy enterprises are able to optimize cloud costs, boost performance, and increase security.

Want to perform a complimentary assessment of your public cloud environment? Contact a Ntirety expert today.

The Many Names and Faces of Disaster Recovery

When discussing disaster recovery, people often throw out a variety of words and terms to describe their strategy. Sometimes, these terms are used interchangeably, even when they mean very different things. In this post, we’ll explore these terms and their usage so you can go into the planning process well-informed.

Disaster Recovery:

This is a term that has been making the rounds since the mid- to late seventies. Although the meaning has evolved slightly over time, the disaster recovery process generally focuses on preventing loss from natural and man-made disasters, such as floods, tornadoes, hazardous material spills, IT bugs, or bio-terrorism. Many times, a company’s disaster recovery plan is to duplicate its bare-metal infrastructure to create geographic redundancy.


Recovery Time Objective (RTO): 

As you build your disaster recovery strategy, you must make two crucial determinations. First, figure out how much time you can afford to wait while your infrastructure works to get back up and running after a disaster. This number will be your RTO. Some businesses can only survive without a specific IT system for a few minutes. Others can tolerate a wait of an hour, a day, or a week. It all depends on the objectives of your business.


Recovery Point Objective (RPO):

The second determination an organization must make as they discuss disaster recovery is how much tolerance they have for losing data. For example, if your system goes down, can your business still operate if the data you recover is a week old? Perhaps you can only tolerate a data loss of a few days or hours. This figure will be your RPO.
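To make the two objectives concrete, here is a minimal Python sketch, using hypothetical incident times, that measures an outage against RPO and RTO targets:

```python
from datetime import datetime, timedelta

def assess_recovery(last_backup, failure, restored, rpo, rto):
    """Compare an incident's actual data loss and downtime to the plan's targets."""
    data_loss = failure - last_backup  # work created since the last good copy
    downtime = restored - failure      # how long the system was unavailable
    return {
        "data_loss": data_loss, "rpo_met": data_loss <= rpo,
        "downtime": downtime, "rto_met": downtime <= rto,
    }

# Hypothetical incident: nightly backups, failure at 2:00 PM, restored by 3:30 PM.
print(assess_recovery(
    last_backup=datetime(2018, 11, 12, 0, 0),
    failure=datetime(2018, 11, 12, 14, 0),
    restored=datetime(2018, 11, 12, 15, 30),
    rpo=timedelta(hours=24),  # the business tolerates losing up to a day of data
    rto=timedelta(hours=2),   # the business must be back up within two hours
))  # 14h of data loss (RPO met), 1.5h of downtime (RTO met)
```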


IT Resilience:

IT resilience measures an organization’s ability to adapt to both planned and unplanned failures, along with its capacity to maintain high availability. It differs from traditional disaster recovery in that it also encompasses planned events, such as cloud migrations, datacenter consolidations, and maintenance.


Load Balancing:

To gain IT resilience and keep applications highly available, companies must engage in load balancing, which is the practice of building an infrastructure that can distribute, manage, and shift workload traffic evenly across servers and data centers. With load balancing, a downed server is no concern because there are several other servers ready to pick up the slack.

Streaming giant Netflix often tests the load-balancing ability of its network with a tool it developed called Chaos Monkey (since released as open source). Using this tool, Netflix ensures that its infrastructure can sustain random failures by purposefully creating breakdowns throughout the environment. This is a great example for companies to follow. Ask yourself: What would happen if someone turned off my server or DDoSed my website? Would everything come crashing to a halt if an employee accidentally deleted a crucial file?
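As a rough illustration of the principle (a toy sketch, not Netflix’s actual tooling), the following Python snippet “kills” a random server in a pool and verifies that a simple round-robin load balancer keeps serving requests:

```python
import random

class ServerPool:
    """Toy round-robin load balancer that routes around downed servers."""

    def __init__(self, names):
        self.healthy = {name: True for name in names}
        self._order = list(names)
        self._next = 0

    def route(self, request):
        # Try each server once, starting from the next in rotation.
        for _ in range(len(self._order)):
            name = self._order[self._next]
            self._next = (self._next + 1) % len(self._order)
            if self.healthy[name]:
                return f"{name} handled {request}"
        raise RuntimeError("all servers down; no capacity left")

pool = ServerPool(["web-1", "web-2", "web-3"])
victim = random.choice(list(pool.healthy))  # the "chaos monkey" step
pool.healthy[victim] = False
print(f"chaos: killed {victim}")
for i in range(6):
    print(pool.route(f"request-{i}"))  # surviving servers pick up the slack
```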


Backup:

Backups are just one piece of the disaster recovery puzzle. Imagine taking a snapshot of your entire workload and replicating it on a separate server or disk—that is a backup. With backups, you always have a point-in-time copy of your workload to revert to if something happens to your environment; however, anytime you must revert to a backup, anything created or changed between the time the last snapshot was taken and the time the disaster occurred will be lost.
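A minimal sketch of that snapshot-and-revert cycle, using plain directory copies and made-up file names, might look like this; note how work done after the snapshot disappears on restore:

```python
import shutil
import time
from pathlib import Path

workload = Path("app-data")
workload.mkdir(exist_ok=True)
(workload / "orders.txt").write_text("order-1\n")

def take_snapshot(workload_dir, backup_root):
    """Copy the workload into a timestamped directory: a point-in-time backup."""
    dest = Path(backup_root) / f"snapshot-{time.strftime('%Y%m%d-%H%M%S')}"
    shutil.copytree(workload_dir, dest)
    return dest

def restore(snapshot, workload_dir):
    """Revert to the snapshot; anything created after it was taken is lost."""
    shutil.rmtree(workload_dir, ignore_errors=True)
    shutil.copytree(snapshot, workload_dir)

snap = take_snapshot(workload, "backups")
(workload / "orders.txt").write_text("order-1\norder-2\n")  # work done after the snapshot
restore(snap, workload)
print((workload / "orders.txt").read_text())  # order-2 is gone: that gap is your RPO
```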


Failover Cluster:

Another piece of the disaster recovery puzzle, failover clusters are groups of independent servers (often called nodes) that work together to increase the availability and scalability of clustered applications. Connected through networking and software, these servers “fail over,” or begin working, when one or more nodes fail.

Which type of failover server you choose depends on how crucial the system is, along with the RPO and RTO objectives of the disaster recovery plan. Failover servers are classified as follows (a minimal sketch of the promotion logic appears after the list):

  • Cold Standby: Receives data backups from the production system; is installed and configured only if production fails.
  • Warm Standby: Receives backups from production and is up and running at all times; in the case of a failure, the processes and subsystems are started on the warm standby to take over the production role.
  • Hot Standby: This configuration is up and running with up-to-date data and processes that are always ready; however, a hot standby will not process requests unless the production server fails.
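Here is a minimal, illustrative Python sketch of the warm standby pattern; the node names and the simulated health flag are hypothetical stand-ins for real cluster health checks:

```python
import time

class Node:
    def __init__(self, name, running=False):
        self.name = name
        self.running = running
        self.healthy = True

    def start(self):
        """On a warm standby, processes start only when failover occurs."""
        self.running = True
        print(f"{self.name}: processes started, taking over the production role")

def monitor(production, standby, checks=3, interval=0.1):
    """Poll the production node; promote the standby if production fails."""
    for _ in range(checks):
        if not production.healthy:
            standby.start()
            return standby
        time.sleep(interval)
    return production

prod = Node("prod-db", running=True)
warm = Node("standby-db")   # receives backups but sits idle until needed
prod.healthy = False        # simulate a production failure
active = monitor(prod, warm)
print(f"active node: {active.name}")
```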

Replication:

This term refers to the process of copying one server’s application and database systems to another server as part of a disaster recovery plan. Sometimes this means replacing scheduled backups: replication happens closer to real time than traditional backups, and can therefore typically support shorter RPO and RTO targets.

Replication can happen three different ways (a naive file-level sketch follows the list):

  • Physical server to physical server
  • Physical server to virtual server
  • Virtual server to virtual server
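The sketch below illustrates the general idea under simplifying assumptions; real replication ships block- or transaction-level changes continuously, but even copying new and changed files on a short timer shows why replication narrows the recovery point compared to, say, nightly backups:

```python
import shutil
import time
from pathlib import Path

def replicate(source, replica, interval=1.0, cycles=3):
    """Naive near-real-time replication: sync new or changed files to a replica."""
    src, dst = Path(source), Path(replica)
    for _ in range(cycles):
        for f in src.rglob("*"):
            if f.is_file():
                target = dst / f.relative_to(src)
                target.parent.mkdir(parents=True, exist_ok=True)
                # Copy only files that are new or newer than the replica's copy.
                if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
                    shutil.copy2(f, target)
        time.sleep(interval)

Path("primary").mkdir(exist_ok=True)
(Path("primary") / "app.db").write_text("current state")
replicate("primary", "replica", interval=0.1)
print((Path("replica") / "app.db").read_text())  # replica trails by seconds, not days
```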

Database Mirroring:

As with backups and replication, database mirroring involves copying a set of data on two different pieces of hardware; however, with database mirroring, both copies run simultaneously. Anytime an update, insertion, or deletion is made on the principal database, it is also made on the mirror database so that your backup is always current.
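As a minimal illustration of the principle (not any particular database engine’s mirroring feature), this Python sketch applies every write to a principal and a mirror SQLite database in lockstep:

```python
import sqlite3

class MirroredDB:
    """Apply every write to both a principal and a mirror database."""

    def __init__(self, principal_path, mirror_path):
        self.principal = sqlite3.connect(principal_path)
        self.mirror = sqlite3.connect(mirror_path)
        for db in (self.principal, self.mirror):
            db.execute("CREATE TABLE IF NOT EXISTS records (id INTEGER, body TEXT)")

    def execute(self, sql, params=()):
        # Every update, insertion, or deletion hits both copies.
        for db in (self.principal, self.mirror):
            db.execute(sql, params)
            db.commit()

db = MirroredDB("principal.db", "mirror.db")
db.execute("INSERT INTO records VALUES (?, ?)", (1, "intake form"))
# The mirror is current the moment the principal is written.
print(db.mirror.execute("SELECT * FROM records").fetchall())
```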


Journaling:

In the process of journaling, you create a log of every transaction that occurs within a backup or mirrored database. These logs are sometimes moved to another database for processing so that there is a warm standby failover configuration of the database.
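A minimal sketch of the idea, using a JSON-lines log file and a plain dictionary as a stand-in for the standby database:

```python
import json

def journal(log_path, operation):
    """Append each transaction to a log so a standby can replay it later."""
    with open(log_path, "a") as log:
        log.write(json.dumps(operation) + "\n")

def replay(log_path, database):
    """Rebuild state on a standby by replaying the journal in order."""
    with open(log_path) as log:
        for line in log:
            op = json.loads(line)
            if op["action"] == "set":
                database[op["key"]] = op["value"]
            elif op["action"] == "delete":
                database.pop(op["key"], None)

journal("txn.log", {"action": "set", "key": "order-42", "value": "shipped"})
standby = {}
replay("txn.log", standby)
print(standby)  # {'order-42': 'shipped'}
```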


At the end of the day, what you really need is business continuity.  

A well-formed business continuity plan will use all of these methods to ensure your organization can overcome serious incidents or disasters. Going beyond availability, business continuity plans determine how your business will continue to run in times of trouble. Can your business survive a systems failure? Can it survive a situation where your offices burn down? How quickly can you access your mission-critical data and applications? How will people access your mission-critical applications while your primary servers are down? Do you need VPNs so employees can work from home or from a temporary space? Have you tested and retested your business continuity plan to ensure you can actually recover? Does your plan follow all relevant guidelines and regulations?

The right mix of solutions will depend on the way your business operates, the goals you’re trying to achieve, and your RPO and RTO targets. In the end, the resilience of any IT infrastructure or business comes down to planning, design, and budget. With the right partner to provide disaster recovery and business continuity management services, you can come up with a smart plan that proactively factors in all risk, TCO goals, and availability objectives.

To start planning your own battle-tested IT disaster recovery plan and business continuity strategy—and ensure that your business is ready for anything—contact one of our experts for a free risk assessment today.

A Smarter Path: Saving Money and Mitigating Risk Through a Digital Ecosystem

Since the early days of the Internet of Things (IoT), the movement has come a long way, with the focus shifting from simple internet connectivity to full-scale digital ecosystems. Now, ensuring the interconnectivity of devices—especially in mobile environments—is a must, and IT professionals are challenged to make their solutions efficient, available, and secure.

Along with these requirements, it’s important for these leaders to adapt to the always-evolving needs of a secure digital ecosystem. For many, the key to successfully managing their cloud solutions is a dynamic infrastructure that allows for faster and even more reliable communication between devices. They need a smarter path that provides scalability—and perhaps no better path exists than that of a purpose-built, cloud-based system.

The key to a successfully managed cloud solution is a dynamic infrastructure.

Making the IoT Accessible for More Organizations

Understandably, the primary word on most C-level professionals’ and board members’ minds is usually money. Cash flow and ROI underline most conversations, even if the words aren’t uttered aloud. Fortunately, cloud solutions can help make keeping up with the IoT more affordable, and offer a number of other benefits, including:

  • High availability
  • On-demand access
  • Manageability
  • Built-in security
  • Capacity management
  • Geo-location services
  • Redundancy

These features would usually be extremely expensive to integrate and maintain, but in a cloud environment, they become somewhat standard.

How to Break the Mold

Even though building a digital ecosystem is becoming more affordable, there’s often a disconnect between what CIOs believe they should spend and the reality of the costs associated with secure, transformative technologies like cloud computing and IoT.

For those trying to sway decision makers into exploring these technologies, one approach that seems to work is to audit risk. It is well accepted that most digital businesses are in danger of suffering major failures because of an inability to manage digital risk.1 For proof, look no further than a standard corporate insurance policy. But risk can be mitigated and minimized, and when you’re integrating IoT, the way to do that is with a strong cloud effort.

Protecting Your Investment

No matter what you spend to get your digital ecosystem up and running, it’s important to choose solutions that are stacked with security features, such as:

  • Continuous monitoring
  • Routine maintenance and updating
  • Real-time network visibility
  • Modern firewalls and intrusion detection systems
  • Privilege alerts

When integrated into a risk strategy, these features help to reinforce security throughout your organization, and allow for a faster time to market and better efficiencies. This cultural shift won’t just be felt internally—it will also reflect in the service provided to clients, giving your organization a better reputation overall.

Avoiding Lock-In

According to Gartner, worldwide spending on public cloud services will grow 21.4% in 2018,2 but the firm warns that companies should guard themselves against lock-in—a phenomenon where an organization moves its applications and data onto one cloud platform and then experiences issues when it tries to move away again.

It’s important to consider this as you begin building your digital ecosystem. Starting with a hybrid cloud architecture is a popular way to mitigate the risk of lock-in. This approach allows you to bridge to the public cloud while also taking advantage of an on-premises or dedicated hosted infrastructure, essentially protecting you from the dangers of an irreversible migration.

Finding the Right Partner

When it comes to technology, a single degree of change can disrupt an entire market. The same can be said for changes within an enterprise. By making even incremental changes to your digital ecosystem, you can create a positive disruption that leads to better business opportunities—especially when you start with the right cloud approach and the right cloud partner.

Find a hosting partner you can trust to help design, build, secure, and operate your infrastructures. At Ntirety, we have the in-depth expertise needed to create or improve a customized and affordable digital ecosystem—all while helping to mitigate risk and keep your organization secure.

To learn how Ntirety can help you tap into the IoT through a cloud-based digital ecosystem, or to receive a free security assessment, talk to one of our Security Experts today.

1. Gartner, 2016.

2. Gartner, 2018.

Failure to Launch: How hybrid cloud can save some digital transformations from being duds

As enterprises work to keep up with the ever-evolving digital marketplace, some worry that the digital transformations required to keep enterprises competitive will disrupt their ongoing business operations.

In a recent article published on InfoWorld, Ntirety President and CEO Emil Sayegh discusses some of the challenges faced by enterprises trying to make a digital transition and how, when done correctly, certain technologies can help organizations to stay competitive in their respective markets.

Ntirety Listed in the Newly Released 2018 Gartner Market Guide Report

As hybrid cloud continues to gain prominence as an IT infrastructure solution, providers of Managed Hybrid Cloud Hosting (MHCH) are becoming critical in helping companies run their workloads in a secure and dynamic environment. Ntirety has been focused on providing enterprises with a wide variety of managed hosting and managed cloud services and capabilities, with a special focus on MHCH. As a result, we were recently listed as one of the top MHCH vendors in the 2018 Gartner Market Guide for Managed Hybrid Cloud Hosting.

This listing is a result of Ntirety’s strong ability to provide:

  • Managed services for both public cloud and dedicated infrastructures
  • Secure cloud migrations
  • Mission-critical application management
  • Comprehensive hyperscale managed support
  • Location-sensitive data and application hosting

Putting Our Abilities to the Test

These capabilities allow us to efficiently design, build, and operate PCI-, FERPA-, and HIPAA-compliant infrastructures that can securely store highly sensitive data for our global enterprise customers, including industry leaders like Samsung. Last year, the electronics giant asked Ntirety to help secure its SmartTV application. Our team executed a customized approach that enabled Samsung to offer the first smart television app to be PCI compliant on a global scale. As the cloud market evolves, Ntirety will continue to offer solutions like these, built to meet the individualized needs of each customer, and we look forward to meeting Gartner’s high standards in the process.

Bespin Global Follows Suit

In related news, our sister company in Korea, Bespin Global, which uses Ntirety data centers, has just been listed on the Gartner Magic Quadrant for the second year in a row. The report notes, “Bespin Global is a small hybrid MSP headquartered in South Korea. It was founded in 2015 as a spinoff of Ntirety Korea. It offers hyperscale cloud managed and professional services, along with hybrid solutions, including colocation, dedicated servers and private cloud IaaS via Ntirety’s data centers.”

Harness Our Expertise

To learn more about what makes Ntirety a Gartner-recognized industry leader, or to receive a comprehensive consultation on your own cloud migration, hybrid cloud infrastructure, or mission-critical application management, contact a Ntirety Expert today.

Ntirety Sister Company, Bespin Global, Makes Company History

Customers benefit from high-end hybrid cloud solutions and migration services via Ntirety data centers.

Gartner has published their 2018 Magic Quadrant for Public Cloud Infrastructure Managed Service Providers (MSP), Worldwide. For the second year in a row, Bespin Global, Ntirety’s sister company in Korea, has been listed as one of the leading MSPs. This designation is based on:

  • Hyperscaling capabilities
  • Ability to manage and operate a public cloud environment
  • Availability of a cloud management platform to monitor the public cloud environment and calculate cost
  • Cloud architecture design, migration, and DevOps automation capabilities

As the only listed company headquartered in East Asia, Bespin is in impressive company among its larger competitors, especially those utilizing hyperscale data center infrastructure—a global market that is expected to be worth $98.2 billion by 2022.1 Further, the report emphasizes Bespin’s strength as a strategic and focused global MSP, capable of offering advanced managed and hybrid cloud solutions via Ntirety data centers, including:

  • Migration
  • Architecture
  • Dedicated servers
  • Private cloud IaaS

Bespin started as an internal business unit of Ntirety and became an independent entity in December 2015. In addition to that foundation, the company credits its proficiency and success to these key components:

  • Premier Partner Status with AWS and Azure
  • Expert management of multi-region deployments for their global customers
  • Cloud-native approach to solution architecture
  • In-house developed cloud management platform, available as a stand-alone service

We sincerely congratulate our colleagues at Bespin on their hard-earned achievements and well-deserved recognition.

Ntirety Also Recognized In 2018 Gartner Report

As hybrid cloud continues to influence IaaS, providers of managed hybrid cloud hosting (MHCH) are becoming critical in helping companies run their workloads in a secure and dynamic environment.

With this trend in mind, Ntirety has been focused on providing enterprises with a wide variety of agile services and capabilities, including MHCH. As a result, we were recently listed as a representative MHCH vendor in the 2018 Gartner Market Guide for Managed Hybrid Cloud Hosting. This listing factors in our ability to provide:

  • Secure cloud migration
  • Mission-critical application management
  • Comprehensive hyperscale support
  • Location-sensitive data and application hosting

Maintaining Momentum

With 20 years of expertise, we are humbled to be included among other big industry players, and we look forward to innovating within the field to provide MHCH solutions that can change the way enterprises do business for the better.

Understand the Hype

To learn more about hyperscaling hybrid solutions, explore the 2018 Magic Quadrant report or contact us.

1. BCC Research, March 2018. Hyperscale Data Center Market to See 20.3% Annual Growth Through 2022

Healthcare IT Systems: Immunity in Cloud Hosting

With the passage of the healthcare reform act, tighter industry regulations that tie revenue to patient outcomes, and the ability to monitor national health at an unprecedented rate, the healthcare IT industry in the United States has become increasingly complex.

Today, technology and healthcare are merging in ways that will radically change the quality of work and life for both medical professionals and patients. Nowhere is this more evident than in the cloud, with increasing demand not only for cloud-based electronic health record (EHR) systems, but also for applications that are connected to patients, caregivers, and every participant involved throughout the continuum of care. A recent report focusing on emerging technologies expects the global healthcare cloud computing market to grow at a compound annual growth rate of 21.24% between 2017 and 2021, stating, “Some hospitals have deployed legacy systems for enterprise management systems, while some have migrated their IT infrastructure to the cloud-based model. IT systems should be made interoperable to support advanced medical technologies such as telemedicine, digital patient engagement systems, mHealth, and e-health systems.”

Cloud-based solutions in healthcare IT eliminate the need for repetitive tasks like patient paperwork and verification, lab result analysis, and simple prescription completions. And with record-keeping systems and medical care transitioning to cloud-based applications, there is a mountain of medical data that requires a hosted infrastructure that is secure, scalable, and available for mission-critical applications.

The Human Variable in Security

Along with all these advancements, we are seeing more people entering the healthcare IT system, which means more patients to monitor, more data to gather and store, and more people who require quality care. To do this efficiently, you need interoperable systems that communicate with each other, that are always available, and that are HIPAA-compliant; infrastructures and environments that are HITRUST certified are even more secure. Most systems already have this in place, but what often gets overlooked is security, both in the system and in the workplace. With this much data availability and portability, compute, processing, and data storage requirements can quickly scale beyond easy management as patient data pours in. Because of this, healthcare IT systems and technology providers are at risk of ransomware and phishing attacks: their data is a veritable treasure trove for the modern-day hacker.

It is imperative for healthcare workers to be properly and periodically trained in current data privacy procedures and security best practices. A recent study showed that 78% of healthcare employees lack even the most basic security preparedness needed to prevent cyber threats. For instance, many employees do not know the difference between the de-identification and the encryption of data. All the HIPAA compliance and regulation adherence an organization has is no substitute for employee awareness, education, and implementation of cybersecurity risk prevention. So what’s at risk when an organization isn’t properly prepared or trained? In a recent PHI incident, Fresenius Medical Care was forced to pay a $3.5 million settlement to the U.S. Department of Health & Human Services Office for Civil Rights after five separate breaches were reported, some of which involved stolen laptops on which the company had “failed to implement a mechanism to encrypt and decrypt ePHI.”

An Industry Transformed

Beyond improved patient data management, cloud-based solutions give the healthcare IT industry the potential to advance patient care overall when coupled with transformative technologies such as AI, blockchain, and robotics. The data collected from these technologies can be analyzed in ways that create more effective, efficient, and personalized health care. Research has shown the global digital health market strengthened in Q4 2017 due to increased focus on improving interoperability and analytics capabilities, accelerating regional innovation diversity, and other factors.

Still, there are plenty of opportunities for IT to impact healthcare, most notably with mobile devices, especially when you consider the rapidly increasing use of mobile technology in healthcare today. Advantages include reduced errors in medication and labeling, improved communication between staff, and comprehensive improvement and cost reduction in patient care. Ninety-eight percent of clinicians are expected to use mobile devices by 2022, with adoption of mobile technology expected to expand to pharmacists, lab technicians, and eventually patients, providing a more holistic patient care experience. As these devices continue to grow in usage and as data flows at an increasingly higher rate, it’s more important than ever to have security strategies in place that can handle the volume and connectivity required to deliver a safe, secure, and seamless experience for the healthcare user.

Regarding labs, North America currently dominates the global market in laboratory information systems, which provide data for diagnosis, treatment, and illness prevention. The market’s growth is set to hit almost $2 billion by 2021, but not without a few challenges, including the high cost of lab information systems and, again, the lack of skilled healthcare IT professionals. The Harvard Business Review stated, “Relatively few organizations have taken the important next step of analyzing the wealth of data in their IT systems to understand the effectiveness of the care they deliver. Put differently, many health care organizations use IT as a tool to monitor current processes and protocols; what only a small number have done is leverage those same IT systems to see if those processes and protocols can be improved.” As data storage, processing, and portability requirements continue to grow in the laboratory market, security must remain a top priority, especially given the sensitive nature of the information involved: diagnoses, lab results, and PHI. If not implemented correctly, the effects of poorly managed cloud security can be disastrous.

Multi-Cloud, M.D.


Due to the varied nature of the services provided by the healthcare industry and the integration of mobile technologies, a multi-cloud solution is best suited to meet the needs and demands of both practitioner and patient. But the healthcare industry is a unique case, because it has its own Internet of Medical Things (IoMT): connected medical devices like wireless heart monitors, glucometers, and other health-centric smart applications that, while convenient, are vulnerable to data breaches in the cloud. As more healthcare IT systems transition to the cloud, a thorough network-wide risk analysis is needed to ensure the security and availability of their infrastructure. Once a secure framework has been established, healthcare IT systems can adapt, integrate, and automate these new technologies into their daily workflows.

Want to ensure that your infrastructure is completely secure and compliant? Contact a Ntirety Expert for a free risk assessment today.