Please provide your information and we will call you back Monday to Friday, 9-5 EST.
Your submission has been received, and a TeraGo representative will be in contact with you shortly.
Are you heading to the upcoming iTech Toronto West Conference on May 8th? Make sure to stop by the TeraGo booth #203 and catch our presentation, “Are You Ready to Move to the Cloud,” at 10:30am.
TeraGo will be at the upcoming iTech Toronto West Conference. Talk to us about moving to the cloud.
When: Tuesday, May 8th
Where: Toronto International Centre, Booth #203
At 10:30am TeraGo will be presenting “Are You Ready to Move to the Cloud.” Join TeraGo’s Arturo Perez and Nabeel Sherif for an in-depth discussion on:
Download this insightful whitepaper and learn the 12 questions that every organization needs to be asking before adopting the cloud.
Download this whitepaper and uncover the strategy behind creating and executing a great Disaster Recovery plan in the cloud.
ITWC CIO Jim Love, TeraGo’s Mohamed Jivraj and Zerto’s Dimitri Li, discuss the strategy behind creating and executing a great Disaster Recovery plan in the cloud. The webinar covers the key cloud solutions to make DR plans workable and making the case for DR spend.
ITWC CIO Jim Love, TeraGo’s Chris Taylor and AWS’ Eric Gales discuss:
ITWC CIO Jim Love, TeraGo’s Chris Taylor and AWS’ Eric Gales discuss the accelerating benefits at every stage of cloud adoption.
This webinar covers:
The Alberta energy sector is at a turning point, and digital transformation is the answer to today’s business challenges.
Author: Stephen Mackenzie, Account Executive at TeraGo
It is no secret that the energy sector is key to Alberta’s economy, generating $53B and accounting for 17% of Alberta’s total GDP. Over the past few years, oil producers have faced many hurdles to profitability, from low crude and natural gas prices to issues with distribution and refining capacity.
Leaders are rising to the occasion and achieving business goals with the use of new technologies. The sector is at a turning point, and digital transformation is the answer to today’s business challenges.
Reduce operational costs
It is more important than ever to work smarter and leverage technology-driven data to find insight and make better decisions. McKinsey research reveals that digital technologies have the potential to reduce CAPEX by up to 20% and OPEX by 3-5%.
The Internet of Things is a buzzword these days, but there are very practical outcomes being realized now. For example, real-time data collected from the field is used to monitor equipment. In the longer term, with more data collected, analysts can predict wear and tear to create more efficient maintenance plans and prevent wasteful equipment shutdowns. This requires fast, accurate and secure data transfer, as well as compute power for the analysis.
A Reservoir Engineering Manager in Alberta recently shared that it is essential to have one “source of truth” when it comes to data. So much data is available on production at each location, but it is essential to gather and store this data consistently. Better software tools are being used for diagnostic analytics, improving decisions on where to extract for maximum return.
Line-of-business managers in the energy sector are driving new ideas like the examples above. An Operations Director at a major energy firm uses Amazon Web Services (AWS) to quickly deploy and test new applications: “We currently use AWS for a number of our pilot & proof of concept development projects before bringing them into the Microsoft BI world for production.”
Because data and systems are crucial for managing production and operations, reducing costs and driving business, a disaster involving the interruption or loss of data could cost millions. The threats are greater than ever, with malware and ransomware added to the risk of natural disaster or human error. Many businesses are leveraging the high security available with cloud solutions like AWS and private cloud.
4 questions about cloud, answered
Though the energy sector is in many ways leading the big push to digital transformation, surprisingly many companies are in the earliest stage of cloud adoption, experimenting with only a few projects in the cloud. What holds them back from moving more significantly to the cloud and gaining business impact are questions such as:
About the contributor:
Stephen is a sales professional with over 8 years of experience in the IT solutions business. As an Account Executive for TeraGo, Stephen focuses on empowering Alberta businesses to unleash the innovation and efficiency gains possible from leveraging the power of the cloud.
The Next Frontier for Digital Technologies in Oil and Gas. McKinsey: https://www.mckinsey.com/industries/oil-and-gas/our-insights/the-next-frontier-for-digital-technologies-in-oil-and-gas
What does Downtime cost you?
It may be more than you think.
Do you know how much an outage could cost your business? Please use our simple calculator to help you measure the business impact of an IT outage.
Calculate your downtime below:
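The arithmetic behind a downtime calculator is simple: lost revenue plus idle labour for the duration of the outage. Here is a minimal sketch of that model; the function name, inputs and cost model are illustrative assumptions, not TeraGo’s actual calculator, which may weigh additional factors such as recovery costs and reputational damage.

```python
# Illustrative downtime-cost model (an assumption, not TeraGo's tool):
# direct impact = lost revenue + cost of idle employees.

def downtime_cost(hourly_revenue, employees_affected,
                  avg_hourly_wage, outage_hours):
    """Estimate the direct business impact of an IT outage, in dollars."""
    lost_revenue = hourly_revenue * outage_hours
    idle_labour = employees_affected * avg_hourly_wage * outage_hours
    return lost_revenue + idle_labour

# Example: a 4-hour outage at a firm earning $10,000/hour,
# with 50 affected employees averaging $40/hour.
print(downtime_cost(10_000, 50, 40, 4))  # → 48000
```

Even this rough estimate makes the case: a modest outage at a mid-sized firm quickly runs into tens of thousands of dollars.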
In today’s world technology is evolving at a rate that is alarming to most organizations. This editorial covers strategies for selecting the right toolset to enable your organization and the driving forces for Hybrid IT.
Author: Maria Afzal, Business Development Manager at TeraGo
In today’s world, technology is evolving at a rate that is alarming to most organizations, and the basic human instinct is either to welcome and embrace the change or flee to more familiar industries and verticals where adoption moves at a snail’s pace. The fact of the matter is, however, that this era of digital transformation is impacting every industry, from media and manufacturing to government and education. The most prevalent impact is within the information technology industry itself.
IDC, in its worldwide digital transformation predictions (FutureScapes), cited: “By 2020, 60% of all enterprises will have fully articulated an organization-wide digital platform strategy and will be in the process of implementing that strategy”1. We are already seeing transformation through the adoption of Hybrid IT by the majority of enterprises. TeraGo’s definition of Hybrid IT is the right workloads, deployed in the right place, within the right management framework. For this reason, it is crucial for the information technology industry to lead the way in understanding the driving factors that are vital to developing a timely response and continued success and growth in this market.
As the demand for agility and flexibility grows, companies are struggling to select the right toolset to enable their businesses. Amazon Web Services (AWS) launched in 2006 from the internal infrastructure that Amazon.com built to handle its online retail operations. AWS was one of the first companies to introduce a pay-as-you-go model that scales to provide users with compute, storage or throughput as needed. As this rapid growth accelerates, IT leaders are seeking insights and navigators to provide support in deciphering how leveraging Hybrid IT can propel their Go-To-Market (GTM) efforts, how innovation will be enabled through agile processes and how to optimize performance through proper access. These three pillars are driving forces for Hybrid IT.
Delivering software or any technology service requires capacity and bandwidth that scale. What distinguishes a top managed service provider from a good one is enabling smoother delivery to the customer, rather than just to internal end users. An example is IT business services serving the manufacturing industry. I recall mentoring an account executive whose customer wanted to understand how adopting Hybrid IT could deliver their SaaS platform to clients faster than the competition. Through exploration of the customer’s strategic objectives and engagement of the key stakeholders, the account executive succeeded in laying out a roadmap and vision that aligned with the customer’s three-year Go-To-Market strategy. However, I remember encouraging her to clearly outline the key enablement advantages for their employees. It wasn’t until the customer fully comprehended these advantages that they saw the value.

One of the key questions asked by this customer’s CIO, Phil, was: “How would your team ensure performance efficiency of our workloads, and how will you enable our IT team to understand these components?” At first, he was not pleased with the answer, because it involved outsourcing a skill set, and Phil preferred control and internal management of their environment. However, after being told that this plan would enable 10X growth of their business over the course of three years, provide education for their IT team on managing and maintaining Hybrid IT environments, and help them gain market share from the competition, he did not hesitate to sign the three-year contract.
Furthermore, the commitment to defining a risk mitigation plan, including ownership of data, applications and workloads, versus an assessment-led plan, distinguishes a top managed service provider from a good one. While assessments may help determine a workload’s suitability and readiness for the cloud, they are not valuable unless leveraged for a business impact analysis. This was made evident to me in my experience at a managed service company, helping a telecommunications organization define Hybrid IT. The team supporting me provided the telecommunications company with a fantastic baseline understanding of the IT infrastructure in place; however, their analysis was based solely on this data. At the time, I recall explaining to the solution architect that without knowledge of the impacted business processes and the client’s success criteria, and without tangibly tying our analysis back to these components, we would not be able to hold their attention. This convinced the managed service provider to start introducing business impact analyses, which they positioned in the market as assessments.
Innovation Through Agile Processes
A good business plan accounts for the “what if” scenarios that are critical to any successful business operation. Most organizations adopt Hybrid IT to support their business along with the innovation lifecycle. A 2011 PricewaterhouseCoopers blog explains: “idea management systems enable the organization to manage the discovery, incubation, acceleration, and scaling of ideas to create commercial value through the development of innovative product and processes. They provide a structured, disciplined approach to managing the innovation process and surfacing metrics to manage the flow and outcomes of the process.”2 This explains the stages of innovation well, and during my tenure in the IT industry I have advised most IT organizations to ensure a good business roadmap is defined prior to adopting Hybrid IT. However, one of the key benefits for IT/Telecom/SaaS organizations is the agility the cloud provides. Agility is the ability to provision resources in a matter of minutes, and thereby respond to changing conditions or opportunities faster. So how can we be structured and agile at the same time, when the two concepts seem polar opposites of one another? To explain this, I would like to tell a story.
I had been introduced by one of my key vendor partners, Trend Micro, to an organization that was scanning the market for Hybrid Cloud solutions. Hybrid Cloud is understood by the IT industry as the ability to leverage a combination of on-premise, private cloud and third-party public cloud services, with orchestration between these platforms. As I was introduced to the key stakeholders, I asked them how they form technology champions, which I had learnt to be critical to the successful implementation of a Hybrid IT adoption. They explained that external factors were pressuring them to seek a solution: specifically, their customers were mandating updated versions of their technology sets, and without being able to react quickly to market demand, they would start losing market share to the competition. In this scenario, we were able to complete a business productivity analysis, and define the resource requirements of the delivered SaaS application, within a matter of hours, and onboard them to the most suitable Hybrid Cloud. Without the agility of the cloud and the correct impact analysis, they would not have been able to react to these external pressures, let alone find ways to innovate for their customer base. Furthermore, they had business alignment on this entire project, and at every step it was imperative to tie the technology benefits back to the business outcomes they were seeking.
The last key pillar, and perhaps the one most often overlooked, is how to optimize performance through proper access. Connectivity, throughput and latency to the data, applications or workloads are crucial to the success of adopting Hybrid IT. As with the stages of innovation, the correct architecture must be in place to facilitate performance optimization. During the few years I spent mentoring and coaching sales professionals at the managed service provider, I focused on building a baseline understanding of typical IT infrastructures versus their evolution with the introduction of software-defined technologies. This knowledge is invaluable, and I found it instrumental in my coaching method of “engaging through insights”. Looking back, I recall an instance where one sales professional had spent hours carefully defining a Hybrid IT approach but had not accounted for the client’s lack of investment in their networking and connectivity infrastructure. This delayed the deal, and he later explained to the team how important it is not to miss this component. It was a learning point for him, and never again did he forget to uncover the existing networking environment and its readiness to support the customer’s Hybrid IT journey.
In conclusion, a navigator and trusted advisor is critical to the success of a customer’s Hybrid IT journey. The typical challenges Hybrid IT solves can be summed up as: going to market faster and more effectively, innovating by leveraging agile processes and, lastly, improving latency and access.
About the Contributor:
Maria is a sales professional with 8+ years in the technology industry. As the Business Development Manager at TeraGo, Maria focuses on growing a successful sales team that delivers value to clients while contributing to TeraGo’s growth. She is passionate about technology, in particular the promise of cloud, and she enables sales teams to help their customers realize the potential of cloud while mitigating risk and increasing productivity.
ITWC CIO Jim Love, Arturo Perez and Khurram Raja discuss how to plan and execute the next steps in your move to the cloud. The webinar uncovers the secrets to making intelligent decisions around mixing public and private cloud structures, connectivity, security and strategy.
Join ITWC CIO Jim Love, Arturo Perez and Khurram Raja for an in-depth discussion of how to plan and execute the next steps in your move to the cloud. The discussion will uncover the secrets to making intelligent decisions around mixing public and private cloud structures, connectivity, security and strategy.
Expertise from 10 years building the Canadian cloud has led to the TeraGo Hybrid IT Framework. We can help you make the right choices for all the elements of a cloud solution.
TeraGo has been building the Canadian cloud for 10 years. In this time, we have created our TeraGo Hybrid IT framework to put the right workloads, in the right place, with the right management framework.
We will advise you on what cloud is right for each specific workload:
We’ll underpin this with the right infrastructure:
If you need experience and expertise to operationalize, we’ve got managed services to help you nail delivery.
To hear more about our Hybrid IT Framework and for specific advice for your business, call us at
This Hybrid IT cloud solution for a SaaS product uses a virtualized data centre with rock-solid uptime, robust scalability, and hybrid-cloud compute capacity.
Protegra delivers software solutions to organizations around the world. It recently established a software-as-a-service line of business, Blue Canvas, to offer work management and payroll services and plans to grow by 700 percent over the next five years. To make this happen, Protegra needed a virtualized data center with rock-solid uptime, robust scalability, and hybrid-cloud compute capacity. It found its solution in a TeraGo Networks data center powered by VMware technology.
Founded in 1998 in Winnipeg, Manitoba, Protegra offers services to software-driven businesses that deliver solutions throughout Canada, the United States, Europe, and Japan. Protegra builds custom applications for organizations and individuals to solve business problems in many different industries, including financial services and government.
Protegra launched its Blue Canvas business in 2015 for customers to manage payroll, scheduling, time and attendance, and customizable reporting in the cloud. Blue Canvas says its platform is the most modern, secure, and state-of-the-art in Canada, serving more than 20,000 employees across more than 500 organizations.
Software consulting is a business with relatively uneven revenue flow. After operating exclusively as a software consulting firm, Protegra decided to create its Blue Canvas service organization to help even out its bottom line with recurring revenue and to become more intimate with customers.
“We wanted monthly recurring revenue as well as consulting revenue,” says Frank Conway, product manager at Protegra. “We wanted the Blue Canvas business to smooth out our revenue flow. Knowing that our platform and capabilities are solid, entering the payroll business made sense.”
Protegra has long relied on virtualization in its software development projects and it similarly wanted its Blue Canvas application to be independent of any particular hardware solution. The company also wanted built-in scalability, and it knew virtualization would enable fast ramping up and out with no downtime. Finally, Protegra wanted a robust public data center that could deliver the uptime Blue Canvas needs to process payroll for all of its customers on time.
In addition to growing Blue Canvas by 700 percent in the next five years, Protegra plans to compete with payroll giants like ADP and Ceridian, so it needed an infrastructure it could trust to scale capably and aggressively.
Protegra researched data centers and cloud service providers throughout Canada and ultimately chose TeraGo, a Premier-level service provider in the VMware vCloud® Air Network™ program. It made the decision based on the promise of easy scalability, powerful throughput, and consistent uptime. “And, to be honest, in the beginning we were experiencing downtime because we were using Microsoft’s Hyper-V platform,” Conway says. “But the move to the VMware-based platform has been much more stable and we’ve been very happy.”
TeraGo’s data centers are powered by a range of VMware technologies. The VMware vSphere® platform is the foundation for the cloud environment, delivering performance, availability, and efficiency. The VMware vCloud Director® solution enables TeraGo to offer differentiated cloud services that are inherently hybrid-aware. And everything is centrally managed from the VMware vCenter Server® console.
TeraGo works with Protegra to ensure that it always has the right infrastructure and service levels for its needs. “The service we get from TeraGo is phenomenal,” Conway says. “We have access to support people almost immediately. TeraGo manages as much of the infrastructure as we want and gives us the level of control we need to keep our operating systems stable. For instance, we want to do our operating system patching ourselves, and we have that flexibility with TeraGo.”
Customer satisfaction is key to Blue Canvas’ continued growth. “We want to give our customers the ability to manage their payroll data on their schedule, not our schedule,” Conway says. “Our customers expect to be able to log on to the system and do their work whenever and wherever it is convenient for them—even on the beach. TeraGo and VMware are instrumental in making sure that can happen.”
With validated cloud services based on VMware technology, TeraGo can offer clients like Protegra rock-solid reliability. “I never even think about instability in the system because of my confidence in TeraGo and VMware and the underlying infrastructure,” Conway says. “It just works—and works perfectly. We promise 99.99 percent uptime to our customers, and we always exceed that, thanks to TeraGo and VMware.”
The natural peaks and valleys of the payroll cycle mean that there are some days when traffic is slow and other days when a lot of customers access services at the same time. But capacity is never an issue because TeraGo and VMware can provide whatever Blue Canvas needs, at just a moment’s notice.
That flexibility also offers Protegra’s Blue Canvas division the ability to add resources as it quickly increases its customer base. With TeraGo and VMware, Conway knows he can replicate an environment in a matter of minutes and have it up and running in a matter of hours.
Another big benefit for Blue Canvas is that TeraGo operates a VMware-based cloud in Canada. Many Blue Canvas customers want to keep their data—which includes sensitive banking and employee information—inside Canada, where it is not subject to laws like the U.S. Patriot Act. Having a Canadian data center provides data security and privacy peace of mind.
Conway says that Protegra has huge confidence in TeraGo and VMware, and the solid services they provide have allowed the company to earn the confidence of its own customers. “They get reliable results and high performance, and that’s exactly what they want,” Conway says.
In the near future, Blue Canvas plans to augment its current disaster recovery strategy by adding additional data center failover capabilities. It will deploy in a third location in Canada to serve as a disaster recovery site and thereafter expand heavily into new regions throughout Canada. “Having data centres located closer to our customers is very beneficial in terms of response times, and we can leverage additional TeraGo data centres in many different regions,” Conway says.
Watch this webinar to learn about Disaster Recovery best practices, and gain insight into our survey results of the Canadian marketplace!
e-Guide with 7 steps for an effective disaster recovery plan.
Protecting against problems like connectivity issues, hardware breakdown, human error, security breaches and more.
This week’s guest blog was written by Brent Whitfield, CEO of DCG Technical Solutions Inc.
There was a time, not too long ago, when one of a business owner’s main concerns was losing his or her assets through fire, storm damage or flood. While insurers catered to these fears by providing comprehensive business insurance policies, loss due to internet service interruption or a terrorist cyber-attack were deemed intangible and pretty much uninsurable.
Times are changing! According to Allianz’s 2017 Risk Barometer, no less than 88 per cent of business losses in 2016, by dollar amount, were attributable to human error and technology problems. This alone should be enough to spur businesses into focusing on sourcing adequate insurance and improving their training and IT provisioning. However, the report went on to highlight the increasing threat from ‘non-damage’ events: those factors that, unlike fire or flood, do no damage to a business’s assets but nevertheless severely disrupt business. Business disruption was cited as the number one cause for concern to businesses in the USA and there was also a growing fear of cyber incidents (e.g. online fraud and hacktivism), especially in countries such as Germany, South Africa and the UK.
IT disruption covers a wide range of potential business threats, from internet speed fluctuations to database corruption, and this article looks at some of the most common.
Internet Speed and Connectivity Issues
Internet reliability is at the heart of efficient business in the 21st century, and most companies, particularly those in large urban areas, are well served in this respect. The three most likely causes of internet disruption are congestion, speed fluctuation and a failed link to the ISP. In most cases, the first two issues are simple to diagnose and correct (usually by means of bandwidth control or increasing the size of the business’s bandwidth link). Companies that contract their IT provision to a managed services provider (MSP) should ensure that a minimum IT service requirement is stipulated in the service level agreement (SLA).
Failed links, though rare, can be more difficult to resolve, particularly if they are caused by circumstances beyond your ISP’s control. The number one protection against this kind of total service loss is to have a backup ISP. If this scenario has never crossed your mind then it would be wise to draw up or overhaul your disaster recovery plan.
Software Design and Hardware Breakdown
Many employees blame poor software and interface design for obstructing their workflow. Digging deeper often highlights an issue with training provision, with employees expected to adapt to new programs and interfaces with minimal tuition. At the other end of the scale are businesses which persist with software that is no longer up to the job. A good IT manager will listen to employees’ concerns and be creatively involved in sourcing solutions. The cloud offers many ways to streamline business processes through SaaS and PaaS applications, often slashing costs at the same time.
Even a decade ago, hardware failure was a common cause of IT disruption, but this is no longer the case: you are now more likely to lose services due to the failure of the network itself (storms bringing down cables, etc.) than to router malfunction or other client-side hardware failure. One reason is that the hardware itself is becoming more reliable. Alongside this, the gradual shift towards cloud services means there is literally less hardware on site to go wrong. The most significant threats to hardware operation are lock-up, for example when too many processes are channeled through a router (simplify your configuration or upgrade your router!), and power surges (UPS protection is a must).
Many sources of IT disruption can ultimately be traced back to human error, and this shouldn’t be surprising given the complexity of many programs and networks and the pace of change. There are countless ways in which humans can mess up. They might miss a critical step in some operational software, lock a device out of the local network by duplicating an IP address, or unplug a router to make space for another piece of equipment. Most of these problems can be ironed out through effective training and correcting procedural errors. One huge issue is a lack of awareness of, or compliance with, security protocols. That deserves a section in itself!
In 2014, Sony became the most high-profile victim of a staggering 4,000 per cent increase in ransomware exploits. Since then, barely a month has gone by without another big company being hacked and either losing sensitive customer data or suffering severe disruption.
Cyber-criminals have realized that the easiest way into a company’s high-security network is through its low-security employees. Watering hole attacks, for example, target the vulnerabilities in everyday websites that a specific company employee is known to visit, attempting to direct them to a source of infection. There are many other types of attacks but the vast majority can be avoided by following a robust security policy. This should include prompt installation of software updates and patches; the creation and regular changing of strong passwords and company-wide awareness training focusing on avoiding phishing attacks and other common vectors of infection. The creation of regular off-network backups will minimize the risk of irreversible data loss or corruption.
A Note on BYOD
A growing number of businesses are realizing the efficiency savings available through implementing a ‘Bring Your Own Device’ (BYOD) policy. For all of its advantages, this opens up a whole new set of risks, including deliberate and accidental third-party access to sensitive data. A BYOD policy needs to be watertight and cover areas such as encryption during data storage and transfer, monitoring of customer device use, measures to keep business and personal data separate, and processes for data recovery and deletion following device loss or employee resignation.
Backup Processes and Data Corruption
The nature of magnetic storage means that database corruption is inevitable. Fortunately, operating systems contain inbuilt check and repair processes that resolve most errors but there is always the chance of serious corruption (e.g. of the boot page).
Businesses can protect themselves from this scenario by backing up regularly, securely and, ideally, in multiple locations. By thinking in terms of disaster recovery rather than traditional backup, companies can weigh up all the factors involved – from timely recovery of data and resumption of service to secure storage. There are various public cloud, private cloud and hybrid backup solutions on the market, and outsourcing backup monitoring to an MSP can be a good way to free up resources.
About the Contributor
Brent Whitfield is CEO of DCG Technical Solutions Inc. DCG provides a range of Los Angeles IT services, from disaster recovery and Exchange mail support to full MSP and CIO services. Brent has been featured in Fast Company, CNBC, Network Computing, Reuters, and Yahoo Business. https://www.dcgla.com was recognized among the Top 10 Fastest Growing MSPs in North America by MSPmentor. Twitter: @DCGCloud
Businesses are moving toward Disaster Recovery, which provides technologically advanced backup of your organization’s data in the event of a failure and minimizes costly downtime.
The logic behind the inadequacies of traditional backup
The fact is this: traditional backup methods are, today, completely inadequate. Many business owners won’t even consider the backup methods of yesterday (such as tapes and disks) and are moving toward Disaster Recovery, which provides technologically advanced backup of your organization’s data in the event of a failure and minimizes costly downtime. With Disaster Recovery, you know your data is continually protected, securely stored, and immediately recoverable. Recovery times and write speeds are faster, backup is reliable, downtime is significantly reduced, little human intervention is required and the money saved is substantial.
Take a look at these significant differences between traditional backup and Disaster Recovery:
Traditional backup: With legacy backup technologies like tape, downtime is prolonged, since a full recovery can take days or weeks.
Disaster Recovery: Downtime after a disaster is reduced to hours, minutes, or even seconds.
Traditional backup: High risk of backup and recovery failure from human error since frequent manual intervention is required. 58% of downtime is a result of human error. 
Disaster Recovery: Fully automated backup process means very little manual management required.
Traditional backup: Difficult to test if backup is working properly.
Disaster Recovery: Automated screenshots are taken of each image-based backup, then emailed to the user, to verify a successful backup was made.
Traditional backup: Data backups are at risk when based only in one location, either local or in the cloud.
Disaster Recovery: Data backups stored both on a local device and in a secure cloud mitigate downtime, as businesses can run off either one.
Traditional backup: Legacy systems like tape have slow write speeds. Slow backups mean fewer backups per day and an inferior recovery point objective (RPO).
Disaster Recovery: Modern backup hardware gives you high-performance networking, and reliable, high-speed hard disk and solid-state drives. Faster backups mean you have more intermediate points to recover from.
Traditional backup: Converting backups to bootable virtual machines is time-consuming and error prone, meaning longer recovery times.
Disaster Recovery: Incremental backups can be instantly virtualized, rather than the entire backup chain.
Traditional backup: Time-consuming and expensive to copy or store backups in multiple locations. 61% of SMBs still ship tapes to an off-site location.
Disaster Recovery: Each image-based backup is automatically saved as a VMDK, both on the local device and in a secure cloud.
Traditional backup: Limited options for encrypting data; may not pass industry regulations (e.g. HIPAA, SOX).
Disaster Recovery: AES 256 and SSL key-based encryption ensures data is safe and meets industry regulations.
Traditional backup: When recovering data, tape failure rates exceed 50%.
Disaster Recovery: Minimal risk of corrupted backups or data loss.
Traditional backup: Potential for theft or loss of media.
Disaster Recovery: Off-site backups stored in SSAE 16 data centers.
Traditional backup: Perceived cost savings are deceiving when you consider the average cost of downtime is $163,674 per hour. 
Disaster Recovery: The ability to keep your business running in the event of disaster has immeasurable value.
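The recovery point objective (RPO) gap between the two approaches comes down to simple arithmetic: your worst-case data loss is bounded by the time between successful backups. A minimal sketch in Python (illustrative, not TeraGo code):

```python
# Illustrative sketch: worst-case data loss (RPO) is bounded by the
# interval between successful backups.

def worst_case_rpo_hours(backups_per_day: int) -> float:
    """Hours of data at risk if a failure strikes just before the next backup."""
    return 24 / backups_per_day

print(worst_case_rpo_hours(1))   # one nightly tape job -> 24.0 hours at risk
print(worst_case_rpo_hours(24))  # hourly image-based snapshots -> 1.0 hour
```

Doubling backup frequency halves the worst-case exposure, which is why fast write speeds translate directly into a better RPO.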
If your system experiences a failure, you’re facing a lengthy, costly, and complicated recovery. Don’t wait for a system failure to discover this the hard way – recovery doesn’t have to be that complicated. Make the transition away from traditional backups and secure your system’s future with Disaster Recovery – your data depends on it.
TeraGo is there to help you along the way
Get in touch with us so that our experts can guide you all the way from a Disaster Recovery assessment to solution design, implementation, maintenance and testing – and, should you ever require it, a fully managed recovery using leading-edge technologies on our fully owned and managed cloud infrastructure. Call 1-800-TERAGO-1 (837-24651)
Sources:
1. “Enterprise Data and the Cost of Downtime,” IOUG, July 2012
2. Information Week
3. Aberdeen Group
The two biggest contributors to IT infrastructure downtime are system failures and human errors. Every organization is therefore vulnerable to disasters, and planning for them should be an organizational priority.
This week’s blog post was co-written by cloud solutions experts Anando Chatterjee and Ashish Patel.
Outages are Expensive!
Consider the recent IT outages that hit two major airline companies in the United States – Southwest Airlines and Delta Air Lines. A failed networking device at Southwest’s operations centre took its computer systems offline for several hours, leading to the cancellation of nearly 2,000 flights over the following days. Major news outlets estimated the loss at somewhere between $54M and $82M. Similarly, a power outage at Delta’s main operations centre in Atlanta led to the cancellation of about 2,000 flights over the next couple of days. The cost? About $150M in passenger revenue.
Both of these outages were caused by infrastructure failures rather than natural causes such as floods, fires or earthquakes. In fact, a recent study conducted by iland, which included responses from 250 IT decision makers in the UK, indicated that only 20% of outages were caused by natural disasters. It further found that the two biggest contributors to IT infrastructure downtime were system failures and human errors.
If most outages are caused by system issues and human error, then every organization is vulnerable to disasters regardless of its geographic location or the nature of its business, and planning for a disaster should be an organizational priority – not something only IT departments are responsible for.
Start planning now
To start, it’s important to assess your IT department’s existing ability to withstand and recover from disasters. For example, you can quickly assess how often data is backed up, whether there are multiple data centres, and what the current processes are for addressing technology issues. This provides a baseline on which the organization’s future DR capabilities can be grounded.
The next step would be to determine the business’s actual needs in terms of Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) – the acceptable amount of downtime and data loss, respectively. A Business Impact Assessment (BIA) needs to be conducted at this point to better understand the potential annual financial risks caused by downtime. The BIA is based on facts and statistics and will show whether the RTOs and RPOs the business desires are so costly to achieve that they outweigh the costs of downtime itself. This is an essential tool for achieving consensus between business and IT, and it allows IT to begin budgeting for the required DR capabilities. It’s important to remember that how money is spent is more crucial than how much is spent: numerous studies have shown that organizations dedicating a larger percentage of their IT budget to DR actually had longer RTOs and RPOs.
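As a worked example of the BIA arithmetic – using hypothetical outage figures, not numbers from this article – annualized downtime risk is simply expected outages per year times average outage duration times the hourly cost of downtime, which can then be weighed against the yearly cost of the DR capability:

```python
# Hypothetical BIA back-of-envelope; all figures are illustrative assumptions.

def annual_downtime_risk(outages_per_year: float,
                         avg_outage_hours: float,
                         cost_per_hour: float) -> float:
    """Expected yearly financial exposure to downtime."""
    return outages_per_year * avg_outage_hours * cost_per_hour

# Two 4-hour outages per year at an assumed $100,000/hour of downtime:
risk = annual_downtime_risk(2, 4, 100_000)
print(f"${risk:,.0f} at risk per year")  # weigh this against annual DR spend
```

If the DR solution needed to hit a tighter RTO/RPO costs more per year than the exposure it removes, the BIA will surface that trade-off before budgets are set.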
The third and final step in a DR planning process should be to set Key Performance Indicators (KPIs) and Key Risk Indicators (KRIs). KPIs help business and IT decision makers focus on and evaluate which Disaster Recovery (DR) activities should be performed, the timeframe in which they should be completed, and the level of success achieved within each activity. KRIs, on the other hand, help identify and mitigate risks and recognize opportunities for future improvement within the DR program.
Some sample KPIs to consider:
Some KRIs to consider:
TeraGo is there to help you along the way
Get in touch with us so that our experts can guide you all the way from a DR assessment to solution design, implementation, maintenance and testing – and, should you ever require it, a fully managed recovery using leading-edge technologies on our fully owned and managed cloud infrastructure. Call 1-800-TERAGO-1 (837-24651)
Source: “The State of IT Disaster Recovery Amongst UK Businesses” – survey conducted, reviewed and audited by Opinion Matters Inc.
Key points on how to manage the dark corners of Shadow IT.
This week’s blog post was written by guest blogger Adam Ferraresi. Adam, a technology enthusiast, lives in Dallas, Texas, has a successful career in web development, and is a trusted writer at wefollowtech.com. When Adam’s not concocting interesting new articles, he loves researching industry topics and reading up on the latest trends that impact businesses.
Shadow IT isn’t a new phenomenon in the business world; it has always been lurking in one form or another in the dark corners of every IT organization. The truth is that in today’s day and age, when practically anything in the world is but a few clicks away, shadow IT is something most organizations and companies simply have to accept as largely inevitable. Yet it is important for companies to set the pace and for employees to understand the IT department’s regulations. It is crucial to approach this issue prudently and open-mindedly, as shadow IT can actually be used to your business’ advantage if you play your cards right. There are a few ways you can deal with shadow IT effectively:
For starters, let’s clarify what stands behind the ominous term shadow IT. As you probably know, practically every business today has an IT department that, more or less successfully, deals with all aspects of IT support, so that both the company and its employees work in a technologically optimal environment.
In reality, many businesses don’t invest enough resources to keep the technology in the workplace as up to date as possible. Consequently, employees bring their own devices to the workplace so that they can access programs, apps and social media platforms that are otherwise restricted by the company. This includes storing business-sensitive or confidential data on unapproved software. While the use of these unauthorized apps and cloud solutions isn’t malicious at its core, it can prove to be a significant security threat that endangers the whole business.
More often than not, CIOs (Chief Information Officers) choose to look the other way when it comes to shadow IT. The reasons for this may be manifold; however, shadow IT can become a very serious issue if not taken seriously and dealt with accordingly. Here are some suggestions on what you can do.
Try To Cater to your Employees’ Needs
Think for a second about how up to date the current technology in your office is. If it has seen better days, you can bet that your employees will bring the mobile devices they are accustomed to and are much faster at completing tasks with. On one hand, this is excellent news for overall work efficiency; on the other, it leaves gaping holes in the company’s security.
If your employees feel that they can’t get things done quickly on the devices you provide, you can be sure that without hesitation they will use the BYOD (bring your own device) shortcut, especially if you have employees working remotely. Your IT department should provide your employees with apps and tools that are compatible with your company’s systems and enable them to access sensitive company data easily and securely. This might be the least painful way to deal with shadow IT, since it kills two birds with one stone: you’re not restricting your employees from using their own devices for work (where there are restrictions, there are people who will find a way around them), and both your workers and your confidential corporate documentation stay secure.
Estimate Just How Efficient Your IT Department Is
This is one of the most common problems in too many companies. Unfortunately, the IT world changes from day to day, which makes keeping strict, static policies and rules practically impossible. More often than not, businesses operate with outdated IT rules that simply don’t apply in a quickly changing environment, which can bring the whole technological side of the company to a halt.
The IT department is commonly seen as “the enemy” of innovation, since the IT team can take its time approving requests for new platforms and implementing them. Consequently, once employees realize it will take too long to fix what seems urgent to them, they turn to other methods that give quicker results. It is paramount to figure out a system that enables your IT team to implement new processes as efficiently as possible, so that your business keeps up with new technology trends.
Keep an Eye on Your Business Network
One of the obvious, but still often neglected, steps in keeping shadow IT in check is monitoring your network. This is a never-ending and tedious job, but it is necessary in order to find out who and what the potential problems in your company are.
Primarily, your IT department needs to be aware of where all the corporate data is, which in itself is no easy task, but it must be done nevertheless.
Secondly, your business network must be constantly checked for unknown devices connected to it, so that you can assess where potential issues may appear. By doing this, you will also gain detailed insight into the unapproved devices and the type of technology you’re dealing with. Ideally, this can be incorporated into regular vulnerability scanning, which remains one of the most vital security measures you can take.
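One simple piece of that monitoring can be sketched as an inventory check: compare the device addresses seen on the network against an approved-device list. This is a minimal illustration only – the inventory format and MAC values are assumptions, and the discovery step itself (e.g. an ARP scan) is left to whatever tooling your IT team already uses:

```python
# Minimal sketch: flag devices seen on the network that are not in the
# approved inventory. Inventory format and MAC values are hypothetical.

APPROVED_MACS = {
    "aa:bb:cc:00:00:01",  # office workstation
    "aa:bb:cc:00:00:02",  # networked printer
}

def unknown_devices(seen_macs):
    """Return MAC addresses on the wire that no one has registered."""
    return sorted(set(m.lower() for m in seen_macs) - APPROVED_MACS)

print(unknown_devices(["AA:BB:CC:00:00:01", "de:ad:be:ef:00:01"]))
# -> ['de:ad:be:ef:00:01']
```

Anything this check flags is a candidate for a conversation, not automatically a threat – which fits the strategy-over-restriction approach the article advocates.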
Your Employees Need to Understand the Rules
Yes, this is easier said than done, but so many problems arise in the workplace because of misunderstandings or a lack of awareness. As mentioned, IT departments tend to be a bit demonized, which is why their explanations of rules and briefings on other relevant topics are considered a snooze-fest. Needless to say, interdepartmental communication is important so that everyone has a clear understanding of what the policies are, for the betterment of the company. Otherwise, you’ll have IT staff who are all but lost in translation, shadow IT spreading like weeds, and employees neglecting the company’s security.
One of the best ways to deal with Shadow IT must be to think ahead. If you don’t want to be caught off guard by security threats that Shadow IT can represent, be informed about the latest technologies that your employees may find handy. This way you get to beat them to the punch and implement the technology before anyone thinks of doing it themselves, which gives you an edge in tackling Shadow IT.
Also, don’t hesitate to experiment, even though you can’t be certain what innovations will bring. People react better to innovation than to restriction (though we often see them as one and the same), so if it doesn’t pose a threat, you can choose to tolerate it under your conditions. Evaluate the severity of the threat of using unapproved devices or software, and balance the consequences accordingly.
Tackling shadow IT has become a matter of strategy, not of brute force or restriction. It will be much better for your company and your employees to find some middle ground, so that even if shadow IT exists in your company, you can use it to your advantage.
To minimize the exposure of data breaches, it is also important for businesses to provide a secure File Sync and Share tool that is just as easy to use and addresses the pitfalls of the widely available public options. TeraGo Cloud Drive is a file sync and share solution made specifically for businesses. Cloud Drive’s features give users enterprise-level security and control over their files, and the solution meets all the mentioned security requirements, so that your critical data stays within your control.
Advice on how to choose the correct data centre facility that will grow with you, and offer pricing that will save your business money.
With data volumes increasing exponentially with the use of smart systems, the Internet of Things, and other business intelligence gathering processes, there is a burgeoning need for enhanced business infrastructure that will protect and manage critical data. Colocation services have emerged to become a valuable component of many businesses’ network infrastructure. They offer a secure space to host and process data, and are backed by the power and cooling facilities required to keep your colocation environment up and running.
Selecting the right data centre is very important. Choose the right one and you get a facility that will grow with you and pricing that will save your business money. Selecting the wrong one can mean putting your business and your customers in jeopardy.
Why is moving to a data centre right for my business?
When speaking with businesses that don’t have services within a data centre, or are in the wrong one, I often hear the same questions and concerns: “My server room is already secure – there’s little value in moving to a data centre.” “Won’t colocation be more expensive for me?” “My business isn’t really big enough to need a data centre.” “I am already in a data centre and keep losing access to my servers – how are you any better?” The common thread in all of these is fit. You need to properly evaluate whether any given data centre meets the current needs of your business and has the ability to take you forward.
The driving factor for any business seeking data centre services is security. Your business and customer data is one of your most important assets, and it needs to be protected. Sure, you can lock your office doors at night and alarm your server room, but unless your office is staffed 24/7 you’re still vulnerable to break-ins. Data centres are designed to provide a level of security beyond what any normal business facility can. Most data centres will have a standard set of security features including restricted access and recorded video surveillance. Avoid the ones that don’t. More robust centres will include biometric security entry, man-traps, 24/7 on-site security, live video surveillance, and secure cage availability. Knowing your needs is important. A business that is simply managing a gaming server will have a much lower security need than one managing customer financial or medical records. In the latter case, your security needs are likely driven by regulatory requirements within your industry, and finding a data centre that meets these requirements, or can accommodate obtaining the necessary certifications, is critical.
Finding a provider with the right level of security is a good foundation, but if your colocation services go down due to power issues, cooling problems, or connectivity drops, your business and reputation will suffer. The resiliency of data centres varies greatly. Small-budget, Tier 1-type data centres offer a lower level of redundancy on their key systems. More robust Tier 3 data centres offer a much greater level of redundancy on all of their systems; most systems in these facilities are replicated and fully diverse, and they may even draw power from multiple power grids for ultimate protection. Knowing your tolerance for downtime is key. If your business has zero tolerance for downtime, make sure you properly evaluate each provider’s failover plans. As a customer, you should also seek both A + B power feeds for your rack or cage environment. While paying for a secondary power circuit may seem unnecessary, it is a critical safeguard and commonplace in today’s data centres.
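To make the tier difference concrete, an availability percentage translates directly into expected downtime per year. The figures below are the commonly cited Uptime Institute tier targets, quoted here as assumptions rather than guarantees from any particular provider:

```python
# Rough sketch: convert an availability percentage into hours of expected
# downtime per year. Tier availabilities are commonly cited targets.

HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours_per_year(availability_pct: float) -> float:
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for tier, pct in [("Tier 1", 99.671), ("Tier 3", 99.982)]:
    print(f"{tier} ({pct}% available): "
          f"~{downtime_hours_per_year(pct):.1f} hours/year of downtime")
```

Roughly 29 hours a year versus under 2: that gap is what you are paying for when you choose a more redundant facility.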
Connecting to a Colocation Environment
Having the proper network in place to reach your colocation environment is very important. Choosing a facility with multiple and diverse transit providers, and a high degree of scalability will offer you varying routing options, and will improve reliability in the event of a network outage. All the best-designed colocation service environments become inadequate if you don’t have the proper network in place to reach them.
Colocation and Hybrid Cloud Solutions
Your business is going to grow, and with that growth comes change. You are going to need a provider who is nimble enough to address those needs and has facilities that can scale with you. That budget-value data centre decision you make early on will likely fail to keep pace with your growth. Knowing your growth strategy and how data storage and processing plays its part is critical in the decision process.
Many organizations are moving towards utilizing cloud-based applications in a secure environment for backing up and storing sensitive data. Being located in a data centre that offers colocation as well as a robust cloud environment can pay huge dividends. Some data centres, like TeraGo’s, allow you to put your non-cloud infrastructure in the same facility as your cloud systems, making it much easier to move data between the two. It also gives you control over your infrastructure while offering the flexibility of rapid deployment and scalability.
At TeraGo, we take meticulous care of our customers’ end-to-end data needs. We operate 7 data centres across Canada offering geo-redundancy, all connected by our national access network and supported by our cloud environment. Contact us today to learn more about how our colocation solutions can enable your company.
It is becoming increasingly important for companies to invest heavily in data security to protect their information from third parties such as direct competitors, former employees, or so-called “hacktivists”.
This week’s blog post was written by Bernard Chan, an expert on Cloud Solutions and Cloud Services Product Manager at TeraGo Networks.
In today’s information-based economy, a company’s data is its most valuable asset. Whether it is the company’s own intellectual property or its customers’ personal information, there are significant competitive and reputational reasons to keep that data safe. Understanding its importance, many companies invest heavily in data security to protect their information from third parties with specific agendas to access it – such as direct competitors, former employees, or so-called “hacktivists”.
Ironically, the most common source of company data breaches, and the most difficult to protect against, is a company’s own employees. With the variety of File Sync & Share options available for workers to send, store, and communicate company information, even employees with the best intentions may expose data in ways they are not aware of. Some common flaws in widely used tools that are deemed secure are as follows:
Backup Information via iCloud or other public clouds
Using Public File Sync and Share solutions to send large files
To minimize the exposure of such data breaches, it is important for businesses to provide a secure File Sync and Share tool that is just as easy to use, and addresses the pitfalls of the widely available public options. When choosing a File Sync and Share tool for your organization, you should consider the following list of security requirements to make sure your organization is protected.
TeraGo Cloud Drive is a file sync and share solution made specifically for businesses. Cloud Drive’s features give users enterprise-level security and control over their files, and the solution meets all the mentioned security requirements, so that your critical data stays within your control.
Learn the questions your organization should be asking before considering a cloud solution.
Hybrid cloud is rapidly emerging in the business world with its ability to support enterprises in reducing costs, increasing returns and most importantly, keeping our vital data at our fingertips. So what’s keeping all businesses from completely migrating to the cloud? The fear of a loss of security.
To simplify, cloud computing is an internet-based, centralized data management system used for on-demand processing, storage, and applications. Using a cloud solution means moving from CapEx to OpEx, since it allows businesses to forgo the costs of hosting servers, with all their intricate needs, on their own premises, while still having access to the data at all times. There are always fears associated with new technology – essentially, the fear of the unknown.
According to industry expert and Director of Cloud Solutions, Ashish Patel, the top 5 security concerns that customers face are:
It is extremely important to have these types of reservations when inquiring about a new solution, and they should be addressed by any service provider. More importantly, you as the customer should be inquiring into those reservations and be informed before making any decision.
Here are seven questions every customer should ask before becoming a cloud user:
1. Who has control?
2. Where is it located?
3. Who backs up the data?
4. Who has access to our data?
5. How resilient is the cloud solution?
6. How does the security team engage throughout the journey?
7. Will the provider assist in our business’ governance, auditing and compliance processes?
A cloud provider should be able to sit down with each customer and address every concern or question the customer has. Patel states that “security is a top concern because of the high degree of impact on a business’ privacy, protection, and resiliency” so customers should dive deeper and use the following criteria to evaluate a service provider:
(Gartner: Assessing the Security Risks of Cloud Computing, June 2008)
Security should be built into the overarching cloud design and architecture from end to end. A common mistake is for businesses to focus on just the specific security aspects that are making headlines, instead of assessing security holistically from the start. For example, “Sixty percent of respondents to The Register’s cloud survey said they were using VPN connections, but only 34% said they were using cloud firewalls or encrypting data at rest.” – Secure World (https://www.secureworldexpo.com/10-things-you-need-know-about-cloud-security)
There are a number of security procedures, technologies, trust models, mechanisms and laws required to uphold cloud infrastructures. TeraGo offers all of the above through our enterprise-grade firewalls and VPN offerings, intrusion prevention systems, and secure content management services. We also go one step beyond the competition by providing private enterprise networks:
Security is important because it allows enterprises to implement business models that are more flexible, efficient and accessible – hence why it’s important for users to ask the right questions before onboarding. Cloud security is continually evolving, however, so it is critical that businesses revisit these questions on a regular basis to ensure their providers are up to date with the latest security standards.
With the increasing number of DDoS attacks, learn how to keep your company data safe and protect your brand reputation.
By network product expert Aaron McIntosh
DDoS attacks pose an increasing threat to businesses, especially those managing e-commerce applications. While DDoS attacks originally began as mischievous attempts to frustrate organizations and users, many have become much more sinister in nature, costing organizations billions each year. From small malware intrusions to large server floods or data breaches, any attack can have lasting consequences for a company. Not only is your company and customer data at risk, but your brand as well.
What’s the worth of your Internet connectivity?
The common goal of the majority of DDoS attacks is to disrupt your business, damaging the customer experience of those trying to reach an organization by flooding its servers and rendering access to the business nearly impossible. Customers attempting to reach your organization online to make a purchase or enquire about a product are denied access. DDoS attacks make up about 50% of all cyber-attacks.
A recent report on Global Applications & Network Security by Radware indicates that customer loss (at 17%) and service availability (at 22%) are the two fastest growing concerns of organizations, with respect to cyber-attacks. With the growing expectations that businesses should be available to customers at all times, visitors can turn to a competitor to meet their immediate needs in the event of a DDoS attack, leading to customers losing confidence in your organization.
Protecting your most vital asset – your data
Among DDoS attacks, low-and-slow, protocol and application-layer attacks are on the rise. These forms of attacks come in many varieties, are relatively easy to launch, and can often fall under the radar of a company’s IT team by disguising themselves as legitimate traffic, often through encryption. They overwhelm servers by creating distraction traffic, while malware operates in the background infiltrating the weak points in your organization’s armour. According to Radware’s report, 37% of attacks are singularly focused on stealing proprietary information and data. Once an intrusion occurs, data may be held for ransom, deleted, sold to other parties, or worse, publicly exposed for all to see. The liability of such losses has the power to cripple even the largest organizations.
The increasing cost of protecting against DDoS attacks
The positive news regarding DDoS attacks is that as the threats become more prevalent, the variety of solutions to defend against them has expanded as well. Organizations are finally taking note, and as outlined in the Radware report, 47% of respondents are investing more in DDoS protection now, compared to last year.
Managing a robust defense system comes at a cost. Many organizations attempt to build and manage their own protection system. They employ a number of highly-trained IT resources, or have resources on-call, ready to diagnose, mitigate and manage the attacks. In the event of an attack, staff are pulled away from other business functions to get your business back up and running. With most DDoS attacks lasting several hours, costs can quickly add up. As DDoS attack techniques become more diverse, resources and systems will also need to be constantly updated on recent trends and mitigation techniques. The drain on employee training budgets alone can stifle the best laid plans.
Solutions such as managed DDoS Mitigation Services can significantly reduce an organization’s total cost of ownership for security management. The service provider takes on the task of monitoring the connectivity, managing attack mitigation, and ensuring that your business is available to your customers. The service focuses on security and uptime, while you focus on growing your business.
Preparing for an attack
Everything we know about DDoS tells us one thing: no organization with an online presence is immune to an attack. Various industry reports conclude that nearly 50% of organizations have recently suffered an attack. Of those attacked, virtually all suffered some form of reputational damage, customer frustration, data or revenue loss, or cost to mitigate the attack. Protecting against DDoS attacks is critical to the sustainability of any organization, and the price of that protection is often more affordable than you might think.
An Isolated Cloud For Absolute Performance And Security
Author: This week’s blog post was written by Bernard Chan, Cloud Services Product Manager at TeraGo Networks.
When considering which cloud solution is right for your business, we compare it to choosing a type of residence: the multi-tenant cloud hosts multiple tenants with cost optimization, while the single-tenant cloud hosts one tenant at a time and meets each tenant’s unique customizations. So which cloud offering is best when you need absolute performance beyond single-tenant capabilities while maintaining the cost flexibility of the multi-tenant model? TeraGo has introduced its most recent cloud platform, which delivers isolated hosting with the benefit of cost optimization.
Having spoken to quite a number of customers from different industries about what keeps them up at night, a common pain point always seems to come up: what to do with workloads that cannot be moved to the multi-tenant cloud? Even though individual customers may be from different industries or work in different functions, these workloads typically fall under the following categories:
From the options available in the market today, the problem can be addressed by 1) continuing to host individually owned bare metal servers, 2) moving to a colocation arrangement, or 3) using one of the few bare metal offerings available from some of the large public cloud providers. Unfortunately, the decision-making process always comes down to a trade-off exercise in which you must decide which of the following dimensions to sacrifice:
When choosing whether to move to a Bare Metal solution, you need to consider choice and flexibility to get exactly what you want in your Bare Metal boxes, so that you can focus on your business instead of going through the long and painful trade-off exercise. Here are five major components you should consider in your decision making process:
Want to learn more about Bare Metal by TeraGo? Check out the “Assessing Performance and Security: the Bare Metal Cloud” webinar, hosted by Ashish Patel, Director of Cloud Services, to see how our offering can solve your business problem.