Cloud Computing Creates Demand For Bandwidth Shaping


The rise of cloud computing has been a mixed bag for the bottom line of traditional network hardware manufacturers. Yes, there is business to be had by supplying the burgeoning cloud service providers with new hardware; however, as companies move their applications into the cloud, the elaborate WAN networks of yesteryear are slowly being phased out. The result is a decrease in sales of routers and switches, a dagger in the heart of the very growth engine that gave rise to the likes of Cisco, Juniper, and Adtran.

From a business perspective, we have been pleasantly surprised to see an uptick in demand for bandwidth shapers in the latter half of 2017. We expect this to continue into 2018 and beyond.

Why are bandwidth shapers seeing an uptick in interest?
Prior to the rise of cloud computing, companies required large internal LAN pipes, with relatively small connections to the Internet. As services move to the Cloud, the data that formerly traversed the local LAN is now funneled out of the building through the pipe leading to the Internet. For the most part, companies recognize this extra burden on their Internet connection and take action by buying more bandwidth. Purchasing more bandwidth makes sense in markets where bandwidth is cheap, but it is not always possible.

Companies are realizing they cannot afford to have gridlock into their Cloud. Network administrators understand that at any time an unanticipated spike in bandwidth demand could overwhelm their cloud connection. The ramifications of clogged cloud connections could be catastrophic to their business, especially as more business is performed online. Hence, we are getting preemptive inquiries from administrators who want to ensure that their critical cloud services get priority across their Internet connection, using a smart bandwidth shaper.

We are also getting inquiries from businesses that have fallen behind and are unable to upgrade their Internet pipe fast enough to keep up with Cloud demand.   This cyclical pattern of upgrading/running out of bandwidth can be tempered by using a bandwidth shaper.  As your network peaks, your bandwidth shaper can ensure that available resources are shared optimally, until you upgrade and have more bandwidth available.

Although moving to the Cloud seems to introduce a new paradigm, from the perspective of network optimization the challenges are the same. Over the years we have always recommended a two-pronged approach to optimization: 1) adequate bandwidth, and 2) bandwidth shaping. The reason for our recommendation continues to be the same. With bandwidth shaping, you ensure that you are best positioned to handle peak traffic on your network. And now, more than ever, as business goes "online" and into the Cloud, and both your employees and your customers are on your network, bandwidth shaping is a prudent insurance policy for providing a great experience on your network.

 

 

Economics of the Internet Cloud Part 1



By Art Reisman

CTO, APconnections

Why is it that you need to load up all of your applications and carry them around with you on your personal computing device? From iBird Pro to your favorite weather application, the standard operating model assumes you purchase these things and then affix them to your medium of preference.

Essentially you are tethered to your personal device.

Yes, there are business reasons why a company like Apple would prefer this model. They own the hardware and they control the applications, and thus it is in their interest to keep you walled off and loyal to your investment in Apple products.

But there is another, more insidious economic restriction that forces this model upon us: a lag in the speed and availability of wireless bandwidth. If you had a wireless connection to the cloud that was low-cost and offered a minimum of 300 megabits of access without restriction, you could instantly fire up any application in existence without ever pre-downloading it. Your personal computing device would not store anything. This is the world of the future that I referenced in my previous article, Will Cloud Computing Obsolete Your Personal Device?

The X factor in my prediction is when we will have 300 megabit wireless bandwidth speeds across the globe without restrictions. The assumption is that bandwidth speeds and prices will follow a curve similar to improvements in computing speeds, a Moore's Law for bandwidth if you will.

It will happen, but the question is how fast: 10 years, 20 years, 50 years? And when it does, vendors and consumers will quickly learn that it is much more convenient to keep everything in the cloud. No more apps tied to your device. People will own some very cheap cloud space for all their "stuff", and the device on which it runs will become less and less important.

Wireless bandwidth speed increases are running against some pretty severe headwinds, which I will cover in my next article. Stay tuned.

Will Cloud Computing Obsolete Your Personal Device?



By Art Reisman

CTO, APconnections

Twenty-two years ago, all the buzz amongst the engineers in the AT&T Bell Labs offices was a technology called "thin client". The term "cloud" had not yet been coined, but the seeds had been sown. We went to our project management as we always did when we had a good idea, and as usual, being the dinosaurs that they were, they could not even grasp the concept; their brains were three sizes too small, and so the idea was tabled.

And then came the Googles and the Apples of the world, the disrupters. As Bell Labs reached old age and wallowed in its death throes, I watched from afar as cloud computing took shape.

Today cloud computing is changing the face of the computer and networking world. From my early '90s excitement, it took over 10 agonizing years for the first cotyledons to appear above the soil. And even today, 20 years later, cloud computing is in its adolescence; the plants are essentially teenagers.

Historians probably won't even take note of those 10 lost years. They will be footnoted as if the transition were instantaneous. For those of us who waited in anticipation during that incubation period, the time was real; it lasted over a quarter of our professional working lives.

Today, cloud computing is having a ripple effect on other technologies that were once assumed sacred. For example, customer premise networks and all the associated hardware are getting flushed down the toilet. Businesses are simplifying their on-premise networks and will continue to do so. This is not good news for Cisco, the desktop PC manufacturers, chip makers, and on down the line.

What to expect 20 years from now? Okay, here goes: I predict that the "personal" computing devices that we know and love might fall into decline in the next 25 years. Say goodbye to "your" iPad or "your" iPhone.

That's not to say you won't have a device at your disposal for personal use, but it will only be tied to you for the time period during which you are using it. You walk into the store, and along with the shopping carts there is a stack of computing devices; you pick one up, touch your thumb to it, and instantly it has all your data.

Imagine if personal computing devices were so ubiquitous in society that you did not have to own one. How freeing would that be? You would not have to worry about forgetting it or taking it through security. Wherever you happened to be, in a hotel or a library, you could just grab one of the many complimentary devices stacked at the door, touch your thumb to the screen, and you would be ready to go: e-mail, pictures, games, all your personal settings in place.

Yes, you would pay for the content and the services, through the nose most likely, but the hardware would be an irrelevant commodity.

Still skeptical? I'll cover the economics of how this transition will happen in my next post. Stay tuned.

Five Requirements for QoS and Your Cloud Computing


I received a call today from one of the largest Tier 1 providers in the world. The salesperson on the other end was lamenting his inability to sell cloud services to his customers. His service offerings were hot, but the customers' Internet connections were not. Until his customers resolved their congestion problems, they were in a holding pattern for new cloud services.

Before I finish my story, I promised a list of what a Next Generation traffic controller can do, so without further ado, here it is.

  1. Next Generation Bandwidth controllers must be able to mitigate traffic flows originating from the Internet such that important Cloud Applications get priority.
  2. Next Generation Bandwidth controllers must NOT rely on Layer 7 DPI technology to identify traffic. (too much encryption and tunneling today for this to be viable)
  3. Next Generation Bandwidth controllers must hit a price range of $5k to $10k USD  for medium to large businesses.
  4. Next Generation Traffic controllers must not require babysitting and adjustments from the IT staff to remain effective.
  5. A Next Generation traffic controller should adopt a Heuristics-based decision model (like the one used in the NetEqualizer).

As for those businesses mentioned by the sales rep, when they moved to the cloud many of them had run into bottlenecks.  The bottlenecks were due to their iOS updates and recreational “crap” killing the cloud application traffic on their shared Internet trunk.

Their original assumption was they could use the QoS on their routers to mitigate traffic. After all, that worked great when all they had between them and their remote business logic was a nailed up MPLS network. Because it was a private corporate link, they had QoS devices on both ends of the link and no problems with recreational congestion.

Moving to the Cloud was a wake-up call! Think about it: when you go to the cloud you only control one end of the link. This means that your router-based QoS is no longer effective, and incoming traffic will crush you if you do not do something different.

The happy ending is that we were able to help our friend at BT by mitigating his customers' bottlenecks. Contact us if you are interested in more details.

Six Ways to Save With Cloud Computing


I was just doing some research on the cost savings of Cloud computing, and clearly it is shaking up the IT industry.  The five points in this Webroot article, “Five Financial Benefits of Moving to the Cloud”, really hit the nail on the head.   The major points are listed below.

#1. Fully utilized hardware

#2. Lower power costs

#3. Lower people costs

#4. Zero capital costs

#5. Resilience without redundancy

Not listed in the article details was a 6th way that you save money in the cloud.  The following is from conversations I have had with a few of our customers that have moved to the Cloud.

#6.  Lower network costs

Since your business services are in the cloud, you can ditch all of those expensive MPLS links that you use to privately tie your offices to your back-end systems, and replace them with lower-cost commercial Internet links. You do not really need more bandwidth, just better bandwidth performance.  The commodity Internet links are likely good enough, but… when you move to the Cloud you will need a smart bandwidth shaper.

Your link to the Internet becomes even more critical when you go to the Cloud. But that does not mean bigger and more expensive pipes. Cloud applications are very lean, and you do not need a big pipe to support them. You just need to make sure recreational traffic does not cut into your business application traffic. Here is my shameless plug: The NetEqualizer is perfectly designed to separate out the business traffic from the recreational. Licensing is simple, and surprisingly affordable.

The NetEqualizer is Cloud-Ready.  If you are moving your business applications to the Cloud, contact us to see if we can help ease congestion for your traffic going both to and from the Cloud.

How Much Bandwidth do you Need for Cloud Services?


The good news is most cloud applications have a very small Internet footprint. The bad news is, if left unchecked, all that recreational video will suck the life out of your Internet connection before you know it.

The screenshot below depicts a live snapshot of bandwidth utilization on a business network.

That top number, circled in red, is a YouTube video, and it is consuming about 3 megabits of bandwidth.  Directly underneath that are a couple of cloud service applications from Amazon, and they are consuming 1/10 of what the YouTube video demolishes.

Over the past few years I have analyzed quite a few customer systems, and I consistently see cloud-based business applications consuming  a small fraction of what video and software updates require.

For most businesses,  if they never allowed a video or software update to cross their network, they could easily handle all the cloud-based business applications without worry of running out of room on their trunks. Remember, video and updates use ten times what cloud applications consume. The savings in bandwidth utilization would be so great that  they could cut their contracted bandwidth allocation to a fraction of what they currently have.

Coming back to earth, I don’t think this plan is practical. We live in a video and software update driven world.

If you can't outright block video and updates, the next best thing would be to give them a lower priority when there is contention on the line. The natural solution that most IT administrators gravitate to is to try to identify traffic by type. Although intuitively appealing, there are some major drawbacks to typecasting traffic on the fly. The biggest drawback is that nearly everything now comes across as encrypted traffic, and you really can't expect to identify traffic once it is encrypted.

The good news is that you can reliably guess that your smaller-footprint traffic is Cloud or interactive (important), and that those large 3 megabit+ streams should get a lower priority (not as important). For more on the subject of how to set your cloud priority, we recommend reading QoS and Your Cloud Applications.
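To make the idea concrete, here is a minimal sketch of footprint-based classification. The 3 megabit threshold mirrors the observation above, but the threshold value and the flow structure are illustrative assumptions, not any product's actual internals:

```python
# Illustrative only: classify a flow by its measured rate, not by inspecting packets.
# The 3 Mbps threshold is an assumption for this sketch.

LARGE_STREAM_BPS = 3_000_000  # streams above this look like video/updates/backups

def classify_flow(avg_bits_per_second: float) -> str:
    """Guess a flow's priority class from its bandwidth footprint alone."""
    if avg_bits_per_second >= LARGE_STREAM_BPS:
        return "low-priority"    # likely video, software update, or backup
    return "high-priority"       # likely interactive or cloud application traffic

print(classify_flow(3_200_000))  # a YouTube-sized stream -> low-priority
print(classify_flow(300_000))    # a typical cloud app session -> high-priority
```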

 

 

Capacity Planning for Cloud Applications


The main factors to consider when capacity planning your Internet Link for cloud applications are:

1) How much bandwidth do your cloud applications actually need?

Typical cloud applications require about 1/2 of a megabit or less. There are exceptions to this rule, but for the most part a good cloud application design does not involve large transfers of data. QuickBooks, Salesforce, Gmail, and just about any cloud-based database will be under the 1/2 megabit guideline. The chart below really brings to light the difference between your typical, interactive Cloud Application and the types of applications that will really eat up your data link.


Bandwidth Usage for Cloud Based Applications compared to Big Hitters

2) What types of traffic will be sharing your link with the cloud?

The big hitters are typically YouTube and Netflix.  They can consume up to 4 megabits or higher per connection.  Also, system updates for Windows and iOS, as well as internal backups to cloud storage, can consume 20 megabits or more.  Another big hitter can be typical Web Portal sites, such as CNN, Yahoo, and Fox News. A few years ago these sites had a small footprint as they consisted of static images and text.  Today, many of these sites automatically fire up video feeds, which greatly increase their footprint.

3) What is the cost of your Internet Bandwidth, and do you have enough?

Obviously, if there were no limit to the size of your Internet pipe or the required infrastructure to handle it, there would be no concerns or need for capacity planning. In order to be safe, a good rule of thumb as of 2016 is that you need about 100 megabits per 20 users. Less than that, and you will need to be willing to scale back some of those larger bandwidth-consuming applications, which brings us to point 4.
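As a rough sanity check against that rule of thumb, here is a tiny calculation sketch; the user count and current link size are made-up inputs for illustration:

```python
# A quick check against the "100 megabits per 20 users" rule of thumb.
MBPS_PER_USER = 100 / 20  # 5 Mbps per user

def recommended_link_mbps(user_count: int) -> float:
    return user_count * MBPS_PER_USER

users = 80                  # example organization size
current_link_mbps = 300     # example contracted bandwidth
needed = recommended_link_mbps(users)

print(f"Recommended: {needed:.0f} Mbps, current: {current_link_mbps} Mbps")
if current_link_mbps < needed:
    print("Plan to scale back the larger bandwidth-consuming applications (point 4).")
```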

4) Are you willing to give a lower priority to recreational traffic in order to ensure your critical cloud applications do not suffer?

Hopefully you work in an organization where compromise can be explained, and the easiest compromise to make is to limit non-essential video and recreational traffic.  And those iOS updates? Typically a good bandwidth control solution will detect them and slow them down, so essentially they run in the background with a smaller footprint over a longer period of time.

Bandwidth Control in the Cloud


The good news about cloud-based applications is that in order to be successful, they must be fairly lightweight in terms of their bandwidth footprint. Most cloud designers create applications with a fairly small data footprint. A poorly designed cloud application that required large amounts of data transfer would not get good reviews and would likely fizzle out.

The bad news is that cloud applications must share your Internet link with recreational traffic, and recreational traffic is often bandwidth intensive, with no intention of playing nice when sharing a link.

For businesses, a legitimate concern is having their critical cloud-based applications starved for bandwidth. When this happens, they can perform poorly or lock up, creating a serious drop in productivity.

 

If you suspect you have bandwidth contention impacting the performance of a critical cloud application, the best place to start  your investigation would be with a bandwidth controller/monitor that can show you the basic footprint of how much bandwidth an application is using.

Below is a quick screenshot from our NetEqualizer that I often use when troubleshooting a customer link. It gives me a nice snapshot of utilization. I can sort the heaviest users by their bandwidth footprint and then click on a convenient DNS lookup tab to see who they are.
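If you do not have a purpose-built monitor handy, the same kind of snapshot can be approximated with a few lines of script. The flow list below is fabricated sample data; only the heaviest-first sorting and the best-effort reverse DNS lookup are the point:

```python
import socket

# Fabricated monitoring output: (ip_address, bits_per_second) pairs. In practice
# these numbers would come from whatever bandwidth monitor you have in place.
flows = [
    ("203.0.113.10", 3_200_000),
    ("198.51.100.7", 250_000),
    ("192.0.2.44", 90_000),
]

def reverse_dns(ip: str) -> str:
    """Best-effort name lookup, falling back to the raw IP address."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return ip

# Heaviest talkers first, with names resolved for readability.
for ip, bps in sorted(flows, key=lambda f: f[1], reverse=True):
    print(f"{reverse_dns(ip):40s} {bps / 1_000_000:6.2f} Mbps")
```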


In my next post I will detail some typical bandwidth planning metrics for going to the cloud. Stay tuned.

QoS and Your Cloud Applications, the Must Know Facts


When you make the switch to the cloud, you will likely discover that the standard QoS techniques, from the days when services were hosted within your enterprise, will not work on traffic coming in from the public Internet.  Below we detail why, and offer some unique alternatives to traditional router-based QoS. Read on to learn about new QoS techniques designed specifically for the Cloud.

Any QoS designed for the Cloud must address incoming traffic not originating on your Network

Most Internet congestion is caused by incoming traffic: downloads of data not originating at your facility. Unlike in the pre-cloud days, your local router cannot give priority to this data because it has no control over the sending server's stream. Yes, you can still control the priority of outgoing data, but if recreational traffic comes into your network at the same priority as, let's say, a cloud-based VoIP call, then when your download link is full, all traffic will suffer.

Likely No Help from your service provider

Even if you asked your cloud hosting service to mark their traffic as priority, your public Internet provider likely will not treat ToS bits with any form of priority. Hence, all data coming from the Internet into your router from the outside will hit with equal priority. During peak traffic times, important cloud traffic will not be able to punch through the morass.

Is there any way to give priority to incoming cloud traffic?

Is QoS over the Internet for Cloud traffic possible? The answer is yes, QoS on an Internet link is possible. We have spent the better part of seven years practicing this art form and while it is not rocket science, it does require a philosophical shift in thinking to get your arms around it.

How to give priority to Cloud Traffic

We call it “equalizing,” or behavior-based shaping, and it involves monitoring incoming and outgoing streams on your Internet link. Priority or QoS is nothing more than favoring one stream’s packets over another stream’s. You can accomplish priority QoS on incoming streams by queuing (slowing down) one stream over another without relying on ToS bits.

How do we determine which “streams” to slow down?

It turns out in the real world there are three types of applications that matter:

1) Cloud-based business applications: typically things like databases, accounting, Salesforce, educational tools, and VoIP services.

2) Recreational traffic, such as Netflix and YouTube.

3) Downloads and updates

The kicker that we discovered, and it almost always holds true, is that cloud-based applications use a fraction of the bandwidth of recreational video traffic and downloads. If you can simply spot these non-essential data hogs by size and slow them down a bit, there will be plenty of room for your Cloud applications during peak periods.

How do we ensure that cloud traffic has priority if we can’t rely on QoS bits?

To be honest, we stumbled upon this technique about 12 years ago. We keep track of all the streams coming into your network with what can best be described as a sniffing device. When we see a large stream of data, we know from experience that it can't be cloud traffic, as it is too large of a stream. Cloud applications by design are rarely large streams, because if they were, the cloud application would likely be sluggish and not commercially viable. With our sniffing device, the NetEqualizer, we are able to slow down the non-cloud connections by adding in a tiny bit of latency, while at the same time allowing the cloud application streams to pass through. The interesting result is that the sending servers (the same ones that ignore TOS bits) will actually sense that their traffic is being delayed in transport and they will back off their sending speeds on their own.
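Here is a heavily simplified sketch of that idea in Python. The link capacity, thresholds, Stream structure, and fixed 10 ms penalty are assumptions for illustration only; they are not the NetEqualizer's actual parameters or code:

```python
from dataclasses import dataclass

LINK_CAPACITY_BPS = 100_000_000  # assume a 100 Mbps Internet link
CONGESTION_RATIO = 0.85          # start shaping above 85% utilization
LARGE_STREAM_BPS = 3_000_000     # streams above this are candidates to slow

@dataclass
class Stream:
    src: str
    dst: str
    rate_bps: float
    added_delay_ms: float = 0.0

def equalize(streams):
    """If the link is congested, add a little latency to the biggest streams.

    Senders notice the slower acknowledgements and back off on their own,
    while small cloud/interactive streams pass through untouched.
    """
    total = sum(s.rate_bps for s in streams)
    if total < LINK_CAPACITY_BPS * CONGESTION_RATIO:
        return  # plenty of headroom: touch nothing
    for s in streams:
        if s.rate_bps >= LARGE_STREAM_BPS:
            s.added_delay_ms += 10.0  # illustrative penalty, not a real setting

streams = [Stream("cdn.example", "lan-host-1", 6_000_000) for _ in range(15)]
streams.append(Stream("crm.example", "lan-host-2", 400_000))
equalize(streams)
print([s.added_delay_ms for s in streams])  # big streams delayed, the small one not
```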

For more information or a demo feel free to contact us http://www.netequalizer.com.

For further reading on this topic, check out this article: “Traffic Management, Vital in the Cloud”

Caching in the Cloud is Here


By Art Reisman, CTO APconnections (www.netequalizer.com)

I just got a note from a customer, a university, that their ISP is offering them 200 megabit Internet at a fixed price. The kicker is, they can also have access to a 1 gigabit feed specifically for YouTube at no extra cost. The only explanation for this is that their upstream ISP has an extensive in-network YouTube cache. I am just kicking myself for not seeing this coming!

I was well-aware that many of the larger ISPs cached NetFlix and YouTube on a large scale, but this is the first I have heard of a bandwidth provider offering a special reduced rate for YouTube to a customer downstream. I am just mad at myself for not predicting this type of offer and hearing about it from a third party.

As for the NetEqualizer, we have already made adjustments in our licensing for this differential traffic to come through at no extra charge beyond your regular license level, in this case 200 megabits. So if for example, you have a 350 megabit license, but have access to a 1Gbps YouTube feed, you will pay for a 350 megabit license, not 1Gbps.  We will not charge you for the overage while accessing YouTube.

You Must Think Outside the Box to Bring QoS to the Cloud and Wireless Mesh Networks


By Art Reisman
CTO – http://www.netequalizer.com

About 10 years ago, we had this idea for QoS across an Internet link. It was simple and elegant, and worked like a charm. Ten years later, as services spread out over the Internet cloud, our original techniques are more important than ever. You cannot provide QoS using TOS (diffserv) techniques over any public or semi-public Internet link, but using our techniques we have proven the impossible is possible.

Why TOS bits don’t work over the Internet.

The main reason is that setting TOS bits is only effective when you control all sides of a conversation on a link, and this is not possible on most Internet links (think cloud computing and wireless mesh networks). For standard TOS services to work, you must control all the equipment between the two endpoints. All it takes is one router in the path of a VoIP conversation to ignore a TOS bit, and its purpose becomes obsolete. Thus TOS bits for priority are really only practical inside a corporate LAN/WAN topology.

Look at the root cause of poor quality services and you will find alternative solutions.

Most people don’t realize the problem with congested VoIP, on any link, is due to the fact that their VoIP packets are getting crowded out by larger downloads and things like recreational video (this is also true for any interactive cloud access congestion). Often, the offending downloads are initiated by their own employees or users. A good behavior-based shaper will be able to favor VoIP streams over less essential data streams without any reliance on the sending party adhering to a TOS scheme.

How do we accomplish priority for VoIP?

We do this by monitoring all the streams on a link with one piece of equipment inserted anywhere in the congested link. In our current terminology, a stream consists of an IP (local) talking to another IP (remote Internet). When we see a large stream dominating the link, we step back and ask: is the link congested? Is that download crowding out other time-sensitive transactions such as VoIP? If the answer is yes to both questions, then we proactively take away some bandwidth from the offending stream. I know this sounds ridiculously simple, and does not seem plausible, but it works. It works very well, and it works with just one device in the link, irrespective of any other complex network engineering. It works with minimal setup. It works over MPLS links. I could go on and on; the only reason you have not heard of it is perhaps that it goes against the grain of what most vendors are selling – large orders for expensive high-end routers using TOS bits.

Related article: QoS over the Internet – is it possible?

Fast forward to our next release: how to provide QoS deep inside a cloud or mesh network where sending or receiving IP addresses are obfuscated.

Coming this winter we plan to improve upon our QoS techniques so we can drill down inside of Mesh and Cloud networks a bit better.

As the use of NAT, distributed across mesh networks, becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, one side effect has been that our stream-based behavior shaping (QoS) is not as effective as it is when all IP addresses are visible (not masked behind a NAT/PAT device).

This is due to the fact that currently we base our decision on a pair of IPs talking to each other, but we do not consider the IP port numbers, and sometimes, especially in a cloud or mesh network, services are trunked across a tunnel using the same IP. As these services get tunneled across a trunk, the data streams are bundled together using one common pair of IPs, and then the streams are broken out based on IP ports so they can be routed to their final destination. For example, in some cloud computing environments there is no way to differentiate a video stream coming from the cloud within the tunnel from a smaller data access session; they can sometimes both be talking across the same set of IPs to the cloud. In a normal open network we could slow the video (or in some cases give priority to it) by knowing the IP of the video server and the IP of the receiving user, but when the video server is buried within the tunnel, sharing the IPs of other services, our current equalizing (QoS) techniques become less effective.

Services within a tunnel, cloud, or mesh may be bundled using the same IPs, but they are often sorted out on different ports at the ends of the tunnel. With our new release coming this winter, we will start to look at streams as IP and port number, thus allowing for much greater resolution for QoS inside the Cloud and inside your mesh network. Stay tuned!
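A small sketch of the difference, with made-up addresses: keying flows on the IP pair alone collapses everything in a tunnel into one stream, while adding the port numbers separates them.

```python
# Keying flows on the IP pair alone collapses every service inside a tunnel
# into one stream; adding the port numbers separates them. Addresses are made up.

def legacy_flow_key(src_ip, dst_ip, src_port, dst_port):
    return (src_ip, dst_ip)                      # IP pair only

def port_aware_flow_key(src_ip, dst_ip, src_port, dst_port):
    return (src_ip, src_port, dst_ip, dst_port)  # IPs plus ports

packets = [
    ("10.0.0.5", "203.0.113.9", 51000, 443),  # small cloud data session
    ("10.0.0.5", "203.0.113.9", 51002, 443),  # video riding the same tunnel
]

print(len({legacy_flow_key(*p) for p in packets}))      # 1 stream (indistinguishable)
print(len({port_aware_flow_key(*p) for p in packets}))  # 2 streams (separable)
```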

Cloud Computing – Do You Have Enough Bandwidth? And a Few Other Things to Consider


The following is a list of things to consider when using a cloud-computing model.

Bandwidth: Is your link fast enough to support cloud computing?

We get asked this question all the time: What is the best-practice standard for bandwidth allocation?

Well, the answer depends on what you are computing.

– First, there is the application itself.  Is your application dynamically loading up modules every time you click on a new screen? If the application is designed correctly, it will be lightweight and come up quickly in your browser. Flash video screens certainly spruce up the experience, but I hate waiting for them. Make sure when you go to a cloud model that your application is adapted for limited bandwidth.

– Second, what type of transactions are you running? Are you running videos and large graphics or just data? Are you doing photo processing from Kodak? If so, you are not typical, and moving images up and down your link will be your constraining factor.

– Third, are you sharing general Internet access with your cloud link? In other words, is that guy on his lunch break watching a replay of royal wedding bloopers on YouTube interfering with your salesforce.com access?

The good news is (assuming you will be running a transactional cloud computing environment – e.g. accounting, sales database, basic email, attendance, medical records – without video clips or large data files), you most likely will not need additional Internet bandwidth. Obviously, we assume your business has reasonable Internet response times prior to transitioning to a cloud application.

Factoid: Typically, for a business in an urban area, we would expect about 10 megabits of bandwidth for every 100 employees. If you fall below this ratio, 10/100, you can still take advantage of cloud computing but you may need  some form of QoS device to prevent the recreational or non-essential Internet access from interfering with your cloud applications.  See our article on contention ratio for more information.

Security: Can you trust your data in the cloud?

For the most part, chances are your cloud partner will have much better resources to deal with security than your enterprise, as this should be a primary function of their business. They should have an economy of scale – whereas most companies view security as a cost and are always juggling those costs against profits, cloud-computing providers will view security as an asset and invest more heavily.

We addressed security in detail in our article how secure is the cloud, but here are some of the main points to consider:

1) Transit security: moving data to and from your cloud provider. How are you going to make sure this is secure?
2) Storage: the handling of your data at your cloud provider. Is it secure from an outside hacker once it gets there?
3) Inside job: this is often overlooked, but can be a huge security risk. Who has access to your data within the provider network?

Evaluating security when choosing your provider.

You would assume the cloud company, whether it be Apple or Google (Gmail, Google Calendar), uses some best practices to ensure security. My fear is that ultimately some major cloud provider will fail miserably just like banks and brokerage firms. Over time, one or more of them will become complacent. Here is my check list on what I would want in my trusted cloud computing partner:

1) Do they have redundancy in their facilities and their access?
2) Do they screen their employees for criminal records and drug usage?
3) Are they willing to let you, or a truly independent auditor, into their facility?
4) How often do they back-up data and how do they test recovery?

Big Brother is watching.

This is not so much a traditional security threat, but if you are using a free service you are likely going to agree, somewhere in their fine print, to expose some of your information for marketing purposes. Ever wonder how those targeted ads appear that are relevant to the content of the mail you are reading?

Link reliability.

What happens if your link goes down, or your provider's link goes down? How dependent are you? Make sure your business or application can handle unexpected downtime.

Editor's note: unless otherwise stated, these tips assume you are using a third-party provider for resources and applications and are not a large enterprise hosting a centralized service on your own network. For example, using QuickBooks over the Internet would be considered a cloud application (and one that I use extensively in our business); however, centralizing Microsoft Excel on a corporate server with thin terminal clients would not be cloud computing.

How Safe is The Cloud?


By Zack Sanders, NetEqualizer Guest Columnist

There is no question that cloud-computing infrastructures are the future for businesses of every size. The advantages they offer are plentiful:

  • Scalability – IT personnel used to have to scramble for hardware when business decisions dictated the need for more servers or storage. With cloud computing, an organization can quickly add and subtract capacity at will. New server instances are available within minutes of provisioning them.
  • Cost – For a lot of companies (especially new ones), the prospect of purchasing multiple $5,000 servers (and paying to have someone maintain them) is not very attractive. Cloud servers are very cheap – and you only pay for what you use. If you don’t require a lot of storage space, you can pay around 1 cent per hour per instance. That’s roughly $8/month. If you can’t incur that cost, you should probably reevaluate your business model.
  • Availability – In-house data centers experience routine outages. When you outsource your data center to the cloud, everything server related is in the hands of industry experts. This greatly increases quality of service and availability. That’s not to say outages don’t occur – they do – just not nearly as often or as unpredictably.

While it’s easy to see the benefits of cloud computing, it does have its potential pitfalls. The major questions that always accompany cloud computing discussions are:

  • “How does the security landscape change in the cloud?” – and
  • “What do I need to do to protect my data?”

Businesses and users are concerned about sending their sensitive data to a server that is not totally under their control – and they are correct to be wary. However, when taking proper precautions, cloud infrastructures can be just as safe – if not safer – than physical, in-house data centers. Here’s why:

  • They’re the best at what they do – Cloud computing vendors invest tons of money securing their physical servers that are hosting your virtual servers. They’ll be compliant with all major physical security guidelines, have up-to-date firewalls and patches, and have proper disaster recovery policies and redundant environments in place. From this standpoint, they’ll rank above almost any private company’s in-house data center.
  • They protect your data internally – Cloud providers have systems in place to prevent data leaks or access by third parties. Proper separation of duties should ensure that root users at the cloud provider couldn’t even penetrate your data.
  • They manage authentication and authorization effectively – Because logging and unique identification are central components to many compliance standards, cloud providers have strong identity management and logging solutions in place.

The above factors provide a lot of peace of mind, but with security it’s always important to layer approaches and be diligent. By layering, I mean that the most secure infrastructures have layers of security components so that, if one were to fail, the next one would thwart an attack. This diligence is just as important for securing your external cloud infrastructure. No environment is ever immune to compromise. A key security aspect of the cloud is that your server is outside of your internal network, and thus your data must travel public connections to and from your external virtual machine. Companies with sensitive data are very worried about this. However, when taking the following security measures, your data can be just as safe in the cloud:

  • Secure the transmission of data – Set up SSL connections for sensitive data, especially logins and database connections.
  • Use keys for remote login – Utilize public/private keys, two-factor authentication, or other strong authentication technologies. Do not allow remote root login to your servers. Brute force bots hound remote root logins incessantly in cloud provider address spaces.
  • Encrypt sensitive data sent to the cloud – SSL will take care of the data’s integrity during transmission, but it should also be stored encrypted on the cloud server (see the sketch after this list).
  • Review logs diligently – use log analysis software ALONG WITH manual review. Automated technology combined with a manual review policy is a good example of layering.
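To make the "encrypt before you upload" point concrete, here is a minimal Python sketch using the third-party cryptography package's Fernet recipe. The record contents and the upload_to_cloud call are hypothetical placeholders, not any particular provider's API:

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Generate a key and keep it in your own environment; never store it with the data.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer-id=1234, balance=500.00"  # example sensitive record
encrypted_blob = cipher.encrypt(record)       # this is what actually gets uploaded

# upload_to_cloud(encrypted_blob)  # <- hypothetical call to your provider's API

# Later, after retrieving the blob back over an SSL/TLS connection:
assert cipher.decrypt(encrypted_blob) == record
```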

So, when taking proper precautions (precautions that you should already be taking for your in-house data center), the cloud is a great way to manage your infrastructure needs. Just be sure to select a provider that is reputable and make sure to read the SLA. If the hosting price is too good to be true, it probably is. You can’t take chances with your sensitive data.

About the author:

Zack Sanders is a Web Application Security Specialist with Fiddler on the Root (FOTR). FOTR provides web application security expertise to any business with an online presence. They specialize in ethical hacking and penetration testing as a service to expose potential vulnerabilities in web applications. The primary difference between the services FOTR offers and those of other firms is that they treat your website like an ACTUAL attacker would. They use a combination of hacking tools and savvy know-how to try and exploit your environment. Most security companies  just run automated scans and deliver the results. FOTR is for executives that care about REAL security.

Covid-19 and Increased Internet Usage


Our sympathies go out to everyone who has been impacted by Covid 19, whether you had it personally or it affected your family and friends. I personally lost a sister to Covid-19 complications back in May; hence I take this virus very seriously.

The question I ask myself now as we see a light at the end of the Covid-19 tunnel with the anticipated vaccines next month is, how has Covid-19 changed the IT landscape for us and our customers?

The biggest change that we have seen is Increased Internet Usage.

We have seen a 500 percent increase in NetEqualizer license upgrades over the past 6 months, which means that our customers are ramping up their circuits to ensure a work-from-home experience without interruption or outages. What we can't tell for sure is whether these upgrades were made out of an abundance of caution, to get ahead of the curve, or because there was actually a significant increase in demand.

Without a doubt, home usage of the Internet has increased, as consumers work from home on Zoom calls, watch more movies, and find ways to entertain themselves in a world where they are staying at home most of the time. Did this shift actually put more traffic on the average business office network where our bandwidth controllers normally reside? The knee-jerk reaction would be yes, of course, but I would argue not so fast. Let me lay out my logic here…

For one, with a group of people working remotely using the plethora of cloud-hosted collaboration applications such as Zoom, or Blackboard sharing, there is very little if any extra bandwidth burden back at the home office or campus. The additional cloud-based traffic from remote users will be pushed onto their residential ISP providers. On the other hand, organizations that did not transition services to the cloud will have their hands full handling the traffic from home users coming in over VPN into the office.

Higher Education usage is a slightly different animal.   Let’s explore the three different cases as I see them for Higher Education.

1) Everybody is Remote

In this instance it is highly unlikely there would be any increase in bandwidth usage at the campus itself. All of the Zoom or Microsoft Teams traffic would be shifted to the ISPs at the residences of students and teachers.

2) Teachers are On-Site and Students are Remote

For this we can do an approximation.

For each teacher sharing a room session, you can estimate 2 to 8 megabits of consistent bandwidth load. For a high school with 40 teachers on active Zoom calls, you could estimate a sustained 300 megabits dedicated to Zoom. With just a skeleton crew of teachers and no students in the building, the Internet capacity should hold, since the students who normally eat up huge chunks of bandwidth are no longer there.
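A quick back-of-the-envelope version of that estimate, using the 2 to 8 megabit per-session range above (the teacher count is just the same example):

```python
# Rough arithmetic behind the ~300 megabit estimate. The 2-8 Mbps per-session
# range comes from the paragraph above; the teacher count is illustrative.
teachers_on_zoom = 40
low_mbps, high_mbps = 2, 8

low_estimate = teachers_on_zoom * low_mbps    # 80 Mbps
high_estimate = teachers_on_zoom * high_mbps  # 320 Mbps

print(f"Sustained Zoom load: roughly {low_estimate}-{high_estimate} Mbps")
```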

3) Mixed Remote and In-person Students

The one scenario that would stress existing infrastructure would be the case where students are on campus while at the same time classes are being broadcast remotely for the students who are unable to come to class in person. In this instance, you have close to the normal campus load plus all the Zoom or Microsoft Teams sessions emanating from the classrooms. To top it off, these Zoom or Microsoft Teams sessions are highly sensitive to latency, and thus the institution cannot risk even a small amount of congestion, as that would cause an interruption to all classes.

Prior to Covid-19, Internet congestion might interrupt a Skype conference call with the sales team to Europe, which is no laughing matter but a survivable disruption. Post Covid-19, an interruption in Internet communication could potentially interrupt the entire organization, which is not tolerable.

In summary, it was probably wise for most institutions to beef up their IT infrastructure to handle more bandwidth, even knowing in hindsight that in some cases it may not have been needed on the campus or in the office. Given the absolutely essential role that Internet communication has played in keeping businesses and Higher Ed connected, it was not worth the risk of being caught with too little.

Stay tuned for a future article detailing the impact of Covid-19 on ISPs…

Technology Predictions for 2018+


By Art Reisman

CTO http://www.apconnections.net

Below are my predictions for technology in 2018 and beyond. As you will see some of them are fairly pragmatic, while others may stretch the imagination a little bit.

  1. Forget about drones delivering packages to your door; too many obstacles in densely populated areas. For example, I don’t want unmanned drones dangling 30 pound flower pots flying above my head in my neighborhood. One gust of wind and bam, a flower pot comes hurtling out of the sky. I don’t want it even if it is technically possible! But what is feasible, and likely, are slow plodding autonomous robots that can carry a payload and navigate to your doorstep. Not as sexy as zippy little drones, but this technology is fairly mature on factory floors already, and those robots don’t ask for much in return.
  2. As for Networking advancements, we may see a “Cloud” backlash where companies bring some of their technology back in-house to gain full control of their systems and data.  I am not predicting the Cloud won’t continue to be a big player, it will, but it may have a hiccup or two along the way.  My reasoning is simple, and it goes back to the days of the telephone when AT&T started offering a PBX in the sky.  The exact name for this service slips my mind.  It sounded great and had its advantages, but many companies opted to purchase their own customer premise PBX equipment, as they did not want a third-party operating such a critical piece of infrastructure.  The same might be said for private companies thinking about the Cloud.  They could make an argument that they need to secure their own data and also ensure uptime access to their data.
  3. More broadband wireless ISPs coming to your neighborhood as an alternative option for home Internet.  I have had my ear to the street for quite some time, and the ability to beam high-speed Internet to your house has come a long way in the last 10 years.  Also the distrust, bitterness, dare I say hatred, for the traditional large incumbents is always a factor. One friend of mine is making inroads in a major city right in the heart of downtown simply by word of mouth.  His speeds are competitive, his costs are lower, and his service cannot be matched by the entrenched incumbent.
  4. Lower automobile insurance rates. The newer fleet of smart cars that automatically brake for or completely avoid obstacles is going to reduce serious accidents by 50 percent or more in the near future. Insurance payouts will drop and eventually this will be passed on to consumers. Longer-term, as everyone on the road has autonomous driving cars, insurance will be analogous to a manufacturer’s warranty, and will be paid by the auto manufacturer.
  5. The Internet of Things (IoT) will continue to explode, particularly in the smart home arena. Home security has taken leaps & bounds in recent years, enabling a consumer to lock/unlock, view and manage their home remotely. Now we are seeing IoT embedded in more appliances, which will be able to be controlled remotely as well – so that you can run the dishwasher, washer, dryer, or oven from anywhere.
  6. Individual Biosensory data, like that collected by Garmin and Fitbit monitors, will be used by more companies and in more ways.  In 2018 my health insurance company is offering discounts for members that prove they use their gym memberships.  It is only a small leap to imagine a health insurance company asking for my biosensory data, to select my insurance group and to set my insurance rates.  As more people use fitness trackers and share their data (currently only with friends), it will become the norm to share this type of data, probably at first anonymously.  I can see a future where  health care providers and employers use this data to make decisions.

I will update soon as new ideas continue to pop into my head all the time.  Stay tuned!
