Economics of the Internet Cloud Part 1



By Art Reisman

CTO, APconnections

Why is it that you need to load up all of your applications and carry them around with you on your personal computing device? From iBird Pro to your favorite weather application, the standard operating model assumes you purchase these things and then affix them to your medium of preference.

Essentially you are tethered to your personal device.

Yes, there are business reasons why a company like Apple would prefer this model. They own the hardware and control the applications, and thus it is in their interest to keep you walled off and loyal to your investment in Apple products.

But there is another, more insidious economic restriction that forces this model upon us: a lag in the speed and availability of wireless bandwidth. If you had a wireless connection to the cloud that was low-cost and offered a minimum of 300 megabits of access without restriction, you could instantly fire up any application in existence without ever pre-downloading it. Your personal computing device would not store anything. This is the world of the future that I referenced in my previous article, Will Cloud Computing Obsolete Your Personal Device?

The X factor in my prediction is when we will have 300 megabit wireless bandwidth speeds across the globe without restrictions. The assumption is that bandwidth speeds and prices will follow a curve similar to the improvements in computing speeds, a Moore’s Law for bandwidth if you will.

It will happen, but the question is how fast: 10 years, 20 years, 50 years? And when it does, vendors and consumers will quickly learn it is much more convenient to keep everything in the cloud. No more apps tied to your device. People will own some very cheap cloud space for all their “stuff”, and the device on which it runs will become less and less important.

Bandwidth speed increases in wireless are running against some pretty severe headwinds, which I will cover in my next article. Stay tuned.

Will Cloud Computing Obsolete Your Personal Device?



By Art Reisman

CTO, APconnections

Twenty-two years ago, all the buzz amongst the engineers in the AT&T Bell Labs offices was a technology called “thin client”. The term “cloud” had not yet been coined, but the seeds had been sown. We went to our project management, as we always did when we had a good idea, and as usual, being the dinosaurs that they were, they could not even grasp the concept (their brains were three sizes too small), and so the idea was tabled.

And then came the Googles and the Apples of the world, the disrupters. As Bell Labs reached old age and wallowed in its death throes, I watched from afar as cloud computing took shape.

Today cloud computing is changing the face of the computer and networking world. From my early-90s excitement, it took over 10 agonizing years for the first cotyledons to appear above the soil. And even today, 20 years later, cloud computing is in its adolescence; the plants are essentially teenagers.

Historians probably won’t even take note of those 10 lost years. They will be footnoted as if that transition time were instantaneous. For those of us who waited in anticipation during that incubation period, the time was real; it lasted over a quarter of our professional working lives.

Today, cloud computing is having a ripple effect on other technologies that were once assumed sacred. For example, customer-premise networks and all the associated hardware are getting flushed down the toilet. Businesses are simplifying their on-premise networks and will continue to do so. This is not good news for Cisco, the desktop PC manufacturers, the chip makers, and on down the line.

What to expect 20 years from now? Okay, here goes: I predict that the “personal” computing devices that we know and love will fall into decline in the next 25 years. Say goodbye to “your” iPad or “your” iPhone.

That’s not to say you won’t have a device at your disposal for personal use, but it will only be tied to you for the time period in which you are using it. You walk into the store, and along with the shopping carts there is a stack of computing devices; you pick one up, touch your thumb to it, and instantly it has all your data.

Imagine if personal computing devices were so ubiquitous in society that you did not have to own one. How freeing would that be? You would not have to worry about forgetting one or taking it through security. Wherever you happened to be, in a hotel or a library, you could just grab one of the many complimentary devices stacked at the door, touch your thumb to the screen, and be ready to go: e-mail, pictures, games, all your personal settings in place.

Yes, you would pay for the content and the services, through the nose most likely, but the hardware would be an irrelevant commodity.

Still skeptical? I’ll cover the economics of how this transition will happen in my next post. Stay tuned.

Five Requirements for QoS and Your Cloud Computing


I received a call today from one of the largest Tier 1 providers in the world. The salesperson on the other end was lamenting his inability to sell cloud services to his customers. His service offerings were hot, but the customers’ Internet connections were not. Until his customers resolved their congestion problems, they were in a holding pattern for new cloud services.

Before I finish my story, I promised a list of what a Next Generation traffic controller can do, so without further ado, here it is.

  1. Next Generation Bandwidth controllers must be able to mitigate traffic flows originating from the Internet such that important Cloud Applications get priority.
  2. Next Generation Bandwidth controllers must NOT rely on Layer 7 DPI technology to identify traffic. (There is too much encryption and tunneling today for this to be viable.)
  3. Next Generation Bandwidth controllers must hit a price range of $5k to $10k USD  for medium to large businesses.
  4. Next Generation Traffic controllers must not require babysitting and adjustments from the IT staff to remain effective.
  5. A Next Generation traffic controller should adopt a Heuristics-based decision model (like the one used in the NetEqualizer).
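As a thought experiment, the heuristics-based model in requirement 5 can be sketched in a few lines. This is purely illustrative; the thresholds and function names are my own assumptions, not how the NetEqualizer is actually implemented.

```python
# Hypothetical sketch of a heuristics-based decision model (requirement 5).
# It relies only on observable behavior (stream size and link load), never
# Layer 7 inspection (requirement 2). Thresholds are illustrative guesses.

def decide(link_capacity_bps, link_load_bps, stream_rate_bps):
    """Return an action for a single traffic stream."""
    congested = link_load_bps > 0.85 * link_capacity_bps       # link near saturation?
    large_stream = stream_rate_bps > 0.05 * link_capacity_bps  # likely bulk or video
    if congested and large_stream:
        return "throttle"  # slow the heavy stream so interactive traffic gets through
    return "pass"          # small (likely cloud/interactive) streams go untouched

# Example: a 100 Mbps link running at 90 Mbps, carrying one 8 Mbps stream
print(decide(100_000_000, 90_000_000, 8_000_000))  # throttle
```

Note that the model needs no per-protocol signatures at all, which is what lets it keep working as more and more traffic arrives encrypted.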

As for those businesses mentioned by the sales rep, many of them had run into bottlenecks when they moved to the cloud. The bottlenecks were due to iOS updates and recreational “crap” killing the cloud application traffic on their shared Internet trunk.

Their original assumption was they could use the QoS on their routers to mitigate traffic. After all, that worked great when all they had between them and their remote business logic was a nailed up MPLS network. Because it was a private corporate link, they had QoS devices on both ends of the link and no problems with recreational congestion.

Moving to the Cloud was a wake-up call! Think about it: when you go to the cloud, you only control one end of the link. This means that your router-based QoS is no longer effective, and incoming traffic will crush you if you do not do something different.

The happy ending is that we were able to help our friend at BT Telecom by mitigating his customers’ bottlenecks. Contact us if you are interested in more details.

Six Ways to Save With Cloud Computing


I was just doing some research on the cost savings of Cloud computing, and clearly it is shaking up the IT industry.  The five points in this Webroot article, “Five Financial Benefits of Moving to the Cloud”, really hit the nail on the head.   The major points are listed below.

#1. Fully utilized hardware

#2. Lower power costs

#3. Lower people costs

#4. Zero capital costs

#5. Resilience without redundancy

Not listed in the article details was a 6th way that you save money in the cloud.  The following is from conversations I have had with a few of our customers that have moved to the Cloud.

#6.  Lower network costs

Since your business services are in the cloud, you can ditch all of those expensive MPLS links that you use to privately tie your offices to your back-end systems, and replace them with lower-cost commercial Internet links. You do not really need more bandwidth, just better bandwidth performance.  The commodity Internet links are likely good enough, but… when you move to the Cloud you will need a smart bandwidth shaper.

Your link to the Internet becomes even more critical when you go to the Cloud. But that does not mean bigger and more expensive pipes. Cloud applications are very lean, and you do not need a big pipe to support them. You just need to make sure recreational traffic does not cut into your business application traffic. Here is my shameless plug: the NetEqualizer is perfectly designed to separate business traffic from recreational. Licensing is simple, and surprisingly affordable.

The NetEqualizer is Cloud-Ready.  If you are moving your business applications to the Cloud, contact us to see if we can help ease congestion for your traffic going both to and from the Cloud.

How Much Bandwidth do you Need for Cloud Services?


The good news is most cloud applications have a very small Internet footprint. The bad news is, if left unchecked, all that recreational video will suck the life out of your Internet connection before you know it.

The screen shot below is a live snapshot depicting bandwidth utilization on a business network.

The top number, circled in red, is a YouTube video, and it is consuming about 3 megabits of bandwidth. Directly underneath it are a couple of cloud service applications from Amazon, and they are consuming 1/10 of what the YouTube video demolishes.

Over the past few years I have analyzed quite a few customer systems, and I consistently see cloud-based business applications consuming  a small fraction of what video and software updates require.

For most businesses, if they never allowed a video or software update to cross their network, they could easily handle all their cloud-based business applications without worrying about running out of room on their trunks. Remember, video and updates use ten times what cloud applications consume. The savings in bandwidth utilization would be so great that they could cut their contracted bandwidth allocation to a fraction of what they currently have.

Coming back to earth, I don’t think this plan is practical. We live in a video and software update driven world.

If you can’t outright block video and updates, the next best thing is to give them a lower priority when there is contention on the line. The natural solution most IT administrators gravitate toward is to identify traffic by type. Although intuitively appealing, typecasting traffic on the fly has some major drawbacks. The biggest is that nearly everything now comes across as encrypted traffic, and you really can’t expect to identify traffic once it is encrypted.

The good news is that you can reliably guess that your smaller-footprint traffic is Cloud or interactive (important), and that those large 3 megabit+ streams should get a lower priority (not as important). For more on how to set your cloud priority, we recommend reading: QoS and Your Cloud Applications.
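That footprint rule boils down to a one-line classifier. The 3-megabit cutoff comes from the discussion above; treating it as a hard threshold is my simplification for illustration.

```python
# Toy classifier based on stream footprint: below the cutoff is presumed
# cloud/interactive, at or above it is presumed video or a download.
MBIT = 1_000_000

def classify(stream_bps):
    if stream_bps >= 3 * MBIT:
        return "low-priority"   # likely recreational video or a software update
    return "high-priority"      # likely cloud or interactive traffic

print(classify(300_000))    # high-priority (e.g. the Amazon cloud apps above)
print(classify(3_500_000))  # low-priority  (e.g. the YouTube stream above)
```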

Capacity Planning for Cloud Applications


The main factors to consider when capacity planning your Internet Link for cloud applications are:

1) How much bandwidth do your cloud applications actually need?

Typical cloud applications require about 1/2 megabit or less. There are exceptions to this rule, but for the most part a good cloud application design does not involve large transfers of data. QuickBooks, Salesforce, Gmail, and just about any cloud-based database will be under the 1/2 megabit guideline. The chart below really brings to light the difference between your typical, interactive cloud application and the types of applications that will really eat up your data link.


Bandwidth Usage for Cloud Based Applications compared to Big Hitters

2) What types of traffic will be sharing your link with the cloud?

The big hitters are typically YouTube and Netflix. They can consume 4 megabits or more per connection. Also, system updates for Windows and iOS, as well as internal backups to cloud storage, can consume 20 megabits or more. Another big hitter can be the typical Web portal sites, such as CNN, Yahoo, and Fox News. A few years ago these sites had a small footprint, as they consisted of static images and text. Today, many of these sites automatically fire up video feeds, which greatly increases their footprint.

3) What is the cost of your Internet Bandwidth, and do you have enough?

Obviously, if there were no limit to the size of your Internet pipe or the required infrastructure to handle it, there would be no concerns and no need for capacity planning. To be safe, a good rule of thumb as of 2016 is that you need about 100 megabits per 20 users. With less than that, you will need to be willing to scale back some of those larger bandwidth-consuming applications, which brings us to point 4.

4) Are you willing to give a lower priority to recreational traffic in order to ensure your critical cloud applications do not suffer?

Hopefully you work in an organization where compromise can be explained, and the easiest compromise to make is limiting non-essential video and recreational traffic. And those iOS updates? A good bandwidth control solution will typically detect them and slow them down, so they essentially run in the background with a smaller footprint over a longer period of time.
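As a closing illustration, the 100-megabits-per-20-users rule of thumb from point 3 reduces to simple arithmetic. The helper names below are my own, not an established formula:

```python
# The 2016 rule of thumb above: roughly 100 Mbps per 20 users, i.e. 5 Mbps/user.
def recommended_mbps(users, mbps_per_user=5):
    return users * mbps_per_user

def link_is_sufficient(link_mbps, users):
    return link_mbps >= recommended_mbps(users)

print(recommended_mbps(20))        # 100
print(link_is_sufficient(50, 20))  # False: plan on shaping recreational traffic
```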

Bandwidth Control in the Cloud


The good news about cloud-based applications is that, in order to be successful, they must be fairly lightweight in terms of their bandwidth footprint. Most cloud designers create applications with a fairly small data footprint. A poorly designed cloud application that required large amounts of data transfer would not get good reviews and would likely fizzle out.

The bad news is that cloud applications must share your Internet link with recreational traffic, and recreational traffic is often bandwidth intensive, with no intention of playing nice when sharing a link.

For businesses, a legitimate concern is having their critical cloud-based applications starved for bandwidth. When this happens, the applications can perform poorly or lock up, creating a serious drop in productivity.

If you suspect bandwidth contention is impacting the performance of a critical cloud application, the best place to start your investigation is with a bandwidth controller/monitor that can show you the basic footprint of how much bandwidth each application is using.

Below is a quick screen shot from our NetEqualizer that I often use when troubleshooting a customer link. It gives me a nice snapshot of utilization. I can sort the heaviest users by their bandwidth footprint, and can then click on a convenient DNS lookup tab to see who they are.


In my next post I will detail some typical bandwidth planning metrics for going to the cloud. Stay tuned.

QoS and Your Cloud Applications, the Must Know Facts


When you make the switch to the cloud, you will likely discover that the standard QoS techniques, from the days when services were hosted within your enterprise, will not work on traffic coming in from the public Internet.  Below we detail why, and offer some unique alternatives to traditional router-based QoS. Read on to learn about new QoS techniques designed specifically for the Cloud.

Any QoS designed for the Cloud must address incoming traffic not originating on your Network

Most Internet congestion is caused by incoming traffic: downloads of data not originating at your facility. Unlike the pre-cloud days, your local router cannot give priority to this data because it has no control over the sending server’s stream. Yes, you can still control the priority of outgoing data, but if recreational traffic coming into your network arrives at the same priority as, say, a cloud-based VoIP call, then when your download link is full, all traffic will suffer.

Likely No Help from your service provider

Even if you asked your cloud hosting service to mark its traffic as priority, your public Internet provider likely will not treat ToS bits with any form of priority. Hence, all data coming from the Internet into your router will hit with equal priority. During peak traffic times, important cloud traffic will not be able to punch through the morass.

Is there any way to give priority to incoming cloud traffic?

Is QoS over the Internet for Cloud traffic possible? The answer is yes, QoS on an Internet link is possible. We have spent the better part of seven years practicing this art form and while it is not rocket science, it does require a philosophical shift in thinking to get your arms around it.

How to give priority to Cloud Traffic

We call it “equalizing,” or behavior-based shaping, and it involves monitoring the incoming and outgoing streams on your Internet link. Priority, or QoS, is nothing more than favoring one stream’s packets over another’s. You can accomplish QoS on incoming streams by queuing (slowing down) one stream relative to another, without relying on ToS bits.

How do we determine which “streams” to slow down?

It turns out in the real world there are three types of applications that matter:

1) Cloud-based business applications: typically things like databases, accounting, Salesforce, educational tools, and VoIP services.

2) Recreational traffic, such as Netflix and YouTube.

3) Downloads and updates.

The kicker we discovered, which almost always holds true, is that cloud-based applications use a fraction of the bandwidth of recreational video traffic and downloads. If you can simply spot these non-essential data hogs by size and slow them down a bit, there will be plenty of room for your cloud applications during peak periods.

How do we ensure that cloud traffic has priority if we can’t rely on QoS bits?

To be honest, we stumbled upon this technique about 12 years ago. We keep track of all the streams coming into your network with what can best be described as a sniffing device. When we see a large stream of data, we know from experience that it can’t be cloud traffic; it is too large a stream. Cloud applications by design are rarely large streams, because if they were, the cloud application would likely be sluggish and not commercially viable. With our sniffing device, the NetEqualizer, we are able to slow down the non-cloud connections by adding in a tiny bit of latency, while allowing the cloud application streams to pass through. The interesting result is that the sending servers (the same ones that ignore ToS bits) will actually sense that their traffic is being delayed in transport and back off their sending speeds on their own.
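A toy model can make the latency-injection idea concrete. Everything here, the threshold, the penalty, and the bookkeeping, is a hypothetical sketch, not the NetEqualizer’s actual code:

```python
import time
from collections import defaultdict

LARGE_STREAM_BPS = 2_000_000   # assumed cutoff: cloud apps rarely exceed this
PENALTY_SECONDS = 0.02         # tiny added latency per packet on heavy streams

rates = defaultdict(int)       # (src_ip, dst_ip) -> observed bits (crude running total)

def forward(packet_bits, flow, send):
    """Pass a packet along, delaying it first if its flow looks like a bulk
    download rather than an interactive cloud session. The sender's TCP
    stack senses the extra delay and backs off on its own."""
    rates[flow] += packet_bits  # a real shaper would decay this over time
    if rates[flow] > LARGE_STREAM_BPS:
        time.sleep(PENALTY_SECONDS)
    send(packet_bits)

sent = []
forward(1500 * 8, ("10.0.0.5", "203.0.113.9"), sent.append)
print(sent)  # [12000]
```

The delay works precisely because TCP congestion control interprets added latency as a slower path, which is why no cooperation from the sending server is required.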

For more information or a demo, feel free to contact us at http://www.netequalizer.com.

For further reading on this topic, check out this article: “Traffic Management, Vital in the Cloud”

Caching in the Cloud is Here


By Art Reisman, CTO APconnections (www.netequalizer.com)

I just got a note from a customer, a university, saying that their ISP is offering them 200 megabit Internet at a fixed price. The kicker is, they can also have access to a 1 gigabit feed specifically for YouTube at no extra cost. The only explanation for this is that their upstream ISP has an extensive in-network YouTube cache. I am just kicking myself for not seeing this coming!

I was well aware that many of the larger ISPs cache Netflix and YouTube on a large scale, but this is the first I have heard of a bandwidth provider offering a special reduced rate for YouTube to a customer downstream. I am just mad at myself for not predicting this type of offer before hearing about it from a third party.

As for the NetEqualizer, we have already adjusted our licensing so that this differential traffic comes through at no extra charge beyond your regular license level, in this case 200 megabits. So if, for example, you have a 350 megabit license but access to a 1 Gbps YouTube feed, you will pay for a 350 megabit license, not 1 Gbps. We will not charge you for the overage while accessing YouTube.

You Must Think Outside the Box to Bring QoS to the Cloud and Wireless Mesh Networks


By Art Reisman
CTO – http://www.netequalizer.com

About 10 years ago, we had this idea for QoS across an Internet link. It was simple and elegant, and worked like a charm. Ten years later, as services spread out over the Internet cloud, our original techniques are more important than ever. You cannot provide QoS using TOS (diffserv) techniques over any public or semi public Internet link, but using our techniques we have proven the impossible is possible.

Why TOS bits don’t work over the Internet.

The main reason is that setting TOS bits is only effective when you control all sides of a conversation on a link, and this is not possible on most Internet links (think cloud computing and wireless mesh networks). For standard TOS services to work, you must control all the equipment between the two end points. All it takes is one router in the path of a VoIP conversation to ignore a TOS bit, and its purpose becomes moot. Thus TOS bits for priority are really only practical inside a corporate LAN/WAN topology.
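For context, marking packets is the easy part; honoring the marks is what you cannot control. The snippet below uses only the Python standard library; 0xB8 is the conventional DSCP EF (expedited forwarding) value used for VoIP, and the exact behavior can vary by platform:

```python
import socket

# Ask the OS to mark this socket's outgoing packets with TOS 0xB8 (DSCP EF,
# the usual "expedited forwarding" marking for VoIP).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)

# The mark travels inside each IP header, but every router along the path is
# free to ignore or rewrite it, which is exactly what happens on the public
# Internet, and why the marking only helps on a network you fully control.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 (0xB8)
```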

Look at the root cause of poor quality services and you will find alternative solutions.

Most people don’t realize that the problem with congested VoIP, on any link, is that their VoIP packets are getting crowded out by larger downloads and things like recreational video (this is also true for any interactive cloud access). Often, the offending downloads are initiated by their own employees or users. A good behavior-based shaper will be able to favor VoIP streams over less essential data streams without any reliance on the sending party adhering to a TOS scheme.

How do we accomplish priority for VoIP?

We do this by monitoring all the streams on a link with one piece of equipment inserted anywhere in the congested link. In our current terminology, a stream consists of an IP (local) talking to another IP (remote Internet). When we see a large stream dominating the link, we step back and ask: is the link congested? Is that download crowding out other time-sensitive transactions such as VoIP? If the answer is yes to both questions, then we proactively take away some bandwidth from the offending stream. I know this sounds ridiculously simple and does not seem plausible, but it works. It works very well, it works with just one device in the link irrespective of any other complex network engineering, it works with minimal setup, and it works over MPLS links. I could go on and on; perhaps the only reason you have not heard of it is that it goes against the grain of what most vendors are selling, namely large orders for expensive high-end routers using TOS bits.

Related article: QoS over the Internet – is it possible?

Fast forward to our next release: how to provide QoS deep inside a cloud or mesh network where sending or receiving IP addresses are obfuscated.

Coming this winter, we plan to improve our QoS techniques so we can drill down inside Mesh and Cloud networks a bit better.

As the use of NAT distributed across mesh networks becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, one side effect is that our stream-based behavior shaping (QoS) is not as effective as it is when all IP addresses are visible (not masked behind a NAT/PAT device).

This is because we currently base our decisions on a pair of IPs talking to each other and do not consider the IP port numbers, yet sometimes, especially in a cloud or mesh network, services are trunked across a tunnel using the same IP. As these services get tunneled across a trunk, the data streams are bundled together over one common pair of IPs and then broken out by IP port so they can be routed to their final destinations. For example, in some cloud computing environments there is no way to differentiate a video stream within the tunnel from a smaller data access session; they can both be talking across the same set of IPs to the cloud. In a normal open network we could slow the video (or in some cases give it priority) by knowing the IP of the video server and the IP of the receiving user, but when the video server is buried within the tunnel, sharing the IPs of other services, our current equalizing (QoS) techniques become less effective.

Services within a tunnel, cloud, or mesh may be bundled using the same IPs, but they are often sorted out onto different ports at the ends of the tunnel. With our new release coming this winter, we will start to treat a stream as an IP and port number, allowing much greater resolution for QoS inside the Cloud and inside your mesh network. Stay tuned!
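The difference the new stream definition makes can be shown in a few lines. The field names and addresses below are illustrative, not the actual NetEqualizer data model:

```python
# Old model: a stream is identified by its IP pair alone.
def old_flow_key(pkt):
    return (pkt["src_ip"], pkt["dst_ip"])

# New model: ports join the key, so services tunneled over one IP pair
# (e.g. a video stream and a small data session) become separable.
def new_flow_key(pkt):
    return (pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"])

video = {"src_ip": "10.1.1.1", "src_port": 443,  "dst_ip": "198.51.100.7", "dst_port": 51002}
data  = {"src_ip": "10.1.1.1", "src_port": 8443, "dst_ip": "198.51.100.7", "dst_port": 51003}

print(old_flow_key(video) == old_flow_key(data))  # True: indistinguishable
print(new_flow_key(video) == new_flow_key(data))  # False: now separable
```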

Cloud Computing – Do You Have Enough Bandwidth? And a Few Other Things to Consider


The following is a list of things to consider when using a cloud-computing model.

Bandwidth: Is your link fast enough to support cloud computing?

We get asked this question all the time: What is the best-practice standard for bandwidth allocation?

Well, the answer depends on what you are computing.

– First, there is the application itself.  Is your application dynamically loading up modules every time you click on a new screen? If the application is designed correctly, it will be lightweight and come up quickly in your browser. Flash video screens certainly spruce up the experience, but I hate waiting for them. Make sure when you go to a cloud model that your application is adapted for limited bandwidth.

– Second, what type of transactions are you running? Are you running videos and large graphics or just data? Are you doing photo processing from Kodak? If so, you are not typical, and moving images up and down your link will be your constraining factor.

– Third, are you sharing general Internet access with your cloud link? In other words, is that guy on his lunch break watching a replay of royal wedding bloopers on YouTube interfering with your salesforce.com access?

The good news is (assuming you will be running a transactional cloud computing environment – e.g. accounting, sales database, basic email, attendance, medical records – without video clips or large data files), you most likely will not need additional Internet bandwidth. Obviously, we assume your business has reasonable Internet response times prior to transitioning to a cloud application.

Factoid: typically, for a business in an urban area, we would expect about 10 megabits of bandwidth for every 100 employees. If you fall below this 10/100 ratio, you can still take advantage of cloud computing, but you may need some form of QoS device to prevent recreational or non-essential Internet access from interfering with your cloud applications. See our article on contention ratios for more information.

Security: Can you trust your data in the cloud?

For the most part, chances are your cloud partner will have much better resources to deal with security than your enterprise, as this should be a primary function of their business. They should have an economy of scale – whereas most companies view security as a cost and are always juggling those costs against profits, cloud-computing providers will view security as an asset and invest more heavily.

We addressed security in detail in our article how secure is the cloud, but here are some of the main points to consider:

1) Transit security: moving data to and from your cloud provider. How are you going to make sure this is secure?
2) Storage: handling of your data at your cloud provider, is it secure once it gets there from an outside hacker?
3) Inside job: this is often overlooked, but can be a huge security risk. Who has access to your data within the provider network?

Evaluating security when choosing your provider.

You would assume the cloud company, whether it be Apple or Google (Gmail, Google Calendar), uses best practices to ensure security. My fear is that ultimately some major cloud provider will fail miserably, just like banks and brokerage firms. Over time, one or more of them will become complacent. Here is my checklist of what I would want in my trusted cloud-computing partner:

1) Do they have redundancy in their facilities and their access?
2) Do they screen their employees for criminal records and drug usage?
3) Are they willing to let you, or a truly independent auditor, into their facility?
4) How often do they back-up data and how do they test recovery?

Big Brother is watching.

This is not so much a traditional security threat, but if you are using a free service you are likely going to agree, somewhere in their fine print, to expose some of your information for marketing purposes. Ever wonder how those targeted ads appear that are relevant to the content of the mail you are reading?

Link reliability.

What happens if your link goes down, or your provider’s link goes down? How dependent are you? Make sure your business or application can handle unexpected downtime.

Editor’s note: unless otherwise stated, these tips assume you are using a third-party provider for resources and applications, and are not a large enterprise with a centralized service on your own network. For example, using QuickBooks over the Internet would be considered a cloud application (and one that I use extensively in our business); however, centralizing Microsoft Excel on a corporate server with thin terminal clients would not be cloud computing.

How Safe is The Cloud?


By Zack Sanders, NetEqualizer Guest Columnist

There is no question that cloud-computing infrastructures are the future for businesses of every size. The advantages they offer are plentiful:

  • Scalability – IT personnel used to have to scramble for hardware when business decisions dictated the need for more servers or storage. With cloud computing, an organization can quickly add and subtract capacity at will. New server instances are available within minutes of provisioning them.
  • Cost – For a lot of companies (especially new ones), the prospect of purchasing multiple $5,000 servers (and paying to have someone maintain them) is not very attractive. Cloud servers are very cheap, and you only pay for what you use. If you don’t require a lot of storage space, you can pay around 1 cent per hour per instance. That’s roughly $8/month. If you can’t incur that cost, you should probably reevaluate your business model.
  • Availability – In-house data centers experience routine outages. When you outsource your data center to the cloud, everything server related is in the hands of industry experts. This greatly increases quality of service and availability. That’s not to say outages don’t occur – they do – just not nearly as often or as unpredictably.

While it’s easy to see the benefits of cloud computing, it does have its potential pitfalls. The major questions that always accompany cloud computing discussions are:

  • “How does the security landscape change in the cloud?” – and
  • “What do I need to do to protect my data?”

Businesses and users are concerned about sending their sensitive data to a server that is not totally under their control – and they are correct to be wary. However, when taking proper precautions, cloud infrastructures can be just as safe – if not safer – than physical, in-house data centers. Here’s why:

  • They’re the best at what they do – Cloud computing vendors invest tons of money securing their physical servers that are hosting your virtual servers. They’ll be compliant with all major physical security guidelines, have up-to-date firewalls and patches, and have proper disaster recovery policies and redundant environments in place. From this standpoint, they’ll rank above almost any private company’s in-house data center.
  • They protect your data internally – Cloud providers have systems in place to prevent data leaks or access by third parties. Proper separation of duties should ensure that root users at the cloud provider couldn’t even penetrate your data.
  • They manage authentication and authorization effectively – Because logging and unique identification are central components to many compliance standards, cloud providers have strong identity management and logging solutions in place.

The above factors provide a lot of peace of mind, but with security it’s always important to layer approaches and be diligent. By layering, I mean that the most secure infrastructures have layers of security components such that, if one were to fail, the next would thwart an attack. This diligence is just as important for securing your external cloud infrastructure. No environment is ever immune to compromise. A key security aspect of the cloud is that your server is outside of your internal network, and thus your data must travel public connections to and from your external virtual machine. Companies with sensitive data are very worried about this. However, when taking the following security measures, your data can be just as safe in the cloud:

  • Secure the transmission of data – Set up SSL connections for sensitive data, especially logins and database connections.
  • Use keys for remote login – Utilize public/private keys, two-factor authentication, or other strong authentication technologies. Do not allow remote root login to your servers. Brute force bots hound remote root logins incessantly in cloud provider address spaces.
  • Encrypt sensitive data sent to the cloud – SSL will take care of the data’s integrity during transmission, but it should also be stored encrypted on the cloud server.
  • Review logs diligently – Use log analysis software ALONG WITH manual review. Automated technology combined with a manual review policy is a good example of layering.
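To make the layering idea concrete, here is a minimal sketch of the automated half of that last bullet: a Python script that flags IPs with repeated failed SSH logins in an auth log. The log format, field layout, and threshold are assumptions for illustration; real log analysis tools are far more thorough, and the manual review layer still matters.

```python
import re
from collections import Counter

# Pattern loosely matching OpenSSH "Failed password" entries (format is an assumption).
FAILED_LOGIN = re.compile(
    r"Failed password for (?:invalid user )?\S+ from (\d+\.\d+\.\d+\.\d+)"
)

def flag_suspicious_ips(log_lines, threshold=5):
    """Return a dict of IPs with at least `threshold` failed login attempts."""
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1
    return {ip: count for ip, count in failures.items() if count >= threshold}

# Example: a brute-force bot hammering one address, plus one stray failure.
sample_log = (
    ["May 1 10:00:0%d sshd[100]: Failed password for root from 203.0.113.9 port 22" % i
     for i in range(6)]
    + ["May 1 10:01:00 sshd[101]: Failed password for invalid user bob from 198.51.100.2 port 22"]
)
print(flag_suspicious_ips(sample_log))  # → {'203.0.113.9': 6}
```

An automated pass like this surfaces candidates quickly; the manual review policy then decides what is an actual attack versus a forgotten password.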

So, when taking proper precautions (precautions that you should already be taking for your in-house data center), the cloud is a great way to manage your infrastructure needs. Just be sure to select a provider that is reputable and make sure to read the SLA. If the hosting price is too good to be true, it probably is. You can’t take chances with your sensitive data.

About the author:

Zack Sanders is a Web Application Security Specialist with Fiddler on the Root (FOTR). FOTR provides web application security expertise to any business with an online presence. They specialize in ethical hacking and penetration testing as a service to expose potential vulnerabilities in web applications. The primary difference between the services FOTR offers and those of other firms is that they treat your website like an ACTUAL attacker would. They use a combination of hacking tools and savvy know-how to try and exploit your environment. Most security companies just run automated scans and deliver the results. FOTR is for executives who care about REAL security.

NetEqualizer News: July 2017


We hope you enjoy this month’s NetEqualizer Newsletter. Highlights include our 8.5 Release is Generally Available, along with an updated Quick Start Guide, our new DNS Traffic Tracking, and more!

 

July 2017

 

8.5 Release is Generally Available!
Greetings! Enjoy another issue of NetEqualizer News.

In our last newsletter, we mentioned that 8.5 development was complete. This month we are happy to announce that we have finished our testing phase (thanks test team!), and 8.5 is officially Generally Available! In this month’s newsletter we offer you detailed 8.5 Release Notes, preview some of the favorite 8.5 screens, and also provide the updated Quick Start Guide.

We will be updating the User Guide to 8.5 shortly; look to hear more in an upcoming newsletter.

We continue to work with you to solve some of your most pressing network problems – so if you have one that you would like to discuss with us, please call or email me anytime at 303.997.1300 x103 or art@apconnections.net.

And remember we are now on Twitter. You can follow us @NetEqualizer.

– Art Reisman (CTO)

In this Issue:

:: The 8.5 Release is Ready!

:: 8.5 Release Notes

:: 8.5 Release Quick Start Guide

:: Let NetEqualizer Be Your Bandwidth Referee

:: Best of Blog: Tracking Traffic by DNS

The 8.5 Release is Ready

8.5 Release is GA!

We are very happy to announce that our 8.5 Release is now Generally Available.

By far the most exciting and pleasant surprise feature is the reporting by DNS name. This essentially gives NetEqualizer reporting the ability to show detailed traffic by type without the need for expensive and unreliable Layer 7 filtering. The ramifications, and the history of why this is possible, make for an interesting story, and thus we have dedicated a full article to the subject – see here. This is just one of the many exciting features available in our 8.5 Release – we preview some of these below…

1) DNS Visibility (Hostname Reporting)
As mentioned above, we are very excited about the new ability to track and view traffic flows by hostnames. With the 8.5 Release, you can view hostnames in the Active Connections table:

And, you can track these hosts by adding them to Traffic History-> Manage Tracked Hosts, as shown below. This enables you to view data by hostname in our Traffic History graphs, as shown in the graph below. This is in addition to our current offerings of Traffic History by IP address, Pool, or VLAN:

2) Login and Logout
The 8.5 Release also has more security features added – including login/logout, session management, and HTTPS.

3) Color-coding in the NetEqualizer Log
We’ve also enhanced the ability to read the log file by adding color-coded markings to our log entries. In 8.5, this includes penalty and informational entries. Below we show how information-only entries are highlighted:

These markings will show new penalties, increased penalties, decreased penalties, and removed penalties, as well as informational entries about traffic that is going through your NetEqualizer (see above).

4) Pool-specific Equalizing (Pool Level Ratio and Hog Minimum)
In one of the most requested features we’ve heard from our users, the 8.5 Release adds the ability to fine-tune your Pool settings even further with pool-specific HOGMIN and RATIO parameters.

Feel free to use the network-wide defaults or create your own! The changes will be reflected in the Pool dashboard:

There are many more changes that we know you will be excited to see. If you are interested in the 8.5 Release, please contact us. The 8.5 Release is free to customers with valid NSS (NetEqualizer Software and Support) subscriptions.

8.5 Release Notes
You have read about some of our 8.5 features & screens above. If you are interested in learning more about 8.5, you can read our official 8.5 Release Notes, which, as always, are posted on our NetEqualizer Blog site (www.netequalizernews.com).

8.5 Release Quick Start Guide

Take a look at our new Quick Start Guide!

We are happy to share a preview of our updated Quick Start Guide, which now reflects our 8.5 Release!

Some of the key changes now discussed include:

1) our new Login/Log Out capability, highlighted on Page 4.
2) our enhanced Active Connections Table on Page 11, which now shows penalty status for each data flow.
3) our “visual” NetEqualizer Log on Page 12, which contains detailed color-coded information about penalty statuses.

As this is the Demo Version, it does not contain passwords. As always, we ship the full Quick Start Guide with each NetEqualizer unit, so that you will receive an updated version with passwords each time you purchase a NetEqualizer.

Click here or on the image at right to view the full Quick Start Guide.

Let NetEqualizer Be Your Bandwidth Referee

NetEqualizer works so well you won’t even notice it!

The best compliment you can give an umpire or referee in a sporting event is that you did not notice them. With that example in mind, we can safely say our configuration checking is doing its job.

It is rare for us to get Support calls regarding configuration mistakes. This invisibility and smoothness of operation is due to ongoing work behind the scenes to make sure that configuration changes make sense and to guide the user away from common mistakes. With every release we improve in this area! I’m sure our long-time customers from the very early days (circa 2005) would not recognize the GUI and ease of use if they made the jump all in one step.

As part of our 8.5 offering, our Support Team has enhanced their configuration validation capabilities. When you send in your diagnostic file, they can now automatically check your Traffic Limits and P2P Limits against a more complex set of validity rules, including unintended overlapping IP ranges.

If you are interested in taking advantage of this 8.5 feature, contact our Support Team to learn more.

Best Of Blog

Tracking Traffic by DNS

By Art Reisman

The video rental industry of the early ’80s was comprised of thousands of independent stores. Corner video rental shops were as numerous as today’s Starbucks. In the late 1990s, consolidation took over. Blockbuster, with its bright blue canopy lighting up the night sky, swallowed them up like doggy treats. All the small retail outlets were gone. Blockbuster had changed everything – their economy of scale and their chain store familiarity had overrun the small operators…

Photo of the Month
After the storm – summer vacation on the lake…

One of our staff members just returned from a lake vacation, which in my opinion is the best kind of vacation in summer. This shot was taken right after a rainstorm on the lake. The sun peeking through the clouds really highlighted the landscape and made the rainbow stunning.

APconnections, home of the NetEqualizer | (303) 997-1300 | Email | Website 

How I Survived a Ransomware Attack


By Art Reisman

About six months ago, I was trying to access a web site when I got the infamous message: “Your Flash Player is out-of-date”. I was provided with a link to a site to update my Adobe Flash Player. At the time, I thought nothing of updating my Flash Player, as this had happened perhaps 100 times already. That raises the question of why my perfectly fine and happy Adobe Flash Player constantly needs to be updated? Another story for another day.

In my haste, I clicked the link and promptly received the Adobe Flash update for my Mac and installed it. For all intents and purposes, that was the end of my Mac.  This thing just took it over, destroying it.  It would insidiously let me get started with my daily work and then within a few minutes I would receive a barrage of almost constant messages popping up telling me I had a virus and to call some number for help.  Classic Ransomware.  At the time I did not think Macs were vulnerable to this type of thing, as the only viruses I had contracted prior were on my Windows machines, which I tossed in the scrap pile several years ago for that very reason.

My solution to this dilemma was simply to re-load my Mac from scratch. I was up and running again in about one hour. A hassle, yes; the end of the world, no.

Now you might be wondering: what about all the data, programs, and files I store on my Mac? And to that I answer: what data and files? Everything I do is in the Cloud. Nothing is stored on my Mac, as I believe that there is no reason to store anything locally.

Gmail, Quickbooks, WordPress, photos, documents, and everything else that I use are all stored in the Cloud!

For backup purposes, I periodically e-mail a list of all my important Cloud links to myself.  Since they are stored in Gmail, they are always accessible and I can access them from any computer.  Data recovery amounts to nothing more than finding my most recent backup list e-mail and clicking on my Cloud links as needed.

How to Survive High Contention Ratios and Prevent Network Congestion


Is there a way to raise contention ratios without creating network congestion, thus allowing your network to service more users?

Yes there is.

First a little background on the terminology.

Congestion occurs when a shared network attempts to deliver more bandwidth to its users than is available. We typically think of an oversold/contended network with respect to ISPs and residential customers; but this condition also occurs within businesses, schools and any organization where more users are vying for bandwidth than is available.

The term contention ratio is used in the industry as a way of determining just how oversold your network is. A contention ratio is simply the number of users sharing an Internet trunk relative to the trunk’s size. We normally think of Internet trunks in units of megabits. For example, 10 users sharing a one megabit trunk would have a 10-to-1 contention ratio.

A decade ago, a 10-to-1 contention ratio was common. Today, bandwidth is much less expensive and average contention ratios have come down. Unfortunately, as bandwidth costs have dropped, pressure on trunks has risen, as today’s applications require increasing amounts of bandwidth. The most common congestion symptom is slow network response times.
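As a minimal sketch of the arithmetic (the trunk sizes below are made-up examples), the contention ratio works out like this:

```python
def contention_ratio(users, trunk_mbps):
    """Number of users sharing the trunk relative to its size in megabits."""
    return users / trunk_mbps

# The example above: 10 users sharing a one megabit trunk.
print(contention_ratio(10, 1))   # → 10.0  (a 10-to-1 contention ratio)

# Cheaper bandwidth lowers the ratio: the same 10 users on a 5 megabit trunk.
print(contention_ratio(10, 5))   # → 2.0   (a 2-to-1 contention ratio)
```

The ratio alone doesn't tell you whether users will notice congestion; that depends on how much bandwidth their applications actually demand at the same time, which is where the discussion below picks up.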
Now back to our original question…
Is there a way to raise contention ratios without creating congestion, thus allowing your network to service more users?
This is where a smart bandwidth controller can help. Back in the “old” days before encryption was king, most solutions involved classifying types of traffic and restricting less important traffic based on customer preferences. Classifying by type went away with encryption, which prevents traffic classifiers from seeing the specifics of what is traversing a network. A modern bandwidth controller uses dynamic rules to restrict traffic based on aberrant behavior. Although this might seem less intuitive than specifically restricting traffic by type, it turns out to be just as reliable, not to mention simpler and more cost-effective to implement.
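As a hedged, much-simplified sketch of the behavior-based idea (this is not NetEqualizer’s actual algorithm; the flow names, threshold, and structure are illustrative assumptions): when the trunk is congested, single out whichever flows are consuming a disproportionate share, without inspecting the traffic payload at all.

```python
def pick_flows_to_penalize(flow_rates_mbps, trunk_mbps, hog_ratio=0.25):
    """When total demand exceeds the trunk, flag flows using more than
    hog_ratio of the trunk's capacity (behavior-based, no payload inspection)."""
    total_demand = sum(flow_rates_mbps.values())
    if total_demand <= trunk_mbps:
        return []  # no congestion: leave everyone alone
    threshold = hog_ratio * trunk_mbps
    return [flow for flow, rate in flow_rates_mbps.items() if rate > threshold]

# Two large transfers dwarf interactive traffic on a congested 10 Mbps trunk.
flows = {"video-stream": 6.0, "web-browsing": 0.5, "ssh-session": 0.1, "big-download": 5.0}
print(pick_flows_to_penalize(flows, trunk_mbps=10))  # → ['video-stream', 'big-download']
```

Note that encryption is irrelevant here: the decision is based purely on observed rates during congestion, so the interactive flows stay responsive regardless of what the heavy flows contain.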
We have seen results where a customer can increase their user base by as much as 50 percent and still have decent response times for interactive cloud applications.
To learn more, contact us; our engineering team is more than happy to go over your specific situation to see if we can help you.