Economics of the Internet Cloud Part 1



By Art Reisman

CTO, APconnections

Why is it that you need to load up all of your applications and carry them around with you on your personal computing device? From iBird Pro to your favorite weather application, the standard operating model assumes you purchase these things and then affix them to your medium of preference.

Essentially you are tethered to your personal device.

Yes, there are business reasons why a company like Apple would prefer this model. They own the hardware and control the applications, so it is in their interest to keep you walled off and loyal to your investment in Apple products.

But there is another, more insidious economic restriction that forces this model upon us: the lag in speed and availability of wireless bandwidth. If you had a wireless connection to the cloud that was low-cost and offered a minimum of 300 megabits of access without restriction, you could instantly fire up any application in existence without ever pre-downloading it. Your personal computing device would not store anything. This is the world of the future that I referenced in my previous article, Will Cloud Computing Obsolete Your Personal Device?

The X factor in my prediction is when we will have 300 megabit wireless bandwidth speeds across the globe without restrictions. The assumption is that bandwidth speeds and prices will follow a curve similar to the improvements in computing speeds, a Moore's Law for bandwidth if you will.

It will happen, but the question is how fast: 10 years, 20 years, 50 years? And when it does, vendors and consumers will quickly learn it is much more convenient to keep everything in the cloud. No more apps tied to your device. People will own some very cheap cloud space for all their “stuff”, and the device on which it runs will become less and less important.

Bandwidth speed increases in wireless are running against some pretty severe headwinds, which I will cover in my next article. Stay tuned.

Will Cloud Computing Obsolete Your Personal Device?



By Art Reisman

CTO, APconnections

Twenty-two years ago, all the buzz amongst the engineers in the AT&T Bell Labs offices was a technology called “thin client”. The term “cloud” had not yet been coined, but the seeds had been sown. We went to our project management as we always did when we had a good idea, and as usual, being the dinosaurs that they were, they could not even grasp the concept; their brains were three sizes too small, and so the idea was tabled.

And then came the Googles and the Apples of the world, the disrupters. As Bell Labs reached old age and wallowed in its death throes, I watched from afar as cloud computing took shape.

Today cloud computing is changing the face of the computer and networking world. From my early-90s excitement, it took over 10 agonizing years for the first cotyledons to appear above the soil. And even today, 20 years later, cloud computing is in its adolescence; the plants are essentially teenagers.

Historians probably won't even take note of those 10 lost years; the transition will be footnoted as if it were instantaneous. For those of us who waited in anticipation during that incubation period, the time was real. It lasted over a quarter of our professional working lives.

Today, cloud computing is having a ripple effect on other technologies that were once assumed sacred. For example, customer premise networks and all the associated hardware are getting flushed down the toilet. Businesses are simplifying their on-premise networks and will continue to do so. This is not good news for Cisco, or for the desktop PC manufacturers, chip makers, and on down the line.

What to expect 20 years from now? Okay, here goes: I predict that the “personal” computing devices that we know and love might fall into decline in the next 25 years. Say goodbye to “your” iPad or “your” iPhone.

That’s not to say you won’t have a device at your disposal for personal use, but it will only be tied to you for the time period in which you are using it. You walk into the store, and alongside the shopping carts there is a stack of computing devices; you pick one up, touch your thumb to it, and instantly it has all your data.

Imagine if personal computing devices were so ubiquitous in society that you did not have to own one. How freeing would that be? You would not have to worry about forgetting one, or taking it through security. Wherever you happened to be, in a hotel or a library, you could just grab one of the many complimentary devices stacked at the door, touch your thumb to the screen, and you are ready to go: e-mail, pictures, games, all your personal settings in place.

Yes, you would pay for the content and the services, through the nose most likely, but the hardware would be an irrelevant commodity.

Still skeptical? I’ll cover the economics of how this transition will happen in my next post. Stay tuned.

Five Requirements for QoS and Your Cloud Computing


I received a call today from one of the largest Tier 1 providers in the world. The salesperson on the other end was lamenting his inability to sell cloud services to his customers. His service offerings were hot, but the customers’ Internet connections were not. Until his customers resolved their congestion problems, they were in a holding pattern for new cloud services.

Before I finish my story, I promised a list of what a Next Generation traffic controller can do, so without further ado, here it is.

  1. Next Generation Bandwidth controllers must be able to mitigate traffic flows originating from the Internet such that important Cloud Applications get priority.
  2. Next Generation Bandwidth controllers must NOT rely on Layer 7 DPI technology to identify traffic. (too much encryption and tunneling today for this to be viable)
  3. Next Generation Bandwidth controllers must hit a price range of $5k to $10k USD  for medium to large businesses.
  4. Next Generation Traffic controllers must not require babysitting and adjustments from the IT staff to remain effective.
  5. A Next Generation traffic controller should adopt a Heuristics-based decision model (like the one used in the NetEqualizer).

As for those businesses mentioned by the sales rep, many of them had run into bottlenecks when they moved to the cloud. The bottlenecks were due to iOS updates and recreational “crap” killing the cloud application traffic on their shared Internet trunk.

Their original assumption was they could use the QoS on their routers to mitigate traffic. After all, that worked great when all they had between them and their remote business logic was a nailed up MPLS network. Because it was a private corporate link, they had QoS devices on both ends of the link and no problems with recreational congestion.

Moving to the Cloud was a wake-up call! Think about it: when you go to the cloud you only control one end of the link. This means that your router-based QoS is no longer effective, and incoming traffic will crush you if you do not do something different.

The happy ending is that we were able to help our friend at BT Telecom by mitigating his customers’ bottlenecks. Contact us if you are interested in more details.

Six Ways to Save With Cloud Computing


I was just doing some research on the cost savings of cloud computing, and clearly it is shaking up the IT industry. The five points in this Webroot article, “Five Financial Benefits of Moving to the Cloud”, really hit the nail on the head. The major points are listed below.

#1. Fully utilized hardware

#2. Lower power costs

#3. Lower people costs

#4. Zero capital costs

#5. Resilience without redundancy

Not listed in the article was a 6th way that you can save money in the cloud. The following comes from conversations I have had with a few of our customers that have moved to the Cloud.

#6.  Lower network costs

Since your business services are in the cloud, you can ditch all of those expensive MPLS links that you use to privately tie your offices to your back-end systems, and replace them with lower-cost commercial Internet links. You do not really need more bandwidth, just better bandwidth performance.  The commodity Internet links are likely good enough, but… when you move to the Cloud you will need a smart bandwidth shaper.

Your link to the Internet becomes even more critical when you go to the Cloud. But that does not mean bigger and more expensive pipes. Cloud applications are very lean, and you do not need a big pipe to support them. You just need to make sure recreational traffic does not cut into your business application traffic. Here is my shameless plug: the NetEqualizer is perfectly designed to separate out the business traffic from the recreational. Licensing is simple, and surprisingly affordable.

The NetEqualizer is Cloud-Ready.  If you are moving your business applications to the Cloud, contact us to see if we can help ease congestion for your traffic going both to and from the Cloud.

How Much Bandwidth do you Need for Cloud Services?


The good news is most cloud applications have a very small Internet footprint. The bad news is, if left unchecked, all that recreational video will suck the life out of your Internet connection before you know it.

The screen shot below is from a live snapshot depicting bandwidth utilization on a business network.

That top number, circled in red, is a YouTube video, and it is consuming about 3 megabits of bandwidth.  Directly underneath that are a couple of cloud service applications from Amazon, and they are consuming 1/10 of what the YouTube video demolishes.

Over the past few years I have analyzed quite a few customer systems, and I consistently see cloud-based business applications consuming  a small fraction of what video and software updates require.

For most businesses,  if they never allowed a video or software update to cross their network, they could easily handle all the cloud-based business applications without worry of running out of room on their trunks. Remember, video and updates use ten times what cloud applications consume. The savings in bandwidth utilization would be so great that  they could cut their contracted bandwidth allocation to a fraction of what they currently have.

Coming back to earth, I don’t think this plan is practical. We live in a video and software update driven world.

If you can’t outright block video and updates, the next best thing is to give them a lower priority when there is contention on the line. The natural solution most IT administrators gravitate to is to identify traffic by type. Although intuitively appealing, typecasting traffic on the fly has some major drawbacks. The biggest is that everything now comes across as encrypted traffic, and you really can’t expect to identify traffic once it is encrypted.

The good news is that you can reliably guess that your smaller-footprint traffic is Cloud or Interactive (important), and that those large 3 megabit+ streams should get a lower priority (not as important). For more on the subject of how to set your cloud priority, we recommend reading: QoS and Your Cloud Applications.
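That guess can be sketched in a few lines of code. This is purely illustrative, not NetEqualizer internals; the threshold and the flow data are assumptions I made up for the example. The point is that streams are judged only by their measured footprint, with no packet inspection at all.

```python
# Classify streams by measured footprint alone: no DPI, no ToS bits.
LARGE_STREAM_BPS = 3_000_000  # ~3 megabits: video/update territory (assumed cutoff)

def classify(streams):
    """streams maps (local_ip, remote_ip) -> observed bits per second."""
    decisions = {}
    for flow, bps in streams.items():
        # Small footprint: assume Cloud/Interactive, keep at full priority.
        # Large footprint: assume video or bulk download, lower its priority.
        decisions[flow] = "lower" if bps >= LARGE_STREAM_BPS else "priority"
    return decisions

observed = {
    ("10.0.0.5", "203.0.113.7"): 3_200_000,  # a YouTube-sized stream
    ("10.0.0.9", "203.0.113.8"): 300_000,    # a cloud-app-sized stream
}
print(classify(observed))
```

Even a crude cutoff like this separates the 3 megabit video from the cloud application consuming a tenth of that.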


Capacity Planning for Cloud Applications


The main factors to consider when capacity planning your Internet Link for cloud applications are:

1) How much bandwidth do your cloud applications actually need?

Typical cloud applications require about half a megabit or less. There are exceptions to this rule, but for the most part a good cloud application design does not involve large transfers of data. QuickBooks, Salesforce, Gmail, and just about any cloud-based database will be under the half-megabit guideline. The chart below really brings to light the difference between your typical, interactive Cloud Application and the types of applications that will really eat up your data link.


Bandwidth Usage for Cloud Based Applications compared to Big Hitters

2) What types of traffic will be sharing your link with the cloud?

The big hitters are typically YouTube and Netflix.  They can consume up to 4 megabits or higher per connection.  Also, system updates for Windows and iOS, as well as internal backups to cloud storage, can consume 20 megabits or more.  Another big hitter can be typical Web Portal sites, such as CNN, Yahoo, and Fox News. A few years ago these sites had a small footprint as they consisted of static images and text.  Today, many of these sites automatically fire up video feeds, which greatly increase their footprint.

3) What is the cost of your Internet Bandwidth, and do you have enough?

Obviously, if there were no limit to the size of your Internet pipe or the required infrastructure to handle it, there would be no concerns or need for capacity planning. To be safe, a good rule of thumb as of 2016 is that you need about 100 megabits per 20 users. Less than that, and you will need to be willing to scale back some of those larger bandwidth-consuming applications, which brings us to point 4.

4) Are you willing to give a lower priority to recreational traffic in order to ensure your critical cloud applications do not suffer?

Hopefully you work in an organization where compromise can be explained, and the easiest compromise to make is to limit non-essential video and recreational traffic. And those iOS updates? Typically a good bandwidth control solution will detect and slow them down, so they essentially run in the background with a smaller footprint over a longer period of time.
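The rule of thumb from point 3, about 100 megabits per 20 users, is easy to sanity-check with a few lines of arithmetic. A quick sketch (the function is mine, not a standard planning tool):

```python
# The point-3 rule of thumb: ~100 megabits per 20 users (about 5 per user).
def recommended_mbps(users):
    return users / 20 * 100

# A 50-person office would want roughly 250 megabits under this rule.
print(recommended_mbps(50))
```

If your contracted link falls well below this number, that is your cue to start deprioritizing the recreational traffic described in point 4.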

Bandwidth Control in the Cloud


The good news about cloud-based applications is that in order to be successful, they must be fairly lightweight in terms of their bandwidth footprint. Most cloud-based designers create applications with a fairly small data footprint. A poorly designed cloud application that required large amounts of data transfer would not get good reviews and would likely fizzle out.

The bad news is that cloud applications must share your Internet link with recreational traffic, and recreational traffic is often bandwidth intensive, with no intention of playing nice when sharing a link.

For businesses, a legitimate concern is having their critical cloud-based applications starved for bandwidth. When this happens they can perform poorly or lock up, creating a serious drop in productivity.

 

If you suspect you have bandwidth contention impacting the performance of a critical cloud application, the best place to start  your investigation would be with a bandwidth controller/monitor that can show you the basic footprint of how much bandwidth an application is using.

Below is a quick screen shot that I often use from our NetEqualizer when troubleshooting a customer link. It gives me a nice snapshot of utilization. I can sort the heaviest users by their bandwidth footprint, and I can then click on a convenient DNS lookup tab to see who they are.


In my next post I will detail some typical bandwidth planning metrics for going to the cloud. Stay tuned.

QoS and Your Cloud Applications, the Must Know Facts


When you make the switch to the cloud, you will likely discover that the standard QoS techniques, from the days when services were hosted within your enterprise, will not work on traffic coming in from the public Internet.  Below we detail why, and offer some unique alternatives to traditional router-based QoS. Read on to learn about new QoS techniques designed specifically for the Cloud.

Any QoS designed for the Cloud must address incoming traffic not originating on your Network

Most Internet congestion is caused by incoming traffic: downloads of data not originating at your facility. Unlike in the pre-cloud days, your local router cannot give priority to this data because it has no control over the sending server's stream. Yes, you can still control the priority of outgoing data, but if recreational traffic coming into your network arrives at the same priority as, let's say, a cloud-based VoIP call, then when your download link is full, all traffic will suffer.

Likely No Help from your service provider

Even if you asked your cloud hosting service to mark their traffic as priority, your public Internet provider likely will not treat ToS bits with any form of priority. Hence, all data coming from the Internet into your router from the outside will hit with equal priority. During peak traffic times, important cloud traffic will not be able to punch through the morass.

Is there any way to give priority to incoming cloud traffic?

Is QoS over the Internet for Cloud traffic possible? The answer is yes, QoS on an Internet link is possible. We have spent the better part of seven years practicing this art form and while it is not rocket science, it does require a philosophical shift in thinking to get your arms around it.

How to give priority to Cloud Traffic

We call it “equalizing,” or behavior-based shaping, and it involves monitoring incoming and outgoing streams on your Internet link. Priority or QoS is nothing more than favoring one stream’s packets over another stream’s. You can accomplish priority QoS on incoming streams by queuing (slowing down) one stream over another without relying on ToS bits.

How do we determine which “streams” to slow down?

It turns out in the real world there are three types of applications that matter:

1) Cloud-based business applications. Typically things like databases, accounting, Salesforce, educational tools, and VoIP services.

2) Recreational traffic such as Netflix, YouTube

3) Downloads and updates

The kicker that we discovered, and that almost always holds true, is that Cloud-based applications will use a fraction of the bandwidth of recreational video traffic and downloads. If you can simply spot these non-essential data hogs by size and slow them down a bit, there will be plenty of room for your Cloud applications during peak periods.

How do we ensure that cloud traffic has priority if we can’t rely on QoS bits?

To be honest, we stumbled upon this technique about 12 years ago. We keep track of all the streams coming into your network with what can best be described as a sniffing device. When we see a large stream of data, we know from experience that it can't be cloud traffic, as the stream is too large. Cloud applications by design are rarely large streams, because if they were, the cloud application would likely be sluggish and not commercially viable. With our sniffing device, the NetEqualizer, we are able to slow down the non-cloud connections by adding in a tiny bit of latency, while at the same time allowing the cloud application streams to pass through. The interesting result is that the sending servers (the same ones that ignore ToS bits) will actually sense that their traffic is being delayed in transport, and they will back off their sending speeds on their own.

For more information or a demo, feel free to contact us at http://www.netequalizer.com.

For further reading on this topic, check out this article: “Traffic Management, Vital in the Cloud”

Caching in the Cloud is Here


By Art Reisman, CTO APconnections (www.netequalizer.com)

I just got a note from a customer, a university, that their ISP is offering them 200 megabit Internet at a fixed price. The kicker is, they can also have access to a 1 gigabit feed specifically for YouTube at no extra cost. The only explanation for this is that their upstream ISP has an extensive in-network YouTube cache. I am just kicking myself for not seeing this coming!

I was well aware that many of the larger ISPs cached Netflix and YouTube on a large scale, but this is the first I have heard of a bandwidth provider offering a special reduced rate for YouTube to a customer downstream. I am just mad at myself for not predicting this type of offer and for hearing about it from a third party.

As for the NetEqualizer, we have already made adjustments in our licensing so that this differential traffic comes through at no extra charge beyond your regular license level, in this case 200 megabits. So if, for example, you have a 350 megabit license but access to a 1 Gbps YouTube feed, you will pay for a 350 megabit license, not 1 Gbps. We will not charge you for the overage while accessing YouTube.

You Must Think Outside the Box to Bring QoS to the Cloud and Wireless Mesh Networks


By Art Reisman
CTO – http://www.netequalizer.com

About 10 years ago, we had this idea for QoS across an Internet link. It was simple and elegant, and it worked like a charm. Ten years later, as services spread out over the Internet cloud, our original techniques are more important than ever. You cannot provide QoS using TOS (DiffServ) techniques over any public or semi-public Internet link, but using our techniques we have proven the impossible is possible.

Why TOS bits don’t work over the Internet.

The main reason is that setting TOS bits is only effective when you control all sides of a conversation on a link, and this is not possible on most Internet links (think cloud computing and wireless mesh networks). For standard TOS services to work, you must control all the equipment between the two end points. All it takes is one router in the path of a VoIP conversation ignoring a TOS bit, and its purpose becomes obsolete. Thus TOS bits for priority are really only practical inside a corporate LAN/WAN topology.

Look at the root cause of poor quality services and you will find alternative solutions.

Most people don’t realize that the problem with congested VoIP, on any link, is that their VoIP packets are getting crowded out by larger downloads and things like recreational video (this is also true for any interactive cloud access congestion). Often, the offending downloads are initiated by their own employees or users. A good behavior-based shaper will be able to favor VoIP streams over less essential data streams without any reliance on the sending party adhering to a TOS scheme.

How do we accomplish priority for VoIP?

We do this by monitoring all the streams on a link with one piece of equipment inserted anywhere in the congested link. In our current terminology, a stream consists of a local IP talking to a remote Internet IP. When we see a large stream dominating the link, we step back and ask: is the link congested? Is that download crowding out other time-sensitive transactions such as VoIP? If the answer to both questions is yes, then we proactively take away some bandwidth from the offending stream. I know this sounds ridiculously simple and does not seem plausible, but it works. It works very well, and it works with just one device in the link, irrespective of any other complex network engineering. It works with minimal setup. It works over MPLS links. I could go on and on; the only reason you have not heard of it is perhaps that it goes against the grain of what most vendors are selling, namely large orders for expensive high-end routers using TOS bits.
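As a rough illustration, the two questions above reduce to a few lines of code. This is a toy model under thresholds I have assumed for the example, not the NetEqualizer implementation; real shaping acts per-packet on a live link.

```python
# Toy model of the equalizing decision: congested link + large stream -> delay.
LINK_CAPACITY_BPS = 100_000_000   # size of the Internet link (assumed)
CONGESTION_RATIO = 0.85           # treat the link as congested above 85% use (assumed)
LARGE_STREAM_BPS = 2_000_000      # a stream this big is not lean VoIP/cloud traffic (assumed)

def penalty_delay_ms(stream_bps, total_link_bps):
    """Extra queuing latency, in ms, applied to one stream's packets.

    Question 1: is the link congested?
    Question 2: is this stream large enough to crowd out VoIP?
    Only when both answers are yes do we slow the stream; the sending
    server senses the added delay and backs off on its own.
    """
    link_congested = total_link_bps > LINK_CAPACITY_BPS * CONGESTION_RATIO
    stream_is_hog = stream_bps > LARGE_STREAM_BPS
    return 20 if (link_congested and stream_is_hog) else 0
```

Note that when the link is uncongested, even the hogs run untouched; the penalty exists only to protect time-sensitive streams during peak periods.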

Related article QoS over the Internet – is it possible?

Fast forward to our next release: how to provide QoS deep inside a cloud or mesh network where sending or receiving IP addresses are obfuscated.

Coming this winter we plan to improve upon our QoS techniques so we can drill down inside of Mesh and Cloud networks a bit better.

As the use of NAT distributed across mesh networks becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, one side effect has been that our stream-based behavior shaping (QoS) is not as effective as it is when all IP addresses are visible (not masked behind a NAT/PAT device).

This is because currently we base our decision on a pair of IPs talking to each other, and we do not consider the IP port numbers; sometimes, especially in a cloud or mesh network, services are trunked across a tunnel using the same IPs. As these services get tunneled across a trunk, the data streams are bundled together using one common pair of IPs, and the streams are then broken out based on IP ports so they can be routed to their final destination. For example, in some cloud computing environments there is no way to differentiate a video stream coming from the cloud within the tunnel from a smaller data access session; they can sometimes both be talking across the same set of IPs to the cloud. In a normal open network we could slow the video (or in some cases give it priority) by knowing the IP of the video server and the IP of the receiving user, but when the video server is buried within the tunnel, sharing the IPs of other services, our current equalizing (QoS) techniques become less effective.

Services within a tunnel, cloud, or mesh may be bundled using the same IPs, but they are often sorted out on different ports at the ends of the tunnel. With our new release coming this winter, we will start to look at streams as IP and port number, allowing much greater resolution for QoS inside the Cloud and inside your mesh network. Stay tuned!
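A small sketch of that keying change (the addresses, ports, and field names are made up for illustration): with only an IP pair, two services tunneled between the same endpoints collapse into one stream; adding the port numbers makes each service visible on its own.

```python
# Old key: one stream per IP pair. New key: one stream per IP:port pair.
def key_by_ip_pair(pkt):
    return (pkt["src_ip"], pkt["dst_ip"])

def key_by_ip_and_port(pkt):
    return (pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"])

# Two services sharing one tunnel between the same endpoints.
video = {"src_ip": "10.0.0.5", "src_port": 40001, "dst_ip": "203.0.113.9", "dst_port": 443}
data  = {"src_ip": "10.0.0.5", "src_port": 40002, "dst_ip": "203.0.113.9", "dst_port": 443}

print(key_by_ip_pair(video) == key_by_ip_pair(data))          # True: indistinguishable
print(key_by_ip_and_port(video) == key_by_ip_and_port(data))  # False: now separable
```

With the wider key, the video stream inside the tunnel can be slowed down without touching the small data session that shares its IPs.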

Cloud Computing – Do You Have Enough Bandwidth? And a Few Other Things to Consider


The following is a list of things to consider when using a cloud-computing model.

Bandwidth: Is your link fast enough to support cloud computing?

We get asked this question all the time: What is the best-practice standard for bandwidth allocation?

Well, the answer depends on what you are computing.

– First, there is the application itself.  Is your application dynamically loading up modules every time you click on a new screen? If the application is designed correctly, it will be lightweight and come up quickly in your browser. Flash video screens certainly spruce up the experience, but I hate waiting for them. Make sure when you go to a cloud model that your application is adapted for limited bandwidth.

– Second, what type of transactions are you running? Are you running videos and large graphics or just data? Are you doing photo processing from Kodak? If so, you are not typical, and moving images up and down your link will be your constraining factor.

– Third, are you sharing general Internet access with your cloud link? In other words, is that guy on his lunch break watching a replay of royal wedding bloopers on YouTube interfering with your salesforce.com access?

The good news is (assuming you will be running a transactional cloud computing environment – e.g. accounting, sales database, basic email, attendance, medical records – without video clips or large data files), you most likely will not need additional Internet bandwidth. Obviously, we assume your business has reasonable Internet response times prior to transitioning to a cloud application.

Factoid: Typically, for a business in an urban area, we would expect about 10 megabits of bandwidth for every 100 employees. If you fall below this ratio, 10/100, you can still take advantage of cloud computing but you may need  some form of QoS device to prevent the recreational or non-essential Internet access from interfering with your cloud applications.  See our article on contention ratio for more information.
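To make the factoid concrete, the 10/100 ratio is a two-line check (a sketch; the function name is mine, not a standard tool):

```python
# The urban-business factoid above: ~10 megabits per 100 employees.
def meets_urban_ratio(link_mbps, employees):
    return link_mbps / employees >= 10 / 100

print(meets_urban_ratio(10, 100))  # True: right at the 10/100 ratio
print(meets_urban_ratio(5, 100))   # False: a QoS device may be needed
```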

Security: Can you trust your data in the cloud?

For the most part, chances are your cloud partner will have much better resources to deal with security than your enterprise, as this should be a primary function of their business. They should have an economy of scale – whereas most companies view security as a cost and are always juggling those costs against profits, cloud-computing providers will view security as an asset and invest more heavily.

We addressed security in detail in our article how secure is the cloud, but here are some of the main points to consider:

1) Transit security: moving data to and from your cloud provider. How are you going to make sure this is secure?
2) Storage: the handling of your data at your cloud provider. Is it secure from an outside hacker once it gets there?
3) Inside job: this is often overlooked, but can be a huge security risk. Who has access to your data within the provider network?

Evaluating security when choosing your provider.

You would assume the cloud company, whether it be Apple or Google (Gmail, Google Calendar), uses some best practices to ensure security. My fear is that ultimately some major cloud provider will fail miserably just like banks and brokerage firms. Over time, one or more of them will become complacent. Here is my check list on what I would want in my trusted cloud computing partner:

1) Do they have redundancy in their facilities and their access?
2) Do they screen their employees for criminal records and drug usage?
3) Are they willing to let you, or a truly independent auditor, into their facility?
4) How often do they back-up data and how do they test recovery?

Big Brother is watching.

This is not so much a traditional security threat, but if you are using a free service you are likely going to agree, somewhere in their fine print, to expose some of your information for marketing purposes. Ever wonder how those targeted ads appear that are relevant to the content of the mail you are reading?

Link reliability.

What happens if your link goes down, or your provider's link goes down? How dependent are you? Make sure your business or application can handle unexpected downtime.

Editor's note: unless otherwise stated, these tips assume you are using a third-party provider for resources and applications, and are not a large enterprise with a centralized service on your own network. For example, using QuickBooks over the Internet would be considered a cloud application (and one that I use extensively in our business); however, centralizing Microsoft Excel on a corporate server with thin terminal clients would not be cloud computing.

How Safe is The Cloud?


By Zack Sanders, NetEqualizer Guest Columnist

There is no question that cloud-computing infrastructures are the future for businesses of every size. The advantages they offer are plentiful:

  • Scalability – IT personnel used to have to scramble for hardware when business decisions dictated the need for more servers or storage. With cloud computing, an organization can quickly add and subtract capacity at will. New server instances are available within minutes of provisioning them.
  • Cost – For a lot of companies (especially new ones), the prospect of purchasing multiple $5,000 servers (and paying someone to maintain them) is not very attractive. Cloud servers are very cheap, and you only pay for what you use. If you don’t require a lot of storage space, you can pay around 1 cent per hour per instance. That’s roughly $8/month. If you can’t incur that cost, you should probably reevaluate your business model.
  • Availability – In-house data centers experience routine outages. When you outsource your data center to the cloud, everything server related is in the hands of industry experts. This greatly increases quality of service and availability. That’s not to say outages don’t occur – they do – just not nearly as often or as unpredictably.

While it’s easy to see the benefits of cloud computing, it does have its potential pitfalls. The major questions that always accompany cloud computing discussions are:

  • “How does the security landscape change in the cloud?” – and
  • “What do I need to do to protect my data?”

Businesses and users are concerned about sending their sensitive data to a server that is not totally under their control – and they are correct to be wary. However, when taking proper precautions, cloud infrastructures can be just as safe – if not safer – than physical, in-house data centers. Here’s why:

  • They’re the best at what they do – Cloud computing vendors invest tons of money securing their physical servers that are hosting your virtual servers. They’ll be compliant with all major physical security guidelines, have up-to-date firewalls and patches, and have proper disaster recovery policies and redundant environments in place. From this standpoint, they’ll rank above almost any private company’s in-house data center.
  • They protect your data internally – Cloud providers have systems in place to prevent data leaks or access by third parties. Proper separation of duties should ensure that even root users at the cloud provider cannot access your data.
  • They manage authentication and authorization effectively – Because logging and unique identification are central components to many compliance standards, cloud providers have strong identity management and logging solutions in place.

The above factors provide a lot of peace of mind, but with security it’s always important to layer approaches and be diligent. By layering, I mean that the most secure infrastructures stack security components so that, if one fails, the next thwarts the attack. This diligence is just as important for securing your external cloud infrastructure. No environment is ever immune to compromise. A key security aspect of the cloud is that your server is outside of your internal network, and thus your data must travel public connections to and from your external virtual machine. Companies with sensitive data are very worried about this. However, when taking the following security measures, your data can be just as safe in the cloud:

  • Secure the transmission of data – Set up SSL connections for sensitive data, especially logins and database connections.
  • Use keys for remote login – Utilize public/private keys, two-factor authentication, or other strong authentication technologies. Do not allow remote root login to your servers. Brute force bots hound remote root logins incessantly in cloud provider address spaces.
  • Encrypt sensitive data sent to the cloud – SSL will take care of the data’s integrity during transmission, but it should also be stored encrypted on the cloud server.
  • Review logs diligently – Use log analysis software ALONG WITH manual review. Automated technology combined with a manual review policy is a good example of layering.
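To make the layering idea concrete, here is a minimal sketch of the automated half of log review: a script that flags source IPs repeatedly failing root logins. The log lines, regex, and threshold below are hypothetical (real auth log formats vary by system), and this is meant to sit alongside manual review, not replace it.

```python
import re
from collections import Counter

# Hypothetical sshd-style log lines; real auth.log formats vary by distro.
LOG_LINES = [
    "Failed password for root from 203.0.113.7 port 52314 ssh2",
    "Failed password for root from 203.0.113.7 port 52315 ssh2",
    "Failed password for root from 203.0.113.7 port 52316 ssh2",
    "Accepted publickey for deploy from 198.51.100.4 port 40022 ssh2",
]

FAILED_ROOT = re.compile(r"Failed password for root from (\S+)")

def flag_brute_force(lines, threshold=3):
    """Return source IPs with at least `threshold` failed root logins."""
    hits = Counter()
    for line in lines:
        match = FAILED_ROOT.search(line)
        if match:
            hits[match.group(1)] += 1
    return [ip for ip, count in hits.items() if count >= threshold]

print(flag_brute_force(LOG_LINES))  # ['203.0.113.7']
```

An alert from a script like this is exactly the kind of signal that should then trigger a manual look at the full logs.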

So, when taking proper precautions (precautions that you should already be taking for your in-house data center), the cloud is a great way to manage your infrastructure needs. Just be sure to select a provider that is reputable and make sure to read the SLA. If the hosting price is too good to be true, it probably is. You can’t take chances with your sensitive data.

About the author:

Zack Sanders is a Web Application Security Specialist with Fiddler on the Root (FOTR). FOTR provides web application security expertise to any business with an online presence. They specialize in ethical hacking and penetration testing as a service to expose potential vulnerabilities in web applications. The primary difference between the services FOTR offers and those of other firms is that they treat your website like an ACTUAL attacker would. They use a combination of hacking tools and savvy know-how to try and exploit your environment. Most security companies just run automated scans and deliver the results. FOTR is for executives who care about REAL security.

How to Survive High Contention Ratios and Prevent Network Congestion


Is there a way to raise contention ratios without creating network congestion, thus allowing your network to service more users?

Yes there is.

First a little background on the terminology.

Congestion occurs when a shared network attempts to deliver more bandwidth to its users than is available. We typically think of an oversold/contended network with respect to ISPs and residential customers; but this condition also occurs within businesses, schools and any organization where more users are vying for bandwidth than is available.

The term contention ratio is used in the industry as a way of determining just how oversold your network is. A contention ratio compares the bandwidth users could collectively demand to the size of the Internet trunk they share. We normally think of Internet trunks in units of megabits. For example, 10 users, each expecting one megabit, sharing a one megabit trunk would have a 10-to-1 contention ratio.
A decade ago, a 10-to-1 contention ratio was common. Today, bandwidth is much less expensive and average contention ratios have come down. Unfortunately, as bandwidth costs have dropped, pressure on trunks has risen, as today’s applications require increasing amounts of bandwidth. The most common congestion symptom is slow network response times.
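The arithmetic behind a contention ratio is simple enough to sketch in a few lines of Python. The subscription figures below are invented purely for illustration:

```python
def contention_ratio(subscribed_mbps_per_user, num_users, trunk_mbps):
    """Ratio of potential user demand to trunk capacity.

    10 users each sold 1 Mbps on a 1 Mbps trunk -> 10.0 (a 10-to-1 ratio).
    """
    return (subscribed_mbps_per_user * num_users) / trunk_mbps

# 10 users at 1 Mbps sharing a 1 Mbps trunk:
print(contention_ratio(1, 10, 1))     # 10.0

# 40 users sold 25 Mbps plans on a 100 Mbps trunk is the same 10-to-1 ratio:
print(contention_ratio(25, 40, 100))  # 10.0
```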
Now back to our original question…
Is there a way to raise contention ratios without creating congestion, thus allowing your network to service more users?
This is where a smart bandwidth controller can help. Back in the “old” days before encryption was king, most solutions involved classifying types of traffic, and restricting less important traffic based on customer preferences. Classifying by type went away with encryption, which prevents traffic classifiers from seeing the specifics of what is traversing a network. A modern bandwidth controller uses dynamic rules to restrict traffic based on aberrant behavior. Although this might seem less intuitive than specifically restricting traffic by type, it turns out to be just as reliable, not to mention simpler and more cost-effective to implement.
We have seen results where a customer can increase their user base by as much as 50 percent and still have decent response times for interactive cloud applications.
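For illustration only, here is a toy Python sketch of the equalizing idea: when a trunk is saturated, throttle the heaviest flows toward a fair share while leaving light, interactive flows alone. A real bandwidth controller is far more nuanced (penalties are dynamic and temporary, and unused share gets redistributed), so treat this as a cartoon of behavior-based shaping, not an actual algorithm.

```python
def equalize(flows, trunk_mbps):
    """flows: {flow_id: current rate in Mbps}.

    If total demand exceeds the trunk, cap only the "hog" flows that are
    above their fair share; small interactive flows are left untouched.
    """
    total = sum(flows.values())
    if total <= trunk_mbps:
        return dict(flows)  # no congestion, no penalties needed
    fair_share = trunk_mbps / len(flows)
    return {
        fid: fair_share if rate > fair_share else rate
        for fid, rate in flows.items()
    }

# A video stream saturating a 6 Mbps trunk gets capped; VoIP and web do not.
flows = {"video": 8.0, "voip": 0.1, "web": 0.5}
print(equalize(flows, 6.0))  # {'video': 2.0, 'voip': 0.1, 'web': 0.5}
```

Note that nothing here inspects what the traffic *is*; only how it behaves, which is why the approach survives encryption.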
To learn more, contact us. Our engineering team is more than happy to go over your specific situation to see if we can help you.

NetEqualizer News: January 2017


We hope you enjoy this month’s NetEqualizer Newsletter. Highlights include a preview of more 8.5 Release features, an announcement of our 8.4 User Guide, our planned 2017 Road Trips, and more!

 

  January 2017

 

8.5 Release Planning is Underway!
Greetings! Enjoy another issue of NetEqualizer News.

As we kick off the new year, I am excited to begin development on our 8.5 Release, currently planned for late spring/early summer. This month, we continue to discuss the features planned for 8.5. I also like to get out in the field to meet with our customers, and those interested in the NetEqualizer. Check out my 2017 Road Trip plans in this month’s newsletter.

And finally, we have the 8.4 User Guide available, for those of you who like to delve into our features in detail – enjoy!

We continue to work with you to solve some of your most pressing network problems – so if you have one that you would like to discuss with us, please call or email me anytime at 303.997.1300 x103 or art@apconnections.net.

And remember we are now on Twitter. You can follow us @NetEqualizer.

– Art Reisman (CTO)

In this Issue:

:: 8.5 Release Features Preview

:: 8.5 Feedback Received – Thank You!

:: The 8.4 User Guide is Now Available!

:: 2017 Road Trips

:: Time for a Tech Refresh?

:: Best of Blog: Top 5 Reasons Confirming Employers Don’t Like Their IT Guy

8.5 Release Features Preview

We are starting to develop our 8.5 Release!

Continued from November 2016

In November we talked about Cloud Reporting, Read-Only Login, and NetEqualizer Logout.

This month we introduce several more features planned for 8.5:

1) Pool-specific RATIO and HOGMIN

2) Retain RTR State Upon Reboot

Pool-specific RATIO and HOGMIN

Ever since we first started making NetEqualizers, there has been one RATIO and one HOGMIN setting that applied to all traffic going through the device. Beginning with Release 8.5, however, we’ve enhanced our software to allow for Pool-specific RATIO and HOGMIN settings. This means that each Pool can have its own unique configuration with regard to these values. These changes give administrators more fine-tuned control over when Equalizing occurs, and what the minimum requirements for Equalizing will be, at a Pool level rather than a network level.
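As a rough illustration of how per-Pool settings can layer over global defaults, consider the Python sketch below. The pool names and numeric values are invented for the example, and this is not actual NetEqualizer configuration syntax; it simply shows the lookup pattern of "use the Pool’s value if set, otherwise fall back to the device-wide default."

```python
# Illustrative values only -- not real NetEqualizer defaults.
GLOBAL_SETTINGS = {"RATIO": 85, "HOGMIN": 12000}

# Hypothetical per-Pool overrides; anything unset falls back to global.
POOL_OVERRIDES = {
    "dorms":   {"RATIO": 75},      # start equalizing earlier for dorm traffic
    "offices": {"HOGMIN": 50000},  # only large flows count as hogs here
}

def setting(pool, key):
    """Per-Pool value if configured, otherwise the global default."""
    return POOL_OVERRIDES.get(pool, {}).get(key, GLOBAL_SETTINGS[key])

print(setting("dorms", "RATIO"))   # 75   (Pool override)
print(setting("dorms", "HOGMIN"))  # 12000 (global fallback)
print(setting("guest", "RATIO"))   # 85   (no overrides at all)
```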

Retain RTR State Upon Reboot

This has been one of the most requested features ever since we introduced RTR, and we are happy to say it will be part of Release 8.5. With this release, RTR will start upon reboot and maintain all your reporting settings so that you don’t need to go back into the device and start the service manually. This is useful in case the device is affected by a power outage or another type of unplanned activity.

Stay tuned to our newsletter for further updates on Release 8.5. We are currently underway in the development process and are still shooting for a late spring/early summer release. As always, the release is free to those with valid NetEqualizer Software and Support (NSS) plans. Contact us today with questions!


8.5 Feedback Received – Thank You!

 We Appreciated Your Suggestions!

We asked for input to our 8.5 Release and you responded with some great ideas – thank you!

Here are the features that you asked us to consider for 8.5. We will let you know what makes it over the course of future newsletters…

– Quota Enhancements: Email Customer on Exceed Quota, Summary Email before Reset, Quota in the Cloud, Web Portal

– Add sophisticated SNMP logic

– Protocol Tracking Reports

– Traffic by Source IP Report

– Bandwidth Test for Troubleshooting

– Build out Automated Alerts

– Add Real-Time Penalties to RTR Dashboard

– Add Name capability to HL, Masks, VLANs, P2P, and Priority

– Add Visibility to Penalty against what Rule

– Add Host Name from NSLookup to RTR Reports

If any of the above suggestions would also be useful to you and your organization, please let us know!


The 8.4 User Guide is Now Available!

Dive into the details on NetEqualizer’s features…

We are excited to announce the User Guide has been updated to reflect Software Update 8.4 in several key areas.

We have focused on updating the configuration sections, describing our new Batch Entry Screens for setting up Bandwidth Limits, limiting P2P Traffic, setting Bandwidth Priorities, and restricting Bandwidth Usage.

We also have added a new section to the User Guide, which walks through our Perform Quick Edits capability.  Quick Edits is useful when you want to add or delete one or a small number of rules.  We offer Quick Edits for seven (7) types of rules, including Pools, Hard Limits, and P2P Traffic Limits.

You can view the updated User Guide by clicking here or on the picture at right.

Note that the Appendices and Monitoring & Reporting sections are not yet updated to 8.4.

We plan to update the remaining sections of the User Guide to 8.4 soon. Look for an update in an upcoming newsletter!

2017 Road Trips

We’re hitting the road…

Our CTO, Art Reisman, is planning to make a swing up the East Coast this spring. Most likely he will be in the Boston and New England area the week of Feb 20th – with some room for flexibility in the timeframe. If you are on the East Coast and would like to host a formal on-site Tech Refresh, let us know and we will try to get it scheduled!


Time for a Tech Refresh?

Re-familiarize yourself with NetEqualizer!

Now that Release 8.4 has been out for 6 months, and many customers have moved to it, you may have questions! Release 8.4 had a lot of changes associated with it that may be slightly confusing if you are used to older GUI versions.

Don’t worry though, we are here to help! If you are current on your NetEqualizer Software and Support (NSS) plan, we’d like to offer you a FREE 30 minute Tech Refresh to go over any questions or issues you might have with your NetEqualizer. Contact us today to schedule a time slot with an engineer!


Best Of Blog

Top 5 Reasons Confirming Employers Don’t Like Their IT Guy

By Art Reisman


1) The IT room is the dregs

Whenever I travel to visit my IT customers, it is always a challenge to find their office. Even if I find the right building on the business or college campus, finding their actual location within the building is anything but certain. Usually it ends up being some unmarked room behind a loading dock, accessible only by a secret passage designed to relieve the building of cafeteria waste near the trash bins. Many times, their office is one and the same as the old server computer room, with the raised floor, screaming fans, and air cooled to a Scottish winter…

Photo of the Month
TEDx Aruba

This past fall, a staff member and his wife, Andrea, visited the island of Aruba in the south Caribbean Sea. The official slogan for the country is “One Happy Island,” and this held true the entire trip – all of the people were extremely friendly and welcoming. The purpose of the trip was to present at TEDx Aruba on the topic of sustainability – specifically how our trash plays a role in the most pressing environmental issues of our time. Andrea runs a non-profit based in Boulder, CO that helps educate people on how to reduce their trash and plastic footprint as well as live simpler, more meaningful lives. Check out her website and follow her on Instagram if you are so inclined!

APconnections, home of the NetEqualizer | (303) 997-1300 | Email | Website 

NetEqualizer News: November 2016


We hope you enjoy this month’s NetEqualizer Newsletter. Highlights include a 8.5 Release feature preview, customer testimonials, and more!

 

  November 2016

 

8.5 Release Planning is Underway!
Greetings! Enjoy another issue of NetEqualizer News.

As we start into the holiday season here in the U.S., I am thankful for many things. First, I want to THANK YOU, our customers, for making this all worthwhile.


In my conversations with customers & prospects, I hear over & over how much our behavior-based shaping (aka equalizing) saves you time, money, and headaches. Thank you for validating all our efforts here at APconnections!

I am also thankful that the Presidential Election is over in the U.S., as I am tired of seeing political TV advertisements, which seem to be on every 10 minutes.

We continue to work with you to solve some of your most pressing network problems – so if you have one that you would like to discuss with us, please call or email me anytime at 303.997.1300 x103 or art@apconnections.net.

And remember we are now on Twitter. You can follow us @NetEqualizer.

– Art Reisman (CTO)

In this Issue:

:: 8.5 Release Features Preview

:: We Want Your Suggestions for the 8.5 Release!

:: Is Anyone Out There Still Suffering From DDoS Attacks?

:: Featured Customer Testimonials

:: Best of Blog: Using NetEqualizer to Ensure Clean, Clear QoS for VOIP Calls

8.5 Release Features Preview

We are starting to plan our 8.5 Release!

We have started putting together initial plans for our late spring software update – 8.5 Release. We have some exciting features in mind! Here is a preview of several features that will be included:

Cloud Reporting

Have you ever wanted to access reporting data for longer than 4 weeks? The reason for the current NetEqualizer limit is that we can only store so much data on the device itself.

Our new Cloud Reporting offering will allow you to store historical NetEqualizer data for an extended period of time. You’ll be able to seamlessly pull this data from the Cloud and display the results on your NetEqualizer, or use it for other reporting and archiving purposes.

Read-only Login Account (customer feature request)

The NetEqualizer has always used basic HTTP authentication for its one account, but that is about to change! The next release will have a more standard login page with two roles – the current administrator role as well as a NEW read-only account role. The read-only account will let non-technical staff log in and view reports, as well as use a few other features.

NetEqualizer Logout (customer feature request)

We will support web application sessions with both login & logout. Today we offer login, but in 8.5 users will also be able to securely log out of their session once they are finished using the GUI.

We are very excited about enhancing our recent 8.4 Release user interface with these changes. Stay tuned to the newsletter for updates on 8.5 features, release dates, and more!

We Want Your Suggestions for the 8.5 Release!

 We want your help! Last call for suggestions for our 8.5 Release.

Now is your last chance for 8.5 Release feature requests!

Many of our best features come from customer requests. For example, for all of you that wanted to have a read-only account for NetEqualizer administration, you’ll be happy to know that we have included it in our upcoming 8.5 Release. Our NetEqualizer Logout is also based on a customer suggestion.

For those suggested features that don’t make the cut, it is not because we did not like them (we like all the suggestions), but we have to filter on features that apply to a large set of our customers. We also keep track of all feature requests, so if yours does not make it into 8.5, it may be scheduled in a future release.

We only know what features you are interested in if you speak up! We have no way of knowing if a feature is popular or not unless we hear from you. So please, think deep and tell us what features would make the NetEqualizer tool more valuable to you!

Here are some questions you can ask yourself or your IT team to come up with ideas:

  1. What feature could I use to help us troubleshoot network problems, perhaps something you need to see in our reports?
  2. What feature would further help optimize our bandwidth resource, perhaps your wireless network has unique challenges?
  3. What security concerns do you have? Anything in the DDoS arena?
  4. What feature could be added to make setup and maintenance more efficient?


Is Anyone Out There Still Suffering from DDoS Attacks?

What have your experiences been?

Perhaps the Russians have given up on hacking? We are not sure, but we certainly have seen a big drop off in DDoS help requests to our support team – so much so that we have put our DDoS firewall enhancement plans on hold.

We were working on a feature request to block foreign IPs by connection count as one of our DDoS triggers. It would work something like this:

A NetEqualizer customer sets a white list of public IPs to let through (not blocked). Any other public IP hitting the network with more than X active connections would trigger an alert, or possibly a block, based on your preference.
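That trigger logic is easy to sketch. The Python snippet below is only an illustration of the idea described above; the whitelist entries, threshold, and IP addresses are all hypothetical, and a real implementation would pull live connection counts from the network rather than a hard-coded dictionary.

```python
WHITELIST = {"198.51.100.4"}  # hypothetical trusted public IPs
MAX_CONNECTIONS = 100         # the "X" threshold from the description above

def check_ips(connection_counts):
    """connection_counts: {public_ip: active connection count}.

    Return IPs exceeding the threshold that are not on the whitelist;
    these would trigger an alert (or a block, per your preference).
    """
    return sorted(
        ip for ip, count in connection_counts.items()
        if ip not in WHITELIST and count > MAX_CONNECTIONS
    )

observed = {"198.51.100.4": 500, "203.0.113.7": 250, "192.0.2.9": 3}
print(check_ips(observed))  # ['203.0.113.7']
```

Note the whitelisted IP is never flagged, no matter how many connections it opens; everything else is judged purely on connection count.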

We need to know if such a feature, or another DDoS approach would be better, based on your experience.

Let us know what you have been seeing as far as DDoS attacks on your network!


Featured Testimonials

What our customers are saying…

We take great pride in ensuring our customers are happy with their NetEqualizer! You can find all of our customer testimonials on our website under the “Customers” menu.

Here are just a few testimonials that we’ve received in 2016:

Reed College

“We’ve had NetEqualizers on campus at Reed for several years and continue to be very happy with the product. We have a very small staff and don’t have time to “tune” a device like a Packetshaper. Instead the NetEqualizer is protocol agnostic in the way it shapes traffic for most users but also allows us to quickly prioritize some traffic if necessary.

Over the years the NetEqualizer has saved us countless hours of staff time. We did lose some visibility into what is happening on our border network but our IDS/IPS replaced that functionality. NetEqualizer is an excellent product.”

– Gary Schlickeiser, Director of Technology Infrastructure Services

Thanks Gary for your kind words!

Edmonton Regional Airport Authority

“We presently use two NE3000 units for Internet traffic control and monitoring in a redundant setup. At present we have a maximum of 600 Mbps Internet throughput, with over 300 IP addresses in use in some 120+ address Pools.

The NetEqualizer is a very useful tool for us for monitoring and setting speeds for our many users. Most of the feeds come straight off our Campus network, which is spread over a seven kilometer distance from one end of the airdrome to the other. We also feed a number of circuits to customers using ADSL equipment in the older areas where fiber is not yet available. Everything runs though the “live” NE3000!

Controllability and monitoring is key for our customers, as they pay for the speed they are asking for. With the RTR Dashboard, we continually monitor overall usage peaks to make sure we provide enough bandwidth but, more importantly, to our individual customers. Many customers are not sure of how much bandwidth they need, so using the Neteq we can simply change their speed and watch the individual IP and/or Pool usage to monitor. This becomes especially useful now as many customers, including ourselves, use IP telephony to remote sites; so we need to maintain critical bandwidth availability for this purpose. That way when they or we have conference calls for example, no one is getting choppy conversations. All easily monitored and adjusted with the Dashboard and Traffic Management features.

We also have used the Neteq firewall feature to stop certain attack threats and customer infected pcs or servers from spewing email or other reported outbound attacks, not a fun thing but it happens.

Overall a very critical tool for our success in providing internet to users and it has worked very well for the past 8 or more years!”

– Willy Damgaard, Network and Telecom Analyst

Thanks Willy! We are happy to help.

Cooperative Light & Power

“Our company is an electric utility and we have a subsidiary WISP with about 1,000 unlicensed fixed wireless customers. We purchased our first NetEqualizer about a year ago to replace our fair access policy server from another company. The server we replaced allowed burst then sustained bandwidth so we weren’t sure if “equalizing” would work, but it works extremely well as advertised.

The NetEqualizer is stable and actually requires very little maintenance after initial configuration. In our case, we wanted to limit the upper end of what a customer could use (max burst). We were able to set that parameter in our wireless CPE’s. Then we set the equalizing pools for the size of our APs. The NetEqualizer can do a burst then sustained then burst at equal intervals, but to our surprise we actually didn’t need to use it.

We also purchased the DDoS Firewall and that is working nicely as well for quick identification of attacks. Perhaps the most important thing to note is the support is excellent. From sales to engineering the team is very responsive and knowledgeable. We were so impressed that we actually purchased a second NetEqualizer to handle the rest of our network. This company is A+.”

– Kevin Olson, Communication Manager

Thanks Kevin!

It is wonderful to hear such glowing feedback from one of our newer customers! If you would like to share your feedback on the NetEqualizer, to be highlighted in a future NetEqualizer News, click here to send us an email.


Best Of Blog

Using NetEqualizer to Ensure Clean, Clear QoS for VoIP Calls

By Art Reisman
 
Last week I talked to several ISPs (Note: these were blind calls, not from our customers) that were having issues with end customers calling and complaining that their web browsing and VoIP calls were suffering. The funny thing is that the congestion was not the fault of the ISP, but of the local connection being saturated with video. For example, if the ISP delivers a 10 meg circuit, and the customer starts two Netflix sessions, they would clog their own circuit.
Those conversations reminded me of an article I wrote back in 2010 that explains how the NetEqualizer can alleviate this type of congestion for VoIP. Here it is…

Photo of the Month
Hiking Near Caribou Ranch
It’s been unseasonably warm in Colorado this fall. We’ve been taking advantage of this by hiking in the mountains amidst the changing leaf colors. 
APconnections, home of the NetEqualizer | (303) 997-1300 | Email | Website 