Deja Vu, IVR, and the Online Shopper’s Bill of Rights


By Art Reisman
CTO
www.apconnections.net
www.netequalizer.com

Here is my Bill of Rights for how the online shopping experience should work in a perfect world.

1) Ship to multiple addresses. This means specifically the ability to ship any item in an order to any address.

2) On the confirmation page, always let the user edit their order right there: delete items, change quantities, update the ship-to address, and adjust shipping options. All of these controls should be available for each item.

3) Never force the user to hit the back button to fix a mistake. Assume they will need to edit anything from any page, as if in a fully connected matrix. Let them navigate to anywhere from anywhere.

4) Don’t show items out of stock or on back order UNLESS the customer requests to see that garbage.

5) You had better know what is out of stock. :)

6) The submit button should disappear the instant it is hit. It is either hit or not hit; there should be no way for a customer to order something twice by accident, or to be left wondering whether they have ordered twice. The system should also display appropriate status messages while an order is being processed.

7) If there is a problem on any page in the ordering process, a detailed message describing the problem should appear at the top of the page, and the problem field should be highlighted. Leaving a customer to wonder what they did wrong is just bad.

8) Offer gift wrap (or indicate it is unavailable) when an item is selected, not at the end of the ordering process.

9) If the item or order is not under your inventory control, then don’t sell it, or at least don’t pretend to sell it without a disclaimer.

10) Remember all the fields when navigating between options. For example, a user should never have to fill out an address twice unless it is a new address.

Why is it so hard to solve these problems?

Long before the days of the Internet, I was a system architect charged with designing an Interactive Voice Response (IVR) product called Conversant (Conversant was one of the predecessors to Avaya IP Office). Although not nearly as widespread as the Internet of today, most large companies provided automated services over the phone throughout the 1990s. Perhaps you are familiar with a typical IVR – press 1 for sales, press 2 for support, etc. In an effort to reduce labor costs, companies also used the phone touch-tone interface for more complex operations, such as tracking your package or placing an order for a stock. It turns out that most of the quality factors associated with designing an IVR application of yesterday are now reflected in many of the issues facing the online shopping experience of today.

Most small companies really don’t have the resources to use anything more than a templated application. Sometimes the pre-built application is flawed, but more often than not, the application needs integration into the merchant’s back end and business processes. The pre-built applications come with programming stubs for error conditions which must be handled. For small businesses, even the simplest customizations to an online application will run a minimum of $10k in programmer costs, and hiring a reputable company that specializes in custom integration is more like $50k.

Related: Internet User’s Bill of Rights

Will Bandwidth Shaping Ever Be Obsolete?


By Art Reisman

CTO – www.netequalizer.com

I find public forums where universities openly share information about their bandwidth shaping policies to be an excellent source of information. Unlike commercial providers, these user groups have found that technical collaboration is in their best interest, and they often openly discuss current trends in bandwidth control.

A recent university IT user group discussion thread kicked off with the following comment:

“We are in the process of trying to decide whether or not to upgrade or all together remove our packet shaper from our residence hall network.  My network engineers are confident we can accomplish rate limiting/shaping through use of our core equipment, but I am not convinced removing the appliance will turn out well.”

Notice that he is not talking about removing rate limits completely, just backing off from an expensive extra piece of packet shaping equipment and using the simpler rate limits available on his router. The point of my reference to this discussion is not so much to debate the different approaches to rate limiting, but to emphasize that, at this point in time, running wide open without some sort of restriction is not even being considered.

Despite an 80 to 90 percent reduction in bulk bandwidth prices in the past few years, bandwidth is not quite yet cheap enough for an ISP to run wide-open. Will it ever be possible for an ISP to run wide-open without deliberately restricting their users?

The answer is not likely.

First of all, there seems to be no limit to the ways consumer devices and content providers will conspire to gobble bandwidth. The common assumption is that no matter what an ISP does to deliver higher speeds, consumer appetite will outstrip it.

Yes, an ISP can temporarily leap ahead of demand.

We do have a precedent from several years ago. In 2006, the University of Brighton in the UK was able to unplug our bandwidth shaper without issue. When I followed up with their IT director, he mentioned that their students’ total consumption was capped by the far-end services of the Internet, and thus they did not hit their heads on the ceiling of the local pipes. Running without restriction, 10,000 students were not able to eat up their 1 gigabit pipe! I must caveat this experiment by saying that the UK university system had invested heavily in subsidized bandwidth and was far ahead of the average ISP curve for the times. Content services on the Internet for video were just not that widely used by students at the time. Such an experiment today would bring a pipe under a similar contention ratio to its knees in a few seconds. I suspect that today one would need on the order of 15 to 25 gigabits to run wide open without contention-related problems.

It also seems that we are coming to the end of the line for bandwidth in the wireless world much more quickly than in the wired world.

It is unlikely consumers are going to carry cables around with their iPads and iPhones to plug into wall jacks any time soon. With the diminishing returns on investment for higher speeds on the wireless networks of the world, bandwidth control is the only way to keep some kind of order.

Lastly I do not expect bulk bandwidth prices to continue to fall at their present rate.

The last few years of falling prices are the result of a perfect storm of factors not likely to be repeated.

For these reasons, it is not likely that bandwidth control will be obsolete for at least another decade. I am sure we will be revisiting this issue in the next few years for an update.

Internet User’s Bill of Rights


This is the second article in our series. Our first was a Bill of Rights dictating the etiquette of software updates. We continue with a proposed Bill of Rights for consumers with respect to their Internet service.

1) Providers must divulge the contention ratio of their service.

At the core of all Internet service is a balancing act between the number of people that are sharing a resource and how much of that resource is available.

For example, a typical provider starts out with a big pipe of Internet access that is shared via exchange points with other large providers. They then subdivide this access out to their customers in ever smaller chunks — perhaps starting with a gigabit exchange point and then narrowing down to a 10 megabit local pipe that is shared with customers across a subdivision or area of town.

The speed you, the customer, can attain is limited to how many people might be sharing that 10 megabit local pipe at any one time. If you are promised one megabit service, it is likely that your provider would have you share your trunk with more than 10 subscribers and take advantage of the natural usage behavior, which assumes that not all users are active at one time.

The exact contention ratio will vary widely from area to area, but from experience, your provider will want to maximize the number of subscribers who can share the pipe while minimizing service complaints due to a slow network. In some cases, I have seen as many as 1,000 subscribers sharing 10 megabits. This is a bit extreme, but even with a ratio as high as this, subscribers will still average much faster speeds than dial-up. (A worked example of the contention arithmetic follows this list.)

2) Service speeds should be based on the amount of bandwidth available at the provider’s exchange point and NOT the last mile.

Even if your neighborhood (last mile) link remains clear, your provider’s connection can become saturated at its exchange point. The Internet is made up of different provider networks and backbones. If you send an e-mail to a friend who receives service from a company other than your provider, then your ISP must send that data on to another network at an exchange point. The speed of an exchange point is not infinite, but is dictated by the type of switching equipment. If the exchange point traffic exceeds the capacity of the switch or receiving carrier, then traffic will slow.

3) No preferential treatment to speed test sites.

It is possible for an ISP to give preferential treatment to individual speed test sites. Providers have all sorts of tools at their disposal to allow and disallow certain kinds of traffic. There should never be any preferential treatment to a speed test site.

4) No deliberate re-routing of traffic.

Another common tactic to save resources at the exchange points of a provider is to re-route file-sharing requests to stay within their network. For example, if you were using a common file-sharing application such as BitTorrent, and you were looking for some non-copyrighted material, it would be in your best interest to contact resources all over the world to ensure the fastest download.

However, if your provider can keep you on their network, they can avoid clogging their exchange points. Since companies keep tabs on how much traffic they exchange in a balance sheet, making up for surpluses with cash, it is in their interest to keep traffic confined to their network, if possible.

5) Clearly disclose any time of day bandwidth restrictions.

The ability to increase bandwidth for a short period of time and then slow you down if you persist at downloading is another trick ISPs can use. Sometimes they call this burst speed, which can mean speeds being increased up to five megabits, and they make this sort of behavior look like a consumer benefit. Perhaps Internet usage will seem a bit faster, but it is really a marketing tool that allows ISPs to advertise higher connection speeds – even though these speeds can be sporadic and short-lived.

For example, you may only be able to attain five megabits at 12:00 a.m. on Tuesdays, or some other random unknown times. Your provider is likely just letting users have access to higher speeds at times of low usage. On the other hand, during busier times of day, it is rare that these higher speeds will be available.
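
Returning to the contention ratio in right #1 above, here is the worked example promised there: a minimal Python sketch with made-up numbers (the pipe size, subscriber count, and activity levels are illustrative only, not measurements from any real provider) showing how oversubscription translates into the speed you actually see.

def per_user_speed_mbps(pipe_mbps, subscribers, fraction_active):
    # Average speed per active subscriber on a shared local pipe.
    active = max(1, round(subscribers * fraction_active))
    return pipe_mbps / active

pipe_mbps = 10        # shared local pipe (illustrative)
subscribers = 100     # subscribers sharing that pipe (illustrative)
promised_mbps = 1     # advertised speed per subscriber (illustrative)

contention_ratio = subscribers * promised_mbps / pipe_mbps
print(f"Contention ratio: {contention_ratio:.0f}:1 oversubscription")

for fraction_active in (0.05, 0.25, 0.75):
    speed = per_user_speed_mbps(pipe_mbps, subscribers, fraction_active)
    print(f"{fraction_active:.0%} of subscribers active: "
          f"~{speed:.2f} Mbps each (promised {promised_mbps} Mbps)")

With these illustrative numbers the contention ratio is 10:1, and the promised one megabit is only reliably available when a small fraction of your neighbors are active at the same time.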

There is now a consortium called M-Lab which has put together a sophisticated speed test site designed to give specific details on what your ISP is doing to your connection. See the article below for more information.

Related article: Ten Things Your Internet Provider Does Not Want You to Know

Related article: Online Shopper’s Bill of Rights

Is Your Data Really Secure?


By Zack Sanders

Most businesses, if asked, would tell you they do care about the security of their customers. The controversial part of security comes to a head when you ask the question in a different way. Does your business care enough about security to make an investment in protecting customer data? There are a few companies that proactively invest in security for security’s sake, but they are largely in the minority.

The two key driving factors that determine a business’s commitment to security investment are:

1) Government or Industry Standard Compliance – This is what drives businesses like your credit card company, your local bank, and your healthcare provider to care about security. In order to operate, they are forced to care. Standards like HIPAA and PCI require them to go through security audits and checkups. Note: just because they invest in meeting a compliance standard, it may not translate to secure data, as we will point out below.

2) A Breach Occurs – Nothing will change an organization’s attitude toward security like a massive, embarrassing security breach. Sadly, it usually takes something like this happening to drive home the point that security is important for everyone.

The fact is, most businesses are running on very thin margins, and other operating costs come before security spending. Human nature is such that we prioritize by what is in front of us now; what we don’t know can’t hurt us. It is easy for a business to assume that their minimum firewall configuration is good enough for now. Unfortunately, they cannot easily see the holes in their firewall. Most firewall security can easily be breached through advertised public interfaces.

How do we know? Because we often do complimentary spot checks on company web servers. It is a rare case when we have not been able to break in, attaining access to all customer records. Even though our sample set is small, our breach rate is so high that we can reliably extrapolate that most companies can easily be broken into.

As we alluded to above, even some of the companies that follow a standard are still vulnerable. Many large corporations just go through the motions to comply with a standard, so they typically seek out “trusted,” large professional services firms to do their audits. Often, these firms will conduct boilerplate assessments where auditors run down a checklist with the sole goal of certifying the application or organization as compliant.

Hiring a huge firm to do an audit makes it much easier to deflect blame in the case of an incident. The employee responsible for hiring the audit firm can say, “Well, I hired XXX – what more could I have done?” If they had hired a small firm to do the audit, and a breach occurred, their judgement and job might come into question – however unfair that might be.

As a professional web application security analyst who has personally handled the aftermath of many serious security breaches, I would advocate that if you take your security seriously, you start with an assessment challenge using a firm that will work to expose your real-world vulnerabilities.

How to Put a Value on IT Consulting


By Art Reisman

This post was inspired by a conversation with one of our IT resellers. My commentary is based on thousands of experiences I have had helping solve client network IT issues over the past 20 years.

There is a wide range of ability in the network consulting world, and choosing the right IT consultant is just as important as choosing a reliable car or plane. Shortchanging yourself with a shiny new paint job at a low price can lead to disaster.

The problem clients must overcome when picking a consultant is that often the person doing the hiring is not an experienced IT professional, hence it is hard to judge IT competency. A person who has not had to solve real-world networking problems may have no good reference point to judge an IT consultant. It would be like me auditioning pianists for admission to the Juilliard School (also a past customer of ours). I could never hope to distinguish the nuances of a great pianist from a bar hack playing pop songs. In the world of IT, on face value, the talent of an IT person is also hard to differentiate. A nice guy with good people skills is important but does not prove IT competency. Certifications are fine, but are also not a guarantee of competency. Going back to my Juilliard example, perhaps with a few tips from an expert I could narrow the field a bit?

Below are some ideas that should provide some guidance when narrowing your choice of IT consultant.

The basic difference in competency, as measured by results, will come down to those professionals who can solve new problems as presented and those who can’t. For example, a consultant without unique problem-solving skills will always try to map a new problem onto a variation of an old problem, and thus will tend to go down a trial-and-error checklist in sequential order. This will work for solving very basic problems drawn from their knowledge base of known problems, but it can really rack up the hours and downtime when this person is presented with a new issue not previously encountered. I would ask the following question of a potential consultant. Even if you are non-technical, ask it, and listen for enthusiasm in the answer, not so much the details.

“Can you run me through an example of any unique networking problem you have encountered, and what method you used to solve it?” A good networking person will be full of war stories, proud of them, and should actually enjoy talking about them.

The other obvious place to find a networking consultant is from a reference, but be careful. I would only value the reference if the party giving it has had severe IT failures for comparison.

There are plenty of competent IT people who can do the standard stuff. A person giving a reference will only be valuable if they have gone from bad to good, or vice versa. If they start with good, they will assume all IT people are like that and not appreciate what they have stumbled into. If they start with average, they will not know it is average until they experience good. The average IT person will be busy all the time and will eventually solve problems via brute force. In the process they will sound intelligent and always have an issue to solve (often of their own bumbling). Until a reference experiences the efficiency of somebody really good as a comparison (a good IT person is hardly ever noticed), they won’t have the reference point.

Networking Equipment and Virtual Machines Do Not Mix


By Joe D’Esopo

Editor’s Note:
We often get asked why we don’t offer our NetEqualizer as a virtual machine. Although the excerpt below is geared toward the NetEqualizer, you could just as easily substitute the word “router” or “firewall” in place of NetEqualizer and the information would apply to just about any networking product on the market. For example, even a simple Linksys router has a version of Linux under the hood, and to my knowledge they don’t offer that product as a VM. In the following excerpt, lifted from a real response to one of our larger customers (a hotel operator), we detail the reasons.

————————————————————————–

Dear Customer

We’ve very consciously decided not to release a virtualized copy of the software. The driver for our decision is throughput performance and accuracy.

As you can imagine, The NetEqualizer is optimized to do very fast packet/flow accounting and rule enforcement while minimizing unwanted negative effects (latencies, etc…) in networks. As you know, the NetEqualizer needs to operate in the sub-second time domain over what could be up to tens of thousands of flows per second.

As part of our value proposition, we’ve been successful, where others have not, at achieving tremendous throughput levels on low cost commodity platforms (Intel based Supermicro motherboards), which helps us provide a tremendous pricing advantage (typically we are 1/3 – 1/5 the price of alternative solutions). Furthermore, from an engineering point of view, we have learned from experience that slight variations in Linux, System Clocks, NIC Drivers, etc… can lead to many unwanted effects and we often have to re-optimize our system when these things are upgraded. In some special areas, in order to enable super-fast speeds, we’ve had to write our own Kernel-level code to bypass unacceptable speed penalties that we would otherwise have to live with on generic Linux systems. To some degree, this is our “secret sauce.” Nevertheless, I hope you can see that the capabilities of the NetEqualizer can only be realized by a carefully engineered synergy between our Software, Linux and the Hardware.

With that as a background, we have taken the position that a virtualized version of the NetEqualizer would not be in anyone’s best interest. The fact is, we need to know and understand the specific timing tolerances in any given moment and system environment. This is especially true if a bug is encountered in the field and we need to reproduce it in our labs in order to isolate and fix the problem (note: many bugs we find are not of our own making – they are often changes in Linux behavior that used to work fine but changed in a newer release without our knowledge, and that requires us to discover the change and re-optimize around it).

I hope I’ve done a good job of explaining the technical complexities surrounding a “virtualized” NetEqualizer.  I know it sounds like a great idea, but really we think it cannot be done to an acceptable level of performance and support.

The Internet was Never Intended for On-demand TV and Movies


By Art Reisman

www.netequalizer.com

I just got off the phone with one of our customers, who happens to be a large ISP. He chewed me out because we were throttling his video and his customers were complaining. I told him that if we did not throttle his video during peak times, his whole pipe would come to a screeching halt. It seems everybody is looking for a magic bullet to squeeze blood from a turnip.

Can the Internet be retrofitted for video?

Yes, there are a few tricks an ISP can do to make video more acceptable, but the bottom line is, the Internet was never intended to deliver video.

One basic trick being used to eke out some video is to cache local copies of video content, and then deliver them to you when you click a URL for a movie. This technique follows along the same path as the original on-demand video of the 1980s – the kind of service where you called your cable company and purchased a movie to start at 3:00 pm. Believe it or not, there was often a video player with a cassette at the other end of the cable going into your home, and your provider would just turn the video player on with the movie at the prescribed time. Today, the selection of available video has expanded and the delivery mechanism has gotten a bit more sophisticated, but for the most part, popular video is delivered via a direct wire from the operator into your home. It is usually NOT coming across the public Internet; it only appears that way (if it came across the Internet it would be slow and sporadic). Content that comes from the open Internet must come through an exchange point, and if your ISP has to rely on its exchange point to retrieve video content, things can get congested rather quickly.

What is an Internet Exchange point and why does it matter?

Perhaps an explanation of exchange points might help. Think of a giant railroad yard, where trains from all over the country converge and then return to where they came from. In the yard they exchange their goods with the other train operators. For example, a train from Montana brings in coal destined for power plants in the east, and the trains from the east bring mining supplies and food for the people of Montana. As per a gentleman’s agreement, the railroad companies will transfer some goods to other operators, and take some goods in return. Although fictional, this is a fair trade agreement of sorts. The fair trade in our railroad example works as long as everybody exchanges about the same amount of stuff. But suppose one day a train from the south shows up with a load ten times the size of what it wishes to exchange, and suppose its goods are perishable, like raw milk products. Not only does it have more than its fair share to exchange, but it also has a time dependency on the exchange: it must get the milk to other markets quickly or it loses all value. You can imagine that some of the railroads in the exchange co-operative would be overloaded and problems would arise.

I wish I could take every media person who writes about the Internet, put them in a room, and not let them leave until they understand the concept of an Internet exchange point. The Internet is founded on a best-effort exchange agreement. Everything is built off this model, and it cannot easily be changed.

So how does this relate back to the problems of video?

There really is no problem with the Internet; it works as intended and is a magnificent model of best-effort exchange. The problem arises from the delusion of content providers, who pump video content into the pipes without any consideration of what might happen at the exchange points.

A bit of quick history on exchange point evolution.

Over the years, the original government network operators started exchanging with private operators such as AT&T, Verizon, and Level 3. These private operators have made great efforts to improve the capacity of their links and exchange points, but the basic problem still exists. The sender and receiver never have any guarantee that their real-time streaming video will get to the other end in a timely manner.

As for caching, it is a band-aid. It works some of the time for the most popular videos that get watched over and over again, but it does not solve the problem at the exchange points, and consumers and providers are always pumping more content into the pipes.

So can the problem of streaming content be solved?

The short answer is yes, but it would not be the Internet. I suspect one might still call it the Internet for marketing purposes, but out of necessity it would be some new network with a different political structure and entirely different rules. It would come at a much higher cost to ensure data paths for video, and operators would have to pass the cost of transport and path setup directly on to the content providers to make it work. Best-effort fair exchange would be out of the picture.

For example, over the years I have seen numerous plans by wizards who draw up block diagrams on how to make the Internet a signaling, switching network instead of a best-effort network. Each time I see one of these plans, I just sort of shrug. It has been done before, and done very well; they never consider the data networks originally built by AT&T, which formed a fully functional switched network for sending data to anybody with guaranteed bandwidth. We’ll see where we end up.

Nine Tips for Organic Technology Start Ups


By Art Reisman

Art is CTO and Co-Founder of APconnections – makers of the NetEqualizer. NetEqualizer is used by thousands of ISPs worldwide to arbitrate bandwidth. He is also the principal engineer and inventor of the Kent Moore EVA, a product used to troubleshoot millions of vehicle vibration issues since 1992.

1) Find somebody who has built at least two businesses on their own – better yet, somebody who has done it more than once from scratch.

For example, a Harvard MBA who went to work for Goldman Sachs right out of school has no idea what you are up against. They may be brilliant, but without experience specifically in the field of growing a start-up, their education and experience are not as good as those of somebody who has done it on their own.

2) Be leery of late 1990’s dot com moguls.

Many good people got lucky during those years. It was a rare time that will likely never happen again. Yes, there are some true stars from that era, but most were just people who were in the right place at the right time. Their experiences generally don’t translate to a marketplace where money is tight and you must bite and scratch for every inch of success.

3) Be careful not to give too much credence to the advice of current and former executives at large companies.

They are great if you are looking for connections and introductions within those companies, but rarely do they understand bootstrapping a start-up. These executives most likely operated in a company with large resources and rampant bureaucracy, which required a completely different set of skills than a start-up does.

4) Amazingly, I have found that real estate brokers are a great source of marketing ideas.

Not the agents, but the founders who built real estate companies up from scratch. I can assure you they have some creative ideas that will translate to your tech business.

5) Product companies must avoid the consulting trap.

If you produce a software product (or any product for that matter), you will always be inundated with specialty, one-off requests from customers. These requests are well intentioned, but you can’t let a single customer drive your time, direction, and feature set. The exception to this rule is obviously if you are getting similar requests from multiple customers. If you start building special features for single customers, ultimately you will barely break even, and may go broke trying to please them. At some point (now), you have to say this is our product, this is our price, and these are the features, and if a customer needs specialty features, you will need to politely decline. If your competition takes up your account on promises of customization, you can be sure they are spreading their resources thin.

6) Validate your product: see if you can sell it to strangers.

Early on, you need to sell what you have to somebody who is not a friend. Friends are great for testing a product, making you feel good, or talking up your company, but for real, honest feedback on whether your product will be a commercial success, you need to find somebody who buys your product. I don’t really care if it is a $10 sale or a $10,000 sale; it is important to establish that somebody is willing to purchase your product. From there, you can work on pricing models. Perfection is great, but don’t stay in development for years making things better and perfecting your support channel, or whatever. The reality is you have to sell something to build momentum, and delay to market is your enemy. If you do not find customers willing to commit their hard-earned money to your product at some early stage, you do not have a product.

You should be able to take early deposits on the concept if nothing else.

7) Don’t spend precious cash on patents and lawyers to defend non-existent value.

As an organic or unfunded start-up, the last thing you need to worry about is somebody stealing your idea, and yet this is the first piece of advice you are going to get from everybody you know. The fact is, there are millions of patents out there for failed products, protecting nothing. I suppose it could happen that somebody steals your idea and profits before you get off the ground, but it is much more likely you will waste six months’ mortgage on a patent you’ll never get a chance to defend. Even if you have a patent, you won’t be able to defend yourself against a deep-pocketed rival. The good news is that if you have a good, growing idea, investors will take care of protecting it when they buy you.

8) Become an expert in your field. Maybe you are already? Sounds obvious, but make sure you know every detail of your technology and how it can help your customers.

9) Test the market like Billy Mays (may he rest in peace).

Before he passed away, Billy and his partner had a show where they took you through the test-market phase of the products they introduced. The plan was simple: build a cheesy commercial to demo the cheesy product, then run your advertisements in a small-market metro area on late-night TV. Although your audience may be insomniacs watching re-runs of old movies late at night, you need to find a way to test market your idea and get honest feedback (people calling to try to buy your product is a good indicator). You might even want to run some teasers to your market before you launch, but do so with limited resources. If you get a representative sample, you can then decide to ramp up from there with some confidence.

10) Need versus buy. The only measure of success is somebody buying your product. Just because people “need” your product is not an indicator of whether they are willing to pay for it. People “need” lots of things and only actually buy a small percentage of them. I need a bigger house, a nice car, and a vacation to Hawaii. I also need a sprinkler system and a faster computer, but I bought none of these things this past year.

In the four years from 2008 to 2012, the hot-selling items have been very basic services, such as telephone systems, heat, and advertising. Very few businesses are buying anything beyond the essentials in any quantity. This could change if the economy goes back into a growth phase, but the point here is to build something that is a necessity with clear value, and you must test that value by selling product. An open wallet is the only way to validate need versus buy; marketing surveys of intentions will not tell the truth. Don’t get me wrong, there is always opportunity out there, but you constantly need to validate your threshold of value by selling something.

Related Business Advice Articles.

Tips to make your WISP more profitable

Terry Gold’s blog has a good bit of advice sprinkled throughout

How I got my start – the story of NetEqualizer

Building a software company from Scratch

How to Determine the True Speed of Video over Your Internet Connection


Art Reisman CTO www.netequalizer.com

Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

More and more, Internet Service Providers are using caching techniques on a large scale to store local copies of Netflix Movies and YouTube videos. There is absolutely nothing wrong with this technology. In fact, without it, your video service would likely be very sporadic. When a video originates on your local provider’s caching server, it only has to make a single hop to get to your doorstep. Many cable operators now have dedicated wires from their office to your neighborhood, and hence very little variability in the speed of your service on this last mile.

So how fast can you receive video over the Internet? (That is, video that is not stored on your local provider’s caching servers.) I suppose this question would be moot if all video known to mankind were available from your ISP. In reality, they only store a tiny fraction of what is available on their caching servers. The reason caching can be so effective is that most consumers only watch a tiny fraction of what is available, and they tend to watch what is popular. To determine how fast you can receive video over the Internet, you must bypass your provider’s cache.

To ensure that you are running a video from beyond your provider’s cache, google something really obscure, like “Chinese language YouTube on preparing flowers.” Don’t use this search term if you are in a Chinese neighborhood, but you get the picture, right? Search for something obscure that is likely never watched near you. Pick a video 10 minutes or longer, and then watch it. The video may get broken up, or, more subtly, you may notice the buffer bar falls behind or barely keeps up with the playing of the video. In any case, if you see a big difference watching an obscure video versus a popular one, this will be one of the best ways to gauge your true Internet speed.

Note: Do not watch the same video twice in a row when doing this test. The second time you watch an obscure video from China, it will likely run from your provider’s cache, thus skewing the experiment.
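
If you would rather put a number on it than eyeball the buffer bar, the same test can be scripted. The sketch below is a rough Python illustration only – the URL is a placeholder you would replace with a direct link to an obscure, uncached video file – and it simply times a raw download and reports the average throughput.

# Rough throughput check against a (hypothetical) uncached video URL.
# Replace VIDEO_URL with a direct link to an obscure video file, and do
# not run it twice in a row, or the second run may be served from a cache.
import time
import urllib.request

VIDEO_URL = "http://example.com/obscure-video.mp4"  # placeholder URL
CHUNK = 64 * 1024
MAX_BYTES = 50 * 1024 * 1024  # stop after 50 MB

start = time.time()
received = 0
with urllib.request.urlopen(VIDEO_URL) as resp:
    while received < MAX_BYTES:
        chunk = resp.read(CHUNK)
        if not chunk:
            break
        received += len(chunk)

elapsed = time.time() - start
mbps = (received * 8) / (elapsed * 1_000_000)
print(f"Downloaded {received/1e6:.1f} MB in {elapsed:.1f} s (~{mbps:.2f} Mbps)")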

Google High Speed Internet Service is a Smart Play


Some day it will happen, a search engine that really understands the context of what you are looking for.  Maybe it will come from a young group of grad students with a school research project?  This would be an ironic twist for Google since this is exactly how they came to power; all the more reason to understand the dangers of complacency.

I must admit that I have noticed a difference since Google’s upgrade in May. Things have gotten better; however, it is an incremental improvement in the battle to get rid of all the bogus linked-up pages looking for higher rankings and muddling real content. Their hold as the top search engine will always be a tenuous position.

My advice to Google

Now is the time to leverage your market position, and the best thing I can think of would be to build a rock-solid fiber network-to-the-home in a major metropolitan area.  A real meat and potatoes service that cannot be undermined by a rogue start-up. Combine your ISP with your ability to host content (worry about the anti-trust stuff later). With the largest and most efficient mass storage facilities in the world, and fiber-to-the-home, you can easily cache massive amounts of video content for instant delivery, thus easily creaming the competition’s delivery cost. You now have a product with a much higher entry barrier. Give it away at cost and fund it with your advertising network. Oh, it looks like Google is one step ahead of me hmm???

Wired Bandwidth Prices, and What to Expect in the Future


By Art Reisman

CTO – http://www.netequalizer.com

Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

Bandwidth prices traditionally have a very regional component, and your experience may vary, but in the US there is a really good chance you can get quite a bit more bandwidth for a much lower price than what it would have cost you a few years ago. To cite one example, we have a customer that contracts Internet services to supply several large residential housing units. Currently, commercial-class business Internet service for 50 megabits runs $120 per month, which is the same price they were paying for 10 megabits 3 years ago. Essentially, they are getting five times as much bandwidth for the price they signed up for 3 years ago. And they are not an anomaly; I am hearing the same story in almost every market in the US. We can conclude from our empirical data that bandwidth prices have dropped 80 percent in 3 years!
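
The 80 percent figure is just per-megabit arithmetic on the example above. Spelled out as a quick Python calculation using those same numbers:

# Per-megabit price comparison for the example above.
old_price, old_mbps = 120, 10   # three years ago: $120 for 10 Mbps
new_price, new_mbps = 120, 50   # today: $120 for 50 Mbps

old_per_mbit = old_price / old_mbps   # $12.00 per megabit
new_per_mbit = new_price / new_mbps   # $2.40 per megabit
drop = 1 - new_per_mbit / old_per_mbit

print(f"${old_per_mbit:.2f}/Mbps then, ${new_per_mbit:.2f}/Mbps now "
      f"({drop:.0%} drop per megabit)")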

To answer the question on the future of bandwidth prices, we need to get a handle on what is driving them lower today.

Here are some of the factors:

1) The rise of Wave Division Multiplexing.

This has most likely been the biggest factor in the recent reduction of prices. Although the technology has been around for a while, many businesses were locked into 3- and 4-year contracts. Now, in 2012, most carriers have upgraded their networks to use WDM. The ability to greatly increase bandwidth capacity without the cost of laying new cables is now being passed on to the wholesale market.

2) The recession.

There is very little expansion of the customer base for demand of wired bandwidth. Yes, there is a huge space for wireless phones and such, and I’ll deal with those separately, but for the wired home or business there just are no new customers and there has not been for the past 8 years or so.

3) Broadband Initiative.

In some areas there have been subsidies to bring in higher speed lines where private business would have otherwise not made the investment.

4) Less infrastructure spending by traditional wired providers.

This seems a bit counterintuitive, but in the past few years, established providers have slowed laying out fiber to the home, and now they are free to charge somewhat lower prices on their existing infrastructure because it is paid for. An analogy would be a rental car company that goes 3 or 4 years without investing in new cars: its expenses drop, and thus it can lower its prices.

5) Competition.

This is somewhat related to the recession. Multiple providers in a market are fighting for a flat or shrinking supply of new customers, and many of the contracts we see are being discounted to retain existing customers. Most of the sunk cost occurs in acquiring a new customer: once you have a line in place with equipment at the customer premises, the last thing you want is to get outbid by an upstart, and since you have room to move down in price, you discount heavily to retain the customer.

This may surprise you, but we believe the future (2013) holds higher prices.

Here are the reasons:

1) End of subsidies.

The government subsidies have worked but they have also been a huge embarrassment of waste and fraud, hence we won’t see any more of that for a little while.

2) Consolidation.

There will be consolidation in markets where there is competition, and the discounts will end. People love their wireless 4G, but those prices will never be competitive with wired bandwidth to the business or home. So once a region is down to a single wired supplier, that supplier will be able to raise prices or at least stop discounting.

3) Expansion.

At some point, the real estate and business economy will begin to expand, at which time backbone and switching resources will become tighter from demand (this may happen just from video demand already). In other words, once providers have to start investing in more infrastructure, they will also need to raise prices to subsidize their new investments.

Related Articles and links

Business Phone News has a nice guide to purchasing bandwidth that explains the value of bandwidth management. This excerpt is taken from their recent article on usage-based billing.

Many business owners think, “I don’t need to worry about that as my IT director, IT department or IT contractor has got that covered.” Maybe yes, but maybe no! To double-check just how well your business bandwidth is being managed, download and take the “Business Bandwidth Management Self-Analysis Survey” in our Bandwidth Management Buyers Guide.

Four Reasons Why Companies Remain Vulnerable to Cyber Attacks


Over the past year, since the release of our IPS product, we have spent many hours talking to resellers and businesses regarding Internet security. Below are our observations about security investment, and more importantly, non-investment.

1) By far the number one reason why companies are vulnerable is procrastination.

Seeing is believing, and many companies have never been hacked or compromised.

Some clarification here: most attacks do not end in something being destroyed or leave any obvious trail of data being lifted. This does not mean they do not happen; it’s just that in many cases there is no immediate ramification, hence business as usual.

Companies are run by people, and most people are reactive, and furthermore somewhat single-threaded, so they can only address a few problems at a time. Without a compelling, obvious problem, security gets pushed down the list. The exception to the procrastination rule would be verticals such as financial institutions, where security audits are mandatory (more on audits in a bit). Most companies, although aware of risk factors, are reluctant to spend on a problem that has never happened. In their defense, a company that reacts to all the security FUD might find itself hamstrung and out of business. Sometimes, to be profitable, you have to live with a little risk.

2) Existing security tools are ignored.

Many security suites are just too broad to be relevant. Information overload can lead to a false sense of coverage.

The best analogy I can give is the Tornado warning system used by the National Weather Service. Their warning system, although well-intended, has been so diffuse in specificity that after a while people ignore the warnings. The same holds true with security tools. In order to impress and out-do one another, security tools have become bloated with quantity, not quality. This overload of data can lead to an overwhelming glut of frivolous information. It would be like a stock analyst predicting every possible outcome and expecting you to invest on that advice. Without a specific, targeted piece of information, your security solution can be a distraction.

3) Security audits are mandated formalities.

In some instances, a security audit is treated as a bureaucratic mandate. When security audits are mandated as a standard, the process of the audit can become the objective. The soldiers carrying out the process will view the completed checklist as the desired result and thus may not actually counter existing threats. It’s not that the audit does not have value, but the audit itself becomes a minimum objective. And most likely the audit is a broad cookie-cutter approach which mostly serves to protect the company or individuals from blame.

4) It may just not be worth the investment.

The cost of getting hacked may be less than the ongoing fees and consumption of time required to maintain a security solution. On a mini-scale, I followed this advice on my home laptop running Windows. It was easier to reload my system every 6 months when I got a virus than to mess with all the virus protection being thrown at me, slowing my system down. The same holds true on a corporate scale. Although nobody would ever come out and admit this publicly, or make it deliberately easy, it might be more cost-effective to recover from a security breach than to proactively invest in preventing it. What if your customer records get stolen? So what? Consumers are hearing about the largest banks and government security agencies getting hacked every day. If you are a mid-sized business, it might be more cost-effective to invest in some damage control after the fact rather than jeopardize cash flow today.

So what is the future for security products? Well, they are not going to go away. They just need to be smarter, more cost-effective, and turn-key, and then perhaps companies will find the benefit-to-risk more acceptable.

Article Reference: Security Data Overload article

APconnections CTO Quoted in Wall Street Journal Article


Art Reisman, CTO of APconnections, was recently quoted and interviewed as the primary source in an article in the Wall Street Journal regarding Procter & Gamble’s employees’ Internet use. Art was asked to comment, due to his expertise in bandwidth shaping, on Procter & Gamble’s plan to restrict Internet access to sites such as Netflix and Pandora.

The article appeared in the April 4th, 2012 print edition of the Wall Street Journal. You can read the full article here in the online edition (you may need to be a subscriber to view): http://online.wsj.com/article/SB10001424052702304072004577324142847006340.html

Here is Art’s expert commentary from the article:

…A number of businesses are struggling with bandwidth problems as extensive downloading soaks up network capacity and risks slowing connections. For instance, if a company has 500 employees and three are watching Netflix movies, they could use most of a company’s bandwidth if it doesn’t have a lot.

“Indeed, 300 employees surfing the Web could use the same amount as the movie watchers”, said Art Reisman, chief technology officer of NetEqualizer, which is part of traffic-management firm APconnections Inc.

“Let’s say you merge onto the freeway and no one will let you on. If these things are running, nobody else can get on,” said Mr. Reisman, who is based in Lafayette, Colo.

Of course, if you have a bandwidth shaper in place, such as the NetEqualizer, you are reducing contention on your Internet pipe.  The NetEqualizer uses fairness-based shaping, which will allocate bandwidth to the 300 employees surfing the web, while giving less bandwidth to the movie watchers (bandwidth hogs).
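
To put rough numbers behind those quotes, here is a back-of-the-envelope comparison. The per-stream and per-surfer rates below are illustrative assumptions for the sake of the arithmetic, not figures from the article:

# Illustrative comparison: a few video streams vs. many web surfers.
stream_mbps = 5.0     # assumed rate of one streaming movie
surfer_mbps = 0.05    # assumed average load of casual web browsing

movie_watchers = 3
web_surfers = 300

video_load = movie_watchers * stream_mbps     # ~15 Mbps
surfing_load = web_surfers * surfer_mbps      # ~15 Mbps

print(f"{movie_watchers} movie watchers: ~{video_load:.0f} Mbps")
print(f"{web_surfers} web surfers: ~{surfing_load:.0f} Mbps")
# On a modest office pipe, a handful of streams can rival the combined
# load of hundreds of ordinary web users.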

Case Study: A Successful BotNet-Based Attack


By Zack Sanders – Security Expert – APconnections

In early 2012, I took on a client who was a referral from someone I had worked with when I first got out of school. When the CTO of the company initially called me, they were actually in the process of being attacked at that very moment. I got to work right away using my background as both a web application hacker and as a forensic analyst to try and solve the key questions that we briefly touched on in a blog post just last week. Questions such as:

– What was the nature of the attack?

– What kind of data was it after?

– What processes and files on the machine were malicious and/or which legitimate files were now infected?

– How could we maintain business continuity while at the same time ensuring that the threat was truly gone?

– What sort of security controls should we put in place to make sure an attack doesn’t happen again?

– What should the public and internal responses be?

Background

For the sake of this case study, we’ll call the company HappyFeet Movies – an organization that specializes in online dance tutorials. HappyFeet has three basic websites, all of which help sell and promote their movies. Most of the company’s business occurs in the United States and Europe, with few other international transactions. All of the websites reside on one physical server that is maintained by a hosting company. They are a small to medium-sized business with about 50 employees locally.

Initial Questions

I always start these investigations with two questions:

1) What evidence do you see of an attack? Defacement? Increased traffic? Interesting log entries?

2) What actions have you taken thus far to stop the attack?

Here was HappyFeet’s response to these questions:

1) We are seeing content changes and defacement on the home page and other pages. We are also seeing strange entries in the Apache logs.

2) We have been working with our hosting company to restore to previous backups. However, after each backup, within hours, we are getting hacked again. This has been going on for the last couple of months. The hosting company has removed some malicious files, but we aren’t sure which ones.

Looking For Clues

The first thing I like to do in cases like this is poke around the web server to see what is really going on under the hood. Hosting companies often have management portals or FTP interfaces where you can interact with the web server, but having root access and a shell is extremely important to me. With this privileged account, I can go and look at all the relevant files for evidence that aligns with the observed behavior. Keep in mind, at this point I have not done anything as far as removing the web server from the production environment or shutting it down. I am looking for valuable information that really can only be discovered while the attack is in progress. The fact that the hosting company has restored to backup and removed files irks me, but there is still plenty of evidence available for me to analyze.

Here were some of my findings during this initial assessment – all of them based around one of the three sites:

1) The web root for one of the three sites has a TON of files in it – many of which have strange names and recent modification dates. Files such as:

db_config-1.php

index_t.php

c99.php

2) Many of the directories (even the secure ones) are world writable, with permissions:

drwxrwxrwx

3) There are SQL dumps/backups in the web root that are zipped so when visited by a web browser the user is prompted for a download – yikes!

4) The site uses a content management system (CMS) that was last updated in 2006 and the database setup interface is still enabled and visible at the web root.

5) Directory listings are enabled, allowing a user to see the contents of the directories – making discovery of the file names above a trivial task.

6) The Apache logs show incessant SQL injection attempts, which, when run, expose usernames and passwords in plain text.

7) The Apache logs also show many entries accessing a strange file called c99.php. It appeared to be some sort of interface that took shell commands as arguments, as is evident in the logs:

66.249.72.41 - - "GET /c99.php?act=ps_aux&d=%2Fvar%2Faccount%2F&pid=24143&sig=9 HTTP/1.1" 200 286

Nature of the Attack

There were two basic findings that stood out to me most:

1) The c99.php file.

2) The successful SQL injection log entries.

c99.php

I decided to do some research and quickly found out that this is a popular PHP shell file. It was somehow uploaded to the web server and the rest of the mayhem was conducted through this shell script in the browser. But how did it get there?

The oldest log data on the server was December 19, 2011. At the very top of this log file were commands accessing c99.php, so I couldn’t really be sure how it got on there, but I had a couple guesses:

1) The most likely scenario I thought was that the attacker was able to leverage the file upload feature of the dated CMS – either by accessing it without an account, or by brute forcing an administrative account with a weak password.

2) There was no hardware firewall protecting connections to the server, and there were many legacy FTP and SSH accounts festering that hadn’t been properly removed when they were no longer needed. One of these accounts could have been brute forced – more likely an FTP account with limited access; otherwise a shell script wouldn’t really be necessary to interact with the server.

The log entries associated with c99.php were extremely interesting. There would be 50 or so GET requests, which would run commands like:

cd, ps aux, ls -al

Then there would be a POST request, which would either put a new file in the current directory or modify an existing one.

This went on for tens of thousands of lines. Although the entries had the look of manually typed commands, their sheer volume and linear, repetitive nature pointed to an automated process of some type.

SQL Injection

The SQL injection lines of the logs were also very exploratory in nature. There was a long period of information gathering and testing against a few different PHP pages to see how they responded to database code. Once the attacker realized that the site was vulnerable, the onslaught began and eventually they were able to discover the information schema and table names of pertinent databases. From there, it was just a matter of running through the tables one at a time pulling rows of data.

What Was The Attack After?

The motives were pretty clear at this point. The attacker was a) attempting to control the server for use in other attacks or to send SPAM, and b) gathering whatever sensitive information they could from databases or configuration files before moving on. Exploited user names and passwords could later be used in identity theft, for example. Both of the above motives are very standard for botnet-based attacks. It should be noted that the attacker was not specifically after HappyFeet – in fact they probably knew nothing about them – they just used automated probing to look for red flags, and when the probes returned positive results, assimilated the server into their network.

Let the Cleanup Begin

Now that the scope of the attack was more fully understood, it was time to start cleaning up the server. When I am conducting this phase of the project, I NEVER delete anything, no matter how obviously malicious or how benign. Instead, I quarantine it outside of the web root, where I will later archive and remove it for backup storage.

Find all the shell files

The first thing I did was attempt to locate all of the shell files that might have been uploaded by c99.php. Because my primary theory was that the shell file was uploaded through a file upload feature in the web site, I checked those directories first. Right away I saw a file that didn’t match the naming convention of the other files. First of all, the directory was called “pdfs” and this file had an extension of PHP. It was also called broxn.php, whereas the regular files had longer names with camel-case that made sense to HappyFeet. I visited this file in the web browser and saw a GUI-like shell interface. I checked the logs for usage of this file, but there were none. Perhaps this file was just an intermediary to get c99.php to the web root. I used a basic find command to pull a list of all PHP files from the web root forward. Obviously this was a huge list, but it was pretty easy to run through quickly because of the naming differences in the files. I only had to investigate ten or so files manually.
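
For anyone repeating this kind of sweep, the same job can be scripted. This is only a sketch (Python; the web root path is a placeholder, not HappyFeet's actual layout): it walks the document root and lists .php files newest first, so recently modified files with odd names stand out.

# Sketch: list .php files under a web root, newest first, so oddly named
# or recently modified files stand out. WEB_ROOT is a hypothetical path.
import os
import time

WEB_ROOT = "/var/www/html"  # adjust to your site's web root

php_files = []
for dirpath, dirnames, filenames in os.walk(WEB_ROOT):
    for name in filenames:
        if name.lower().endswith(".php"):
            path = os.path.join(dirpath, name)
            php_files.append((os.path.getmtime(path), path))

for mtime, path in sorted(php_files, reverse=True)[:50]:
    print(time.strftime("%Y-%m-%d %H:%M", time.localtime(mtime)), path)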

I found three other shell files in addition to broxn.php. I looked for evidence of these in the logs, found none, and quarantined them.

What files were uploaded or which ones changed?

Because of the insane number of GET requests served by c99.php, I thought it was safe to assume that every file on the server was compromised. It wasn’t worth going through the logs manually on this point; the attacker had access to the server long enough that this assumption is the only safe one. The less frequent POST requests were much more manageable. I did a grep through the Apache logs for POST requests submitted by c99.php and came up with a list of about 200 files. My thought was that these files were all either new or modified and could potentially be malicious. I began the somewhat painstaking process of manually reviewing them. Some had been overwritten back to their original state by the hosting company’s backup, but some were still malicious and in place. I noted these files, quarantined them, and retested website functionality.
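
The log pass itself is nothing fancy. Something along the lines of the Python sketch below (the log path is a hypothetical, and the real work was an ordinary grep) pulls the matching POST lines so the touched files can be read off the request strings:

# Sketch: the equivalent of grepping the Apache access log for
# POST requests that involve c99.php. The log path is a placeholder.
LOG_FILE = "/var/log/apache2/access.log"

with open(LOG_FILE, errors="replace") as log:
    hits = [line.rstrip() for line in log
            if "c99.php" in line and '"POST' in line]

print(f"{len(hits)} POST requests involving c99.php")
for line in hits:
    print(line)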

Handling the SQL injection vulnerabilities

The dated CMS used by this site was riddled with SQL injection vulnerabilities. So much so that my primary recommendation for handling it was to build a brand new site. That process, however, takes time, and we needed a temporary solution. I used the log data I had to figure out which pages the botnet was primarily targeting with SQL attacks, and then manually modified the PHP code to do basic sanitizing on all inputs to those pages. This immediately thwarted SQL attacks going forward, but the damage had already been done. The big question at this point was how to handle the fact that all usernames and passwords had been compromised.
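
The log triage that identified which pages were being targeted was essentially pattern matching for common injection keywords. Here is a rough sketch, with an illustrative log path and pattern list (the sanitizing itself was then done directly in the PHP for those pages):

    # Rank PHP pages by how often they appear in requests containing
    # common SQL injection keywords
    grep -iE "union.*select|information_schema|concat\(" /var/log/apache2/access.log* \
      | awk '{print $7}' | cut -d'?' -f1 | sort | uniq -c | sort -rn | head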

Improving Security

Now that I felt the server was sufficiently cleaned, it was time to beef up the security controls to prevent future attacks. Here are some of the primary tasks I did to accomplish this:

1) Added a hardware firewall for SSH and FTP connections.

I worked with the hosting company to put this appliance in front of the web server. Now, only specific IPs could connect to the web server via SSH and FTP.
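
For illustration, the effect is roughly equivalent to iptables rules like the following. This is not the appliance’s actual configuration, and the administrative IP shown is a placeholder:

    # Allow SSH (22) and FTP (21) only from a known administrative address,
    # and drop those ports for everyone else
    iptables -A INPUT -p tcp -s 203.0.113.10 -m multiport --dports 21,22 -j ACCEPT
    iptables -A INPUT -p tcp -m multiport --dports 21,22 -j DROP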

2) Audited and recreated all accounts.

I changed the passwords of all administrative accounts on the server and in the CMS, and regenerated database passwords.

3) Put IP restrictions on the administrative console of the CMS.

Now, only certain IP addresses could access the administrative portal.

4) Removed all files related to install and database setup for the CMS.

These files were no longer necessary and only presented a security risk.

5) Removed all zip files from the web root forward and disabled directory listings.

These files were readily available for download and exposed all sorts of sensitive information. I also disabled directory listings, which helps prevent successful information gathering.
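
A sketch of that cleanup on an Apache server, with illustrative paths:

    # Quarantine every zip archive found under the web root
    mkdir -p /var/quarantine/zips
    find /var/www/html -type f -name "*.zip" -exec mv {} /var/quarantine/zips/ \;

    # Disable directory listings (assumes .htaccess overrides are allowed)
    echo "Options -Indexes" >> /var/www/html/.htaccess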

6) Hashed customer passwords for all three sites.

Now, the passwords for user accounts were not stored in plain text in the database.

7) Added file integrity monitoring to the web server.

Whenever a file changes, I am notified via email. This greatly helps reduce the scope of an attack should one breach all of the other controls.
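
There are dedicated tools for this, but the core idea can be as simple as a checksum baseline compared from cron. A minimal sketch, with placeholder paths and email address:

    # One time: build a checksum baseline of the web root
    find /var/www/html -type f -exec sha256sum {} \; | sort > /var/baseline.sha256

    # From cron: compare the current state to the baseline and mail any differences
    find /var/www/html -type f -exec sha256sum {} \; | sort > /tmp/current.sha256
    if ! diff -q /var/baseline.sha256 /tmp/current.sha256 > /dev/null; then
      diff /var/baseline.sha256 /tmp/current.sha256 | mail -s "Web root changed" admin@example.com
    fi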

8) Wrote a custom script that blocks IP addresses that put malicious content in the URL.

This helps prevent information gathering or further vulnerability probing. In effect, the script acts like a miniature NetGladiator.
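
The script itself isn’t published here, but the general idea can be sketched in a few lines: watch the access log for obviously malicious URL patterns and block the offending source address. The log path, patterns, and use of iptables below are assumptions for illustration only:

    # Watch the access log and block any IP that sends a malicious-looking URL
    tail -F /var/log/apache2/access.log | while read line; do
      if echo "$line" | grep -qiE "union.*select|\.\./|/etc/passwd|base64_decode"; then
        ip=$(echo "$line" | awk '{print $1}')
        # add a DROP rule only if one isn't already in place for this IP
        iptables -C INPUT -s "$ip" -j DROP 2>/dev/null || iptables -A INPUT -s "$ip" -j DROP
      fi
    done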

9) Installed anti-virus software on the web server.
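
The specific product matters less than actually running the scans; with ClamAV, for example, a periodic scan of the web root looks something like this:

    # Recursive scan of the web root, reporting only infected files
    clamscan -ri /var/www/html --log=/var/log/clamscan.log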

10) Removed world-writable permissions from every directory and adjusted ownership accordingly.

No directory should ever be world-writable; leaving it that way is usually just a lazy way of avoiding proper ownership settings. The world-writable directories on this server allowed the attack to be far broader than it had to be.
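
Finding and fixing those permissions is straightforward; the web root path and ownership below are illustrative:

    # List every world-writable directory under the web root
    find /var/www/html -type d -perm -0002

    # Remove the world-writable bit and assign proper ownership
    find /var/www/html -type d -perm -0002 -exec chmod o-w {} \;
    chown -R www-data:www-data /var/www/html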

11) Developed an incident response plan.

I worked with the hosting company and HappyFeet to develop an internal incident response policy in case something happens in the future.

Public Response

Because all usernames and passwords were compromised, I urged HappyFeet to communicate the breach to their customers. They did so, and later received feedback from users who had experienced identity theft. This can be a tough step to take from a business point of view, but transparency is always the best policy.

Ongoing Monitoring

It is not enough to implement the above controls and then set them and forget them. There must be ongoing tweaking and monitoring to ensure a strong security profile. For HappyFeet, I set up a yearly monitoring package that includes:

– Manual and automated log monitoring.

– Server vulnerability scans once a quarter, and web application scans once every six months.

– Manual user history review.

– Manual anti-virus scans and results review.

Web Application Firewalls

I experimented with two types of web application firewalls for HappyFeet. Both took me down the road of broken functionality and over-aggressive blocking. One had to be completely uninstalled, and the other is running in monitoring mode because protection mode blocked legitimate requests. It also alerts on probing attempts about 5,000 times per day – most of which are not real attacks – and that volume of alerts is unmanageable. Its only value is in generating data for improving my custom script, which blocks IPs based on basic malicious attempts.

This is a great example of how NetGladiator can provide a lot of value in the right environment. They don’t need an intense, enterprise-level intrusion prevention system – they just need to block the basics without breaking functionality in their web sites. The custom script, much like NetGladiator, suits their needs to a T and can also be configured to reflect previous attacks and vulnerabilities I found in their site that are too numerous to patch manually.

Lessons Learned

Here are some key take-aways from the above project:

– Being PROACTIVE is so much better than being REACTIVE when it comes to web security. If you are not sure where you stack up, have an expert take a look.

– Always keep software and web servers up to date. New security vulnerabilities arrive on the scene daily, and it’s extremely likely that old software is vulnerable. Often, security holes are even published publicly, so an attacker only has to find out which version you are running and test the known flaw.

– Layered security is king. The security controls mentioned above prove just how powerful layering can be. They are working together in harmony to protect an extremely vulnerable application effectively.

If you have any questions on NetGladiator, web security, or the above case study, feel free to contact us any time! We are here to help, and don’t want you to ever experience an attack similar to the one above.

Why is the Internet Access in My Hotel So Slow?


The last several times I have stayed in Ireland and London, my wireless Internet became so horrific in the evening hours that I ended up walking down the street to work at the local Internet cafe. I’ll admit that hotel Internet service is hit or miss – sometimes it is fine, and other times it is terrible. Why does this happen?

To start to understand why slow Internet service persists at many hotels, you must first understand the business model.

Most hotel chains are run by real estate and management companies; they do not know the intricacies of wireless networks any more than they can fix a broken U-joint on the hotel airport van. Hence, they hire out their IT – both for implementation and for design consulting. The marching orders to their IT consultant are usually to build a system that generates revenue for the hotel: how can we charge for this service? The big cash cow for the hotel industry used to be the phone system, and with the advent of cell phones that went away. Then it was on-demand movies (mostly porn), and that is fading fast. Competing between operators on great free Internet service has not been a priority. However, even within this business model, there is no reason the problem cannot be solved.

There are a multitude of reasons that Internet service can gridlock in a hotel. Sometimes it is wireless interference, but by far the most common reason is too many users trying to watch video during peak times (maybe a direct result of pay-on-demand movies). When this happens you get the rolling brownout. The service works for 30 seconds or so, duping you into thinking you can send an e-mail or finish a transaction; but just as you submit your request, you notice everything is stuck, with no progress messages in the lower corner of your browser. And then you get an HTTP timeout. Wait perhaps 30 seconds, and all of a sudden things clear up and seem normal, only for the cycle to repeat again.

The simple solution for this gridlock problem is to use a dynamic fairness device such as our NetEqualizer. Many operators take the first step in bandwidth control and use their routers to enforce fixed rate limits per customer; however, this will only provide some temporary relief and will not work in many cases.
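
To make the distinction concrete, a fixed rate limit is typically something like the following Linux tc sketch, which caps a single guest IP at a set speed no matter how busy or idle the uplink is; the interface, IP, and rates are illustrative. A dynamic fairness device instead adjusts limits on the fly based on overall link load, which is why it holds up better at peak times:

    # Cap one guest (192.168.1.50) at 2 Mbps on the LAN-facing interface,
    # regardless of current congestion
    tc qdisc add dev eth1 root handle 1: htb default 10
    tc class add dev eth1 parent 1: classid 1:10 htb rate 100mbit
    tc class add dev eth1 parent 1: classid 1:20 htb rate 2mbit ceil 2mbit
    tc filter add dev eth1 protocol ip parent 1: prio 1 u32 match ip dst 192.168.1.50/32 flowid 1:20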

The next time you experience the rolling brownout, send the hotel a link to this blog article (if you can get the email out). The hotels where we have implemented our solution are doing cartwheels down the street, and we’d be happy to share their stories with anybody who inquires.