Editor’s Note: Updated July 27th, 2011 with material from www.pewinternet.org:
YouTube studies continue to confirm what I’m sure we all are seeing – that Americans are creating, sharing and viewing video online more than ever – according to a Pew Research Center Internet & American Life Project study released Tuesday.
According to Pew, fully 71% of online Americans use video-sharing sites such as YouTube and Vimeo, up from 66% a year earlier. The use of video-sharing sites on any given day also jumped five percentage points, from 23% of online Americans in May 2010 to 28% in May 2011. This figure (28%) is slightly lower than the 33% Video Metrix reported in June, but is still significant.
To download or read the full study, click on this link: http://pewinternet.org/Reports/2011/Video-sharing-sites/Report.aspx
———————————————————————————————————————————————————
YouTube viewership in May 2011 was approximately 33 percent of video viewed on the Internet in the U.S., according to data from the comScore Video Metrix released on June 17, 2011.
Google sites, driven primarily by video viewing at YouTube.com, ranked as the top online video content property in May with 147.2 million unique viewers, which was 83 percent of the total unique viewers tracked. Google Sites had the highest number of viewing sessions with more than 2.1 billion, and highest time spent per viewer at 311 minutes, crossing the five-hour mark for the first time.
To read more on the data released by comScore, click here. comScore, Inc. (NASDAQ: SCOR) is a global leader in measuring the digital world and preferred source of digital business analytics. For more information, please visit www.comscore.com/companyinfo.
This trend further confirms why our NetEqualizer Caching Option (NCO) is geared to caching YouTube videos. While NCO will cache any file sized from 2MB-40MB traversing port 80, the main target content is YouTube. To read more about the NetEqualizer Caching Option to see if it’s a fit for your organization, read our YouTube Caching FAQ or contact Sales at sales@apconnections.net.
Quality of service, or QoS as it’s commonly known, is one of those overused buzz words in the networking industry. In general, it refers to the overall quality of online activities such as video or VoIP calls, which, for example, might be judged by call clarity. For providers of Internet services, promises of high QoS are a selling point to consumers. And, of course, there are plenty of third-party products that claim to make quality of service that much better.
A year ago on our blog, we broke down the costs and benefits of certain QoS methods in our article QoS Is a Matter of Sacrifice. Since then, and in part to address some of the drawbacks and shortcomings we discussed, we’ve developed a new NetEqualizer release offering a novel way to provide QoS over your Internet link using a type of service (ToS) bit. In the article that follows, we’ll show that the NetEqualizer is the only optimization device that can provide QoS in both directions of a voice or video call over an Internet link.
This is worth repeating: The NetEqualizer is the only device that can provide QoS in both directions for a voice or video call on an open Internet link. Traditional router-based solutions can only provide QoS in both directions of a call when both ends of a link are controlled within the enterprise. As a result, QoS is often reduced and limited. With the NetEqualizer, this limitation can now be largely overcome.
First, let’s step back and discuss why typical routers using ToS bits cannot ensure QoS for an incoming stream over the Internet. Consider a typical scenario with a VoIP call that relies on ToS bits to ensure quality within the enterprise. In this instance, both sending and receiving routers will make sure there is enough bandwidth on the WAN link to ensure the voice data gets across without interruption. But when there is a VoIP conversation going on between a phone within your enterprise and a user out on the cloud, the router can only ensure the data going out.
When communicating enterprise-to-cloud, the router at the edge of your network can see all of the traffic leaving your network and has the ability to queue up (slow down) less important traffic and put the ToS-tagged traffic ahead of everybody else leaving your network. The problem arises on the other side of the conversation. The incoming VoIP traffic is hitting your network and may also have a ToS bit set, but your router cannot control the rate at which other random data traffic arrives.
The general rule with using ToS bits to ensure priority is that you must control both the sending and receiving sides of every stream.
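As a side note, here is a minimal sketch of how an application can mark its own outbound traffic with a ToS/DSCP value on a typical Linux or Unix host. The DSCP value of 46 (Expedited Forwarding, commonly used for VoIP) and the destination address and port are just illustrative:

```python
import socket

# Minimal sketch: mark outbound UDP traffic with a DSCP value so that
# ToS-aware routers along the path *may* give it priority queuing.
# DSCP 46 (Expedited Forwarding) occupies the upper six bits of the IP ToS byte.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Any datagram sent from this socket now carries the ToS/DSCP marking.
# Whether it is honored depends on every hop in the path, which is exactly
# the limitation discussed above for traffic arriving from the cloud.
sock.sendto(b"voice-like payload", ("192.0.2.10", 5060))
```

Marking the packets is the easy part; the hard part, as described above, is that your edge router has no say over how fast unmarked traffic arrives from the other direction.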
With data traffic originating from an uncontrolled source, such as with a Microsoft update, the Microsoft server is going to send data as fast as it can. The ToS mechanisms on your edge router have no way to control the data coming in from the Microsoft server, and thus the incoming data will crowd out the incoming voice call.
Under these circumstances, you’re likely to get customer complaints about the quality of VoIP calls. For example, a customer on a conference call may begin to notice that although others can hear him or her fine, those on the other end of the line break up every so often.
So it would seem that by the time incoming traffic hits your edge router it’s too late to honor priority. Or is it?
When we tell customers we’ve solved this problem with a single device on the link, and that we can provide priority for VoIP and video, we get looks as if we just proved the Earth isn’t flat for the first time.
But here’s how we do it.
First, you must think of QoS as the science of taking away bandwidth from the low-priority user rather than giving special treatment to a high-priority user. We’ve shown that if you create a slow virtual circuit for a non-priority connection, it will slow down naturally and thus return bandwidth to the circuit.
By only slowing down the larger low-priority connections, you can essentially guarantee more bandwidth for everybody else. The trick to providing priority to an incoming stream (voice call or video) is to restrict the flows from the other non-essential streams on the link. It turns out that if you create a low virtual circuit for these lower-priority streams, the sender will naturally back off. You don’t need to be in control of the router on the sending side.
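To make the idea concrete, here is a rough conceptual sketch (not NetEqualizer’s actual implementation) of shaping inbound traffic by penalizing the largest unmarked flows so their senders back off. All names, thresholds, and the congestion test are illustrative assumptions:

```python
import time
from collections import defaultdict

LINK_CAPACITY_BPS = 10_000_000      # assumed 10 Mbps inbound link
CONGESTION_RATIO = 0.85             # start shaping above 85% utilization
PENALTY_DELAY_SEC = 0.04            # queue delay added to low-priority flows

# Bytes seen per flow; a real shaper would reset these counters every second.
flow_bytes = defaultdict(int)

def handle_packet(flow_key, size, has_tos_priority, current_link_bps):
    """Decide whether to forward a packet immediately or hold it briefly.

    Holding (queuing) packets of a large, unmarked flow makes the remote
    sender's TCP congestion control perceive a slower circuit and back off,
    freeing inbound bandwidth for priority traffic such as VoIP.
    """
    flow_bytes[flow_key] += size
    congested = current_link_bps > LINK_CAPACITY_BPS * CONGESTION_RATIO
    is_hog = flow_bytes[flow_key] > 0.2 * LINK_CAPACITY_BPS / 8  # >20% of link per second

    if congested and is_hog and not has_tos_priority:
        time.sleep(PENALTY_DELAY_SEC)   # create the "slow virtual circuit"
    # ...then forward the packet as usual (forwarding logic omitted)
```

The key design point is that no cooperation from the sending side is required: TCP’s own congestion control does the backing off once the receiving side slows the flow down.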
For example, let’s say Microsoft is sending an update to your enterprise and it’s wiping out all available bandwidth on your inbound link. Your VPN users cannot get in, cannot connect via VoIP, etc. Sitting at your edge, the NetEqualizer will detect the ToS bits on your VPN and VoIP calls, notice the lack of ToS bits on the Microsoft update, and automatically start queuing the incoming Microsoft data. Ninety-nine times out of one hundred, this technique will cause the sending Microsoft server to sense the slower circuit and back off, and your VPN/VoIP call will receive ample bandwidth to continue without interruption.
For some reason the typical router is not designed to work this way. As a result, it’s at a loss as to how to provide QoS on an incoming link. This is something we’ve been doing for years based on behavior, and in our upcoming release we’ve improved on our technology to honor ToS bits. Prior to this release, our customers were required to identify priority users by IP address. Going forward, the standard ToS bits (which remain in the IP packet even through the cloud) will be honored, giving us a solid, viable solution for providing QoS on an incoming Internet link.
Related article: QoS over the Internet – is it possible?
Related Example: Below is an excerpt from a user who could have benefited from a NetEqualizer. In the comment below, taken from an Astaro forum, the user is lamenting the fact that despite setting QoS bits he can’t get his network to give priority to his VoIP traffic:
“Obviously, I can’t get this problem resolved by using QoS functionality of Astaro. Phone system still shows lost packets when there is a significant concurring traffic. Astaro does not shrink the bandwidth of irrelevant traffic to the favor of VoIP definitions, I don’t know where the problem is and obviously nobody can clear this up.
Astaro Support Engineer said “Get a dedicated digital line,” so I ordered one it will be installed shortly.
The only way to survive until the new line is installed was to throttle all local subnets, except for IPOfficeInternal, to ensure the latter will have enough bandwidth at any given time, but this is not a very smart way of doing this.”
Editor’s Note: Looks like metered bandwidth is back in the news. We first addressed this subject back in June 2008. Below you’ll find our original commentary followed by a few articles on the topic.
Here is our original commentary on the subject:
The recent announcement that Time Warner Cable plans to experiment with a quota-based Internet bandwidth system has sparked lively debates throughout cyberspace. Although the metering will only be done in a limited market for now, it stands as an indication of the direction ISPs may be heading in the future. Bell Canada is also taking a metered bandwidth approach; in Canada, much of Bell’s last mile is handled by resellers, and they are not happy with the change.
Over the past several years, we have seen firsthand the pros and cons of bandwidth metering. Ultimately, invoking a quota-based system does achieve the desired effect of getting customers to back off on their usage — especially the aggressive Internet users who take up a large amount of the bandwidth on a network.
However, this outcome doesn’t always develop smoothly as downsides exist for both the ISP and the consumer. From the Internet provider perspective, a quota-based system can put an ISP at a competitive disadvantage when marketing against the competition. Consumers will obviously choose unlimited bandwidth if given a choice at the same price. As the Time Warner article states, most providers already monitor your bandwidth utilization and will secretly kick you offline when some magic level of bandwidth usage has been reached.
To date, it has not been a good idea to flaunt this policy and many ISPs do their best to keep it under the radar. In addition, enforcing and demonstrating a quota-based system to customers will add overhead costs and also create more customer calls and complaints. It will require more sophistication in billing and the ability for customers to view their accounts in real time. Some consumers will demand this, and rightly so.
Therefore, a quota-based system is not simply a quick fix in response to increased bandwidth usage. Considering these negative repercussions, you may wonder what motivates ISPs to put such a system in place. As you may have guessed, it ultimately comes down to the bottom line.
ISPs often get charged, or incur cost overruns, on the total number of bytes transferred. Many are resellers of bandwidth themselves and may be billed by the byte; with metering and a quota-based system, they are simply passing this cost along to their customers. In this case, on face value, quotas allow a provider to adopt a model where they don’t have to worry about cost overruns based on their total usage. They essentially hand this problem to their subscribers.
A second common motivation is that ISPs are simply trying to keep their own peak utilization down and avoid purchasing extra bandwidth to meet the sporadic increases in demand. This is much like power companies that don’t want to incur the expense of new power plants to just meet the demands during peak usage times.
Quotas in this case do have the desired effect of lowering peak usage, but there are other ways to solve the problem without passing the burden of byte counting on to the consumer. For example, behavior-based fairness reallocation has proven to solve this issue without the downsides of quotas.
A final motivation for the provider is that a quota system will take some of the heat off of their backs from the FCC. According to other articles we have seen, ISPs have discreetly, if not secretly, been toying with bandwidth, redirecting it based on type and such. So, now, just coming clean and charging for what consumers use may be a step in the right direction – at least where policy disclosure is concerned.
For the consumer, this increased candor from ISPs is the only real advantage of a quota-based system. Rather than being misled and having providers play all sorts of bandwidth tricks, quotas at least put customers in the know. That said, the complexity and hassle of monitoring one’s own bandwidth usage on a monthly basis, similar to cell phone minutes, is something most consumers likely don’t want to deal with.
Personally, I’m on the fence in regard to this issue. Just like believing in Santa Claus, I liked the illusion of unlimited bandwidth, but now, as quota-based systems emerge, I may be faced with reality. It will be interesting to see how the Time Warner experiment pans out.
Related Resource: Blog dedicated to stamping out usage-based billing in Canada.
Time Bomb Ticking on Netflix Streaming Strategy (Wall Street Journal)
How much casual driving would the average American do if gasoline cost $6 a gallon? A similar question may confront Web companies pushing bandwidth-guzzling services one day.
Several Web companies, including Amazon.com, Google and Netflix, are promoting services like music and video streaming that encourage consumers to gobble up bandwidth. Indeed, Netflix’s new pricing plans, eliminating the combined DVD-streaming offering, may push more people into streaming. These efforts come as broadband providers are discussing, or actually implementing, pricing plans that eventually could make those services pricey to use.
Most obviously this is an issue for the mobile Web, still a small portion of consumer Internet traffic in North America. Verizon Communications’ majority-owned wireless service last week introduced tiered data pricing, about a year after AT&T made a similar move. But potentially much more disruptive is consumption-based pricing for “fixed broadband,” landlines that provide Internet access for consumers in their homes, either via a cable or a home Wi-Fi network. Long offered on an effectively unlimited basis, American consumers aren’t used to thinking about the bytes they consume online at home.
The Party’s Over: The End of the Bandwidth Buffet (CedMagazine.com)
As the consumption of video on broadband accelerates, moving to consumption billing is the only option.
Arguments over consumption billing and network neutrality flared up again this summer. The associative connector of the two issues is their technical underpinning: Consumption billing is based on the ability to measure, meter and/or monitor bits as they flow by. The problem is that those abilities are what worry some advocates of one version of network neutrality.
The summer season began with AT&T stirring things up with an announcement that it was moving toward adopting consumption billing for wireless broadband.
Internet Providers Want to Meter Usage: Customers Who Like To Stream Movies, TV Shows May Get Hit With Extra Fees (MSNBC)
If Internet service providers’ current experiments succeed, subscribers may end up paying for high-speed Internet based on how much material they download. Trials with such metered access, rather than the traditional monthly flat fee for unlimited connection time, offer enough bandwidth that they won’t affect many consumers — yet…
Editor’s final note: We are also seeing renewed interest in quota-based systems. We completely revamped our NetEqualizer quota interface this spring to meet rising demand.
Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.
By Art Reisman
Art Reisman is currently CTO and co-founder of NetEqualizer.
Imagine if every time you went to a gas station the meters were adjusted to exaggerate the amount of fuel pumped, or the gas contained inert additives. Most consumers count on the fact that state and federal regulators monitor their local gas station to ensure that a gallon is a gallon and the fuel is not a mixture of water and rubbing alcohol. But in the United States, there are no rules governing truth in bandwidth claims. At least none that we are aware of.
Given there is no standard regulating Internet speed, it’s up to the consumer to take the extra steps to make sure they’re getting what they pay for. In the past, we’ve offered some tips both on speeding up your Internet connection as well as questions you should ask your provider. Here are some additional tips on how to fairly test your Internet speed.
1. Use a speed test site that mimics the way you actually access the Internet.
Why?
Using a popular speed test tool is too predictable, and your Internet provider knows this. In other words, they can optimize their service to show great results when you use a standard speed test site. To get a better measure of your speed, your test must be unpredictable. Think of a movie star going to the Oscars. With time to plan, they are always going to look their best. But the candid pictures captured by the tabloids never look quite as good.
To get a candid picture of your provider’s true throughput, we suggest using a tool such as the speed test utility from M-Lab.
2. Try a very large download to see if your speed is sustained.
We suggest downloading a full Knoppix CD. Most download utilities will give you a status bar on the speed of your download. Watch the download speed over the course of the download and see if the speed backs off after a while.
Why?
Some providers will start slowing your speed after a certain amount of data is passed in a short period, so the larger the file in the test the better. The common speed test sites likely do not use large enough downloads to trigger a slower download speed enforced by your provider.
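If you want to watch the sustained speed yourself, here is a rough Python sketch that downloads a large file and reports throughput every few seconds. The URL is a placeholder; substitute a large file such as a Knoppix image, and note the 5-second reporting window is arbitrary:

```python
import time
import urllib.request

URL = "https://example.com/large-file.iso"  # placeholder: point at a large download
CHUNK = 64 * 1024
REPORT_EVERY = 5.0  # seconds between speed reports

with urllib.request.urlopen(URL) as resp:
    total = 0
    window_bytes = 0
    window_start = time.time()
    while True:
        chunk = resp.read(CHUNK)
        if not chunk:
            break
        total += len(chunk)
        window_bytes += len(chunk)
        now = time.time()
        if now - window_start >= REPORT_EVERY:
            mbps = window_bytes * 8 / (now - window_start) / 1_000_000
            print(f"current speed: {mbps:.2f} Mbps  (total {total/1e6:.1f} MB)")
            window_bytes = 0
            window_start = now
```

If the reported speed drops noticeably after the first 50 to 100 megabytes, that is a hint your provider may be throttling sustained transfers.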
3. If you must use a standard speed test site, make sure to repeat your tests with at least three different speed test sites.
Different speed test sites use different methods for passing data and results will vary.
4. Run your tests during busy hours — typically between 5 and 9 p.m. — and try running them at different times.
Oftentimes, ISPs have trouble providing their top advertised speeds during busy hours.
5. Make sure to shut off other activities that use the Internet when you test.
This includes other computers in your house, not just the computer you are testing from.
Why?
All the computers in your house share the same Internet pipe to your provider. If somebody is watching a Netflix movie while you run your test, the movie stream will skew your results.
Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.
Editor’s Note: This article from The Associated Press appeared today on Yahoo! Finance. It sheds some light on Verizon’s plans and gives additional details on some of the issues discussed in our article below.
NEW YORK (AP) — Are you a wireless data glutton or a nibbler?
Many Verizon Wireless customers will have to figure that out — perhaps as soon as this week — as the country’s largest wireless carrier is set to introduce data plans with monthly usage caps.
Here’s some help determining which plan will work for you, even if you don’t know how many megabytes are in a gigabyte.
Verizon hasn’t said what its plans will look like. But because AT&T introduced capped data plans a year ago and T-Mobile USA eliminated its unlimited data plan in May, this is well-trod ground.
By Art Reisman, CTO, www.netequalizer.com
The subject of Internet speed and how to make it go faster is always a hot topic. That raises the question: if everybody wants their Internet to go faster, what are some of the limitations? I mean, why can’t we just achieve infinite speeds when we want them and where we want them?
Below, I’ll take on some of the fundamental gating factors of Internet speeds, primarily exploring the difference between wired and wireless connections. As we have “progressed” from a reliance on wired connections to a near-universal expectation of wireless Internet options, we’ve also put some limitations on what speeds can be reliably achieved. I’ll discuss why the wired Internet to your home will likely always be faster than the latest fourth generation (4G) wireless being touted today.
To get a basic understanding of the limitations with wireless Internet, we must first talk about frequencies. (Don’t freak out if you’re not tech savvy. We usually do a pretty good job at explaining these things using analogies that anybody can understand.) The reason why frequencies are important to this discussion is that they’re the limiting factor to speed in a wireless network.
The FCC allows cell phone companies and other wireless Internet providers to use a specific range of frequencies (channels) to transmit data. For the sake of argument, let’s just say there are 256 frequencies available to the local wireless provider in your area. So in the simplest case of the old analog world, that means a local cell tower could support 256 phone conversations at one time.
However, with the development of better digital technology in the 1980s, wireless providers have been able to juggle more than one call on each frequency. This is done using a time-sharing system where bits are transmitted over the frequency in round-robin fashion, such that several users share the channel at one time.
The wireless providers have overcome the problem of having multiple users sharing a channel by dividing it up into time slices. Essentially this means that when you are talking on your cell phone or bringing up a Web page on your browser, your device pauses to let other users on the channel. Only in the best case would you have the full speed of the channel to yourself (perhaps at 3 a.m. on a deserted stretch of interstate). For example, I just looked over some of the mumbo jumbo and promises of one-gigabit speeds for 4G devices, but only in a perfect world would you be able to achieve that speed.
In the real world of wireless, we need to know two things to determine the actual data rate to the end user: 1) the realistic carrying capacity of a wireless channel, and 2) how many users are sharing that channel.
The answer to part one is straightforward: A typical wireless provider has channel licenses for frequencies in the 800 megahertz range.
A rule of thumb for transmitting digital data over the airwaves is that you can only send bits of data at 1/2 the frequency. For example, 800 megahertz is 800 million cycles per second and 1/2 of that is 400 million cycles per second. This translates to a theoretical maximum data rate of 400 megabits. Realistically, with noise and other environmental factors, 1/10 of the original frequency is more likely. This gives us a maximum carrying capacity per channel of 80 megabits and a ballpark estimate for our answer to part one above.
However, the actual answer to variable two, the number of users sharing a channel, is a closely guarded secret among service providers. Conservatively, let’s just say you’re sharing a channel with 20 other users on a typical cell tower in a metro area. With 80 megabits to start from, this would put your individual maximum data rate at about four megabits during a period of heavy usage.
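For those who like to see the arithmetic spelled out, here it is in a few lines of Python. Every input is a rough assumption from the discussion above, not a measured value:

```python
# Back-of-the-envelope estimate using the assumptions in the text above.
channel_freq_hz = 800_000_000          # 800 MHz channel license
theoretical_bps = channel_freq_hz / 2  # rule of thumb: bits at ~1/2 the frequency
realistic_bps = channel_freq_hz / 10   # noise and environment cut it to ~1/10
users_per_channel = 20                 # assumed sharing on a busy metro tower

per_user_bps = realistic_bps / users_per_channel
print(f"theoretical channel rate: {theoretical_bps/1e6:.0f} Mbps")   # 400 Mbps
print(f"realistic channel rate:   {realistic_bps/1e6:.0f} Mbps")     # 80 Mbps
print(f"per user under load:      {per_user_bps/1e6:.0f} Mbps")      # 4 Mbps
```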
So getting back to the focus of the article, we’ve roughly worked out a realistic cap on your super-cool new 4G wireless device at four megabits. By today’s standards, this is a pretty fast connection. But remember, this is a conservative, benefit-of-the-doubt best case. Wireless providers are now talking about usage quotas and charging severely for overages, which suggests they must already be teetering on gridlock with their data networks. There is limited frequency real estate and high demand for content data services. This is likely to only grow as more and more users adopt mobile wireless technologies.
So where should you look for the fastest and most reliable connection? Well, there’s a good chance it’s right at home. A standard fiber connection, like the one you likely have with your home network, can go much higher than four megabits. However, as with the channel sharing found with wireless, you must also share the main line coming into your central office with other users. But assuming your cable operator runs a point-to-point fiber line from their office to your home, gigabit speeds would certainly be possible, and thus wired connections to your home will always be faster than the frequency limited devices of wireless.
Related Article: Commentary on Verizon quotas
Interesting side note: in this article by Deloitte, they do not mention frequency spectrum limitations as a limiting factor to growth.
According to a report published in ChannelPartnersOnline on June 20th, 2011, Verizon is officially moving to a usage-based billing model for new smartphone subscribers as of July.
ChannelPartners reports that Verizon Wireless plans to move to tiered pricing next month on its data plans for new smartphone customers. On smartphones, including Apple’s iPhone, Verizon Wireless offers an unlimited email and data plan for $29.99 per month. Tiered pricing is very common internationally, but U.S. mobile operators have been slow to move away from all-you-can-eat data plans.
To read the full article, click here.
We were not asked to comment, but if we were, we would agree that usage-based billing more accurately applies charges for services to those using the services. In fact, since April 2010, Internet Providers (ISPs, WISPs, etc.) that want to charge their customers by usage can implement NetEqualizer’s Quota API to track usage over a specified time period.
In addition, if an Internet provider wants to enforce usage levels, the NetEqualizer also supports the use of “rate limits” through its Hard Limits feature. Internet Providers can set inbound and outbound Hard Limits by individual IP, for a whole Class B or Class C subnet, or any legal subnet value 1-32.
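As a generic illustration only (this is not the NetEqualizer Hard Limits implementation), per-IP rate limiting is commonly built on a token bucket per address. The 2 Mbps limit and burst size below are made-up values:

```python
import time
from collections import defaultdict

DEFAULT_RATE_BPS = 2_000_000   # made-up example: 2 Mbps hard limit per IP
DEFAULT_BURST = 256 * 1024     # allow short bursts of up to 256 KB

class TokenBucket:
    """Simple token bucket: refills at rate_bps/8 bytes per second, caps at `burst` bytes."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.time()

    def allow(self, size):
        now = time.time()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False  # over the hard limit: drop or queue the packet

buckets = defaultdict(lambda: TokenBucket(DEFAULT_RATE_BPS, DEFAULT_BURST))

def admit(ip, packet_size):
    """Return True if this packet fits within the per-IP rate limit."""
    return buckets[ip].allow(packet_size)
```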
We believe that usage-based billing, when broadly adopted, will level the playing field throughout the Internet service space, enabling smaller Internet providers to compete more effectively with larger carriers. Many Internet providers have to charge for usage levels, in order to keep their contention ratios manageable and to remain profitable. In the past, this has been disadvantageous in markets where larger providers have come in and charged flat fees to consumers. With the advent of usage-based billing in the cellular space, consumers will be more apt to expect to pay for usage for all their Internet services.
We will keep watching the developments in this area, and reporting our thoughts here. If you are a small Internet provider, what is your take on usage-based billing? Let us know in the comments section below.
I just read the WordPress article about World IPv6 Day, and many of the comments in response showed only a basic understanding of what an IPv6 Internet address actually is. To better explain this issue, we have provided a 10-point FAQ that should help clarify, in simple terms and analogies, the ramifications of transitioning to IPv6.
To start, here’s an overview of some of the basics:
Why are we going to IPv6?
Every device connected to the Internet requires an IP address. The current system, put in place back in 1977, is called IPv4 and was designed for 4 billion addresses. At the time, the Internet was an experiment and there was no central planning for anything like the commercial Internet we are experiencing today. The official reason we need IPv6 is that we have run out of IPv4 addresses (more on this later).
Where does my IP address come from?
A consumer with an account through their provider gets their IP address from their ISP (such as Comcast). When your provider installed your Internet, they most likely put a little box in your house called a router. When powered up, this router sends a signal to your provider asking for an IP address. Your provider has large blocks of IP addresses that were most likely allocated to them by IANA.
If there are 4 billion IPv4 addresses, isn’t that enough for the world right now?
It should be, considering the world population is about 6 billion. We can assume for now that private access to the Internet is a luxury of the economic middle class and above. Generally you need one Internet address per household and only one per business, so it would seem that perhaps 2 billion addresses would be plenty at the moment to meet the current need.
So, if this is the case, why can’t we live with 4 billion IP addresses for now?
First of all, industrialized societies are putting (or planning to put) Internet addresses in all kinds of devices (mobile phones, refrigerators, etc.). So allocating one IP address per household or business is no longer valid. The demand has surpassed this considerably as many individuals require multiple IP addresses.
Second, the IP addresses were originally distributed by IANA like cheap wine. Blocks of IP addresses were handed out in chunks to organizations in much larger quantities than needed. In fairness, at the time it was believed that every computer in a company would need its own IP address. However, since the advent of NAT/PAT in the 1990s, most companies and many ISPs can easily stretch a single IP to 255 users (sharing it). That brings the actual number of users that IPv4 could potentially support to well over a trillion!
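A toy example may help show why NAT/PAT stretches one public address so far: the translator simply keeps a table mapping each (private address, port) pair to a distinct public port. The addresses and ports below are arbitrary documentation-range examples:

```python
# Toy NAT/PAT table: many private hosts share one public IPv4 address.
PUBLIC_IP = "203.0.113.5"   # example public address (documentation range)

nat_table = {}              # (private_ip, private_port) -> public_port
next_public_port = 40000

def translate_outbound(private_ip, private_port):
    """Map an internal flow to a unique public port on the shared IP."""
    global next_public_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return PUBLIC_IP, nat_table[key]

# Three hosts on a private subnet all appear to the Internet as 203.0.113.5:
for host, port in [("192.168.1.10", 51000), ("192.168.1.11", 51000), ("192.168.1.12", 443)]:
    print(host, port, "->", translate_outbound(host, port))
```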
Yet, while this is true, the multiple addresses originally distributed to individual organizations haven’t been reallocated for use elsewhere. Most of the attempted media scare surrounding IPv6 is based on the fact that IANA has given out all the centrally controlled IP addresses, and the IP addresses already given out are not easily reclaimed. So, despite there being plenty of supply overall, it’s not distributed as efficiently as it could be.
Can’t we just reclaim and reuse the surplus of IPv4 addresses?
Since we just very recently ran out, there is no big motivation in place for the owners to give/sell the unused IPs back. There is currently no mechanism or established commodity market for them (yet).
Also, once allocated by IANA, IP addresses are not necessarily accounted for by anyone. Yes, there is an official owner, but they are not under any obligation to make efficient use of their allocation. Think of it like a farmer with a large set of historical water rights. Suppose the farmer retires and retains his water rights because there is nobody to sell them back to. The difference here is that water rights are very valuable. Perhaps you see where I am going with this for IPv4? Demand and need are not necessarily the same thing.
How does an IPv4-enabled user talk to an IPv6 user?
In short, they don’t. At least not directly. For now it’s done with smoke and mirrors. The dirty secret with this transition strategy is that the customer must actually have both IPv6 and IPv4 addresses at the same time. They cannot completely switch to an IPv6 address without retaining their old IPv4 address. So it is in reality a duplicate isolated Internet where you are in one or the other.
Communication is possible, though, using a dual stack. The dual-stack method is what allows an IPv6 customer to talk to both IPv4 users and IPv6 users at the same time: the Internet provider will connect two IPv6 users directly when both ends are IPv6 enabled. However, IPv4 users CANNOT talk to IPv6 users, so the customer must maintain an IPv4 address; otherwise they would cut themselves off from 99.99+ percent of Internet users. The dual-stack method is really just maintaining two separate Internet interfaces. Without keeping the IPv4 address at the same time, a customer would isolate themselves from huge swaths of the world until everybody had IPv6. To date, in limited tests, less than 0.0026 percent of Internet traffic has been IPv6, and that was during a short test experiment; the rest is IPv4.
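For the curious, here is what a dual stack looks like at the socket level: a minimal sketch of a listener that accepts both IPv6 clients and IPv4 clients (as IPv4-mapped addresses) on a single socket. Support and defaults vary by operating system, so treat this as a sketch rather than production code:

```python
import socket

# Minimal dual-stack listener: one IPv6 socket that also accepts IPv4 clients
# as "IPv4-mapped" IPv6 addresses (::ffff:a.b.c.d).
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
# Clearing IPV6_V6ONLY lets the same socket serve both protocol families.
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 8080))
srv.listen(5)

conn, addr = srv.accept()
# An IPv6 client shows up as e.g. ('2001:db8::1', ...),
# an IPv4 client as ('::ffff:198.51.100.7', ...).
print("connection from", addr)
conn.close()
srv.close()
```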
Why is it so hard to transition to IPv6? Why can’t we just switch tomorrow?
To recap previous points:
1) IPv4 users, all 4 billion of them, currently cannot talk to new IPv6 users.
2) IPv6 users cannot talk to IPv4 users unless they keep their old IPv4 address and a dual stack.
3) IPv4 still works quite well, and there are IPv4 addresses available. However, although the reclamation of IPv4 addresses currently lacks some organization, it may become more economically feasible as problems with the transition to IPv6 crop up. Only time will tell.
What would happen if we did not switch? Could we live with IPv4?
Yes, the Internet would continue to operate. However, as the pressure for new and easy to distribute IP addresses for mobile devices heats up, I think we would see IP addresses being sold like real estate.
Note: A bigger economic gating factor to the adoption of the expanding Internet is the limitation of wireless frequency space. You can’t create any more frequencies for wireless in areas that are already saturated. IP addresses are just now coming under some pressure, and as with any fixed commodity, we will see their value rise as the holders of large blocks of IP addresses sell them off and redistribute the existing 4 billion. I suspect the set we have can last another 100 years under this type of system.
Is it possible that a segment of the Internet will split off and exclusively use IPv6?
Yes, this is a possible scenario, and there is precedent for it. Vendors, given a chance, can eliminate competition simply by having a critical mass of users willing to adopt their services. Here is the scenario: (Keep in mind that some of the following contains opinions and conjecture on IPv6, the future, and the motivation of players involved in pushing IPv6.)
With a complete worldwide conversion to IPv6 not likely in the near future, a small number of larger ISPs and content providers turn on IPv6 and start serving IPv6 enabled customers with unique and original content not accessible to customers limited to IPv4. For example, Facebook starts a new service only available on their IPv6 network supported by AT&T. This would be similar to what was initially done with the iPad and iPhone.
It used to be that all applications on the Internet ran from a standard Web browser and were device independent. However, there is a growing subset of applications that only run on Apple devices. Just a few years ago it was a foregone conclusion that vendors would make Web applications capable of running on any browser and any hardware device. I am not so sure this is the case anymore.
When will we lose our dependency on IPv4?
Good question. For now, most of the push for IPv6 seems to be coming from vendors using the standard fear tactic. However, as is always the case, with the development of new products and technologies, all of this could change very quickly.
If you relied only on conspiracy theories to explain the origin of software bugs, you would likely be left with little trust in the vendors and manufacturers providing your technology. In general, the more skeptical theories chalk software bugs up to a few nefarious, and supposedly easily preventable, causes.
Although I’ve certainly seen evidence of these policies many times over my 25-year career, the following case studies are more representative of how a bug actually gets into a software release. It’s not necessarily the conspiracy it might initially seem.
My most memorable system failure took place back in the early 1990s. I was the system engineer responsible for the underlying UNIX operating system and the redundant disk arrays (RAID) on the Audix Voice Messaging system. This was before the days of widespread e-mail use. I worked for AT&T Bell Labs at the time, and AT&T had a reputation for both high price and high reliability. Our customers, almost all Fortune 500 companies, used their voice mail extensively to catalog and archive voice messages. Customers such as John Hancock paid a premium for redundancy on their voice message storage. If there were any field-related problems, the buck stopped in my engineering lab.
For testing purposes, I had several racks of Audix (trademark) systems and simulators combined with various stacks of disk drives in RAID configurations. We ran these systems for hours, constantly recording voice messages. To stress the RAID storage, we would periodically pull the power on a running disk drive. We would also smash drives with a hammer while they were running. Despite the deliberate destruction of running disk drives, in every test scenario the RAID system worked flawlessly. We never lost a voice mail message in our laboratory.
However, about six months after a major release, I got a call from our support team. John Hancock had suffered a system failure and lost every last one of their corporate voice mails. (AT&T had advised backing data up to tape, but John Hancock had decided not to use that facility because of their RAID investment. Remember, this was in the 1990s and does not reflect John Hancock’s current policies.)
The root cause analysis took several weeks of work with the RAID vendor, myself and some of the key UNIX developers sequestered in a lab in Santa Clara, California. After numerous brainstorm sessions, we were able to re-create the problem. It seemed the John Hancock disk drive had suffered what’s called a parity error.
A parity error can develop if a problem occurs when reading and writing data to the drive. When the problem emerges, the drives try to recover, but in the meantime the redundant drives read and write the bad data. As the attempts at auto recovery within the disk drive go on (sometimes for several minutes), all of the redundant drives have their copies of the data damaged beyond repair. In the case of John Hancock, when the system finally locked up, the voice message indices were useless. Unfortunately, very little could have been done on the vendor or manufacturing end to prevent this.
More recently, when APconnections released a new version of our NetEqualizer, despite extensive testing over a period of months including a new simulation lab, we had to release a patch to clean up some lingering problems with VLAN tags. It turned out the problem was with a bug in the Linux kernel, a kernel that normally gets better with time.
So what happened? Why did we not find this VLAN tag bug before the release? Well, first off, the VLAN tagging facility in the kernel had been stable for years. (The Linux kernel had been released as stable by Kernel.org.) We also had a reliable regression test for new releases that made sure it was not broken. However, our regression test only simulated the actual tag passing through the kernel. This made it much easier to test, and considering our bandwidth shaper software only affected the packets after the tag was in place, there was no logical reason to test a stable feature of the Linux kernel. To retest stable kernel features would not have been economically viable considering these circumstances.
This logic is common during pre-market testing. Rather than test everything, vendors use a regression test for stable components of their system and only rigorously test new features. A regression test is a subset of scenarios and is the only practical way to make sure features unrelated to those being changed do not break when a new release comes out. Think of it this way: Does your mechanic do a crash test when replacing the car battery to see if the airbags still deploy? This analogy may seem silly, but as a product developer, you must be pragmatic about what you test. There are almost infinite variations on a mature product and to retest all of them is not possible.
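To make the testing trade-off concrete, here is a hedged sketch of what a small regression test for a “stable” feature might look like. The VLAN-stripping function and the test frames are purely illustrative, not our actual code:

```python
import unittest

def strip_vlan_tag(frame: bytes) -> bytes:
    """Illustrative function under test: remove an 802.1Q VLAN tag (4 bytes
    following the two 6-byte MAC addresses) if one is present."""
    if len(frame) >= 18 and frame[12:14] == b"\x81\x00":
        return frame[:12] + frame[16:]
    return frame

class VlanRegressionTest(unittest.TestCase):
    # A regression suite re-runs known-good scenarios on every release;
    # it does not exhaustively retest the whole stack (e.g. the kernel itself).
    def test_tagged_frame_is_untagged(self):
        tagged = b"\xaa" * 12 + b"\x81\x00\x00\x0a" + b"\x08\x00" + b"payload"
        self.assertEqual(strip_vlan_tag(tagged), b"\xaa" * 12 + b"\x08\x00" + b"payload")

    def test_untagged_frame_passes_through(self):
        plain = b"\xaa" * 12 + b"\x08\x00" + b"payload"
        self.assertEqual(strip_vlan_tag(plain), plain)

if __name__ == "__main__":
    unittest.main()
```

Notice what the suite does not cover: it exercises the tag logic in isolation, exactly the way our own regression test simulated the tag passing through the kernel rather than retesting the kernel itself.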
Therefore, in reality, most developers want nothing more than to release a flawless product. Yet, despite a developer’s best intentions, not every stone can be turned during pre-market testing. This, however, shouldn’t deter a developer from striving for perfection — both before a release as well as when the occasional bugs appear in the field.
Where have all the Wireless ISPs gone?
July 17, 2011 — netequalizer
Rachel Carson wrote Silent Spring in 1962. She noticed a lack of robins in her yard and eventually traced the link back to DDT spraying. Robins are again abundant; given a fighting chance, they seem to prosper quite well.
Much like the robins of 1962, in the past three years I have noticed a die-off in business from wireless ISPs. Four years ago, I spent at least an hour or two a day talking to various WISPs around the USA. The mood was always upbeat, and many were adding subscribers at a rapid rate. Today the rural WISPs of the US are still out there, but expansion and investment have come to a standstill.
Is the private investment drought by small rural WISPs due to the recession?
Certainly some of the slowdown is due to the weakness in the housing market; but as one operator told me a couple of years ago, his customers will keep their Internet connection up long after they have disconnected their television and phone. Some consumers will pay their Internet bill right up to the last day of a pending foreclosure.
Much of the slowdown is due to the rural broadband stimulus.
The Rural Broadband Initiative seems to be a solution looking for a problem. From our perspective, the main thing this initiative accomplished was subsidizing a few providers at the expense of freezing billions in private equity, equity that up until the initiative had been effectively expanding the rural market through entrepreneurs.
Why did the private investment stop?
It was quite simple, really. When the playing field was level, most small operators felt like they had an upper hand against the larger providers in rural areas. For example:
– They worked smarter, with less overhead, using backhaul technologies
– There was an abundance of wireless equipment makers (based on public 802.11 frequencies) ready to help
– They had confidence that the larger operators were not interested in these low-margin niche markets
With the broadband initiative, several things happened:
– Nobody knew where the money was going to be spent or how broad the reach would be; this uncertainty froze all private expansion
– Many of these smaller providers applied for money, and only a few (if any) were awarded contracts. Think of it this way: suppose there were four restaurants in town, all serving slightly different clientele, and then a giant came along and gave one restaurant a 10-million-dollar subsidy; the other three go out of business
Related article: By the FCC’s own report, it seems the rural broadband initiative has not changed access to higher speeds.
Perhaps someday the poison of select government subsidies will come to an end, and the rural WISP will prosper again.
Update, Nov. 2011: It appears that the rural broadband initiative not only froze up the small, home-grown ISP market, but also proves again that large government subsidies are a poison pill. Related article
By Art Reisman, CTO, www.netequalizer.com
Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, Universities, Wireless ISPs, Libraries, Mining Camps, and any organization where groups of users must share their Internet resources equitably.