Some Unique Ideas on How to Fight Copyright Piracy


I promised, half seriously, in my last commentary to help the RIAA and the music industry come up with some ideas to fight media piracy.

First, let’s go over the current primary method that the RIAA uses to root out copyright violations.

Note: These techniques were brought to my attention by institutions that have been served RIAA requests, and the following is educated conjecture based on those observations.

How the RIAA Roots Out Copyright Violations

P2P Directory Scan

Most P2P clients will publicly advertise a directory of stored files for other P2P clients to see and download. I suspect most consumers who use a P2P client are not aware that they are also setting up a server when they install it. For example, if you are running a P2P client on your laptop, you are most likely also running a P2P server advertising media files from your hard drive for others to download. To find you, it is a simple matter for an RIAA agent, using another client, to ask your server what music files are available. If they find copyrighted material on your hard drive, they may then attempt to locate you and send you a cease and desist. Unless you are intentionally profiting from and distributing large amounts of copyrighted material, this is really the only practical method to track down a small-scale distributor.

So far so good, but the problem the RIAA often has with apprehension is that many home users have their IP address hidden behind their ISP. In other words, the RIAA can only track a user to their local ISP, and from there the trail goes cold. A good analogy: assume you were Dog the Bounty Hunter and all you had to go on was the address of an apartment building. That gets you in the general area of a suspect, but you would still need some help finding the unit number, making apprehension a bit more complex.

So essentially what they do is send a threatening letter to your ISP requesting that it do something about your downloading of illegal music. It is far more efficient for them to send this letter than to investigate further. The copyright lobbyists also push for favorable laws to force ISPs to be accountable for pirated material crossing their wires. These laws often get into the grey area of jeopardizing the open Internet.

Okay, now for the fun part.  Here are some unique ideas from left field to help find copyright violators.

How to Fight Media Piracy (some wild ideas)

1) Seed the Internet with a music file deliberately containing a benevolent virus.

The virus’s only symptom would be to e-mail the RIAA information about the person playing the illegal download on their computer. The ironic thing about this method is that many P2P files are encrusted with viruses already; the intent of this virus would just be to locate the violator. I am not sure if this would be illegal or considered entrapment – it would be like the police selling drugs to a user and then arresting them – but it would be effective.

2) Flood the internet with poor quality copies of the real recordings.

I am not sure if this would work or not, but the idea is that if all the free black market copies of music out there were really poor quality, that would increase the incentive to get a real version from a reputable source – especially if the names, titles, and file sizes of the bad copies could not be determined until after they were downloaded.

3) Create a giant free site like MegaUpload (if you go to this site, it is now just an FBI piracy warning).

Let it fill up with bootleg material, and once users started using the site extensively, start appending little recorded messages about violating copyright law to the music files as they go out. So when the files play, the user hears a threatening message about how they have violated the law and what can happen to them. This is a twist on idea #2 above.

Maybe the RIAA and music industry will take up one of my ideas and use it to stop copyright infringement.  If you can think of other ways to reduce piracy, please feel free to comment and add your ideas to my list.

How to Build Your Own Linux-Based Access Point in 5 Minutes


The motivations to build your own access point using Linux are many, and I have listed a few compelling reasons below:

1) You can use Linux’s rich set of firewall rules to customize access to any segment of your wireless network.
2) You can use SNMP utilities to report on traffic going through your AP.
3) You can configure your AP to send e-mail alerts if there are problems with your AP.
4) You can custom coordinate communications with other access points – for example, build your own Mesh network.
5) You can build specialized user authentication services and run them from the Linux server.
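To give a flavor of reason #1, here is a minimal sketch of the kind of firewall policy you could run on the access point itself. The interface names match the bridge setup described later in this article; the 192.168.0.0/24 management subnet is just an example, so adjust to taste:

```shell
# Let wireless clients out to the Internet, but block them from
# reaching the wired management subnet (example addresses only)
iptables -A FORWARD -i wlan0 -d 192.168.0.0/24 -j DROP
iptables -A FORWARD -i wlan0 -j ACCEPT
```

Because the rules live on the access point, you can make them as coarse or as fine-grained as you like, something an off-the-shelf consumer AP rarely allows.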

Note: We had experimented with building access points on a Linux-based server several years ago, but found that Linux support for wireless radio cards was severely lacking. Most of the compatibility issues have been solved in newer Linux kernels.

Building your own Linux access point in about 5 minutes:

Yes, 5 minutes or less is what it just took me to configure an access point by following this document, to test that it was written correctly. This was after creating the CF from a ready-made image containing Voyage. Also, I used the “edit the CF directly” method mentioned below, so I could just cut and paste the lines that belong in the four necessary files.

Building your own Linux access point using the Alix 3D2 and the Atheros-based Wistron CM9 MiniPCI card may not be the cheapest way to build an access point if you have to buy all the parts, but here is how you can do it. These instructions may be adapted to any number of other hardware combinations (such as leftover computers from your Pac-Man gaming days that happen to have an Atheros-chipset wireless radio attached), as long as Voyage sees the radio under the same device name, and so on.

This access point acts as a transparent bridge and uses your existing DHCP server to hand out IPs to wireless devices that connect to it. This means you just plug the Ethernet cable into your existing network and connect wirelessly, no fuss or muss, just as if you had plugged into your switch. This is the only setup that will be described in this article, but you can of course set up your own DHCP server on the unit if you know how to do so.

Parts list:
ALIX3D2 (ALIX.3D2) with 1 LAN and 2 miniPCI, LX800, 256 MB
18 W (15 V/1.2 A) AC-DC Power Adapter with Power Cord
Wistron CM9 MiniPCI Card
N-Type female Straight Pigtail
ANT-N-5 – Outdoor Omni Antenna, 5.5 dBi, N-Type male, Straight type (rubber ducky type)
Kingston 4 GB CompactFlash Memory Card CF/4GB

Total for the above from one provider was under $200.

Optional parts:
Power Over Ethernet Injector – about $4, and only necessary if you want to run the unit out to some area that does not have power nearby, such as an attic.
Case for Alix3D2 – price and link not available as this is a bench test model.

Assembly:
Plug the CF card (once imaged with the Voyage software, and optionally already configured as mentioned below) into the board. It only goes one way, and there is only one place to put it.
Plug the pigtail, with antenna attached, into the CM9 antenna connector closest to the center of the radio. It’s easier to do this with the radio out.
Plug the CM9 wireless radio into the card slot on the other side of the Alix board, the side with the LAN port on it.
Plug a standard LAN cable into your switch connected to your network.
Plug the power adapter into the Alix board and then into the wall (when you do this, it boots up, so ready the CF first).

Configuration tools needed:
Null modem serial cable
A Windows, Linux, or Mac machine with some terminal software installed, so you can access the serial port of your new access point for setup: Windows XP with HyperTerminal, Linux with Minicom, or Mac with ZTerm.
Optionally, instead of using a null modem cable and terminal software, you can set up the new access point by editing the CF card directly prior to installing it. Editing it directly can be a lot easier than figuring out how to use the serial port and terminal software.
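If you go the direct-editing route, the CF card mounts like any other Linux filesystem. A rough sketch follows; the device name /dev/sdb1 is an assumption (check dmesg after inserting the card, as yours may differ):

```shell
# Mount the Voyage root filesystem from the CF card (run as root)
mkdir -p /mnt/cf
mount /dev/sdb1 /mnt/cf
# ...create/edit the four files discussed below, e.g.:
#   /mnt/cf/root/apup
#   /mnt/cf/etc/hostapd/hostapd.wlan0.conf
#   /mnt/cf/etc/network/interfaces
#   /mnt/cf/etc/rc.local
umount /mnt/cf
```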

The software used was Voyage Linux. Searching for Voyage Linux will lead you to their home page at http://linux.voyage.hk/
The version used was 0.7.5 (there are probably newer versions by now).
You can create your own CF by following the instructions on the Voyage Linux website, or you can search for ready-made CF images. If you search for “voyage075_2GB_ALIX” you can currently find a ready-to-go image that will fit on a 2 GB or larger CF card. Since the suggested CF card in the parts list is 4 GB, we are good.

Now, assuming you have created a CF card with Voyage Linux 0.7.5 on it and can log into the console with your terminal software, or have access to the CF directly from a computer that can read the Linux disk, then do the following steps:

(If logged into a booted-up Alix board with the CF installed on it using the serial port, then run remountrw first so you can create and edit files.)

Set it up as an access point by first creating a file in /root called apup. In that file, put the following lines:
#!/bin/sh
# Bring up the wired interface with no IP; the bridge will own the address
/sbin/ifconfig eth0 0.0.0.0 up
# Create the bridge and add the wired interface to it
/usr/sbin/brctl addbr br0
/usr/sbin/brctl addif br0 eth0
# Start hostapd in the background using our config file
/usr/sbin/hostapd -B /etc/hostapd/hostapd.wlan0.conf
# Add the wireless interface to the bridge
/usr/sbin/brctl addif br0 wlan0
# Give the bridge an IP so the unit is reachable via SSH
/sbin/ifconfig br0 192.168.0.100 netmask 255.255.255.0 up
/sbin/route add default gw 192.168.0.1
# Turn on IP forwarding and NAT traffic out the wireless interface
echo 1 > /proc/sys/net/ipv4/ip_forward
/sbin/iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE

Change that 192.168.0.100 and netmask to whatever you want the IP for the access point to be so that you can get to it via SSH. Change the 192.168.0.1 to your default route or gateway.

Now use chmod to make /root/apup executable with something like chmod a+x /root/apup

Now edit /etc/hostapd/hostapd.wlan0.conf (creating it if it is not already there) so that it contains the following:
interface=wlan0
driver=nl80211
logger_syslog=-1
logger_syslog_level=2
logger_stdout=-1
logger_stdout_level=2
debug=4
#dump_file=/tmp/hostapd.dump
#ctrl_interface=/var/run/hostapd
#ctrl_interface_group=0
channel=1
macaddr_acl=0
auth_algs=3
eapol_key_index_workaround=0
eap_server=0
wpa=3
ssid=alix
wpa_passphrase=voyage
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
eapol_version=1

Edit the file /etc/network/interfaces and change the area that brings up eth0 to:
auto eth0
#iface eth0 inet dhcp
iface eth0 inet static
address 192.168.0.100
netmask 255.255.255.0
gateway 192.168.0.1

This way, if for some reason the bridge br0 does not come up, you may still be able to reach eth0 via the same IP you put in apup.

Now, edit /etc/rc.local and put one line towards the bottom to run /root/apup so it looks like this:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
/root/apup
exit 0

That’s it for the software setup. If you want to change the SSID and have it say something besides alix, edit the ssid line in /etc/hostapd/hostapd.wlan0.conf, and if you want a different WPA password, edit the wpa_passphrase line in there as well. The channel the radio will use is also set up there.

If you logged into the unit using the serial port and if the CF is still in read/write mode then run remountro to put it back in readonly mode and reboot.

From a laptop, you should see your new access point show up as alix, secured with WPA and the password voyage.
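Once it is up, a quick sanity check from the serial console (or over SSH to the bridge IP) can confirm that everything started. A couple of commands worth knowing:

```shell
# The bridge should list both eth0 and wlan0 as members
brctl show br0
# hostapd should be running in the background
ps ax | grep hostapd
```

If wlan0 is missing from the bridge, hostapd most likely failed to start; running it by hand without the -B flag will print its errors to the console.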

What Does it Cost You Per Mbps for Bandwidth Shaping?


Sometimes by using a cost metric you can distill a relatively complicated thing down to a simple number for comparison. For example, we can compare housing costs by Dollars Per Square Foot or the fuel efficiency of cars by using the Miles Per Gallon (MPG) metric.  There are a number of factors that go into buying a house, or a car, and a compelling cost metric like those above may be one factor.   Nevertheless, if you decide to buy something that is more expensive to operate than a less expensive alternative, you are probably aware of the cost differences and justify those with some good reasons.

A cost metric makes sense for bandwidth shaping now more than ever, because the cost of bandwidth continues to decline, and as the cost of bandwidth declines, the cost of shaping that bandwidth should decline as well. After all, it wouldn’t be logical to spend a lot of money to manage a resource that’s declining in value.

With that in mind, I thought it might be interesting to look at bandwidth shaping on a cost-per-Mbps basis. Alternatively, I could look at bandwidth shaping on a cost-per-user basis, but that metric fails to capture the declining cost of a Mbps of bandwidth. So, cost per Mbps it is.

As we’ve pointed out before in previous articles, there are two kinds of costs that are typically associated with bandwidth shapers:

1) Upfront costs (these are for the equipment and setup)

2) Ongoing costs (these are for annual renewals, upgrades, license updates, labor for maintenance, etc…)

Upfront, or equipment costs, are usually pretty easy to get.  You just call the vendor and ask for the price of their product (maybe not so easy in some cases).  In the case of the NetEqualizer, you don’t even have to do that – we publish our prices here.

With the NetEqualizer, setup time is normally less than an hour and is thus negligible, so we’ll just divide the unit price by the throughput level, and here’s the result:

I think this is what you would expect to see.

For ongoing costs, you would add up all the mandatory per-year costs and divide by throughput; the metric would then be an ongoing “yearly” per-Mbps cost.
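As a concrete illustration of both metrics, here is the arithmetic with made-up numbers (these are hypothetical figures for illustration only, not actual NetEqualizer pricing):

```shell
# Hypothetical example: a $3,500 shaper rated at 350 Mbps with a
# $350/year support contract (numbers are illustrative only)
price=3500; mbps=350; yearly=350
awk -v p="$price" -v m="$mbps" -v y="$yearly" 'BEGIN {
  printf "upfront: $%.2f per Mbps\n", p / m           # unit price / throughput
  printf "ongoing: $%.2f per Mbps per year\n", y / m  # yearly costs / throughput
}'
```

The same two divisions work for any shaper; just be sure the ongoing figure includes every mandatory yearly cost (licenses, updates, and labor), not just the support contract.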

Again, if we take the NetEqualizer as an example, the ongoing costs are almost zero. This is because it’s a turn-key appliance: it requires no time from the customer for bandwidth analysis, nor does it require any policy setup or maintenance to run effectively (it doesn’t use policies). In fact, it’s a true zero-maintenance product, and that yields zero labor costs. Besides no labor, there are no updates or licenses required (an optional service contract is available if you want ongoing access to technical support or software upgrades).

Frankly, it’s not worth the effort of graphing this one. The ongoing cost of a NetEqualizer Support Agreement ranges from $29 down to $0.20 per Mbps per year. Yet this isn’t the case for many other products, and this number should be evaluated carefully. In fact, in some cases the ongoing costs of other products exceed the upfront cost of a new NetEqualizer!

Again, the lowest cost per Mbps of bandwidth shaping may not be the best solution for you – but if it’s not, you should have some good reasons.

If you shape bandwidth now, what is your cost per Mbps of bandwidth shaping? We’d be interested to know.

If your ongoing costs are higher than the upfront costs of a new NetEqualizer and you’re open to a discussion, you should drop us a note at sales@apconnections.net.

Music Anti-Piracy in Perspective Once Again


By: Art Reisman


Art Reisman is the CTO of APconnections. He is Chief Architect on the NetGladiator and NetEqualizer product lines.

I was going to write a commentary a couple of weeks ago when the news broke about the government shutdown of the Megaupload site. Before I could get started, one of my colleagues pointed out this new undetectable file sharing tool. Although I personally condemn software and copyright piracy in any form, all I can say is the media copyright enforcement industry should have known better. They should have known that when you spray a cockroach colony with pesticide, a few will survive, and their offspring will be highly resistant.

Here is a brief excerpt from rawstory.com:

“The nature of its technology (file sharing technology) is completely decentralized, leaving moderation to the users. Individuals can rename files, flag phony downloads or viruses, create “channels” of verified downloads, and act as nodes that distribute lists of peers across the network.

In the recent U.S. debate over anti-piracy measures, absolutely none of the proposed enforcement mechanisms would affect Tribler: it is, quite literally, the content industry’s worst nightmare come to life.”

Flash back to our 2008 story about how the breakup of Napster caused the initial wave of P2P. Back in 2001, Napster actually wanted to work out licensing for all their media files, and yet they were soundly rebuked and crushed by industry executives and legal departments who saw no reason to compromise, for fear of undermining their retail media channels. Within a few months of Napster’s demise, decentralized P2P exploded with the first wave of Kazaa, BearShare and the like.

In this latest round of piracy, decentralized file sharing has dropped off a bit, and consumers started to congregate at centralized depositories again, most likely for the convenience of finding the pirated files they want quickly. And now with the shutting down of these sites, they are scattering again to decentralized P2P. Only this time, as the article points out, we have decentralized P2P on steroids. Perhaps a better name would be P2P 3G or P2P 4G.

And then there was the SOPA Fiasco

The Internet is so much bigger than the Music Industry, and it is a scary thought that the proposed  SOPA laws went as far as they did before getting crushed.

I am going to estimate the economic power of the Internet at 30 trillion dollars. How did I arrive at that number? Basically, that number implies that roughly half the world’s GDP is now tied to the Internet, and I don’t mean just Internet financial transactions for on-line shopping. The Internet is the first place most communication starts for any business. It is as important as railroads, shipping, and trucking combined in terms of economic impact. If you want, we can reduce that number to 10 trillion, 1/6 of the world’s GDP; it does not really matter for the point I am about to make.

The latest figure I could find is that the Music Industry did approximately 15 billion dollars worth of business at its peak before piracy, and revenue has steadily declined since then. There is no denying that the Music Industry has suffered 5 to 6 billion dollars in losses due to on-line piracy in the past few years; however, that number is roughly 0.06 percent of even the lower 10-trillion-dollar estimate of the Internet’s positive economic impact. Think of a stadium with 1,000 people watching a game, and one person standing up in front and forcing everybody to stop cheering so they could watch the game without the bothersome noise. That is the power we are giving to the copyright industry.

We have a bunch of sheep in our Congress running around creating laws to appease a few lobbyists, laws that risk damaging the free enterprise that is the Internet, the only real positive economic driver of the past 10 years. The potential damage to free enterprise from these restrictive, overbearing laws is not worth the risk. Again, I am not condoning piracy, nor am I against the Music Industry enforcing its rights and going after criminals, but the peanut-butter approach of using Congress to recoup their losses is just stupid. The less regulation we put on the Internet, the more economic impact it will have now and into the future. These laws and heavy-handed enforcement tactics create unrealistic burdens on operators and businesses, and they need to be put into perspective. There has to be a more intelligent way to enforce existing laws besides creating a highly-regulated Internet.

Stay tuned for some suggestions in my next article.

FCC is the Latest Dupe in Speed-Test Shenanigans


Shenanigans: deception or tomfoolery on the part of carnival stand operators. In the case of the Internet speed claims made in the latest Wall Street Journal article, the tomfoolery is in the lack of detail on how these tests were carried out.

According to the article, all the providers tested by the FCC delivered 50 megabits or more of bandwidth consistently for 24 hours straight. Fifty megabits should be enough for 50 people to continuously watch a YouTube stream at the same time. With my provider, in a large metro area, I often can’t even watch one 1 minute clip for more than a few seconds without that little time-out icon spinning in my face. By the time the video queues up enough content to play all the way through, I have long since forgotten about it and moved on. And then, when it finally starts playing again, I have to go back and frantically find it and kill the YouTube window that is barking at me from somewhere in the background.

So what gives here? Is there something wrong with my service?

I am supposed to have 10-megabit service. When I run a speed test I get 20 megabits of download, enough to run 20 YouTube streams without issue. So far so good.

The problem with translating speed test claims to your actual Internet experience is that there are all kinds of potentially real problems once you get away from the simplicity of a speed test, and yes, plenty of deceptions as well.

First, let’s look at the potentially honest problems with your actual speed when watching a YouTube video:

1) Remote server is slow: The YouTube server itself could actually be overwhelmed and you would have no way to know.

How to determine: Try various YouTube videos at once; you will likely hit different servers and see different speeds if this is the problem.

2) Local wireless problems: I have been the victim of this problem. Running two wireless access points and a couple of wireless cameras jammed one of my access points to the point where I could hardly connect to an Internet site at all.

How to determine: Plug your computer directly into your modem, bypassing the wireless router, and test your speed.

3) Local provider link is congested: Providers have shared distribution points for your neighborhood or area, and these can become congested and slow.

How to determine: Run a speed test. If the local link to your provider is congested, it will generally show up on the speed test (though keep the deceptions described below in mind).

 

The Deceptions

1) Caching

I have done enough testing first hand to confirm that my provider caches heavily trafficked sites whenever they can. I would not really call this a true deception, as caching benefits both provider and consumer; however, if you end up hitting a YouTube video that is not currently in the cache, your speed will suffer at certain times during the day.

How to determine: Watch a popular YouTube video, and then watch an obscure, seldom-watched one.

Note: Do not watch the same YouTube video twice in a row, as it may end up in your local cache, or your provider’s cache, after the first viewing.
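If you want to go beyond eyeballing playback, a rough way to spot a cache is to time the same transfer twice from the command line. The URL below is a placeholder; substitute any large file on a site you suspect is being cached:

```shell
# First fetch may come from the origin server; if the second fetch is
# dramatically faster, something between you and the origin cached it
curl -o /dev/null -s -w 'run 1: %{speed_download} bytes/sec\n' http://example.com/bigfile
curl -o /dev/null -s -w 'run 2: %{speed_download} bytes/sec\n' http://example.com/bigfile
```

This is only suggestive, not conclusive: a second run can also be faster simply because the origin server’s load varies.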

2) Exchange Point Deceptions

The main congestion point between you and the open Internet is your provider’s exchange point. Most likely your cable company or DSL provider has a dedicated wire direct to your home, and this wire most likely has a clean path back to the NOC, the provider’s central location. The advertised speed of your service is most likely a declaration of the speed from your house to your provider’s NOC, hence one could argue this is your Internet speed. This would be fine, except that most public Internet content lies beyond your provider, reachable only through an exchange point.

The NOC exchange point is where you leave your local provider’s wires and go out to access data hosted on other provider networks. Providers pay extra costs when you leave their network, in both fees and equipment costs. A few of the things they can do to deceive you are:

– Give special priority to your speed tests through their site to ensure the speed test runs as fast as possible.

– Re-route local traffic for certain applications back onto their own network, essentially limiting or preventing traffic from leaving their network.

– They can locally host the speed test themselves.

How to determine: Use a speed test tool that cannot be spoofed.
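One approach, admittedly crude: measure a bulk download from a well-connected server your provider does not control, and compare it to the provider’s own speed test. The URL below is a placeholder; substitute any large file hosted by a third party:

```shell
# Average throughput for a real-world transfer that bypasses the
# provider-hosted speed test (placeholder URL)
curl -o /dev/null -w 'average: %{speed_download} bytes/sec\n' http://example.com/largefile.bin
```

If the provider’s speed test consistently reports far higher numbers than any third-party transfer you can find, that is a hint the test traffic is getting special treatment.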

See also:

Is Your ISP Throttling your Bandwidth

NetEqualizer YouTube Caching

NetEqualizer News: February 2012


February 2012

Greetings!

Enjoy another issue of NetEqualizer News! This month, we discuss our newly developed Intrusion Prevention System: NetGladiator – a tool that will effectively protect your websites without hampering network performance! As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…


Coming into the new year, we are currently investing in technology that will allow our entire product line to do more parallel processing. This is good news. Parallel processing is what large computing operations use to build intelligent systems like the computer that plays Jeopardy! The key for us is to do it seamlessly without raising prices – so don’t expect to see anything except better, higher-end systems at the same low price points.

We love it when we hear back from you – so if you have a story you would like to share with us of how we have helped you, let us know. Email me directly here. I would love to hear from you!

NetGladiator – The NEW Intrusion Prevention System

APconnections – makers of the NetEqualizer – is excited to announce the release of the next great Intrusion Prevention System – the NetGladiator!

The NetGladiator is unlike any other Intrusion Prevention System on the market. Tested against some of the world’s best hackers, NetGladiator uses proven Deep Packet Inspection technology to identify an attack based not on predefined signatures, but on behavior-based anomalies that occur in your network.

The idea behind the NetGladiator technology is that the way a potential hacker interacts with your web infrastructure is vastly different from how a normal user interacts with your sites. NetGladiator identifies these anomalies and blocks the attackers before they’ve begun.

Because NetGladiator comes to you from APconnections, a name you know and trust in the bandwidth arbitration space, your network will experience zero latency effects. It also will prove to be the simplest and easiest-to-install product on the market with a very fair price point that provides great value.

Here are just some of the common attacks that NetGladiator protects against:
– SQL Injection
– Brute Force Directory Traversal
– Cross-Site Scripting
– Reflected URL Redirects
– Remote Administrative Brute Forcing
– Remote Shell Execution
– And many more…

Engineers at APconnections have cut through the hype surrounding intrusion prevention products with this simple, yet effective product.

For more information on the NetGladiator IPS, take a look at our website.

You can also visit our blog or contact us at:

ips@apconnections.net -or-

worldwide: (303) 997-1300 x123 -or-

toll-free U.S.: 888-287-2492


NetEqualizer 5.6 Release Advised

The 5.6 Software Release for NetEqualizer is advised for all customers on 5.x who utilize pools, VLANs, and connection limits.

For More information on the Software Release, take a look at our Software Update Notes for 5.6. You can also visit our blog or contact us:

sales@apconnections.net -or-

worldwide: (303) 997-1300 x103 -or-

toll-free U.S. (888) 287-2492


Best Of The Blog

Cloud Computing – Do You Have Enough Bandwidth? And a Few Other Things to Consider.

By Art Reisman – CTO – NetEqualizer

The following is a list of things to consider when using a cloud-computing model.

Bandwidth: Is your link fast enough to support cloud computing?

We get asked this question all the time: What is the best-practice standard for bandwidth allocation?

Well, the answer depends on what you are computing.

First, there is the application itself. Is your application dynamically loading up modules every time you click on a new screen? If the application is designed correctly, it will be lightweight and come up quickly in your browser. Flash video screens certainly spruce up the experience, but I hate waiting for them. Make sure when you go to a cloud model that your application is adapted for limited bandwidth.

Second, what type of transactions are you running? Are you running videos and large graphics or just data? Are you doing photo processing from Kodak? If so, you are not typical, and moving images up and down your link will be your constraining factor.

Third, are you sharing general Internet access with your cloud link? In other words, is that guy on his lunch break watching a replay of royal wedding bloopers on YouTube interfering with your salesforce.com access?

The good news is (assuming you will be running a transactional cloud-computing environment – e.g. accounting, sales database, basic email, attendance, medical records – without video clips or large data files), you most likely will NOT need additional Internet bandwidth. Obviously, we assume your business has reasonable Internet response times prior to transitioning to a cloud application…

Photo Of The Month

Longing for Warmer Weather

Colorado weather is a fascinating phenomenon. It can be a glorious 70 degrees in the morning, and be 32 degrees and freezing by nightfall. While there are spotty days of warmth in January and February, we are still in the dead of winter. Pictures like this make us yearn for spring and summer! This photo was taken by one of our staff at a farm in Kansas in August.

Seven Things to Look for When Choosing an Intrusion Prevention System


The following list was submitted by the APconnections technical staff.

APconnections is a company that specializes in turn-key bandwidth control and intrusion prevention system (IPS) products.

1) Don’t degrade your network speed. Make sure your IPS is not going to slow down your network. If you have a T1 or smaller-sized network, chances are just about any tool you choose will not slow down your connection; however, with links approaching 10 megabits and higher, it is worth investing in a tool whose throughput speeds can be quantified. Higher speeds generally require a tool specifically designed and tested as an IPS device and rated for your link speed. Problems can arise if you buy a software add-on module for your web server; a stand-alone physical device specifically designed to prevent intrusion is likely your best option. A good IPS is very CPU intensive, and lower-end routers, switches, and heavily utilized web servers generally do not have the extra CPU cycles to support one. For example, IT managers are aware that large web sites must use multiple servers to handle large volumes of HTTPS pages, which are also CPU intensive. The same metrics apply to an IPS on a smaller scale, so make sure you are not underpowered.

2) Watch out for high license fees. Try to get a tool with a one-time cost and a small licensing fee. Many vendors sell their equipment below cost in hopes of collecting a monthly per-seat license fee. Yes, you should expect to pay a yearly support fee, but it should be a small fraction of the tool’s original cost.

3) More features is not necessarily better when it comes to stopping intrusion from hackers. You may not realize that large, robust “all-in-one” IPS solutions can be rendered useless by alerting you thousands of times a day, as you will ignore their alerts at that volume. They can also block legitimate requests (“false positives”) and can break web functionality.

You should consider solutions that are not as fully-featured but are targeted to your security concerns, so that you receive meaningful alerts on real potential intrusion attempts.  More features can just introduce clutter, where you are not able to sift through your alerts to find what you really care about.  Also, doing everything can dilute the mission of the toolset, so instead of doing one thing well, it does everything poorly.

Remember, the biggest threat to your enterprise is a person who breaks into your internal systems and gains access to your customer data.  A typical PC virus or Denial of Service (DoS) attack does not pose this type of threat.  Although it may be counter-intuitive to your experience, it is a good idea to make sure you have a solid intrusion detection system before investing in things like virus prevention, web-filters and reporting.  Yes, viruses are a pain and can bring down systems, but the damage will likely not compare in real cost to a hacker that steals your customer records.

4) Block first, ask questions later.  An intruder usually behaves oddly compared to a normal visitor, so your intrusion detection device should block first and ask questions later. It is better to accidentally block a small number of friendlies than to let one hacker into your network. If your device is accidentally blocking legitimate visitors, you will get feedback; it won’t take long to hear from them.
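The policy above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s implementation: the `looks_suspicious` test and its patterns are placeholder assumptions, and a real device would use much richer heuristics.

```python
import time

blocked_ips = {}   # ip -> time the block was applied
audit_log = []     # blocked events kept for later human review

def looks_suspicious(path: str) -> bool:
    # Placeholder test; a real device would use far richer heuristics.
    return "../" in path or "' OR " in path.upper()

def handle_request(ip: str, path: str) -> str:
    if ip in blocked_ips:
        return "denied"
    if looks_suspicious(path):
        blocked_ips[ip] = time.time()   # block first
        audit_log.append((ip, path))    # ask questions later
        return "denied"
    return "allowed"

print(handle_request("198.51.100.7", "/home"))                # allowed
print(handle_request("198.51.100.7", "/a/../../etc/passwd"))  # denied
print(handle_request("198.51.100.7", "/home"))                # denied
```

Note that the block is applied instantly and the analyst reviews the audit log afterward, rather than approving each block by hand.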

5) Don’t rely on manpower for detection. Let the device do the work. If you are relying on a reporting system and a human to make a final decision on what to block, you will get hacked. Your device must be automated and on the job 24/7. There is nothing wrong with an analyst doing the follow-up.

6) Use a white knight to expose your security risks. There was an article in the Wall Street Journal today on how anybody can hire a professional hacker. What they failed to mention is that you can also hire a white knight to test your armor and let you know if you have any weaknesses. Most weaknesses are common back doors in web servers that can easily be remedied once exposed by a white knight.

7) Use a combination of techniques. The only way to secure your enterprise 100 percent is to block all outside access, and with the silo mentality of some security zealots you could end up with this TSA-style solution if you are not careful. Given the reality that you must have a public portal for your customers, the next best thing to locking them out is a combination of white knight testing, plugging holes in web servers and entry points, and a permanent watchdog intrusion prevention system. This should keep you safe from a hacker.

Some good intrusion prevention links:

Lanner

Checkpoint

NetGladiator  (our product)

Solera Networks

SourceFIRE

Developing Technology to Detect a Network Hacker


Editor’s note: Updated on Feb 1st, 2012. Our new product, NetGladiator, has been released. You can learn more about it on the NetGladiator website at www.netgladiator.net or by calling us at 303.997.1300 x123.

In a few weeks we will be releasing a product to automatically detect and prevent a web application hacker from breaking into a private enterprise. What follows are the details of how this product was born.  If you are currently seeking or researching intrusion detection & prevention technology, you will find the following quite useful.

Like many technology innovations, our solution resulted from the timely intersection of two technologies.

Technology 1: About one year ago we started working with a consultant in our local tech community on a minor feature in our NetEqualizer product line. Fiddler on the Root is the name of their company, and they specialize in ethical hacking. Ethical hacking is the practice of deliberately hacking into a client company, with permission, in order to expose its weaknesses. The key expertise they provided was a detailed knowledge of how to hack into a network or website.

Technology 2: Our NetEqualizer technology is well known for providing state-of-the-art bandwidth control. While working with Fiddler on the Root, we realized our toolset could be reconfigured to spot, and thwart, unwanted entry into a network. A key piece to the puzzle would be our long-forgotten Deep Packet Inspection technology. DPI is the frowned upon practice of looking inside data packets traversing the Internet.

An ironic twist in this product’s journey: due to the privacy controversy, as well as having found a better way to shape bandwidth, we removed all of our DPI methodology from our core bandwidth shaping product four years ago.  Just like with any weapon, there are appropriate uses for DPI. Over a lunch conversation one day, we realized that using DPI to prevent a hacker intrusion was a legitimate use of the technology. Preventing an attack is much different from a public ISP scanning and censoring customer data.

So how did we merge these technologies to create a unique heuristics-based IPS system?

Before I answer that question, you may be wondering whether revealing our techniques provides a potential hacker or competitor with inside secrets. More on this later…

The key to using DPI to prevent an intrusion (hack) revolves around 3 key facts:

1) A hacker MUST try to enter your enterprise by exploiting weaknesses in your normal entry points.

2) One of the normal entry points is a web page, and everybody has them. After all, if you had no publicly available data there would be no reason to be attached to the Internet.

3) By using DPI technology to monitor incoming requests and looking for abnormalities, we can now reliably spot unwanted intrusion attempts.

When we met with Fiddler on the Root, we realized that a normal entry by a customer and a probing entry by a hacker are radically different. A hacker attempts things that no normal visitor could even possibly stumble into. In our new solution we have directed our DPI technology to watch for abnormal entry intrusion attempts. This involved months of observing a group of professional hackers and then developing a set of profiles which clearly distinguish them from a friendly user.
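As a rough illustration of what such profiles might look like (this is not the actual NetGladiator rule set; the patterns below are common, well-known probe signatures chosen for the example):

```python
import re

PROBE_PROFILES = [
    re.compile(r"union\s+select", re.I),   # SQL injection in a URL
    re.compile(r"\.\./\.\./"),             # directory traversal probing
    re.compile(r"/etc/passwd"),            # classic file-disclosure target
]

def is_probe(request_line: str) -> bool:
    # A normal visitor's request never matches any of these profiles.
    return any(p.search(request_line) for p in PROBE_PROFILES)

print(is_probe("GET /products?id=42"))                        # False
print(is_probe("GET /products?id=42 UNION SELECT password"))  # True
```

The point is exactly the one made above: a legitimate customer could not stumble into a matching request by accident.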

What other innovations are involved in a heuristics-based Intrusion Prevention System (IPS)?

Spotting the hacker pattern with DPI was only part of a complete system. We also had to make sure we did not get any false positives – the case where normal activity is accidentally flagged as an intruder, which would obviously be unacceptable. In our test lab we have a series of computers that act like users searching the Internet; the only difference is that we can ramp these robot users up to hyper-speed so that they access millions of pages over a short period of time. We then measure the “false positive” rate from this simulation and ensure that it stays below 0.001 percent.
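A toy version of that false-positive measurement might look like this. The `detector` stub and the URL mix below are assumptions standing in for the real DPI heuristics and robot traffic:

```python
import random

def detector(url: str) -> bool:
    # Stand-in for the real DPI heuristics; flags only an obvious probe.
    return "../" in url

# Simulated robot users requesting a large batch of normal pages.
normal_urls = [f"/page/{random.randint(1, 10_000)}" for _ in range(100_000)]

flagged = sum(detector(u) for u in normal_urls)
false_positive_pct = 100.0 * flagged / len(normal_urls)
print(false_positive_pct)   # 0.0 -> below the 0.001 percent target
```

Running the detector against known-clean traffic at high volume is what lets you quote a measured false-positive rate rather than a guess.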

Our solution, NetGladiator, is different than other IPS appliances.  We are not an “all-in-one solution”, which can be rendered useless by alerting you thousands of times a day, can block legitimate requests, and break web functionality.  We do one thing very well – we catch & stop hackers during their information discovery process – keeping your web applications secure.  NetGladiator is custom-configured for your environment, alerting you on meaningful attempts without false positive alerts.

We also had to dig into our expertise in real-time optimization. Although that sounds like marketing propaganda to impress somebody, we can break that statement down to mean something.

When doing DPI, you must look at and analyze every data stream and packet coming into your enterprise; skipping anything might lead to a security breach. Looking at data and analyzing it requires quite a bit more CPU power than just moving it along a network. Many intrusion detection systems are afterthoughts bolted onto standard routers and switches, devices that were never designed to do computing-intensive heuristics on data. Doing so can slow a network to a crawl, a common complaint about lower-end, affordable security add-ons. We did not want to force our customers to make that trade-off. Our technology uses a series of processors embedded in our equipment, all working in unison to analyze each packet of Internet data without adding latency. Although we did not invent the idea of using parallel processing to analyze data, we are the only product in our price range able to do this.

How did we validate and test our IPS solution?

1) We have been putting our systems in front of beta test sites and asking white knights to try to hack into them.

2) We have been running our technology in front of some of our own massive web crawlers. Our crawlers do not attempt anything abnormal but can push through millions of sites and web pages. This is how we verify that we do not falsely block a web crawler that is NOT attempting anything abnormal.

Back to the question, does divulging our methodology render it easier to breach?

The holes that hackers exploit are relatively consistent – in other words there really is only a finite number of exploitations that hackers use. They can either choose to exploit these holes or not, and if they attempt to exploit the hole they will be spotted by our DPI. Hence announcing that we are protecting these holes is more likely to discourage a hacker, who will then look for another target.

Hacking is Obvious, Why Can’t We Stop Them?


Your website is just like any other business, whether a bank, a restaurant, or a hardware store: the large majority of visitors are honest and enter with an intent to browse your information or perform a transaction. All legitimate customers follow a similar pattern. They browse your public HTML pages and perhaps interact with public fields and forms displayed on your site. Just like in a brick and mortar store, a normal cyber customer will observe basic rules of etiquette and stay within the boundaries of your web presence.

A hacker, on the other hand, is not likely to behave in any way close to a normal customer. If they did, they would not be very successful. A hacker will pound your website with force looking for weaknesses. They will probe every nook and cranny of your web server until they find something to exploit. Their entry point could be one of those old orphaned web pages that you do not advertise, or they might create their own hole by inserting an SQL command within a URL. These kinds of probes are way out of the ordinary and glaringly out of place.

Hacker intrusion is analogous to someone entering a brick and mortar store and proceeding to tip over shelves while scrounging on the floor for spilled documents. Imagine a customer asking rude questions to the sales clerk, and rattling doors off their hinges. At the very least, this behavior in the physical world would prompt a call to the police and a disorderly conduct charge.

So why is hacking so prevalent? Why isn’t the hacker immediately spotted and removed?

In many cases, hackers are detected and blocked, but all it takes is one. Just like the bank that turns off my credit card every time I travel, a good business errs on the side of caution. Even accidentally locking 1 in 1,000 customers out of your website is a much better proposition than letting one hacker in. The economic damage from a hacker is typically far worse than a short-term 0.1 percent drop in web visits.
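The economics can be put in back-of-the-envelope terms. The visit count, revenue per visit, breach cost, and breach probability below are illustrative assumptions, not figures from the article; only the 1-in-1,000 lockout rate comes from the text above:

```python
monthly_visits = 1_000_000           # assumed traffic volume
false_block_rate = 1 / 1000          # the "1 in 1,000" lockout rate
revenue_per_visit = 2.00             # assumed
breach_cost = 500_000.00             # assumed: cleanup, fines, reputation
breach_probability = 0.05            # assumed monthly chance without blocking

lost_to_false_blocks = monthly_visits * false_block_rate * revenue_per_visit
expected_breach_loss = breach_cost * breach_probability

print(lost_to_false_blocks)   # 2000.0 -> cost of aggressive blocking
print(expected_breach_loss)   # 25000.0 -> expected cost of letting one in
```

Under these assumptions, aggressive blocking costs an order of magnitude less than the expected loss from a single successful intrusion.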

In our opinion, there are several reasons why this solvable problem is so prevalent:

1) Broad-based security tools that try to do everything.

Businesses are sold an expensive set of tools that do many things unrelated to intrusion prevention. A tool that removes viruses from e-mails, prevents DoS attacks, or runs a generic firewall is useful, but the investment in heuristics-based intrusion detection is often the lightest part of the all-in-one. Money spent on the broad-based tool is usually out of proportion with the potential economic damage of a real attack.

For example, you might lose a day of business if a virus gets loose in your enterprise and brings down a few workstations; however, the potential loss of stolen property and the damage to brand reputation that can be wreaked by a hacker is an order of magnitude beyond a nuisance virus infecting your laptops.

2) Businesses may not have the resources for an expensive tool, so they use what is at hand as best they can. We can certainly understand cash flow issues and where to spend resources. Look for some breakthroughs in cost with commercial hacker prevention tools in the near-term. A focused tool can be put in place at a reasonable cost, and does not require an IT staff to maintain.

3) Business cultures can get hung up on analysis of data, and don’t realize they must trust their security to a computer that makes decisions now. A hacker must be detected and blocked immediately. Many businesses hesitate to use an automated tool, as it certainly may make a mistake and block a friendly user. However, as we have mentioned above, blocking an occasional friendly user can be mitigated. The loss of 10,000 credit card numbers is hard to recover from.

So how does a good intrusion tool stop a hacker without an army of IT people?

It simply needs to quantify abnormal behavior quickly and block the offending IP immediately, with no questions asked and no hesitation. There really is no need to wait. The signs of intrusion are so different from those of a normal customer that you can, with 99.99 percent accuracy, toss intruders out before damage is done. In the coming few months we will be introducing a new turn-key product that works exactly this way.

Won’t the hacker try to subvert a heuristic tool once they suspect it is guarding your site?

Even if the hacker tries to break through a heuristics-based tool, the problem remains: in order to get access to something they are not supposed to have, they will have to do something odd at some point. Acting normal won’t cut it, and acting abnormal will get flagged. The tool will alert administrators to the suspicious behavior and block the IP address of the malicious user. Now, with their increased alertness, administrators can lock down interfaces, manually review logs, and focus their diligence on the attack at hand.

—————————————————————————————————————————————————-
Editor’s note: update 01/23/2012

A Wall Street Journal article came out today exposing how easy it is to hire a hacker. If you think about it, the media likes to portray a hacker as some kind of brilliant savant with superhuman powers. The truth is, tools to hack are readily available, and anybody with a background in computers and suspect moral character can do it. This also supports our premise that stopping a hacker is just a matter of plugging the common holes and entry points.

Editor’s note: update on 02/01/2012
Today APconnections, maker of the NetEqualizer, released a new intrusion prevention system (IPS) product,
the NetGladiator, which is designed to detect & prevent network intrusions. You can learn more about NetGladiator at www.netgladiator.net or by calling us at 303.997.1300 x123.

Is Equalizing Technology the Same as Bandwidth Fairness?


Editor’s Note:

The following was posted in a popular forum in response to the assumption that the NetEqualizer is a simple fairness engine. We can certainly understand how our technology can be lumped into the same bucket as simple fairness techniques; however, equalizing provides a much more sophisticated solution, as the poster describes in detail below.

You have stated your reservations, but I am still going to have to recommend the NetEqualizer. Carving up the bandwidth equally will mean that the user perception of the Internet connection will be poor even when you have bandwidth to spare. It makes more sense to have a device that can maximize the user’s perception of a connection. Here are some example scenarios.

NetEQ when utilization is low, and it is not doing anything:
User perception of Skype like services: Good
User perception of Netflix like services: Good
User perception of large file downloads: Good
User perception of “ajaxie” webpages that constantly update some doodad on the page: Good
User perception of games: Good

Equally allocated bandwidth when utilization is low:
User perception of Skype like services: OK as long as the user is not doing anything else.
User perception of Netflix like services: OK as long as the user is not doing anything else.
User perception of large file downloads: Slow all of the time regardless of where the user is downloading the file from.
User perception of “ajaxie” webpages that constantly update some doodad on the page: OK
User perception of games: OK as long as the user is not doing anything else. That is until the game needs to download custom content from a server, then the user has to wait to enter the next round because of the hard rate limit.

NetEQ when utilization is high and penalizing the top flows:
User perception of Skype like services: Good
User perception of Netflix like services: Good – The caching bar at the bottom should be slightly delayed, but the video shouldn’t skip. The user is unlikely to notice.
User perception of large file downloads: Good – The file is delayed a bit, but will still download relatively quickly compared to a hard bandwidth cap. The user is unlikely to notice.
User perception of “ajaxie” webpages that constantly update some doodad on the page: Good
User perception of games: Good downloading content between rounds might be a tiny bit slower, but fast compared to a hard rate limit.

Equally allocated bandwidth when utilization is high:
User perception of Skype like services: OK as long as the user is not doing anything else.
User perception of Netflix like services: OK as long as the user is not doing anything else.
User perception of large file downloads: Slow all of the time regardless of where the user is downloading the file from.
User perception of “ajaxie” webpages that constantly update some doodad on the page: OK as long as the user is not doing anything else.
User perception of games: OK as long as the user is not doing anything else. That is until the game needs to download custom content from a server, then the user has to wait to enter the next round because of the hard rate limit.
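The difference between the two approaches can be shown with some rough arithmetic. The link size, user count, and demand figures below are assumed for illustration:

```python
link_mbps = 100.0     # assumed shared link
users = 20            # assumed user count

# Equal allocation: every user is capped at link/users at all times,
# even when the link is mostly idle.
hard_cap = link_mbps / users            # 5.0 Mbps per user

# Equalizing: no cap while there is headroom; a large download can use
# the spare capacity and is only penalized once the link saturates.
others_demand = 1.0 * (users - 1)       # 19 users lightly browsing
downloader_gets = link_mbps - others_demand

print(hard_cap)          # 5.0
print(downloader_gets)   # 81.0
```

When the link has spare capacity, the hard cap leaves the downloader crawling at 5 Mbps while 81 Mbps sits idle; equalizing only steps in when utilization is actually high.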

As far as the P2P thing is concerned: while I too realized that theoretically P2P would be favored, in practice it wasn’t really noticeable.  If you wish, you can use connection limits to deal with this.

One last thing to note:  On Obama’s inauguration day, the NetEQ at our University was able to tame the ridiculous number of live streams of the event without me intervening to change settings.  The only problems reported turned out to be bandwidth problems on the other end.

NetEqualizer News: December 2011



Greetings!

Enjoy another issue of NetEqualizer News! This month, we talk about our first round of beta testing our new release features, showcase our new website design, and discuss why caching alone will not solve your congestion issues! As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
 

Whew! I am finally taking a breather after a busy and productive year – trying to get ready for the holidays and step back to assess all that I am thankful for. I want to take a moment to thank YOU, our customers, for making 2011 a great year for us! You are the reason we do all this and work so hard on making the NetEqualizer the best bandwidth controller out there.  

We love it when we hear back from you – so if you have a story you would like to share with us of how we have helped you, let us know. Email me directly at art. I would love to hear from you!

In This Issue:
:: Update On Fall Release Features
:: DiffServ Priority Findings
:: New Website Design
:: Thank You!
:: Best Of The Blog


 

Update On Fall Release Features

In our previous issue of NetEqualizer News, we previewed some of our exciting new features that are available in the Fall Release. The Fall Release is currently undergoing beta testing at various customer sites, but if you are interested in the GA release, let us know, and we’ll contact you when it’s available.

Please note, this release will be a quick update for anyone already on version 5.x.

Here is a brief update on some of those features with screenshots:

Email Notification

The Fall Release provides users with the ability to set an email account that the NetEqualizer can send alerts to. For example, users can set their account to be notified when IPv6 traffic exceeds 1%. Here is a screenshot from the email notification feature setup screen:

Setup Email Alerts in the NetEqualizer

IPv6 Visibility

The Fall Release also includes features that provide enhanced visibility to IPv6 traffic.

With this release, we now provide a connection table in the GUI that shows all of the IPv6 flows and their bandwidth consumption. We also provide a way to monitor your total IPv6 traffic from an historical perspective.

These two features provide useful data in order to better position your organization for the eventual shift to IPv6.

Here are some screenshots from the IPv6 interface in the NetEqualizer GUI:

IPv6 Traffic in the NetEqualizer

View Total IPv6 Traffic

For more information on the Fall Release, take a look at our Software Update Notes for version 5.5.

You can also visit our blog or contact us:

email sales -or-

call worldwide (303) 997-1300 x. 103 -or-

toll-free U.S.(800-918-2763).

 

DiffServ Priority Findings   

 

In the Fall Release, NetEqualizer included a feature to give priority to traffic which had the ToS/DiffServ bit set to a non-zero value. This bit is supposed to signify that the traffic has priority on the Internet. This feature allows our customers to give priority to important traffic without having to set up a priority handling connection.  

Through our research, however, we’ve discovered that sites like YouTube, in an attempt to receive priority access across the Internet, often set this bit for all traffic. Thus, with no control on who can set this bit, customers could find that their link is bogged down by too much requested priority.  
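For readers curious where that bit lives, here is a small sketch that reads the ToS/DSCP byte from a raw IPv4 header. The field offsets follow the standard IPv4 layout, and the sample header is constructed purely for the example:

```python
import struct

def tos_byte(ipv4_header: bytes) -> int:
    # The ToS/DSCP field is the second byte of an IPv4 header.
    return ipv4_header[1]

# A minimal 20-byte IPv4 header: version/IHL, ToS=0, total length, id,
# flags/fragment, TTL, protocol (TCP), checksum, source, destination.
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0x00, 40, 0, 0, 64, 6, 0,
                     b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02")

print(tos_byte(header))   # 0 -> no priority requested
```

Because any sender can write whatever it likes into that byte, a device honoring it has to decide how much to trust it, which is exactly the problem described above.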

Once you try it on your own network with the NetEqualizer, we want to hear about your experiences with this feature. How would you assess its effectiveness? Also, if you have experience using the DiffServ bit in other applications, how useful was it and in what ways? All feedback is welcome! 

 

Contact us at sales with your story or thoughts!

 

New Website Design  

 

NetEqualizer is very excited to introduce our new website and design! The new website makes trying out the product, purchasing, and support that much easier.

 

Our new menus allow for quick navigation to common NetEqualizer tools and case studies. Be sure to check it out!

 

Thank You!   

 

As we celebrate the holiday season, we at APconnections want to express our thanks to all of our customers!

To start, we’re pleased to introduce an expanded version of our NetEqualizer lifetime trade-in policy. Customers with NetEqualizers purchased four or more years ago qualify for a credit of 50 percent of the original unit’s purchase price (not including NSS, NHW, etc.) toward a new NetEqualizer!

This offer is in addition to our original lifetime trade-in policy, which guarantees that in the event of an irreparable failure of a NetEqualizer unit, customers have the option to purchase a replacement unit at a 50 percent discount off the listed price.

While this policy is unique in its own right, we are also challenging tech-industry tradition by offering it on units purchased from authorized NetEqualizer resellers.

To learn more, or to get your trade-in started, contact us: 

email sales -or- 

call worldwide (303) 997-1300 x.103 -or- 

toll-free U.S.(800-918-2763).

 

Best Of The Blog

 

Why Caching Alone Will Not Solve Your Congestion Issue

by Art Reisman – CTO – NetEqualizer

 

Editor’s Note:

The intent of this article is to help set appropriate expectations for using a caching server on an uncontrolled Internet link. There are some great speed gains to be had with a caching server; however, caching alone will not remedy a heavily congested Internet connection.

Are you going down the path of using a caching server (such as Squid) to decrease peak usage load on a congested Internet link?

 

You might be surprised to learn that Internet link congestion cannot be mitigated with a caching server alone. Contention can only be eliminated by:

1) Increasing bandwidth

2) Some form of bandwidth control

3) Or a combination of 1) and 2)

A common assumption about caching is that somehow you will be able to cache a large portion of common web content – such that a significant amount of your user traffic will not traverse your backbone to your provider. Unfortunately, caching a large portion of web content to attain a significant hit ratio is not practical, and here is why:

Let’s say your Internet trunk delivers 100 megabits and is heavily saturated prior to implementing caching or a bandwidth-control solution. What happens when you add a caching server to the mix?
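To preview the arithmetic, here is a rough worked version of that scenario. The peak demand and cache hit ratio are assumed numbers; the article’s point is that realistic hit ratios are modest:

```python
link_mbps = 100.0        # the saturated trunk from the example
demand_mbps = 140.0      # assumed peak demand
cache_hit_ratio = 0.20   # assumed; realistic web-cache hit ratios are modest

# Traffic that still has to cross the trunk after caching.
upstream_mbps = demand_mbps * (1 - cache_hit_ratio)

print(round(upstream_mbps, 1))   # 112.0 -> still above the 100 Mbps trunk
```

Even with a 20 percent hit ratio, the remaining demand exceeds the trunk, so the link stays congested without some form of bandwidth control.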

To keep reading, click here.  

 

Photo Of The Month  

Happy Holidays!

Happy Holidays from everyone at NetEqualizer! We hope you enjoy this special time of year more than our dog, Nick, likes wearing these antlers.


Cloud Computing – Do You Have Enough Bandwidth? And a Few Other Things to Consider


The following is a list of things to consider when using a cloud-computing model.

Bandwidth: Is your link fast enough to support cloud computing?

We get asked this question all the time: What is the best-practice standard for bandwidth allocation?

Well, the answer depends on what you are computing.

– First, there is the application itself.  Is your application dynamically loading up modules every time you click on a new screen? If the application is designed correctly, it will be lightweight and come up quickly in your browser. Flash video screens certainly spruce up the experience, but I hate waiting for them. Make sure when you go to a cloud model that your application is adapted for limited bandwidth.

– Second, what type of transactions are you running? Are you running videos and large graphics or just data? Are you doing photo processing from Kodak? If so, you are not typical, and moving images up and down your link will be your constraining factor.

– Third, are you sharing general Internet access with your cloud link? In other words, is that guy on his lunch break watching a replay of royal wedding bloopers on YouTube interfering with your salesforce.com access?

The good news is (assuming you will be running a transactional cloud computing environment – e.g. accounting, sales database, basic email, attendance, medical records – without video clips or large data files), you most likely will not need additional Internet bandwidth. Obviously, we assume your business has reasonable Internet response times prior to transitioning to a cloud application.

Factoid: Typically, for a business in an urban area, we would expect about 10 megabits of bandwidth for every 100 employees. If you fall below this 10/100 ratio, you can still take advantage of cloud computing, but you may need some form of QoS device to prevent recreational or non-essential Internet access from interfering with your cloud applications.  See our article on contention ratio for more information.
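That rule of thumb is easy to turn into a quick sizing check. The function names below are ours, for illustration only:

```python
def recommended_mbps(employees: int) -> float:
    # 10 Mbps per 100 employees, per the rule of thumb above.
    return employees * (10 / 100)

def needs_qos(employees: int, link_mbps: float) -> bool:
    # Below the 10/100 ratio, a QoS device is advised to keep
    # recreational traffic from crowding out cloud applications.
    return link_mbps < recommended_mbps(employees)

print(recommended_mbps(250))   # 25.0
print(needs_qos(250, 15.0))    # True -> cloud is workable, QoS advised
```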

Security: Can you trust your data in the cloud?

For the most part, chances are your cloud partner will have much better resources to deal with security than your enterprise, as this should be a primary function of their business. They should have an economy of scale – whereas most companies view security as a cost and are always juggling those costs against profits, cloud-computing providers will view security as an asset and invest more heavily.

We addressed security in detail in our article how secure is the cloud, but here are some of the main points to consider:

1) Transit security: moving data to and from your cloud provider. How are you going to make sure this is secure?
2) Storage: the handling of your data at your cloud provider. Is your data secure from an outside hacker once it gets there?
3) Inside job: this is often overlooked, but can be a huge security risk. Who has access to your data within the provider network?

Evaluating security when choosing your provider.

You would assume the cloud company, whether it be Apple or Google (Gmail, Google Calendar), uses some best practices to ensure security. My fear is that ultimately some major cloud provider will fail miserably just like banks and brokerage firms. Over time, one or more of them will become complacent. Here is my check list on what I would want in my trusted cloud computing partner:

1) Do they have redundancy in their facilities and their access?
2) Do they screen their employees for criminal records and drug usage?
3) Are they willing to let you, or a truly independent auditor, into their facility?
4) How often do they back-up data and how do they test recovery?

Big Brother is watching.

This is not so much a traditional security threat, but if you are using a free service you are likely going to agree, somewhere in their fine print, to expose some of your information for marketing purposes. Ever wonder how those targeted ads appear that are relevant to the content of the mail you are reading?

Link reliability.

What happens if your link or your provider’s link goes down? How dependent are you? Make sure your business or application can handle unexpected downtime.

Editor’s note: unless otherwise stated, these tips assume you are using a third-party provider for resources and applications, and are not a large enterprise with a centralized service on your Internet. For example, using QuickBooks over the Internet would be considered a cloud application (and one that I use extensively in our business); however, centralizing Microsoft Excel on a corporate server with thin terminal clients would not be cloud computing.

How Safe is The Cloud?


By Zack Sanders, NetEqualizer Guest Columnist

There is no question that cloud-computing infrastructures are the future for businesses of every size. The advantages they offer are plentiful:

  • Scalability – IT personnel used to have to scramble for hardware when business decisions dictated the need for more servers or storage. With cloud computing, an organization can quickly add and subtract capacity at will. New server instances are available within minutes of provisioning them.
  • Cost – For a lot of companies (especially new ones), the prospect of purchasing multiple $5,000 servers (and paying someone to maintain them) is not very attractive. Cloud servers are very cheap – and you only pay for what you use. If you don’t require a lot of storage space, you can pay around 1 cent per hour per instance. That’s roughly $8/month. If you can’t incur that cost, you should probably reevaluate your business model.
  • Availability – In-house data centers experience routine outages. When you outsource your data center to the cloud, everything server related is in the hands of industry experts. This greatly increases quality of service and availability. That’s not to say outages don’t occur – they do – just not nearly as often or as unpredictably.

While it’s easy to see the benefits of cloud computing, it does have its potential pitfalls. The major questions that always accompany cloud computing discussions are:

  • “How does the security landscape change in the cloud?” – and
  • “What do I need to do to protect my data?”

Businesses and users are concerned about sending their sensitive data to a server that is not totally under their control – and they are correct to be wary. However, when taking proper precautions, cloud infrastructures can be just as safe – if not safer – than physical, in-house data centers. Here’s why:

  • They’re the best at what they do – Cloud computing vendors invest tons of money securing their physical servers that are hosting your virtual servers. They’ll be compliant with all major physical security guidelines, have up-to-date firewalls and patches, and have proper disaster recovery policies and redundant environments in place. From this standpoint, they’ll rank above almost any private company’s in-house data center.
  • They protect your data internally – Cloud providers have systems in place to prevent data leaks or access by third parties. Proper separation of duties should ensure that root users at the cloud provider couldn’t even penetrate your data.
  • They manage authentication and authorization effectively – Because logging and unique identification are central components to many compliance standards, cloud providers have strong identity management and logging solutions in place.

The above factors provide a lot of peace of mind, but with security it’s always important to layer approaches and be diligent. By layering, I mean that the most secure infrastructures have layers of security components so that, if one were to fail, the next would thwart an attack. This diligence is just as important for securing your external cloud infrastructure. No environment is ever immune to compromise. A key security aspect of the cloud is that your server is outside of your internal network, and thus your data must travel public connections to and from your external virtual machine. Companies with sensitive data are very worried about this. However, when taking the following security measures, your data can be just as safe in the cloud:

  • Secure the transmission of data – Set up SSL connections for sensitive data, especially logins and database connections.
  • Use keys for remote login – Utilize public/private keys, two-factor authentication, or other strong authentication technologies. Do not allow remote root login to your servers. Brute force bots hound remote root logins incessantly in cloud provider address spaces.
  • Encrypt sensitive data sent to the cloud – SSL will take care of the data’s integrity during transmission, but it should also be stored encrypted on the cloud server.
  • Review logs diligently – Use log analysis software ALONG WITH manual review. Automated technology combined with a manual review policy is a good example of layering.
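As a small illustration of the log-review layer above, the sketch below tallies failed login attempts per source IP. The log lines and regular expression are illustrative of OpenSSH-style auth logs; real formats vary by distribution and sshd configuration:

```python
import re

# Illustrative pattern for OpenSSH-style failed-login lines;
# adjust for your distribution's actual auth log format.
FAILED_LOGIN = re.compile(
    r"Failed password for (invalid user )?(?P<user>\S+) from (?P<ip>\S+)"
)

def count_failures(lines):
    """Tally failed login attempts per source IP address."""
    counts = {}
    for line in lines:
        m = FAILED_LOGIN.search(line)
        if m:
            ip = m.group("ip")
            counts[ip] = counts.get(ip, 0) + 1
    return counts

sample = [
    "Nov  3 10:01:12 host sshd[311]: Failed password for root from 203.0.113.9 port 4022 ssh2",
    "Nov  3 10:01:15 host sshd[311]: Failed password for root from 203.0.113.9 port 4022 ssh2",
    "Nov  3 10:02:01 host sshd[340]: Accepted publickey for deploy from 198.51.100.7 port 50514 ssh2",
]
print(count_failures(sample))  # {'203.0.113.9': 2}
```

An automated tally like this surfaces the brute-force bots mentioned earlier; the manual review layer then decides what to do about them.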

So, when taking proper precautions (precautions that you should already be taking for your in-house data center), the cloud is a great way to manage your infrastructure needs. Just be sure to select a provider that is reputable and make sure to read the SLA. If the hosting price is too good to be true, it probably is. You can’t take chances with your sensitive data.

About the author:

Zack Sanders is a Web Application Security Specialist with Fiddler on the Root (FOTR). FOTR provides web application security expertise to any business with an online presence. They specialize in ethical hacking and penetration testing as a service to expose potential vulnerabilities in web applications. The primary difference between the services FOTR offers and those of other firms is that they treat your website like an ACTUAL attacker would. They use a combination of hacking tools and savvy know-how to try and exploit your environment. Most security companies just run automated scans and deliver the results. FOTR is for executives that care about REAL security.

The Benefits of Requiring Online Registration Forms


By Zack Sanders, NetEqualizer Guest Columnist

The registration form is quickly becoming antiquated in the online world. Once viewed as an easy way to sign up or declare your interest in a company or product, the annoyance and security concerns associated with filling out your personal data in a web form have led many businesses to utilize other techniques to attract new clientele. For a lot of companies, this is the right approach. There are metrics that show conversion rates for sales and sign-ups are higher when one asks for less information up front. This works particularly well for business-to-consumer sites, social networks that rely on ad revenue and large user bases, and web startups that need to gain a following.

For example, signing up for an online dating site might require you to enter only your sex, age, and email address. Then, once you’ve used the site a little bit, they’ll have you fill out other information in your profile. They’ve already hooked you at this point, so obtaining a little more data is a trivial task. If they asked for all your information initially before letting you try the site, they’d be much less likely to gain you as a user.

A lot of companies might be quick to switch to this sort of registration method (after all, it’s the increasingly popular choice), but they should be careful about acting too hastily. It isn’t the best choice for every business. In fact, most business-to-business (B2B) organizations will see more success from a typical registration form. This is true for the following reasons:

  • Business customers usually have more strategic, long-term goals and have already determined there is a business need for your product. They usually aren’t just browsing with little intent to buy.
  • Your sales team will be more efficient because their calls to potential clients will convert better. They won’t be wasting their time as often when they know they are talking to at least semi-serious customers.
  • More sophisticated products might require a discussion between an expert/engineer and the customer. Every organization has slightly different problems they are trying to solve and it’s important to determine quickly whether your product will really help solve their issue. Just like with sales, you want to be efficient with these discussions too.
  • B2B transactions are usually large in volume or cost. Any organization or individual looking to purchase an expensive product won’t mind filling in their information. Because they are serious, the annoyance factor associated with a form goes down.
  • B2B companies have established reputations. Likely, potential customers already know you are legitimate. They won’t be as concerned about providing you with their personal details.

Figuring out what information to ask for is also an important task. You want to walk the fine line of getting complete data without being too invasive. Your form will be best received when you:

  • Make sure that the information you ask for is relevant to your product.
  • Make sure the customer feels confident about your privacy policy. No one wants their information sold to third parties.
  • Don’t hound potential clients with sales calls. Repeat calls from vendors can be extremely annoying and are a huge turnoff.
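The guidelines above can be made concrete on the server side. The sketch below validates a hypothetical B2B registration payload, accepting only fields relevant to qualifying the lead; the field names are invented for the example:

```python
# Hypothetical required fields for a B2B registration form --
# only data relevant to qualifying the lead, nothing invasive.
REQUIRED_FIELDS = {"name", "company", "email", "use_case"}

def validate_registration(payload):
    """Return a list of problems; an empty list means the payload is acceptable."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - payload.keys())]
    email = payload.get("email", "")
    if email and "@" not in email:
        problems.append("email looks malformed")
    return problems

good = {"name": "Pat", "company": "Acme", "email": "pat@acme.example",
        "use_case": "campus bandwidth shaping"}
print(validate_registration(good))            # []
print(validate_registration({"name": "Pat"}))  # lists the missing fields
```

Keeping the required set this small is the point: every field you add should earn its place by helping the sales or engineering conversation that follows.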

At NetEqualizer, we’ve tried both the quick/no registration method as well as our current method of requiring a form to be completed. We’ve found that the above benefits of a registration process outweigh the ease of not requiring any information. Our sales team and engineers can make more targeted, efficient phone calls and it gives us the opportunity to explain the benefits of our solution completely to potential customers. In return, the customers get better, more tailored service and support.


NetEqualizer News: November 2011


NetEqualizer News

November 2011

Greetings!

Enjoy another issue of NetEqualizer News! This month, we talk more about our exciting Fall Release features (Email Notification, IPv6 Visibility, and DiffServ Priority), as well as announce our newly-designed Product Demonstration Guide! As always, feel free to pass this along to others who might be interested in NetEqualizer News.

In This Issue:
:: Fall Release Features
:: New Product Demonstration Guide
:: Best Of The Blog


Fall Release Features

Our Fall Release is now in beta! We will have a limited number of slots available for beta testing these features. Please contact us if you are interested.

email sales -or-

worldwide (303) 997-1300 x. 103 -or-

toll-free U.S. (800-918-2763)

General availability will be in December 2011.

As always, the Fall Release will be available at no charge to customers with valid NetEqualizer Software Subscriptions (NSS).


Here is a preview of our exciting new features:

Email Notification

The Fall Release will provide users with the ability to set an email account that the NetEqualizer can send alerts to. For example, users can set their account to be notified when IPv6 traffic exceeds 1%. There will also be many other types of notifications to configure, but we don’t want to give too much away – you’ll have to try it out yourself!
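The IPv6-threshold example above boils down to simple alert logic. This is NOT the NetEqualizer implementation – the function, threshold, and message format below are purely illustrative (the actual product would hand the message off to its configured email account):

```python
# Illustrative threshold-based alerting, in the spirit of the Email
# Notification feature described above. Not NetEqualizer's actual code.
def ipv6_alert(ipv6_bytes, total_bytes, threshold_pct=1.0):
    """Return an alert message if IPv6's share of traffic exceeds the threshold, else None."""
    if total_bytes == 0:
        return None
    pct = 100.0 * ipv6_bytes / total_bytes
    if pct > threshold_pct:
        return f"ALERT: IPv6 traffic at {pct:.2f}% exceeds {threshold_pct}% threshold"
    return None

print(ipv6_alert(50, 1000))  # 5% of traffic -> alert message
print(ipv6_alert(5, 1000))   # 0.5% -> None, no alert
```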

IPv6 Visibility

As we await the need to handle significant amounts of IPv6 traffic, NetEqualizer is already implementing solutions to meet the shift head-on. The Fall Release will include features that will provide enhanced visibility to IPv6 traffic.

The best way to begin this transition in our software is to provide users with a way to see how IPv6 traffic is passing through their network. The most effective way to convey these details is to provide a connection table in the GUI that shows all of the IPv6 flows and their bandwidth consumption. We will also be providing a way to monitor your total IPv6 traffic from an historical perspective. These two features will provide useful data in order to better position your organization for the eventual shift to IPv6.
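To sketch what a connection-table view like the one described above involves, the snippet below classifies flows as IPv4 or IPv6 using Python's standard `ipaddress` module and computes the IPv6 share of bandwidth. The flow table here is invented for the example, not output from a NetEqualizer:

```python
import ipaddress

# A hypothetical connection table: (remote address, bytes/sec).
flows = [
    ("192.0.2.10", 120_000),   # IPv4 flow
    ("2001:db8::1", 4_000),    # IPv6 flow
    ("198.51.100.4", 80_000),  # IPv4 flow
]

# ip_address() parses both families; .version distinguishes them.
ipv6_bytes = sum(rate for addr, rate in flows
                 if ipaddress.ip_address(addr).version == 6)
total_bytes = sum(rate for _, rate in flows)
print(f"IPv6 share: {100 * ipv6_bytes / total_bytes:.2f}%")
```

Sampling this ratio over time is all a historical IPv6 monitor needs to plot the gradual shift.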

Here is a screen shot of sample IPv6 traffic in the NetEqualizer GUI:

It should be noted that, for now, even for customers running dual stacks, we do not expect IPv6 traffic to exceed more than a fraction of a percent of network traffic.

Read more IPv6-related articles from our blog.


DiffServ Priority

We are now seeing an influx of customers looking to provide priority bandwidth to VoIP and video connections on their links without all the hassle of complex router rules.

NetEqualizer’s new DiffServ Priority feature is the solution. Included in the Fall Release, the DiffServ Priority feature will automatically prioritize connections that are utilizing services like VoIP and video – as well as a host of other types of important connections. This will provide improved quality of service on your network.
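For background on how DiffServ marks are read at all: the DSCP value occupies the top six bits of the IP ToS/Traffic Class byte, and well-known code points like EF (46, typically VoIP) and AF41 (34, often video) signal priority. The mapping below is an illustrative sketch, not NetEqualizer's actual policy:

```python
# Extract the DSCP value (high six bits of the IP ToS / Traffic Class
# byte; the low two bits are ECN) and map well-known code points to a
# priority class. This mapping is illustrative only.
DSCP_CLASSES = {
    46: "EF (voice)",    # Expedited Forwarding, typical for VoIP
    34: "AF41 (video)",  # Assured Forwarding class often used for video
    0:  "best effort",
}

def dscp_from_tos(tos_byte):
    """DSCP is the six high-order bits of the ToS byte."""
    return tos_byte >> 2

tos = 0xB8  # a ToS byte commonly seen on VoIP packets
dscp = dscp_from_tos(tos)
print(dscp, DSCP_CLASSES.get(dscp, "unclassified"))  # 46 EF (voice)
```

Because the mark travels in every packet header, a device in the path can prioritize these flows without the complex router rules mentioned above.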

For more information on DiffServ and priority handling in general, check out this article from our blog:

For more information on the Fall Release, take a look at our Software Update Notes for version 5.5.

You can also visit our blog or contact us at:

email sales -or-

worldwide (303) 997-1300 x. 103 -or-

toll-free U.S. (800-918-2763)

New Product Demo Guide

NetEqualizer is excited to announce a new and improved product demonstration experience.

Our revamped Product Demonstration Guide and demonstration website allows users to take a self-guided tour of the NetEqualizer – walking through key features and screens.

Once you’ve been introduced to the NetEqualizer and its features, the demonstration allows you to interact with a real NetEqualizer so that you can try out the features for yourself. Ample documentation with screen shots and examples is also provided to assist you on your tour.

Register for a Product Demonstration today! If you have any questions, feel free to contact us:

email sales -or-

worldwide (303) 997-1300 x. 103 -or-

toll-free U.S. (800-918-2763)


Best Of The Blog

How to Speed Up Your Internet Connection with a Bandwidth Controller

by Art Reisman – CTO – NetEqualizer

It occurred to me today that, in all the years I have been posting about common ways to speed up your Internet, I have never really written a plain and simple consumer explanation of how a bandwidth controller can speed up your Internet. After all, it seems intuitive that a bandwidth controller is something an ISP would use to slow down your Internet; but there can be a beneficial side to a bandwidth controller, even at the home-consumer level.

Many slow Internet service problems are due to contention on your link to the Internet. Even if you are the only user on the connection, a simple update to your virus software running in the background can dominate your Internet link. A large download will often cause everything else you try (email, browsing) to slow to a crawl.

What causes slowness on a shared link?

Everything you do on the Internet creates a connection from inside your network to the outside, and all of these connections compete for the limited amount of bandwidth your ISP provides.

Your router (cable modem) connection to the Internet provides first-come, first-served service to all the applications trying to access the Internet. To make matters worse, the heavier users (the ones with the larger persistent downloads) tend to get more than their fair share of router cycles. Large downloads are like the schoolyard bully – they tend to butt in line and not play fair.

So how can a bandwidth controller make my Internet faster?

A smart bandwidth controller will analyze all your Internet connections on the fly. It will then selectively take away some bandwidth from the bullies. Once the bullies are removed, other applications will get much needed cycles out to the Internet, thus speeding them up.
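The "take bandwidth from the bullies" idea above can be sketched in a few lines. This is a toy simplification to show the shape of the logic – the capacity, thresholds, and fair-share rule are invented, and it is not NetEqualizer's actual algorithm:

```python
# A toy sketch of "equalizing": when the link is saturated, pick the
# connections using far more than their fair share and mark them to
# be throttled. Thresholds here are illustrative.
LINK_CAPACITY = 10_000  # link speed in kbps, hypothetical
SATURATION = 0.85       # only intervene above 85% utilization

def find_bullies(conns):
    """conns: {name: kbps in use}. Return names of connections to throttle."""
    used = sum(conns.values())
    if used < SATURATION * LINK_CAPACITY:
        return []  # link not congested; leave everyone alone
    fair_share = used / len(conns)
    return [name for name, kbps in conns.items() if kbps > 2 * fair_share]

conns = {"video-download": 8500, "email": 50, "browsing": 300, "voip": 80}
print(find_bullies(conns))  # ['video-download']
```

Note that nothing is punished while the link has headroom – the equalizing only kicks in under congestion, which is why light traffic like email and VoIP never notices it.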

To keep reading, click here.

Photo Of The Month

Gobble Gobble!

Happy Thanksgiving from everyone at NetEqualizer! Check out this wild turkey one of our wildlife cameras caught on film!

View our videos on YouTube