Five Tips to Control Encrypted Traffic on Your Network


Editor's Note:

Our intent with these tips is to illustrate some of the impracticalities involved with “brute force” shaping of encrypted traffic, and to offer some alternatives.

1) Insert Pre-Encryption software at each end node on your network.

This technique requires a custom app that would need to be installed on the iPhones, iPads, and laptops of end users. The app is designed to relay all data to a centralized shaping device in unencrypted form.

  • This approach assumes that a centralized IT department has the authority to require special software on all devices using the network. It would not be feasible in environments where end users freely use their own equipment.


2) Use a sniffer/traffic shaper that can decrypt the traffic on the fly.

  • Older 40-bit encryption keys could be cracked by a computer in about a week; newer 128-bit keys would require the computer to run longer than the age of the Universe.
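To put those key lengths in perspective, here is a rough back-of-the-envelope sketch in Python. The one-billion-guesses-per-second figure is an assumed attack rate, not a benchmark of any particular hardware:

# Rough brute-force estimate for exhausting a symmetric keyspace.
# Assumes a hypothetical attacker testing one billion keys per second.
ATTEMPTS_PER_SECOND = 1_000_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (40, 128):
    keyspace = 2 ** bits
    years = keyspace / ATTEMPTS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit key: about {years:.2e} years to try every key")

# 40-bit  -> a tiny fraction of a year (minutes to days, depending on the assumed rate)
# 128-bit -> roughly 1e22 years, vastly longer than the ~1.4e10-year age of the universe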

3) Just drop encrypted traffic (don't allow it), forcing users to turn off SSL in their browsers. Note: A traffic shaper can spot encrypted traffic; it just can't tell you specifically what it is by content.

  • It seems rather draconian to block secure private transmissions; however, the need to encrypt traffic over the Internet is vastly overblown. It is actually extremely unlikely for personal information or a credit card number to be stolen in transit, but that is another subject.
  • This is really not practical where you have autonomous or public users; it will cause confusion at best and a revolt at worst.

4) Perhaps re-think what you are trying to accomplish.   There are more heuristic approaches to managing traffic which are immune to encryption.  Please feel free to contact us for more details on a heuristic approach to shaping encrypted traffic.

5) Charge a premium for encrypted traffic. This would be more practical than blocking encrypted traffic, and would perhaps offset some of the costs associated with the overuse of encrypted p2p traffic.

NetEqualizer News: December 2012


December 2012

Greetings!

Enjoy another issue of NetEqualizer News! This month, we preview feature additions to NetEqualizer coming in 2013, offer a special deal on web application security testing for the Holidays, and remind NetEqualizer customers to upgrade to Software Update 6.0. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
Art Reisman, CTO – APconnections

This month's picture is from Parent's Night for my daughter's volleyball team. In December, as I get ready for the Holidays, I often think about what is important to me – like family, friends, my health, and how I help to run this business. While pondering these thoughts, I came up with some quotes that have meaning to me, which I am sharing here. I hope you enjoy them, or that they at least get you thinking about what is important to you!

“Technology is not what has already been done.”
“Following too closely ruins the journey.”
“Innovation is not a democratic endeavor.”
“Time is not linear, it just appears that way most of the time.”

What are your favorite quotes? We love it when we hear back from you – so if you have a quote, or a story about how we have helped you that you would like to share, let us know. Email me directly at art@apconnections.net. I would love to hear from you!

NetEqualizer: Coming in 2013

We are always looking to improve our NetEqualizer product line so that our customers get maximum value from their purchase. Part of this process is brainstorming changes and additional features that help meet that goal.

Here are a couple of ideas for changes to NetEqualizer that will arrive in 2013. Stay tuned to NetEqualizer News and our blog for updates on these features!

1) NetEqualizer in Mesh Networks and Cloud Computing

As the use of NAT distributed across mesh networks becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, our stream-based behavior shaping will need to evolve.

This is because we currently base our decision of whether or not to shape on a pair of IP addresses talking to each other, without considering port numbers. Sometimes, in cloud or mesh networks, services are trunked across a tunnel using the same IP address. As they cross the trunk, the streams are broken out appropriately based on port number.

So, for example, say you have a video server as part of a cloud computing environment. Without any NAT, on a wide-open network, we would be able to give that video server priority simply by knowing its IP address. However, in a meshed network, the IP connection might be the same as other streams, and we’d have no way to differentiate it. It turns out, though, that services within a tunnel may share IP addresses, but the differentiating factor will be the port number.

Thus, in 2013 we will no longer shape just on IP to IP, but will evolve to offer shaping on IP(Port) to IP(Port). The result will be quality of service improvements even in heavily NAT’d environments.
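As a rough sketch of what that change means (illustrative only, not NetEqualizer's actual internal data structures), the key used to identify a stream grows from an IP pair to an IP-and-port pair:

from collections import namedtuple

# Hypothetical stream keys -- a sketch of the idea, not NetEqualizer's actual internals.
LegacyKey = namedtuple("LegacyKey", ["src_ip", "dst_ip"])
PortAwareKey = namedtuple("PortAwareKey", ["src_ip", "src_port", "dst_ip", "dst_port"])

# Behind NAT or a mesh tunnel, two different services can ride the same IP pair:
video = PortAwareKey("10.0.0.5", 443, "203.0.113.9", 51020)   # cloud video server
p2p = PortAwareKey("10.0.0.5", 6881, "203.0.113.9", 51021)    # p2p stream

# Keyed on IP alone, both streams collapse into one and cannot be shaped separately...
assert LegacyKey(video.src_ip, video.dst_ip) == LegacyKey(p2p.src_ip, p2p.dst_ip)

# ...while the port-aware key keeps them distinct, so the video stream can get priority.
assert video != p2p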

2) 10 Gbps Line Speeds without Degradation

Some of our advantages over the years have been our price point, the techniques we use on standard hardware, and the line speeds we can maintain.

Right now, our NE3000 and above products all have true multi-core processors, and we want to take advantage of that to enhance our packet analysis. While our analysis is very quick and efficient today (sustained speeds of 1 Gbps up and down), in very high-speed networks, multi-core processing will amp up our throughput even more. In order to get to 10 Gbps on our Intel-based architecture, we must do some parallel analysis on IP packets in the Linux kernel.
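The real implementation lives in the Linux kernel, but the basic idea of parallel analysis can be sketched loosely in user-space Python: hash each stream to a fixed worker so every core stays busy while per-stream packet order is preserved. This is an illustration of the concept only, not our kernel code:

# Loose user-space sketch of fanning packets out to parallel analyzers by stream hash.
import multiprocessing as mp

NUM_WORKERS = 4  # e.g. one analysis queue per core

def worker_for(src_ip, dst_ip):
    # The same stream always lands on the same worker, keeping its packets in order.
    return hash((src_ip, dst_ip)) % NUM_WORKERS

def analyze(queue):
    while True:
        packet = queue.get()
        if packet is None:      # shutdown sentinel
            break
        # ... per-packet behavior analysis would happen here ...

if __name__ == "__main__":
    queues = [mp.Queue() for _ in range(NUM_WORKERS)]
    workers = [mp.Process(target=analyze, args=(q,)) for q in queues]
    for w in workers:
        w.start()

    sample_packets = [("10.0.0.5", "203.0.113.9"), ("10.0.0.7", "198.51.100.2")]
    for src, dst in sample_packets:
        queues[worker_for(src, dst)].put((src, dst))

    for q in queues:
        q.put(None)
    for w in workers:
        w.join()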

The good news is that we’ve already developed this technology in our NetGladiator product (check out this blog article here).

Coming in 2013, we’ll port this technology to NetEqualizer. The result will be low-cost bandwidth shapers that can handle extremely high line speeds without degradation. This is important because in a world where bandwidth keeps getting cheaper, the only reason to invest in an optimizer is if it makes good business sense.

We have prided ourselves on smart, efficient, optimization techniques for years – and we will continue to do that for our customers!


Secure Your Web Applications for the Holidays!

We want YOU to be proactive about security. If your business has external-facing web applications, don’t wait for an attack to happen – protect yourself now! It only takes a few hours of our in-house security experts’ time to determine if your site might have issues, so, for the Holidays, we are offering a $500 upfront security assessment for customers with web applications that need testing!

If it is determined that our NetGladiator product can help shore up your issues, that $500 will be applied toward your first year of NetGladiator Software & Support (GSS). We also offer further consulting based on that assessment on an as-needed basis.

To learn more about NetGladiator, check out our video here.

Or, contact us at:

ips@apconnections.net

-or-

303-997-1300 x123


Don't Forget to Upgrade to 6.0! With a brief tutorial on User Quotas

If you have not already upgraded your NetEqualizer to Software Update 6.0, now is the perfect time!

We have discussed the new upgrade in depth in previous newsletters and blog posts, so this month we thought we’d show you how to take advantage of one of the new features – User Quotas.

User quotas are great if you need to track bandwidth usage over time per IP address or subnet. You can also send alerts to notify you if a quota has been surpassed.

To begin, you'll want to navigate to the Manage User Quotas menu on the left. You'll then want to start the Quota System using the third option from the top, Start/Stop Quota System.

Now that the Quota System is turned on, we’ll add a new quota. Click on Configure User Quotas and take a look at the first window:

[Screenshot: the Configure User Quotas window]

Here are the settings associated with setting up a new quota rule:

Host IP: Enter the host IP or subnet that this quota rule applies to.

Quota Amount: Enter the total number of bytes this quota allows.

Duration: Enter the number of minutes the quota is tracked before it resets (e.g., 1440 for one day, 10080 for one week).

Hard Limit Restriction: Enter the number of bytes per second to allow the user once the quota is surpassed.

Contact: Enter a contact email for the person to notify when the quota is exceeded.
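To make the fields concrete, here is a hypothetical rule written out as plain data. The field names mirror the form above and the values are examples only:

# A hypothetical quota rule, mirroring the form fields above (example values only).
quota_rule = {
    "host_ip": "192.168.1.0/24",        # Host IP or subnet the rule applies to
    "quota_amount": 10 * 1024**3,       # total bytes allowed: 10 GB
    "duration_minutes": 7 * 24 * 60,    # tracking window: one week, then the counter resets
    "hard_limit_bps": 200_000,          # bytes/sec allowed once the quota is surpassed
    "contact": "netadmin@example.edu",  # email notified when the quota is passed
}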

After you populate the form, click Add Rule. Congratulations! You’ve just set up your first quota rule!

From here, you can view reports on your quota users and more.

Remember, the new GUI and all the new features of Software Update 6.0 are available for free to customers with valid NetEqualizer Software & Support (NSS).

If you don’t have the new GUI or are not current with NSS, contact us today!

sales@apconnections.net

-or-

toll-free U.S. (888-287-2492),

worldwide (303) 997-1300 x. 103


Best Of The Blog

Internet User’s Bill of Rights

By Art Reisman – CTO – APconnections

This is the second article in our series. Our first was a Bill of Rights dictating the etiquette of software updates. We continue with a proposed Bill of Rights for consumers with respect to their Internet service.

1) Providers must divulge the contention ratio of their service. 

At the core of all Internet service is a balancing act between the number of people that are sharing a resource and how much of that resource is available.

For example, a typical provider starts out with a big pipe of Internet access that is shared via exchange points with other large providers. They then subdivide this access out to their customers in ever smaller chunks – perhaps starting with a gigabit exchange point and then narrowing down to a 10 megabit local pipe that is shared with customers across a subdivision or area of town.

The speed you, the customer, can attain is limited by how many people might be sharing that 10 megabit local pipe at any one time. If you are promised one megabit service, it is likely that your provider would have you share your trunk with more than 10 subscribers, taking advantage of natural usage behavior, which assumes that not all users are active at one time.
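A quick back-of-the-envelope calculation shows how that sharing plays out (the numbers below are illustrative, not any particular provider's):

# Illustrative contention-ratio arithmetic -- example figures only.
local_pipe_bps = 10_000_000   # 10 megabit local pipe shared by the neighborhood
promised_bps = 1_000_000      # 1 megabit service promised to each subscriber
subscribers = 50

print(f"Oversubscription: {subscribers * promised_bps / local_pipe_bps:.0f}:1")   # 5:1
print(f"Worst case, all active: {local_pipe_bps / subscribers:,.0f} bps each")    # 200,000 bps

# Because only a fraction of users are active at any instant, typical speeds stay
# close to the promised rate:
active_users = 8
print(f"With {active_users} active users: {local_pipe_bps / active_users:,.0f} bps each")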

The exact contention ratio will vary widely from area to area, but from experience, your provider will want to maximize the number of subscribers who can share the pipe, while minimizing service complaints due to a slow network. In some cases, I have seen as many as 1,000 subscribers sharing 10 megabits. This is a bit extreme, but even with a ratio as high as this, subscribers will average much faster speeds when compared to dial up…

Photo Of The Month


Kansas Clouds

The wide-open ranch lands in middle America provide a nice retreat from the bustle of city life. When he can find time, one of our staff members visits his property in Kansas with his family. The Internet connection out there is shaky, but it is a welcome change from routine.

Getting the Keys to the Kingdom: SQL Injection


By Zack Sanders

Director of Security – www.netgladiator.net

SQL injection is one of the most well-known vulnerabilities in web application security. Because so many web sites today are database driven, an SQL injection vulnerability puts the entire application and its users at risk. The purpose of this article is to explain what SQL injection is, show how easily it can be exploited, and discuss what steps you can take to make sure your site is secure from this devastating attack vector.

What is SQL injection?

SQL injection is performed by including portions of SQL statements in a web form entry field in an attempt to get the web site to pass a newly formed, malicious SQL command to the database. The vulnerability arises when user input is incorrectly filtered or not strongly typed, and is then unexpectedly executed. SQL commands are thus injected from the web form into the application's database to change its content or to dump information such as credit card numbers or passwords to the attacker. An average website can experience hundreds of SQL injection attempts per hour from automated bots scouring the Internet.
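As a minimal illustration (a generic sketch, not taken from any client's application), compare a login query built by naive string concatenation with a parameterized one; SQLite stands in here for whatever database the site actually uses:

import sqlite3  # stand-in database; the same flaw applies to MySQL, PostgreSQL, etc.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

# Attacker-supplied form input:
username = "' OR '1'='1"
password = "' OR '1'='1"

# VULNERABLE: user input is concatenated directly into the SQL statement.
query = f"SELECT * FROM users WHERE username = '{username}' AND password = '{password}'"
print(conn.execute(query).fetchall())   # returns every row -- the login check is bypassed

# SAFE: a parameterized query treats the input strictly as data, never as SQL.
safe = "SELECT * FROM users WHERE username = ? AND password = ?"
print(conn.execute(safe, (username, password)).fetchall())   # returns an empty list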

How do attackers discover it?

When searching for SQL injection, an attacker is looking for an application that behaves differently based on varying inputs to a form. For example, a vulnerable web form might accept expected content just fine, but if SQL characters are entered, a system-level SQL error is generated saying something like, “There is an error in your MySQL syntax.” This tells the attacker that the SQL code is being interpreted, even though it is incorrect, and indicates that the application is vulnerable.
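In practice this probing is usually automated. A bare-bones sketch of the idea follows; the target URL, parameter name, and error strings are hypothetical placeholders, and testing like this should only ever be run against sites you own or are authorized to assess:

# Minimal sketch of error-based SQL injection probing (authorized testing only).
import requests

TARGET = "https://example.com/products"            # hypothetical page under test
PROBES = ["1", "1'", "1' --", "1' OR '1'='1"]      # increasingly SQL-flavored inputs
ERROR_SIGNATURES = ["error in your SQL syntax", "mysql_fetch", "SQLSTATE"]

for probe in PROBES:
    resp = requests.get(TARGET, params={"id": probe}, timeout=10)
    if any(sig.lower() in resp.text.lower() for sig in ERROR_SIGNATURES):
        print(f"Input {probe!r} triggered a database error -- likely injectable")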

How is a vulnerable site exploited?

Once an application is deemed vulnerable, an attacker will try using an automated injection tool to glean information about the database. Structural data such as the information schema, the version of SQL being run, and table names are all trivial to gather. Once the structure is defined and understood, custom SQL statements can be written to download data from interesting tables like “users”, “customers”, “payments”, etc. Here is a screenshot from a recent client of mine whose site was vulnerable. These are just a few of the columns from the “users” table.

* Names, email addresses, partial passwords, usernames, and addresses are blocked out.

What happens next?

With this level of access, the sky is the limit. Here are a few things an attacker might do:

1) Take all of the hashed passwords and run them against a rainbow table for matches. This is why long passwords are so important. Even though hashing is a one-way function, the hashes of short and common passwords are all known and can easily be looked up in reverse. An attacker might then use the recovered passwords, along with email addresses, to compromise other accounts owned by those users (see the salted-hash sketch after this list).

2) Change the super administrator flag for a user they know the password for, and log in to gain further access. A common goal is to get to a file upload interface so that a script can be uploaded to the server that would give an attacker remote control.

3) Drop the entire database purely to wreak havoc.
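On the defensive side, this is why passwords should be stored as salted, deliberately slow hashes: a unique salt per user makes precomputed rainbow tables useless, and a slow function makes each guess expensive. A minimal sketch using Python's standard library (the iteration count is an illustrative choice, not a recommendation from this article):

import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt defeats precomputed rainbow tables.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("password123", salt, digest))                   # False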

How do you protect your site from SQL injection?

ALL GET and POST requests involving the database need to be filtered and analyzed before being run. This includes actions like:

1) Strip away or escape SQL characters. In PHP with MySQL, for example, this is what the mysql_real_escape_string() function does (see the sketch after this list).

2) Analyze for expected input. Should the entry only be a number 1-50? Check to make sure it is a positive number, non-zero, and no more than two characters.

3) Have strong database permissions. Different database users should be created with only the permissions needed for their function. For example, don't use the root MySQL user to connect your web application to your database.

4) Hire an expert to assess your web application. The cost of performing this type of health check is miniscule compared to the cost of being exploited.

5) Install an intrusion protection system like NetGladiator that looks for SQL characters in URLs.
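Here is a minimal sketch of points 1 and 2, written in Python for illustration (the field names are hypothetical, and parameterized queries at the database layer remain the strongest defense):

# Hypothetical request-validation helpers illustrating points 1 and 2 above.

SQL_CHARACTERS = set(";'\"\\-")  # characters commonly abused in injection payloads

def strip_sql_characters(raw: str) -> str:
    # Point 1: strip risky characters (escaping or parameterized queries are still
    # required at the query layer -- this is only one defensive layer).
    return "".join(ch for ch in raw if ch not in SQL_CHARACTERS)

def validate_quantity(raw: str) -> int:
    # Point 2: enforce exactly what is expected -- a 1-2 character positive number, 1-50.
    if not raw.isdigit() or len(raw) > 2:
        raise ValueError("quantity must be a 1-2 digit number")
    value = int(raw)
    if not 1 <= value <= 50:
        raise ValueError("quantity must be between 1 and 50")
    return value

print(strip_sql_characters("1' OR '1'='1"))  # -> "1 OR 1=1"
print(validate_quantity("42"))               # -> 42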

The keys to the kingdom

Hopefully you can now see the danger of SQL injection. The level of control and access, coupled with the ease of discovery and exploitation, makes it extremely problematic. The good news is that putting basic protections in place is relatively easy.

Contact us today if you want help securing your web application!

Four Reasons Why Companies Remain Vulnerable to Cyber Attacks


Over the past year, since the release of our IPS product, we have spent many hours talking to resellers and businesses regarding Internet security. Below are our observations about security investment, and more importantly, non-investment.

1) By far the number one reason why companies are vulnerable is procrastination.

Seeing is believing, and many companies have never been hacked or compromised.

Some clarification here: most attacks do not end with something being destroyed or an obvious trail of lifted data. This does not mean they do not happen; it's just that in many cases there is no immediate ramification, hence business as usual.

Companies are run by people, and most people are reactive and somewhat single-threaded; they can only address a few problems at a time. Without a compelling, obvious problem, security gets pushed down the list. The exception to the procrastination rule would be verticals such as financial institutions, where security audits are mandatory (more on audits in a bit). Most companies, although aware of the risk factors, are reluctant to spend on a problem that has never happened. In their defense, a company that reacts to all the security FUD might find itself hamstrung and out of business. Sometimes, to be profitable, you have to live with a little risk.

2) Existing security tools are ignored.

Many security suites are just too broad to be relevant. Information overload can lead to a false sense of coverage.

The best analogy I can give is the tornado warning system used by the National Weather Service. Their warning system, although well-intended, has become so lacking in specificity that after a while people ignore the warnings. The same holds true with security tools. In order to impress and out-do one another, security tools have become bloated with quantity, not quality. This overload of data can lead to an overwhelming glut of frivolous information. It would be like a stock analyst predicting every possible outcome and expecting you to invest on that advice. Without a specific, targeted piece of information, your security solution becomes a distraction.

3) Security audits are mandated formalities.

In some instances, a security audit is treated as a bureaucratic mandate. When security audits are mandated as a standard, the process of the audit can become the objective. The soldiers carrying out the process will view the completed checklist as the desired result and thus may not actually counter existing threats. It’s not that the audit does not have value, but the audit itself becomes a minimum objective. And most likely the audit is a broad cookie-cutter approach which mostly serves to protect the company or individuals from blame.

4) It may just not be worth the investment.

The cost of getting hacked may be less than the ongoing fees and the time required to maintain a security solution. On a mini-scale, I followed this advice on my home laptop running Windows. It was easier to re-load my system every six months when I got a virus than to mess with all the security and virus protection being thrown at me, slowing my system down. The same holds true on a corporate scale. Although nobody would ever come out and admit this publicly, or make it deliberately easy, it might be more cost-effective to recover from a security breach than to proactively invest in preventing it. What if your customer records get stolen? So what? Consumers hear about the largest banks and government security agencies getting hacked every day. If you are a mid-sized business, it might be more cost-effective to invest in some damage control after the fact rather than jeopardize cash flow today.

So what is the future for security products? Well, they are not going to go away. They just need to be smarter, more cost-effective, and turn-key, and then perhaps companies will find the benefit-to-risk more acceptable.


APconnections Backs Up Security Device Support with an Unusual Offer: “We'll Hack Your Network”


What gets people excited about purchasing an intrusion detection system? Not much. Certainly, fear can be used to sell security devices. But most mid-sized companies are spread thin with their IT staff; they are focused on running their business operations. To spend money to prevent something that has never happened to them would be seen as somewhat foolish. There are a large number of potential threats to a business, and security is just one of them.

One expert pointed out recently:

“Sophisticated fraudsters are becoming the norm with data breaches, carder forums, and do-it-yourself (DIY) crime kits being marketed via the Internet.” Excerpt from the fraudwar blogspot.

Thus, getting data stolen happens so often that it can be considered a survivable event; it is the new normal. Your customers are not going to run for the hills, as they have been conditioned to roll with this threat. But there is still a steep cost for such an event. So our staff put our heads together and asked: surely there must be an easy, quantifiable, minimum-investment way to objectively evaluate data risk without a giant cluster of data security devices in place, spewing gobs of meaningless drivel.

One of our internal white-knight hackers pointed out that, in his storied past, he had been able to break into almost any business at will (a good thing he is a white knight and does not steal or damage anything). While talking to some of our channel resellers, we have also learned that most companies, although aware of the threat of outside intrusion, are reluctant to throw money and resources at a potential problem they can't easily quantify.

Thus arose the idea for our new offer. For a small refundable retainer fee, we will attempt to break into a customer's data systems from the outside. If we can't get in, then we'll return the retainer fee. Obviously, if we get in, we can then propose a solution with indisputable evidence of the vulnerability, and if we don't get in, then the customer can have some level of assurance that their existing infrastructure thwarted a determined break-in.

What to Do If Your Organization Has Been Hacked


By Zack Sanders – Security Expert – APconnections

It's a scary scenario that every business fears: a successful attack on your web site that results in stolen information or embarrassing defacement.

From huge corporations to mom-and-pop online shops, data security is (or should be) a keystone consideration. As we've written about before, no one is immune to attack – not even local businesses with small online footprints. I have personally worked with many clients who you would not think would be targeted by hackers, and who ended up being the victims of reasonably intricate and damaging attacks that cost many thousands of dollars to mitigate.

Because no set of security controls or solutions can make you truly safe from exploitation, it is important to have a plan in place in case you do get hacked. Having a documented plan ready BEFORE an attack occurs allows you to be calm and rational with your response. Below are some basic steps you should consider in an incident response plan and/or follow in case a breach occurs.

1) Stay calm.

An attack, especially one in progress, naturally causes panic. While understandable, these feelings will only cause you to make mistakes in handling the breach. Stay calm and stick to your plan.

2) DO NOT unplug the system.

Unplugging the affected system, deleting malicious files, or restoring to a backup are all panic-driven responses to a security incident. When you take measures such as these, you potentially destroy key evidence in determining what, if anything, was taken, how it was taken, and when. Leave the system in place and call an expert as soon as possible.

3) Call an expert.

There are many companies that specialize in post-breach analysis, and it is important to contact these folks right away. They can help determine how the breach occurred, what was taken, and when. They can also help implement controls and improve security so that the same attack does not happen again. If you’ve been hacked, this is the most important step to take.

4) Keep a record.

For possible eventual legal action and to simply keep track of system changes, always keep a record of what has happened to the infected system – who has touched it, when, etc.

5) Determine the scope of the attack, stop the bleeding, and figure out what was taken.

The expert you phoned in will analyze the affected system and follow the steps above. Once the scope is understood, the system will be taken offline and the security hole that caused the problem will be discovered and closed. After that, the information that was compromised will be reviewed. This step will help determine how to proceed next.

6) Figure out who to tell.

Once you’ve determined what kind of information was compromised, it is very important to communicate that to the right people. If it was internal documents, you probably don’t need to make that public. If it was usernames and passwords, you must let your users know.

7) Have a security assessment performed and improve security controls.

Have your expert analyze the rest of your infrastructure and web applications for security holes that could be a problem in the future. After this occurs, the expert can recommend tools that will vastly improve your security layering.

Of course, many of these tasks can be performed proactively to greatly reduce the likelihood of ever needing this process. Contact an expert now and have them analyze your systems for security vulnerabilities.

How Safe is The Cloud?


By Zack Sanders, NetEqualizer Guest Columnist

There is no question that cloud-computing infrastructures are the future for businesses of every size. The advantages they offer are plentiful:

  • Scalability – IT personnel used to have to scramble for hardware when business decisions dictated the need for more servers or storage. With cloud computing, an organization can quickly add and subtract capacity at will. New server instances are available within minutes of provisioning them.
  • Cost – For a lot of companies (especially new ones), the prospect of purchasing multiple $5,000 servers (and paying to have someone maintain them) is not very attractive. Cloud servers are very cheap – and you only pay for what you use. If you don't require a lot of storage space, you can pay around 1 cent per hour per instance. That's roughly $8/month. If you can't incur that cost, you should probably reevaluate your business model.
  • Availability – In-house data centers experience routine outages. When you outsource your data center to the cloud, everything server related is in the hands of industry experts. This greatly increases quality of service and availability. That’s not to say outages don’t occur – they do – just not nearly as often or as unpredictably.

While it’s easy to see the benefits of cloud computing, it does have its potential pitfalls. The major questions that always accompany cloud computing discussions are:

  • “How does the security landscape change in the cloud?” – and
  • “What do I need to do to protect my data?”

Businesses and users are concerned about sending their sensitive data to a server that is not totally under their control – and they are correct to be wary. However, when taking proper precautions, cloud infrastructures can be just as safe – if not safer – than physical, in-house data centers. Here’s why:

  • They’re the best at what they do – Cloud computing vendors invest tons of money securing their physical servers that are hosting your virtual servers. They’ll be compliant with all major physical security guidelines, have up-to-date firewalls and patches, and have proper disaster recovery policies and redundant environments in place. From this standpoint, they’ll rank above almost any private company’s in-house data center.
  • They protect your data internally – Cloud providers have systems in place to prevent data leaks or access by third parties. Proper separation of duties should ensure that root users at the cloud provider couldn’t even penetrate your data.
  • They manage authentication and authorization effectively – Because logging and unique identification are central components to many compliance standards, cloud providers have strong identity management and logging solutions in place.

The above factors provide a lot of peace of mind, but with security it's always important to layer approaches and be diligent. By layering, I mean that the most secure infrastructures have layers of security components such that, if one were to fail, the next would thwart an attack. This diligence is just as important for securing your external cloud infrastructure. No environment is ever immune to compromise. A key security aspect of the cloud is that your server is outside of your internal network, and thus your data must travel public connections to and from your external virtual machine. Companies with sensitive data are very worried about this. However, when taking the following security measures, your data can be just as safe in the cloud:

  • Secure the transmission of data – Set up SSL connections for sensitive data, especially logins and database connections.
  • Use keys for remote login – Utilize public/private keys, two-factor authentication, or other strong authentication technologies. Do not allow remote root login to your servers. Brute force bots hound remote root logins incessantly in cloud provider address spaces.
  • Encrypt sensitive data sent to the cloud – SSL will take care of the data's integrity during transmission, but it should also be stored encrypted on the cloud server (a brief sketch follows this list).
  • Review logs diligently – use log analysis software ALONG WITH manual review. Automated technology combined with a manual review policy is a good example of layering.
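As one concrete example of the encryption point above, sensitive records can be encrypted locally before they are ever written to cloud storage. This sketch uses the third-party cryptography package and leaves key management (which must stay outside the cloud environment) out of scope:

# Sketch: encrypt sensitive data client-side before uploading it to the cloud.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store this key outside the cloud environment
cipher = Fernet(key)

record = b"customer=jdoe;card=4111111111111111"
ciphertext = cipher.encrypt(record)  # only this ciphertext gets uploaded/stored

# ...later, after retrieving the ciphertext from cloud storage...
assert cipher.decrypt(ciphertext) == record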

So, when taking proper precautions (precautions that you should already be taking for your in-house data center), the cloud is a great way to manage your infrastructure needs. Just be sure to select a provider that is reputable and make sure to read the SLA. If the hosting price is too good to be true, it probably is. You can’t take chances with your sensitive data.

About the author:

Zack Sanders is a Web Application Security Specialist with Fiddler on the Root (FOTR). FOTR provides web application security expertise to any business with an online presence. They specialize in ethical hacking and penetration testing as a service to expose potential vulnerabilities in web applications. The primary difference between the services FOTR offers and those of other firms is that they treat your website like an ACTUAL attacker would. They use a combination of hacking tools and savvy know-how to try and exploit your environment. Most security companies  just run automated scans and deliver the results. FOTR is for executives that care about REAL security.
