Five Things to Consider When Building a Commercial Wireless Network


By Art Reisman, CTO, APconnections,  www.netequalizer.com

with help from Sam Beskur, CTO Global Gossip North America, http://hsia.globalgossip.com/

Over the past several years, we have provided our Bandwidth Controllers as a key component in many wireless networks. Along the way we have seen many successes, as well as some not-so-successful deployments. What follows are some key lessons from our experiences with wireless deployments.

1) Commercial Grade Access Points versus Consumer Grade

Commercial grade access points use intelligent collision avoidance in densely packed areas. In practice, this means they ensure that a user within range of multiple access points is only being serviced by one AP at a time. Without this intelligence, you get signal interference and confusion. An analogy would be asking a sales rep for help in a store and having two reps answer you at the same time; it would be confusing to know which one to listen to. Commercial grade access points follow a courtesy protocol, so you do not get two, or even three, responses in a densely packed network.

Consumer grade access points are meant to service a single household. If two are in close proximity to each other, they do not coordinate with one another. The end result is interference during busy times, as both will respond at the same time to the same user without any awareness of each other. Because of this, users will have trouble staying connected. Sometimes these performance problems show up long after the installation. When pricing out a solution for a building or hotel, be sure to ask the contractor whether they are bidding commercial grade (intelligent) access points.

2) Antenna Quality

There are a limited number of frequencies (channels) open to public WiFi. If you can ensure the transmission is broadcast in a limited direction, this allows for more simultaneous conversations, and thus better quality. Higher quality access points can actually determine the direction of the users connected to them, so that when they broadcast, they cancel out the signal in directions not intended for the end user. In tight spaces with multiple access points, signal-canceling antennas will greatly improve service for all users.

3) Installation Sophistication and Site Surveys

When installing a wireless network, there are many things a good installer must account for, such as the attenuation between access points. In a perfect world, you want your access points far enough apart that they are not getting blasted by their neighbors’ signals. It is okay to hear a neighboring AP faintly in the background; you must have some overlap, otherwise you would have gaps in coverage, but you do not want APs placed close together competing with high-energy signals. If you were installing your network in a giant farm field with no objects between access points, you could simply set them up in a grid with the prescribed distance between nodes. In the real world you have walls, trees, windows, and all sorts of objects in and around buildings. A good installer will actually go out and measure the signal loss from these objects in order to place the correct number of access points. This is not a trivial task, but without an extensive site survey the resulting network will have quality problems.

4) Know What is Possible

Despite all the advances in wireless networks, they still have density limitations. It is hard to quantify this precisely, other than to say that wireless does not do well in an extremely crowded space (stadium, concert venue, etc.) with many devices all trying to get access at the same time. It is a big jump from designing coverage for a hotel with 1,000 guests spread out over the hotel grounds to a packed stadium of people sitting shoulder to shoulder. The other compounding issue with density is that it is almost impossible to simulate before building out the network and going live. I did find a reference to a company that claims to have done a successful build-out in Gillette Stadium, home of the New England Patriots. It might be worth looking into this further for other large venues.

5) Old Devices

Old 802.11b devices on your network will actually cause your access points to back off to slower speeds. Most b-only devices were discontinued in the mid-2000s, but they are still around. The best practice here is simply to block these devices; they are rare and not worth dragging down the speed of your overall network.

We hope these five (5) practical tips help you to build out a solid commercial wireless network. If you have questions, feel free to contact APconnections or Global Gossip to discuss.

Related Article: Wireless Site Survey with Free Tools

Wireless is Nice, but Wired Networks are Here to Stay


By Art Reisman, CTO, www.netequalizer.com


The trend to go all wireless in high-density housing was seemingly a slam dunk just a few years ago. The driving forces behind the exclusive deployment of wireless over wired access were twofold.

  • Wireless cost savings. It is much less expensive to blanket a building with a mesh network than to pay a contractor to run RJ45 cable throughout the building.
  • People expect wireless. Nobody plugs a computer into the wall anymore – or do they?

Something happened on the way to wireless Shangri-La. The physical limitations of wireless, combined with the appetite for ever increasing video, have caused some high density housing operators to rethink their positions.

In a recent discussion with several IT administrators representing large residential housing units, the topic turned to whether or not the wave of the future would continue to include wired Internet connections. I was surprised to learn that the consensus was that wired connections were not going away anytime soon.

To quote one attendee…

“Our parent company tried cutting costs by going all wireless in one of our new builds. The wireless access in buildings just can’t come close to achieving the speeds we can get in the wired buildings. When push comes to shove, our tenants still need to plug into the RJ45 connector in the wall socket. We have plenty of bandwidth at the core, but the wireless just can’t compete with the expectations we have attained with our wired connections.”

I found this statement on a ResNet mailing list from Brown University.

“Greetings,

     I just wanted to weigh-in on this idea. I know that a lot of folks seem to be of the impression that ‘wireless is all we need’, but I regularly have to connect physically to get reasonable latency and throughput. From a bandwidth perspective, switching to wireless-only is basically the same as replacing switches with half-duplex hubs.
     Sure, wireless is convenient, and it’s great for casual email/browsing/remote access users (including, unfortunately, the managers who tend to make these decisions). Those of us who need to move chunks of data around or who rely on low-latency responsiveness find themselves marginalized in wireless-only settings. For instance: RDP, SSH, and X11 over even moderately busy wireless connections are often barely usable, and waiting an hour for a 600MB Debian ISO seems very… 1997.”

Despite the tremendous economic pressure to build ever faster wireless networks, the physics of transmitting signals through the air will ultimately limit the speed of wireless connections to far below what can be attained by wired connections. I always knew this, but was not sure how long it would take reality to catch up with the hype.

Why is wireless inferior to wired connections when it comes to throughput?

In the real world of wireless, the factors that limit speed include:

  1. The maximum amount of data that can be transmitted on a wireless channel is less than on a wire. A rule of thumb for transmitting digital data over the airwaves is that you can only send bits at about 1/2 the carrier frequency. For example, 800 megahertz (a common wireless carrier frequency) is 800 million cycles per second, and 1/2 of that is 400 million, which translates to a theoretical maximum data rate of 400 megabits per second. Realistically though, with imperfect signals (noise) and other environmental factors, 1/10 of the carrier frequency is more likely the upper limit. This gives us a maximum carrying capacity of about 80 megabits on our 800 megahertz channel. For contrast, the upper limit of a single fiber strand is around 10 gigabits, and higher speeds are attained by laying cables in parallel, bonding multiple strands together in one cable, and, on major backbones, transmitting multiple frequencies of light down the same fiber to achieve 100 gigabits on a single strand! In fairness, wireless signals can also use multiple carrier frequencies, but the difference is you cannot have them in close proximity to each other.
  2. The number of users sharing the channel is another limiting factor. Unlike a single wired connection, wireless users in densely populated areas must share a frequency; you cannot pick out one user in the crowd and dedicate the channel to that person. This means that, unlike the dedicated wire going straight from your Internet provider to your home or office, you must wait your turn to talk on the frequency when there are other users in your vicinity. So if we take our 80 megabits of effective channel bandwidth on our 800 megahertz frequency and add 20 users, we are now down to 4 megabits per user.
  3. The efficiency of the channel. When multiple people share a channel, the efficiency of how they use it drops. Think of traffic at a four-way stop: there is quite a bit of wasted time while drivers figure out whose turn it is to go, not to mention the time it takes to clear the intersection. The same goes for wireless sharing techniques; there is always overhead in context switching between users. Thus our 20-user scenario drops to an effective data rate of about 2 megabits per user.
  4. Noise. There is noise and then there is NOISE. Although we accounted for average noise in our original assumptions, in reality there will always be segments of the network that experience higher noise levels than average. When NOISE spikes, the network degrades further, and sometimes a user cannot communicate with an AP at all. NOISE is a maddening and hard-to-quantify variable. Our assumptions above were based on degradation from average noise levels, but it is not unheard of for an AP to drop its effective transmit rate by 4 or 5 times to account for noise, at which point the effective data rate for all users on that segment drops to around 500 kbps, just barely enough bandwidth to watch a bad video. (The arithmetic behind these numbers is worked out in the short sketch after this list.)
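
To make the chain of reductions above easier to follow, here is a minimal back-of-the-envelope sketch in Python. The ratios (1/10 of the carrier frequency for a usable channel, 20 users, 50% sharing efficiency, a 4x noise backoff) are the rough assumptions from the list, not measured values.

```python
# Back-of-the-envelope wireless capacity estimate, following the rough
# ratios used in the list above (assumptions, not measurements).

def effective_per_user_mbps(carrier_mhz=800, usable_fraction=0.10,
                            users=20, sharing_efficiency=0.5,
                            noise_backoff=4):
    channel_mbps = carrier_mhz * usable_fraction   # ~80 Mbps usable channel
    per_user = channel_mbps / users                # shared among 20 users -> ~4 Mbps
    per_user *= sharing_efficiency                 # contention overhead   -> ~2 Mbps
    return per_user / noise_backoff                # noisy segment         -> ~0.5 Mbps

print(f"{effective_per_user_mbps():.2f} Mbps per user")  # prints 0.50 Mbps per user
```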

Long live wired connections!

Are You Unknowingly Sharing Bandwidth with Your Neighbors?


Editor’s Note: The following is a revised and updated version of our original article from April 2007.

In a recent article titled “The White Lies ISPs Tell about Broadband Speeds,” we discussed some of the methods ISPs use when overselling their bandwidth in order to put their best face forward for their customers. To recap a bit, oversold bandwidth is a condition that occurs when an ISP promises more bandwidth to its users than it can actually deliver; hence, during peak hours you may actually be competing with your neighbor for bandwidth. Since “overselling” is a relative term, with some ISPs pushing the limit to greater extremes than others, we thought it a good idea to do a quick follow-up and define some parameters for measuring the oversold condition.

For this purpose we use the term contention ratio. A contention ratio is simply the size of an Internet trunk divided by the number of users sharing it. We normally think of Internet trunks in units of megabits. For example, 10 users sharing a one-megabit trunk would have a 10-to-1 contention ratio. If the bandwidth on the trunk were shared equally and simultaneously, each user could sustain a constant feed of 100 kbps, which is exactly 1/10 of the overall bandwidth (the quick calculation below spells this out).
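
A one-line calculation makes the definition concrete; the second trunk size is just an illustration.

```python
# Contention-ratio arithmetic: equal, simultaneous sharing of a trunk.
def per_user_kbps(trunk_mbps, users):
    """Sustained rate per user if everyone draws on the trunk at once."""
    return trunk_mbps * 1000 / users

print(per_user_kbps(trunk_mbps=1, users=10))    # 10-to-1 ratio -> 100.0 kbps each
print(per_user_kbps(trunk_mbps=10, users=500))  # 50-to-1 ratio -> 20.0 kbps each
```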

So what is an acceptable contention ratio?

From a business standpoint, it is whatever a customer will put up with and pay for without canceling their service. This definition may seem ethically suspect, but whether in the bygone days of telephone service or the contemporary Internet bandwidth business, there are long-standing precedents for overselling. What do you think causes a circuit busy signal? Or a dropped cell phone call? It’s best to leave the moral debate to a university assignment or a Sunday sermon.

So, without pulling any punches, what exactly will a customer tolerate before pulling the plug?
Here are some basic unofficial observations:
  • Rural customers in the US and Canada: Contention ratios of 10 to 1 are common (in 2007 this was 20 to 1)
  • International customers in remote areas of the world: Contention ratios of 20 to 1 are common (in 2007 this was 80 to 1)
  • Internet providers in urban areas: Contention ratios of 5 to 1 are to be expected (in 2007 this was 10 to 1) *

* Larger cable operators have extremely fast last-mile connections; most of their speed claims are based on the speed of that last mile and not on their Internet exchange point thresholds. The numbers cited above relate to their connection to the broader Internet, not the last mile from their office (NOC) to your home. Admittedly, the line of what counts as the Internet can be blurred, as many cable operators cache popular content locally (Netflix movies, for example). The movie is delivered from a server at their local office directly to your home, so technically we would not count this toward your contention ratio to the Internet.

The numbers above are a good, rough starting point, but things are not as simple as they look. There is a statistical twist as bandwidth amounts get higher.

From the customer’s perspective of speed, contention ratios can actually increase as the overall Internet trunk size gets larger. For example, if 50 people can share one megabit without mutiny, it should follow that 100 people can share two megabits without mutiny, as the ratio has not changed: it is still 50 to 1.

However, from observations of hundreds of ISPs, we can comfortably conclude that perhaps 110 people can share two megabits with the same tolerance as 50 people sharing one megabit. What this means is that the larger the ISP’s trunk, the higher the contention ratio it can get away with, stretching each megabit of bandwidth across more subscribers at the same cost.

Is this really true? And if so, what are its implications for your business?

This is simply an empirical observation, backed up by talking to literally thousands of ISPs over the course of four years and noticing how their oversubscription ratios increase with the size of their trunk while customer perception of speed remains about the same.

A conservative estimate is that, starting with the baseline ratios listed above, you can safely add about 10 percent more subscribers above and beyond the original contention ratio for each megabit of trunk they share (the sketch below shows one way to read this rule of thumb).
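
For example, using the 50-to-1 baseline from the two-megabit illustration above. The function and its compounding interpretation are ours, offered as one rough reading of the rule, not a hard formula.

```python
# One reading of the 10-percent rule of thumb: keep the baseline ratio,
# then allow 10% more subscribers for each megabit beyond the first.
def max_subscribers(trunk_mbps, baseline_ratio=50):
    subscribers = baseline_ratio * trunk_mbps     # baseline ratio scaled to trunk size
    subscribers *= 1.10 ** (trunk_mbps - 1)       # +10% headroom per extra megabit
    return int(subscribers)

for mbps in (1, 2, 5, 10):
    print(mbps, "Mbps ->", max_subscribers(mbps), "subscribers")
# 1 -> 50, 2 -> 110 (the example above), 5 -> 366, 10 -> 1178
```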

Related Articles

How to speed up access on your iPhone

How to determine the true speed of video over your Internet Connection

Case Study: A Simple Solution to Relieve Congestion on Your MPLS Network


Summary: In the last few months, we have set up several NetEqualizer systems on hub-and-spoke MPLS networks. Our solution is very cost-effective because it differs from many TOS/compression-based WAN optimization products that require multiple pieces of hardware. Normally, for WAN optimization, a device is placed at the hub and a partner device is placed at each remote location. With the NetEqualizer technology, we have been able to simply and elegantly solve contention issues with a single device at the central hub.

The problem:

A customer has a hub-and-spoke MPLS network where remote sites get their public Internet and corporate data by coming in on a spoke to a central site. Although the network at the host site has plenty of bandwidth, the spokes have a fixed allocation over the MPLS and are experiencing contention issues (e.g., slow response times to corporate sales data).

The solution:

By placing a NetEqualizer at a central location, so that all the remote spokes come in through the NetEqualizer, we are able to sense when a remote spoke has reached its contention level. We then perform prioritization on all the competing applications and user streams coming in over the congested link.

Why it works:

QoS and priority are really quite simple: congestion is almost always a case of some large, selfish application dominating a shared link. The NetEqualizer is able to spot these selfish applications and scale them back using a technique called Equalizing. QoS and priority are just a matter of taking bandwidth away from somebody else. See our related article: QoS is a matter of sacrifice.

Okay, but how does it really work?

How does NetEqualizer solve the congested MPLS link issue?

The NetEqualizer solution, which is completely compatible with MPLS, works by taking advantage of the natural inclination of applications to back off when artificially restrained. We’ll get back to this key point in a moment.

NetEqualizer will adjust selfish application streams by adding latency, forcing them to back off and allow potentially starved data applications to establish communications – thus eliminating any disruption.

Once you have determined the peak capacity of an MPLS spoke (if you don’t know it for sure, it can be determined empirically through busy-hour observation), you then tell the centralized NetEqualizer the throughput of the spoke through its defined subnet range or VLAN identification tag. This tells the NetEqualizer to kick into gear when that upper limit on the spoke is reached.

Once configured, the NetEqualizer constantly (every second) measures the total aggregate bandwidth traversing every spoke on your network. If it senses that a spoke’s upper limit is being reached, NetEqualizer will then isolate the dominating flows and encourage them to back off (a simplified sketch of this monitoring loop appears below).
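
The loop below is an illustrative Python sketch of that idea, not NetEqualizer’s actual code: the spoke subnets, the capacities, and the 85% trigger threshold are hypothetical, and the throughput counter is left as a placeholder for whatever accounting source a real implementation would use.

```python
import time

# Hypothetical spoke definitions: subnet (or VLAN) -> provisioned MPLS capacity in Mbps.
SPOKES = {"10.1.0.0/16": 10, "10.2.0.0/16": 5}
TRIGGER = 0.85  # begin equalizing when a spoke nears capacity (assumed threshold)

def spoke_throughput_mbps(subnet):
    """Placeholder: return the aggregate throughput currently crossing this spoke."""
    raise NotImplementedError

def equalize(subnet):
    """Placeholder: add latency to the dominating flows on this spoke (see the later sketch)."""
    raise NotImplementedError

while True:
    for subnet, capacity_mbps in SPOKES.items():
        if spoke_throughput_mbps(subnet) >= TRIGGER * capacity_mbps:
            equalize(subnet)   # congested: slow the heaviest flows so others recover
    time.sleep(1)              # re-measure every second, as described above
```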

Each connection between a user on your network and the Internet constitutes a traffic flow. Flows vary widely from short dynamic bursts, which occur, for example, when searching a small Web site, to large persistent flows, as when performing peer-to-peer file sharing or downloading a large file.

By keeping track of every flow going through each MPLS spoke, the NetEqualizer can make a determination of which ones are getting an unequal share of bandwidth and thus crowding out flows from weaker applications.

NetEqualizer distinguishes detrimental flows from normal ones by taking the following questions into consideration:

  1. How persistent is the flow?
  2. How many active flows are there?
  3. How long has the flow been active?
  4. How much total congestion is currently on the link?
  5. How much bandwidth is the flow using relative to the link size?

Once the answers to these questions are known, NetEqualizer will adjust offending flows by adding latency, forcing them to back off and allowing potentially starved applications to establish communications – thus eliminating any disruption. Selfish applications with more aggressive bandwidth needs will be throttled back during peak contention. This is done automatically by the NetEqualizer, without requiring any additional programming by administrators. A simplified sketch of this decision logic follows.
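
Here is a minimal, hypothetical rendering of the five questions above in code. The thresholds and structure are invented for illustration only; NetEqualizer’s actual heuristics and tuning are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    bytes_per_sec: float    # current rate of this connection       (question 5 input)
    active_seconds: float   # how long the flow has been running    (questions 1 and 3)

def is_selfish(flow, link_mbps, link_utilization, active_flows,
               persist_secs=8.0, share_threshold=0.25, congestion_threshold=0.85):
    """Illustrative translation of the five questions in the list above."""
    link_bytes_per_sec = link_mbps * 1_000_000 / 8
    congested  = link_utilization >= congestion_threshold                    # question 4
    crowded    = active_flows > 1                                            # question 2
    persistent = flow.active_seconds >= persist_secs                         # questions 1 and 3
    big_share  = flow.bytes_per_sec >= share_threshold * link_bytes_per_sec  # question 5
    # Flows flagged here would get latency added so their senders back off.
    return congested and crowded and persistent and big_share

print(is_selfish(Flow(bytes_per_sec=900_000, active_seconds=30),
                 link_mbps=10, link_utilization=0.9, active_flows=40))  # True
```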

The key to making this happen over an MPLS link is the fact that if you slow down a selfish application, it will back off. This can be done via the NetEqualizer without any changes to the topology of your MPLS network, since the throttling is done independently of the network.

Questions and Answers

How do you know congestion is caused by a heavy stream?

We have years of experience optimizing networks with this technology. It is safe to say that on any congested network, roughly five percent of users are responsible for 80 percent of Internet traffic. This seems to be a law of Internet usage.2
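
If you want to check that skew on your own network, a quick pass over per-user byte counts (from whatever accounting source you already have; the figures below are made up) is enough:

```python
# Per-user bytes moved during the busy hour (example data, not real measurements).
usage = {
    "10.0.0.1": 9_000_000_000,
    "10.0.0.2": 5_500_000_000,
    "10.0.0.3":   300_000_000,
    "10.0.0.4":   200_000_000,
    "10.0.0.5":   150_000_000,
}

ranked = sorted(usage.values(), reverse=True)
top_n = max(1, len(ranked) * 5 // 100)        # the top 5% of users (at least one)
share = sum(ranked[:top_n]) / sum(ranked)
print(f"Top {top_n} user(s) account for {share:.0%} of the traffic")
```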

Can certain applications be given priority?

NetEqualizer can give priority by IP address (for example, to video streams), and in its default mode it naturally gives priority to VoIP, thus addressing a common need for commercial operators.

———————————————————————————————————————————————–

2 Randy Barrett, “Putting the Squeeze on Internet Hogs: How Operators Deal with Their Greediest Users,” Multichannel News, 7 Mar. 2007. Retrieved 1 Aug. 2007, http://www.multichannel.com/article/CA6439454.html

Update: Bandwidth Consumption and the IT Professionals that are Tasked to Preserve It


“What is the Great Bandwidth Arms Race? Simply put, it is the sole reason my colleague gets up and goes to work each day. It is perhaps the single most important aspect of his job—the one issue that is always on his mind, from the moment he pulls into the campus parking lot in the morning to the moment he pulls into his driveway at home at night. In an odd way, the Great Bandwidth Arms Race is the exact opposite of the “Prime Directive” from Star Trek: rather than a mandate of noninterference, it is one of complete and intentional interference. In short, my colleague’s job is to effectively manage bandwidth consumption at our university. He is a technological gladiator, and the Great Bandwidth Arms Race is his arena, his coliseum in which he regularly battles conspicuous bandwidth consumption.”

The excerpt above is from an article written by Paul Cesarini, a professor at Bowling Green State University, back in 2007. It would be interesting to get some comments and updates from Paul at some point, but for now, I’ll provide an update from the vendor perspective.

Since 2007, we have seen a big drop in P2P traffic that formerly dominated most networks. A report from bandwidth control vendor Sandvine tends to agree with our observations.

From coverage of the Sandvine report:

“The growth of Netflix, the decline of P2P traffic, and the end of the PC era are three notable aspects of a new report by network equipment company Sandvine. Netflix accounted for 27.6% of downstream U.S. Internet traffic in the third quarter, according to Sandvine’s ‘Global Internet Phenomena Report’ for Fall 2011. YouTube accounted for 10 percent of downstream traffic and BitTorrent, the file-sharing protocol, accounted for 9 percent.”

We also agree with Sandvine’s current findings that video is driving bandwidth consumption; however, for the network professionals entrenched in the battle of bandwidth consumption, there is another factor at play which may indicate some hope on the horizon.

There has been a precipitous drop in raw bandwidth costs over the past 10 years. Commercial bandwidth rates have dropped from around $100 or more per megabit to as little as $10 per megabit. So the question now is: will the availability of lower-cost bandwidth catch up to the demand curve? In other words, will the tools and human effort put into the fight to manage bandwidth become moot? And if so, what is the time frame?

I am going to go out halfway on a limb and claim that we are seeing bandwidth catch up with demand, and hence the battle for the IT professional is going to subside over the coming years.

The reason for my statement is that once we get to a price point where most consumers can truly send and receive interactive video (note this is not the same as ISPs using caching tricks), we will see some of the pressure spent on micro-managing bandwidth consumption with human labor ease up. Yes, there will be consumers that want HD video all the time, but with a few rules in your bandwidth control device you will be able to allow certain levels of bandwidth consumption through, including low-resolution video for Skype and YouTube, without crashing your network. Once we are at this point, the pressure for making trade-offs on specific kinds of consumption will ease off a bit. What this implies is that the work of balancing bandwidth needs will be relegated to relatively dumb devices, perhaps making this one aspect of the IT professional’s job obsolete.

Our Take on Network Instruments’ Fifth Annual State of the Network Global Study


Editor’s Note: Network Instruments released their “Fifth Annual State of the Network Global Study” on March 13th, 2012. You can read their full study here. Their results were based on responses from 163 network engineers, IT directors, and CIOs in North America, Asia, Europe, Africa, Australia, and South America. Responses were collected from October 22, 2011 to January 3, 2012.

What follows is our take (or my $0.02) on the key findings around bandwidth management and bandwidth monitoring from the study.

Finding #1: Over the next two years, more than one-third of respondents expect bandwidth consumption to increase by more than 50%.

Part of me says “well, duh!” but that is only because we hear this from many of our customers. So I guess if you were an executive, far removed from the day-to-day, this would be an important thing to have pointed out to you. Basically, this is your wake-up call (if you are not already awake) to listen to your network admins who keep asking you to allocate funds to the network. Now is the time to make your case for more bandwidth to your CEO/President/head guru. Get the budget and resources together to build out your network in anticipation of this growth – so that you are not caught off guard. Because if you don’t, someone else will do it for you.

Finding #2: 41% stated network and application delay issues took more than an hour to resolve.

You can and should certainly put monitoring on your network to be able to see and react to delays. However, another way to look at this, admittedly biased by my bandwidth-shaping background, is to get rid of the delays!

If you are still running an unshaped network, you are missing out on maximizing your existing resources. Think about how smoothly traffic flows on roads because there are smoothing algorithms (traffic lights) and rules (speed limits) that dictate how traffic moves – hence “traffic shaping.” Now, imagine driving on roads without any shaping in place. What would you do when you got to a four-way intersection? Whether you just hit the accelerator to speed through, or decide to stop and check out the other traffic, probably depends on your risk tolerance and aggression profile. And the result would be that you make it through OK (live) or get into an ugly crash (and possibly die).

Similarly, your network traffic, when unshaped, can live (getting through without delays) or die (getting stuck waiting in a queue) trying to get to its destination. Whether you look at deep packet inspection, rate limiting, equalizing, or a home-grown solution, you should definitely look into bandwidth shaping. Find a solution that makes sense to you, will solve your network delay issues, and gives you a good return-on-investment (ROI). That way, your Network Admins can spend less time trying to find out the source of the delay.

Finding #3: Video must be dealt with.

24% believe video traffic will consume more than half of all bandwidth in 12 months.
47% say implementing and measuring QoS for video is difficult.
49% have trouble allocating and monitoring bandwidth for video.

Again, no surprise if you have been anywhere near a network in the last 2 years. YouTube use has exploded and become the norm on both consumer and business networks. Add that to the use of video conferencing in the workplace to replace travel, and Netflix or Hulu to watch movies and TV, and you can see that video demand (and consumption) has risen sharply.

Unfortunately, there is no quick, easy fix to make sure that video runs smoothly on your network. However, a combination of solutions can help you to make video run better.

1) Get more bandwidth.

This is just a basic fact of life. If you are running a network of less than 10 Mbps, you are going to have trouble with video, unless you only have one user on your network. You need to look at your contention ratio and size your network appropriately; a rough sizing sketch follows.
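
As a hypothetical sizing exercise (the per-stream bitrates and the share of the trunk set aside for video are ballpark assumptions, not measurements):

```python
# Rough capacity check: how many simultaneous video streams can a trunk carry?
STREAM_MBPS = {"sd": 1.5, "hd": 4.0}    # ballpark per-stream bitrates (assumed)

def concurrent_streams(trunk_mbps, quality="sd", video_share=0.5):
    """Assume only `video_share` of the trunk can reasonably be given to video."""
    return int(trunk_mbps * video_share / STREAM_MBPS[quality])

print(concurrent_streams(10, "sd"))    # ~3 SD streams on a 10 Mbps trunk
print(concurrent_streams(100, "hd"))   # ~12 HD streams on a 100 Mbps trunk
```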

2) Cache static video content.

Caching is a good start, especially for static content such as YouTube videos. One caveat: do not expect caching to solve network congestion problems (read more about that here), as users will quickly consume any bandwidth that caching frees up. Caching will help most when a video has gone viral and everyone on your network is accessing it repeatedly.

3) Use bandwidth shaping to prioritize business-critical video streams (servers).

If you have a designated video-streaming server, you can define rules in your bandwidth shaper to prioritize this server. The risk of this strategy is that you could end up giving all your bandwidth to video; you can reduce the risk by rate capping the bandwidth portioned out to video, as the sketch below illustrates.
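
Purely for illustration, here is a minimal token-bucket sketch of the “prioritize but cap” idea. The server address and the 20 Mbps cap are hypothetical, and a real shaper (NetEqualizer, Linux tc, or another appliance) would enforce this on the device itself rather than in a script like this.

```python
import time

VIDEO_SERVER = "192.168.1.50"        # hypothetical video-streaming server address
CAP_BYTES_PER_SEC = 20_000_000 / 8   # cap video at 20 Mbps, expressed in bytes per second

tokens = CAP_BYTES_PER_SEC
last = time.monotonic()

def allow_video_packet(src_ip, size_bytes):
    """Pass video-server traffic through at priority, but never beyond the cap."""
    global tokens, last
    if src_ip != VIDEO_SERVER:
        return True                                  # not video: shaped by other rules
    now = time.monotonic()
    tokens = min(CAP_BYTES_PER_SEC, tokens + (now - last) * CAP_BYTES_PER_SEC)
    last = now
    if tokens >= size_bytes:
        tokens -= size_bytes
        return True                                  # within the cap: forward immediately
    return False                                     # over the cap: queue or drop
```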

As I said, this is just my take on the findings. What do you see? Do you have a different take? Let us know!

You May Be the Victim of Internet Congestion


Have you ever had a mysterious medical malady? The kind where maybe you have strange spots on your tongue, pain in your left temple, or hallucinations of hermit crabs at inappropriate times – symptoms seemingly unknown to mankind?

But then, all of a sudden, you miraculously find an exact on-line medical diagnosis?

Well, we can’t help you with medical issues, but we can provide a similar oasis for diagnosing the cause of your slow network – and even better, give you something proactive to do about it.

Spotting classic congested network symptoms:

You are working from your hotel room late one night, and you notice it takes a long time to get connected. You manage to fire off a couple of emails, and then log in to your banking website to pay some bills. You get the log-in prompt, hit return, and it just cranks for 30 seconds, until… “Page not found.” Well, maybe the bank site is experiencing problems?

You decide to get caught up on Christmas shopping. Initially the Macy’s site is a bit slow to come up, but nothing too out of the ordinary on a public connection. Your Internet connection seems stable, and you are able to browse through a few screens and pick out that shaving cream set you have been craving – shopping for yourself is more fun anyway. You proceed to checkout, enter your payment information, hit submit, and once again the screen locks up. The payment verification page times out. You have already entered your credit card, and with no confirmation screen, you have no idea whether your order was processed.

We call this scenario “the cyclical rolling brownout,” and it is almost always a problem with your local Internet link having too many users at peak times. When the pressure on the link from all active users builds to capacity, it tends to spiral into a complete block of all access for 20 to 30 seconds, and then service returns to normal for a short period of time – perhaps another 30 seconds to a minute. Like a bad case of malaria, the respites are only temporary, making the symptoms all the more insidious.

What causes cyclical loss of Internet service?

When a shared link in a hotel, residential neighborhood, or library reaches capacity, there is a crescendo of compound gridlock. For example, when a web page times out the first time, your browser starts sending retries. Multiply this by all the users sharing the link, and nobody can complete their request. Think of it like an intersection where every car tries to proceed at the same time: they crash in the middle and nobody gets through. Additional cars keep coming and continue to pile on. Eventually the police come with wreckers and clear everything out of the way. On the Internet, the browsers and users eventually back off and quit trying – for a few minutes at least. The gridlock is likely to build and repeat until late at night, when users finally give up. The toy simulation below shows how quickly the backlog snowballs.
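
The numbers in this toy simulation (link capacity, arrival rate, retry behavior) are invented purely to show the shape of the problem, not to model any particular network:

```python
# Toy model of compounding gridlock: once demand exceeds capacity,
# failed requests retry and pile on top of new traffic.
CAPACITY = 100        # requests the link can complete per time step (arbitrary units)
NEW_PER_STEP = 110    # fresh requests arriving each step, just past capacity

backlog = 0
for step in range(6):
    demand = NEW_PER_STEP + backlog
    completed = min(CAPACITY, demand)
    failed = demand - completed
    backlog = failed * 2   # each failed request is retried, roughly doubling the pressure
    print(f"step {step}: demand={demand} completed={completed} retrying={backlog}")
# The backlog snowballs until clients back off, producing the brief respites described above.
```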

What can be done about gridlock on an Internet link?

The easiest way to prevent congestion is to purchase more bandwidth. However, sometimes even with more bandwidth, congestion can still overtake the link. Eventually most providers also put in some form of bandwidth control – like a NetEqualizer. The ideal solution is this layered approach: purchasing the right amount of bandwidth AND having arbitration in place. Instead of a busy four-way intersection with narrow streets and no stop signs, you now have an intersection with wider streets and traffic lights. The latter is more reliable and offers a better quality of travel for everyone.

For some more ideas on controlling this issue, you can reference our previous article, Five Tips to Manage Internet Congestion.
