CWNE#307, Petri Riihikallio

I am the first Certified Wireless Network Expert in Finland. I was certified in February 2019. This article is about CWNE and how I got into this.

The highest certification level of CWNP is the CWNE. CWNP certifications are vendor neutral, which means they are about wireless networking, and specifically about Wi-Fi, on a general level. Cisco CCIE Wireless and Aruba ACMX are also high-level certifications, but they are about a single vendor’s products and solutions. The world is much more diverse.


Certified Wireless Network Professionals is a U.S.-based organization, which has been developing a certification program for Wi-Fi/802.11/WLAN technology since 1999. The fundamental idea was to be vendor-neutral. The IEEE 802.11 standard applies to all – as do the laws of physics governing radio waves and their behavior. If you know the Wi-Fi fundamentals, applying the knowledge to any vendor’s solution is fairly straightforward.

The entry level CWNP certifications are the CWS and CWT (Certified Wireless Specialist and Technician). The former is for non-technical personnel like salespersons and the latter is somewhat more technical and is intended for installers, for example. These two used to be a single certification known as CWTS. No entry level certification is required for the higher levels.

Certified Wireless Network Administrator or CWNA is more advanced. It requires a thorough understanding of radio frequency physics, radio waves, antennas and the software side, like encryption and authentication. CWNA is an excellent way of finding out where you stand in the Wi-Fi crowd. From a Finnish standpoint, outdoor antennas and microwave links are a lot less important than they appear in CWNA. On the other hand, they are only covered in CWNA.

After CWNA come three professional level certifications: Design Pro, Security Pro and Analysis Pro (CWDP, CWSP and CWAP). Each one of these expands on subjects already covered in CWNA. Design Pro is about designing wireless networks, Security Pro is about data protection: encryption and authentication, while Analysis Pro is about protocol details, network packet contents and troubleshooting.

CWNE is different. There are no tests or classes to attend. CWNA, CWDP, CWSP and CWAP are prerequisites. In addition, you need three years of full-time field experience, other supporting certifications, documented Wi-Fi projects and endorsers. CWNE doesn’t cost anything; you can’t buy it. You can apply for CWNE status and CWNP will grant it if they deem you worthy. The CWNE program started in 2001 and as of this writing there are 307 certifications. In recent years about 50 certifications have been granted annually worldwide.

My Path

I have been a full-time IT consultant since the 1980s. The first networks were AppleTalk and telecommunication was accomplished with modems. I came across TCP/IP on Unix systems as early as the 80s, but it was the Internet that made TCP/IP ubiquitous in the 90s.

I remember seeing my first Wi-Fi access points around the turn of the millennium. I did install some early 802.11b access points, but they were curiosity items and didn’t see much real use. Slowly APs became commonplace and the number of users increased.

Some Wi-Fi networks didn’t work as expected and I was often asked to troubleshoot. Unfortunately I couldn’t do much beyond checking the configuration. I tried to find someone skilled at troubleshooting Wi-Fi, but I never found anyone. Nobody wanted to admit anything, but I saw shaking heads and shrugs. I, too, started to believe Wi-Fi was impossible to understand; you could only hope it worked.

After 2010 Wi-Fi became a necessity. Smartphones and tablets increased the demand for wireless connectivity. The users wanted to use their laptops wirelessly, too. 802.11n provided the capacity, but not all networks performed the same. I still couldn’t find anyone to help, so I decided to dig into it myself. If it was designed by humans, it had to be comprehensible to humans.

I read books, tried out products from different vendors and set up labs for different scenarios. I was still uneasy whether I was doing the right thing. Then I came across CWNP. In 2015 I took the CWTS test without any preparation. I passed at 50/60, which has been my worst score so far. I knew next to nothing about outdoor Wi-Fi, microwave links and special antennas. All my experience was based on indoor office networks. On the other hand, I knew those well; I was on the right track. And now I had found a learning path to follow.

For all my CWNP certifications I have studied from books on my own. There have been only occasional CWNP classes in Finland. On the other hand, I don’t believe a single week can prepare you for a test. I have studied for months, though not full time of course. On the side I have acquired vendor-specific certifications from Cisco, Ubiquiti, MikroTik, Aruba and LigoWave. All the vendors must adhere to the same laws of physics and the same 802.11 standard, so the solutions have more in common than they have differences. Naturally the products all look different and the level of configuration control differs.

A common question is “Which order is best for the Pro exams?” There is no set answer. The only requirement is that you must have passed all of them and they must be valid at the time of application. I did Security Pro first, because I have a strong security background: key exchanges, encryption methods, handshakes and PKI are familiar to me. I took Analysis Pro next and Design Pro last. With hindsight I would suggest Design Pro first, because I found it the easiest and most applicable to general Wi-Fi work. Take Security Pro second and save Analysis Pro for last, because it is the toughest and you don’t want it to expire in case you take more than three years to pass them all. The exams also overlap somewhat, so by taking DP and SP first you will have already covered some of the AP material.

From the books I have read I can recommend the CWNP series by Sybex. CWNP used to have a deal with Sybex to publish books aligned with the certifications. The exams are updated every three years and Sybex would produce a new edition to cover the latest exam. This deal doesn’t exist any longer and CWNP is publishing their own “official” study guides. The quality of these books hasn’t been impressive. Fortunately Sybex has continued updating their series. Just make certain the book covers the latest version of the exam before buying. Even if you don’t plan to certify, the recently renewed CWNA is an excellent handbook that covers Wi-Fi thoroughly.



In Wi-Fi world you are bound to come across decibels. Yet they confuse many of us. Sound is measured in decibels, but Wi-Fi? And what about those negative numbers? How can a signal be negative?

Decibels were invented at the Bell Telephone Company to quantify signal loss in telephone wires. The new unit soon turned out to be too large, so in practice a tenth of the unit was used – thus deciBells. The second L was dropped, so we are using decibels. The original capitalization still shows in the abbreviation dB.


The decibel is a logarithmic unit of level that doesn’t carry any actual unit. It is the ratio of two values, and since a ratio is a division, the units cancel out. In Wi-Fi the signal levels and transmit powers are compared to milliwatts, so the unit is called dBm. Antennas are most often compared to an isotropic radiator and then the unit is dBi.

If you compare two decibel values to each other, the result is always plain dB. For example, the signal-to-noise ratio (SNR) is the signal level (in dBm) minus the noise level (in dBm), resulting in plain dB: −66 dBm − (−96 dBm) = 30 dB. It is still a ratio, since a subtraction in the logarithmic scale equals a division in the linear world.
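The subtraction-equals-division rule is easy to verify numerically. A quick sketch in Python, using the example values above:

```python
import math

signal_dbm = -66.0
noise_dbm = -96.0

# In the logarithmic domain, SNR is a plain subtraction.
snr_db = signal_dbm - noise_dbm

# In the linear domain, the same comparison is a division of milliwatt values.
signal_mw = 10 ** (signal_dbm / 10)
noise_mw = 10 ** (noise_dbm / 10)
ratio_db = 10 * math.log10(signal_mw / noise_mw)

print(snr_db)              # 30.0
print(round(ratio_db, 6))  # 30.0 (same result, within floating-point error)
```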

Zero cannot be represented in a logarithmic scale; 0 dB represents equality. For example, one milliwatt of power is 0 dBm. Correspondingly, negative numbers are smaller than the reference: −20 dBm is 0.01 mW or a hundredth of a milliwatt. −70 dBm is a common signal level, which would be 0.0000001 in milliwatts. The logarithmic decibel makes it easier to present and compare very small as well as very large numbers.


Multiplication turns into addition in the logarithmic world and, correspondingly, division turns into subtraction. This simplifies many calculations. For example, let’s calculate the net transmit level (Effective Isotropic Radiated Power, a.k.a. EIRP) for a system where the amplifier outputs 20 dBm, the antenna gain is 15 dBi, the 4 meter cable loses 3 dB per meter and the connector loss is 2 dB each:

The result is 20 − 2 − (4 × 3) − 2 + 15 = 19 dBm.
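The same link budget as a few lines of Python, using the figures from the example above:

```python
# EIRP = transmitter output - losses + antenna gain, all in decibels.
tx_power_dbm = 20        # amplifier output
connector_loss_db = 2    # per connector; there are two connectors
cable_loss_db = 3 * 4    # 3 dB per meter over a 4 meter cable
antenna_gain_dbi = 15

eirp_dbm = (tx_power_dbm - connector_loss_db - cable_loss_db
            - connector_loss_db + antenna_gain_dbi)
print(eirp_dbm)  # 19
```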

You can convert milliwatts to decibels and back as mental calculations. Let’s start with milliwatts to decibels. Tens of milliwatts is 10 in dBm, hundreds is 20 and thousands is 30. Correspondingly, tenths is −10, hundredths is −20 and thousandths is −30. Doubling the milliwatts equals a three-decibel increase. For example, 200 mW is in the hundreds, or “20”. One hundred has to be doubled once, so add 3. The answer is 23 dBm. Similarly, 400 mW is 20+3+3 or 26 dBm, and 80 mW is 10+3+3+3 or 19 dBm.

You can convert decibels to milliwatts in the same fashion. Divide the decibel value by ten (to get full bels). Then move the decimal point by the result: positive values move the decimal point to the right and negative to the left. Then look at the remainder and multiply (or divide, if negative) by two for every three in the remainder. For example, 16 dBm is 10 mW × 2 × 2 or 40 mW, and −66 dBm is 0.000001 mW ÷ 2 ÷ 2 or 0.00000025 mW.
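The shortcut can even be written down as a small function (a sketch; the result is exact only when the remainder is a multiple of three):

```python
def dbm_to_mw_mental(dbm):
    """The decimal-point-and-doubling shortcut described above."""
    bels = int(dbm / 10)          # full bels, truncated toward zero
    remainder = dbm - bels * 10
    mw = 10.0 ** bels             # move the decimal point
    for _ in range(abs(remainder) // 3):
        mw = mw * 2 if remainder > 0 else mw / 2
    return mw

print(dbm_to_mw_mental(16))    # 40.0
print(dbm_to_mw_mental(-66))   # 2.5e-07
```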

With a scientific calculator you get more exact results, of course. How many dBm is 200 mW? Type: 200 [log10] × 10 =

How many milliwatts is 19 dBm?
Type: 19 ÷ 10 = [10x]
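The calculator keystrokes correspond to these exact formulas, here written as Python:

```python
import math

def mw_to_dbm(mw):
    """Exact conversion: dBm = 10 * log10(mW)."""
    return 10 * math.log10(mw)

def dbm_to_mw(dbm):
    """Exact conversion: mW = 10 ** (dBm / 10)."""
    return 10 ** (dbm / 10)

print(round(mw_to_dbm(200), 2))  # 23.01
print(round(dbm_to_mw(19), 1))   # 79.4
```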


Wi-Fi Roaming

Roaming, or switching from one access point to the next, is a common source of confusion. Technically, Wi-Fi roaming is the opposite of cellular roaming: Wi-Fi access points are passive, and the client devices choose which access point to use and when to switch, if they switch at all. What are the consequences?

A small network at home or in a small office may consist of only a single access point. It covers some area and outside of coverage there is no network – very simple. To cover a larger office or a multi-storey home requires multiple access points. The users may move from the vicinity of one access point to the next. How does Wi-Fi roaming actually work?


In the 802.11 standard the client devices choose the access point they want to associate with. APs don’t have any control over roaming. The good news is that no coordination is required for roaming. As long as the APs broadcast a network with the same name and the same password, the clients can transparently switch from one AP to another. Roaming works even across APs from different brands. You can have an Asus in the living room and a D-Link in the kitchen and your Skype call will transparently continue when you walk from room to room. No configuration is required.

The roaming above does require that the access points are just bare access points. Most consumer grade APs are also routers, firewalls and DHCP servers. The Skype call will disconnect if the user moves from behind one firewall to behind another and possibly gets a new IP address. The interruption will not show up in browsing or email applications, since they don’t require a continuous connection.

Enterprise access points are typically plain access points and the firewall is at the uplink to Internet. In this scenario the roaming is seamless. You can achieve this with consumer grade equipment as well, but it will require some configuration changes.

Roaming problems

Sometimes the roaming is not seamless. The most common problem occurs when the client device won’t roam even when there is a better access point nearby. Most client devices won’t actively look for better APs, but will only start looking for one when the connection degrades significantly. For example, Apple iOS devices will start looking for a new AP when the signal level drops below -70dBm. Most other vendors don’t publish exact figures, but they are in the same range. For some reason the trigger is typically only signal strength or RSSI. Signal-to-noise ratio (SNR) or even transmission errors and retries don’t matter if the received signal is strong enough.

Why would there be transmission errors if the signal is strong? Because there can be a big mismatch between AP and client transmit powers. The AP may transmit at 200mW while a cell phone may max out at 15mW. The phone will receive a strong signal from the AP while the AP can barely receive the phone. As long as there is some kind of connection the phone won’t even look at other APs. The current one is still showing “full bars”. Only when the connection is completely lost will the phone start scanning, which may take seconds. In that time all connections will break. This can occur even if the user is standing right below an AP.
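The mismatch is easy to put in numbers using the example figures above: the link is asymmetric by the ratio of the two transmit powers, expressed in decibels.

```python
import math

ap_tx_mw = 200    # access point transmit power (example figure)
phone_tx_mw = 15  # phone transmit power (example figure)

# The link is asymmetric by the ratio of the two powers, in dB:
asymmetry_db = 10 * math.log10(ap_tx_mw / phone_tx_mw)
print(round(asymmetry_db, 1))  # 11.2 dB in favor of the downlink
```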

Power Mismatch
The signal from the AP is strong, but the AP is at the edge of the phone’s coverage area.

Transmit power level mismatch is also the most common cause of uneven distribution of clients over the access points. In the worst case all client devices are associated with the AP in the lobby, if it covers the entire office. The devices associated with the lobby AP when they arrived and haven’t roamed since, because the signal is still strong enough. All other APs are underutilized while one or a few are overutilized.

The solution to these roaming problems is to turn down the transmit power of the APs to better match the user devices. A big difference in power levels won’t do any good; it will only cause trouble. Transmit power is like good cognac or whiskey: enough is good, too much is bad.

802.11k, 802.11r and 802.11v

The basic principles of 802.11 roaming aren’t going to change. However, there are a few add-ons to help client devices with roaming.

802.11k adds a list of channels to the beacons the access point transmits. This list tells clients which channels are in use in this network, so the client devices won’t need to scan through the whole spectrum when looking for a new AP. The devices will typically listen on each channel for 200ms while scanning. If there are 24 channels on 5GHz, it will take almost five seconds to scan through all of them.
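The five-second figure comes straight from multiplying the channel count by the dwell time (both are typical values from the text, not exact for any specific device):

```python
channels = 24     # 5 GHz channels to check
dwell_ms = 200    # typical listen time per channel

full_scan_ms = channels * dwell_ms
print(full_scan_ms)  # 4800 ms - almost five seconds without an 802.11k channel list
```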

802.11v adds information on access point utilization to the beacons. This will hint the clients to choose an AP with less load, even if the signal is slightly weaker. Data throughput may still be better than in a crowded cell.

802.11r standardizes client authentication improvements and is often known as Fast Roaming. With the common WPA2 Personal (Pre-Shared Key) the authentication is fast enough already, so there is no benefit in using 802.11r. However, in WPA2 Enterprise the AP has to contact a RADIUS server to authenticate the client, which is slow. In 802.11r the client device pre-authenticates with nearby access points in case it needs to roam. When it does, the authentication is already completed, so the switch is quick.

802.11k, r and v (or a subset) are either on, optional, off or unsupported depending on the system. If you have 802.11k and/or 802.11v, you should turn them on in a multi-AP environment. They may confuse some old Android devices, so you need to test the effects in your network. In WPA2 Enterprise networks you should test 802.11r, but don’t turn it on in WPA2 Personal networks. 802.11r causes even more compatibility problems, so you should again test the net effect.


Automatic Wi-Fi channel management

Most Wi-Fi systems have some kind of automatic setting for selecting the channel. Systems with a central controller have advanced RRM or Radio Resource Management solutions. The promise is to optimize channel selection, transmit power levels and other settings. Can you trust this automation?

Wi-Fi channel planning is often a difficult problem. There are only so many channels and neighboring networks use the same channels. You should set the power level to match user devices. There are many kinds of obstructions which will affect the coverage. Luckily vendors offer a seemingly simple solution to this complex problem: let the automation take care of it. Practically all systems have an auto setting which is supposed to find the optimal solution. What does it really do?

Simple systems

In the simplest case the auto setting just means that the access point will choose the least occupied channel at start-up, after a short scanning period. Transmit power is typically “full on”.

The problem with the simplest approach is timing. If you are like me, most installations happen in the evening or over the weekend. You turn on the device when all the neighboring offices are closed and their Wi-Fi networks idle. It will be a totally different environment when all the offices are fully staffed during work days. However, simple systems won’t change the channel once it has been selected. The change would break all active network connections and the access point can’t know when would be a good time for it. You could be stuck with a very poor channel choice until the next restart.
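The whole "simple" algorithm fits in a couple of lines. A sketch with hypothetical scan results (channel numbers and network counts are made up for illustration):

```python
# Hypothetical scan results: channel -> number of competing networks heard on it.
scan = {1: 4, 6: 7, 11: 2}

# A simple system picks the least occupied channel once, at start-up,
# and never revisits the choice.
channel = min(scan, key=scan.get)
print(channel)  # 11
```

Whatever the air looked like during that one scan is what you live with until the next restart.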

Some systems can schedule channel reselection. However, if you set it to occur at night, the environment won’t reflect the actual use case during the day. If you schedule reselection during daytime you will risk user connections.

Multi access point systems are vulnerable to a special case: power outages. After a blackout all access points start at exactly the same moment and scan the channels at the same time. As a result, they will often choose the same channel. The net result is the worst case, since the access points will need to take turns transmitting.

Selecting the proper transmit power level is also a difficult task. Full power is the least risky choice for the vendors, since client devices will show full bars as the signal strength. The devices may struggle with data transfer, however, since Wi-Fi is always bidirectional. The client devices need to transmit to the access points and especially phones have much lower transmit power than access points. You will end up with poor transfer rates even though the signal appears to be great – in one direction.

Smart systems

Larger Wi-Fi systems usually have a centralized management controller. The controller collects data from the access points and can see the big picture. You can get much better results with centralized systems. Cisco WLC is a good example of a smart system.

Actually, centralization is not the key. Many low-end centralized systems leave the channel choice to the access points. They are no better than the simple systems described above and have the same problems. On the other hand, Aruba Instant is a distributed system (without a central controller), but the access points negotiate among themselves and get the big picture. In fact Aruba’s automatic channel management is one of the best, even though it is distributed. Aruba has a centralized controller, too, if you have dozens of access points.

A smart system knows the transmit power of each access point. It also collects information on how well the access points receive each other. With the transmit power levels and the remote signal levels the system can create a network map, which shows how the APs are located relative to each other. With this information the system can choose channels for the APs so that they don’t interfere with each other. The system can also deduce that at the midpoint between two APs the signal is twice the strength of what the other AP is receiving. The system can therefore set the transmit power levels so that the required signal level is reached between the APs.

This all requires dense enough access point placement. In theory the APs wouldn’t need to receive each other at all; it suffices if the coverage reaches the midpoint between the APs. However, in that case the system can’t build the network map to base its choices on. With a denser AP placement the system can also compensate for an AP that is malfunctioning or being upgraded, for example. The system will just turn up the transmit powers so the neighboring APs cover the hole.

A smart system can also adapt to changes in the environment. When an access point finds a new transmitter on its channel it will report it to the controller. The system can then change the channel plan to accommodate the intruder.

System stabilization will take some time since the process is iterative. A new Wi-Fi network may need a couple of days to reach a stable state. Occasionally automatic adjustments can lead to an unstable state, where the channels are constantly being changed. For example, an office building with a central elevator shaft may cause the channel changes to circulate around the floor incessantly.

Fortunately, transient transmitters, like personal hotspots or passing trains and buses, are typically on 2.4GHz. The 5GHz band tends to be more stable.


Manual channel planning yields better results than simple automation.

Smart systems are useful, at least when the environment is stable enough. You just need to verify that all useful channels are being utilized. Many systems will only use non-DFS channels on 5GHz, even though they default to a 40MHz channel width. That will leave you with only two channels and inevitable interference problems.

Even the smart systems tend to set the transmit power levels quite high. The aim is to please customers with “good coverage”. In many systems you can set the maximum and minimum power levels. However, in practice the system will use the set maximum power level for all access points. The downside of limiting the automation is that the system can’t exceed the set limits even in an equipment failure.

The conclusion is that a smart system will help with Wi-Fi management, if you have familiarized yourself with the system and its settings. No automation will completely remove the need for skill and knowledge of the environment. Smart systems are especially good at coping with unexpected changes in the environment.


WPA3 is the latest Wi-Fi Protected Access

Wireless communication is easy to intercept if you are within range. Good security measures are a must. Wi-Fi security has evolved from WEP to WPA, to WPA2 and now to forthcoming WPA3. What will change?

WPA2 or Wi-Fi Protected Access (or 802.11i) has been a long-lived solution. WPA was published in 2003 and WPA2 in 2004. Fourteen years is a long time for any security solution in IT, where hardware capacities grow exponentially. Recently we have seen some reports on WPA2 vulnerabilities. They are not yet very practical, but they are warning signs of the age of WPA2. To keep Wi-Fi secure, the Wi-Fi Alliance published WPA3 in June 2018. What will it bring?

More secure connecting process

The reported WPA2 vulnerabilities have been based on the password exchange in the association phase of the connection. The password is obviously not exchanged as clear text but as a hash. You cannot recover the password from the hash, but… it is possible to precalculate a large dictionary of potential passwords. These dictionaries are called rainbow tables and they have been produced for years now. The probability that the password you are using is in a dictionary is increasing all the time.

In WPA3 the password hash is not exchanged per se; SAE (Simultaneous Authentication of Equals) is used instead. 802.11s introduced SAE, which is based on the widely accepted Diffie–Hellman key exchange. In SAE both parties must be active. A third party that is just listening in and recording can’t make use of the information. This property protects against offline brute-force attacks. Another property of SAE is forward secrecy: even if the key is exposed, old recordings cannot be decrypted. Only transfers made after the exposure can be decoded. The rumor goes that large intelligence agencies have been storing encrypted transmissions in the hope that the keys can be recovered in the future.
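SAE itself is beyond a short example, but the Diffie–Hellman principle it builds on fits in a few lines. A toy exchange with deliberately tiny, insecure parameters (illustration only, not real cryptography):

```python
# Public parameters: a prime modulus and a generator, known to everyone.
p, g = 23, 5

a_secret = 6    # chosen privately by party A, never transmitted
b_secret = 15   # chosen privately by party B, never transmitted

a_public = pow(g, a_secret, p)   # A sends this over the air
b_public = pow(g, b_secret, p)   # B sends this over the air

# Each side combines its own secret with the other's public value
# and arrives at the same shared key.
a_key = pow(b_public, a_secret, p)
b_key = pow(a_public, b_secret, p)
print(a_key == b_key)  # True: a shared key an eavesdropper never saw
```

A passive listener sees only p, g and the two public values; recovering the key from those is the hard problem the scheme relies on (with realistically sized numbers, unlike here).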

For the ordinary user these technical details are not significant. You will authenticate as before. Technically the authentication is different and for WPA3 to be used both the access point and the user device must support WPA3.

Enhanced Open

Many guest networks today are open, that is, unencrypted. The web login used in many airport and hotel hotspots does not provide encryption. On those networks the connection is clear text and very easy to eavesdrop. If you log on to a service that doesn’t use SSL/TLS encryption, all credentials are sent in the clear for anyone to receive.

WPA3 Enhanced Open will provide encrypted connections even when there is no password. All Wi-Fi traffic will always be encrypted. Enhanced Open will not authenticate either party, however. The user can inadvertently connect to a hostile network that is using a familiar, trusted name. This threat has been in Wi-Fi since the beginning. Enhanced Open will not help there, but it will prevent passive eavesdropping.

Easy Connect

Connecting a computer or smartphone to a Wi-Fi network is a familiar and easy procedure for most of us. However, connecting printers, media servers, weather stations, wireless speakers and other devices that don’t have a display or keyboard is another matter. In the future, with IoT, there will be all kinds of sensors, home appliances, building automation and lighting fixtures to be connected as well.

WPS or Wi-Fi Protected Setup was introduced in 2006 to solve this problem. In WPS you pressed a button on the access point or entered a short PIN code on the device to connect it to the network. WPS turned out to be too easy: many security weaknesses have since been found. WPS should not be used at all anymore.
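The best-known weakness illustrates why "too easy" became a problem. The 8-digit WPS PIN's last digit is a checksum, and the protocol confirms each half of the PIN separately, which collapses the search space for a brute-force attacker:

```python
# A naive view: 7 freely chosen digits (the 8th is a checksum).
naive_space = 10 ** 7

# In reality the protocol confirms the first 4 digits and the
# remaining 3 digits independently, so the halves can be attacked
# one after the other.
actual_space = 10 ** 4 + 10 ** 3

print(naive_space, actual_space)  # 10000000 vs. 11000 attempts at most
```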

WPA3 Easy Connect is a secure solution to the same problem. In Easy Connect you use a configurator device, like a smartphone that is already connected, to connect a new device to the network. One way is to scan a QR code on the device and authorize it to connect. Easy Connect is based on trusted public key encryption methods.

WPA3 Personal and Enterprise

Like WPA2, WPA3 has two modes:

  • WPA3 Personal, where all users share a common password.
  • WPA3 Enterprise, where all users have their unique credentials on a RADIUS server.

New in WPA3 Enterprise is the increased length of the encryption key: from 128 bits to 192 bits. In WPA3 Personal the key length will remain at 128 bits. At this time the difference is quite theoretical since 128 bit keys are still considered secure.

Should I upgrade?

As of now, there are no WPA3 devices available yet. The situation will certainly be different by the beginning of 2019. WPA3 capability is of no use unless both the access point and the user device support it. If either one only supports WPA2, then WPA2 will be used. Co-existence will continue for several years at least, especially in guest and BYOD networks.

The way Wi-Fi Alliance has defined WPA3 requires devices to support all of WPA3 to be compliant. The requirements are thus the new authentication process, Enhanced Open, Easy Connect and 192 bit WPA3 Enterprise.

New devices will soon be WPA3 compliant; there is no reason to produce WPA2-only devices. However, options for upgrading old devices may be limited. Computers and recent smartphones have enough computing power for the new encryption requirements, unless encryption has been offloaded to a special circuit. If the circuit has been designed for 128 bit encryption, it cannot be used for 192 bits. Upgrade options for access points will probably be poor. APs have very modest computing power, so I doubt the upgrade could be done with a simple firmware update. The vendors will be happy to sell you new hardware, though 😬

Tuning your Wi-Fi by adjusting transfer rates

In most Wi-Fi systems you can disable the slowest transfer rates. This is typically done to improve efficiency, since transfers at slower rates eat up the limited air time. This can backfire, however, with unexpected results.

The 802.11 standard defines basic rates and supported rates. The access points broadcast this information in every beacon. To associate with the network, every device has to support all basic rates (i.e. they are requirements). Supported rates are optional; supported rates that are common to both the device and the AP may be used. Typically there are one or a couple of basic rates and they are at the slow end of the scale. In most systems the administrator can configure the rates by disabling unwanted ones.

Devices and APs will always use the highest rate the connection can carry. The rate is adjusted for each frame if the connection quality changes, for example when the device moves. The lowest basic rate is used for all management frames (beacons, probe requests and responses, etc.) and for broadcasts and multicasts.

Available basic and supported rates:

802.11b (2.4 GHz only): 1, 2, 5.5 and 11 Mbps
802.11 OFDM: 6, 9, 12 and 18 Mbps
802.11 OFDM Extended: 24, 36, 48 and 54 Mbps

Disabling lowest rates

It appears obvious that by disabling the lowest supported rates you can increase the throughput of the network. Slow transfers eat up a lot of the air time. Historically the difference wasn’t that large, but today it can be 600-fold (1 Mbps vs. 600 Mbps). It also appears that roaming would improve, since by disabling slow (i.e. bad) connections the devices will be forced to roam earlier.
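The 600-fold figure is easy to sanity-check: the airtime of a full-size 1500-byte frame scales inversely with the rate (this sketch ignores headers and preambles, which make the real-world difference somewhat smaller):

```python
frame_bits = 1500 * 8   # a full-size data frame, payload only

def airtime_ms(rate_mbps):
    """Milliseconds of air time the frame occupies at a given rate."""
    return frame_bits / (rate_mbps * 1000)

print(airtime_ms(1))    # 12.0 ms at 1 Mbps
print(airtime_ms(600))  # 0.02 ms at 600 Mbps
```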

Unfortunately it doesn’t quite work that way. The air time won’t be freed up as intended unless there is always a better AP nearby. If there isn’t, the devices will stay associated at a transfer rate too high for the connection to carry. This will result in transmission errors and retries, which will eat up the freed air time. The user at the end of the bad connection will experience high latency and especially jitter (i.e. variation in latency).

If you think you have a dense network with APs close by, you need to think of the edges, too. At the fringes the coverage will inevitably grow thinner. In the best case the fringe will be outside the building where there are no users (on the upper floors at least). On the ground level it is difficult to prevent devices from associating as soon as the network is detected.

Disabling the lower rates won’t help with roaming either. Most devices only track RSSI. As long as the signal is above the threshold, the device won’t roam. The threshold doesn’t include rate information at all. The device will try to stay associated with the AP that appears to have a strong enough signal – even if it can’t reliably exchange data. Only when the connection is completely lost will the device start looking for a new AP. This will appear as a connection problem to the user. In the worst case the “new” AP is the same one, since no other AP was stronger, and the retries continue.

Basic rates

In most networks there is only a single required basic rate, like 1 or 6 Mbps. The idea is that the fewer requirements there are, the more clients can connect. The devices will use the higher supported rates anyway, so what’s the harm? However, all unicast control frames (like acknowledgements) are sent using basic rates. Remember that in Wi-Fi all unicast frames are acknowledged.

If the basic rates are 6, 12 and 24 Mbps, for example, then all control frames are sent using the highest basic rate that is lower than the current transfer rate. In a 150 Mbps connection the acks will be sent at 24 Mbps; in an 18 Mbps connection, 12 Mbps would be used. While the acks are short, there are a lot of them. It would be silly to send them at 1 or 6 Mbps. You shouldn’t add all possible rates as required basic rates, since a device can’t connect if it doesn’t support all of them. 6, 12 and 24 Mbps are very commonly supported rates.
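The selection rule can be sketched in a few lines of Python (a simplified model of the behavior described above, not an implementation of the standard):

```python
basic_rates = [6, 12, 24]   # Mbps, the configured required rates

def ack_rate(data_rate_mbps, basic_rates):
    """Highest basic rate not above the current data rate;
    falls back to the lowest basic rate for very slow links."""
    candidates = [r for r in basic_rates if r <= data_rate_mbps]
    return max(candidates) if candidates else min(basic_rates)

print(ack_rate(150, basic_rates))  # 24
print(ack_rate(18, basic_rates))   # 12
```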

Disabling highest rates

Occasionally I have seen guest networks where the highest rates have been disabled. The thought must have been to limit the bandwidth the guest network can use. In practice it works just the other way around. The bandwidth isn’t limited; the data is just transferred more slowly. The net effect is that the guest network will consume more air time and has a bigger impact on other users.


Old 802.11b-only devices are getting rare, yet most APs still support 802.11b just in case. If you can’t disable 802.11b altogether then you should at least disable speeds 1, 2, 5.5 and 11 Mbps in both basic and supported rates.

In the 5GHz band the lowest rate is 6 Mbps. If you have a dense enough deployment you should consider setting the lowest rate to 12 Mbps. Most devices will support the change, but some appear to have problems if the lowest rate is 18 or 24 Mbps. Disabling 6 Mbps will hinder devices in areas with weak signal: if you have holes in the coverage or there are users outside the building, like on the rooftop or in the parking lot, disabling 6 Mbps may have a negative effect.

You should set 6, 12 and 24 Mbps as basic rates (or 12 and 24 Mbps if you have disabled 6 Mbps).

If you want to decrease the cell size to improve roaming, you should decrease the transmit power of the APs. This weakens the signal, so the devices will roam earlier. It also requires dense AP placement so there is a better AP available. In most Wi-Fi systems you can configure the minimum signal level required to associate with the network (Minimum RSSI or similar). However, setting min RSSI too high can backfire as well and cause the same problems as disabling the lower rates.

Do not disable the higher rates. The faster the data is transferred, the more air time there is to share. To limit guest network traffic there is typically a setting like bandwidth shaping or bandwidth throttling.


Increase your cell phone battery life with a small change in the Wi-Fi network

This may sound silly, but yes, you can really improve your cell phone battery life with a small change in the access point. The change has no drawbacks and is easy to make. It has even more impact in your home Wi-Fi where your cell phone spends most of its time in sleep mode.

Buried deep in the advanced settings of the Wi-Fi access point there is a setting with a friendly title like DTIM Interval or DTIM Period. The default setting is typically one. Change it to three, five or slightly larger, but don’t go over ten. Often you can set it separately for 2.4GHz and 5GHz, but use the same value for both. This is the short answer. Read on to find out what this is all about.


To extend the battery life all cell phones and tablets spend most of their time in different sleep modes. Switching off radio transmitters is one of the most efficient ways to save power since the transmitters are very power hungry. Even if the display is on and there is a game running the Wi-Fi radio may be off. The Wi-Fi radio will be powered on only when the user browses the net or some background app checks for messages, then it will go off again. As a matter of fact, the radio is actually switched off multiple times during browsing. It is switched on only momentarily as needed. It really makes a difference in the battery life.

Most network connections are opened from the client device; for example, the device will periodically check for emails. In some apps the initiative is on the server side: the server sends a message that the device should perform some action, for example when a VoIP call is coming in. If the Wi-Fi radio is off there is no way to receive such messages.

Broadcast and multicast messages are similar. The server will send a single message that is addressed to all or a group of devices. However, the server doesn’t know which devices are sleeping and which aren’t. Somehow the sleeping devices should also receive the message.

Wi-Fi access points send a beacon ten times a second. In the beacon there is a map of client devices which have buffered packets addressed to them. All devices wake up every tenth of a second to check if they should start receiving data. These wake-ups are very short and the display is not powered up, but still they consume power.

The DTIM Interval setting controls which beacons contain this information about upcoming data packets. If you set it to three, then only every third beacon will contain the info. This means the dozing devices can sleep through two beacons and only wake up on the third – they wake up only about three times a second. With larger DTIM values they sleep longer: with a value of 5 they wake up just twice a second, and with 10 just once a second.
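The wake-up arithmetic above is easy to check, assuming the common default beacon interval of 102.4 ms (roughly ten beacons a second, as mentioned earlier):

```python
# Rough numbers: wake-ups per second for a dozing client.
# Assumption: the common 102.4 ms beacon interval.
BEACON_INTERVAL_S = 0.1024

def wakeups_per_second(dtim):
    """The client wakes up for every dtim-th beacon."""
    return 1 / (BEACON_INTERVAL_S * dtim)

for dtim in (1, 3, 5, 10):
    print(dtim, round(wakeups_per_second(dtim), 1))
# 1  -> ~9.8 wake-ups per second
# 3  -> ~3.3
# 5  -> ~2.0
# 10 -> ~1.0
```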

So what does DTIM stand for? It is an acronym for Delivery Traffic Indication Message (or Map). The Message just Indicates that there is some Traffic to be Delivered to Mapped devices. The setting this article is about is DTIM Interval or Period, which is the multiplier, but it is commonly referred to as DTIM value.

Apple’s solution

Apple iPhones won’t wake up more often than every third beacon, even if the DTIM value for the network is one. Apple’s customers are very concerned about their battery life, and a delay of a few tenths of a second is a small price to pay. Apple has made the decision for its customers.

What will happen to data packets destined to an iPhone if the DTIM value is one but the iPhone acts like it were three? Nothing much. Unicast packets are buffered at the access point until the iPhone wakes up. The buffering will take up some memory space in the AP but no data is lost. Some broadcast and multicast packets will be lost, but most of the time they are not important for the dozing device. By setting the DTIM value to at least three you can avoid the loss of data and give Android devices the same advantage of better battery life.


During daytime phones are typically used actively, which will consume much more energy than waking up from sleep. At home the devices spend much more time sleeping. Or they would, if they didn’t wake up ten times a second. That’s why you should increase the DTIM value in your home Wi-Fi. The larger the value the less the battery will drain during the night.

Are there any drawbacks to increasing the DTIM value? An incoming VoIP call alarm may be delayed by a fraction of a second – that shouldn’t matter. Broadcasts and multicasts are buffered at the AP until they are delivered. The AP memory is limited and in theory the buffer could grow by a hundred megabytes every second. That’s why ten is a good rule of thumb for the upper limit of DTIM. Many APs won’t even accept larger values. Of course you can test and see if you notice any drawbacks.

There are consumer grade Wi-Fi APs that won’t let you adjust the DTIM Interval, but most APs appear to support it. It is hidden amongst the scary “advanced settings” but it is really safe to modify. You can always go back if you need. If you can’t find the setting try searching the web for your AP model name and the word DTIM.

Another way to decrease battery consumption during the night is to place the phone as close to the AP as possible. The phone will use a lower transmit power level, which saves energy. Even a small change helps: just leaving the phone on the other side of the bed is an improvement, if that side is closer to the AP.

Wi-Fi Repeaters, WDS, Mesh and Other Wireless Backbones

Quite often the most expensive part of a Wi-Fi deployment is the cabling. Cabling? Wasn’t this supposed to be wireless? Can’t we use these APs wirelessly? At least there are lots of products claiming to do so.

In a normal Wi-Fi deployment all the access points are connected to the wired network. Why can’t we use the wireless network itself? That’s a lucrative idea. No wonder there are so many products claiming to do it. Performance is a totally different matter. The vendor may say it works since the web page did open eventually. Another vendor might say it works since you can connect to the Wi-Fi, nobody promised opening web pages. The customer may have expected to stream music or even videos over the network.

To overcome performance problems some salesmen have recommended adding more APs. It is always a good idea to sell some more, isn’t it? In reality adding more APs may slow down the network even more. Did you say slow down? How come? What’s going on here?


Let’s start with the simplest devices, called Repeaters or Wi-Fi Range Extenders. A repeater is a device that just repeats everything it receives. It doesn’t care which way the data is going; the packet is repeated like an echo. A repeater is placed at the edge of the coverage area so it can receive packets from the AP and repeat them to devices outside. Likewise, the packets the outside devices send are relayed to the AP. A simple system: it doesn’t require any additional settings and works as long as there isn’t much traffic.

The downsides of repeaters can be visualized by imagining a conference room with a very long table. The table is so long that attendees at one end can’t hear what is spoken at the other end. This is solved by placing a parrot in the middle. The parrot repeats everything it hears. This slows down the communication, since after each sentence there has to be a pause to give the parrot time to repeat the last sentence. Each turn takes twice as long, so the throughput is halved. The parrot will also repeat sentences that were meant to be local only (will you pass the coffee, please).

What if the table is so long that you need more than one parrot? The throughput will halve for each parrot. Since the parrots cannot listen and repeat at the same time, each has to wait for the next parrot to finish, and so forth. Only after all of them have finished can the next sentence be passed.

Wireless Distribution System (WDS)

Normal Wi-Fi traffic between an AP and a user device requires only three addresses, and that’s how the original 802.11 was written. If you are relaying packets you need all four addresses: transmitter, receiver, original source and final destination. WDS adds the fourth address to 802.11 packets. Unfortunately WDS was never standardized to the last detail; each vendor made their own solutions, which are mostly incompatible. In practice you can only expect WDS to work between devices of the same brand, since they need to understand each other. In this respect WDS differs from the repeater model, which doesn’t require any changes to the network.

Four addresses provide for relaying a packet through multiple APs, also known as multi-hop. Each AP knows which clients it serves, and it can even learn which clients are served by the neighboring APs. In practice the clients move and roam, so it is safest to repeat all packets like a repeater does.

Since the APs need to send packets to each other, they need to be on the same channel. This makes the whole network a single collision domain: only one device can send at a time. This drawback, together with relaying the packets (the parrot!), slows down the network considerably. The speed halves for each hop. If a packet needs three hops to arrive at its destination, the speed will be just ⅛ of the nominal speed.
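The halving rule can be put into numbers with a small sketch (a simplification that ignores protocol overheads and contention):

```python
# Sketch of the halving rule: on a single-channel WDS/mesh,
# each relayed hop halves the usable throughput.
def effective_throughput(nominal_mbps, hops):
    """hops = number of times the packet is relayed."""
    return nominal_mbps / (2 ** hops)

print(effective_throughput(300, 1))  # 150.0 Mbps
print(effective_throughput(300, 3))  # 37.5 Mbps, i.e. 1/8 of nominal
```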

Another problem with WDS is encryption. In ordinary Wi-Fi the client devices will authenticate to the APs. The devices thus have different roles. In WDS the APs would need to figure out the roles when authenticating to each other. The secure encryption methods will also change keys every so often, which will cause problems with multilateral authentication. For these reasons the original WDS networks were unencrypted or only supported weak WEP encryption. Later vendors invented their own solutions to implement WPA2 Personal (a.k.a. Preshared Key) but these solutions don’t interoperate. 802.11s was supposed to solve this but it hasn’t proven popular.


Mesh

Mesh is not standardized at all; many vendors simply have their own mesh solution. These are basic WDS, possibly with some improvements. The most popular improvement is to use a separate radio for AP-to-AP traffic. For example, the APs can use 5GHz for inter-AP traffic while client devices are only supported on 2.4GHz. This way adjacent APs can use different channels on 2.4GHz, but they still need to be on the same 5GHz channel.

There are mesh APs with three radios. Two are for client communications and the third is for inter-AP traffic. This way the clients can be supported on both 2.4GHz and 5GHz and still use a separate channel for inter-AP traffic. This will improve the throughput as long as there is room in the 5GHz band.

With multiple radios you avoid the first halving of throughput: the AP can receive on one radio and transmit on another at the same time. Only if the next AP needs to repeat the packet will the throughput halve. These mesh solutions may also use 80MHz or 160MHz wide channels for inter-AP traffic; the multifold capacity of 802.11ac’s wide channels compensates for the halving to some extent.

Without standardization the mesh solutions are proprietary. Mesh APs are typically more expensive, especially if they have three radios. You will also need to consider the AP density. In normal Wi-Fi the coverage cells of the APs don’t need to overlap. In a mesh the APs need to be able to receive each other well. This means halving the distance between the APs. You will need at least twice as many APs.



Wireless Backbone

Many vendors have their own point to point (PtP) and point to multipoint (PtMP) solutions. Typically the AP is integrated with a strongly directional antenna in a weatherproof enclosure. With these devices you can build a wireless backbone between the Wi-Fi APs. The backbone can have throughput comparable with a wired network. The solutions typically are proprietary with more efficient TDMA (Time Division Multiple Access) instead of 802.11 and may use totally different bands like 24GHz or 60GHz.

A wireless backbone might sound like overkill, but it is a viable solution with a reasonable price tag. For example, on a campus with a few buildings a wireless backbone can be a better solution than digging trenches for fiber cables. In some historical environments digging is out of the question or prohibitively expensive. A PtMP solution may be the least expensive way of distributing the internet feed from the main building to the cottages by the lake, for example.

Top notch gear for long distances is expensive, of course. For a few hundred meters or a couple of kilometers you can set up a PtP connection for less than 200 euros. You need to add a Wi-Fi access point to that, because user devices can’t connect to a TDMA network directly.


Always cable Wi-Fi access points if possible. An Ethernet cable is full duplex, error free and has superior capacity. If there isn’t that much traffic or users then you can supplement wired APs with mesh APs. You just need to be aware of the performance consequences. You should also consider the higher price of mesh APs. Could you use that money for cabling and end up with a better network? If you are forced to use meshing consider cabling as many APs as possible. The performance will improve as the hop chains will be shorter. If you need good performance but can’t install cabling, build a wireless backbone for the APs with PtP or PtMP connections.

How many users can one Wi-Fi access point support?

Sounds like a simple question. However, no matter how much you search, you can’t seem to find an answer. The reason is simple, it depends…

Let’s adjust the question slightly first: let’s talk about devices when discussing technical aspects. Each user can have five or more devices (yes: laptop, tablet, work phone, personal phone, watch, weather station, internet radio…)

What are the user expectations or requirements?

It is a completely different matter to serve 40 IoT devices that occasionally transmit sensor data than to serve 40 user devices displaying separate 4K video streams each. VoIP calls like Skype won’t transfer that much data, but they are very sensitive to delays (called latency) and fluctuation in the delay (called jitter). So you first need to find out what kind of applications the users have in mind. The problem is that this is a moving target. Each time you improve the network the users will find uses for the new capacity. That means the expectations and requirements will be totally different next year.

Limited air time

Wi-Fi is a shared medium, which means the devices compete for air time. While one device is transmitting, the others are receiving or at least waiting in silence. (802.11ac introduced MU-MIMO, which allows concurrent transmission for a few devices at a time, but this hasn’t changed the situation significantly.) The net result is that as the number of devices grows, so does the queue of devices waiting for a turn to transmit, which increases latency and jitter.

The more data is transferred the higher is the utilization, which increases the wait times as well, since the turns are longer.

Think of it as a negotiation table. The more negotiators there are, the longer it takes for each to get a turn to speak. If the speeches are long, it takes even longer to get a new turn. In Wi-Fi there is no chairman to give out turns in order; the process is somewhat random.

The cell size

The more area an access point covers the more variety there will be in the client connection qualities. Some devices will be closer to the AP while some will be at the edge. The devices on the edge will use hundreds of times more time (the minimum rate is 1Mbps) to transmit the same amount of data as the devices close to the AP (at 300/450/600Mbps).

In theory transmitting one megabyte at 1Mbps takes 8 seconds or 8000ms while it takes just 13ms at 600Mbps. In practice you should double those figures to account for the overheads and acknowledgments but still the ratio is 1:600.
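The arithmetic above is simple enough to sketch (theoretical figures only; as noted, real-world airtime is roughly double due to overheads and acknowledgments):

```python
# Theoretical airtime for transferring data at a given PHY rate.
# 1 megabyte = 8 megabits; rate is in megabits per second.
def airtime_ms(megabytes, rate_mbps):
    return megabytes * 8 * 1000 / rate_mbps

print(round(airtime_ms(1, 1)))    # 8000 ms at 1 Mbps
print(round(airtime_ms(1, 600)))  # 13 ms at 600 Mbps
```

The ratio between the two results is the 1:600 mentioned above: an edge client at 1 Mbps occupies the air six hundred times longer than a nearby client at 600 Mbps for the same megabyte.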

So what IS the answer?

Some vendors will give you a figure: 50, 100 or 255 devices. The last one probably just means how many devices the AP can hold in memory at one time. 255 devices competing for air time is an impossible scenario, unless we are talking about the IoT devices sending sensory data once a minute. The 50–100 devices per AP is the well known case when a hotel advertises “We have FREE Wi-Fi!” Yes, they do, but it is useless.

According to 802.11 the theoretical limit is 2007. Most chipsets will set the bar lower: 100, 128, 255 or some other number. There is no point in comparing the figures. They don’t tell anything about the power or quality of the AP.

One access point can support a high number of clients if the devices are nearby. This is the case for the HD or High Density access points some vendors offer. They have a little more memory and a better processor, but the main difference is in the antenna design. The antennas are designed for short range. If all the devices are within the same conference room, they can all connect at 300Mbps at least. The air time can be split into very short slots so everyone gets a share and still has useful bandwidth. These APs are designed to prevent connections from outside the room, because those would be slower. You can place multiple HD APs in a large auditorium if necessary, since the coverage is designed to be small – at least if you turn the transmit power down.

You can use a couple of dozen devices per access point as a rule of thumb. If you are designing a high capacity network, use more APs and smaller cells. There will be fewer devices in each cell and each one will have a better connection. Both improve the capacity of the network on their own, and together they complement each other. It is a win-win, and the cost of modern APs makes this affordable.

If you need more accurate estimates I recommend the Excel models by Andrew von Nagy.


WiFi 5GHz band and wide channels

On 5GHz WiFi there are more channels and less interference, both are important for fast wireless communications. The bandwidth can even be increased multifold by combining channels.

5GHz was introduced in 802.11a, but the radios were expensive and the band didn’t gain popularity. 802.11n was defined for both the 2.4GHz and 5GHz bands, which finally launched 5GHz use. The latest 802.11ac is only defined for 5GHz, but all devices still support 802.11n and most also support 2.4GHz.


The 5GHz band is divided into 5MHz channels like the 2.4GHz band. Fortunately only every fourth channel (36, 40, 44…) is used, which provides a de facto 20MHz channel width without the overlap problems of 2.4GHz. Most devices cannot even be tuned to the intermediate channels. The whole 5–6GHz range is not available, since some channels are forbidden and some have special restrictions.

5GHz Channels

Originally only the four lowest channels were available in the U.S., where they are called UNII-1. Later more channels have been made available, but they come with several restrictions on their use in the U.S.

In Europe (or in the ETSI jurisdiction) channels 36–64 are restricted to indoor use only. The maximum transmission power is 200mW (23dBm), which is greater than the 100mW (20dBm) allowed for 2.4GHz, but still doesn’t quite compensate for the roughly 6dB of extra attenuation due to the higher frequency. In access point use the maximum transmission power is practically irrelevant, since typical user devices have less transmit power, and in WiFi the connection is always bidirectional: there is no point in receiving the access point if you cannot send a reply. The common default for access points is maximum power, which means the 2.4GHz signal will be received about 3dB stronger, which in turn makes most devices choose the 2.4GHz signal instead of 5GHz.

On channels 100–140 the maximum transmit power is 1W (30dBm) and the channels can be used outdoors as well. In access point use 30dBm is irrelevant, but for point-to-point connections it enables long distance links (from 10km to 50km or even more). Weather radars use channels 120–128 and access points must yield to them: at start-up the access point will listen for radar signals for 10 minutes before transmitting on these channels. On the other DFS channels (52–140) the start-up delay is one minute. If the access point detects a radar signal, it will switch channel automatically. Most APs play it safe and choose a non-DFS channel (36–48), which may result in overlapping channel use.

The upper channels 149–165 are every fourth channel, with odd numbers. In Europe they can be used under the Short Range Device (SRD) specification for transmissions up to 25mW (14dBm), but most devices don’t support these channels. For access point use 14dBm would suffice and there are no DFS or other restrictions, but the sparse client support needs to be tested before deployment.

Coverage and cell size

The wavelength at 5GHz is half that of 2.4GHz, which implies higher attenuation. A 2.4GHz signal will be received about 6dB (four times) stronger than a 5GHz signal. A 5GHz access point will thus cover a smaller area in open space and won’t penetrate walls like 2.4GHz does. Because of the stronger signal, many devices will rather associate with the 2.4GHz AP. The simplest solution is to reduce the transmission power of the 2.4GHz radio by 6–7dB.
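The ~6dB figure can be checked from the free-space path loss formula, in which loss grows with 20·log10(frequency). A quick sketch comparing the two band centers (the exact figure depends on which channels you compare):

```python
# Free-space path loss difference between two frequencies.
# FSPL(dB) includes a 20*log10(f) term, so the difference is:
import math

def fspl_delta_db(f1_mhz, f2_mhz):
    return 20 * math.log10(f2_mhz / f1_mhz)

print(round(fspl_delta_db(2400, 5000), 1))  # ~6.4 dB extra loss at 5GHz
```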

The higher attenuation and poorer penetration can be turned into an advantage to reduce access point cell size. When the AP covers a smaller area there will be less users to compete for airtime, which translates to faster data transfer. You will need more APs to cover the area, but smaller cell size is the key to high performance WiFi.

Wide channels

802.11n introduced the concept of combining channels. Combining two 20MHz channels yields over twice the bandwidth, since there is no need for an isolation gap between the channels. In 802.11n you could combine channels on 2.4GHz as well, but there really aren’t enough channels available there. On 5GHz combining channels is actually useful, and 40MHz channels currently appear to be the default on most access points.

Combined channels need to be accounted for in channel planning. If you place two adjacent access points on channels 36 and 40 and enable 40MHz channels, the APs will end up taking turns: the AP on channel 36 will actually occupy channels 36 and 40, while the other occupies 40 and 44. Due to the overlap they cannot transmit or receive at the same time. You should place them on channels 36 and 44 instead. In the standard the 40MHz channels are numbered 38, 46, 54… to avoid overlapping, but most user interfaces seem to use the 20MHz numbering.
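As an illustration of the overlap problem, here is a hypothetical helper that assumes the primary 20MHz channel bonds with the next channel up (real APs may also bond downwards, so treat this as a sketch of the planning logic, not of any vendor’s behavior):

```python
# Which 20MHz channels a bonded channel occupies, assuming the
# primary channel bonds upwards (channel numbers step by 4).
def bonded_channels(primary, width_mhz=40):
    count = width_mhz // 20
    return {primary + 4 * i for i in range(count)}

# Two adjacent APs on 36 and 40 with 40MHz enabled:
print(bonded_channels(36) & bonded_channels(40))  # {40} – they overlap
# The recommended placement on 36 and 44:
print(bonded_channels(36) & bonded_channels(44))  # set() – no overlap
```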

802.11ac introduced 80MHz and 160MHz channels. They have their own channel numbers as well, because extra wide channels are so easy to set up overlapping. Using such wide channels makes even the 5GHz band crowded. Another problem is waiting for channel availability: if there are other APs in the neighborhood, our AP cannot transmit before all the channels are quiet at the same time. 802.11ac provides for dynamic channel width, which turns the setting into a maximum; the AP chooses the channel width to use according to the environment. The extra wide channels also require support in the user device. 160MHz channels are a Wave 2 feature, so there is not much support at this point. In reality you cannot make a channel plan with just two channels, so 160MHz should be reserved for point-to-point links, where it really is useful.

The maximum transmission power set by the authorities applies to the whole transmission, and the maximums are calculated for 20MHz channels. The power per channel should thus be halved (-3dB) for 40MHz, quartered (-6dB) for 80MHz, and cut to one eighth (-9dB) for 160MHz. Usually this doesn’t matter for access point use, since the maximums shouldn’t be used anyway. In point-to-point links it does matter, and occasionally you need to concentrate the power on fewer channels to get a stable link. A double channel at the same nominal transmit power will also consume twice the electric power, which is an important factor for mobile devices. If the wide channel respectively increases the transmission speed (i.e. shortens the transmission time), it cancels out the increase in power consumption. In practice the power consumption will increase somewhat due to retransmissions.
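The per-channel arithmetic above can be sketched as follows, using the 23dBm ETSI indoor maximum mentioned earlier (each doubling of channel width costs 3dB of power per 20MHz):

```python
# Power available per 20MHz slice when the same total power is
# spread over a wider channel: -10*log10(width/20) dB.
import math

def power_per_20mhz_dbm(total_dbm, width_mhz):
    return total_dbm - 10 * math.log10(width_mhz / 20)

for width in (20, 40, 80, 160):
    print(width, round(power_per_20mhz_dbm(23, width), 1))
# 20  -> 23.0 dBm
# 40  -> 20.0 dBm (halved, -3dB)
# 80  -> 17.0 dBm (quartered, -6dB)
# 160 -> 14.0 dBm (one eighth, -9dB)
```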


20MHz channels are still very useful and often recommended, especially in noisy environments or with lots of APs. Wide channels pick up noise across the whole channel width; the narrower the channel, the less noise. If there are many APs within reach, it is better to assign each AP a separate channel. There are twice as many 20MHz channels as 40MHz channels.


Compatible user devices should be steered to 5GHz, since there is more capacity and less interference. The simplest way is to turn the transmission power of the 2.4GHz radio down to the minimum or turn it off altogether.

40MHz channels on 5GHz are well supported and increase bandwidth. You just need to plan the channel use so as not to cause overlaps. Even if the original channel plan is perfect, DFS may cause unexpected channel switches that create overlaps. If you have a quiet environment and your devices support them, you may use 80MHz channels. One consideration is that the lowest 80MHz channel covers all the non-DFS channels, so if you want more than one 80MHz channel you have to live with the DFS restrictions. Don’t use 160MHz channels except for point-to-point links or other special cases.
