Speed tests and Mbps – why can’t you hit the 1 Gbps mark?

Long-time internet users might remember the days of dial-up connections. Every line speed upgrade brought a noticeable difference. Fast forward to today, and that feeling of improvement with each upgrade is often missing. Why is that the case?

This is where it becomes necessary to understand the differences between capacity and throughput, along with evolving connectivity needs at home and in the workplace.

Speed tests and Mbps – the confusion starts with upgrades

A favourite activity for many people, after getting a new internet connection or upgrading an old one, is to run a speed test. But something strange happens. A customer with a 500 Mbps connection might achieve 480 Mbps when downloading a file from a fast server. After upgrading to 1 Gbps, they might expect 960 Mbps. Instead, they only get 600 Mbps.

At lower speeds, the internet connection itself acts as the bottleneck, so users can get close to the plan’s advertised speed. Once speeds go beyond a certain point, however, the bottleneck shifts. It’s no longer the access speed but external factors such as server performance, end-to-end network conditions, and the behaviour of TCP/IP, where the window size and round-trip time cap the throughput of any single connection.
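To see why TCP/IP itself can be the ceiling, consider a minimal sketch in Python. The window size and round-trip time below are illustrative assumptions, not measurements from any particular network:

```python
# A single TCP connection's throughput is capped by window size divided
# by round-trip time, regardless of how fast the line is.

def max_tcp_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on single-connection TCP throughput in Mbps."""
    return (window_bytes * 8) / rtt_seconds / 1_000_000

# A 256 KiB receive window over a 20 ms round trip:
print(max_tcp_throughput_mbps(256 * 1024, 0.020))  # ~104.9 Mbps
```

On a 1 Gbps line, that single connection still tops out near 105 Mbps; opening multiple connections in parallel is how download managers work around this limit.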

Even with 1 Gbps, 10 Gbps, or 100 Gbps connections, the maximum achievable throughput may still be 600 Mbps. This limitation is often due to server-side or network constraints. In this case, capacity refers to the maximum data rate under ideal conditions. Throughput, on the other hand, is the actual data rate achieved in the real world. As fibre internet line capacities continue to rise, these differences will become more apparent.
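To make the capacity-versus-throughput distinction concrete, here is a small sketch using the figures from the earlier example:

```python
# Utilisation = actual throughput / plan capacity. The figures match the
# earlier example: 480 Mbps on a 500 Mbps plan versus 600 Mbps on 1 Gbps.

def utilisation(throughput_mbps: float, capacity_mbps: float) -> float:
    return throughput_mbps / capacity_mbps

print(f"500 Mbps plan: {utilisation(480, 500):.0%}")  # 96%
print(f"1 Gbps plan: {utilisation(600, 1000):.0%}")   # 60%
```

The faster plan delivers more absolute throughput, but a smaller share of its capacity, which is exactly the mismatch a speed test exposes.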

The same principle applies to wireless services. For instance, 5G may be capable of theoretical speeds of 20 Gbps. However, this is based on the best equipment operating in ideal conditions. Even then, radio frequency limitations prevent those speeds from being reached in real-world use cases.

ISPs and contention ratios

Internet service providers (ISPs) use contention ratios to differentiate service profiles. Dedicated services carry far less contention than broadband services.

Higher-contended products let customers burst to their full line speed, but they share a portion of the network’s capacity with other subscribers. As a result, users may see higher throughput during off-peak hours and lower throughput during peak times.
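As an illustrative sketch (the 20:1 ratio is an assumption, not a quoted figure), the worst case on a contended service works out like this:

```python
# Worst-case per-user throughput on a contended service: line speed divided
# by the contention ratio. The 20:1 ratio here is purely illustrative.

def worst_case_mbps(line_speed_mbps: float, contention_ratio: int) -> float:
    return line_speed_mbps / contention_ratio

print(worst_case_mbps(100, 20))  # 5.0 Mbps if all 20 subscribers peak at once
print(worst_case_mbps(100, 1))   # 100 Mbps on an uncontended (dedicated) link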

The same applies to wireless services. When more people connect to a single tower, each user experiences lower throughput.

Speed tests and Mbps – not always a true reflection

Back to speed tests and Mbps: for an accurate result, the test must eliminate all other variables. This means running the test over a wired connection from a laptop plugged directly into the router. The device itself must also be capable of the speeds being tested.

For example, a laptop with a 100 Mbps network port will never exceed that speed. Even the LAN cable matters. A CAT 4 cable handles up to 16 Mbps. A CAT 5 cable manages 100 Mbps. CAT 5e can handle up to 1 Gbps.
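A small lookup, matching the figures above (with CAT 6 added for completeness), shows how to spot a cabling bottleneck:

```python
# Maximum rated speeds for common Ethernet cable categories.

CABLE_MAX_MBPS = {
    "CAT 4": 16,
    "CAT 5": 100,
    "CAT 5e": 1_000,
    "CAT 6": 10_000,  # up to 10 Gbps, but only over short runs
}

def cable_limits_plan(cable: str, plan_mbps: int) -> bool:
    """True if the cable is the bottleneck for the given plan speed."""
    return CABLE_MAX_MBPS[cable] < plan_mbps

print(cable_limits_plan("CAT 5", 1_000))   # True: CAT 5 caps a gigabit plan
print(cable_limits_plan("CAT 5e", 1_000))  # False
```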

Every component in the network path can impact speed. Servers, switches, routers, cables, firewalls and access points all play a role. For example, during fibre network speed upgrades over the past few years, many customers couldn’t benefit. Their routers couldn’t handle speeds over 100 Mbps. Unaware of this, many blamed their ISP or fibre network provider.
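In other words, end-to-end throughput is capped by the slowest component in the path. A minimal sketch, with illustrative component speeds:

```python
# The effective speed of a path is the minimum across its components.
# The speeds below are illustrative assumptions.

path_mbps = {
    "fibre line": 1_000,
    "router": 100,      # an older router with 100 Mbps ports
    "LAN cable": 1_000,
    "laptop NIC": 1_000,
}

bottleneck = min(path_mbps, key=path_mbps.get)
print(f"Bottleneck: {bottleneck} at {path_mbps[bottleneck]} Mbps")
# Bottleneck: router at 100 Mbps -- the fibre upgrade goes unused.
```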

Another issue is that high-speed connections like a gigabit link are not designed to deliver 1 Gbps to a single user or device. Speed tests often reflect performance on one device. In reality, a gigabit connection supports multiple users, devices and applications at once. It allows everyone to use the network efficiently and simultaneously. It does not guarantee flawless video calls for a single user at all times. Instead, it enables multiple users to enjoy good-quality video calls at the same time.

Matching throughput to demand

The total required throughput of a link depends on simultaneous usage. For example, if 30 users or devices each need 10 Mbps, then a 300 Mbps service is required.
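A simple sizing sketch, using the 30 × 10 Mbps example plus a hypothetical mixed-usage profile:

```python
# Sizing a link from simultaneous demand. The usage profiles below are
# illustrative assumptions.

def required_throughput_mbps(demands_mbps: list[float]) -> float:
    """Total link capacity needed if all listed demands peak together."""
    return sum(demands_mbps)

office = [10] * 30  # 30 users at 10 Mbps each
print(required_throughput_mbps(office))  # 300 Mbps

mixed = [25] * 4 + [8] * 10 + [5] * 6  # video calls, streaming, browsing
print(required_throughput_mbps(mixed))  # 210 Mbps
```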

The goal is to ensure each device or user has a decent experience. At this level, the single-connection limits of individual devices or protocols matter far less. It’s no longer about one device trying to download at 1 Gbps; it’s about multiple devices accessing the cloud, downloading, streaming and gaming together, all using the available bandwidth efficiently.

More to connectivity than just speed

We are entering a time when speed tests no longer reflect what you can actually do with a high-speed connection. A proper modern test would involve multiple concurrent connections to measure the total capacity.
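A minimal sketch of such a test, downloading over several parallel connections and reporting the aggregate; the URL is a placeholder, not a real test server:

```python
# Multi-connection speed test sketch: fetch the same file over several
# parallel connections and report aggregate throughput.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TEST_URL = "https://example.com/100MB.bin"  # hypothetical test file
STREAMS = 8

def download(url: str) -> int:
    """Download the file and return the number of bytes received."""
    with urllib.request.urlopen(url) as response:
        return len(response.read())

start = time.monotonic()
with ThreadPoolExecutor(max_workers=STREAMS) as pool:
    total_bytes = sum(pool.map(download, [TEST_URL] * STREAMS))
elapsed = time.monotonic() - start

print(f"Aggregate throughput: {total_bytes * 8 / elapsed / 1e6:.0f} Mbps")
```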

Many of us remember when local network limitations were the main bottleneck. Back then, every upgrade brought a noticeable jump in speed. Speed tests were useful in such contexts. However, technology has changed. Doubling your speed today doesn’t mean halving the download time.

On fibre, the line speed indicates the maximum download throughput. In wireless, there’s a difference between theoretical and practical speeds. Real-world performance depends on network load and equipment.

Theoretical versus practical speeds used to be a wireless concern, but the distinction now applies to fibre too. As fibre speeds rise, we are reaching the limits of what the rest of the chain can deliver. Testing a 10 Gbps line is difficult when the test device can’t support that speed due to hardware limits in memory, processing or other components.

The truth is, we are reaching a phase of bandwidth abundance. Service providers can now offer more bandwidth than most users actually need. At this point, speed is no longer the most important factor. What matters is capacity. Enough capacity ensures that every user and every device enjoys a smooth, high-quality experience.


Theo van Zyl | Head | Wireless | Andre Eksteen | Senior Product Manager | FTTB | Vox




