What is latency?
Let’s start with what latency really means. A quick Google search gives us the following definition: “latency is the time interval between stimulation and response”. If that still sounds abstract, imagine you’re playing an online computer game over a slow Internet connection and experiencing ‘lag’: you click the mouse button to move your character from one place to another, and nothing happens. There is a noticeable delay between the moment you click the button and the moment your character starts moving. That delay is what latency really is: the time between an action being initiated and the action actually taking place.
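To make the definition concrete, here is a minimal sketch of how latency is typically measured in Java: take a timestamp when the action is initiated, another when it completes, and look at the difference. The class and the `handleClick` method are hypothetical placeholders for whatever operation you want to measure.

```java
// Minimal sketch: measuring the latency of a single action in Java.
// "handleClick" is a hypothetical stand-in for the work triggered by a user action.
public class LatencyDemo {

    static void handleClick() {
        // placeholder for the actual work (moving the game character, etc.)
        Math.sqrt(42.0);
    }

    public static void main(String[] args) {
        long start = System.nanoTime();          // action initiated
        handleClick();                           // action takes place
        long latencyNanos = System.nanoTime() - start;
        System.out.println("Latency: " + latencyNanos / 1_000 + " µs");
    }
}
```

`System.nanoTime()` is used rather than `System.currentTimeMillis()` because it is a monotonic clock intended for measuring elapsed time, not wall-clock time.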
Low latency and Ultra Low Latency
We already know what latency is, but how do we define ‘low’? Human perception of latency is completely different from a machine’s. Human reactions are delayed by roughly 300 milliseconds, which means that everything below 300 ms is classified by our brains as real-time, i.e. zero delay. For machines it is a completely different story: for a trading system, 300 ms of latency would be a true nightmare.
The term ‘ultra-low latency’ is used by many companies to describe sub-1 ms latencies. Ultra-low latencies are most often associated with trading, as this is one of the areas where speed plays the biggest role: very often the winner takes all and second place is worth nothing.
Where is low latency crucial?
There are many systems where low latency plays a major role. Let’s have a look at some examples:
- trading systems (order execution, matching engines, pricing engines)
- video/audio streaming
- online games
- real-time data applications
Low latency in Java
Java, because of its virtual machine and garbage collection, is often perceived as slow. Fortunately, with the right optimisations we can make it extremely fast.
Let’s focus on the key elements influencing low latency:
- Garbage Collection. GC pauses can dramatically increase our latency, but they become manageable with the right tuning. We can also use non-standard, low-latency JVMs such as Zing from Azul.
- JIT. Compiling hot code paths to machine code can seriously speed up our system, but we need to take extra care to warm the system up first.
- Locking. Lock-free algorithms and non-blocking I/O become crucial and have a big impact on latency.
- Classloading. All key classes should be loaded before we even start using our system.
- Data structures. We need to know which data structures are optimal for storing our data and how to make sure they’re not slowing us down. For example, prefer ConcurrentHashMap over Collections.synchronizedMap for a greater degree of concurrency.
- The complexity of algorithms. Poor time complexity can greatly increase our latency, so we need to make sure that our algorithms run at the best possible speed.
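Two of the points above (warming up the JIT and choosing lock-free data structures) can be sketched together in a few lines. This is a hypothetical example, not a production trading system: `HotPath`, `onOrder`, and the symbol/price values are invented for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch combining two techniques from the list above:
// a lock-free counter (no synchronized blocks on the hot path) and a
// ConcurrentHashMap, which locks individual bins rather than serialising
// every call the way Collections.synchronizedMap does.
public class HotPath {

    // LongAdder performs striped, lock-free updates; under contention it
    // outperforms a synchronized long counter.
    static final LongAdder ordersProcessed = new LongAdder();

    static final Map<String, Long> lastPriceBySymbol = new ConcurrentHashMap<>();

    static void onOrder(String symbol, long price) {
        lastPriceBySymbol.put(symbol, price);
        ordersProcessed.increment();
    }

    public static void main(String[] args) {
        // Simple warm-up loop: exercise the hot path enough times that the
        // JIT compiles it to machine code before real traffic arrives.
        for (int i = 0; i < 100_000; i++) {
            onOrder("EURUSD", 10_000 + (i % 10));
        }
        System.out.println("Warmed up, processed " + ordersProcessed.sum());
    }
}
```

The warm-up loop is deliberately naive; real systems often use frameworks or replayed market data to warm every hot path, and verify compilation with JVM flags such as `-XX:+PrintCompilation`.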
Let’s have a look at pros and cons of using Java for low latency systems:
|Pros|Cons|
|---|---|
|Time to market|It might still be slower than C++|
|Time to stability|Finding the right GC/JIT optimisation might take a while|
We need to remember that optimising our Java code and the JVM is only the tip of the iceberg; if we want to get to sub-50 ms latencies we need to look in other directions as well. The next section sheds some light on the other major elements of low latency systems.
Other major factors influencing latency
There are many other factors influencing latency. We just described Java and JVM. Now, let’s quickly describe the other elements:
- Operating system: pinning our process to dedicated cores, page cache tuning, power management tuning
- Hardware: commodity hardware vs. FPGAs
- Internet connection: fibre or microwave links (for example in High-Frequency Trading)
When it comes to developing low latency systems there are many different aspects to consider. The programming language is only one of them, and Java with the right JVM configuration can be a good option, especially if we want to balance speed against time to market. In general, low latency is a huge area of study and this article only scratches the surface of this interesting topic. I will be delving into the details in future posts.