 The rise of artificial intelligence and machine learning techniques has spurred efforts across the computing world to make these tools work faster. A significant part of executing most machine learning models is performing multiplication operations on what are known as “matrices”, grids of numbers that have specific mathematical rules for (amongst other things) multiplication and addition. In a hardware context, these multiplications can be performed using elementary “Multiply and Accumulate” (MAC) operations.
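To make this concrete, here is how a matrix multiplication reduces to elementary MAC operations. This is a minimal pure-Python sketch for illustration, not how a GPU or photonic chip actually implements it:

```python
def matmul(a, b):
    """Multiply two matrices using only elementary
    multiply-and-accumulate (MAC) steps."""
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0
            for k in range(inner):
                acc += a[i][k] * b[k][j]  # one MAC operation
            out[i][j] = acc
    return out

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

Every entry of the result is an accumulation of pairwise products, which is why hardware that can perform huge numbers of MACs in parallel is so well suited to this workload.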

While the operations needed for machine learning are almost entirely multiplications and additions of two numbers, a great many of them need to be performed in a very short period of time. Graphics Processing Units (GPUs) and even more specialised Tensor Processing Units (TPUs) are the fastest way of performing these tasks using silicon electronics.

The optical alternative, silicon photonics, offers an entirely different way of performing these operations at incredible speed, and most companies working in silicon photonics are pursuing this same objective.

Simple multiplication is not the only useful computing process that can be accelerated by silicon photonics, and our hardware is designed to perform another very specific mathematical function. This function is not only of tremendous importance to scientists and engineers, but has great relevance to the digital world around us.

If you’ve ever watched a show on Netflix, listened to a piece of digital music or just browsed the web, this function has been applied countless times to analyse, process, and transmit that information.

This function is known as the Fourier Transform.

The Fourier transform is a mathematical tool for analysing repeating patterns. Given a piece of information, which could be an audio track, an image, or a machine-learning data-set, the Fourier transform gives a way by which we can unpick and isolate the simple wave-like harmonic patterns (sines and cosines) that, when added back together, return the original piece of information.
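This "unpicking" can be demonstrated in a few lines. The sketch below (using NumPy, an implementation choice rather than anything from the text above) builds a signal from two known sine waves and uses the Fourier transform to recover their frequencies:

```python
import numpy as np

# Build a signal from two known sine waves, at 3 Hz and 7 Hz,
# sampled 64 times over one second.
n = 64
t = np.arange(n) / n
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

# The Fourier transform isolates the harmonic components: peaks
# in the magnitude spectrum appear at the component frequencies.
spectrum = np.abs(np.fft.rfft(signal))
peaks = np.argsort(spectrum)[-2:]
print(sorted(int(k) for k in peaks))  # → [3, 7]
```

The two strongest entries in the spectrum sit exactly at 3 and 7, the frequencies we mixed in, and the inverse transform would return the original signal.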

For such a phenomenally powerful tool, it has a surprisingly simple equation:
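For a signal $f(t)$, one standard form of the (continuous) Fourier transform is:

$$\hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt$$

Here $\hat{f}(\omega)$ measures how strongly the frequency $\omega$ is present in the original signal.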

Despite the deceptive simplicity, this mathematical function has applications across the entire spectrum of information and data processing. In some cases, the Fourier transform is directly useful, such as when we are dealing with signals that contain information that we want to extract or alter. In other cases, it can massively simplify complex problems that would otherwise be intractable. To say that it is one of the most useful mathematical functions in history is not an overstatement.

However, when we want to apply the Fourier transform using a computer, there’s a catch. The most efficient computational method for performing a Fourier transform on a data-set, the Cooley-Tukey Fast Fourier Transform (FFT) algorithm, has to break the data-set down into many smaller Fourier transforms and then progressively recombine them. This need to perform many operations sequentially, one stage after the other, means that there is a fundamental limit to how efficiently a digital system can perform the Fourier transform.
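The recursive break-down-and-recombine structure can be sketched in a few lines of Python. This is a teaching illustration of the radix-2 Cooley-Tukey idea, not an optimised implementation:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT.
    len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    # Break the data-set into two smaller Fourier transforms...
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # ...then recombine them using complex "twiddle factors".
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + twiddled[k] for k in range(n // 2)] + \
           [even[k] - twiddled[k] for k in range(n // 2)]

# A constant signal has all its energy in the zero-frequency bin.
print([round(abs(v)) for v in fft([1, 1, 1, 1])])  # → [4, 0, 0, 0]
```

Each level of recursion depends on the results of the level below it, which is exactly the sequential dependency that limits how far a digital implementation can be parallelised.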

## Digital Efficiency of the FFT

If we want to use the Fourier transform to simplify a problem or speed up a calculation, we have to contend with this problem of efficiency.

In computer science, algorithmic complexity is often written in what is known as “Big-O” notation, which describes how the number of operations grows with the size of the input. For example, if we need to perform n operations given n pieces of data, this is an “O(n)” algorithm with linear complexity. If instead we need to perform n² operations, we’d say that the algorithm works or “scales” in polynomial time. Here’s what that looks like on a graph.
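A quick back-of-the-envelope comparison shows why this scaling matters. The counts below are rough order-of-magnitude estimates, not exact operation tallies:

```python
import math

# Rough operation counts for transforming n samples:
# a naive DFT needs about n^2 multiply-accumulates, while the
# Cooley-Tukey FFT needs on the order of n * log2(n).
for n in [1_024, 1_048_576]:
    naive = n * n
    fft_ops = n * int(math.log2(n))
    print(f"n = {n:>9,}: naive ~{naive:>17,} ops, FFT ~{fft_ops:>11,} ops")
```

At a million samples the gap is a factor of roughly fifty thousand, which is why the FFT, despite its sequential structure, remains the workhorse of digital signal processing.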
