Behind the Paper

Controlling chaos using edge computing hardware

Researchers at The Ohio State University shed light on the high complexity and power consumption of modern machine learning algorithms and offer a potential solution.

In a recent publication in Nature Communications, researchers in the QuantInfo Group at The Ohio State University implement an efficient machine learning-based controller to stabilize chaos in an electronic circuit. Their work paves the way for deploying efficient machine learning algorithms at the computing “edge.” For additional context, see the press release from Ohio State News. The authors of this study are pictured below.

The Problem with Today's Controllers

Advanced devices, such as autonomous cars and aircraft, have become increasingly prevalent over the past decade, in part due to advances in the processing power of computer chips. A self-driving car needs such processing power because it must take in an enormous amount of information, such as data from its external cameras, sensors, and speedometer, and make split-second decisions on how to control the car to avoid an accident. The problem is that the machine learning models deployed to process the data and make decisions are also increasing in size and complexity, requiring additional evaluation time, which could mean the difference between life and death. Moreover, these advanced computer chips and machine learning models come with high power consumption, which limits the range or battery life of mobile platforms such as autonomous aircraft or handheld devices.

Traditional controller designs do not suffer from these time and power drawbacks; however, they have a few key downsides compared to their machine learning counterparts. For example, when a system exhibits complex behavior, a traditional controller often requires an accurate physical model of the system, which is not always available. In contrast, machine learning-based controllers can learn the model directly from data. This trade-off is particularly evident in traditional approaches for controlling chaotic systems: because they forgo a physical model, they can only control the system to regions where the dynamics are locally linear. Additionally, machine learning-based controllers offer the advantage of adaptability; they can re-learn the physical model in response to changing conditions, such as a sudden flat tire.

We were motivated to develop a controller that bridges the gap between traditional and machine learning-based controllers – a machine learning model fast, low-power, and simple enough to be implemented on a low-cost chip.

A Better Solution

Typical machine learning-based controllers use large feedforward neural networks consisting of interconnected layers of neurons. These networks not only require a long evaluation time, but they are also not inherently designed to process data from dynamical systems, or systems that evolve over time, making it difficult for them to learn an accurate physical model of the system. Instead, we employ a reservoir computer as the core of our controller, characterized by a “reservoir” of neurons with recurrent connections that impart short-term memory, making it ideal for processing data from dynamical systems. Taking this a step further, we utilize a next-generation reservoir computer that replaces the explicit recurrent connections with time-delayed polynomials of the inputs, which requires less data for learning and increases its efficiency. Building on these improvements, we increase the efficiency further by employing system identification techniques to select only the most important components of the model, as sketched below.
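To make this concrete, here is a minimal sketch of the kind of feature vector a next-generation reservoir computer builds: a constant term, the current and time-delayed inputs, and their polynomial combinations (quadratic only, for brevity). The variable names, the single delay tap, and the quadratic-only choice are illustrative assumptions, not the exact model from our paper.

```python
import numpy as np

def ngrc_features(x_now, x_delayed):
    """Feature vector for a next-generation reservoir computer:
    a constant, the current and time-delayed inputs, and their
    unique quadratic products (no recurrent network is evaluated)."""
    lin = np.concatenate([x_now, x_delayed])              # linear features
    quad = np.outer(lin, lin)[np.triu_indices(len(lin))]  # quadratic features
    return np.concatenate([[1.0], lin, quad])

# Example: a one-dimensional input with a single delay tap.
print(ngrc_features(np.array([0.2]), np.array([0.1])))
# -> [1.  0.2  0.1  0.04  0.02  0.01] (approximately)

# Training the linear readout W is then a single ridge regression over
# the feature matrix F and targets Y: W = Y F^T (F F^T + k I)^(-1).
# System identification amounts to keeping only the feature entries
# that contribute significantly to the fit.
```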

The idea of using a reservoir computer-based controller is not new, but its success hinges on the chosen control law, or the set of rules that determine the controller’s actions. For example, a previous approach used a reservoir computer to learn an inverse-based control law, but it failed to achieve accurate control of the same chaotic circuit we study here. In our study, we employ a control law based on feedback linearization, inspired by an approach from a 2006 control engineering textbook by Sarangapani. Here, the machine learning model predicts the system’s future state, which is used to cancel the system’s nonlinear dynamical evolution, allowing traditional linear control techniques to be used thereafter. The combination of an efficient reservoir computer and a simple control law means that our approach is significantly less complex and more accurate than the previous inverse-based controller.
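In pseudocode, the logic looks roughly like the following, assuming a simplified discrete-time system of the form x[t+1] = f(x[t]) + u[t] with a constant target state; the function names, gain matrix, and additive control input are illustrative assumptions rather than the exact formulation in our paper.

```python
def control_step(x, x_target, model, K):
    """One feedback-linearization step for x[t+1] = f(x[t]) + u[t],
    stabilizing a constant target state x_target.

    The learned model supplies f_hat(x), an estimate of where the
    uncontrolled system would evolve next. The perturbation cancels
    that nonlinear evolution and replaces it with linear feedback, so
    (with a perfect model) the tracking error obeys e[t+1] = -K e[t]."""
    f_hat = model(x)                            # predicted uncontrolled next state
    u = x_target - f_hat + K @ (x_target - x)   # cancel nonlinearity + linear feedback
    return u
```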

Challenges

As mentioned previously, we tested our approach by controlling a chaotic circuit to a variety of complex states that are difficult for traditional approaches to achieve. We implemented the controller using a low-cost field-programmable gate array (FPGA), a type of computer chip that can be reconfigured for different applications and allows for parallel processing to increase efficiency. In our setup, shown in the image below, the FPGA is responsible for measuring voltages from the chaotic circuit, evaluating the control law, and applying control perturbations to the chaotic circuit.
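Conceptually, the FPGA repeats a measure-evaluate-perturb cycle like the one below. This is Python-flavored pseudocode for exposition only: the real implementation is parallel hardware logic, and read_adc, ngrc_control, and write_dac are hypothetical stand-ins for the actual interfaces.

```python
import random

def read_adc():
    """Hypothetical stand-in for the analog-to-digital converter."""
    return random.random()

def ngrc_control(v):
    """Hypothetical stand-in for the reservoir computer control law."""
    return -0.5 * v

def write_dac(u):
    """Hypothetical stand-in for the digital-to-analog converter."""
    print(f"perturbation: {u:+.3f}")

# The real-time loop; on the FPGA these stages run as pipelined
# parallel logic rather than sequential code.
for _ in range(3):          # a few iterations for illustration
    v = read_adc()          # measure voltages from the chaotic circuit
    u = ngrc_control(v)     # evaluate the control law
    write_dac(u)            # apply the control perturbation
```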

Despite the simplicity of our control algorithm, implementing it on an FPGA was the biggest challenge. This is because an FPGA is not programmed like a typical computer: it has no operating system, and the user must decide the fate of every binary digit, or bit, in every mathematical operation. We chose a fixed-point representation for the variables, which is an efficient way to represent fractional numbers in binary, but it meant that we had to carefully specify the number of bits used in every addition, multiplication, and rounding operation to ensure the reservoir computer was evaluated correctly.
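As a flavor of the bookkeeping involved, here is a software sketch of a single fixed-point multiplication with explicit rounding and saturation, the kind of decision that must be made by hand for every operation on an FPGA. The bit widths here are illustrative assumptions, not the ones used in our implementation.

```python
def fixed_mul(a, b, frac_bits=16, out_bits=32):
    """Multiply two fixed-point numbers stored as integers with
    `frac_bits` fractional bits, then round and saturate the result
    back to `out_bits` total bits. (Widths are illustrative.)"""
    full = a * b                      # product carries 2*frac_bits fractional bits
    rounded = (full + (1 << (frac_bits - 1))) >> frac_bits  # round to nearest
    lo, hi = -(1 << (out_bits - 1)), (1 << (out_bits - 1)) - 1
    return max(lo, min(hi, rounded))  # saturate to the output width

# Example: 1.5 * 2.25 with 16 fractional bits.
a = int(1.5  * (1 << 16))
b = int(2.25 * (1 << 16))
print(fixed_mul(a, b) / (1 << 16))   # -> 3.375
```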

The second biggest challenge was to minimize the latency between measuring the system, evaluating the reservoir computer, and applying control perturbations. Low latency is crucial when controlling chaotic systems, because any delay introduces errors that grow exponentially and can quickly destabilize the system. Reducing the evaluation time was simple, as our control algorithm only required a handful of mathematical operations, which were performed in parallel. Reducing the measurement and perturbation time was the real challenge, as both are limited by the hardware in our low-cost FPGA device. However, we were able to significantly reduce the latency of both by writing custom code to drive the hardware at a higher frequency.

Outlook

In our study, we showed that our controller design is more efficient and more accurate than other approaches, and is simple enough to be implemented on a low-cost chip. We envision that this algorithm will be implemented on a broad variety of platforms to control more complex and higher-dimensional systems, with learning on the fly. However, our current focus has shifted towards using higher-end FPGA boards to apply similar algorithms to applications such as quantum information processing.