Toward intelligent and programmable virtualized NG RANs

By Rahman Doost-Mohammady, Santiago Segarra, and Ashutosh Sabharwal

5G and beyond marks a paradigm shift from its 4G predecessor and will transform many industries and the way we live. One of the key advantages of 5G is its ability to support a much larger number of devices, a higher volume of data traffic, and more stringent end-to-end latency requirements compared to previous generations of wireless communication. This will enable a wide range of new applications and services, such as self-driving cars, smart cities, and virtual reality, which require low latency and high reliability.

One of the catalysts in achieving the 5G goals will be the use of artificial intelligence (AI), not only to enable these vertical applications but also to optimize and accelerate the performance of the 5G network itself. AI methods can optimize the performance of 5G networks by learning from data and adapting to changing conditions in real time; these methods are especially effective when the relationship between network parameters and performance is complex and analytically intractable. AI-aided network design could also improve the efficiency of 5G networks by enabling dynamic resource allocation and intelligent scheduling of data transmissions. For example, AI algorithms could analyze traffic patterns, predict future demand, and allocate resources accordingly to maximize network utilization, reducing costs and improving the user experience.

To realize the above goals, there is a need for real-world data to train AI models. In particular, it is essential for researchers to have access to a 5G-compliant, software-defined system that enables dataset generation at each layer. NVIDIA Aerial is an excellent such solution, powered by NVIDIA GPUs and an accelerated networking architecture.

In this blog post, we will overview a few breakthrough research directions that the Aerial platform could enable. Across all of them, Aerial provides the capabilities needed to pursue this research: a 5G-compliant, software-defined PHY; dataset generation at each layer; and GPU programmability for real-time AI training and inference.

Next, we will describe each research direction that we are pursuing at Rice University.

Figure 1. MIMO research at Rice includes neural network-based massive MIMO detection and radar detection using massive MIMO antennas.

Deep Learning-Based MIMO Detection

MIMO detection is a decades-old problem concerned with decoupling multiple spatial streams transmitted by multiple user antennas on the same time-frequency resource. The provably optimal detection algorithm, maximum likelihood, and its more efficient variants, such as the sphere decoder, have very high computational complexity that scales unfavorably with the number of antennas at the base station and the user terminals, as well as with the constellation size, prohibiting their use in real-world systems. Hence, suboptimal linear algorithms are used in practice instead. Recently, there has been a surge of research works that use machine learning models to solve this problem. From deep unrolling [1] to transformers [2], the results have been promising in simulation.
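
To make the complexity gap concrete, the minimal NumPy sketch below contrasts the two endpoints discussed above: brute-force maximum-likelihood search, whose cost grows exponentially with the number of streams, and the linear MMSE detector used in practice. The antenna counts, BPSK alphabet, and noise level are illustrative assumptions, not parameters from a real deployment.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): 4 user streams, 8 base-station antennas,
# real-valued BPSK model to keep the enumeration small.
n_tx, n_rx = 4, 8
alphabet = np.array([-1.0, 1.0])
sigma2 = 0.1                               # noise variance, assumed known

H = rng.standard_normal((n_rx, n_tx))      # channel matrix
x_true = rng.choice(alphabet, size=n_tx)   # transmitted symbols
y = H @ x_true + np.sqrt(sigma2) * rng.standard_normal(n_rx)

# Maximum likelihood: enumerate all |alphabet|^n_tx candidates -- optimal,
# but the search space grows exponentially with n_tx.
candidates = np.array(list(itertools.product(alphabet, repeat=n_tx)))
x_ml = candidates[np.argmin(np.linalg.norm(y - candidates @ H.T, axis=1))]

# Linear MMSE: one small matrix inverse per channel realization -- cheap but suboptimal.
W = np.linalg.inv(H.T @ H + sigma2 * np.eye(n_tx)) @ H.T
x_mmse = alphabet[np.argmin(np.abs(W @ y - alphabet[:, None]), axis=0)]  # quantize to alphabet

print("true:", x_true, "ML:", x_ml, "MMSE:", x_mmse)
```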

At Rice, we have developed a new method based on Langevin dynamics [3], which has been shown to get very close to optimal performance. The next step is producing experimental evidence that AI-based methods deliver gains in real systems. We are currently exploring the effectiveness of these AI-based detection methods using real-world 5G-compliant data collected from the Aerial platform. Follow-up research will be a real-time implementation of such algorithms (through online training and inference) on the NVIDIA GPUs that run Aerial.
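
For intuition, here is a simplified sketch of the kind of update at the heart of a Langevin-based detector: noisy gradient ascent on the posterior, combining the Gaussian likelihood score with the score of a Gaussian-smoothed discrete prior, annealed over decreasing noise levels. This illustrates the general technique rather than the exact algorithm of [3]; the annealing schedule, step sizes, and iteration counts are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx = 4, 8                          # illustrative sizes, as before
alphabet = np.array([-1.0, 1.0])
sigma2 = 0.1

H = rng.standard_normal((n_rx, n_tx))
x_true = rng.choice(alphabet, size=n_tx)
y = H @ x_true + np.sqrt(sigma2) * rng.standard_normal(n_rx)

def prior_score(x, tau):
    """Score of the discrete alphabet smoothed by a Gaussian of std tau, per dimension."""
    d = alphabet[:, None] - x                      # (|A|, n_tx) symbol-to-sample gaps
    w = np.exp(-d**2 / (2 * tau**2))
    w /= w.sum(axis=0, keepdims=True)              # soft responsibility of each symbol
    return (w * d).sum(axis=0) / tau**2            # grad of log sum_s N(x; s, tau^2)

x = rng.standard_normal(n_tx)                      # random initialization
for tau in [1.0, 0.5, 0.25, 0.1]:                  # placeholder annealing schedule
    step = 1e-3 * tau**2                           # placeholder step size
    for _ in range(300):
        # Posterior score = likelihood score + smoothed-prior score.
        score = H.T @ (y - H @ x) / sigma2 + prior_score(x, tau)
        x = x + step * score + np.sqrt(2 * step) * rng.standard_normal(n_tx)

x_hat = alphabet[np.argmin(np.abs(x - alphabet[:, None]), axis=0)]  # round to alphabet
print("true:", x_true, "Langevin estimate:", x_hat)
```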

Radar Detection and Coexistence

The demand for sub-6 GHz spectrum to deploy 5G networks has forced the US federal government to rethink the allocation of several bands used for radar applications. This has already resulted in the allocation of the 3.55-3.7 GHz (CBRS) band for secondary use in the US, with Navy radars as the primary users. Furthermore, the FCC is evaluating the allocation of 3.1-3.55 GHz, home to airborne radars, for secondary use by 5G operators. While secondary use in the former band is coordinated through spectrum access databases, deploying 5G networks in the latter band will be challenging, as it will require stringent interference-avoidance criteria. Although many radar detection schemes have been proposed in the literature, there is a need to look at this problem through the 5G lens. In other words, it is key to design methods that can detect the airborne radar signal within the 5G frame format.

At Rice, we have designed a radar emulator (a software-defined radio mounted on a drone), and we are using the Aerial platform to collect channel state information (CSI) from users corrupted by radar signals with different configurations, such as range and speed. Using these CSI datasets, we are investigating the use of AI to detect radar signal characteristics in CSI data. Future work includes the real-time implementation of such AI-based techniques as an add-on module to Aerial that could not only detect radar but also take interference-avoidance measures in a latency-sensitive manner.
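
As a rough illustration of what such a detector could look like, the PyTorch sketch below classifies a CSI snapshot as radar-present or clean with a small convolutional network. The tensor dimensions, network architecture, and synthetic batch are hypothetical stand-ins; the actual training data are the labeled CSI captures collected with the Aerial platform.

```python
import torch
import torch.nn as nn

# Illustrative CSI tensor shape (assumption): (batch, antennas, subcarriers, OFDM symbols).
N_ANT, N_SC, N_SYM = 8, 64, 14

class RadarDetector(nn.Module):
    """Small CNN that classifies a CSI snapshot as radar-present (1) or clean (0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_ANT, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool over the time-frequency grid
        )
        self.head = nn.Linear(32, 2)          # two logits: clean vs. radar

    def forward(self, csi):
        return self.head(self.features(csi).flatten(1))

model = RadarDetector()
# Synthetic stand-in batch (random CSI magnitudes) just to exercise the shapes.
csi = torch.randn(4, N_ANT, N_SC, N_SYM)
labels = torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(csi), labels)
loss.backward()  # a real training loop would iterate over labeled Aerial captures
print("logits shape:", model(csi).shape, "loss:", float(loss))
```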

Self-adapting vRANs

RAN softwarization enables rethinking RAN control and optimization in ways that were impossible with black-box hardware. To achieve higher throughput or lower latency in vRAN software, for example through higher-order spatial multiplexing, more compute resources must be used; this translates to more CPU cores or more GPU compute kernels. The programmability of the Aerial platform allows us to benchmark wireless performance under varying compute loads.
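
As a flavor of this kind of benchmarking, the sketch below times a stand-in PHY workload, batched linear equalization, as the number of spatial layers grows. The workload, sizes, and iteration counts are hypothetical placeholders; real measurements would target the actual Aerial GPU kernels.

```python
import time
import torch

# Hypothetical stand-in for a PHY workload: batched regularized equalization whose
# cost grows with the number of spatial layers (actual Aerial kernels differ).
def phy_workload(n_layers, n_prb=273, batch=64):
    H = torch.randn(batch, n_prb, n_layers, n_layers)   # per-PRB channel matrices
    y = torch.randn(batch, n_prb, n_layers, 1)          # received symbols
    return torch.linalg.solve(H @ H.mT + 0.1 * torch.eye(n_layers), H.mT @ y)

for n_layers in (2, 4, 8, 16):
    t0 = time.perf_counter()
    for _ in range(20):
        phy_workload(n_layers)
    print(f"{n_layers} layers: {(time.perf_counter() - t0) / 20 * 1e3:.2f} ms per batch")
```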

At Rice, we are investigating how many GPU resources are needed to achieve a given data rate. Since the achievable data rate also depends on the propagation environment, i.e., the wireless channel, it is important to model both effects jointly in the vRAN context. In general, this leads to the design of RAN schedulers that jointly and adaptively allocate compute resources as well as radio resources. Such a scheduler considers the available compute resources as well as the QoS requirements of the RAN users when allocating radio resources, and it can be composed of a local and a central module. The local scheduler adapts not only to varying channel conditions but also to varying compute loads caused by other apps running in the same computing environment; in the case of Aerial, latency-sensitive apps, such as the radar detector described in the previous section, can run on the same GPU. The central scheduler controls multiple local schedulers connected to different vRAN instances. Each scheduler can be designed using deep reinforcement learning (DRL) models that decide how to allocate resources based on the compute and channel environments. Our recent work has shown the feasibility and scalability of DRL for massive MIMO user scheduling, where the action space is very large. Figure 2 shows an overview of a resource scheduler architecture that enables self-adapting vRANs, and a sketch of the local decision loop follows below.
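
The sketch below, a hypothetical DQN-style local scheduler, illustrates that decision loop: the state concatenates per-user channel quality with a GPU-load indicator, and the action selects which users to serve in the next slot. The state features, network, and top-k action selection are illustrative assumptions, not our deployed design.

```python
import torch
import torch.nn as nn

N_USERS = 16              # users per cell (assumption)
STATE_DIM = N_USERS + 1   # per-user CQI plus a scalar GPU-load indicator (hypothetical)

class SchedulerPolicy(nn.Module):
    """DQN-style network scoring each user for scheduling in the next slot."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_USERS),   # one Q-value per candidate user
        )

    def forward(self, state):
        return self.net(state)

policy = SchedulerPolicy()

def schedule_slot(cqi, gpu_load, k=4):
    """Pick the k users to serve this slot, given channel quality and compute load."""
    state = torch.cat([cqi, gpu_load.view(1)])
    q = policy(state)
    # A trained policy would learn to serve fewer or cheaper layers under heavy
    # compute load; here we simply take the top-k Q-values as the action.
    return torch.topk(q, k).indices

cqi = torch.rand(N_USERS)        # placeholder channel-quality estimates
gpu_load = torch.tensor(0.8)     # placeholder GPU utilization
print("scheduled users:", schedule_slot(cqi, gpu_load).tolist())
# Training would update Q-values from a reward mixing throughput, latency,
# and deadline violations caused by compute overload.
```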

Figure 2. Self-adapting vRAN architecture

Conclusion

GPU-based vRANs, as demonstrated by Aerial, provide an exciting opportunity for researchers, as they enable demonstrating the capabilities of AI in improving the performance of next-generation cellular networks. At Rice, we will continue working to enhance the robustness of next-generation wireless network architectures by demonstrating these research ideas on the Aerial platform.

Authors

Rahman Doost-Mohammady is an assistant research professor in the Department of Electrical and Computer Engineering at Rice University. His work focuses on the design and efficient implementation of next-generation wireless network systems. He is the technical lead for the Rice RENEW project. He received his Ph.D. from Northeastern University in 2014, his M.S. from Delft University of Technology in 2009, and his B.Sc. from Sharif University of Technology in 2007.

Santiago Segarra is an associate professor in the Department of Electrical and Computer Engineering at Rice University. His research interests include network theory, data analysis, machine learning, and graph signal processing. He received the B.Sc. degree in Industrial Engineering with highest honors (valedictorian) from the Instituto Tecnológico de Buenos Aires (ITBA), Argentina, in 2011, the M.Sc. in Electrical Engineering from the University of Pennsylvania (Penn), Philadelphia, in 2014, and the Ph.D. in Electrical and Systems Engineering from Penn in 2016.

Ashutosh Sabharwal is a professor and chair of the Department of Electrical and Computer Engineering at Rice University. He works in two independent areas: wireless networks and digital health. His recent work in wireless focuses on joint wireless communication and imaging, and on large-scale experimental platforms; he leads the Rice RENEW project. He received his Ph.D. and M.S. from The Ohio State University in 1999 and 1995, respectively, and his B.Tech. from the Indian Institute of Technology, New Delhi, in 1993.

References

[1] M. Khani, M. Alizadeh, J. Hoydis and P. Fleming, "Adaptive Neural Signal Detection for Massive MIMO," in IEEE Transactions on Wireless Communications, vol. 19, no. 8, pp. 5635-5648, Aug. 2020, doi: 10.1109/TWC.2020.2996144.

[2] K. Pratik, B. D. Rao and M. Welling, "RE-MIMO: Recurrent and Permutation Equivariant Neural MIMO Detection," in IEEE Transactions on Signal Processing, vol. 69, pp. 459-473, 2021, doi: 10.1109/TSP.2020.3045199.

[3] N. Zilberstein, C. Dick, R. Doost-Mohammady, A. Sabharwal and S. Segarra, "Annealed Langevin Dynamics for Massive MIMO Detection," in IEEE Transactions on Wireless Communications, 2022, doi: 10.1109/TWC.2022.3221057.