
ASU researchers debut ViWi-BT, an AI/computer vision mmWave beam guide



The shift in the cellular industry from long-range radio signals to short-range millimeter waves is one of the biggest changes of the 5G era, and it is expected to continue with sub-millimeter waves in the next decade. To aim millimeter wave and future terahertz signals more precisely at user devices, Arizona State University researchers have developed ViWi-BT, a vision-wireless framework that enhances beam tracking using computer vision and deep learning.

Historically, smartphones worked much like other radio devices, scanning the airwaves for omnidirectional tower signals and tuning to whichever was strongest and/or closest. In the 5G and 6G eras, small cell networks instead use beamforming antennas to target their signals at specific discovered client devices, which may be weighing connections from multiple base stations at once. ViWi-BT aims to identify physical obstacles and advantages for the beam-aiming process with the help of AI and a device's camera or lidar capabilities, enabling "vision-aided wireless communication."
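To make the beam-aiming step concrete, here is a minimal Python sketch of exhaustive codebook beam sweeping, the kind of baseline selection process ViWi-BT is meant to improve on. The array size, codebook, and single-path channel below are illustrative assumptions, not details from the ASU work.

```python
import numpy as np

def steering_vector(n_antennas, angle_rad):
    # Array response of a half-wavelength-spaced uniform linear array.
    k = np.arange(n_antennas)
    return np.exp(1j * np.pi * k * np.sin(angle_rad)) / np.sqrt(n_antennas)

# Hypothetical codebook: 64 candidate beams at evenly spaced angles.
angles = np.linspace(-np.pi / 3, np.pi / 3, 64)
codebook = np.stack([steering_vector(32, a) for a in angles])

def best_beam(channel):
    # Exhaustive sweep: measure received power on every beam, keep the strongest.
    powers = np.abs(codebook.conj() @ channel) ** 2
    return int(np.argmax(powers))

# Toy channel: a single line-of-sight path arriving from about 20 degrees.
h = steering_vector(32, np.deg2rad(20))
print(best_beam(h))  # index of the codebook beam nearest that arrival angle
```

Sweeping every beam like this is slow and scales poorly with codebook size, which is exactly the overhead that predictive approaches such as ViWi-BT try to avoid.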

In short, a system with ViWi-BT capabilities would learn the 3D environment from a database of previously transmitted millimeter-wave beams paired with visual images, then predict the optimal beams for future users moving through the same space. The framework is fed visual and wireless-signal information covering static elements (buildings, streets, and open sky), common locations of moving obstacles (vehicles and people), and generally open spaces. From this knowledge, the system can predict where to send both direct line-of-sight beams and reflected non-line-of-sight beams, adjusting them with live information about current conditions.
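One way to picture the data such a framework consumes is a record that pairs camera frames with the beams a base station has served and the beams it should predict next. The dataclass below is a hypothetical sketch; the actual ViWi-BT dataset layout may differ.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ViWiSample:
    # One training record pairing visual and wireless observations.
    # All field names here are illustrative, not the dataset's real schema.
    image_paths: List[str]     # camera frames of the scene at each step
    observed_beams: List[int]  # codebook indices of beams served so far
    future_beams: List[int]    # ground-truth beams to predict (e.g. next 1, 3, or 5)

sample = ViWiSample(
    image_paths=["frame_000.jpg", "frame_001.jpg"],
    observed_beams=[17, 18],
    future_beams=[19, 21, 24],
)
```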

The researchers developed simulations of how the model's physics plays out, distilling highly detailed 3D objects into simpler approximations that a computer can use more efficiently in its calculations without significantly affecting the accuracy of the results. Each object is given a fixed or moving role in the simulation, along with its real electromagnetic properties with respect to 28 GHz millimeter wave signals, so that absorption, reflection, and diffraction can all be taken into account.
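As a rough illustration of why 28 GHz signals demand such careful modeling and aiming, the standard Friis free-space formula (far simpler than the team's absorption, reflection, and diffraction modeling) shows how steeply these frequencies attenuate even with nothing in the way.

```python
import math

def fspl_db(distance_m, freq_hz):
    # Free-space path loss (Friis formula) in dB.
    c = 3e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# A 28 GHz signal loses roughly 101 dB over 100 m in free space,
# before any blockage, reflection, or diffraction loss is added.
print(round(fspl_db(100, 28e9), 1))  # ~101.4
```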

Predictions are made by a recurrent neural network (RNN) trained on beam sequences previously observed at base stations in the environment. While the RNN alone can predict the next beam fairly well without computer vision support, it gets significantly worse when asked to predict three or five beams ahead, and deeper training does not close the gap. Adding appropriately trained computer vision to the mix, according to the ASU researchers, would enable the system to identify likely future obstacles, reflective surfaces, and user movement patterns in the scene.
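A minimal sketch of such a beam-sequence predictor, written here in PyTorch with a GRU and illustrative sizes rather than the paper's exact architecture, might look like this:

```python
import torch
import torch.nn as nn

class BeamPredictor(nn.Module):
    # Maps a sequence of past beam indices to logits over the next beam.
    # Codebook and layer sizes are illustrative assumptions.
    def __init__(self, n_beams=64, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_beams, embed_dim)  # beam index -> vector
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_beams)     # scores over the codebook

    def forward(self, beam_seq):
        # beam_seq: (batch, seq_len) integer beam indices
        x = self.embed(beam_seq)
        _, h = self.gru(x)              # h: (1, batch, hidden_dim)
        return self.head(h.squeeze(0))  # (batch, n_beams) logits

model = BeamPredictor()
past = torch.tensor([[12, 13, 13, 14, 15]])  # one user's recent beams
print(model(past).argmax(dim=-1))            # predicted next beam index
```

Predicting three or five beams ahead in a setup like this would mean feeding each prediction back in as input, which is where errors compound; conditioning that step on visual features is precisely what the researchers propose.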

Although the research is still at an early stage, it is likely to become increasingly important for performance as millimeter wave and sub-millimeter wave systems are called on for extremely low latency communication. At the very least, it could pave the way for base stations with their own camera hardware, a development that could transform everyday surveillance into actionable intelligence that improves wireless communication.

