One of the most important functions of a radio network system is synchronization. With the advent of 5G, both synchronization and the related issue of precise timing have become critical considerations, mainly because the latest generation of cellular relies on time-division duplexing (TDD) radio technology.
Synchronization refers to the time-accuracy requirements that must be enforced among different radio components within a network to ensure and maintain reliable transmission. The key issue is that if the radio clock of a communications network loses synchronization accuracy — or the radios are not synchronized — in a TDD channel, framing will drift outside of the guard period and interfere with adjacent cell sites.
One solution to avoid this is to equip all base stations with a common phase clock reference.
Different timing synchronization techniques are available to ensure that all the radio units in a network are synchronized and allow the scheduler at the base stations to ensure that interference is minimized.
“The thing is, with wireless communications, the receiver has no prior knowledge of the channel or propagation delay associated with the transmitted signal,” Kashif Hussain, wireless solutions director at network testing and monitoring specialist Viavi Solutions (Scottsdale, Arizona), told EE Times Europe. “And typical receivers use low-cost oscillators to keep the cost of devices manageable. Inherently, such oscillators will have a degree of drift; thus the need for timing synchronization.”
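Hussain's point about low-cost oscillators can be put in rough numbers. The sketch below estimates the worst-case time error a free-running oscillator accumulates over a holdover interval; the ±10 ppm stability figure is an assumed example, not a value from the article or a 3GPP specification.

```python
# Illustrative sketch: time error accumulated by a free-running oscillator.
# The +/-10 ppm stability figure is an assumed example value.

def drift_us(stability_ppm: float, holdover_s: float) -> float:
    """Worst-case time error (in microseconds) accumulated over a holdover
    interval, for an oscillator with the given frequency stability in ppm."""
    return stability_ppm * 1e-6 * holdover_s * 1e6  # ppm -> fraction, s -> us

# A +/-10 ppm crystal left free-running for one second:
print(drift_us(10, 1.0))  # 10.0 us -- far beyond a 1.5 us phase budget
```

Even a second without correction can exhaust a microsecond-level phase budget many times over, which is why devices must be disciplined continuously rather than occasionally.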
Hussain noted that “previous generations of cellular also mandated frequency synchronization, mainly to prevent interference when cells overlap. But with 5G, we are dealing with extremely stringent requirements, with TDD streams in both the uplink and downlink sectors. And to complicate matters, here you are also dealing with MIMO.”
TDD offers a full-duplex channel over a half-duplex communications link. Thus, both transmitter and receiver use the same frequency but transmit and receive traffic at different times, using synchronized time intervals. TDD allows for improved spectral efficiencies but brings synchronization and timing challenges. Because downlink and uplink share the same spectrum, tough timing restrictions are necessarily imposed on a TDD system to avoid interference.
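The timing restriction above can be sketched as a simple feasibility check: the round-trip propagation delay across a cell, plus any residual timing error, must fit inside the TDD guard period. The cell radius, timing error, and guard-period values below are hypothetical examples, not figures from the article.

```python
# Illustrative sketch: does round-trip propagation delay plus residual timing
# error fit inside the TDD guard period? All values are hypothetical examples.

C = 299_792_458.0  # speed of light, m/s

def fits_guard_period(cell_radius_m: float, timing_error_us: float,
                      guard_period_us: float) -> bool:
    """True if the downlink-to-uplink switch is safe for this cell size."""
    round_trip_us = 2 * cell_radius_m / C * 1e6
    return round_trip_us + timing_error_us <= guard_period_us

# A 1 km cell with 1.5 us of residual error against an assumed 20 us guard:
print(fits_guard_period(1_000, 1.5, 20.0))   # True
# A 10 km cell blows the same budget on propagation delay alone:
print(fits_guard_period(10_000, 1.5, 20.0))  # False
```

The same arithmetic shows why larger cells need longer guard periods, which in turn cost spectral efficiency.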
To mitigate such interference, all base stations in a network need to be synchronized with a common phase clock reference signal, noted Hussain. According to the guidelines of the ITU-T standards, both 5G NR TDD and LTE-TDD networks need to be phase-synchronized to limit the end-to-end time error to within 1.5 µs — comprising no more than 1.1 µs up to the access point and 0.4 µs over the fronthaul to the radio.
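The end-to-end budget described above is additive: the time-error allocations of each network segment must sum to no more than 1.5 µs. Working in integer nanoseconds avoids floating-point surprises; the segment split shown is the one cited in the text.

```python
# The end-to-end phase budget cited above: segment time-error allocations
# must sum to no more than 1.5 us. Integer nanoseconds keep the sum exact.

BUDGET_NS = 1500  # 1.5 us end-to-end

def within_budget(segment_errors_ns: list) -> bool:
    """True if the summed per-segment time errors stay inside the budget."""
    return sum(segment_errors_ns) <= BUDGET_NS

# 1.1 us up to the access point + 0.4 us over the fronthaul to the radio:
print(within_budget([1100, 400]))  # True
# Any extra 100 ns of fronthaul error breaks the budget:
print(within_budget([1100, 500]))  # False
```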
Another complication in 5G — where TDD is the only option for the C-band — is that frame and slot synchronization also needs to be applied to avoid inter-network interference. And, with the increasing interest in open radio access network (Open-RAN) architectures, timing and synchronization will be even more crucial because additional delays from open interface network nodes will need to be considered for seamless 5G services, Hussain cautioned.
Of course, 5G is not simply about delivering voice and data via enhanced broadband; the whole point is that it will be needed to connect and support other demanding and emerging use cases and services based on the 3GPP Ultra-Reliable Low-Latency Communications (URLLC) protocol. Typical examples include IoT devices, machines on a factory floor, connected cars, and robotics.
On the transport side, for 4G LTE, the industry currently relies on the Common Public Radio Interface (CPRI), a synchronous fronthaul interface that enforces stringent delay requirements, which are fine for centralization but create challenges when it comes to bandwidth and node flexibility. This may not necessarily be practical for all such use cases.
The interface provides a dedicated transport protocol that was specifically designed to transport radio waveforms between the radio unit and the digital unit. In operation, CPRI frames expand with increased radio channel bandwidth and the number of antenna elements.
“The bottom line is CPRI is not very efficient in terms of statistical multiplexing and cannot scale to the demands of 5G, in particular for massive MIMO and larger bandwidth requirements,” said Hussain.
He added that the required bandwidth and antennas for a typical 5G scenario would push the CPRI bandwidth above 100 Gbits/s. That’s one of the reasons that using Ethernet for fronthaul and midhaul is so practical, he said.
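A back-of-envelope calculation makes Hussain's point concrete: the CPRI line rate grows with the I/Q sample rate, the sample width, and the number of antenna ports. The parameter values below (122.88 Msps for a 100 MHz carrier, 15-bit samples, 64 antenna ports) are illustrative assumptions for a massive-MIMO scenario, not figures from the article, and line coding overhead is ignored.

```python
# Back-of-envelope CPRI fronthaul rate for a massive-MIMO carrier.
# Parameter values are illustrative assumptions; line coding is ignored.

def cpri_rate_gbps(sample_rate_msps: float, bits_per_sample: int,
                   n_antenna_ports: int, overhead: float = 16 / 15) -> float:
    """I and Q samples (x2) for every antenna port, plus CPRI control-word
    overhead (one control word per 15 data words)."""
    bps = (sample_rate_msps * 1e6 * 2 * bits_per_sample
           * n_antenna_ports * overhead)
    return bps / 1e9

# 122.88 Msps, 15-bit I/Q, 64 antenna ports:
print(round(cpri_rate_gbps(122.88, 15, 64)))  # ~252 Gbit/s, well past 100G
```

Because the rate scales linearly with antenna count, doubling the array doubles the fronthaul load, which is exactly the scaling problem eCPRI's functional splits are meant to break.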
To improve bandwidth efficiency, 5G NR typically uses the new eCPRI protocol, which is claimed to reduce bandwidth requirements tenfold. Equally importantly, unlike its predecessor, eCPRI is not designed specifically to carry synchronization information; hence, the need for new synchronization protocols, such as Precision Time Protocol (PTP), and radio-interface–based techniques that synchronize distributed radio units in the evolved RAN architecture.
In a typical setup, the upper and lower parts of the 5G New Radio (NR) RAN are separated into different logical units: the centralized unit (CU), the distributed unit (DU), and the radio unit (RU). In this way, the baseband function in a base station is split into two logical units: the CU hosts the higher-layer protocols, while the DU handles the lower layers toward the user equipment.
While synchronization for backhaul in 5G will be similar to that for LTE, there are significant challenges for fronthaul in the absence of a synchronous interface, said Hussain.
Deploying satellite receivers at each RU and using GPS or GNSS, as is common now, will not be cost-effective, notably for small cells, C-band radios, and millimeter-wave (mmWave) radios. Hussain said there are likely to be satellite connections at the Centralized RAN (C-RAN) hub location with tight timing controls out to the radios.
In effect, this means timing and synchronization distribution is collapsed to work over Ethernet. In many cases, PTP will be used to distribute time of day (ToD) and Synchronous Ethernet (SyncE) to distribute frequency, so that RUs will be synchronized over Ethernet.

Ricardo Querios, head of RAN Security OAM and Transport at Ericsson, concurred. “Both GNSS and PTP can be sufficient, and the choice will depend on timing and costs,” he said in an e-mail exchange with EE Times Europe. “Many operators prefer to have redundancy, though … and will likely use several sources and combinations of GNSS and PTP to synchronize the RAN nodes.
“Complementary technologies may be relevant for cases that require GNSS redundancy” — for example, to protect against jamming or spoofing events, he added. “And there will be cases where GNSS is just not feasible, due to satellites’ visibility issues.”
Hussain noted that “reliability is arguably the biggest challenge, and the problem with in-building access to satellite signal operation is not far behind. And if you think about scaling that up, that is just not economically feasible.”
As already noted, to address fronthaul requirements, the ITU-T has defined a standard (G.8273.2) that mandates the addition of enhanced boundary clocks to meet the stringent synchronization requirements of disaggregated 5G networks. These specialist timing devices allow accurate distribution of timing in the network — for example, by using time-sensitive networking (TSN) Ethernet bridges that incorporate such telecom boundary clocks.
Moreover, network-based timing offers improved visibility of flaws and, in combination with PTP, makes it possible to monitor the flow from the grandmaster clock to the PTP client. That offers the network provider a complete view of all synchronized PTP clients, enhancing network visibility and therefore making the network much easier to control.
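The grandmaster-to-client flow being monitored rests on the basic IEEE 1588 (PTP) exchange: the client derives its clock offset and the mean path delay from four timestamps, assuming a symmetric path. The sketch below shows that calculation; the timestamp values are illustrative.

```python
# Sketch of the basic IEEE 1588 (PTP) offset calculation that network-based
# timing relies on. Timestamp values are illustrative; t1/t4 are captured at
# the grandmaster, t2/t3 at the client.

def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """t1: Sync sent (master); t2: Sync received (client);
    t3: Delay_Req sent (client); t4: Delay_Req received (master).
    Assumes a symmetric path, as the classic PTP calculation does."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # client clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way mean path delay
    return offset, delay

# Client running 100 ns ahead of the master over a 500 ns path (times in ns):
off, d = ptp_offset_and_delay(0, 600, 1000, 1400)
print(off, d)  # 100.0 500.0
```

Path asymmetry goes straight into the offset estimate as error, which is one reason the telecom profiles add boundary clocks at every hop rather than running PTP end to end over an unaware network.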
TSN, of course, has been widely used in the fixed network where very low latency is required. To promote TSN’s use in the wireless sector, silicon providers and networking gear vendors have established an interoperability and testing program under the auspices of the Avnu Alliance. The initiative includes companies such as Intel, Broadcom, NXP, Microchip, Cisco, Extreme Networks, and Keysight.
Importantly, the latest Release 16 for 5G under the auspices of the 3GPP includes TSN support.
But the TSN standards were specified by the IEEE with Ethernet in mind and thus target the link layer of a network. That is not the case with the 3GPP 5G standards or 802.11 Wi-Fi links, where these capabilities are embedded in the communications layer.
The emerging Wi-Fi 6 and 6E networks, based on the 802.11ax standard, deploy a different scheduling mechanism that can more accurately and efficiently schedule simultaneous transmissions from multiple devices. That makes it possible to provide bounded latency and high reliability.
TSN can be made to work with Wi-Fi and 5G by integrating it over the top so that it will have minimal impact on the RAN. The 3GPP approach outlined in Release 16 calls for the TSN time-domain information to be distributed between the TSN translator functions in the network and the device.
The next iteration of the 3GPP specifications, Release 17, is expected to simplify matters so that the TSN capability will reside within the 5G device.
There are other issues to be resolved, most importantly those related to mobility, because as devices move from one cell site to another, connectivity can be disrupted, impacting latency and reliability.
But since the applications in which wireless TSNs are most likely to figure — such as industrial IoT and robotics — have relatively low mobility requirements, those working on the dilemma believe the challenge can be solved.
It is highly likely that as operators continue to roll out 5G, they will increasingly look to network-based timing as a backup source of synchronization. The advantages include a way to circumvent the security issues around satellite-based systems, clearer visibility of synchronization flows, and the potential to reduce cell site costs.
This article was originally published on EE Times Europe.