The 5G Access Gateway Function (AGF): An Evolutionary Tale
Once upon a time, tucked away in a tiny garage in Anytown USA, there was a modest Remote Access Server. Its flickering green lights indicated that the rack of mighty 56K V.90 modems sitting next to it was busy linking a new wave of Internet users to the wonders of the World Wide Web. But the RS-232 umbilical bond1 between the two would soon be severed as the little RAS grew up and acquired its own integrated digital modem capabilities, complete with breakneck T1/E1/PRI interfaces direct to network service providers. The standalone modem’s reign came to an end, and our Remote Access Server made its rightful accession to the Internet gateway throne. And with MAX -- the predominant product during this industry transition, from a small company called Ascend -- that succession was all but assured.
The Remote Access Server was not only found in the garages of middle America, of course. It was primarily racked and stacked en masse within the data centers of larger Internet Service Providers (ISPs) and enterprises wishing to set up early analog or ISDN Virtual Private Networking (VPN) access for remote workers. It was simply an interesting dynamic of that time, where individuals with a modicum of tech savvy could become local Internet points-of-presence (POPs). Together, they served a growing number of local users not willing to be tied to a walled-garden Internet being served up by the likes of AOL or tired of paying for the cost of a long-distance phone call on top of the Internet access charge.
Thousands would eventually be swallowed up as those same major ISPs evolved and fought for a subscriber base. In the meantime, garage operations around the world made money in the typical way -- by charging subscriber fees greater than the cost of hosting the infrastructure and buying the connections -- or in a more unconventional manner, like being paid by the network operators to terminate calls or serving ads. This latter dynamic led to free Internet offerings.
The Remote Access Server
The Remote Access Server acted as a demarcation and traffic aggregation point between the subscriber and the Internet. It terminated the Layer 2 protocols that carried IP from the dial-up client -- first the Serial Line Internet Protocol (SLIP)2 then the Point-to-Point Protocol (PPP) -- and interacted with external databases for user authentication, authorization and accounting (AAA).3 Authentication was achieved using the sprightly named Password Authentication Protocol (PAP) or Challenge Handshake Authentication Protocol (CHAP) between the user and the RAS. Variations of the old UNIX Terminal Access Controller Access Control System (TACACS) or, later, the Remote Authentication Dial-In User Service (RADIUS) were employed when the RAS needed to dip into an external database for any part of the AAA process. (See “Backstory” under the footnotes.)
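To make that AAA exchange concrete, here is a minimal sketch of the RADIUS Access-Request a RAS would send when dipping into an external database, per RFC 2865: a 20-byte header (code, identifier, length, random Request Authenticator) followed by type-length-value attributes, with the PAP password obfuscated using an MD5 keystream derived from the shared secret. The function names and example values are mine, not from any particular implementation.

```python
import hashlib
import os
import struct

def pap_password_hide(secret: bytes, authenticator: bytes, password: bytes) -> bytes:
    """Obfuscate a PAP password per RFC 2865 section 5.2: pad to a
    16-byte multiple, then XOR each block with MD5(secret + previous
    block), seeded with the Request Authenticator."""
    padded = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        digest = hashlib.md5(secret + prev).digest()
        block = bytes(a ^ b for a, b in zip(padded[i:i + 16], digest))
        out += block
        prev = block
    return out

def access_request(secret: bytes, username: bytes, password: bytes,
                   identifier: int = 1) -> bytes:
    """Build a minimal RADIUS Access-Request (Code 1) carrying
    User-Name (attribute type 1) and User-Password (attribute type 2)."""
    authenticator = os.urandom(16)
    hidden = pap_password_hide(secret, authenticator, password)
    attrs = (bytes([1, 2 + len(username)]) + username +
             bytes([2, 2 + len(hidden)]) + hidden)
    length = 20 + len(attrs)  # fixed RADIUS header is 20 bytes
    return struct.pack("!BBH", 1, identifier, length) + authenticator + attrs
```

The RAS would fire this at UDP port 1812 and wait for an Access-Accept or Access-Reject; accounting used the same TLV framing on a separate port.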
RADIUS has now been superseded by the Diameter protocol for such purposes -- not a clever acronym, but simply a play on the word radius. The Diameter standardization process began in the IETF’s AAA working group in late 2000/early 2001 -- just as the migration of Internet users from narrowband dial-up to broadband Digital Subscriber Line (DSL) was getting into full swing. So embedded is the RADIUS protocol, however, that it remains in widespread use even today.
Enter the Broadband Remote Access Server
The advent of Asymmetric Digital Subscriber Lines (ADSL) demanded an update of the central office Remote Access Servers that supported these emerging high-speed fixed-line connections. Enter the BRAS. (Duh!) While the box was bigger, the operational concept was practically identical to that of the RAS, in that the BRAS fundamentally performed subscriber management and data aggregation. So similar was the concept, in fact, that in the early days you actually had to dial up the DSL line from a PC client. PPP over Ethernet (PPPoE) was employed from the user to the DSL modem, which then stripped off the Ethernet frame and replaced it with an Asynchronous Transfer Mode (ATM)4 header (PPPoA). The variable bit rate (VBR), packet-centric ATM Adaptation Layer 5 (AAL5) encapsulation was employed on both the client side5 and the network side of the Digital Subscriber Line Access Multiplexer (DSLAM), the device that terminates the line-side DSL twisted pair.
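The cost of all that encapsulation is easy to quantify. An AAL5 CPCS-PDU appends an 8-byte trailer, pads to a multiple of the 48-byte cell payload, then ships each chunk in a 53-byte cell -- so overhead comes in lumpy, cell-sized steps. A quick sketch of the arithmetic (my helper names, standard AAL5 framing rules):

```python
def aal5_cells(payload_len: int) -> int:
    """Number of 53-byte ATM cells needed to carry a payload in an
    AAL5 CPCS-PDU: payload + pad + 8-byte trailer, rounded up to a
    multiple of the 48-byte cell payload."""
    total = payload_len + 8      # 8-byte AAL5 trailer
    return -(-total // 48)       # ceiling division

def aal5_overhead_pct(payload_len: int) -> float:
    """Wire overhead: 53 bytes on the wire per 48 bytes of cell
    payload, plus AAL5 padding and trailer."""
    cells = aal5_cells(payload_len)
    return 100.0 * (cells * 53 - payload_len) / (cells * 53)
```

A 40-byte TCP ACK fits exactly one cell (40 + 8 = 48) at roughly 25% overhead; add a single byte and you pay for a second, nearly empty cell. This "cell tax" is a big part of why Ethernet-based aggregation eventually won out.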
Cheap home routers quickly eliminated the need for a PPP client on the PC, enabling multiple devices to easily share the DSL connection. PPPoE was still employed, but just between the router and the DSL modem. This is all combined into one Wireless Access Point (WAP) these days, of course. Because PPP and its various encapsulations became unwieldy, and as the need to authenticate users gave way to the desire to simply authenticate an endpoint, DHCP was increasingly favored by ISPs. DHCP was being employed regardless for IP address allocation, so the IETF extended it6 with an option -- Option 82, to be exact -- which enabled additional strings, such as user IDs and passwords, to be inserted by the router into DHCP DISCOVER messages.
IP encapsulation in PPP in the old days of ADSL
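Option 82, the DHCP Relay Agent Information option of RFC 3046, is itself just a container of sub-options -- most commonly a Circuit-ID identifying the access line and a Remote-ID identifying the subscriber. A minimal sketch of how those bytes are laid out (the example ID values are invented for illustration):

```python
def option82(circuit_id: bytes, remote_id: bytes) -> bytes:
    """Build a DHCP Relay Agent Information option (RFC 3046):
    option code 82, total sub-option length, then Circuit-ID
    (sub-option 1) and Remote-ID (sub-option 2) as TLVs."""
    sub = (bytes([1, len(circuit_id)]) + circuit_id +
           bytes([2, len(remote_id)]) + remote_id)
    return bytes([82, len(sub)]) + sub
```

The BRAS (or its RADIUS server) could then key authorization off the line identity rather than a typed password, which is exactly the endpoint-not-user shift described above.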
The BRAS interfaced with a RADIUS server for AAA, in the same way as the RAS, but as a broadband aggregator, the BRAS was also tasked with traffic policing. This primarily took the form of rate-limiting IP flows to prevent DDoS attacks, but the BRAS could also apply fair queuing and weighted discard techniques in support of IP QoS. It rarely needed to bother, however, as in those days the IP Differentiated Services Code Point (DSCP) was given little respect by any network function.
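The rate-limiting mentioned above is classically implemented as a token bucket: tokens accrue at the contracted rate up to a burst allowance, and a packet is forwarded only if enough tokens remain. A minimal single-rate policer sketch (class and parameter names are mine, not any vendor's API):

```python
class TokenBucket:
    """Single-rate policer of the kind a BRAS might apply per
    subscriber flow: tokens accrue at `rate` bytes/sec up to `burst`
    bytes; a packet passes only if the bucket holds enough tokens."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0  # start with a full bucket

    def allow(self, size: int, now: float) -> bool:
        # Refill for elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False  # out-of-profile: drop, or re-mark the DSCP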
That changed at the start of the 21st century, when differentiated services over DSL became all the rage. Long before the call for net neutrality, network operators were facing a dilemma: how to squeeze a broad range of IP services -- multi-channel standard-definition and emerging HDTV, plus voice and Internet access -- over a single, relatively low-speed DSL pipe. Like today, oversubscription solved a multitude of QoS issues, but the DSL links could not be made faster without pushing the DSLAM closer to the home. That sounds easy enough but was an expensive proposition, and the Local Exchange Carriers (LECs) of the time7 were facing the regulatory prospect of having to unbundle any new fiber or copper infrastructure they deployed, to any competitor.
A carrier-driven initiative, the DSL Forum -- now known as the Broadband Forum – undertook the task of defining the business case (TR-058) and technical architecture (TR-059) for providing QoS-enabled IP Services over DSL and the requirements for the BRAS to enable it (TR-92). Demanding complete ATM switching and IP routing, the five-level hierarchical traffic shaping prerequisites (IP, PPP, ATM VC, ATM VP, DSL PHY) were hard for existing BRAS vendors to deliver on. Fortunately, it didn’t matter, in the end, as Washington, D.C. lobbyists prevailed, and the threat of forced unbundling subsided.
Both were ratified in 2003/04; if you search Wikipedia for “BRAS,” it is TR-059 and TR-92 you’ll see referenced.8 There’s also a third -- TR-101: Migration to Ethernet-Based Broadband Aggregation. Though completed later, in 2006, the TR-101 initiative (or Working Text 101 / WT-101, at the time) ran concurrently with TR-92 and was an acknowledgement that one should never bet against Ethernet everywhere. Shedding the shackles of ancient ATM entirely, while acknowledging that there are subscriber access mechanisms other than DSL, a TR-101-compliant platform was to be known as a Broadband Network Gateway (BNG). And outside of the odd update (Rev 2.0, 2011), there we have remained for over a decade.
Introducing the Access Gateway Function for FMC
In 2018, the Broadband Forum (BBF) decided to step into the access fray once more. This time, it was to tackle Fixed Mobile Convergence (FMC) in 5G networks. With the prospect of 5G fixed wireless access speeds either supplementing (i.e., for backup, bursting, etc.) or outright eliminating the need for local loops, the BBF is liaising with the 3GPP in defining an Access Gateway Function (AGF) that resides between the wireline access infrastructure and the wireless core network. The case for 5G FMC is being outlined within SD-407, while the requirements of the AGF will be defined within working text WT-456. Work is also ongoing on the corresponding 3GPP technical specification: TS 23.316 V0.2.0, Group Services and System Aspects; Wireless and wireline convergence access support for the 5G System (5GS), aka 5G WWC.9
5G Fixed Mobile Convergence (FMC)
Implemented with complete control and user plane separation (CUPS), the AGF will support Residential Gateways (RGs) that include 5G Non-Access Stratum (NAS) signaling (5G-RG) and RGs that are purely wireline (FN-RG), as defined within TR-124 Issue 5: Functional Requirements for Broadband Residential Gateway Devices. These RGs will employ the application layer protocol for remote management of customer premises equipment (CPE) defined within TR-069.
Not only is the 5G core delivering on mobility services, it is also acting as the backbone infrastructure for services typically delivered via a wireline network. Beyond the control plane (CP) and user plane (UP) separation, the AGF differs from a classic BNG (or BRAS) in that the northbound signaling interfaces for AAA are those defined within 5G specifications, with the AUSF providing the common server functionality. N1 may originate natively, in the case of the 5G-RG, and be transported over either the wireline access or wireless infrastructure. Where there is no native NAS support, as in the case of the FN-RG, the AGF-CP performs interworking between the wireline user/endpoint control plane and authentication techniques and the NAS. The same goes for the user plane, where any transport or tunneling mechanisms are stripped and replaced with the GPRS Tunneling Protocol (GTP) by the AGF-UP. While the N2 and N3 reference interfaces are well defined,10 the interfaces downstream from the AGF to the RG will be specified within the BBF’s WT-456.
AGF control and user plane protocol stack interworking
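On the N3 side, the user plane interworking described above amounts to pushing a GTP-U header onto each subscriber packet once the wireline framing has been stripped. A minimal sketch of that G-PDU encapsulation per 3GPP TS 29.281 (the function name and TEID value are illustrative only):

```python
import struct

def gtpu_encapsulate(teid: int, inner_packet: bytes) -> bytes:
    """Minimal GTP-U G-PDU encapsulation (3GPP TS 29.281), of the
    kind an AGF-UP would apply after stripping the wireline tunnel:
    flags 0x30 (version 1, protocol type GTP, no optional fields),
    message type 0xFF (G-PDU), payload length, then the 32-bit
    Tunnel Endpoint Identifier (TEID) assigned by the UPF."""
    header = struct.pack("!BBHI", 0x30, 0xFF, len(inner_packet), teid)
    return header + inner_packet
```

The resulting packet rides UDP port 2152 toward the UPF; the reverse path strips the 8-byte header and re-applies whatever access encapsulation the RG expects.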
Employing fixed mobile core convergence dramatically reduces the number of distinct components and technologies required to support broadband services. Not only does this lower capital and operational expenses, it simplifies management and provisioning. With 5G’s Service Based Architecture (SBA), we can also inject wireline access with the kind of features promised by this modern applications infrastructure, such as end-to-end network slicing and service chaining. With common Quality-of-Service (QoS) policy control -- regardless of the mix of access mechanisms employed -- traffic can also be policed end-to-end.
Like every other 5G Core component, the control plane and user plane access gateway is a cloud native network function that is deployed within a multi-access edge compute environment.11 While this is a relatively simple proposition for the control plane component, the AGF user plane requires high-power packet processing of the type typically enabled by dedicated Application Specific Integrated Circuits (ASICs) and network processors, rather than the x86 CPU architecture that is the foundation of cloud infrastructures.
There have been major inroads in cloud packet processing in the last few years,12 but these initiatives still fall short of delivering an alternative to dedicated switching and routing hardware. As a cloud native software company, we have had our engineering team focused on this problem for the last few years. The result is our Composable Network Application Processor (CNAP), a technology I’ve highlighted in previous posts13 and the one we employ to support the equally demanding 5G User Plane Function (UPF).14
As I continually enjoy pointing out in these posts, new technology is rarely actually new. As we make the transition to pure cloud native network architectures, however, even an evolutionary platform such as the AGF demands a significant degree of innovation. Outside the huge gains in spectral efficiencies and bandwidth, 5G represents an opportunity to completely rethink how wireless and wireline infrastructures are built… and who builds them.
1. These data rates could be achieved over a low-speed (V.24/V.28) interface because of the short distances involved.
2. SLIP was ditched in favor of PPP because it supported only IP.
3. The RAS also had enterprise features such as dial-back, alleviating the costly phone call for the remote worker.
4. ATM still represents the quintessential example of “designed by committee” for me, where voice guys wanted a 32-byte payload while the data guys wanted 64. They finally agreed to meet in the middle with a 48-byte payload. The addition of a 5-byte header resulted in a seemingly arbitrary 53-byte cell.
5. AAL1 could be employed with Symmetrical DSL (SDSL) to support constant bit rate (CBR) circuit emulation services for voice and video. AAL2 exploded onto the startup scene in the late 1990s, supporting a more efficient Voice over DSL (VoDSL) service that could be deployed alongside AAL5. That market imploded just as quickly in the early 2000s as VoIP became more viable. Who knows what AAL3/4 was used for… or if it was ever used.
7. The Bell System, in the U.S., was still intact... sort of.
8. If you search the Internet for BRAS, you get something entirely different.
9. Unlike the BBF working text, you can take a look at the 3GPP’s work-in-progress right here.
10. 3GPP TS 23.501: System Architecture for the 5G System. Stage 2
Backstory: How did this ever work in the days before people realized they could make money creating specifications forums and bodies with tie-in trade shows and events? The network operators themselves held large interop events where vendors set up in a big room and tested against each other. I attended one at Pacific Bell’s San Ramon HQ in 1995, when Scott Adams was still working in the ISDN lab. Well, collecting his mail from there, once a week, at least. We all got a signed photocopy of this comic strip, which, after 20 years of moving it around and between several houses, I finally threw out last month… right when I could have shown it, here.
Simon is the Director of Technical Marketing and a man of few words.