Thursday, December 29, 2022

OLT MAC Address Anti-Flapping Causes Both BRASs to Become VRRP Master

Problem Description

When two BRASs are connected through one OLT, both BRASs become VRRP masters.

 

Handling Process

1. Both BRASs are in the master state because the VRRP packets sent from the active BRAS to the standby BRAS are discarded on the OLT side. The standby BRAS therefore considers the active BRAS faulty and also becomes active. Collect ACL statistics on the OLT: the port connected to the active BRAS is receiving VRRP packets.

 


 

Mirror the port connecting the standby BRAS on the OLT (a Huawei MA5680T in this example). No VRRP packets are sent out of this port, so the packets are being lost on the OLT, as shown in the following figure.

 


 

2. The OLT discards the VRRP packets sent from the active BRAS to the standby BRAS because MAC address anti-flapping is enabled. The port connected to the standby BRAS learns the virtual MAC address first; when the port connected to the active BRAS then receives a VRRP packet carrying the same virtual MAC address, the OLT discards the packet because of the MAC address learning conflict.

 

Query the MAC address table on the OLT: the virtual MAC address of the VRRP packets has been learned on the port connected to the standby BRAS. Because the MAC address anti-flapping function is enabled on the device, this virtual MAC address is pinned like a static MAC address so that it cannot flap to other ports.

 


 

The virtual MAC address was learned on the port connected to the standby BRAS because the optical path between the active BRAS and the OLT had been interrupted. Services switched to the standby BRAS, the MAC address was deleted from the original port, and it was re-learned on the port connected to the standby BRAS. After the optical path to the active BRAS recovered, the port connected to the active BRAS received VRRP packets carrying the same virtual MAC address and discarded them because of the MAC address learning conflict.

 

Root Cause 

When two BRASs are connected through an OLT, they use the same virtual MAC address. The MAC address anti-flapping function is enabled on the OLT. After the port connected to the standby BRAS learns the virtual MAC address, the port connected to the active BRAS receives VRRP packets carrying the same virtual MAC address and discards them because of the MAC address learning conflict. The standby BRAS therefore stops receiving VRRP heartbeat packets from the active BRAS and also becomes the active BRAS.
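
For reference, both BRASs in a VRRP group advertise the same well-known virtual MAC address, which is derived from the VRID (RFC 5798 for IPv4). A minimal Python sketch; the VRID value of 1 is only an assumption for illustration:

# VRRP (IPv4) virtual MAC address: 00-00-5E-00-01-{VRID}.
# Both the active and standby BRAS use this same MAC, which is why the
# OLT sees an apparent MAC move when the active path flaps.

def vrrp_virtual_mac(vrid: int) -> str:
    """Return the IPv4 VRRP virtual MAC for a given VRID (1-255)."""
    if not 1 <= vrid <= 255:
        raise ValueError("VRID must be between 1 and 255")
    return f"00:00:5e:00:01:{vrid:02x}"

print(vrrp_virtual_mac(1))   # 00:00:5e:00:01:01 (VRID 1 assumed for illustration)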

Solution

Run the following command to disable the MAC address anti-flapping function on the OLT:

(config)#security anti-macduplicate disable

Tuesday, December 20, 2022

How to build a 10Gbps home network?

In this day and age, the internet has become a necessity of life. When designing a home network we have several choices, Wi-Fi or wired. Which one is more suitable for a home network? How should the home network be deployed, how do we choose the network cable, and ultimately how do we build a 10Gbps home network?


Ethernet vs. Wi-Fi

Wi-Fi bandwidth keeps growing: the latest commercial standard, Wi-Fi 6, reaches wireless rates of about 10Gbps, and the Wi-Fi 7 draft is being revised toward ultra-high speeds of around 30Gbps, leaving many wired links behind. The future of Wi-Fi looks bright. Yet even though Wi-Fi keeps getting faster, wired networks still have unmatched advantages:

  1. Stability: once the network cable is deployed, it rarely needs to be adjusted later and its transmission rate stays essentially constant.

  2. High speed: although the Wi-Fi 6 rate is high, current network cabling runs at 1Gbps or higher, which fully meets home use.

  3. Strong interference resistance: the wired network rate is not affected by microwave ovens, walls, and the like.

  4. Wide compatibility: thanks to the RJ45 interface specification, all (non-fiber) wired network interfaces are identical, so replacing a device does not affect its access to the existing network.

Therefore, when you have many wired devices that need network access, RJ45 cabling is your best choice; if you only have laptops and cell phones, deploying Wi-Fi alone is also fine.

So how should we design and build the home network, and what equipment do we need?


Modem

As the first device in the home, the modem (for fiber access, a GPON ONT) modulates the Ethernet digital signal from the intranet into a baseband or optical signal. According to the type of modulated signal, modems are divided into electrical modems and optical modems. With the popularity of optical fiber, more and more electrical modems are being replaced by optical modems.




An optical modem usually has one fiber interface and two to four Ethernet ports. The fiber interface connects to the ISP's fiber, and the Ethernet ports connect to devices on the intranet. If the optical modem only performs signal modulation and demodulation, it usually has just one Ethernet interface. If it also takes on other roles, such as routing, it will have more Ethernet interfaces for device access.

For example, the Huawei HG8245H integrates the functions of an optical modem, a router, and a wireless router, among others.


Router

The router connects different networks and forwards data between them. In a home network, it also performs network address translation (NAT).


Switch

Switches are not used much in home networks and are usually only used when there are many wired access devices. For example, in addition to wired PCs, there are NAS, servers, TVs, cameras, etc.


Network cable

The network cable connects every wired device and, once laid during renovation, will not be changed. Therefore, choosing the cable type and planning the access points is especially important.

At present, the mainstream choice is CAT 6A UTP cable. Compared with CAT 6A STP cable, UTP does not require grounding, so the installation requirements are simpler.



 

CAT 6A vs CAT 7

I recommend CAT 6A over CAT 7 for the following reasons:

  1. CAT 6A UTP deployment is simpler.

    CAT 6A comes in UTP and STP variants; over the short distances of a home network the difference between them is small, whereas CAT 7 is always shielded. In terms of diameter, CAT 6A UTP is typically about 6.2 mm, while CAT 7 reaches about 7.8 mm and carries a braided shielding mesh. CAT 7 is therefore much stiffer than CAT 6A UTP and harder to work with when pulling cable later.

  2. Crimping RJ45 connectors on CAT 6A UTP is simpler.

    To achieve high speeds, CAT 7 adds a shielding layer between the PVC jacket and the cable pairs and requires that shield to be grounded, which makes terminating RJ45 connectors more demanding.

  3. CAT 6A is cheaper

    Compared with CAT 7, CAT 6A costs about half as much per meter, so the overall cost is much lower.


Home network build

A common home network networking structure is shown in the following figure.


(Figure: home network design)


In the example above, the switch is both the hub for device connectivity and a PoE switch that powers the AP.

After the network planning is complete, all that remains is to pull the cables.




Low-voltage PVC conduits were laid earlier during the renovation; to pull cables through them we can use some auxiliary threading tools.




After the cables are pulled, the next step is to terminate the RJ45 connectors.

Currently, there are two RJ45 wiring standards, T568A and T568B. Since modern equipment can automatically adapt to either, you can choose whichever you like when terminating connectors.
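
For reference, the two standards differ only in that the orange and green pairs swap positions. A small sketch listing both pinouts (standard pin assignments, not specific to any product here):

# Pin-to-pair assignments for the two RJ45 wiring standards.
# T568A and T568B differ only in the positions of the orange and green pairs,
# so a cable terminated with the same standard on both ends is straight-through.

T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]

T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

for pin, (a, b) in enumerate(zip(T568A, T568B), start=1):
    marker = "" if a == b else "  <- pairs swapped"
    print(f"Pin {pin}: T568A={a:13s} T568B={b:13s}{marker}")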






With the above steps, you are basically done with your home network deployment.

If there is anything missing, please add it in the comment section.

Wednesday, December 7, 2022

ONU failure to go online – Problems with ODN

This article is mainly about problems with the ODN and how we can fix them. This is the last article about problems with the ODN. In the previous post, I explained connected connectors of different types and connected fibers of different types.


As I said in the first article about problems with ODN, there are the following problems:

  1. Feeder/distribution/drop fibers are broken,

  2. Dirty optical connectors,

  3. Bent fibers,

  4. Problem with optical splitters,

  5. Connected connectors of different types,

  6. Connected fibers of different types,

  7. Bad splices,

  8. Incorrectly designed networks or incorrectly realized networks.



Today I will explain bad splices and incorrectly designed networks or incorrectly realized networks.


7. Bad splices


There are three ways of connecting optical fibers: fusion splices, mechanical splices, and optical connectors. I will briefly explain fusion and mechanical splices, what a bad splice is, and how to fix this problem.


We choose how to connect fibers based on low attenuation, low reflection, and high reliability, and also on the application and the price.


The most reliable and most commonly used way of connecting fibers is the fusion splice. It provides low attenuation, low reflection, and high reliability. It is used mainly in the feeder and distribution segments, but also in the drop part of the ODN, generally wherever the fiber does not need to be connected and disconnected.


For fusion splicing we use a special machine, the fusion splicer. In short, the fibers are prepared with certain tools (e.g. a cable slitter, tube cutter, and fiber stripper), cleaned with alcohol or another fiber-cleaning liquid and wipes, and then cut with an optical cleaver. After that, the fibers are joined by the fusion splicer. Finally, the machine checks the splice: it tests the strength of the joint and estimates the attenuation.


Mechanical splices are very rarely used because of their potentially high attenuation, high reflection, and low reliability. The index-matching gel ages over time and the parameters of the mechanical splice degrade. A mechanical splice is used when we do not have enough fusion splicers, when we splice rarely, or during an emergency intervention. Because they are used so little, I will not discuss them and the problems they can cause here; that will be one of the future topics.


Having briefly explained how fibers are connected, I will now cover the most common problems with fusion splices.


There are two main causes of bad splices. Dirty or insufficiently cleaned fibers misalign during splicing. The other is badly cleaved fibers: the fiber must be cut at an angle of 90 degrees, which is essential for a quality connection with low attenuation and low reflection.


Sometimes fusion splicers can make a bad splice. Therefore, the electrodes and software settings should be checked.


A good splice has an attenuation of about 0.01 dB, with 0.1 dB as the maximum. A bad splice may have much higher attenuation, typically from 1 to 5 dB. Such attenuation can break the connection between a Huawei GPON ONT and a GPON OLT such as the MA5800. We can see the problem immediately after the splice is completed and repeat the splicing. After the network is built, we must test all optical lines with an OTDR, which shows every abnormal event on the route.
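
To put those splice-loss numbers in context, here is a rough downstream power-budget sketch. All the values (transmit power, sensitivity, splitter and fiber losses) are typical assumptions for illustration, not measurements from any real link:

# Rough GPON downstream power-budget check (illustrative values only).
# A few bad splices at 1-5 dB each can push the received power below the
# ONT sensitivity, which is the failure described above.

tx_power_dbm        = 3.0    # assumed class B+ OLT transmit power
onu_sensitivity_dbm = -28.0  # assumed ONT receiver sensitivity

splitter_loss_db = 17.5                  # typical 1:32 splitter insertion loss
fiber_km, fiber_db_per_km = 10, 0.35     # assumed fiber length and attenuation
connector_losses_db = [0.3, 0.3]         # two connector pairs
splice_losses_db    = [0.05, 0.05, 2.5]  # two good splices and one bad splice

total_loss = (splitter_loss_db + fiber_km * fiber_db_per_km
              + sum(connector_losses_db) + sum(splice_losses_db))
rx_power = tx_power_dbm - total_loss
margin   = rx_power - onu_sensitivity_dbm

print(f"Total loss {total_loss:.2f} dB, received power {rx_power:.2f} dBm, margin {margin:.2f} dB")

Every additional bad splice eats directly into that margin, which is why the OTDR trace after construction matters.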


In the next two pictures, we can see the process of fusion splicing (figure 1.) and common cleave problems (figure 2.). In figure 2, there are three common cleave problems: lip, chip, and angle.




Figure 1. Process of fusion splicing

(https://imedea.uib-csic.es/~salvador/docencia/coms_optiques/addicional/ibm/ch06/06-04.html)




Figure 2. Common cleave problems

(https://www.fiberoptics4sale.com/blogs/archive-posts/95049286-fiber-optic-cleaver)


8. Incorrectly designed networks or incorrectly realized networks


Sometimes an error is made during design or construction and an inadequate optical splitter is installed. As a result, we do not have the optimal number of fiber divisions, and the optical power may be higher or, more often, lower than required. For example, instead of splitting the fiber 32 ways, it is split 16 or 64 ways.


We can locate this problem with an OTDR or a PON power meter. Incorrect optical power can prevent ONUs from connecting to the OLT. The problem is fixed by replacing the inadequate optical splitter.
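
As a rough check of how the split ratio changes the power budget, the ideal splitter loss grows by about 3 dB every time the ratio doubles. The excess-loss figure below is an assumption; real splitters vary:

import math

# Approximate insertion loss of a PON optical splitter.
# Ideal splitting loss is 10*log10(N); real splitters add some excess loss
# (roughly 1-3 dB depending on the ratio; 2 dB is assumed here).

def splitter_loss_db(ratio: int, excess_db: float = 2.0) -> float:
    return 10 * math.log10(ratio) + excess_db

for ratio in (16, 32, 64):
    print(f"1:{ratio} splitter ~ {splitter_loss_db(ratio):.1f} dB")

So installing a 1:64 splitter where the design assumed 1:32 adds roughly 3 dB of extra loss, which may be enough to take some ONUs offline.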


Tuesday, October 18, 2022

MA5800 - Understanding Whether the CCU Allows the Door Control to Be Remotely Managed

Issue Description

Huawei MA5800 - We need to understand whether the CCU allows the door control to be remotely managed, meaning that we can (from a remote location) release the door lock of outdoor cabinets to give on-site personnel access, as opposed to only using electronic door cards.

We wish to understand if this is supported since we do not see this referenced in the documentation for the CCU.


Handling Process

A cabinet security system mainly includes an electronic door lock, a door status sensor, an infrared sensor, and an audio alarm device. Cabinet security includes electronic door lock management and audio alarm reporting. 

  •  Electronic door lock management 

− Access of at most three electronic compartment door locks 

− Remote unlocking on the NMS and near-end unlocking using a card. 

− Storage of at most 1000 permission records with two permission setting modes (by start/end time and by time period). 

− Support for a temporary access card. During deployment, if several locks to be managed are configured, a temporary access card with a fixed ID can be used to unlock a door. Once permissions are set, the temporary card becomes invalid.

− Unlocking record retrieval. Unlocking records can be queried by engineer, site, compartment door, or time. A maximum of 50 unlocking records are supported. A complete unlocking record includes a door opening action and the corresponding door closing action.

− Door unlocking event (using a card) reporting and lock status change event reporting. These events can be exported from the NMS.

  • Audio alarm reporting 

Audio alarm reporting associates with unlocking information, a door status alarm, or infrared detection of a moving object to deter thefts onsite. 

− Normal door unlocking using a card does not trigger an audio alarm. If a door stays open beyond a specified period, an alarm is reported. This period is configurable and defaults to 30 minutes.

− There is an association among a door lock, its door status sensor, and its infrared sensor. This association does not take effect on a door lock whose communication is abnormal.  

− To trigger an infrared sensor alarm or door status alarm, enable the related door status sensor and infrared sensor alarms. If they are not enabled, the related sensors are not associated with a door lock. 

 − Multiple locks are associated with one audio alarming device. If any compartment door meets alarm conditions, an audio alarm is reported. If any compartment door is unlocked using a card, the reported audio alarm is immediately ended.  

− Associating an infrared alarm with an audio alarm aims to prevent a compartment door from being forcibly damaged. For example, if an unauthorized person attempts to cut a hole in a compartment door for a theft, the infrared sensor detects the moving person and produces an infrared alarm. At this point the door is still locked, which meets the audio alarm conditions. In this way, a three-dimensional anti-theft scenario is constructed.

− To prevent lasting alarm voice interference when audio alarm conditions are met, the audio alarm duration and muting duration can be set so that an audio alarm sounds for a period and is then muted for a period.

Wednesday, September 14, 2022

What Is QoS?

QoS improves network resource utilization and allows different types of traffic to compete for network resources based on their priorities, so that voice, video, and important data applications are preferentially processed on network devices.

Importance of QoS

Services on the IP network can be classified into real-time and non-real-time services. Real-time services, such as voice services, occupy fixed bandwidth and are sensitive to network quality changes. Therefore, they have high requirements on network stability. The bandwidth occupied by non-real-time services is unpredictable, and burst traffic often occurs. Burst traffic will deteriorate network quality, cause network congestion, increase the forwarding delay, and even cause packet loss. As a result, service quality deteriorates or even services become unavailable.

Increasing network bandwidth is the best solution, but is costly compared to using a service quality guarantee policy that manages traffic congestion.

QoS is applicable to scenarios where traffic bursts occur and the quality of important services needs to be guaranteed. If service quality requirements are not met for a long time (for example, the service traffic volume exceeds the bandwidth limit for a long time), expand the network capacity or use dedicated devices to control services based on upper-layer applications.

In recent years, traffic of video applications has grown explosively. For enterprises, applications such as HD video conference and HD video surveillance also generate a large amount of HD video traffic on the network. Video traffic occupies more bandwidth than voice traffic. Especially, interactive video applications have high requirements on real-time performance. In addition, with the development of wireless networks, more and more users and enterprises use wireless terminals. The mobility of wireless terminals results in more unpredictable traffic on the network. Therefore, the QoS solution design faces more challenges.

QoS Counters

The network quality is affected by the bandwidth of the transmission link, delay and jitter of packet transmission, as well as packet loss rate, which are known as key QoS counters.

Bandwidth

Bandwidth, also called throughput, refers to the maximum number of data bits transmitted between two ends within a specified period (1 second) or the average rate at which specific data flows are transmitted between two network nodes. Bandwidth is expressed in bit/s. There are two common concepts related to bandwidth: uplink rate and downlink rate. The uplink rate refers to the rate at which users send information to a network, and the downlink rate refers to the rate at which a network sends information to users. For example, the rate at which users upload files to a network is determined by the uplink rate, and the rate at which users download files is determined by the downlink rate.
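
As a quick worked example of what a given downlink rate means in practice (the file size and rate below are assumed purely for illustration):

# Rough transfer-time estimate: bandwidth is quoted in bits per second,
# while file sizes are usually given in bytes.

downlink_mbps = 100          # assumed downlink rate in Mbit/s
file_size_gb  = 1            # assumed file size in gigabytes (10^9 bytes)

seconds = (file_size_gb * 8e9) / (downlink_mbps * 1e6)
print(f"A {file_size_gb} GB download at {downlink_mbps} Mbit/s takes about {seconds:.0f} s")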

Delay

Delay refers to the time required to transmit a packet or a group of packets from the transmit end to the receive end. It consists of the transmission delay and processing delay. Voice transmission is used as an example. A delay refers to the period from when words are spoken to when they are heard. Generally, people are insensitive to a delay of less than 100 ms. If a delay is in the range of 100 ms to 300 ms, both parties of the call can sense slight pauses in the peer party's reply, which may seem annoying to both. If a delay is longer than 300 ms, both the speaker and responder obviously sense the delay and have to wait for responses. If the speaker cannot wait and repeats what has been said, voices overlap and the quality of the conversation deteriorates severely.

Jitter

If network congestion occurs, the delays of packets over the same connection are different. The jitter is used to describe the degree of delay change, that is, the time difference between the maximum delay and the minimum delay. Jitter is an important parameter for real-time transmission, especially for real-time services, such as voice and video, which are intolerant to the jitter because the jitter will cause voice or video interruptions. The jitter also affects protocol packet transmission. Some protocols send interactive packets at a fixed interval. If the jitter is too large, protocol flapping occurs. Jitter is prevalent on networks but generally does not affect service quality if it does not exceed a specific tolerance. The buffer can overcome the excessive jitter, but it will increase the delay.

Packet Loss Rate

The packet loss rate refers to the percentage of the number of packets lost during data transmission to the total number of packets sent. Slight packet loss does not affect services. For example, users are unaware of the loss of a bit or a packet in voice transmission. The loss of a bit or a packet in video transmission may cause the image on the screen to become garbled instantly, but the image can be restored quickly. TCP can be used to transmit data to handle slight packet loss because TCP allows the lost packets to be retransmitted. If severe packet loss does occur, the packet transmission efficiency is affected. QoS focuses on the packet loss rate. The packet loss rate must be controlled within a certain range during transmission.
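
A minimal sketch of how these three counters could be computed from per-packet measurements. The delay samples and packet count are made up for illustration; jitter here follows the definition above (maximum delay minus minimum delay):

# Compute average one-way delay, jitter, and packet loss rate
# from per-packet measurements.

sent_packets = 10
# One-way delays in milliseconds for the packets that arrived (assumed samples).
delays_ms = [21.0, 23.5, 20.8, 45.2, 22.1, 24.0, 21.7, 22.9]

avg_delay = sum(delays_ms) / len(delays_ms)
jitter    = max(delays_ms) - min(delays_ms)
loss_rate = (sent_packets - len(delays_ms)) / sent_packets * 100

print(f"Average delay: {avg_delay:.1f} ms")
print(f"Jitter: {jitter:.1f} ms")
print(f"Packet loss rate: {loss_rate:.0f}%")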

Application Scenarios of QoS

Take enterprise office as an example. In addition to the basic web browsing and email services, services such as Telnet-based device login, remote video conferences, real-time voice calls, FTP file upload and download, and video playback must also have their network quality guaranteed during busy hours. If services have varying network quality requirements, you can configure corresponding QoS functions or enable QoS only for some services to meet the requirements.


  • Network protocols and management protocols: such as OSPF and Telnet

    These services require low delay and low packet loss rate, but do not require high bandwidth. To meet the requirements of such services, configure priority mapping to map the priority of the service packets into a higher CoS value so that the network device can preferentially forward the packets.

  • Real-time services: such as video conference and VoIP

    Video conferences require high bandwidth, low delay, and low jitter. To meet the requirements of such services, configure traffic policing to provide high bandwidth for video packets and priority mapping to increase the priority of video packets.

    VoIP refers to the real-time voice call over the IP network. It requires low packet loss, delay, and jitter. If these requirements cannot be met, both parties of a call will suffer from poor call quality. To resolve this problem, configure priority mapping so that voice packets take precedence over video packets and configure traffic policing to provide the maximum bandwidth for voice packets. This ensures that voice packets are preferentially forwarded in the case of network congestion.

  • Heavy-traffic services: such as FTP, database backup, and file dump

    Heavy-traffic services refer to network services in which a large amount of data is transmitted for a long time. Such services require a low packet loss rate. To meet the requirements of such services, configure traffic shaping to cache the service packets sent from an interface in the data buffer. This reduces packet loss upon congestion caused by burst traffic.

  • Streaming media: such as online audio playback and video on demand (VOD)

    Users can cache audio and video programs before playing them, reducing requirements on the network delay, packet loss, and jitter. To reduce the packet loss rate and delay of these services, configure priority mapping to increase the priority of the service packets.

  • Common services: such as HTML web page browsing and email

    Common services have no special requirements on the network. You do not need to deploy QoS for them.

Service Models

How are QoS indicators defined within proper ranges to improve network service quality? The answer lies in the QoS model. QoS is an overall solution, instead of being merely a single function. When two hosts on a network communicate with each other, traffic between them may traverse a large number of devices. QoS can guarantee E2E service quality only when all devices on the network use a unified QoS service model.

The following describes the three mainstream QoS models. Huawei switches, routers, firewalls, and WLAN devices support QoS based on Differentiated Services (DiffServ), which is the most commonly used.

Best-Effort

Best-Effort is the default service model for the Internet and applies to various network applications, such as FTP and email. It is the simplest service model, in which an application can send any number of packets at any time without notifying the network. The network then makes the best effort to transmit the packets but provides no guarantee of performance in terms of delay and reliability. The Best-Effort model is suitable for services that have low requirements for delay and packet loss rate.

Integrated Service (IntServ)

In the IntServ model, an application uses a signaling protocol to notify the network of its traffic parameters and apply for a specific level of QoS before sending packets. The network reserves resources for the application based on the traffic parameters. After the application receives an acknowledgement message and confirms that sufficient resources have been reserved, it starts to send packets within the range specified by the traffic parameters. The network maintains a state for each packet flow and performs QoS behaviors based on this state to guarantee application performance.

The IntServ model uses Resource Reservation Protocol (RSVP) for signaling. Resources such as bandwidth and priority are reserved on a known path, and each network element along the path must reserve required resources for data flows requiring QoS guarantee. Each network element checks whether sufficient resources can be reserved based on these RSVP messages. The path is available only when all involved network elements can provide sufficient resources.

DiffServ

DiffServ classifies packets on a network into multiple classes and provides differentiated processing for each class. In this way, when congestion occurs, classes with a higher priority are given preference. Packets of the same class are aggregated and sent as a whole to ensure the same delay, jitter, and packet loss rate.

In the DiffServ model, traffic classification and aggregation are completed on border nodes. A border node flexibly classifies packets based on a combination of different fields (such as the source and destination addresses, priority in the ToS field, and protocol type), and marks different classes of packets with appropriate priority values. Other nodes only need to identify these markings to allocate resources and control traffic.

Unlike the IntServ model, the DiffServ model does not require a signaling protocol. In this model, an application does not need to apply for network resources before sending packets. Instead, the application sets QoS parameters in the packets, through which the network can learn the QoS requirements of the application. The network provides differentiated services based on the QoS parameters of each data flow and does not need to maintain a state for each data flow. DiffServ takes full advantage of IP networks' flexibility and extensibility and transforms information in packets into per-hop behaviors (PHBs), greatly reducing signaling operations.
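
For reference, on an IP network the QoS parameter DiffServ reads is the DSCP value, which occupies the upper six bits of the ToS/Traffic Class byte. A small sketch of a few standard per-hop behaviours and how their markings map onto the byte:

# DSCP occupies the upper 6 bits of the IP ToS byte (ToS = DSCP << 2).
# A few standard per-hop behaviours and their DSCP values.

PHB_DSCP = {
    "BE (best effort)": 0,
    "AF11": 10,
    "AF21": 18,
    "AF31": 26,
    "AF41": 34,
    "EF (expedited forwarding)": 46,
    "CS6 (network control)": 48,
}

for phb, dscp in PHB_DSCP.items():
    tos = dscp << 2
    print(f"{phb:27s} DSCP={dscp:2d}  ToS byte=0x{tos:02x}")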

Mechanisms in the DiffServ Model

The DiffServ model involves the following QoS mechanisms:

  • Traffic classification and marking

    Traffic classification and marking are prerequisites for implementing differentiated services. Traffic classification divides packets into different classes, and can be implemented using traffic classifiers configured using Modular QoS Command-Line Interface (MQC). Traffic marking sets different priorities for packets and can be implemented through priority mapping and re-marking. Packets carry different types of precedence field depending on the network type. For example, packets carry the 802.1p field in a VLAN network, the EXP field on an MPLS network, and the DSCP field on an IP network.

  • Traffic policing, traffic shaping, and interface-based rate limiting

    Traffic policing and traffic shaping control the traffic rate within a bandwidth limit. Traffic policing drops excess traffic when the traffic rate exceeds the limit, whereas traffic shaping buffers excess traffic. Traffic policing and traffic shaping can be performed on an interface to implement interface-based rate limiting.

  • Congestion management and congestion avoidance

    Congestion management buffers packets in queues upon network congestion and determines the forwarding order using a specific scheduling algorithm. Congestion avoidance monitors network resource usage and drops packets to mitigate network overloading when congestion worsens.

Traffic classification and marking are the basis of differentiated services. Traffic policing, traffic shaping, interface-based rate limiting, congestion management, and congestion avoidance control network traffic and resource allocation to implement differentiated services.
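
To illustrate the difference between policing and shaping described above, here is a simplified single-rate token-bucket policer. The rate and burst values are arbitrary assumptions; a shaper would buffer non-conforming packets in a queue instead of dropping them:

import time

# Simplified single-rate token-bucket policer: packets that find enough
# tokens in the bucket conform and are forwarded; excess packets are dropped.
# (A shaper would buffer them instead.)

class TokenBucketPolicer:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8      # token fill rate in bytes per second
        self.capacity = burst_bytes   # bucket depth (committed burst size)
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True               # conforming: forward the packet
        return False                  # non-conforming: drop (police)

policer = TokenBucketPolicer(rate_bps=1_000_000, burst_bytes=4_000)
for i in range(5):
    print(f"packet {i}: {'pass' if policer.allow(1500) else 'drop'}")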

The following figure shows the QoS service process on network devices.


QoS vs. HQoS

Traditional QoS technologies can provide differentiated services to meet requirements of voice, video, and data services. As the number of users and their individual bandwidth consumptions continue to grow, these technologies are facing new problems, including:
  • Traffic is scheduled based on interface bandwidth, allowing differentiation of traffic based on service levels. However, it is difficult to differentiate services based on users. Therefore, traditional QoS is typically applied to the core layer, instead of the access layer.

  • Traditional QoS cannot manage or schedule traffic of multiple services for multiple users simultaneously.

Hierarchical Quality of Service (HQoS) has been introduced to address these issues by differentiating traffic of different users and scheduling traffic based on service priorities. HQoS uses multiple levels of queues to further differentiate service traffic, and provides uniform management and hierarchical scheduling for transmission objects such as users and services. It enables network devices to control internal resources, providing QoS guarantee for VIP users while reducing network construction costs.


Thursday, September 8, 2022

Introduction to Portal Authentication

802.1X and PPPoE access control methods require dedicated client software to be installed and are effective only at the access layer, which does not facilitate network deployment and user access. To solve this problem, an access control mode, which does not require dedicated client software and allows authentication control points to be flexibly deployed, is required.

Portal authentication is developed in this context. It does not require dedicated clients, providing a flexible access control mode. Access control can be implemented at the access layer and the ingress of key data to be protected. Portal authentication is also called web authentication because it uses popular web pages for authentication, which means that users can be authenticated using only a web browser.

MAC address-prioritized portal authentication can be used to spare users from repeatedly entering their account and password for reauthentication when they roam or go offline and then online again in various scenarios.

In MAC address-prioritized portal authentication, the access device sends the MAC address of a terminal to the RADIUS server for authentication when the terminal performs portal authentication for the first time. If the authentication fails, portal authentication is triggered for the user so that the user can enter the user name and password for identity authentication. The RADIUS server caches a terminal user's MAC address after the first authentication succeeds. If the terminal user is disconnected and then connected to the network within the MAC address validity period, the RADIUS server searches for the MAC address of the terminal user in the cache to authenticate the terminal user. After the authentication succeeds, the portal authentication page is not pushed to the user, and the user can directly access network resources.
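
A conceptual sketch of the decision flow described above. The names, cache structure, and validity period are illustrative only and do not reflect the actual device or iMaster NCE-Campus implementation:

import time

# Conceptual sketch of MAC address-prioritized portal authentication.
# The cache, timeout, and helper names are assumptions for illustration.

MAC_VALIDITY_SECONDS = 3600            # assumed MAC address validity period
mac_cache = {}                         # MAC -> timestamp of last successful auth

def radius_check(username: str, password: str) -> bool:
    # Placeholder: a real deployment would query the RADIUS server here.
    return bool(username and password)

def authenticate(mac: str, ask_credentials) -> bool:
    """Try MAC authentication first; fall back to the portal login page."""
    last_ok = mac_cache.get(mac)
    if last_ok is not None and time.time() - last_ok < MAC_VALIDITY_SECONDS:
        return True                    # MAC still valid: no portal page pushed
    # MAC authentication failed or expired: push the portal page.
    username, password = ask_credentials()
    if radius_check(username, password):
        mac_cache[mac] = time.time()   # cache the MAC for next time
        return True
    return False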

  • Portal authentication takes effect based on physical ports. If a user connected to a port passes the authentication, the user can access network resources through the port. If a user fails to pass the authentication, the user cannot access network resources.
  • Currently, ports that are enabled with portal authentication support only network resource access through HTTP, and do not support other services (such as connected printers, IPC services, AP services, and dumb terminals). If other services need to be supported, use other ports for service isolation.
  • Currently, only the portal protocol 2.0 is supported.

HTTP has security risks due to its limitations. Ensure that HTTP is used in a secure environment.

A portal authentication system consists of authentication clients, an access device (for example, a Huawei MA5800 OLT or MA5600T GPON OLT), a portal server, and a RADIUS server. The portal server and RADIUS server are built into iMaster NCE-Campus, as shown in Figure 1.

Figure 1 Portal authentication system


  • Authentication client: A browser that runs the HTTP protocol or a host that runs the portal client software.
  • Access device (OLT Huawei MA5800 X7 for example):
    • Redirects all HTTP requests of a user to the portal server before authentication.
    • Interacts with the portal server and RADIUS server to implement identity authentication.
    • Allows the user to access authorized network resources after the authentication succeeds.
  • Portal server: Receives authentication requests from a portal client, provides portal services and authentication web pages, and exchanges authentication information of the authentication client with the access device.
  • RADIUS server: Interacts with the access device to authenticate users.

Tuesday, August 16, 2022

How do I prevent rogue devices from connecting to my Huawei ONT's Wi-Fi?

Huawei EchoLife ONTs include the HG8010, HG8010H, HG8045, HG8045A, HG8240, HG8240H, HG8240T, HG8240W, HG8245, HG8245H, HG8245Q, HG8245T, HG8247, and HG8247H, among others.

The HG8010, HG8010H, HG8240, HG8240H, HG8240T, and HG8240W are bridging ONTs and do not have the Wi-Fi function. You need to perform the configuration on the connected router; consult the router vendor or the carrier service hotline.

For Huawei ONTs with Wi-Fi functions, such as HG8045, HG8045A, HG8245, HG8245H, HG8245Q, HG8245T, HG8247 and HG8247H, perform the following steps:

  1. Exercise caution with Wi-Fi-sharing software that accesses Wi-Fi networks without permission. Such software uploads a user's Wi-Fi network information without notification; another user can then find this Wi-Fi network name, obtain the Wi-Fi key from the software vendor's server through a 3G or 4G network, and access this Wi-Fi network without permission.
  2. Check whether encryption is enabled for the Wi-Fi network. Specifically, log in to the ONT web page and, on the main menu, choose WLAN > 2.4G/5G Basic Network Settings. For WPA PreSharedKey, do not select Hide.
  3. Check whether the Wi-Fi password is so simple that it could be cracked. You are advised to set Authentication Mode to WPA-PSK/WPA2-PSK, and to configure and record a password with a high security level, for example one containing digits, uppercase letters, lowercase letters, and special characters. This prevents password cracking (a small key-generator sketch is included at the end of this post).

    The following figure shows the details.




  4. By configuring MAC address filtering, you can allow only permitted devices (whitelist) to access the network, and prevent rogue devices (blacklist) from accessing the network.

Note: For some ONTs, you can configure the Wi-Fi password by choosing Advanced Configuration > WLAN > 2.4G/5G Basic Network Settings on the ONT web page.
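
As a small aid for step 3, here is a sketch that generates a random key containing digits, uppercase letters, lowercase letters, and special characters. The length and the special-character set are assumptions; adjust them to your own policy:

import secrets
import string

# Generate a random Wi-Fi key containing digits, uppercase and lowercase
# letters, and special characters, as recommended in step 3 above.

def generate_wifi_key(length: int = 16) -> str:
    pools = [string.ascii_lowercase, string.ascii_uppercase,
             string.digits, "!@#$%^&*"]
    # Guarantee at least one character from each category.
    key = [secrets.choice(pool) for pool in pools]
    all_chars = "".join(pools)
    key += [secrets.choice(all_chars) for _ in range(length - len(pools))]
    secrets.SystemRandom().shuffle(key)
    return "".join(key)

print(generate_wifi_key())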