WO2020091434A1 - Procédé et dispositif pour effectuer une authentification à l'aide d'informations biométriques dans un système de communication sans fil - Google Patents

Info

Publication number
WO2020091434A1
Authority
WO
WIPO (PCT)
Prior art keywords
authentication
network
challenge code
biometric information
public key
Prior art date
Application number
PCT/KR2019/014525
Other languages
English (en)
Korean (ko)
Inventor
Junwoong Kim (김준웅)
Original Assignee
LG Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc.
Publication of WO2020091434A1 publication Critical patent/WO2020091434A1/fr

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40: Network security protocols
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W12/00: Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/06: Authentication

Definitions

  • The present invention relates to a wireless communication system, and more particularly, to a method and apparatus for authentication between a user equipment (UE) and a network authentication server using biometric information.
  • Mobile communication systems were developed to provide voice services while ensuring user mobility.
  • Mobile communication systems have since expanded beyond voice to data services, and today the explosive growth of traffic is causing a shortage of resources while users demand higher-speed services, so a more advanced mobile communication system is required.
  • To this end, various technologies such as massive Multiple Input Multiple Output (MIMO), Non-Orthogonal Multiple Access (NOMA), super wideband support, and device networking have been studied.
  • An object of the present invention is to propose a method for a terminal to perform authentication using biometric information in a wireless communication system.
  • Another object of the present invention is to propose a method for performing authentication using biometric information through a network authentication server in a wireless communication system.
  • According to an aspect of the present invention, a method for a user equipment (UE) to perform authentication using biometric information in a wireless communication system comprises: receiving a challenge code for registration from a network authentication server; receiving a first public key from the network authentication server; generating a second public key and a second private key based on obtained first biometric information; encrypting a challenge code for verification and the second public key with the first public key; transmitting the encrypted challenge code for verification and the encrypted second public key to the network authentication server; and receiving a biometric authentication registration result message indicating that the second public key is stored in the network authentication server.
  • When the challenge code for registration and the challenge code for verification are the same, the biometric authentication registration result message may indicate that the second public key is stored in the network authentication server.
  • The method may further comprise: requesting a subscriber authentication procedure from the network authentication server; receiving a challenge code for authentication from the network authentication server; generating a third private key based on second biometric information; signing the challenge code for authentication using the third private key; and transmitting the signed challenge code for authentication to the network authentication server.
  • According to another aspect of the present invention, a method for a network authentication server to perform authentication using biometric information in a wireless communication system comprises: transmitting a challenge code for registration to a terminal; generating a first public key and a first private key; transmitting the first public key to the terminal; receiving, from the terminal, a second public key and a challenge code for verification encrypted with the first public key; decrypting the encrypted second public key and challenge code for verification with the first private key; and, when the challenge code for verification and the challenge code for registration are the same, storing the second public key and transmitting to the terminal a biometric authentication registration result message indicating that the second public key is stored in the network authentication server, wherein the second public key may be generated based on first biometric information.
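The registration exchange described above can be sketched end to end in a toy simulation. Everything concrete here is an illustrative assumption, not the actual scheme in this disclosure: the small Schnorr-style group, deriving the second key pair by hashing a biometric template, and using ElGamal encryption for the challenge code and the second public key.

```python
# Toy simulation of the registration exchange (assumed primitives; toy-size group).
import hashlib
import secrets

P = 2039   # small safe prime (illustration only; real systems use standard groups)
Q = 1019   # prime order of the subgroup, P = 2*Q + 1
G = 4      # generator of the order-Q subgroup

def keygen_random():
    """Server side: generate the first key pair."""
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def keygen_from_biometric(template: bytes):
    """UE side: derive the second key pair deterministically from biometric data."""
    x = int.from_bytes(hashlib.sha256(template).digest(), "big") % (Q - 1) + 1
    return x, pow(G, x, P)

def elgamal_encrypt(pub: int, m: int):
    k = secrets.randbelow(Q - 1) + 1
    return pow(G, k, P), (m * pow(pub, k, P)) % P

def elgamal_decrypt(priv: int, c1: int, c2: int):
    return (c2 * pow(c1, P - 1 - priv, P)) % P

# --- Registration flow ---
reg_challenge = secrets.randbelow(P - 1) + 1               # server -> UE
x1, pub1 = keygen_random()                                 # server's first key pair

x2, pub2 = keygen_from_biometric(b"fingerprint-template")  # UE's second key pair
enc_challenge = elgamal_encrypt(pub1, reg_challenge)       # UE -> server
enc_pub2 = elgamal_encrypt(pub1, pub2)                     # UE -> server

verify_challenge = elgamal_decrypt(x1, *enc_challenge)     # server decrypts
received_pub2 = elgamal_decrypt(x1, *enc_pub2)

registered = (verify_challenge == reg_challenge)           # challenge echo check
if registered:
    stored_pub2 = received_pub2                            # server stores pub2
print("registration result:", registered)
```

Note the echo check: the server stores the second public key only when the decrypted verification challenge matches the registration challenge it issued, which binds the biometric-derived key to this session.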
  • The signature may include a timestamp and/or a serving network name, and the method may further comprise determining whether the challenge code for authentication is valid by verifying the timestamp and/or the serving network name.
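The authentication step can be sketched in the same toy setting: the UE re-derives its private key from a fresh biometric sample, signs the authentication challenge together with a timestamp and serving network name, and the server verifies the signature against the stored public key before checking freshness. The Schnorr-style signature, group parameters, 60-second window, and network-name format are all illustrative assumptions.

```python
# Toy sketch of the authentication step with a Schnorr-style signature (assumed).
import hashlib
import secrets
import time

P, Q, G = 2039, 1019, 4   # toy group: P = 2*Q + 1, G generates the order-Q subgroup

def key_from_biometric(template: bytes):
    x = int.from_bytes(hashlib.sha256(template).digest(), "big") % (Q - 1) + 1
    return x, pow(G, x, P)

def _h(*parts) -> int:
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def sign(x: int, *msg):
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)
    e = _h(r, *msg)
    return e, (k + x * e) % Q

def verify(pub: int, sig, *msg) -> bool:
    e, s = sig
    r = (pow(G, s, P) * pow(pub, Q - e, P)) % P   # G^s * pub^(-e)
    return _h(r, *msg) == e

# Registration left pub2 stored at the server; here we re-derive both sides.
x2, stored_pub2 = key_from_biometric(b"fingerprint-template")

auth_challenge = secrets.randbelow(Q)                 # server -> UE
timestamp = int(time.time())
serving_network = "5G:mnc012.mcc345.3gppnetwork.org"  # hypothetical name

sig = sign(x2, auth_challenge, timestamp, serving_network)   # UE -> server

# Server side: check the signature, then timestamp / serving-network validity.
sig_ok = verify(stored_pub2, sig, auth_challenge, timestamp, serving_network)
fresh = abs(int(time.time()) - timestamp) < 60 and serving_network.startswith("5G:")
print("authentication result:", sig_ok and fresh)
```

Because the private key is re-derived from the biometric sample each time, no long-term secret needs to persist on the UE; the server keeps only the public key registered earlier.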
  • The method may further comprise receiving third biometric information and generating a third master key based on the third biometric information, and a subscriber authentication procedure may be performed using the third master key.
  • The method may further include generating a second master key based on data received from the terminal.
  • A result message indicating whether the challenge code for verification and the challenge code for registration are the same may be delivered to the terminal, and the result message may include a key index and/or a separate identifier (ID).
  • According to another aspect of the present invention, a user equipment (UE) for performing authentication using biometric information in a wireless communication system includes: a communication module; a memory; and a processor that controls the communication module and the memory, wherein the processor receives a challenge code for registration from a network authentication server through the communication module, receives a first public key from the network authentication server, generates a second public key and a second private key based on obtained first biometric information, encrypts a challenge code for verification and the second public key with the first public key, transmits them to the network authentication server through the communication module, and receives a biometric authentication registration result message indicating that the second public key is stored in the network authentication server.
  • When the challenge code for registration and the challenge code for verification are the same, the biometric authentication registration result message may indicate that the second public key is stored in the network authentication server.
  • The processor may request a subscriber authentication procedure from the network authentication server through the communication module, receive a challenge code for authentication, generate a third private key based on second biometric information, sign the challenge code for authentication using the third private key, and transmit the signed challenge code for authentication to the network authentication server through the communication module.
  • The signature may include a timestamp and/or a serving network name, and the processor may determine whether the challenge code for authentication is valid by verifying the timestamp and/or the serving network name.
  • According to an embodiment of the present invention, a terminal in a wireless communication system can effectively perform authentication using biometric information.
  • According to an embodiment of the present invention, authentication can be performed using biometric information through a network authentication server in a wireless communication system.
  • FIG. 1 shows an AI device according to an embodiment of the present invention.
  • FIG. 2 shows an AI server according to an embodiment of the present invention.
  • FIG. 3 shows an AI system according to an embodiment of the present invention.
  • FIG. 12 illustrates an NG-RAN architecture to which the present invention can be applied.
  • FIG. 13 is a diagram illustrating a radio protocol stack in a wireless communication system to which the present invention can be applied.
  • FIG. 16 is an example of an authentication procedure in 5G Authentication and Key Agreement (5G-AKA) to which the present invention can be applied.
  • FIG. 17 is an example of a prior verification and registration procedure for authentication through biometric information of a subscriber that can be applied in the present invention.
  • FIG. 18 illustrates a block diagram of a communication device according to an embodiment of the present invention.
  • FIG. 19 illustrates a block diagram of a communication device according to an embodiment of the present invention.
  • A base station has the meaning of a terminal node of a network that communicates directly with a terminal. Certain operations described in this document as being performed by a base station may, in some cases, be performed by an upper node of the base station. That is, it is apparent that, in a network composed of a plurality of network nodes including a base station, various operations performed for communication with a terminal can be performed by the base station or by network nodes other than the base station.
  • The term 'base station (BS)' may be replaced by terms such as fixed station, Node B, evolved NodeB (eNB), base transceiver system (BTS), or access point (AP).
  • The 'terminal' may be fixed or mobile, and may be replaced with terms such as User Equipment (UE), Mobile Station (MS), User Terminal (UT), Mobile Subscriber Station (MSS), Subscriber Station (SS), Advanced Mobile Station (AMS), Wireless Terminal (WT), Machine-Type Communication (MTC) device, Machine-to-Machine (M2M) device, and Device-to-Device (D2D) device.
  • Downlink means communication from a base station to a terminal, and uplink means communication from a terminal to a base station.
  • In downlink, the transmitter may be part of the base station and the receiver may be part of the terminal.
  • In uplink, the transmitter may be part of the terminal and the receiver may be part of the base station.
  • Embodiments of the present invention may be supported by standard documents disclosed for at least one of the wireless access systems IEEE 802, 3GPP, and 3GPP2. That is, steps or parts not described among the embodiments of the present invention, in order to clearly reveal its technical idea, may be supported by those documents. In addition, all terms disclosed in this document may be explained by those standard documents.
  • Hereinafter, the 3GPP 5G (5th Generation) system is mainly described, but the technical features of the present invention are not limited thereto.
  • The three main requirement areas of 5G are: (1) the Enhanced Mobile Broadband (eMBB) area, (2) the Massive Machine Type Communication (mMTC) area, and (3) the Ultra-Reliable and Low Latency Communications (URLLC) area.
  • eMBB goes far beyond basic mobile Internet access and covers rich interactive work and media and entertainment applications in the cloud or in augmented reality.
  • Data is one of the key drivers of 5G, and in the 5G era we may, for the first time, see no dedicated voice service.
  • In 5G, voice is expected to be handled as an application simply using the data connection provided by the communication system.
  • The main causes of increased traffic volume are an increase in content size and an increase in the number of applications requiring high data rates.
  • Streaming services (audio and video), interactive video, and mobile Internet connections will become more widely used as more devices connect to the Internet. Many of these applications require always-on connectivity to push real-time information and notifications to users.
  • Cloud storage and applications are rapidly increasing on mobile communication platforms, and this can be applied to both work and entertainment.
  • Cloud storage is a particular use case driving the growth of uplink data rates.
  • 5G is also used for remote work in the cloud and, when a tactile interface is used, requires much lower end-to-end latency to maintain a good user experience.
  • Entertainment, for example cloud gaming and video streaming, is another key factor increasing demand for mobile broadband capability. Entertainment is essential on smartphones and tablets everywhere, including high-mobility environments such as trains, cars, and airplanes.
  • Another use case is augmented reality and information retrieval for entertainment. Here, augmented reality requires very low latency and instantaneous data volumes.
  • URLLC includes new services that will transform the industry through ultra-reliable, low-latency links, such as remote control of critical infrastructure and self-driving vehicles. The levels of reliability and latency are essential for smart grid control, industrial automation, robotics, and drone control and coordination.
  • 5G can complement fiber-to-the-home (FTTH) and cable-based broadband (or DOCSIS) as a means of providing streams rated from hundreds of megabits per second to gigabits per second. Such high speeds are required to deliver TV at resolutions of 4K and above (6K, 8K and higher) as well as virtual reality and augmented reality.
  • Virtual Reality (VR) and Augmented Reality (AR) applications include almost immersive sports events. Certain application programs may require special network settings. For VR games, for example, game companies may need to integrate the core server with the network operator's edge network server to minimize latency.
  • The automotive field is expected to be an important new driver for 5G, with many use cases for mobile communication to vehicles. For example, entertainment for passengers requires mobile broadband with simultaneously high capacity and high mobility, because future users will continue to expect high-quality connections regardless of their location and speed.
  • Another application example in the automotive field is the augmented reality dashboard. It identifies objects in the dark beyond what the driver sees through the front window, and superimposes and displays information telling the driver about the distance and movement of those objects.
  • In the future, wireless modules will enable communication between vehicles, information exchange between a vehicle and the supporting infrastructure, and information exchange between a vehicle and other connected devices (e.g., devices carried by pedestrians).
  • the safety system helps the driver to reduce the risk of accidents by guiding alternative courses of action to make driving safer.
  • the next step will be remote control or a self-driven vehicle.
  • This requires very reliable and very fast communication between different self-driving vehicles and between the vehicle and the infrastructure.
  • In the future, self-driving vehicles will perform all driving activities, and drivers will focus only on traffic anomalies that the vehicle itself cannot identify.
  • The technical requirements of self-driving vehicles call for ultra-low latency and ultra-high reliability to raise traffic safety to levels unattainable by humans.
  • Smart cities and smart homes will be embedded with high-density wireless sensor networks.
  • The distributed network of intelligent sensors will identify conditions for cost- and energy-efficient maintenance of a city or home. Similar settings can be made for each home.
  • Temperature sensors, window and heating controllers, burglar alarms, and consumer electronics are all connected wirelessly. Many of these sensors typically require a low data rate, low power, and low cost. However, real-time HD video may be required in certain types of devices for surveillance, for example.
  • The smart grid interconnects these sensors using digital information and communication technologies to collect information and act on it. This information can include the behavior of suppliers and consumers, so the smart grid can improve efficiency, reliability, economics, production sustainability, and the distribution of fuel such as electricity in an automated way.
  • the smart grid can be viewed as another sensor network with low latency.
  • the health sector has many applications that can benefit from mobile communications.
  • the communication system can support telemedicine that provides clinical care from a distance. This can help reduce barriers to distance and improve access to medical services that are not continuously available in remote rural areas. It is also used to save lives in critical care and emergency situations.
  • a wireless sensor network based on mobile communication can provide remote monitoring and sensors for parameters such as heart rate and blood pressure.
  • Wireless and mobile communications are becoming increasingly important in industrial applications. Wiring is expensive to install and maintain, so the possibility of replacing cables with reconfigurable wireless links is an attractive opportunity in many industries. Achieving this, however, requires that the wireless connection operate with cable-like latency, reliability, and capacity, and that its management be simplified. Low latency and very low error probability are new requirements that 5G needs to address.
  • Logistics and freight tracking are important use cases for mobile communications that enable the tracking of inventory and packages from anywhere using location-based information systems.
  • Logistics and freight tracking use cases typically require low data rates, but require wide range and reliable location information.
  • The present invention may be implemented by combining or modifying the embodiments to satisfy the above-described requirements of 5G.
  • Machine learning refers to the field that studies methodologies for defining and solving various problems dealt with in the field of artificial intelligence.
  • Machine learning is also defined as an algorithm that improves the performance of a task through steady experience.
  • An artificial neural network is a model used in machine learning, and may refer to an overall model with problem-solving ability, composed of artificial neurons (nodes) that form a network through synaptic connections.
  • The artificial neural network may be defined by the connection pattern between neurons of different layers, a learning process that updates model parameters, and an activation function that generates output values.
  • The artificial neural network may include an input layer, an output layer, and optionally one or more hidden layers. Each layer contains one or more neurons, and the artificial neural network may include synapses connecting neurons. In an artificial neural network, each neuron may output the function value of the activation function for the input signals, weights, and biases input through synapses.
  • Model parameters mean parameters determined through learning, and include the weights of synaptic connections and the biases of neurons.
  • Hyperparameters mean parameters that must be set before learning in a machine learning algorithm, and include the learning rate, the number of iterations, the mini-batch size, the initialization function, and the like.
  • The purpose of training an artificial neural network can be seen as determining the model parameters that minimize a loss function.
  • The loss function can be used as an index for determining optimal model parameters in the learning process of an artificial neural network.
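The definitions above (layers of neurons, synaptic weights and biases as model parameters, an activation function, and a loss function to minimize) can be made concrete with a minimal forward pass in plain Python; the network shape and numeric values here are arbitrary illustrations.

```python
# Minimal forward pass: weights and biases are the model parameters, the
# sigmoid is the activation function, and squared error is the loss function.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, layers):
    """Each layer is (weights, biases); weights[i][j] connects input j to neuron i."""
    a = x
    for weights, biases in layers:
        a = [sigmoid(sum(w * v for w, v in zip(row, a)) + b)
             for row, b in zip(weights, biases)]
    return a

def squared_error(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target))

# 2 inputs -> hidden layer of 2 neurons -> 1 output neuron (toy parameters)
layers = [
    ([[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2]),   # hidden layer
    ([[1.0, -1.0]], [0.0]),                     # output layer
]
pred = forward([1.0, 0.0], layers)
loss = squared_error(pred, [1.0])
```

Training would then adjust the weights and biases (the model parameters) to reduce this loss; the learning rate and number of iterations used for that adjustment are the hyperparameters mentioned above.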
  • Machine learning can be classified into supervised learning, unsupervised learning, and reinforcement learning according to the learning method.
  • Supervised learning refers to a method of training an artificial neural network while a label for training data is given, and a label is a correct answer (or a result value) that the artificial neural network must infer when the training data is input to the artificial neural network.
  • Unsupervised learning may refer to a method of training an artificial neural network without a label for learning data.
  • Reinforcement learning may mean a learning method in which an agent defined in a certain environment is trained to select an action or a sequence of actions to maximize cumulative reward in each state.
  • Machine learning implemented with a deep neural network (DNN) that includes a plurality of hidden layers is also referred to as deep learning, and deep learning is part of machine learning.
  • Hereinafter, machine learning is used in a sense that includes deep learning.
  • a robot can mean a machine that automatically handles or acts on a task given by its own capabilities.
  • a robot having a function of recognizing the environment and performing an operation by determining itself can be referred to as an intelligent robot.
  • Robots can be classified into industrial, medical, household, and military according to the purpose or field of use.
  • The robot may be provided with a driving unit including an actuator or a motor to perform various physical operations such as moving robot joints.
  • A movable robot includes wheels, brakes, propellers, and the like in its driving unit, and can travel on the ground or fly in the air through the driving unit.
  • Autonomous driving refers to the technology of driving by itself, and an autonomous vehicle means a vehicle that drives without a user's manipulation or with minimal user manipulation.
  • For example, autonomous driving may include a technology that maintains the driving lane, a technology that automatically adjusts speed such as adaptive cruise control, a technology that automatically drives along a predetermined route, and a technology that automatically sets a route when a destination is set.
  • The vehicle includes a vehicle having only an internal combustion engine, a hybrid vehicle having both an internal combustion engine and an electric motor, and an electric vehicle having only an electric motor, and may include not only cars but also trains, motorcycles, and the like.
  • the autonomous vehicle can be viewed as a robot having an autonomous driving function.
  • Extended reality (XR) collectively refers to virtual reality (VR), augmented reality (AR), and mixed reality (MR).
  • VR technology provides real-world objects or backgrounds only as CG images, AR technology provides virtually created CG images on top of images of real objects, and MR technology is a computer graphics technology that mixes and combines virtual objects with the real world.
  • MR technology is similar to AR technology in that it shows both real and virtual objects together.
  • However, in AR technology a virtual object is used to complement a real object, whereas in MR technology a virtual object and a real object are used with equal characteristics.
  • XR technology can be applied to a Head-Mount Display (HMD), a Head-Up Display (HUD), a mobile phone, a tablet PC, a laptop, a desktop, a TV, digital signage, and the like, and a device to which XR technology is applied may be referred to as an XR device.
  • FIG. 1 shows an AI device 100 according to an embodiment of the present invention.
  • The AI device 100 may be implemented as a fixed or movable device such as a TV, a projector, a mobile phone, a smartphone, a desktop computer, a laptop, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a tablet PC, a wearable device, a set-top box (STB), a DMB receiver, a radio, a washing machine, a refrigerator, digital signage, a robot, a vehicle, and the like.
  • The terminal 100 may include a communication unit 110, an input unit 120, a learning processor 130, a sensing unit 140, an output unit 150, a memory 170, a processor 180, and the like.
  • the communication unit 110 may transmit and receive data to and from external devices such as other AI devices 100a to 100e or the AI server 200 using wired / wireless communication technology.
  • the communication unit 110 may transmit and receive sensor information, a user input, a learning model, a control signal, etc. with external devices.
  • The communication technology used by the communication unit 110 includes Global System for Mobile communication (GSM), Code Division Multiple Access (CDMA), Long Term Evolution (LTE), 5G, Wireless LAN (WLAN), Wireless-Fidelity (Wi-Fi), Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), ZigBee, and Near Field Communication (NFC).
  • the input unit 120 may acquire various types of data.
  • the input unit 120 may include a camera for inputting a video signal, a microphone for receiving an audio signal, a user input unit for receiving information from a user, and the like.
  • the camera or microphone is treated as a sensor, and the signal obtained from the camera or microphone may be referred to as sensing data or sensor information.
  • the input unit 120 may acquire training data for model training and input data to be used when obtaining an output using the training model.
  • the input unit 120 may obtain raw input data.
  • the processor 180 or the learning processor 130 may extract input features as pre-processing of the input data.
  • the learning processor 130 may train a model composed of artificial neural networks using the training data.
  • the trained artificial neural network may be referred to as a learning model.
  • the learning model can be used to infer a result value for new input data rather than learning data, and the inferred value can be used as a basis for determining to perform an action.
  • the learning processor 130 may perform AI processing together with the learning processor 240 of the AI server 200.
  • the learning processor 130 may include a memory integrated or implemented in the AI device 100.
  • the learning processor 130 may be implemented using memory 170, external memory directly coupled to the AI device 100, or memory maintained in the external device.
  • the sensing unit 140 may acquire at least one of AI device 100 internal information, AI device 100 environment information, and user information using various sensors.
  • The sensors included in the sensing unit 140 include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, a lidar, and a radar.
  • the output unit 150 may generate output related to vision, hearing, or tactile sense.
  • the output unit 150 may include a display unit for outputting visual information, a speaker for outputting auditory information, a haptic module for outputting tactile information, and the like.
  • the memory 170 may store data supporting various functions of the AI device 100.
  • the memory 170 may store input data, learning data, learning models, learning history, etc. acquired by the input unit 120.
  • the processor 180 may determine at least one executable action of the AI device 100 based on information determined or generated using a data analysis algorithm or a machine learning algorithm. Also, the processor 180 may control components of the AI device 100 to perform a determined operation.
  • The processor 180 may request, search for, receive, or utilize data of the learning processor 130 or the memory 170, and may control the components of the AI device 100 to execute the operation that is predicted or determined to be preferable among the at least one executable operation.
  • When linkage with an external device is necessary to perform the determined operation, the processor 180 may generate a control signal for controlling that external device and transmit the generated control signal to it.
  • the processor 180 may acquire intention information for a user input, and determine a user's requirement based on the obtained intention information.
  • The processor 180 may obtain intention information corresponding to a user input by using at least one of a Speech To Text (STT) engine for converting voice input into a character string or a Natural Language Processing (NLP) engine for obtaining intention information of natural language.
  • At this time, at least one of the STT engine or the NLP engine may be configured as an artificial neural network, at least part of which is trained according to a machine learning algorithm. In addition, at least one of the STT engine or the NLP engine may be trained by the learning processor 130, trained by the learning processor 240 of the AI server 200, or trained by distributed processing thereof.
  • The processor 180 may collect history information including the user's feedback on the operation content or operation of the AI device 100, store it in the memory 170 or the learning processor 130, or transmit it to an external device such as the AI server 200. The collected history information can be used to update the learning model.
  • the processor 180 may control at least some of the components of the AI device 100 to drive an application program stored in the memory 170. Furthermore, the processor 180 may operate by combining two or more of the components included in the AI device 100 with each other to drive the application program.
  • FIG. 2 shows an AI server 200 according to an embodiment of the present invention.
  • the AI server 200 may refer to an apparatus for learning an artificial neural network using a machine learning algorithm or using a trained artificial neural network.
  • the AI server 200 may be composed of a plurality of servers to perform distributed processing, or may be defined as a 5G network.
  • At this time, the AI server 200 may be included as a partial configuration of the AI device 100 and may perform at least part of the AI processing together with it.
  • The AI server 200 may include a communication unit 210, a memory 230, a learning processor 240, and a processor 260.
  • the communication unit 210 may transmit and receive data with an external device such as the AI device 100.
  • the memory 230 may include a model storage unit 231.
  • the model storage unit 231 may store a model (or artificial neural network, 231a) being trained or trained through the learning processor 240.
  • the learning processor 240 may train the artificial neural network 231a using learning data.
  • The learning model may be used while mounted on the AI server 200, or may be mounted on and used by an external device such as the AI device 100.
  • the learning model can be implemented in hardware, software, or a combination of hardware and software. When part or all of the learning model is implemented in software, one or more instructions constituting the learning model may be stored in the memory 230.
  • the processor 260 may infer the result value for the new input data using the learning model, and generate a response or control command based on the inferred result value.
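As a rough illustration of the inference flow described above (infer a result value for new input data using the learning model, then generate a response or control command), the following sketch uses a trivial stand-in model; the function names, the stand-in model, and the threshold are all hypothetical, not the patent's implementation:

```python
# Hypothetical sketch of the processor 260 flow: run a trained model on new
# input data, then map the inferred result value to a control command.
# The "model" below is a stand-in for the trained artificial neural network.

def infer(model, input_data):
    """Infer a result value for new input data using the learning model."""
    return model(input_data)

def to_command(result):
    """Generate a control command based on the inferred result value."""
    return {"action": "stop"} if result < 0.5 else {"action": "go"}

# Trivial stand-in model: the mean of the input values.
model = lambda x: sum(x) / len(x)

command = to_command(infer(model, [0.9, 0.8, 0.7]))
print(command)  # {'action': 'go'}
```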
  • FIG. 3 shows an AI system 1 according to an embodiment of the present invention.
  • in the AI system 1, at least one of an AI server 200, a robot 100a, an autonomous vehicle 100b, an XR device 100c, a smartphone 100d, or a home appliance 100e is connected to the cloud network 10.
  • the robot 100a to which AI technology is applied, the autonomous vehicle 100b, the XR device 100c, the smartphone 100d, or the home appliance 100e may be referred to as AI devices 100a to 100e.
  • the cloud network 10 may form a part of the cloud computing infrastructure or may mean a network existing in the cloud computing infrastructure.
  • the cloud network 10 may be configured using a 3G network, a 4G or a Long Term Evolution (LTE) network, or a 5G network.
  • each device (100a to 100e, 200) constituting the AI system 1 may be connected to each other through the cloud network (10).
  • the devices 100a to 100e and 200 may communicate with each other through a base station, but may communicate with each other directly without passing through the base station.
  • the AI server 200 may include a server performing AI processing and a server performing operations on big data.
  • the AI server 200 is connected, through the cloud network 10, with at least one of the robot 100a, the autonomous vehicle 100b, the XR device 100c, the smartphone 100d, or the home appliance 100e, which are the AI devices constituting the AI system 1, and can assist at least a part of the AI processing of the connected AI devices 100a to 100e.
  • the AI server 200 may train the artificial neural network according to the machine learning algorithm in place of the AI devices 100a to 100e, and may directly store the learning model or transmit it to the AI devices 100a to 100e.
  • the AI server 200 may receive input data from the AI devices 100a to 100e, infer a result value for the received input data using a learning model, generate a response or control command based on the inferred result value, and transmit it to the AI devices 100a to 100e.
  • alternatively, the AI devices 100a to 100e may directly infer a result value for input data using a learning model and generate a response or control command based on the inferred result value.
  • the AI devices 100a to 100e to which the above-described technology is applied will be described.
  • the AI devices 100a to 100e illustrated in FIG. 3 may be viewed as specific embodiments of the AI device 100 illustrated in FIG. 1.
  • AI technology is applied to the robot 100a, and may be implemented as a guide robot, a transport robot, a cleaning robot, a wearable robot, an entertainment robot, a pet robot, and an unmanned flying robot.
  • the robot 100a may include a robot control module for controlling an operation, and the robot control module may mean a software module or a chip implemented with hardware.
  • the robot 100a acquires state information of the robot 100a using sensor information obtained from various types of sensors, detects (recognizes) the surrounding environment and objects, generates map data, determines a movement route and a driving plan, determines a response to user interaction, or determines an operation.
  • the robot 100a may use sensor information acquired from at least one sensor among a lidar, a radar, and a camera in order to determine a movement route and a driving plan.
  • the robot 100a may perform the above operations using a learning model composed of at least one artificial neural network.
  • the robot 100a may recognize a surrounding environment and an object using a learning model, and may determine an operation using the recognized surrounding environment information or object information.
  • the learning model may be directly learned from the robot 100a or may be learned from an external device such as the AI server 200.
  • the robot 100a may perform an operation by directly generating a result using the learning model, or may transmit sensor information to an external device such as the AI server 200 and receive the result generated accordingly to perform the operation.
  • the robot 100a determines a movement path and a driving plan using at least one of map data, object information detected from sensor information, or object information obtained from an external device, and controls the driving unit so that the robot 100a travels according to the determined movement path and driving plan.
  • the map data may include object identification information for various objects arranged in a space in which the robot 100a moves.
  • the map data may include object identification information for fixed objects such as walls and doors and movable objects such as flower pots and desks.
  • the object identification information may include a name, type, distance, and location.
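The map data and object identification information described above could be modeled, purely for illustration, as follows; the field names and the fixed/movable flag are assumptions and not part of the patent:

```python
# Hypothetical sketch of map-data entries carrying the object identification
# information described above (name, type, distance, location). The movable
# flag distinguishes fixed objects (walls, doors) from movable ones.
from dataclasses import dataclass

@dataclass
class ObjectInfo:
    name: str
    obj_type: str        # e.g., "wall", "door", "flower pot", "desk"
    distance_m: float    # distance from the robot
    location: tuple      # (x, y) coordinates in the map frame
    movable: bool        # False for walls/doors, True for pots/desks

map_data = [
    ObjectInfo("wall-1", "wall", 2.5, (0.0, 2.5), movable=False),
    ObjectInfo("pot-3", "flower pot", 1.2, (1.0, 0.7), movable=True),
]

fixed = [o.name for o in map_data if not o.movable]
print(fixed)  # ['wall-1']
```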
  • the robot 100a may perform an operation or travel by controlling a driving unit based on a user's control / interaction. At this time, the robot 100a may acquire intention information of an interaction according to a user's motion or voice utterance, and determine an answer based on the obtained intention information to perform an operation.
  • the autonomous driving vehicle 100b is applied with AI technology and can be implemented as a mobile robot, a vehicle, or an unmanned aerial vehicle.
  • the autonomous driving vehicle 100b may include an autonomous driving control module for controlling an autonomous driving function, and the autonomous driving control module may refer to a software module or a chip implemented with hardware.
  • the autonomous driving control module may be included internally as a component of the autonomous driving vehicle 100b, or may be configured as separate hardware and connected to the outside of the autonomous driving vehicle 100b.
  • the autonomous vehicle 100b acquires state information of the autonomous vehicle 100b using sensor information obtained from various types of sensors, detects (recognizes) surrounding environments and objects, generates map data, determines a movement route and a driving plan, or determines an operation.
  • the autonomous vehicle 100b may use sensor information obtained from at least one sensor among a lidar, a radar, and a camera, like the robot 100a, to determine a movement path and a driving plan.
  • the autonomous driving vehicle 100b may receive sensor information from external devices to recognize an environment or an object in an area where the field of view is obscured or beyond a predetermined distance, or may receive directly recognized information from external devices.
  • the autonomous vehicle 100b may perform the above-described operations using a learning model composed of at least one artificial neural network.
  • the autonomous vehicle 100b may recognize a surrounding environment and an object using a learning model, and may determine a driving line using the recognized surrounding environment information or object information.
  • the learning model may be learned directly from the autonomous vehicle 100b or may be learned from an external device such as the AI server 200.
  • the autonomous vehicle 100b may perform an operation by directly generating a result using the learning model, or may transmit sensor information to an external device such as the AI server 200 and receive the result generated accordingly to perform the operation.
  • the autonomous vehicle 100b determines a movement path and a driving plan using at least one of map data, object information detected from sensor information, or object information obtained from an external device, and controls the driving unit so that the autonomous vehicle 100b travels according to the determined movement path and driving plan.
  • the map data may include object identification information for various objects arranged in a space (for example, a road) in which the autonomous vehicle 100b travels.
  • the map data may include object identification information for fixed objects such as street lights, rocks, buildings, and movable objects such as vehicles and pedestrians.
  • the object identification information may include a name, type, distance, and location.
  • the autonomous vehicle 100b may perform an operation or travel by controlling a driving unit based on a user's control / interaction. At this time, the autonomous driving vehicle 100b may acquire intention information of an interaction according to a user's motion or voice utterance, and determine an answer based on the obtained intention information to perform an operation.
  • AI technology is applied to the XR device 100c, HMD (Head-Mount Display), HUD (Head-Up Display) provided in a vehicle, television, mobile phone, smart phone, computer, wearable device, home appliance, digital signage , It can be implemented as a vehicle, a fixed robot or a mobile robot.
  • the XR device 100c analyzes 3D point cloud data or image data obtained through various sensors or from an external device to generate location data and attribute data for 3D points, thereby acquiring information about the surrounding space or real objects, and can render and output an XR object.
  • the XR device 100c may output an XR object including additional information about the recognized object in correspondence with the recognized object.
  • the XR device 100c may perform the above operations using a learning model composed of at least one artificial neural network.
  • the XR device 100c may recognize a real object from 3D point cloud data or image data using a learning model, and provide information corresponding to the recognized real object.
  • the learning model may be directly trained in the XR device 100c or may be learned in an external device such as the AI server 200.
  • the XR device 100c may perform an operation by directly generating a result using the learning model, or may transmit sensor information to an external device such as the AI server 200 and receive the result generated accordingly to perform the operation.
  • the robot 100a is applied with AI technology and autonomous driving technology, and can be implemented as a guide robot, a transport robot, a cleaning robot, a wearable robot, an entertainment robot, a pet robot, and an unmanned flying robot.
  • the robot 100a to which AI technology and autonomous driving technology are applied may mean a robot itself having an autonomous driving function or a robot 100a that interacts with the autonomous driving vehicle 100b.
  • the robot 100a having an autonomous driving function may collectively refer to devices that move by themselves along a given route without user control, or that determine the route by themselves and move along it.
  • the robot 100a and the autonomous vehicle 100b having an autonomous driving function may use a common sensing method to determine one or more of a moving path or a driving plan.
  • the robot 100a and the autonomous vehicle 100b having an autonomous driving function may determine one or more of a moving route or a driving plan using information sensed through a lidar, a radar, and a camera.
  • the robot 100a interacting with the autonomous vehicle 100b exists separately from the autonomous vehicle 100b, and may be linked to the autonomous driving function inside or outside the autonomous vehicle 100b, or may perform an operation associated with the user on board the autonomous vehicle 100b.
  • the robot 100a that interacts with the autonomous vehicle 100b may acquire sensor information on behalf of the autonomous vehicle 100b and provide it to the autonomous vehicle 100b, or may acquire sensor information, generate surrounding environment information or object information, and provide it to the autonomous vehicle 100b, thereby controlling or assisting the autonomous driving function of the autonomous vehicle 100b.
  • the robot 100a interacting with the autonomous vehicle 100b may monitor a user on board the autonomous vehicle 100b or control functions of the autonomous vehicle 100b through interaction with the user.
  • the robot 100a may activate the autonomous driving function of the autonomous vehicle 100b or assist control of a driving unit of the autonomous vehicle 100b.
  • the function of the autonomous driving vehicle 100b controlled by the robot 100a may include not only an autonomous driving function, but also a function provided by a navigation system or an audio system provided inside the autonomous driving vehicle 100b.
  • the robot 100a interacting with the autonomous vehicle 100b may provide information or assist a function to the autonomous vehicle 100b from outside the autonomous vehicle 100b.
  • the robot 100a may provide traffic information including signal information to the autonomous vehicle 100b, like a smart traffic light, or may interact with the autonomous vehicle 100b, like an automatic electric charger for an electric vehicle that automatically connects the charger to the charging port.
  • the robot 100a is applied with AI technology and XR technology, and can be implemented as a guide robot, a transport robot, a cleaning robot, a wearable robot, an entertainment robot, a pet robot, an unmanned flying robot, and a drone.
  • the robot 100a to which XR technology is applied may mean a robot that is a target of control / interaction within an XR image.
  • the robot 100a is separated from the XR device 100c and can be interlocked with each other.
  • when the robot 100a, which is the object of control/interaction within the XR image, acquires sensor information from sensors including a camera, the robot 100a or the XR device 100c generates an XR image based on the sensor information, and the XR device 100c may output the generated XR image.
  • the robot 100a may operate based on a control signal input through the XR device 100c or a user's interaction.
  • the user can check the XR image corresponding to the viewpoint of the remotely linked robot 100a through an external device such as the XR device 100c, adjust the autonomous driving path of the robot 100a through interaction, control its operation or driving, or check information on surrounding objects.
  • the autonomous vehicle 100b is applied with AI technology and XR technology, and may be implemented as a mobile robot, a vehicle, or an unmanned aerial vehicle.
  • the autonomous driving vehicle 100b to which the XR technology is applied may mean an autonomous driving vehicle having a means for providing an XR image or an autonomous driving vehicle targeted for control / interaction within the XR image.
  • the autonomous vehicle 100b which is the object of control / interaction within the XR image, is distinguished from the XR device 100c and may be interlocked with each other.
  • the autonomous vehicle 100b having a means for providing an XR image may acquire sensor information from sensors including a camera, and output an XR image generated based on the acquired sensor information.
  • the autonomous vehicle 100b may provide an XR object corresponding to a real object or an object on the screen to the occupant by outputting an XR image with a HUD.
  • when the XR object is output to the HUD, at least a portion of the XR object may be output so as to overlap the actual object toward which the occupant's gaze is directed.
  • when the XR object is output to a display provided inside the autonomous vehicle 100b, at least a part of the XR object may be output so as to overlap an object in the screen.
  • the autonomous vehicle 100b may output XR objects corresponding to objects such as lanes, other vehicles, traffic lights, traffic signs, two-wheeled vehicles, pedestrians, buildings, and the like.
  • when the autonomous vehicle 100b, which is the object of control/interaction within the XR image, acquires sensor information from sensors including a camera, the autonomous vehicle 100b or the XR device 100c generates an XR image based on the sensor information, and the XR device 100c may output the generated XR image.
  • the autonomous vehicle 100b may operate based on a user's interaction or a control signal input through an external device such as the XR device 100c.
  • EPS Evolved Packet System
  • EPC Evolved Packet Core
  • IP Internet Protocol
  • UMTS Universal Mobile Telecommunications System
  • -eNodeB base station of the EPS network. It is installed outdoors and has coverage of a macro cell.
  • IMSI International Mobile Subscriber Identity
  • PLMN Public Land Mobile Network
  • 5GS (5G System): a system composed of a 5G access network (AN), a 5G core network, and a UE (User Equipment).
  • 5G-AN (5G Access Network): an access network composed of an NG-RAN (New Generation Radio Access Network) and/or a non-3GPP AN connecting to the 5G core network.
  • NG-RAN (New Generation Radio Access Network): a radio access network that has the common feature of connecting to 5GC and supports one or more of the following options: standalone New Radio; New Radio as an anchor supporting E-UTRA extension; standalone E-UTRA (e.g., eNodeB).
  • 5GC (5G Core Network): a core network connected to the 5G access network.
  • NF Network Function
  • -NF service: a function exposed by an NF through a service-based interface and consumed by other authorized NF(s)
  • -Network Slice: a logical network that provides specific network capability(s) and network feature(s)
  • -Network Slice instance: a set of NF instance(s) and the required resource(s) (e.g., computation, storage and networking resources) that form the deployed network slice
  • PDU Protocol Data Unit
  • -PDU Connectivity Service: a service that provides the exchange of PDU(s) between a UE and a data network
  • -PDU Session: an association between the UE and a data network that provides the PDU Connectivity Service. The association type may be Internet Protocol (IP), Ethernet, or unstructured.
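As an illustrative sketch only (class and attribute names are assumptions, not the 5GS data model), a PDU session carrying one of the three association types above might be represented as:

```python
# Hypothetical model of a PDU session whose association type is one of the
# three types named above: IP, Ethernet, or unstructured.
from enum import Enum

class PduSessionType(Enum):
    IP = "IP"
    ETHERNET = "Ethernet"
    UNSTRUCTURED = "unstructured"

class PduSession:
    """Association between the UE and a data network."""
    def __init__(self, ue_id: str, dnn: str, session_type: PduSessionType):
        self.ue_id = ue_id
        self.dnn = dnn                     # data network name
        self.session_type = session_type   # IP / Ethernet / unstructured

s = PduSession("ue-001", "internet", PduSessionType.IP)
print(s.session_type.value)  # IP
```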
  • -NAS Non-Access Stratum: A functional layer for exchanging signaling and traffic messages between a terminal and a core network in an EPS and 5GS protocol stack. The main function is to support the mobility of the terminal and to support the session management procedure.
  • -AS Access Stratum
  • a protocol layer below the NAS layer on the interface protocol between the access network and the UE, or between the access network and the core network. For example, in the control plane protocol stack, the radio resource control (RRC) layer, the packet data convergence protocol (PDCP) layer, the radio link control (RLC) layer, the medium access control (MAC) layer, and the physical layer (PHY) may collectively be referred to as the AS layers, or any one of these layers may be referred to as an AS layer. Alternatively, in the user plane protocol stack, the PDCP layer, the RLC layer, the MAC layer, and the PHY layer may collectively be referred to as the AS layers, or any one of them may be referred to as an AS layer.
  • RM Registration Management
  • -RM-DEREGISTERED state: in this state, the UE is not registered to the network. Since the UE context in the AMF (Access and Mobility Management Function) does not hold valid location or routing information for the UE, the UE is not reachable by the AMF.
  • -RM REGISTERED state In this state, the UE is registered to the network. The UE may receive a service requiring registration with the network.
  • -CM-IDLE state of Connection Management (CM): a UE in this state does not have an established NAS signaling connection with the AMF over N1. In this state, the UE performs cell selection/reselection and PLMN selection.
  • -CM-CONNECTED state: a UE in this state has a NAS signaling connection with the AMF over N1.
  • the NAS signaling connection uses an RRC connection between a UE and a radio access network (RAN), and an NGAP (NG Application Protocol) UE association between an access network (AN) and an AMF.
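The CM state behavior described above can be sketched as a toy state machine: the UE is CM-IDLE without an established NAS signaling connection and CM-CONNECTED once one exists over N1. This is an illustrative sketch only; the class and method names are assumptions, not part of the 5GS specification or the present invention:

```python
# Toy state machine for the CM states described above.

class UeConnectionManagement:
    def __init__(self):
        self.state = "CM-IDLE"   # no NAS signaling connection with the AMF

    def establish_nas_signaling_connection(self):
        # e.g., triggered by sending an initial NAS message over N1
        self.state = "CM-CONNECTED"

    def release_nas_signaling_connection(self):
        # back to idle: the UE resumes cell selection/reselection
        self.state = "CM-IDLE"

ue = UeConnectionManagement()
ue.establish_nas_signaling_connection()
print(ue.state)  # CM-CONNECTED
```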
  • the 5G system is an advanced technology evolved from the 4th generation LTE mobile communication technology, and supports a new radio access technology (RAT) through an improvement of the existing mobile communication network structure or through a clean-slate structure, as well as extended LTE (eLTE) as an extension of Long Term Evolution (LTE), non-3GPP access (e.g., Wireless Local Area Network (WLAN) access), and the like.
  • the 5G system architecture is defined to support data connections and services to enable deployments to use technologies such as Network Function Virtualization and Software Defined Networking.
  • the 5G system architecture utilizes service-based interactions between Control Plane (CP) Network Functions (NF).
  • each NF can interact directly with other NFs.
  • the architecture does not preclude the use of intermediate functions to route control plane messages.
  • the architecture is defined as a converged core network with a common AN-CN interface incorporating different access types (eg 3GPP access and non-3GPP access).
  • UP functions can be deployed close to the access network to support low latency services and access to the local data network.
  • the 5G system is defined as a service-based, and the interaction between network functions (NFs) in the architecture for the 5G system can be represented in two ways as follows.
  • Service-based representation: network functions (e.g., AMF) within the control plane (CP) allow other authorized network functions to access their services. This representation also includes point-to-point reference points (e.g., N11) where necessary.
  • Reference point representation: the interaction between the NF services of two network functions (e.g., AMF and SMF) is represented by a point-to-point reference point (e.g., N11) between them.
  • FIG. 4 illustrates a wireless communication system architecture to which the present invention can be applied.
  • the service-based interface illustrated in FIG. 4 represents a set of services provided / exposed by a given NF.
  • the service-based interface is used within the control plane.
  • the 5G system architecture may include various components (i.e., network functions (NFs)). FIG. 4 shows some of them: an authentication server function (AUSF: Authentication Server Function), an access and mobility management function (AMF: (Core) Access and Mobility Management Function), a session management function (SMF: Session Management Function), a policy control function (PCF), an application function (AF), unified data management (UDM: Unified Data Management), a data network (DN), a user plane function (UPF), a network exposure function (NEF), an NF repository function (NRF), a (radio) access network ((R)AN), and user equipment (UE).
  • Each NF supports the following functions.
  • -AUSF stores data for UE authentication.
  • -AMF provides functions for access and mobility management on a per-UE basis, and can be basically connected to one AMF per UE.
  • the AMF supports functions such as: inter-CN signaling for mobility between 3GPP access networks; termination of the radio access network (RAN) CP interface (i.e., the N2 interface); termination of NAS signaling (N1); NAS signaling security (NAS ciphering and integrity protection); AS security control; registration management (registration area management); connection management; idle mode UE reachability (including control and execution of paging retransmission); mobility management control (subscription and policies); support of intra-system and inter-system mobility; support of network slicing; SMF selection; lawful intercept (for AMF events and interfaces to the LI system); session management (SM) message delivery between the UE and the SMF; transparent proxy for SM message routing; access authentication; access authorization including roaming authority check; delivery of SMS messages between the UE and the Short Message Service Function (SMSF); security anchor function (SEA); and security context management (SCM).
  • -DN means, for example, operator service, Internet access, or third party service.
  • the DN transmits a downlink protocol data unit (PDU) to the UPF, or receives a PDU transmitted from the UE from the UPF.
  • the PCF provides the function of determining policies, such as mobility management and session management policies, by receiving packet flow information from the application server.
  • the PCF provides a unified policy framework to control network behavior, provides policy rules for CP function(s) (e.g., AMF, SMF, etc.) to enforce, and supports a front end to access subscription information relevant to policy decisions in a user data repository (UDR: User Data Repository).
  • -SMF provides a session management function, and when a UE has multiple sessions, it can be managed by a different SMF for each session.
  • the SMF supports functions such as: session management (e.g., session establishment, modification and release, including maintaining the tunnel between the UPF and AN nodes); UE IP address allocation and management (optionally including authentication); selection and control of UP functions; configuring traffic steering at the UPF to route traffic to the proper destination; termination of interfaces towards policy control functions; enforcement of the control part of policy and QoS; lawful intercept (for SM events and interfaces to the LI system); termination of the SM part of NAS messages; downlink data notification; initiation of AN-specific SM information (delivered to the AN over N2 via the AMF); determination of the SSC mode of a session; and roaming functions.
  • Some or all of the functions of the SMF can be supported within a single instance of one SMF.
  • UDM stores user's subscription data, policy data, etc.
  • the UDM includes two parts: an application front end (FE) and a user data repository (UDR).
  • the UDM-FE is in charge of location management, subscription management and credential processing, while the PCF is in charge of policy control.
  • UDR stores the data required for the functions provided by UDM-FE and the policy profile required by PCF.
  • the data stored in the UDR includes user subscription data, including subscription identifiers, security credentials, access and mobility related subscription data and session related subscription data, as well as policy data.
  • the UDM-FE accesses the subscription information stored in the UDR and supports functions such as authentication credential processing, user identification handling, access authentication, registration/mobility management, subscription management, and SMS management.
  • -UPF delivers the downlink PDU received from the DN to the UE via (R) AN, and delivers the uplink PDU received from the UE via (R) AN to the DN.
  • the UPF supports functions such as: anchor point for intra/inter-RAT mobility; external PDU session point of interconnect to the data network; packet routing and forwarding; packet inspection and the user plane part of policy rule enforcement; lawful intercept; traffic usage reporting; uplink classifier to support routing of traffic flows to a data network; branching point to support multi-homed PDU sessions; QoS handling for the user plane (e.g., packet filtering, gating, uplink/downlink rate enforcement); uplink traffic verification (SDF mapping between service data flows (SDFs) and QoS flows); transport level packet marking in the uplink and downlink; downlink packet buffering; and downlink data notification triggering. Some or all functions of the UPF may be supported within a single instance of one UPF.
  • the AF interworks with the 3GPP core network to provide services (e.g., application influence on traffic routing, access to network capability exposure, interaction with the policy framework for policy control, etc.).
  • the NEF provides a means for securely exposing the services and capabilities provided by 3GPP network functions, for example, for third parties, internal exposure/re-exposure, application functions, and edge computing.
  • the NEF receives information (based on the exposed capability (s) of other network function (s)) from other network function (s).
  • the NEF can store received information as structured data using a standardized interface to the data storage network function. The stored information is re-exposed to other network function (s) and application function (s) by the NEF, and can be used for other purposes, such as analysis.
  • -NRF supports service discovery function.
  • it receives an NF discovery request from an NF instance and provides information on the discovered NF instance(s) to the requesting NF instance. It also maintains information on available NF instances and the services they support.
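The NRF behavior described above (register available NF instances and their supported services, and answer discovery requests with the matching instances) can be sketched as a toy registry; the class, method names, and the service/instance identifiers are illustrative assumptions only:

```python
# Toy registry sketching the NRF service discovery function described above.

class Nrf:
    def __init__(self):
        self._instances = {}   # nf_instance_id -> set of supported services

    def register(self, nf_instance_id, services):
        self._instances[nf_instance_id] = set(services)

    def deregister(self, nf_instance_id):
        self._instances.pop(nf_instance_id, None)

    def discover(self, service):
        """Return the NF instances that support the requested service."""
        return [nf for nf, svcs in self._instances.items() if service in svcs]

nrf = Nrf()
nrf.register("amf-1", ["namf-comm", "namf-evts"])
nrf.register("smf-1", ["nsmf-pdusession"])
print(nrf.discover("nsmf-pdusession"))  # ['smf-1']
```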
  • the (R)AN is a generic term for a radio access network that supports both evolved E-UTRA, an evolved version of the 4G radio access technology, and the new radio access technology (NR: New RAT) (e.g., gNB).
  • the gNB supports functions such as: radio resource management (i.e., radio bearer control, radio admission control, connection mobility control, and dynamic allocation of uplink/downlink resources to the UE (i.e., scheduling)); Internet Protocol (IP) header compression; encryption and integrity protection of user data streams; selection of an AMF at UE attachment when routing to an AMF cannot be determined from the information provided by the UE; routing of user plane data to the UPF(s); routing of control plane information to the AMF; connection setup and release; scheduling and transmission of paging messages (originated from the AMF); scheduling and transmission of system broadcast information (originated from the AMF or operation and maintenance (O&M)); measurement and measurement reporting configuration for mobility and scheduling; and transport level packet marking in the uplink.
  • -UE means a user device.
  • The user device may be referred to as a terminal, a mobile equipment (ME), or a mobile station (MS).
  • the user device may be a portable device such as a laptop, a mobile phone, a personal digital assistant (PDA), a smart phone, a multimedia device, or a non-portable device such as a personal computer (PC) or a vehicle-mounted device.
  • SDSF is an optional function to support the function of storing and retrieving information as structured data by any NEF.
  • -UDSF is an optional function to support the storage and retrieval of information as unstructured data by any NF.
  • The following illustrates the service-based interfaces included in the 5G system architecture represented in FIG. 4.
  • An NF service is a type of ability exposed by a NF (ie, NF service provider) to another NF (ie, NF service consumer) through a service-based interface.
  • the NF may expose one or more NF service (s). The following criteria apply to define NF services:
  • -NF services are derived from information flows to describe end-to-end functionality.
  • -"Request-response": A control plane NF_B (i.e., the NF service provider) is requested by another control plane NF_A (i.e., the NF service consumer) to provide a certain NF service. NF_B responds with the NF service result based on the information provided by NF_A in the request.
  • NF_B can alternately consume NF services from other NF (s).
  • communication is performed one-to-one between two NFs (ie, consumer and provider).
  • -"Subscribe-Notify": The control plane NF_A (i.e., the NF service consumer) subscribes to an NF service provided by another control plane NF_B (i.e., the NF service provider). Multiple control plane NF(s) may subscribe to the same control plane NF service. NF_B notifies the interested NF(s) subscribed to this NF service of the results of this NF service. A subscription request from a consumer may include a notification request for notifications triggered through periodic updates or specific events (e.g., changes in the requested information, reaching certain thresholds, etc.). This mechanism also covers the case where the NF(s) (e.g., NF_B) implicitly subscribe to a specific notification without an explicit subscription request (e.g., due to a successful registration procedure).
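As an illustration of the "Subscribe-Notify" mechanism described above, the following minimal Python sketch (not 3GPP code; all names are hypothetical) shows several consumer NFs subscribing to one producer NF, which then notifies every subscriber of a service result:

```python
from typing import Callable, Dict, List

class NFServiceProducer:
    """Plays the role of NF_B (the NF service provider)."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = {}

    def subscribe(self, event: str, callback: Callable[[dict], None]) -> None:
        # A consumer NF registers interest in an event, e.g. a change of
        # requested information or a threshold being reached.
        self._subscribers.setdefault(event, []).append(callback)

    def notify(self, event: str, result: dict) -> None:
        # The producer notifies every subscribed NF of the service result.
        for callback in self._subscribers.get(event, []):
            callback(result)

received = []
producer = NFServiceProducer()
# Multiple control plane NFs may subscribe to the same NF service.
producer.subscribe("threshold-reached", lambda r: received.append(("NF_A1", r)))
producer.subscribe("threshold-reached", lambda r: received.append(("NF_A2", r)))
producer.notify("threshold-reached", {"value": 42})
```

As in the text, the same pattern would also cover implicit subscriptions: the producer could register a callback for a subscriber itself, for example after a successful registration procedure.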
  • FIG. 5 illustrates a wireless communication system architecture to which the present invention can be applied.
  • a conceptual link connecting between NFs in the 5G system is defined as a reference point.
  • the following illustrates a reference point included in the 5G system architecture represented as FIG. 5.
  • -N1 (or NG1): reference point between UE and AMF
  • -N24 (or NG24): Reference point between PCF in the visited network and PCF in the home network
  • -N11 (or NG11): reference point between AMF and SMF
  • -N13 (or NG13): reference point between UDM and Authentication Server Function (AUSF)
  • -N15 (or NG15): reference point between PCF and AMF for non-roaming scenarios, reference point between PCF and AMF in visited network for roaming scenarios
  • -N16 (or NG16): a reference point between two SMFs (for roaming scenarios, a reference point between an SMF in a visited network and an SMF in a home network)
  • FIG. 5 illustrates a reference model for a case where a UE accesses one DN using one PDU session for convenience of description, but is not limited thereto.
  • FIG. 6 illustrates a wireless communication system architecture to which the present invention can be applied.
  • non-roaming for a UE concurrently accessing two (ie, local and central) data networks (DNs) using multiple PDU sessions using a reference point representation (non-roaming) 5G system architecture.
  • DNs local and central data networks
  • non-roaming 5G system architecture
  • FIG. 6 illustrates an architecture for multiple PDU sessions when two SMFs are selected for different PDU sessions.
  • each SMF may have the ability to control both the local UPF and the central UPF in the PDU session.
  • FIG. 7 illustrates a wireless communication system architecture to which the present invention can be applied.
  • Represents a non-roaming 5G system architecture for a case where concurrent access to two (i.e., local and central) data networks (DNs) is provided within a single PDU session, using a reference point representation.
  • FIG. 8 illustrates a wireless communication system architecture to which the present invention can be applied.
  • FIG. 8 shows a roaming 5G system architecture for an LBO scenario with a service-based interface in a control plane.
  • FIG. 9 illustrates a wireless communication system architecture to which the present invention can be applied.
  • FIG. 9 shows a roaming 5G system architecture for a home routed scenario with a service-based interface in a control plane.
  • FIG. 10 illustrates a wireless communication system architecture to which the present invention can be applied.
  • FIG. 10 shows a roaming 5G system architecture for an LBO scenario using a reference point representation.
  • FIG. 11 illustrates a wireless communication system architecture to which the present invention can be applied.
  • FIG. 11 shows a roaming 5G system architecture for a home routed scenario using a reference point representation.
  • FIG. 12 illustrates an NG-RAN architecture to which the present invention can be applied.
  • The NG-RAN (New Generation Radio Access Network) consists of gNB(s) (NR NodeB) and/or eNB(s) (eNodeB) connected to the 5GC.
  • The gNB(s) and eNB(s) are interconnected with each other using the Xn interface.
  • The gNB(s) and eNB(s) are also connected to the 5GC using the NG interface. More specifically, they are connected to the AMF using the NG-C interface (i.e., the N2 reference point), which is the control plane interface between the NG-RAN and the 5GC, and to the UPF using the NG-U interface (i.e., the N3 reference point), which is the user plane interface between the NG-RAN and the 5GC.
  • FIG. 13 is a diagram illustrating a radio protocol stack in a wireless communication system to which the present invention can be applied.
  • FIG. 13 (a) illustrates the air interface user plane protocol stack between the UE and the gNB
  • FIG. 13 (b) illustrates the radio interface control plane protocol stack between the UE and the gNB.
  • the control plane means a path through which control messages used by the UE and the network to manage calls are transmitted.
  • the user plane means a path through which data generated at the application layer, for example, voice data or Internet packet data, is transmitted.
  • the user plane protocol stack may be divided into a first layer (Layer 1) (ie, a physical layer (PHY) layer) and a second layer (Layer 2).
  • The control plane protocol stack may be divided into a first layer (i.e., the PHY layer), a second layer, a third layer (i.e., the radio resource control (RRC) layer), and a non-access stratum (NAS) layer.
  • The second layer is divided into a medium access control (MAC) sublayer, a radio link control (RLC) sublayer, a packet data convergence protocol (PDCP) sublayer, and a service data adaptation protocol (SDAP) sublayer (for the user plane).
  • the radio bearers are classified into two groups: a data radio bearer (DRB) for user plane data and a signaling radio bearer (SRB) for control plane data.
  • the first layer provides an information transfer service to an upper layer by using a physical channel.
  • the physical layer is connected to the MAC sublayer located at a higher level through a transport channel, and data is transmitted between the MAC sublayer and the PHY layer through the transport channel. Transmission channels are classified according to how and with what characteristics data is transmitted through a wireless interface. Then, data is transmitted between different physical layers, between a PHY layer of a transmitting end and a PHY layer of a receiving end through a physical channel.
  • The MAC sublayer performs: mapping between logical channels and transport channels; multiplexing/demultiplexing of MAC service data units (SDUs) belonging to one or different logical channels into/from transport blocks (TBs) delivered to/from the PHY layer through transport channels; scheduling information reporting; error correction through hybrid automatic repeat request (HARQ); priority handling between UEs using dynamic scheduling; priority handling between logical channels of one UE using logical channel priority; and padding.
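The priority handling between logical channels of one UE mentioned above can be illustrated with a simplified sketch: when MAC SDUs are multiplexed into a transport block, higher-priority logical channels are served first. The priorities and sizes below are made-up values, and real NR MAC uses a prioritized-bit-rate-based procedure rather than this strict-priority simplification:

```python
from typing import List, Tuple

def multiplex(tb_size: int, channels: List[Tuple[int, int]]) -> List[int]:
    """channels: (priority, pending_bytes), lower number = higher priority.

    Returns the bytes granted to each channel, in input order.
    """
    grants = [0] * len(channels)
    remaining = tb_size
    # Serve logical channels in priority order until the transport block is full.
    for i in sorted(range(len(channels)), key=lambda i: channels[i][0]):
        take = min(channels[i][1], remaining)
        grants[i] = take
        remaining -= take
    return grants

# A 100-byte transport block shared by three logical channels:
assert multiplex(100, [(2, 60), (1, 50), (3, 40)]) == [50, 50, 0]
```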
  • Each logical channel type defines what type of information is delivered.
  • Logical channels are classified into two groups: Control Channel and Traffic Channel.
  • The control channels are used to transmit only control plane information and are as follows.
  • -BCCH (Broadcast Control Channel)
  • -PCCH (Paging Control Channel)
  • -CCCH (Common Control Channel)
  • -DCCH (Dedicated Control Channel)
  • The traffic channels are used to transmit only user plane information:
  • -DTCH (Dedicated Traffic Channel)
  • DTCH can exist in both uplink and downlink.
  • The connections between logical channels and transport channels are as follows.
  • BCCH can be mapped to BCH.
  • BCCH may be mapped to DL-SCH.
  • PCCH may be mapped to PCH.
  • CCCH may be mapped to DL-SCH.
  • DCCH may be mapped to DL-SCH.
  • DTCH may be mapped to DL-SCH.
  • CCCH may be mapped to UL-SCH.
  • DCCH may be mapped to UL-SCH.
  • DTCH may be mapped to UL-SCH.
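The logical-to-transport channel mappings listed above can be collected into a simple lookup table; the following sketch is purely illustrative:

```python
# Mappings taken directly from the list above.
CHANNEL_MAPPING = {
    "downlink": {
        "BCCH": {"BCH", "DL-SCH"},
        "PCCH": {"PCH"},
        "CCCH": {"DL-SCH"},
        "DCCH": {"DL-SCH"},
        "DTCH": {"DL-SCH"},
    },
    "uplink": {
        "CCCH": {"UL-SCH"},
        "DCCH": {"UL-SCH"},
        "DTCH": {"UL-SCH"},
    },
}

def can_map(direction: str, logical: str, transport: str) -> bool:
    """Return True if the logical channel may be mapped to the transport channel."""
    return transport in CHANNEL_MAPPING.get(direction, {}).get(logical, set())
```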
  • the RLC sublayer supports three transmission modes: transparent mode (TM), unacknowledged mode (UM), and acknowledgment mode (AM).
  • the RLC setting can be applied for each logical channel.
  • TM or AM mode is used for SRB, whereas UM or AM mode is used for DRB.
  • The RLC sublayer performs: transfer of upper layer PDUs; sequence numbering independent of PDCP; error correction through automatic repeat request (ARQ); segmentation and re-segmentation; reassembly of SDUs; RLC SDU discard; and RLC re-establishment.
  • The PDCP sublayer for the user plane performs: sequence numbering; header compression and decompression (only for robust header compression (RoHC: Robust Header Compression)); user data transfer; reordering and duplicate detection (if delivery to a layer above PDCP is required); PDCP PDU routing (in the case of split bearers); retransmission of PDCP SDUs; ciphering and deciphering; PDCP SDU discard; PDCP re-establishment and data recovery for RLC AM; and PDCP PDU duplication.
  • The PDCP sublayer for the control plane additionally performs: sequence numbering; ciphering, deciphering, and integrity protection; control plane data transfer; duplicate detection; and PDCP PDU duplication.
  • Duplication in PDCP involves sending the same PDCP PDU(s) twice: one copy is delivered to the original RLC entity, and the second to an additional RLC entity. In this case, the original PDCP PDU and the corresponding duplicate are not transmitted on the same transport block.
  • The two different logical channels may belong to the same MAC entity (in the case of CA) or to different MAC entities (in the case of DC). In the former case, logical channel mapping restrictions are used to ensure that the original PDCP PDU and its duplicate are not transmitted on the same transport block.
  • the SDAP sublayer performs i) mapping between QoS flow and data radio bearer, and ii) QoS flow identifier (ID) marking in downlink and uplink packets.
  • a single protocol object of SDAP is set for each individual PDU session, but in the case of dual connectivity (DC), two SDAP objects can be set.
  • The RRC sublayer performs: broadcast of system information related to the AS (Access Stratum) and NAS (Non-Access Stratum); paging initiated by the 5GC or NG-RAN; establishment, maintenance, and release of an RRC connection between the UE and the NG-RAN (additionally including modification and release of carrier aggregation, and modification and release of dual connectivity between E-UTRAN and NR or within NR); security functions including key management; establishment, configuration, maintenance, and release of SRB(s) and DRB(s); handover and context transfer; UE cell selection and reselection, and control of cell selection/reselection; mobility functions including inter-RAT mobility; QoS management functions; UE measurement reporting and reporting control; detection of radio link failure and recovery from radio link failure; and NAS message transfer from the NAS to the UE and from the UE to the NAS.
  • 3GPP TS documents include several requirements describing these procedures. For example, section 5.1.2 of 3GPP TS 33.501 (version 15.2.0) describes the general requirements for authentication, and sections 5.2.4 and 5.2.5 include the UE security requirements.
  • the 5G system must meet the following requirements.
  • In relation to subscription authentication, the serving network must authenticate the subscription permanent identifier (SUPI) in the process of authentication and key agreement between the UE and the network.
  • With respect to serving network authentication, the UE must authenticate the serving network identifier through implicit key authentication.
  • Here, implicit key authentication means that authentication is provided through the successful use, in a subsequent procedure, of a key resulting from authentication and key agreement. The preceding requirement does not mean that the UE authenticates a specific entity in the serving network, for example, the AMF.
  • In connection with authorization of the terminal, the serving network must authorize the terminal through the subscription profile obtained from the home network.
  • the authorization of the terminal is based on the authenticated SUPI.
  • In order to meet regulatory requirements in some regions relating to unauthenticated emergency services, the 5G system must support unauthenticated access for emergency services. This requirement applies only to serving networks subject to regulatory requirements for unauthenticated emergency services for all mobile equipments (MEs). Serving networks located in regions where unauthenticated emergency services are not allowed shall not support this function.
  • the following requirements apply to the storage and processing of subscriber credentials used to access 5G networks.
  • Subscriber credentials must be integrity protected within the terminal using a tamper-resistant secure hardware component.
  • The long-term key(s) of the subscriber credentials must be confidentiality protected within the terminal using a tamper-resistant secure hardware component.
  • The long-term key(s) of the subscriber credentials must never be available outside of the tamper-resistant secure hardware component.
  • The authentication algorithm used for subscriber authentication must always be executed within the tamper-resistant secure hardware component.
  • A security evaluation should be performed in accordance with the security requirements of the tamper-resistant secure hardware component.
  • the security evaluation system is outside the scope of 3GPP.
  • the terminal should support 5G-GUTI (Globally Unique Temporary Identifier).
  • The SUPI should not be transmitted in clear text over the 5G-RAN, except for routing information such as the Mobile Country Code (MCC) and Mobile Network Code (MNC).
  • the home network public key must be stored in the USIM.
  • the protection scheme identifier should be stored in the USIM.
  • The ME must support the null-scheme.
  • The calculation of the SUCI (SUbscription Concealed Identifier) is performed by the USIM or by the mobile device, as determined by an indication from the home operator provisioned in the USIM. If there is no such indication, the calculation is performed by the mobile device.
  • the provision and update of the home network public key in USIM is controlled by the home network operator. Providing and updating such a home network public key is outside the scope of this document. This can be implemented, for example, by an OTA (Over the Air) mechanism.
  • Subscriber privacy enablement must be under the control of the subscriber's home network.
  • the terminal may transmit a permanent equipment identifier (PEI: Permanent Equipment Identifier) within the NAS protocol after the NAS security context is established.
  • the routing identifier should be stored in the USIM. If the routing identifier does not exist in USIM, the mobile device must set it to the default value defined in TS 23.003.
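As a rough illustration of the SUCI-related requirements above (calculation in the USIM or the ME depending on the home operator's indication, and the null-scheme which applies no concealment), consider the following hypothetical sketch. The function names and identifier format are assumptions, and real protection schemes conceal the identifier using the home network public key:

```python
from typing import Optional

def suci_calculated_by(usim_indication: Optional[str]) -> str:
    # The home operator's indication provisioned in the USIM decides where
    # the SUCI is calculated; absent any indication, the ME calculates it.
    if usim_indication in ("USIM", "ME"):
        return usim_indication
    return "ME"

def conceal_supi(supi: str, scheme: str) -> str:
    # null-scheme: the output equals the input identifier (no concealment).
    if scheme == "null-scheme":
        return supi
    raise NotImplementedError("real schemes conceal with the home network public key")
```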
  • the purpose of the basic authentication and key negotiation procedure is to enable mutual authentication between the terminal and the network, and to provide a key material that can be used between the terminal and the serving network within a subsequent security procedure.
  • The key material generated in the basic authentication and key negotiation procedure results in an anchor key, called K_SEAF, provided by the AUSF (Authentication Server Function) of the home network to the Security Anchor Function (SEAF) of the serving network.
  • Keys for one or more security contexts can be extracted from K_SEAF without performing new authentication.
  • the authentication execution through the 3GPP access network can provide a key for establishing security between the terminal and a Non-3GPP Inter-Working Function (N3IWF) used for untrusted non-3GPP access.
  • K_SEAF is extracted from an intermediate key called K_AUSF.
  • The AUSF may safely store K_AUSF, according to the home operator's policy on using such a key. This function is an optimization that may be useful, for example, when a terminal registers with different serving networks for 3GPP-defined access and untrusted non-3GPP access (which is possible according to TS 23.501). Discussion of the details of this function is not within the scope of this document.
  • Subsequent authentication based on the K_AUSF stored in the AUSF provides a weaker guarantee than authentication directly involving the ARPF (Authentication Credential Repository and Processing Function) and USIM. It is comparable to fast re-authentication in EAP-AKA (Extensible Authentication Protocol - Authentication and Key Agreement).
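The key extraction chain discussed above (K_AUSF as an intermediate key, K_SEAF as the anchor key bound to the serving network, and K_AMF derived further with the SUPI and ABBA parameter) can be sketched with a generic HMAC-SHA256 KDF. The input labels and serving network name below are illustrative placeholders, not the normative FC codes and parameter encodings of TS 33.501 Annex A:

```python
import hashlib
import hmac

def kdf(key: bytes, *params: bytes) -> bytes:
    # Generic HMAC-SHA256 key derivation; the real 3GPP KDF additionally
    # uses an FC code and length-prefixed parameters (TS 33.501 Annex A).
    return hmac.new(key, b"".join(params), hashlib.sha256).digest()

k_ausf = bytes(32)  # stand-in for the key established during authentication
# Anchor key binding: the serving network name enters the derivation, which
# ties K_SEAF to the serving network (implicit serving network authentication).
k_seaf = kdf(k_ausf, b"5G:mnc015.mcc234.3gppnetwork.org")
# K_AMF is further bound to the SUPI and the ABBA parameter.
k_amf = kdf(k_seaf, b"supi-placeholder", bytes.fromhex("0000"))
```

Because the serving network name is an input, a network claiming a different identity derives a different (useless) anchor key, which is the binding property the text describes.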
  • The terminal and the serving network must support the EAP-AKA' and 5G AKA authentication methods.
  • USIM must be in the Universal Integrated Circuit Card (UICC).
  • UICC may or may not be removable.
  • For non-3GPP access networks, the USIM applies in the case of a terminal having 3GPP access capability.
  • The credentials used with EAP-AKA' and 5G AKA for non-3GPP access networks must reside in the UICC.
  • The EAP framework is specified in RFC 3748. It defines the following roles: peer, pass-through authenticator, and back-end authentication server.
  • The back-end authentication server operates as the EAP server, which terminates the EAP authentication method with the peer.
  • When EAP-AKA' is used in the 5G system, the EAP framework is supported in the following way.
  • the terminal plays the role of a peer.
  • -AUSF serves as a back-end authentication server.
  • The basic authentication and key negotiation procedure binds K_SEAF to the serving network. The binding to the serving network prevents one serving network from claiming to be another serving network, thus providing implicit serving network authentication to the terminal.
  • This implicit serving network authentication is applied to both 3GPP and non-3GPP access networks because it must be provided to the terminal regardless of access network technology.
  • The anchor key provided to the serving network must be specific to authentication occurring between the terminal and the 5G core network. That is, K_SEAF must be cryptographically separate from the key K_ASME delivered from the home network to the serving network.
  • Anchor key binding must be performed by including the parameter "serving network name" through the key extraction chain from the long-term subscriber key to the anchor key. The definition of the serving network name value will be described later.
  • the key extraction chain leading from the long-term subscriber key to the anchor key will be described below in relation to each (class) of the authentication method.
  • the key extraction rules are described in Annex A. Note that parameters like "Access network type” are not used for anchor key binding. This is because the 5G core process is not interested in the access network.
  • the "serving network name” is used to derive the anchor key. This serves the following dual purposes. That is, it is confirmed that the anchor key is bound to the serving network including the SN (Serving Network) ID, and that the anchor key is specified in the authentication between the 5G core network and the terminal by including the service code set to 5G.
  • the serving network name has a similar purpose of binding RES * (response) and XESS * (the expected response) to the serving network.
  • the serving network name is the combination of the SN ID and the service code with the service code prefixed with the SN :. Parameters such as access network type are not used in the serving network name. This is because the 5G core process is not interested in the access network.
  • the terminal should configure the name of the serving network as follows.
  • SEAF should form the name of the serving network as follows:
  • The network identifier must be set to the SN ID of the serving network to which the authentication data is sent by the AUSF.
  • The AUSF receives the serving network name from the SEAF. Before using the serving network name, the AUSF verifies that the SEAF is authorized to use it, as specified below.
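The construction of the serving network name described above (service code "5G" followed by a colon and the SN ID) can be sketched as follows; the SN ID value used in the usage example is only an assumed format:

```python
def serving_network_name(sn_id: str, service_code: str = "5G") -> str:
    # Concatenate the service code and the SN ID, separated by ":".
    return f"{service_code}:{sn_id}"

# Usage with an assumed SN ID format:
name = serving_network_name("mnc015.mcc234.3gppnetwork.org")
```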
  • SEAF may initiate authentication with a terminal during an arbitrary procedure for establishing a signal connection with the terminal according to SEAF policy.
  • the terminal must use SUCI or 5G-GUTI for registration requests.
  • The SEAF should invoke the Nausf_UEAuthentication service by sending a Nausf_UEAuthentication_Authenticate Request message to the AUSF whenever authentication is initiated.
  • The Nausf_UEAuthentication_Authenticate Request message must contain one of the following:
  • If the SEAF has a valid 5G-GUTI and re-authenticates the terminal, the SEAF must include the SUPI in the Nausf_UEAuthentication_Authenticate Request. Otherwise, the SUCI is included in the Nausf_UEAuthentication_Authenticate Request message.
  • the SUPI / SUCI architecture is part of the stage 3 protocol design.
  • The Nausf_UEAuthentication_Authenticate Request message should further include the serving network name.
  • a local policy for selecting an authentication method need not be specified for each terminal, but may be the same for all terminals.
  • Upon receiving the Nausf_UEAuthentication_Authenticate Request message, the AUSF must verify that the requesting SEAF in the serving network is authorized to use the serving network name included in the Nausf_UEAuthentication_Authenticate Request, by comparing the serving network name with the expected serving network name. The AUSF must temporarily store the received serving network name. If the serving network is not authorized to use the serving network name, the AUSF should respond with "serving network not authorized" in the Nausf_UEAuthentication_Authenticate Response.
  • If a SUCI is received, the SIDF (Subscription Identifier De-concealing Function) de-conceals the SUCI to obtain the SUPI.
  • UDM / ARPF should select an authentication method based on subscription data.
  • the Nudm_UEAuthentication_Get response in response to the Nudm_UEAuthentication_Get request and the Nausf_UEAuthentication_Authenticate response message in response to the Nausf_UEAuthentication_Authenticate request message are described as part of the following authentication procedure.
  • The UDM/ARPF must first generate an authentication vector with the Authentication Management Field (AMF) separation bit set to "1", as defined in TS 33.102.
  • UDM / ARPF must calculate CK '(Cipher Key) and IK' (Integrity Key) and replace CK and IK with CK 'and IK' according to the normative Annex A.
  • The UDM subsequently returns the transformed authentication vector AV' (RAND, AUTN, XRES, CK', IK') to the AUSF that sent the Nudm_UEAuthentication_Get Request, using the Nudm_UEAuthentication_Get Response message, with an indication that the AV' is to be used for EAP-AKA'.
  • The <network name> is a concept from RFC 5448. It is carried in the AT_KDF_INPUT attribute of EAP-AKA'.
  • The value of the <network name> parameter is not defined in RFC 5448, but in the 3GPP specifications: for EPS it is defined as the "access network identity" in TS 24.302, and for 5G it is defined as the "serving network name".
  • UDM includes SUPI in the Nudm_UEAuthentication_Get response.
  • AUSF and UE must proceed as described in RFC 5448 until AUSF is ready to send EAP-Success.
  • AUSF should send EAP-Request / AKA'-Challenge message to SEAF through Nausf_UEAuthentication_Authenticate response message.
  • The SEAF should set the ABBA (Anti-Bidding down Between Architectures) parameter as defined in Annex A.7.1.
  • SEAF must transparently transmit the EAP-Request / AKA'-Challenge message to the UE in the NAS message authentication request message.
  • the ME must deliver the RAND (random challenge) and AUTN (Authentication Token) received in the EAP-Request / AKA'-Challenge message to the USIM.
  • This message includes the ngKSI and ABBA parameters. That is, the SEAF must include the ngKSI and ABBA parameters in all EAP-authentication request messages. The ngKSI is used by the terminal and the AMF to identify the partial native security context created when authentication is successful.
  • The SEAF must determine that the authentication method used is an EAP method by evaluating the type of authentication method based on the Nausf_UEAuthentication_Authenticate Response message.
  • Upon receiving the RAND and AUTN, the USIM should verify the freshness of the AV' by checking whether the AUTN can be accepted, as described in TS 33.102. If so, the USIM computes the response RES. The USIM must return RES, CK, and IK to the ME. If the USIM calculates Kc (i.e., GPRS Kc) from CK and IK using conversion function c3 as described in TS 33.102 and sends it to the ME, the ME ignores the GPRS Kc and does not store it in the USIM or the ME. The ME should derive CK' and IK' according to Annex A.
  • the terminal should transmit the EAP-Response / AKA'-Challenge message to the SEAF through the NAS message Auth-Resp message.
  • The SEAF must transparently forward the EAP-Response/AKA'-Challenge message to the AUSF through the Nausf_UEAuthentication_Authenticate Request message.
  • The AUSF must verify the message; if the AUSF has successfully verified it, the AUSF should proceed as follows, and otherwise it should return an error.
  • The AUSF and the terminal can exchange EAP-Request/AKA'-Notification and EAP-Response/AKA'-Notification messages through the SEAF.
  • SEAF must communicate these messages transparently.
  • EAP-AKA Notifications described in RFC 4187 and EAP Notifications described in RFC 3748 may be used at any time. These notifications can be used, for example, when displaying a protected result or when an EAP server detects an error in a received EAP-AKA response.
  • The AUSF derives the EMSK (Extended Master Session Key) from CK' and IK', as described in RFC 5448 and Annex F. The AUSF uses the first 256 bits of the EMSK as K_AUSF and calculates K_SEAF from K_AUSF. The AUSF should send an EAP-Success message to the SEAF in the Nausf_UEAuthentication_Authenticate Response message, which also includes K_SEAF. If the AUSF received a SUCI from the SEAF when authentication was initiated, the AUSF must include the SUPI in the Nausf_UEAuthentication_Authenticate Response message.
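The EMSK handling described above (taking the first 256 bits of the EMSK as K_AUSF) can be sketched as follows; the EMSK value is a dummy, since the real EMSK is derived from CK' and IK' per RFC 5448:

```python
EMSK_BITS = 512  # EAP methods export a 512-bit EMSK

def k_ausf_from_emsk(emsk: bytes) -> bytes:
    # The AUSF (and likewise the ME) use the first 256 bits of the EMSK as K_AUSF.
    assert len(emsk) * 8 == EMSK_BITS
    return emsk[:32]

emsk = bytes(range(64))  # dummy EMSK; the real one comes from CK' and IK'
k_ausf = k_ausf_from_emsk(emsk)
```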
  • the SEAF should send the EAP success message to the terminal as an N1 message.
  • This message also includes ngKSI and ABBA parameters.
  • The SEAF should set the ABBA (Anti-Bidding down Between Architectures) parameter as defined in Annex A.7.1.
  • The SEAF derives K_AMF from K_SEAF, the ABBA parameter, and the SUPI according to Annex A.7, and sends it to the AMF.
  • Upon receiving the EAP-Success message, the UE derives the EMSK from CK' and IK', as described in RFC 5448 and Annex F.
  • The ME uses the first 256 bits of the EMSK as K_AUSF, and calculates K_SEAF in the same way as the AUSF.
  • The terminal should derive K_AMF from K_SEAF, the ABBA parameter, and the SUPI, according to Annex A.7.
  • If the EAP-Response/AKA'-Challenge message is not successfully verified, the subsequent AUSF behavior is determined according to the home network's policy. If the AUSF and SEAF determine that the authentication was successful, the SEAF provides the ngKSI and K_AMF to the AMF.
  • 5G AKA enhances EPS AKA by providing the home network with evidence of successful authentication of the UE from the visited network. This evidence is transmitted by the visited network in the authentication confirmation message. 5G AKA does not request multiple 5G AV (Authentication Vectors) and does not pre-fetch 5G AV in the home network for future use.
  • the authentication procedure in 5G-AKA is as follows.
  • The UDM/ARPF (Authentication Credential Repository and Processing Function) must create a 5G HE (Home Environment) AV.
  • The UDM/ARPF performs this by creating an AV with the AMF (Authentication Management Field) separation bit set to "1", as defined in TS 33.102.
  • The UDM/ARPF should derive K_AUSF and calculate XRES* (the expected response) according to Annex A.4.
  • The UDM/ARPF should generate the 5G HE AV from RAND, AUTN, XRES*, and K_AUSF.
  • The UDM must return the 5G HE AV to the AUSF, with an indication that the 5G HE AV is to be used for 5G-AKA, in the Nudm_UEAuthentication_Get Response. If a SUCI was included in the Nudm_UEAuthentication_Get Request, the UDM includes the SUPI in the Nudm_UEAuthentication_Get Response.
  • The AUSF must temporarily store the XRES* together with the received SUCI or SUPI.
  • The AUSF may store K_AUSF.
  • The AUSF then generates the 5G AV from the 5G HE AV received from the UDM/ARPF by computing HXRES* (the hash of the expected response) from XRES* according to Annex A.5 and K_SEAF from K_AUSF according to Annex A.6, and by replacing XRES* with HXRES* and K_AUSF with K_SEAF in the 5G HE AV.
  • The AUSF should then remove K_SEAF and return the 5G SE AV (RAND, AUTN, HXRES*) to the SEAF through the Nausf_UEAuthentication_Authenticate Response message.
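The derivation of HXRES* mentioned above can be sketched as follows, assuming the Annex A.5 construction in which HXRES* is the 128 most significant bits of SHA-256(RAND || XRES*); all values here are dummies:

```python
import hashlib

def hxres_star(rand: bytes, xres_star: bytes) -> bytes:
    # HXRES* = 128 most significant bits of SHA-256(RAND || XRES*).
    return hashlib.sha256(rand + xres_star).digest()[:16]

rand = bytes(16)       # dummy RAND
xres_star = bytes(16)  # dummy XRES*
h = hxres_star(rand, xres_star)
```

Because HXRES* is a one-way hash of XRES*, the serving network can check the terminal's response without ever holding XRES* itself, which is why the home network retains the stronger check.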
  • SEAF must send RAND and AUTN to the terminal through NAS message Authentication-Request.
  • This message should also include the ngKSI, which will be used by the terminal and the AMF to identify K_AMF and the partial native security context created when authentication is successful.
  • This message should also include ABBA parameters.
  • the SEAF should set the ABBA parameters, as defined in Annex A.7.1.
  • the ME must pass the RAND and AUTN received in the NAS message Authentication-Request to the USIM. The ABBA parameter is included to enable bidding-down protection of the security features described later.
  • upon receiving RAND and AUTN, the USIM verifies the freshness of the 5G AV by checking whether AUTN can be accepted, as described in TS 33.102. If so, the USIM computes the response RES and returns RES, CK, and IK to the ME. If the USIM computes Kc (i.e., the GPRS Kc) from CK and IK using conversion function c3, as described in TS 33.102, and sends it to the ME, the ME ignores such a GPRS Kc and does not store it on the USIM or in the ME. The ME then computes RES* from RES according to Annex A.4, and computes K AUSF from CK and IK.
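The ME's computation of RES* can be sketched with the generic 3GPP key derivation function. The function code 0x6B, the parameter order (serving network name, RAND, RES), and taking the 128 least significant bits of the output are assumptions drawn from TS 33.501 Annex A.4 and TS 33.220, not stated in this document:

```python
import hashlib
import hmac

def kdf(key: bytes, fc: int, params) -> bytes:
    """Generic 3GPP KDF (TS 33.220): HMAC-SHA-256 over FC || P0 || L0 || P1 || L1 ..."""
    s = bytes([fc])
    for p in params:
        s += p + len(p).to_bytes(2, "big")  # each parameter followed by its 2-byte length
    return hmac.new(key, s, hashlib.sha256).digest()

def derive_res_star(ck: bytes, ik: bytes, snn: bytes, rand: bytes, res: bytes) -> bytes:
    """RES* (assumed per TS 33.501 Annex A.4, FC=0x6B): the 128 least
    significant bits of KDF(CK || IK, SNN, RAND, RES)."""
    out = kdf(ck + ik, 0x6B, [snn, rand, res])
    return out[16:]  # least significant 128 bits of the 256-bit output
```

Binding the serving network name into RES* is what lets the home network detect a response replayed from a different serving network.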
  • MEs accessing 5G must ensure that the "separation bit" in the AMF field of AUTN is set to 1 during the authentication process.
  • the "separation bit" is bit 0 of the AMF field of AUTN. This separation bit in the AMF field of AUTN can no longer be used for operator-specific purposes, as described in TS 33.102, Annex F.
  • the UE must return RES * to SEAF in the NAS message authentication response.
  • the SEAF computes HRES* from RES* in accordance with Annex A.5 and compares HRES* with HXRES*. If they coincide, the SEAF considers the authentication successful from the serving network perspective. Otherwise, the SEAF proceeds as described later for RES* verification failures in the SEAF, the AUSF, or both. If RES* is never received by the SEAF (for example, because the terminal could not be reached), the SEAF considers the authentication to have failed and indicates the failure to the AUSF.
  • SEAF should send RES * to AUSF in the Nausf_UEAuthentication_Authenticate request message together with the corresponding SUCI or SUPI received from the terminal.
  • when the AUSF receives a Nausf_UEAuthentication_Authenticate request message containing RES*, it may verify whether the AV has expired. If the AV has expired, the AUSF may consider the authentication a failure from the home network point of view. The AUSF compares the received RES* with the stored XRES*. If RES* and XRES* are equal, the AUSF considers the authentication successful from the home network perspective.
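The home-network check at the AUSF reduces to an expiry test followed by an equality test. A minimal sketch is shown below; the constant-time comparison is our defensive addition, not something the text above mandates:

```python
import hmac

def ausf_verify(res_star: bytes, stored_xres_star: bytes, av_expired: bool) -> bool:
    """AUSF-side verification: reject expired AVs, then compare the received
    RES* with the stored XRES* in constant time."""
    if av_expired:
        return False  # authentication failure from the home network point of view
    return hmac.compare_digest(res_star, stored_xres_star)

assert ausf_verify(b"\xaa" * 16, b"\xaa" * 16, av_expired=False)
assert not ausf_verify(b"\xaa" * 16, b"\xbb" * 16, av_expired=False)
assert not ausf_verify(b"\xaa" * 16, b"\xaa" * 16, av_expired=True)
```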
  • the AUSF must indicate to the SEAF, in the Nausf_UEAuthentication_Authenticate response, whether authentication was successful from the home network perspective. If authentication was successful, the K SEAF is sent to the SEAF in the Nausf_UEAuthentication_Authenticate response. If the AUSF received a SUCI from the SEAF when authentication started and authentication was successful, the AUSF also includes the SUPI in the Nausf_UEAuthentication_Authenticate response.
  • the K SEAF key received in the Nausf_UEAuthentication_Authenticate response message becomes the anchor key in the sense of the key hierarchy.
  • the SEAF should then derive K AMF from K SEAF, the ABBA parameter, and SUPI according to Annex A.7, and provide the ngKSI and K AMF to the AMF.
  • the SEAF provides the ngKSI and K AMF to the AMF only after it has received the Nausf_UEAuthentication_Authenticate response message containing the SUPI. No communication service is provided to the terminal until the SUPI is known to the serving network.
  • the SEAF computes HRES* from RES* according to Annex A.5 and compares HRES* with HXRES*. If they do not match, the SEAF considers the authentication a failure.
  • SEAF proceeds to step 10 of FIG. 16, and after receiving a Nausf_UEAuthentication_Authenticate response message from AUSF in step 12 of FIG. 16, proceeds as follows.
  • if a SUCI was used by the terminal in the initial NAS message, the SEAF rejects the authentication by sending an authentication reject to the terminal. If a 5G-GUTI was used by the terminal in the initial NAS message, the SEAF / AMF should initiate an identification procedure with the terminal to retrieve the SUCI and enable an additional authentication attempt.
  • the SEAF must refuse authentication to the terminal or initiate an identification procedure with the terminal.
  • when 5G AKA is used in step 7 of FIG. 16, or when EAP-AKA' is used in step 5 of FIG. 15: upon reception of RAND and AUTN, if verification of AUTN fails, the USIM indicates the reason for the failure to the ME, and in the case of a synchronization failure also passes the AUTS parameter (see TS 33.102) to the ME.
  • when 5G AKA is used: the ME must respond with a CAUSE value indicating the reason for the failure in a NAS message Authentication Failure.
  • in the case of an AUTN synchronization failure (described in TS 33.102), the UE also includes the AUTS provided by the USIM.
  • upon receiving the authentication failure message, the AMF / SEAF can initiate a new authentication toward the terminal (see TS 24.501).
  • when EAP-AKA' is used: the ME must proceed as described in RFC 4187 and RFC 5448 for EAP-AKA'.
  • upon receiving an authentication failure message with a synchronization failure (AUTS) from the terminal, the SEAF sends a Nausf_UEAuthentication_Authenticate request message with a "synchronization failure indication" to the AUSF, and the AUSF sends a Nudm_UEAuthentication_Get request message to the UDM / ARPF with the following parameters:
  • the SEAF does not react to an unsolicited "synchronization failure indication" message from the terminal. The SEAF does not send a new authentication request to the terminal before receiving a response to the Nausf_UEAuthentication_Authenticate request message with the "synchronization failure indication" from the AUSF (or before a timeout).
  • when the UDM / ARPF receives a Nudm_UEAuthentication_Get request message with a "synchronization failure indication", it behaves as the HE / AuC (authentication center) described in TS 33.102, clause 6.3.5. The UDM / ARPF sends a Nudm_UEAuthentication_Get response message with a new authentication vector, for either EAP-AKA' or 5G-AKA depending on the authentication method applicable to the user, to the AUSF. The AUSF then runs a new authentication procedure with the terminal according to the authentication method applicable to the user.
  • to use biometric information such as a fingerprint for authentication, the terminal requires a sensor for recognizing the biometric information, together with a processor and storage capable of processing that information securely.
  • biometric information must not be processable in a way that allows the individual to be traced back from it, and exposure of the biometric information itself through a network operator would be a serious privacy threat to the person concerned.
  • a secure method is proposed for allowing a terminal to recognize a user through biometric information and use it for 3GPP-access.
  • the present invention provides a secure method for a terminal to recognize a user through biometric information and use it for 3GPP access, together with a method for securely registering the biometric information, which is a prerequisite for this.
  • the terminal has a sensor device capable of recognizing biometric information and a structure (eg, Trusted Execution Environment, Secure Element, etc.) in which biometric data recognized from the sensor device can be safely processed and stored.
  • a network operator can use a separate secure channel or method for subscriber verification during the registration of biometric information. For example, the subscriber may be verified through a voice call, or by sending a CAPTCHA image to the subscriber over an Internet data connection.
  • the network biometric authentication server is a securely protected server operated by a network operator in a 3GPP system, and is assumed to be, for example, a UDM, ARPF, or similar server.
  • the user goes through a subscriber verification procedure over a secure channel separate from the network operator. If it passes, the user receives a challenge code for registration from the network authentication server through the terminal (for example, a numeric string or character string that the user can type into the terminal directly; depending on the channel type, any kind of information that the user can enter into the terminal through their own intervention may be received).
  • the challenge code means a phrase used for identity authentication: the client must respond correctly to a question sent by the server, which is the authenticating party. With this code, the user starts the biometric authentication registration process through the terminal.
  • the terminal establishes a user plane with the network authentication server and, through it, receives the first public key of the authentication server from the network authentication server.
  • the first public key is generated by the authentication server, which is an authentication subject, and is used as an encryption key for data transmission between a terminal and a network authentication server, which will be described later.
  • a first secret key for decrypting data encrypted by the first public key is also generated in the authentication server.
  • the user checks whether the connected network presented by the terminal is a properly registered home network, and inputs the first biometric information through the sensor of the terminal.
  • a confirmation challenge code corresponding to the registration challenge code is input to the terminal. This must be done within a predetermined, limited time; otherwise the entire procedure is discontinued.
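The time-limited confirmation step can be sketched as follows. The 120-second window is an illustrative value (the text only says "a predetermined, limited time"), and the constant-time comparison is our addition:

```python
import hmac
import time

REGISTRATION_WINDOW_S = 120.0  # illustrative limit; the document does not fix a value

def check_confirmation(reg_code: str, confirm_code: str,
                       issued_at: float, now: float) -> bool:
    """Abort if the confirmation challenge code arrives late or differs
    from the registration challenge code issued earlier."""
    if now - issued_at > REGISTRATION_WINDOW_S:
        return False  # the entire procedure is discontinued
    return hmac.compare_digest(reg_code.encode(), confirm_code.encode())

t0 = time.monotonic()
assert check_confirmation("493-201", "493-201", t0, t0 + 5.0)      # in time, matches
assert not check_confirmation("493-201", "493-201", t0, t0 + 600.0)  # too late
assert not check_confirmation("493-201", "999-999", t0, t0 + 5.0)    # wrong code
```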
  • the terminal stores the first biometric information received from the user in a secure processing space and, based on it, generates a second public key and a second private key to be used for future authentication of the user, storing them either in the same space where the biometric information is processed or in a separate secure space of the same level. The terminal then encrypts the confirmation challenge code entered by the user and the terminal's second public key with the first public key received from the network authentication server, and transmits them to the network authentication server.
  • the network authentication server decrypts the messages received from the terminal with the first secret key and compares the registration challenge code with the confirmation challenge code. If they match, the request is treated as a biometric authentication registration by a legitimate subscriber, and the terminal's second public key is stored securely for future terminal authentication. The network authentication server then delivers a result message for the biometric authentication registration to the terminal, completing the registration procedure.
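The server-side registration decision can be sketched as below. This is a toy model: decryption with the first secret key is assumed to have already happened, the "public key" is opaque bytes rather than a real asymmetric key, and the key index format is invented for illustration:

```python
import secrets

class NetworkAuthServer:
    """Toy sketch of the registration endpoint. A real deployment would use
    an asymmetric key pair and an encrypted channel; here the incoming
    message is assumed to be already decrypted."""
    def __init__(self):
        self.registered_keys = {}  # key index -> subscriber's second public key

    def register(self, reg_code: str, confirm_code: str, second_public_key: bytes):
        if confirm_code != reg_code:
            return None  # not a legitimate subscriber: abort registration
        key_index = secrets.token_hex(4)  # index returned in the result message
        self.registered_keys[key_index] = second_public_key
        return key_index

server = NetworkAuthServer()
idx = server.register("493-201", "493-201", b"pubkey-bytes")
assert idx is not None and server.registered_keys[idx] == b"pubkey-bytes"
assert server.register("493-201", "wrong", b"other") is None
```

The returned key index corresponds to the index (and/or separate ID) carried in the result message described next.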
  • the result message includes a key index and / or a separate ID to be used for authentication in the future, which are allocated from the network authentication server.
  • the terminal acquires and verifies the user's second biometric information and, upon confirmation, generates a third secret key based on the second biometric information. Using the third secret key, it signs the challenge code for authentication that the network sent in response to the service request, and returns a signature result message.
  • the signature result message may include a timestamp, a serving network name, and the like.
  • the network authentication server uses the second public key stored during the registration procedure to confirm that the challenge code for authentication was properly signed, and can also check the timestamp and serving network name for additional security. If all of the checked results are valid, it determines that the biometric authentication of the terminal has succeeded.
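The authentication-phase check (signature over the challenge, plus timestamp and serving network name) can be sketched as follows. HMAC stands in for the asymmetric signature the document actually describes, the 60-second freshness window is an invented illustrative value, and the serving network name format is an assumption:

```python
import hashlib
import hmac
import time

MAX_SKEW_S = 60.0  # illustrative freshness window; not specified by the document

def sign(key: bytes, challenge: bytes, timestamp: float, snn: bytes) -> bytes:
    """Stand-in for the terminal's signature with the third secret key.
    A real system would use an asymmetric signature, not HMAC."""
    msg = challenge + str(timestamp).encode() + snn
    return hmac.new(key, msg, hashlib.sha256).digest()

def server_verify(key: bytes, challenge: bytes, timestamp: float, snn: bytes,
                  sig: bytes, expected_snn: bytes, now: float) -> bool:
    """Server check: serving network name, timestamp freshness, then signature."""
    if snn != expected_snn or abs(now - timestamp) > MAX_SKEW_S:
        return False
    return hmac.compare_digest(sign(key, challenge, timestamp, snn), sig)

k = b"\x07" * 32
t = time.time()
snn = b"5G:mnc012.mcc345.3gppnetwork.org"  # illustrative serving network name
sig = sign(k, b"challenge", t, snn)
assert server_verify(k, b"challenge", t, snn, sig, snn, now=t + 1.0)
assert not server_verify(k, b"challenge", t, b"5G:other", sig, snn, now=t + 1.0)
assert not server_verify(k, b"challenge", t, snn, sig, snn, now=t + 999.0)
```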
  • steps 1 to 6 below form a separate procedure that can replace the procedure described above, or the two can be used in combination (for example, performing steps 1 to 3 above, then steps 4 to 6 below, and then steps 4 to 7 above).
  • the user goes through a subscriber verification procedure over a secure channel separate from the network operator. If it passes, the user receives a challenge code for registration from the network authentication server through the terminal (for example, a numeric string or character string that the user can type into the terminal directly; depending on the channel type, any kind of information that the user can enter into the terminal through their own intervention may be received).
  • the challenge code means a phrase used for identity authentication: the client must respond correctly to a question sent by the server, which is the authenticating party. With this code, the user starts the biometric authentication registration process through the terminal.
  • the terminal establishes a user plane with the network authentication server and, through it, receives the first public key of the authentication server from the network authentication server.
  • the first public key is generated by the authentication server, which is an authentication subject, and is used as an encryption key for data transmission between a terminal and a network authentication server, which will be described later.
  • a first secret key for decrypting data encrypted by the first public key is also generated in the authentication server.
  • the user checks whether the connected network presented by the terminal is a properly registered home network, and inputs the first biometric information through the sensor of the terminal.
  • a confirmation challenge code corresponding to the registration challenge code is input to the terminal. This must be done within a predetermined, limited time; otherwise the entire procedure is discontinued.
  • the terminal stores the first biometric information in a secure processing space and, based on it, generates a first master key (corresponding to the key K of the EPS / LTE system) to be used for future authentication of the subscriber. The data needed for the network authentication server to generate a matching second master key is encrypted with the first public key and transmitted to the authentication server. At this time, the confirmation challenge code is also encrypted and transmitted.
  • the network authentication server decrypts the received messages with the first secret key and checks whether the confirmation challenge code and the registration challenge code match. If they do, it generates the second master key based on the received data.
  • the second master key is regarded as the legitimate master key of the user and the terminal, and is stored securely. The network authentication server then transmits the verification result to the terminal, completing the authentication registration procedure based on biometric authentication.
  • the result message may include a key index to be used for future authentication or a separate ID, etc., allocated from the network authentication server.
  • the terminal acquires third biometric information from the user, generates a third master key based on it, and uses the third master key to perform the existing 3GPP 5G system terminal authentication procedure and to generate the keys necessary for communication security.
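Deriving a master key from biometric data, as in this second variant, can be sketched with a standard KDF. PBKDF2 here is a stand-in of our choosing, and a real system would need a fuzzy extractor in front of it, since raw biometric readings vary between captures while a master key must be bit-exact:

```python
import hashlib

def derive_master_key(biometric_template: bytes, salt: bytes) -> bytes:
    """Toy derivation of a 256-bit master key (analogous to the key K of
    EPS/LTE named in the text) from a stabilized biometric template.
    PBKDF2 is a stand-in KDF; the iteration count is illustrative."""
    return hashlib.pbkdf2_hmac("sha256", biometric_template, salt, 100_000)

# Terminal and network authentication server derive the same key from the
# same template material, so the usual 5G key hierarchy can be built on it.
k1 = derive_master_key(b"fingerprint-template-bytes", b"subscriber-salt")
k2 = derive_master_key(b"fingerprint-template-bytes", b"subscriber-salt")
assert k1 == k2 and len(k1) == 32
```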
  • FIG. 18 illustrates a block diagram of a communication device according to an embodiment of the present invention.
  • the wireless communication system includes a network node (X510) and a plurality of terminals (UE) (X520).
  • the network node X510 includes a processor (processor X511), a memory (memory X512), and a communication module (communication module X513) (transceiver (transceiver)).
  • the processor X511 implements the functions, processes, and / or methods proposed in FIGS. 1 to 17 above. Layers of the wired / wireless interface protocol may be implemented by the processor X511.
  • the memory X512 is connected to the processor X511, and stores various information for driving the processor X511.
  • the communication module X513 is connected to the processor X511, and transmits and / or receives wired / wireless signals.
  • as examples of the network node X510, a base station, AMF, SMF, UDM, and the like may correspond.
  • the communication module X513 may include a radio frequency unit (RF) unit for transmitting / receiving radio signals.
  • the terminal X520 includes a processor X521, a memory X522, and a communication module (or RF unit) X523 (transceiver).
  • the processor X521 implements the functions, processes, and / or methods proposed in FIGS. 1 to 17 above.
  • the layers of the radio interface protocol may be implemented by the processor X521.
  • the processor may include a NAS layer and an AS layer.
  • the memory X522 is connected to the processor X521, and stores various information for driving the processor X521.
  • the communication module X523 is connected to the processor X521 to transmit and / or receive wireless signals.
  • the memories X512 and X522 may be inside or outside the processors X511 and X521, and may be connected to the processors X511 and X521 by various well-known means.
  • the network node X510 (for a base station) and / or the terminal X520 may have a single antenna or multiple antennas.
  • FIG. 19 illustrates a block diagram of a communication device according to an embodiment of the present invention.
  • FIG. 19 is a diagram illustrating the terminal of FIG. 18 in more detail above.
  • the terminal includes a processor (or digital signal processor (DSP)) Y610, an RF module (or RF unit) Y635, a power management module Y605, an antenna Y640, a battery Y655, a display Y615, a keypad Y620, a memory Y630, a SIM (Subscriber Identification Module) card Y625 (this component is optional), a speaker Y645, and a microphone Y650.
  • the terminal may also include a single antenna or multiple antennas.
  • the processor Y610 implements the functions, processes, and / or methods proposed in FIGS. 1 to 17 above.
  • the layer of the radio interface protocol may be implemented by the processor Y610.
  • the memory Y630 is connected to the processor Y610 and stores information related to the operation of the processor Y610.
  • the memory Y630 may be inside or outside the processor Y610, and may be connected to the processor Y610 by various well-known means.
  • the user inputs command information such as a phone number, for example, by pressing (or touching) a button of the keypad Y620 or by voice activation using the microphone Y650.
  • the processor Y610 receives such command information and processes it to perform an appropriate function such as dialing a telephone number.
  • the operational data may be extracted from the SIM card Y625 or the memory Y630. Also, the processor Y610 may recognize the user and display command information or driving information on the display Y615 for convenience.
  • the RF module Y635 is connected to the processor Y610, and transmits and / or receives RF signals.
  • the processor Y610 transmits command information to the RF module Y635 to transmit, for example, a radio signal constituting voice communication data.
  • the RF module Y635 includes a receiver and a transmitter to receive and transmit wireless signals.
  • the antenna Y640 functions to transmit and receive wireless signals.
  • the RF module Y635 may forward a received signal to the processor Y610 for processing and convert the signal to baseband.
  • the processed signal may be converted into audible or readable information output through the speaker Y645.
  • the wireless device may be a base station, a network node, a transmitting terminal, a receiving terminal, a wireless device, a wireless communication device, a vehicle, a vehicle equipped with an autonomous driving function, a drone (Unmanned Aerial Vehicle, UAV), an AI (Artificial Intelligence) module, a robot, an Augmented Reality (AR) device, a Virtual Reality (VR) device, an MTC device, an IoT device, a medical device, a fintech device (or financial device), a security device, a climate / environment device, or a device related to other fields of the fourth industrial revolution or to 5G services.
  • a drone may be an aircraft that flies by radio control signals without a person aboard.
  • the MTC device and the IoT device are devices that do not require direct human intervention or manipulation, and may be smart meters, vending machines, thermometers, smart bulbs, door locks, various sensors, and the like.
  • a medical device is a device used for the purpose of diagnosing, treating, alleviating, curing, or preventing a disease, or a device used for the purpose of examining, replacing, or modifying a structure or function, and may be medical equipment, a surgical device, an in vitro diagnostic device, a hearing aid, a procedure device, and the like.
  • a security device is a device installed to prevent a risk that may occur and to maintain safety, and may be a camera, CCTV, black box, or the like.
  • a fintech device is a device that can provide financial services such as mobile payment, and may be a payment device, point of sales (POS), or the like.
  • a climate / environmental device may mean a device that monitors and predicts the climate / environment.
  • the terminal may be a mobile phone, a smartphone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a watch-type terminal (smartwatch), a glass-type terminal (smart glasses), a head mounted display (HMD)), a foldable device, and the like.
  • the HMD is a display device in a form worn on the head, and may be used to implement VR or AR.
  • Embodiments according to the present invention may be implemented by various means, for example, hardware, firmware, software, or a combination thereof.
  • one embodiment of the present invention may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and the like.
  • an embodiment of the present invention may be implemented in the form of a module, procedure, function, etc. that performs the functions or operations described above.
  • the software code can be stored in memory and driven by a processor.
  • the memory is located inside or outside the processor, and can exchange data with the processor by various known means.
  • although the present invention has been described mainly with examples applied to the 3GPP 5G (5th generation) system, it can also be applied to various wireless communication systems other than the 3GPP 5G system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A method and device for performing authentication using biometric information in a wireless communication system are disclosed. Specifically, the method for performing authentication by a user equipment (UE) using biometric information in a wireless communication system, according to one aspect of the present invention, may comprise the steps of: receiving a challenge code for registration from a network authentication server; receiving a first public key from the network authentication server; generating a second public key and a second private key based on first biometric information received as input; encrypting a challenge code for verification and the second public key by means of the first public key; transmitting the encrypted challenge code for verification and the encrypted second public key to the network authentication server; and receiving a biometric information authentication registration result message indicating that the second public key has been stored in the network authentication server.
PCT/KR2019/014525 2018-11-02 2019-10-31 Procédé et dispositif pour effectuer une authentification à l'aide d'informations biométriques dans un système de communication sans fil WO2020091434A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20180133955 2018-11-02
KR10-2018-0133955 2018-11-02

Publications (1)

Publication Number Publication Date
WO2020091434A1 true WO2020091434A1 (fr) 2020-05-07

Family

ID=70462623

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/014525 WO2020091434A1 (fr) 2018-11-02 2019-10-31 Procédé et dispositif pour effectuer une authentification à l'aide d'informations biométriques dans un système de communication sans fil

Country Status (1)

Country Link
WO (1) WO2020091434A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113162903A (zh) * 2021-02-02 2021-07-23 上海大学 网络切片中的基于连接信息的认证方法
CN116528235A (zh) * 2023-06-30 2023-08-01 华侨大学 基于扩展切比雪夫多项式的车地无线通信认证方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004159100A (ja) * 2002-11-06 2004-06-03 Kureo:Kk 暗号通信プログラム、暗号通信システム用サーバシステム、暗号通信方法および暗号通信システム
US20090300364A1 (en) * 2008-05-29 2009-12-03 James Paul Schneider Username based authentication security
KR20180026508A (ko) * 2015-07-02 2018-03-12 알리바바 그룹 홀딩 리미티드 생체 특징에 기초한 보안 검증 방법, 클라이언트 단말, 및 서버
US20180199205A1 (en) * 2016-01-29 2018-07-12 Tencent Technology (Shenzhen) Company Limited Wireless network connection method and apparatus, and storage medium
US20180307888A1 (en) * 2017-04-24 2018-10-25 Samsung Electronics Co., Ltd. Method and apparatus for performing authentication based on biometric information

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004159100A (ja) * 2002-11-06 2004-06-03 Kureo:Kk 暗号通信プログラム、暗号通信システム用サーバシステム、暗号通信方法および暗号通信システム
US20090300364A1 (en) * 2008-05-29 2009-12-03 James Paul Schneider Username based authentication security
KR20180026508A (ko) * 2015-07-02 2018-03-12 알리바바 그룹 홀딩 리미티드 생체 특징에 기초한 보안 검증 방법, 클라이언트 단말, 및 서버
US20180199205A1 (en) * 2016-01-29 2018-07-12 Tencent Technology (Shenzhen) Company Limited Wireless network connection method and apparatus, and storage medium
US20180307888A1 (en) * 2017-04-24 2018-10-25 Samsung Electronics Co., Ltd. Method and apparatus for performing authentication based on biometric information

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113162903A (zh) * 2021-02-02 2021-07-23 上海大学 网络切片中的基于连接信息的认证方法
CN116528235A (zh) * 2023-06-30 2023-08-01 华侨大学 基于扩展切比雪夫多项式的车地无线通信认证方法及系统
CN116528235B (zh) * 2023-06-30 2023-10-20 华侨大学 基于扩展切比雪夫多项式的车地无线通信认证方法及系统

Similar Documents

Publication Publication Date Title
WO2020091281A1 (fr) Procédé et appareil pour effectuer une authentification de serveur mandataire pour une permission d'accès par un terminal dans un système de communication sans fil
WO2020032527A1 (fr) Procédé de réception de signal dans une réinitialisation de système de communication sans fil, et appareil l'utilisant
WO2020067790A1 (fr) Procédé et appareil pour déterminer s'il convient d'effectuer une transmission sur un accès aléatoire ou un octroi configuré dans un système de communication sans fil
WO2020046094A1 (fr) Procédé et appareil de sélection de réseau mobile terrestre public (plmn) d'accès dans un système de communication sans fil
WO2020204536A1 (fr) Procédé permettant à un terminal de se connecter à un réseau dans un système de communication sans fil
WO2020111912A1 (fr) Procédé d'émission et de réception de signal de recherche de mobile dans un système de communications sans fil, et appareil associé
WO2020149522A1 (fr) Ue permettant l'établissement d'une session pdu et twif
WO2020067749A1 (fr) Contrôle d'accès pour la transmission de données
WO2020141956A1 (fr) Procédé de sélection de réseau dans un système de communication sans fil
WO2020256425A1 (fr) Procédé et appareil pour la prise en charge de sessions de pdu redondantes
WO2021172964A1 (fr) Procédé et appareil de récupération après une panne dans un système de communication sans fil
WO2020046093A1 (fr) Procédé et dispositif de sélection de réseau mobile terrestre public (plmn) dans un système de communication sans fil
WO2020009440A1 (fr) Procédé et appareil de détermination de service pouvant être pris en charge dans un système de communications sans fil
WO2021045339A1 (fr) Procédé et appareil permettant de prendre en charge une sécurité pour une mo-edt dans une division cu-du dans un système de communication sans fil
WO2021162507A1 (fr) Procédé et appareil pour transmettre un message de réponse dans un système de communication sans fil
WO2021187783A1 (fr) Prise en charge de continuité de service entre snpn et plmn
WO2020022716A1 (fr) Procédé et dispositif de commande d'état de transmission de données dans un système de communication sans fil
WO2022050659A1 (fr) Commande du trafic
WO2021177734A1 (fr) Support de continuité de service pour transfert entre snpn et plmn
WO2021187881A1 (fr) Indication de prise en charge de réseau pour des informations d'appariement de session de pdu fournies par un ue
WO2021162500A1 (fr) Communication liée à une session pdu multi-accès
WO2021194134A1 (fr) Procédé et appareil de gestion de défaillance de mobilité conditionnelle dans un système de communication sans fil
WO2021025246A1 (fr) Procédé et appareil permettant de gérer des informations de sécurité entre un dispositif sans fil et un réseau pour une procédure de libération rrc rapide dans un système de communication sans fil
WO2020091434A1 (fr) Procédé et dispositif pour effectuer une authentification à l'aide d'informations biométriques dans un système de communication sans fil
WO2020032638A1 (fr) Procédé de réalisation d'un contrôle d'accès et dispositif le prenant en charge

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19879108

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19879108

Country of ref document: EP

Kind code of ref document: A1