WO2020167082A1 - Method and device for receiving a signal by a sidelink terminal in a wireless communication system


Info

Publication number
WO2020167082A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
cell
terminal
data
vehicle
Prior art date
Application number
PCT/KR2020/002203
Other languages
English (en)
Korean (ko)
Inventor
홍의현
서한별
이승민
Original Assignee
엘지전자 주식회사 (LG Electronics Inc.)
Priority date
Filing date
Publication date
Application filed by 엘지전자 주식회사 (LG Electronics Inc.)
Publication of WO2020167082A1


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 5/00 - Arrangements affording multiple use of the transmission path
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 - Network traffic management; Network resource management
    • H04W 28/02 - Traffic management, e.g. flow control or congestion control
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 72/00 - Local resource management
    • H04W 72/04 - Wireless resource allocation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 72/00 - Local resource management
    • H04W 72/50 - Allocation or scheduling criteria for wireless resources
    • H04W 72/54 - Allocation or scheduling criteria for wireless resources based on quality criteria
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 76/00 - Connection management
    • H04W 76/20 - Manipulation of established connections
    • H04W 76/28 - Discontinuous transmission [DTX]; Discontinuous reception [DRX]

Definitions

  • the following description relates to a wireless communication system, and more particularly to a resource allocation method and apparatus for efficient communication between terminals when various DMRS patterns are mixed in an NR V2X system.
  • a wireless communication system is a multiple access system capable of supporting communication with multiple users by sharing available system resources (bandwidth, transmission power, etc.).
  • multiple access systems include a code division multiple access (CDMA) system, a frequency division multiple access (FDMA) system, a time division multiple access (TDMA) system, an orthogonal frequency division multiple access (OFDMA) system, and a single carrier frequency division multiple access (SC-FDMA) system.
  • multi carrier frequency division multiple access (MC-FDMA) systems are also included, and among radio access technologies (RATs), 5G is also included here.
  • the three main requirement areas for 5G are (1) the Enhanced Mobile Broadband (eMBB) area, (2) the Massive Machine Type Communication (mMTC) area, and (3) the Ultra-Reliable and Low Latency Communications (URLLC) area.
  • some use cases may require multiple areas for optimization, while other use cases may focus on only one key performance indicator (KPI).
  • 5G supports these various use cases in a flexible and reliable way.
  • eMBB goes far beyond basic mobile Internet access, covering rich interactive work, media and entertainment applications in the cloud or augmented reality.
  • Data is one of the key drivers of 5G, and the 5G era may be the first in which dedicated voice services disappear.
  • voice is expected to be processed as an application program simply using the data connection provided by the communication system.
  • the main reasons for the increased traffic volume are an increase in content size and an increase in the number of applications requiring high data rates.
  • Streaming services (audio and video), interactive video and mobile Internet connections will become more widely used as more devices connect to the Internet. Many of these applications require always-on connectivity to push real-time information and notifications to the user.
  • Cloud storage and applications are increasing rapidly in mobile communication platforms, which can be applied to both work and entertainment.
  • cloud storage is a special use case that drives the growth of the uplink data rate.
  • 5G is also used for remote work in the cloud, and requires much lower end-to-end delays to maintain a good user experience when tactile interfaces are used.
  • Entertainment, for example cloud gaming and video streaming, is another key factor increasing the demand for mobile broadband capability. Entertainment is essential on smartphones and tablets anywhere, including high-mobility environments such as trains, cars, and airplanes.
  • Another use case is augmented reality and information retrieval for entertainment.
  • augmented reality requires very low latency and an instantaneous amount of data.
  • one of the most anticipated 5G use cases relates to the ability to seamlessly connect embedded sensors in all fields, i.e. mMTC.
  • Industrial IoT is one of the areas where 5G plays a major role in enabling smart cities, asset tracking, smart utilities, agriculture and security infrastructure.
  • URLLC includes new services that will transform the industry with ultra-reliable/low-latency links such as self-driving vehicles and remote control of critical infrastructure.
  • this level of reliability and latency is essential for smart grid control, industrial automation, robotics, and drone control and coordination.
  • 5G can complement fiber-to-the-home (FTTH) and cable-based broadband (or DOCSIS) as a means of providing streams rated at hundreds of megabits per second to gigabits per second. This high speed is required to deliver TV at 4K or higher (6K, 8K and beyond) resolution as well as virtual and augmented reality.
  • Virtual Reality (VR) and Augmented Reality (AR) applications involve almost immersive sports events. Certain application programs may require special network settings. In the case of VR games, for example, game companies may need to integrate core servers with network operators' edge network servers to minimize latency.
  • Automotive is expected to be an important new driving force in 5G, with many use cases for mobile communication to vehicles. For example, entertainment for passengers demands simultaneous high capacity and high mobility mobile broadband. The reason is that future users will continue to expect high-quality connections, regardless of their location and speed.
  • Another application example in the automotive field is the augmented reality dashboard. It identifies objects in the dark beyond what the driver sees through the front window and overlays information telling the driver the distance to and movement of each object.
  • wireless modules enable communication between vehicles, exchange of information between the vehicle and supporting infrastructure, and exchange of information between the vehicle and other connected devices (eg, devices carried by pedestrians).
  • the safety system allows the driver to lower the risk of accidents by guiding alternative courses of action to make driving safer.
  • the next step will be remote-controlled or self-driven vehicles. This requires very reliable and very fast communication between different self-driving vehicles and between the vehicle and the infrastructure. In the future, self-driving vehicles will perform all driving activities, and drivers will only have to focus on traffic anomalies that the vehicle itself cannot identify.
  • the technical requirements of self-driving vehicles call for ultra-low latency and ultra-high reliability to increase traffic safety to levels unachievable by humans.
  • Smart cities and smart homes, referred to as the smart society, will be embedded with high-density wireless sensor networks.
  • a distributed network of intelligent sensors will identify the conditions for cost and energy-efficient maintenance of a city or home.
  • a similar setup can be done for each household.
  • Temperature sensors, window and heating controllers, burglar alarms and appliances are all wirelessly connected. Many of these sensors are typically low data rates, low power and low cost. However, for example, real-time HD video may be required in certain types of devices for surveillance.
  • the smart grid interconnects these sensors using digital information and communication technologies to collect information and act accordingly. This information can include the behavior of suppliers and consumers, allowing smart grids to improve efficiency, reliability, economics, sustainability of production and the distribution of fuels such as electricity in an automated way.
  • the smart grid can also be viewed as another low-latency sensor network.
  • the health sector has many applications that can benefit from mobile communications.
  • the communication system can support telemedicine providing clinical care from remote locations. This can help reduce barriers to distance and improve access to medical services that are not consistently available in remote rural areas. It is also used to save lives in critical care and emergencies.
  • a wireless sensor network based on mobile communication may provide remote monitoring and sensors for parameters such as heart rate and blood pressure.
  • Wireless and mobile communications are becoming increasingly important in industrial applications. Wiring is expensive to install and maintain. Thus, the possibility of replacing cables with reconfigurable wireless links is an attractive opportunity for many industries. However, achieving this requires that the wireless connection operate with delay, reliability, and capacity similar to a cable, and that its management be simplified. Low latency and very low error probability are new requirements that 5G needs to support.
  • Logistics and freight tracking are important use cases for mobile communications that enable tracking of inventory and packages from anywhere using location-based information systems. Logistics and freight tracking use cases typically require low data rates, but require a wide range and reliable location information.
  • Embodiment(s) relates to a resource allocation method for efficient communication between terminals in a situation in which various DMRS patterns are mixed in an NR V2X system.
  • a method of receiving a signal by a sidelink terminal in a wireless communication system comprises: receiving a DMRS related to a PSSCH from a resource pool; and receiving the PSSCH, wherein a first type DMRS is transmitted in at least two predetermined resource regions in the resource pool, and the first type DMRS is a DMRS transmitted in a slot before a preset slot.
  • a sidelink device in a wireless communication system comprises: a memory; and a plurality of processors coupled to the memory, wherein the processors receive a DMRS related to a PSSCH from a resource pool and receive the PSSCH, a first type DMRS is transmitted in at least two predetermined resource regions in the resource pool, and the first type DMRS is a DMRS transmitted in a slot before a preset slot.
  • the at least two predetermined resource regions may be contiguous within the resource pool.
  • the two or more predetermined resource regions may not overlap each other in the frequency domain.
  • the predetermined resource region may include at least one subchannel.
  • the predetermined resource region may be configured with at least one Transmit Time Interval (TTI).
  • Short TTI may be applied to the predetermined resource region.
  • the terminal may perform decoding based on short TTI in the predetermined resource region.
  • the ratio of the subchannel corresponding to the predetermined resource region to all subchannels for the terminal may be determined based on the priority of the terminal.
  • the ratio of the subchannel corresponding to the predetermined resource region to all subchannels for the terminal may be set to be larger as the value of ProSe Per-Packet Priority (PPPP) decreases.
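As an illustration of the priority rule above, the following sketch maps a PPPP value to the fraction of subchannels made available as the predetermined resource region. The linear mapping, the 1-to-8 PPPP range, and the ratio bounds are illustrative assumptions, not values taken from the disclosure.

```python
def subchannel_ratio(pppp: int, min_ratio: float = 0.125, max_ratio: float = 1.0) -> float:
    """Map a ProSe Per-Packet Priority (PPPP) value to the fraction of
    subchannels usable as the predetermined resource region.

    Smaller PPPP means higher priority, so the ratio grows as PPPP
    decreases. The linear mapping is an illustrative assumption.
    """
    if not 1 <= pppp <= 8:
        raise ValueError("PPPP must be in 1..8")
    # Linear interpolation: pppp=1 -> max_ratio, pppp=8 -> min_ratio.
    return max_ratio - (pppp - 1) * (max_ratio - min_ratio) / 7
```

Because lower PPPP means higher priority, `subchannel_ratio(1)` returns the full ratio while `subchannel_ratio(8)` returns the minimum.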
  • the predetermined resource region may be indicated by a network through physical layer or higher layer signaling.
  • the predetermined resource region may be preset.
  • the terminal may be an autonomous vehicle or included in an autonomous vehicle.
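The resource-pool constraints summarized above (at least two predetermined resource regions, contiguous within the pool, non-overlapping in the frequency domain, each containing at least one subchannel) can be modeled in a small sketch. The class and function names, and the representation of a region as a run of subchannel indices, are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ResourceRegion:
    """A predetermined resource region spanning a run of subchannels."""
    first_subchannel: int
    num_subchannels: int


def regions_valid(regions, pool_num_subchannels):
    """Check the constraints sketched above: at least two regions,
    contiguous within the pool, non-overlapping in the frequency
    domain, and each containing at least one subchannel."""
    if len(regions) < 2:
        return False
    if any(r.num_subchannels < 1 for r in regions):
        return False
    ordered = sorted(regions, key=lambda r: r.first_subchannel)
    if ordered[0].first_subchannel < 0:
        return False
    for prev, cur in zip(ordered, ordered[1:]):
        # Contiguity means no gap; it also rules out any overlap.
        if cur.first_subchannel != prev.first_subchannel + prev.num_subchannels:
            return False
    last = ordered[-1]
    return last.first_subchannel + last.num_subchannels <= pool_num_subchannels
```

A pool of five subchannels split as regions (0, 2) and (2, 3) satisfies the constraints; a single region, a gap, or an overlap does not.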
  • FIG. 1 is a view showing a vehicle according to the embodiment(s).
  • FIG. 2 is a control block diagram of a vehicle according to the embodiment(s).
  • FIG. 3 is a control block diagram of an autonomous driving device according to the embodiment(s).
  • FIG. 4 is a block diagram of an autonomous driving device according to the embodiment(s).
  • FIG. 5 is a view showing the interior of a vehicle according to the embodiment(s).
  • FIG. 6 is a block diagram referred to for describing a vehicle cabin system according to the embodiment(s).
  • FIG. 7 shows a structure of an LTE system to which the embodiment(s) can be applied.
  • FIG. 8 shows a radio protocol architecture for a user plane to which the embodiment(s) can be applied.
  • FIG. 9 shows a radio protocol structure for a control plane to which the embodiment(s) can be applied.
  • FIG. 10 shows a structure of an NR system to which the embodiment(s) can be applied.
  • FIG. 11 shows functional partitioning between NG-RAN and 5GC to which the embodiment(s) can be applied.
  • FIG. 12 shows a structure of an NR radio frame to which the embodiment(s) can be applied.
  • FIG. 13 shows a slot structure of an NR frame to which the embodiment(s) can be applied.
  • FIG. 14 shows an example in which transmission resources of the next packet are also reserved, to which the embodiment(s) can be applied.
  • FIG. 15 shows an example in which a PSCCH is transmitted in sidelink transmission mode 3 or 4 to which the embodiment(s) can be applied.
  • FIG. 16 shows an example of physical layer processing at the transmission side to which the embodiment(s) can be applied.
  • FIG. 17 shows an example of physical layer processing at a receiving side to which the embodiment(s) can be applied.
  • FIG. 18 shows a synchronization source or a synchronization reference in V2X to which the embodiment(s) can be applied.
  • FIG. 19 shows an SS/PBCH block to which the embodiment(s) can be applied.
  • FIG. 20 is a diagram for explaining a method of obtaining timing information to which the embodiment(s) can be applied.
  • FIG. 21 is a diagram for explaining a process of obtaining system information to which the embodiment(s) can be applied.
  • FIG. 22 is a diagram for describing a random access procedure to which the embodiment(s) can be applied.
  • FIG. 23 is a diagram for explaining a threshold value of an SS block to which the embodiment(s) can be applied.
  • FIG. 24 is a diagram for explaining beam switching in PRACH retransmission to which the embodiment(s) can be applied.
  • FIGS. 25 to 26 illustrate a parity check matrix to which the embodiment(s) may be applied.
  • FIG. 27 shows an encoder structure for a polar code to which the embodiment(s) can be applied.
  • FIG. 29 shows a UE RRC state transition to which the embodiment(s) may be applied.
  • FIG. 30 shows a state transition between NR/NGC and E-UTRAN/EPC to which the embodiment(s) may be applied.
  • FIG. 31 is a diagram for describing a DRX to which the embodiment(s) may be applied.
  • FIGS. 33 to 35 are diagrams for describing the embodiment(s).
  • FIGS. 36 to 42 are diagrams illustrating various devices to which the embodiment(s) may be applied.
  • FIG. 1 is a view showing a vehicle according to an embodiment.
  • a vehicle 10 is defined as a transportation means traveling on a road or track.
  • the vehicle 10 is a concept including a car, a train, and a motorcycle.
  • the vehicle 10 may be a concept including an internal combustion engine vehicle having an engine as a power source, a hybrid vehicle having an engine and an electric motor as power sources, and an electric vehicle having an electric motor as a power source.
  • the vehicle 10 may be a vehicle owned by an individual.
  • the vehicle 10 may be a shared vehicle.
  • the vehicle 10 may be an autonomous vehicle.
  • FIG. 2 is a control block diagram of a vehicle according to an embodiment.
  • the vehicle 10 includes a user interface device 200, an object detection device 210, a communication device 220, a driving operation device 230, a main ECU 240, a drive control device 250, an autonomous driving device 260, a sensing unit 270, and a location data generating device 280.
  • Each of the devices 200 to 280 may be implemented as an electronic device that generates electrical signals and exchanges them with the others.
  • the user interface device 200 is a device for communication between the vehicle 10 and a user.
  • the user interface device 200 may receive a user input and provide information generated in the vehicle 10 to the user.
  • the vehicle 10 may implement a user interface (UI) or a user experience (UX) through the user interface device 200.
  • the user interface device 200 may include an input device, an output device, and a user monitoring device.
  • the object detection device 210 may generate information on an object outside the vehicle 10.
  • the information on the object may include at least one of information on the existence of the object, location information of the object, distance information between the vehicle 10 and the object, and relative speed information between the vehicle 10 and the object.
  • the object detection device 210 may detect an object outside the vehicle 10.
  • the object detection device 210 may include at least one sensor capable of detecting an object outside the vehicle 10.
  • the object detection device 210 may include at least one of a camera, a radar, a lidar, an ultrasonic sensor, and an infrared sensor.
  • the object detection device 210 may provide data on an object generated based on a sensing signal generated by a sensor to at least one electronic device included in the vehicle.
  • the camera may generate information on an object outside the vehicle 10 by using the image.
  • the camera may include at least one lens, at least one image sensor, and at least one processor that is electrically connected to the image sensor and processes a received signal, and generates data about an object based on the processed signal.
  • the camera may be at least one of a mono camera, a stereo camera, and an AVM (Around View Monitoring) camera.
  • the camera may use various image processing algorithms to obtain position information of an object, distance information to an object, or relative speed information of an object. For example, the camera may acquire distance information and relative speed information from an acquired image based on the change in the size of the object over time. For example, the camera may obtain distance information and relative speed information through a pinhole model, road surface profiling, or the like. For example, the camera may obtain distance information and relative speed information based on disparity information in a stereo image obtained from a stereo camera.
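The pinhole-model distance estimate mentioned above reduces to a one-line formula, and the relative speed follows from two such estimates taken at different times. The parameter names and the example numbers are illustrative; a real pipeline would also need camera calibration and object detection.

```python
def pinhole_distance(focal_length_px, real_height_m, image_height_px):
    """Pinhole camera model: an object of known real height H appearing
    h pixels tall at focal length f lies at distance f * H / h."""
    return focal_length_px * real_height_m / image_height_px


def relative_speed(dist_earlier_m, dist_later_m, dt_s):
    """Relative speed from two distance estimates dt_s seconds apart;
    positive means the object is getting closer."""
    return (dist_earlier_m - dist_later_m) / dt_s
```

As the object grows in the image over time, the estimated distance shrinks, which yields the relative speed described in the text.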
  • the camera may be mounted in a position where field of view (FOV) can be secured in the vehicle in order to photograph the outside of the vehicle.
  • the camera may be placed in the interior of the vehicle, close to the front windshield, to acquire an image of the front of the vehicle.
  • the camera can be placed around the front bumper or radiator grille.
  • the camera may be placed in the interior of the vehicle, close to the rear glass, in order to acquire an image of the rear of the vehicle.
  • the camera can be placed around the rear bumper, trunk or tailgate.
  • the camera may be disposed in proximity to at least one of the side windows in the interior of the vehicle in order to acquire an image of the side of the vehicle.
  • the camera may be disposed around a side mirror, a fender, or a door.
  • the radar may generate information on an object outside the vehicle 10 using radio waves.
  • the radar may include an electromagnetic wave transmitter, an electromagnetic wave receiver, and at least one processor electrically connected to the transmitter and receiver that processes a received signal and generates data on an object based on the processed signal.
  • the radar may be implemented in a pulse radar method or a continuous wave radar method according to the principle of radio wave emission.
  • among continuous wave radar methods, the radar may be implemented in a frequency modulated continuous wave (FMCW) method or a frequency shift keying (FSK) method according to the signal waveform.
  • the radar detects an object by means of electromagnetic waves based on a time of flight (TOF) method or a phase-shift method, and detects the position of the detected object, the distance to the detected object, and its relative speed.
  • the radar may be placed at a suitable location outside of the vehicle to detect objects located in front, rear or side of the vehicle.
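The two ranging principles described for the radar, time of flight and (for FMCW) the beat frequency between transmitted and received chirps, can be written out directly. The chirp bandwidth and sweep time used in the example are illustrative assumptions.

```python
C = 299_792_458.0  # speed of light in m/s


def tof_range_m(round_trip_time_s):
    """Time-of-flight ranging: the echo covers twice the target range."""
    return C * round_trip_time_s / 2.0


def fmcw_range_m(beat_frequency_hz, sweep_bandwidth_hz, sweep_time_s):
    """FMCW ranging: mixing the transmitted and received chirps gives a
    beat frequency f_b = 2 * R * B / (c * T), solved here for R."""
    return beat_frequency_hz * C * sweep_time_s / (2.0 * sweep_bandwidth_hz)
```

For a 150 MHz chirp swept over 1 ms, a target at 100 m produces a beat frequency of roughly 100 kHz, which `fmcw_range_m` inverts back to the range.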
  • the lidar may generate information on an object outside the vehicle 10 using laser light.
  • the lidar may include an optical transmitter, an optical receiver, and at least one processor electrically connected to the transmitter and receiver that processes a received signal and generates data on an object based on the processed signal.
  • the lidar may be implemented in a TOF (Time of Flight) method or a phase-shift method.
  • the lidar can be implemented as a driven or non-driven type. A driven lidar is rotated by a motor and can detect objects around the vehicle 10. A non-driven lidar can detect objects located within a predetermined range of the vehicle by optical steering.
  • the vehicle 10 may include a plurality of non-driven lidars.
  • the lidar detects an object by means of laser light based on a time of flight (TOF) method or a phase-shift method, and can detect the position of the detected object, the distance to the detected object, and its relative speed.
  • the lidar may be placed at an appropriate location outside the vehicle to detect objects located in front, rear or side of the vehicle.
  • the communication device 220 may exchange signals with devices located outside the vehicle 10.
  • the communication device 220 may exchange signals with at least one of an infrastructure (eg, a server, a broadcasting station), another vehicle, and a terminal.
  • the communication device 220 may include at least one of a transmission antenna, a reception antenna, a radio frequency (RF) circuit capable of implementing various communication protocols, and an RF element to perform communication.
  • RF radio frequency
  • the communication device may exchange signals with external devices based on C-V2X (Cellular V2X) technology.
  • C-V2X technology may include LTE-based sidelink communication and/or NR-based sidelink communication. Contents related to C-V2X will be described later.
  • the communication device can exchange signals with external devices based on Dedicated Short Range Communications (DSRC) technology, built on the IEEE 802.11p PHY/MAC layer technology and the IEEE 1609 network/transport layer technology, or based on the Wireless Access in Vehicular Environment (WAVE) standard.
  • DSRC technology may use the 5.9 GHz band and may be a communication method with a data rate of 3 Mbps to 27 Mbps.
  • IEEE 802.11p technology can be combined with IEEE 1609 technology to support DSRC technology (or WAVE standard).
  • the communication device may exchange signals with an external device using only either C-V2X technology or DSRC technology.
  • the communication device may exchange signals with an external device by using C-V2X technology and DSRC technology together.
  • the driving operation device 230 is a device that receives a user input for driving. In the case of the manual mode, the vehicle 10 may be driven based on a signal provided by the driving operation device 230.
  • the driving operation device 230 may include a steering input device (eg, a steering wheel), an acceleration input device (eg, an accelerator pedal), and a brake input device (eg, a brake pedal).
  • the main ECU 240 may control the overall operation of at least one electronic device provided in the vehicle 10.
  • the drive control device 250 is a device that electrically controls various vehicle drive devices in the vehicle 10.
  • the drive control device 250 may include a power train drive control device, a chassis drive control device, a door/window drive control device, a safety device drive control device, a lamp drive control device, and an air conditioning drive control device.
  • the power train drive control device may include a power source drive control device and a transmission drive control device.
  • the chassis drive control device may include a steering drive control device, a brake drive control device, and a suspension drive control device.
  • the safety device driving control device may include a safety belt driving control device for controlling the safety belt.
  • the drive control device 250 includes at least one electronic control unit (ECU).
  • the drive control device 250 may control the vehicle driving devices based on a signal received from the autonomous driving device 260.
  • the drive control device 250 may control a power train, a steering device, and a brake device based on a signal received from the autonomous driving device 260.
  • the autonomous driving device 260 may generate a path for autonomous driving based on the acquired data.
  • the autonomous driving device 260 may generate a driving plan for driving along the generated route.
  • the autonomous driving device 260 may generate a signal for controlling the movement of the vehicle according to the driving plan.
  • the autonomous driving device 260 may provide the generated signal to the driving control device 250.
  • the autonomous driving device 260 may implement at least one ADAS (Advanced Driver Assistance System) function.
  • ADAS may implement at least one of: Adaptive Cruise Control (ACC), Autonomous Emergency Braking (AEB), Forward Collision Warning (FCW), Lane Keeping Assist (LKA), Lane Change Assist (LCA), Target Following Assist (TFA), Blind Spot Detection (BSD), High Beam Assist (HBA), Auto Parking System (APS), a pedestrian (PD) collision warning system, Traffic Sign Recognition (TSR), Traffic Sign Assist (TSA), Night Vision (NV), Driver Status Monitoring (DSM), and Traffic Jam Assist (TJA).
  • the autonomous driving device 260 may perform a switching operation from an autonomous driving mode to a manual driving mode or from a manual driving mode to an autonomous driving mode. For example, based on a signal received from the user interface device 200, the autonomous driving device 260 may switch the mode of the vehicle 10 from the autonomous driving mode to the manual driving mode or from the manual driving mode to the autonomous driving mode.
  • the sensing unit 270 may sense the state of the vehicle.
  • the sensing unit 270 may include at least one of an inertial measurement unit (IMU) sensor, a collision sensor, a wheel sensor, a speed sensor, a tilt sensor, a weight detection sensor, a heading sensor, a position module, a vehicle forward/reverse sensor, a battery sensor, a fuel sensor, a tire sensor, a steering sensor, a temperature sensor, a humidity sensor, an ultrasonic sensor, an illuminance sensor, and a pedal position sensor. Meanwhile, the inertial measurement unit (IMU) sensor may include one or more of an acceleration sensor, a gyro sensor, and a magnetic sensor.
  • the sensing unit 270 may generate state data of the vehicle based on a signal generated by at least one sensor.
  • the vehicle state data may be information generated based on data sensed by various sensors provided inside the vehicle.
  • the sensing unit 270 may generate vehicle state data such as vehicle attitude data, vehicle motion data, vehicle yaw data, vehicle roll data, vehicle pitch data, vehicle collision data, vehicle direction data, vehicle angle data, and vehicle speed data.
  • the location data generating device 280 may generate location data of the vehicle 10.
  • the location data generating apparatus 280 may include at least one of a Global Positioning System (GPS) and a Differential Global Positioning System (DGPS).
  • the location data generating apparatus 280 may generate location data of the vehicle 10 based on a signal generated by at least one of GPS and DGPS.
  • the location data generating apparatus 280 may correct the location data based on at least one of an IMU (Inertial Measurement Unit) of the sensing unit 270 and a camera of the object detection apparatus 210.
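A minimal sketch of the correction step above, assuming the fusion is a simple weighted blend of the GPS fix and an IMU dead-reckoned position. A production system would typically use a Kalman filter; the function name and the weight are purely illustrative.

```python
def fuse_position(gps_xy, imu_dead_reckoned_xy, gps_weight=0.8):
    """Blend a GPS position fix with an IMU dead-reckoned position.

    gps_weight near 1.0 trusts the satellite fix; near 0.0 trusts the
    inertial estimate. The weighting scheme is an illustrative assumption.
    """
    if not 0.0 <= gps_weight <= 1.0:
        raise ValueError("gps_weight must be in [0, 1]")
    return tuple(gps_weight * g + (1.0 - gps_weight) * d
                 for g, d in zip(gps_xy, imu_dead_reckoned_xy))
```

With equal weights, the corrected position is simply the midpoint of the two estimates.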
  • the location data generating device 280 may be referred to as a Global Navigation Satellite System (GNSS).
  • Vehicle 10 may include an internal communication system 50.
  • a plurality of electronic devices included in the vehicle 10 may exchange signals through the internal communication system 50.
  • the signal may contain data.
  • the internal communication system 50 may use at least one communication protocol (eg, CAN, LIN, FlexRay, MOST, Ethernet).
  • FIG. 3 is a control block diagram of an autonomous driving apparatus according to an embodiment.
  • the autonomous driving device 260 may include a memory 140, a processor 170, an interface unit 180, and a power supply unit 190.
  • the memory 140 is electrically connected to the processor 170.
  • the memory 140 may store basic data for a unit, control data for controlling the operation of the unit, and input/output data.
  • the memory 140 may store data processed by the processor 170.
  • the memory 140 may be configured with at least one of ROM, RAM, EPROM, flash drive, and hard drive.
  • the memory 140 may store various data for the overall operation of the autonomous driving device 260, such as a program for processing or controlling the processor 170.
  • the memory 140 may be implemented integrally with the processor 170. Depending on the embodiment, the memory 140 may be classified as a sub-element of the processor 170.
  • the interface unit 180 may exchange signals with at least one electronic device provided in the vehicle 10 by wire or wirelessly.
  • the interface unit 180 may exchange signals by wire or wirelessly with at least one of the object detection device 210, the communication device 220, the driving operation device 230, the main ECU 240, the drive control device 250, the sensing unit 270, and the location data generating device 280.
  • the interface unit 180 may be configured with at least one of a communication module, a terminal, a pin, a cable, a port, a circuit, an element, and a device.
  • the power supply unit 190 may supply power to the autonomous driving device 260.
  • the power supply unit 190 may receive power from a power source (eg, a battery) included in the vehicle 10 and supply power to each unit of the autonomous driving device 260.
  • the power supply unit 190 may be operated according to a control signal provided from the main ECU 240.
  • the power supply unit 190 may include a switched-mode power supply (SMPS).
  • the processor 170 may be electrically connected to the memory 140, the interface unit 180, and the power supply unit 190 to exchange signals.
  • the processor 170 may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and electrical units for performing other functions.
  • the processor 170 may be driven by power provided from the power supply unit 190.
  • the processor 170 may receive data, process data, generate a signal, and provide a signal while power is supplied by the power supply unit 190.
  • the processor 170 may receive information from another electronic device in the vehicle 10 through the interface unit 180.
  • the processor 170 may provide a control signal to another electronic device in the vehicle 10 through the interface unit 180.
  • the autonomous driving device 260 may include at least one printed circuit board (PCB).
  • the memory 140, the interface unit 180, the power supply unit 190, and the processor 170 may be electrically connected to a printed circuit board.
  • the processor 170 may perform a reception operation.
  • the processor 170 may receive data from at least one of the object detection device 210, the communication device 220, the sensing unit 270, and the location data generation device 280 through the interface unit 180.
  • the processor 170 may receive object data from the object detection apparatus 210.
  • the processor 170 may receive HD map data from the communication device 220.
  • the processor 170 may receive vehicle state data from the sensing unit 270.
  • the processor 170 may receive location data from the location data generating device 280.
  • the processor 170 may perform a processing/determining operation.
  • the processor 170 may perform a processing/determining operation based on the driving situation information.
  • the processor 170 may perform a processing/decision operation based on at least one of object data, HD map data, vehicle state data, and location data.
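The four inputs enumerated above (object data, HD map data, vehicle state data, location data) can be pictured as a simple container that the processor fills as data arrives. The field names and the readiness rule below are illustrative assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class DrivingSituation:
    """Container for the four processing/decision inputs (names illustrative)."""
    object_data: Optional[Any] = None    # from object detection device 210
    hd_map_data: Optional[Any] = None    # from communication device 220
    vehicle_state: Optional[Any] = None  # from sensing unit 270
    location: Optional[Any] = None       # from location data generating device 280

    def ready(self) -> bool:
        # The text says processing may proceed based on "at least one" input.
        return any(v is not None for v in
                   (self.object_data, self.hd_map_data,
                    self.vehicle_state, self.location))
```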
  • the processor 170 may generate driving plan data.
  • the processor 170 may generate electronic horizon data.
  • the electronic horizon data may be understood as driving plan data within a range from the point where the vehicle 10 is located to the horizon.
  • the horizon may be understood as a point a preset distance ahead of the point where the vehicle 10 is located, along a preset driving route. It may also mean a point that the vehicle 10 can reach after a predetermined time from its current location.
  • the electronic horizon data may include horizon map data and horizon pass data.
  • the horizon map data may include at least one of topology data, road data, HD map data, and dynamic data.
  • the horizon map data may include a plurality of layers.
  • the horizon map data may include a first layer matching topology data, a second layer matching road data, a third layer matching HD map data, and a fourth layer matching dynamic data.
  • the horizon map data may further include static object data.
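The four-layer structure of the horizon map data described above can be sketched as a plain data structure; the dict-of-layers representation and method name are assumptions for illustration, not the patented format.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class HorizonMapData:
    """Four layers as listed in the text (first through fourth)."""
    topology: Dict[str, Any] = field(default_factory=dict)  # first layer
    road: Dict[str, Any] = field(default_factory=dict)      # second layer
    hd_map: Dict[str, Any] = field(default_factory=dict)    # third layer
    dynamic: Dict[str, Any] = field(default_factory=dict)   # fourth layer

    def layers(self):
        # Return the layers in their stated order.
        return [self.topology, self.road, self.hd_map, self.dynamic]
```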
  • Topology data can be described as a map created by connecting the center of the road.
  • the topology data is suitable for roughly indicating the position of the vehicle, and may take the form of data mainly used in navigation systems for drivers.
  • the topology data may be understood as data about road information excluding information about a lane.
  • the topology data may be generated based on data received from an external server through the communication device 220.
  • the topology data may be based on data stored in at least one memory provided in the vehicle 10.
  • the road data may include at least one of slope data of a road, curvature data of a road, and speed limit data of a road.
  • the road data may further include overtaking prohibited section data.
  • Road data may be based on data received from an external server through the communication device 220.
  • the road data may be based on data generated by the object detection apparatus 210.
  • the HD map data may include detailed lane-level topology information of the road, connection information of each lane, and feature information for localization of the vehicle (e.g., traffic signs, lane marking/attributes, road furniture, etc.).
  • the HD map data may be based on data received from an external server through the communication device 220.
  • the dynamic data may include various dynamic information that may be generated on a road.
  • the dynamic data may include construction information, variable speed lane information, road surface condition information, traffic information, moving object information, and the like.
  • the dynamic data may be based on data received from an external server through the communication device 220.
  • the dynamic data may be based on data generated by the object detection apparatus 210.
  • the processor 170 may provide map data within a range from the point where the vehicle 10 is located to the horizon.
  • the horizon pass data may be described as a trajectory that the vehicle 10 can take within a range from the point where the vehicle 10 is located to the horizon.
  • the horizon pass data may include data representing a relative probability of selecting any one road from a decision point (eg, a crossroads, a junction, an intersection, etc.).
  • the relative probability can be calculated based on the time it takes to reach the final destination. For example, at a decision point, if the time required to reach the final destination when the first road is selected is shorter than when the second road is selected, the probability of selecting the first road may be calculated to be higher than the probability of selecting the second road.
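The rule above (shorter time to destination gives a higher selection probability) can be sketched as follows. The inverse-travel-time weighting is one possible realization consistent with that rule, not the method claimed in the disclosure.

```python
def selection_probabilities(travel_times):
    """Given estimated travel times (seconds) to the final destination for
    each candidate road at a decision point, return relative selection
    probabilities: a shorter travel time yields a higher probability."""
    weights = [1.0 / t for t in travel_times]  # inverse-time weighting (assumed)
    total = sum(weights)
    return [w / total for w in weights]
```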
  • the horizon pass data may include a main path and a sub-path.
  • the main path can be understood as a trajectory connecting roads with a high relative probability to be selected.
  • the sub-path may be branched at at least one decision point on the main path.
  • the sub-path may be understood as a trajectory connecting at least one road having a low relative probability of being selected from at least one decision point on the main path.
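The split described above, in which the highest-probability road continues the main path and the remaining roads branch off as sub-paths, can be sketched at a single decision point; the tuple representation is an illustrative assumption.

```python
def split_main_and_sub(roads_with_prob):
    """roads_with_prob: list of (road_id, selection_probability) pairs at one
    decision point. The road with the highest probability continues the main
    path; all remaining roads branch off as sub-paths."""
    ranked = sorted(roads_with_prob, key=lambda rp: rp[1], reverse=True)
    main = ranked[0][0]
    subs = [road for road, _ in ranked[1:]]
    return main, subs
```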
  • the processor 170 may perform a control signal generation operation.
  • the processor 170 may generate a control signal based on electronic horizon data.
  • the processor 170 may generate at least one of a powertrain control signal, a brake device control signal, and a steering device control signal based on the electronic horizon data.
  • the processor 170 may transmit the generated control signal to the driving control device 250 through the interface unit 180.
  • the drive control device 250 may transmit a control signal to at least one of the power train 251, the brake device 252, and the steering device 253.
  • FIG. 5 is a view showing the interior of a vehicle according to the embodiment.
  • FIG. 6 is a block diagram referred to for describing a vehicle cabin system according to an embodiment.
  • a vehicle cabin system 300 (hereinafter, a cabin system) may be defined as a convenience system for a user using the vehicle 10.
  • the cabin system 300 may be described as a top-level system including a display system 350, a cargo system 355, a seat system 360, and a payment system 365.
  • the cabin system 300 may include a main controller 370, a memory 340, an interface unit 380, a power supply unit 390, an input device 310, an imaging device 320, a communication device 330, a display system 350, a cargo system 355, a seat system 360, and a payment system 365.
  • the cabin system 300 may further include other components other than the components described herein, or may not include some of the described components.
  • the main controller 370 may be electrically connected to the input device 310, the communication device 330, the display system 350, the cargo system 355, the seat system 360, and the payment system 365 to exchange signals.
  • the main controller 370 may control the input device 310, the communication device 330, the display system 350, the cargo system 355, the seat system 360, and the payment system 365.
  • the main controller 370 may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and electrical units for performing other functions.
  • the main controller 370 may be configured with at least one sub-controller. According to an embodiment, the main controller 370 may include a plurality of sub-controllers. Each of the plurality of sub-controllers may individually control a group of devices and systems included in the cabin system 300. Devices and systems included in the cabin system 300 may be grouped by function or may be grouped based on seats.
  • the main controller 370 may include at least one processor 371. Although FIG. 6 illustrates that the main controller 370 includes one processor 371, the main controller 370 may include a plurality of processors. The processor 371 may be classified as one of the above-described sub-controllers.
  • the processor 371 may receive signals, information, or data from a user terminal through the communication device 330.
  • the user terminal may transmit signals, information, or data to the cabin system 300.
  • the processor 371 may specify a user based on image data received from at least one of an internal camera and an external camera included in the imaging device.
  • the processor 371 may specify a user by applying an image processing algorithm to image data.
  • the processor 371 may compare information received from the user terminal with image data to identify a user.
  • the information may include at least one of route information, body information, passenger information, luggage information, location information, preferred content information, preferred food information, disability information, and usage history information of the user.
  • the main controller 370 may include an artificial intelligence agent 372.
  • the artificial intelligence agent 372 may perform machine learning based on data acquired through the input device 310.
  • the artificial intelligence agent 372 may control at least one of the display system 350, the cargo system 355, the seat system 360, and the payment system 365 based on the machine learning result.
  • the memory 340 is electrically connected to the main controller 370.
  • the memory 340 may store basic data for a unit, control data for controlling the operation of the unit, and input/output data.
  • the memory 340 may store data processed by the main controller 370.
  • the memory 340 may be configured with at least one of ROM, RAM, EPROM, flash drive, and hard drive.
  • the memory 340 may store various data for overall operation of the cabin system 300, such as a program for processing or controlling the main controller 370.
  • the memory 340 may be implemented integrally with the main controller 370.
  • the interface unit 380 may exchange signals with at least one electronic device provided in the vehicle 10 by wire or wirelessly.
  • the interface unit 380 may be composed of at least one of a communication module, a terminal, a pin, a cable, a port, a circuit, an element, and a device.
  • the power supply unit 390 may supply power to the cabin system 300.
  • the power supply unit 390 may receive power from a power source (eg, a battery) included in the vehicle 10 and supply power to each unit of the cabin system 300.
  • the power supply unit 390 may be operated according to a control signal provided from the main controller 370.
  • the power supply unit 390 may be implemented as a switched-mode power supply (SMPS).
  • SMPS switched-mode power supply
  • the cabin system 300 may include at least one printed circuit board (PCB).
  • PCB printed circuit board
  • the main controller 370, the memory 340, the interface unit 380, and the power supply unit 390 may be mounted on at least one printed circuit board.
  • the input device 310 may receive a user input.
  • the input device 310 may convert a user input into an electrical signal.
  • the electrical signal converted by the input device 310 may be converted into a control signal and provided to at least one of the display system 350, the cargo system 355, the seat system 360, and the payment system 365.
  • At least one processor included in the main controller 370 or the cabin system 300 may generate a control signal based on an electrical signal received from the input device 310.
  • the input device 310 may include at least one of a touch input unit, a gesture input unit, a mechanical input unit, and a voice input unit.
  • the touch input unit may convert a user's touch input into an electrical signal.
  • the touch input unit may include at least one touch sensor to detect a user's touch input.
  • the touch input unit is integrally formed with at least one display included in the display system 350, thereby implementing a touch screen.
  • Such a touch screen may provide an input interface and an output interface between the cabin system 300 and a user.
  • the gesture input unit may convert a user's gesture input into an electrical signal.
  • the gesture input unit may include at least one of an infrared sensor and an image sensor for detecting a user's gesture input.
  • the gesture input unit may detect a user's 3D gesture input.
  • the gesture input unit may include a light output unit for outputting a plurality of infrared rays, or a plurality of image sensors.
  • the gesture input unit may detect a user's 3D gesture input through a time of flight (TOF) method, a structured light method, or a disparity method.
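Of the three 3D-gesture sensing methods named above, the time-of-flight (TOF) principle reduces to a one-line range equation: emitted light travels to the hand and back, so distance is half the round-trip time times the speed of light. This sketch is illustrative, not part of the disclosure.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Time-of-flight ranging: the light covers the distance twice
    (out and back), so divide the path length by two."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0
```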
  • the mechanical input unit may convert a user's physical input (eg, pressing or rotating) through a mechanical device into an electrical signal.
  • the mechanical input unit may include at least one of a button, a dome switch, a jog wheel, and a jog switch. Meanwhile, the gesture input unit and the mechanical input unit may be integrally formed.
  • the input device 310 may include a gesture sensor, and may include a jog dial device formed to be retractable into a portion of a surrounding structure (e.g., at least one of a seat, an armrest, and a door).
  • in a state in which the jog dial device is flush with the surrounding structure, the jog dial device may function as a gesture input unit.
  • in a state in which the jog dial device protrudes from the surrounding structure, the jog dial device may function as a mechanical input unit.
  • the voice input unit may convert a user's voice input into an electrical signal.
  • the voice input unit may include at least one microphone.
  • the voice input unit may include a beamforming microphone.
  • the imaging device 320 may include at least one camera.
  • the imaging device 320 may include at least one of an internal camera and an external camera.
  • the internal camera can take an image inside the cabin.
  • the external camera may capture an image outside the vehicle.
  • the internal camera can acquire an image in the cabin.
  • the imaging device 320 may include at least one internal camera. It is preferable that the imaging device 320 includes a number of cameras corresponding to the number of passengers capable of boarding.
  • the imaging device 320 may provide an image acquired by an internal camera.
  • At least one processor included in the main controller 370 or the cabin system 300 may detect the user's motion based on the image acquired by the internal camera, generate a signal based on the detected motion, and provide the signal to at least one of the display system 350, the cargo system 355, and the seat system 360.
  • the external camera may acquire an image outside the vehicle.
  • the imaging device 320 may include at least one external camera. It is preferable that the imaging device 320 includes a number of cameras corresponding to the number of boarding doors.
  • the imaging device 320 may provide an image acquired by an external camera.
  • At least one processor included in the main controller 370 or the cabin system 300 may acquire user information based on an image acquired by an external camera.
  • At least one processor included in the main controller 370 or the cabin system 300 may authenticate the user based on the user information, or may obtain the user's body information (for example, height information and weight information), passenger information, luggage information, and the like.
  • the communication device 330 can wirelessly exchange signals with an external device.
  • the communication device 330 may exchange signals with an external device through a network, or may directly exchange signals with an external device.
  • the external device may include at least one of a server, a mobile terminal, and another vehicle.
  • the communication device 330 may exchange signals with at least one user terminal.
  • the communication device 330 may include at least one of an antenna, a radio frequency (RF) circuit capable of implementing at least one communication protocol, and an RF element in order to perform communication.
  • the communication device 330 may use a plurality of communication protocols.
  • the communication device 330 may switch the communication protocol according to the distance to the mobile terminal.
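The distance-dependent protocol switching mentioned above might look like the following sketch. The 300 m threshold and the protocol labels (a direct PC5 sidelink for nearby peers, a cellular Uu link otherwise) are assumptions for illustration; the disclosure does not fix a threshold.

```python
def choose_protocol(distance_m: float, threshold_m: float = 300.0) -> str:
    """Pick a communication protocol based on the distance to the peer.
    Threshold and protocol names are illustrative assumptions."""
    return "PC5-sidelink" if distance_m <= threshold_m else "Uu-cellular"
```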
  • the communication device may exchange signals with external devices based on C-V2X (Cellular V2X) technology.
  • C-V2X technology may include LTE-based sidelink communication and/or NR-based sidelink communication. Contents related to C-V2X will be described later.
  • the communication device may exchange signals with external devices based on Dedicated Short Range Communications (DSRC) technology, or the Wireless Access in Vehicular Environment (WAVE) standard, built on IEEE 802.11p PHY/MAC layer technology and IEEE 1609 Network/Transport layer technology.
  • DSRC technology may use a frequency in the 5.9 GHz band, and may be a communication method having a data transmission rate of 3 Mbps to 27 Mbps.
  • IEEE 802.11p technology can be combined with IEEE 1609 technology to support DSRC technology (or WAVE standard).
  • the communication device may exchange signals with an external device using only either C-V2X technology or DSRC technology.
  • the communication device may exchange signals with an external device by hybridizing C-V2X technology and DSRC technology.
  • the display system 350 may display a graphic object.
  • the display system 350 may include at least one display device.
  • the display system 350 may include a first display device 410 that can be commonly used and a second display device 420 that can be used individually.
  • the first display device 410 may include at least one display 411 that outputs visual content.
  • the display 411 included in the first display device 410 may be implemented as at least one of a flat panel display, a curved display, a rollable display, and a flexible display.
  • the first display device 410 may include a first display 411 positioned at the rear of a seat and formed to be retractable into and out of the cabin, and a first mechanism for moving the first display 411.
  • the first display 411 may be disposed in a slot formed in the main frame of the seat so as to be retractable.
  • the first display device 410 may further include a flexible area control mechanism.
  • the first display may be formed to be flexible, and the flexible area of the first display may be adjusted according to the user's position.
  • the first display device 410 may include a second display positioned on a ceiling in a cabin and formed to be rollable, and a second mechanism for winding or unwinding the second display.
  • the second display may be formed to enable screen output on both sides.
  • the first display device 410 may include a third display positioned on a ceiling in a cabin and formed to be flexible, and a third mechanism for bending or unfolding the third display.
  • the display system 350 may further include at least one processor that provides a control signal to at least one of the first display device 410 and the second display device 420.
  • the processor included in the display system 350 may generate a control signal based on a signal received from at least one of the main controller 370, the input device 310, the imaging device 320, and the communication device 330.
  • the display area of the display included in the first display device 410 may be divided into a first area 411a and a second area 411b.
  • the first area 411a may be defined as a content display area.
  • the first area 411a may display at least one of entertainment content (e.g., movies, sports, shopping, music, etc.), a video conference, a food menu, and a graphic object corresponding to an augmented reality screen.
  • the first area 411a may display a graphic object corresponding to driving situation information of the vehicle 10.
  • the driving situation information may include at least one of object information outside the vehicle, navigation information, and vehicle status information.
  • the object information outside the vehicle may include information on the presence or absence of an object, location information of the object, distance information between the vehicle 10 and the object, and relative speed information between the vehicle 10 and the object.
  • the navigation information may include at least one of map information, set destination information, route information according to the destination setting, information on various objects on the route, lane information, and current location information of the vehicle.
  • the vehicle status information may include vehicle attitude information, vehicle speed information, vehicle tilt information, vehicle weight information, vehicle direction information, vehicle battery information, vehicle fuel information, vehicle tire pressure information, vehicle steering information, vehicle interior temperature information, vehicle interior humidity information, pedal position information, vehicle engine temperature information, and the like.
  • the second area 411b may be defined as a user interface area.
  • the second area 411b may output an artificial intelligence agent screen.
  • the second area 411b may be located in an area divided by a seat frame.
  • the user can view the content displayed in the second area 411b from between the plurality of seats.
  • the first display device 410 may provide holographic content.
  • the first display device 410 may provide holographic content for each of a plurality of users so that only a user who has requested the content can view the content.
  • the second display device 420 may include at least one display 421.
  • the second display device 420 may provide the display 421 at a location where only individual passengers can check the display contents.
  • the display 421 may be disposed on the arm rest of the seat.
  • the second display device 420 may display a graphic object corresponding to the user's personal information.
  • the second display device 420 may include a number of displays 421 corresponding to the number of persons allowed to ride.
  • the second display device 420 may implement a touch screen by forming a layer structure with a touch sensor or by being formed integrally with it.
  • the second display device 420 may display a graphic object for receiving a user input for seat adjustment or room temperature adjustment.
  • the cargo system 355 may provide a product to a user according to a user's request.
  • the cargo system 355 may be operated based on an electrical signal generated by the input device 310 or the communication device 330.
  • the cargo system 355 may include a cargo box.
  • the cargo box may be concealed in a portion of the lower part of the seat while goods are loaded.
  • the cargo box may be exposed into the cabin.
  • the user can select a necessary product among the items loaded in the exposed cargo box.
  • the cargo system 355 may include a sliding moving mechanism and a product pop-up mechanism to expose a cargo box according to a user input.
  • the cargo system 355 may include a plurality of cargo boxes to provide various types of goods.
  • a weight sensor for determining whether each product has been provided may be built into the cargo box.
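The per-product weight sensing mentioned above can be sketched as a before/after comparison of slot readings; the per-slot indexing and the 5 g tolerance are illustrative assumptions, not details from the disclosure.

```python
def detect_taken_products(before_g, after_g, tolerance_g=5.0):
    """Compare per-slot weight readings (grams) before and after the cargo
    box was exposed; a slot whose weight dropped by more than the tolerance
    is assumed to have had its product taken."""
    taken = []
    for slot, (b, a) in enumerate(zip(before_g, after_g)):
        if b - a > tolerance_g:
            taken.append(slot)
    return taken
```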
  • the seat system 360 may provide a customized seat to the user.
  • the seat system 360 may be operated based on an electrical signal generated by the input device 310 or the communication device 330.
  • the seat system 360 may adjust at least one element of the seat based on the acquired user body data.
  • the seat system 360 may include a user detection sensor (eg, a pressure sensor) to determine whether the user is seated.
  • the seat system 360 may include a plurality of seats on which a plurality of users can sit. Any one of the plurality of seats may be disposed to face at least another one. At least two users inside the cabin may sit facing each other.
  • the payment system 365 may provide a payment service to a user.
  • the payment system 365 may be operated based on an electrical signal generated by the input device 310 or the communication device 330.
  • the payment system 365 may calculate a price for at least one service used by the user and request that the calculated price be paid.
  • a wireless communication system is a multiple access system that supports communication with multiple users by sharing available system resources (eg, bandwidth, transmission power, etc.).
  • multiple access systems include a code division multiple access (CDMA) system, a frequency division multiple access (FDMA) system, a time division multiple access (TDMA) system, an orthogonal frequency division multiple access (OFDMA) system, a single carrier frequency division multiple access (SC-FDMA) system, a multi carrier frequency division multiple access (MC-FDMA) system, and the like.
  • Sidelink refers to a communication method in which a direct link is established between terminals (User Equipment, UEs), and voice or data is directly exchanged between terminals without going through a base station (BS).
  • the sidelink is being considered as a method that can solve the burden of the base station due to rapidly increasing data traffic.
  • V2X (vehicle-to-everything) refers to a communication technology through which a vehicle exchanges information with other vehicles, pedestrians, and infrastructure objects via wired/wireless communication.
  • V2X can be divided into four types: vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-network (V2N), and vehicle-to-pedestrian (V2P).
  • V2X communication may be provided through a PC5 interface and/or a Uu interface.
  • CDMA may be implemented with a radio technology such as universal terrestrial radio access (UTRA) or CDMA2000.
  • TDMA may be implemented using a radio technology such as global system for mobile communications (GSM)/general packet radio service (GPRS)/enhanced data rates for GSM evolution (EDGE).
  • OFDMA may be implemented with wireless technologies such as IEEE (institute of electrical and electronics engineers) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, and E-UTRA (evolved UTRA).
  • IEEE 802.16m is an evolution of IEEE 802.16e and provides backward compatibility with a system based on IEEE 802.16e.
  • UTRA is part of a universal mobile telecommunications system (UMTS).
  • 3rd generation partnership project (3GPP) long term evolution (LTE) is a part of evolved UMTS (E-UMTS) using evolved-UMTS terrestrial radio access (E-UTRA); it adopts OFDMA in the downlink and SC-FDMA in the uplink. LTE-A (advanced) is an evolution of 3GPP LTE.
  • 5G NR is the successor technology of LTE-A, and is a new clean-slate type mobile communication system with features such as high performance, low latency, and high availability.
  • 5G NR can utilize all available spectrum resources, from low frequency bands of less than 1 GHz to intermediate frequency bands of 1 GHz to 10 GHz and high frequency (millimeter wave) bands of 24 GHz or higher.
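The spectrum ranges quoted above can be expressed as a small classifier. Note that the text leaves frequencies between 10 GHz and 24 GHz unassigned, which the sketch makes explicit; the function and its labels are illustrative, not part of the disclosure.

```python
def classify_band(freq_ghz: float) -> str:
    """Classify a carrier frequency into the ranges listed in the text:
    low band (< 1 GHz), intermediate band (1-10 GHz), and high band /
    millimeter wave (>= 24 GHz)."""
    if freq_ghz < 1.0:
        return "low"
    if freq_ghz <= 10.0:
        return "intermediate"
    if freq_ghz >= 24.0:
        return "high (mmWave)"
    return "unlisted"  # 10-24 GHz is not covered by the ranges in the text
```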
  • in the following description, LTE-A or 5G NR is mainly described, but the technical idea is not limited thereto.
  • the E-UTRAN includes a base station (BS) 20 that provides a control plane and a user plane to the terminal 10.
  • the terminal 10 may be fixed or mobile, and may be referred to as other terms such as a mobile station (MS), a user terminal (UT), a subscriber station (SS), a mobile terminal (MT), and a wireless device.
  • the base station 20 refers to a fixed station communicating with the terminal 10, and may be referred to as an evolved-NodeB (eNB), a base transceiver system (BTS), an access point, and the like.
  • the base stations 20 may be connected to each other through an X2 interface.
  • the base station 20 is connected to the Evolved Packet Core (EPC) 30 through the S1 interface; more specifically, to the Mobility Management Entity (MME) through S1-MME and to the Serving Gateway (S-GW) through S1-U.
  • the EPC 30 is composed of MME, S-GW, and P-GW (Packet Data Network-Gateway).
  • the MME has access information of the terminal or information on the capabilities of the terminal, and this information is mainly used for mobility management of the terminal.
  • S-GW is a gateway with E-UTRAN as an endpoint
  • P-GW is a gateway with PDN as an endpoint.
  • The layers of the radio interface protocol between the terminal and the network can be divided into L1 (first layer), L2 (second layer), and L3 (third layer) based on the lower three layers of the Open System Interconnection (OSI) standard model, which is widely known in communication systems.
  • the physical layer belonging to the first layer provides an information transfer service using a physical channel
  • The radio resource control (RRC) layer located in the third layer plays the role of controlling radio resources between the terminal and the network. To this end, the RRC layer exchanges RRC messages between the terminal and the base station.
  • FIG. 8 shows a radio protocol architecture for a user plane to which the present invention can be applied.
  • the user plane is a protocol stack for transmitting user data
  • the control plane is a protocol stack for transmitting control signals.
  • a physical layer provides an information transmission service to an upper layer using a physical channel.
  • The physical layer is connected to an upper layer, the medium access control (MAC) layer, through transport channels. Data moves between the MAC layer and the physical layer through the transport channels. Transport channels are classified according to how and with what characteristics data is transmitted over the air interface.
  • the physical channel may be modulated in an Orthogonal Frequency Division Multiplexing (OFDM) scheme, and uses time and frequency as radio resources.
  • the MAC layer provides a service to an upper layer, a radio link control (RLC) layer, through a logical channel.
  • the MAC layer provides a mapping function from a plurality of logical channels to a plurality of transport channels.
  • the MAC layer provides a logical channel multiplexing function by mapping a plurality of logical channels to a single transport channel.
  • the MAC sublayer provides a data transmission service on a logical channel.
  • the RLC layer performs concatenation, segmentation, and reassembly of RLC SDUs.
  • In order to guarantee the various quality of service (QoS) requirements of each radio bearer (RB), the RLC layer provides three operation modes: Transparent Mode (TM), Unacknowledged Mode (UM), and Acknowledged Mode (AM).
  • AM RLC provides error correction through automatic repeat request (ARQ).
  • the Radio Resource Control (RRC) layer is defined only in the control plane.
  • the RRC layer is in charge of controlling logical channels, transport channels, and physical channels in relation to configuration, re-configuration, and release of radio bearers.
  • RB refers to a logical path provided by the first layer (PHY layer) and the second layer (MAC layer, RLC layer, PDCP layer) for data transmission between the terminal and the network.
  • Functions of the Packet Data Convergence Protocol (PDCP) layer in the user plane include transmission of user data, header compression, and ciphering.
  • Functions of the Packet Data Convergence Protocol (PDCP) layer in the control plane include transmission of control plane data and encryption/integrity protection.
  • Establishing the RB refers to a process of defining characteristics of a radio protocol layer and channel to provide a specific service, and setting specific parameters and operation methods for each.
  • the RB can be further divided into two types: Signaling Radio Bearer (SRB) and Data Radio Bearer (DRB).
  • SRB is used as a path for transmitting RRC messages in the control plane
  • DRB is used as a path for transmitting user data in the user plane.
  • When an RRC connection is established between the RRC layer of the UE and the RRC layer of the E-UTRAN, the UE is in the RRC_CONNECTED state; otherwise, it is in the RRC_IDLE state.
  • In NR, the RRC_INACTIVE state is additionally defined; a terminal in the RRC_INACTIVE state can release the connection with the base station while maintaining the connection with the core network.
  • As a downlink transport channel for transmitting data from the network to the terminal, there is a broadcast channel (BCH) for transmitting system information and a downlink shared channel (SCH) for transmitting user traffic or control messages.
  • Downlink multicast or broadcast service traffic or control messages may be transmitted through the downlink SCH or through a separate downlink multicast channel (MCH).
  • As an uplink transport channel for transmitting data from the terminal to the network, there is a random access channel (RACH) for transmitting initial control messages and an uplink shared channel (SCH) for transmitting user traffic or control messages.
  • Logical channels that are located above the transport channels and are mapped to the transport channels include the Broadcast Control Channel (BCCH), Paging Control Channel (PCCH), Common Control Channel (CCCH), Multicast Control Channel (MCCH), and Multicast Traffic Channel (MTCH).
  • the physical channel is composed of several OFDM symbols in the time domain and several sub-carriers in the frequency domain.
  • One sub-frame is composed of a plurality of OFDM symbols in the time domain.
  • a resource block is a resource allocation unit and is composed of a plurality of OFDM symbols and a plurality of sub-carriers.
  • each subframe may use specific subcarriers of specific OFDM symbols (eg, the first OFDM symbol) of the corresponding subframe for the PDCCH (Physical Downlink Control Channel), that is, the L1/L2 control channel.
  • A Transmission Time Interval (TTI) is the unit time of subframe transmission.
  • FIG. 10 shows the structure of an NR system to which the present invention can be applied.
  • the NG-RAN may include a gNB and/or an eNB that provides a user plane and a control plane protocol termination to a terminal.
  • FIG. 10 illustrates a case where only the gNB is included.
  • the gNB and the eNB are connected to each other through an Xn interface.
  • the gNB and eNB are connected to the 5th generation core network (5G Core Network: 5GC) through the NG interface.
  • More specifically, the gNB and eNB are connected to the access and mobility management function (AMF) through the NG-C interface and to the user plane function (UPF) through the NG-U interface.
  • FIG. 11 shows functional division between NG-RAN and 5GC to which the present invention can be applied.
  • The gNB may provide functions such as inter-cell radio resource management (Inter Cell RRM), radio bearer management (RB control), connection mobility control, radio admission control, measurement configuration & provision, and dynamic resource allocation.
  • AMF can provide functions such as NAS security and idle state mobility processing.
  • UPF may provide functions such as mobility anchoring and PDU processing.
  • The Session Management Function (SMF) may provide functions such as terminal IP address allocation and PDU session control.
  • FIG. 12 shows the structure of an NR radio frame to which the present invention can be applied.
  • radio frames may be used in uplink and downlink transmission in NR.
  • the radio frame has a length of 10 ms and may be defined as two 5 ms half-frames (HF).
  • the half-frame may include five 1ms subframes (Subframe, SF).
  • a subframe may be divided into one or more slots, and the number of slots within a subframe may be determined according to a subcarrier spacing (SCS).
  • Each slot may include 12 or 14 OFDM(A) symbols according to a cyclic prefix (CP).
  • each slot may include 14 symbols.
  • each slot may include 12 symbols.
  • the symbol may include an OFDM symbol (or CP-OFDM symbol), an SC-FDMA symbol (or DFT-s-OFDM symbol).
  • Table 1 below illustrates the number of symbols per slot, the number of slots per frame, and the number of slots per subframe according to the SCS setting (μ) when the normal CP is used.
  • Table 2 illustrates the number of symbols per slot, the number of slots per frame, and the number of slots per subframe according to the SCS when the extended CP is used.
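As an illustration of these relationships, a minimal sketch (assuming the normal CP unless stated otherwise, with SCS = 15·2^μ kHz, 1 ms subframes, and 10 ms frames as described above):

```python
def numerology(mu, extended_cp=False):
    """Derive per-slot and per-frame counts from the SCS setting (mu)."""
    scs_khz = 15 * (2 ** mu)                   # subcarrier spacing
    symbols_per_slot = 12 if extended_cp else 14
    slots_per_subframe = 2 ** mu               # slots in a 1 ms subframe
    slots_per_frame = 10 * slots_per_subframe  # slots in a 10 ms radio frame
    return {
        "scs_khz": scs_khz,
        "symbols_per_slot": symbols_per_slot,
        "slots_per_subframe": slots_per_subframe,
        "slots_per_frame": slots_per_frame,
    }
```

For example, μ = 1 gives 30 kHz SCS with 2 slots per subframe and 20 slots per frame.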
  • In the NR system, the OFDM(A) numerology (e.g., SCS, CP length, etc.) may be set differently between a plurality of cells merged to one terminal. Accordingly, the (absolute time) duration of a time resource (e.g., subframe, slot, or TTI; collectively referred to as a Time Unit (TU) for convenience) composed of the same number of symbols may be set differently between the merged cells.
  • FIG. 13 shows a slot structure of an NR frame to which the present invention can be applied.
  • a slot includes a plurality of symbols in the time domain.
  • In the case of a normal CP, one slot includes 14 symbols, but in the case of an extended CP, one slot may include 12 symbols. Alternatively, in the case of a normal CP, one slot may include 7 symbols, but in the case of an extended CP, one slot may include 6 symbols.
  • the carrier includes a plurality of subcarriers in the frequency domain.
  • Resource Block (RB) may be defined as a plurality of (eg, 12) consecutive subcarriers in the frequency domain.
  • the BWP (Bandwidth Part) may be defined as a plurality of consecutive (P)RBs in the frequency domain, and may correspond to one numerology (eg, SCS, CP length, etc.).
  • the carrier may include up to N (eg, 5) BWPs. Data communication can be performed through an activated BWP.
  • Each element may be referred to as a resource element (RE) in the resource grid, and one complex symbol may be mapped.
  • When a terminal transmits a packet, a method in which transmission resources of the next packet are also reserved may be used.
  • FIG. 14 shows an example in which a transmission resource to which the present invention can be applied is selected.
  • transmission may be performed twice per MAC PDU.
  • a resource for retransmission may be reserved at a predetermined time gap.
  • Through sensing within the sensing window, the terminal can identify transmission resources reserved by other terminals or resources being used by other terminals, and after excluding them within the selection window, the terminal can randomly select a resource from among the remaining resources with less interference.
  • the terminal may decode a PSCCH including information on the period of the reserved resources within the sensing window, and measure the PSSCH RSRP from the resources periodically determined based on the PSCCH.
  • the UE may exclude resources in which the PSSCH RSRP value exceeds the threshold value from within the selection window. Thereafter, the terminal may randomly select a sidelink resource from among the remaining resources in the selection window.
  • the terminal may determine resources with less interference (eg, resources corresponding to the lower 20%) by measuring RSSI (Received Signal Strength Indication) of periodic resources within the sensing window.
  • the terminal may randomly select a sidelink resource from among resources included in the selection window among the periodic resources. For example, if the terminal fails to decode the PSCCH, the terminal can use the above method.
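The exclusion-then-random-selection procedure above can be sketched as follows (the data structures and names are illustrative; real sensing also accounts for reservation periods and priorities):

```python
import random

def select_resource(candidates, pssch_rsrp, rsrp_threshold_dbm):
    """Exclude candidates whose measured PSSCH RSRP exceeds the threshold,
    then randomly pick one of the remaining (less-interfered) resources."""
    remaining = [r for r in candidates
                 if pssch_rsrp.get(r, float("-inf")) <= rsrp_threshold_dbm]
    return random.choice(remaining) if remaining else None
```

A resource without a decoded PSCCH (no RSRP measurement) is treated here as non-excluded, matching the fallback behavior described above.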
  • FIG. 15 shows an example in which a PSCCH is transmitted in sidelink transmission mode 3 or 4 to which the present invention can be applied.
  • PSCCH and PSSCH are transmitted in the FDM scheme.
  • PSCCH and PSSCH may be transmitted in an FDM manner on different frequency resources on the same time resource. Referring to FIG. 15, PSCCH and PSSCH may not be directly adjacent as shown in FIG. 15A, and PSCCH and PSSCH may be directly adjacent as shown in FIG. 15B.
  • the basic unit of this transmission is a sub-channel.
  • the subchannel may be a resource unit having one or more RB sizes on a frequency axis on a predetermined time resource (eg, a time resource unit).
  • the number of RBs included in the sub-channel ie, the size of the sub-channel and the starting position on the frequency axis of the sub-channel
  • the embodiment of FIG. 15 may be applied to NR sidelink resource allocation mode 1 or mode 2.
  • In vehicle-to-vehicle communication, a periodic message type CAM (Cooperative Awareness Message), an event-triggered message type DENM (Decentralized Environmental Notification Message), and the like may be transmitted.
  • the CAM may include basic vehicle information such as dynamic state information of the vehicle such as direction and speed, vehicle static data such as dimensions, external lighting conditions, and route history.
  • the size of the CAM can be 50-300 bytes.
  • CAM is broadcast, and the latency should be less than 100ms.
  • DENM may be a message generated in case of an unexpected situation such as a vehicle breakdown or an accident.
  • the size of the DENM can be less than 3000 bytes, and any vehicle within the transmission range can receive the message. In this case, DENM may have a higher priority than CAM.
  • Carrier reselection for V2X/sidelink communication may be performed in the MAC layer based on the Channel Busy Ratio (CBR) of the configured carriers and the PPPP (Prose Per-Packet Priority) of the V2X message to be transmitted.
  • CBR may mean the portion of sub-channels in the resource pool for which the S-RSSI measured by the terminal is detected to exceed a preset threshold.
  • PPPP related to each logical channel may exist, and the setting of the PPPP value should reflect the latency required for both the terminal and the base station.
  • the UE may select one or more carriers among candidate carriers in increasing order from the lowest CBR.
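The CBR-ordered carrier selection can be sketched as follows (illustrative names; PPPP-based constraints are omitted for brevity):

```python
def reselect_carriers(cbr_by_carrier, num_carriers):
    """Rank candidate carriers by ascending Channel Busy Ratio and
    return the least-busy ones."""
    ranked = sorted(cbr_by_carrier, key=cbr_by_carrier.get)
    return ranked[:num_carriers]
```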
  • A data unit to which the present invention can be applied may be subjected to physical layer processing at the transmitting side before being transmitted through the air interface, and the radio signal carrying the data unit may be subjected to physical layer processing at the receiving side.
  • FIG. 16 shows an example of physical layer processing at a transmission side to which the present invention can be applied.
  • Table 3 may indicate a mapping relationship between an uplink transport channel and a physical channel
  • Table 4 may indicate a mapping relationship between uplink control channel information and a physical channel.
  • Table 5 may indicate a mapping relationship between a downlink transport channel and a physical channel
  • Table 6 may indicate a mapping relationship between downlink control channel information and a physical channel.
  • Table 7 may indicate a mapping relationship between a sidelink transmission channel and a physical channel
  • Table 8 may indicate a mapping relationship between sidelink control channel information and a physical channel.
  • The transmitting side may perform encoding on a transport block (TB).
  • Data and control streams from the MAC layer may be encoded to provide transport and control services over a radio transmission link at the PHY layer.
  • the TB from the MAC layer may be encoded as a codeword at the transmitting side.
  • The channel coding scheme may be a combination of error detection, error correction, rate matching, interleaving, and control information separated from, or mapped onto, a physical channel or a transport channel.
  • the following channel coding scheme may be used for different types of transport channels and different types of control information.
  • a channel coding scheme for each transport channel type may be shown in Table 9.
  • a channel coding scheme for each control information type may be shown in Table 10.
  • Control information and channel coding scheme: DCI: Polar code; SCI: Polar code; UCI: Block code, Polar code.
  • the transmitting side may attach a cyclic redundancy check (CRC) sequence to the TB.
  • the transmitting side can provide error detection for the receiving side.
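For illustration, a bitwise CRC attachment using the 24-bit generator polynomial 0x1864CFB (the gCRC24A polynomial used for transport blocks in NR). Appending the computed CRC makes the remainder over the whole block zero, which is how the receiver detects errors:

```python
def crc24a(data: bytes) -> int:
    """Compute a 24-bit CRC (generator polynomial 0x1864CFB), MSB first."""
    poly = 0x1864CFB
    reg = 0
    for byte in data:
        for i in range(7, -1, -1):
            reg = (reg << 1) | ((byte >> i) & 1)
            if reg & (1 << 24):
                reg ^= poly
    for _ in range(24):          # flush 24 zero bits through the register
        reg <<= 1
        if reg & (1 << 24):
            reg ^= poly
    return reg & 0xFFFFFF

def attach_crc(tb: bytes) -> bytes:
    """Append the CRC sequence to the transport block."""
    return tb + crc24a(tb).to_bytes(3, "big")
```

The receiver recomputes the CRC over the received TB plus its attached CRC; a nonzero result indicates an error.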
  • the transmitting side may be a transmitting terminal, and the receiving side may be a receiving terminal.
  • a communication device may use an LDPC code to encode/decode UL-SCH and DL-SCH.
  • The NR system can support two LDPC base graphs (i.e., two LDPC base matrices).
  • The two LDPC base graphs may be LDPC base graph 1, optimized for large TBs, and LDPC base graph 2, optimized for small TBs.
  • the transmission side may select LDPC base graph 1 or 2 based on the size of the TB and the coding rate (R).
  • the coding rate may be indicated by a modulation coding scheme (MCS) index (I_MCS).
  • The MCS index may be dynamically provided to the UE by the PDCCH scheduling the PUSCH or the PDSCH. Alternatively, the MCS index may be dynamically provided to the terminal by the PDCCH that (re-)initializes or activates the UL configured grant type 2 or DL SPS.
  • the MCS index may be provided to the UE by RRC signaling related to UL configured grant type 1.
  • the transmission side may divide the TB to which the CRC is attached into a plurality of code blocks. In addition, the transmitting side may attach an additional CRC sequence to each code block.
  • The maximum code block size for LDPC base graph 1 and LDPC base graph 2 may be 8448 bits and 3840 bits, respectively. If the TB to which the CRC is attached is not larger than the maximum code block size for the selected LDPC base graph, the transmitting side may encode the TB to which the CRC is attached with the selected LDPC base graph. Otherwise, the transmitting side may encode each code block of the TB with the selected LDPC base graph.
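A sketch of the base graph selection and segmentation logic, following the rule in TS 38.212 (base graph 2 for small TBs or low code rates, base graph 1 otherwise; a 24-bit per-code-block CRC is assumed):

```python
import math

def select_base_graph(tb_size_bits, code_rate):
    """Pick LDPC base graph 2 for small TBs / low rates, else base graph 1."""
    if (tb_size_bits <= 292 or code_rate <= 0.25
            or (tb_size_bits <= 3824 and code_rate <= 0.67)):
        return 2
    return 1

def num_code_blocks(b_bits, base_graph):
    """Number of code blocks after segmentation (L = 24-bit CB CRC)."""
    k_cb = 8448 if base_graph == 1 else 3840   # max code block size per BG
    if b_bits <= k_cb:
        return 1
    return math.ceil(b_bits / (k_cb - 24))
```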
  • LDPC-coded blocks may be rate matched individually.
  • Code block concatenation may be performed to generate a codeword for transmission on a PDSCH or PUSCH.
  • Up to two codewords (i.e., up to two TBs) may be simultaneously transmitted on the PDSCH (Physical Downlink Shared Channel).
  • PUSCH may be used for transmission of UL-SCH data and layer 1 and/or 2 control information.
  • the layer 1 and/or 2 control information may be multiplexed with a codeword for UL-SCH data.
  • the transmitting side may perform scrambling and modulation on the codeword.
  • the bits of the codeword can be scrambled and modulated to produce a block of complex-valued modulation symbols.
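The scrambling and modulation step can be sketched as follows, using the standard QPSK mapping d(i) = (1/√2)[(1 - 2b(2i)) + j(1 - 2b(2i+1))] (the scrambling sequence itself is taken as given here):

```python
import math

def scramble(bits, sequence):
    """XOR codeword bits with a scrambling sequence of the same length."""
    return [b ^ c for b, c in zip(bits, sequence)]

def qpsk_modulate(bits):
    """Map bit pairs to complex-valued QPSK modulation symbols."""
    amp = 1 / math.sqrt(2)
    return [complex(amp * (1 - 2 * bits[2 * i]), amp * (1 - 2 * bits[2 * i + 1]))
            for i in range(len(bits) // 2)]
```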
  • the transmitting side may perform layer mapping.
  • the complex-valued modulation symbols of the codeword may be mapped to one or more multiple input multiple output (MIMO) layers.
  • Codewords can be mapped to up to four layers.
  • the PDSCH can carry two codewords, and thus the PDSCH can support up to 8-layer transmission.
  • The PUSCH can support a single codeword, and thus the PUSCH can support up to 4-layer transmission.
  • The transmitting side may perform transform precoding.
  • the downlink transmission waveform may be a general OFDM using a cyclic prefix (CP).
  • The uplink transmission waveform may be conventional OFDM using a CP, with a transform precoding function (i.e., discrete Fourier transform (DFT) spreading) that can be disabled or enabled.
  • transform precoding can be selectively applied. Transformation precoding may be to spread the uplink data in a special manner to reduce the peak-to-average power ratio (PAPR) of the waveform.
  • Transform precoding may be a form of DFT. That is, the NR system can support two options for an uplink waveform. One may be CP-OFDM (same as the DL waveform), and the other may be DFT-s-OFDM. Whether the terminal should use CP-OFDM or DFT-s-OFDM may be determined by the base station through the RRC parameter.
  • the transmitting side may perform subcarrier mapping. Layers can be mapped to antenna ports.
  • A transparent manner of (non-codebook-based) mapping may be supported, and how beamforming or MIMO precoding is performed may be transparent to the terminal.
  • both non-codebook-based mapping and codebook-based mapping may be supported.
  • The transmitting side may map complex-valued modulation symbols to subcarriers in the resource blocks allocated to the physical channel.
  • the transmitting side may perform OFDM modulation.
  • For a physical channel, the communication device at the transmitting side can generate a time-continuous OFDM baseband signal on antenna port p for OFDM symbol l within the TTI, according to the subcarrier spacing setting u.
  • the communication device of the transmitting side may perform Inverse Fast Fourier Transform (IFFT) on a complex-valued modulation symbol mapped to a resource block of the corresponding OFDM symbol.
  • the communication device of the transmission side may add a CP to the IFFT signal to generate an OFDM baseband signal.
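A minimal sketch of this step (a textbook IDFT rather than an optimized IFFT, with the last cp_len samples prepended as the cyclic prefix):

```python
import cmath

def ofdm_modulate(freq_symbols, cp_len):
    """IDFT the per-subcarrier symbols into a time-domain OFDM symbol
    and prepend a cyclic prefix copied from its tail."""
    n = len(freq_symbols)
    time = [sum(freq_symbols[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]
    return time[-cp_len:] + time
```

Because the CP is a copy of the symbol tail, the first cp_len output samples equal the last cp_len samples.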
  • the transmitting side may perform up-conversion.
  • The communication device at the transmitting side may up-convert the OFDM baseband signal for the antenna port (p), subcarrier spacing setting (u), and OFDM symbol (l) to the carrier frequency (f0) of the cell to which the physical channel is allocated.
  • the processors 9011 and 9021 of FIG. 23 may be configured to perform encoding, scrambling, modulation, layer mapping, precoding transformation (for uplink), subcarrier mapping, and OFDM modulation.
  • FIG. 17 shows an example of physical layer processing at a receiving side to which the present invention can be applied.
  • the physical layer processing at the receiving side may basically be an inverse processing of the physical layer processing at the transmitting side.
  • the receiving side may perform frequency down-conversion.
  • the communication device of the receiving side may receive an RF signal of a carrier frequency through an antenna.
  • the transceivers 9013 and 9023 for receiving the RF signal at the carrier frequency may down-convert the carrier frequency of the RF signal to the baseband to obtain an OFDM baseband signal.
  • the receiving side may perform OFDM demodulation.
  • The communication device at the receiving side may acquire complex-valued modulation symbols through CP removal and FFT. For example, for each OFDM symbol, the communication device at the receiving side may remove the CP from the OFDM baseband signal, and may then perform FFT on the CP-removed OFDM baseband signal to obtain complex-valued modulation symbols for the antenna port (p), subcarrier spacing (u), and OFDM symbol (l).
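The inverse step at the receiver, sketched the same way (drop the CP, then a textbook DFT to recover the complex-valued modulation symbols per subcarrier):

```python
import cmath

def ofdm_demodulate(samples, n, cp_len):
    """Remove the cyclic prefix and DFT back to per-subcarrier symbols."""
    body = samples[cp_len:cp_len + n]
    return [sum(body[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]
```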
  • the receiving side may perform subcarrier demapping.
  • Subcarrier demapping may be performed on a complex-valued modulation symbol to obtain a complex-valued modulation symbol of a corresponding physical channel.
  • the processor of the terminal may obtain a complex-valued modulation symbol mapped to a subcarrier belonging to the PDSCH among complex-valued modulation symbols received in a bandwidth part (BWP).
  • The receiving side may perform transform de-precoding. For an uplink physical channel for which transform precoding is enabled, transform de-precoding (e.g., IDFT) may be performed on the complex-valued modulation symbols of the uplink physical channel. For a downlink physical channel, and for an uplink physical channel for which transform precoding is disabled, transform de-precoding may not be performed.
  • In step S114, the receiving side may perform layer demapping.
  • the complex-valued modulation symbol can be demapped into one or two codewords.
  • the receiving side may perform demodulation and descrambling.
  • the complex-value modulated symbol of the codeword can be demodulated and descrambled with bits of the codeword.
  • the receiving side may perform decoding.
  • the codeword can be decoded into TB.
  • LDPC base graph 1 or 2 may be selected based on the size of TB and coding rate (R).
  • The codeword may include one or a plurality of coded blocks. Each coded block may be decoded, with the selected LDPC base graph, into a code block to which a CRC is attached or into a TB to which a CRC is attached.
  • the CRC sequence may be removed from each of the code blocks to which the CRC is attached, and code blocks may be obtained.
  • The code blocks may be concatenated into the TB to which the CRC is attached.
  • the TB CRC sequence can be removed from the TB to which the CRC is attached, whereby the TB can be obtained.
  • TB can be delivered to the MAC layer.
  • the processors 9011 and 9021 of FIG. 22 may be configured to perform OFDM demodulation, subcarrier demapping, layer demapping, demodulation, descrambling, and decoding.
  • Time and frequency domain resources (e.g., OFDM symbol, subcarrier, carrier frequency) related to subcarrier mapping, OFDM modulation, and frequency up/down conversion may be determined based on resource allocation (e.g., an uplink grant or a downlink allocation).
  • In time division multiple access (TDMA) and frequency division multiple access (FDMA) systems, accurate time and frequency synchronization is essential. If time and frequency synchronization are not accurate, system performance may be degraded due to inter-symbol interference (ISI) and inter-carrier interference (ICI). The same applies to V2X/sidelink communication.
  • MIB-SL-V2X master information block-sidelink-V2X
  • the terminal is directly synchronized to the GNSS (global navigation satellite systems), or indirectly synchronized to the GNSS through a terminal (in network coverage or out of network coverage) directly synchronized with the GNSS.
  • The UE may calculate the DFN and the subframe number using Coordinated Universal Time (UTC) and a (pre)configured Direct Frame Number (DFN) offset.
  • the terminal may be directly synchronized with the base station or may be synchronized with another terminal that is time/frequency synchronized with the base station.
  • the base station may be an eNB or a gNB.
  • the terminal may receive synchronization information provided by the base station, and may be directly synchronized with the base station. Thereafter, the terminal may provide synchronization information to other adjacent terminals.
  • the base station timing is set as the synchronization criterion
  • For synchronization and downlink measurement, the UE may follow the cell associated with the corresponding frequency (if it is within cell coverage at that frequency), or the primary cell or a serving cell (if it is outside cell coverage at that frequency).
  • the base station may provide synchronization settings for carriers used for V2X/sidelink communication.
  • the terminal may follow the synchronization setting received from the base station. If the terminal has not detected any cell in the carrier used for the V2X/sidelink communication and has not received a synchronization setting from a serving cell, the terminal may follow a preset synchronization setting.
  • the terminal may be synchronized to another terminal that has not directly or indirectly obtained synchronization information from the base station or the GNSS.
  • the synchronization source and preference may be preset to the terminal.
  • the synchronization source and preference may be set through a control message provided by the base station.
  • the sidelink synchronization source may be associated with synchronization priority.
  • the relationship between the synchronization source and the synchronization priority may be defined as shown in Table 11.
  • Table 11 is only an example, and the relationship between the synchronization source and the synchronization priority may be defined in various forms.
  • Priority: GNSS-based synchronization / Base station-based synchronization (eNB/gNB-based synchronization)
  • P0: GNSS / Base station
  • P1: All terminals synchronized directly to GNSS / All terminals synchronized directly to the base station
  • P2: All terminals indirectly synchronized to GNSS / All terminals indirectly synchronized to the base station
  • P3: All other terminals / GNSS
  • P4: N/A / All terminals synchronized directly to GNSS
  • P5: N/A / All terminals indirectly synchronized to GNSS
  • P6: N/A / All other terminals
  • Whether to use GNSS-based synchronization or base station-based synchronization may be set (in advance).
  • the terminal can derive the transmission timing of the terminal from an available synchronization criterion having the highest priority.
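The priority-based choice of a synchronization reference can be sketched as follows (the labels are illustrative stand-ins for the priority classes of Table 11):

```python
def pick_sync_reference(available, priority_order):
    """Return the highest-priority synchronization reference that is
    currently available to the terminal, or None if none is available."""
    for ref in priority_order:
        if ref in available:
            return ref
    return None
```

For GNSS-based synchronization the order would run from GNSS (P0) down through directly and indirectly synchronized terminals to all other terminals.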
  • GNSS, eNB, and UE may be set/selected as synchronization (sync) references.
  • gNB has been introduced, and therefore, NR gNB can also be a synchronization reference. In this case, it is necessary to determine the synchronization source priority of the gNB.
  • The NR terminal may not implement an LTE synchronization signal detector or may not access an LTE carrier (a non-standalone NR UE). In this situation, the LTE terminal and the NR terminal may have different timings, which is undesirable from the viewpoint of effective resource allocation.
  • the synchronization source/reference may be defined as a subject that transmits a synchronization signal or a synchronization signal used for the UE to induce timing for transmitting and receiving a sidelink signal or inducing a subframe boundary. If the terminal receives the GNSS signal and induces the subframe boundary based on the UTC timing derived from the GNSS, the GNSS signal or the GNSS may be a synchronization source/reference.
  • the base station and the terminal may perform an initial access (IA) operation.
  • Cell discovery is a procedure in which the UE acquires time and frequency synchronization with a cell and detects the physical layer cell ID of the cell.
  • To perform cell discovery, the UE receives the following synchronization signals (SS): the primary synchronization signal (PSS) and the secondary synchronization signal (SSS).
  • the UE should assume that the PBCH (Physical Broadcast Channel), PSS, and SSS are received in consecutive symbols and form an SS/PBCH block.
  • the UE should assume that the SSS, PBCH DM-RS and PBCH data have the same EPRE.
  • the UE may assume that the ratio of PSS EPRE to SSS EPRE in the SS/PBCH block of the corresponding cell is 0 dB or 3 dB.
  • the UE's cell discovery procedure can be summarized in Table 12.
  • The synchronization signal and PBCH block (SS/PBCH block) is composed of the primary synchronization signal (PSS) and secondary synchronization signal (SSS), each occupying 1 symbol and 127 subcarriers, and the PBCH, spanning 3 OFDM symbols and 240 subcarriers, but on one symbol leaving an unused part in the middle for the SSS, as shown in FIG. 19.
  • The period of the SS/PBCH block can be configured by the network, and the time positions at which the SS/PBCH block can be transmitted are determined by the subcarrier spacing.
  • Polar coding is used for PBCH.
  • the UE may assume a band-specific subcarrier spacing for the SS/PBCH block.
  • the PBCH symbol carries a unique frequency-multiplexed DMRS.
  • QPSK modulation is used for PBCH.
  • The PSS sequence d_PSS(n) is defined by d_PSS(n) = 1 - 2x(m), with m = (n + 43·N_ID^(2)) mod 127 and 0 ≤ n < 127, where the m-sequence x satisfies x(i + 7) = (x(i + 4) + x(i)) mod 2 with initial state [x(6) x(5) x(4) x(3) x(2) x(1) x(0)] = [1 1 1 0 1 1 0].
  • This sequence is mapped to the physical resource shown in FIG. 19.
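The m-sequence construction of the PSS referenced above can be sketched as follows (per the NR definition: d(n) = 1 - 2x(m), m = (n + 43·N_ID^(2)) mod 127):

```python
def nr_pss(n_id_2):
    """Generate the 127-sample NR PSS BPSK sequence for N_ID^(2) in {0, 1, 2}."""
    x = [0, 1, 1, 0, 1, 1, 1]             # initial state x(0)..x(6)
    for i in range(127 - 7):
        x.append((x[i + 4] + x[i]) % 2)   # x(i+7) = (x(i+4) + x(i)) mod 2
    return [1 - 2 * x[(n + 43 * n_id_2) % 127] for n in range(127)]
```

The three PSS candidates (one per N_ID^(2)) are cyclic shifts of one underlying m-sequence, which is what makes them cheap to detect jointly.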
  • The first symbol index for a candidate SS/PBCH block is determined according to the subcarrier spacing of the SS/PBCH block; for example, for 15 kHz SCS, the first symbols of candidate SS/PBCH blocks have indexes {2, 8} + 14·n, with n = 0, 1 for carrier frequencies at or below 3 GHz, and n = 0, 1, 2, 3 for carrier frequencies above 3 GHz and at or below 6 GHz.
  • candidate SS/PBCH blocks are indexed in ascending order in time order from 0 to L-1.
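The candidate-block indexing above (a set of first-symbol offsets repeated with a per-slot-pair stride, then numbered 0 to L-1 in ascending time order) can be sketched as follows; the offsets and stride are illustrative placeholders, since their actual values depend on the subcarrier spacing case.

```python
def candidate_first_symbols(offsets, n_values, stride=14):
    """Candidate SS/PBCH block first-symbol indexes: each offset plus
    stride*n, listed in ascending time order (candidate index 0 .. L-1).
    The offsets/stride are assumptions for illustration."""
    return sorted(o + stride * n for n in n_values for o in offsets)
```

For example, offsets (2, 8) scanned over n in (0, 1) give four candidate positions, i.e. L = 4.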
  • the UE may be configured, by the higher layer parameter SSB-transmitted-SIB1, with the indexes of the SS/PBCH blocks for which the UE shall not receive other signals or channels in REs overlapping with the REs corresponding to those SS/PBCH blocks.
  • the UE may be configured, by the higher layer parameter SSB-transmitted, with the indexes of the SS/PBCH blocks for which the UE shall not receive other signals or channels in REs overlapping with the REs corresponding to those SS/PBCH blocks. The configuration by SSB-transmitted takes precedence over the configuration by SSB-transmitted-SIB1.
  • the UE may be configured, by the higher layer parameter SSB-periodicityServingCell, with the half frame periodicity for reception of SS/PBCH blocks per serving cell. If the periodicity of the half frames for reception of SS/PBCH blocks is not configured for the UE, the UE shall assume a periodicity of a half frame. The UE shall assume that the periodicity is the same for all SS/PBCH blocks of the serving cell.
  • FIG. 20 shows a method for a UE to obtain timing information.
  • the UE may obtain 6-bit SFN information through a MasterInformationBlock (MIB) received on the PBCH.
  • a further 4 bits of the SFN may be obtained from the PBCH transport block.
  • the UE can obtain a 1-bit half frame indication as part of the PBCH payload.
  • the UE can obtain the SS/PBCH block index from the DMRS sequence and the PBCH payload. That is, the LSB 3 bits of the SS block index are obtained from the DMRS sequence within a 5 ms period. In addition, the MSB 3 bits of the timing information are explicitly carried in the PBCH payload (for above 6 GHz).
  • the UE may assume that a half frame having an SS/PBCH block occurs in a period of 2 frames.
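Combining the pieces above (SFN bits from the MIB plus the PBCH transport block, and an SS block index split between the DMRS sequence and the PBCH payload) reduces to bit arithmetic; a minimal sketch, with illustrative function names and the MIB bits assumed to be the most significant:

```python
def assemble_sfn(mib_sfn_bits6, pbch_sfn_bits4):
    """10-bit SFN: 6 bits carried in the MIB plus 4 further bits from the
    PBCH transport block (MIB bits taken as most significant here)."""
    return (mib_sfn_bits6 << 4) | pbch_sfn_bits4

def assemble_ssb_index(dmrs_lsb3, payload_msb3=0):
    """SS/PBCH block index: LSB 3 bits from the PBCH DMRS sequence; above
    6 GHz the MSB 3 bits come explicitly from the PBCH payload."""
    return (payload_msb3 << 3) | dmrs_lsb3
```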
  • for FR1, if k_SSB <= 23, and for FR2, if k_SSB <= 11, the UE determines that a control resource set for the Type0-PDCCH common search space is present.
  • for FR1, if k_SSB > 23, and for FR2, if k_SSB > 11, the UE determines that no control resource set for the Type0-PDCCH common search space is present.
  • for a serving cell without transmission of SS/PBCH blocks, the UE acquires time and frequency synchronization of the serving cell based on reception of SS/PBCH blocks on the PCell or the PSCell of the cell group for that serving cell.
  • SI (System information) consists of the MIB (MasterInformationBlock) and a number of SIBs (SystemInformationBlocks), including SIB1 (SystemInformationBlockType1).
  • SIB1 SystemInformationBlockType1 is periodically and repeatedly transmitted on the DL-SCH.
  • SIB1 contains information on the availability and scheduling of the other SIBs (e.g., periodicity, SI window size). It also indicates whether they (i.e., the other SIBs) are provided on a periodic broadcast basis or on a request basis. If other SIBs are provided on a request basis, SIB1 includes the information needed for the UE to perform the SI request.
  • SIBs other than SystemInformationBlockType1 are carried in SI (SystemInformation) messages transmitted on the DL-SCH. Each SI message is transmitted within a periodically occurring time domain window (SI window).
  • the RAN provides the necessary SI through dedicated signaling. Nevertheless, the UE must acquire the MIB of the PSCell to obtain the SFN timing of the SCG (which may differ from that of the MCG). When the relevant SI for an SCell is changed, the RAN releases and re-adds the SCell concerned. For the PSCell, the SI can be changed only by reconfiguration with synchronization.
  • the UE obtains AS and NAS information by applying the SI acquisition procedure.
  • the procedure is applied to the UE of RRC_IDLE, RRC_INACTIVE and RRC_CONNECTED.
  • the RRC_IDLE and RRC_INACTIVE UEs must have valid versions of (at least) MasterInformationBlock, SystemInformationBlockType1 and SystemInformationBlockTypeX through SystemInformationBlockTypeY (depending on the support of the relevant RAT for UE control mobility).
  • the UE of RRC_CONNECTED must have a valid version of (at least) MasterInformationBlock, SystemInformationBlockType1 and SystemInformationBlockTypeX (according to mobility support for the relevant RAT).
  • the UE must store the related SI obtained from the currently camped cell/serving cell.
  • the version of the SI acquired and stored by the UE is valid only for a specific time.
  • the UE can use this stored version of the SI. For example, this is the case after cell reselection, returning out of coverage, or after SI change indication.
  • the random access procedure of the UE may be summarized in Table 13 and FIG. 22.
  • the UE may transmit a PRACH preamble in UL.
  • Random access preamble sequences of two lengths are supported.
  • the long sequence length 839 is applied with subcarrier spacings of 1.25 and 5 kHz,
  • the short sequence length 139 is applied with subcarrier spacings of 15, 30, 60, and 120 kHz.
  • Long sequences support unrestricted sets and restricted sets of Type A and Type B, while short sequences support only unrestricted sets.
  • RACH preamble formats are defined with one or more RACH OFDM symbols and different cyclic prefixes and guard times.
  • the PRACH preamble configuration to be used is provided to the UE in system information.
  • the UE may retransmit the PRACH preamble, with power ramping, up to a preset number of times.
  • the UE calculates the PRACH transmission power for retransmission of the preamble based on the most recent estimated path loss and the power ramping counter. When the UE performs beam switching, the power ramping counter remains unchanged.
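A minimal sketch of the retransmission power rule just described, assuming the usual open-loop form (target received power plus estimated path loss, ramped per attempt and capped at the UE maximum power); the parameter names are illustrative:

```python
def prach_tx_power(p_cmax_dbm, target_rx_power_dbm, ramping_step_db,
                   power_ramping_counter, pathloss_db):
    """PRACH power for the current (re)transmission attempt, capped at
    the UE maximum output power."""
    return min(p_cmax_dbm,
               target_rx_power_dbm
               + (power_ramping_counter - 1) * ramping_step_db
               + pathloss_db)

def next_ramping_counter(counter, beam_switched):
    """The power ramping counter is held unchanged on beam switching."""
    return counter if beam_switched else counter + 1
```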
  • the system information informs the UE of the association between the SS block and the RACH resource.
  • 23 shows the concept of a threshold value of an SS block for RACH resource association.
  • the threshold value of the SS block for RACH resource association is based on RSRP and network configurability. Transmission or retransmission of the RACH preamble is based on an SS block that satisfies the threshold value.
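The SS-block/RACH-resource association rule above (transmit or retransmit the preamble on a RACH resource tied to an SS block whose RSRP meets the configured threshold) can be sketched as follows; the selection of the strongest qualifying block is an illustrative choice, not mandated by the text:

```python
def select_ssb_for_rach(ssb_rsrp_dbm, rsrp_threshold_dbm):
    """Return the index of the strongest SS/PBCH block whose RSRP meets
    the network-configured threshold, or None if no block qualifies."""
    qualifying = [(rsrp, idx) for idx, rsrp in enumerate(ssb_rsrp_dbm)
                  if rsrp >= rsrp_threshold_dbm]
    return max(qualifying)[1] if qualifying else None
```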
  • the DL-SCH may provide timing alignment information, RA-preamble ID, initial UL grant, and Temporary C-RNTI.
  • the UE may perform UL transmission on the UL-SCH as Msg3 of the random access procedure.
  • Msg3 may include an RRC connection request and a UE identifier.
  • the network can transmit Msg4, which can be treated as a contention resolution message on the DL.
  • the UE can enter the RRC connected state.
  • before starting the physical random access procedure, Layer 1 must receive a set of SS/PBCH block indexes from the higher layers and provide a corresponding set of RSRP measurements to the higher layers.
  • before starting the physical random access procedure, Layer 1 must receive the following information from the higher layers:
  • PRACH (Physical Random Access Channel) transmission parameter configuration: the PRACH preamble format, time resources, and frequency resources for PRACH transmission.
  • parameters for determining the root sequences and their cyclic shifts in the PRACH preamble sequence set: the index of the logical root sequence table, the cyclic shift, and the set type (unrestricted, restricted set A, or restricted set B).
  • the L1 random access procedure includes transmission of a random access preamble (Msg1) on the PRACH, a random access response (RAR) message with a PDCCH/PDSCH (Msg2), and, if applicable, a Msg3 PUSCH and a PDSCH for contention resolution.
  • a random access preamble transmission triggered by a PDCCH order has the same subcarrier spacing as a random access preamble transmission initiated by the higher layers.
  • the UE uses the UL/SUL indicator field value from the detected "PDCCH order" to determine the UL carrier for the transmission of the corresponding random access preamble.
  • the physical random access procedure is triggered by a PRACH transmission request from the higher layers or by a PDCCH order.
  • the upper layer configuration for PRACH transmission includes the following.
  • the preamble is transmitted on the indicated PRACH resource, using the selected PRACH format, with the configured transmission power.
  • the UE is provided with the number of SS/PBCH blocks associated with one PRACH occasion by the value of the higher layer parameter SSB-perRACH-Occasion.
  • when the value of SSB-perRACH-Occasion is less than one, one SS/PBCH block is mapped to 1/SSB-perRACH-Occasion consecutive PRACH occasions.
  • the UE is provided with the number of preambles per SS/PBCH block by the value of the higher layer parameter cb-preamblePerSSB, and the UE determines the total number of preambles per SSB per PRACH occasion as the product of SSB-perRACH-Occasion and cb-preamblePerSSB.
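The SSB-perRACH-Occasion and cb-preamblePerSSB arithmetic above can be sketched with exact fractions (values of SSB-perRACH-Occasion below one, such as 1/2 or 1/4, spread one SS/PBCH block over several consecutive PRACH occasions):

```python
from fractions import Fraction

def preambles_per_ssb_per_occasion(ssb_per_rach_occasion, cb_preamble_per_ssb):
    """Total preambles per SSB per PRACH occasion: the product of
    SSB-perRACH-Occasion and cb-preamblePerSSB."""
    return Fraction(ssb_per_rach_occasion) * cb_preamble_per_ssb

def occasions_per_ssb(ssb_per_rach_occasion):
    """When SSB-perRACH-Occasion < 1, one SS/PBCH block is mapped to
    1/SSB-perRACH-Occasion consecutive PRACH occasions; otherwise one."""
    v = Fraction(ssb_per_rach_occasion)
    return 1 / v if v < 1 else Fraction(1)
```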
  • SS/PBCH block indexes are mapped to PRACH occasions in the following order:
  • an association period, starting from frame 0, is the smallest period in the set {1, 2, 4} of PRACH configuration periods such that N_Tx^SSB SS/PBCH blocks are mapped at least once to the PRACH occasions within it, where the UE obtains N_Tx^SSB from the higher layer parameter SSB-transmitted-SIB1, and N_Tx^SSB is the number of SS/PBCH blocks that can be mapped to one PRACH configuration period.
  • in response to a PDCCH order, the UE shall transmit the PRACH in the first available PRACH occasion for which the time between the last symbol of the PDCCH order reception and the first symbol of the PRACH transmission is equal to or greater than N_T,2 + Δ msec.
  • here, N_T,2 is the time duration of N_2 symbols corresponding to the PUSCH preparation time for PUSCH processing capability 1, and Δ is a preset value.
  • the UE attempts to detect the PDCCH corresponding to the RA-RNTI during the window controlled by the upper layer.
  • the window starts at the first symbol of the earliest control resource set configured for the Type1-PDCCH common search space that is at least one symbol after the last symbol of the preamble sequence transmission.
  • the window length as the number of slots based on the subcarrier spacing for the Type0-PDCCH common search space is provided by the upper layer parameter rar-WindowLength.
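The RA-RNTI monitored in this window is derived from the time/frequency position of the PRACH occasion; the sketch below uses the formula commonly defined for NR (TS 38.321), stated here as an assumption since this text does not reproduce it:

```python
def ra_rnti(s_id, t_id, f_id, ul_carrier_id):
    """RA-RNTI from the first OFDM symbol index of the PRACH occasion
    (s_id, 0..13), the first slot index in the system frame (t_id, 0..79),
    the frequency-domain index (f_id, 0..7), and the UL carrier
    (0 = normal UL, 1 = supplementary UL)."""
    assert 0 <= s_id < 14 and 0 <= t_id < 80 and 0 <= f_id < 8
    return 1 + s_id + 14 * t_id + 14 * 80 * f_id + 14 * 80 * 8 * ul_carrier_id
```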
  • if the UE detects the PDCCH corresponding to the RA-RNTI and the corresponding PDSCH including the DL-SCH transport block within the window, the UE delivers the transport block to the higher layers.
  • the upper layer parses the transport block for RAPID (Random Access Preamble Identity) related to PRACH transmission.
  • the higher layers indicate an uplink grant to the physical layer. This is referred to as the RAR (Random Access Response) UL grant in the physical layer. If the higher layers do not identify the RAPID related to the PRACH transmission, the higher layers may instruct the physical layer to transmit a PRACH.
  • the minimum time between the last symbol of the PDSCH reception and the first symbol of the PRACH transmission is equal to N_T,1 + Δ + 0.5 msec, where N_T,1 is the time duration of N_1 symbols corresponding to the PDSCH reception time for PDSCH processing capability 1 when an additional PDSCH DM-RS is configured.
  • for the detected SS/PBCH block or the received CSI-RS, the UE must receive the PDCCH of the corresponding RA-RNTI and a corresponding PDSCH including the DL-SCH transport block with the same DM-RS antenna port quasi co-location properties.
  • when the UE attempts to detect the PDCCH corresponding to the RA-RNTI in response to a PRACH transmission initiated by a PDCCH order, the UE assumes that this PDCCH and the PDCCH order have the same DM-RS antenna port quasi co-location properties.
  • the contents of the RAR UL grant, starting with the MSB and ending with the LSB, are shown in Table 14.
  • Table 14 shows the random access response grant content field size.
  • RAR grant field / Number of bits:
    Frequency hopping flag: 1
    Msg3 PUSCH frequency resource allocation: 12
    Msg3 PUSCH time resource allocation: 4
    MCS: 4
    TPC command for Msg3 PUSCH: 3
    CSI request: 1
    Reserved bits: 3
  • Msg3 PUSCH frequency resource allocation is for uplink resource allocation type 1.
  • when frequency hopping is enabled, some bits of the Msg3 PUSCH frequency resource allocation are used as hopping information bits, as described in [Table 14].
  • the MCS is determined from the first 16 indexes of the MCS index table applicable to the PUSCH.
  • Table 15 shows the TPC commands for Msg3 PUSCH.
  • in some cases, the CSI request field is interpreted to determine whether an aperiodic CSI report is included in the corresponding PUSCH transmission; otherwise, the CSI request field is reserved.
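The RAR UL grant fields of Table 14 (read MSB first) can be unpacked mechanically; the field names below are illustrative:

```python
RAR_GRANT_FIELDS = [           # (name, width in bits), MSB first, per Table 14
    ("frequency_hopping_flag", 1),
    ("msg3_pusch_freq_alloc", 12),
    ("msg3_pusch_time_alloc", 4),
    ("mcs", 4),
    ("tpc_command", 3),
    ("csi_request", 1),
    ("reserved", 3),
]

def parse_rar_grant(bits):
    """Split the RAR UL grant bit string (MSB first) into its fields."""
    assert len(bits) == sum(width for _, width in RAR_GRANT_FIELDS)
    out, pos = {}, 0
    for name, width in RAR_GRANT_FIELDS:
        out[name] = int(bits[pos:pos + width], 2)
        pos += width
    return out
```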
  • the UE receives the subsequent PDSCH using the same subcarrier spacing as the PDSCH reception providing the RAR message.
  • if the UE does not detect the PDCCH with the corresponding RA-RNTI and the corresponding DL-SCH transport block within the window, the UE performs the random access response reception failure procedure.
  • the UE may perform power ramping for retransmission of the random access preamble based on the power ramping counter.
  • as shown in the figure, when the UE performs beam switching in PRACH retransmission, the power ramping counter remains unchanged.
  • when the UE retransmits the random access preamble on the same beam, the UE may increase the power ramping counter by 1. However, if the beam changes, the power ramping counter does not change.
  • the higher layer parameter msg3-tp indicates whether or not the UE should apply transform precoding for the Msg3 PUSCH transmission.
  • the frequency offset for the second hop is given in Table 16. Table 16 shows the frequency offset for the second hop for Msg3 PUSCH transmission with frequency hopping.
  • the subcarrier spacing for Msg3 PUSCH transmission is provided by the upper layer parameter msg3-scs.
  • the UE must transmit the PRACH and Msg3 PUSCH through the same uplink carrier of the same serving cell.
  • the UL BWP for Msg3 PUSCH transmission is indicated by SystemInformationBlockType1.
  • the minimum time between the last symbol of the PDSCH reception carrying the RAR to the UE and the first symbol of the corresponding Msg3 PUSCH transmission scheduled by the RAR is equal to N_T,1 + N_T,2 + N_TA,max + 0.5 msec.
  • here, N_T,1 is the time duration of N_1 symbols corresponding to the PDSCH reception time for PDSCH processing capability 1 when an additional PDSCH DM-RS is configured, N_T,2 is the time duration of N_2 symbols corresponding to the PUSCH preparation time for PUSCH processing capability 1, and N_TA,max is the maximum timing adjustment value that can be provided by the TA command field of the RAR.
  • in response to the Msg3 PUSCH transmission, when a C-RNTI is not provided to the UE, the UE attempts to detect a PDCCH with the TC-RNTI scheduling a PDSCH that includes the UE contention resolution identity. In response to receiving that PDSCH with the UE contention resolution identity, the UE transmits HARQ-ACK information on the PUCCH.
  • the minimum time between the last symbol of the PDSCH reception and the first symbol of the corresponding HARQ-ACK transmission is equal to N_T,1 + 0.5 msec, where N_T,1 is the time duration of N_1 symbols corresponding to the PDSCH reception time for PDSCH processing capability 1 when an additional PDSCH DM-RS is configured.
  • the channel coding schemes for one embodiment mainly include (1) an LDPC (Low Density Parity Check) coding scheme for data and (2) polar coding, repetition coding, simplex coding, and Reed-Muller coding schemes for control information.
  • the network/UE can perform LDPC coding for the PDSCH/PUSCH, with two base graphs (BGs) supported.
  • BG1 has a mother code rate of 1/3
  • BG2 has a mother code rate of 1/5.
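Which base graph applies depends on the transport block size and code rate; the thresholds below follow the commonly cited TS 38.212 rule and are stated here as an assumption rather than taken from this text:

```python
def select_base_graph(payload_size_a, code_rate_r):
    """Choose the LDPC base graph: BG2 (mother code rate 1/5) for small
    payloads / low rates, BG1 (mother code rate 1/3) otherwise.
    Thresholds assumed from the TS 38.212 selection rule."""
    if (payload_size_a <= 292
            or code_rate_r <= 0.25
            or (payload_size_a <= 3824 and code_rate_r <= 0.67)):
        return "BG2"
    return "BG1"

MOTHER_CODE_RATE = {"BG1": 1 / 3, "BG2": 1 / 5}
```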
  • for coding of control information, repetition coding / simplex coding / Reed-Muller coding may be supported.
  • a polar coding scheme can be used.
  • the mother code size may be 512
  • the mother code size may be 1024.
  • Table 17 summarizes the coding scheme of uplink control information.
  • Uplink Control Information size (including CRC, if present) / Channel code:
    1 bit: repetition code
    2 bits: simplex code
    3 to 11 bits: Reed-Muller code
    more than 11 bits: polar code
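A sketch of the size-to-code mapping summarized in Table 17, assuming the usual NR split (1 bit: repetition; 2 bits: simplex; 3 to 11 bits: Reed-Muller; larger payloads: polar):

```python
def uci_channel_code(uci_size_bits):
    """Channel code used for uplink control information of a given size
    (including CRC, if present); mapping assumed per Table 17."""
    if uci_size_bits == 1:
        return "repetition"
    if uci_size_bits == 2:
        return "simplex"
    if 3 <= uci_size_bits <= 11:
        return "Reed-Muller"
    return "polar"
```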
  • a polar coding scheme may be used for the PBCH.
  • This coding scheme may be the same as in the PDCCH.
  • the LDPC coding structure is described in detail.
  • the LDPC code is an (n, k) linear block code defined as the null space of an (n - k) x n parity check matrix H.
  • the parity check matrix can be represented by a protograph, as shown in FIG. 25 below.
  • a QC (quasi-cyclic) LDPC code is used.
  • the parity check matrix is an m x n array of Z x Z cyclic permutation matrices (and zero matrices).
  • FIG. 26 shows an example of a parity check matrix based on 4 x 4 cyclic permutation matrices.
  • H is represented by shift values (for the cyclic permutation matrices) and 0 (for the zero matrices) instead of the matrices P_i themselves.
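The shift-value representation of H can be expanded back into the full binary parity check matrix by replacing each entry with a Z x Z cyclic permutation matrix (or the all-zero matrix); a minimal sketch, using -1 to mark the zero matrix:

```python
def expand_base_matrix(base, z):
    """Expand a base matrix of shift values (-1 = Z x Z zero matrix) into
    the full binary parity check matrix H of a QC-LDPC code."""
    m, n = len(base), len(base[0])
    h = [[0] * (n * z) for _ in range(m * z)]
    for i in range(m):
        for j in range(n):
            shift = base[i][j]
            if shift < 0:              # -1 marks the all-zero submatrix
                continue
            for r in range(z):
                # row r of the Z x Z identity, cyclically shifted by `shift`
                h[i * z + r][j * z + (r + shift) % z] = 1
    return h
```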
  • FIG. 27 shows an encoder structure for a polar code. Specifically, FIG. 27 (a) shows the basic module of the polar code, and FIG. 27 (b) shows the base matrix.
  • polar codes are known in the art as codes capable of achieving channel capacity on binary-input discrete memoryless channels (B-DMC). That is, channel capacity can be achieved as the size N of the code block increases to infinity.
  • the polar code encoder performs channel combining and channel splitting, as shown in FIG. 28.
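The channel combining step applies the Kronecker power of the basic 2 x 2 kernel F = [[1, 0], [1, 1]] shown in FIG. 27; a sketch of the resulting transform x = u · F^(⊗n) over GF(2), with the bit-reversal permutation omitted for simplicity:

```python
def polar_encode(u):
    """Polar transform of bit list u (length a power of two) over GF(2),
    i.e. multiplication by the n-fold Kronecker power of F = [[1,0],[1,1]]
    (bit-reversal permutation omitted in this sketch)."""
    n = len(u)
    assert n & (n - 1) == 0 and n > 0
    x = list(u)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]    # butterfly: upper branch takes the XOR
        step *= 2
    return x
```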
  • FIG. 29 shows the UE RRC state machine and state transitions.
  • the UE has only one RRC state at a time.
  • FIG. 30 shows a UE state machine and state transition and mobility procedures supported between NR/NGC and E-UTRAN/EPC.
  • the RRC state indicates whether the RRC layer of the UE is logically connected to the RRC layer of the NG RAN.
  • when an RRC connection is established, the UE is in the RRC (radio resource control)_CONNECTED state or the RRC_INACTIVE state. Otherwise, that is, if no RRC connection is established, the UE is in the RRC_IDLE state.
  • when the UE is in the RRC connected state or the RRC inactive state, since the UE has an RRC connection, the NG RAN can recognize the presence of the UE at the cell level and can therefore control the UE effectively.
  • when in the RRC idle state, the UE cannot be recognized by the NG RAN and is managed by the core network at the tracking area level, a unit covering a wider area than the cell. That is, for a terminal in the RRC idle state, only the existence of the terminal is recognized, at the level of this wider area. To receive general mobile communication services such as voice or data, the terminal must switch to the RRC connected state.
  • when the user first turns on the UE, the UE first searches for an appropriate cell and then stays in the RRC idle state in that cell. Only when it is necessary to establish an RRC connection does the UE in the RRC idle state establish an RRC connection with the NG RAN through the RRC connection procedure, after which it transitions to the RRC_CONNECTED or RRC_INACTIVE state.
  • the cases in which a UE in the RRC idle state needs to establish an RRC connection vary; examples include the need for uplink data transmission due to a call attempt by the user, or the transmission of a response message to a paging message received from the NG RAN.
  • the RRC_IDLE state and RRC_INACTIVE state have the following characteristics:
  • UE-specific DRX (discontinuous reception) may be configured by the upper layers or by the RRC layer;
  • PLMN (public land mobile network) selection, the cell reselection procedure, and location registration are common to both the RRC_IDLE state and the RRC_INACTIVE state.
  • when the UE is turned on, a PLMN is selected by the NAS (Non-Access Stratum). For the selected PLMN, an associated RAT (Radio Access Technology) may be set.
  • the NAS should provide an equivalent PLMN list for use by the AS for cell selection and cell reselection, if possible.
  • the UE searches for a suitable cell of the selected PLMN, selects that cell to provide available services, and additionally tunes to its control channel. This choice is known as "camping on the cell".
  • the UE registers its presence in the tracking area of the selected cell by the NAS registration procedure, and the PLMN selected as a result of successful location registration becomes a registered PLMN.
  • if the UE finds a more suitable cell according to the cell reselection criteria, it reselects that cell and camps on it. If the new cell does not belong to at least one tracking area in which the UE is registered, location registration is performed. In the RRC_INACTIVE state, if the new cell does not belong to the configured RNA, an RNA update procedure is performed.
  • the UE must search for a PLMN with a high priority at regular time intervals, and search for a suitable cell when the NAS selects another PLMN.
  • a new PLMN is automatically selected (automatic mode), or an indication of which PLMN is available is given to the user, so that manual selection can be made (manual mode).
  • Registration is not performed by a UE capable of only services that do not require registration.
  • upon receiving a call for a registered UE, the PLMN knows (in most cases) the set of tracking areas (RRC_IDLE state) or the RNA (RRC_INACTIVE state) in which the UE is camped. A "paging" message can then be sent to the UE on the control channels of all cells in the corresponding set of areas, and the UE can receive and respond to the paging message.
  • in the UE, the AS must report the available PLMNs to the NAS, at the request of the NAS or autonomously.
  • a specific PLMN may be selected automatically or manually, based on a priority-ordered list of PLMN identifiers.
  • each PLMN in the PLMN ID list is identified by a 'PLMN ID'.
  • the UE may receive one or more 'PLMN IDs' in a given cell.
  • the result of the PLMN selection performed by the NAS is the identifier of the selected PLMN.
  • the UE must scan all RF channels in the NR bands, according to its capabilities, to find the available PLMNs. On each carrier, the UE must search for the strongest cell and read its system information to find out which PLMN(s) the cell belongs to. If the UE can read one or several PLMN identifiers in the strongest cell, each found PLMN that satisfies the following high quality criterion should be reported to the NAS as a high quality PLMN (but without an RSRP value).
  • the measured RSRP value should be -110 dBm or higher.
  • found PLMNs that do not satisfy the high quality criterion, but whose PLMN identifier the UE can read, are reported to the NAS together with their RSRP value.
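The reporting split above (high quality PLMNs reported without an RSRP value, the rest reported together with it) can be sketched as follows; the -110 dBm criterion is the one stated in the text:

```python
HIGH_QUALITY_RSRP_DBM = -110   # high quality criterion stated in the text

def classify_found_plmns(found):
    """Split found (plmn_id, rsrp_dbm) pairs into high-quality PLMN IDs
    (reported without an RSRP value) and the rest (reported with RSRP)."""
    high, other = [], []
    for plmn_id, rsrp in found:
        if rsrp >= HIGH_QUALITY_RSRP_DBM:
            high.append(plmn_id)
        else:
            other.append((plmn_id, rsrp))
    return high, other
```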
  • the quality measure reported to the NAS by the UE should be the same for each PLMN found in one cell.
  • the PLMN search may be stopped at the request of the NAS.
  • the UE may optimize the PLMN search using stored information, for example, carrier frequency and, optionally, information about cell parameters from previously received measurement control information elements.
  • when the UE has selected a PLMN, a cell selection procedure must be performed to select a suitable cell of that PLMN to camp on.
  • the UE must perform measurement for cell selection and reselection purposes.
  • the NAS may control the RATs for which cell selection is to be performed, for example by indicating the RATs associated with the selected PLMN, and by maintaining the list of forbidden registration area(s) and the equivalent PLMN list.
  • the UE must select an appropriate cell based on the RRC_IDLE state measurement and cell selection criteria.
  • stored information for multiple RATs may be available at the UE.
  • when camped on a cell, the UE should regularly search for a better cell according to the cell reselection criteria. When a better cell is found, that cell is selected. A change of cell may imply a change of RAT. The NAS is notified when system information relevant to the NAS changes as a result of cell selection and reselection.
  • for normal service, the UE must camp on a suitable cell and tune to that cell's control channel(s) so that the UE can do the following:
  • the amount of cell measurement depends on the UE implementation.
  • based on the SS/PBCH block, the cell measurement quantity is derived among the beams corresponding to the same cell as follows:
  • the cell measurement quantity is derived as the linear average of the power values of up to a maximum number of beam measurement quantities that exceed a threshold value.
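A sketch of the beam consolidation just described: average, in the linear power domain, up to a configured number of the strongest beam values above the threshold. The fallback to the single best beam when no beam exceeds the threshold is an assumption, not stated in the text:

```python
import math

def cell_measurement_quantity(beam_rsrp_dbm, abs_threshold_dbm, max_beams):
    """Cell quantity derived from per-beam measurements of the same cell."""
    above = sorted((r for r in beam_rsrp_dbm if r > abs_threshold_dbm),
                   reverse=True)[:max_beams]
    if not above:                      # assumed fallback: best single beam
        return max(beam_rsrp_dbm)
    linear = [10 ** (r / 10) for r in above]          # dBm -> mW
    return 10 * math.log10(sum(linear) / len(linear))  # average, back to dBm
```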
  • the UE must scan all RF channels in the NR band according to its ability to find an appropriate cell.
  • the UE needs to search for the strongest cell.
  • this procedure requires stored information on carrier frequencies from previously received measurement control information elements or previously detected cells, and optionally also information on cell parameters.
  • when the UE finds an appropriate cell, the UE must select that cell.
  • the first mechanism uses cell status indications and special reservations to control the cell selection and reselection procedures.
  • a second mechanism, called unified access control, prevents the selected access category or access identity from sending the initial access message, for load control reasons.
  • a UE assigned to an access identity in the range of 12 to 14 must operate as if the cell state were "barred" when the cell is "reserved for operator use" for the registered PLMN or the selected PLMN.
  • the UE cannot select/reselect this cell, not even for an emergency call.
  • -UE can exclude barred cells as cell selection/reselection candidates for up to 300 seconds.
  • the UE may select another cell at the same frequency.
  • the UE may select another cell at the same frequency when the reselection criterion is satisfied.
  • -UE should exclude barred cells as cell selection/reselection candidates for 300 seconds.
  • the UE shall not reselect a cell at the same frequency as the barred cell.
  • the UE must exclude the barred cell, and cells on the same frequency, as cell selection/reselection candidates for 300 seconds.
  • Cell selection of other cells may also involve changing the RAT.
  • Information on cell access restrictions related to access categories and IDs is broadcast as system information.
  • the UE should ignore cell access restrictions related to an access category and an identifier for cell reselection. Changes in the indicated access restrictions should not trigger cell reselection by the UE.
  • the UE should consider the NAS initiated access attempt and cell access restrictions related to the access category and identifier for the RNAU.
  • in the UE, the AS must report the tracking area information to the NAS.
  • when the UE reads one or more PLMN identifiers in the current cell, the UE must report the discovered PLMN identifiers, together with the associated tracking area information, to the NAS.
  • the UE performs a RAN-based notification area update (RNAU) periodically, or when the UE selects a cell that does not belong to the configured RNA.
  • the principle of PLMN selection in NR is based on the 3GPP PLMN selection principle.
  • Cell selection is necessary when switching from RM-DEREGISTERED to RM-REGISTERED, from CM-IDLE to CM-CONNECTED, and from CM-CONNECTED to CM-IDLE, and is based on the following principles.
  • the UE NAS layer identifies the selected PLMN and the equivalent PLMN;
  • -UE searches the NR frequency band and identifies the strongest cell for each carrier frequency.
  • Cell system information broadcast is read to identify the PLMN.
  • the UE can search for each carrier in turn (“initial cell selection”) or shorten the search by using stored information (“stored information cell selection”).
  • the UE tries to identify a suitable cell; if it cannot identify a suitable cell, it tries to identify an acceptable cell.
  • the UE camps on that cell and starts the cell reselection procedure.
  • a suitable cell is one whose PLMN is the selected PLMN, the registered PLMN, or an equivalent PLMN, and:
  • the cell is not barred or reserved, and the cell is not part of a tracking area in the "forbidden tracking areas for roaming" list.
  • an acceptable cell is a cell whose measured properties meet the cell selection criteria and which is not barred.
  • when transitioning from RRC_CONNECTED to RRC_IDLE, the UE camps on a cell on the frequency assigned by RRC in the state transition message, if any, or on any cell of the last cell/set of cells used in RRC_CONNECTED.
  • the UE should try to find a suitable cell in the manner described for the stored information or initial cell selection. If a suitable cell is not found in any frequency or RAT, the UE should try to find an acceptable cell.
  • cell quality is derived between beams corresponding to the same cell.
  • a UE in RRC_IDLE performs cell reselection.
  • the principle of the procedure is as follows.
  • -UE measures the attributes of serving and neighboring cells to enable the reselection process
  • cell reselection identifies the cell that the UE should camp on. It is based on cell reselection criteria, including measurements of the serving and neighboring cells:
  • intra-frequency reselection is based on the ranking of cells;
  • a neighboring cell list (NCL) is provided by the serving cell to handle specific cases for intra-frequency and inter-frequency neighboring cells;
  • a blacklist may be provided to prevent the UE from reselecting specific intra-frequency and inter-frequency neighboring cells.
  • cell quality is derived between beams corresponding to the same cell.
  • RRC_INACTIVE is a state in which the UE remains in the CM-CONNECTED state and can move within an area configured by the NG-RAN (the RAN-based notification area, RNA) without notifying the NG-RAN.
  • the last serving gNB node maintains the UE context and UE-related NG connection with the serving AMF and UPF.
  • when the last serving gNB receives DL data from the UPF or DL signaling from the AMF while the UE is in RRC_INACTIVE, it pages the UE in the cells corresponding to the RNA and, if the RNA includes cells of neighboring gNB(s), it can send XnAP RAN paging to the neighboring gNB(s).
  • the AMF provides RRC inactive assistance information to the NG-RAN node, to help the NG-RAN node decide whether the UE can be sent to RRC_INACTIVE.
  • the RRC inactive assistance information includes the registration area configured for the UE, the UE-specific DRX, the periodic registration update timer, an indication of whether the UE is configured in MICO (Mobile Initiated Connection Only) mode by the AMF, and the UE identity index value.
  • the UE registration area is considered by the NG-RAN node when configuring the RAN-based notification area.
  • the UE specific DRX and UE identity index values are used by the NG-RAN node for RAN paging.
  • the periodic registration update timer is considered to configure a periodic RAN notification area update timer in the NG-RAN node.
  • the NG-RAN node can configure the UE with a periodic RNA update timer value.
  • the receiving gNB triggers the XnAP Retrieve UE Context procedure to obtain the UE context from the last serving gNB, and may also trigger a data forwarding procedure, including tunnel information, for the potential recovery of data from the last serving gNB.
  • the receiving gNB becomes the serving gNB and further triggers the NGAP Path Switch Request procedure.
  • the serving gNB triggers release of the UE context in the last serving gNB by the XnAP UE context release procedure.
  • the gNB performs establishment of a new RRC connection instead of resuming the previous RRC connection.
  • a UE in the RRC_INACTIVE state should start the RNA update procedure when moving out of the configured RNA.
  • the receiving gNB may decide to send the UE back to the RRC_INACTIVE state, move the UE to the RRC_CONNECTED state, or send the UE to RRC_IDLE.
  • the UE of RRC_INACTIVE performs cell reselection.
  • the principle of the procedure is the same as in the RRC_IDLE state.
  • Type of signals / UE procedure:
    1st step: RRC signaling (MAC-CellGroupConfig) - Receive DRX configuration information
    2nd step: MAC CE ((Long) DRX command MAC CE) - Receive DRX command
    3rd step: - Monitor a PDCCH during the on-duration of a DRX cycle
  • the UE uses Discontinuous Reception (DRX) in the RRC_IDLE and RRC_INACTIVE states to reduce power consumption.
  • DRX Discontinuous Reception
  • When DRX is configured, the UE performs a DRX operation according to the DRX configuration information.
  • A UE operating in DRX repeatedly turns its reception operation on and off.
  • When DRX is configured, the UE attempts to receive the PDCCH, a downlink channel, only during a predetermined time interval and does not attempt to receive it for the remaining period.
  • the period during which the UE should attempt to receive the PDCCH is referred to as on-duration, and this on-duration is defined once per DRX cycle.
  • the UE can receive DRX configuration information from the gNB through RRC signaling and can start DRX operation upon reception of a (Long) DRX Command MAC CE.
  • DRX configuration information may be included in MAC-CellGroupConfig.
  • IE MAC-CellGroupConfig is used to configure MAC parameters for cell groups, including DRX.
  • Table 20 and Table 21 are examples of IE MAC-CellGroupConfig.
  • drx-onDurationTimer is the duration at the start of the DRX cycle; drx-SlotOffset is the slot delay before the start of the drx-onDurationTimer.
  • drx-StartOffset is a subframe where the DRX cycle starts.
  • drx-InactivityTimer is the duration after the PDCCH occasion in which a PDCCH indicates a new transmission.
  • drx-RetransmissionTimerDL (per DL HARQ process) is the maximum duration until a DL retransmission is received.
  • drx-RetransmissionTimerUL (per UL HARQ process) is the maximum duration until a grant for a UL retransmission is received.
  • drx-LongCycle is a Long DRX cycle.
  • drx-ShortCycle (optional) is a Short DRX cycle.
  • drx-ShortCycleTimer (optional) is the period during which the UE should follow the Short DRX Cycle.
  • the drx-HARQ-RTT-TimerDL (per DL HARQ process) is the minimum duration before DL allocation for HARQ retransmission is expected by the MAC entity.
  • drx-HARQ-RTT-TimerUL (per UL HARQ process) is the minimum duration before a grant for UL HARQ retransmission is expected by the MAC entity.
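The DRX parameters listed above can be grouped into a simple configuration object, sketched below for illustration; the field names mirror the IE fields of MAC-CellGroupConfig, while the example values are hypothetical and not taken from the specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DrxConfig:
    """Illustrative container for the DRX fields of MAC-CellGroupConfig.

    Field names mirror the IE fields described above; the units and the
    example values are hypothetical, not values from the specification.
    """
    drx_onDurationTimer: int        # duration at the start of the DRX cycle
    drx_SlotOffset: int             # slot delay before drx-onDurationTimer starts
    drx_StartOffset: int            # subframe where the DRX cycle starts
    drx_InactivityTimer: int        # duration after a PDCCH indicating a new transmission
    drx_RetransmissionTimerDL: int  # max duration until a DL retransmission is received
    drx_RetransmissionTimerUL: int  # max duration until a grant for UL retransmission
    drx_LongCycle: int              # Long DRX cycle length
    drx_ShortCycle: Optional[int] = None       # optional Short DRX cycle
    drx_ShortCycleTimer: Optional[int] = None  # how long to follow the Short cycle
    drx_HARQ_RTT_TimerDL: int = 0   # min duration before a DL HARQ retransmission is expected
    drx_HARQ_RTT_TimerUL: int = 0   # min duration before a UL HARQ retransmission grant is expected

# Hypothetical example configuration (values in slots/ms are illustrative).
cfg = DrxConfig(drx_onDurationTimer=10, drx_SlotOffset=0, drx_StartOffset=3,
                drx_InactivityTimer=20, drx_RetransmissionTimerDL=8,
                drx_RetransmissionTimerUL=8, drx_LongCycle=160,
                drx_ShortCycle=40, drx_ShortCycleTimer=4)
print(cfg.drx_LongCycle)  # 160
```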
  • A DRX Command MAC CE or Long DRX Command MAC CE is identified by a MAC PDU subheader with an LCID.
  • It has a fixed size of zero bits.
  • Table 5 shows an example of the LCID value for the DL-SCH.
  • the PDCCH monitoring activity of the UE is managed by DRX and BA.
  • When DRX is configured, the UE does not need to continuously monitor the PDCCH.
  • DRX has the following features.
  • on-duration: the duration during which the UE waits to receive a PDCCH after waking up. If the UE successfully decodes a PDCCH, it stays awake and starts the inactivity timer;
  • inactivity-timer: the duration during which the UE waits to successfully decode a PDCCH, starting from the last successful PDCCH decoding; if decoding fails, the UE may go back to sleep. The UE restarts the inactivity timer after a single successful decoding of a PDCCH, and only for a first transmission (i.e., not for retransmissions);
  • retransmission-timer: the duration until a retransmission can be expected.
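The on-duration and inactivity-timer behavior above can be sketched as a minimal slot-by-slot simulation. This is an illustrative model only, not the 3GPP MAC procedure; the function name and timer semantics are simplifications.

```python
def drx_monitoring_slots(cycle, on_duration, inactivity_timer,
                         pdcch_new_tx_slots, total_slots):
    """Return the set of slots in which the UE monitors the PDCCH.

    Simplified model: the UE monitors during the on-duration at the start
    of each DRX cycle, and a PDCCH indicating a NEW transmission (not a
    retransmission) restarts the inactivity timer, extending active time.
    """
    monitored = set()
    inactivity_left = 0
    for slot in range(total_slots):
        active = (slot % cycle) < on_duration or inactivity_left > 0
        if inactivity_left > 0:
            inactivity_left -= 1
        if active:
            monitored.add(slot)
            if slot in pdcch_new_tx_slots:
                inactivity_left = inactivity_timer  # restart on new transmission
    return monitored

# DRX cycle of 10 slots, on-duration 2: a new transmission in slot 1
# keeps the UE awake for 3 extra slots, then it sleeps until the next cycle.
slots = drx_monitoring_slots(cycle=10, on_duration=2, inactivity_timer=3,
                             pdcch_new_tx_slots={1}, total_slots=10)
print(sorted(slots))  # [0, 1, 2, 3, 4]
```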
  • the MAC entity used below may be represented by the UE or the MAC entity of the UE.
  • The MAC entity may be configured by RRC with a DRX function that controls the UE's PDCCH monitoring activity for the MAC entity's C-RNTI, CS-RNTI, TPC-PUCCH-RNTI, TPC-PUSCH-RNTI, and TPC-SRS-RNTI. When using DRX operation, the MAC entity must still monitor the PDCCH. When in RRC_CONNECTED, if DRX is configured, the MAC entity may monitor the PDCCH discontinuously using the DRX operation; otherwise, the MAC entity must monitor the PDCCH continuously.
  • RRC controls DRX operation by configuring parameters in Tables 3 and 4 (DRX configuration information).
  • a PDCCH indicating a new transmission addressed to the C-RNTI of the MAC entity is not received.
  • When DRX is configured, the MAC entity must perform the operations shown in the following table.
  • the MAC entity transmits HARQ feedback and Type-1-triggered SRS when they are expected.
  • the MAC entity need not monitor the PDCCH if it is not a complete PDCCH occasion (e.g., if the active time starts or ends in the middle of a PDCCH occasion).
  • the UE may use Discontinuous Reception (DRX) in the RRC_IDLE and RRC_INACTIVE states to reduce power consumption.
  • the UE monitors one paging occasion (PO) per DRX cycle, and one PO may consist of a number of time slots (eg, subframes or OFDM symbols) in which paging DCI can be transmitted.
  • the length of one PO is one period of beam sweeping, and the UE can assume that the same paging message is repeated in all beams of the sweeping pattern.
  • the paging message is the same for both RAN start paging and CN start paging.
  • One paging frame (PF) is one radio frame that may include one or a plurality of paging occasions.
  • Upon receiving RAN paging, the UE initiates an RRC connection resumption procedure.
  • When the UE receives CN-initiated paging in the RRC_INACTIVE state, it moves to RRC_IDLE and notifies the NAS.
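Monitoring one PO per DRX cycle, as described above, implies a rule that maps the UE identity to a specific paging frame. The sketch below is an illustrative variant of such a rule (the exact formula is specified in TS 38.304; the parameter names here are assumptions for illustration).

```python
def is_paging_frame(sfn, ue_id, T, N, pf_offset=0):
    """Return True if radio frame `sfn` is a paging frame (PF) for this UE.

    T: UE DRX cycle in radio frames; N: number of paging frames per cycle.
    Illustrative form of the PF rule: frames satisfying
        (SFN + pf_offset) mod T == (T // N) * (ue_id mod N)
    so each UE monitors one PO per DRX cycle, spread by its identity index.
    """
    return (sfn + pf_offset) % T == (T // N) * (ue_id % N)

# UE with identity index 7, DRX cycle of 32 frames, 4 PFs per cycle:
pfs = [sfn for sfn in range(64) if is_paging_frame(sfn, ue_id=7, T=32, N=4)]
print(pfs)  # [24, 56] -- exactly one PF per 32-frame DRX cycle
```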
  • a front-loaded DMRS structure was introduced in which the DMRS is placed in the leading symbols to meet low-latency requirements (TR 38.211). Furthermore, an additional DMRS structure was introduced in which the same pattern as the front-loaded DMRS is additionally located along the time axis in order to estimate the channel of high-speed UEs.
  • FIG. 32 shows the structure of a DMRS when one front-loaded DMRS and one additional DMRS enter within one subframe (refer to RAN1#90 Chairman's Notes).
  • an area painted in yellow is an area that does not contain data, and blue indicates a location of a DMRS within the data area.
  • the left side shows 3 control symbols and the right side shows 2 control symbols.
  • the number and location of DMRSs that can be added in a symbol according to the data symbol length (PUSCH/PDSCH) can be found in Tables 23 to 24 below.
  • the numbers in the table indicate the position of the OFDM symbol within a subframe.
  • the PSCCH/PSSCH subframe structure (normal CP environment) of the existing LTE sidelink uses symbol 0 and symbol 13 as sections for AGC and the gap, respectively, as shown in Table 33, and one data symbol is placed before and after each RS.
  • when a subframe structure (e.g., a DMRS pattern) different from the existing one is used, the AGC tuning value may change significantly and reception quality may degrade, so a method for solving this is also proposed.
  • the DMRS pattern described below mainly describes the front-loaded DMRS structure, but may be extended to other types of DMRS patterns.
  • the UE may receive a DMRS related to a PSSCH from a resource pool and may receive a PSSCH.
  • a first type DMRS may be transmitted in at least two or more predetermined resource regions in the resource pool.
  • the first type DMRS may be a DMRS transmission in a slot before a preset slot.
  • the first type DMRS may be a front-loaded DMRS
  • the preset slot may be a third slot in the case of FIG. 33.
  • The at least two predetermined resource regions may be contiguous within the resource pool.
  • the predetermined resource region may include at least one subchannel. Also, two or more predetermined resource regions may not overlap each other in the frequency domain. That is, to prevent the front-loaded DMRS structure from being continuously concentrated on a specific subchannel, the sections in which the front-loaded DMRS is used can be spread evenly over time across the subchannels available to the UE.
  • FIG. 33 illustrates an example of sequentially allocating a region in which a front-loaded DMRS is used for each TTI in each subchannel (which may be one block in the frequency axis).
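The sequential allocation of FIG. 33 can be sketched as a simple round-robin rule; the function below is a hypothetical illustration of the idea, not a specified mapping.

```python
def frontloaded_subchannel(tti, num_subchannels):
    """Subchannel in which the front-loaded DMRS structure is allowed in a
    given TTI, rotating per TTI so that no single subchannel is always
    occupied by the front-loaded region (cf. FIG. 33).
    """
    return tti % num_subchannels

# With 3 subchannels and 7 TTIs, the front-loaded region rotates evenly:
print([frontloaded_subchannel(t, 3) for t in range(7)])  # [0, 1, 2, 0, 1, 2, 0]
```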
  • the predetermined resource region may be composed of at least one Transmit Time Interval (TTI).
  • In FIG. 33, the seven large blocks along the time axis may be TTIs.
  • Short TTI may be applied to a predetermined resource region.
  • the UE may perform decoding based on short TTI in a predetermined resource region.
  • the front-loaded DMRS structure can be used for low-latency transmission in a predetermined subchannel for a predetermined time within the resource pool of the terminal.
  • the ratio of the subchannel corresponding to the predetermined resource region to all subchannels for the terminal may be determined based on the priority of the terminal.
  • a certain ratio (e.g., A/3) of the subchannels allows use of the front-loaded DMRS structure.
  • This ratio can be determined through priority assignment between terminals. For example, for a terminal requiring relatively short latency or a terminal with high-priority PPPP, the ratio may be set higher than that of neighboring terminals (e.g., A/2).
  • the ratio of the subchannel corresponding to the predetermined resource region to the total subchannel for the terminal may be set to be larger as the ProSe Per-Packet Priority (PPPP) value is smaller.
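A PPPP-dependent share of subchannels, as described above, can be sketched as follows. The linear mapping and the `max_pppp` parameter are hypothetical examples; only the direction of the rule (smaller PPPP value ⇒ larger share) comes from the text.

```python
def frontloaded_ratio(pppp, total_subchannels, max_pppp=8):
    """Number of subchannels in which the UE may use the front-loaded DMRS
    structure; a smaller PPPP value (higher priority) yields a larger share.
    The linear mapping used here is a hypothetical example, not a rule
    from the specification.
    """
    assert 1 <= pppp <= max_pppp
    share = (max_pppp - pppp + 1) / max_pppp  # PPPP 1 -> 1.0, PPPP 8 -> 0.125
    return max(1, round(share * total_subchannels))

# Higher-priority traffic (PPPP=1) is granted more subchannels than PPPP=8:
print(frontloaded_ratio(1, 12), frontloaded_ratio(8, 12))  # 12 2
```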
  • the predetermined resource region may be indicated by the network through physical layer or higher layer signaling, or the predetermined resource region may be preset.
  • a time resource region that can be used by a terminal having a specific DMRS pattern (eg, front-loaded DMRS) in the resource pool may be predetermined or signaled by a network. From the viewpoint of the transmitting terminal, when the terminal transmits a signal with a front-loaded DMRS pattern, a resource capable of using the front-loaded DMRS pattern in the indicated time resource region may be determined and used for transmission. In this case, it is not possible for a terminal having an existing DMRS pattern to transmit a signal in a time resource region in which the front-loaded DMRS pattern can be used.
  • the UE having the existing DMRS pattern can use the resources of the front-loaded DMRS pattern, but the UE having the front-loaded DMRS pattern does not indiscriminately use the transmission resources using the existing DMRS pattern.
  • a terminal having a front-loaded DMRS pattern can use transmission resources using an existing DMRS pattern with limited/specific conditions.
  • the short TTI may be decoded only in the corresponding region by limiting the region in which the short TTI may be transmitted in the resource pool.
  • 1) the network configures resources in which a short TTI may be transmitted in the frequency domain and/or time domain, and 2) the corresponding terminal may attempt short-TTI decoding only on those resources. In this case, the terminal need not attempt decoding for different TTI lengths over all resource regions. In addition, the latency requirement is easier to satisfy because some resources are allowed to use the short TTI.
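The restriction above amounts to a membership check on the configured resources before attempting short-TTI decoding; the sketch below illustrates this with a hypothetical (subchannel, time-unit) representation of the configured region.

```python
def should_attempt_short_tti_decode(subchannel, time_unit, short_tti_region):
    """Attempt short-TTI decoding only on resources the network configured
    for short-TTI transmission; `short_tti_region` is a hypothetical set of
    (subchannel, time-unit) pairs signaled by the network or preconfigured.
    """
    return (subchannel, time_unit) in short_tti_region

# Network configures short TTI only on subchannel 2 during time units 0-4:
region = {(2, t) for t in range(5)}
print(should_attempt_short_tti_decode(2, 3, region),
      should_attempt_short_tti_decode(0, 3, region))  # True False
```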
  • the DMRS pattern used for each subchannel may differ within one subframe. Therefore, when reception is attempted by a terminal using the existing DMRS, the AGC tuning value changes significantly, resulting in degraded reception quality. In this case, power may need to be normalized according to the DMRS density.
  • reception of messages from other terminals that do not have a front-loaded DMRS structure may be omitted in the TTI region occupied by a front-loaded DMRS from an arbitrary transmitting terminal; alternatively, such a message may be punctured/rate-matched in consideration of the resource region overlapping with other channels, or transmission may be omitted only for the symbols that overlap another message in the time domain.
  • the DMRS pattern may be used differently for each resource pool.
  • the network may signal the DMRS pattern to be used in a corresponding resource pool with a physical layer or higher layer signal, or it may be specified in advance.
  • a front-loaded DMRS structure can be used in a specific resource pool.
  • Steps S3501 and S3502 of FIG. 35 may be expressed as steps of acquiring (or checking, specifying) radio resources allocated for the sidelink (SL).
  • the radio resource may include at least one sub-channel and at least one time unit.
  • the subchannel may be composed of one or a plurality of consecutive resource blocks (RBs), or may be composed of a specific number of consecutive subcarriers.
  • the time unit may be a subframe, a Transmission Time Interval (TTI), a slot, an OFDM/OFDMA symbol, or an SC-FDM/SC-FDMA symbol.
  • the transmitting terminal may generate data transmitted through the radio resource.
  • Step S3503 may include a process of transferring data generated in an upper layer to a lower layer (eg, a physical layer).
  • the transmitting terminal may know the resource allocation information through step S3503.
  • In step S3504 of FIG. 35, the transmitting terminal specifies resource allocation information for delivering the information generated in step S3503.
  • In step S3505 of FIG. 35, the transmitting terminal transmits data to the receiving terminal through the sidelink (SL).
  • Resource allocation information used for transmission is specified/determined through step S3504.
  • Before steps S3501 to S3502 of FIG. 35, a process in which the base station and the terminal are connected is required.
  • the base station and the terminal may perform initial access (IA) and random access (RA) operations.
  • the terminal may perform a Discontinuous Reception (DRX) operation in the RRC_IDLE or RRC_INACTIVE state.
  • the inventions and/or embodiments in the embodiment(s) may each be regarded as one proposed method, but combinations of the inventions and/or embodiments may also be regarded as new proposed methods.
  • the invention is not limited to the embodiments presented in the embodiment(s) and is not limited to a specific system. For every parameter and/or operation, every combination of parameters and/or operations, and whether each parameter, operation, or combination is applied, the base station may (pre-)configure the terminal through higher layer signaling and/or physical layer signaling, or these may be defined in the system in advance.
  • each item of the embodiment(s) may be defined as one operation mode; one of these modes may be (pre-)configured for the UE through higher layer signaling and/or physical layer signaling, and operation may then proceed according to the corresponding operation mode.
  • the TTI (transmit time interval) or resource unit for signal transmission in the embodiment(s) may correspond to units of various lengths, such as a sub-slot, slot, or subframe, or to a basic transmission unit, and the terminal of the embodiment(s) may correspond to various types of devices, such as vehicles and pedestrian terminals.
  • matters related to the operation of the terminal and/or the base station and/or the RSU (road side unit) in the embodiment(s) are not limited to each device type, and may be applied to different types of devices.
  • matters described as the operation of the base station in the embodiment(s) may be applied to the operation of the terminal.
  • the contents applied in direct communication between terminals may also be used between a terminal and a base station (for example, on the uplink or downlink); in this case, the proposed method can be used for communication between a terminal and a special type of device such as a base station, a relay node, or a UE-type RSU, or for communication between special types of wireless devices.
  • the base station may be replaced with a relay node or a UE-type RSU.
  • the content is not limited to direct communication between terminals, and may be used in uplink or downlink, and in this case, a base station or a relay node may use the proposed method.
  • examples of the above-described proposed method may also be included as one of the implementation methods, and thus may be regarded as a kind of proposed method.
  • the above-described proposed schemes may be implemented independently, but may be implemented in the form of a combination (or merge) of some of the proposed schemes.
  • Rules may be defined such that information on whether to apply the proposed methods is conveyed through a predefined signal (e.g., a physical layer signal or a higher layer signal) from the base station to the terminal or from the transmitting terminal to the receiving terminal.
  • FIG. 36 illustrates a wireless communication device according to an embodiment.
  • the wireless communication system may include a first device 9010 and a second device 9020.
  • the first device 9010 may be a base station, a network node, a transmitting terminal, a receiving terminal, a wireless device, a wireless communication device, a vehicle, a vehicle equipped with an autonomous driving function, a connected car, a drone (Unmanned Aerial Vehicle, UAV), an AI (Artificial Intelligence) module, a robot, an Augmented Reality (AR) device, a Virtual Reality (VR) device, a Mixed Reality (MR) device, a hologram device, a public safety device, an MTC device, an IoT device, a medical device, a fintech device (or financial device), a security device, a climate/environment device, a device related to 5G services, or a device related to the fourth industrial revolution field.
  • the second device 9020 may likewise be a base station, a network node, a transmitting terminal, a receiving terminal, a wireless device, a wireless communication device, a vehicle, a vehicle equipped with an autonomous driving function, a connected car, a drone (Unmanned Aerial Vehicle, UAV), an AI (Artificial Intelligence) module, a robot, an Augmented Reality (AR) device, a Virtual Reality (VR) device, a Mixed Reality (MR) device, a hologram device, a public safety device, an MTC device, an IoT device, a medical device, a fintech device (or financial device), a security device, a climate/environment device, a device related to 5G services, or a device related to the fourth industrial revolution field.
  • the terminal may be a mobile phone, a smart phone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation system, a slate PC, a tablet PC, an ultrabook, or a wearable device (for example, a watch-type terminal (smartwatch), a glass-type terminal (smart glass), or a head-mounted display (HMD)).
  • the HMD may be a display device worn on the head.
  • HMD can be used to implement VR, AR or MR.
  • a drone may be an aircraft that flies by radio control signals without a human on board.
  • the VR device may include a device that implements an object or a background of a virtual world.
  • the AR device may include a device that implements an object or background of a virtual world by connecting it to an object or background of the real world.
  • the MR device may include a device that implements an object or background of a virtual world by merging it with an object or background of the real world.
  • the hologram device may include a device that implements a 360-degree stereoscopic image by recording and reproducing stereoscopic information, utilizing the interference of light generated when two laser beams meet, a phenomenon called holography.
  • the public safety device may include an image relay device or an image device wearable on a user's human body.
  • the MTC device and the IoT device may be devices that do not require direct human intervention or manipulation.
  • the MTC device and the IoT device may include a smart meter, a vending machine, a thermometer, a smart light bulb, a door lock, or various sensors.
  • the medical device may be a device used for the purpose of diagnosing, curing, alleviating, treating, or preventing a disease.
  • the medical device may be a device used for the purpose of diagnosing, treating, alleviating or correcting an injury or disorder.
  • a medical device may be a device used for the purpose of examining, replacing or modifying a structure or function.
  • the medical device may be a device used for the purpose of controlling pregnancy.
  • the medical device may include a device for treatment, a device for surgery, a device for (extra-corporeal) diagnosis, a hearing aid, or a device for a medical procedure.
  • the security device may be a device installed to prevent a risk that may occur and maintain safety.
  • the security device may be a camera, CCTV, recorder, or black box.
  • the fintech device may be a device capable of providing financial services such as mobile payment.
  • the fintech device may include a payment device or a point-of-sale (POS) terminal.
  • the climate/environment device may include a device that monitors or predicts the climate/environment.
  • the first device 9010 may include at least one or more processors such as the processor 9011, at least one or more memories such as the memory 9012, and at least one or more transceivers such as the transceiver 9013.
  • the processor 9011 may perform the functions, procedures, and/or methods described above.
  • the processor 9011 may perform one or more protocols.
  • the processor 9011 may perform one or more layers of an air interface protocol.
  • the memory 9012 is connected to the processor 9011 and may store various types of information and/or commands.
  • the transceiver 9013 may be connected to the processor 9011 and controlled to transmit and receive wireless signals.
  • the transceiver 9013 may be connected to one or more antennas 9014-1 to 9014-n and may be configured to transmit and receive, through the one or more antennas 9014-1 to 9014-n, the user data, control information, radio signals/channels, etc. mentioned in the methods and/or operational flowcharts herein.
  • the n antennas may be the number of physical antennas or the number of logical antenna ports.
  • the second device 9020 may include at least one processor such as the processor 9021, at least one memory device such as the memory 9022, and at least one transceiver such as the transceiver 9023.
  • the processor 9021 may perform the functions, procedures, and/or methods described above.
  • the processor 9021 may implement one or more protocols.
  • the processor 9021 may implement one or more layers of an air interface protocol.
  • the memory 9022 is connected to the processor 9021 and may store various types of information and/or commands.
  • the transceiver 9023 is connected to the processor 9021 and may be controlled to transmit and receive radio signals.
  • the transceiver 9023 may be connected to one or more antennas 9024-1 to 9024-n and may be configured to transmit and receive, through the one or more antennas 9024-1 to 9024-n, the user data, control information, radio signals/channels, etc. mentioned in the methods and/or operational flowcharts herein.
  • the memory 9012 and/or the memory 9022 may be connected inside or outside the processor 9011 and/or the processor 9021, respectively, or other processors through various technologies such as wired or wireless connection.
  • FIG. 37 shows a wireless communication device according to an embodiment.
  • FIG. 37 may be a diagram illustrating in more detail the first or second devices 9010 and 9020 of FIG. 36.
  • the wireless communication device in FIG. 37 is not limited to the terminal.
  • the wireless communication device may be any suitable mobile computer device configured to perform one or more implementations, such as a vehicle communication system or device, a wearable device, a portable computer, a smart phone, or the like.
  • the terminal includes at least one processor (e.g., a DSP or microprocessor) such as the processor 9110, a transceiver 9135, a power management module 9105, an antenna 9140, a battery 9155, a display 9115, a keypad 9120, a Global Positioning System (GPS) chip 9160, a sensor 9165, a memory 9130, an (optional) subscriber identification module (SIM) card 9125, a speaker 9145, a microphone 9150, and the like.
  • the terminal may include one or more antennas.
  • the processor 9110 may be configured to perform the above-described functions, procedures and/or methods. According to an implementation example, the processor 9110 may perform one or more protocols, such as layers of a radio interface protocol.
  • the memory 9130 may be connected to the processor 9110 and may store information related to the operation of the processor 9110.
  • the memory 9130 may be located inside or outside the processor 9110, and may be connected to other processors through various technologies such as wired or wireless connection.
  • a user can input various types of information (eg, command information such as a phone number) by pressing a button on the keypad 9120 or using various techniques such as voice activation using the microphone 9150.
  • the processor 9110 may receive and process user information and perform an appropriate function such as dialing a phone number.
  • the processor 9110 may receive and process GPS information from the GPS chip 9160 in order to perform a function related to the location of the terminal, such as vehicle navigation and map service.
  • the processor 9110 may display various types of information and data on the display 9115 for user's reference or convenience.
  • the transceiver 9135 is connected to the processor 9110 and may transmit and receive radio signals such as RF signals.
  • the processor 9110 may control the transceiver 9135 to initiate communication and transmit a radio signal including various types of information or data such as voice communication data.
  • the transceiver 9135 may include one receiver and one transmitter to send or receive wireless signals.
  • the antenna 9140 may facilitate transmission and reception of wireless signals.
  • the transceiver 9135 may forward and convert the signals to a baseband frequency for processing using the processor 9110.
  • the processed signals may be processed according to various technologies, such as converted into audible or readable information to be output through the speaker 9145.
  • the sensor 9165 may be connected to the processor 9110.
  • the sensor 9165 may include one or more sensing devices configured to detect various types of information including, but not limited to, speed, acceleration, light, vibration, proximity, position, image, and the like.
  • the processor 9110 may receive and process sensor information obtained from the sensor 9165, and may perform various types of functions such as collision avoidance and automatic driving.
  • various components may be further included in the terminal.
  • the camera may be connected to the processor 9110 and may be used for various services such as automatic driving and vehicle safety service.
  • FIG. 37 is only an example of a terminal, and implementation is not limited thereto.
  • some components (e.g., the keypad 9120, GPS chip 9160, sensor 9165, speaker 9145, and/or microphone 9150) may be omitted in some implementations.
  • FIG. 38 illustrates a transceiver of a wireless communication device according to an embodiment.
  • FIG. 38 may show an example of a transceiver that may be implemented in a frequency division duplex (FDD) system.
  • At least one processor may process data to be transmitted and may transmit a signal such as an analog output signal to the transmitter 9210.
  • the analog output signal at the transmitter 9210 may be filtered by a low pass filter (LPF) 9211 (e.g., to remove noise due to the preceding digital-to-analog conversion (DAC)), upconverted from baseband to RF by an upconverter (e.g., a mixer) 9212, and amplified by an amplifier such as a variable gain amplifier (VGA) 9213.
  • the amplified signal may be filtered by a filter 9214, amplified by a power amplifier (PA) 9215, routed through the duplexer 9250/antenna switch 9260, and transmitted via the antenna 9270.
  • the antenna 9270 may receive signals in a wireless environment, and the received signals may be routed at the antenna switch 9260/duplexer 9250 and sent to the receiver 9220.
  • the signal received by the receiver 9220 may be amplified by an amplifier such as a low noise amplifier (LNA) 9223, filtered by a band pass filter 9224, and downconverted from RF to baseband by a downconverter (e.g., a mixer) 9225.
  • the downconverted signal may be filtered by a low pass filter (LPF) 9226, amplified by an amplifier such as VGA 9272 to obtain an analog input signal, and the analog input signal may be processed by one or more processors.
  • the local oscillator (LO) generator 9240 may generate transmit and receive LO signals and provide them to the upconverter 9212 and the downconverter 9225, respectively.
  • the phase locked loop (PLL) 9230 may receive control information from the processor and send control signals to the LO generator 9240 so that the LO signals are generated at appropriate frequencies.
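The upconversion/downconversion path described above can be illustrated with a toy complex-baseband model. This is purely a numerical sketch (no filters, amplifier gains, or noise); the function names are assumptions for illustration.

```python
import cmath

def upconvert(baseband, f_rf, fs):
    """Mix a complex baseband sample stream up to carrier f_rf (sample rate fs),
    as the upconverter (mixer) stage does in the transmit chain."""
    return [s * cmath.exp(2j * cmath.pi * f_rf * n / fs)
            for n, s in enumerate(baseband)]

def downconvert(rf, f_lo, fs):
    """Mix back down with the receive LO; with f_lo == f_rf the original
    baseband is recovered (the LPF stage is omitted in this toy model)."""
    return [s * cmath.exp(-2j * cmath.pi * f_lo * n / fs)
            for n, s in enumerate(rf)]

baseband = [1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j]
recovered = downconvert(upconvert(baseband, f_rf=2.0e3, fs=16.0e3), 2.0e3, 16.0e3)
print(all(abs(a - b) < 1e-9 for a, b in zip(baseband, recovered)))  # True
```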
  • FIG. 39 shows a transceiver of a wireless communication device according to an embodiment.
  • FIG. 39 may show an example of a transceiver that may be implemented in a time division duplex (TDD) system.
  • the transmitter 9310 and the receiver 9320 of the transceiver of the TDD system may have one or more similar characteristics to the transmitter and receiver of the transceiver of the FDD system.
  • the structure of the transceiver of the TDD system will be described.
  • the signal amplified by the transmitter's power amplifier (PA) 9315 may be routed through a band select switch 9350, a band pass filter (BPF) 9360, and antenna switch(es) 9370, and transmitted via the antenna 9380.
  • the antenna 9380 receives signals from the wireless environment; the received signals may be routed through the antenna switch(es) 9370, the band pass filter (BPF) 9360, and the band select switch 9350, and provided to the receiver 9320.
  • the operation of the wireless device related to the sidelink described in FIG. 40 is merely an example, and sidelink operations using various techniques may be performed in the wireless device.
  • the sidelink may be a terminal-to-terminal interface for sidelink communication and/or sidelink discovery.
  • the sidelink may correspond to the PC5 interface.
  • the sidelink operation may be transmission and reception of information between terminals.
  • Sidelinks can carry various types of information.
  • the wireless device may acquire sidelink-related information.
  • the information related to the sidelink may be one or more resource configurations.
  • Information related to the sidelink can be obtained from other wireless devices or network nodes.
  • the wireless device may decode the information related to the sidelink.
  • the wireless device may perform one or more sidelink operations based on the sidelink-related information.
  • the sidelink operation(s) performed by the wireless device may include one or more operations described herein.
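The wireless-device flow above (acquire sidelink-related information, decode it, perform one or more sidelink operations) can be sketched as follows. All function names, the dictionary layout, and the "one operation per resource pool" rule are illustrative assumptions, not part of any specification.

```python
def acquire_sidelink_info(source):
    """Obtain sidelink-related info from another wireless device or a network node."""
    return source.get("sidelink_info")

def decode(info):
    """Placeholder decode step; real decoding is PHY/RRC-specific."""
    return dict(info)

def perform_sidelink_operations(cfg):
    """Perform one sidelink operation per configured resource pool (illustrative)."""
    return [f"tx on pool {pool}" for pool in cfg.get("resource_pools", [])]

network_node = {"sidelink_info": {"resource_pools": [0, 1]}}
cfg = decode(acquire_sidelink_info(network_node))
ops = perform_sidelink_operations(cfg)
print(ops)  # ['tx on pool 0', 'tx on pool 1']
```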
  • FIG. 41 illustrates an operation of a network node related to a sidelink according to an embodiment.
  • the operation of the network node related to the sidelink described in FIG. 41 is only an example, and sidelink operations using various techniques may be performed in the network node.
  • the network node may receive information on the sidelink from the wireless device.
  • the information on the sidelink may be sidelink UE information, used by the wireless device to report sidelink-related information to the network node.
  • the network node may determine whether to transmit one or more commands related to the sidelink based on the received information.
  • the network node may transmit the command(s) related to the sidelink to the wireless device.
  • the wireless device may perform one or more sidelink operation(s) based on the received command.
  • Network nodes can be replaced by wireless devices or terminals.
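The network-node flow above (receive sidelink UE information, decide whether to issue sidelink command(s), transmit them) can be sketched as below. The decision criterion and command format are invented for illustration.

```python
def handle_sidelink_ue_information(ue_info):
    """Return the sidelink command(s) to send, or an empty list if none."""
    commands = []
    if ue_info.get("requests_resources"):   # illustrative decision criterion
        commands.append({"cmd": "sl-grant", "pools": [0]})
    return commands

cmds = handle_sidelink_ue_information({"requests_resources": True})
print(len(cmds))  # 1
```

The wireless device would then perform its sidelink operation(s) based on any received command, as described above.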
  • a wireless device 9610 may include a communication interface 9611 for communicating with one or more other wireless devices, network nodes, and/or other elements in the network.
  • the communication interface 9611 may include one or more transmitters, one or more receivers, and/or one or more communication interfaces.
  • the wireless device 9610 may include a processing circuit 9612.
  • the processing circuit 9612 may include one or more processors such as the processor 9613 and one or more memories such as the memory 9614.
  • the processing circuit 9612 may be configured to control any of the methods and/or processes described herein and/or, for example, to cause the wireless device 9610 to perform such a method and/or process.
  • the processor 9613 may correspond to one or more processors for performing wireless device functions described herein.
  • the wireless device 9610 may include a memory 9614 configured to store data, program software code, and/or other information described herein.
  • the memory 9614 may store software code 9615 including instructions that, when executed by one or more processors such as the processor 9613, cause the processor 9613 to perform some or all of the processes according to the present invention described above.
  • one or more processors such as the processor 9613, which control one or more transceivers such as the transceiver 2223 to transmit and receive information, may perform one or more processes related to transmission and reception of information.
  • the network node 9620 may include a communication interface 9621 for communicating with one or more other network nodes, wireless devices, and/or other elements on the network.
  • the communication interface 9621 may include one or more transmitters, one or more receivers, and/or one or more communication interfaces.
  • the network node 9620 may include a processing circuit 9622.
  • the processing circuit may include a processor 9623 and a memory 9624.
  • the memory 9624 may be configured to store software code 9625 including instructions that, when executed by one or more processors such as the processor 9623, cause the processor 9623 to perform some or all of the processes in accordance with the present invention.
  • one or more processors that control one or more transceivers, such as the transceiver 2213, to transmit and receive information may perform one or more processes related to transmission and reception of information.
  • each structural element or feature described above may be considered optional.
  • Each structural element or feature may be implemented without being combined with other structural elements or features.
  • some structural elements and/or features may be combined with each other to constitute implementations.
  • the order of operations described in the implementation can be changed.
  • Some structural elements or features of one implementation may be included in other implementations, or may be replaced with structural elements or features corresponding to other implementations.
  • Implementations in the present invention may be made by various techniques, for example hardware, firmware, software, or combinations thereof.
  • a method according to an implementation may be implemented by one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), one or more Digital Signal Processing Devices (DSPDs), one or more Programmable Logic Devices (PLDs), one or more Field Programmable Gate Arrays (FPGAs), one or more processors, one or more controllers, one or more microcontrollers, one or more microprocessors, and the like.
  • firmware or software implementations may be implemented in the form of modules, procedures, functions, and the like.
  • the software code can be stored in memory and executed by a processor.
  • the memory may be located inside or outside the processor, and may transmit and receive data from the processor in various ways.
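The module/memory/processor arrangement above can be sketched as follows. The `Memory` and `Processor` classes and the stored `csi_report` function are illustrative stand-ins, showing only that software code held in a memory is retrieved and executed by a processor.

```python
class Memory:
    """Holds named software modules (functions) as the stored 'code'."""
    def __init__(self):
        self.code = {}
    def store(self, name, fn):
        self.code[name] = fn
    def load(self, name):
        return self.code[name]

class Processor:
    """Executes code loaded from a memory; the memory may be internal or external."""
    def __init__(self, memory):
        self.memory = memory
    def execute(self, name, *args):
        return self.memory.load(name)(*args)

mem = Memory()
mem.store("csi_report", lambda rsrp: {"rsrp": rsrp})
cpu = Processor(mem)
print(cpu.execute("csi_report", -90))  # {'rsrp': -90}
```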
  • Embodiments as described above can be applied to various mobile communication systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Quality & Reliability (AREA)
  • Traffic Control Systems (AREA)

Abstract

An embodiment of the present invention relates to a method for receiving a signal by a sidelink terminal in a wireless communication system, the method comprising the steps of: receiving a DMRS related to a PSSCH in a resource pool; and receiving the PSSCH, wherein a first-type DMRS is transmitted in at least two predetermined resource regions in the resource pool, and, for the first-type DMRS, the DMRS is transmitted in a slot preceding a predetermined slot.
PCT/KR2020/002203 2019-02-15 2020-02-17 Method and device for receiving a signal by a sidelink terminal in a wireless communication system WO2020167082A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20190018251 2019-02-15
KR10-2019-0018251 2019-02-15

Publications (1)

Publication Number Publication Date
WO2020167082A1 true WO2020167082A1 (fr) 2020-08-20

Family

ID=72044701

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/002203 WO2020167082A1 (fr) 2019-02-15 2020-02-17 Method and device for receiving a signal by a sidelink terminal in a wireless communication system

Country Status (1)

Country Link
WO (1) WO2020167082A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190052420A1 (en) * 2017-08-11 2019-02-14 Qualcomm Incorporated Methods and apparatus related to demodulation reference signal design and related signaling

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190052420A1 (en) * 2017-08-11 2019-02-14 Qualcomm Incorporated Methods and apparatus related to demodulation reference signal design and related signaling

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LG ELECTRONICS: "Discussion on physical layer structure for NR V2X", R1-1901335. 3GPP TSG RAN WG1 AD-HOC MEETING 1901, 21 January 2019 (2019-01-21), Taipei, Taiwan, XP051601273 *
MEDIATEK INC.: "On sidelink resource allocation mechanism", R1-1900199. 3GPP TSG RAN WG1 AD-HOC MEETING 1901, 12 January 2019 (2019-01-12), Taipei, Taiwan, XP051575819 *
NOKIA ET AL.: "Discussions on DMRS for NR side link", R1-1901 159. 3GPP TSG RAN WG1 AD-HOC MEETING 1901, 12 January 2019 (2019-01-12), Taipei, Taiwan, XP051576691 *
PANASONIC: "Discussion on physical layer structures of NR sidelink", R1-1813014. 3GPP TSG RAN WG1 MEETING #95, 2 November 2018 (2018-11-02), Spokane, USA, XP051479277 *

Similar Documents

Publication Publication Date Title
WO2020145785A1 (fr) 2020-07-16 Method and apparatus for a sidelink terminal to transmit a signal in a wireless communication system
WO2020096435A1 (fr) 2020-05-14 Method and apparatus for transmitting a feedback signal by means of a sidelink terminal in a wireless communication system
WO2020171669A1 (fr) 2020-08-27 Method and apparatus for a sidelink terminal to transmit and receive a signal related to a channel state report in a wireless communication system
WO2021002723A1 (fr) 2021-01-07 Method for operating a user equipment related to sidelink DRX in a wireless communication system
WO2019240548A1 (fr) 2019-12-19 Method and apparatus for performing sidelink communication by a UE in NR V2X
WO2020091346A1 (fr) 2020-05-07 Method and device for transmitting a PSSCH by a terminal in a wireless communication system
WO2020159297A1 (fr) 2020-08-06 Method and apparatus for transmitting a signal by means of a sidelink terminal in a wireless communication system
WO2019240544A1 (fr) 2019-12-19 Method and apparatus for performing sidelink communication by a UE in NR V2X
WO2020218636A1 (fr) 2020-10-29 Autonomous vehicle, and system and method for providing a service using same
WO2020032724A1 (fr) 2020-02-13 Method for receiving a downlink signal by a terminal in a wireless communication system, and terminal using said method
WO2020032699A1 (fr) 2020-02-13 Method for transmitting a physical uplink shared channel by a terminal in an unlicensed band, and device using said method
WO2020022845A1 (fr) 2020-01-30 Method and apparatus for transmitting a signal by an uplink terminal in a wireless communication system
WO2019240550A1 (fr) 2019-12-19 Method and apparatus for reporting a cast type by a UE in NR V2X
WO2020032690A1 (fr) 2020-02-13 Method by which a terminal transmits uplink control information in an unlicensed band, and apparatus using the method
WO2020091565A1 (fr) 2020-05-07 Method for transmitting an uplink signal for a terminal in an unlicensed band, and apparatus using said method
WO2020032727A1 (fr) 2020-02-13 Method for transmitting a RACH by a terminal in a wireless communication system, and terminal using said method
WO2020032697A1 (fr) 2020-02-13 Method for uplink transmission in an unlicensed band in a wireless communication system, and corresponding terminal
WO2020032678A1 (fr) 2020-02-13 Method by which a terminal transmits data in an unlicensed band, and apparatus using the method
WO2020032725A1 (fr) 2020-02-13 Method for transmitting a synchronization signal block performed by a communication device in a wireless communication system, and communication device using the method
WO2020027572A1 (fr) 2020-02-06 Method and device for transmitting a synchronization signal by means of a sidelink terminal in a wireless communication system
WO2019226026A1 (fr) 2019-11-28 Method and apparatus for transmitting a sidelink signal in a wireless communication system
WO2020060361A1 (fr) 2020-03-26 Method for a terminal to access a channel in an unlicensed band, and device using said method
WO2020091566A1 (fr) 2020-05-07 Method by which a terminal transmits a signal in an unlicensed band, and apparatus using the method
WO2021100938A1 (fr) 2021-05-27 Method for transmitting a signal between a vehicle, a terminal, and a network in a wireless communication system, and vehicle, terminal, and network therefor
WO2021100935A1 (fr) 2021-05-27 Method for a vulnerable road user terminal to transmit a signal in a wireless communication system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20755969

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20755969

Country of ref document: EP

Kind code of ref document: A1