WO2023022251A1 - Method and apparatus for transmitting a signal in a wireless communication system


Info

Publication number
WO2023022251A1
WO2023022251A1 (PCT/KR2021/010969)
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
learning
terminal
isp
base station
Prior art date
Application number
PCT/KR2021/010969
Other languages
English (en)
Korean (ko)
Inventor
박재용
이명희
오재기
정재훈
하업성
김성진
Original Assignee
엘지전자 주식회사
Priority date
Filing date
Publication date
Application filed by 엘지전자 주식회사
Priority to KR1020237043173A (published as KR20240047337A)
Priority to PCT/KR2021/010969 (published as WO2023022251A1)
Publication of WO2023022251A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • the following description relates to a wireless communication system, and relates to a signal transmission method related to an artificial intelligence-based special purpose virtual camera through federated learning in a wireless communication system.
  • a wireless access system is widely deployed to provide various types of communication services such as voice and data.
  • a wireless access system is a multiple access system capable of supporting communication with multiple users by sharing available system resources (bandwidth, transmission power, etc.).
  • Examples of the multiple access system include a code division multiple access (CDMA) system, a frequency division multiple access (FDMA) system, a time division multiple access (TDMA) system, an orthogonal frequency division multiple access (OFDMA) system, and a single carrier frequency division multiple access (SC-FDMA) system.
  • a communication system has been proposed that considers reliability- and latency-sensitive services/user equipment (UE) as well as massive machine type communications (mMTC), which connects multiple devices and objects to provide various services anytime and anywhere.
  • Various technical configurations for this have been proposed.
  • the present disclosure may provide a method and apparatus in which a terminal or a base station performs image signal processing (ISP) based on artificial intelligence in a wireless communication system.
  • a method of operating a terminal in a wireless communication system may include performing image sensing through a first sensor, receiving neural network-related information from a base station, learning a first neural network based on the neural network-related information, and transmitting first neural network learning result data to the base station.
  • the first sensor may not perform image signal processing (ISP), and the neural network-related information may include information indicating whether the terminal is to learn a neural network. Learning of the first neural network may be learning to process the image sensing result data based on artificial intelligence.
  • the neural network related information may include weight information of the first neural network.
  • the terminal may perform the first neural network learning only when the information indicating whether to learn the neural network indicates the first neural network learning.
  • the terminal may receive second neural network learning result data from the base station.
  • the terminal may perform image sensing through a second sensor, and the second sensor may perform ISP.
  • the terminal may transmit ISP output data of the second sensor to the base station.
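  • The following is a minimal sketch of one terminal-side round of the procedure described above. The function and variable names, the array shapes, the linear stand-in for the first neural network, and in particular the local training target are all illustrative assumptions, not the disclosed algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def terminal_learning_round(nn_info, raw_frame, target):
    # Learn only when the base-station-provided flag indicates it (see above).
    if not nn_info["learn_first_network"]:
        return None
    W = nn_info["weights"].copy()          # first neural network (linear stand-in)
    out = raw_frame @ W                    # AI-based processing of raw sensor data
    grad = raw_frame.T @ (out - target) / len(raw_frame)   # MSE gradient
    W -= 0.01 * grad                       # one local update step
    return W                               # learning result data to send to the BS

raw_frame = rng.random((32, 8))            # un-ISP'd samples from the first sensor
target = rng.random((32, 4))               # stand-in local training target (assumed)
nn_info = {"learn_first_network": True, "weights": rng.random((8, 4))}
update = terminal_learning_round(nn_info, raw_frame, target)
```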
  • a method of operating a base station in a wireless communication system may include transmitting neural network-related information to a terminal, receiving first neural network learning result data from the terminal, learning a first reverse neural network based on the first neural network learning result data, and learning a second neural network based on a result of learning the first reverse neural network.
  • the first reverse neural network learning may perform image signal processing (ISP).
  • the neural network related information may include information indicating whether or not the terminal learns the neural network. Learning of the first neural network may process image sensing result data based on artificial intelligence. In the learning of the second neural network, a result of performing the ISP may be compared with data resulting from learning the first neural network.
  • the neural network related information may include weight information of the first neural network.
  • the base station may transmit second neural network learning result data to the terminal.
  • the base station may include an image signal processor (ISP).
  • the image signal processor (ISP) may perform image signal processing (ISP) based on the first reverse neural network learning result data.
  • the base station may receive ISP output data from the terminal and learn the second neural network based on the ISP output data.
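  • As a counterpart, the following sketches one base-station-side round under assumed shapes and names. The simple federated-averaging aggregation is an assumption motivated by the federated-learning framing of the disclosure, and linear least squares stands in for the reverse and second neural networks:

```python
import numpy as np

rng = np.random.default_rng(1)

def bs_learning_round(terminal_updates, raw_batch, isp_reference):
    # 1) Combine first-neural-network learning result data from the terminals
    #    (plain federated averaging, an assumption here).
    W1 = np.mean(terminal_updates, axis=0)

    # 2) First reverse neural network: learn a mapping from the first
    #    network's output back to ISP-processed images, so that it
    #    effectively performs the ISP (least squares as a stand-in).
    first_out = raw_batch @ W1
    W_rev, *_ = np.linalg.lstsq(first_out, isp_reference, rcond=None)

    # 3) Second neural network: trained by comparing its output on the first
    #    network's result with the ISP result (again a linear stand-in).
    W2, *_ = np.linalg.lstsq(first_out, first_out @ W_rev, rcond=None)
    return W1, W_rev, W2

updates = [rng.random((8, 4)) for _ in range(3)]   # result data from 3 terminals
raw_batch = rng.random((64, 8))                    # raw (non-ISP) image samples
isp_reference = rng.random((64, 4))                # BS-side ISP output (assumed shape)
W1, W_rev, W2 = bs_learning_round(updates, raw_batch, isp_reference)
```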
  • a terminal in a wireless communication system, may include a transceiver and a processor connected to the transceiver.
  • the processor may control the terminal to perform image sensing through the first sensor.
  • the processor may control the transceiver to receive neural network related information from a base station.
  • the processor may control the terminal to learn a first neural network based on the neural network related information.
  • the processor may control the transceiver to transmit the first neural network learning result data to the base station.
  • the first sensor may not perform image signal processing (ISP).
  • the neural network related information may include information indicating whether or not the terminal learns the neural network. Learning of the first neural network may process the image sensing result data based on artificial intelligence.
  • a base station may include a transceiver and a processor connected to the transceiver.
  • the processor may control the transceiver to transmit neural network related information to the terminal.
  • the processor may control the base station to receive first neural network learning result data from the terminal.
  • the processor may control the base station to learn a first reverse neural network based on the first neural network learning result data, and to learn a second neural network based on the first reverse neural network learning result.
  • the first reverse neural network learning may perform image signal processing (ISP).
  • the neural network related information may include information indicating whether or not the terminal learns the neural network. Learning of the first neural network may process image sensing result data based on artificial intelligence. In the learning of the second neural network, a result of performing the ISP may be compared with data resulting from learning the first neural network.
  • an apparatus may include at least one memory and at least one processor functionally connected to the at least one memory.
  • the at least one processor may control the device to perform image sensing through the first sensor.
  • the at least one processor may control the device to receive neural network related information from a base station.
  • the at least one processor may control the device to learn a first neural network based on the neural network related information.
  • the at least one processor may control the device to transmit the first neural network learning result data to the base station.
  • the first sensor may not perform image signal processing (ISP).
  • the neural network related information may include information indicating whether or not the terminal learns the neural network. Learning of the first neural network may process the image sensing result data based on artificial intelligence.
  • a non-transitory computer-readable medium may store at least one instruction executable by a processor.
  • the at least one instruction may instruct a device to perform image sensing through a first sensor, to receive neural network-related information from a base station, to learn a first neural network based on the neural network-related information, and to transmit first neural network learning result data to the base station.
  • the first sensor may not perform image signal processing (ISP).
  • the neural network related information may include information indicating whether or not the terminal learns the neural network. Learning of the first neural network may process the image sensing result data based on artificial intelligence.
  • a terminal or a base station may perform image signal processing (ISP) based on artificial intelligence.
  • a terminal can efficiently transmit and receive ISP-related data based on artificial intelligence.
  • Effects obtainable from the embodiments of the present disclosure are not limited to the effects mentioned above; other effects not mentioned can be clearly derived and understood from the following description by those skilled in the art to which the technical configurations of the present disclosure apply. That is, effects not intended when implementing the configurations described in the present disclosure may also be derived by those skilled in the art from the embodiments of the present disclosure.
  • FIG. 1 is a diagram illustrating an example of a communication system applicable to the present disclosure.
  • FIG. 2 is a diagram illustrating an example of a wireless device applicable to the present disclosure.
  • FIG. 3 is a diagram illustrating another example of a wireless device applicable to the present disclosure.
  • FIG. 4 is a diagram illustrating an example of a portable device applicable to the present disclosure.
  • FIG. 5 is a diagram illustrating an example of a vehicle or autonomous vehicle applicable to the present disclosure.
  • FIG. 6 is a diagram illustrating an example of an AI device applicable to the present disclosure.
  • FIG. 7 is a diagram illustrating a method of processing a transmission signal applicable to the present disclosure.
  • FIG. 8 is a diagram showing an example of a communication structure that can be provided in a 6G system applicable to the present disclosure.
  • FIG. 9 is a diagram showing an electromagnetic spectrum applicable to the present disclosure.
  • FIG. 10 is a diagram illustrating a THz communication method applicable to the present disclosure.
  • FIG. 11 is a diagram showing the structure of a perceptron included in an artificial neural network applicable to the present disclosure.
  • FIG. 12 is a diagram illustrating an artificial neural network structure applicable to the present disclosure.
  • FIG. 13 is a diagram illustrating a deep neural network applicable to the present disclosure.
  • FIG. 14 is a diagram illustrating a convolutional neural network applicable to the present disclosure.
  • FIG. 15 is a diagram illustrating a filter operation of a convolutional neural network applicable to the present disclosure.
  • FIG. 16 is a diagram showing a neural network structure in which a recurrent loop exists, applicable to the present disclosure.
  • FIG. 17 is a diagram illustrating an operational structure of a recurrent neural network applicable to the present disclosure.
  • FIGS. 18A and 18B are diagrams illustrating an example of federated learning applicable to the present disclosure.
  • FIG. 19 is a diagram illustrating an example of federated learning applicable to the present disclosure.
  • FIG. 20 is a diagram illustrating an example of a federated learning procedure applicable to the present disclosure.
  • FIG. 21 is a diagram illustrating an example of a terminal applicable to the present disclosure.
  • FIG. 22 is a diagram illustrating an example of a terminal applicable to the present disclosure.
  • FIG. 23 is a diagram illustrating an example of a terminal applicable to the present disclosure.
  • FIG. 24 is a diagram illustrating an example of a terminal applicable to the present disclosure.
  • FIG. 25 is a diagram illustrating an example of an operation procedure of a terminal applicable to the present disclosure.
  • FIG. 26 is a diagram illustrating an example of a server, base station, or terminal procedure applicable to the present disclosure.
  • each component or feature may be considered optional unless explicitly stated otherwise.
  • Each component or feature may be implemented in a form not combined with other components or features.
  • an embodiment of the present disclosure may be configured by combining some elements and/or features. The order of operations described in the embodiments of the present disclosure may be changed. Some components or features of one embodiment may be included in another embodiment, or may be replaced with corresponding components or features of another embodiment.
  • a base station is meaningful as a terminal node of a network that communicates directly with a mobile station.
  • a specific operation described as being performed by a base station in this document may be performed by an upper node of the base station in some cases.
  • the term 'base station' may be replaced with terms such as fixed station, Node B, eNode B, gNode B, ng-eNB, advanced base station (ABS), or access point.
  • a terminal may be replaced with terms such as user equipment (UE), mobile station (MS), subscriber station (SS), mobile subscriber station (MSS), mobile terminal, or advanced mobile station (AMS).
  • the transmitting end refers to a fixed and/or mobile node providing a data or voice service, and the receiving end refers to a fixed and/or mobile node receiving a data or voice service. Therefore, in the case of uplink, the mobile station may be the transmitting end and the base station the receiving end. Similarly, in the case of downlink, the mobile station may be the receiving end and the base station the transmitting end.
  • Embodiments of the present disclosure may be supported by standard documents disclosed in at least one of the following wireless access systems: an IEEE 802.xx system, a 3rd Generation Partnership Project (3GPP) system, a 3GPP Long Term Evolution (LTE) system, a 3GPP 5G (5th generation) New Radio (NR) system, and a 3GPP2 system. In particular, the embodiments of the present disclosure may be supported by the 3GPP technical specification (TS) 38.211, 3GPP TS 38.212, 3GPP TS 38.213, 3GPP TS 38.321, and 3GPP TS 38.331 documents.
  • embodiments of the present disclosure may be applied to other wireless access systems, and are not limited to the above-described systems.
  • it may also be applicable to a system applied after the 3GPP 5G NR system, and is not limited to a specific system.
  • LTE refers to technology of 3GPP TS 36.xxx Release 8 and later. LTE technology after 3GPP TS 36.xxx Release 10 is referred to as LTE-A, and LTE technology after 3GPP TS 36.xxx Release 13 may be referred to as LTE-A pro.
  • 3GPP NR may mean technology after TS 38.xxx Release 15.
  • 3GPP 6G may mean technology after TS Release 17 and/or Release 18.
  • "xxx" means a standard document detail number.
  • LTE/NR/6G may be collectively referred to as a 3GPP system.
  • FIG. 1 is a diagram illustrating an example of a communication system applied to the present disclosure.
  • a communication system 100 applied to the present disclosure includes a wireless device, a base station, and a network.
  • the wireless device means a device that performs communication using a radio access technology (eg, 5G NR, LTE), and may be referred to as a communication/wireless/5G device.
  • the wireless device includes a robot 100a, vehicles 100b-1 and 100b-2, an extended reality (XR) device 100c, a hand-held device 100d, a home appliance 100e, an Internet of Things (IoT) device 100f, and an artificial intelligence (AI) device/server 100g.
  • the vehicle may include a vehicle equipped with a wireless communication function, an autonomous vehicle, a vehicle capable of performing inter-vehicle communication, and the like.
  • the vehicles 100b-1 and 100b-2 may include an unmanned aerial vehicle (UAV) (eg, a drone).
  • the XR device 100c includes augmented reality (AR)/virtual reality (VR)/mixed reality (MR) devices, and may be implemented in the form of a head-mounted device (HMD), a head-up display (HUD) installed in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, digital signage, a vehicle, a robot, and the like.
  • the mobile device 100d may include a smart phone, a smart pad, a wearable device (eg, a smart watch, a smart glass), a computer (eg, a laptop computer), and the like.
  • the home appliance 100e may include a TV, a refrigerator, a washing machine, and the like.
  • the IoT device 100f may include a sensor, a smart meter, and the like.
  • the base station 120 and the network 130 may also be implemented as a wireless device, and a specific wireless device 120a may operate as a base station/network node to other wireless devices.
  • the wireless devices 100a to 100f may be connected to the network 130 through the base station 120 .
  • AI technology may be applied to the wireless devices 100a to 100f, and the wireless devices 100a to 100f may be connected to the AI server 100g through the network 130.
  • the network 130 may be configured using a 3G network, a 4G (eg LTE) network, or a 5G (eg NR) network.
  • the wireless devices 100a to 100f may communicate with each other through the base station 120/network 130, or may communicate directly (e.g., sidelink communication) without going through the base station 120/network 130.
  • the vehicles 100b-1 and 100b-2 may perform direct communication (eg, vehicle to vehicle (V2V)/vehicle to everything (V2X) communication).
  • the IoT device 100f may directly communicate with other IoT devices (eg, sensor) or other wireless devices 100a to 100f.
  • Wireless communication/connections 150a, 150b, and 150c may be established between the wireless devices 100a to 100f and the base station 120, between the wireless devices 100a to 100f, and between the base stations 120.
  • the wireless communication/connection may be performed through radio access technology (e.g., 5G NR), such as uplink/downlink communication 150a, sidelink communication 150b (or D2D communication), and inter-base-station communication 150c (e.g., relay, integrated access backhaul (IAB)).
  • through the wireless communication/connection, a wireless device and a base station, two wireless devices, or two base stations can transmit/receive radio signals to each other.
  • the wireless communication/connections 150a, 150b, and 150c may transmit/receive signals through various physical channels.
  • For transmitting/receiving radio signals, at least a part of various configuration information setting processes, various signal processing processes (e.g., channel encoding/decoding, modulation/demodulation, resource mapping/demapping, etc.), and resource allocation processes may be performed.
  • FIG. 2 is a diagram illustrating an example of a wireless device applicable to the present disclosure.
  • a first wireless device 200a and a second wireless device 200b may transmit and receive radio signals through various wireless access technologies (eg, LTE and NR).
  • {the first wireless device 200a, the second wireless device 200b} may correspond to {the wireless device 100x, the base station 120} and/or {the wireless device 100x, the wireless device 100x} of FIG. 1.
  • the first wireless device 200a includes one or more processors 202a and one or more memories 204a, and may further include one or more transceivers 206a and/or one or more antennas 208a.
  • the processor 202a controls the memory 204a and/or the transceiver 206a and may be configured to implement the descriptions, functions, procedures, suggestions, methods and/or operational flow diagrams disclosed herein.
  • the processor 202a may process information in the memory 204a to generate first information/signal, and transmit a radio signal including the first information/signal through the transceiver 206a.
  • the processor 202a may receive a radio signal including the second information/signal through the transceiver 206a and store information obtained from signal processing of the second information/signal in the memory 204a.
  • the memory 204a may be connected to the processor 202a and may store various information related to the operation of the processor 202a.
  • memory 204a may store software codes including instructions for performing some or all of the processes controlled by processor 202a, or for performing the descriptions, functions, procedures, suggestions, methods, and/or flowcharts of operations disclosed herein.
  • the processor 202a and the memory 204a may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
  • the transceiver 206a may be coupled to the processor 202a and may transmit and/or receive wireless signals through one or more antennas 208a.
  • the transceiver 206a may include a transmitter and/or a receiver.
  • the transceiver 206a may be used interchangeably with a radio frequency (RF) unit.
  • a wireless device may mean a communication modem/circuit/chip.
  • the first wireless device may include a transceiver and a processor connected to the transceiver.
  • the processor may control the first wireless device to perform image sensing through a first sensor.
  • the processor may control the transceiver to receive neural network related information from a base station.
  • the processor may control the first wireless device to learn a first neural network based on the neural network related information.
  • the processor may control the transceiver to transmit the first neural network learning result data to the base station.
  • the first sensor may not perform image signal processing (ISP).
  • the neural network related information may include information indicating whether or not the terminal learns the neural network. Learning of the first neural network may process the image sensing result data based on artificial intelligence.
  • the first wireless device may include a transceiver and a processor connected to the transceiver.
  • the processor may control the transceiver to transmit neural network related information to the terminal.
  • the processor may control the first wireless device to receive first neural network learning result data from the terminal.
  • the processor may control the first wireless device to learn a first reverse neural network based on the first neural network learning result data, and to learn a second neural network based on the first reverse neural network learning result.
  • the first reverse neural network learning may perform image signal processing (ISP).
  • the neural network related information may include information indicating whether or not the terminal learns the neural network. Learning of the first neural network may process image sensing result data based on artificial intelligence. In the learning of the second neural network, a result of performing the ISP may be compared with data resulting from learning the first neural network.
  • the first wireless device may include at least one memory and at least one processor functionally connected to the at least one memory.
  • the at least one processor may control the first wireless device to perform image sensing through a first sensor.
  • the at least one processor may control the first wireless device to receive neural network related information from a base station.
  • the at least one processor may control the first wireless device to learn a first neural network based on the neural network related information.
  • the at least one processor may control the first wireless device to transmit the first neural network learning result data to the base station.
  • the first sensor may not perform image signal processing (ISP).
  • the neural network related information may include information indicating whether or not the terminal learns the neural network. Learning of the first neural network may process the image sensing result data based on artificial intelligence.
  • the second wireless device 200b includes one or more processors 202b, one or more memories 204b, and may further include one or more transceivers 206b and/or one or more antennas 208b.
  • the processor 202b controls the memory 204b and/or the transceiver 206b and may be configured to implement the descriptions, functions, procedures, suggestions, methods and/or operational flow diagrams disclosed herein.
  • the processor 202b may process information in the memory 204b to generate third information/signal, and transmit a radio signal including the third information/signal through the transceiver 206b.
  • the processor 202b may receive a radio signal including the fourth information/signal through the transceiver 206b and store information obtained from signal processing of the fourth information/signal in the memory 204b.
  • the memory 204b may be connected to the processor 202b and may store various information related to the operation of the processor 202b.
  • the memory 204b may store software codes including instructions for performing some or all of the processes controlled by the processor 202b, or for performing the descriptions, functions, procedures, suggestions, methods, and/or flowcharts of operations disclosed herein.
  • the processor 202b and the memory 204b may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
  • the transceiver 206b may be coupled to the processor 202b and may transmit and/or receive wireless signals through one or more antennas 208b.
  • the transceiver 206b may include a transmitter and/or a receiver.
  • the transceiver 206b may be used interchangeably with an RF unit.
  • a wireless device may mean a communication modem/circuit/chip.
  • one or more protocol layers may be implemented by one or more processors 202a, 202b.
  • the one or more processors 202a and 202b may implement one or more layers (e.g., functional layers such as PHY (physical), MAC (media access control), RLC (radio link control), PDCP (packet data convergence protocol), RRC (radio resource control), and SDAP (service data adaptation protocol)).
  • One or more processors 202a, 202b may generate one or more protocol data units (PDUs) and/or one or more service data units (SDUs) according to the descriptions, functions, procedures, proposals, methods, and/or operational flow charts disclosed herein.
  • processors 202a, 202b may generate messages, control information, data or information according to the descriptions, functions, procedures, proposals, methods and/or operational flow diagrams disclosed herein.
  • One or more processors 202a, 202b may generate PDUs, SDUs, messages, control information, data, or signals (e.g., baseband signals) containing information according to the functions, procedures, proposals, and/or methods disclosed herein, and provide them to one or more transceivers 206a and 206b.
  • One or more processors 202a, 202b may receive signals (e.g., baseband signals) from one or more transceivers 206a, 206b, and may obtain PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, suggestions, methods, and/or flowcharts of operations disclosed herein.
  • One or more processors 202a, 202b may be referred to as a controller, microcontroller, microprocessor or microcomputer.
  • One or more processors 202a, 202b may be implemented by hardware, firmware, software, or a combination thereof. As an example, one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), or field programmable gate arrays (FPGAs) may be used.
  • the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document may be implemented using firmware or software, and the firmware or software may be implemented to include modules, procedures, functions, and the like.
  • Firmware or software configured to perform the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document may be included in one or more processors 202a or 202b, or may be stored in one or more memories 204a or 204b and driven by the one or more processors 202a and 202b.
  • the descriptions, functions, procedures, suggestions, methods and/or operational flow charts disclosed in this document may be implemented using firmware or software in the form of codes, instructions and/or sets of instructions.
  • the second wireless device may be a non-transitory computer-readable medium storing at least one instruction executable by a processor.
  • the at least one instruction may instruct the second wireless device to perform image sensing through a first sensor, to receive neural network-related information from a base station, to learn a first neural network based on the neural network-related information, and to transmit first neural network learning result data to the base station.
  • the first sensor may not perform image signal processing (ISP).
  • the neural network related information may include information indicating whether or not the second wireless device learns the neural network. Learning of the first neural network may process the image sensing result data based on artificial intelligence.
  • One or more memories 204a, 204b may be coupled to one or more processors 202a, 202b and may store various types of data, signals, messages, information, programs, codes, instructions and/or instructions.
  • One or more memories 204a, 204b may consist of read only memory (ROM), random access memory (RAM), erasable programmable read only memory (EPROM), flash memory, hard drives, registers, cache memory, computer-readable storage media, and/or combinations thereof.
  • One or more memories 204a, 204b may be located internally and/or externally to one or more processors 202a, 202b.
  • one or more memories 204a, 204b may be connected to one or more processors 202a, 202b through various technologies such as wired or wireless connections.
  • One or more transceivers 206a, 206b may transmit user data, control information, radio signals/channels, etc. referred to in the methods and/or operational flow charts of this document to one or more other devices.
  • One or more transceivers 206a, 206b may receive user data, control information, radio signals/channels, etc. mentioned in the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein from one or more other devices.
  • one or more transceivers 206a and 206b may be connected to one or more processors 202a and 202b and transmit and receive radio signals.
  • one or more processors 202a, 202b may control one or more transceivers 206a, 206b to transmit user data, control information, or radio signals to one or more other devices.
  • one or more processors 202a, 202b may control one or more transceivers 206a, 206b to receive user data, control information, or radio signals from one or more other devices.
  • one or more transceivers 206a, 206b may be coupled to one or more antennas 208a, 208b, and may be configured to transmit and receive, through the one or more antennas 208a, 208b, the user data, control information, and radio signals/channels mentioned in the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document.
  • one or more antennas may be a plurality of physical antennas or a plurality of logical antennas (eg, antenna ports).
  • One or more transceivers 206a, 206b may convert received radio signals/channels from RF band signals into baseband signals in order to process the received user data, control information, radio signals/channels, etc. using one or more processors 202a, 202b.
  • One or more transceivers 206a and 206b may convert user data, control information, and radio signals/channels processed by one or more processors 202a and 202b from baseband signals to RF band signals.
  • one or more transceivers 206a, 206b may include (analog) oscillators and/or filters.
  • FIG. 3 is a diagram illustrating another example of a wireless device applied to the present disclosure.
  • a wireless device 300 corresponds to the wireless devices 200a and 200b of FIG. 2 and may be configured with various elements, components, units, and/or modules.
  • the wireless device 300 may include a communication unit 310, a control unit 320, a memory unit 330, and an additional element 340.
  • the communication unit may include communication circuitry 312 and transceiver(s) 314 .
  • communication circuitry 312 may include one or more processors 202a, 202b of FIG. 2 and/or one or more memories 204a, 204b.
  • transceiver(s) 314 may include one or more transceivers 206a, 206b of FIG. 2.
  • the control unit 320 is electrically connected to the communication unit 310, the memory unit 330, and the additional element 340, and controls the overall operation of the wireless device. For example, the control unit 320 may control the electrical/mechanical operation of the wireless device based on programs/codes/commands/information stored in the memory unit 330. In addition, the control unit 320 may transmit information stored in the memory unit 330 to the outside (e.g., another communication device) through a wireless/wired interface via the communication unit 310, or may store information received through a wireless/wired interface from the outside (e.g., another communication device) in the memory unit 330.
  • the additional element 340 may be configured in various ways according to the type of wireless device.
  • the additional element 340 may include at least one of a power unit/battery, an input/output unit, a driving unit, and a computing unit.
  • the wireless device 300 may be a robot (FIG. 1, 100a), a vehicle (FIG. 1, 100b-1 and 100b-2), an XR device (FIG. 1, 100c), a mobile device (FIG. 1, 100d), a home appliance (FIG. 1, 100e), an IoT device (FIG. 1, 100f), or the like.
  • Wireless devices can be mobile or used in a fixed location depending on the use-case/service.
  • various elements, components, units/units, and/or modules in the wireless device 300 may be entirely interconnected through a wired interface or at least partially connected wirelessly through the communication unit 310 .
  • the control unit 320 and the communication unit 310 are connected by wire, and the control unit 320 and the first units (eg, 130 and 140) are connected wirelessly through the communication unit 310.
  • each element, component, unit/unit, and/or module within wireless device 300 may further include one or more elements.
  • the control unit 320 may be composed of one or more processor sets.
  • control unit 320 may include a set of a communication control processor, an application processor, an electronic control unit (ECU), a graphic processing processor, a memory control processor, and the like.
  • the memory unit 330 may be configured with RAM, dynamic RAM (DRAM), ROM, flash memory, volatile memory, non-volatile memory, and/or combinations thereof.
  • FIG. 4 is a diagram illustrating an example of a portable device applied to the present disclosure.
  • a portable device may include a smart phone, a smart pad, a wearable device (eg, smart watch, smart glasses), and a portable computer (eg, a laptop computer).
  • a mobile device may be referred to as a mobile station (MS), a user terminal (UT), a mobile subscriber station (MSS), a subscriber station (SS), an advanced mobile station (AMS), or a wireless terminal (WT).
  • a portable device 400 may include an antenna unit 408, a communication unit 410, a control unit 420, a memory unit 430, a power supply unit 440a, an interface unit 440b, and an input/output unit 440c.
  • the antenna unit 408 may be configured as part of the communication unit 410 .
  • Blocks 410 to 430/440a to 440c respectively correspond to blocks 310 to 330/340 of FIG. 3 .
  • the communication unit 410 may transmit/receive signals (eg, data, control signals, etc.) with other wireless devices and base stations.
  • the controller 420 may perform various operations by controlling components of the portable device 400 .
  • the controller 420 may include an application processor (AP).
  • the memory unit 430 may store data/parameters/programs/codes/commands necessary for driving the portable device 400 . Also, the memory unit 430 may store input/output data/information.
  • the power supply unit 440a supplies power to the portable device 400 and may include a wired/wireless charging circuit, a battery, and the like.
  • the interface unit 440b may support connection between the mobile device 400 and other external devices.
  • the interface unit 440b may include various ports (eg, audio input/output ports and video input/output ports) for connection with external devices.
  • the input/output unit 440c may receive or output image information/signal, audio information/signal, data, and/or information input from a user.
  • the input/output unit 440c may include a camera, a microphone, a user input unit, a display unit 440d, a speaker, and/or a haptic module.
  • the input/output unit 440c may acquire information/signals (e.g., touch, text, voice, image, video) input from the user, and the acquired information/signals may be stored in the memory unit 430.
  • the communication unit 410 may convert the information/signal stored in the memory into a wireless signal, and directly transmit the converted wireless signal to another wireless device or to a base station.
  • the communication unit 410 may receive a radio signal from another wireless device or base station and then restore the received radio signal to original information/signal. After the restored information/signal is stored in the memory unit 430, it may be output in various forms (eg, text, voice, image, video, or haptic) through the input/output unit 440c.
  • FIG. 5 is a diagram illustrating an example of a vehicle or autonomous vehicle to which the present disclosure applies.
  • a vehicle or an autonomous vehicle may be implemented as a mobile robot, a vehicle, a train, a manned/unmanned aerial vehicle (AV), a ship, or the like, and is not limited to a vehicle type.
  • a vehicle or autonomous vehicle 500 may include an antenna unit 508, a communication unit 510, a control unit 520, a driving unit 540a, a power supply unit 540b, a sensor unit 540c, and an autonomous driving unit 540d.
  • the antenna unit 508 may be configured as a part of the communication unit 510.
  • Blocks 510/530/540a to 540d respectively correspond to blocks 410/430/440 of FIG. 4 .
  • the communication unit 510 may transmit/receive signals (eg, data, control signals, etc.) with external devices such as other vehicles, base stations (eg, base stations, roadside base units, etc.), servers, and the like.
  • the controller 520 may perform various operations by controlling elements of the vehicle or autonomous vehicle 500 .
  • the controller 520 may include an electronic control unit (ECU).
  • AI devices may be implemented as fixed devices or movable devices, such as TVs, projectors, smartphones, PCs, laptops, digital broadcasting terminals, tablet PCs, wearable devices, set-top boxes (STBs), radios, washing machines, refrigerators, digital signage, robots, and vehicles.
  • the AI device 600 may include a communication unit 610, a control unit 620, a memory unit 630, input/output units 640a/640b, a learning processor unit 640c, and a sensor unit 640d. Blocks 610 to 630/640a to 640d may respectively correspond to blocks 310 to 330/340 of FIG. 3.
  • the communication unit 610 may transmit and receive wired/wireless signals (e.g., sensor information, user input, learning models, control signals) with external devices such as other AI devices (e.g., FIG. 1, 100x, 120, 140) or an AI server (e.g., FIG. 1, 140). To this end, the communication unit 610 may transmit information in the memory unit 630 to an external device or transfer a signal received from an external device to the memory unit 630.
  • the controller 620 may determine at least one executable operation of the AI device 600 based on information determined or generated using a data analysis algorithm or a machine learning algorithm, and may perform the determined operation by controlling the components of the AI device 600. For example, the control unit 620 may request, retrieve, receive, or utilize data from the learning processor unit 640c or the memory unit 630, and may control the components of the AI device 600 to execute a predicted operation, or an operation determined to be desirable, among the at least one executable operation. In addition, the control unit 620 may collect history information including user feedback on the operation contents or operation of the AI device 600, store it in the memory unit 630 or the learning processor unit 640c, or transmit it to an external device such as the AI server (FIG. 1, 140). The collected history information may be used to update the learning model.
  • the memory unit 630 may store data supporting various functions of the AI device 600 .
  • the memory unit 630 may store data obtained from the input unit 640a, data obtained from the communication unit 610, output data of the learning processor unit 640c, and data obtained from the sensing unit 640d.
  • the memory unit 630 may store control information and/or software codes required for operation/execution of the controller 620 .
  • the input unit 640a may obtain various types of data from the outside of the AI device 600.
  • the input unit 640a may obtain learning data for model learning and input data to which the learning model is to be applied.
  • the input unit 640a may include a camera, a microphone, and/or a user input unit.
  • the output unit 640b may generate an output related to sight, hearing, or touch.
  • the output unit 640b may include a display unit, a speaker, and/or a haptic module.
  • the sensing unit 640d may obtain at least one of internal information of the AI device 600, surrounding environment information of the AI device 600, and user information by using various sensors.
  • the sensing unit 640d may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, and/or a radar.
  • the learning processor unit 640c may learn a model composed of an artificial neural network using learning data.
  • the learning processor unit 640c may perform AI processing together with the learning processor unit of the AI server (FIG. 1, 140).
  • the learning processor unit 640c may process information received from an external device through the communication unit 610 and/or information stored in the memory unit 630 .
  • the output value of the learning processor unit 640c may be transmitted to an external device through the communication unit 610 and/or stored in the memory unit 630.
  • the transmitted signal may be processed by a signal processing circuit.
  • the signal processing circuit 700 may include a scrambler 710, a modulator 720, a layer mapper 730, a precoder 740, a resource mapper 750, and a signal generator 760.
  • the operation/function of FIG. 7 may be performed by the processors 202a and 202b and/or the transceivers 206a and 206b of FIG. 2 .
  • blocks 710 to 760 may be implemented in the processors 202a and 202b and/or the transceivers 206a and 206b of FIG. 2 .
  • blocks 710 to 760 may be implemented in the processors 202a and 202b of FIG. 2 .
  • blocks 710 to 750 may be implemented in the processors 202a and 202b of FIG. 2 and block 760 may be implemented in the transceivers 206a and 206b of FIG. 2 , and are not limited to the above-described embodiment.
  • the codeword may be converted into a radio signal through the signal processing circuit 700 of FIG. 7 .
  • a codeword is an encoded bit sequence of an information block.
  • Information blocks may include transport blocks (eg, UL-SCH transport blocks, DL-SCH transport blocks).
  • Radio signals may be transmitted through various physical channels (eg, PUSCH, PDSCH).
  • the codeword may be converted into a scrambled bit sequence by the scrambler 710.
  • a scramble sequence used for scrambling is generated based on an initialization value, and the initialization value may include ID information of a wireless device.
  • the scrambled bit sequence may be modulated into a modulation symbol sequence by modulator 720.
  • the modulation method may include pi/2-binary phase shift keying (pi/2-BPSK), m-phase shift keying (m-PSK), m-quadrature amplitude modulation (m-QAM), and the like.
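  • As an illustration of one of the schemes named above, the following maps bit pairs to Gray-labeled QPSK (4-QAM) symbols; the unit-average-power normalization is the usual convention, assumed here:

```python
import numpy as np

def qpsk_modulate(bits):
    """Map bit pairs to Gray-labeled QPSK (4-QAM) symbols with unit power."""
    b = np.asarray(bits).reshape(-1, 2)
    i = 1 - 2 * b[:, 0]               # first bit -> in-phase sign
    q = 1 - 2 * b[:, 1]               # second bit -> quadrature sign
    return (i + 1j * q) / np.sqrt(2)  # normalize to unit average symbol energy

print(qpsk_modulate([0, 0, 0, 1, 1, 0, 1, 1]))
# [ 0.707+0.707j  0.707-0.707j -0.707+0.707j -0.707-0.707j]
```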
  • the complex modulation symbol sequence may be mapped to one or more transport layers by the layer mapper 730. Modulation symbols of each transport layer may be mapped to corresponding antenna port(s) by the precoder 740 (precoding).
  • the output z of the precoder 740 can be obtained by multiplying the output y of the layer mapper 730 by the N × M precoding matrix W (z = W · y).
  • here, N is the number of antenna ports and M is the number of transport layers.
  • the precoder 740 may perform precoding after transform precoding (e.g., discrete Fourier transform (DFT)) on the complex modulation symbols, or may perform precoding without transform precoding.
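  • A numerical sketch of the precoding relation z = W · y stated above; N = 4 ports, M = 2 layers, and the random precoder are illustrative choices only:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 4, 2                                    # antenna ports, transport layers
# N x M precoding matrix W (random, power-normalized for illustration)
W = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2 * N)
y = np.exp(1j * rng.uniform(0, 2 * np.pi, M))  # one unit-power symbol per layer
z = W @ y                                      # precoder output, one entry per antenna port
assert z.shape == (N,)
```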
  • the resource mapper 750 may map modulation symbols of each antenna port to time-frequency resources.
  • the time-frequency resource may include a plurality of symbols (eg, CP-OFDMA symbols and DFT-s-OFDMA symbols) in the time domain and a plurality of subcarriers in the frequency domain.
  • the signal generator 760 generates a radio signal from the mapped modulation symbols, and the generated radio signal can be transmitted to other devices through each antenna. To this end, the signal generator 760 may include an inverse fast Fourier transform (IFFT) module, a cyclic prefix (CP) inserter, a digital-to-analog converter (DAC), a frequency up-converter, and the like.
  • the signal processing process for a received signal in the wireless device (e.g., 200a and 200b of FIG. 2) may be configured as the reverse of the signal processing process 710 to 760 of FIG. 7. For example, the received radio signal may be converted into a baseband signal through a signal restorer.
  • the signal restorer may include a frequency down-converter, an analog-to-digital converter (ADC), a CP remover, and a fast Fourier transform (FFT) module.
  • the baseband signal may be restored to a codeword through a resource de-mapper process, a postcoding process, a demodulation process, and a de-scramble process.
  • a signal processing circuit for a received signal may include a signal restorer, a resource demapper, a postcoder, a demodulator, a descrambler, and a decoder.
  • 6G (radio communications) systems aim at (i) very high data rates per device, (ii) a very large number of connected devices, (iii) global connectivity, (iv) very low latency, (v) lower energy consumption of battery-free IoT devices, (vi) ultra-reliable connectivity, and (vii) connected intelligence with machine learning capabilities.
  • the vision of the 6G system can be summarized in four aspects: "intelligent connectivity", "deep connectivity", "holographic connectivity", and "ubiquitous connectivity", and the 6G system can satisfy the requirements shown in Table 1 below. That is, Table 1 shows the requirements of the 6G system.
  • the 6G system can have key factors such as enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), massive machine type communications (mMTC), AI integrated communication, tactile internet, high throughput, high network capacity, high energy efficiency, low backhaul and access network congestion, and enhanced data security.
  • FIG. 8 is a diagram illustrating an example of a communication structure that can be provided in a 6G system applicable to the present disclosure.
  • a 6G system is expected to have 50 times higher simultaneous wireless communication connectivity than a 5G wireless communication system.
  • URLLC, a key feature of 5G, is expected to become an even more mainstream technology by providing end-to-end latency of less than 1 ms in 6G communications.
  • the 6G system will have much better volume spectral efficiency, unlike the frequently used area spectral efficiency.
  • 6G systems can provide very long battery life and advanced battery technology for energy harvesting, so mobile devices in 6G systems may not need to be charged separately.
  • The most important and newly introduced technology for the 6G system is AI.
  • AI was not involved in the 4G system.
  • 5G systems will support partial or very limited AI.
  • the 6G system will be AI-enabled for full automation.
  • Advances in machine learning will create more intelligent networks for real-time communication in 6G.
  • Introducing AI in communications can simplify and enhance real-time data transmission.
  • AI can use a plethora of analytics to determine how complex target tasks are performed. In other words, AI can increase efficiency and reduce processing delays.
  • AI can also play an important role in machine-to-machine, machine-to-human and human-to-machine communications.
  • AI can enable rapid communication in the brain-computer interface (BCI).
  • AI-based communication systems can be supported by metamaterials, intelligent structures, intelligent networks, intelligent devices, intelligent cognitive radios, self-sustaining wireless networks, and machine learning.
  • AI-based physical layer transmission means applying signal processing and communication mechanisms based on AI rather than traditional communication frameworks in fundamental signal processing and communication mechanisms. For example, it may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, a deep learning-based multiple input multiple output (MIMO) mechanism, and AI-based resource scheduling and allocation.
  • Machine learning may be used for channel estimation and channel tracking, and may be used for power allocation, interference cancellation, and the like in a downlink (DL) physical layer. Machine learning can also be used for antenna selection, power control, symbol detection, and the like in a MIMO system.
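  • For context on the channel estimation task mentioned above, the following sketches the classical least-squares pilot estimate that learning-based methods compete with; the flat single-tap channel, pilot choice, and noise level are synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)  # true channel tap
x = np.ones(64, dtype=complex)                                         # known pilot symbols
n = 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))    # receiver noise
y = h * x + n                                                          # received pilots
h_hat = np.mean(y / x)          # least-squares estimate averaged over pilots
print(abs(h - h_hat))           # small estimation error
```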
  • AI algorithms based on deep learning require a lot of training data to optimize training parameters.
  • a lot of training data is used offline. This is because static training on training data in a specific channel environment may cause a contradiction between dynamic characteristics and diversity of a radio channel.
  • Machine learning refers to a set of operations for training a machine to create a machine that can perform tasks that humans can or cannot do.
  • Machine learning requires data and a learning model.
  • data learning methods can be largely classified into three types: supervised learning, unsupervised learning, and reinforcement learning.
  • Neural network training is aimed at minimizing errors in the output.
  • Neural network learning repeatedly inputs training data into the neural network, calculates the error between the neural network's output for the training data and the target, and backpropagates the error from the output layer of the neural network toward the input layer in a direction that reduces the error, updating the weight of each node of the neural network.
  • Supervised learning uses training data labeled with correct answers, while unsupervised learning may use training data without correct-answer labels. For example, in supervised learning for data classification, the training data may be data in which each training sample is labeled with a category. The labeled training data is input to the neural network, and an error may be calculated by comparing the output (category) of the neural network with the label of the training data. The calculated error is backpropagated in the reverse direction (i.e., from the output layer to the input layer) in the neural network, and the connection weight of each node of each layer may be updated according to the backpropagation.
  • the amount of change in the connection weight of each updated node may be determined according to a learning rate.
  • the neural network's computation of input data and backpropagation of errors can constitute a learning cycle (epoch).
  • the learning rate may be applied differently according to the number of iterations of the learning cycle of the neural network. For example, a high learning rate may be used in the early stages of learning so that the neural network quickly reaches a certain level of performance, and a low learning rate may be used in the later stages to increase accuracy. A compact sketch of such a loop follows.
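  • The sketch below implements the training loop just described: repeated forward passes, error backpropagation from the output layer toward the input layer, and weight updates scaled by a learning rate that starts high and decays with the epoch count. The two-layer network, random data, and decay schedule are illustrative choices, not from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((128, 4))                 # training inputs
T = rng.random((128, 1))                 # labeled targets (supervised learning)
W1 = rng.standard_normal((4, 8))         # input -> hidden weights
W2 = rng.standard_normal((8, 1))         # hidden -> output weights

for epoch in range(100):                 # one pass over the data = one epoch
    lr = 0.1 / (1 + 0.05 * epoch)        # high early, low late for accuracy
    H = np.tanh(X @ W1)                  # forward pass, hidden layer
    Y = H @ W2                           # forward pass, output layer
    err = Y - T                          # output error vs. the labels
    # backpropagate: output layer first, then toward the input layer
    gW2 = H.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * (1 - H**2)) / len(X)
    W2 -= lr * gW2                       # gradient step, scaled by learning rate
    W1 -= lr * gW1
```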
  • the learning method may vary depending on the characteristics of the data. For example, in a case where the purpose of the receiver is to accurately predict data transmitted by the transmitter in a communication system, it is preferable to perform learning using supervised learning rather than unsupervised learning or reinforcement learning.
  • the learning model corresponds to the human brain, and the most basic linear model can be considered; a paradigm of machine learning that uses a neural network structure of high complexity, such as an artificial neural network, as a learning model is called deep learning.
  • the neural network structures used as learning methods are largely divided into deep neural networks (DNN), convolutional neural networks (CNN), and recurrent neural networks (RNN), and these learning models can be applied.
  • THz communication can be applied in 6G systems.
  • the data transmission rate can be increased by increasing the bandwidth. This can be done using sub-THz communication with wide bandwidth and applying advanced massive MIMO technology.
  • THz waves, also known as sub-millimeter radiation, generally represent the frequency band between 0.1 THz and 10 THz, with corresponding wavelengths in the range of 0.03 mm to 3 mm.
  • the 100 GHz-300 GHz band range (sub-THz band) is considered a major part of the THz band for cellular communications. Adding the sub-THz band to the mmWave band will increase 6G cellular communication capacity.
  • 300 GHz-3 THz is in the far infrared (IR) frequency band.
  • the 300 GHz-3 THz band is part of the optical band, but lies at the border of the optical band, just behind the RF band. Thus, this 300 GHz-3 THz band exhibits similarities to RF.
  • The main characteristics of THz communications include (i) widely available bandwidth to support very high data rates, and (ii) high path loss at high frequencies (highly directional antennas are indispensable).
  • the narrow beamwidth produced by the highly directional antenna reduces interference.
  • the small wavelength of the THz signal allows a much larger number of antenna elements to be incorporated into devices and BSs operating in this band. This enables advanced adaptive array technology to overcome range limitations.
  • FIG. 10 is a diagram illustrating a THz communication method applicable to the present disclosure.
  • THz waves are located between the RF (Radio Frequency)/millimeter (mm) bands and the infrared band; they (i) penetrate non-metallic/non-polar materials better than visible light/infrared rays, and (ii) have shorter wavelengths and higher straightness than RF/millimeter waves, so beam focusing may be possible.
  • FIG. 11 is a diagram showing the structure of a perceptron included in an artificial neural network applicable to the present disclosure.
  • FIG. 12 is a diagram showing an artificial neural network structure applicable to the present disclosure.
  • an artificial intelligence system may be applied in a 6G system.
  • the artificial intelligence system may operate based on a learning model corresponding to the human brain, as described above.
  • a paradigm of machine learning using a neural network structure having a high complexity such as an artificial neural network as a learning model may be referred to as deep learning.
  • the neural network structures used in the learning method are largely divided into a deep neural network (DNN), a convolutional neural network (CNN), and a recurrent neural network (RNN).
  • the artificial neural network may be composed of several perceptrons.
  • the overall artificial neural network structure extends the simplified perceptron structure shown in FIG. 11, and the input vector can be applied to perceptrons of different dimensions.
  • an input value or an output value is referred to as a node.
  • the perceptron structure shown in FIG. 11 can be described as being composed of a total of three layers based on input values and output values.
  • An artificial neural network in which there are H (d+1)-dimensional perceptrons between the 1st layer and the 2nd layer, and K (H+1)-dimensional perceptrons between the 2nd layer and the 3rd layer, can be represented as shown in FIG. 12.
  • the layer where the input vector is located is called the input layer
  • the layer where the final output value is located is called the output layer
  • all layers located between the input layer and the output layer are called hidden layers.
  • the artificial neural network illustrated in FIG. 12 can be understood as a total of two layers.
  • the artificial neural network is composed of basic-block perceptrons connected in two dimensions.
  • the above-described input layer, hidden layer, and output layer can be jointly applied to various artificial neural network structures such as CNN and RNN, which will be described later, as well as multi-layer perceptrons.
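To make the layer dimensions concrete, the following is a minimal Python sketch of the three-layer structure described above, assuming sigmoid activations; the sizes d, H, and K mirror the description, and everything else is illustrative.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    d, H, K = 4, 8, 3                          # input, hidden, and output widths
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(H, d + 1))           # H perceptrons of dimension (d+1): d weights + bias
    W2 = rng.normal(size=(K, H + 1))           # K perceptrons of dimension (H+1): H weights + bias

    x = rng.normal(size=d)                     # input layer: the input vector
    z = sigmoid(W1 @ np.append(x, 1.0))        # hidden layer activations
    out = sigmoid(W2 @ np.append(z, 1.0))      # output layer: final output values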
  • FIG. 13 is a diagram illustrating a deep neural network applicable to the present disclosure.
  • the deep neural network may be a multi-layer perceptron composed of eight hidden layers plus an output layer.
  • the multilayer perceptron structure can be expressed as a fully-connected neural network.
  • In a fully-connected neural network, there is no connection relationship between nodes located on the same layer, and a connection relationship may exist only between nodes located on adjacent layers.
  • DNN has a fully-connected neural network structure and is composed of a combination of multiple hidden layers and activation functions, so it can be usefully applied to identify the correlation characteristics between inputs and outputs.
  • the correlation characteristic may mean a joint probability of input and output.
  • FIG. 14 is a diagram illustrating a convolutional neural network applicable to the present disclosure.
  • FIG. 15 is a diagram illustrating a filter operation of a convolutional neural network applicable to the present disclosure.
  • various artificial neural network structures different from the aforementioned DNN can be formed depending on how a plurality of perceptrons are connected to each other.
  • in a general deep neural network, the nodes located inside one layer are arranged in a one-dimensional vertical direction. In the convolutional neural network structure of FIG. 14, however, the nodes are arranged two-dimensionally, with w nodes horizontally and h nodes vertically.
  • Since a weight is attached to each connection from the input nodes to one node of the hidden layer, a total of h×w weights must be considered per hidden-layer node. Since there are h×w nodes in the input layer, a total of h²w² weights may be required between two adjacent layers.
  • Since the connection structure of FIG. 14 has the problem that the number of weights grows exponentially with the number of connections, it can be assumed that a filter of small size exists, instead of considering all connections between adjacent layers. For example, as shown in FIG. 15, a weighted sum and an activation function operation may be performed on the portion where the filter overlaps the input.
  • One filter has as many weights as its size, and the weights can be learned so that a specific feature of the image can be extracted and output as a factor.
  • In FIG. 15, a 3×3 filter is applied to the 3×3 area at the top left of the input layer, and the output value obtained by performing a weighted sum and an activation function operation for the corresponding node may be stored in z_22.
  • the above-described filter is moved by a certain distance horizontally and vertically while scanning the input layer, and the weighted sum and activation function calculations are performed, and the output value can be placed at the position of the current filter.
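The scanning procedure can be sketched as follows: a minimal Python illustration of a single 3×3 filter moved with stride 1; the ReLU activation and all values are assumptions for the example, not from the source.

    import numpy as np

    def relu(a):
        return np.maximum(a, 0.0)

    img = np.arange(36, dtype=float).reshape(6, 6)   # input layer, h x w nodes
    filt = np.full((3, 3), 1.0 / 9.0)                # one 3x3 filter (9 weights)

    h, w = img.shape
    out = np.zeros((h - 2, w - 2))                   # one output value per filter position
    for i in range(h - 2):                           # move the filter vertically
        for j in range(w - 2):                       # move the filter horizontally
            patch = img[i:i + 3, j:j + 3]            # region currently covered by the filter
            out[i, j] = relu(np.sum(patch * filt))   # weighted sum + activation at this position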
  • A deep neural network with this structure is called a convolutional neural network (CNN), and the hidden layer resulting from the convolution operation may be called a convolutional layer.
  • a neural network including a plurality of convolutional layers may be referred to as a deep convolutional neural network (DCNN).
  • the number of weights may be reduced by calculating a weighted sum including only nodes located in a region covered by the filter in the node where the current filter is located. This allows one filter to be used to focus on features for a local area. Accordingly, CNN can be effectively applied to image data processing in which a physical distance in a 2D area is an important criterion. Meanwhile, in the CNN, a plurality of filters may be applied immediately before the convolution layer, and a plurality of output results may be generated through a convolution operation of each filter.
  • a structure in which a method of feeding previous outputs back into the input is applied to an artificial neural network can be referred to as a recurrent neural network structure.
  • FIG. 16 is a diagram showing a neural network structure with a circular loop applicable to the present disclosure.
  • FIG. 17 is a diagram illustrating an operational structure of a recurrent neural network applicable to the present disclosure.
  • A recurrent neural network inputs an element {x_1(t), x_2(t), ..., x_d(t)} of a data sequence at time point t into the fully connected neural network together with the hidden vector {z_1(t-1), z_2(t-1), ..., z_H(t-1)} of the immediately preceding time point t-1, and applies a weighted sum and an activation function.
  • the reason why the hidden vector is transmitted to the next time point in this way is that information in the input vector at previous time points is regarded as being accumulated in the hidden vector of the current time point.
  • the recurrent neural network may operate at predetermined time points with respect to an input data sequence.
  • The input vector {x_1(1), x_2(1), ..., x_d(1)} at time point 1 is input to the recurrent neural network to determine the hidden vector {z_1(1), z_2(1), ..., z_H(1)}, which is input together with the input vector {x_1(2), x_2(2), ..., x_d(2)} at time point 2 to determine the hidden vector {z_1(2), z_2(2), ..., z_H(2)} through the weighted sum and the activation function. This process is performed iteratively at time point 2, time point 3, ..., until time point T.
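The recurrence can be sketched as follows: a minimal Python illustration with a tanh activation; the weight shapes follow the {x_d(t)}, {z_H(t)} notation above, and all values are illustrative.

    import numpy as np

    d, H, T = 3, 5, 4                          # input width, hidden width, sequence length
    rng = np.random.default_rng(0)
    Wx = rng.normal(size=(H, d))               # weights applied to the current input vector
    Wz = rng.normal(size=(H, H))               # weights applied to the previous hidden vector
    xs = rng.normal(size=(T, d))               # input data sequence x(1) .. x(T)

    z = np.zeros(H)                            # hidden vector before time point 1
    for t in range(T):                         # time point 1, 2, ..., T
        z = np.tanh(Wx @ xs[t] + Wz @ z)       # weighted sum of x(t) and z(t-1), then activation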
  • A recurrent neural network in which hidden layers are stacked deeply is called a deep recurrent neural network (DRNN).
  • Recurrent neural networks are designed to be usefully applied to sequence data (eg, natural language processing).
  • In addition to DNN, CNN, and RNN, the neural network structures used as learning methods include various deep learning techniques such as the restricted Boltzmann machine (RBM), deep belief networks (DBN), and deep Q-networks, and they can be applied to fields such as computer vision, voice recognition, natural language processing, and voice/signal processing.
  • a method for exchanging information between devices, whether autonomous driving devices, unmanned devices, or general devices, may include communication between a server and the devices, and communication between the sensors inside the devices.
  • Devices such as autonomous vehicles, unmanned drones, IoT devices, and robots are increasingly equipped with sensors for their operation.
  • devices are increasingly equipped with sensors such as radar and cameras to perform autonomous driving operations. Accordingly, the amount of data processed by the device increases, job processing becomes complicated, and data loss due to image signal processing (ISP) may increase.
  • the autonomous driving device can identify surrounding objects. For example, an autonomous device may identify whether a nearby object is a person, mobile device, or object.
  • the autonomous driving device may grasp the driving environment.
  • the self-driving device can grasp the surrounding environment and region in 3D.
  • the autonomous driving device may detect surrounding objects or measure the speed of surrounding objects. Accordingly, the autonomous driving device may brake or prevent a collision with a surrounding object.
  • An autonomous driving device needs to be equipped with the above functions. However, the functions described above require very complex calculations and accurate references to be derived. Sensors included in existing devices have performance optimization limitations. That is, the camera lens, image sensor, and ISP included in the existing device have performance optimization limitations.
  • the RF frequency and processing speed of the radar included in an existing device have limitations in performance optimization. In addition, the laser signal generation and detection of the lidar included in an existing device also have performance optimization limitations.
  • changes in sensor specifications according to the product target of the device may cause an imbalance between the amount of information collected from the device and the work processing capability.
  • the product target may include concepts such as premium, high tier, and low tier.
  • Sensors may include lenses, image sensors, ISP processing, radar/lidar, and the like. The above-described imbalance may cause degradation in the performance of the device to identify surrounding objects, grasp the driving environment, detect objects, or measure speed.
  • the information obtained by the devices from the camera may be general information-centric. Accordingly, devices having a specific purpose must reprocess information obtained from the camera. Thus, additional processing costs may be incurred for such processing.
  • This disclosure proposes an organic operating method between a device and a server.
  • the present disclosure proposes a method of increasing reliability of communication between sensors inside a device.
  • the present disclosure proposes a method of increasing resource efficiency by performing ISP processing by AI.
  • FIGS. 18a and 18b are diagrams illustrating an example of federated learning applicable to the present disclosure.
  • Federated Learning is a machine learning technique in which multiple terminals or servers holding local data samples and information train an algorithm without terminals or servers exchanging data samples and information. This method allows terminals to build a common powerful machine learning model without sharing data and information.
  • federated learning technology can solve important issues such as data privacy, data security, data access rights, and heterogeneous data access.
  • Federated learning is performed in such a way that a terminal trains a local model on its local data and information, the terminal passes the parameters of the local model to the server so that the server can create a global model, and the terminal then receives the global model back from the server.
  • Parameters of the local model may include weights and information of the deep neural network.
  • Servers can contain global models.
  • Devices that communicate with the server may include a local model.
  • the global model may include information generated by the server based on information received from a plurality of terminals.
  • the local model may include information generated by the device based on information received from the server.
  • Servers and devices may communicate through a variety of methods to transfer models to each other. For example, a server and a device may communicate by being wired or wirelessly connected to transmit models to each other.
  • a server may include a base station.
  • the server 1808 may receive W_1 from terminal 1 (1802).
  • a terminal may include a wireless communication device such as a vehicle, an autonomous vehicle, or an unmanned device.
  • W_1 may mean the weight of the neural network generated by terminal 1 through learning.
  • Terminal 1 (1802) can receive W_BS from the server 1808.
  • W_BS may mean the weight of the neural network generated by the server through learning.
  • The server 1808 may receive W_2 from terminal 2 (1804).
  • W_2 may mean the weight of the neural network generated by terminal 2 through learning.
  • Terminal 2 (1804) can receive W_BS from the server 1808.
  • W_BS may mean the weight of the neural network generated by the server through learning.
  • Terminal 3 (1806) may also operate in the same manner as described above.
  • the server 1808 may generate W_BS through learning based on the weights received from the plurality of terminals, as described above.
  • W_BS generated by the server can be expressed as in Equation 1 below.
  • U may mean the number of terminals that provided weight information to the server.
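The body of Equation 1 is reproduced only as an image in the source. A plausible reconstruction, assuming the standard federated-averaging rule and using U as defined above, is:

$$W_{BS} = \frac{1}{U} \sum_{u=1}^{U} W_u$$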
  • a vehicle may include a central server 1902 and a sensor camera 1904 . Also, the vehicle may include a plurality of sensor cameras.
  • the central server 1902 can communicate with the sensor camera 1904.
  • the central server may communicate with the sensor camera by including an Integrated Access Backhaul-donor (IAB-donor) and an Integrated Access Backhaul-node (IAB-node).
  • the central server may include a central unit (CU) of the IAB donor, and the sensor camera may include a distributed unit (DU) of the IAB donor.
  • the central server may generate the weight W_BS through learning using artificial intelligence, based on information received from a plurality of sensor cameras.
  • the sensor camera may generate weights through learning using artificial intelligence based on information received from the central server.
  • the sensor cameras may transmit their respectively generated weights W to the central server.
  • the device may update the local model using local data.
  • a device may receive a global model and perform training with stored local data.
  • the device may transmit the updated local model to the server.
  • the device may transmit the calculated parameters to the server.
  • the calculated parameters may include weights and information of the deep neural network.
  • the server may update the global model through the received local model.
  • the server may update parameters of the global model using parameters received from a plurality of devices.
  • the server may complete the parameter update and evaluate the cost function.
  • the server may transmit the updated global model to the device.
  • the server may transmit the global model to multiple devices to perform learning.
  • the device and the server may repeat the above-described procedures until the values obtained from the above-described evaluation converge. Through this process, the device and the server can derive an optimal weight value.
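One round of this exchange can be sketched as follows: a minimal Python illustration assuming a linear local model and plain weight averaging as in Equation 1; the function names and data are illustrative, not from the source.

    import numpy as np

    def local_update(w_global, X, y, lr=0.1, steps=20):
        # device side: refine the received global model on stored local data
        w = w_global.copy()
        for _ in range(steps):
            w -= lr * X.T @ (X @ w - y) / len(X)
        return w                               # updated local parameters sent to the server

    def server_round(w_bs, datasets):
        # server side: update the global model from the received local models
        locals_ = [local_update(w_bs, X, y) for X, y in datasets]
        return np.mean(locals_, axis=0)        # average of the U local weights

    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    datasets = []
    for _ in range(3):                         # U = 3 devices, each with private data
        X = rng.normal(size=(64, 3))
        datasets.append((X, X @ true_w))

    w_bs = np.zeros(3)
    for _ in range(10):                        # repeat rounds until the values converge
        w_bs = server_round(w_bs, datasets)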
  • the cost function and the optimal weight value can be expressed as in Equation 2 below: J(w) = Σ_k |d_k - x_k(w)|², with w* = argmin_w J(w).
  • d_k may mean a value (trained information) measured by the base station.
  • x_k(w) may mean a value measured by terminal k.
  • x_k(w) can be expressed as f_NN(s_k; w).
  • J(w) may mean the value obtained by adding all the squared differences between d_k and x_k(w).
  • w* may mean the estimator that makes J(w) a minimum value.
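Under a linear reading of the model, x_k(w) = s_k · w (an assumption made here for illustration only), minimizing J(w) is an ordinary least-squares problem, and the estimator w* can be computed directly:

    import numpy as np

    rng = np.random.default_rng(0)
    S = rng.normal(size=(100, 4))              # one s_k vector per measurement k
    w_true = np.array([0.2, -0.7, 1.5, 0.0])
    d = S @ w_true                             # d_k values measured at the base station

    # J(w) = sum_k |d_k - s_k . w|^2 is minimized by the least-squares estimator
    w_star, *_ = np.linalg.lstsq(S, d, rcond=None)
    J_min = np.sum((d - S @ w_star) ** 2)      # ~0 for this noiseless toy example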
  • a terminal may include a vehicle and a wireless communication device.
  • the terminal may include a sensor 2102 and a central server 2104 .
  • the terminal may include a plurality of sensors.
  • the central server 2104 may include an ISP 2108 and a neural network training unit (2110).
  • Sensor 2102 and central server 2104 may communicate.
  • the central server 2104 may communicate with a plurality of sensors.
  • the sensor may include a camera sensor.
  • a virtual neural network unit (2106) may be located inside or outside the sensor (2102).
  • the aforementioned central server 2104 may be replaced by a base station. That is, the ISP 2108 and the neural net learner 2110 may be located outside the terminal and may be included in the base station.
  • the sensor 2102 according to the present disclosure may not include an image signal processing (ISP) device.
  • the sensor 2102 according to the present disclosure may process an input image according to a specific purpose through the virtual neural net unit 2106.
  • the sensor 2102 may extract only a person from an image excluding other objects through the virtual neural net unit 2106 . That is, the virtual neural net unit 2106 may process images according to a specific purpose.
  • the virtual neural net unit may process an image according to a specific purpose based on artificial intelligence.
  • the sensor may process the image through the virtual neural net unit and transmit data to the central server 2104. The data transmitted by the sensor to the central server does not go through the ISP, so data loss can be reduced.
  • a plurality of sensors included in the terminal may transmit data to the central server without processing images through the ISP.
  • a plurality of sensors can minimize data loss by transmitting unprocessed data to the central server.
  • Central server 2104 may include ISP 2108 .
  • the central server can process the image through the ISP and compare it with the data output by the virtual neural net. The relevant procedures are described in detail below.
  • the sensor 2102 can send data output to the virtual neural net unit 2106.
  • the virtual neural net unit 2106 may receive the data output of the sensor 2102 as an input.
  • the virtual neural net unit 2106 may multiply data received as an input by a weight value and output the result.
  • the virtual neural net unit may output x_k by multiplying the data received as an input by the weight w_BS, which is the neural net learning result.
  • the central server 2104 may receive the result of the virtual neural net as an input.
  • the neural net learning unit 2110 may receive output data of the virtual neural net unit 2106 as an input.
  • ISP 2108 may also receive data from sensor 2102 .
  • the ISP may perform image processing on the input data.
  • the neural net learner 2110 may compare the output data of the ISP 2108 with the learning result. For example, the neural net learning unit may obtain the difference between the output d_k of the ISP and the virtual neural net output x_k. The neural net learning unit may generate weight values based on this difference and may transmit the generated weight values to the virtual neural net unit. The neural net learning unit and the virtual neural net unit may repeat the above-described procedure until the weight values converge. As the procedure is repeated, the weight values can become more and more accurate.
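This loop can be sketched as follows: a minimal Python illustration in which the ISP is stubbed as a fixed reference transform and the virtual neural net is a single scalar weight; everything here is an illustrative assumption, not the disclosed implementation.

    import numpy as np

    def isp_reference(raw):
        # stand-in for the ISP pipeline whose output d_k serves as the target
        return np.clip(0.8 * raw + 0.1, 0.0, 1.0)

    rng = np.random.default_rng(0)
    raw = rng.uniform(size=32)                 # sensor output data (no ISP applied)
    d_k = isp_reference(raw)                   # ISP output on the same data

    w, prev = 0.0, np.inf
    while abs(w - prev) > 1e-9:                # repeat until the weight converges
        prev = w
        x_k = w * raw                          # virtual neural net output
        grad = 2 * np.mean((x_k - d_k) * raw)  # gradient of the squared difference
        w -= 0.5 * grad                        # updated weight sent back to the virtual net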
  • the terminal may include a sensor 2202 and a central server 2204 . Also, the terminal may include a plurality of sensors.
  • the central server 2204 may include an ISP 2208, a neural network learning unit 2210, and a reverse virtual neural network unit 2212. The sensor 2202 and the central server 2204 can communicate with each other. As an example, the central server 2204 may communicate with a plurality of sensors.
  • the sensor may include a camera sensor.
  • the virtual neural network unit 2206 may be located inside or outside the sensor 2202 .
  • the aforementioned central server 2204 may be replaced by a base station. That is, the reverse virtual neural net unit 2212, the ISP 2208, and the neural net learning unit 2210 may be located outside the terminal and may be included in the base station.
  • the central server 2204 may generate data having a specific purpose removed through the reverse virtual neural net unit 2212 .
  • the reverse virtual neural net unit may generate output data of a sensor. The relevant procedures are described in detail below.
  • the sensor 2202 can send data output to the virtual neural net unit 2206.
  • the virtual neural net unit 2206 may receive the data output of the sensor 2202 as an input.
  • the virtual neural net unit 2206 may multiply data received as an input by a weight value and output the result.
  • the virtual neural net unit may output x_k by multiplying the data received as an input by the weight w_BS, which is the neural net learning result.
  • the central server 2204 may receive the result of the virtual neural net as an input.
  • the neural net learning unit 2210 may receive output data of the virtual neural net unit 2206 as an input.
  • the reverse virtual neural net unit 2212 may receive output data of the virtual neural net unit 2206 .
  • the reverse virtual neural net unit may remove the specific purpose of the image processed by the virtual neural net unit according to the specific purpose. For example, the reverse virtual neural net unit may generate data with a specific purpose removed based on artificial intelligence. Also, the reverse virtual neural net unit may process the received data to generate output data of the sensor. The reverse virtual neural net unit may transmit the generated data to the ISP, and the ISP 2208 may receive data from the reverse neural net unit 2212. The ISP may perform image processing on the input data.
  • the neural net learner 2210 may compare the output data of the ISP 2208 with the learning result. For example, the neural net learning unit may obtain the difference between the output d_k of the ISP and the virtual neural net output x_k. The neural net learning unit may generate weight values based on this difference and may transmit them to the virtual neural net unit. The neural net learning unit and the virtual neural net unit may repeat the above-described procedure until the weight values converge. As the procedure is repeated, the weight values can become more and more accurate.
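The extra hop through the reverse virtual neural net can be sketched as follows: a minimal Python illustration in which the reverse net is a scalar that maps the purpose-specific output back toward sensor-like data, the ISP is the same kind of stub, and d_k is treated as a fixed target at each step; all of this is an assumption for the sketch, not the disclosed implementation.

    import numpy as np

    def isp(rec):
        # stand-in ISP transform applied to the reconstructed sensor data
        return np.clip(0.8 * rec + 0.1, 0.0, 1.0)

    rng = np.random.default_rng(1)
    raw = rng.uniform(size=32)                 # sensor output (terminal side, no ISP)
    w, v = 0.0, 1.0                            # virtual net weight; reverse net weight (fixed here)

    for _ in range(200):                       # repeat until the weight values converge
        x_k = w * raw                          # purpose-specific output sent to the server
        rec = v * x_k                          # reverse net reconstructs sensor-like data
        d_k = isp(rec)                         # ISP output on the reconstruction
        grad = 2 * np.mean((x_k - d_k) * raw)  # treat d_k as a fixed target this step
        w -= 0.5 * grad                        # updated weight returned to the virtual net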
  • the terminal may include a sensor 2302 and a central server 2304 . Also, the terminal may include a plurality of sensors.
  • the central server 2304 may include a reverse virtual neural network unit 2308 and a neural network training unit 2310. Sensor 2302 and central server 2304 can communicate with each other. As an example, the central server 2304 may communicate with a plurality of sensors.
  • the sensor may include a camera sensor.
  • the virtual neural network unit 2306 may be located inside or outside the sensor 2302.
  • the aforementioned central server 2304 may be replaced by a base station. That is, the reverse virtual neural net unit 2308 and the neural net learning unit 2310 may be located outside the terminal and may be included in the base station.
  • the central server 2304 may perform ISP through the reverse virtual neural net unit 2308.
  • the sensor 2302 can send its output data to the virtual neural net unit 2306.
  • the virtual neural net unit 2306 may receive the data output of the sensor 2302 as an input.
  • the virtual neural net unit 2306 may multiply data received as an input by a weight value and output the result.
  • the virtual neural net unit may output x_k by multiplying the data received as an input by the weight w_BS, which is the neural net learning result.
  • the central server 2304 may receive the result of the virtual neural net as an input.
  • the neural net learning unit 2310 may receive output data of the virtual neural net unit 2306 as an input.
  • the reverse virtual neural net unit 2308 may receive output data of the virtual neural net unit 2306.
  • the reverse virtual neural net unit may remove the specific purpose of the image processed by the virtual neural net unit according to the specific purpose. For example, the reverse virtual neural net unit may generate data with a specific purpose removed based on artificial intelligence. Also, the reverse virtual neural net unit may process the received data to generate output data of the sensor.
  • the reverse virtual neural network unit may perform image signal processing (ISP). Accordingly, the neural net learning unit 2310 may compare the output data of the reverse virtual neural net unit with the learning result. For example, the neural net learning unit may obtain the difference between the output value d_k of the reverse virtual neural net unit and the virtual neural net output x_k. The neural net learning unit may generate weight values based on this difference and may transmit them to the virtual neural net unit. The neural net learning unit and the virtual neural net unit may repeat the above-described procedure until the weight values converge. As the procedure is repeated, the weight values can become more and more accurate.
  • the terminal may include a first sensor 2402 and a second sensor 2404 .
  • the first sensor may include a radar.
  • the second sensor may include an ISP.
  • the central server may include a neural network training unit (2408).
  • another terminal may include the neural net learning unit 2408.
  • the base station may include a neural net learning unit 2408. That is, the neural net learning unit 2408 may be located in another terminal, a central server, or a base station. The location of the neural net learning unit is not limited to the above-described embodiment.
  • the first sensor 2402 and the central server may communicate with each other.
  • the second sensor 2404 and the central server can communicate with each other.
  • the virtual neural network unit 2406 may be located inside or outside the first sensor 2402 .
  • a terminal according to the present disclosure may include a first sensor 2402 that does not include an ISP and a second sensor 2404 that includes an ISP. The relevant procedures are described in detail below.
  • the first sensor 2402 may transmit data output to the virtual neural net unit 2406 .
  • the virtual neural net unit 2406 may receive the data output of the first sensor 2402 as an input.
  • the virtual neural net unit 2406 may multiply data received as an input by a weight value and output the result.
  • the virtual neural net unit may output x_k by multiplying the data received as an input by the weight w_BS, which is the neural net learning result.
  • the central server may receive the result of the virtual neural net as an input.
  • the neural net learning unit 2408 may receive output data of the virtual neural net unit 2406 as an input. Since the second sensor 2404 includes an ISP device, it can directly perform ISP.
  • the neural net learner 2408 may receive data that has undergone ISP processing from the second sensor.
  • the neural net learner may receive the ISP-processed value d_k from the second sensor. Accordingly, the neural net learner may compare the data received from the second sensor with the learning result. For example, the neural net learner may obtain the difference between the output value d_k of the second sensor and the output value x_k of the virtual neural net unit. The neural net learning unit may generate weight values based on this difference and may transmit them to the virtual neural net unit. The neural net learning unit and the virtual neural net unit may repeat the above-described procedure until the weight values converge. As the procedure is repeated, the weight values can become more and more accurate.
  • the terminal may perform image sensing through a sensor. Also, the terminal may perform image sensing through a plurality of sensors.
  • the sensor may include a camera sensor and may not include an image signal processor (ISP). That is, the terminal can perform image sensing without performing ISP.
  • the terminal may learn a first neural network based on image sensing.
  • the terminal may process the sensed image according to a specific purpose by learning the first neural network.
  • the terminal may include a virtual neural network unit.
  • the virtual neural net unit may learn the first neural network and process the sensed image according to a specific purpose.
  • the sensor of the terminal may transmit output data to the virtual neural net unit of the terminal.
  • the virtual neural network unit of the terminal may multiply the input data by a weight value and output the result. That is, the terminal may output x_k by multiplying the input data by the weight w_BS, which is the neural net learning result.
  • the terminal may transmit the first neural network learning result.
  • the terminal may transmit the first neural network learning result to the central server inside the terminal.
  • the central server inside the terminal may include a device capable of performing ISP.
  • the terminal may transmit the first neural network learning result to the base station.
  • the base station may include a device capable of performing ISP.
  • the terminal may transmit the first neural network learning result to another terminal.
  • another terminal may include a device capable of performing ISP.
  • Image signal processing may not be performed on data transmitted by a terminal to a central server, a base station, or another terminal.
  • the terminal can reduce data loss by transmitting data on which ISP has not been performed.
  • a plurality of sensors included in the terminal may transmit data for which ISP is not performed to the central server or the base station. Accordingly, data loss can be minimized.
  • some of the plurality of sensors included in the terminal may perform ISP.
  • the terminal may transmit ISP-performed data to a central server, a base station, or other terminals using sensors capable of performing the ISP.
  • the terminal may include a central server therein.
  • a central server may perform an ISP.
  • the central server may perform ISP using an ISP device.
  • the central server may perform ISP by learning a reverse virtual neural network.
  • the central server of the terminal may perform ISP through a reverse virtual neural network unit.
  • the central server of the terminal may receive data transmitted by the virtual neural net unit of the terminal.
  • the reverse virtual neural net unit of the central server may remove the specific purpose of the image processed by the virtual neural net unit according to the specific purpose. For example, the reverse virtual neural net unit may generate data with a specific purpose removed based on artificial intelligence. Also, the reverse virtual neural net unit may process the received data to generate output data of the sensor.
  • the reverse virtual neural network unit of the central server may perform image signal processing (ISP). If the reverse virtual neural network portion of the central server is capable of performing an ISP, the central server may not include an ISP device. In addition, the central server may additionally include an ISP device even when the reverse virtual neural network unit can perform ISP. If the central server additionally includes an ISP device, the ISP device of the central server may perform ISP on output data of the reverse virtual neural net unit. The ISP device may transmit output data of the reverse virtual neural net unit that performed the ISP to the neural net learning unit of the central server.
  • the neural net learning unit of the central server may learn the second neural network. For example, the neural net learning unit of the central server may compare the output data of the reverse virtual neural net unit with the second neural network learning result. Specifically, the neural net learning unit of the central server may obtain the difference between the output value d_k of the reverse virtual neural net unit and the output value x_k of the virtual neural net unit. The neural net learning unit may generate weight values based on this difference and may transmit them to the virtual neural net unit.
  • the neural net learning unit of the terminal and the virtual neural net unit of the terminal may repeat the above-described procedure until the weight values converge. As the procedure is repeated, the weight values can become more and more accurate.
  • the central server of the terminal may not include a reverse virtual neural net unit, but may include an ISP device and a neural net learning unit.
  • the central server may receive the result of the virtual neural net as an input.
  • the neural net learning unit may receive output data of the virtual neural net unit as an input.
  • the ISP device can also receive data from the terminal's sensor.
  • the ISP device may perform ISP on the input data.
  • the central server may learn the second neural network.
  • the neural net learning unit of the central server may compare the output data of the ISP and the second neural network learning result.
  • the neural net learning unit may obtain the difference between the output value d_k of the ISP and the output value x_k of the virtual neural net unit.
  • the neural net learning unit may generate weight values based on these differences.
  • the neural net learning unit may transmit the generated weight value to the virtual neural net unit.
  • the neural net learning unit of the terminal and the virtual neural net unit of the terminal may repeat the above-described procedure until the weight values converge. As the procedure is repeated, the weight values can become more and more accurate.
  • FIG. 26 is a diagram illustrating an example of a server, base station, or terminal procedure applicable to the present disclosure.
  • a terminal may transmit a first neural network learning result to a central server, a base station, or other terminals. Procedures related to FIG. 26 will be described below.
  • the base station may receive a first neural network training result.
  • the base station may receive the result of the virtual neural net unit of the terminal as an input. That is, the neural net learning unit of the base station may receive output data of the virtual neural net unit of the terminal as an input.
  • the reverse virtual neural net unit of the base station may receive output data of the virtual neural net unit.
  • the reverse virtual neural net unit may remove the specific purpose of the image processed by the virtual neural net unit according to the specific purpose.
  • the reverse virtual neural net unit may generate data with a specific purpose removed based on artificial intelligence.
  • the reverse virtual neural net unit may process the received data to generate output data of the sensor.
  • the reverse virtual neural network unit may perform image signal processing (ISP).
  • the base station may learn a second neural network based on the learning result of the first neural network. For example, the neural net learning unit of the base station may compare the output data of the reverse virtual neural net unit with the first neural network learning result. Specifically, the neural net learning unit of the base station may obtain the difference between the output value d_k of the reverse virtual neural net unit and the output value x_k of the virtual neural net unit. The neural net learning unit of the base station may generate a weight value based on this difference.
  • the base station may additionally include an ISP device.
  • ISP may be performed on the output data of the reverse virtual neural network unit by the ISP device.
  • the base station may transmit the second neural network learning result.
  • the neural net learning unit of the base station may transmit the generated weight value to the virtual neural net unit of the terminal.
  • the neural net learning unit of the base station and the virtual neural net unit of the terminal may repeat the above-described procedure until the weight values converge. As the procedure is repeated, the weight values can become more and more accurate.
  • the base station may include an ISP device and a neural network learning unit.
  • the base station may receive the result of the virtual neural net as an input.
  • the neural net learning unit of the base station may receive output data of the virtual neural net unit of the terminal as an input.
  • the ISP device of the base station can also receive data from the sensor of the terminal.
  • the ISP device may perform ISP on the input data.
  • the base station may learn a second neural network based on a learning result of the first neural network.
  • the neural net learning unit of the base station may compare the output data of the ISP with the first neural network learning result. For example, the neural net learning unit of the base station may obtain the difference between the output value d_k of the ISP and the output value x_k of the virtual neural net unit of the terminal. The neural net learning unit of the base station may generate a weight value based on this difference.
  • the base station may transmit the second neural network learning result. Specifically, the neural net learning unit of the base station may transmit the generated weight value to the virtual neural net unit of the terminal.
  • the neural net learning unit of the base station and the virtual neural net unit of the terminal may repeat the above-described procedure until the weight values converge. As the procedure is repeated, the weight values can become more and more accurate.
  • Embodiments of the present disclosure may be applied to various wireless access systems.
  • Examples of the various wireless access systems include the 3rd Generation Partnership Project (3GPP) and 3GPP2 systems.
  • Embodiments of the present disclosure may be applied not only to the various wireless access systems, but also to all technical fields to which the various wireless access systems are applied. Furthermore, the proposed method can be applied to mmWave and THz communication systems using ultra-high frequency bands.
  • embodiments of the present disclosure may be applied to various applications such as autonomous vehicles and drones.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)

Abstract

According to one example of the present disclosure, a method of operating a terminal in a wireless communication system may comprise the steps of: performing image sensing through a first sensor; receiving information related to a neural network from a base station; performing training of a first neural network on the basis of the information related to the neural network; and transmitting, to the base station, data regarding the training result of the first neural network. The first sensor does not perform image signal processing (ISP), and the information related to the neural network may include information indicating whether or not the terminal has trained a neural network. The training of the first neural network may be training for processing, on the basis of artificial intelligence, data regarding the image sensing result.
PCT/KR2021/010969 2021-08-18 2021-08-18 Procédé et appareil permettant de transmettre un signal dans un système de communication sans fil WO2023022251A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020237043173A KR20240047337A (ko) 2021-08-18 2021-08-18 무선 통신 시스템에서 신호 전송 방법 및 장치
PCT/KR2021/010969 WO2023022251A1 (fr) 2021-08-18 2021-08-18 Procédé et appareil permettant de transmettre un signal dans un système de communication sans fil

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2021/010969 WO2023022251A1 (fr) 2021-08-18 2021-08-18 Procédé et appareil permettant de transmettre un signal dans un système de communication sans fil

Publications (1)

Publication Number Publication Date
WO2023022251A1 true WO2023022251A1 (fr) 2023-02-23

Family

ID=85239865

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/010969 WO2023022251A1 (fr) 2021-08-18 2021-08-18 Procédé et appareil permettant de transmettre un signal dans un système de communication sans fil

Country Status (2)

Country Link
KR (1) KR20240047337A (fr)
WO (1) WO2023022251A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018125777A (ja) * 2017-02-02 2018-08-09 パナソニックIpマネジメント株式会社 撮像装置、学習サーバ、撮像システム
KR20190107107A (ko) * 2017-05-15 2019-09-18 구글 엘엘씨 구성가능하고 프로그래밍가능한 이미지 프로세서 유닛
US20190340534A1 (en) * 2016-09-26 2019-11-07 Google Llc Communication Efficient Federated Learning
KR20210066754A (ko) * 2019-11-28 2021-06-07 경희대학교 산학협력단 연합 학습을 활용한 사용자 특성 분석을 위한 딥 러닝 모델 생성 방법
CN113159329A (zh) * 2021-04-27 2021-07-23 Oppo广东移动通信有限公司 模型训练方法、装置、设备及存储介质


Also Published As

Publication number Publication date
KR20240047337A (ko) 2024-04-12

Similar Documents

Publication Publication Date Title
WO2022250221A1 (fr) Procédé et dispositif d'émission d'un signal dans un système de communication sans fil
WO2022039295A1 (fr) Procédé de prétraitement d'une liaison descendante dans un système de communication sans fil et appareil associé
WO2022050468A1 (fr) Procédé pour réaliser un apprentissage fédéré dans un système de communication sans fil et appareil associé
WO2022025321A1 (fr) Procédé et dispositif de randomisation de signal d'un appareil de communication
WO2022050565A1 (fr) Appareil et procédé de transfert intercellulaire dans un système de communication sans fil
WO2022092859A1 (fr) Procédé et dispositif pour ajuster un point de division dans un système de communication sans fil
WO2023017881A1 (fr) Appareil et procédé pour effectuer un transfert sur la base d'un résultat de mesure à l'aide d'un intervalle de mesure dans un système de communication sans fil
WO2023022251A1 (fr) Procédé et appareil permettant de transmettre un signal dans un système de communication sans fil
WO2022050444A1 (fr) Procédé de communication pour un apprentissage fédéré et dispositif pour son exécution
WO2024117296A1 (fr) Procédé et appareil d'émission et de réception de signaux dans un système de communication sans fil faisant intervenir un émetteur-récepteur ayant des paramètres réglables
WO2023113282A1 (fr) Appareil et procédé pour effectuer un apprentissage en ligne d'un modèle d'émetteur-récepteur dans un système de communication sans fil
WO2022119424A1 (fr) Dispositif et procédé de transmission de signal dans un système de communication sans fil
WO2024048816A1 (fr) Dispositif et procédé pour émettre et recevoir un signal dans un système de communication sans fil
WO2022260200A1 (fr) Dispositif et procédé pour réaliser un transfert intercellulaire en tenant compte de l'efficacité de la batterie dans un système de communication sans fil
WO2022231084A1 (fr) Procédé et dispositif d'émission d'un signal dans un système de communication sans fil
WO2022270649A1 (fr) Dispositif et procédé pour réaliser une communication vocale dans un système de communication sans fil
WO2023042941A1 (fr) Procédé et appareil de transmission de signal dans un système de communication sans fil
WO2024019184A1 (fr) Appareil et procédé pour effectuer un entraînement pour un modèle d'émetteur-récepteur dans un système de communication sans fil
WO2023120781A1 (fr) Appareil et procédé de transmission de signal dans un système de communication sans fil
WO2022092905A1 (fr) Appareil et procédé de transmission de signal dans un système de communication sans fil
WO2024038926A1 (fr) Dispositif et procédé pour émettre et recevoir un signal dans un système de communication sans fil
WO2024111693A1 (fr) Appareil et procédé pour effectuer une communication radar à l'aide de réseau de réception virtuel dans système de communication sans fil
WO2023017882A1 (fr) Appareil et procédé pour sélectionner une technologie d'accès radio en tenant compte de l'efficacité de batterie dans un système de communication sans fil
WO2023120768A1 (fr) Dispositif et procédé de transmission de signaux dans un système de communication sans fil
WO2022124729A1 (fr) Dispositif et procédé d'émission d'un signal dans un système de communication sans fil

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21954298

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE