WO2022014731A1 - Scheduling method and device for AirComp-based federated learning - Google Patents

Scheduling method and device for AirComp-based federated learning

Info

Publication number
WO2022014731A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
communication devices
server
time
parameter
Prior art date
Application number
PCT/KR2020/009240
Other languages
English (en)
Korean (ko)
Inventor
이태현
이상림
이호재
김영준
전기준
Original Assignee
LG Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc.
Priority to PCT/KR2020/009240
Publication of WO2022014731A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 7/00: Arrangements for synchronising receiver with transmitter

Definitions

  • This disclosure relates to wireless communication.
  • 6G systems are expected to have 50 times higher simultaneous wireless connectivity than 5G wireless communication systems.
  • URLLC, a key feature of 5G, will become an even more important technology in 6G communication, providing an end-to-end delay of less than 1 ms.
  • 6G systems will have much better volumetric spectral efficiencies as opposed to frequently used areal spectral efficiencies.
  • the 6G system can provide very long battery life and advanced battery technology for energy harvesting, so mobile devices will not need to be charged separately in the 6G system.
  • The most important newly introduced technology for 6G systems is AI.
  • AI was not involved in the 4G system.
  • 5G systems will support partial or very limited AI.
  • the 6G system will be AI-enabled for full automation.
  • Advances in machine learning will create more intelligent networks for real-time communication in 6G.
  • Incorporating AI into communications can simplify and enhance real-time data transmission.
  • AI can use numerous analytics to determine how complex target tasks are performed. In other words, AI can increase efficiency and reduce processing delays.
  • When performing AirComp-based federated learning, synchronized aggregation across the participating devices is required. In this case, if a training failure or the like occurs in a device scheduled by the server, an aggregation error may occur.
  • a scheduling method between a server and a device is presented to solve asynchronous problems caused by differences in AI capabilities of devices participating in AirComp-based federated learning.
  • Each device transmits a training-completion flag to the server after updating its local parameters; from these flags, the server can determine the distribution of the number of devices that have completed training and the training delay time. Based on this, the server schedules devices by adjusting a trade-off value that balances learning accuracy against latency performance.
  • the devices participating in the learning can transmit local parameters to the server at the same time, and the server can reduce the parameter averaging error by identifying the number of devices that have completed learning.
  • the straggler effect can be mitigated by excluding a device whose learning is delayed according to a trade-off value from scheduling.
  • the edge server can mitigate the aggregation error for AI parameters by identifying the number of devices that have failed training.
  • it is possible to mitigate the straggler effect, which depends on the worst case among devices participating in learning, in latency of federated learning.
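As a rough illustration, the scheduling idea described above can be sketched as follows; the function and device names are hypothetical, and the slot granularity and trade-off value are illustrative assumptions, not values fixed by the disclosure.

```python
# Hypothetical sketch of the server-side scheduling described above: devices
# report training-completion flags over time, and the server picks a parameter
# transmission time that trades accuracy (more devices aggregated) against
# latency (waiting for stragglers). All names and the slot size are illustrative.

def choose_transmission_time(completion_times, trade_off, slot=1.0):
    """Return (deadline, scheduled_device_ids) or (None, []) if too few done.

    completion_times: dict of device id -> arrival time of its completion
                      flag at the server (None if not yet received).
    trade_off:        fraction in (0, 1]; the server waits until at least
                      this fraction of devices has finished training.
    """
    finished = sorted(t for t in completion_times.values() if t is not None)
    needed = max(1, int(trade_off * len(completion_times)))
    if len(finished) < needed:
        return None, []                    # keep waiting for more flags
    # Earliest time by which the required fraction has finished, rounded up
    # to the next scheduling-slot boundary.
    deadline = slot * -(-finished[needed - 1] // slot)
    scheduled = [dev for dev, t in completion_times.items()
                 if t is not None and t <= deadline]
    return deadline, scheduled

flags = {"dev0": 0.8, "dev1": 1.4, "dev2": 2.1, "dev3": None}
deadline, scheduled = choose_transmission_time(flags, trade_off=0.5)
# dev2 and dev3 are excluded as stragglers, mitigating the straggler effect.
```

Devices scheduled this way can transmit their local parameters simultaneously at the chosen deadline, and the server averages only over the devices it knows have completed training.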
  • FIG. 1 illustrates a wireless communication system to which the present disclosure can be applied.
  • FIG. 2 is a block diagram illustrating a radio protocol architecture for a user plane.
  • FIG. 3 is a block diagram illustrating a radio protocol structure for a control plane.
  • FIG. 4 shows another example of a wireless communication system to which the technical features of the present disclosure can be applied.
  • FIG. 5 illustrates functional partitioning between NG-RAN and 5GC.
  • FIG. 6 illustrates a frame structure that can be applied in NR.
  • FIG. 9 is a diagram illustrating a difference between a conventional control region and CORESET in NR.
  • FIG. 10 shows an example of a frame structure for a new radio access technology.
  • FIG. 12 is an abstract diagram of a hybrid beamforming structure from the viewpoint of the TXRU and the physical antenna.
  • FIG. 13 shows a synchronization signal and a PBCH (SS/PBCH) block.
  • FIG. 15 shows an example of a process by which a terminal acquires system information.
  • FIG. 17 is for explaining a power ramping counter.
  • FIG. 19 is a flowchart illustrating an example of performing an idle-mode DRX operation.
  • FIG. 21 shows an example of a communication structure that can be provided in a 6G system.
  • FIG. 22 shows an example of a perceptron structure.
  • FIG. 23 shows an example of a multiple perceptron structure.
  • FIG. 25 shows an example of a convolutional neural network.
  • FIG. 26 shows an example of a filter operation in a convolutional neural network.
  • FIG. 27 shows an example of a neural network structure in which a cyclic loop exists.
  • FIG. 30 is a diagram showing an example of THz communication application.
  • FIG. 31 is a diagram illustrating an example of an electronic device-based THz wireless communication transceiver.
  • FIG. 32 is a diagram illustrating an example of a method of generating an optical device-based THz signal.
  • FIG. 33 is a diagram illustrating an example of an optical element-based THz wireless communication transceiver.
  • FIG. 34 illustrates the structure of a photonic source-based transmitter.
  • FIG. 35 illustrates the structure of an optical modulator.
  • FIG. 36 illustrates an example of an operation process for federated learning based on orthogonal division access.
  • FIG. 39 shows an example of an operation process of a server for determining a parameter transmission time and scheduling with a device.
  • FIG. 40 shows an example of a joint learning process of a server and a device after scheduling.
  • FIG. 41 shows an example of the number of arrivals of the training completion flag over time and a scheduling process of the server.
  • FIG. 43 is a flowchart of an example of a federated learning control method proposed in the present specification.
  • FIG. 44 illustrates the communication system 1 applied to the present disclosure.
  • FIG. 45 illustrates a wireless device applicable to the present disclosure.
  • FIG. 46 illustrates a signal processing circuit for a transmission signal.
  • FIG. 49 illustrates a vehicle or an autonomous driving vehicle applied to the present disclosure.
  • FIG. 50 illustrates a vehicle applied to the present disclosure.
  • FIG. 53 illustrates an AI device applied to the present disclosure.
  • In this document, “A or B” may mean “only A”, “only B”, or “both A and B”.
  • In other words, “A or B” may be interpreted as “A and/or B”.
  • In this document, “A, B or C” means “only A”, “only B”, “only C”, or “any combination of A, B and C”.
  • A slash (/) or a comma used herein may mean “and/or”.
  • For example, “A/B” may mean “A and/or B”. Accordingly, “A/B” may mean “only A”, “only B”, or “both A and B”.
  • For example, “A, B, C” may mean “A, B, or C”.
  • “At least one of A and B” may mean “only A”, “only B”, or “both A and B”.
  • The expression “at least one of A or B” or “at least one of A and/or B” may be interpreted the same as “at least one of A and B”.
  • “At least one of A, B and C” means “only A”, “only B”, “only C”, or “any combination of A, B and C”. Also, “at least one of A, B or C” or “at least one of A, B and/or C” may mean “at least one of A, B and C”.
  • Parentheses used herein may mean “for example”. Specifically, when written as “control information (PDCCH)”, “PDCCH” is proposed as an example of “control information”. In other words, “control information” in this specification is not limited to “PDCCH”, and “PDCCH” is merely proposed as one example of “control information”. Likewise, even when written as “control information (i.e., PDCCH)”, “PDCCH” may be proposed as an example of “control information”.
  • CDMA code division multiple access
  • FDMA frequency division multiple access
  • TDMA time division multiple access
  • OFDMA orthogonal frequency division multiple access
  • SC-FDMA single carrier frequency division multiple access
  • CDMA may be implemented with a radio technology such as universal terrestrial radio access (UTRA) or CDMA2000.
  • TDMA may be implemented with a radio technology such as global system for mobile communications (GSM)/general packet radio service (GPRS)/enhanced data rates for GSM evolution (EDGE).
  • GSM global system for mobile communications
  • GPRS general packet radio service
  • EDGE enhanced data rates for GSM evolution
  • OFDMA may be implemented with a radio technology such as Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, and evolved UTRA (E-UTRA).
  • IEEE 802.16m is an evolution of IEEE 802.16e, and provides backward compatibility with a system based on IEEE 802.16e.
  • UTRA is part of the universal mobile telecommunications system (UMTS).
  • 3rd generation partnership project (3GPP) long term evolution (LTE) is a part of evolved UMTS (E-UMTS) that uses evolved-UMTS terrestrial radio access (E-UTRA); it adopts OFDMA in the downlink and SC-FDMA in the uplink.
  • LTE-A (advanced) is an evolution of 3GPP LTE.
  • 5G NR is a successor technology of LTE-A, and is a new clean-slate type mobile communication system with characteristics such as high performance, low latency, and high availability. 5G NR can utilize all available spectrum resources, from low frequency bands below 1 GHz, to intermediate frequency bands from 1 GHz to 10 GHz, and high frequency (millimeter wave) bands above 24 GHz.
  • LTE-A or 5G NR is mainly described, but the spirit of the present disclosure is not limited thereto.
  • E-UTRAN Evolved-UMTS Terrestrial Radio Access Network
  • LTE Long Term Evolution
  • the E-UTRAN includes a base station (20: Base Station, BS) that provides a control plane (control plane) and a user plane (user plane) to a terminal (10: User Equipment, UE).
  • the terminal 10 may be fixed or mobile, and may be called by other terms such as a mobile station (MS), a user terminal (UT), a subscriber station (SS), a mobile terminal (MT), and a wireless device.
  • the base station 20 refers to a fixed station that communicates with the terminal 10, and may be called by other terms such as an evolved-NodeB (eNB), a base transceiver system (BTS), and an access point.
  • eNB evolved-NodeB
  • BTS base transceiver system
  • the base stations 20 may be connected to each other through an X2 interface.
  • the base station 20 is connected to an Evolved Packet Core (EPC) 30 through an S1 interface, more specifically, a Mobility Management Entity (MME) through S1-MME and a Serving Gateway (S-GW) through S1-U.
  • EPC Evolved Packet Core
  • the EPC 30 is composed of an MME, an S-GW, and a Packet Data Network-Gateway (P-GW).
  • the MME has access information of the terminal or information about the capability of the terminal, and this information is mainly used for mobility management of the terminal.
  • the S-GW is a gateway having E-UTRAN as an endpoint
  • the P-GW is a gateway having a PDN as an endpoint.
  • The layers of the radio interface protocol between the terminal and the network can be divided into L1 (first layer), L2 (second layer), and L3 (third layer). Among these, the physical layer, belonging to the first layer, provides an information transfer service using a physical channel.
  • the RRC (Radio Resource Control) layer located in the third layer performs a role of controlling radio resources between the terminal and the network. To this end, the RRC layer exchanges RRC messages between the terminal and the base station.
  • FIG. 2 is a block diagram illustrating a radio protocol architecture for a user plane.
  • FIG. 3 is a block diagram illustrating a radio protocol structure for a control plane.
  • the user plane is a protocol stack for transmitting user data
  • the control plane is a protocol stack for transmitting a control signal.
  • the physical layer provides an information transfer service (information transfer service) to the upper layer using a physical channel (physical channel).
  • the physical layer is connected to a medium access control (MAC) layer, which is an upper layer, through a transport channel. Data moves between the MAC layer and the physical layer through the transport channel. Transmission channels are classified according to how and with what characteristics data are transmitted through the air interface.
  • MAC medium access control
  • the physical channel may be modulated by OFDM (Orthogonal Frequency Division Multiplexing), and time and frequency are used as radio resources.
  • OFDM Orthogonal Frequency Division Multiplexing
  • the functions of the MAC layer include mapping between logical channels and transport channels and multiplexing/demultiplexing into transport blocks provided as physical channels on transport channels of MAC service data units (SDUs) belonging to logical channels.
  • SDUs MAC service data units
  • the MAC layer provides a service to the RLC (Radio Link Control) layer through a logical channel.
  • RLC Radio Link Control
  • the functions of the RLC layer include concatenation, segmentation, and reassembly of RLC SDUs.
  • In order to guarantee the various QoS (Quality of Service) requirements of radio bearers (RBs), the RLC layer provides a transparent mode (Transparent Mode, TM), an unacknowledged mode (Unacknowledged Mode, UM), and an acknowledged mode (Acknowledged Mode, AM).
  • TM Transparent Mode
  • UM unacknowledged Mode
  • AM acknowledged Mode
  • AM RLC provides error correction through automatic repeat request (ARQ).
  • the RRC (Radio Resource Control) layer is defined only in the control plane.
  • the RRC layer is responsible for controlling logical channels, transport channels, and physical channels in relation to configuration, re-configuration, and release of radio bearers.
  • RB means a logical path provided by the first layer (PHY layer) and the second layer (MAC layer, RLC layer, PDCP layer) for data transfer between the terminal and the network.
  • Functions of the Packet Data Convergence Protocol (PDCP) layer in the user plane include delivery of user data, header compression, and ciphering.
  • Functions of the Packet Data Convergence Protocol (PDCP) layer in the control plane include transmission of control plane data and encryption/integrity protection.
  • Setting the RB means defining the characteristics of a radio protocol layer and channel to provide a specific service, and setting each specific parameter and operation method.
  • the RB may be further divided into a signaling RB (SRB) and a data RB (DRB).
  • SRB is used as a path for transmitting an RRC message in the control plane
  • DRB is used as a path for transmitting user data in the user plane.
  • When an RRC connection is established between the RRC layer of the terminal and the RRC layer of the E-UTRAN, the terminal is in the RRC connected state; otherwise, it is in the RRC idle state.
  • Downlink transport channels for transmitting data from the network to a terminal include a BCH (Broadcast Channel) for transmitting system information and a downlink SCH (Shared Channel) for transmitting user traffic or control messages.
  • BCH Broadcast Channel
  • SCH Shared Channel
  • Traffic or control messages of a downlink multicast or broadcast service may be transmitted through a downlink SCH or through a separate downlink multicast channel (MCH).
  • MCH downlink multicast channel
  • RACH random access channel
  • SCH uplink shared channel
  • The logical channels that are located above the transport channels and are mapped to them include a Broadcast Control Channel (BCCH), a Paging Control Channel (PCCH), a Common Control Channel (CCCH), a Multicast Control Channel (MCCH), and a Multicast Traffic Channel (MTCH), etc.
  • BCCH Broadcast Control Channel
  • PCCH Paging Control Channel
  • CCCH Common Control Channel
  • MCCH Multicast Control Channel
  • MTCH Multicast Traffic Channel
  • a physical channel consists of several OFDM symbols in the time domain and several sub-carriers in the frequency domain.
  • One sub-frame is composed of a plurality of OFDM symbols in the time domain.
  • a resource block is a resource allocation unit and includes a plurality of OFDM symbols and a plurality of sub-carriers.
  • each subframe may use specific subcarriers of specific OFDM symbols (eg, the first OFDM symbol) of the corresponding subframe for a Physical Downlink Control Channel (PDCCH), that is, an L1/L2 control channel.
  • PDCCH Physical Downlink Control Channel
  • a Transmission Time Interval (TTI) is a unit time of transmission, and may be, for example, a subframe or a slot.
  • new radio access technology new RAT, NR
  • As more and more communication devices require greater communication capacity, next-generation communication needs improved mobile broadband compared to conventional radio access technology (RAT).
  • massive MTC massive machine type communications
  • URLLC Ultra-Reliable and Low Latency Communication
  • FIG. 4 shows another example of a wireless communication system to which the technical features of the present disclosure can be applied.
  • FIG. 4 shows a system architecture based on a 5G new radio access technology (NR) system.
  • An entity used in a 5G NR system may absorb some or all functions of an entity (eg, eNB, MME, S-GW) introduced in FIG. 1 .
  • An entity used in the NR system may be identified with the name "NG" to distinguish it from LTE.
  • the wireless communication system includes one or more UEs 11 , a next-generation RAN (NG-RAN), and a 5th generation core network (5GC).
  • the NG-RAN consists of at least one NG-RAN node.
  • the NG-RAN node is an entity corresponding to the BS 20 shown in FIG. 1 .
  • the NG-RAN node is configured with at least one gNB 21 and/or at least one ng-eNB 22 .
  • the gNB 21 provides termination of the NR user plane and control plane protocol towards the UE 11 .
  • the Ng-eNB 22 provides termination of the E-UTRA user plane and control plane protocol towards the UE 11 .
  • 5GC includes an access and mobility management function (AMF), a user plane function (UPF), and a session management function (SMF).
  • AMF hosts functions such as NAS security, idle state mobility handling, and more.
  • the AMF is an entity that includes the functions of the conventional MME.
  • UPF hosts functions such as mobility anchoring and PDU (protocol data unit) processing.
  • the UPF is an entity that includes the functions of the conventional S-GW.
  • SMF hosts functions such as UE IP address assignment and PDU session control.
  • The gNB and ng-eNB are interconnected via the Xn interface. They are also connected to the 5GC via the NG interface: more specifically, to the AMF via the NG-C interface and to the UPF via the NG-U interface.
  • 5 illustrates functional partitioning between NG-RAN and 5GC.
  • The gNB may provide functions such as inter-cell radio resource management (Inter-Cell RRM), radio bearer management (RB Control), connection mobility control, radio admission control, measurement configuration and provision, and dynamic resource allocation.
  • AMF may provide functions such as NAS security, idle state mobility processing, and the like.
  • the UPF may provide functions such as mobility anchoring and PDU processing.
  • a Session Management Function (SMF) may provide functions such as terminal IP address assignment and PDU session control.
  • FIG. 6 illustrates a frame structure that can be applied in NR.
  • A frame may have a duration of 10 milliseconds (ms) and may include 10 subframes of 1 ms each.
  • uplink and downlink transmission may be composed of frames.
  • a radio frame has a length of 10 ms and may be defined as two 5 ms half-frames (HF).
  • A half-frame may be defined as five 1 ms subframes (Subframe, SF).
  • a subframe is divided into one or more slots, and the number of slots in a subframe depends on subcarrier spacing (SCS).
  • SCS subcarrier spacing
  • Each slot includes 12 or 14 OFDM(A) symbols according to a cyclic prefix (CP). When a normal CP is used, each slot includes 14 symbols. When the extended CP is used, each slot includes 12 symbols.
  • the symbol may include an OFDM symbol (or a CP-OFDM symbol) and an SC-FDMA symbol (or a DFT-s-OFDM symbol).
  • One or a plurality of slots may be included in the subframe according to subcarrier spacing.
  • Table 1 illustrates the subcarrier spacing configuration μ.
  • Table 2 illustrates the number of slots in a frame (N_slot^frame,μ), the number of slots in a subframe (N_slot^subframe,μ), and the number of symbols in a slot (N_symb^slot) according to the subcarrier spacing configuration μ.
  • Table 3 illustrates the number of symbols per slot, the number of slots per frame, and the number of slots per subframe (SF) according to the SCS when the extended CP is used.
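The relations behind Tables 1-3 can be sketched as follows; this sketch assumes the standard NR rule that the SCS is 15·2^μ kHz and that a 1 ms subframe holds 2^μ slots, with 14 symbols per slot for normal CP and 12 for extended CP.

```python
# Sketch of the numerology relations summarized in Tables 1-3: SCS is
# 15 * 2**mu kHz, a 1 ms subframe holds 2**mu slots, and a slot has 14
# symbols with normal CP or 12 with extended CP.

def nr_numerology(mu, extended_cp=False):
    """Return (SCS in kHz, symbols/slot, slots/subframe, slots/frame)."""
    scs_khz = 15 * 2 ** mu
    symbols_per_slot = 12 if extended_cp else 14
    slots_per_subframe = 2 ** mu
    slots_per_frame = 10 * slots_per_subframe   # a frame is 10 subframes
    return scs_khz, symbols_per_slot, slots_per_subframe, slots_per_frame

# mu = 1 -> (30 kHz, 14 symbols/slot, 2 slots/subframe, 20 slots/frame)
```

Note that in NR the extended CP is defined only for the 60 kHz SCS (μ = 2); the function does not enforce this.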
  • In an NR system, the OFDM(A) numerology (e.g., SCS, CP length, etc.) may be configured differently among multiple cells merged for one terminal. Accordingly, the (absolute time) interval of a time resource (e.g., SF, slot, or TTI) composed of the same number of symbols (referred to as a TU (Time Unit) for convenience) may be set differently between the merged cells.
  • a slot includes a plurality of symbols in the time domain.
  • one slot may include 14 symbols, but in the case of an extended CP, one slot may include 12 symbols.
  • one slot may include 7 symbols, but in the case of an extended CP, one slot may include 6 symbols.
  • A carrier includes a plurality of subcarriers in the frequency domain.
  • a resource block (RB) may be defined as a plurality of (eg, 12) consecutive subcarriers in the frequency domain.
  • a bandwidth part (BWP) may be defined as a plurality of consecutive (P)RBs in the frequency domain, and may correspond to one numerology (eg, SCS, CP length, etc.).
  • A carrier may include a maximum of N (e.g., 5) BWPs. Data communication may be performed through the activated BWP.
  • Each element may be referred to as a resource element (RE) in the resource grid, and one complex symbol may be mapped.
  • RE resource element
  • a physical downlink control channel may include one or more control channel elements (CCEs) as shown in Table 4 below.
  • CCEs control channel elements
  • the PDCCH may be transmitted through a resource composed of 1, 2, 4, 8 or 16 CCEs.
  • the CCE consists of six resource element groups (REGs), and one REG consists of one resource block in the frequency domain and one orthogonal frequency division multiplexing (OFDM) symbol in the time domain.
  • REGs resource element groups
  • OFDM orthogonal frequency division multiplexing
  • a new unit called a control resource set may be introduced.
  • the UE may receive the PDCCH in CORESET.
  • A CORESET may be composed of N_RB^CORESET resource blocks in the frequency domain and N_symb^CORESET ∈ {1, 2, 3} symbols in the time domain.
  • N_RB^CORESET and N_symb^CORESET may be provided by the base station through a higher-layer signal.
  • a plurality of CCEs (or REGs) may be included in CORESET.
  • the UE may attempt PDCCH detection in CORESET in units of 1, 2, 4, 8 or 16 CCEs.
  • One or a plurality of CCEs capable of attempting PDCCH detection may be referred to as PDCCH candidates.
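A small bookkeeping sketch of the CCE/REG structure just described: one REG is one resource block by one OFDM symbol (12 resource elements), one CCE is six REGs, so a PDCCH candidate at aggregation level L spans 6·L REGs.

```python
# Bookkeeping for the PDCCH structure described above: one REG is one
# resource block by one OFDM symbol (12 resource elements), one CCE is
# six REGs, and a candidate uses 1, 2, 4, 8 or 16 CCEs.

RES_PER_REG = 12   # 12 subcarriers x 1 OFDM symbol
REGS_PER_CCE = 6

def pdcch_candidate_size(aggregation_level):
    """Return (REG count, RE count) for a PDCCH candidate."""
    if aggregation_level not in (1, 2, 4, 8, 16):
        raise ValueError("invalid aggregation level")
    regs = aggregation_level * REGS_PER_CCE
    return regs, regs * RES_PER_REG

# aggregation level 4 -> 24 REGs and 288 resource elements
```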
  • A plurality of CORESETs may be configured for the terminal.
  • FIG. 9 is a diagram illustrating a difference between a conventional control region and CORESET in NR.
  • the control region 300 in the conventional wireless communication system (eg, LTE/LTE-A) is configured over the entire system band used by the base station. All terminals except for some terminals supporting only a narrow band (eg, eMTC/NB-IoT terminals) receive radio signals of the entire system band of the base station in order to properly receive/decode the control information transmitted by the base station.
  • the CORESETs 301, 302, and 303 may be said to be radio resources for control information to be received by the terminal, and only a part of the system band may be used instead of the entire system band.
  • the base station may allocate a CORESET to each terminal, and may transmit control information through the allocated CORESET.
  • the first CORESET 301 may be allocated to terminal 1
  • the second CORESET 302 may be allocated to the second terminal
  • the third CORESET 303 may be allocated to terminal 3 .
  • the terminal may receive control information of the base station even if it does not necessarily receive the entire system band.
  • Among CORESETs, there may be a terminal-specific CORESET for transmitting terminal-specific control information and a common CORESET for transmitting control information common to all terminals.
  • the resource may include at least one of a resource in a time domain, a resource in a frequency domain, a resource in a code domain, and a resource in a space domain.
  • FIG. 10 shows an example of a frame structure for a new radio access technology.
  • a structure in which a control channel and a data channel are time division multiplexed (TDM) in one TTI is considered as one of the frame structures.
  • a hatched region indicates a downlink control region, and a black portion indicates an uplink control region.
  • An area without an indication may be used for downlink data (DL data) transmission or uplink data (UL data) transmission.
  • A characteristic of this structure is that downlink (DL) transmission and uplink (UL) transmission are performed sequentially within one subframe, so that DL data can be transmitted, and a UL acknowledgement/not-acknowledgement (ACK/NACK) for it received, within the same subframe.
  • NACK Acknowledgment/Not-acknowledgement
  • When the base station and the terminal switch from transmission mode to reception mode, or from reception mode to transmission mode, a time gap is required for the switching process.
  • some OFDM symbols at the time of switching from DL to UL in the self-contained subframe structure may be set as a guard period (GP).
  • one slot may have a self-contained structure in which all of a DL control channel, DL or UL data, and a UL control channel may be included.
  • the first N symbols in a slot may be used to transmit a DL control channel (hereinafter, DL control region), and the last M symbols in a slot may be used to transmit a UL control channel (hereinafter, UL control region).
  • N and M are each an integer greater than or equal to 0.
  • A resource region between the DL control region and the UL control region (hereinafter referred to as a data region) may be used for DL data transmission or UL data transmission.
  • the following configuration may be considered. Each section is listed in chronological order.
  • the DL region may be (i) a DL data region, (ii) a DL control region + a DL data region.
  • the UL region may be (i) a UL data region, (ii) a UL data region + a UL control region.
  • the PDCCH may be transmitted in the DL control region, and the PDSCH may be transmitted in the DL data region.
  • the PUCCH may be transmitted in the UL control region, and the PUSCH may be transmitted in the UL data region.
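The self-contained slot described above can be sketched as a symbol-level layout; the particular counts of DL-control, UL-control, and guard symbols below are illustrative assumptions, not values fixed by the text.

```python
# Illustrative symbol-level layout of a self-contained slot: N leading
# DL-control symbols, a data region, a guard period (GP) at the DL-to-UL
# switch, and M trailing UL-control symbols. The counts are assumptions
# for the sketch.

def self_contained_slot(n_dl_ctrl=2, n_ul_ctrl=1, n_gp=1, n_symbols=14):
    """Return a list of per-symbol labels for one slot."""
    n_data = n_symbols - n_dl_ctrl - n_gp - n_ul_ctrl
    if n_data < 0:
        raise ValueError("control and guard symbols exceed the slot")
    return (["DL-ctrl"] * n_dl_ctrl + ["data"] * n_data
            + ["GP"] * n_gp + ["UL-ctrl"] * n_ul_ctrl)

slot = self_contained_slot()
# 14 symbols: 2 DL-control, 10 data, 1 guard, 1 UL-control
```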
  • DCI downlink control information
  • The DCI may include, for example, DL data scheduling information, UL data scheduling information, etc.
  • UCI Uplink Control Information
  • ACK/NACK Positive Acknowledgment/Negative Acknowledgment
  • CSI Channel State Information
  • SR Scheduling Request
  • the GP provides a time gap between the base station and the terminal in the process of switching from the transmission mode to the reception mode or in the process of switching from the reception mode to the transmission mode. Some symbols at the time of switching from DL to UL in a subframe may be set to GP.
  • mmW millimeter wave
  • In mmW, the wavelength is shortened, so that a plurality of antenna elements can be installed in the same area. For example, in the 30 GHz band the wavelength is 1 cm, and a total of 100 antenna elements can be installed in a two-dimensional array at 0.5-wavelength (lambda) intervals on a 5 cm by 5 cm panel. Therefore, mmW uses a plurality of antenna elements to increase the beamforming (BF) gain, thereby extending coverage or increasing throughput.
  • BF beamforming
  • TXRU transceiver unit
  • independent beamforming for each frequency resource is possible.
  • TXRU transceiver unit
  • Installing a TXRU for each of the 100 antenna elements is problematic in terms of cost-effectiveness. Therefore, a method of mapping a plurality of antenna elements to one TXRU and adjusting the beam direction with an analog phase shifter is being considered.
  • This analog beamforming method has the disadvantage that frequency-selective beamforming cannot be performed, because only one beam direction can be formed across the entire band.
  • As an intermediate form between digital beamforming (digital BF) and analog beamforming (analog BF), hybrid beamforming (hybrid BF) with B TXRUs, fewer than the Q antenna elements, may be considered.
  • In this case, the number of beam directions that can be transmitted simultaneously is limited to B or fewer.
  • Analog beamforming (or RF beamforming) performs precoding (or combining) at the RF stage. It has the advantage of achieving performance close to that of digital beamforming while reducing the number of RF chains and the number of D/A (or A/D) converters.
  • The hybrid beamforming structure may be represented by N TXRUs and M physical antennas. Digital beamforming for the L data layers to be transmitted can then be expressed as an N-by-L matrix; the N resulting digital signals are converted into analog signals through the TXRUs, and analog beamforming, expressed as an M-by-N matrix, is then applied.
  • FIG. 12 is an abstract diagram of a hybrid beamforming structure from the viewpoint of the TXRU and the physical antenna.
  • In FIG. 12, the number of digital beams is L, and the number of analog beams is N. Furthermore, in the NR system, the base station is designed to be able to change analog beamforming in units of symbols, in order to support more efficient beamforming for a terminal located in a specific area. Furthermore, when N specific TXRUs and M RF antennas are defined as one antenna panel in FIG. 12, the NR system is considering the introduction of a plurality of antenna panels to which mutually independent hybrid beamforming can be applied.
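A minimal shape check of the hybrid beamforming chain just described, using toy all-ones matrices; the dimensions (L layers, N TXRUs, M antennas) follow the text, while the matrix values are placeholders.

```python
# Shape check for the hybrid beamforming chain above: an N x L digital
# precoder maps L layers to N TXRU signals, and an M x N analog beamformer
# maps those to M physical antennas, so the end-to-end precoder is the
# M x L product. Matrix values here are toy placeholders.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

L, N, M = 2, 4, 16                        # layers, TXRUs, physical antennas
digital = [[1.0] * L for _ in range(N)]   # N x L digital precoder
analog = [[1.0] * N for _ in range(M)]    # M x N analog phase-shifter matrix
effective = matmul(analog, digital)       # M x L end-to-end precoder
```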
• Since the analog beams advantageous for signal reception may differ for each terminal, at least for the synchronization signal, system information, paging, and the like in a specific subframe, a beam sweeping operation in which the plurality of analog beams to be applied by the base station is changed for each symbol is being considered, so that all terminals can have a reception opportunity.
  • FIG. 13 shows a synchronization signal and a PBCH (SS/PBCH) block.
• The SS/PBCH block consists of a PSS and an SSS, each occupying 1 symbol and 127 subcarriers, and a PBCH spanning 3 OFDM symbols and 240 subcarriers; on one of those symbols, an unused portion is left in the middle for the SSS, with the PBCH occupying the remainder.
  • the periodicity of the SS/PBCH block may be configured by the network, and the time position at which the SS/PBCH block may be transmitted may be determined by subcarrier spacing.
  • Polar coding may be used for the PBCH.
• The UE may assume a band-specific subcarrier spacing for the SS/PBCH block unless the network configures the UE to assume a different subcarrier spacing.
  • PBCH symbols carry their frequency-multiplexed DMRS.
  • QPSK modulation may be used for PBCH.
  • 1008 unique physical layer cell IDs may be given.
  • first symbol indices for candidate SS/PBCH blocks are determined according to subcarrier spacing of SS/PBCH blocks, which will be described later.
• The first symbols of the candidate SS/PBCH blocks have indices {4, 8, 16, 20} + 28*n, where n = 0, 1, 2, 3, 5, 6, 7, 8, 10, 11, 12, 13, 15, 16, 17, 18.
• The first symbols of the candidate SS/PBCH blocks have indices {8, 12, 16, 20, 32, 36, 40, 44} + 56*n, where n = 0, 1, 2, 3, 5, 6, 7, 8.
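The two candidate-position patterns above can be enumerated with a short sketch. The offsets and n-values are the ones quoted in the text; the helper function name is our own.

```python
# Enumerate candidate SS/PBCH block first-symbol indices for a given pattern.
def ssb_first_symbols(offsets, step, n_values):
    return sorted(off + step * n for n in n_values for off in offsets)

# Pattern {4, 8, 16, 20} + 28*n
n_a = [0, 1, 2, 3, 5, 6, 7, 8, 10, 11, 12, 13, 15, 16, 17, 18]
idx_a = ssb_first_symbols([4, 8, 16, 20], 28, n_a)

# Pattern {8, 12, 16, 20, 32, 36, 40, 44} + 56*n
n_b = [0, 1, 2, 3, 5, 6, 7, 8]
idx_b = ssb_first_symbols([8, 12, 16, 20, 32, 36, 40, 44], 56, n_b)

print(len(idx_a), len(idx_b))  # 64 64
```

Both patterns yield 64 candidate positions (L = 64), matching the 0..L-1 indexing described next.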
  • Candidate SS/PBCH blocks in a half frame are indexed in ascending order from 0 to L-1 on the time axis.
• By the higher layer parameter 'SSB-transmitted-SIB1', the indices of SS/PBCH blocks for which the UE cannot receive other signals or channels in REs overlapping the REs corresponding to those SS/PBCH blocks may be set.
• By the higher layer parameter 'SSB-transmitted', the indices of SS/PBCH blocks per serving cell for which the UE cannot receive other signals or channels in REs overlapping the REs corresponding to those SS/PBCH blocks may be set.
  • the setting by 'SSB-transmitted' may take precedence over the setting by 'SSB-transmitted-SIB1'.
• The periodicity of the half frames for reception of SS/PBCH blocks per serving cell may be set by the higher layer parameter 'SSB-periodicityServingCell'. If the UE is not configured with the periodicity of the half frames for reception of the SS/PBCH blocks, the UE must assume a periodicity of a half frame. The UE may assume that the periodicity is the same for all SS/PBCH blocks in the serving cell.
  • the UE may obtain 6-bit SFN information through the MIB (Master Information Block) received in the PBCH.
  • SFN 4 bits can be obtained in the PBCH transport block.
  • the UE may obtain a 1-bit half frame indicator as part of the PBCH payload.
• The UE may obtain the SS/PBCH block index from the DMRS sequence and the PBCH payload. That is, the LSB 3 bits of the SS block index can be obtained from the DMRS sequence within a period of 5 ms. In addition, the MSB 3 bits of the timing information are carried explicitly in the PBCH payload (for above 6 GHz).
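The bit splits described above (6-bit SFN portion in the MIB plus 4 bits in the PBCH transport block; 3 MSB of the SSB index in the PBCH payload plus 3 LSB from the DMRS sequence) can be sketched as simple bit assembly. The exact bit ordering is our assumption for illustration; the function names are hypothetical.

```python
# Sketch of assembling timing information from PBCH, per the bit splits in the text.
def assemble_sfn(mib_6bits: int, pbch_4bits: int) -> int:
    """10-bit SFN: 6 MSB from the MIB, 4 LSB from the PBCH transport block (assumed ordering)."""
    return (mib_6bits << 4) | pbch_4bits

def assemble_ssb_index(payload_msb3: int, dmrs_lsb3: int) -> int:
    """6-bit SS/PBCH block index (0..63): 3 MSB from PBCH payload, 3 LSB from DMRS sequence."""
    return (payload_msb3 << 3) | dmrs_lsb3

print(assemble_sfn(0b000001, 0b0010))    # 18
print(assemble_ssb_index(0b101, 0b011))  # 43
```

With 3 + 3 bits, the SSB index covers 0..63, consistent with L = 64 candidate blocks per half frame.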
• The UE may assume that a half frame with SS/PBCH blocks occurs with a periodicity of 2 frames. Upon detecting an SS/PBCH block, the UE determines that a control resource set for the Type0-PDCCH common search space exists if k_SSB ≤ 23 for FR1 and k_SSB ≤ 11 for FR2. The UE determines that there is no control resource set for the Type0-PDCCH common search space if k_SSB > 23 for FR1 or k_SSB > 11 for FR2.
• For a serving cell without transmission of SS/PBCH blocks, the UE acquires time and frequency synchronization of that serving cell based on reception of SS/PBCH blocks on the PSCell or the primary cell of the cell group for the serving cell.
  • SI System information
  • MIB MasterInformationBlock
  • SIBs SystemInformationBlocks
  • SIB1 SystemInformationBlockType1
  • SIB1 is transmitted with periodicity and repetition on the DL-SCH.
• SIB1 includes information on the availability and scheduling (eg, periodicity, SI-window size) of other SIBs. In addition, it indicates whether these (ie, other SIBs) are provided on a periodic broadcast basis or upon request. If other SIBs are provided upon request, SIB1 includes the information the UE needs to perform the SI request.
  • SIBs other than SIB1 are carried in a SystemInformation (SI) message transmitted on the DL-SCH.
  • SI SystemInformation
  • Each SI message is transmitted within a periodically occurring time domain window (called an SI-window);
• The RAN provides the necessary SI by dedicated signaling. Nevertheless, the UE must acquire the MIB of the PSCell in order to obtain the SFN timing of the SCG (which may be different from the MCG).
  • the RAN releases and adds the related secondary cell.
  • the SI can be changed only by reconfiguration with Sync.
• FIG. 15 shows an example of a process of acquiring system information of a terminal.
  • the terminal may receive the MIB from the network and then receive the SIB1. Thereafter, the terminal may transmit a system information request to the network, and may receive a 'SystemInformation message' from the network in response thereto.
  • the terminal may apply a system information acquisition procedure for acquiring AS (access stratum) and NAS (non-access stratum) information.
  • UEs in RRC_IDLE and RRC_INACTIVE states must ensure (at least) valid versions of MIB, SIB1, and SystemInformationBlockTypeX (according to the relevant RAT support for UE-controlled mobility).
  • the UE in the RRC_CONNECTED state must guarantee valid versions of MIB, SIB1, and SystemInformationBlockTypeX (according to mobility support for the related RAT).
  • the UE must store the related SI obtained from the currently camped/serving cell.
  • the version of the SI obtained and stored by the terminal is valid only for a certain period of time.
  • the UE may use this stored version of the SI after, for example, cell reselection, return from coverage, or system information change instruction.
  • the random access procedure of the UE can be summarized as shown in Table 5 below.
  • the UE may transmit a physical random access channel (PRACH) preamble in uplink as message (Msg) 1 of the random access procedure.
  • PRACH physical random access channel
  • Random access preamble sequences having two different lengths are supported.
  • a long sequence of length 839 applies to subcarrier spacings of 1.25 kHz and 5 kHz
  • a short sequence of length 139 applies to subcarrier spacings of 15, 30, 60, and 120 kHz.
  • a long sequence supports an unrestricted set and a restricted set of types A and B, whereas a short sequence supports only an unrestricted set.
  • a plurality of RACH preamble formats are defined with one or more RACH OFDM symbols, a different cyclic prefix (CP), and a guard time.
  • the PRACH preamble configuration to be used is provided to the UE as system information.
• The UE may retransmit the power-ramped PRACH preamble within a prescribed number of times.
  • the UE calculates the PRACH transmission power for retransmission of the preamble based on the most recent estimated path loss and power ramping counter. If the UE performs beam switching, the power ramping counter does not change.
• FIG. 17 is a diagram for explaining the power ramping counter.
  • the UE may perform power ramping for retransmission of the random access preamble based on the power ramping counter.
  • the power ramping counter does not change when the UE performs beam switching during PRACH retransmission.
• When the UE retransmits the random access preamble for the same beam, the UE increments the power ramping counter by 1, as when the counter increases from 1 to 2 and from 3 to 4. However, when the beam is changed, the power ramping counter does not change during PRACH retransmission.
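The counter rule above can be sketched as a toy loop: the counter increments only when the preamble is retransmitted on the same beam, and a beam switch leaves it unchanged. The attempt sequence, power formula, and numeric values below are illustrative assumptions.

```python
# Sketch of PRACH power ramping: counter-based transmit power, per the rule in the text.
def prach_tx_power(target_power_dbm, ramp_step_db, counter, pathloss_db):
    return target_power_dbm + (counter - 1) * ramp_step_db + pathloss_db

counter = 1
attempts = [("beam0", False), ("beam0", False), ("beam1", False), ("beam1", True)]
prev_beam = None
for beam, success in attempts:
    if prev_beam is not None and beam == prev_beam:
        counter += 1  # retransmission on the same beam: ramp up
    # first attempt or beam switch: counter unchanged
    prev_beam = beam
    if success:
        break

print(counter)                               # 3
print(prach_tx_power(-100, 2, counter, 80))  # -16
```

Here the beam switch between the second and third attempts does not increment the counter, matching the description above.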
  • the system information informs the UE of the relationship between SS blocks and RACH resources.
  • the threshold of the SS block for the RACH resource relationship is based on RSRP and network configuration. Transmission or retransmission of the RACH preamble is based on an SS block that satisfies a threshold. Accordingly, in the example of FIG. 18 , since the SS block m exceeds the threshold of the received power, the RACH preamble is transmitted or retransmitted based on the SS block m.
• The DL-SCH may provide timing alignment information, an RA-preamble ID, an initial uplink grant, and a temporary C-RNTI.
  • the UE may perform uplink transmission on the UL-SCH as Msg3 of the random access procedure.
  • Msg3 may include an RRC connection request and UE identifier.
  • the network may transmit Msg4, which may be treated as a contention resolution message, in downlink.
  • up to 400 megahertz (MHz) per component carrier (CC) may be supported. If the terminal operating in such a wideband CC always operates with RF for the entire CC turned on, the terminal battery consumption may increase.
• In addition, a different numerology (eg, subcarrier spacing (SCS)) may be supported for each frequency band within the CC.
  • SCS subcarrier spacing
  • the base station may instruct the terminal to operate only in a partial bandwidth rather than the entire bandwidth of the broadband CC, and the partial bandwidth is defined as a bandwidth part (BWP) for convenience.
• A BWP may be composed of contiguous resource blocks (RBs) on the frequency axis, and may correspond to one numerology (eg, subcarrier spacing, cyclic prefix (CP) length, slot/mini-slot duration, etc.).
  • numerology eg, subcarrier interval, cyclic prefix (CP) length, slot/mini-slot
  • the base station may set a plurality of BWPs even within one CC configured for the terminal. For example, in a PDCCH monitoring slot, a BWP occupying a relatively small frequency region may be configured, and a PDSCH indicated by the PDCCH may be scheduled on a larger BWP.
  • some terminals may be set to other BWPs for load balancing.
  • a partial spectrum from the entire bandwidth may be excluded and both BWPs may be configured in the same slot.
• The base station may configure at least one DL/UL BWP for a terminal associated with a wideband CC. The base station may activate at least one DL/UL BWP among the DL/UL BWP(s) configured at a specific time (by L1 signaling, MAC CE, RRC signaling, etc.), may indicate switching to another configured DL/UL BWP (by L1 signaling, MAC CE, RRC signaling, etc.), or may switch to a predetermined DL/UL BWP when a timer-based timer value expires. In this case, the activated DL/UL BWP is defined as an active DL/UL BWP.
• However, in a situation such as when the terminal is in the process of initial access or before the RRC connection is set up, the terminal may not be able to receive the configuration for the DL/UL BWP.
• In such a situation, the DL/UL BWP assumed by the terminal is defined as an initial active DL/UL BWP.
• Discontinuous Reception (DRX) refers to an operation mode in which a UE (User Equipment) reduces battery consumption by receiving a downlink channel discontinuously. That is, a terminal configured with DRX can reduce power consumption by receiving the DL signal discontinuously.
  • UE User Equipment
  • the DRX operation is performed within a DRX cycle indicating a time interval in which On Duration is periodically repeated.
  • the DRX cycle includes an on-period and a sleep period (Sleep Duration) (or an opportunity of DRX).
  • the on-period indicates a time interval during which the UE monitors the PDCCH to receive the PDCCH.
  • DRX may be performed in RRC (Radio Resource Control)_IDLE state (or mode), RRC_INACTIVE state (or mode), or RRC_CONNECTED state (or mode).
  • RRC_IDLE state and RRC_INACTIVE state DRX may be used to receive paging signal discontinuously.
• RRC_IDLE state: a state in which a radio connection (RRC connection) is not established between the base station and the terminal.
• RRC_INACTIVE state: a state in which a radio connection (RRC connection) is established between the base station and the terminal, but the radio connection is inactive.
• RRC_CONNECTED state: a state in which a radio connection (RRC connection) is established between the base station and the terminal.
  • DRX can be basically divided into idle (idle) mode DRX, connected (Connected) DRX (C-DRX), and extended (extended) DRX.
  • DRX applied in the IDLE state may be named idle mode DRX, and DRX applied in the CONNECTED state may be named connected mode DRX (C-DRX).
  • Extended/Enhanced DRX is a mechanism that can extend the cycles of idle mode DRX and C-DRX, and Extended/Enhanced DRX (eDRX) can be mainly used for (massive) IoT applications.
  • whether to allow eDRX may be configured based on system information (eg, SIB1).
  • SIB1 may include an eDRX-allowed parameter.
  • the eDRX-allowed parameter is a parameter indicating whether idle mode extended DRX is allowed.
• A paging occasion (PO) is a subframe in which the P-RNTI (Paging-Radio Network Temporary Identifier), which addresses a paging message, can be transmitted through a PDCCH (Physical Downlink Control Channel), an MPDCCH (MTC PDCCH), or, for NB-IoT, a narrowband PDCCH (NPDCCH).
  • P-RNTI Paging-Radio Network Temporary Identifier
  • MTC PDCCH MPDCCH
  • NPDCCH narrowband PDCCH
  • PO may indicate a start subframe of MPDCCH repetition.
  • the PO may indicate the start subframe of the NPDCCH repetition. Therefore, the first valid NB-IoT downlink subframe after PO is the start subframe of NPDCCH repetition.
• One paging frame (PF) is one radio frame that may include one or a plurality of paging occasions. When DRX is used, the UE only needs to monitor one PO per DRX cycle.
  • One paging narrow band is one narrow band in which the terminal performs paging message reception. PF, PO, and PNB may be determined based on DRX parameters provided in system information.
• FIG. 19 is a flowchart illustrating an example of performing an idle mode DRX operation.
  • the terminal may receive idle mode DRX configuration information from the base station through higher layer signaling (eg, system information) (S21).
  • higher layer signaling eg, system information
  • the UE may determine a Paging Frame (PF) and a Paging Occasion (PO) to monitor the PDCCH in the paging DRX cycle based on the idle mode DRX configuration information (S22).
  • the DRX cycle may include an on-period and a sleep period (or an opportunity of DRX).
  • the UE may monitor the PDCCH in the PO of the determined PF (S23).
  • the UE monitors only one subframe (PO) per paging DRX cycle.
• When the terminal receives a PDCCH scrambled by the P-RNTI during the on-period (ie, when paging is detected), the terminal transitions to the connected mode and may transmit/receive data to/from the base station.
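The text says the PF and PO follow from the DRX parameters provided in system information. The concrete formula below (SFN mod T = (T div N) * (UE_ID mod N)) is the classic idle-mode paging-frame rule and is used here as an assumption for illustration, not quoted from this document.

```python
# Sketch of paging frame (PF) determination from DRX parameters (assumed formula).
def paging_frames(ue_id: int, T: int, N: int):
    """All SFNs (0..1023) that are paging frames for this UE.
    T: DRX cycle in radio frames; N: number of paging frames per DRX cycle."""
    target = (T // N) * (ue_id % N)
    return [sfn for sfn in range(1024) if sfn % T == target]

pfs = paging_frames(ue_id=1234, T=128, N=32)  # DRX cycle of 128 frames, 32 PFs per cycle
print(len(pfs), pfs[0])  # 8 72
```

With a 128-frame DRX cycle, the SFN space of 1024 frames contains 8 paging frames for this UE, and the UE monitors only one PO per cycle, as stated above.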
  • C-DRX means DRX applied in the RRC connection state.
  • the DRX cycle of C-DRX may consist of a short DRX cycle and/or a long DRX cycle.
• A short DRX cycle is optional.
  • the UE may perform PDCCH monitoring for the on-period. If the PDCCH is successfully detected during PDCCH monitoring, the UE may operate (or run) an inactive timer and maintain an awake state. Conversely, if the PDCCH is not successfully detected during PDCCH monitoring, the UE may enter the sleep state after the on-period ends.
  • a PDCCH reception opportunity (eg, a slot having a PDCCH search space) may be configured non-contiguously based on the C-DRX configuration.
  • a PDCCH reception opportunity (eg, a slot having a PDCCH search space) may be continuously configured in the present disclosure.
  • PDCCH monitoring may be limited to a time interval set as a measurement gap (gap) regardless of the C-DRX configuration.
  • the DRX cycle consists of 'On Duration' and 'Opportunity for DRX'.
  • the DRX cycle defines a time interval in which the 'on-period' is periodically repeated.
  • the 'on-interval' indicates a time period that the UE monitors to receive the PDCCH.
  • the UE performs PDCCH monitoring during the 'on-period'. If there is a successfully detected PDCCH during PDCCH monitoring, the UE operates an inactivity timer and maintains an awake state. On the other hand, if there is no PDCCH successfully detected during PDCCH monitoring, the UE enters a sleep state after the 'on-period' ends.
  • PDCCH monitoring/reception may be discontinuously performed in the time domain in performing the procedures and/or methods described/proposed above.
  • a PDCCH reception opportunity eg, a slot having a PDCCH search space
  • PDCCH monitoring/reception may be continuously performed in the time domain in performing the procedures and/or methods described/proposed above.
  • PDCCH reception opportunities eg, a slot having a PDCCH search space
  • PDCCH monitoring may be limited in a time interval configured as a measurement gap.
  • Table 6 shows the process of the UE related to DRX (RRC_CONNECTED state).
• DRX configuration information is received through higher layer (eg, RRC) signaling, and DRX ON/OFF is controlled by a DRX command of the MAC layer. If DRX is configured, PDCCH monitoring may be performed discontinuously in performing the procedures and/or methods described/proposed in the present disclosure.
• Table 6 (type of signals / UE procedure):
• Step 1 — RRC signaling (MAC-CellGroupConfig): receive DRX configuration information
• Step 2 — MAC CE ((Long) DRX command MAC CE): receive DRX command
• Step 3 — (no signal): monitor PDCCH during the on-duration of the DRX cycle
  • the MAC-CellGroupConfig may include configuration information required to set a MAC (Medium Access Control) parameter for a cell group.
  • MAC-CellGroupConfig may also include configuration information related to DRX.
  • MAC-CellGroupConfig may include information as follows to define DRX.
  • drx-InactivityTimer Defines the length of the time interval in which the UE remains awake after the PDCCH opportunity in which the PDCCH indicating the initial UL or DL data is detected
  • drx-HARQ-RTT-TimerDL Defines the length of the maximum time interval from when DL initial transmission is received until DL retransmission is received.
• drx-HARQ-RTT-TimerUL Defines the length of the maximum time interval from when the grant for UL initial transmission is received until the grant for UL retransmission is received.
  • the UE maintains the awake state and performs PDCCH monitoring at every PDCCH opportunity.
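The drx-InactivityTimer behavior described above can be modeled as a toy: the UE stays awake for a fixed window after each PDCCH occasion that indicates new UL/DL data. The units (slots), timer value, and function name are illustrative assumptions.

```python
# Toy model of drx-InactivityTimer: the UE remains awake for `inactivity_timer`
# slots after each slot in which a data-indicating PDCCH is detected.
def awake_slots(pdcch_slots, inactivity_timer, horizon):
    awake = set()
    for s in pdcch_slots:
        # the PDCCH slot itself plus inactivity_timer further slots
        awake.update(range(s, min(s + inactivity_timer + 1, horizon)))
    return sorted(awake)

slots = awake_slots(pdcch_slots=[2, 10], inactivity_timer=3, horizon=20)
print(slots)  # [2, 3, 4, 5, 10, 11, 12, 13]
```

Outside these windows the UE may enter the sleep state, which is where the power saving of DRX comes from.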
• The 6G system may be a next-generation communication system after the 5G system or the NR system.
• 6G systems aim at (i) very high data rates per device, (ii) a very large number of connected devices, (iii) global connectivity, (iv) very low latency, (v) lower energy consumption of battery-free IoT devices, (vi) ultra-reliable connectivity, and (vii) connected intelligence with machine learning capabilities.
• The vision of the 6G system can be summarized in four aspects: intelligent connectivity, deep connectivity, holographic connectivity, and ubiquitous connectivity, and the 6G system can satisfy requirements in these respects. That is, Table 7 shows an example of the requirements of the 6G system.
• The 6G system can have key factors such as enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), massive machine-type communication (mMTC), AI integrated communication, tactile internet, high throughput, high network capacity, high energy efficiency, low backhaul and access network congestion, and enhanced data security.
• FIG. 21 shows an example of a communication structure that can be provided in a 6G system.
  • 6G systems are expected to have 50 times higher simultaneous wireless connectivity than 5G wireless communication systems.
  • URLLC a key feature of 5G, will become an even more important technology by providing an end-to-end delay of less than 1ms in 6G communication.
  • 6G systems will have much better volumetric spectral efficiencies as opposed to frequently used areal spectral efficiencies.
  • the 6G system can provide very long battery life and advanced battery technology for energy harvesting, so mobile devices will not need to be charged separately in the 6G system.
  • New network characteristics in 6G may be as follows.
• 6G is expected to be integrated with satellites to serve the global mobile population.
  • the integration of terrestrial, satellite and public networks into one wireless communication system is very important for 6G.
  • AI may be applied in each step of a communication procedure (or each procedure of signal processing to be described later).
  • the 6G wireless network will deliver power to charge the batteries of devices such as smartphones and sensors. Therefore, wireless information and energy transfer (WIET) will be integrated.
  • WIET wireless information and energy transfer
• Small cell networks The idea of small cell networks was introduced to improve received signal quality, resulting in improved throughput, energy efficiency, and spectral efficiency in cellular systems. As a result, small cell networks are an essential characteristic of 5G and beyond-5G (5GB) communication systems. Accordingly, the 6G communication system also adopts the characteristics of the small cell network.
  • Ultra-dense heterogeneous networks will be another important characteristic of 6G communication systems.
  • a multi-tier network composed of heterogeneous networks improves overall QoS and reduces costs.
  • a backhaul connection is characterized as a high-capacity backhaul network to support high-capacity traffic.
  • High-speed fiber optics and free-space optics (FSO) systems may be possible solutions to this problem.
  • High-precision localization (or location-based service) through communication is one of the functions of the 6G wireless communication system. Therefore, the radar system will be integrated with the 6G network.
• Softwarization and virtualization are two important features that underlie the design process in 5GB networks to ensure flexibility, reconfigurability, and programmability. In addition, billions of devices can share a common physical infrastructure.
  • AI artificial intelligence
  • AI The most important and newly introduced technology for 6G systems is AI.
  • AI was not involved in the 4G system.
  • 5G systems will support partial or very limited AI.
  • the 6G system will be AI-enabled for full automation.
  • Advances in machine learning will create more intelligent networks for real-time communication in 6G.
  • Incorporating AI into communications can simplify and enhance real-time data transmission.
  • AI can use numerous analytics to determine how complex target tasks are performed. In other words, AI can increase efficiency and reduce processing delays.
  • AI can also play an important role in M2M, machine-to-human and human-to-machine communication.
• AI can enable rapid communication in BCI (Brain Computer Interface).
  • BCI Brain Computer Interface
  • AI-based communication systems can be supported by metamaterials, intelligent structures, intelligent networks, intelligent devices, intelligent cognitive radios, self-sustaining wireless networks, and machine learning.
• AI-based physical layer transmission means applying a signal processing and communication mechanism based on an AI driver, rather than a traditional communication framework, in fundamental signal processing and communication mechanisms. For example, it may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, deep learning-based MIMO mechanisms, and AI-based resource scheduling and allocation.
  • Machine learning may be used for channel estimation and channel tracking, and may be used for power allocation, interference cancellation, and the like in a physical layer of a downlink (DL). In addition, machine learning may be used for antenna selection, power control, symbol detection, and the like in a MIMO system.
  • DL downlink
  • DNN deep neural network
  • Deep learning-based AI algorithms require large amounts of training data to optimize training parameters.
• In general, a lot of training data is used offline. This is because static training on training data in a specific channel environment may cause a contradiction between the dynamic characteristics and the diversity of a wireless channel.
  • signals of the physical layer of wireless communication are complex signals.
  • further research on a neural network for detecting a complex domain signal is needed.
• Machine learning refers to a set of actions that trains a machine so that the machine can perform tasks that humans can or cannot do.
  • Machine learning requires data and a learning model.
  • data learning methods can be roughly divided into three types: supervised learning, unsupervised learning, and reinforcement learning.
• The goal of neural network learning is to minimize the output error. Neural network learning repeatedly inputs training data into the neural network, calculates the error between the output of the neural network and the target for the training data, backpropagates the error from the output layer of the neural network toward the input layer in the direction that reduces the error, and updates the weight of each node in the neural network.
• Supervised learning uses training data in which the correct answer is labeled, whereas in unsupervised learning the correct answer may not be labeled in the training data. For example, in the case of supervised learning for data classification, the training data may be data in which a category is labeled for each training example.
  • the labeled training data is input to the neural network, and an error can be calculated by comparing the output (category) of the neural network with the label of the training data.
  • the calculated error is back propagated in the reverse direction (ie, from the output layer to the input layer) in the neural network, and the connection weight of each node of each layer of the neural network may be updated according to the back propagation.
  • a change amount of the connection weight of each node to be updated may be determined according to a learning rate.
  • the computation of the neural network on the input data and the backpropagation of errors can constitute a learning cycle (epoch).
  • the learning rate may be applied differently depending on the number of repetitions of the learning cycle of the neural network. For example, in the early stage of learning of a neural network, a high learning rate can be used to allow the neural network to quickly obtain a certain level of performance to increase efficiency, and a low learning rate can be used to increase accuracy at the end of learning.
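The learning-rate idea above (a high rate early for speed, a low rate later for accuracy) can be sketched with gradient descent on a toy 1-D quadratic loss. The loss, schedule, and values are illustrative assumptions.

```python
# Gradient descent on L(w) = (w - 3)^2 with a two-stage learning-rate schedule.
def train(w0, epochs, lr_schedule):
    w = w0
    for epoch in range(epochs):
        grad = 2 * (w - 3.0)          # dL/dw
        w -= lr_schedule(epoch) * grad
    return w

# High rate for the first half of training, low rate afterwards.
schedule = lambda epoch: 0.3 if epoch < 50 else 0.01
w_final = train(w0=0.0, epochs=100, lr_schedule=schedule)
print(round(w_final, 4))
```

The early high rate moves `w` quickly toward the minimum at 3.0; the later low rate makes the final updates small, which is the accuracy/speed trade-off the text describes.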
• The learning method may vary depending on the characteristics of the data. For example, when the purpose at a receiver is to accurately predict the data transmitted from a transmitter in a communication system, it is preferable to perform learning using supervised learning rather than unsupervised learning or reinforcement learning.
• The learning model corresponds to the human brain. The most basic linear model can be considered, but a machine learning paradigm that uses a neural network structure of high complexity, such as an artificial neural network, as the learning model is called deep learning.
• The neural network core used as a learning method is largely divided into deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN) methods.
  • DNN deep neural networks
  • CNN convolutional deep neural networks
• RNN recurrent neural network
  • An artificial neural network is an example of connecting several perceptrons.
  • FIG. 22 shows an example of a perceptron structure.
• When an input vector x = (x1, x2, ..., xd) is given, each component is multiplied by a weight (W1, W2, ..., Wd); after summing all the results, an activation function σ(·) is applied. This whole process is called a perceptron.
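The perceptron just described (weighted sum of the input components followed by an activation function σ) can be sketched directly. The weights, input, and the choice of tanh as σ are illustrative assumptions.

```python
import numpy as np

# Minimal perceptron: weighted sum of inputs, then activation sigma.
def perceptron(x, w, b=0.0, sigma=np.tanh):
    return sigma(np.dot(w, x) + b)

x = np.array([1.0, -2.0, 0.5])   # input vector (x1, ..., xd)
w = np.array([0.4, 0.1, -0.6])   # weights (W1, ..., Wd)
y = perceptron(x, w)
print(y < 0)  # True
```

Stacking many such units, as the following passages describe, yields the multidimensional perceptrons and layered artificial neural networks.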
• A larger artificial neural network structure may extend the simplified perceptron structure shown in FIG. 22 and apply input vectors to different multidimensional perceptrons. For convenience of description, an input value or an output value is referred to as a node.
• The perceptron structure shown in FIG. 22 can be described as being composed of a total of three layers based on the input values and output values.
• An artificial neural network in which H (d+1)-dimensional perceptrons exist between the first and second layers and K (H+1)-dimensional perceptrons exist between the second and third layers can be expressed as shown in FIG. 23.
  • FIG. 23 shows an example of a multiple perceptron structure.
  • the layer where the input vector is located is called the input layer
  • the layer where the final output value is located is called the output layer
  • all layers located between the input layer and the output layer are called hidden layers.
• In FIG. 23, three layers are disclosed, but since the input layer is excluded when counting the actual number of artificial neural network layers, the network can be viewed as having a total of two layers.
  • the artificial neural network is constructed by connecting the perceptrons of the basic blocks in two dimensions.
  • the aforementioned input layer, hidden layer, and output layer can be jointly applied in various artificial neural network structures such as CNN and RNN, which will be described later, as well as multi-layer perceptron.
• CNN convolutional neural network
• RNN recurrent neural network
  • DNN deep neural network
• The deep neural network shown in FIG. 24 is a multilayer perceptron composed of eight hidden layers + output layers.
  • the multi-layered perceptron structure is referred to as a fully-connected neural network.
  • a connection relationship does not exist between nodes located in the same layer, and a connection relationship exists only between nodes located in adjacent layers.
  • DNN has a fully connected neural network structure and is composed of a combination of multiple hidden layers and activation functions, so it can be usefully applied to figure out the correlation between input and output.
  • the correlation characteristic may mean a joint probability of input/output.
  • various artificial neural network structures different from the above-described DNN can be formed depending on how a plurality of perceptrons are connected to each other.
• FIG. 25 shows an example of a convolutional neural network.
• In FIG. 25, unlike before, where the nodes inside one layer were arranged in a one-dimensional vertical direction, the nodes are arranged two-dimensionally, with w horizontal nodes and h vertical nodes.
• Since a weight is attached per connection in the connection process from one input node to the hidden layer, a total of h*w weights must be considered; and since there are h*w nodes in the input layer, a total of h²w² weights are needed between two adjacent layers.
• FIG. 26 shows an example of a filter operation in a convolutional neural network.
• The convolutional neural network of FIG. 25 has a problem in that the number of weights increases exponentially with the number of connections, so instead of considering the connections of all nodes between adjacent layers, it is assumed that a filter with a small size exists, and, as in FIG. 26, the weighted sum and activation function operations are performed where the filter overlaps the input.
  • One filter has a weight corresponding to the number corresponding to its size, and weight learning can be performed so that a specific feature on an image can be extracted and output as a factor.
• In FIG. 26, a 3-by-3 filter is applied to the upper-left 3-by-3 region of the input layer, and the output value obtained by performing the weighted sum and activation function operations for the corresponding node is stored in z22.
  • the filter performs weighted sum and activation function calculations while moving horizontally and vertically at regular intervals while scanning the input layer, and places the output value at the position of the current filter.
  • this calculation method is similar to the convolution operation on images in the field of computer vision, so a deep neural network with such a structure is called a convolutional neural network (CNN), and the hidden layer generated as a result of the convolution operation is called a convolutional layer.
  • a neural network having a plurality of convolutional layers is called a deep convolutional neural network (DCNN).
  • that is, at the node where the filter is currently located, the weighted sum is calculated by including only the nodes in the region covered by the filter, so the number of weights can be reduced. Due to this, one filter can be used to focus on features of a local area. Accordingly, CNN can be effectively applied to image data processing in which physical distance in the two-dimensional domain is an important criterion. Meanwhile, in CNN, a plurality of filters may be applied immediately before the convolution layer, and a plurality of output results may be generated through the convolution operation of each filter.
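As an illustration of the filter operation described above (a sketch, not part of the specification), the following snippet slides a 3-by-3 filter over an input array and stores each weighted sum as a node of the hidden (convolutional) layer; it also shows why far fewer weights are needed than in a fully connected layer:

```python
import numpy as np

def conv2d_valid(x, w):
    """Naive 2D convolution ('valid' padding): the filter scans the
    input horizontally and vertically, and at each position the
    weighted sum over the covered region becomes an output node."""
    fh, fw = w.shape
    h, w_in = x.shape
    out = np.zeros((h - fh + 1, w_in - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+fh, j:j+fw] * w)
    return out

x = np.arange(16.0).reshape(4, 4)   # 4x4 input layer
w = np.ones((3, 3)) / 9.0           # 3x3 averaging filter (9 weights)
z = conv2d_valid(x, w)              # 2x2 hidden (convolutional) layer
# A fully connected 4x4 -> 4x4 mapping would need 16*16 = 256 weights;
# the filter reuses the same 9 weights at every position.
```

An activation function would be applied to each output node in a real CNN; it is omitted here to keep the weighted-sum step visible.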
  • a structure in which a cyclic loop is applied to an artificial neural network is called a recurrent neural network structure.
  • FIG. 27 shows an example of a neural network structure in which a cyclic loop exists.
  • referring to FIG. 27, a recurrent neural network is a structure in which the elements (x1(t), x2(t), ..., xd(t)) of a time point t in a data sequence are input to a fully connected layer together with the hidden vector (z1(t-1), z2(t-1), ..., zH(t-1)) of the immediately preceding time point t-1, and the weighted sum and activation function are applied.
  • the reason the hidden vector is transferred to the next time point in this way is that the information in the input vectors of previous time points is considered to be accumulated in the hidden vector of the current time point.
  • the recurrent neural network operates in a predetermined time sequence with respect to an input data sequence.
  • the hidden vector (z1(1), z2(1), ..., zH(1)) obtained at time 1 is input together with the input vector (x1(2), x2(2), ..., xd(2)) of time 2, and the hidden vector of time 2 is determined through the weighted sum and activation function; this process is repeated through the time sequence.
  • when a plurality of hidden layers are arranged in a recurrent neural network, this is called a deep recurrent neural network (DRNN).
  • the recurrent neural network is designed to be usefully applied to sequence data (eg, natural language processing).
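The recurrent update described above can be sketched as follows (an illustrative toy, with assumed weight shapes and a tanh activation, not taken from the specification): the input vector of each time point is combined with the hidden vector of the previous time point, and the resulting hidden vector is carried to the next time point.

```python
import numpy as np

def rnn_forward(xs, W_x, W_h, b):
    """Minimal recurrent step: at each time t, the input x(t) and the
    previous hidden vector z(t-1) are combined by a weighted sum and
    an activation function, and z(t) is carried to the next time."""
    H = W_h.shape[0]
    z = np.zeros(H)                   # initial hidden vector
    outputs = []
    for x in xs:                      # predetermined time sequence
        z = np.tanh(W_x @ x + W_h @ z + b)
        outputs.append(z)
    return outputs

# d=2 input features, H=3 hidden nodes, sequence length 4
rng = np.random.default_rng(0)
xs = [rng.standard_normal(2) for _ in range(4)]
W_x = rng.standard_normal((3, 2)) * 0.1
W_h = rng.standard_normal((3, 3)) * 0.1
b = np.zeros(3)
zs = rnn_forward(xs, W_x, W_h, b)     # one hidden vector per time point
```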
  • the neural network cores used in the learning methods described above include, in addition to DNN, CNN, and RNN, various deep learning techniques such as Restricted Boltzmann Machine (RBM), deep belief networks (DBN), and Deep Q-Network, and these can be applied to fields such as computer vision, voice recognition, natural language processing, and voice/signal processing.
  • AI-based physical layer transmission means applying an AI-driven signal processing and communication mechanism, rather than a traditional communication framework, to fundamental signal processing and communication.
  • for example, it may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, a deep learning-based MIMO mechanism, AI-based resource scheduling and allocation, and the like.
  • the data rate can be increased by increasing the bandwidth. This can be accomplished by using sub-THz communication with a wide bandwidth and applying advanced large-scale MIMO technology.
  • THz waves, also known as sub-millimeter radiation, typically occupy the frequency band between 0.1 THz and 10 THz, with corresponding wavelengths in the range of 0.03 mm to 3 mm.
  • the 100GHz-300GHz band range (Sub THz band) is considered a major part of the THz band for cellular communication.
  • with the addition of this band, the 6G cellular communication capacity is increased.
  • 300GHz-3THz is in the far-infrared (IR) frequency band.
  • the 300GHz-3THz band is part of the optical band, but at the border of the optical band, just behind the RF band. Thus, this 300GHz-3THz band shows similarities to RF.
  • the main characteristics of THz communication include (i) widely available bandwidth to support very high data rates, and (ii) high path loss occurring at high frequencies (highly directional antennas are indispensable).
  • the narrow beamwidth produced by the highly directional antenna reduces interference.
  • the small wavelength of the THz signal allows a much larger number of antenna elements to be integrated into devices and BSs operating in this band. This allows the use of advanced adaptive array techniques that can overcome range limitations.
  • OWC (optical wireless communication) technology is envisioned for 6G communications, in addition to RF-based communications, for all possible device-to-access networks. These networks also connect to network-to-backhaul/fronthaul connections.
  • OWC technology has already been used since the 4G communication system, but will be used more widely to meet the needs of the 6G communication system.
  • OWC technologies such as light fidelity, visible light communication, optical camera communication, and FSO communication based on a light band are well known technologies.
  • Communication based on optical radio technology can provide very high data rates, low latency and secure communication.
  • LiDAR can also be used for ultra-high-resolution 3D mapping in 6G communication based on wide bands.
  • the transmitter and receiver characteristics of an FSO system are similar to those of a fiber optic network.
  • data transmission in an FSO system is similar to that of a fiber optic system. Therefore, FSO can be a good technology to provide backhaul connectivity in 6G systems along with fiber optic networks.
  • FSO supports high-capacity backhaul connections for remote and non-remote areas such as sea, space, underwater, and isolated islands.
  • FSO also supports cellular BS connectivity.
  • as MIMO technology improves, so does the spectral efficiency. Therefore, large-scale MIMO technology will be important in 6G systems. Since MIMO technology uses multiple paths, multiplexing techniques and beam generation and operation techniques suitable for the THz band should also be considered important so that data signals can be transmitted through one or more paths.
  • Blockchain will become an important technology for managing large amounts of data in future communication systems.
  • Blockchain is a form of distributed ledger technology, which is a database distributed across numerous nodes or computing devices. Each node replicates and stores an identical copy of the ledger.
  • the blockchain is managed as a peer-to-peer network. It can exist without being managed by a centralized authority or server. Data on the blockchain is collected together and organized into blocks. Blocks are linked together and protected using encryption.
  • Blockchain in nature perfectly complements IoT at scale with improved interoperability, security, privacy, reliability and scalability. Therefore, blockchain technology provides several features such as interoperability between devices, traceability of large amounts of data, autonomous interaction of different IoT systems, and large-scale connection stability of 6G communication systems.
  • the 6G system integrates terrestrial and aerial networks to support vertical expansion of user communications.
  • 3D BS will be provided via low orbit satellites and unmanned aerial vehicles (UAVs). Adding a new dimension in terms of elevation and associated degrees of freedom makes 3D connections significantly different from traditional 2D networks.
  • a BS entity is installed in the UAV to provide cellular connectivity.
  • UAVs have certain features not found in fixed BS infrastructure, such as easy deployment, strong line-of-sight links, and degrees of freedom with controlled mobility.
  • UAVs can easily handle these situations.
  • UAV will become a new paradigm in the field of wireless communication. This technology facilitates the three basic requirements of wireless networks: eMBB, URLLC and mMTC.
  • UAVs can also serve several purposes, such as improving network connectivity, fire detection, disaster emergency services, security and surveillance, pollution monitoring, parking monitoring, incident monitoring, and more. Therefore, UAV technology is recognized as one of the most important technologies for 6G communication.
  • Tight integration of multiple frequencies and heterogeneous communication technologies is very important in 6G systems. As a result, users can seamlessly move from one network to another without having to make any manual configuration on the device. The best network is automatically selected from the available communication technologies. This will break the limitations of the cell concept in wireless communication. Currently, user movement from one cell to another causes too many handovers in high-density networks, causing handover failures, handover delays, data loss and ping-pong effects. 6G cell-free communication will overcome all of this and provide better QoS. Cell-free communication will be achieved through multi-connectivity and multi-tier hybrid technologies and different heterogeneous radios of devices.
  • WIET (wireless information and energy transfer) uses the same fields and waves as wireless communication systems.
  • the sensor and smartphone will be charged using wireless power transfer during communication.
  • WIET is a promising technology for extending the life of battery-charging wireless systems. Therefore, devices without batteries will be supported in 6G communication.
  • an autonomous wireless network is capable of continuously detecting dynamically changing environmental conditions and exchanging information between different nodes.
  • sensing will be tightly integrated with communications to support autonomous systems.
  • the density of access networks in 6G will be enormous.
  • Each access network is connected by backhaul connections such as fiber optic and FSO networks.
  • Beamforming is a signal processing procedure that adjusts an antenna array to transmit a radio signal in a specific direction.
  • beamforming technology has several advantages, such as high signal-to-noise ratio, interference prevention and rejection, and high network efficiency.
  • Hologram beamforming (HBF) is a new beamforming method that is significantly different from MIMO systems because it uses a software-defined antenna. HBF will be a very effective approach for efficient and flexible transmission and reception of signals in multi-antenna communication devices in 6G.
  • Big data analytics is a complex process for analyzing various large data sets or big data. This process ensures complete data management by finding information such as hidden data, unknown correlations and customer propensity. Big data is gathered from a variety of sources such as videos, social networks, images and sensors. This technology is widely used to process massive amounts of data in 6G systems.
  • since the straightness of the THz signal is strong, there may be many shadow areas due to obstructions; accordingly, LIS (large intelligent surface) technology, which expands the communication area, strengthens communication stability, and enables additional services, becomes important.
  • the LIS is an artificial surface made of electromagnetic materials, and can change the propagation of incoming and outgoing radio waves.
  • LIS can be seen as an extension of massive MIMO, but the array structure and operation mechanism are different from those of massive MIMO.
  • LIS has low power consumption in that it operates as a reconfigurable reflector with passive elements, that is, only passively reflects the signal without using an active RF chain.
  • each of the passive reflectors of the LIS must independently adjust the phase shift of the incoming signal, it can be advantageous for a wireless communication channel.
  • the reflected signal can be gathered at the target receiver to boost the received signal power.
  • the THz wave is located between the RF (Radio Frequency)/millimeter (mm) band and the infrared band; owing to its short wavelength, it has high straightness, and beam focusing may be possible.
  • the frequency band expected to be used for THz wireless communication may be a D-band (110 GHz to 170 GHz) or H-band (220 GHz to 325 GHz) band with low propagation loss due to absorption of molecules in the air.
  • THz wireless communication may be applied to wireless cognition, sensing, imaging, wireless communication, THz navigation, and the like.
  • FIG. 30 is a diagram showing an example of THz communication application.
  • the THz wireless communication scenario may be classified into a macro network, a micro network, and a nanoscale network.
  • THz wireless communication can be applied to vehicle-to-vehicle connection and backhaul/fronthaul connection.
  • THz wireless communication in micro-networks can be applied to indoor small cells, fixed point-to-point or multi-point connections such as wireless links in data centers, and near-field communication such as kiosk downloading.
  • Table 8 shows examples of technologies that can be used in the THz band.
  • Transceiver device: available but immature (UTC-PD, RTD and SBD)
  • Modulation and coding: low-order modulation techniques (OOK, QPSK), LDPC, Reed-Solomon, Hamming, Polar, Turbo
  • Antenna: omni and directional, phased array with a low number of antenna elements
  • Bandwidth: 69 GHz (or 23 GHz) at 300 GHz
  • Channel models: partially available
  • Data rate: 100 Gbps
  • Outdoor deployment: no
  • Free space loss: high
  • Coverage: low
  • Radio measurements: 300 GHz indoor
  • Device size: few micrometers
  • THz wireless communication can be classified based on a method for generating and receiving THz.
  • the THz generation method can be classified into an optical device or an electronic device-based technology.
  • FIG. 31 is a diagram illustrating an example of an electronic device-based THz wireless communication transceiver.
  • methods of generating THz using an electronic device include a method using a semiconductor device such as a Resonant Tunneling Diode (RTD), a method using a local oscillator and a multiplier, and a Monolithic Microwave Integrated Circuit (MMIC) method based on a compound-semiconductor High Electron Mobility Transistor (HEMT).
  • a doubler, tripler, or multiplier is applied to increase the frequency, and it passes through the subharmonic mixer and is radiated by the antenna. Since the THz band forms a high frequency, a multiplier is essential.
  • the multiplier is a circuit that has an output frequency that is N times that of the input, matches the desired harmonic frequency, and filters out all other frequencies.
  • beamforming may be implemented by applying an array antenna or the like to the antenna of FIG. 31 .
  • in FIG. 31, IF indicates an intermediate frequency, tripler and multiplier indicate a frequency multiplier, PA indicates a power amplifier, LNA indicates a low noise amplifier, and PLL indicates a phase-locked loop.
  • FIG. 32 is a diagram illustrating an example of a method of generating an optical device-based THz signal, and FIG. 33 is a diagram illustrating an example of an optical device-based THz wireless communication transceiver.
  • Optical device-based THz wireless communication technology refers to a method of generating and modulating a THz signal using an optical device.
  • the optical element-based THz signal generation technology is a technology that generates a high-speed optical signal using a laser and an optical modulator, and converts it into a THz signal using an ultra-high-speed photodetector.
  • it is easier to increase the frequency compared to the technology using only electronic devices, it is possible to generate a high-power signal, and it is possible to obtain a flat response characteristic in a wide frequency band.
  • a laser diode, a broadband optical modulator, and a high-speed photodetector are required to generate an optical device-based THz signal.
  • an optical coupler refers to a semiconductor device that uses light waves to transmit electrical signals, providing coupling with electrical insulation between circuits or systems.
  • the UTC-PD (Uni-Traveling-Carrier Photodetector) is one of the photodetectors; it uses electrons as active carriers and reduces the transit time of electrons by bandgap grading.
  • UTC-PD is capable of photodetection above 150GHz.
  • in FIG. 33, EDFA represents an erbium-doped fiber amplifier, PD represents a photodetector, OSA represents an optical sub-assembly (an optical module in which various optical communication functions, such as photoelectric conversion and electro-optical conversion, are modularized into one component), and DSO represents a digital storage oscilloscope.
  • the structure of the photoelectric converter will be described with reference to FIGS. 34 and 35.
  • FIG. 34 illustrates the structure of a photonic source-based transmitter, and FIG. 35 illustrates the structure of an optical modulator.
  • a phase of a signal may be changed by passing an optical source of a laser through an optical wave guide. At this time, data is loaded by changing electrical characteristics through a microwave contact or the like. Accordingly, an optical modulator output is formed as a modulated waveform.
  • the photoelectric converter (O/E converter) can generate THz pulses by, for example, optical rectification by a nonlinear crystal, photoelectric conversion (O/E conversion) by a photoconductive antenna, or emission from a bunch of relativistic electrons in a light beam.
  • a terahertz pulse (THz pulse) generated in the above manner may have a length on the order of femtoseconds to picoseconds.
  • An O/E converter performs down conversion by using non-linearity of a device.
  • for the terahertz system, a number of contiguous GHz bands are likely to be used for fixed or mobile service.
  • the available bandwidth may be classified based on an oxygen attenuation of 10^2 dB/km in the spectrum up to 1 THz. Accordingly, a framework in which the available bandwidth is composed of several band chunks may be considered.
  • if one band chunk is used, the bandwidth (BW) becomes about 20 GHz.
  • effective down-conversion from the IR band to the THz band depends on how the nonlinearity of the O/E converter is utilized. That is, in order to down-convert to a desired terahertz band (THz band), an O/E converter having the most ideal nonlinearity for transfer to that THz band must be designed. If an O/E converter that does not fit the target frequency band is used, there is a high possibility that an error will occur in the amplitude and phase of the corresponding pulse.
  • a terahertz transmission/reception system may be implemented using one photoelectric converter. Although it depends on the channel environment, as many photoelectric converters as the number of carriers may be required in a multi-carrier system. In particular, in the case of a multi-carrier system using several broad bands according to the above-described spectrum usage scheme, this will become conspicuous. In this regard, a frame structure for the multi-carrier system may be considered.
  • the down-frequency-converted signal based on the photoelectric converter may be transmitted in a specific resource region (eg, a specific frame).
  • the frequency domain of the specific resource region may include a plurality of chunks. Each chunk may be composed of at least one component carrier (CC).
  • Federated learning is one of the techniques of distributed machine learning, in which several devices that are the subject of learning share parameters such as weight or gradient of the local model with the server.
  • the server updates the global parameters by collecting the local model parameters of each device. In this process, since raw data of each device is not shared, communication overhead in the data transmission process can be reduced and personal information can be protected.
  • FIG. 36 illustrates an example of an operation process for federated learning based on orthogonal multiple access.
  • the federated learning based on the existing orthogonal multiple access operates as shown in FIG. 36 .
  • the device transmits local parameters to each allocated resource, and the server performs offline aggregation on the parameters received from the device.
  • the server derives global parameters through averaging of all local parameters, and transmits them back to the device.
  • the time for updating global parameters is delayed as the number of devices participating in learning increases under limited resources.
  • AirComp is a method in which all devices transmit local parameters using the same resource, as shown in FIG. 37, and the server can naturally obtain the sum of the local parameters from the received signal by the superposition characteristic of analog waveforms. Since AirComp-based federated learning transmits local parameters through the same resource, latency is not significantly affected by the number of devices participating in learning. However, for accurate aggregation of parameters, all devices must be synchronized.
  • to this end, the server may use a method of designating a time for transmitting local parameters, as shown in FIG. 38. Because the time required to update parameters differs for each device due to differences in AI capability, the server must designate the parameter transmission time based on the device with the largest learning delay in order to receive the local parameters of all devices. This creates a straggler effect, in which the overall latency of learning depends on the worst device. In addition, in AirComp-based federated learning, if local parameters cannot be transmitted at the designated time due to a communication failure or a device learning delay, the server cannot accurately determine whether the device's parameters were transmitted. Therefore, it is necessary to develop a scheduling technology that can secure learning accuracy while mitigating the straggler effect of federated learning.
  • when the edge server broadcasts global parameters, edge devices #1 to #5 perform parameter updates.
  • a time for transmitting a local parameter or a desired aggregation time may be defined in advance.
  • edge device #1, edge device #2, and edge device #3 complete training within the required aggregation time and transmit local parameters to the edge server.
  • for edge device #4, the training completion time is later than the required aggregation time due to scheduling delay.
  • for edge device #5, the training completion time is later than the required aggregation time due to training delay. Accordingly, referring to FIG. 38, although the number K of devices participating in training is 5, the number of local parameters received by the edge server is 3, so an error may occur in federated learning.
  • the federated learning system dealt with in this specification is composed of a server that manages the entire learning and a plurality of edge devices having local data. Each device learns a weight or gradient parameter for the model based on local data and sends it to the server using the same resource.
  • the server calculates an averaged global parameter by dividing the sum of local parameters received through AirComp by the number of devices participating in learning, and broadcasts it back to the devices.
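The aggregation step can be sketched as follows (illustrative only; an ideal noiseless AirComp channel is assumed, so the received signal equals the exact sum of the local parameter vectors):

```python
import numpy as np

def aircomp_aggregate(local_params, K):
    """Illustrative sketch: with AirComp, all K devices transmit on the
    same resource, so the server's received signal is (ideally) the
    element-wise sum of the local parameter vectors.  The server then
    divides by the number of participating devices K to obtain the
    averaged global parameter."""
    received = np.sum(local_params, axis=0)  # superposition over the air
    return received / K

# three devices, each with a 4-element local weight vector
locals_ = np.array([[1.0, 2.0, 3.0, 4.0],
                    [2.0, 3.0, 4.0, 5.0],
                    [3.0, 4.0, 5.0, 6.0]])
global_param = aircomp_aggregate(locals_, K=3)
```

In practice the superposition happens in the channel itself, not in server code; the `np.sum` stands in for that physical addition.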
  • FIG. 39 shows an example of the signaling process between the server and the devices in the initial learning process.
  • the server broadcasts initial global parameters to devices that want to participate in learning, and each device performs learning based on the received parameters and their own local data.
  • the device that has completed updating the local parameter transmits a flag indicating that learning has been completed to the server.
  • each of edge devices #1 to #4 performs training, and when training is completed, the flag is transmitted to the server.
  • FIG. 40 shows an example of an operation process of a server for determining a parameter transmission time and scheduling with a device.
  • FIG. 40 shows the operation of the server receiving the training completion flag.
  • the server can determine the number of devices that have completed learning and transmitted parameters through the number of flags received.
  • the server can determine the learning latency of each device through the time the flag is received, and based on this, can designate a parameter transmission time to each device.
  • the server may consider the following when determining the parameter transmission time.
  • AirComp-based federated learning requires all devices to transmit local parameters at the same time, so the latency efficiency of learning is affected by the largest learning latency among devices participating in learning.
  • a time point after the time when the edge device #1, the edge device #2, and the edge device #3 transmits the flag may be set as the aggregation time.
  • the aggregation time may be set to an arbitrary point in time earlier than the training completion point of the edge device #4 in consideration of the number of devices participating in learning and the learning latency. That is, in the example of FIG. 39, edge device #4 may be excluded from the federated learning process by the server.
  • the server may determine the parameter transmission time of the devices in consideration of both the accuracy and latency of federated learning. To this end, the server calculates a trade-off value each time, using the number of parameter-transmitting devices (K) and the learning delay time T_train^i of the i-th device, identified through the received flags.
  • p and q are positive real-valued exponents of the trade-off value, and can be adjusted while the server considers the parameter transmission time. If learning accuracy is prioritized and the server wants to receive the local parameters of as many devices as possible, the server may increase the p value. On the other hand, scheduling can be performed by increasing the q value in order to prioritize the latency efficiency of learning and preferentially perform federated learning with devices having a fast learning speed.
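Since the exact expression of the trade-off value is not reproduced in this text, the following sketch assumes a hypothetical form K^p / T^q that matches the described behavior: it grows with the number K of devices that have sent a training-completion flag (accuracy) and shrinks with the largest learning delay T among them (latency). The function names and the stopping rule are illustrative assumptions:

```python
def tradeoff(K, T_max, p, q):
    """Hypothetical trade-off value (the patent's exact formula is not
    reproduced here): rises with device count K, falls with the largest
    learning delay T_max; p and q are positive real exponents."""
    return (K ** p) / (T_max ** q)

# training-completion flags received at these delay times (seconds)
flag_times = [1.0, 1.2, 1.5, 4.0]

def pick_cutoff(flag_times, p, q):
    """Evaluate the trade-off after each flag and keep the device count
    that maximizes it; devices arriving later are excluded."""
    best_k, best_val = 0, float("-inf")
    for k, t in enumerate(flag_times, start=1):
        val = tradeoff(k, t, p, q)
        if val > best_val:
            best_k, best_val = k, val
    return best_k  # number of devices scheduled to participate

k_latency_first = pick_cutoff(flag_times, p=1.0, q=2.0)   # favors speed
k_accuracy_first = pick_cutoff(flag_times, p=3.0, q=0.5)  # favors count
```

With a large q the slow fourth device is dropped; with a large p the server waits for it, mirroring the p/q tuning described above.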
  • when the server has received flags from all devices, or when it continuously calculates the trade-off value and determines that the trade-off value cannot be increased further even if a training completion flag arrives later, the server sets that point as the parameter transmission time and informs the devices of it.
  • FIG. 41 shows an example of the joint learning process of the server and the devices after scheduling.
  • devices participating in federated learning through the scheduling process transmit their updated local parameters to the server using the same resources, and the server updates the global parameters by dividing the sum of the received local parameters by the number K of participating devices, then broadcasts them to the K designated devices.
  • one of the following methods can be selected and operated.
  • when subsequent learning is performed, the K devices participating in the learning transmit local parameters at the promised transmission time according to their learning completion times.
  • the server may periodically change the parameter transmission time.
  • the edge server transmits updated global parameters to edge devices #1 to #3. Thereafter, each of edge devices #1 to #3 performs training. At this time, according to the training capabilities of edge devices #1 to #3, each device can sufficiently complete training within the aggregation time, and thus the accuracy of the local parameters can be increased. Thereafter, edge devices #1 to #3 simultaneously transmit local parameters to the edge server at the aggregation time.
  • the local parameters updated by each of the edge devices receiving the initial global parameters by performing training may be separately transmitted to the server after the flag is transmitted.
  • the server may determine a transmission time of the updated local parameter based on the flag and inform the edge devices.
  • the server may receive all local parameters updated based on the initial global parameters, or may receive only as many as the number of edge devices participating in the subsequent joint learning. That is, in the example of FIGS. 39 to 41, the server may receive a local parameter from each of edge devices #1 to #4, or may receive a local parameter from each of edge devices #1 to #3.
  • the parameter transmission time may be determined by the server for every period.
  • the period may be a time interval including a global parameter transmission time and a reception time of a local parameter trained based on the global parameter from the server's point of view. That is, the period may be determined for every global parameter transmission.
  • the period may be defined in advance or may be changed aperiodically whenever there is a decision of the server.
  • the trade-off value is defined through the number of devices participating in learning and the learning delay time of the slowest participating device.
  • the server continuously calculates the trade-off value through the training completion flags from each device in the initial learning phase. If the server does not receive a flag for a long time, since the trade-off value will not increase even after receiving a flag later, the server stops waiting for flags and performs scheduling with the devices that have completed learning so far. At this time, the server adjusts the number of devices participating in federated learning and the parameter transmission time through p and q, the exponent values of the trade-off value.
  • FIG. 42 is a diagram illustrating a result of scheduling a device participating in learning by a server according to a p or q value.
  • to allow as many devices as possible to participate in learning, the server increases the p value of the trade-off value, delaying the parameter transmission time as much as possible.
  • conversely, by increasing the q value, a method of proceeding to the next round of learning as quickly as possible may be selected.
  • FIG. 43 is a flowchart of an example of a federated learning control method proposed in the present specification. The method may be performed by a server that performs federated learning.
  • the server transmits the first global parameter to the N communication devices (S4310).
  • the server receives a plurality of training completion information from each of the N communication devices (S4320).
  • each of the plurality of training completion information may inform the training completion time of each of the N communication devices.
  • the server transmits the second global parameter and time information to the K communication devices (S4330).
  • the time information may inform the request completion time determined based on the plurality of training completion information.
  • the server receives K local parameters from each of the K communication devices at the request completion time (S4340).
  • K and N may be an integer of 1 or more.
  • the K communication devices may be included in the N communication devices.
  • the local parameter may be determined based on the second global parameter.
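The round of steps S4310 to S4340 can be sketched as follows. The Device and Server classes and the median-based request-time policy are hypothetical stand-ins (the specification instead derives the request completion time from the trade-off value); only the message flow mirrors the steps above.

```python
from dataclasses import dataclass

@dataclass
class Device:
    id: int
    compute_time: float   # training-completion time this device reports

    def train(self, global_param: float) -> float:
        # the local parameter is determined based on the received global parameter
        return global_param + 0.1 * self.id   # toy local update

@dataclass
class Server:
    global_param: float = 0.0

    def decide_request_time(self, times: dict) -> float:
        # stand-in policy: wait for the median device
        vals = sorted(times.values())
        return vals[len(vals) // 2]

def federated_round(server: Server, devices: list) -> float:
    # S4310: transmit the first global parameter to the N devices;
    # S4320: receive training-completion information from each of them
    completion = {d.id: d.compute_time for d in devices}

    # time information: the request completion time determined from the
    # plurality of training-completion information
    request_time = server.decide_request_time(completion)
    scheduled = [d for d in devices if completion[d.id] <= request_time]

    # S4330: transmit the second global parameter and the time information
    # to the K scheduled devices;
    # S4340: receive the K local parameters at the request completion time
    # and aggregate them (with AirComp, the sum arrives over the air)
    locals_ = [d.train(server.global_param) for d in scheduled]
    server.global_param = sum(locals_) / len(locals_)
    return server.global_param

devices = [Device(id=i, compute_time=t) for i, t in enumerate([0.5, 1.0, 2.0])]
print(federated_round(Server(global_param=1.0), devices))  # → 1.05
```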
  • the methods proposed in the present specification may also be performed by at least one computer-readable recording medium storing instructions to be executed by at least one processor, or by an apparatus configured to control a terminal, the apparatus including one or more processors and one or more memories operably coupled to the one or more processors and storing instructions, wherein the one or more processors execute the instructions to perform the methods proposed herein.
  • an operation by the base station corresponding to the operation performed by the terminal may be considered.
  • FIG. 44 illustrates the communication system 1 applied to the present disclosure.
  • the communication system 1 applied to the present disclosure includes a wireless device, a base station, and a network.
  • the wireless device refers to a device that performs communication using a radio access technology (eg, 5G NR (New RAT), LTE (Long Term Evolution)), and may be referred to as a communication/wireless/5G device.
  • the wireless device may include a robot 100a, vehicles 100b-1 and 100b-2, an eXtended Reality (XR) device 100c, a hand-held device 100d, a home appliance 100e, an Internet of Things (IoT) device 100f, and an AI device/server 400.
  • the vehicle may include a vehicle equipped with a wireless communication function, an autonomous driving vehicle, a vehicle capable of performing inter-vehicle communication, and the like.
  • the vehicle may include an Unmanned Aerial Vehicle (UAV) (eg, a drone).
  • the XR device includes AR (Augmented Reality)/VR (Virtual Reality)/MR (Mixed Reality) devices, and may be implemented in the form of a Head-Mounted Device (HMD), a Head-Up Display (HUD) provided in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a robot, and the like.
  • the portable device may include a smart phone, a smart pad, a wearable device (eg, a smart watch, smart glasses), a computer (eg, a laptop computer), and the like.
  • Home appliances may include a TV, a refrigerator, a washing machine, and the like.
  • the IoT device may include a sensor, a smart meter, and the like.
  • the base station and the network may be implemented as a wireless device, and the specific wireless device 200a may operate as a base station/network node to other wireless devices.
  • the wireless devices 100a to 100f may be connected to the network 300 through the base station 200 .
  • the network 300 may be configured using a 3G network, a 4G (eg, LTE) network, or a 5G (eg, NR) network.
  • the wireless devices 100a to 100f may communicate with each other through the base station 200/network 300, but may also communicate directly (e.g. sidelink communication) without passing through the base station/network.
  • the vehicles 100b-1 and 100b-2 may perform direct communication (e.g. Vehicle to Vehicle (V2V)/Vehicle to everything (V2X) communication).
  • the IoT device may communicate directly with other IoT devices (eg, sensor) or other wireless devices 100a to 100f.
  • Wireless communication/connection 150a, 150b, and 150c may be performed between the wireless devices 100a to 100f and the base station 200, and between the base station 200 and the base station 200.
  • the wireless communication/connection includes uplink/downlink communication 150a, sidelink communication 150b (or D2D communication), and communication between base stations 150c (eg, relay, IAB (Integrated Access Backhaul)), and may be performed through various radio access technologies (eg, 5G NR).
  • Through the wireless communication/connections 150a, 150b, and 150c, the wireless device and the base station/wireless device, and the base station and the base station, may transmit/receive wireless signals to/from each other.
  • the wireless communication/connection 150a, 150b, and 150c may transmit/receive signals through various physical channels.
  • to this end, various signal processing processes (eg, channel encoding/decoding, modulation/demodulation, resource mapping/demapping, etc.) and resource allocation processes may be performed.
  • FIG. 45 illustrates a wireless device applicable to the present disclosure.
  • the first wireless device 100 and the second wireless device 200 may transmit and receive wireless signals through various wireless access technologies (eg, LTE, NR).
  • {the first wireless device 100, the second wireless device 200} may correspond to {the wireless device 100x, the base station 200} and/or {the wireless device 100x, the wireless device 100x} of FIG. 44.
  • the first wireless device 100 includes one or more processors 102 and one or more memories 104 , and may further include one or more transceivers 106 and/or one or more antennas 108 .
  • the processor 102 controls the memory 104 and/or the transceiver 106 and may be configured to implement the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
  • the processor 102 may process information in the memory 104 to generate first information/signal, and then transmit a wireless signal including the first information/signal through the transceiver 106 .
  • the processor 102 may receive the radio signal including the second information/signal through the transceiver 106 , and then store information obtained from signal processing of the second information/signal in the memory 104 .
  • the memory 104 may be connected to the processor 102 and may store various information related to the operation of the processor 102 .
  • the memory 104 may store software code including instructions for performing some or all of the processes controlled by the processor 102, or for performing the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
  • the processor 102 and the memory 104 may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
  • the transceiver 106 may be coupled to the processor 102 , and may transmit and/or receive wireless signals via one or more antennas 108 .
  • the transceiver 106 may include a transmitter and/or a receiver.
  • the transceiver 106 may be used interchangeably with a radio frequency (RF) unit.
  • a wireless device may refer to a communication modem/circuit/chip.
  • the second wireless device 200 includes one or more processors 202 , one or more memories 204 , and may further include one or more transceivers 206 and/or one or more antennas 208 .
  • the processor 202 controls the memory 204 and/or the transceiver 206 and may be configured to implement the descriptions, functions, procedures, suggestions, methods, and/or flow charts disclosed herein.
  • the processor 202 may process the information in the memory 204 to generate third information/signal, and then transmit a wireless signal including the third information/signal through the transceiver 206 .
  • the processor 202 may receive the radio signal including the fourth information/signal through the transceiver 206 , and then store information obtained from signal processing of the fourth information/signal in the memory 204 .
  • the memory 204 may be connected to the processor 202 and may store various information related to the operation of the processor 202 .
  • the memory 204 may store software code including instructions for performing some or all of the processes controlled by the processor 202, or for performing the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
  • the processor 202 and the memory 204 may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
  • the transceiver 206 may be coupled to the processor 202 and may transmit and/or receive wireless signals via one or more antennas 208 .
  • the transceiver 206 may include a transmitter and/or a receiver.
  • the transceiver 206 may be used interchangeably with an RF unit.
  • a wireless device may refer to a communication modem/circuit/chip.
  • one or more protocol layers may be implemented by one or more processors 102 , 202 .
  • one or more processors 102 , 202 may implement one or more layers (eg, functional layers such as PHY, MAC, RLC, PDCP, RRC, SDAP).
  • the one or more processors 102, 202 may process one or more Protocol Data Units (PDUs) and/or one or more Service Data Units (SDUs) according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein.
  • One or more processors 102 , 202 may generate messages, control information, data, or information according to the description, function, procedure, proposal, method, and/or flow charts disclosed herein.
  • the one or more processors 102, 202 may generate a signal (eg, a baseband signal) including PDUs, SDUs, messages, control information, data, or information according to the functions, procedures, proposals, and/or methods disclosed herein, and provide it to the one or more transceivers 106, 206.
  • the one or more processors 102, 202 may receive signals (eg, baseband signals) from the one or more transceivers 106, 206, and acquire PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein.
  • One or more processors 102, 202 may be referred to as a controller, microcontroller, microprocessor, or microcomputer.
  • One or more processors 102 , 202 may be implemented by hardware, firmware, software, or a combination thereof.
  • for example, one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), and/or Field Programmable Gate Arrays (FPGAs) may be included in the one or more processors 102, 202.
  • the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein may be implemented using firmware or software, and the firmware or software may be implemented to include modules, procedures, functions, and the like.
  • firmware or software configured to perform the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed in this document may be contained in the one or more processors 102, 202, or may be stored in the one or more memories 104, 204 and driven by the one or more processors 102, 202.
  • the descriptions, functions, procedures, proposals, methods, and/or flowcharts of operations disclosed herein may be implemented using firmware or software in the form of code, instructions, and/or a set of instructions.
  • One or more memories 104 , 204 may be coupled with one or more processors 102 , 202 , and may store various forms of data, signals, messages, information, programs, code, instructions, and/or instructions.
  • the one or more memories 104 and 204 may be comprised of ROM, RAM, EPROM, flash memory, hard drives, registers, cache memory, computer readable storage media, and/or combinations thereof.
  • One or more memories 104, 204 may be located inside and/or outside one or more processors 102, 202. Additionally, one or more memories 104, 204 may be coupled to one or more processors 102, 202 through various technologies, such as wired or wireless connections.
  • One or more transceivers 106 , 206 may transmit user data, control information, radio signals/channels, etc. referred to in the methods and/or operational flowcharts of this document to one or more other devices.
  • One or more transceivers 106, 206 may receive user data, control information, radio signals/channels, etc. referred to in the descriptions, functions, procedures, suggestions, methods, and/or flow charts disclosed herein from one or more other devices.
  • one or more transceivers 106 , 206 may be coupled to one or more processors 102 , 202 and may transmit and receive wireless signals.
  • one or more processors 102 , 202 may control one or more transceivers 106 , 206 to transmit user data, control information, or wireless signals to one or more other devices.
  • one or more processors 102 , 202 may control one or more transceivers 106 , 206 to receive user data, control information, or wireless signals from one or more other devices.
  • one or more transceivers 106, 206 may be coupled to one or more antennas 108, 208, and may be set to transmit and receive user data, control information, radio signals/channels, etc. referred to in the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein through the one or more antennas 108, 208.
  • one or more antennas may be a plurality of physical antennas or a plurality of logical antennas (eg, antenna ports).
  • the one or more transceivers 106, 206 may convert received radio signals/channels, etc. from RF band signals into baseband signals in order to process the received user data, control information, radio signals/channels, etc. using the one or more processors 102, 202.
  • One or more transceivers 106 , 206 may convert user data, control information, radio signals/channels, etc. processed using one or more processors 102 , 202 from baseband signals to RF band signals.
  • one or more transceivers 106 , 206 may include (analog) oscillators and/or filters.
  • FIG. 46 illustrates a signal processing circuit for a transmission signal.
  • the signal processing circuit 1000 may include a scrambler 1010 , a modulator 1020 , a layer mapper 1030 , a precoder 1040 , a resource mapper 1050 , and a signal generator 1060 .
  • the operations/functions of FIG. 46 may be performed by the processors 102 , 202 and/or transceivers 106 , 206 of FIG. 45 .
  • the hardware elements of FIG. 46 may be implemented in the processors 102 , 202 and/or transceivers 106 , 206 of FIG. 45 .
  • blocks 1010 to 1060 may be implemented in the processors 102 and 202 of FIG. 45 .
  • blocks 1010 to 1050 may be implemented in the processors 102 and 202 of FIG. 45
  • block 1060 may be implemented in the transceivers 106 and 206 of FIG. 45 .
  • the codeword may be converted into a wireless signal through the signal processing circuit 1000 of FIG. 46 .
  • the codeword is a coded bit sequence of an information block.
  • the information block may include a transport block (eg, a UL-SCH transport block, a DL-SCH transport block).
  • the radio signal may be transmitted through various physical channels (eg, PUSCH, PDSCH).
  • the codeword may be converted into a scrambled bit sequence by the scrambler 1010 .
  • a scramble sequence used for scrambling is generated based on an initialization value, and the initialization value may include ID information of a wireless device, and the like.
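As an illustration of the scrambling step above, the sketch below XORs the codeword bits with a pseudo-random sequence seeded by an initialization value. The LFSR generator is a toy stand-in for illustration only, not the Gold-sequence generator defined by 3GPP.

```python
# Toy stand-in for the scrambler: bits are XORed with a pseudo-random
# sequence generated from an initialization value (eg, derived from the
# wireless device ID). This LFSR is illustrative, not the 3GPP generator.

def prbs(init: int, length: int) -> list:
    """31-bit Fibonacci LFSR seeded with the initialization value."""
    state = (init & 0x7FFFFFFF) or 1   # avoid the all-zero state
    out = []
    for _ in range(length):
        bit = (state ^ (state >> 3)) & 1
        out.append(bit)
        state = (state >> 1) | (bit << 30)
    return out

def scramble(bits: list, init: int) -> list:
    seq = prbs(init, len(bits))
    return [b ^ s for b, s in zip(bits, seq)]

codeword = [1, 0, 1, 1, 0, 0, 1, 0]
tx = scramble(codeword, init=0x1234)
# descrambling XORs with the same sequence, recovering the codeword
assert scramble(tx, init=0x1234) == codeword
```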
  • the scrambled bit sequence may be modulated by a modulator 1020 into a modulation symbol sequence.
  • the modulation method may include pi/2-Binary Phase Shift Keying (pi/2-BPSK), m-Phase Shift Keying (m-PSK), m-Quadrature Amplitude Modulation (m-QAM), and the like.
  • the complex modulation symbol sequence may be mapped to one or more transport layers by the layer mapper 1030 .
  • Modulation symbols of each transport layer may be mapped to corresponding antenna port(s) by the precoder 1040 (precoding).
  • the output z of the precoder 1040 may be obtained by multiplying the output y of the layer mapper 1030 by the N*M precoding matrix W, where N is the number of antenna ports and M is the number of transport layers.
  • the precoder 1040 may perform precoding after performing transform precoding (eg, DFT transform) on the complex modulation symbols. Also, the precoder 1040 may perform precoding without performing transform precoding.
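The precoding operation can be worked through with a small example: M = 2 layer symbols are mapped onto N = 4 antenna ports by an N x M matrix W. The uniform matrix below is an arbitrary illustrative choice; codebook selection is out of scope here.

```python
import math

# number of transport layers (M) and antenna ports (N)
M, N = 2, 4

# one complex modulation symbol per transport layer (layer mapper output y)
y = [complex(1, 1), complex(1, -1)]

# an example N x M precoding matrix W (uniform entries, for illustration)
W = [[1 / math.sqrt(N * M)] * M for _ in range(N)]

# z = W * y: one precoded symbol per antenna port
z = [sum(W[n][m] * y[m] for m in range(M)) for n in range(N)]
assert len(z) == N   # N = 4 outputs, one per antenna port
```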
  • the resource mapper 1050 may map modulation symbols of each antenna port to a time-frequency resource.
  • the time-frequency resource may include a plurality of symbols (eg, a CP-OFDMA symbol, a DFT-s-OFDMA symbol) in the time domain and a plurality of subcarriers in the frequency domain.
  • the signal generator 1060 may generate a radio signal from the mapped modulation symbols, and may include an Inverse Fast Fourier Transform (IFFT) module, a CP (Cyclic Prefix) inserter, a Digital-to-Analog Converter (DAC), a frequency uplink converter, and the like.
  • the signal processing process for the received signal in the wireless device may be configured in reverse of the signal processing process 1010 to 1060 of FIG. 46 .
  • in the wireless device (eg, 100 and 200 in FIG. 45), the received radio signal may be converted into a baseband signal through a signal restorer.
  • the signal restorer may include a frequency downlink converter, an analog-to-digital converter (ADC), a CP remover, and a Fast Fourier Transform (FFT) module.
  • the baseband signal may be restored to a codeword through a resource de-mapper process, a postcoding process, a demodulation process, and a descrambling process.
  • the codeword may be restored to the original information block through decoding.
  • the signal processing circuit (not shown) for the received signal may include a signal restorer, a resource de-mapper, a post coder, a demodulator, a descrambler, and a decoder.
  • FIG. 47 shows another example of a wireless device applied to the present disclosure.
  • the wireless device may be implemented in various forms according to use-example/service (refer to FIG. 44 ).
  • the wireless devices 100 and 200 correspond to the wireless devices 100 and 200 of FIG. 45, and may be composed of various elements, components, units, and/or modules.
  • the wireless devices 100 and 200 may include a communication unit 110 , a control unit 120 , a memory unit 130 , and an additional element 140 .
  • the communication unit may include communication circuitry 112 and transceiver(s) 114 .
  • communication circuitry 112 may include one or more processors 102 , 202 and/or one or more memories 104 , 204 of FIG. 45 .
  • the transceiver(s) 114 may include one or more transceivers 106, 206 and/or one or more antennas 108, 208 of FIG. 45.
  • the control unit 120 is electrically connected to the communication unit 110, the memory unit 130, and the additional element 140, and controls general operations of the wireless device. For example, the control unit 120 may control the electrical/mechanical operation of the wireless device based on the program/code/command/information stored in the memory unit 130. In addition, the control unit 120 may transmit information stored in the memory unit 130 to the outside (eg, another communication device) through the communication unit 110 via a wireless/wired interface, or may store information received from the outside (eg, another communication device) through the communication unit 110 via a wireless/wired interface in the memory unit 130.
  • the additional element 140 may be configured in various ways according to the type of the wireless device.
  • the additional element 140 may include at least one of a power unit/battery, an input/output unit (I/O unit), a driving unit, and a computing unit.
  • wireless devices include, but are not limited to, robots (FIG. 44, 100a), vehicles (FIG. 44, 100b-1, 100b-2), XR devices (FIG. 44, 100c), portable devices (FIG. 44, 100d), home appliances (FIG. 44, 100e), IoT devices (FIG. 44, 100f), and the like.
  • the wireless device may be mobile or used in a fixed location depending on the use-example/service.
  • various elements, components, units/units, and/or modules in the wireless devices 100 and 200 may be all interconnected through a wired interface, or at least some may be wirelessly connected through the communication unit 110 .
  • for example, the control unit 120 and the communication unit 110 may be connected by wire, and the control unit 120 and the first units (eg, 130, 140) may be wirelessly connected through the communication unit 110.
  • each element, component, unit/unit, and/or module within the wireless device 100 , 200 may further include one or more elements.
  • the controller 120 may be configured with one or more processor sets.
  • control unit 120 may be configured as a set of a communication control processor, an application processor, an electronic control unit (ECU), a graphic processing processor, a memory control processor, and the like.
  • the memory unit 130 may be composed of random access memory (RAM), dynamic RAM (DRAM), read only memory (ROM), flash memory, volatile memory, non-volatile memory, and/or a combination thereof.
  • FIG. 47 will be described in more detail with reference to the drawings.
  • the portable device may include a smart phone, a smart pad, a wearable device (eg, a smart watch, smart glasses), and a portable computer (eg, a laptop computer).
  • a mobile device may be referred to as a mobile station (MS), a user terminal (UT), a mobile subscriber station (MSS), a subscriber station (SS), an advanced mobile station (AMS), or a wireless terminal (WT).
  • the portable device 100 may include an antenna unit 108, a communication unit 110, a control unit 120, a memory unit 130, a power supply unit 140a, an interface unit 140b, and an input/output unit 140c.
  • the antenna unit 108 may be configured as a part of the communication unit 110 .
  • Blocks 110 to 130/140a to 140c respectively correspond to blocks 110 to 130/140 in FIG. 47 .
  • the communication unit 110 may transmit and receive signals (eg, data, control signals, etc.) with other wireless devices and base stations.
  • the controller 120 may perform various operations by controlling the components of the portable device 100 .
  • the controller 120 may include an application processor (AP).
  • the memory unit 130 may store data/parameters/programs/codes/commands necessary for driving the portable device 100 . Also, the memory unit 130 may store input/output data/information.
  • the power supply unit 140a supplies power to the portable device 100 and may include a wired/wireless charging circuit, a battery, and the like.
  • the interface unit 140b may support a connection between the portable device 100 and other external devices.
  • the interface unit 140b may include various ports (eg, an audio input/output port and a video input/output port) for connection with an external device.
  • the input/output unit 140c may receive or output image information/signal, audio information/signal, data, and/or information input from a user.
  • the input/output unit 140c may include a camera, a microphone, a user input unit, a display unit 140d, a speaker, and/or a haptic module.
  • the input/output unit 140c may obtain information/signals (eg, touch, text, voice, image, video) input from the user, and store the obtained information/signals in the memory unit 130.
  • the communication unit 110 may convert the information/signal stored in the memory into a wireless signal, and transmit the converted wireless signal directly to another wireless device or to a base station. Also, after receiving a radio signal from another radio device or base station, the communication unit 110 may restore the received radio signal to original information/signal. After the restored information/signal is stored in the memory unit 130 , it may be output in various forms (eg, text, voice, image, video, haptic) through the input/output unit 140c.
  • the vehicle or autonomous driving vehicle may be implemented as a mobile robot, a vehicle, a train, an aerial vehicle (AV), a ship, and the like.
  • the vehicle or autonomous driving vehicle 100 may include an antenna unit 108, a communication unit 110, a control unit 120, a driving unit 140a, a power supply unit 140b, a sensor unit 140c, and an autonomous driving unit 140d.
  • the antenna unit 108 may be configured as a part of the communication unit 110 .
  • Blocks 110/130/140a-140d correspond to blocks 110/130/140 of FIG. 47, respectively.
  • the communication unit 110 may transmit/receive signals (eg, data, control signals, etc.) to and from external devices such as other vehicles, base stations (e.g., base stations, roadside units, etc.), servers, and the like.
  • the controller 120 may control elements of the vehicle or the autonomous driving vehicle 100 to perform various operations.
  • the controller 120 may include an Electronic Control Unit (ECU).
  • the driving unit 140a may cause the vehicle or the autonomous driving vehicle 100 to run on the ground.
  • the driving unit 140a may include an engine, a motor, a power train, a wheel, a brake, a steering device, and the like.
  • the power supply unit 140b supplies power to the vehicle or the autonomous driving vehicle 100 , and may include a wired/wireless charging circuit, a battery, and the like.
  • the sensor unit 140c may obtain vehicle status, surrounding environment information, user information, and the like.
  • the sensor unit 140c may include an inertial measurement unit (IMU) sensor, a collision sensor, a wheel sensor, a speed sensor, an inclination sensor, a weight sensor, a heading sensor, a position module, a vehicle forward/reverse sensor, a battery sensor, a fuel sensor, a tire sensor, a steering sensor, a temperature sensor, a humidity sensor, an ultrasonic sensor, an illuminance sensor, a pedal position sensor, and the like.
  • the autonomous driving unit 140d may implement a technology for maintaining a driving lane, a technology for automatically adjusting speed such as adaptive cruise control, a technology for automatically driving along a predetermined route, and a technology for automatically setting a route when a destination is set.
  • the communication unit 110 may receive map data, traffic information data, and the like from an external server.
  • the autonomous driving unit 140d may generate an autonomous driving route and a driving plan based on the acquired data.
  • the controller 120 may control the driving unit 140a to move the vehicle or the autonomous driving vehicle 100 along the autonomous driving path (eg, speed/direction adjustment) according to the driving plan.
  • the communication unit 110 may obtain the latest traffic information data from an external server non/periodically, and may acquire surrounding traffic information data from surrounding vehicles.
  • the sensor unit 140c may acquire vehicle state and surrounding environment information.
  • the autonomous driving unit 140d may update the autonomous driving route and driving plan based on the newly acquired data/information.
  • the communication unit 110 may transmit information about a vehicle location, an autonomous driving route, a driving plan, and the like to an external server.
  • the external server may predict traffic information data in advance using AI technology or the like based on information collected from the vehicle or autonomous vehicles, and may provide the predicted traffic information data to the vehicle or autonomous vehicles.
  • FIG. 50 illustrates a vehicle applied to the present disclosure.
  • the vehicle may also be implemented as a means of transportation, a train, an air vehicle, a ship, and the like.
  • the vehicle 100 may include a communication unit 110 , a control unit 120 , a memory unit 130 , an input/output unit 140a , and a position measurement unit 140b .
  • blocks 110 to 130/140a to 140b correspond to blocks 110 to 130/140 of FIG. 47, respectively.
  • the communication unit 110 may transmit and receive signals (eg, data, control signals, etc.) with other vehicles or external devices such as a base station.
  • the controller 120 may control components of the vehicle 100 to perform various operations.
  • the memory unit 130 may store data/parameters/programs/codes/commands supporting various functions of the vehicle 100 .
  • the input/output unit 140a may output an AR/VR object based on information in the memory unit 130 .
  • the input/output unit 140a may include a HUD.
  • the position measuring unit 140b may acquire position information of the vehicle 100 .
  • the location information may include absolute location information of the vehicle 100 , location information within a driving line, acceleration information, location information with a surrounding vehicle, and the like.
  • the position measuring unit 140b may include a GPS and various sensors.
  • the communication unit 110 of the vehicle 100 may receive map information, traffic information, and the like from an external server and store it in the memory unit 130 .
  • the position measuring unit 140b may acquire vehicle position information through GPS and various sensors and store it in the memory unit 130 .
  • the controller 120 may generate a virtual object based on map information, traffic information, vehicle location information, and the like, and the input/output unit 140a may display the generated virtual object on a window inside the vehicle (1410, 1420).
  • the controller 120 may determine whether the vehicle 100 is normally operating within the driving line based on the vehicle location information. When the vehicle 100 deviates from the driving line abnormally, the controller 120 may display a warning on the windshield of the vehicle through the input/output unit 140a.
  • control unit 120 may broadcast a warning message regarding driving abnormality to surrounding vehicles through the communication unit 110 .
  • control unit 120 may transmit the location information of the vehicle and information on driving/vehicle abnormality to a related organization through the communication unit 110 .
  • the XR device may be implemented as an HMD, a head-up display (HUD) provided in a vehicle, a television, a smart phone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a robot, and the like.
  • the XR device 100a may include a communication unit 110 , a control unit 120 , a memory unit 130 , an input/output unit 140a , a sensor unit 140b , and a power supply unit 140c .
  • blocks 110 to 130/140a to 140c correspond to blocks 110 to 130/140 of FIG. 47, respectively.
  • the communication unit 110 may transmit/receive signals (eg, media data, control signals, etc.) to/from external devices such as other wireless devices, portable devices, or media servers.
  • the media data may include video, images, and sound.
  • the controller 120 may perform various operations by controlling the components of the XR device 100a.
  • the controller 120 may be configured to control and/or perform procedures such as video/image acquisition, (video/image) encoding, and metadata generation and processing.
  • the memory unit 130 may store data/parameters/programs/codes/commands necessary for driving the XR device 100a/creating an XR object.
  • the input/output unit 140a may obtain control information, data, and the like from the outside, and may output the generated XR object.
  • the input/output unit 140a may include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module.
  • the sensor unit 140b may obtain an XR device state, surrounding environment information, user information, and the like.
  • the sensor unit 140b may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, and/or a radar.
  • the power supply unit 140c supplies power to the XR device 100a, and may include a wired/wireless charging circuit, a battery, and the like.
  • the memory unit 130 of the XR device 100a may include information (eg, data, etc.) necessary for generating an XR object (eg, AR/VR/MR object).
  • the input/output unit 140a may obtain a command to operate the XR device 100a from the user, and the controller 120 may drive the XR device 100a according to the user's driving command. For example, when the user intends to watch a movie or the news through the XR device 100a, the controller 120 may transmit content request information to another device (eg, the mobile device 100b) or to a media server through the communication unit 110.
  • the communication unit 110 may download/stream contents such as movies and news from another device (eg, the portable device 100b) or a media server to the memory unit 130.
  • the controller 120 may control and/or perform procedures such as video/image acquisition, (video/image) encoding, and metadata generation/processing for the content, and may generate/output an XR object based on information about the surrounding space or a real object acquired through the input/output unit 140a/sensor unit 140b.
  • the XR device 100a is wirelessly connected to the portable device 100b through the communication unit 110 , and the operation of the XR device 100a may be controlled by the portable device 100b.
  • the portable device 100b may operate as a controller for the XR device 100a.
  • the XR device 100a may obtain 3D location information of the portable device 100b, and then generate and output an XR object corresponding to the portable device 100b.
  • Robots may be classified as industrial, medical, household, military, etc., depending on their purpose or field of use.
  • the robot 100 may include a communication unit 110 , a control unit 120 , a memory unit 130 , an input/output unit 140a , a sensor unit 140b , and a driving unit 140c .
  • blocks 110 to 130/140a to 140c correspond to blocks 110 to 130/140 of FIG. 47, respectively.
  • the communication unit 110 may transmit/receive signals (eg, driving information, control signals, etc.) with external devices such as other wireless devices, other robots, or control servers.
  • the controller 120 may perform various operations by controlling the components of the robot 100 .
  • the memory unit 130 may store data/parameters/programs/codes/commands supporting various functions of the robot 100 .
  • the input/output unit 140a may obtain information from the outside of the robot 100 and may output information to the outside of the robot 100 .
  • the input/output unit 140a may include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module.
  • the sensor unit 140b may obtain internal information, surrounding environment information, user information, and the like of the robot 100 .
  • the sensor unit 140b may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, a radar, and the like.
  • the driving unit 140c may perform various physical operations, such as moving a robot joint. In addition, the driving unit 140c may make the robot 100 travel on the ground or fly in the air.
  • the driving unit 140c may include an actuator, a motor, a wheel, a brake, a propeller, and the like.
  • The AI device may be implemented as a fixed or movable device such as a TV, a projector, a smartphone, a PC, a laptop, a digital broadcasting terminal, a tablet PC, a wearable device, a set-top box (STB), a radio, a washing machine, a refrigerator, a digital signage, a robot, a vehicle, and the like.
  • the AI device 100 may include a communication unit 110, a control unit 120, a memory unit 130, input/output units 140a/140b, a learning processor unit 140c, and a sensor unit 140d.
  • Blocks 110 to 130/140a to 140d correspond to blocks 110 to 130/140 of FIG. 47, respectively.
  • the communication unit 110 may transmit and receive wired/wireless signals (eg, sensor information, user input, a learning model, a control signal, etc.) to/from external devices such as other AI devices (eg, 100x, 200, 400 in FIG. 44) or an AI server (eg, 400 in FIG. 44) using wired/wireless communication technology. To this end, the communication unit 110 may transmit information in the memory unit 130 to an external device or transfer a signal received from an external device to the memory unit 130.
  • the controller 120 may determine at least one executable operation of the AI device 100 based on information determined or generated using a data analysis algorithm or a machine learning algorithm, and may control the components of the AI device 100 to perform the determined operation. For example, the control unit 120 may request, search for, receive, or utilize data of the learning processor unit 140c or the memory unit 130, and may control the components of the AI device 100 to execute the predicted or preferred operation among the at least one executable operation. In addition, the control unit 120 may collect history information including user feedback on the operation contents or the operation of the AI device 100, store it in the memory unit 130 or the learning processor unit 140c, or transmit it to an external device such as the AI server (400 in FIG. 44). The collected history information may be used to update the learning model.
  • the memory unit 130 may store data supporting various functions of the AI device 100 .
  • the memory unit 130 may store data obtained from the input unit 140a, data obtained from the communication unit 110, output data of the learning processor unit 140c, and data obtained from the sensing unit 140d.
  • the memory unit 130 may store control information and/or software codes necessary for the operation/execution of the control unit 120 .
  • the input unit 140a may acquire various types of data from the outside of the AI device 100 .
  • the input unit 140a may acquire training data for model learning, input data to which the learning model is applied, and the like.
  • the input unit 140a may include a camera, a microphone, and/or a user input unit.
  • the output unit 140b may generate an output related to sight, hearing, or touch.
  • the output unit 140b may include a display unit, a speaker, and/or a haptic module.
  • the sensing unit 140d may obtain at least one of internal information of the AI device 100, surrounding environment information of the AI device 100, and user information by using various sensors.
  • the sensing unit 140d may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, and/or a radar.
  • the learning processor unit 140c may train a model composed of an artificial neural network by using the training data.
  • the learning processor unit 140c may perform AI processing together with the learning processor unit of the AI server ( FIGS. 44 and 400 ).
  • the learning processor unit 140c may process information received from an external device through the communication unit 110 and/or information stored in the memory unit 130 .
  • the output value of the learning processor unit 140c may be transmitted to an external device through the communication unit 110 and/or stored in the memory unit 130 .
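The role of the learning processor unit 140c described above (training a model on local training data and handing the output to the communication unit 110 or the memory unit 130) can be illustrated with a minimal sketch. The one-parameter model, the learning rate, and the function names are illustrative assumptions, not the patent's implementation:

```python
# Minimal, hypothetical sketch of local model training in the learning
# processor unit 140c: fit y = w * x to local data by gradient descent.

def train_local_model(data, w=0.0, lr=0.1, epochs=50):
    """Fit a one-parameter linear model on mean squared error."""
    for _ in range(epochs):
        # Gradient of mean((w*x - y)^2) with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# The learned parameter would then be stored in the memory unit 130
# and/or transmitted through the communication unit 110.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_local_model(data)  # converges toward w = 2.0
```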

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Disclosed is a scheduling method between a server and devices for resolving the asynchrony problem that occurs due to differences in the AI capability of the devices participating in AirComp-based federated learning. Disclosed is a federated learning control method performed by a server, in which: a first global parameter is transmitted to N communication devices; a plurality of pieces of learning-completion information are received from the respective N communication devices, each of the plurality of pieces of learning-completion information indicating the learning-completion time point of the respective N communication devices; a second global parameter and time-point information are transmitted to K communication devices, the time-point information indicating the requested completion time point determined on the basis of the plurality of pieces of learning-completion information; and K local parameters are received from the respective K communication devices at the requested completion time point.
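As a rough illustration of the scheduling round in the abstract (select K of the N devices from their reported learning-completion time points, derive the requested completion time point, and aggregate the K local parameters), the following sketch may help. All names and the selection rule (the K fastest devices, with the latest reported time among them as the requested completion point) are assumptions for illustration; the disclosure does not specify this policy:

```python
# Hedged sketch of one AirComp federated-learning scheduling round.
# Server-side only; device training and the radio channel are abstracted away.

def schedule_round(completion_times, k):
    """Pick K devices and a requested completion time from reported times.

    completion_times: {device_id: reported learning-completion time}
    Returns the K fastest devices and the latest time among them, used
    here as the requested completion point for synchronized upload.
    """
    selected = sorted(completion_times, key=completion_times.get)[:k]
    requested_time = max(completion_times[d] for d in selected)
    return selected, requested_time

def aggregate(local_params):
    """AirComp-style aggregation: superposed (summed) then averaged."""
    return sum(local_params) / len(local_params)

times = {"dev1": 1.2, "dev2": 0.8, "dev3": 2.5, "dev4": 1.0}
devices, t_req = schedule_round(times, k=3)
# devices -> ["dev2", "dev4", "dev1"], t_req -> 1.2
```

Under this illustrative policy, slow devices (here "dev3") are excluded from the round, so the K selected devices can upload their local parameters simultaneously at the requested completion point.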
PCT/KR2020/009240 2020-07-14 2020-07-14 Scheduling method and device for AirComp-based federated learning WO2022014731A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/009240 WO2022014731A1 (fr) 2020-07-14 2020-07-14 Scheduling method and device for AirComp-based federated learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/009240 WO2022014731A1 (fr) 2020-07-14 2020-07-14 Scheduling method and device for AirComp-based federated learning

Publications (1)

Publication Number Publication Date
WO2022014731A1 true WO2022014731A1 (fr) 2022-01-20

Family

ID=79554678

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/009240 WO2022014731A1 (fr) 2020-07-14 2020-07-14 Scheduling method and device for AirComp-based federated learning

Country Status (1)

Country Link
WO (1) WO2022014731A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116009677A (zh) * 2022-09-02 2023-04-25 Nantong University Federated-learning device-side energy consumption optimization method based on a cell-free mMIMO network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598870A (zh) * 2019-09-02 2019-12-20 Shenzhen Qianhai WeBank Co., Ltd. Federated learning method and device
WO2020115273A1 (fr) * 2018-12-07 2020-06-11 Telefonaktiebolaget Lm Ericsson (Publ) Prediction of communication performance of a network using federated learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020115273A1 (fr) * 2018-12-07 2020-06-11 Telefonaktiebolaget Lm Ericsson (Publ) Prediction of communication performance of a network using federated learning
CN110598870A (zh) * 2019-09-02 2019-12-20 Shenzhen Qianhai WeBank Co., Ltd. Federated learning method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TAKAYUKI NISHIO; RYO YONETANI: "Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 23 April 2018 (2018-04-23), 201 Olin Library Cornell University Ithaca, NY 14853 , XP081143755 *
YUSUKE KODA; KOJI YAMAMOTO; TAKAYUKI NISHIO; MASAHIRO MORIKURA: "Differentially Private AirComp Federated Learning with Power Adaptation Harnessing Receiver Noise", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 14 April 2020 (2020-04-14), 201 Olin Library Cornell University Ithaca, NY 14853 , XP081644309 *
ZHAOHUI YANG; MINGZHE CHEN; WALID SAAD; CHOONG SEON HONG; MOHAMMAD SHIKH-BAHAEI: "Energy Efficient Federated Learning Over Wireless Communication Networks", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 6 November 2019 (2019-11-06), 201 Olin Library Cornell University Ithaca, NY 14853 , XP081526234 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116009677A (zh) * 2022-09-02 2023-04-25 Nantong University Federated-learning device-side energy consumption optimization method based on a cell-free mMIMO network
CN116009677B (zh) * 2022-09-02 2023-10-03 Nantong University Federated-learning device-side energy consumption optimization method based on a cell-free mMIMO network

Similar Documents

Publication Publication Date Title
WO2021112360A1 (fr) Procédé et dispositif d'estimation de canal dans un système de communication sans fil
WO2022050432A1 (fr) Procédé et dispositif d'exécution d'un apprentissage fédéré dans un système de communication sans fil
WO2021256584A1 (fr) Procédé d'émission ou de réception de données dans un système de communication sans fil et appareil associé
WO2022075493A1 (fr) Procédé de réalisation d'un apprentissage par renforcement par un dispositif de communication v2x dans un système de conduite autonome
WO2022019352A1 (fr) Procédé et appareil de transmission et de réception de signal pour un terminal et une station de base dans un système de communication sans fil
WO2022054985A1 (fr) Procédé et appareil d'émission et de réception de signaux par un équipement utilisateur, et station de base dans un système de communication sans fil
WO2022025321A1 (fr) Procédé et dispositif de randomisation de signal d'un appareil de communication
WO2021251523A1 (fr) Procédé et dispositif permettant à un ue et à une station de base d'émettre et de recevoir un signal dans un système de communication sans fil
WO2022054981A1 (fr) Procédé et dispositif d'exécution d'apprentissage fédéré par compression
WO2022025316A1 (fr) Procédé et appareil pour transmettre et recevoir un signal en utilisant de multiples antennes dans un système de communication sans fil
WO2022014732A1 (fr) Procédé et appareil d'exécution d'un apprentissage fédéré dans un système de communication sans fil
WO2022014751A1 (fr) Procédé et appareil de génération de mots uniques pour estimation de canal dans le domaine fréquentiel dans un système de communication sans fil
WO2021251511A1 (fr) Procédé d'émission/réception de signal de liaison montante de bande de fréquences haute dans un système de communication sans fil, et dispositif associé
WO2022059808A1 (fr) Procédé de réalisation d'un apprentissage par renforcement par un dispositif de communication v2x dans un système de conduite autonome
WO2022010001A1 (fr) Procédé et dispositif de communication basés sur un réseau neuronal
WO2022014731A1 (fr) Procédé et dispositif de planification pour apprentissage fédéré basé sur aircomp
WO2021256586A1 (fr) Dispositif et procédé d'estimation d'angle de signal de réception
WO2022050528A1 (fr) Procédé et appareil pour l'exécution d'une resélection de cellule dans un système de communications sans fil
WO2021261611A1 (fr) Procédé et dispositif d'exécution d'un apprentissage fédéré dans un système de communication sans fil
WO2021256585A1 (fr) Procédé et dispositif pour la transmission/la réception d'un signal dans un système de communication sans fil
WO2022092353A1 (fr) Procédé et appareil permettant d'effectuer un codage et un décodage de canal dans un système de communication sans fil
WO2022050434A1 (fr) Procédé et appareil pour effectuer un transfert intercellulaire dans système de communication sans fil
WO2022039287A1 (fr) Procédé permettant à un équipement utilisateur et à une station de base de transmettre/recevoir des signaux dans un système de communication sans fil, et appareil
WO2022080530A1 (fr) Procédé et dispositif pour émettre et recevoir des signaux en utilisant de multiples antennes dans un système de communication sans fil
WO2022045402A1 (fr) Procédé et dispositif permettant à un terminal et une station de base d'émettre et recevoir un signal dans un système de communication sans fil

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20945430

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20945430

Country of ref document: EP

Kind code of ref document: A1