WO2023068398A1 - Method and device for supporting semantic communication in a wireless communication system - Google Patents

Method and device for supporting semantic communication in a wireless communication system

Info

Publication number
WO2023068398A1
Authority
WO
WIPO (PCT)
Prior art keywords
graph
subgraphs
latent
wireless device
similarity
Prior art date
Application number
PCT/KR2021/014691
Other languages
English (en)
Korean (ko)
Inventor
정익주
이상림
전기준
이태현
조민석
Original Assignee
LG Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc.
Priority to PCT/KR2021/014691
Publication of WO2023068398A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/08 Learning methods
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/02 Knowledge representation; Symbolic representation

Definitions

  • It relates to a method and apparatus for supporting semantic communication in a wireless communication system.
  • Mobile communication systems were developed to provide voice services while ensuring user mobility.
  • Mobile communication systems have since expanded their scope from voice to data services.
  • Today, the explosive growth of traffic causes resource shortages and users demand higher-speed services, so a more advanced mobile communication system is required.
  • The requirements of the next-generation mobile communication system are to support explosive data traffic, a drastic increase in per-user transmission rate, a significantly larger number of connected devices, very low end-to-end latency, and high energy efficiency.
  • To this end, various technologies such as dual connectivity, massive MIMO (Massive Multiple Input Multiple Output), in-band full duplex, non-orthogonal multiple access (NOMA), super-wideband support, and device networking are being studied.
  • Reliability considered in the existing communication scheme relates to how accurately a radio signal (i.e., a complex-valued modulation symbol) can be transmitted.
  • In semantic communication, by contrast, what matters is how accurately the meaning of a transmitted message can be conveyed. That is, to support semantic communication, a semantic message indicating a meaning must be interpreted as the same meaning at the transmitting and receiving sides.
  • Shared knowledge including a plurality of pieces of information may be created by sharing a graph corresponding to the local knowledge possessed by a source and a destination. Having the same knowledge (that is, shared knowledge) at the source and the destination makes it possible to interpret the concepts (semantic messages) delivered by each more correctly.
  • However, this method has the following limitations in generating shared knowledge.
  • Since the size of graph-based knowledge (graph-based information) is very large, the resources required to convey that information also increase. In addition, the amount of computation for comparing graph-based information grows greatly, so the meaning of a message transmitted through semantic communication cannot be correctly interpreted or delays occur, resulting in poorer performance than conventional communication. A graph neural network (GNN) may be utilized for the above problem. However, even with a GNN, a large amount of computation is required to create a latent vector for the entire graph data.
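To make the computational concern concrete, here is a minimal, hedged sketch (not the patent's specific model) of GNN message passing followed by a mean-pooling readout: every layer touches every node and edge, so producing a latent vector for the entire graph scales with graph size.

```python
import numpy as np

def message_passing_layer(h, edges, W):
    """One illustrative GNN layer: each node averages its neighbors'
    features and applies a shared linear transform. Visits every edge once."""
    n = h.shape[0]
    agg = np.zeros_like(h)
    deg = np.zeros(n)
    for u, v in edges:            # O(|E|) work per layer
        agg[v] += h[u]
        deg[v] += 1
    deg[deg == 0] = 1             # avoid division by zero for isolated nodes
    return np.tanh((agg / deg[:, None]) @ W)

def graph_latent(h, edges, W, layers=2):
    """Whole-graph latent vector via mean-pooling readout:
    cost is O(layers * (|V| + |E|)) before the readout over all |V| nodes."""
    for _ in range(layers):
        h = message_passing_layer(h, edges, W)
    return h.mean(axis=0)

# toy graph: 4 nodes in a cycle, feature dimension 3
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 3))
z = graph_latent(h, [(0, 1), (1, 2), (2, 3), (3, 0)], W)
print(z.shape)  # (3,)
```

Because the per-layer cost is linear in the number of nodes and edges, restricting the encoder to small extracted sub-graphs (as proposed below) directly bounds this cost.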
  • Accordingly, the purpose of this specification is to propose a method for solving the above problems in supporting GNN-based semantic communication.
  • a method performed by a first wireless device to support semantic communication in a wireless communication system includes extracting one or more sub-graphs from preset graph data, generating one or more first latent vectors based on the one or more sub-graphs, receiving information including one or more second latent vectors from a second wireless device, calculating a similarity between the one or more first latent vectors and the one or more second latent vectors based on a similarity function, and updating the one or more sub-graphs based on the calculated similarity.
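The claimed flow can be sketched as follows. This is an illustrative sketch under assumed placeholders: the score function, the encoder, and the similarity threshold below are inventions for the example, not taken from the patent.

```python
import numpy as np

def extract_subgraphs(graph, score_fn, k=2):
    """Keep the k highest-scoring nodes; each seed stands in for a subgraph here."""
    return sorted(graph, key=score_fn, reverse=True)[:k]

def encode(subgraph, rng):
    """Stand-in for the GNN encoder: map a subgraph to a latent vector."""
    return rng.normal(size=4)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
graph = ["n0", "n1", "n2", "n3"]
subs = extract_subgraphs(graph, score_fn=lambda n: int(n[1:]))
first = [encode(s, rng) for s in subs]                   # local latent vectors
second = [v + 0.05 * rng.normal(size=4) for v in first]  # "received" from the peer
sims = [cosine(a, b) for a, b in zip(first, second)]     # index-matched similarity
updated = [s for s, sim in zip(subs, sims) if sim < 0.99]  # update dissimilar subgraphs
print(len(sims))  # 2
```

The second latent vectors are simulated with small noise purely so the similarity step has something to compare; in the claim they arrive over the air from the second wireless device.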
  • the one or more subgraphs are extracted based on control information.
  • the control information includes information on at least one of i) a score function for extracting a sub-graph related to a part of the knowledge represented by the preset graph data, ii) a graph neural network (GNN) model for processing the sub-graph, or iii) a similarity function related to a latent vector generated from the sub-graph.
  • the one or more sub-graphs are based on specific node sets determined, among the node sets included in the preset graph data, in descending order of the scores calculated with the score function.
  • the latent vector is identified by an index indicating a ranking based on the calculated score.
  • the similarity is calculated between a first latent vector and a second latent vector that have the same index indicating the ranking.
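As a hedged illustration of this index-matched comparison, the sketch below pairs latent vectors by their ranking index and applies cosine similarity; cosine is one plausible, assumed choice, since the claim leaves the similarity function configurable via the control information.

```python
import numpy as np

def index_matched_similarity(first, second):
    """Compare latent vectors pairwise by ranking index. Only indices
    present on both sides are compared (cosine similarity is illustrative)."""
    sims = {}
    for idx in first.keys() & second.keys():
        a, b = first[idx], second[idx]
        sims[idx] = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sims

# index 0: identical vectors; index 1: orthogonal vectors
first = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
second = {0: np.array([1.0, 0.0]), 1: np.array([1.0, 0.0])}
sims = index_matched_similarity(first, second)
```

A low similarity at some index then flags that the corresponding sub-graph needs updating.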
  • the updating of the one or more subgraphs may include restoring one or more subgraphs from the one or more second latent vectors, updating the one or more subgraphs based on a comparison between the restored one or more subgraphs and the extracted one or more subgraphs, and reflecting the updated one or more subgraphs in the preset graph data.
  • Updating the one or more subgraphs may be performed by adding, modifying, or deleting a node or an edge included in each of the one or more subgraphs.
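A minimal sketch of such an update, assuming sub-graphs are represented as node/edge sets and an illustrative (not claimed) policy of adopting the peer's additions and deletions:

```python
def update_subgraph(local, restored):
    """Diff two subgraphs given as (nodes, edges) sets and apply the
    peer's node additions/deletions to the local copy (illustrative policy)."""
    l_nodes, l_edges = local
    r_nodes, r_edges = restored
    added_nodes = r_nodes - l_nodes      # nodes to add
    removed_nodes = l_nodes - r_nodes    # nodes to delete
    nodes = (l_nodes | added_nodes) - removed_nodes
    # keep only edges whose endpoints both survive
    edges = {(u, v) for (u, v) in (l_edges | r_edges) if u in nodes and v in nodes}
    return nodes, edges

local = ({"a", "b", "c"}, {("a", "b"), ("b", "c")})
restored = ({"a", "b", "d"}, {("a", "b"), ("b", "d")})
nodes, edges = update_subgraph(local, restored)
print(sorted(nodes))  # ['a', 'b', 'd']
```

Modifying a node's attributes (rather than adding/deleting it) would fit the same pattern with attributed node records instead of bare labels.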
  • Performance information related to supporting the semantic communication may be transmitted from the first wireless device to the second wireless device, and may include information indicating whether the first wireless device supports an operation related to graph data.
  • the control information may be transmitted from the second wireless device to the first wireless device based on the performance information related to supporting the semantic communication indicating that an operation related to graph data is supported by the first wireless device.
  • the GNN model may be related to at least one of an operation of generating a latent vector from graph data or an operation of restoring a sub-graph from a latent vector.
  • the control information may further include at least one of i) the number of specific node sets determined in descending order of the calculated scores, ii) the number of hops associated with the one or more subgraphs, iii) a positional-encoding method associated with the one or more subgraphs, or iv) a transmission period of the information including the one or more second latent vectors.
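For illustration only, these four fields could be grouped in a simple structure; the field names, encodings, and values below are hypothetical, not from the patent.

```python
# Hypothetical container for the claimed control-information fields;
# every name and value here is an assumption for illustration.
control_info = {
    "num_seed_node_sets": 4,             # i) number of top-scoring node sets
    "num_hops": 2,                       # ii) hop count for subgraph extraction
    "positional_encoding": "laplacian",  # iii) positional-encoding method
    "tx_period_ms": 100,                 # iv) latent-vector transmission period
}
print(control_info["num_hops"])  # 2
```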
  • Each of the one or more subgraphs may be extracted to include one or more neighboring nodes based on the number of hops from each node of any one of the specific node sets.
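The hop-based extraction can be sketched as a breadth-first search that stops expanding at the configured hop count (an assumed but natural reading of the claim):

```python
from collections import deque

def k_hop_subgraph(adj, seed, k):
    """Collect all nodes within k hops of `seed` by breadth-first search,
    plus the edges among them (undirected adjacency dict assumed)."""
    dist = {seed: 0}
    q = deque([seed])
    while q:
        u = q.popleft()
        if dist[u] == k:          # do not expand beyond the hop budget
            continue
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    nodes = set(dist)
    edges = {frozenset((u, v)) for u in nodes for v in adj.get(u, ()) if v in nodes}
    return nodes, edges

# path graph 0 - 1 - 2 - 3; 2 hops from node 0 reaches {0, 1, 2}
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
nodes, _ = k_hop_subgraph(adj, seed=0, k=2)
print(sorted(nodes))  # [0, 1, 2]
```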
  • the preset graph data may be updated based on the updated one or more subgraphs.
  • each of the one or more subgraphs may be extracted to include location information related to any one of the specific node sets.
  • a first wireless device supporting semantic communication includes one or more transceivers, one or more processors controlling the one or more transceivers, and one or more memories operably connectable to the one or more processors, accessible to the one or more processors, and storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations.
  • the operations include extracting one or more sub-graphs from preset graph data, generating one or more first latent vectors based on the one or more sub-graphs, receiving information including one or more second latent vectors from a second wireless device, calculating a similarity between the one or more first latent vectors and the one or more second latent vectors based on a similarity function, and updating the one or more subgraphs based on the calculated similarity.
  • the one or more subgraphs are extracted based on control information.
  • the control information includes information on at least one of i) a score function for extracting a sub-graph related to a part of the knowledge represented by the preset graph data, ii) a graph neural network (GNN) model for processing the sub-graph, or iii) a similarity function related to a latent vector generated from the sub-graph.
  • the one or more sub-graphs are based on specific node sets determined, among the node sets included in the preset graph data, in descending order of the scores calculated with the score function.
  • the latent vector is identified by an index indicating a ranking based on the calculated score.
  • the similarity is calculated between a first latent vector and a second latent vector that have the same index indicating the ranking.
  • the first wireless device may be a user equipment (UE) or a base station (BS), and the second wireless device may be a user equipment (UE) or a base station (BS).
  • An apparatus includes one or more memories and one or more processors functionally coupled to the one or more memories.
  • the one or more memories store instructions that, when executed by the one or more processors, cause the one or more processors to perform operations.
  • the operations include extracting one or more sub-graphs from preset graph data, generating one or more first latent vectors based on the one or more sub-graphs, receiving information including one or more second latent vectors from a second wireless device, calculating a similarity between the one or more first latent vectors and the one or more second latent vectors based on a similarity function, and updating the one or more subgraphs based on the calculated similarity.
  • the one or more subgraphs are extracted based on control information.
  • the control information includes information on at least one of i) a score function for extracting a sub-graph related to a part of the knowledge represented by the preset graph data, ii) a graph neural network (GNN) model for processing the sub-graph, or iii) a similarity function related to a latent vector generated from the sub-graph.
  • the one or more sub-graphs are based on specific node sets determined, among the node sets included in the preset graph data, in descending order of the scores calculated with the score function.
  • the latent vector is identified by an index indicating a ranking based on the calculated score.
  • the similarity is calculated between a first latent vector and a second latent vector that have the same index indicating the ranking.
  • One or more non-transitory computer-readable media store one or more instructions.
  • the one or more instructions, when executed by one or more processors, cause the one or more processors to perform operations.
  • the operations include extracting one or more sub-graphs from preset graph data, generating one or more first latent vectors based on the one or more sub-graphs, receiving information including one or more second latent vectors from a second wireless device, calculating a similarity between the one or more first latent vectors and the one or more second latent vectors based on a similarity function, and updating the one or more subgraphs based on the calculated similarity.
  • the one or more subgraphs are extracted based on control information.
  • the control information includes information on at least one of i) a score function for extracting a sub-graph related to a part of the knowledge represented by the preset graph data, ii) a graph neural network (GNN) model for processing the sub-graph, or iii) a similarity function related to a latent vector generated from the sub-graph.
  • the one or more sub-graphs are based on specific node sets determined, among the node sets included in the preset graph data, in descending order of the scores calculated with the score function.
  • the latent vector is identified by an index indicating a ranking based on the calculated score.
  • the similarity is calculated between a first latent vector and a second latent vector that have the same index indicating the ranking.
  • one or more first latent vectors are generated from sub-graphs of the entire graph (i.e., one or more subgraphs extracted from the entire graph data).
  • a similarity between the one or more first latent vectors and the one or more second latent vectors transmitted from the second wireless device is calculated.
  • a subgraph is updated based on the calculated similarity.
  • one or more subgraphs are restored from the one or more second latent vectors based on the calculated similarity, and the extracted one or more subgraphs are updated based on a comparison result between the restored one or more subgraphs and the extracted one or more subgraphs.
  • One or more updated subgraphs are reflected in the entire graph.
  • the following effects are obtained in generating shared knowledge from the knowledge possessed by each of a source and a destination for supporting semantic communication.
  • transmission of the information including the second latent vectors, and updating of the subgraphs based on that transmission, may be performed periodically. That is, the subgraphs can be updated so that the similarity between the latent vectors of the first wireless device (e.g., a terminal) and those of the second wireless device (e.g., a base station) is maintained at a certain level.
  • through such periodic updates, the quality of the shared knowledge possessed by the source and the destination can be improved enough to support normal semantic communication, and can be managed so that this level of performance is maintained.
  • since transmitted and received information represents meaning based on a graph, even if an error occurs at the symbol level during message transmission, the message can be interpreted with the correct meaning based on the shared knowledge maintained through knowledge updates between the transmitting and receiving ends. Accordingly, robustness against errors caused by channel conditions can be improved compared to the conventional communication method.
  • FIG. 1 is a diagram showing an example of a communication system applicable to the present specification.
  • FIG. 2 is a diagram showing an example of a wireless device applicable to the present specification.
  • FIG. 3 is a diagram illustrating a method of processing a transmission signal applicable to the present specification.
  • FIG. 4 is a diagram showing another example of a wireless device applicable to the present specification.
  • FIG. 5 is a diagram illustrating an example of a portable device applicable to the present specification.
  • FIG. 6 is a diagram illustrating physical channels applicable to the present specification and a signal transmission method using them.
  • FIG. 7 is a diagram showing an example of a perceptron structure.
  • FIG. 8 is a diagram showing an example of a multilayer perceptron structure.
  • FIG. 9 is a diagram showing an example of a deep neural network.
  • FIG. 10 is a diagram showing an example of a convolutional neural network.
  • FIG. 11 is a diagram showing an example of a filter operation in a convolutional neural network.
  • FIG. 12 shows an example of a neural network structure in which a cyclic loop exists.
  • FIG. 13 shows an example of an operating structure of a recurrent neural network.
  • FIG. 14 is a diagram illustrating a communication model for each level to which an embodiment proposed in this specification can be applied.
  • FIG. 15 is a diagram for explaining the operation of a graph neural network to which a method according to an embodiment of the present specification can be applied.
  • FIG. 16 shows the definition of a GNN model to which a method according to an embodiment of the present specification can be applied.
  • FIG. 17 illustrates an operation related to a GNN model to which a method according to an embodiment of the present specification may be applied.
  • FIG. 18 is a flowchart illustrating an operation for updating related to processing of graph data according to an embodiment of the present specification.
  • FIG. 19A is a flowchart for explaining initialization related to graph data update for semantic communication according to an embodiment of the present specification.
  • FIG. 19B is a diagram for explaining an operation related to a score function according to an embodiment of the present specification.
  • FIG. 19C is a table illustrating a heuristic method related to a score function according to an embodiment of the present specification.
  • FIG. 20 illustrates a case where a position encoding method according to an embodiment of the present specification is to be applied.
  • FIGS. 21A and 21B are flowcharts for explaining procedures related to subgraph extraction and latent vector generation/transmission according to an embodiment of the present specification.
  • FIG. 22 is a diagram for explaining an operation related to subgraph extraction according to an embodiment of the present specification.
  • FIG. 23 is a flowchart for explaining a procedure related to processing of received latent vectors according to an embodiment of the present specification.
  • FIG. 24 is a flowchart for explaining a procedure related to updating graph data according to an embodiment of the present specification.
  • FIG. 25 is a flowchart for explaining a method performed by a first wireless device to support semantic communication in a wireless communication system according to an embodiment of the present specification.
  • each component or feature may be considered optional unless explicitly stated otherwise.
  • Each component or feature may be implemented in a form not combined with other components or features.
  • the embodiments of the present specification may be configured by combining some components and/or features. The order of operations described in the embodiments of this specification may be changed. Some components or features of one embodiment may be included in another embodiment, or may be replaced with corresponding components or features of another embodiment.
  • in this specification, a base station is meant as a terminal node of a network that communicates directly with a mobile station.
  • a specific operation described herein as being performed by a base station may be performed by an upper node of the base station in some cases.
  • the term 'base station' may be replaced by terms such as fixed station, Node B, eNode B, gNode B, ng-eNB, advanced base station (ABS), or access point.
  • the term 'terminal' may be replaced with terms such as user equipment (UE), mobile station (MS), subscriber station (SS), mobile subscriber station (MSS), mobile terminal, or advanced mobile station (AMS).
  • the transmitting end refers to a fixed and/or mobile node providing data service or voice service
  • the receiving end refers to a fixed and/or mobile node receiving data service or voice service. Therefore, in the case of uplink, the mobile station can be a transmitter and the base station can be a receiver. Similarly, in the case of downlink, the mobile station may be a receiving end and the base station may be a transmitting end.
  • Embodiments of the present specification may be supported by standard documents disclosed for at least one wireless access system, such as an IEEE 802.xx system, a 3rd Generation Partnership Project (3GPP) system, a 3GPP Long Term Evolution (LTE) system, a 3GPP 5G (5th generation) NR (New Radio) system, and a 3GPP2 system. In particular, the embodiments of the present specification may be supported by the 3GPP TS (technical specification) 38.211, 3GPP TS 38.212, 3GPP TS 38.213, 3GPP TS 38.321, and 3GPP TS 38.331 documents.
  • embodiments of the present specification may be applied to other wireless access systems, and are not limited to the above-described systems.
  • it may also be applicable to a system applied after the 3GPP 5G NR system, and is not limited to a specific system.
  • Embodiments of the present specification may be applied to various wireless access systems, such as code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), and single carrier frequency division multiple access (SC-FDMA).
  • LTE refers to technology of 3GPP TS 36.xxx Release 8 or later.
  • LTE technology of 3GPP TS 36.xxx Release 10 or later is referred to as LTE-A.
  • LTE technology of 3GPP TS 36.xxx Release 13 or later may be referred to as LTE-A Pro.
  • 3GPP NR may mean technology after TS 38.xxx Release 15.
  • 3GPP 6G may mean technology after TS Release 17 and/or Release 18.
  • "xxx" means a standard document detail number.
  • LTE/NR/6G may be collectively referred to as a 3GPP system.
  • a communication system 100 applied to the present specification includes a wireless device, a base station, and a network.
  • the wireless device means a device that performs communication using a radio access technology (eg, 5G NR, LTE), and may be referred to as a communication/wireless/5G device.
  • the wireless device includes a robot 100a, vehicles 100b-1 and 100b-2, an extended reality (XR) device 100c, a hand-held device 100d, a home appliance 100e, an Internet of Things (IoT) device 100f, and an artificial intelligence (AI) device/server 100g.
  • the vehicle may include a vehicle equipped with a wireless communication function, an autonomous vehicle, a vehicle capable of performing inter-vehicle communication, and the like.
  • the vehicles 100b-1 and 100b-2 may include an unmanned aerial vehicle (UAV) (eg, a drone).
  • the XR device 100c includes augmented reality (AR)/virtual reality (VR)/mixed reality (MR) devices, and may be implemented in the form of a head-mounted device (HMD), a head-up display (HUD) installed in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, digital signage, a vehicle, a robot, and the like.
  • the mobile device 100d may include a smart phone, a smart pad, a wearable device (eg, a smart watch, a smart glass), a computer (eg, a laptop computer), and the like.
  • the home appliance 100e may include a TV, a refrigerator, a washing machine, and the like.
  • the IoT device 100f may include a sensor, a smart meter, and the like.
  • the base station 120 and the network 130 may also be implemented as a wireless device, and a specific wireless device 120a may operate as a base station/network node to other wireless devices.
  • the wireless devices 100a to 100f may be connected to the network 130 through the base station 120.
  • AI technology may be applied to the wireless devices 100a to 100f, and the wireless devices 100a to 100f may be connected to the AI server 100g through the network 130.
  • the network 130 may be configured using a 3G network, a 4G (eg LTE) network, or a 5G (eg NR) network.
  • the wireless devices 100a to 100f may communicate with each other through the base station 120/network 130, but may also communicate directly (e.g., sidelink communication) without going through the base station 120/network 130.
  • the vehicles 100b-1 and 100b-2 may perform direct communication (eg, vehicle to vehicle (V2V)/vehicle to everything (V2X) communication).
  • the IoT device 100f may directly communicate with other IoT devices (eg, sensor) or other wireless devices 100a to 100f.
  • Wireless communication/connections 150a, 150b, and 150c may be established between the wireless devices 100a to 100f and the base station 120, between the wireless devices 100a to 100f, and between base stations 120.
  • the wireless communication/connections include various types of communication such as uplink/downlink communication 150a, sidelink communication 150b (or D2D communication), and inter-base-station communication 150c (e.g., relay, integrated access and backhaul (IAB)), and may be performed through a radio access technology (e.g., 5G NR).
  • through the wireless communication/connections, a wireless device and a base station, and a wireless device and another wireless device, can transmit/receive radio signals to each other.
  • the wireless communication/connections 150a, 150b, and 150c may transmit/receive signals through various physical channels.
  • to this end, at least a part of various configuration-information setting processes, various signal processing processes (e.g., channel encoding/decoding, modulation/demodulation, resource mapping/demapping, etc.), and resource allocation processes for transmitting/receiving radio signals may be performed.
  • FIG. 2 is a diagram illustrating an example of a wireless device applicable to the present specification.
  • a first wireless device 200a and a second wireless device 200b may transmit and receive radio signals through various wireless access technologies (eg, LTE and NR).
  • here, {the first wireless device 200a, the second wireless device 200b} may correspond to {the wireless device 100x, the base station 120} and/or {the wireless device 100x, the wireless device 100x} of FIG. 1.
  • the first wireless device 200a includes one or more processors 202a and one or more memories 204a, and may further include one or more transceivers 206a and/or one or more antennas 208a.
  • the processor 202a controls the memory 204a and/or the transceiver 206a and may be configured to implement the descriptions, functions, procedures, suggestions, methods and/or operational flow diagrams disclosed herein.
  • the processor 202a may process information in the memory 204a to generate first information/signal, and transmit a radio signal including the first information/signal through the transceiver 206a.
  • the processor 202a may receive a radio signal including the second information/signal through the transceiver 206a and store information obtained from signal processing of the second information/signal in the memory 204a.
  • the memory 204a may be connected to the processor 202a and may store various information related to the operation of the processor 202a.
  • the memory 204a may store software code including instructions for performing some or all of the processes controlled by the processor 202a, or for performing the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
  • the processor 202a and the memory 204a may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
  • the transceiver 206a may be coupled to the processor 202a and may transmit and/or receive wireless signals through one or more antennas 208a.
  • the transceiver 206a may include a transmitter and/or a receiver.
  • the transceiver 206a may be used interchangeably with a radio frequency (RF) unit.
  • a wireless device may mean a communication modem/circuit/chip.
  • the second wireless device 200b includes one or more processors 202b, one or more memories 204b, and may further include one or more transceivers 206b and/or one or more antennas 208b.
  • the processor 202b controls the memory 204b and/or the transceiver 206b and may be configured to implement the descriptions, functions, procedures, suggestions, methods and/or operational flow diagrams disclosed herein.
  • the processor 202b may process information in the memory 204b to generate third information/signal, and transmit a radio signal including the third information/signal through the transceiver 206b.
  • the processor 202b may receive a radio signal including the fourth information/signal through the transceiver 206b and then store information obtained from signal processing of the fourth information/signal in the memory 204b.
  • the memory 204b may be connected to the processor 202b and may store various information related to the operation of the processor 202b. For example, the memory 204b may store software code including instructions for performing some or all of the processes controlled by the processor 202b, or for performing the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
  • the processor 202b and the memory 204b may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (eg, LTE, NR).
  • the transceiver 206b may be coupled to the processor 202b and may transmit and/or receive wireless signals through one or more antennas 208b.
  • the transceiver 206b may include a transmitter and/or a receiver.
  • the transceiver 206b may be used interchangeably with an RF unit.
  • a wireless device may mean a communication modem/circuit/chip.
  • one or more protocol layers may be implemented by one or more processors 202a, 202b.
  • the one or more processors 202a and 202b may implement one or more layers (e.g., functional layers such as PHY (physical), MAC (medium access control), RLC (radio link control), PDCP (packet data convergence protocol), RRC (radio resource control), and SDAP (service data adaptation protocol)).
  • One or more processors 202a, 202b may generate one or more protocol data units (PDUs) and/or one or more service data units (SDUs) according to the descriptions, functions, procedures, proposals, methods, and/or operational flow charts disclosed herein.
  • processors 202a, 202b may generate messages, control information, data or information according to the descriptions, functions, procedures, proposals, methods and/or operational flow diagrams disclosed herein.
  • One or more processors 202a and 202b may generate PDUs, SDUs, messages, control information, data, or signals (eg, baseband signals) containing information according to the functions, procedures, proposals, and/or methods disclosed herein, and may provide them to one or more transceivers 206a and 206b.
  • One or more processors 202a and 202b may receive signals (eg, baseband signals) from one or more transceivers 206a and 206b, and may obtain PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein.
  • One or more processors 202a, 202b may be referred to as a controller, microcontroller, microprocessor or microcomputer.
  • One or more processors 202a and 202b may be implemented by hardware, firmware, software, or a combination thereof.
  • As an example, one or more processors 202a and 202b may be implemented using one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), and/or field programmable gate arrays (FPGAs).
  • firmware or software may be implemented to include modules, procedures, functions, and the like.
  • Firmware or software configured to perform the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein may be included in one or more processors 202a and 202b, or may be stored in one or more memories 204a and 204b and driven by the one or more processors 202a and 202b.
  • the descriptions, functions, procedures, suggestions, methods and/or operational flow diagrams disclosed herein may be implemented using firmware or software in the form of codes, instructions and/or sets of instructions.
  • One or more memories 204a and 204b may be coupled to one or more processors 202a and 202b, and may store various types of data, signals, messages, information, programs, codes, commands, and/or instructions.
  • One or more memories 204a and 204b may consist of read only memory (ROM), random access memory (RAM), erasable programmable read only memory (EPROM), flash memory, hard drives, registers, cache memory, computer readable storage media, and/or combinations thereof.
  • One or more memories 204a, 204b may be located internally and/or externally to one or more processors 202a, 202b.
  • one or more memories 204a, 204b may be connected to one or more processors 202a, 202b through various technologies such as wired or wireless connections.
  • One or more transceivers 206a, 206b may transmit user data, control information, radio signals/channels, etc. referred to in the methods and/or operational flow charts herein, etc. to one or more other devices.
  • One or more transceivers 206a and 206b may receive user data, control information, radio signals/channels, etc. mentioned in the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein from one or more other devices.
  • one or more transceivers 206a and 206b may be connected to one or more processors 202a and 202b and transmit and receive radio signals.
  • one or more processors 202a, 202b may control one or more transceivers 206a, 206b to transmit user data, control information, or radio signals to one or more other devices.
  • one or more processors 202a, 202b may control one or more transceivers 206a, 206b to receive user data, control information, or radio signals from one or more other devices.
  • One or more transceivers 206a and 206b may be coupled with one or more antennas 208a and 208b, and may be configured to transmit and receive, through the one or more antennas, user data, control information, radio signals/channels, etc. mentioned in the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein.
  • one or more antennas may be a plurality of physical antennas or a plurality of logical antennas (eg, antenna ports).
  • One or more transceivers 206a and 206b may convert received radio signals/channels, etc. from RF band signals into baseband signals in order to process the received user data, control information, radio signals/channels, etc. using one or more processors 202a and 202b.
  • One or more transceivers 206a and 206b may convert user data, control information, and radio signals/channels processed by one or more processors 202a and 202b from baseband signals to RF band signals.
  • one or more transceivers 206a, 206b may include (analog) oscillators and/or filters.
  • the transmitted signal may be processed by a signal processing circuit.
  • the signal processing circuit 300 may include a scrambler 310, a modulator 320, a layer mapper 330, a precoder 340, a resource mapper 350, and a signal generator 360.
  • the operation/function of FIG. 3 may be performed by the processors 202a and 202b and/or the transceivers 206a and 206b of FIG. 2 .
  • the hardware elements of FIG. 3 may be implemented in the processors 202a and 202b and/or the transceivers 206a and 206b of FIG. 2.
  • blocks 310 to 350 may be implemented in the processors 202a and 202b of FIG. 2 and block 360 may be implemented in the transceivers 206a and 206b of FIG. 2 , but are not limited to the above-described embodiment.
  • the codeword may be converted into a radio signal through the signal processing circuit 300 of FIG. 3 .
  • a codeword is an encoded bit sequence of an information block.
  • Information blocks may include transport blocks (eg, UL-SCH transport blocks, DL-SCH transport blocks).
  • the radio signal may be transmitted through various physical channels (eg, PUSCH, PDSCH) of FIG. 6 .
  • the codeword may be converted into a scrambled bit sequence by the scrambler 310.
  • a scrambling sequence used for scrambling is generated based on an initialization value, and the initialization value may include ID information of a wireless device.
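The scrambling step above can be sketched as follows. This is a minimal illustration rather than the actual Gold-sequence scrambler of LTE/NR: Python's seeded PRNG stands in for the sequence generator, and the device-ID value is hypothetical.

```python
import random

def scramble(bits, c_init):
    """XOR a bit sequence with a pseudo-random sequence seeded by c_init.

    A stand-in for a Gold-sequence scrambler: the PRNG plays the role of
    the sequence generator, seeded by an initialization value derived
    from the wireless device ID.
    """
    rng = random.Random(c_init)
    return [b ^ rng.randint(0, 1) for b in bits]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
device_id = 0x2A5                      # hypothetical ID-based init value
scrambled = scramble(bits, device_id)
# Descrambling is the same XOR with the same seed.
recovered = scramble(scrambled, device_id)
assert recovered == bits
```

Because XOR is its own inverse, the receiver regenerates the same sequence from the same initialization value and recovers the original bits.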
  • the scrambled bit sequence may be modulated into a modulation symbol sequence by modulator 320.
  • the modulation method may include pi/2-binary phase shift keying (pi/2-BPSK), m-phase shift keying (m-PSK), m-quadrature amplitude modulation (m-QAM), and the like.
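As an illustration of the modulation step, the sketch below maps bit pairs to unit-power QPSK symbols. The exact constellation labelling is an assumption for illustration, not quoted from a specification.

```python
import math

def qpsk_modulate(bits):
    """Map bit pairs to Gray-coded QPSK symbols with unit average power.

    Uses the common mapping d = ((1 - 2*b0) + 1j*(1 - 2*b1)) / sqrt(2);
    the labelling is assumed here for illustration.
    """
    assert len(bits) % 2 == 0
    s = 1 / math.sqrt(2)
    return [complex(s * (1 - 2 * bits[i]), s * (1 - 2 * bits[i + 1]))
            for i in range(0, len(bits), 2)]

symbols = qpsk_modulate([0, 0, 0, 1, 1, 0, 1, 1])
assert len(symbols) == 4
# Every QPSK symbol lies on the unit circle.
assert all(abs(abs(x) - 1.0) < 1e-9 for x in symbols)
```

Higher-order schemes such as m-QAM follow the same pattern with more bits per symbol.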
  • the complex modulation symbol sequence may be mapped to one or more transport layers by the layer mapper 330. Modulation symbols of each transport layer may be mapped to corresponding antenna port(s) by the precoder 340 (precoding).
  • the output z of the precoder 340 can be obtained by multiplying the output y of the layer mapper 330 by the N*M precoding matrix W.
  • N is the number of antenna ports and M is the number of transport layers.
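The precoding operation z = W·y described above can be illustrated as follows; the 2×2 precoding matrix and symbol values are toy values chosen for illustration, not taken from the disclosure.

```python
def precode(W, y):
    """Multiply the layer-mapper output y (M layers) by the N x M
    precoding matrix W, giving one stream per antenna port (z = W @ y)."""
    return [sum(W[n][m] * y[m] for m in range(len(y))) for n in range(len(W))]

# Illustrative values: N = 2 antenna ports, M = 2 transport layers.
W = [[1, 0],
     [0, 1j]]          # a toy unitary precoder (assumed for illustration)
y = [1 + 1j, 1 - 1j]   # modulation symbols on the two layers
z = precode(W, y)
assert len(z) == 2     # one output per antenna port
assert z == [1 + 1j, 1 + 1j]
```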
  • the precoder 340 may perform precoding after transform precoding (eg, discrete fourier transform (DFT)) on complex modulation symbols. Also, the precoder 340 may perform precoding without performing transform precoding.
  • the resource mapper 350 may map modulation symbols of each antenna port to time-frequency resources.
  • the time-frequency resource may include a plurality of symbols (eg, CP-OFDMA symbols and DFT-s-OFDMA symbols) in the time domain and a plurality of subcarriers in the frequency domain.
  • the signal generator 360 generates a radio signal from the mapped modulation symbols, and the generated radio signal can be transmitted to other devices through each antenna.
  • the signal generator 360 may include an inverse fast fourier transform (IFFT) module, a cyclic prefix (CP) inserter, a digital-to-analog converter (DAC), a frequency up-converter, and the like.
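The IFFT and CP-insertion steps of the signal generator can be sketched as follows (the DAC and frequency up-conversion stages are omitted). The naive O(N²) inverse DFT below is an illustrative stand-in for an IFFT module.

```python
import cmath

def ifft(X):
    """Naive inverse DFT (O(N^2)); stands in for the IFFT module."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def add_cp(x, cp_len):
    """Prefix the last cp_len samples (cyclic prefix insertion)."""
    return x[-cp_len:] + x

# 8 mapped modulation symbols -> one OFDM symbol with a CP of 2 samples.
X = [1, -1, 1j, -1j, 1, 1, -1, -1]
time_samples = ifft(X)
tx = add_cp(time_samples, 2)
assert len(tx) == 10                    # N + CP samples
assert tx[:2] == time_samples[-2:]      # the CP copies the symbol tail
```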
  • the signal processing process for the received signal in the wireless device may be configured in reverse to the signal processing process 310 to 360 of FIG. 3 .
  • the received radio signal may be converted into a baseband signal through a signal restorer.
  • the signal restorer may include a frequency down-converter, an analog-to-digital converter (ADC), a CP remover, and a fast fourier transform (FFT) module.
  • the baseband signal may be restored to a codeword through a resource de-mapper process, a postcoding process, a demodulation process, and a de-scramble process.
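The reverse processing described above can be illustrated with a round trip: IFFT plus CP insertion on the transmit side, then CP removal plus FFT on the receive side, restoring the mapped symbols. The naive DFT routines below are illustrative stand-ins for FFT modules, and an ideal noiseless channel is assumed.

```python
import cmath

def idft(X):
    """Naive inverse DFT (transmit-side IFFT stand-in)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def dft(x):
    """Naive forward DFT (receive-side FFT stand-in)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

cp = 2
X = [1, -1, 1j, -1j]                  # symbols on 4 subcarriers
rx = idft(X)[-cp:] + idft(X)          # transmit side: IFFT + CP insertion
restored = dft(rx[cp:])               # receive side: CP removal + FFT
assert all(abs(restored[k] - X[k]) < 1e-9 for k in range(4))
```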
  • a signal processing circuit for a received signal may include a signal restorer, a resource demapper, a postcoder, a demodulator, a descrambler, and a decoder.
  • FIG. 4 is a diagram illustrating another example of a wireless device applied to the present specification.
  • a wireless device 400 corresponds to the wireless devices 200a and 200b of FIG. 2 and may be composed of various elements, components, units, and/or modules.
  • the wireless device 400 may include a communication unit 410, a control unit 420, a memory unit 430, and an additional element 440.
  • the communication unit may include communication circuitry 412 and transceiver(s) 414 .
  • communication circuitry 412 may include one or more processors 202a, 202b of FIG. 2 and/or one or more memories 204a, 204b.
  • transceiver(s) 414 may include one or more transceivers 206a and 206b of FIG. 2.
  • the control unit 420 is electrically connected to the communication unit 410, the memory unit 430, and the additional element 440, and controls overall operations of the wireless device. For example, the control unit 420 may control electrical/mechanical operations of the wireless device based on programs/codes/commands/information stored in the memory unit 430. In addition, the control unit 420 may transmit information stored in the memory unit 430 to the outside (eg, another communication device) through the communication unit 410 over a wireless/wired interface, or may store information received over a wireless/wired interface from the outside (eg, another communication device) through the communication unit 410 in the memory unit 430.
  • the additional element 440 may be configured in various ways according to the type of wireless device.
  • the additional element 440 may include at least one of a power unit/battery, an input/output unit, a driving unit, and a computing unit.
  • the wireless device 400 may be a robot (FIG. 1, 100a), a vehicle (FIG. 1, 100b-1, 100b-2), an XR device (FIG. 1, 100c), a mobile device (FIG. 1, 100d), a home appliance (FIG. 1, 100e), an IoT device (FIG. 1), etc.
  • Wireless devices can be mobile or used in a fixed location depending on the use-case/service.
  • various elements, components, units/units, and/or modules in the wireless device 400 may be entirely interconnected through a wired interface or at least partially connected wirelessly through the communication unit 410 .
  • the control unit 420 and the communication unit 410 are connected by wire, and the control unit 420 and the first units (eg, 430 and 440) are connected wirelessly through the communication unit 410.
  • each element, component, unit/unit, and/or module within wireless device 400 may further include one or more elements.
  • the control unit 420 may be composed of one or more processor sets.
  • the controller 420 may include a set of a communication control processor, an application processor, an electronic control unit (ECU), a graphic processing processor, a memory control processor, and the like.
  • the memory unit 430 may be composed of RAM, dynamic RAM (DRAM), ROM, flash memory, volatile memory, non-volatile memory, and/or combinations thereof.
  • FIG. 5 is a diagram illustrating an example of a portable device applied to the present specification.
  • a portable device may include a smart phone, a smart pad, a wearable device (eg, smart watch, smart glasses), and a portable computer (eg, a laptop computer).
  • a mobile device may be referred to as a mobile station (MS), a user terminal (UT), a mobile subscriber station (MSS), a subscriber station (SS), an advanced mobile station (AMS), or a wireless terminal (WT).
  • a portable device 500 may include an antenna unit 508, a communication unit 510, a control unit 520, a memory unit 530, a power supply unit 540a, an interface unit 540b, and an input/output unit 540c.
  • the antenna unit 508 may be configured as part of the communication unit 510 .
  • Blocks 510 to 530/540a to 540c respectively correspond to blocks 410 to 430/440 of FIG. 4 .
  • the communication unit 510 may transmit/receive signals (eg, data, control signals, etc.) with other wireless devices and base stations.
  • the controller 520 may perform various operations by controlling components of the portable device 500 .
  • the controller 520 may include an application processor (AP).
  • the memory unit 530 may store data/parameters/programs/codes/commands necessary for driving the portable device 500 . Also, the memory unit 530 may store input/output data/information.
  • the power supply unit 540a supplies power to the portable device 500 and may include a wired/wireless charging circuit, a battery, and the like.
  • the interface unit 540b may support connection between the portable device 500 and other external devices.
  • the interface unit 540b may include various ports (eg, audio input/output ports and video input/output ports) for connection with external devices.
  • the input/output unit 540c may receive or output image information/signal, audio information/signal, data, and/or information input from a user.
  • the input/output unit 540c may include a camera, a microphone, a user input unit, a display unit 540d, a speaker, and/or a haptic module.
  • the input/output unit 540c may acquire information/signals (eg, touch, text, voice, image, video) input from the user, and the acquired information/signals may be stored in the memory unit 530.
  • the communication unit 510 may convert the information/signal stored in the memory into a wireless signal, and directly transmit the converted wireless signal to another wireless device or to a base station.
  • the communication unit 510 may receive a radio signal from another wireless device or a base station and then restore the received radio signal to original information/signal. After the restored information/signal is stored in the memory unit 530, it may be output in various forms (eg, text, voice, image, video, or haptic) through the input/output unit 540c.
  • a terminal may receive information from a base station through downlink (DL) and transmit information to the base station through uplink (UL).
  • Information transmitted and received between the base station and the terminal includes general data information and various control information, and there are various physical channels according to the type/use of the information transmitted and received by the base station and the terminal.
  • FIG. 6 is a diagram illustrating physical channels applied to this specification and a signal transmission method using them.
  • the terminal may receive a primary synchronization channel (P-SCH) and a secondary synchronization channel (S-SCH) from the base station to synchronize with the base station and obtain information such as a cell ID.
  • the terminal may acquire intra-cell broadcast information by receiving a physical broadcast channel (PBCH) signal from the base station. Meanwhile, the terminal may check the downlink channel state by receiving a downlink reference signal (DL RS) in the initial cell search step.
  • the UE may receive a physical downlink control channel (PDCCH) and a physical downlink shared channel (PDSCH) according to the physical downlink control channel information in step S612 to obtain more specific system information.
  • the terminal may perform a random access procedure such as steps S613 to S616 in order to complete access to the base station.
  • the UE may transmit a preamble through a physical random access channel (PRACH) (S613), and may receive a random access response (RAR) for the preamble through a physical downlink control channel and a corresponding physical downlink shared channel (S614).
  • the UE may transmit a physical uplink shared channel (PUSCH) using scheduling information in the RAR (S615), and may perform a contention resolution procedure such as receiving a physical downlink control channel signal and a corresponding physical downlink shared channel signal (S616).
  • After performing the above procedure, the terminal may, as a general uplink/downlink signal transmission procedure, receive a physical downlink control channel signal and/or a physical downlink shared channel signal (S617) and transmit a physical uplink shared channel (PUSCH) signal and/or a physical uplink control channel (PUCCH) signal (S618).
  • Control information transmitted by the terminal to the base station is referred to as uplink control information (UCI), and includes hybrid automatic repeat and request acknowledgement/negative-ACK (HARQ-ACK/NACK), scheduling request (SR), channel quality indication (CQI), precoding matrix indication (PMI), rank indication (RI), beam indication (BI), and the like.
  • UCI is generally transmitted periodically through PUCCH, but may be transmitted through PUSCH according to an embodiment (eg, when control information and traffic data are to be simultaneously transmitted).
  • the UE may aperiodically transmit UCI through the PUSCH according to a request/instruction of the network.
  • 6G (radio communications) systems aim at (i) very high data rates per device, (ii) a very large number of connected devices, (iii) global connectivity, (iv) very low latency, (v) lower energy consumption of battery-free IoT devices, (vi) ultra-reliable connectivity, and (vii) connected intelligence with machine learning capability.
  • the vision of the 6G system can be summarized in four aspects: “intelligent connectivity”, “deep connectivity”, “holographic connectivity”, and “ubiquitous connectivity”, and the 6G system can satisfy the requirements shown in Table 1 below. That is, Table 1 shows the requirements of the 6G system.
  • the 6G system can have key factors such as enhanced mobile broadband (eMBB), ultra-reliable low latency communications (URLLC), massive machine type communications (mMTC), AI integrated communication, tactile internet, high throughput, high network capacity, high energy efficiency, low backhaul and access network congestion, and enhanced data security.
  • The most important and newly introduced technology for the 6G system is AI.
  • AI was not involved in the 4G system.
  • 5G systems will support partial or very limited AI.
  • the 6G system will be AI-enabled for full automation.
  • Advances in machine learning will create more intelligent networks for real-time communication in 6G.
  • Introducing AI in communications can simplify and enhance real-time data transmission.
  • AI can use a plethora of analytics to determine how complex target tasks are performed. In other words, AI can increase efficiency and reduce processing delays.
  • AI can also play an important role in machine-to-machine, machine-to-human and human-to-machine communications.
  • AI can enable rapid communication in a brain computer interface (BCI).
  • AI-based communication systems can be supported by metamaterials, intelligent structures, intelligent networks, intelligent devices, intelligent cognitive radios, self-sustaining wireless networks, and machine learning.
  • AI-based physical layer transmission means applying an AI-driven signal processing and communication mechanism, rather than a traditional communication framework, to fundamental signal processing and communication mechanisms. For example, it may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, a deep learning-based MIMO mechanism, AI-based resource scheduling and allocation, and the like.
  • Machine learning may be used for channel estimation and channel tracking, and may be used for power allocation, interference cancellation, and the like in a downlink (DL) physical layer. Machine learning can also be used for antenna selection, power control, symbol detection, and the like in a MIMO system.
  • AI algorithms based on deep learning require a lot of training data to optimize training parameters.
  • a lot of training data is used offline. This is because static training on training data in a specific channel environment may cause a contradiction between dynamic characteristics and diversity of a radio channel.
  • Machine learning refers to a set of actions that train a machine to create a machine that can do tasks that humans can or cannot do.
  • Machine learning requires data and a learning model.
  • data learning methods can be largely classified into three types: supervised learning, unsupervised learning, and reinforcement learning.
  • Neural network training is aimed at minimizing errors in the output.
  • Neural network learning repeatedly inputs training data to the neural network, calculates the output of the neural network for the training data and its error with respect to the target, and backpropagates that error from the output layer of the neural network toward the input layer in a direction that reduces the error, thereby updating the weight of each node in the neural network.
  • Supervised learning uses training data in which the correct answer is labeled, while unsupervised learning may use training data without labeled answers. For example, the training data in the case of supervised learning for data classification may be data in which each training sample is labeled with a category. Labeled training data is input to the neural network, and an error may be calculated by comparing the output (category) of the neural network with the label of the training data. The calculated error is backpropagated in the reverse direction (ie, from the output layer to the input layer) in the neural network, and the connection weight of each node of each layer of the neural network may be updated according to the backpropagation.
  • the amount of change in the connection weight of each updated node may be determined according to a learning rate.
  • the neural network's computation of input data and backpropagation of errors can constitute a learning cycle (epoch).
  • the learning rate may be applied differently according to the number of iterations of the learning cycle of the neural network. For example, a high learning rate is used in the early stages of neural network learning to increase efficiency by allowing the neural network to quickly achieve a certain level of performance, and a low learning rate can be used in the late stage to increase accuracy.
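The training procedure above — forward pass, error against the label, backpropagated weight update, and a high-then-low learning rate across epochs — can be sketched with a minimal one-parameter linear model. This is an illustrative toy, not a network from the embodiments.

```python
# Train w on labeled pairs (x, y = 2x) by gradient descent on squared error.
data = [(x, 2.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]
w = 0.0
for epoch in range(100):
    lr = 0.1 if epoch < 50 else 0.01    # high learning rate early, low late
    for x, target in data:
        error = w * x - target          # forward pass + error vs. the label
        w -= lr * error * x             # backpropagated gradient step
assert abs(w - 2.0) < 1e-6              # the weight converges to 2
```

The large early steps quickly bring the weight near its target; the small late steps refine it, matching the schedule described above.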
  • the learning method may vary depending on the characteristics of the data. For example, in a case where the purpose of the receiver is to accurately predict data transmitted by the transmitter in a communication system, it is preferable to perform learning using supervised learning rather than unsupervised learning or reinforcement learning.
  • The learning model corresponds to the human brain; the most basic linear model can be considered, and a machine learning paradigm that uses a neural network structure of high complexity as the learning model is called deep learning.
  • The neural network structures used as learning methods are largely divided into deep neural networks (DNN), convolutional neural networks (CNN), and recurrent neural networks (RNN).
  • An artificial neural network is an example of connecting several perceptrons.
  • the huge artificial neural network structure may extend the simplified perceptron structure shown in FIG. 7 and apply input vectors to different multi-dimensional perceptrons.
  • an input value or an output value is referred to as a node.
  • the perceptron structure shown in FIG. 7 can be described as being composed of a total of three layers based on input values and output values.
  • An artificial neural network in which H number of (d + 1) dimensional perceptrons exist between the 1st layer and the 2nd layer and K number of (H + 1) dimensional perceptrons between the 2nd layer and the 3rd layer can be expressed as shown in FIG. 8 .
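The three-layer structure described above — an input vector of dimension d, H perceptrons of dimension (d+1) between the first and second layers, then K perceptrons of dimension (H+1) between the second and third layers — can be sketched as a forward pass. The sigmoid activation and random weights are illustrative assumptions.

```python
import math
import random

def layer(x, weights, biases):
    """One layer of perceptrons: weighted sum + bias, then a sigmoid."""
    return [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(ws, x)) + b)))
            for ws, b in zip(weights, biases)]

random.seed(0)
d, H, K = 4, 3, 2              # input dim, hidden perceptrons, output perceptrons
W1 = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(H)]
b1 = [0.0] * H                 # the "+1" dimension of each perceptron
W2 = [[random.uniform(-1, 1) for _ in range(H)] for _ in range(K)]
b2 = [0.0] * K

x = [1.0, -0.5, 0.25, 0.0]     # input vector (layer 1)
hidden = layer(x, W1, b1)      # layer 2: H perceptrons of dimension d+1
output = layer(hidden, W2, b2) # layer 3: K perceptrons of dimension H+1
assert len(hidden) == H and len(output) == K
```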
  • the layer where the input vector is located is called the input layer
  • the layer where the final output value is located is called the output layer
  • all the layers located between the input layer and the output layer are called hidden layers.
  • three layers are disclosed, but when counting the number of layers of an actual artificial neural network, the number of layers is counted excluding the input layer, so a total of two layers can be considered.
  • the artificial neural network is composed of basic-block perceptrons connected in two dimensions.
  • the above-described input layer, hidden layer, and output layer can be jointly applied to various artificial neural network structures such as CNN and RNN, which will be described later, as well as multi-layer perceptrons.
  • the deep neural network shown in FIG. 9 is a multi-layer perceptron composed of eight hidden layers and an output layer.
  • the multilayer perceptron structure is expressed as a fully-connected neural network.
  • In a fully-connected neural network, there is no connection relationship between nodes located in the same layer, and a connection relationship exists only between nodes located in adjacent layers.
  • DNN has a fully-connected neural network structure and is composed of a combination of multiple hidden layers and activation functions, so it can be usefully applied to identify the correlation characteristics between inputs and outputs.
  • the correlation characteristic may mean a joint probability of input and output.
  • FIG. 9 is a diagram illustrating an example of a deep neural network.
  • nodes located inside one layer are arranged in a one-dimensional vertical direction.
  • the nodes are two-dimensionally arranged with w nodes horizontally and h nodes vertically (convolutional neural network structure of FIG. 10).
  • Since a weight is added for each connection in the process of connecting one input node to the hidden layer, a total of h×w weights must be considered per node. Since there are h×w nodes in the input layer, a total of h²w² weights are required between two adjacent layers.
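The weight counts above can be checked directly; the 28×28 input size and 3×3 filter below are illustrative values, not taken from the disclosure.

```python
# Weight counts for an h x w input fully connected to an h x w hidden layer.
h, w = 28, 28
per_node = h * w                        # each hidden node connects to every input
fully_connected = per_node * (h * w)    # h^2 * w^2 weights in total
conv_filter = 3 * 3                     # a shared 3x3 filter needs only 9 weights
assert fully_connected == (h * w) ** 2
assert conv_filter < fully_connected    # motivation for the convolutional structure
```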
  • FIG. 10 is a diagram showing an example of a convolutional neural network.
  • Since the fully-connected structure has a problem in that the number of weights increases exponentially with the number of connections, the convolutional neural network of FIG. 10 assumes a small-sized filter instead of considering all node connections between adjacent layers, and, as shown in FIG. 10, performs weighted sum and activation function calculations on the portion where the filter overlaps.
  • One filter has as many weights as its size, and the weights can be learned so that a specific feature of an image can be extracted and output as a factor.
  • a 3 ⁇ 3 filter is applied to a 3 ⁇ 3 area at the top left of the input layer, and an output value obtained by performing a weighted sum and an activation function operation on a corresponding node is stored in z22.
  • While scanning the input layer, the filter moves horizontally and vertically at regular intervals, performs weighted sum and activation function calculations, and places the output value at the position of the current filter.
  • This operation method is similar to the convolution operation for images in the field of computer vision, so the deep neural network of this structure is called a convolutional neural network (CNN), and the hidden layer generated as a result of the convolution operation is called a convolutional layer.
  • a neural network having a plurality of convolutional layers is referred to as a deep convolutional neural network (DCNN).
  • FIG. 11 is a diagram showing an example of a filter operation in a convolutional neural network.
  • the number of weights can be reduced by calculating a weighted sum by including only nodes located in a region covered by the filter from the node where the current filter is located. This allows one filter to be used to focus on features for a local area. Accordingly, CNN can be effectively applied to image data processing in which a physical distance in a 2D area is an important criterion. Meanwhile, in the CNN, a plurality of filters may be applied immediately before the convolution layer, and a plurality of output results may be generated through a convolution operation of each filter.
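The filter operation of FIG. 11 — a weighted sum over only the nodes covered by the filter as it scans the input — can be sketched as follows. The 4×4 image and 2×2 kernel values are illustrative, and the activation function is omitted for brevity.

```python
def conv2d(image, kernel):
    """Slide the kernel over the image (stride 1, no padding) and take the
    weighted sum at each position - the filter operation of a CNN."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    return [[sum(kernel[i][j] * image[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(iw - kw + 1)]
            for r in range(ih - kh + 1)]

image = [[1, 2, 3, 0],
         [4, 5, 6, 1],
         [7, 8, 9, 2],
         [1, 1, 1, 1]]
kernel = [[1, 0],
          [0, -1]]              # a toy 2x2 edge-like filter (illustrative)
out = conv2d(image, kernel)
assert len(out) == 3 and len(out[0]) == 3   # (4-2+1) x (4-2+1) output
assert out[0][0] == -4                      # 1*1 + 0*2 + 0*4 + (-1)*5
```

Only the 4 weights of the shared kernel are needed, rather than one weight per input-output connection.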
  • A recurrent neural network (RNN) is a structure in which an element (x1(t), x2(t), ..., xd(t)) at any time point t of a data sequence is input to a fully connected neural network together with the hidden vector (z1(t-1), z2(t-1), ..., zH(t-1)) of the immediately preceding time point t-1, and the weighted sum and activation function are applied.
  • the reason why the hidden vector is transmitted to the next time point in this way is that information in the input vector at previous time points is regarded as being accumulated in the hidden vector of the current time point.
  • FIG. 12 shows an example of a neural network structure in which a cyclic loop exists.
  • the recurrent neural network operates in a predetermined sequence of views with respect to an input data sequence.
  • the hidden vector (z1(1), z2(1), ..., zH(1)) is input together with the input vector (x1(2), x2(2), ..., xd(2)) of time point 2, and the vector (z1(2), z2(2), ..., zH(2)) of the hidden layer is determined through the weighted sum and activation function. This process is repeated for time point 2, time point 3, ..., up to time point T.
  • FIG. 13 shows an example of an operating structure of a recurrent neural network.
  • Recurrent neural networks are designed to be usefully applied to sequence data (eg, natural language processing).
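The recurrent computation described above — the hidden vector of time point t-1 fed back together with the input vector of time point t through a weighted sum and activation function — can be sketched as follows. The weights and the tanh activation are illustrative assumptions.

```python
import math

def rnn_step(x_t, z_prev, Wx, Wz):
    """One recurrent step: weighted sum of the current input vector and the
    previous hidden vector, passed through a tanh activation."""
    H = len(Wx)
    return [math.tanh(sum(Wx[h][i] * x_t[i] for i in range(len(x_t))) +
                      sum(Wz[h][j] * z_prev[j] for j in range(H)))
            for h in range(H)]

Wx = [[0.5, -0.25], [0.1, 0.3]]   # input-to-hidden weights (illustrative)
Wz = [[0.2, 0.0], [0.0, 0.2]]     # hidden-to-hidden weights (illustrative)
sequence = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # x(1), x(2), x(3)

z = [0.0, 0.0]                    # hidden vector before time point 1
for x_t in sequence:              # repeat for t = 1, 2, ..., T
    z = rnn_step(x_t, z, Wx, Wz)
assert len(z) == 2 and all(-1.0 < v < 1.0 for v in z)
```

The hidden vector carries accumulated information from earlier time points forward, as described above.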
  • In addition to DNN, CNN, and RNN, the neural network structures used as learning methods include Restricted Boltzmann Machine (RBM), deep belief networks (DBN), and Deep Q-Network, and these deep learning techniques can be applied to fields such as computer vision, speech recognition, natural language processing, and voice/signal processing.
• AI-based physical layer transmission means applying AI-driven signal processing and communication mechanisms, rather than a traditional communication framework, to fundamental signal processing and communication. For example, it may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, a deep learning-based MIMO mechanism, AI-based resource scheduling and allocation, and the like.
  • FIG. 14 is a diagram illustrating a communication model for each level to which an embodiment proposed in this specification can be applied.
  • a communication model may be defined at three levels (A to C).
  • Level A is related to how accurately symbols (technical messages) can be transferred between transmitters/receivers. This can be considered when the communication model is understood from a technical point of view.
  • Level B is related to how accurately the symbols transmitted between the transmitter/receiver convey meaning. This can be considered when the communication model is grasped in terms of semantics.
  • Level C is concerned with how effectively the semantics received at the destination contribute to subsequent actions. This can be considered when the communication model is identified in terms of effectiveness.
• which of Level A to Level C are considered in the design of the communication model may differ depending on the implementation method.
  • a communication model implemented focusing on Level A such as a communication model based on the prior art, may be considered.
  • a communication model considering not only Level A but also Level B (and Level C) for supporting semantic communication may be considered.
  • the transmitter and receiver may be referred to as a semantic transmitter and a semantic receiver, and semantic noise may be additionally considered.
  • One of the goals of 6G communications is to enable a variety of new services that interconnect people and machines with different levels of intelligence. It is necessary to consider not only the existing technical problem (eg, A in FIG. 14) but also the semantic problem (eg, B in FIG. 14). Semantic communication will be described in detail below, taking communication between people as an example.
  • Words for exchanging information are related to "meaning”. Listening to the speaker's words, the listener can interpret the meaning or concept expressed by the speaker's words. If this is related to the communication model of FIG. 14, to support semantic communication, a concept related to a message sent from the source needs to be correctly interpreted at the destination.
  • the communication model of the semantic level (B of FIG. 14) can provide performance improvement compared to the existing communication model of the technical level (A of FIG. 14).
  • One of the main reasons that such performance improvements can be provided is that knowledge sharing between source and destination is utilized. This knowledge can be a language made up of logical rules and entities that allow receivers to correct errors that occur at the symbolic level.
  • knowledge can be represented by a graph comprising nodes (or vertices) and links (or edges).
  • Nodes are associated with entities, and links represent relationships between entities.
  • Shared knowledge may be created based on graphs created to represent the knowledge. Specifically, the following operations may be performed. As shown in FIG. 14, a graph (eg, a subgraph) corresponding to local knowledge possessed by a source and a destination may be shared with each other. Through the sharing, shared knowledge including a plurality of pieces of information may be created. Based on the shared knowledge, since the same knowledge is shared between the source and the destination, the accuracy of interpretation of mutually transmitted concepts (semantic messages) can be improved. That is, normal semantic communication can be guaranteed through the shared knowledge.
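• As a toy illustration of this sharing step (the entities, relations, and function name below are hypothetical; a real knowledge graph would be far larger):

```python
# Local knowledge as (entity, relation, entity) triples: entities are
# nodes, relations are the links between them. (Hypothetical contents.)
source_knowledge = {
    ("dog", "is_a", "animal"),
    ("dog", "chases", "cat"),
}
destination_knowledge = {
    ("cat", "is_a", "animal"),
    ("dog", "chases", "cat"),
}

def build_shared_knowledge(a, b):
    """Shared knowledge is modeled here simply as the union of the two
    local graphs, so both sides can interpret the same concepts."""
    return a | b

shared = build_shared_knowledge(source_knowledge, destination_knowledge)
```

• After sharing, both sides hold all three triples, so a semantic message referring to any of these entities can be interpreted identically at the source and the destination.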
  • the following problems may occur in that the size of graph-based knowledge (graph-based information) for generating the shared knowledge is very large.
• FIG. 15 is a diagram for explaining the operation of a graph neural network to which a method according to an embodiment of the present specification can be applied.
  • a GNN is a neural network that can be applied directly to a graph.
  • the output obtained from the graph input to the GNN is a node (vertex)/sub-graph/graph embedding that can be used for a specific prediction task.
  • Embedding may refer to a result of converting natural language into a vector, which is a number format that can be understood by a machine, or to mean the entire series of processes.
  • a similarity function may be used.
  • the similarity function is used to determine how the graph of the input network is mapped in the embedding space.
  • the similarity function used in the GNN may be based on one or a combination of existing similarity functions implemented in various ways.
  • the GNN model will be described in detail with reference to FIGS. 16 and 17 below.
  • FIG. 16 shows the definition of a GNN model to which a method according to an embodiment of the present specification can be applied.
• FIG. 17 illustrates an operation related to a GNN model to which a method according to an embodiment of the present specification may be applied. Specifically, FIG. 17 shows an operation in which node embeddings are output from the GNN model.
  • the GNN model may be defined as a neural network for a target node.
  • the following operations may be performed for the GNN model.
  • a neighbor aggregation function is defined to obtain information on nodes neighboring the target node.
  • a loss function for embedding is defined.
  • the loss function corresponds to an index that expresses the difference between the predicted value of the model calculated based on the data and the actual value.
• the loss function is an indicator of the 'badness' of the model's performance, i.e., a function expressing how poorly the current model handles the data (e.g., graph data). The closer the value of the loss function is to 0, the higher the accuracy of the model.
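• For instance, a mean-squared-error loss (one common choice; the disclosure does not fix a particular loss function) behaves exactly this way:

```python
def mse_loss(predicted, actual):
    """Mean squared error: 0 when the model's predictions match the
    actual values exactly, and larger the worse the model does."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
```

• Training then amounts to adjusting the model's parameters so that this value moves toward 0.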
  • the learned model can be applied to a graph completely different from the graph data used for learning.
  • a latent vector generated using GNN may be transmitted instead of the entire graph data. Accordingly, compared to the case of transmitting the entire graph data, required resources are reduced.
  • a prediction task to be performed using a latent vector may correspond to a task of interpreting a concept (semantic message) received in semantic communication.
  • GNN can be usefully utilized in a semantic communication system that operates based on graph data.
  • many operations are required in the process of generating a latent vector for the entire graph data. Since the amount of computation required is large, transmitted and received information (eg, latent vectors) may be limited. In this case, it is difficult to generate shared knowledge capable of ensuring normal performance of semantic communication.
  • a method of utilizing a part of the entire graph may be considered to solve the above-described problem. It is described in detail below.
  • Some graphs may be extracted from a graph representing knowledge (eg, entire graph data), and latent vector(s) may be generated from the extracted graphs.
• the latent vector(s) generated by the transmitting end (e.g., source/destination, wireless device, terminal/base station) may be transmitted to the receiving end (e.g., destination/source, wireless device, base station/terminal).
• the receiving end calculates the degree of similarity by comparing i) the latent vector(s) generated by the receiving end and ii) the received latent vector(s). Based on the calculated similarity, the received latent vector(s) are restored, so that the graph data corresponding to the local knowledge of the source and destination are periodically updated.
  • a semantic communication system that performs knowledge representation in a method using graph data is assumed.
  • a semantic encoder and a semantic decoder operate based on graph data.
  • Graph data may be updated based on extraction of a sub-graph and periodic transmission of the extracted sub-graph. Through the update, it is possible to increase the similarity of local knowledge between a source and a destination.
  • FIG. 18 is a flowchart illustrating an operation for updating related to processing of graph data according to an embodiment of the present specification. It is assumed that the wireless communication system of FIG. 18 is a communication system in which a semantic level (eg, Level B of FIG. 14 ) is considered.
  • update related to graph data processing may be performed based on S1810 to S1840.
• the data (i.e., local knowledge) possessed by a device (e.g., source, terminal) and a base station (e.g., destination) is assumed to be graph data expressed based on a graph.
  • the device/base station performs initialization related to graph data processing.
  • the device/base station extracts a sub-graph from the entire graph data.
  • the device/base station creates latent vector(s) from the extracted sub-graph.
  • Information including the generated latent vectors may be transferred between devices/base stations.
  • the base station may transmit information including latent vector(s) generated (by the base station) to the device.
  • the device may transmit information including latent vector(s) generated (by the device) to the base station.
  • the received sub-graph latent vector is processed.
  • the device/base station processes the latent vector(s) representing the received sub-graph.
  • the device may calculate a similarity between the latent vector(s) generated (by the device) and the received latent vector(s).
  • the base station may calculate a similarity between the latent vector(s) generated (by the base station) and the received latent vector(s).
  • the encoder/decoder related to graph data is updated. Specifically, after the graph-based knowledge expression is updated based on the calculated similarity, the device/base station may update an encoder/decoder related to graph data through a sub-graph extracted from corresponding graph data.
  • Operations according to S1820 to S1840 may be repeatedly performed until graph data update is stopped.
  • the initialization operation according to S1810 will be described in detail with reference to FIG. 19A.
• FIG. 19A is a flowchart for explaining initialization related to graph data update for semantic communication according to an embodiment of the present specification.
  • the initialization operation according to S1810 may be performed based on S1910 to S1960.
  • a device to which power is applied performs an operation related to synchronization.
  • Operations related to the synchronization may include operations based on S611 to S616 in FIG. 6 .
  • a device receives a synchronization signal block (SSB) from a base station.
  • the device synchronizes with the base station using the SSB and obtains system information.
  • an RRC connection may be established. That is, the state of the device may be switched to an RRC connected state (RRC_CONNECTED state).
  • the base station transmits a DL-DCCH-message to the device.
  • the DL-DCCH-message may be an RRC message (UECapabilityEnquiry) related to a request for specific capability information of a device (terminal).
• the specific capability information may be information indicating whether an operation related to graph data is supported.
  • Operations related to the graph data may include generation of graph data and processing of graph data (eg, sub-graph extraction, latent vector generation).
  • the device transmits a UL-DCCH-message to the base station.
  • the UL-DCCH-message may be a response to the DL-DCCH-message.
  • the UL-DCCH-message may be an RRC message (UECapabilityInformation) including the specific capability information.
  • the base station transmits information including an indicator related to initialization of graph data to the device.
• the information including an indicator related to the initialization of the graph data may be transmitted based on the specific capability information included in the UL-DCCH-message. Specifically, when it is determined, based on the specific capability information, that the device supports an operation related to graph data, the information including the indicator related to the initialization of the graph data may be transmitted.
• the information including an indicator related to the initialization of the graph data may be included in downlink control information (DCI), a MAC-CE (Medium Access Control-Control Element), or an RRC message.
• the information including an indicator related to the initialization of the graph data may include information related to at least one of 1) a GNN model, 2) a score function, 3) the number of node sets according to scores, 4) the number of hops of the sub-graph, 5) the positional-encoding method, 6) the transmission period of the latent vectors of the sub-graph, and 7) a similarity function.
• the information including an indicator related to the initialization of the graph data may be set to be the same as the information used in the base station (e.g., GNN model, similarity function, etc.).
  • the information including an indicator related to the initialization of the graph data may include information related to setting a GNN model.
  • a model may be set through the same method between a device (source) and a base station (destination).
  • the set model may include i) a graph encoding model and ii) a graph decoding model.
  • the graph encoding model generates a latent vector through mapping to a d-dimensional embedding space.
  • the generated latent vector may be used to measure similarity with the received latent vector.
  • the graph decoding model reconstructs a sub-graph from a latent vector. That is, the input of the graph decoding model is a latent vector, and the output is a subgraph.
  • the information related to setting the GNN model may include information on a model for which pre-learning has been completed in the base station (for coarse tuning).
  • the information related to setting the GNN model may include GNN model parameters (for fine tuning). Learning for a GNN model may be performed in a device that receives the GNN model parameters.
  • Information including an indicator related to initialization of the graph data may include information related to a score function.
  • the score function is a function for determining a score that is a criterion for finding and extracting a suitable local network (subgraph) from a graph network (graph data) created for knowledge expression. That is, the information related to the score function is information for allowing subgraphs to be extracted based on the same criterion between sources/destinations.
  • the application of the score function will be described with reference to FIG. 19B.
• FIG. 19B is a diagram for explaining an operation related to a score function according to an embodiment of the present specification.
  • the score function is applied to a set consisting of arbitrary node x, node y, and edges connecting them. That is, the score for the set (score between node x and node y) can be calculated by the score function.
• FIG. 19C is a table illustrating a heuristic method related to a score function according to an embodiment of the present specification.
  • high-order heuristics or graph structure features considering the entire network structure may be considered as the score function.
  • the score function may include Katz index, rooted PageRank, SimRank, and the like.
• the γ-decaying heuristic theory can be used to prove, for each high-order heuristic type, that the heuristic can be accurately approximated from a local subgraph.
• accordingly, a high-order graph structure can be learned from a sub-graph extracted based on a small number of hops, so that the sub-graph holds an amount of information sufficient to reflect the information of the entire network.
  • Information related to the score function may be information indicating any one of score functions (Katz index, rooted PageRank, SimRank) corresponding to the above-listed high-order heuristics.
  • K node sets may be selected in order of high scores (in order of high ranking) (Top-K score selection).
  • information including an indicator related to initialization of the graph data may include the number of node sets (eg, K) according to scores.
• the number of node sets according to the score may mean the specific number K described above.
  • the number of sub-graphs to be exchanged between a source and a destination may be determined. For example, latent vectors delivered from source (or destination) to destination (or source) may be generated from subgraphs based on the number of node sets according to the score.
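• A minimal sketch of this Top-K selection, using a simple common-neighbors count as a stand-in score (the disclosure's examples, such as the Katz index, rooted PageRank, and SimRank, are higher-order; all names below are illustrative):

```python
def common_neighbors_score(adj, x, y):
    """Stand-in score for a node pair (x, y): the number of neighbors
    the two nodes share in the adjacency map."""
    return len(set(adj.get(x, ())) & set(adj.get(y, ())))

def top_k_node_sets(adj, candidate_pairs, k):
    """Top-K score selection: keep the K candidate node pairs with the
    highest scores, in descending score order."""
    ranked = sorted(candidate_pairs,
                    key=lambda pair: common_neighbors_score(adj, *pair),
                    reverse=True)
    return ranked[:k]
```

• The K selected pairs then seed the K subgraphs from which latent vectors are generated.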
  • the information including an indicator related to the initialization of the graph data may include information related to the number of hops of a sub-graph.
  • the information related to the number of hops of the sub-graph represents a range of neighboring nodes included in each sub-graph extracted from each of the node sets selected according to the score.
• the information related to the number of hops of the sub-graph indicates how far away from each node of a node set (consisting of a node pair and the edges between the nodes) the extracted sub-graph extends. As mentioned above, the number of hops of the sub-graph may be set to a small number. For example, when the information related to the number of hops of the sub-graph indicates 1 hop, the sub-graph is configured to include the neighboring nodes separated by 1 hop from a node selected according to the score.
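• The h-hop extraction can be sketched as a breadth-first search from the selected node set (a sketch assuming an undirected graph stored as an adjacency map; the names are illustrative):

```python
from collections import deque

def extract_subgraph(adj, node_set, h):
    """Return the nodes and edges of the subgraph containing every node
    within h hops of the given node set (breadth-first search)."""
    dist = {v: 0 for v in node_set}
    queue = deque(node_set)
    while queue:
        v = queue.popleft()
        if dist[v] == h:        # do not expand beyond h hops
            continue
        for w in adj.get(v, ()):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    nodes = set(dist)
    edges = {(min(u, w), max(u, w))
             for u in nodes for w in adj.get(u, ()) if w in nodes}
    return nodes, edges
```

• With a 1-hop setting, only the immediate neighbors of the selected node set are included, keeping the subgraph small.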
  • Information including an indicator related to initialization of the graph data may include information related to a positional-encoding method.
• two or more nodes may be located in different parts of the graph while topologically having the same neighbor structure.
  • the same latent vector (latent vector embedded at the same point in an embedding space) can be generated from the two or more nodes having the same structure of neighbors.
• since nodes/edges may need to be added/modified after a sub-graph is extracted and transmitted (i.e., after a latent vector generated from the extracted sub-graph is transmitted) and a similarity comparison between latent vectors is performed, information related to a positional-encoding method may be included in the information including an indicator related to the initialization of the graph data. This is described with reference to FIG. 20 below.
  • FIG. 20 illustrates a case where a position encoding method according to an embodiment of the present specification is to be applied. Specifically, FIG. 20 illustrates a case in which structures of neighboring nodes of nodes having different positions are the same.
  • nodes V1 and V2 are disposed at different positions in the entire graph data.
  • nodes V1 and V2 have the same structure as neighboring nodes.
  • a sub-graph may be extracted based on at least one of the above-described score function, the number of node sets according to the score, or the number of hops of the sub-graph.
  • positional-encoding should be performed to add positional information of nodes corresponding to the entire graph data. That is, when subgraphs based on nodes V1 and V2 are extracted, information indicating the location of each node V1 and V2 based on the entire graph data may be added.
  • positional encoding may be performed considering both absolute and relative positions on a graph network (eg, positional-encoding method used in a transformer of Graph-BERT model).
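• For example, a transformer-style sinusoidal encoding (the scheme the Graph-BERT reference borrows from the transformer) assigns distinct vectors to distinct positions, so nodes such as V1 and V2 would no longer embed identically (a sketch; the function name is illustrative):

```python
import math

def positional_encoding(pos, dim):
    """Sinusoidal positional encoding: even indices use sine and odd
    indices use cosine, with wavelengths growing with the index."""
    return [math.sin(pos / 10000 ** (i / dim)) if i % 2 == 0
            else math.cos(pos / 10000 ** ((i - 1) / dim))
            for i in range(dim)]
```

• The resulting vector can be concatenated with (or added to) a node's features before the subgraph is input to the GNN model.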
  • Information including an indicator related to the initialization of the graph data may include information related to a transmission period of latent vectors of a subgraph.
  • the transmission period of the latent vectors may be set based on an operation amount between the device and the base station (eg, the amount of data that can be processed within a certain time through signaling between the device and the base station), link information, and the like. Based on the transmission period, latent vector generation, similarity comparison between latent vectors, and graph data update operations may be performed.
  • Information including an indicator related to initialization of the graph data may include information related to a similarity function.
  • Information related to the similarity function is used to update graph data on the same basis between sources/destinations.
  • the information related to the similarity function may include information indicating a similarity function equally used in source/destination. Since similarity is measured/calculated based on the same similarity function in source/destination, graph data can be updated according to the same criterion (similarity calculated according to the same similarity function) in source and destination.
  • the information related to the similarity function may indicate a score function according to 2) above.
  • x and y in FIG. 19B may be interpreted as a received latent vector and a generated latent vector, respectively.
  • information related to the similarity function may indicate a similarity function other than the score function.
  • the other similarity function may include a similarity function based on cosine, jaccard, overlap coefficient, and the like.
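• Sketches of those three similarity functions (cosine over latent vectors; Jaccard and the overlap coefficient over sets, e.g., of node identifiers; the names are illustrative):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two latent vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def jaccard_similarity(a, b):
    """|A intersect B| / |A union B| for two sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def overlap_coefficient(a, b):
    """|A intersect B| / min(|A|, |B|) for two sets."""
    a, b = set(a), set(b)
    return len(a & b) / min(len(a), len(b))
```

• Whichever function is configured during initialization, the same one must be applied at both the source and the destination so that similarity is judged on the same scale.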
  • FIGS. 21A and 21B are flowcharts for explaining procedures related to subgraph extraction and latent vector generation/transmission according to an embodiment of the present specification. Specifically, FIGS. 21A and 21B show operations performed based on S1820 of FIG. 18 described above step by step.
  • the device/base station extracts a sub-graph from the graph data.
  • the device/base station creates latent vectors from subgraphs extracted through the GNN model.
  • latent vectors are generated using the subgraphs used in the task. Thereafter, the similarity between the generated latent vectors and recently received latent vectors is measured and the result is stored. The corresponding result is used when measuring the similarity with the next received latent vectors.
  • Latent vectors may be generated by inputting subgraphs including location information of nodes into a GNN model. The corresponding operation corresponds to the operation of the semantic encoder in Level B semantic communication in FIG. 14 .
  • Shared knowledge (or graph data) including multiple pieces of information can be created and updated by transmitting the generated latent vectors according to a predetermined cycle.
  • the above operations may be performed by a device and/or a base station, and in the following description, it is expressed as a device/base station for convenience.
  • a step-by-step description will be made with reference to FIGS. 21A and 21B.
• when the device/base station first transmits the latent vector of the subgraph (S2110), it determines the score of each node set of the graph data (S2111). At this time, the score may be determined based on the score function described above.
  • the device/base station selects K node sets in order of higher scores based on the number of node sets (eg, K) according to the score (S2121).
• the device/base station extracts K sub-graphs, each including the neighboring nodes up to h hops away from the corresponding one of the K node sets, based on the number of hops (e.g., h hops) of the sub-graph (S2131).
  • the device/base station generates latent vectors from each of the subgraphs extracted through the GNN model (S2141).
• when the device/base station has previously transmitted the latent vector of the subgraph, it operates based on the transmission period described above (S2112). The device/base station waits if the transmission period has not arrived. When the transmission period arrives and the GNN model has been (coarse/fine) tuned (S2122), the device/base station generates latent vectors using the subgraphs used for the corresponding tuning (S2132, S2141). At this time, positional information may be reflected in the generated latent vectors through the above-described positional encoding.
  • latent vectors are generated using subgraphs extracted based on steps S2111 to S2131 (S2141).
  • the device/base station stores the ranking of the generated latent vectors (S2151).
  • the ranking may be a value based on scores of node sets (the K node sets) related to the extracted subgraphs.
  • latent vectors related to subgraphs extracted based on node sets having the highest scores may have the highest ranking (eg, lowest index).
  • latent vectors related to subgraphs extracted based on node sets having the lowest scores among the K node sets may have the lowest ranking (eg, highest index).
  • the device/base station measures the similarity between latent vector(s) having the same ranking (S2171).
  • a similarity between the first latent vector(s) generated (by the device/base station) and the second latent vector(s) received (from the base station/device) may be calculated.
  • similarity calculation may be performed between the first latent vector(s) and the second latent vector(s) having the same ranking (same index).
  • the first latent vector(s) may be latent vector(s) generated through a fine tuned GNN (eg, a semantic encoder).
  • the second latent vector(s) may be latent vector(s) used for updating graph data.
  • the similarity measurement/calculation may be performed based on a function set through the initialization operation. Specifically, the similarity measurement/calculation may be performed based on a score function used in calculating the score or a similarity function different from the score function (eg, Cosine, Jaccard, overlap coefficient, etc.).
  • the device/base station stores the measured similarity (similarity calculation result) (S2181).
  • the device/base station transmits information including latent vectors generated from the extracted subgraphs to the base station/device (S2190).
• FIG. 22 is a diagram for explaining an operation related to subgraph extraction according to an embodiment of the present specification.
  • FIG. 22 illustrates the operation of a GNN model (eg, GraphRNN) related to graph decoding.
  • a subgraph may be extracted based on the graph decoding.
  • nodes may be added along a solid line direction and edges may be added along a dotted line direction.
  • FIG. 23 is a flowchart for explaining a procedure related to processing of received latent vectors according to an embodiment of the present specification. Specifically, FIG. 23 shows operations performed based on S1830 of FIG. 18 described above step by step.
• after the device/base station receives the latent vectors of the subgraphs from the base station/device (S2310), it measures, using the set similarity function, the similarity between the latent vectors of the subgraphs it generated (first latent vectors) and the received latent vectors having the same ranking (second latent vectors) (S2320).
  • the similarity measurement/calculation may be performed based on a function set through the initialization operation. Specifically, the similarity measurement/calculation may be performed based on a score function used in calculating the score or a similarity function different from the score function (eg, Cosine, Jaccard, overlap coefficient, etc.).
  • the corresponding result (similarity measurement result) is stored (S2340).
  • the stored similarity is used to determine whether the similarity has changed after the device/base station receives the next latent vectors.
  • the measured similarity is compared with the stored similarity (S2350).
  • the criterion for determining whether the measured similarity is different from the stored similarity may vary depending on the range of values corresponding to the similarity function determined (through initialization) between the source and the destination.
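• A minimal sketch of this decision (the tolerance parameter is an assumption reflecting the value range of the configured similarity function; the function name is illustrative):

```python
def needs_graph_update(measured, stored, tolerance=0.0):
    """Compare the newly measured per-rank similarities against the
    stored ones; any change beyond the tolerance indicates that the
    two sides' local knowledge has drifted and an update is needed."""
    return any(abs(m - s) > tolerance
               for m, s in zip(measured, stored))
```

• When this check fires, the receiving side proceeds to decode the received latent vector into a subgraph and to modify its own graph data accordingly.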
• when the measured similarity is different from the stored similarity (S2360) (i.e., when it is determined that an operation to make the possessed local knowledge identical is required), the device/base station extracts a subgraph by performing graph decoding on the received latent vector (S2371). The extraction of the sub-graph may be performed based on the GNN model (e.g., GraphRNN) set through the above-described initialization operation. That is, the device/base station may restore the sub-graph from the latent vector based on the GNN model set through the initialization operation. This operation corresponds to the operation of the semantic decoder in Level B semantic communication in FIG. 14.
  • the device/base station compares the decoded sub-graph obtained through the graph decoding with a sub-graph (possessed sub-graph) related to the transmission of the most recent latent vector (S2372).
  • the device/base station may add/modify/remove nodes/edges of the subgraph in a direction in which similarity is improved (S2373).
  • the device/base station reflects changes to the entire graph data (S2374). Specifically, the device/base station reflects an updated subgraph (a subgraph in which nodes/edges are added/modified/removed) to the entire graph data.
  • FIG. 24 is a flowchart for explaining a procedure related to updating graph data according to an embodiment of the present specification. Specifically, FIG. 24 shows operations performed based on S1840 of FIG. 18 described above step by step.
  • a semantic encoder/decoder may be updated based on the updated graph data.
  • the semantic encoder is a GNN that generates a latent vector
  • the semantic decoder is a GNN that restores a subgraph based on a latent vector
  • a sub-graph is extracted from the updated graph data (S2420).
  • the sub-graph may be extracted according to the method described above.
• based on the score function, the score of each node set included in the updated graph data is determined. K node sets are selected in order of high scores, and subgraphs are extracted based on the number of hops set through the initialization operation.
  • location information may be reflected in each subgraph through the location encoding.
  • a fine tuning operation of the semantic encoder and the semantic decoder may be performed using the extracted subgraph (S2430 and S2440).
  • latent vectors are generated by the semantic encoder.
  • a sub-graph is restored from the generated latent vectors by the semantic decoder.
  • the fine tuning operation may be performed so that the reconstructed subgraph is the same as the extracted subgraph.
  • the extracted subgraph may be used to generate a latent vector according to S2132 of FIG. 21A.
• operations according to the above-described embodiments may be processed by the above-described apparatuses of FIGS. 1 to 5 (e.g., the processors 202a and 202b of FIG. 2).
• the operations according to the above-described embodiments may be stored in a memory (e.g., 204a and 204b of FIG. 2) in the form of a program (e.g., instructions, executable code) for driving at least one processor (e.g., the processors 202a and 202b of FIG. 2).
• the operations may be performed by wireless devices (e.g., the first wireless device 200a and the second wireless device 200b of FIG. 2).
  • the methods described below are only classified for convenience of explanation, and it goes without saying that some components of one method may be substituted with some components of another method, or may be applied in combination with each other.
  • FIG. 25 is a flowchart for explaining a method performed by a first wireless device to support semantic communication in a wireless communication system according to an embodiment of the present specification.
  • a method performed by a first wireless device to support semantic communication in a wireless communication system includes a capability information transmission step (S2510) and a step of receiving control information related to supporting the semantic communication (S2520).
  • the first wireless device transmits capability information related to supporting the semantic communication to the second wireless device.
  • the performance information related to the support of the semantic communication may include information indicating whether the first wireless device supports an operation related to graph data.
  • the performance information may be transmitted based on Device to Device (D2D) communication.
  • the first wireless device may be a terminal
  • the second wireless device may be another terminal.
  • the performance information is transmitted from the terminal to the base station. That is, a terminal (eg, a first wireless device) and another terminal (eg, a second wireless device) may respectively transmit their own performance information to the base station.
  • the performance information may include information indicating whether the first wireless device (or the second wireless device) supports an operation related to graph data.
  • the performance information transmitted by the first wireless device (second wireless device) may include information indicating whether the first wireless device (second wireless device) supports an operation related to graph data.
  • The operation of the first wireless device (e.g., 200a in FIG. 2) transmitting capability information related to the support of the semantic communication to the second wireless device (e.g., 200b in FIG. 2) may be implemented by the devices of FIGS. 1 to 5.
  • one or more processors 202a may control one or more transceivers 206a and/or one or more memories 204a to transmit capability information related to support of the semantic communication to the second wireless device 200b.
  • the first wireless device receives control information related to supporting the semantic communication from the second wireless device.
  • the control information may include information on at least one of i) a score function for extracting a sub-graph related to a part of the knowledge represented by preset graph data, ii) a graph neural network (GNN) model for processing the sub-graph, or iii) a similarity function related to a latent vector generated from the sub-graph.
  • the information on i) to iii) may be based on information related to 1) a GNN model, 2) a score function, or 7) a similarity function included in the information (indicator) transmitted for the above-described initialization operation.
  • control information may be transmitted when the first wireless device supports an operation related to the graph data. That is, the control information is transmitted from the second wireless device to the first wireless device based on the fact that the performance information related to supporting the semantic communication indicates that an operation related to graph data is supported by the first wireless device.
  • the control information may be transmitted based on Device to Device (D2D) communication.
  • the first wireless device may be a terminal
  • the second wireless device may be another terminal.
  • the control information may be transmitted from the base station to each terminal (the first wireless device and the second wireless device).
  • the base station receiving the performance information from the first wireless device and the second wireless device transmits the control information to the first wireless device when the first wireless device and the second wireless device support an operation related to the graph data.
  • the first wireless device and the second wireless device may perform initial setting for semantic communication (the above-described initialization operation) using the received control information.
  • the GNN model may be related to at least one of an operation of generating a latent vector from graph data or an operation of restoring a sub-graph from a latent vector.
  • An operation of generating a latent vector from the graph data may be related to the operation of the above-described semantic encoder.
  • An operation of restoring a sub-graph from the latent vector may be related to the operation of the above-described semantic decoder.
  • the control information may include information on at least one of i) the number of specific node sets determined in descending order of calculated scores, ii) the number of hops related to the one or more sub-graphs, iii) a positional encoding method related to the one or more sub-graphs, or iv) a transmission period of information including the one or more second latent vectors.
  • the information on i) to iv) may be based on information related to 3) the number of node sets according to the score (related to top-k score selection), 4) the number of hops of a sub-graph, 5) a positional-encoding method, or 6) a transmission period of latent vectors of a sub-graph, included in the information (indicator) transmitted for the aforementioned initialization operation.
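For concreteness, the control-information fields i) to iv) above could be grouped as a single configuration object. This is a hypothetical container; the field names and example values are illustrative and are not taken from any standard or from the specification itself.

```python
from dataclasses import dataclass

@dataclass
class SemanticControlInfo:
    top_k: int          # i) number of node sets kept, in descending score order
    num_hops: int       # ii) hop count for sub-graph extraction
    pos_encoding: str   # iii) positional-encoding method
    tx_period_ms: int   # iv) transmission period of the latent-vector report

# example values only
cfg = SemanticControlInfo(top_k=8, num_hops=2,
                          pos_encoding="node-index", tx_period_ms=100)
```

Both wireless devices would apply the same configuration during the initialization operation so that their extraction and reporting behavior stays aligned.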
  • one or more processors 202a may control one or more transceivers 206a and/or one or more memories 204a to receive control information related to support of the semantic communication from the second wireless device (e.g., 200b of FIG. 2).
  • Operations based on S2510 to S2520 described above correspond to initialization operations related to updating graph data. Therefore, after the initialization operation is performed, only operations according to S2530 to S2570 described below may be periodically/repetitively performed.
  • the first wireless device extracts one or more sub-graphs from preset graph data.
  • the one or more subgraphs may be extracted based on the control information.
  • the preset graph data may be graph data set through the aforementioned initialization operation.
  • the one or more sub-graphs may be based on specific node sets determined in order of high scores calculated based on the score function among node sets included in the preset graph data.
  • the specific node sets may be based on K node sets selected in S2121 of FIG. 21A.
  • each of the one or more subgraphs is extracted to include one or more neighboring nodes based on the number of hops (eg, 1 hop or 2 hops) from each node of any one of the specific node sets.
  • each of the one or more subgraphs may be extracted to include location information related to any one of the specific node sets.
  • location information may indicate the location of a node based on preset graph data (entire graph data).
  • the location information may include information representing the location of each node related to any one of the specific node sets based on preset graph data (e.g., entire graph data).
  • the position of each node related to any one of the specific node sets may be identified based on an (absolute/relative) position in preset graph data (e.g., entire graph data).
  • Operation of the first wireless device (eg, 200a in FIG. 2 ) extracting one or more sub-graphs from preset graph data based on the control information in accordance with the above-described S2530 may be implemented by the device of FIGS. 1 to 5 .
  • one or more processors 202a may control one or more transceivers 206a and/or one or more memories 204a to extract one or more sub-graphs from preset graph data based on the control information.
  • the first wireless device generates one or more first latent vectors based on the one or more subgraphs. Generation of the one or more first latent vectors may be performed based on an encoding operation related to the GNN model.
  • The operation of the first wireless device (e.g., 200a in FIG. 2) generating one or more first latent vectors based on the one or more sub-graphs may be implemented by the devices of FIGS. 1 to 5.
  • one or more processors 202a may control one or more transceivers 206a and/or one or more memories 204a to generate one or more first latent vectors based on the one or more sub-graphs.
  • the first wireless device receives information including one or more second latent vectors from the second wireless device.
  • The operation of the first wireless device receiving information including one or more second latent vectors from the second wireless device (e.g., 200b in FIG. 2) may be implemented by the devices of FIGS. 1 to 5.
  • one or more processors 202a may control one or more transceivers 206a and/or one or more memories 204a to receive information including one or more second latent vectors from the second wireless device (e.g., 200b of FIG. 2).
  • the first wireless device calculates a similarity between the one or more first latent vectors and the one or more second latent vectors based on the similarity function.
  • the similarity function may be based on i) the score function or ii) a similarity function different from the score function (e.g., cosine similarity, Jaccard similarity, or the overlap coefficient).
  • the latent vector may be identified by an index indicating a ranking based on the calculated score. Specifically, the degree of similarity may be calculated between the first latent vector and the second latent vector having the same index indicating the ranking.
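The pairing-by-ranking-index step above can be sketched as follows: latent vectors on each side are paired by the same top-k ranking index, and a similarity function is applied to each pair. Cosine similarity is used here as one of the functions the specification names; the two-dimensional example vectors are illustrative only.

```python
import math

def cosine(u, v):
    # cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pairwise_similarity(first_latents, second_latents):
    # first_latents[i] and second_latents[i] share the same ranking index i
    return [cosine(u, v) for u, v in zip(first_latents, second_latents)]

sims = pairwise_similarity([[1.0, 0.0], [0.0, 1.0]],
                           [[1.0, 0.0], [1.0, 1.0]])
```

Here the first pair is identical (similarity 1), while the second pair differs, signaling that the corresponding sub-graphs may have diverged between the two devices.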
  • The operation of the first wireless device calculating a similarity between the one or more first latent vectors and the one or more second latent vectors based on the similarity function may be implemented by the devices of FIGS. 1 to 5.
  • one or more processors 202a may control one or more transceivers 206a and/or one or more memories 204a to calculate a similarity between the one or more first latent vectors and the one or more second latent vectors based on the similarity function.
  • the first wireless device updates the one or more subgraphs based on the calculated similarity.
  • the updating of the one or more sub-graphs may include reconstructing one or more sub-graphs from the one or more second latent vectors, comparing the reconstructed one or more sub-graphs with the one or more sub-graphs to update the one or more sub-graphs, and reflecting the updated one or more sub-graphs in the preset graph data.
  • Updating the one or more subgraphs may be performed by adding, modifying, or deleting a node or an edge included in each of the one or more subgraphs.
  • the preset graph data may be updated based on the updated one or more subgraphs.
  • the updated preset graph data (that is, the preset graph data in which the updated one or more sub-graphs are reflected) may correspond to the shared knowledge for supporting semantic communication according to the above-described embodiment.
  • Updating the one or more subgraphs may be performed based on a transmission period of information including the one or more second latent vectors.
  • the update of the preset graph data may also be performed at the same cycle as the update of the one or more sub-graphs.
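The update step described in the bullets above can be sketched as a set operation over edges: where the similarity for a ranked pair falls below a threshold, the locally extracted sub-graph is reconciled with the restored one, and the added/deleted edges are folded back into the whole graph. The threshold value is hypothetical, and edges are modeled as frozensets of node pairs purely for illustration.

```python
def update_subgraphs(local_subs, restored_subs, sims, graph_edges, thr=0.9):
    """local_subs/restored_subs: lists of edge sets paired by ranking index.
    sims: per-pair similarities. graph_edges: edge set of the whole graph."""
    for local, restored, s in zip(local_subs, restored_subs, sims):
        if s >= thr:
            continue                       # sides already agree; nothing to do
        # add/delete edges so the local sub-graph matches the restored one
        graph_edges -= (local - restored)  # edges absent on the other side
        graph_edges |= (restored - local)  # edges present on the other side
    return graph_edges

graph = {frozenset({1, 2}), frozenset({2, 3})}
local = [{frozenset({1, 2})}]
restored = [{frozenset({1, 2}), frozenset({1, 3})}]
graph = update_subgraphs(local, restored, [0.5], graph)
```

After the call, the edge (1, 3) restored from the second latent vector has been reflected in the whole graph, mirroring the node/edge addition, modification, and deletion described above.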
  • the first wireless device (eg, 200a of FIG. 2 ) updating the one or more subgraphs based on the calculated similarity can be implemented by the devices of FIGS. 1 to 5 .
  • one or more processors 202a may control one or more transceivers 206a and/or one or more memories 204a to update the one or more sub-graphs based on the calculated degree of similarity.
  • a transmission operation of a first wireless device may be interpreted as a reception operation of a second wireless device, and a reception operation of the first wireless device may be interpreted as a transmission operation of a second wireless device.
  • an operation performed by a first wireless device may be performed by a second wireless device. That is, the sub-graph extraction, latent vector generation, and sub-graph update operations can each be performed on both the source and destination sides, and the operations based on S2530 to S2570 described above can also be performed by the second wireless device.
  • the method may be performed based on all or part (eg, S2530 to S2570) of S2510 to S2570.
  • the method may be performed when an initialization operation related to updating graph data is not performed.
  • the method may include S2510 to S2570.
  • the first wireless device may be a user equipment (UE), and the second wireless device may be a base station (BS).
  • the method may be performed based on Device to Device (D2D) communication.
  • the method may include S2510 to S2570.
  • the initialization operations according to S2510 to S2520 may be performed based on D2D communication.
  • the first wireless device may be a terminal, and the second wireless device may be another terminal.
  • the control information may be transmitted from the base station to each terminal (the first wireless device and the second wireless device).
  • the base station receiving the performance information from the first wireless device and the second wireless device transmits the control information to the first wireless device when the first wireless device and the second wireless device support an operation related to the graph data.
  • the first wireless device and the second wireless device may perform initial setting for semantic communication (the above-described initialization operation) using the received control information.
  • the method may be performed after initialization related to updating graph data.
  • the initialization may be performed based on communication between the base station and the terminal or (in case of D2D communication) based on communication between each terminal and the base station.
  • the method may include S2530 to S2570.
  • the first wireless device may be a user equipment (UE) or a base station (BS)
  • the second wireless device may be a user equipment (UE) or a base station (BS). That is, the remaining operations (S2530 to S2570), excluding the operations according to S2510 to S2520, may be performed not only between a terminal and another terminal but also between a terminal and a base station.
  • the wireless communication technology implemented in the wireless devices 200a and 200b of the present specification may include Narrowband Internet of Things (NB-IoT) for low-power communication as well as LTE, NR, and 6G.
  • NB-IoT technology may be an example of LPWAN (Low Power Wide Area Network) technology, and may be implemented in standards such as LTE Cat NB1 and/or LTE Cat NB2, but is not limited to the above-mentioned names.
  • the wireless communication technology implemented in the wireless device (XXX, YYY) of the present specification may perform communication based on LTE-M technology.
  • LTE-M technology may be an example of LPWAN technology, and may be called various names such as eMTC (enhanced machine type communication).
  • LTE-M technology may be implemented in at least one of various standards such as 1) LTE CAT 0, 2) LTE Cat M1, 3) LTE Cat M2, 4) LTE non-BL (non-Bandwidth Limited), 5) LTE-MTC, 6) LTE Machine Type Communication, and/or 7) LTE M, and is not limited to the above-mentioned names.
  • the wireless communication technology implemented in the wireless device (XXX, YYY) of the present specification may include at least one of ZigBee, Bluetooth, or Low Power Wide Area Network (LPWAN) considering low-power communication, and is not limited to the above-mentioned names.
  • ZigBee technology can generate personal area networks (PANs) related to small/low-power digital communication based on various standards such as IEEE 802.15.4, and can be called various names.
  • An embodiment according to the present specification may be implemented by various means, for example, hardware, firmware, software, or a combination thereof.
  • In the case of implementation by hardware, an embodiment of the present specification may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and the like.
  • an embodiment of the present specification may be implemented in the form of a module, procedure, or function that performs the functions or operations described above.
  • the software code can be stored in memory and run by a processor.
  • the memory may be located inside or outside the processor and exchange data with the processor by various means known in the art.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The method performed by a first wireless device supporting semantic communication in a wireless communication system according to an embodiment of the present invention comprises the steps of: extracting at least one sub-graph from preconfigured graph data; generating at least one first latent vector; receiving information including at least one second latent vector from a second wireless device; calculating a similarity between the at least one first latent vector and the at least one second latent vector; and updating the at least one sub-graph on the basis of the calculated similarity.
PCT/KR2021/014691 2021-10-20 2021-10-20 Procédé et dispositif pour prendre en charge une communication sémantique dans un système de communication sans fil WO2023068398A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2021/014691 WO2023068398A1 (fr) 2021-10-20 2021-10-20 Procédé et dispositif pour prendre en charge une communication sémantique dans un système de communication sans fil

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2021/014691 WO2023068398A1 (fr) 2021-10-20 2021-10-20 Procédé et dispositif pour prendre en charge une communication sémantique dans un système de communication sans fil

Publications (1)

Publication Number Publication Date
WO2023068398A1 true WO2023068398A1 (fr) 2023-04-27

Family

ID=86058193

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/014691 WO2023068398A1 (fr) 2021-10-20 2021-10-20 Procédé et dispositif pour prendre en charge une communication sémantique dans un système de communication sans fil

Country Status (1)

Country Link
WO (1) WO2023068398A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674317A (zh) * 2019-09-30 2020-01-10 北京邮电大学 一种基于图神经网络的实体链接方法及装置
KR102131099B1 (ko) * 2014-02-13 2020-08-05 삼성전자 주식회사 지식 그래프에 기초한 사용자 인터페이스 요소의 동적 수정 방법

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102131099B1 (ko) * 2014-02-13 2020-08-05 삼성전자 주식회사 지식 그래프에 기초한 사용자 인터페이스 요소의 동적 수정 방법
CN110674317A (zh) * 2019-09-30 2020-01-10 北京邮电大学 一种基于图神经网络的实体链接方法及装置

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ALSENTZER EMILY, FINLAYSON SAMUEL G, LI MICHELLE M, ZITNIK MARINKA: "Subgraph Neural Networks", 34TH CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), DEC 6TH - DEC 12TH, 2020, VANCOUVER, CANADA, 1 December 2020 (2020-12-01), Vancouver, Canada, XP093057797, Retrieved from the Internet <URL:https://proceedings.neurips.cc/paper/2020/file/5bca8566db79f3788be9efd96c9ed70d-Paper.pdf> [retrieved on 20230626], DOI: 10.48550/arxiv.2006.10538 *
JIANG HE; LI LUSI; WANG ZHENHUA; HE HAIBO: "Graph Neural Network Based Interference Estimation for Device-to-Device Wireless Communications", 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), IEEE, 18 July 2021 (2021-07-18), pages 1 - 7, XP033975024, DOI: 10.1109/IJCNN52387.2021.9534202 *
LAN ZIXUN, YU LIMIN, YUAN LINGLONG, WU ZILI, NIU QIANG, MA FEI: "Sub-GMN: The Neural Subgraph Matching Network Model", ARXIV:2104.00186V3, 30 April 2021 (2021-04-30), XP093057786, Retrieved from the Internet <URL:https://arxiv.org/pdf/2104.00186v3.pdf> [retrieved on 20230626], DOI: 10.48550/arxiv.2104.00186 *

Similar Documents

Publication Publication Date Title
WO2021112360A1 (fr) Procédé et dispositif d&#39;estimation de canal dans un système de communication sans fil
WO2020130727A1 (fr) Commande d&#39;initiateurs de télémétrie et de répondeurs dans un réseau à bande ultralarge
WO2020067623A1 (fr) Procédé d&#39;émission et de réception de signal de liaison descendante entre un terminal et une station de base dans un système de communication sans fil, et appareil le prenant en charge
WO2022050432A1 (fr) Procédé et dispositif d&#39;exécution d&#39;un apprentissage fédéré dans un système de communication sans fil
WO2021256584A1 (fr) Procédé d&#39;émission ou de réception de données dans un système de communication sans fil et appareil associé
WO2020145741A1 (fr) Service mac spécifique à la télémétrie et attributs pib pour ieee 802.15.4z
WO2022086160A1 (fr) Procédé permettant de transmettre efficacement un rapport de csi de liaison latérale destiné à une gestion de faisceaux dans un système de communication v2x à ondes millimétriques
WO2022075493A1 (fr) Procédé de réalisation d&#39;un apprentissage par renforcement par un dispositif de communication v2x dans un système de conduite autonome
WO2022010012A1 (fr) Procédé et dispositif de formation de faisceaux dans un système de communication sans fil
WO2023003054A1 (fr) Procédé de réalisation d&#39;une communication directe à sécurité quantique dans un système de communication quantique, et appareil associé
WO2022025321A1 (fr) Procédé et dispositif de randomisation de signal d&#39;un appareil de communication
WO2022045399A1 (fr) Procédé d&#39;apprentissage fédéré basé sur une transmission de poids sélective et terminal associé
WO2022014751A1 (fr) Procédé et appareil de génération de mots uniques pour estimation de canal dans le domaine fréquentiel dans un système de communication sans fil
WO2021251511A1 (fr) Procédé d&#39;émission/réception de signal de liaison montante de bande de fréquences haute dans un système de communication sans fil, et dispositif associé
WO2023068398A1 (fr) Procédé et dispositif pour prendre en charge une communication sémantique dans un système de communication sans fil
WO2022265141A1 (fr) Procédé de réalisation d&#39;une gestion de faisceaux dans un système de communication sans fil et dispositif associé
WO2022004927A1 (fr) Procédé d&#39;émission ou de réception de signal avec un codeur automatique dans un système de communication sans fil et appareil associé
WO2022010014A1 (fr) Procédé et appareil d&#39;estimation de bruit de phase dans un système de communication sans fil
WO2022030664A1 (fr) Procédé de communication basé sur la similarité d&#39;informations spatiales de bande inter-fréquence pour canal dans un système de communication sans fil et appareil associé
WO2022014735A1 (fr) Procédé et dispositif permettant à un terminal et une station de base de transmettre et recevoir des signaux dans un système de communication sans fil
WO2022039302A1 (fr) Procédé destiné au contrôle de calculs de réseau neuronal profond dans un système de communication sans fil, et appareil associé
WO2022045377A1 (fr) Procédé par lequel un terminal et une station de base émettent/reçoivent des signaux dans un système de communication sans fil, et appareil
WO2022050434A1 (fr) Procédé et appareil pour effectuer un transfert intercellulaire dans système de communication sans fil
WO2021256585A1 (fr) Procédé et dispositif pour la transmission/la réception d&#39;un signal dans un système de communication sans fil
WO2023013795A1 (fr) Procédé de réalisation d&#39;un apprentissage fédéré dans un système de communication sans fil, et appareil associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21961487

Country of ref document: EP

Kind code of ref document: A1