WO2024060139A1 - Communication method, apparatus, storage medium and program product - Google Patents

Communication method, apparatus, storage medium and program product

Info

Publication number
WO2024060139A1
Authority
WO
WIPO (PCT)
Prior art keywords
communication
artificial intelligence
indication information
service data
information
Application number
PCT/CN2022/120568
Other languages
English (en)
French (fr)
Inventor
王坚
乔云飞
李榕
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to PCT/CN2022/120568
Publication of WO2024060139A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W72/00 Local resource management
    • H04W72/04 Wireless resource allocation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W72/00 Local resource management
    • H04W72/12 Wireless traffic scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W80/00 Wireless network protocols or protocol adaptations to wireless operation
    • H04W80/08 Upper layer protocols
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W80/00 Wireless network protocols or protocol adaptations to wireless operation
    • H04W80/08 Upper layer protocols
    • H04W80/12 Application layer protocols, e.g. WAP [Wireless Application Protocol]

Definitions

  • the present application relates to the field of communication technology, and in particular, to a communication method, device, storage medium and program product.
  • the transmission method of the new radio (NR) standard is described as follows: a quality of service (QoS) flow is constructed and the service data is mapped step by step through the protocol stack.
  • This application provides a communication method, device, storage medium and program product to realize the transmission of artificial intelligence service data generated by the PHY/MAC layer.
  • a communication method includes: sending first indication information and/or second indication information.
  • the first indication information is used to indicate data set status information.
  • the second indication information is used to indicate model status information; and receiving third indication information, where the third indication information is used to indicate a communication strategy and communication resources, and the communication strategy and the communication resources are determined based on the first indication information and/or the second indication information.
  • the method may be executed by a first device, which may be a communication device or a component in the communication device, such as a chip (system).
  • the first device may be a first node in the network system.
  • the first node reports data set status information and/or model status information, so that the second node determines the communication strategy and communication resources based on the reported information and indicates them to the first node. This enables the transmission of artificial intelligence service data generated by the PHY/MAC layer.
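  • As a rough illustration of the exchange above, the following sketch shows the first node reporting data set status information (DSI) and/or model status information (MSI) and the second node returning a communication strategy and communication resources; the message structures, field names and selection logic are illustrative assumptions, not definitions from this application.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataSetStatusInfo:          # DSI: fields follow the description below
    data_type: str
    sampling_frequency_hz: float
    sample_count: int
    sample_accuracy: str          # e.g. "float32", "int8"

@dataclass
class ModelStatusInfo:            # MSI
    model_type: str
    model_id: int
    model_version: str

@dataclass
class ThirdIndication:            # returned by the second node
    communication_strategy: str
    communication_resources: dict

def second_node_decide(dsi: Optional[DataSetStatusInfo],
                       msi: Optional[ModelStatusInfo]) -> ThirdIndication:
    """Hypothetical policy: coarser quantization for low-accuracy samples,
    and resources sized from the number of samples."""
    strategy = "8bit_quantization+source_coding"
    if dsi is not None and dsi.sample_accuracy in ("float64", "float32"):
        strategy = "16bit_quantization+source_coding"
    prbs = 10 if dsi is None else max(10, dsi.sample_count // 100)
    return ThirdIndication(strategy, {"prbs": prbs, "slot_offset": 4})

# First node reports DSI/MSI, then uses the indicated strategy and resources.
dsi = DataSetStatusInfo("SINR", 1000.0, 2048, "float32")
msi = ModelStatusInfo("CNN", model_id=7, model_version="1.2")
third = second_node_decide(dsi, msi)
print(third.communication_strategy, third.communication_resources)
```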
  • the data set status information includes at least one of the following information: data type, sampling frequency, sampling location information, sample timestamp, sample number, and sample accuracy.
  • the data set status information refers to how the first node obtains the input data of the model.
  • the data set status information affects the communication strategy and communication resource determination of the artificial intelligence data packet.
  • the model status information includes at least one of the following information: model type, model identification, ordering of sub-models, model version number, model description, model update timestamp, model update location information, model accuracy, and model performance.
  • model status information describes the model adopted by the first node. This model status information affects the determination of the communication strategy and communication resources for artificial intelligence data packets.
  • the communication strategy is determined based on at least two items of the first indication information, the second indication information and channel state information.
  • the channel state between the first node and the second node also affects the communication strategy adopted by the first node. Therefore, determining the communication strategy based on at least two of the first indication information, the second indication information and the channel state information improves the accuracy of the communication strategy.
  • the communication strategy includes at least one of the following strategies: a quantization strategy, a source coding strategy, a modulation and coding strategy, a joint source-channel coding strategy, and a real-number or complex-number processing strategy.
  • the method further includes: reporting the size of the artificial intelligence service data packet, where the communication resources are determined based on at least two of the size of the artificial intelligence service data packet, the data set status information and the model status information.
  • the communication resources are associated with at least two of the size of the artificial intelligence service data packet, the data set status information and the model status information. Determining the communication resources based on at least two of these items improves the accuracy of the determined communication resources.
  • the method further includes: generating an artificial intelligence service data packet according to the third indication information; and sending the artificial intelligence service data packet according to the third indication information.
  • the first node generates an artificial intelligence service data packet and can send the artificial intelligence service data packet to the second node or the third node.
  • the method further includes: receiving the artificial intelligence service data packet according to the third indication information.
  • the second node may also send the artificial intelligence service data packet to the first node according to the third indication information.
  • the second node instructs the third node to send the artificial intelligence service data packet to the first node.
  • the artificial intelligence service data packets are communicated through a physical learning channel.
  • the physical learning channel is a new physical channel defined for the artificial intelligence service data of the PHY/MAC layer.
  • communication through the physical learning channel includes at least one of the following processing operations: quantization and source coding; or communication through the physical learning channel includes at least one of the following processing operations: quantization and joint source-channel coding; or communication through the physical learning channel includes the following processing operation: real-number or complex-number processing.
  • artificial intelligence service data is generally in the form of real numbers or complex numbers and cannot be directly processed by bit-level channel coding. Therefore, operations such as quantization, source coding, joint source-channel coding, and real-number or complex-number processing need to be added to the data processing flow of the physical learning channel.
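  • As an illustration of the quantization step mentioned above, the following sketch uniformly quantizes real-valued model parameters into fixed-width integer codewords before bit-level processing; the bit width and clipping range are assumptions chosen for the example, not values specified by this application.

```python
import numpy as np

def quantize_params(params: np.ndarray, n_bits: int = 8, clip: float = 1.0):
    """Uniformly quantize real values to n_bits per value; returns the
    integer codewords and the step size needed to dequantize."""
    levels = 2 ** n_bits
    step = 2 * clip / (levels - 1)
    clipped = np.clip(params, -clip, clip)
    codes = np.round((clipped + clip) / step).astype(np.uint16)
    return codes, step

def dequantize_params(codes: np.ndarray, step: float, clip: float = 1.0):
    return codes.astype(np.float32) * step - clip

# Example: real-valued gradients quantized to 8-bit symbols for the channel.
grads = np.random.randn(1000).astype(np.float32) * 0.1
codes, step = quantize_params(grads, n_bits=8)
recovered = dequantize_params(codes, step)
print("max quantization error:", float(np.max(np.abs(recovered - grads))))
```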
  • the artificial intelligence service data packet includes a packet header and a payload
  • the payload includes at least one of the following data: parameters or gradients of the machine learning model, inference results, intermediate results of inference up to the split layer, gradients back-propagated to the split layer, and logits.
  • the packet header includes at least one of the following: content indication information, content description information, compression method, and packet grouping method.
  • the receiving end can accurately depacket based on the information in the packet header.
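  • The header/payload structure described above could be serialized as in the following sketch; the header layout, field widths and content-type codes are illustrative assumptions rather than a format defined by this application.

```python
import struct
import numpy as np

# Hypothetical header: content type, compression method, grouping method, payload length.
CONTENT_GRADIENTS, CONTENT_LOGITS = 0, 1
HEADER_FMT = "!BBBI"   # three 1-byte fields + 4-byte payload length, network byte order

def pack_ai_packet(content_type: int, compression: int, grouping: int,
                   payload: np.ndarray) -> bytes:
    body = payload.astype(np.float32).tobytes()
    header = struct.pack(HEADER_FMT, content_type, compression, grouping, len(body))
    return header + body

def unpack_ai_packet(packet: bytes):
    hdr_size = struct.calcsize(HEADER_FMT)
    content_type, compression, grouping, length = struct.unpack(HEADER_FMT, packet[:hdr_size])
    payload = np.frombuffer(packet[hdr_size:hdr_size + length], dtype=np.float32)
    return content_type, compression, grouping, payload

pkt = pack_ai_packet(CONTENT_GRADIENTS, compression=0, grouping=0,
                     payload=np.random.randn(16))
content, _, _, payload = unpack_ai_packet(pkt)
print(content, payload.shape)
```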
  • a communication method comprising: receiving first indication information and/or second indication information, the first indication information being used to indicate data set status information, the second indication information being used to indicate model status information; determining a communication strategy and communication resources according to the first indication information and/or the second indication information; and sending third indication information, the third indication information being used to indicate the communication strategy and the communication resources.
  • the method may be executed by a second device, the second device may be a communication device, or a component in the communication device, such as a chip (system).
  • the second device may be a second node in a network system.
  • the second node obtains the data set status information and/or model status information reported by the first node, determines the communication strategy and communication resources accordingly, and indicates them to the first node, so that the transmission of artificial intelligence service data generated by the PHY/MAC layer can be realized.
  • the data set status information includes at least one of the following information: data type, sampling frequency, sampling location information, sample timestamp, sample number, and sample accuracy.
  • the model status information includes at least one of the following information: model type, model identification, ordering of sub-models, model version number, model description, model update timestamp, model update location information, model accuracy, and model performance.
  • the communication strategy is determined based on at least two items of the first indication information, the second indication information and channel state information.
  • the communication strategy includes at least one of the following strategies: a quantization strategy, a source coding strategy, a modulation and coding strategy, a joint source-channel coding strategy, and a real-number or complex-number processing strategy.
  • determining the communication resources according to the first indication information and/or the second indication information includes: obtaining the size of the artificial intelligence service data packet; and determining the communication resources according to at least two of the size of the artificial intelligence service data packet, the first indication information and the second indication information.
  • obtaining the size of the artificial intelligence service data packet includes: determining the size of the artificial intelligence service data packet based on a correspondence between at least one of the task type and the service type and the data packet size; or receiving the reported size of the artificial intelligence service data packet.
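  • A minimal sketch of the first option above, looking up the packet size from a correspondence table keyed by task type and service type; the table contents and key names are invented for illustration.

```python
# Hypothetical correspondence table mapping (task type, service type) to a
# nominal artificial intelligence service data packet size in bytes.
PACKET_SIZE_TABLE = {
    ("federated_learning", "gradient_upload"): 4_000_000,
    ("split_learning", "split_layer_activations"): 200_000,
    ("federated_distillation", "logits"): 40_000,
}

def lookup_packet_size(task_type: str, service_type: str,
                       default: int = 100_000) -> int:
    """Return the configured packet size, falling back to a default."""
    return PACKET_SIZE_TABLE.get((task_type, service_type), default)

print(lookup_packet_size("split_learning", "split_layer_activations"))
```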
  • the method further includes: receiving an artificial intelligence service data packet sent according to the third indication information.
  • the method further includes: generating an artificial intelligence service data packet according to the third indication information; and sending the artificial intelligence service data packet according to the third indication information.
  • the method further includes: instructing a third node to send the artificial intelligence service data packet according to the third indication information.
  • the artificial intelligence service data packets are communicated through a physical learning channel.
  • communication through the physical learning channel includes at least one of the following processing operations: quantization and source coding; or communication through the physical learning channel includes at least one of the following processing operations: quantization and joint source-channel coding; or communication through the physical learning channel includes the following processing operation: real-number or complex-number processing.
  • the artificial intelligence service data packet includes a packet header and a payload
  • the payload includes at least one of the following data: parameters or gradients of the machine learning model, inference results, intermediate results of inference up to the split layer, gradients back-propagated to the split layer, and logits.
  • the packet header includes at least one of the following: content indication information, content description information, compression method, and packet grouping method.
  • a communication method includes: obtaining artificial intelligence service data generated by the physical (PHY) layer and/or the media access control (MAC) layer; and sending the artificial intelligence service data through a first radio bearer.
  • the method may be executed by a first device, which may be a communication device or a component in the communication device, such as a chip (system).
  • the first device may be a first node in the network system.
  • a new protocol stack processing flow is defined.
  • after the artificial intelligence service data generated at the PHY/MAC layer is input into the first radio bearer, it can be sent to the second node through the first radio bearer, thereby realizing the communication of the artificial intelligence service data generated at the PHY/MAC layer.
  • sending the artificial intelligence service data through the first radio bearer includes: processing the artificial intelligence service data in at least one of the following protocol layers: the packet data convergence protocol (PDCP) layer, the radio link control (RLC) layer, the MAC layer and the PHY layer; and sending the processed artificial intelligence service data.
  • the first radio bearer is a learning radio bearer or a first data radio bearer.
  • the method is applied to a terminal, and the method further includes: receiving first signaling, where the first signaling is used to indicate configuring the first radio bearer.
  • the method is applied to a network device, and the method further includes: sending a first signaling, where the first signaling is used to indicate configuration of the first radio bearer.
  • the first radio bearer is a newly defined learning radio bearer or a first data radio bearer, and the first radio bearer can be configured through first signaling.
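  • The following sketch illustrates the idea of routing PHY/MAC-generated artificial intelligence service data into a dedicated radio bearer and through the remaining protocol layers; the layer functions are reduced to placeholders and the bearer class is an assumption for illustration only.

```python
from typing import Callable, List

def pdcp_process(sdu: bytes) -> bytes:
    # Placeholder for PDCP functions (e.g. integrity protection, ciphering).
    return sdu

def rlc_process(pdu: bytes) -> bytes:
    # Placeholder for RLC functions (e.g. segmentation, ARQ bookkeeping).
    return pdu

def mac_process(pdu: bytes) -> bytes:
    # Placeholder for MAC functions (e.g. multiplexing onto a transport block).
    return pdu

def phy_process(pdu: bytes) -> bytes:
    # Placeholder for PHY functions (e.g. channel coding, modulation).
    return pdu

class LearningRadioBearer:
    """Hypothetical 'first radio bearer' dedicated to AI service data."""
    def __init__(self, layers: List[Callable[[bytes], bytes]]):
        self.layers = layers

    def send(self, ai_service_data: bytes) -> bytes:
        for layer in self.layers:
            ai_service_data = layer(ai_service_data)
        return ai_service_data   # handed to the radio for transmission

bearer = LearningRadioBearer([pdcp_process, rlc_process, mac_process, phy_process])
tx = bearer.send(b"AI service data generated at the PHY/MAC layer")
```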
  • a communication method includes: receiving artificial intelligence service data through a first radio bearer; and outputting the artificial intelligence service data to the physical (PHY) layer and/or the media access control (MAC) layer.
  • the method may be executed by a second device, which may be a communication device or a component in the communication device, such as a chip (system).
  • the second device may be a second node in the network system.
  • a new protocol stack processing flow is defined. After the artificial intelligence service data is received through the first radio bearer, it is output to the PHY/MAC layer, realizing the communication of artificial intelligence service data generated at the PHY/MAC layer.
  • outputting the artificial intelligence service data to the physical (PHY) layer and/or the media access control (MAC) layer includes: processing the artificial intelligence service data in at least one of the following protocol layers: the PHY layer, the MAC layer, the radio link control (RLC) layer and the packet data convergence protocol (PDCP) layer; and outputting the processed artificial intelligence service data to the PHY layer and/or MAC layer.
  • the first radio bearer is a learning radio bearer or a first data radio bearer.
  • the method is applied to a network device, and the method further includes: sending first signaling, where the first signaling is used to indicate configuring the first radio bearer.
  • the method is applied to a terminal, and the method further includes: receiving first signaling, where the first signaling is used to indicate configuring the first radio bearer.
  • the first radio bearer is a newly defined learning radio bearer or a first data radio bearer, and the first radio bearer can be configured through first signaling.
  • a fifth aspect provides a communication device that can implement the communication method in the first aspect.
  • the communication device may be a chip or a first node.
  • the above method can be implemented through software, hardware, or through hardware executing corresponding software.
  • the communication device includes a transceiver unit and a processing unit, where the transceiver unit is configured to send first indication information and/or second indication information, the first indication information is used to indicate data set status information, and the second indication information is used to indicate model status information; and the transceiver unit is further configured to receive third indication information, where the third indication information is used to indicate a communication strategy and communication resources, and the communication strategy and the communication resources are determined based on the first indication information and/or the second indication information.
  • the data set status information includes at least one of the following information: data type, sampling frequency, sampling location information, sample timestamp, sample number, and sample accuracy.
  • the model status information includes at least one of the following information: model type, model identification, ordering of sub-models, model version number, model description, model update timestamp, model update location information, model accuracy, and model performance.
  • the communication strategy is determined based on at least two items of the first indication information, the second indication information and channel state information.
  • the communication strategy includes at least one of the following strategies: a quantization strategy, a source coding strategy, a modulation and coding strategy, a joint source-channel coding strategy, and a real-number or complex-number processing strategy.
  • the transceiver unit is further configured to report the size of the artificial intelligence service data packet, where the communication resources are determined based on at least two of the size of the artificial intelligence service data packet, the data set status information and the model status information.
  • the processing unit is configured to generate an artificial intelligence service data packet according to the third indication information; and the transceiver unit is further configured to send the artificial intelligence service data packet according to the third indication information.
  • the transceiver unit is further configured to receive the artificial intelligence service data packet according to the third indication information.
  • the artificial intelligence service data packets are communicated through a physical learning channel.
  • communication through the physical learning channel includes at least one of the following processing operations: quantization and source coding; or communication through the physical learning channel includes at least one of the following processing operations: quantization and joint source-channel coding; or communication through the physical learning channel includes the following processing operation: real-number or complex-number processing.
  • the artificial intelligence service data packet includes a packet header and a payload
  • the payload includes at least one of the following data: parameters or gradients of the machine learning model, inference results, intermediate results of inference up to the split layer, gradients back-propagated to the split layer, and logits.
  • the packet header includes at least one of the following: content indication information, content description information, compression method, and packet grouping method.
  • a sixth aspect provides a communication device that can implement the communication method in the above-mentioned second aspect.
  • the communication device may be a chip or a second node.
  • the above method can be implemented through software, hardware, or through hardware executing corresponding software.
  • the communication device includes a transceiver unit and a processing unit, where the transceiver unit is configured to receive first indication information and/or second indication information, the first indication information is used to indicate data set status information, and the second indication information is used to indicate model status information; the processing unit is configured to determine a communication strategy and communication resources according to the first indication information and/or the second indication information; and the transceiver unit is further configured to send third indication information, where the third indication information is used to indicate the communication strategy and the communication resources.
  • the data set status information includes at least one of the following information: data type, sampling frequency, sampling location information, sample timestamp, sample number, and sample accuracy.
  • the model status information includes at least one of the following information: model type, model identification, sub-model order, model version number, model description, model update timestamp, model update location information, model accuracy, and model performance.
  • the communication strategy is determined based on at least two items of the first indication information, the second indication information and channel state information.
  • the communication strategy includes at least one of the following strategies: a quantization strategy, a source coding strategy, a modulation and coding strategy, a joint source-channel coding strategy, and a real-number or complex-number processing strategy.
  • the processing unit is further configured to obtain the size of the artificial intelligence service data packet; and the processing unit is further configured to determine the communication resources according to at least two of the size of the artificial intelligence service data packet, the first indication information and the second indication information.
  • the processing unit is further configured to determine the size of the artificial intelligence service data packet based on a correspondence between at least one of the task type and the service type and the data packet size; or the transceiver unit is further configured to receive the reported size of the artificial intelligence service data packet.
  • the transceiver unit is further configured to receive an artificial intelligence service data packet sent according to the third indication information.
  • the processing unit is further configured to generate an artificial intelligence service data packet according to the third indication information; and the transceiver unit is further configured to send the artificial intelligence service data packet according to the third indication information.
  • the processing unit is further configured to instruct a third node to send the artificial intelligence service data packet according to the third indication information.
  • the artificial intelligence service data packets are communicated through a physical learning channel.
  • communication through the physical learning channel includes at least one of the following processing operations: quantization and source coding; or communication through the physical learning channel includes at least one of the following processing operations: quantization and joint source-channel coding; or communication through the physical learning channel includes the following processing operation: real-number or complex-number processing.
  • the artificial intelligence service data packet includes a packet header and a payload
  • the payload includes at least one of the following data: parameters or gradients of the machine learning model, inference results, intermediate results of inference up to the split layer, gradients back-propagated to the split layer, and logits.
  • the packet header includes at least one of the following: content indication information, content description information, compression method, and packet grouping method.
  • a seventh aspect provides a communication device that can implement the communication method in the third aspect.
  • the communication device may be a chip or a first node.
  • the above method can be implemented through software, hardware, or through hardware executing corresponding software.
  • the communication device includes a transceiver unit and a processing unit, where the processing unit is configured to obtain artificial intelligence service data generated by the physical (PHY) layer and/or the media access control (MAC) layer; and the transceiver unit is configured to send the artificial intelligence service data through a first radio bearer.
  • the processing unit is configured to process the artificial intelligence service data in at least one of the following protocol layers: the packet data convergence protocol (PDCP) layer, the radio link control (RLC) layer, the MAC layer and the PHY layer; and the transceiver unit is configured to send the processed artificial intelligence service data.
  • the first radio bearer is a learning radio bearer or a first data radio bearer.
  • the communication device is a terminal, and the transceiver unit is further configured to receive first signaling, where the first signaling is used to indicate configuring the first radio bearer.
  • the communication device is a network device, and the transceiver unit is further configured to send first signaling, where the first signaling is used to indicate configuring the first radio bearer.
  • An eighth aspect provides a communication device that can implement the communication method in the fourth aspect.
  • the communication device may be a chip or a second node.
  • the above method can be implemented through software, hardware, or through hardware executing corresponding software.
  • the communication device includes a transceiver unit and a processing unit, where the transceiver unit is configured to receive artificial intelligence service data through a first radio bearer; and the processing unit is configured to output the artificial intelligence service data to the physical (PHY) layer and/or the media access control (MAC) layer.
  • the processing unit is configured to process the artificial intelligence service data in at least one of the following protocol layers: the PHY layer, the MAC layer, the radio link control (RLC) layer and the packet data convergence protocol (PDCP) layer; and the processing unit is further configured to output the processed artificial intelligence service data to the PHY layer and/or MAC layer.
  • the first radio bearer is a learning radio bearer or a first data radio bearer.
  • the communication device is a network device, and the transceiver unit is further used to send a first signaling, where the first signaling is used to indicate configuration of the first radio bearer.
  • the communication device is a terminal, and the transceiver unit is further configured to receive first signaling, where the first signaling is used to indicate configuring the first radio bearer.
  • the above-mentioned processing unit may be a processor
  • the above-mentioned transceiver unit may be a transceiver or a communication interface, or may include a receiving unit and a sending unit; the receiving unit may be a receiver or an input interface, and the sending unit may be a transmitter or an output interface. Optionally, the device may also include a storage unit, and the storage unit may be a memory.
  • the transceiver unit may be an input/output interface of the chip, such as an input/output circuit, pins, etc.
  • a communication device including a processor, configured to execute the method in the above first aspect or any implementation of the first aspect.
  • a communication device including a processor, configured to execute the method in the above second aspect or any implementation of the second aspect.
  • a communication device including a processor, configured to execute the method in the above third aspect or any implementation of the third aspect.
  • a communication device including a processor, configured to execute the method in the above fourth aspect or any implementation of the fourth aspect.
  • a communication device including a processor, a memory, and instructions stored on the memory and executable on the processor; when the instructions are executed, the communication device is caused to execute the method in the above first aspect or any implementation of the first aspect.
  • a communication device including a processor, a memory, and instructions stored on the memory and executable on the processor; when the instructions are executed, the communication device is caused to execute the method in the above second aspect or any implementation of the second aspect.
  • a communication device including a processor, a memory, and instructions stored on the memory and executable on the processor; when the instructions are executed, the communication device is caused to execute the method in the above third aspect or any implementation of the third aspect.
  • a communication device including a processor, a memory, and instructions stored on the memory and executable on the processor; when the instructions are executed, the communication device is caused to execute the method in the above fourth aspect or any implementation of the fourth aspect.
  • a communication system which includes the communication device of the fifth aspect and the communication device of the sixth aspect.
  • An eighteenth aspect provides a communication system, which includes the communication device of the seventh aspect and the communication device of the eighth aspect.
  • In a nineteenth aspect, a computer-readable storage medium is provided. Computer programs or instructions are stored in the computer-readable storage medium; when a computer executes the computer programs or instructions, the methods described in the above aspects are implemented.
  • a computer program product containing instructions is provided; when the instructions are run on a communication device, the communication device is caused to perform the methods described in the above aspects.
  • Figure 1 shows a schematic diagram of a communication system involved in this application.
  • Figure 2 is a schematic diagram of a communication system in a specific scenario provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of another communication system involved in the present application.
  • Figure 4 is a schematic diagram of a communication system in another specific scenario provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of a federated learning architecture provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of a split learning architecture provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of the architecture of federated distillation provided by an embodiment of the present application.
  • Figure 8 is a schematic diagram of the transmission of application layer data in the prior art.
  • Figure 9 is a schematic diagram of UCI/DCI communication provided by an embodiment of the present application.
  • Figure 10 is a schematic diagram of the data processing flow of an existing physical channel.
  • Figure 11 is a schematic flow chart of a communication method provided by an embodiment of the present application.
  • Figure 12 is a schematic diagram of communication of artificial intelligence service data provided by an embodiment of the present application.
  • Figure 13a is a schematic diagram of a data processing flow of a physical learning channel provided by an embodiment of the present application.
  • Figure 13b is a schematic diagram of the data processing flow of another physical learning channel provided by an embodiment of the present application.
  • Figure 13c is a schematic diagram of the data processing flow of another physical learning channel provided by an embodiment of the present application.
  • Figure 14 is a schematic flow chart of another communication method provided by an embodiment of the present application.
  • Figure 15 is a schematic flow chart of another communication method provided by an embodiment of the present application.
  • Figure 16 is a schematic diagram of the protocol stack processing flow of artificial intelligence service data provided by an embodiment of the present application.
  • Figure 17 is a schematic structural diagram of a communication device provided by an embodiment of the present application.
  • Figure 18 is a schematic structural diagram of a simplified terminal provided by an embodiment of the present application.
  • Figure 19 is a schematic structural diagram of a simplified network device provided by an embodiment of the present application.
  • This application can be applied to the fifth generation (5G) mobile communication system, the sixth generation (6G) mobile communication system, future evolved mobile communication systems, satellite communication systems, short-range communication systems or other communication systems, which is not limited in this application.
  • Figure 1 provides a schematic diagram of a communication system involved in the present application.
  • the communication system includes one or more first nodes 101 (two first nodes are illustrated in the figure) and one or more second nodes 102 (one second node is illustrated in the figure).
  • This communication system can be applied to scenarios such as cellular networks.
  • the first node 101 may be a distributed node, such as a terminal, or a network device;
  • the second node 102 may be a central node, such as a network device, or a terminal.
  • FIG. 2 is a schematic diagram of a communication system in a specific scenario provided by an embodiment of the present application, in which the first node 101 is a terminal and the second node 102 is a network device.
  • the first node 101 sends the first indication information and/or the second indication information to the second node 102.
  • the first indication information is used to indicate the data set status information
  • the second indication information is used to indicate the model status information
  • the second node 102 determines the communication strategy and communication resources according to the first indication information and/or the second indication information
  • the second node 102 sends the third indication information to the first node 101, the third indication information is used to indicate the communication strategy and communication resources,
  • the communication strategy and communication resources are determined based on the first indication information and/or the second indication information.
  • the first node 101 may communicate the artificial intelligence service data packet with the second node 102 based on the communication policy and communication resources indicated by the third indication information.
  • FIG. 3 shows a schematic diagram of another communication system involved in this application.
  • the communication system includes a first node 201, a second node 202 and a third node 203.
  • This communication system can be applied to scenarios such as vehicle to everything (V2X) and device-to-device (D2D) communication.
  • the first node 201 and the third node 203 may be distributed nodes, such as terminals; the second node 202 may be a central node, such as a network device.
  • the first node 201 sends first indication information and/or second indication information to the second node 202.
  • the first indication information is used to indicate the data set status information
  • the second indication information is used to indicate the model status information
  • the second node 202 determines the communication strategy and communication resources according to the first indication information and/or the second indication information
  • the second node 202 sends the third indication information to the first node 201
  • the third indication information is used to indicate the communication strategy and communication resources
  • the communication strategy and communication resources are determined based on the first indication information and/or the second indication information.
  • the first node 201 may communicate the artificial intelligence service data packet with the third node 203 based on the communication policy and communication resources indicated by the third indication information.
  • the terminal is a device with wireless transceiver function, which can be deployed on land (including indoors or outdoors), and can be handheld, worn or mounted on a vehicle; it can also be deployed on the water, such as on a ship; it can also be deployed in the air, such as on an airplane, a balloon, and a satellite.
  • the terminal can be a mobile phone, a tablet computer (pad), a computer with wireless transceiver function, a wearable device, a drone, a helicopter, an airplane, a ship, a robot, a mechanical arm, a smart home device, a virtual reality (VR) terminal, an augmented reality (AR) terminal, a wireless terminal in industrial control, a wireless terminal in self-driving, a whole vehicle, a functional module in a vehicle, a wireless terminal in remote medical, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in smart city (e.g., street lamps, etc.), a wireless terminal in smart home, etc.
  • the embodiments of the present application do not limit the application scenarios.
  • a terminal may also sometimes be referred to as user equipment (UE), an access terminal, a UE unit, a mobile station, a remote station, a remote terminal, a mobile device, a terminal, a wireless communication device, a UE agent or a UE device, etc.
  • A network device can be any device with wireless transceiver functions, including but not limited to: a base station (NodeB), an evolved base station (eNodeB), a base station in a 5G communication system, a base station or access network device in a future communication system, an access node, a wireless relay node or a wireless backhaul node in a WiFi system, etc.
  • the network device can also be a wireless controller in a cloud radio access network (CRAN) scenario.
  • A network device can also be a small cell, a transmission reception point (TRP), etc.
  • the communication system may also include core network equipment (not shown in the figure).
  • the functions of the core network equipment are mainly to provide user connections, manage users and carry services, and serve as the bearer network to provide interfaces to external networks.
  • the establishment of user connections includes functions such as mobility management (MM), call management (CM), switching/routing, and recording notification (combined with intelligent network services to complete the connection to intelligent network peripheral devices).
  • User management includes user description, quality of service (QoS), user communication records (accounting), virtual home environment (VHE) (dialogue with the intelligent network platform to provide a virtual home environment), and security (the corresponding security measures provided by the authentication center include security management of mobile services and security processing of external network access).
  • Bearer connections include connections to the external public switched telephone network (PSTN), external circuit data networks and packet data networks, the Internet and intranets, as well as the mobile network's own short message service (SMS) server, etc.
  • the basic services that core network equipment can provide include mobile office, e-commerce, communications, entertainment services, travel and location-based services, telemetry services and simple messaging services (monitoring and control), etc.
  • the terminal or network device includes a hardware layer, an operating system layer running on the hardware layer, and an application layer running on the operating system layer.
  • This hardware layer includes hardware such as central processing unit (CPU), memory management unit (MMU) and memory (also called main memory).
  • the operating system can be any one or more computer operating systems that implement business processing through processes, such as the Linux operating system, Unix operating system, Android operating system, iOS or Windows operating system, etc.
  • This application layer includes applications such as browsers, address books, word processing software, and instant messaging software.
  • the embodiments of the present application do not specifically limit the specific structure of the execution subject of the method provided by the embodiments of the present application.
  • the method provided by the embodiments of the present application can be executed by running a program that records the code of the method provided by the embodiments of the present application.
  • the execution subject of the method provided by the embodiment of the present application can be a terminal or a network device, or a functional module in the terminal or network device that can call a program and execute the program.
  • the relevant functions of the terminal or network device in the embodiment of the present application can be implemented by one device, or can be implemented by multiple devices together, or can be implemented by one or more functional modules in one device.
  • the above functions can be network elements in hardware devices, software functions running on dedicated hardware, a combination of hardware and software, or virtualized functions instantiated on a platform (for example, a cloud platform).
  • the communication between the network device and the terminal in the communication system shown in Figure 2 can also be expressed in another form.
  • FIG. 4 is a schematic diagram of a communication system in another specific scenario provided by an embodiment of the present application.
  • the terminal 40 includes a processor 401, a memory 402, and a transceiver 403.
  • the transceiver 403 includes a transmitter 4031, a receiver 4032, and an antenna 4033.
  • the network device 41 includes a processor 411, a memory 412, and a transceiver 413.
  • the transceiver 413 includes a transmitter 4131, a receiver 4132, and an antenna 4133.
  • the receiver 4032 may be configured to receive transmission control information through the antenna 4033, and the transmitter 4031 may be configured to send transmission feedback information to the network device 41 through the antenna 4033.
  • the transmitter 4131 may be used to send transmission control information to the terminal 40 through the antenna 4133, and the receiver 4132 may be used to receive transmission feedback information sent by the terminal 40 through the antenna 4133.
  • the processor 401/processor 411 can be a CPU, a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the program of the present application.
  • the memory 402/memory 412 may be a device with a storage function.
  • it can be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, or a random access memory (RAM) or other type of dynamic storage device that can store information and instructions.
  • it can also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compressed optical discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
  • the memory can exist independently and be connected to the processor through communication lines. Memory can also be integrated with the processor.
  • the memory 402/memory 412 is used to store computer execution instructions for executing the solution of the present application, and the execution is controlled by the processor 401/processor 411.
  • the processor 401/processor 411 is used to execute computer execution instructions stored in the memory 402/memory 412, thereby implementing the information transmission method provided in the embodiment of the present application.
  • the processor 401/processor 411 may also perform processing-related functions in the information transmission method provided in the following embodiments of the present application.
  • the computer-executed instructions in the embodiments of the present application may also be called application codes, which are not specifically limited in the embodiments of the present application.
  • Edge devices are widely distributed across regions and corners of the world, and they continuously generate and accumulate huge amounts of raw data at a rapid speed. If the central side needs to collect the raw data from all edge devices, this inevitably brings huge communication overhead and computing power requirements.
  • In federated learning (FL), the edge devices (also known as "distributed nodes") cooperate with central servers (also called "central nodes") without the raw data being collected centrally.
  • FIG. 5 is a schematic diagram of a federated learning architecture provided by an embodiment of this application.
  • this FL architecture is the most widely used training architecture in the current FL field.
  • the FedAvg algorithm is the basic algorithm of FL. Its algorithm flow is roughly as follows:
  • the central node initializes the model to be trained, denoted w^0, and broadcasts it to all distributed nodes.
  • each distributed node k ∈ [1, K] trains the received global model w^t on its local data set D_k for E rounds (epochs) to obtain a local training result w_k^{t+1}, and reports it to the central node.
  • the central node aggregates the local training results collected from all (or some) clients. Let S_t denote the set of clients uploading local models in round t. The central node performs a weighted average using each client's number of samples n_k as the weight to obtain the new global model, with the update rule w^{t+1} = Σ_{k∈S_t} (n_k / Σ_{j∈S_t} n_j) · w_k^{t+1}. Afterwards, the central node broadcasts the latest global model w^{t+1} to all client devices for a new round of training.
  • alternatively, the trained local gradients can be reported, and the central node averages the local gradients and updates the global model along the direction of this average gradient.
  • the data set exists at the distributed node, that is, the distributed node collects the local data set, conducts local training, and reports the local results (model or gradient) obtained by the training to the central node.
  • the central node itself does not have a data set, and is only responsible for fusion processing of the training results of distributed nodes, obtaining the global model, and issuing it to distributed nodes.
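  • The weighted aggregation step of FedAvg described above can be summarized in the following sketch; the parameter shapes, client set and sample counts are illustrative assumptions.

```python
import numpy as np

def fedavg_aggregate(local_models: dict, sample_counts: dict) -> np.ndarray:
    """Weighted average of client models, weights proportional to sample counts."""
    total = sum(sample_counts[k] for k in local_models)
    agg = np.zeros_like(next(iter(local_models.values())))
    for k, w_k in local_models.items():
        agg += (sample_counts[k] / total) * w_k
    return agg

# Round t: three clients report locally trained parameter vectors.
local_models = {1: np.full(4, 0.10), 2: np.full(4, 0.30), 3: np.full(4, 0.50)}
sample_counts = {1: 100, 2: 300, 3: 600}
w_next = fedavg_aggregate(local_models, sample_counts)
print(w_next)   # (0.1*100 + 0.3*300 + 0.5*600) / 1000 = 0.4 per element
```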
  • In split learning, the complete neural network model is divided into two parts (i.e., two sub-networks): one part is deployed on the distributed nodes, and the other part is deployed on the central node.
  • the layer at which the complete neural network is divided is called the "split layer".
  • the distributed nodes input local data into the local sub-network, perform inference up to the split layer, and send the split-layer result Fk through the communication link.
  • the central node inputs the received Fk into another sub-network deployed by itself, and continues forward inference to obtain the final inference result.
  • during training, the gradient is propagated back to the split layer through the sub-network of the central node to obtain the back-propagation result Gk. The central node then sends Gk to the distributed nodes, and gradient back-propagation continues on the sub-networks of the distributed nodes.
  • the trained subnetwork on the distributed node can be saved locally on the distributed node or on a specific model storage server. When a new distributed node joins the learning system, it can first download the trained distributed node subnetwork and then use local data for further training.
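  • A minimal sketch of the split learning exchange described above, with single linear layers standing in for the two sub-networks; the dimensions, loss and learning rate are placeholders, not values from this application.

```python
import numpy as np

rng = np.random.default_rng(0)
W_dev = rng.normal(size=(8, 4))    # sub-network on the distributed node
W_cen = rng.normal(size=(4, 1))    # sub-network on the central node

x = rng.normal(size=(1, 8))        # local data at the distributed node
y = np.array([[1.0]])              # local label

# Forward pass: the distributed node infers up to the split layer and sends Fk.
Fk = x @ W_dev                     # split-layer result (sent over the link)

# The central node continues forward inference and computes the loss gradient.
pred = Fk @ W_cen
grad_pred = 2 * (pred - y)         # d(MSE)/d(pred)
Gk = grad_pred @ W_cen.T           # gradient at the split layer (sent back)

# Each side updates its own sub-network parameters.
lr = 0.01
W_cen -= lr * Fk.T @ grad_pred
W_dev -= lr * x.T @ Gk
```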
  • FIG. 7 is a schematic diagram of the architecture of federated distillation provided by an embodiment of the present application.
  • Federated distillation is also a feasible distributed learning method. Similar to federated learning, multiple distributed nodes interact with the central node to jointly complete the training of the machine learning model. Different from federated learning, the information exchanged between the distributed nodes and the central node in federated distillation is logits, rather than the model parameters or gradients used in federated learning. The workflow is similar to that of federated learning and will not be described again here.
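  • A sketch of the federated distillation exchange just described, in which logits rather than model parameters are reported and fused; the per-class logits and the simple averaging rule are illustrative assumptions.

```python
import numpy as np

# Each distributed node reports its average per-class logits.
local_logits = {
    1: np.array([2.1, -0.3, 0.5]),
    2: np.array([1.8,  0.1, 0.2]),
    3: np.array([2.4, -0.5, 0.9]),
}

# The central node fuses the reported logits (here a simple mean) and broadcasts
# the result, which each node then uses as a distillation target for local training.
global_logits = np.mean(np.stack(list(local_logits.values())), axis=0)
print(global_logits)
```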
  • the above-mentioned AI traffic is generated in the corresponding L1/L2 protocol layers. It is different from the service data flows issued by the application layer, so a new communication method needs to be designed.
  • the existing communication scheme only provides the protocol stack processing flow for upper layer data from the application layer/network layer, and cannot be used for data streams generated at lower protocol layers such as the physical layer/MAC layer, for example the above-mentioned AI traffic.
  • Figure 8 is a schematic diagram of the transmission of application layer data in the prior art.
  • the transmission method of the new radio (NR) standard is described as follows: a QoS flow is constructed and mapped step by step to each layer in the protocol stack, such as SDAP/PDCP/RLC/MAC/PHY, and finally sent through the RF air interface.
  • the functions of each protocol layer are as follows:
  • SDAP: maintains the mapping of QoS flows to data radio bearers (DRBs);
  • PDCP: data integrity protection and ciphering;
  • RLC: buffering and automatic repeat request (ARQ);
  • MAC: radio resource management (RRM), hybrid automatic repeat request (HARQ) and multiplexing;
  • PHY: builds transport channels and performs channel coding, modulation, resource mapping and other processing.
  • UCI: uplink control information; DCI: downlink control information.
  • FIG. 10 is a schematic diagram of the data processing flow of an existing physical channel.
  • the above UCI/DCI is transmitted by a dedicated physical channel (control channel) after special processing as shown in Figure 10.
  • In existing cellular network standards (long term evolution (LTE) and NR), UCI/DCI have a fixed format, which contains multiple fields, and the meaning of the bits in each field is given by the standard.
  • for example, DCI format 0_0 in NR includes fields such as: identifier, frequency domain resource assignment (F), time domain resource assignment (T), frequency hopping flag, MCS, new data indicator, redundancy version (RV), HARQ process number, TPC command and UL/SUL indicator; and UCI may include channel state information (CSI) (for example a channel quality indicator (CQI), rank indication (RI), precoding matrix indicator (PMI), layer indicator (LI) or CRI), acknowledgement (ACK)/negative acknowledgement (NACK), etc.
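  • For illustration only, the following sketch packs fixed-width control fields into a bit string in the spirit of the DCI description above; the field names follow that description, but the bit widths are arbitrary example values, not the widths defined by the LTE/NR specifications.

```python
# Hypothetical field widths chosen for the example.
FIELDS = [("identifier", 1), ("freq_domain", 9), ("time_domain", 4),
          ("hopping", 1), ("mcs", 5), ("new_data", 1), ("rv", 2),
          ("harq", 4), ("tpc", 2), ("ul_sul", 1)]

def pack_dci(values: dict) -> str:
    bits = ""
    for name, width in FIELDS:
        v = values.get(name, 0)
        if v >= (1 << width):
            raise ValueError(f"{name} does not fit in {width} bits")
        bits += format(v, f"0{width}b")
    return bits

print(pack_dci({"mcs": 17, "rv": 2, "harq": 5}))   # 30-bit control payload
```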
  • UCI/DCI is shorter and more important than ordinary data.
  • the physical dedicated channels and related processing used to transmit UCI/DCI as shown in Figure 10 are only suitable for shorter bit sequences.
  • the above AI traffic may include larger data packets such as parameters, gradients, and even training data of the machine learning model.
  • the transmission of these data packets may require packing/unpacking and similar processing, which is not supported on the physical channels of the existing protocol.
  • this application provides a communication solution.
  • the first node sends first indication information and/or second indication information to the second node.
  • the first indication information is used to indicate the data set status information
  • the second indication information is used to indicate model status information
  • the second node determines the communication strategy and communication resources according to the first indication information and/or the second indication information
  • the second node sends third indication information to the first node, where the third indication information is used to indicate the communication strategy and communication resources, and the communication strategy and communication resources are determined according to the first indication information and/or the second indication information.
  • Figure 11 is a schematic flow chart of a communication method provided by an embodiment of the present application, which is applied to the communication system shown in Figure 1.
  • the method may include the following steps:
  • the first node sends the first indication information and/or the second indication information to the second node.
  • the second node receives the first indication information and/or the second indication information.
  • the first indication information is used to indicate dataset state information (DSI)
  • the second indication information is used to indicate model state information (MSI).
  • the first node may send the first indication information to the second node; or the first node may send the second indication information to the second node; or the first node may send the first indication information and the second indication information to the second node.
  • the first node may report the first indication information and/or the second indication information through a data channel, a control channel or other dedicated channels.
  • The data set status information refers to the way in which the first node obtains the input data of the model.
  • This data set status information affects the communication strategy of artificial intelligence data packets and the determination of communication resources.
  • the data set status information includes at least one of the following information: data type, sampling frequency, sampling location information, sample timestamp, sample number, and sample accuracy.
  • For the physical layer, the data type includes at least one of the following: channel, signal strength, signal to interference plus noise ratio (SINR).
  • For the MAC layer, the data type includes at least one of the following: scheduling, resource allocation, buffer status, and link adaptation.
  • The sampling frequency can be expressed as one sample per given number of mini-slots/slots/subframes/frames/seconds.
  • Sampling location information can be characterized in any of the following ways: cell identifier, distance to the base station, or global positioning system (GPS) location; this application does not limit this.
  • Sample accuracy can be characterized in at least one of the following ways: double, 64-bit floating point (float64), float, 32-bit floating point (float32), 16-bit floating point (float16), 8-bit floating point (float8), integer (int), 64-bit integer (int64), 32-bit integer (int32), 16-bit integer (int16), 8-bit integer (int8), binary number (bin).
  • The first indication information is used to indicate the data set status information; each piece of the at least one kind of data set status information may be indicated separately or jointly.
  • For example, for the data type, several bits may be used to indicate multiple data types: if there are 8 data types, a 3-bit binary number can be used to indicate them; a sketch of such an indication is given below.
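  • As a rough illustration of such bit-level indication, the following Python sketch packs a few assumed data set status fields into a bit string; the field names, widths and ordering are illustrative assumptions only, not formats defined by this application or any standard.

```python
# Illustrative sketch only: field names and bit widths are assumptions,
# not values taken from any standard or from this application.
DATA_TYPES = ["channel", "signal_strength", "SINR", "scheduling",
              "resource_allocation", "buffer_status", "link_adaptation",
              "reserved"]          # 8 types -> 3 bits are enough

def encode_data_type(name: str) -> str:
    """Separate indication: one 3-bit field for the data type."""
    return format(DATA_TYPES.index(name), "03b")

def encode_dsi(data_type: str, sampling_freq_idx: int, accuracy_idx: int) -> str:
    """Joint indication: concatenate several DSI fields into one bit string
    (3 bits data type + 4 bits sampling frequency index + 4 bits accuracy index)."""
    return (encode_data_type(data_type)
            + format(sampling_freq_idx, "04b")
            + format(accuracy_idx, "04b"))

print(encode_dsi("SINR", sampling_freq_idx=5, accuracy_idx=2))  # '01001010010'
```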
  • Model status information refers to the description of the model adopted by the first node.
  • the model status information includes at least one of the following information: model type, model identification, ordering of sub-models, model version number, model description, model update timestamp, model update location information, model accuracy, and model performance.
  • The model type includes at least one of the following: fully connected neural network (FCNN), convolutional neural network (CNN), recurrent neural network (RNN), transformer, hybrid model.
  • the model identifier is used to index the corresponding model and task.
  • the ordering of submodels is used to indicate whether submodels and their ordering are used to build a complete model.
  • the model version number is used for version synchronization.
  • the model description includes the amount of parameters, the amount of inference operations, etc.
  • Model performance refers to the inference performance on the reference data set.
  • the second indication information is used to indicate the model state information, and may be respectively indicated for each of the at least one model state information or jointly indicated.
  • For example, multiple model types may be indicated by several bits: if there are 6 model types, a 3-bit binary number may be used to indicate them, as in the sketch below.
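  • Similarly, the model status information can be indicated jointly; the sketch below, with assumed field widths, encodes a model type together with a version number and shows the receiver splitting the joint indication back into its fields.

```python
# Hypothetical sketch: 6 model types indicated with a 3-bit field, jointly
# encoded with a version number; widths and names are assumptions.
MODEL_TYPES = ["FCNN", "CNN", "RNN", "transformer", "hybrid", "other"]

def encode_msi(model_type: str, version: int) -> str:
    """3-bit model type + 8-bit model version number, jointly indicated."""
    return format(MODEL_TYPES.index(model_type), "03b") + format(version, "08b")

def decode_msi(bits: str):
    """The receiver splits the joint indication back into its fields."""
    return MODEL_TYPES[int(bits[:3], 2)], int(bits[3:], 2)

assert decode_msi(encode_msi("CNN", 7)) == ("CNN", 7)
```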
  • the second node determines the communication strategy and communication resources according to the first indication information and/or the second indication information.
  • The second node may communicate with one or more first nodes. After receiving the first indication information and/or the second indication information, the second node performs scheduling to determine the first node(s) that need to send uplink artificial intelligence service data.
  • the channel status between the first node and the second node has a certain impact on determining the first node that needs to send artificial intelligence service data uplink. Therefore, the first node that needs to send uplink artificial intelligence service data may also be determined based on at least two items of the first indication information, the second indication information, and the channel state information.
  • The second node can perform channel estimation to determine the channel state information between the first node and the second node; or the second node can obtain the channel state information between the first node and the second node through channel reciprocity, feedback of channel state information from the first node, and so on.
  • The methods by which the second node schedules the first node include, but are not limited to, the following (a simple selection sketch is given after this list):
  • One way is to schedule the first node with good channel conditions
  • Another way is to schedule the first node whose data set meets the requirements, for example, schedule the first node whose data type, sampling frequency, sampling location, sampling time, sample number, sample accuracy and other parameters meet the requirements of the second node.
  • Another way is to schedule the first node whose model meets the requirements, for example a first node whose model type, model identification, model description, model update time, model update location, model accuracy, model performance and other parameters meet the requirements of the second node.
  • Another implementation is to schedule the first node with good channel conditions and a data set that meets the requirements
  • Another implementation is to schedule the first node with good channel conditions and model that meets the requirements
  • Another implementation is to schedule the first node whose channel conditions are good, the data set meets the requirements, and the model meets the requirements.
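  • A minimal sketch of such scheduling is given below; the SINR threshold and the specific requirement checks are assumptions chosen only to illustrate combining channel state, data set status information and model status information.

```python
# Minimal scheduling sketch. The SINR threshold and the requirement checks are
# illustrative assumptions, not rules defined by this application.
def schedule_first_nodes(candidates, min_sinr_db=10.0, required_data_type="SINR",
                         required_model_type="CNN"):
    """Return the first nodes allowed to send uplink AI service data, based on
    channel state, data set status information and model status information."""
    scheduled = []
    for node in candidates:
        good_channel = node["sinr_db"] >= min_sinr_db
        dataset_ok = node["dsi"]["data_type"] == required_data_type
        model_ok = node["msi"]["model_type"] == required_model_type
        if good_channel and dataset_ok and model_ok:
            scheduled.append(node["id"])
    return scheduled

nodes = [
    {"id": 1, "sinr_db": 15.2, "dsi": {"data_type": "SINR"}, "msi": {"model_type": "CNN"}},
    {"id": 2, "sinr_db": 4.8,  "dsi": {"data_type": "SINR"}, "msi": {"model_type": "CNN"}},
]
print(schedule_first_nodes(nodes))  # [1]
```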
  • After the second node determines the first node that needs to send artificial intelligence service data uplink, it determines the communication strategy used by the first node for uplink transmission.
  • the above communication strategy includes at least one of the following strategies: quantization strategy, source coding strategy, modulation and coding scheme (MCS), source channel joint coding strategy, real number or complex number processing strategy.
  • Data set status information and model status information affect the communication strategy adopted by the first node.
  • the second node may determine the communication strategy based on the first indication information, or the second indication information, or the first indication information and the second indication information.
  • the channel state between the first node and the second node has a certain impact on the communication strategy adopted by the first node and the second node. For example, if the channel conditions are good, the quantization accuracy can be selected to be higher; if the channel conditions are poor, the quantization accuracy can be selected to be lower. For another example, if the channel conditions are good, the MCS level can be selected to be higher; if the channel conditions are poor, the MCS level can be selected to be lower. Therefore, the communication policy may also be determined based on at least two items of the first indication information, the second indication information and the channel state information.
  • the second node may also determine the communication resources that the first node can use or needs to use based on the first indication information and/or the second indication information.
  • the communication resources that the first node can use or need to use are related to the size of the artificial intelligence service data packets that the first node needs to uplink transmit.
  • the second node may first obtain the size of the artificial intelligence service data packet, and then determine the communication resource based on at least two items of the size of the artificial intelligence service data packet, the first indication information, and the second indication information.
  • For example, when the AI task is channel state information (CSI) compression and the type of the artificial intelligence service data packet is data, the size of the corresponding artificial intelligence service data packet is S1; when the AI task is CSI compression and the type of the artificial intelligence service data packet is model parameters, the size of the corresponding artificial intelligence service data packet is S2; and so on.
  • the second node can obtain the task type of the AI task and the type of the AI service data packet. Therefore, the size of the AI service data packet (i.e., the amount of data to be communicated) can be determined based on the correspondence between at least one of the task type and the service type and the data packet size. For example, the type of the AI task obtained by the second node is beam management, and the type of the AI service data packet is model parameters. The second node can determine the size of the AI service data packet as A2 based on the correspondence in Table 1 above.
  • After the second node determines the size of the artificial intelligence service data packet, it can determine the communication resources based on at least two of the size of the artificial intelligence service data packet, the first indication information, and the second indication information, for example in any of the following ways (a rough numerical sketch is given after these examples).
  • For example, the second node calculates the number of modulation symbols based on the size of the artificial intelligence service data packet, the quantization strategy, the source coding strategy, and the MCS, and then determines the communication resources based on the number of modulation symbols.
  • the second node calculates the number of modulation symbols based on the size of the artificial intelligence service data packet, quantization strategy, source channel joint coding strategy, and modulation strategy, and then determines communication resources based on the number of modulation symbols.
  • the second node calculates the number of real symbols to be communicated based on the size of the artificial intelligence service data packet and the real or complex number processing strategy, and then determines the communication resources based on the number of real symbols to be communicated.
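  • As a rough numerical illustration of the first calculation above, the sketch below estimates the number of modulation symbols and resource blocks for one packet; all numbers (packet size, quantization depth, source-coding ratio, MCS parameters, resource-grid size) are assumed example values, not values taken from this application or from a standard.

```python
import math

def required_prbs(num_values: int, quant_bits: int = 8, src_coding_ratio: float = 0.6,
                  code_rate: float = 0.5, mod_order: int = 4, re_per_prb: int = 12 * 14):
    """Estimate the uplink resources needed for one AI service data packet."""
    info_bits = num_values * quant_bits * src_coding_ratio   # after quantization + source coding
    coded_bits = info_bits / code_rate                       # after channel coding (MCS code rate)
    mod_symbols = math.ceil(coded_bits / mod_order)          # e.g. 16QAM carries 4 bits/symbol
    return mod_symbols, math.ceil(mod_symbols / re_per_prb)

symbols, prbs = required_prbs(num_values=10_000)
print(symbols, prbs)   # 24000 modulation symbols -> 143 resource blocks with these assumptions
```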
  • the first node can report the size of the artificial intelligence service data packet to the second node.
  • the second node receives the size of the artificial intelligence service data packet reported by the first node.
  • the second node sends the third indication information to the first node.
  • the first node receives the third indication information.
  • the third indication information is used to indicate the above communication strategy and communication resources.
  • The second node may indicate the scheduled first node through downlink control information (DCI) or learning control information (LCI), and indicate the communication strategy and communication resources used for its uplink transmission.
  • The second node can use reserved fields in an existing DCI format to carry the above communication strategy and communication resources; or use a newly defined DCI format to carry them; or use the newly defined LCI to carry them.
  • the LCI may be a newly defined information format for distributed learning.
  • the above DCI/LCI can be scrambled using a dedicated radio network temporary identity (RNTI), such as a learning-radio network temporary identity (L-RNTI).
  • After receiving the above DCI/LCI, the first node uses the dedicated RNTI to descramble the DCI/LCI; if the descrambling succeeds, the DCI/LCI is addressed to the first node (a conceptual sketch of this scrambling and descrambling is given below).
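  • The sketch below only illustrates the idea of RNTI-based scrambling and descrambling of a control-message checksum; it uses zlib.crc32 as a stand-in checksum and a hypothetical L-RNTI value, and does not reproduce the CRC length, polynomial or bit handling defined by the standard.

```python
import zlib

# Conceptual sketch of RNTI-based scrambling of a control-message checksum.
L_RNTI = 0xFFF4        # hypothetical dedicated learning RNTI value

def attach_scrambled_crc(payload: bytes, rnti: int) -> bytes:
    crc = zlib.crc32(payload) & 0xFFFF        # keep 16 bits for this toy example
    return payload + (crc ^ rnti).to_bytes(2, "big")

def descramble_and_check(message: bytes, rnti: int) -> bool:
    """Returns True if descrambling with this RNTI succeeds, i.e. the DCI/LCI
    is addressed to the node holding this RNTI."""
    payload, scrambled = message[:-2], int.from_bytes(message[-2:], "big")
    return (scrambled ^ rnti) == (zlib.crc32(payload) & 0xFFFF)

msg = attach_scrambled_crc(b"\x12\x34\x56", L_RNTI)
print(descramble_and_check(msg, L_RNTI))   # True: scheduled by this DCI/LCI
print(descramble_and_check(msg, 0x003C))   # False: not addressed to this node
```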
  • the method may also include the following steps (indicated by dotted lines in the figure):
  • the first node generates an artificial intelligence service data packet according to the third instruction information.
  • After receiving the third indication information, the first node generates an artificial intelligence service data packet according to the third indication information.
  • The first node generates the artificial intelligence service data packet according to the communication policy and communication resources indicated by the third indication information. For example, when the communication resources required by the artificial intelligence service data packet are greater than the communication resources indicated by the third indication information, the first node can unpack the artificial intelligence service data packet and split it into several sub-data packets, so that each sub-data packet conforms to the communication resources indicated by the third indication information.
  • the first node performs quantization processing on the artificial intelligence service data packet according to the quantization strategy (a type of communication strategy) indicated by the third instruction information, so as to comply with the quantization strategy indicated by the third instruction information.
  • the first node modulates the artificial intelligence service data packet according to the modulation and coding strategy (a type of communication strategy) indicated by the third indication information to comply with the modulation and coding strategy indicated by the third indication information.
  • the artificial intelligence service data packet includes a header and a payload.
  • the above-mentioned load includes at least one of the following data: parameters or gradients of the machine learning model, inference results, intermediate results from inference to the segmentation layer, gradients passed back to the segmentation layer, and pair scores.
  • The above header includes at least one of the following (a sketch of such a header and of sub-packet splitting is given after this list):
  • Content indication information: used to indicate which of the above data is included in the payload;
  • Content description information: used to indicate the timestamp, acquisition location, version information, etc. of the data included in the payload;
  • Compression method: used to indicate the pruning position of the payload (that is, the position at which the artificial intelligence service data packet is split), whether distillation is applied, etc.;
  • Packaging method: used to indicate the sequence number of the complete artificial intelligence service data packet and the sequence numbers of the sub-packets after the complete packet is split (when the amount of data in the artificial intelligence service data packet is large, it can be sent as several sub-data packets, and the packet header includes the sequence number of each sub-data packet, which the second node uses to reassemble the multiple sub-data packets after receiving them).
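  • A minimal sketch of such a packet layout and of splitting a large packet into sub-packets is shown below; the header field names and the splitting rule are illustrative assumptions, not a format defined by this application.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AIPacketHeader:
    content_type: str        # content indication: e.g. "gradients", "inference_result"
    timestamp: float         # content description
    compression: str         # compression method, e.g. pruning position / distillation flag
    packet_sn: int           # sequence number of the complete AI service data packet
    sub_sn: int              # sequence number of this sub-packet after splitting
    total_subs: int          # total number of sub-packets

@dataclass
class AIPacket:
    header: AIPacketHeader
    payload: bytes

def split_packet(payload: bytes, packet_sn: int, max_payload: int,
                 content_type: str = "gradients") -> List[AIPacket]:
    """Unpack a large AI service data packet into sub-packets that fit the
    communication resources indicated by the third indication information."""
    chunks = [payload[i:i + max_payload] for i in range(0, len(payload), max_payload)]
    return [AIPacket(AIPacketHeader(content_type, 0.0, "none", packet_sn, i, len(chunks)), c)
            for i, c in enumerate(chunks)]

subs = split_packet(b"\x00" * 1000, packet_sn=7, max_payload=300)
print(len(subs), subs[0].header.sub_sn, subs[-1].header.total_subs)  # 4 0 4
```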
  • the first node sends the artificial intelligence service data packet to the second node according to the third instruction information.
  • the second node receives the artificial intelligence service data packet sent according to the third indication information.
  • the first node sends the above-mentioned artificial intelligence service data packet to the second node on the communication resource indicated by the third indication information according to the communication policy indicated by the third indication information.
  • the above-mentioned artificial intelligence service data packets are communicated through the physical learning channel.
  • the physical learning channel is a new physical channel defined for the artificial intelligence service data of the PHY/MAC layer, and may include:
  • Uplink: physical uplink learning channel (PULCH);
  • Downlink: physical downlink learning channel (PDLCH);
  • Broadcast: physical broadcast learning channel (PBLCH).
  • the above-mentioned PULCH/PDLCH/PBLCH may be carried by dedicated time-frequency resources.
  • In one implementation, communication through a physical learning channel includes at least one of the following processing operations: quantization, source coding.
  • Figure 13a is a data processing flow diagram of a physical learning channel provided by an embodiment of the present application; communication through this physical learning channel includes the following processing operations: quantization, source coding, channel coding, rate matching, scrambling, modulation, layer mapping, precoding, resource mapping, orthogonal frequency division multiplexing (OFDM) signal generation, and so on.
  • Channel coding and the subsequent processing operations can reuse the corresponding processing operations shown in Figure 10, or specially designed processing operations can be used (for example, channel coding, modulation and signal generation methods different from those in existing standards).
  • It can be seen that, compared with the processing flow shown in Figure 10, the above-mentioned physical learning channel adds the "quantization" and "source coding" processing operations.
  • quantization refers to quantizing the real value contained in the artificial intelligence service data into a few real numbers with limited possibilities.
  • If the artificial intelligence service data is complex-valued, quantization refers to quantizing the real parts of the complex numbers contained in the data into a limited set of real numbers, and likewise quantizing the imaginary parts into a limited set of real numbers.
  • Quantization methods such as uniform quantization, non-uniform quantization, and vector quantization can be used.
  • Source coding refers to encoding the real value obtained by quantization into a bit sequence, and source coding methods such as Huffman coding, arithmetic coding, and L-Z coding can be used.
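  • The sketch below illustrates the quantization and source coding steps using uniform quantization and fixed-length codewords; Huffman or arithmetic coding could replace the fixed-length mapping, and the value range and number of levels are assumptions chosen only for illustration.

```python
import numpy as np

def uniform_quantize(x: np.ndarray, num_levels: int = 16, lo: float = -1.0, hi: float = 1.0):
    """Quantize real values into a small set of representative real numbers."""
    step = (hi - lo) / num_levels
    idx = np.clip(np.floor((x - lo) / step), 0, num_levels - 1).astype(int)
    return idx, lo + (idx + 0.5) * step          # indices and reconstructed values

def source_encode(idx: np.ndarray, num_levels: int = 16) -> str:
    """Encode quantization indices into a bit sequence (fixed-length codewords)."""
    width = int(np.ceil(np.log2(num_levels)))
    return "".join(format(int(i), f"0{width}b") for i in idx)

grads = np.array([-0.83, 0.10, 0.57])            # e.g. model gradients to be sent
idx, recon = uniform_quantize(grads)
print(source_encode(idx))                        # '000110001100' with these assumptions
```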
  • Communication through the physical learning channel includes at least one of the following processing operations: quantization, source channel joint encoding.
  • Figure 13b is a schematic diagram of the data processing flow of another physical learning channel provided by an embodiment of the present application; communication through this physical learning channel includes the following processing operations: quantization, source channel joint coding, rate matching, scrambling, modulation, layer mapping, precoding, resource mapping, OFDM signal generation, and so on.
  • the rate matching and subsequent processing operations can reuse the corresponding processing operations as shown in Figure 10, or use specially designed corresponding processing operations (for example, using different channel coding, modulation, and signal generation methods from existing standards).
  • Source channel joint coding refers to encoding the quantized real value into a bit sequence.
  • the source-channel joint coding operation combines source coding and channel coding. Through this process, the quantized output is compressed and the reliability of the transmission is improved.
  • In another implementation, communication through the physical learning channel includes the following processing operation: real number or complex number processing.
  • Figure 13c is a schematic diagram of the data processing flow of yet another physical learning channel provided by an embodiment of the present application; communication through this physical learning channel includes the following processing operations: real number or complex number processing, layer mapping, precoding, resource mapping, OFDM signal generation, and so on.
  • the layer mapping and subsequent processing operations can reuse the corresponding processing operations as shown in Figure 10, or use specially designed corresponding processing operations (for example, using different channel coding, modulation, and signal generation methods from existing standards).
  • Compared with the processing flow shown in Figure 10, the above-mentioned physical learning channel adds the "real number or complex number processing" operation and removes the "channel coding", "rate matching", "scrambling", "modulation" and other processing operations.
  • The real number or complex number processing operation refers to mapping the real and imaginary parts of the real or complex values contained in the artificial intelligence service data into a real number sequence, on which subsequent processing such as layer mapping is performed directly before sending.
  • the real or complex number processing operation may map the real part and the imaginary part of the input real value or complex value to a real number sequence.
  • the mapping method may be to directly take the same value (i.e., the output is equal to the input), or to map the input real value to a limited number of real values (e.g., numbers less than or equal to -1 are mapped to -1, numbers from -1 to 0 are mapped to -0.5, numbers from 0 to 1 are mapped to 0.5, and numbers greater than or equal to 1 are mapped to 1).
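  • A small sketch of the real number or complex number processing operation is given below; the interleaving order and the threshold values used for the limited-value mapping are example assumptions.

```python
import numpy as np

def complex_to_real_sequence(x: np.ndarray, limit_values: bool = False) -> np.ndarray:
    """Lay out real and imaginary parts as one real-valued sequence; optionally
    map the values to a limited set of real numbers."""
    seq = np.ravel(np.column_stack([x.real, x.imag]))   # interleave Re/Im
    if not limit_values:
        return seq                                      # identity mapping: output equals input
    return np.where(seq <= -1, -1.0,
                    np.where(seq < 0, -0.5, np.where(seq < 1, 0.5, 1.0)))

x = np.array([0.3 - 1.2j, -0.7 + 2.0j])
print(complex_to_real_sequence(x))                      # [ 0.3 -1.2 -0.7  2. ]
print(complex_to_real_sequence(x, limit_values=True))   # [ 0.5 -1.  -0.5  1. ]
```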
  • the real number sequence output by the real number or complex number processing operation is mapped to the time-frequency-space transmission resource through layer mapping, precoding, resource mapping and other operations, and the signal is obtained after signal generation and then sent out.
  • the signals on the same time-frequency and space transmission resources are superimposed on each other and then received by the second node.
  • The second node can process the superimposed signal to obtain the required information; this is the process of over-the-air computation.
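  • The toy example below illustrates the over-the-air computation idea: transmissions from several first nodes superimpose on the same time-frequency resources, and the second node operates directly on the sum (here, recovering a noisy average of model updates); perfect synchronization and unit channel gains are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]         # updates from 3 first nodes

received = sum(updates) + 0.01 * rng.normal(size=4)      # superposition + receiver noise
aggregated = received / len(updates)                     # second node recovers the average

print(np.round(aggregated, 3))
print(np.round(np.mean(updates, axis=0), 3))             # close to the true average
```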
  • the header and payload of the above-mentioned artificial intelligence service data packet can be sent separately.
  • The bit sequence contained in the header can be transmitted through the existing physical channel shown in Figure 10, being sent after channel coding, rate matching, scrambling, modulation and other processing; the payload can be sent through the above-mentioned physical learning channel.
  • After receiving the above-mentioned artificial intelligence service data packet, the second node unpacks it according to the packet header and obtains the required artificial intelligence service data.
  • In this embodiment, the communication strategy and communication resources are determined by obtaining the data set status information and/or model status information of the first node, and the communication strategy and communication resources are indicated to the first node; the first node then communicates with the second node based on the communication policy and communication resources, which improves communication reliability.
  • the above step S1104 can also be replaced by the second node generating an artificial intelligence service data packet according to the third instruction information.
  • the above step S1105 can also be replaced by: the second node sends the artificial intelligence service data packet to the first node according to the third instruction information.
  • the first node receives the artificial intelligence service data packet sent according to the third indication information. That is, after the second node determines the third indication information, it sends the artificial intelligence service data packet to the first node according to the third indication information.
  • For the communication of the artificial intelligence service data packet, please refer to the description of S1105 above, which will not be repeated here.
  • Figure 14 is a schematic flow chart of another communication method provided by an embodiment of the present application, which is applied to the communication system shown in Figure 3.
  • the method may include the following steps:
  • the first node sends the first indication information and/or the second indication information to the second node.
  • the first indication information is used to indicate data set status information
  • the second indication information is used to indicate model status information.
  • the second node receives the first indication information and/or the second indication information.
  • The specific implementation of this step can refer to step S1101 shown in Figure 11, and will not be described again here.
  • the second node determines a communication strategy and communication resources according to the first indication information and/or the second indication information.
  • The specific implementation of this step can refer to step S1102 shown in Figure 11, and will not be described again here.
  • The channel state information includes channel state information between the first node and the third node; or the channel state information includes channel state information among the first node, the second node and the third node.
  • the second node sends the third indication information to the first node.
  • the third indication information is used to indicate communication strategies and communication resources.
  • the first node receives the third indication information.
  • The specific implementation of this step can refer to step S1103 shown in Figure 11, and will not be described in detail here.
  • the method may also include the following steps (indicated by dotted lines in the figure):
  • the first node generates an artificial intelligence service data packet according to the third instruction information.
  • The specific implementation of this step can refer to step S1104 shown in Figure 11, and will not be described again here.
  • the first node sends the artificial intelligence service data packet to the third node according to the third instruction information.
  • the third node receives the artificial intelligence service data packet.
  • this method is applied to the communication system shown in Figure 3.
  • The first node communicates with the third node according to the communication strategy and communication resources indicated by the third indication information sent by the second node.
  • the first node sends the artificial intelligence service data packet to the third node according to the third indication information.
  • In this embodiment, the communication strategy and communication resources are determined by obtaining the data set status information and/or model status information of the first node, and the communication strategy and communication resources are indicated to the first node; the first node then communicates with the third node according to the communication policy and communication resources, which improves the reliability of communication.
  • the above step S1403 may be replaced by the second node sending third indication information to the third node.
  • the third node receives the third indication information.
  • the above step S1404 can be replaced by: the third node generates an artificial intelligence service data packet according to the third instruction information.
  • the above step S1405 can be replaced by: the third node sends the artificial intelligence service data packet to the first node according to the third instruction information.
  • the first node receives the artificial intelligence service data packet.
  • the above embodiments describe a communication method for artificial intelligence service data in a low-layer protocol stack such as the physical layer and/or MAC layer.
  • the following embodiment describes the communication method of artificial intelligence business data through a high-level protocol stack:
  • Figure 15 is a schematic flowchart of another communication method provided by an embodiment of the present application, which is applied to the communication system shown in Figure 1 or Figure 3.
  • the communication system involves a first node and a second node.
  • the first node may be a terminal and the second node may be a network device; or the first node may be a network device and the second node may be a terminal.
  • the method may include the following steps:
  • the first node obtains artificial intelligence service data generated by the physical layer and/or MAC layer.
  • the first node uses machine learning in an L1/L2 protocol layer such as a physical layer or a MAC layer of a wireless communication system, and obtains artificial intelligence service data generated by the physical layer and/or the MAC layer.
  • the meaning and contents of the artificial intelligence service data can be referred to in the above embodiments.
  • the first node sends artificial intelligence service data to the second node through the first wireless bearer.
  • the second node receives the artificial intelligence service data through the first wireless bearer.
  • a protocol stack processing flow of artificial intelligence business data is shown in Figure 16.
  • An artificial intelligence business data control (AI traffic control, ATC) layer is added to the protocol stack.
  • the ATC layer is used to input the artificial intelligence service data generated by the physical layer/MAC layer to the first wireless bearer.
  • In another protocol stack processing flow of artificial intelligence service data, the function of inputting the artificial intelligence service data generated by the physical layer/MAC layer to the first radio bearer may instead be added to an existing module or protocol layer, so that the existing module or protocol layer with this new function can input the artificial intelligence service data generated by the physical layer/MAC layer to the first radio bearer.
  • the first radio bearer is a newly defined learning radio bearer (LRB) or a first data radio bearer.
  • the first data radio bearer is a new data radio bearer (newDRB).
  • The first node processes the artificial intelligence service data in at least one of the following protocol layers: PDCP layer, RLC layer, MAC layer, PHY layer, and obtains the processed artificial intelligence service data. Specifically, the first node performs data integrity protection and encryption on the artificial intelligence service data at the PDCP layer; the first node buffers, segments and reassembles the artificial intelligence service data processed by the PDCP layer at the RLC layer; and the first node performs mapping, multiplexing/demultiplexing, scheduling, HARQ, logical channel prioritization and other processing on the artificial intelligence service data processed by the RLC layer at the MAC layer.
  • the first node performs channel coding, modulation, resource mapping, etc. on the artificial intelligence service data processed by the MAC layer at the PHY layer. Then, the first node sends the processed artificial intelligence service data to the second node through a data channel, a control channel or a dedicated channel.
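  • The following sketch shows this new protocol-stack flow conceptually: an ATC layer hands the AI service data generated at the PHY/MAC layer to the first radio bearer, after which it passes through PDCP/RLC/MAC/PHY; the layer behaviour here is a placeholder, not a real protocol implementation.

```python
# Conceptual sketch of the new protocol-stack flow. The layer functions below
# are placeholders, not real PDCP/RLC/MAC/PHY implementations.
class Layer:
    def __init__(self, name):
        self.name = name
    def process(self, sdu: bytes) -> bytes:
        return f"[{self.name}]".encode() + sdu            # stand-in for real processing

class ATCLayer:
    """AI traffic control layer: routes PHY/MAC-generated AI data to an LRB."""
    def __init__(self, lrb_chain):
        self.lrb_chain = lrb_chain                        # PDCP -> RLC -> MAC -> PHY
    def send(self, ai_service_data: bytes) -> bytes:
        pdu = ai_service_data
        for layer in self.lrb_chain:
            pdu = layer.process(pdu)
        return pdu                                        # handed to the RF module

lrb = [Layer(n) for n in ("PDCP", "RLC", "MAC", "PHY")]
atc = ATCLayer(lrb)
print(atc.send(b"gradients"))   # b'[PHY][MAC][RLC][PDCP]gradients'
```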
  • the second node receives the processed artificial intelligence service data.
  • the second node outputs the artificial intelligence service data to the physical layer and/or MAC layer.
  • After the second node receives the processed artificial intelligence service data through the radio frequency module, it processes the artificial intelligence service data in at least one of the following protocol layers: PHY layer, MAC layer, RLC layer, and PDCP layer; that is, the processing may be performed on one or more of these protocol layers. For example, the second node processes the artificial intelligence service data through the PHY layer, MAC layer, RLC layer, and PDCP layer in sequence.
  • the second node outputs the processed artificial intelligence service data to the PHY layer and/or MAC layer.
  • the second node can input the processed artificial intelligence service data to the ATC layer through the first radio bearer, and the ATC layer outputs the artificial intelligence service data input from the first radio bearer to the PHY layer and/or MAC layer.
  • The second node can also input the processed artificial intelligence service data to an existing module or protocol layer through the first wireless bearer, where the existing module or protocol layer has been given the function of outputting the artificial intelligence service data input from the first wireless bearer to the PHY layer and/or the MAC layer, so that the artificial intelligence service data input from the first wireless bearer can be output to the PHY layer and/or the MAC layer.
  • the processing flow of the second node is the reverse process of the processing flow in Figure 16, which will not be described again here.
  • a new protocol stack processing flow is defined.
  • For the artificial intelligence service data generated at the PHY/MAC layer, after it is input to the first wireless bearer through the ATC layer, it can be sent to the second node through the first wireless bearer, realizing communication of the artificial intelligence service data generated at the PHY/MAC layer.
  • the following steps may also be included:
  • the network device sends a first signaling to the terminal, where the first signaling is used to instruct configuration of a first radio bearer.
  • the terminal receives the first signaling.
  • When the first node is a terminal and the second node is a network device, the second node sends the above first signaling to the first node; when the first node is a network device and the second node is a terminal, the first node sends the above first signaling to the second node.
  • the first signaling is radio resource control (RRC) signaling.
  • configuring the first radio bearer includes adding an LRB and releasing an LRB.
  • the LRB is added/released in the Radio Bearer Configuration (RadioBearerConfig) information element of RRC signaling.
  • The process of adding an LRB includes: for each LRB to be added (LRB-ToAddMod) in the list of LRBs to be added (LRB-ToAddModList), checking its LRB identity (the LRB-Identity field, which may, for example, be a number);
  • if the reestablishPDCP field is present, re-establishing the PDCP entity and configuring it according to the pdcp-Config field in LRB-ToAddMod, where the configuration method is the same as the existing PDCP entity configuration method;
  • if the recoverPDCP field is present, recovering the corresponding PDCP entity.
  • the process of releasing an LRB includes: releasing the LRB corresponding to the LRB-Identity field in the LRB-ToReleaseList and its corresponding PDCP entity.
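  • A sketch of how a terminal might apply such an LRB configuration is given below; the dictionary layout, the field spellings and the PDCP-entity object are assumptions used only to illustrate processing lrb-ToAddModList and lrb-ToReleaseList.

```python
class PDCPEntity:
    def __init__(self, config):
        self.config = config

def apply_radio_bearer_config(cfg: dict, lrbs: dict):
    """lrbs maps LRB-Identity -> PDCPEntity for the currently established LRBs."""
    for add_mod in cfg.get("lrb-ToAddModList", []):
        lrb_id = add_mod["lrb-Identity"]
        if add_mod.get("reestablishPDCP") or lrb_id not in lrbs:
            lrbs[lrb_id] = PDCPEntity(add_mod["pdcp-Config"])   # (re-)establish the PDCP entity
    for lrb_id in cfg.get("lrb-ToReleaseList", []):
        lrbs.pop(lrb_id, None)                                   # release LRB and its PDCP entity
    return lrbs

state = {}
state = apply_radio_bearer_config(
    {"lrb-ToAddModList": [{"lrb-Identity": 1, "pdcp-Config": {"sn-size": 18}}]}, state)
state = apply_radio_bearer_config({"lrb-ToReleaseList": [1]}, state)
print(state)   # {} after the LRB has been added and then released
```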
  • the network device sends Radio Resource Control Reconfiguration (RRCReconfiguration) signaling to the terminal:
  • the RRCReconfiguration signaling includes a radio resource control reconfiguration (rrcReconfiguration) field, and the rrcReconfiguration field includes radio resource control reconfiguration-information elements (RRCReconfiguration-IEs).
  • the radio resource control reconfiguration field is used to configure the establishment of LRB.
  • the radio resource control reconfiguration-information element also includes a radio bearer configuration (radioBearerConfig) field. This field is used to configure the establishment of LRB.
  • the radio bearer configuration field includes a list of LRBs to be added (lrb-ToAddModList) and a list of LRBs to be released (lrb-ToReleaseList).
  • After receiving the RRCReconfiguration signaling, the terminal establishes or releases LRBs according to the contents of the lrb-ToAddModList and lrb-ToReleaseList fields.
  • the network device sends Radio Resource Control Recovery (RRCResume) signaling to the terminal:
  • the radio resource control recovery signaling includes a radio resource control recovery (rrcResume) field, which in turn includes radio resource control recovery-information elements (RRCResume-IEs).
  • the radio resource control recovery field is used to resume the suspended RRC connection.
  • the radio resource control recovery-information element includes a radio bearer configuration (radioBearerConfig) field.
  • The radioBearerConfig field in turn carries a radio bearer configuration (RadioBearerConfig) information element.
  • the RadioBearerConfig field includes the LRB list to be added (lrb-ToAddModList) and the LRB list to be released (lrb-ToReleaseList).
  • The contents of the lrb-ToAddModList and lrb-ToReleaseList fields are as described above, specifying the LRBs to be established and the LRBs to be released.
  • After receiving the RRCResume signaling, the terminal establishes or releases LRBs according to the contents of the lrb-ToAddModList and lrb-ToReleaseList fields; the process is as described above.
  • The methods and/or steps implemented by the first node can also be implemented by components (such as chips or circuits) usable in the first node; the methods and/or steps implemented by the second node can also be implemented by components (such as chips or circuits) usable in the second node.
  • embodiments of the present application also provide a communication device, which is used to implement the above various methods.
  • The communication device may be the first node in the above method embodiments, or a component applicable to the first node; or the communication device may be the second node in the above method embodiments, or a component applicable to the second node. It can be understood that, in order to implement the above functions, the communication device includes corresponding hardware structures and/or software modules for performing each function.
  • Embodiments of the present application can divide the communication device into functional modules according to the above method embodiments.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or software function modules. It should be noted that the division of modules in the embodiment of the present application is schematic and is only a logical function division. In actual implementation, there may be other division methods.
  • this application also provides the following communication device:
  • Figure 17 is a schematic diagram of the structure of a communication device provided by an embodiment of the present application; the communication device 1700 includes a transceiver unit 1701 and a processing unit 1702.
  • In one implementation, the transceiver unit 1701 is used to implement the operations of the first node in steps S1101, S1103 and S1105 in the embodiment shown in Figure 11; and the processing unit 1702 is used to implement step S1104 in the embodiment shown in Figure 11.
  • In another implementation, the transceiver unit 1701 is used to implement the operations of the second node in steps S1101, S1103 and S1105 in the embodiment shown in Figure 11; and the processing unit 1702 is used to implement step S1102 in the embodiment shown in Figure 11.
  • In another implementation, the transceiver unit 1701 is used to implement the operations of the first node in steps S1401, S1403 and S1405 in the embodiment shown in Figure 14; and the processing unit 1702 is used to implement step S1404 in the embodiment shown in Figure 14.
  • In another implementation, the transceiver unit 1701 is used to implement the operations of the second node in steps S1401 and S1403 in the embodiment shown in Figure 14; and the processing unit 1702 is used to implement step S1402 in the embodiment shown in Figure 14.
  • In another implementation, the transceiver unit 1701 is used to implement the operation of the first node in step S1502 in the embodiment shown in Figure 15; and the processing unit 1702 is used to implement step S1501 in the embodiment shown in Figure 15.
  • In another implementation, the transceiver unit 1701 is used to implement the operation of the second node in step S1502 in the embodiment shown in Figure 15; and the processing unit 1702 is used to implement step S1503 in the embodiment shown in Figure 15.
  • the first node in the above embodiment may be a terminal.
  • Figure 18 shows a simplified structural diagram of a terminal.
  • a mobile phone is used as an example of the terminal.
  • the terminal includes a processor, memory, radio frequency circuit, antenna and input and output devices.
  • the processor is mainly used to process communication protocols and communication data, control the terminal, execute software programs, process data of software programs, etc.
  • Memory is mainly used to store software programs and data.
  • Radio frequency circuits are mainly used for conversion of baseband signals and radio frequency signals and processing of radio frequency signals.
  • Antennas are mainly used to send and receive radio frequency signals in the form of electromagnetic waves.
  • Input and output devices such as touch screens, display screens, keyboards, etc., are mainly used to receive data input by users and output data to users. It should be noted that some types of terminals may not have input and output devices.
  • When data needs to be sent, the processor performs baseband processing on the data to be sent and then outputs the baseband signal to the radio frequency circuit.
  • the radio frequency circuit performs radio frequency processing on the baseband signal and then sends the radio frequency signal out in the form of electromagnetic waves through the antenna.
  • the radio frequency circuit receives the radio frequency signal through the antenna, converts the radio frequency signal into a baseband signal, and outputs the baseband signal to the processor.
  • the processor converts the baseband signal into data and processes the data.
  • Only one memory and processor are shown in Figure 18. In an actual end product, there may be one or more processors and one or more memories. Memory can also be called storage media or storage devices.
  • the memory may be provided independently of the processor, or may be integrated with the processor, which is not limited in the embodiment of the present application.
  • the antenna and the radio frequency circuit with the transceiver function can be regarded as the receiving unit and the transmitting unit of the terminal (which can also be collectively referred to as the transceiver unit), and the processor with the processing function can be regarded as the processing unit of the terminal.
  • the terminal includes a transceiver unit 1801 and a processing unit 1802.
  • The transceiver unit 1801 may also be called a transceiver, a transceiver device, a transceiver circuit, etc.
  • the processing unit 1802 may also be called a processor, a processing board, a processing module, a processing device, etc.
  • the transceiver unit 1801 is used to implement the functions of the transceiver unit 1701 in the embodiment shown in Figure 17; the processing unit 1802 is used to implement the functions of the processing unit 1702 in the embodiment shown in Figure 17.
  • the second node in the above embodiment may be a network device.
  • Figure 19 shows a simplified structural diagram of a network device.
  • The network device includes a radio frequency signal transmission/reception and conversion part and a part 192; the radio frequency signal transmission/reception and conversion part includes a transceiver unit 191.
  • The radio frequency signal transmission/reception and conversion part is mainly used for transmitting and receiving radio frequency signals and for conversion between radio frequency signals and baseband signals; the part 192 is mainly used for baseband processing and control of the network device.
  • The transceiver unit 191 may also be called a transceiver, a transceiver device, a transceiver circuit, etc.
  • Part 192 is usually the control center of the network device, which can generally be called a processing unit, and is used to control the network device to perform the steps performed on the second node in the above-mentioned Figures 11, 14, and 15.
  • the transceiver unit 191 can be used to implement the functions of the transceiver unit 1701 in the embodiment shown in Figure 17, and part 192 is used to implement the functions of the processing unit 1702 in the embodiment shown in Figure 17.
  • Part 192 may include one or more single boards, and each single board may include one or more processors and one or more memories.
  • The processor is used to read and execute programs in the memory to implement baseband processing functions and control of the network device. If there are multiple boards, the boards can be interconnected to increase processing capability.
  • Multiple boards may share one or more processors, or multiple boards may share one or more memories, or multiple boards may share one or more processors and memories at the same time.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • Computer programs or instructions are stored in the computer-readable storage medium. When the computer programs or instructions are executed, the methods in the above embodiments are implemented.
  • Embodiments of the present application also provide a computer program product containing instructions. When the instructions are run on a computer, they cause the computer to execute the method in the above embodiments.
  • An embodiment of the present application also provides a communication system, including the above communication device.
  • the above units or one or more of the units can be implemented by software, hardware, or a combination of both.
  • the software exists in the form of computer program instructions and is stored in the memory.
  • the processor can be used to execute the program instructions and implement the above method flow.
  • the processor can be built into a system on chip (SoC) or ASIC, or it can be an independent semiconductor chip.
  • The processor can further include necessary hardware accelerators, such as a field programmable gate array (FPGA), a programmable logic device (PLD), or a logic circuit that implements dedicated logic operations.
  • The hardware can be any one or any combination of a CPU, a microprocessor, a digital signal processing (DSP) chip, a microcontroller unit (MCU), an artificial intelligence processor, an ASIC, an SoC, an FPGA, a PLD, a dedicated digital circuit, a hardware accelerator, or a non-integrated discrete device, which can run the necessary software or perform the above method flow without relying on software.
  • embodiments of the present application also provide a chip system, including: at least one processor and an interface.
  • the at least one processor is coupled to a memory through the interface.
  • When the at least one processor runs the computer program or instructions in the memory, the chip system is caused to execute the method in any of the above method embodiments.
  • the chip system may be composed of chips, or may include chips and other discrete devices, which is not specifically limited in the embodiments of the present application.
  • a, b, or c can mean: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can be single or multiple.
  • Words such as "first" and "second" are used to distinguish identical or similar items with basically the same functions and effects. Those skilled in the art can understand that words such as "first" and "second" do not limit the quantity or the execution order.
  • words such as “exemplary” or “for example” are used to represent examples, illustrations or explanations. Any embodiment or design described as “exemplary” or “such as” in the embodiments of the present application is not to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as “exemplary” or “such as” is intended to present related concepts in a concrete manner that is easier to understand.
  • The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented using a software program, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are generated in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk (SSD)), etc.


Abstract

This application discloses a communication method, apparatus, storage medium and program product. The method includes: a first node sends first indication information and/or second indication information to a second node, where the first indication information is used to indicate data set status information and the second indication information is used to indicate model status information; the second node determines a communication strategy and communication resources according to the first indication information and/or the second indication information; and the second node sends third indication information to the first node, where the third indication information is used to indicate the communication strategy and the communication resources, which are determined according to the first indication information and/or the second indication information. With the solution of this application, the data set status information and/or model status information of the first node are obtained, the communication strategy and communication resources are determined accordingly and indicated to the first node, which improves the reliability of communication.

Description

通信方法、装置、存储介质及程序产品 技术领域
本申请涉及通信技术领域,尤其涉及一种通信方法、装置、存储介质及程序产品。
背景技术
在分布式节点和中心节点之间传输的信息统称为人工智能业务(AI traffic)数据。当在无线通信系统的物理(physical,PHY)层或媒体接入控制(medium access control,MAC)层等L1/L2协议层中进行分布式学习时,上述人工智能业务数据将在相应的L1/L2协议层中产生,它们与应用层下发的业务数据流不同。
对于应用层/网络层等上层的数据流,新无线(new radio,NR)标准的传输方式描述如下:构建服务质量(quality of service,QoS)流,并逐级映射到协议栈中服务数据适配协议(service data adaption protocol,SDAP)/分组数据汇聚协议(packet data convergence protocol,PDCP)/无线链路控制(radio link control,RLC)/MAC/PHY等各层,最终通过射频空口发送。然而,其只为上层数据提供了完整的协议栈处理过程,无法用于在PHY/MAC等较低协议层产生的数据流。
有鉴于此,如何进行PHY/MAC层产生的人工智能业务数据的传输,是亟待解决的问题。
发明内容
本申请提供一种通信方法、装置、存储介质及程序产品,以实现PHY/MAC层产生的人工智能业务数据的传输。
第一方面,提供了一种通信方法,所述方法包括:发送第一指示信息和/或第二指示信息,所述第一指示信息用于指示数据集状态信息,所述第二指示信息用于指示模型状态信息;以及接收第三指示信息,所述第三指示信息用于指示通信策略和通信资源,所述通信策略和所述通信资源是根据所述第一指示信息和/或所述第二指示信息确定的。所述方法可由第一装置执行,该第一装置可以是通信设备,也可以是通信设备中的组件,如,芯片(系统)。该第一装置可以是网络系统中的第一节点。
在该方面中,第一节点通过上报数据集状态信息和/或模型状态信息,使得第二节点根据第一节点上报的数据集状态信息和/或模型状态信息确定通信策略和通信资源,并指示第一节点通信策略和通信资源,从而可以实现PHY/MAC层产生的人工智能业务数据的传输。
在一种可能的实现中,所述数据集状态信息包括以下至少一种信息:数据种类,采样频率,采样位置信息,样本时间戳,样本数量,样本精度。
在该实现中,数据集状态信息是指第一节点是以什么方式获得的模型的输入数据。该数据集状态信息影响人工智能数据包的通信策略和通信资源的确定。
在另一种可能的实现中,所述模型状态信息包括以下至少一种信息:模型类型,模型标识,子模型的排序,模型版本号,模型描述,模型更新时间戳,模型更新位置信息,模型精度,模型性能。
在该实现中,模型状态信息是指对第一节点采用的模型的描述。该模型状态信息影响人工智能数据包的通信策略和通信资源的确定。
在又一种可能的实现中,所述通信策略是根据所述第一指示信息、所述第二指示信息和 信道状态信息中的至少两项确定的。
在该实现中,第一节点和第二节点之间的信道状态也对第一节点采用的通信策略有影响,因此,根据所述第一指示信息、所述第二指示信息和信道状态信息中的至少两项确定通信策略,提高了通信策略的准确性。
在又一种可能的实现中,所述通信策略包括以下至少一种策略:量化策略,源编码策略,调制编码策略,信源信道联合编码策略,实数或复数处理策略。
在又一种可能的实现中,所述方法还包括:上报人工智能业务数据包的大小;其中,所述通信资源是根据所述人工智能业务数据包的大小、所述数据集状态信息、所述模型状态信息中的至少两项确定的。
在该实现中,通信资源与人工智能业务数据包的大小、所述数据集状态信息、所述模型状态信息中的至少两项关联,根据所述人工智能业务数据包的大小、所述数据集状态信息、所述模型状态信息中的至少两项确定通信资源,提高了确定的通信资源的准确性。
在又一种可能的实现中,所述方法还包括:根据所述第三指示信息,生成人工智能业务数据包;以及根据所述第三指示信息,发送所述人工智能业务数据包。
在该实现中,第一节点生成人工智能业务数据包,可以向第二节点或第三节点发送该人工智能业务数据包。
在又一种可能的实现中,所述方法还包括:根据所述第三指示信息,接收所述人工智能业务数据包。
在该实现中,第二节点确定第三指示信息后,也可以根据所述第三指示信息,向第一节点发送所述人工智能业务数据包。或者,第二节点指示第三节点向第一节点发送所述人工智能业务数据包。
在又一种可能的实现中,所述人工智能业务数据包通过物理学习信道通信。
在该实现中,该物理学习信道是为PHY/MAC层的人工智能业务数据定义的新的物理信道。
在又一种可能的实现中,通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源编码;或通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源信道联合编码;或通过所述物理学习信道的通信包括以下处理操作:实数或复数处理。
在该实现中,人工智能业务数据一般是实数或复数形式的,不能够直接进行比特级的信道编码等处理。因此,物理学习信道的数据处理流程中需要添加量化、信源编码、信源信道联合编码、实数或复数处理等操作。
在又一种可能的实现中,所述人工智能业务数据包包括包头和负载,所述负载包括以下至少一种数据:机器学习模型的参数或梯度,推理结果,推理至分割层的中间结果,反向传递至分割层的梯度,对分数。
在又一种可能的实现中,所述包头包括以下至少一项:内容指示信息,内容描述信息,压缩方式,组包方式。
在该实现中,通过在包头中包括上述至少一项信息,接收端根据包头中的信息可以准确地解包。
第二方面,提供了一种通信方法,所述方法包括:接收第一指示信息和/或第二指示信息,所述第一指示信息用于指示数据集状态信息,所述第二指示信息用于指示模型状态信息;根据所述第一指示信息和/或所述第二指示信息,确定通信策略和通信资源;以及发送第三指示信息,所述第三指示信息用于指示所述通信策略和所述通信资源。所述方法可由第二装置执 行,该第二装置可以是通信设备,也可以是通信设备中的组件,如,芯片(系统)。该第二装置可以是网络系统中的第二节点。
在该方面中,第二节点通过获取第一节点上报的数据集状态信息和/或模型状态信息,以此确定通信策略和通信资源,并指示第一节点通信策略和通信资源,从而可以实现PHY/MAC层产生的人工智能业务数据的传输。
在一种可能的实现中,所述数据集状态信息包括以下至少一种信息:数据种类,采样频率,采样位置信息,样本时间戳,样本数量,样本精度。
在另一种可能的实现中,所述模型状态信息包括以下至少一种信息:模型类型,模型标识,子模型的排序,模型版本号,模型描述,模型更新时间戳,模型更新位置信息,模型精度,模型性能。
在又一种可能的实现中,所述通信策略是根据所述第一指示信息、所述第二指示信息和信道状态信息中的至少两项确定的。
在又一种可能的实现中,所述通信策略包括以下至少一种策略:量化策略,源编码策略,调制编码策略,信源信道联合编码策略,实数或复数处理策略。
在又一种可能的实现中,所述根据所述第一指示信息和/或所述第二指示信息,确定通信资源,包括:获取人工智能业务数据包的大小;以及根据所述人工智能业务数据包的大小、所述第一指示信息、所述第二指示信息中的至少两项,确定所述通信资源。
在又一种可能的实现中,所述获取人工智能业务数据包的大小,包括:根据任务类型、业务类型中的至少一项与数据包大小的对应关系,确定所述人工智能业务数据包的大小;或接收上报的所述人工智能业务数据包的大小。
在又一种可能的实现中,所述方法还包括:接收根据所述第三指示信息发送的人工智能业务数据包。
在又一种可能的实现中,所述方法还包括:根据所述第三指示信息,生成人工智能业务数据包;以及根据所述第三指示信息,发送所述人工智能业务数据包。
在又一种可能的实现中,所述方法还包括:根据所述第三指示信息,指示第三节点发送所述人工智能业务数据包。
在又一种可能的实现中,所述人工智能业务数据包通过物理学习信道通信。
在又一种可能的实现中,通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源编码;或通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源信道联合编码;或通过所述物理学习信道的通信包括以下处理操作:实数或复数处理。
在又一种可能的实现中,所述人工智能业务数据包包括包头和负载,所述负载包括以下至少一种数据:机器学习模型的参数或梯度,推理结果,推理至分割层的中间结果,反向传递至分割层的梯度,对分数。
在又一种可能的实现中,所述包头包括以下至少一项:内容指示信息,内容描述信息,压缩方式,组包方式。
第三方面,提供了一种通信方法,所述方法包括:获取物理PHY层和/或媒体接入控制MAC层产生的人工智能业务数据;以及通过第一无线承载发送所述人工智能业务数据。所述方法可由第一装置执行,该第一装置可以是通信设备,也可以是通信设备中的组件,如,芯片(系统)。该第一装置可以是网络系统中的第一节点。
在该方面中，定义了一种新的协议栈处理流程，对于在PHY/MAC层产生的人工智能业务数据，输入到第一无线承载后，可以通过第一无线承载向第二节点发送人工智能业务数据，实现了在PHY/MAC层产生的人工智能业务数据的通信。
在一种可能的实现中,所述通过第一无线承载发送所述人工智能业务数据,包括:将所述人工智能业务数据进行以下至少一个协议层的处理:分组数据汇聚协议PDCP层,无线链路控制RLC层,MAC层,PHY层;以及发送经过处理后的人工智能业务数据。
在另一种可能的实现中,所述第一无线承载为学习无线承载或第一数据无线承载。
在又一种可能的实现中,所述方法应用于终端,所述方法还包括:接收第一信令,所述第一信令用于指示配置所述第一无线承载。
在又一种可能的实现中,所述方法应用于网络设备,所述方法还包括:发送第一信令,所述第一信令用于指示配置所述第一无线承载。
在该实现中,第一无线承载为新定义的学习无线承载或第一数据无线承载,可以通过第一信令配置该第一无线承载。
第四方面,提供了一种通信方法,所述方法包括:通过第一无线承载接收人工智能业务数据;以及将所述人工智能业务数据输出至物理PHY层和/或媒体接入控制MAC层。所述方法可由第二装置执行,该第二装置可以是通信设备,也可以是通信设备中的组件,如,芯片(系统)。该第二装置可以是网络系统中的第二节点。
在该方面中,定义了一种新的协议栈处理流程,通过第一无线承载接收到人工智能业务数据后,将人工智能业务数据输出至PHY/MAC层,实现了在PHY/MAC层产生的人工智能业务数据的通信。
在一种可能的实现中,所述将所述人工智能业务数据输出至物理PHY层和/或媒体接入控制MAC层,包括:将所述人工智能业务数据进行以下至少一个协议层的处理:PHY层,MAC层,无线链路控制RLC层,分组数据汇聚协议PDCP层;以及将经过处理后的人工智能业务数据输出至所述PHY层和/或MAC层。
在另一种可能的实现中,所述第一无线承载为学习无线承载或第一数据无线承载。
在又一种可能的实现中,所述方法应用于网络设备,所述方法还包括:发送第一信令,所述第一信令用于指示配置所述第一无线承载。
在又一种可能的实现中,所述方法应用于终端,所述方法还包括:接收第一信令,所述第一信令用于指示配置所述第一无线承载。
在该实现中,第一无线承载为新定义的学习无线承载或第一数据无线承载,可以通过第一信令配置该第一无线承载。
第五方面,提供了一种通信装置,可以实现上述第一方面中的通信方法。例如所述通信装置可以是芯片或者第一节点。可以通过软件、硬件、或者通过硬件执行相应的软件实现上述方法。
所述通信装置包括收发单元和处理单元,其中,所述收发单元,用于发送第一指示信息和/或第二指示信息,所述第一指示信息用于指示数据集状态信息,所述第二指示信息用于指示模型状态信息;以及所述收发单元,还用于接收第三指示信息,所述第三指示信息用于指示通信策略和通信资源,所述通信策略和所述通信资源是根据所述第一指示信息和/或所述第二指示信息确定的。
在一种可能的实现中,所述数据集状态信息包括以下至少一种信息:数据种类,采样频率,采样位置信息,样本时间戳,样本数量,样本精度。
在另一种可能的实现中，所述模型状态信息包括以下至少一种信息：模型类型，模型标识，子模型的排序，模型版本号，模型描述，模型更新时间戳，模型更新位置信息，模型精度，模型性能。
在又一种可能的实现中,所述通信策略是根据所述第一指示信息、所述第二指示信息和信道状态信息中的至少两项确定的。
在又一种可能的实现中,所述通信策略包括以下至少一种策略:量化策略,源编码策略,调制编码策略,信源信道联合编码策略,实数或复数处理策略。
在又一种可能的实现中,所述收发单元,还用于上报人工智能业务数据包的大小;其中,所述通信资源是根据所述人工智能业务数据包的大小、所述数据集状态信息、所述模型状态信息中的至少两项确定的。
在又一种可能的实现中,所述处理单元,用于根据所述第三指示信息,生成人工智能业务数据包;以及所述收发单元,还用于根据所述第三指示信息,发送所述人工智能业务数据包。
在又一种可能的实现中,所述收发单元,还用于根据所述第三指示信息,接收所述人工智能业务数据包。
在又一种可能的实现中,所述人工智能业务数据包通过物理学习信道通信。
在又一种可能的实现中,通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源编码;或通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源信道联合编码;或通过所述物理学习信道的通信包括以下处理操作:实数或复数处理。
在又一种可能的实现中,所述人工智能业务数据包包括包头和负载,所述负载包括以下至少一种数据:机器学习模型的参数或梯度,推理结果,推理至分割层的中间结果,反向传递至分割层的梯度,对分数。
在又一种可能的实现中,所述包头包括以下至少一项:内容指示信息,内容描述信息,压缩方式,组包方式。
第六方面,提供了一种通信装置,可以实现上述第二方面中的通信方法。例如所述通信装置可以是芯片或者第二节点。可以通过软件、硬件、或者通过硬件执行相应的软件实现上述方法。
所述通信装置包括收发单元和处理单元,其中,所述收发单元,用于接收第一指示信息和/或第二指示信息,所述第一指示信息用于指示数据集状态信息,所述第二指示信息用于指示模型状态信息;所述处理单元,用于根据所述第一指示信息和/或所述第二指示信息,确定通信策略和通信资源;以及所述收发单元,还用于发送第三指示信息,所述第三指示信息用于指示所述通信策略和所述通信资源。
在一种可能的实现中,所述数据集状态信息包括以下至少一种信息:数据种类,采样频率,采样位置信息,样本时间戳,样本数量,样本精度。
在另一种可能的实现中,所述模型状态信息包括以下至少一种信息:模型类型,模型标识,子模型的排序,模型版本号,模型描述,模型更新时间戳,模型更新位置信息,模型精度,模型性能。
在又一种可能的实现中,所述通信策略是根据所述第一指示信息、所述第二指示信息和信道状态信息中的至少两项确定的。
在又一种可能的实现中,所述通信策略包括以下至少一种策略:量化策略,源编码策略,调制编码策略,信源信道联合编码策略,实数或复数处理策略。
在又一种可能的实现中，所述处理单元，还用于获取人工智能业务数据包的大小；以及所述处理单元，还用于根据所述人工智能业务数据包的大小、所述第一指示信息、所述第二指示信息中的至少两项，确定所述通信资源。
在又一种可能的实现中,所述处理单元,还用于根据任务类型、业务类型中的至少一项与数据包大小的对应关系,确定所述人工智能业务数据包的大小;或所述收发单元,还用于接收上报的所述人工智能业务数据包的大小。
在又一种可能的实现中,所述收发单元,还用于接收根据所述第三指示信息发送的人工智能业务数据包。
在又一种可能的实现中,所述处理单元,还用于根据所述第三指示信息,生成人工智能业务数据包;以及所述收发单元,还用于根据所述第三指示信息,发送所述人工智能业务数据包。
在又一种可能的实现中,所述处理单元,还用于根据所述第三指示信息,指示第三节点发送所述人工智能业务数据包。
在又一种可能的实现中,所述人工智能业务数据包通过物理学习信道通信。
在又一种可能的实现中,通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源编码;或通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源信道联合编码;或通过所述物理学习信道的通信包括以下处理操作:实数或复数处理。
在又一种可能的实现中,所述人工智能业务数据包包括包头和负载,所述负载包括以下至少一种数据:机器学习模型的参数或梯度,推理结果,推理至分割层的中间结果,反向传递至分割层的梯度,对分数。
在又一种可能的实现中,所述包头包括以下至少一项:内容指示信息,内容描述信息,压缩方式,组包方式。
第七方面,提供了一种通信装置,可以实现上述第三方面中的通信方法。例如所述通信装置可以是芯片或者第一节点。可以通过软件、硬件、或者通过硬件执行相应的软件实现上述方法。
所述通信装置包括收发单元和处理单元,其中,所述处理单元,用于获取物理PHY层和/或媒体接入控制MAC层产生的人工智能业务数据;以及所述收发单元,用于通过第一无线承载发送所述人工智能业务数据。
在一种可能的实现中,所述处理单元,用于将所述人工智能业务数据进行以下至少一个协议层的处理:分组数据汇聚协议PDCP层,无线链路控制RLC层,MAC层,PHY层;以及所述收发单元,用于发送经过处理后的人工智能业务数据。
在另一种可能的实现中,所述第一无线承载为学习无线承载或第一数据无线承载。
在又一种可能的实现中,所述通信装置为终端,所述收发单元,还用于接收第一信令,所述第一信令用于指示配置所述第一无线承载。
在又一种可能的实现中,所述通信装置为网络设备,所述收发单元,还用于发送第一信令,所述第一信令用于指示配置所述第一无线承载。
第八方面,提供了一种通信装置,可以实现上述第四方面中的通信方法。例如所述通信装置可以是芯片或者第二节点。可以通过软件、硬件、或者通过硬件执行相应的软件实现上述方法。
所述通信装置包括收发单元和处理单元,其中,所述收发单元,用于通过第一无线承载接收人工智能业务数据;以及所述处理单元,用于将所述人工智能业务数据输出至物理PHY层和/或媒体接入控制MAC层。
在一种可能的实现中，所述处理单元，用于将所述人工智能业务数据进行以下至少一个协议层的处理：PHY层，MAC层，无线链路控制RLC层，分组数据汇聚协议PDCP层；以及所述处理单元，还用于将经过处理后的人工智能业务数据传输至所述PHY层和/或MAC层。
在另一种可能的实现中,所述第一无线承载为学习无线承载或第一数据无线承载。
在又一种可能的实现中,所述通信装置为网络设备,所述收发单元,还用于发送第一信令,所述第一信令用于指示配置所述第一无线承载。
在又一种可能的实现中,所述通信装置为终端,所述收发单元,还用于接收第一信令,所述第一信令用于指示配置所述第一无线承载。
示例性地,上述处理单元可以为处理器,上述收发单元可以为收发器或通信接口,也可以包括接收单元和发送单元,所述接收单元可以是接收器或接收机,或者是输入接口,所述发送单元可以是发射器或发射机,或者是输出接口;可选的,还可以包括存储单元,所述存储单元可以为存储器。可以理解的,如果通信装置为设置在设备中的芯片,则所述收发单元可以是该芯片的输入/输出接口,例如输入/输出电路、管脚等。
第九方面,提供了一种通信装置,包括处理器,用于执行上述第一方面或第一方面的任一种实现所述的方法。
第十方面,提供了一种通信装置,包括处理器,用于执行上述第二方面或第二方面的任一种实现所述的方法。
第十一方面,提供了一种通信装置,包括处理器,用于执行上述第三方面或第三方面的任一种实现所述的方法。
第十二方面,提供了一种通信装置,包括处理器,用于执行上述第四方面或第四方面的任一种实现所述的方法。
第十三方面,提供了一种通信装置,包括处理器、存储器以及存储在存储器上并可在处理器上运行的指令,当所述指令被运行时,使得所述通信装置执行如第一方面或第一方面的任一种实现所述的方法。
第十四方面,提供了一种通信装置,包括处理器、存储器以及存储在存储器上并可在处理器上运行的指令,当所述指令被运行时,使得所述通信装置执行如第二方面或第二方面的任一种实现所述的方法。
第十五方面,提供了一种通信装置,包括处理器、存储器以及存储在存储器上并可在处理器上运行的指令,当所述指令被运行时,使得所述通信装置执行如第三方面或第三方面的任一种实现所述的方法。
第十六方面,提供了一种通信装置,包括处理器、存储器以及存储在存储器上并可在处理器上运行的指令,当所述指令被运行时,使得所述通信装置执行如第四方面或第四方面的任一种实现所述的方法。
第十七方面,提供了一种通信系统,该通信系统包括第五方面的通信装置和第六方面的通信装置。
第十八方面,提供了一种通信系统,该通信系统包括第七方面的通信装置和第八方面的通信装置。
第十九方面,提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序或指令,当计算机执行所述计算机程序或指令时,实现上述各方面所述的方法。
第二十方面,提供了一种包含指令的计算机程序产品,当该指令在通信装置上运行时,使得通信装置执行上述各方面所述的方法。
附图说明
图1给出了本申请涉及的一种通信系统的示意图;
图2为本申请实施例提供的一种具体场景的通信系统的示意图;
图3给出了本申请涉及的另一种通信系统的示意图;
图4为本申请实施例提供的另一种具体场景的通信系统的示意图;
图5为本申请实施例提供的一种联邦学习的架构示意图;
图6为本申请实施例提供的一种分割学习的架构示意图;
图7为本申请实施例提供的一种联邦蒸馏的架构示意图;
图8为现有技术提供的一种应用层数据的传输示意图;
图9为本申请实施例提供的一种UCI/DCI的通信示意图;
图10为已有物理信道的数据处理流程示意图;
图11为本申请实施例提供的一种通信方法的流程示意图;
图12为本申请实施例提供的一种人工智能业务数据的通信示意图;
图13a为本申请实施例提供的一种物理学习信道的数据处理流程示意图;
图13b为本申请实施例提供的另一种物理学习信道的数据处理流程示意图;
图13c为本申请实施例提供的又一种物理学习信道的数据处理流程示意图;
图14为本申请实施例提供的另一种通信方法的流程示意图;
图15为本申请实施例提供的又一种通信方法的流程示意图;
图16为本申请实施例提供的一种人工智能业务数据的协议栈处理流程示意图;
图17为本申请实施例提供的一种通信装置的结构示意图;
图18为本申请实施例提供的一种简化的终端的结构示意图;
图19为本申请实施例提供的一种简化的网络设备的结构示意图。
具体实施方式
本申请可以适用于第五代(5th generation,5G)移动通信系统，第六代(6th generation,6G)移动通信系统，未来演进的移动通信系统，卫星通信系统、短距通信系统或者其它通信系统等，本申请对此不作限制。
图1给出了本申请涉及的一种通信系统的示意图,该通信系统包括一个或多个第一节点101(图中示例了两个第一节点)以及一个或多个第二节点102(图中示例了一个第二节点)。该通信系统可以应用于蜂窝网等场景。示例性地,第一节点101可以是分布式节点,例如终端,或网络设备;第二节点102可以是中心节点,例如网络设备,或终端。
如图2所示,为本申请实施例提供的一种具体场景的通信系统的示意图,其中,第一节点101为终端,以及第二节点102为网络设备。第一节点101向第二节点102发送第一指示信息和/或第二指示信息,该第一指示信息用于指示数据集状态信息,第二指示信息用于指示模型状态信息;第二节点102根据第一指示信息和/或第二指示信息,确定通信策略和通信资源;以及第二节点102向第一节点101发送第三指示信息,该第三指示信息用于指示通信策略和通信资源,通信策略和通信资源是根据第一指示信息和/或第二指示信息确定的。第一节点101可以基于第三指示信息指示的通信策略和通信资源与第二节点102之间进行人工智能业务数据包的通信。
图3给出了本申请涉及的另一种通信系统的示意图,该通信系统包括第一节点201、第二节点202和第三节点203。该通信系统可以应用于车联网(vehicle to everything,V2X),设 备到设备(device-to-device,D2D)通信等场景。示例性地,第一节点201、第三节点203可以是分布式节点,例如终端;第二节点202可以是中心节点,例如网络设备。第一节点201向第二节点202发送第一指示信息和/或第二指示信息,该第一指示信息用于指示数据集状态信息,第二指示信息用于指示模型状态信息;第二节点202根据第一指示信息和/或第二指示信息,确定通信策略和通信资源;以及第二节点202向第一节点201发送第三指示信息,该第三指示信息用于指示通信策略和通信资源,通信策略和通信资源是根据第一指示信息和/或第二指示信息确定的。第一节点201可以基于第三指示信息指示的通信策略和通信资源与第三节点203之间进行人工智能业务数据包的通信。
在图2所示的通信系统中,其中,终端是一种具有无线收发功能的设备,可以部署在陆地上(包括室内或室外),可以手持、穿戴或车载;也可以部署在水面上,如轮船上等;还可以部署在空中,如飞机、气球和卫星上等。终端可以是手机(mobile phone)、平板电脑(pad)、带无线收发功能的电脑、可穿戴设备、无人机、直升机、飞机、轮船、机器人、机械臂、智能家居设备、虚拟现实(virtual reality,VR)终端、增强现实(augmented reality,AR)终端、工业控制(industrial control)中的无线终端、无人驾驶(self-driving)中的无线终端、整车、车辆中的功能模块、远程医疗(remote medical)中的无线终端、智能电网(smart grid)中的无线终端、运输安全(transportation safety)中的无线终端、智慧城市(smart city)中的无线终端(例如,路灯等)、智慧家庭(smarthome)中的无线终端等等。本申请的实施例对应用场景不做限定。终端有时也可以称为用户设备(user equipment,UE)、接入终端、UE单元、移动站、移动台、远方站、远程终端、移动设备、终端(terminal)、无线通信设备、UE代理或UE装置等。本申请的实施例对终端所采用的具体技术和具体设备形态不做限定。
网络设备可以是任意一种具有无线收发功能的设备,包括但不限于:基站(NodeB)、演进型基站(eNodeB)、5G通信系统中的基站、未来通信系统中的基站或接入网设备、WiFi系统中的接入节点、无线中继节点、无线回传节点等。网络设备还可以是云无线接入网络(cloud radio access network,CRAN)场景下的无线控制器。网络设备还可以是小站,传输节点(transmission reference point,TRP)等。本申请的实施例对网络设备所采用的具体技术和具体设备形态不做限定。
该通信系统还可以包括核心网设备(图中未示出)。其中,核心网设备的功能主要是提供用户连接、对用户进行管理以及对业务完成承载,作为承载网络提供到外部网络的接口。用户连接的建立包括移动性管理(mobility management,MM)、呼叫管理(calling management,CM)、交换/路由、录音通知(结合智能网业务完成到智能网外围设备的连接关系)等功能。用户管理包括用户的描述、服务质量(quality of service,QoS)、用户通信记录(accounting)、虚拟家庭环境(virtual home environment,VHE)(与智能网平台的对话提供虚拟居家环境)、安全性(由鉴权中心提供相应的安全性措施包含了对移动业务的安全性管理和对外部网络访问的安全性处理)。承载连接包括到外部的公共交互电话网(public switched telephone network,PSTN)、外部电路数据网和分组数据网、因特网(internet)和企业内部网(intranets)、以及移动自己的短信息服务(short message service,SMS)服务器等等。核心网设备可以提供的基本业务包括移动办公、电子商务、通信、娱乐性业务、旅行和基于位置的服务、遥感业务(telemetry)-简单消息传递业务(监视控制)等等。
可选地,在本申请实施例中,终端或网络设备包括硬件层、运行在硬件层之上的操作系统层,以及运行在操作系统层上的应用层。该硬件层包括中央处理器(central processing unit,CPU)、内存管理单元(memory management unit,MMU)和内存(也称为主存)等硬件。该 操作系统可以是任意一种或多种通过进程(process)实现业务处理的计算机操作系统,例如,Linux操作系统、Unix操作系统、安卓(Android)操作系统、iOS或windows操作系统等。该应用层包含浏览器、通讯录、文字处理软件、即时通信软件等应用。并且,本申请实施例并未对本申请实施例提供的方法的执行主体的具体结构特别限定,能够通过运行记录有本申请实施例的提供的方法的代码的程序,以根据本申请实施例提供的方法进行通信即可,例如,本申请实施例提供的方法的执行主体可以是终端或网络设备,或者,是终端或网络设备中能够调用程序并执行程序的功能模块。
换言之,本申请实施例中的终端或者网络设备的相关功能可以由一个设备实现,也可以由多个设备共同实现,还可以是由一个设备内的一个或多个功能模块实现,本申请实施例对此不作具体限定。可以理解的是,上述功能既可以是硬件设备中的网络元件,也可以是在专用硬件上运行的软件功能,或者是硬件与软件的结合,或者是平台(例如,云平台)上实例化的虚拟化功能。
图2所示的通信系统中网络设备和终端之间的通信还可以用另一种形式来表示,如图4所示,为本申请实施例提供的另一种具体场景的通信系统的示意图,终端40包括处理器401、存储器402和收发器403,收发器403包括发射机4031、接收机4032和天线4033。网络设备41包括处理器411、存储器412和收发器413,收发器413包括发射机4131、接收机4132和天线4133。接收机4032可以用于通过天线4033接收传输控制信息,发射机4031可以用于通过天线4033向网络设备41发送传输反馈信息。发射机4131可以用于通过天线4133向终端40发送传输控制信息,接收机4132可以用于通过天线4133接收终端40发送的传输反馈信息。
其中,处理器401/处理器411可以是一个CPU,微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制本申请方案程序执行的集成电路。
存储器402/存储器412可以是具有存储功能的装置。例如可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器可以是独立存在,通过通信线路与处理器相连接。存储器也可以和处理器集成在一起。
其中,存储器402/存储器412用于存储执行本申请方案的计算机执行指令,并由处理器401/处理器411来控制执行。处理器401/处理器411用于执行存储器402/存储器412中存储的计算机执行指令,从而实现本申请实施例中提供的信息传输方法。
或者,本申请实施例中,也可以是处理器401/处理器411执行本申请下述实施例提供的信息传输方法中的处理相关的功能。
本申请实施例中的计算机执行指令也可以称之为应用程序代码,本申请实施例对此不作具体限定。
需要说明的是,本申请实施例中的术语“系统”和“网络”可被互换使用。
随着大数据时代的到来,每台设备每天都会以各种形式产生巨量的原始数据,这些数据 将以“孤岛”的形式诞生并存在于世界的各个角落。传统的集中式学习要求各个边缘设备将本地数据统一传输到中心端的服务器上,其后再利用收集到的数据进行模型的训练与学习,然而这一架构随着时代的发展逐渐受到如下因素的限制:
(1)边缘设备广泛地分布于世界上各个地区和角落,这些设备将以飞快的速度源源不断地产生和积累巨大量级的原始数据。若中心端需要收集来自全部边缘设备的原始数据,势必会带来巨大的通信损耗和算力需求。
(2)随着现实生活中实际场景的复杂化,越来越多的学习任务要求边缘设备能够做出及时而有效的决策与反馈。传统的集中式学习由于涉及到大量数据的上传势必会导致较大程度的时延,致使其无法满足实际任务场景的实时需求。
(3)考虑到行业竞争、用户隐私安全、行政手续复杂等问题,将数据进行集中整合将面临越来越大的阻力制约。因而系统部署将越来越倾向于在本地存储数据,同时由边缘设备自身完成模型的本地计算。
因此,如何在满足数据隐私、安全和监管要求的前提下,设计一个机器学习框架,让人工智能(artificial intelligence,AI)系统能够更加高效、准确地共同使用各自的数据,成为了当前人工智能发展的一个重要议题。
目前提出了如下解决方案:
联邦学习:
联邦学习(federated learning,FL)这一概念的提出有效地解决了当前人工智能发展所面临的困境,其在充分保障用户数据隐私和安全的前提下,通过促使各个边缘设备(也可以称为“分布式节点”)和中心端服务器(也可以称为“中心节点”)协同合作来高效地完成模型的学习任务。如图5所示,为本申请实施例提供的一种联邦学习的架构示意图,FL架构是当前FL领域最为广泛的训练架构,FedAvg算法是FL的基础算法,其算法流程大致如下:
(1)中心节点初始化待训练模型w^(0)，并将其广播发送给所有分布式节点。
(2)在第t∈[1,T]轮中，分布式节点k∈[1,K]基于局部数据集D_k，对接收到的全局模型w^(t-1)进行E个回合(epoch)的训练以得到本地训练结果w_k^(t)，将其上报给中心节点。
(3)中心节点汇总收集来自全部(或部分)客户端的本地训练结果，假设第t轮上传局部模型的客户端集合为S^(t)，中心端将以对应客户端的样本数n_k为权重进行加权求均得到新的全局模型，具体更新法则为w^(t)=Σ_{k∈S^(t)}(n_k/Σ_{j∈S^(t)}n_j)·w_k^(t)。其后中心端再将最新版本的全局模型w^(t)广播发送给所有客户端设备进行新一轮的训练。
(4)重复步骤(2)和(3)直至模型最终收敛或训练轮数达到上限。
除了上报本地模型w_k^(t)，还可以将训练的本地梯度g_k^(t)进行上报，中心节点将本地梯度求平均，并根据这个平均梯度的方向更新全局模型。
可以看到,在FL框架中,数据集存在于分布式节点处,即分布式节点收集本地的数据集,并进行本地训练,将训练得到的本地结果(模型或梯度)上报给中心节点。中心节点本身没有数据集,只负责将分布式节点的训练结果进行融合处理,得到全局模型,并下发给分布式节点。
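示例性地，下面给出一段用于帮助理解上述FedAvg加权聚合过程的Python示意代码。该代码仅为在简化假设下的草图，其中的函数名、变量名与取值均为说明用途而假设，并非对本申请方案的限定：

```python
import numpy as np

def fedavg_aggregate(local_models, sample_counts):
    """按各客户端样本数n_k加权平均本地模型，得到新的全局模型。

    local_models: 各分布式节点上报的本地训练结果列表（每个元素为一维numpy数组）
    sample_counts: 与之一一对应的本地样本数列表
    """
    total = float(sum(sample_counts))
    global_model = np.zeros_like(local_models[0])
    for w_k, n_k in zip(local_models, sample_counts):
        global_model += (n_k / total) * w_k
    return global_model

# 使用示例：3个分布式节点，模型参数维度为4
local_results = [np.random.randn(4) for _ in range(3)]
counts = [100, 50, 150]
w_global = fedavg_aggregate(local_results, counts)
```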
分割学习:
如图6所示，为本申请实施例提供的一种分割学习(split learning)的架构示意图，分割学习中，完整的神经网络模型被分割为两部分(即两个子网络)，一部分部署在分布式节点上，另一部分部署在中心节点上。完整的神经网络被分割的地方被称为"分割层"，前向推理时，分布式节点将本地数据输入本地的子网络，推理到分割层，将分割层的结果Fk通过通信链路发送给中心节点，中心节点将收到的Fk输入自身部署的另一个子网络，并继续进行前向推理，得到最终的推理结果。模型训练的梯度反向传递中，梯度通过中心节点的子网络反向传递到分割层，得到反向传递结果Gk，然后中心节点将Gk发送给分布式节点，继续在分布式节点的子网络上进行梯度反向传递。
可以看到,分割学习的前向推理和梯度反向传递过程中,只涉及一个分布式节点和一个中心节点。训练好的分布式节点上的子网络可以保存在分布式节点本地或特定的模型存储服务器上。当有新的分布式节点加入学习系统时,它可以先下载已训练好的分布式节点子网络,再使用本地数据进行进一步训练。
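示例性地，下面给出一段用于帮助理解分割学习中前向推理与梯度反向传递的Python示意代码。该代码以最简单的线性层加激活为例，忽略了真实训练中的诸多细节，其中的变量名与维度均为说明用途而假设：

```python
import numpy as np

# 分布式节点侧子网络：本地数据x前向推理到分割层，得到分割层结果F_k
def device_forward(x, w_device):
    return np.tanh(x @ w_device)          # F_k，经通信链路发送给中心节点

# 中心节点侧子网络：继续前向推理得到最终结果；训练时再将梯度反传至分割层
def center_forward_backward(f_k, w_center, label):
    y = f_k @ w_center                    # 最终推理结果
    err = y - label
    g_k = err @ w_center.T                # 反向传递至分割层的梯度G_k，回传给分布式节点
    return y, g_k

x = np.random.randn(1, 8)
w_d, w_c = np.random.randn(8, 4), np.random.randn(4, 1)
f_k = device_forward(x, w_d)
y, g_k = center_forward_backward(f_k, w_c, label=np.ones((1, 1)))
```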
联邦蒸馏:
如图7所示，为本申请实施例提供的一种联邦蒸馏的架构示意图，联邦蒸馏也是一种可行的分布式学习方法。与联邦学习类似，多个分布式节点与中心节点之间进行信息交互，共同完成机器学习模型的训练。与联邦学习不同，联邦蒸馏中分布式节点和中心节点之间交互的信息为对分数(logits)，而不是联邦学习中的模型参数或梯度。其工作流程和联邦学习类似，在此不再赘述。
通过上面对联邦学习、分割学习、联邦蒸馏的介绍可以看到,不同的学习架构需要参与节点之间传输不同的信息,例如:联邦学习中,中心节点和分布式节点之间传输的是机器学习模型的参数或梯度;分割学习中,分布式节点向中心节点传输的是推理到分割层的推理中间结果,中心节点向分布式节点传输的是梯度反向传递至分割层的梯度;联邦蒸馏中,中心节点和分布式节点之间传输的是对分数(logits)。本申请中,把上述在节点间传输的信息统称为AI traffic。当在无线通信系统的物理层或MAC层等L1/L2协议层中使用分布式学习(即模型的任务是物理层或MAC层任务,模型的输入和/或输出是来自或发给物理层或MAC层)时,上述AI traffic将在相应的L1/L2协议层中产生,它们与应用层下发的业务数据流不同,需要设计全新的通信方法。
然而,现有的通信方案仅提供了应用层/网络层等上层数据的协议栈处理过程,无法用于在物理层/MAC层等较低协议层产生的数据流,如上述AI traffic。如图8所示,为现有技术提供的一种应用层数据的传输示意图,对于应用层/网络层等上层的数据流,新无线(new radio,NR)标准的传输方式描述如下:构建QoS流,并逐级映射到协议栈中SDAP/PDCP/RLC/MAC/PHY等各层,最终通过射频空口发送。各协议层的功能如下:
SDAP:维护QoS流到数据无线承载(data radio bearer,DRB)的映射;
PDCP:数据完整性、加密;
RLC:缓存、自动重传请求(automatic repeat-request,ARQ);
MAC:无线资源管理(radio resource management,RRM)、混合自动重传请求(hybrid automatic repeat-request,HARQ)、多路复用(multiplexing);
PHY:构建传输通道,进行信道编码、调制、资源映射等处理。
另外,如图9所示,为上行控制信息(uplink control information,UCI)和下行控制信息(downlink control information,DCI)的通信示意图,UCI/DCI是已有的在物理层/MAC层产生的待传输数据。
如图10所示,为已有物理信道的数据处理流程示意图,上述UCI/DCI由专用的物理信道(控制信道)经过如图10所示的特殊的处理后进行传输。现有蜂窝网络标准(长期演进(long term evolution,LTE)、NR)中,UCI/DCI都有固定的格式,其包含多个字段,每个字段中的比特的含义由标准给定。以NR中format0_0的DCI为例,其包括:标识符(identifier),频域位置(frequency domain,F),时域位置(time domain,T),hopping,MCS,新数据(newData),冗余版本(redundancy version,RV),HARQ,TPC,UL/SUL等字段;而UCI可能包括信道状态信息(channel state information,CSI)(信道质量指示(channel quality indicator,CQI)/秩指示(rank indication,RI)/预编码矩阵指示(precoding matrix indicator,PMI)/长度指示(length indicator,LI)/CRI)、确定响应(acknowledgement,ACK)/否定响应(non-acknowledgement,NACK)等内容。UCI/DCI相较于普通数据而言更短,而重要性更高。
然而,如图10所示的用于传输UCI/DCI的物理专用信道和相关处理只适用于较短的比特序列。上述AI traffic可能包括机器学习模型的参数、梯度、甚至训练用数据等较大的数据包,这些数据包的传输有可能存在拆包/组包的需求,现有协议的物理信道中不支持此类处理。
有鉴于此,本申请提供一种通信方案,第一节点向第二节点发送第一指示信息和/或第二指示信息,该第一指示信息用于指示数据集状态信息,第二指示信息用于指示模型状态信息;第二节点根据第一指示信息和/或第二指示信息,确定通信策略和通信资源;以及第二节点向第一节点发送第三指示信息,该第三指示信息用于指示通信策略和通信资源,通信策略和通信资源是根据第一指示信息和/或第二指示信息确定的。采用本申请的方案,通过获取第一节点的数据集状态信息和/或模型状态信息,以此确定通信策略和通信资源,并指示第一节点通信策略和通信资源,提高了通信的可靠性。
如图11所示,为本申请实施例提供的一种通信方法的流程示意图,应用于图1所示的通信系统。示例性地,该方法可以包括以下步骤:
S1101.第一节点向第二节点发送第一指示信息和/或第二指示信息。
相应地,第二节点接收该第一指示信息和/或第二指示信息。
其中,该第一指示信息用于指示数据集状态信息(dataset state information,DSI),第二指示信息用于指示模型状态信息(model state information,MSI)。
示例性地,第一节点可以向第二节点发送第一指示信息;或第一节点向第二节点发送第二指示信息;或第一节点向第二节点发送第一指示信息和第二指示信息。
示例性地,第一节点可以通过数据信道、控制信道或其它专用信道上报该第一指示信息和/或第二指示信息。
其中，数据集状态信息是指第一节点以何种方式获得模型的输入数据。该数据集状态信息影响人工智能业务数据包的通信策略和通信资源的确定。该数据集状态信息包括以下至少一种信息：数据种类，采样频率，采样位置信息，样本时间戳，样本数量，样本精度。
示例性地,对于数据种类,物理层包括以下至少一个种类的数据:信道、信号强度、信号与干扰加噪声比(signal to interference plus noise ratio,SINR)。MAC层包括以下至少一个种类的数据:调度、资源分配、缓存状态、链路自适应。
采样频率可以表示为每多少个微时隙(minislot)/时隙(slot)/子帧(subframe)/帧(frame)/秒采样一次。
采样位置信息可以用以下任意一种方式表征:小区标识(cell identifier)、基站距离、全球定位系统(global positioning system,GPS)位置。本申请对此不作限制。
样本精度可以用以下至少一种方式表征:双精度浮点数(double),64位浮点数(float64),浮点(float),32位浮点数(float32),16位浮点数(float16),8位浮点数(float8),整数(int),64位整数(int64),32位整数(int32),16位整数(int16),8位整数(int8),二进制数(bin)。
示例性地,第一指示信息用于指示数据集状态信息,可以是对于上述至少一个数据集状态信息中的每个信息分别指示或联合指示。对于上述至少一个数据集状态信息中的每个信息,例如,对于数据种类,可以用若干个比特指示多个数据种类。例如,有8个数据种类,则可以用3比特的二进制数指示该8个数据种类。
模型状态信息是指对第一节点采用的模型的描述。该模型状态信息包括以下至少一个信息:模型类型,模型标识,子模型的排序,模型版本号,模型描述,模型更新时间戳,模型更新位置信息,模型精度,模型性能。
其中,模型类型包括以下至少一种:全连接神经网络(fully connected neural network,FCNN)、卷积神经网络(convolutional neural networks,CNN)、循环神经网络(recurrent neural networks,RNN)、转换器(transformer)、混合型模型。
模型标识(model identifier)用于索引对应模型和任务。
子模型的排序用于指示是否子模型及排序,用于构建完整模型。
模型版本号用于版本同步。
模型描述包括参数量、推理运算量等。
模型性能是指参考数据集上的推理性能。
示例性地,第二指示信息用于指示模型状态信息,可以是对于上述至少一个模型状态信息中的每个信息分别指示或联合指示。对于上述至少一个模型状态信息中的每个信息,例如,对于模型类型,可以用若干个比特指示多个数据种类。例如,有6个模型类型,则可以用3比特的二进制数指示该6个模型类型。
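示例性地，下面给出一段将数据集状态信息/模型状态信息中的某个字段编码为固定长度比特串的Python示意代码，用于帮助理解上述"用若干个比特指示多个取值"的做法。其中的取值索引为假设，实际的指示格式以具体实现为准：

```python
import math

def field_to_bits(index, num_values):
    """将某个状态字段的取值索引编码为固定长度的比特串。"""
    width = max(1, math.ceil(math.log2(num_values)))
    return format(index, '0{}b'.format(width))

# 8个数据种类可以用3比特指示，6个模型类型同样可以用3比特指示
dsi_bits = field_to_bits(index=5, num_values=8)   # 假设某数据种类的索引为5，编码为'101'
msi_bits = field_to_bits(index=2, num_values=6)   # 假设某模型类型的索引为2，编码为'010'
indication = dsi_bits + msi_bits                  # 联合指示时可将各字段的比特级联
```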
S1102.第二节点根据第一指示信息和/或第二指示信息,确定通信策略和通信资源。
如图1所示的通信系统,第二节点可能与一个或多个第一节点通信。第二节点接收到第一指示信息和/或第二指示信息后,调度第一节点,确定需要上行发送人工智能业务数据的第一节点。
此外,第一节点和第二节点之间的信道状态对确定需要上行发送人工智能业务数据的第一节点有一定的影响。因此,需要上行发送人工智能业务数据的第一节点还可以是根据第一指示信息、第二指示信息和信道状态信息中的至少两项确定的。
第二节点可以进行信道估计,确定第一节点和第二节点之间的信道状态信息;或者,第二节点也可以通过信道互易性、第一节点反馈信道状态信息等方式获得第一节点和第二节点之间的信道状态信息。
第二节点调度第一节点的方式包括但不限于以下方式（下文给出一段示意性的选择逻辑作为参考）：
一种方式为，调度信道条件好的第一节点；
另一种方式为，调度数据集符合要求的第一节点，例如调度数据种类、采样频率、采样位置、采样时间、样本数量、样本精度等参数中的一项或多项符合第二节点要求的第一节点；
又一种方式为，调度模型符合要求的第一节点，例如调度模型类型、模型标识、模型描述、模型更新时间、模型更新位置、模型精度、模型性能等参数中的一项或多项符合要求的第一节点；
又一种方式为，调度信道条件好和数据集符合要求的第一节点；
又一种方式为，调度信道条件好和模型符合要求的第一节点；
又一种方式为，调度信道条件好、数据集符合要求和模型符合要求的第一节点。
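示例性地，下面给出上述调度方式的一段Python示意代码，示出"信道条件好、数据集符合要求且模型符合要求"这一组合条件下的节点选择逻辑。其中的SNR门限、数据种类与模型类型取值均为说明用途而假设：

```python
def schedule_nodes(nodes, snr_threshold, required_data_kinds, required_model_type):
    """从候选第一节点中选出信道条件好且数据集/模型均符合要求的节点。"""
    scheduled = []
    for node in nodes:                     # node为包含上报信息的字典
        channel_ok = node['snr'] >= snr_threshold
        dataset_ok = required_data_kinds.issubset(node['data_kinds'])
        model_ok = node['model_type'] == required_model_type
        if channel_ok and dataset_ok and model_ok:
            scheduled.append(node['id'])
    return scheduled

nodes = [
    {'id': 1, 'snr': 18.0, 'data_kinds': {'信道', 'SINR'}, 'model_type': 'CNN'},
    {'id': 2, 'snr': 6.0,  'data_kinds': {'信道'},         'model_type': 'CNN'},
]
print(schedule_nodes(nodes, snr_threshold=10.0,
                     required_data_kinds={'信道'}, required_model_type='CNN'))
```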
第二节点确定需要上行发送人工智能业务数据的第一节点后,确定第一节点上行传输所使用的通信策略。
上述通信策略包括以下至少一种策略:量化策略,源编码策略,调制编码策略(modulation and coding scheme,MCS),信源信道联合编码策略,实数或复数处理策略。
数据集状态信息、模型状态信息影响第一节点采用的通信策略。第二节点可以根据第一指示信息,或第二指示信息,或第一指示信息和第二指示信息,确定上述通信策略。
此外,第一节点和第二节点之间的信道状态对第一节点和第二节点采用的通信策略有一定的影响。例如,信道条件好,则量化精度可以选择高一些;信道条件差,则量化精度可以选择低一些。又例如,信道条件好,则MCS等级可以选择高一些;信道条件差,则MCS等级可以选择低一些。因此,通信策略还可以是根据第一指示信息、第二指示信息和信道状态信息中的至少两项确定的。
第二节点还可以根据第一指示信息和/或第二指示信息,确定第一节点可以使用或需要使用的通信资源。
第一节点可以使用或需要使用的通信资源与第一节点需要上行传输的人工智能业务数据包的大小有关。示例性地,第二节点可以先获取人工智能业务数据包的大小,然后根据人工智能业务数据包的大小、第一指示信息、第二指示信息中的至少两项,确定通信资源。
其中,获取人工智能业务数据包的大小可以有以下两种方式:
一种方式为,考虑到PHY/MAC层的AI相关任务的数量有限,可以预先针对每一类AI任务的每一种人工智能业务数据包的类型,定义固定的数据包大小,如下表1所示:
表1 固定大小的人工智能业务数据包
AI任务        人工智能业务数据包的类型        数据包大小
CSI压缩       数据                            S1
CSI压缩       模型参数                        S2
波束管理      模型参数                        A2
……            ……                              ……
如表1所示,AI任务为信道状态信息(channel state information,CSI)压缩,且人工智能业务数据包的类型为数据,对应的人工智能业务数据包的大小为S1;AI任务为CSI压缩,且人工智能业务数据包的类型为模型参数,对应的人工智能业务数据包的大小为S2;等等。
第二节点是可以获得AI任务的任务类型和人工智能业务数据包的类型的,因此,可以根据任务类型、业务类型中的至少一项与数据包大小的对应关系,确定人工智能业务数据包的大小(即待通信的数据量)。例如,第二节点获得AI任务的类型为波束管理,人工智能业务数据包的类型为模型参数,第二节点可以根据上述表1中的对应关系,确定人工智能业务数据包的大小为A2。
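示例性地，下面给出根据任务类型与数据包类型查表确定人工智能业务数据包大小的Python示意代码。表中的S1、S2、A2沿用正文中的占位符号，具体数值并非本文给出：

```python
# 任务类型与数据包类型到数据包大小的对应关系（表1的一种可能表示，S1/S2/A2为占位符号）
PACKET_SIZE_TABLE = {
    ('CSI压缩', '数据'):     'S1',
    ('CSI压缩', '模型参数'): 'S2',
    ('波束管理', '模型参数'): 'A2',
}

def lookup_packet_size(task_type, packet_type):
    """根据任务类型和人工智能业务数据包的类型查表得到数据包大小。"""
    return PACKET_SIZE_TABLE[(task_type, packet_type)]

print(lookup_packet_size('波束管理', '模型参数'))   # 输出A2，与正文示例一致
```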
第二节点确定人工智能业务数据包的大小后，可以根据人工智能业务数据包的大小、第一指示信息、第二指示信息中的至少两项，确定通信资源。
在一个示例中,第二节点根据人工智能业务数据包的大小、量化策略、源编码策略、MCS计算调制符号数量,再根据调制符号数量确定通信资源。
在另一个示例中,第二节点根据人工智能业务数据包的大小、量化策略、信源信道联合编码策略、调制策略计算调制符号数量,再根据调制符号数量确定通信资源。
在又一个示例中,第二节点根据人工智能业务数据包的大小、实数或复数处理策略计算待通信的实数符号数量,再根据待通信的实数符号数量确定通信资源。
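示例性地，下面给出按照上述第一个示例的思路、由数据包大小与通信策略估算调制符号数量的Python示意代码。该代码仅为一种简化的估算草图，未考虑参考信号开销、资源块粒度等因素，其中的参数取值均为假设：

```python
import math

def num_modulation_symbols(num_values, quant_bits, src_coding_ratio,
                           code_rate, bits_per_symbol):
    """由数据包大小与通信策略估算所需的调制符号数量。

    num_values: 人工智能业务数据包中待传输的实数个数
    quant_bits: 量化策略给出的每个实数的量化比特数
    src_coding_ratio: 信源编码压缩比（输出比特/输入比特，<=1）
    code_rate: 调制编码策略中的信道编码码率
    bits_per_symbol: 每个调制符号承载的比特数（如QPSK为2、16QAM为4）
    """
    info_bits = num_values * quant_bits * src_coding_ratio
    coded_bits = info_bits / code_rate
    return math.ceil(coded_bits / bits_per_symbol)

# 例如1万个模型参数、8比特量化、压缩比0.5、码率1/2、16QAM
symbols = num_modulation_symbols(10000, 8, 0.5, 0.5, 4)
```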
另一种方式为,在人工智能业务数据包的大小没有如上述方式所述可以通过类似表1的方式限定为固定大小时,可以由第一节点向第二节点上报人工智能业务数据包的大小,第二节点接收第一节点上报的人工智能业务数据包的大小。
S1103.第二节点向第一节点发送第三指示信息。
相应地,第一节点接收该第三指示信息。
其中,该第三指示信息用于指示上述通信策略和通信资源。
示例性地,第二节点可以通过下行控制信息(downlink control information,DCI)或学习控制信息(learning control information,LCI)指示被调度的第一节点,并指示其上行传输所使用的通信策略和通信资源。第二节点可以采用已有的DCI格式中的保留字段中携带上述通信策略和通信资源;或采用新定义的DCI格式携带上述通信策略和通信资源;或采用新定义的LCI携带上述通信策略和通信资源。该LCI可以是新定义的用于分布式学习的信息格式。
示例性地,上述DCI/LCI可以使用专用的无线网络临时标识(radio network temporary identity,RNTI),例如学习-无线网络临时标识(learning-radio network temporary identity,L-RNTI)进行加扰。
第一节点接收到上述DCI/LCI后,采用专用的RNTI对DCI/LCI进行解扰,解扰成功,则该DCI/LCI是发送给第一节点的。
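示例性地，下面给出一段采用"将RNTI与CRC末16比特逐比特异或"这一与现有DCI类似的加扰方式的Python示意代码，用于帮助理解利用专用RNTI（如L-RNTI）加扰/解扰的过程。其中的L-RNTI取值与CRC长度均为说明用途而假设，实际加扰方式以标准定义为准：

```python
def scramble_crc_with_rnti(crc_bits, rnti):
    """将16比特RNTI逐比特异或到CRC的最后16比特上（示意性的加扰方式）。"""
    rnti_bits = [(rnti >> (15 - i)) & 1 for i in range(16)]
    out = list(crc_bits)
    for i in range(16):
        out[-16 + i] ^= rnti_bits[i]
    return out

def descramble_and_check(crc_bits_rx, rnti):
    """接收端用专用RNTI（如L-RNTI）解扰：再异或一次即可恢复原CRC。"""
    return scramble_crc_with_rnti(crc_bits_rx, rnti)

L_RNTI = 0x4C52                      # 假设的L-RNTI取值，仅作示例
crc = [0] * 24                       # 24比特CRC占位
tx = scramble_crc_with_rnti(crc, L_RNTI)
rx = descramble_and_check(tx, L_RNTI)
assert rx == crc                     # 解扰成功则认为该DCI/LCI是发送给本节点的
```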
可选地,该方法还可以包括以下步骤(图中以虚线表示):
S1104.第一节点根据第三指示信息,生成人工智能业务数据包。
第一节点接收到第三指示信息后,根据第三指示信息,生成人工智能业务数据包。示例性地,第一节点根据第三指示信息所指示的通信策略和通信资源,生成人工智能业务数据包。例如,人工智能业务数据包所需的通信资源大于第三指示信息所指示的通信资源时,第一节点可以将人工智能业务数据包进行拆包,拆分成若干个子数据包,以使得每个子数据包符合第三指示信息所指示的通信资源。又例如,第一节点根据第三指示信息所指示的量化策略(通信策略的一种),对人工智能业务数据包进行量化处理,以符合第三指示信息所指示的量化策略。又例如,第一节点根据第三指示信息所指示的调制编码策略(通信策略的一种),对人工智能业务数据包进行调制,以符合第三指示信息所指示的调制编码策略。
其中,该人工智能业务数据包包括包头和负载(payload)。
其中,上述负载包括以下至少一种数据:机器学习模型的参数或梯度,推理结果,推理至分割层的中间结果,反向传递至分割层的梯度,对分数。
上述包头包括以下至少一项:
内容指示信息:用于指示负载中包括的上述至少一种数据;
内容描述信息:用于指示负载中包括的上述至少一种数据的时间戳、获取位置、版本信息等;
压缩方式：用于指示负载的剪枝位置（即人工智能业务数据包被拆分的位置）、是否蒸馏等；
组包方式:用于指示完整的人工智能业务数据包的序号、完整的人工智能业务数据包被拆分后的子包的序号(当人工智能业务数据包的数据量较大时,可以通过多个子数据包进行发送,并在包头中包括各个子数据包的序号,用于第二节点接收到多个子数据包后进行组包)。
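示例性地，基于上述包头与负载的结构，下面给出一段人工智能业务数据包组包与拆包（拆分为子数据包）的Python示意代码。其中的字段名与数据结构均为说明用途而假设，并非对包格式的限定：

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AiPacketHeader:
    content_type: str             # 内容指示信息，如"模型梯度"、"推理结果"
    content_desc: Dict[str, str]  # 内容描述信息，如时间戳、获取位置、版本信息
    compression: str              # 压缩方式
    packet_id: int                # 完整的人工智能业务数据包的序号
    sub_packet_id: int = 0        # 子包序号，用于接收端组包
    sub_packet_total: int = 1     # 子包总数

@dataclass
class AiPacket:
    header: AiPacketHeader
    payload: List[float] = field(default_factory=list)   # 负载，如模型参数或梯度

def split_packet(packet: AiPacket, max_payload_len: int) -> List[AiPacket]:
    """当通信资源不足以承载完整数据包时，将其拆分为若干个子数据包。"""
    chunks = [packet.payload[i:i + max_payload_len]
              for i in range(0, len(packet.payload), max_payload_len)]
    sub_packets = []
    for idx, chunk in enumerate(chunks):
        header = AiPacketHeader(packet.header.content_type, packet.header.content_desc,
                                packet.header.compression, packet.header.packet_id,
                                sub_packet_id=idx, sub_packet_total=len(chunks))
        sub_packets.append(AiPacket(header, chunk))
    return sub_packets
```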
S1105.第一节点根据第三指示信息,向第二节点发送人工智能业务数据包。
相应地,第二节点接收根据第三指示信息发送的人工智能业务数据包。
示例性地,第一节点根据第三指示信息所指示的通信策略,在第三指示信息所指示的通信资源上向第二节点发送上述人工智能业务数据包。
进一步地,上述人工智能业务数据包通过物理学习信道通信。
如图12所示,为本申请实施例提供的一种人工智能业务数据的通信示意图,该物理学习信道是为PHY/MAC层的人工智能业务数据定义的新的物理信道,可以包括:
上行:物理上行学习信道(physical uplink learning channel,PULCH);
下行:物理下行学习信道(physical downlink learning channel,PDLCH);
物理广播学习信道(physical broadcast learning channel,PBLCH)。
示例性地,上述PULCH/PDLCH/PBLCH可以由专用的时频资源承载。
上述物理学习信道的处理与现有技术(如图10所示)的主要不同之处在于,人工智能业务数据一般是实数或复数形式的,不能够直接进行比特级的信道编码等处理。因此,物理学习信道的数据处理流程中需要添加量化、信源编码、信源信道联合编码、实数或复数处理等操作。物理学习信道的处理操作的可能实现方式为:
实现方式1):通过物理学习信道的通信包括以下至少一个处理操作:量化,信源编码。示例性地,如图13a所示,为本申请实施例提供的一种物理学习信道的数据处理流程示意图,通过物理学习信道的通信包括以下处理操作:量化、信源编码、信道编码、速率匹配、加扰、调制、层映射、预编码、资源映射、正交频分复用(orthogonal frequency division multiplexing,OFDM)信号生成等。其中,信道编码及之后的处理操作可以复用如图10中对应的处理操作,也可以使用特殊设计的相应处理操作(例如使用与现有标准不同的信道编码、调制、信号生成方式)。可以看到,图13a所示的处理流程中,相较于现有标准,上述物理学习信道增加了“量化”和“信源编码”处理操作。其中,如果人工智能业务数据是实数,量化是指将人工智能业务数据中包含的实数值量化为可能性有限的几个实数。如果人工智能业务数据是复数,量化是指将人工智能业务数据中包含的复数的实部值量化为可能性有限的几个实数,以及将人工智能业务数据中包含的复数的虚部值量化为可能性有限的几个实数。可以采用均匀量化、非均匀量化、向量量化等量化方式。信源编码是指将量化得到的实数值编码为比特序列,可以采用霍夫曼(Huffman)编码、算术编码、L-Z编码等信源编码方式。
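示例性地，下面给出"量化+信源编码"这两个处理操作的一段Python示意代码。为简洁起见，信源编码以最简单的定长编码代替正文提到的霍夫曼编码等方式，量化区间与电平数均为假设：

```python
import numpy as np

def uniform_quantize(values, num_levels, v_min=-1.0, v_max=1.0):
    """将实数形式的人工智能业务数据均匀量化为有限个电平的索引。"""
    clipped = np.clip(values, v_min, v_max)
    step = (v_max - v_min) / (num_levels - 1)
    return np.round((clipped - v_min) / step).astype(int)

def fixed_length_source_encode(indices, num_levels):
    """最简单的定长信源编码：每个量化索引写成log2(num_levels)个比特。"""
    width = int(np.ceil(np.log2(num_levels)))
    return ''.join(format(int(i), '0{}b'.format(width)) for i in indices)

grads = 0.1 * np.random.randn(8)                       # 例如一段待上报的模型梯度
idx = uniform_quantize(grads, num_levels=16)
bits = fixed_length_source_encode(idx, num_levels=16)  # 后续再进行信道编码、调制等处理
```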
实现方式2):通过物理学习信道的通信包括以下至少一个处理操作:量化,信源信道联合编码。示例性地,如图13b所示,为本申请实施例提供的另一种物理学习信道的数据处理流程示意图,通过物理学习信道的通信包括以下处理操作:量化、信源信道联合编码、速率匹配、加扰、调制、层映射、预编码、资源映射、OFDM信号生成等。其中速率匹配及之后的处理操作可以复用如图10中对应的处理操作,也可以使用特殊设计的相应处理操作(例如使用与现有标准不同的信道编码、调制、信号生成方式)。可以看到,图13b所示的处理操作中,相较于现有标准,上述物理学习信道增加了“量化”和“信源信道联合编码”处理操作,减少“信道编码”处理操作。其中,量化操作的概念可参考上述实现方式1)的描述。信源信道联合编码是指将量化得到的实数值编码为比特序列。信源信道联合编码操作结合了信源 编码和信道编码,通过该处理,对量化的输出进行了压缩,并提升了传输的可靠性。
实现方式3):通过物理学习信道的通信包括以下处理操作:实数处理。示例性地,如图13c所示,为本申请实施例提供的又一种物理学习信道的数据处理流程示意图,通过物理学习信道的通信包括以下处理操作:实数处理、层映射、预编码、资源映射、OFDM信号生成等。其中层映射及之后的处理操作可以复用如图10中对应的处理操作,也可以使用特殊设计的相应处理操作(例如使用与现有标准不同的信道编码、调制、信号生成方式)。可以看到,图13c所示的处理操作中,相较于现有标准,上述物理学习信道增加了“实数或复数处理”操作,减少“信道编码”、“速率匹配”、“加扰”、“调制”等处理操作。实数或复数处理操作是指将人工智能业务数据中包含的实数值或复数值的实部和虚部映射成一组实数序列,直接进行层映射等后续处理进行发送。
示例性地,实数或复数处理操作可以将输入的实数值或复数值的实部和虚部映射为实数序列。以输入为实数值为例,映射的方式可以是直接取等(即输出等于输入),也可以是将输入的实数值映射到有限的几个实数值上(例如小于等于-1的数映射为-1,-1到0的数映射为-0.5,0到1的数映射为0.5,大于等于1的数映射为1)。
实数或复数处理操作输出的实数序列通过层映射、预编码、资源映射等操作被映射到时频空传输资源上,经过信号生成得到信号后被发送出去。多个第一节点同时发送的信号,其相同时频空传输资源上的信号相互叠加后被第二节点接收,第二节点可以基于叠加后的信号进行处理,获得所需信息,这就是空中计算的过程。
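示例性地，下面给出一段帮助理解上述空中计算过程的Python示意代码：多个节点的实数序列在相同资源上叠加，第二节点基于叠加后的信号直接得到聚合（如平均）结果。其中的噪声模型与维度均为说明用途而假设：

```python
import numpy as np

num_nodes, dim = 3, 6
tx_symbols = [np.random.randn(dim) for _ in range(num_nodes)]  # 各节点经实数或复数处理后的实数序列
noise = 0.01 * np.random.randn(dim)
rx = sum(tx_symbols) + noise        # 相同时频空资源上的信号相互叠加后被第二节点接收
aggregate_mean = rx / num_nodes     # 第二节点基于叠加信号直接得到所需的聚合结果（如平均值）
```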
示例性地,可以对上述人工智能业务数据包的包头和负载分开发送,例如,包头包含的比特序列可以通过如图10所示的已有的物理信道,经信道编码、速率匹配、加扰、调制等处理后发送;负载可以通过上述物理学习信道发送。
第二节点接收到上述人工智能业务数据包后,根据包头对人工智能业务数据包进行解包,获得所需的人工智能业务数据包。
根据本申请实施例提供的一种通信方法,通过获取第一节点的数据集状态信息和/或模型状态信息,以此确定通信策略和通信资源,并指示第一节点通信策略和通信资源,第一节点根据该通信策略和通信资源与第二节点进行通信,提高了通信的可靠性。
在另外的实施例中,上述步骤S1104也可以替换为第二节点根据第三指示信息,生成人工智能业务数据包。具体如何生成人工智能业务数据包,可参考上述步骤S1104的描述。上述步骤S1105也可以替换为:第二节点根据第三指示信息,向第一节点发送人工智能业务数据包。相应地,第一节点接收根据第三指示信息发送的人工智能业务数据包。即第二节点确定第三指示信息后,自身根据该第三指示信息,向第一节点发送人工智能业务数据包。人工智能业务数据包的通信可以参考上述S1105的描述,在此不再赘述。
如图14所示,为本申请实施例提供的另一种通信方法的流程示意图,应用于图3所示的通信系统。示例性地,该方法可以包括以下步骤:
S1401.第一节点向第二节点发送第一指示信息和/或第二指示信息。其中，该第一指示信息用于指示数据集状态信息，第二指示信息用于指示模型状态信息。
相应地,第二节点接收该第一指示信息和/或第二指示信息。
该步骤的具体实现可参考如图11所示的步骤S1101,在此不再赘述。
S1402.第二节点根据第一指示信息和/或第二指示信息,确定通信策略和通信资源。
该步骤的具体实现可参考如图11所示的步骤S1102,在此不再赘述。
其中，根据第一指示信息、第二指示信息和信道状态信息确定需要上行发送人工智能业务数据的第一节点，以及根据第一指示信息、第二指示信息和信道状态信息确定通信策略时，该信道状态信息包括第一节点和第三节点之间的信道状态信息；或者该信道状态信息包括第一节点、第二节点和第三节点之间的信道状态信息。
S1403.第二节点向第一节点发送第三指示信息。该第三指示信息用于指示通信策略和通信资源。
相应地,第一节点接收该第三指示信息。
该步骤的具体实现可参考如图11所示的步骤S1103,在此不再赘述。
可选地,该方法还可以包括以下步骤(图中以虚线表示):
S1404.第一节点根据第三指示信息,生成人工智能业务数据包。
该步骤的具体实现可参考如图11所示的步骤S1104,在此不再赘述。
S1405.第一节点根据第三指示信息,向第三节点发送人工智能业务数据包。
相应地,第三节点接收该人工智能业务数据包。
与图11所示实施例不同的是,该方法应用于图3所示的通信系统,第一节点根据第二节点发送的第三指示信息所指示的通信策略和通信资源,与第三节点进行通信。示例性地,第一节点根据第三指示信息,向第三节点发送人工智能业务数据包。
根据本申请实施例提供的一种通信方法,通过获取第一节点的数据集状态信息和/或模型状态信息,以此确定通信策略和通信资源,并指示第一节点通信策略和通信资源,第一节点根据该通信策略和通信资源与第三节点进行通信,提高了通信的可靠性。
在另外的实施例中,上述步骤S1403可替换为,第二节点向第三节点发送第三指示信息。相应地,第三节点接收该第三指示信息。上述步骤S1404可替换为,第三节点根据第三指示信息,生成人工智能业务数据包。上述步骤S1405可替换为,第三节点根据第三指示信息,向第一节点发送人工智能业务数据包。相应地,第一节点接收该人工智能业务数据包。
上述实施例描述了在物理层和/或MAC层等低层协议栈进行人工智能业务数据的通信方法。下面实施例描述通过高层协议栈进行人工智能业务数据的通信方法:
如图15所示,为本申请实施例提供的又一种通信方法的流程示意图,应用于图1或图3所示的通信系统。该通信系统涉及第一节点和第二节点,该第一节点可以是终端,第二节点可以是网络设备;或者,该第一节点可以是网络设备,第二节点可以是终端。示例性地,该方法可以包括以下步骤:
S1501.第一节点获取物理层和/或MAC层产生的人工智能业务数据。
第一节点在无线通信系统的物理层或MAC层等L1/L2协议层中使用机器学习,并获取物理层和/或MAC层产生的人工智能业务数据。其中,人工智能业务数据的含义及包括的内容可参考前述实施例。
S1502.第一节点通过第一无线承载向第二节点发送人工智能业务数据。
相应地,第二节点通过第一无线承载接收该人工智能业务数据。
示例性地,一种人工智能业务数据的协议栈处理流程如图16所示,该协议栈中增加人工智能业务数据控制(AI traffic control,ATC)层。该ATC层用于实现将物理层/MAC层产生的人工智能业务数据输入到第一无线承载。
示例性地,另一种人工智能业务数据的协议栈处理流程也可以是在现有的模块或协议栈中增加将物理层/MAC层产生的人工智能业务数据输入到第一无线承载的功能,从而新增该功能的现有的模块或协议栈可以将物理层/MAC层产生的人工智能业务数据输入到第一无线承载。
示例性地,该第一无线承载为新定义的学习无线承载(learning radio bearer,LRB)或第一数据无线承载。该第一数据无线承载为新的数据无线承载(newDRB)。
ATC层或新增该功能的现有的模块或协议栈将人工智能业务数据输入到第一无线承载后,第一节点将人工智能业务数据进行以下至少一个协议层的处理:PDCP层,RLC层,MAC层,PHY层,得到处理后的人工智能业务数据。具体地,第一节点将人工智能业务数据在PDCP层进行数据完整性保护、加密等;第一节点将PDCP层处理后的人工智能业务数据在RLC层进行缓存、分段重组等;第一节点将RLC层处理后的人工智能业务数据在MAC层进行逻辑信道和传输信道之间的映射、复用/解复用、调度、HARQ、逻辑信道优先级设置等;以及第一节点将MAC层处理后的人工智能业务数据在PHY层进行信道编码、调制、资源映射等处理。然后,第一节点通过数据信道、控制信道或专用信道向第二节点发送经过处理后的人工智能业务数据。
相应地,第二节点接收该处理后的人工智能业务数据。
S1503.第二节点将人工智能业务数据输出至物理层和/或MAC层。
第二节点通过射频模块接收到该处理后的人工智能业务数据后,第二节点将人工智能业务数据进行以下至少一个协议层的处理:PHY层,MAC层,RLC层,PDCP层。将人工智能业务数据进行至少一个协议层的处理,可以是将人工智能业务数据进行PHY层,MAC层,RLC层,PDCP层中的一个或多个协议层的处理。示例性地,第二节点将人工智能业务数据依次经由PHY层,MAC层,RLC层,PDCP层进行处理。
然后,第二节点将经过处理后的人工智能业务数据输出至PHY层和/或MAC层。
在一个示例中,第二节点可以将经过处理后的人工智能业务数据经过第一无线承载输入到ATC层,ATC层将从第一无线承载输入的人工智能业务数据输出至PHY层和/或MAC层。
在另一个示例中,第二节点也可以将经过处理后的人工智能业务数据经过第一无线承载输入到现有的模块或协议层,该现有的模块或协议层增加了将从第一无线承载输入的人工智能业务数据输出至PHY层和/或MAC层的功能,从而可以将从第一无线承载输入的人工智能业务数据输出至PHY层和/或MAC层。
第二节点的处理流程为图16的处理流程的逆过程,在此不再赘述。
根据本申请实施例提供的一种通信方法,定义了一种新的协议栈处理流程,对于在PHY/MAC层产生的人工智能业务数据,经过ATC层输入到第一无线承载后,可以通过第一无线承载向第二节点发送人工智能业务数据,实现了在PHY/MAC层产生的人工智能业务数据的通信。
在另外的实施例中,为了构建第一无线承载,还可以包括以下步骤:
网络设备向终端发送第一信令。该第一信令用于指示配置第一无线承载。
相应地,终端接收该第一信令。
结合上述实施例,第一节点为终端,第二节点为网络设备时,则第二节点向第一节点发送上述第一信令;第一节点为网络设备,第二节点为终端时,则第一节点向第二节点发送上述第一信令。
示例性地,该第一信令为无线资源控制(radio resource control,RRC)信令。以第一无线承载为LRB为例,其中,配置第一无线承载包括增加LRB和释放LRB。具体地,在RRC信令的无线承载配置(RadioBearerConfig)信元中增加/释放LRB,该信元的具体内容为:
（RadioBearerConfig信元的具体定义，其中包括待增加的LRB列表（LRB-ToAddModList）和待释放的LRB列表（LRB-ToReleaseList）等字段）
对于待增加的LRB列表(LRB-ToAddModList)中的每一个待增加的LRB(LRB-ToAddMod),查看其LRB标识(LRB-Identity字段)(示例性地,可以采用数字进行标识),
a)若该LRB-Identity字段对应的LRB不在当前终端的LRB列表中
i.建立PDCP实体(reestablishPDCP),并根据LRB-ToAddMod中的PDCP配置(pdcp-Config)字段进行配置,配置方法与现有PDCP实体配置方法相同;
b)若该LRB-Identity字段对应的LRB已在当前终端的LRB列表中
i.若reestablishPDCP字段为真(true),则重新建立PDCP实体,并根据LRB-ToAddMod中pdcp-Config字段进行配置,配置方法与现有PDCP实体配置方法相同;
ii.若恢复PDCP(recoverPDCP)字段为true，则恢复LRB-Identity字段对应的LRB对应的PDCP实体。
释放LRB的流程包括:释放待释放的LRB列表(LRB-ToReleaseList)中LRB-Identity字段对应的LRB和其对应的PDCP实体。
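示例性地，下面给出按照上述流程处理LRB增加/修改与释放的一段Python示意代码。其中的PdcpEntity类仅记录配置、不实现真实的PDCP功能，字段名沿用正文中的LRB-Identity、pdcp-Config、reestablishPDCP、recoverPDCP等：

```python
class PdcpEntity:
    """示意性的PDCP实体，仅记录配置，不实现真实的PDCP功能。"""
    def __init__(self, pdcp_config):
        self.config = pdcp_config
    def recover(self):
        pass                                  # 恢复该LRB对应的PDCP实体（示意）
    def release(self):
        self.config = None                    # 释放该PDCP实体（示意）

def apply_radio_bearer_config(ue_lrbs, lrb_to_add_mod_list, lrb_to_release_list):
    """按RadioBearerConfig信元中的LRB列表处理LRB的增加/修改与释放。"""
    for item in lrb_to_add_mod_list:
        lrb_id = item['LRB-Identity']
        if lrb_id not in ue_lrbs:
            # a) 该LRB不在当前终端的LRB列表中：建立PDCP实体并按pdcp-Config配置
            ue_lrbs[lrb_id] = PdcpEntity(item['pdcp-Config'])
        else:
            # b) 该LRB已在当前终端的LRB列表中
            if item.get('reestablishPDCP'):
                ue_lrbs[lrb_id] = PdcpEntity(item['pdcp-Config'])
            elif item.get('recoverPDCP'):
                ue_lrbs[lrb_id].recover()
    for lrb_id in lrb_to_release_list:
        entity = ue_lrbs.pop(lrb_id, None)    # 释放LRB及其对应的PDCP实体
        if entity is not None:
            entity.release()
    return ue_lrbs
```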
一种示例的LRB建立流程如下,其中与现有通信标准相同的字段,其含义参考现有通信标准中的解释,在此不再赘述:
网络设备向终端发送无线资源控制重配置(RRCReconfiguration)信令:
（RRCReconfiguration信令的具体定义，其中包括无线资源控制重配置（rrcReconfiguration）字段）
其中,该RRCReconfiguration信令中包括无线资源控制重配置(rrcReconfiguration)字段,该rrcReconfiguration字段包括无线资源控制重配置-信元(RRCReconfiguration-IEs)。该无线资源控制重配置字段用于配置LRB的建立。
进一步地,其中的RRCReconfiguration-IEs字段为:
（RRCReconfiguration-IEs的具体定义，其中包括无线承载配置（radioBearerConfig）字段）
该无线资源控制重配置-信元中又包括无线承载配置(radioBearerConfig)字段。该字段用于配置LRB的建立。
进一步地,其中的RadioBearerConfig字段为:
（RadioBearerConfig字段的具体定义，包括待增加的LRB列表（lrb-ToAddModList）和待释放的LRB列表（lrb-ToReleaseList）字段）
该无线承载配置字段包括待增加的LRB列表(lrb-ToAddModList)和待释放的LRB列表(lrb-ToReleaseList)。其中的LRB-ToAddModList和LRB-ToReleaseList字段包含的内容如前所述,描述了需要建立的LRB和释放的LRB。
终端收到RRCReconfiguration信令后,根据其中的LRB-ToAddModList和LRB-ToReleaseList字段内容,建立或释放LRB。
另一种可能的示例的LRB建立流程用于恢复被挂起的RRC连接时,步骤如下,其中与现有通信标准相同的字段,其含义参考现有通信标准中的解释,在此不再赘述:
网络设备向终端发送无线资源控制恢复(RRCResume)信令:
（RRCResume信令的具体定义，其中包括无线资源控制恢复（rrcResume）字段）
该无线资源控制恢复信令包括无线资源控制恢复(rrcResume)字段,该无线资源控制恢复字段又包括无线资源控制恢复-信元(RRCResume-IEs)。该无线资源控制恢复字段用于恢复被挂起的RRC连接。
进一步地，其中的RRCResume-IEs字段为：
（RRCResume-IEs的具体定义，其中包括无线承载配置（radioBearerConfig）字段）
该无线资源控制恢复-信元包括无线承载配置(radioBearerConfig)字段。该无线承载配置字段又包括无线承载配置(RadioBearerConfig)字段。
进一步地,其中的RadioBearerConfig字段为:
（RadioBearerConfig字段的具体定义，包括待增加的LRB列表（lrb-ToAddModList）和待释放的LRB列表（lrb-ToReleaseList）字段）
该RadioBearerConfig字段包括待增加的LRB列表(lrb-ToAddModList)和待释放的LRB列表(lrb-ToReleaseList)。其中的LRB-ToAddModList和LRB-ToReleaseList字段包含的内容如前所述,描述了需要建立的LRB和释放的LRB。
终端收到RRCResume信令后,根据其中的LRB-ToAddModList和LRB-ToReleaseList字段内容,建立或释放LRB,流程如上所述。
可以理解的是,以上各个实施例中,由第一节点实现的方法和/或步骤,也可以由可用于第一节点的部件(例如芯片或者电路)实现;由第二节点实现的方法和/或步骤,也可以由可用于第二节点的部件(例如芯片或者电路)实现。
上述主要从各个网元之间交互的角度对本申请实施例提供的方案进行了介绍。相应地，本申请实施例还提供了通信装置，该通信装置用于实现上述各种方法。该通信装置可以为上述方法实施例中的第一节点，或者为可用于第一节点的部件；或者，该通信装置可以为上述方法实施例中的第二节点，或者为可用于第二节点的部件。可以理解的是，该通信装置为了实现上述功能，其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到，结合本文中所公开的实施例描述的各示例的单元及算法步骤，本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行，取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能，但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法实施例中对通信装置进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
基于上述通信方法的同一构思,本申请还提供了如下通信装置:
如图17所示,为本申请实施例提供的一种通信装置的结构示意图,该通信装置1700包括:收发单元1701和处理单元1702。其中:
在一个示例中,该通信装置1700用于实现上述实施例中第一节点的功能时,收发单元1701用于实现如图11所示的实施例中步骤S1101、S1103和S1105中第一节点的操作;以及处理单元1702用于实现如图11所示的实施例中步骤S1104。
该通信装置1700用于实现上述实施例中第二节点的功能时,收发单元1701用于实现如图11所示的实施例中步骤S1101、S1103和S1105中第二节点的操作;以及处理单元1702用于实现如图11所示的实施例中步骤S1102。
在另一个示例中,该通信装置1700用于实现上述实施例中第一节点的功能时,收发单元1701用于实现如图14所示的实施例中步骤S1401、S1403和S1405中第一节点的操作;以及处理单元1702用于实现如图14所示的实施例中步骤S1404。
该通信装置1700用于实现上述实施例中第二节点的功能时,收发单元1701用于实现如图14所示的实施例中步骤S1401、S1403中第二节点的操作;以及处理单元1702用于实现如图14所示的实施例中步骤S1402。
在又一个示例中,该通信装置1700用于实现上述实施例中第一节点的功能时,收发单元1701用于实现如图15所示的实施例中步骤S1502中第一节点的操作;以及处理单元1702用于实现如图15所示的实施例中步骤S1501。
该通信装置1700用于实现上述实施例中第二节点的功能时,收发单元1701用于实现如图15所示的实施例中步骤S1502中第二节点的操作;以及处理单元1702用于实现如图15所示的实施例中步骤S1503。
上述实施例中的第一节点可以是终端。图18示出了一种简化的终端的结构示意图。为便于理解和图示方便,图18中,终端以手机作为例子。如图18所示,终端包括处理器、存储器、射频电路、天线以及输入输出装置。处理器主要用于对通信协议以及通信数据进行处理,以及对终端进行控制,执行软件程序,处理软件程序的数据等。存储器主要用于存储软件程序和数据。射频电路主要用于基带信号与射频信号的转换以及对射频信号的处理。天线主要用于收发电磁波形式的射频信号。输入输出装置,例如触摸屏、显示屏,键盘等主要用于接收用户输入的数据以及对用户输出数据。需要说明的是,有些种类的终端可以不具有输入输出装置。
当需要发送数据时,处理器对待发送的数据进行基带处理后,输出基带信号至射频电路,射频电路将基带信号进行射频处理后将射频信号通过天线以电磁波的形式向外发送。当有数据发送到终端时,射频电路通过天线接收到射频信号,将射频信号转换为基带信号,并将基带信号输出至处理器,处理器将基带信号转换为数据并对该数据进行处理。为便于说明,图18中仅示出了一个存储器和处理器。在实际的终端产品中,可以存在一个或多个处理器和一 个或多个存储器。存储器也可以称为存储介质或者存储设备等。存储器可以是独立于处理器设置,也可以是与处理器集成在一起,本申请实施例对此不做限制。
在本申请实施例中,可以将具有收发功能的天线和射频电路视为终端的接收单元和发送单元(也可以统称为收发单元),将具有处理功能的处理器视为终端的处理单元。如图18所示,终端包括收发单元1801和处理单元1802。收发单元1801也可以称为接收/发送(发射)器、接收/发送机、接收/发送电路等。处理单元1802也可以称为处理器,处理单板,处理模块、处理装置等。该收发单元1801用于实现图17所示实施例中收发单元1701的功能;该处理单元1802用于实现图17所示实施例中处理单元1702的功能。
上述实施例中的第二节点可以是网络设备。图19示出了一种简化的网络设备的结构示意图。网络设备包括射频信号收发及转换部分以及192部分,该射频信号收发及转换部分又包括收发单元191部分。射频信号收发及转换部分主要用于射频信号的收发以及射频信号与基带信号的转换;192部分主要用于基带处理,对网络设备进行控制等。收发单元191也可以称为接收/发送(发射)器、接收/发送机、接收/发送电路等。192部分通常是网络设备的控制中心,通常可以称为处理单元,用于控制网络设备执行上述图11、图14、图15中关于第二节点所执行的步骤。具体可参见上述相关部分的描述。收发单元191可用于实现图17所示实施例中收发单元1701的功能,192部分用于实现图17所示实施例中处理单元1702的功能。
192部分可以包括一个或多个单板,每个单板可以包括一个或多个处理器和一个或多个存储器,处理器用于读取和执行存储器中的程序以实现基带处理功能以及对网络设备的控制。若存在多个单板,各个单板之间可以互联以增加处理能力。作为一种可选的实施方式,也可以是多个单板共用一个或多个处理器,或者是多个单板共用一个或多个存储器,或者是多个单板同时共用一个或多个处理器。
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有计算机程序或指令,当计算机程序或指令被执行时,实现上述实施例中的方法。
本申请实施例还提供了一种包含指令的计算机程序产品,当该指令在计算机上运行时,使得计算机执行上述实施例中的方法。
本申请实施例还提供了一种通信系统,包括上述的通信装置。
需要说明的是，以上单元或单元的一个或多个可以通过软件、硬件或二者结合来实现。当以上任一单元或单元以软件实现的时候，所述软件以计算机程序指令的方式存在，并被存储在存储器中，处理器可以用于执行所述程序指令并实现以上方法流程。该处理器可以内置于片上系统(system on chip,SoC)或ASIC，也可是一个独立的半导体芯片。该处理器内除了用于执行软件指令以进行运算或处理的核之外，还可进一步包括必要的硬件加速器，如现场可编程门阵列(field programmable gate array,FPGA)、可编程逻辑器件(programmable logic device,PLD)、或者实现专用逻辑运算的逻辑电路。
当以上单元或单元以硬件实现的时候,该硬件可以是CPU、微处理器、数字信号处理(digital signal processing,DSP)芯片、微控制单元(microcontroller unit,MCU)、人工智能处理器、ASIC、SoC、FPGA、PLD、专用数字电路、硬件加速器或非集成的分立器件中的任一个或任一组合,其可以运行必要的软件或不依赖于软件以执行以上方法流程。
可选的,本申请实施例还提供了一种芯片系统,包括:至少一个处理器和接口,该至少一个处理器通过接口与存储器耦合,当该至少一个处理器运行存储器中的计算机程序或指令时,使得该芯片系统执行上述任一方法实施例中的方法。可选的,该芯片系统可以由芯片构成,也可以包含芯片和其他分立器件,本申请实施例对此不作具体限定。
应理解,在本申请的描述中,除非另有说明,“/”表示前后关联的对象是一种“或”的关系,例如,A/B可以表示A或B;其中A,B可以是单数或者复数。并且,在本申请的描述中,除非另有说明,“多个”是指两个或多于两个。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。另外,为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。同时,在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念,便于理解。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件程序实现时,可以全部或部分地以计算机程序产品的形式来实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或者数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可以用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带),光介质(例如,DVD)、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
尽管在此结合各实施例对本申请进行了描述,然而,在实施所要求保护的本申请过程中,本领域技术人员通过查看所述附图、公开内容、以及所附权利要求书,可理解并实现所述公开实施例的其他变化。在权利要求中,“包括”(comprising)一词不排除其他组成部分或步骤,“一”或“一个”不排除多个的情况。单个处理器或其他单元可以实现权利要求中列举的若干项功能。相互不同的从属权利要求中记载了某些措施,但这并不表示这些措施不能组合起来产生良好的效果。

Claims (73)

  1. 一种通信方法,其特征在于,所述方法包括:
    发送第一指示信息和/或第二指示信息,所述第一指示信息用于指示数据集状态信息,所述第二指示信息用于指示模型状态信息;
    接收第三指示信息,所述第三指示信息用于指示通信策略和通信资源,所述通信策略和所述通信资源是根据所述第一指示信息和/或所述第二指示信息确定的。
  2. 根据权利要求1所述的方法,其特征在于,所述数据集状态信息包括以下至少一种信息:数据种类,采样频率,采样位置信息,样本时间戳,样本数量,样本精度。
  3. 根据权利要求1所述的方法,其特征在于,所述模型状态信息包括以下至少一种信息:模型类型,模型标识,子模型的排序,模型版本号,模型描述,模型更新时间戳,模型更新位置信息,模型精度,模型性能。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述通信策略是根据所述第一指示信息、所述第二指示信息和信道状态信息中的至少两项确定的。
  5. 根据权利要求4所述的方法,其特征在于,所述通信策略包括以下至少一种策略:量化策略,源编码策略,调制编码策略,信源信道联合编码策略。
  6. 根据权利要求1-5中任一项所述的方法,其特征在于,所述方法还包括:
    上报人工智能业务数据包的大小;
    其中,所述通信资源是根据所述人工智能业务数据包的大小、所述数据集状态信息、所述模型状态信息中的至少两项确定的。
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述第三指示信息,生成人工智能业务数据包;
    根据所述第三指示信息,发送所述人工智能业务数据包。
  8. 根据权利要求1-6中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述第三指示信息,接收所述人工智能业务数据包。
  9. 根据权利要求7或8所述的方法,其特征在于,所述人工智能业务数据包通过物理学习信道通信。
  10. 根据权利要求9所述的方法,其特征在于,通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源编码;或
    通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源信道联合编码;或
    通过所述物理学习信道的通信包括以下处理操作:实数处理。
  11. 根据权利要求7-10中任一项所述的方法,其特征在于,所述人工智能业务数据包包括包头和负载,所述负载包括以下至少一种数据:机器学习模型的参数或梯度,推理结果,推理至分割层的中间结果,反向传递至分割层的梯度,对分数。
  12. 根据权利要求11所述的方法,其特征在于,所述包头包括以下至少一项:内容指示信息,内容描述信息,压缩方式,组包方式。
  13. 一种通信方法,其特征在于,所述方法包括:
    接收第一指示信息和/或第二指示信息,所述第一指示信息用于指示数据集状态信息,所述第二指示信息用于指示模型状态信息;
    根据所述第一指示信息和/或所述第二指示信息,确定通信策略和通信资源;
    发送第三指示信息,所述第三指示信息用于指示所述通信策略和所述通信资源。
  14. 根据权利要求13所述的方法,其特征在于,所述数据集状态信息包括以下至少一种信息:数据种类,采样频率,采样位置信息,样本时间戳,样本数量,样本精度。
  15. 根据权利要求13所述的方法,其特征在于,所述模型状态信息包括以下至少一种信息:模型类型,模型标识,子模型的排序,模型版本号,模型描述,模型更新时间戳,模型更新位置信息,模型精度,模型性能。
  16. 根据权利要求13-15中任一项所述的方法,其特征在于,所述通信策略是根据所述第一指示信息、所述第二指示信息和信道状态信息中的至少两项确定的。
  17. 根据权利要求16所述的方法,其特征在于,所述通信策略包括以下至少一种策略:量化策略,源编码策略,调制编码策略,信源信道联合编码策略。
  18. 根据权利要求13-17中任一项所述的方法,其特征在于,所述根据所述第一指示信息和/或所述第二指示信息,确定通信资源,包括:
    获取人工智能业务数据包的大小;
    根据所述人工智能业务数据包的大小、所述第一指示信息、所述第二指示信息中的至少两项,确定所述通信资源。
  19. 根据权利要求18所述的方法,其特征在于,所述获取人工智能业务数据包的大小,包括:
    根据任务类型、业务类型中的至少一项与数据包大小的对应关系,确定所述人工智能业务数据包的大小;或
    接收上报的所述人工智能业务数据包的大小。
  20. 根据权利要求13-19中任一项所述的方法,其特征在于,所述方法还包括:
    接收根据所述第三指示信息发送的人工智能业务数据包。
  21. 根据权利要求13-19中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述第三指示信息,生成人工智能业务数据包;
    根据所述第三指示信息,发送所述人工智能业务数据包。
  22. 根据权利要求20或21所述的方法,其特征在于,所述人工智能业务数据包通过物理学习信道通信。
  23. 根据权利要求22所述的方法,其特征在于,通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源编码;或
    通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源信道联合编码;或
    通过所述物理学习信道的通信包括以下处理操作:实数处理。
  24. 根据权利要求20-23中任一项所述的方法,其特征在于,所述人工智能业务数据包包括包头和负载,所述负载包括以下至少一种数据:机器学习模型的参数或梯度,推理结果,推理至分割层的中间结果,反向传递至分割层的梯度,对分数。
  25. 根据权利要求24所述的方法,其特征在于,所述包头包括以下至少一项:内容指示信息,内容描述信息,压缩方式,组包方式。
  26. 一种通信方法,其特征在于,所述方法包括:
    获取物理PHY层和/或媒体接入控制MAC层产生的人工智能业务数据;
    通过第一无线承载发送所述人工智能业务数据。
  27. 根据权利要求26所述的方法,其特征在于,所述通过第一无线承载发送所述人工智能业务数据,包括:
    将所述人工智能业务数据进行以下至少一个协议层的处理:分组数据汇聚协议PDCP层,无线链路控制RLC层,MAC层,PHY层;
    发送经过处理后的人工智能业务数据。
  28. 根据权利要求26或27所述的方法,其特征在于,所述第一无线承载为学习无线承载或第一数据无线承载。
  29. 根据权利要求26-28中任一项所述的方法,其特征在于,所述方法还包括:
    接收第一信令,所述第一信令用于指示配置所述第一无线承载。
  30. 一种通信方法,其特征在于,所述方法包括:
    通过第一无线承载接收人工智能业务数据;
    将所述人工智能业务数据输出至物理PHY层和/或媒体接入控制MAC层。
  31. 根据权利要求30所述的方法，其特征在于，所述将所述人工智能业务数据输出至物理PHY层和/或媒体接入控制MAC层，包括：
    将所述人工智能业务数据进行以下至少一个协议层的处理:PHY层,MAC层,无线链路控制RLC层,分组数据汇聚协议PDCP层;
    将经过处理后的人工智能业务数据传输至所述PHY层和/或MAC层。
  32. 根据权利要求30或31所述的方法,其特征在于,所述第一无线承载为学习无线承载或第一数据无线承载。
  33. 根据权利要求30-32中任一项所述的方法,其特征在于,所述方法还包括:
    发送第一信令,所述第一信令用于指示配置所述第一无线承载。
  34. 一种通信装置,其特征在于,所述装置包括:收发单元;其中:
    所述收发单元,用于发送第一指示信息和/或第二指示信息,所述第一指示信息用于指示数据集状态信息,所述第二指示信息用于指示模型状态信息;
    所述收发单元,还用于接收第三指示信息,所述第三指示信息用于指示通信策略和通信资源,所述通信策略和所述通信资源是根据所述第一指示信息和/或所述第二指示信息确定的。
  35. 根据权利要求34所述的装置,其特征在于,所述数据集状态信息包括以下至少一种信息:数据种类,采样频率,采样位置信息,样本时间戳,样本数量,样本精度。
  36. 根据权利要求34所述的装置,其特征在于,所述模型状态信息包括以下至少一种信息:模型类型,模型标识,子模型的排序,模型版本号,模型描述,模型更新时间戳,模型更新位置信息,模型精度,模型性能。
  37. 根据权利要求34-36中任一项所述的装置,其特征在于,所述通信策略是根据所述第一指示信息、所述第二指示信息和信道状态信息中的至少两项确定的。
  38. 根据权利要求37所述的装置,其特征在于,所述通信策略包括以下至少一种策略:量化策略,源编码策略,调制编码策略,信源信道联合编码策略。
  39. 根据权利要求34-38中任一项所述的装置,其特征在于,所述收发单元,还用于上报人工智能业务数据包的大小;
    其中,所述通信资源是根据所述人工智能业务数据包的大小、所述数据集状态信息、所述模型状态信息中的至少两项确定的。
  40. 根据权利要求34-39中任一项所述的装置,其特征在于,所述装置还包括:处理单元;
    所述处理单元,用于根据所述第三指示信息,生成人工智能业务数据包;
    所述收发单元,还用于根据所述第三指示信息,发送所述人工智能业务数据包。
  41. 根据权利要求34-39中任一项所述的装置,其特征在于,所述收发单元,还用于根据所述第三指示信息,接收所述人工智能业务数据包。
  42. 根据权利要求40或41所述的装置,其特征在于,所述人工智能业务数据包通过物理学习信道通信。
  43. 根据权利要求42所述的装置,其特征在于,通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源编码;或
    通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源信道联合编码;或
    通过所述物理学习信道的通信包括以下处理操作:实数处理。
  44. 根据权利要求40-43中任一项所述的装置,其特征在于,所述人工智能业务数据包包括包头和负载,所述负载包括以下至少一种数据:机器学习模型的参数或梯度,推理结果,推理至分割层的中间结果,反向传递至分割层的梯度,对分数。
  45. 根据权利要求44所述的装置,其特征在于,所述包头包括以下至少一项:内容指示信息,内容描述信息,压缩方式,组包方式。
  46. 一种通信装置,其特征在于,所述装置包括:收发单元和处理单元;其中:
    所述收发单元,用于接收第一指示信息和/或第二指示信息,所述第一指示信息用于指示数据集状态信息,所述第二指示信息用于指示模型状态信息;
    所述处理单元,用于根据所述第一指示信息和/或所述第二指示信息,确定通信策略和通信资源;
    所述收发单元,还用于发送第三指示信息,所述第三指示信息用于指示所述通信策略和所述通信资源。
  47. 根据权利要求46所述的装置,其特征在于,所述数据集状态信息包括以下至少一种信息:数据种类,采样频率,采样位置信息,样本时间戳,样本数量,样本精度。
  48. 根据权利要求46所述的装置,其特征在于,所述模型状态信息包括以下至少一种信息:模型类型,模型标识,子模型的排序,模型版本号,模型描述,模型更新时间戳,模型更新位置信息,模型精度,模型性能。
  49. 根据权利要求46-48中任一项所述的装置,其特征在于,所述通信策略是根据所述第一指示信息、所述第二指示信息和信道状态信息中的至少两项确定的。
  50. 根据权利要求49所述的装置,其特征在于,所述通信策略包括以下至少一种策略:量化策略,源编码策略,调制编码策略,信源信道联合编码策略。
  51. 根据权利要求46-50中任一项所述的装置,其特征在于,所述处理单元,还用于获取人工智能业务数据包的大小;
    所述处理单元，还用于根据所述人工智能业务数据包的大小、所述第一指示信息、所述第二指示信息中的至少两项，确定所述通信资源。
  52. 根据权利要求51所述的装置,其特征在于,所述处理单元,还用于根据任务类型、业务类型中的至少一项与数据包大小的对应关系,确定所述人工智能业务数据包的大小;或
    所述收发单元,还用于接收上报的所述人工智能业务数据包的大小。
  53. 根据权利要求46-52中任一项所述的装置,其特征在于,所述收发单元,还用于接收根据所述第三指示信息发送的人工智能业务数据包。
  54. 根据权利要求46-52中任一项所述的装置,其特征在于,所述处理单元,用于根据所述第三指示信息,生成人工智能业务数据包;
    所述收发单元,还用于根据所述第三指示信息,发送所述人工智能业务数据包。
  55. 根据权利要求53或54所述的装置,其特征在于,所述人工智能业务数据包通过物理学习信道通信。
  56. 根据权利要求55所述的装置,其特征在于,通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源编码;或
    通过所述物理学习信道的通信包括以下至少一个处理操作:量化,信源信道联合编码;或
    通过所述物理学习信道的通信包括以下处理操作:实数处理。
  57. 根据权利要求53-56中任一项所述的装置,其特征在于,所述人工智能业务数据包包括包头和负载,所述负载包括以下至少一种数据:机器学习模型的参数或梯度,推理结果,推理至分割层的中间结果,反向传递至分割层的梯度,对分数。
  58. 根据权利要求57所述的装置,其特征在于,所述包头包括以下至少一项:内容指示信息,内容描述信息,压缩方式,组包方式。
  59. 一种通信装置,其特征在于,所述装置包括:处理单元和收发单元;其中:
    所述处理单元,用于获取物理PHY层和/或媒体接入控制MAC层产生的人工智能业务数据;
    所述收发单元,用于通过第一无线承载发送所述人工智能业务数据。
  60. 根据权利要求59所述的装置,其特征在于,所述处理单元,还用于将所述人工智能业务数据进行以下至少一个协议层的处理:分组数据汇聚协议PDCP层,无线链路控制RLC层,MAC层,PHY层;
    所述收发单元,还用于发送经过处理后的人工智能业务数据。
  61. 根据权利要求59或60所述的装置,其特征在于,所述第一无线承载为学习无线承载或第一数据无线承载。
  62. 根据权利要求59-61中任一项所述的装置,其特征在于,所述收发单元,还用于接收第一信令,所述第一信令用于指示配置所述第一无线承载。
  63. 一种通信装置,其特征在于,所述装置包括:收发单元和处理单元;其中:
    所述收发单元,用于通过第一无线承载接收人工智能业务数据;
    所述处理单元,用于将所述人工智能业务数据输出至物理PHY层和/或媒体接入控制MAC层。
  64. 根据权利要求63所述的装置,其特征在于,所述处理单元,还用于将所述人工智能业务数据进行以下至少一个协议层的处理:PHY层,MAC层,无线链路控制RLC层,分组数据汇聚协议PDCP层;
    所述处理单元,还用于将经过处理后的人工智能业务数据传输至所述PHY层和/或MAC层。
  65. 根据权利要求63或64所述的装置,其特征在于,所述第一无线承载为学习无线承载或第一数据无线承载。
  66. 根据权利要求63-65中任一项所述的装置,其特征在于,所述收发单元,还用于发送第一信令,所述第一信令用于指示配置所述第一无线承载。
  67. 一种通信系统,其特征在于,包括如权利要求34-45中任一项所述的通信装置以及如权利要求46-58中任一项所述的通信装置。
  68. 一种通信系统,其特征在于,包括如权利要求59-62中任一项所述的通信装置以及如权利要求63-66中任一项所述的通信装置。
  69. 一种通信装置,其特征在于,包括处理器,用于执行存储器中存储的程序,当所述程序被执行时,使得所述装置执行如权利要求1~33中任一项所述的方法。
  70. 根据权利要求69所述的装置,其特征在于,所述存储器位于所述装置之外。
  71. 一种通信装置,其特征在于,包括处理器、存储器以及存储在所述存储器上并在所述处理器上运行的指令,当所述指令被运行时,使得所述通信装置执行如权利要求1~33中任一项所述的方法。
  72. 一种计算机可读存储介质,包括指令,当其在计算机上运行时,如权利要求1-33中任一项所述的方法被执行。
  73. 一种计算机程序产品,其特征在于,当其在计算机上运行时,使得如权利要求1-33中任一项所述的方法被执行。
PCT/CN2022/120568 2022-09-22 2022-09-22 通信方法、装置、存储介质及程序产品 WO2024060139A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/120568 WO2024060139A1 (zh) 2022-09-22 2022-09-22 通信方法、装置、存储介质及程序产品

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/120568 WO2024060139A1 (zh) 2022-09-22 2022-09-22 通信方法、装置、存储介质及程序产品

Publications (1)

Publication Number Publication Date
WO2024060139A1 true WO2024060139A1 (zh) 2024-03-28

Family

ID=90453535

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/120568 WO2024060139A1 (zh) 2022-09-22 2022-09-22 通信方法、装置、存储介质及程序产品

Country Status (1)

Country Link
WO (1) WO2024060139A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111865502A (zh) * 2019-04-28 2020-10-30 华为技术有限公司 通信方法及装置
CN113950057A (zh) * 2020-07-15 2022-01-18 北京三星通信技术研究有限公司 信息处理的方法、装置、设备及计算机可读存储介质
US20220095366A1 (en) * 2019-11-07 2022-03-24 Lg Electronics Inc. Method and apparatus for channel state reporting in wireless communication system
CN114302506A (zh) * 2021-12-24 2022-04-08 中国联合网络通信集团有限公司 一种基于人工智能的协议栈、数据处理方法和装置
CN114747277A (zh) * 2019-12-13 2022-07-12 高通股份有限公司 与人工智能信息相关联的调度请求
CN114915983A (zh) * 2021-02-07 2022-08-16 展讯通信(上海)有限公司 一种数据获取方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22959148

Country of ref document: EP

Kind code of ref document: A1