WO2024046419A1 - Communication method and apparatus

Publication number: WO2024046419A1
Authority: WIPO (PCT)
Application number: PCT/CN2023/116088
Prior art keywords: information, pieces, difference, original, encoder
Other languages: English (en), Chinese (zh)
Inventors: 柴晓萌, 孙琰
Original assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W24/00: Supervisory, monitoring or testing arrangements
    • H04W24/02: Arrangements for optimising operational condition

Definitions

  • the present disclosure relates to the field of communication technology, and in particular, to a communication method and device.
  • in wireless communication networks, such as mobile communication networks, the services supported by the network are becoming increasingly diverse, and therefore the requirements that need to be met are becoming increasingly diverse.
  • for example, the network needs to be able to support ultra-high data rates, ultra-low latency, and/or a very large number of connections.
  • This feature makes network planning, network configuration, and/or resource scheduling increasingly complex.
  • in addition, the capabilities of the network are becoming increasingly powerful, for example supporting higher frequency bands, high-order multiple-input multiple-output (MIMO) technology, beamforming, and/or beam management. These new technologies have made network energy saving a hot research topic.
  • These new requirements, new scenarios, and new features have brought unprecedented challenges to network planning, O&M, and efficient operations.
  • artificial intelligence technology can be introduced into wireless communication networks to achieve network intelligence. Based on this, how to effectively implement artificial intelligence in the network, for example, how to monitor the performance of artificial intelligence, is a question worth studying.
  • the present disclosure provides a communication method and device, in order to improve the accuracy of autoencoder performance monitoring while reducing transmission overhead.
  • in a first aspect, the present disclosure provides a communication method applied to a first device, including: receiving M pieces of first encoded information from a second device, where M is a positive integer; using a first decoder, with the M pieces of first encoded information as its input, to determine M pieces of first decoded information, where the first decoder and a first encoder belong to a first autoencoder and the first encoder is used to process M pieces of input first original information to determine the M pieces of first encoded information; using a second encoder, with the M pieces of first decoded information as its input, to determine M pieces of first re-encoded information, where the M pieces of first re-encoded information correspond one-to-one to the M pieces of first encoded information; and determining the performance of the first autoencoder according to the difference between each piece of first encoded information among the M pieces of first encoded information and the first re-encoded information corresponding to that piece of first encoded information.
  • in the above design, an encoder is introduced at the decoder side of the autoencoder, and the difference between the encoded information output by the encoder included in the autoencoder and the re-encoded information output by the encoder introduced at the decoder side is compared, so as to judge the performance of the autoencoder indirectly. This makes it possible to monitor the performance of the autoencoder safely and effectively while reducing information transmission overhead, as sketched below.
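A minimal sketch of this monitoring loop at the first device, assuming the first decoder and the second encoder are available as Python callables and that the difference parameter is a Euclidean distance (the disclosure does not fix a specific metric); the function and argument names are illustrative:

```python
import numpy as np

def monitor_autoencoder(first_decoder, second_encoder, encoded_batch):
    """Compute the M difference parameters between encoded and re-encoded information."""
    differences = []
    for z in encoded_batch:              # M pieces of first encoded information
        x_hat = first_decoder(z)         # first decoded information
        z_hat = second_encoder(x_hat)    # first re-encoded information
        # Difference parameter between this piece of encoded information and the
        # corresponding re-encoded information (Euclidean distance is an assumption).
        differences.append(np.linalg.norm(np.asarray(z) - np.asarray(z_hat)))
    return np.array(differences)
```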
  • the first encoder and the second encoder are the same, or the functions of the first encoder and the second encoder are the same.
  • that the functions of the first encoder and the second encoder are the same includes at least one of the following: when the first encoder and the second encoder are given the same input data, the outputs of the first encoder and the second encoder are the same; when the input data of the first encoder and the second encoder are the same, the difference between the outputs of the first encoder and the second encoder is less than a preset threshold; or the first encoder has compression and quantization functions and the second encoder has compression and quantization functions.
  • in a possible design, if the difference parameter between one piece of first encoded information and the corresponding piece of first re-encoded information is greater than or equal to a first threshold, the first device may determine that the performance of the first autoencoder corresponds to a first value; or, if the difference parameter between the piece of first encoded information and the piece of first re-encoded information is less than or equal to a second threshold, the first device may determine that the performance of the first autoencoder corresponds to a second value.
  • in another possible design, the first device calculates the difference parameter between each piece of first encoded information among the M pieces of first encoded information and the first re-encoded information corresponding to that piece, obtaining M difference parameters; if the average of the M difference parameters is greater than or equal to the first threshold, the first device may determine that the performance of the first autoencoder corresponds to the first value, or, if the average of the M difference parameters is less than or equal to the second threshold, the first device may determine that the performance of the first autoencoder corresponds to the second value.
  • the first threshold and the second threshold in the above designs may be equal or unequal. Using thresholds to divide the performance of the autoencoder into levels facilitates subsequent differentiated handling of autoencoders at different performance levels, which is more flexible; a sketch of the averaging-and-thresholding rule follows.
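As a hedged illustration of the averaging design above, assuming the two performance levels are represented by the integers 1 (first value) and 2 (second value), which are placeholders rather than values defined in the disclosure:

```python
import numpy as np

def classify_performance(differences, first_threshold, second_threshold):
    """Map the M difference parameters to a performance value of the first autoencoder."""
    avg = float(np.mean(differences))
    if avg >= first_threshold:
        return 1      # first value: performance does not meet the first requirement
    if avg <= second_threshold:
        return 2      # second value: performance meets the first requirement
    return None       # between the thresholds: no decision in this sketch
```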
  • the first threshold and the second threshold may be pre-configured.
  • the second device indicates the first threshold and/or the second threshold to the first device. For example, when the second device indicates the first threshold or the second threshold to the first device, the first device may determine that the first threshold and the second threshold are equal. For another example, when the second device indicates the first threshold and the second threshold to the first device, the first device may determine whether the first threshold and the second threshold are equal according to the instruction of the second device.
  • in a possible design, the difference parameter corresponding to the i-th piece of first encoded information among the M pieces of first encoded information includes the difference between the i-th piece of first encoded information and the i-th piece of first re-encoded information among the M pieces of first re-encoded information; where i is any positive integer from 1 to M.
  • in this case, the first threshold, the second threshold, and the difference between the encoded information and the re-encoded information are quantities of the same magnitude, and quantities of the same magnitude can be compared.
  • such a design uses thresholds on the difference between the encoded information and the re-encoded information to infer the difference between the original information and the decoded information, assists in determining the performance of the autoencoder, and can improve the accuracy of monitoring the performance of the autoencoder.
  • in another possible design, the difference parameter corresponding to the i-th piece of first encoded information among the M pieces of first encoded information is determined by a scaling factor and the difference between the i-th piece of first encoded information and the i-th piece of first re-encoded information among the M pieces of first re-encoded information; where i is any positive integer from 1 to M.
  • in this case, the difference parameter, which is determined by the scaling factor and the difference between the encoded information and the re-encoded information, the first threshold, and the second threshold are quantities of the same magnitude, and quantities of the same magnitude can be compared.
  • for example, the first threshold, the second threshold, and the difference between the original information and the decoded information are quantities of a first magnitude, and the scaling factor is used to scale the difference between the encoded information and the re-encoded information to the first magnitude.
  • such a design uses the scaling factor and the difference between the encoded information and the re-encoded information to approximate the difference between the original information and the decoded information, and uses the thresholds to measure that approximated difference, assisting the determination of the performance of the autoencoder and improving the accuracy of autoencoder performance monitoring, as sketched below.
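A one-line sketch of the scaled comparison described above, under the same assumptions as the earlier sketch (Euclidean distance as the difference measure); `scale_factor` maps a code-space difference to the magnitude of an original-vs-decoded difference so that it can be compared against the first and second thresholds:

```python
import numpy as np

def scaled_difference_parameter(z, z_hat, scale_factor):
    """Approximate the original-vs-decoded difference from the encoded-vs-re-encoded difference."""
    return scale_factor * np.linalg.norm(np.asarray(z) - np.asarray(z_hat))
```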
  • the scale factor is preconfigured; or, the scale factor is determined by the second device, and the first device can obtain the scale factor from the second device.
  • the first device can determine the scaling factor by itself.
  • in a possible design, the first device may receive N pieces of second original information from the second device, where N is an integer greater than 1; furthermore, the first device may determine the scaling factor from the ratio between the difference between two pieces of second original information among the N pieces of second original information and the difference between the pair of second encoded information corresponding to that pair of second original information. All or part of the N pieces of second original information belong to the M pieces of first original information, or the N pieces of second original information do not include any of the M pieces of first original information.
  • in another possible design, the first device receives N pieces of second original information from the second device, where N is a positive integer; furthermore, the first device determines the scaling factor from the ratio between the difference between each piece of second original information among the N pieces of second original information and reference original information, and the difference between the second encoded information corresponding to that piece of original information and the reference encoded information corresponding to the reference original information.
  • in another possible design, the first device receives N pieces of second original information from the second device, where N is a positive integer; furthermore, the first device determines the scaling factor from the ratio between the difference between each piece of second original information among the N pieces of second original information and the corresponding second decoded information, and the difference between the second encoded information corresponding to that piece of second original information and the second re-encoded information corresponding to that second encoded information; where the N pieces of second decoded information correspond one-to-one to the N pieces of second original information. A sketch of estimating the scaling factor from such samples follows.
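A sketch of one way the first device could estimate the scaling factor from N pieces of second original information and the corresponding second encoded information, assuming consecutive samples are paired and the pairwise ratios are averaged; the disclosure only requires a ratio between original-space and code-space differences, so the pairing and averaging here are illustrative assumptions:

```python
import numpy as np

def estimate_scale_factor(second_originals, second_encodeds):
    """Estimate the scaling factor as the mean ratio of original-space to code-space distances."""
    ratios = []
    for i in range(len(second_originals) - 1):
        d_orig = np.linalg.norm(np.asarray(second_originals[i]) - np.asarray(second_originals[i + 1]))
        d_code = np.linalg.norm(np.asarray(second_encodeds[i]) - np.asarray(second_encodeds[i + 1]))
        if d_code > 0:                      # skip degenerate pairs
            ratios.append(d_orig / d_code)
    return float(np.mean(ratios)) if ratios else None
```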
  • the second device may trigger the first device to determine the performance of the first autoencoder. For example, before determining the M first re-encoding information, the first device receives fourth information from the second device, and the fourth information is used to trigger performance determination of the first autoencoder.
  • in a possible design, the first device first determines that the performance determination method of the first autoencoder in the above design is feasible and effective, which can also be described as: in the current scenario, the performance determination method of the first autoencoder in the above design can be used; further, the first device starts the performance determination of the first autoencoder.
  • for example, the first device obtains P pieces of third encoded information from the second device, where the differences between the P pieces of third original information used to generate the P pieces of third encoded information satisfy a first preset relationship and P is a positive integer greater than or equal to 3; all or part of the P pieces of third original information belong to the M pieces of first original information, or the P pieces of third original information do not include any of the M pieces of first original information.
  • if the first device determines that the differences between the P pieces of third encoded information satisfy the first preset relationship, the first device can determine the performance of the first autoencoder through the above design; a sketch of such a check follows the related descriptions below.
  • the first device may send first information to the second device, where the first information is used to request the P pieces of third coded information.
  • the second device sends P pieces of third encoded information to the first device.
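A sketch of the validity check discussed above, assuming the first preset relationship is modelled as a minimum pairwise distance between pieces of information; the actual relationship is not fixed by the disclosure, so `min_pairwise_distance` is an illustrative parameter. If the relationship assumed to hold among the P pieces of third original information also holds among the P pieces of third encoded information, the monitoring method above is treated as applicable in this sketch:

```python
import numpy as np

def first_preset_relationship_holds(pieces, min_pairwise_distance):
    """Check that every pair among the P pieces differs by at least the given distance."""
    pieces = [np.asarray(p) for p in pieces]
    for i in range(len(pieces)):
        for j in range(i + 1, len(pieces)):
            if np.linalg.norm(pieces[i] - pieces[j]) < min_pairwise_distance:
                return False
    return True
```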
  • for another example, the first device obtains P pieces of third encoded information from the second device, where the differences between the P pieces of third original information used to generate the P pieces of third encoded information and reference original information satisfy a second preset relationship, and P is a positive integer greater than or equal to 2; all or part of the P pieces of third original information belong to the M pieces of first original information, or the P pieces of third original information do not include any of the M pieces of first original information.
  • in this case, the first device may determine the performance of the first autoencoder through the above design.
  • the first device may send second information to the second device, where the second information is used to request the P pieces of third coded information.
  • the second device sends P pieces of third encoded information to the first device.
  • in the above designs, the difference between the encoded information and the re-encoded information is used to determine the performance of the autoencoder, which can ensure the accuracy of autoencoder performance monitoring.
  • in a possible design, the first device may also receive third information from the second device, where the third information indicates the correspondence between the P pieces of third original information and the P pieces of third encoded information.
  • in a possible design, the above-mentioned first encoded information is quantized information; or, what the first device receives from the second device are M pieces of first quantized information corresponding to the M pieces of first encoded information, and the first device may dequantize the received M pieces of first quantized information to obtain the M pieces of first encoded information, as sketched below.
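A minimal sketch of the quantization and dequantization steps mentioned above, assuming uniform scalar quantization with an illustrative step size; the disclosure does not specify a quantization scheme:

```python
import numpy as np

def quantize(encoded, step=0.05):
    """First quantized information fed back by the second device (uniform scalar quantization assumed)."""
    return np.round(np.asarray(encoded) / step).astype(np.int32)

def dequantize(quantized, step=0.05):
    """Dequantization at the first device to recover approximate first encoded information."""
    return np.asarray(quantized, dtype=np.float32) * step
```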
  • the performance of the first autoencoder corresponds to a first value or a second value
  • the first value is used to indicate that the performance of the first autoencoder does not meet the first requirement
  • the second value is used to indicate that the performance of the first autoencoder meets the first requirement.
  • in a possible design, the first autoencoder is applied to a channel state information (CSI) feedback scenario.
  • the first coding information includes first channel status indication information, such as quantized CSI based on original CSI;
  • the first decoding information includes first restored channel information, such as restored CSI;
  • the first original information includes first original channel information, such as original CSI;
  • the first re-encoding information includes second channel status indication information corresponding to the first restored channel information, such as quantized CSI based on the restored CSI.
  • in a second aspect, the present disclosure provides a communication method applied to a second device, including: using a first encoder to process M pieces of input first original information and determine M pieces of first encoded information, where M is a positive integer; and sending the M pieces of first encoded information to a first device, where the M pieces of first encoded information are used for performance determination of a first autoencoder, and the first autoencoder includes the first encoder.
  • the second device may send fourth information to the first device, where the fourth information is used to trigger performance determination of the first autoencoder.
  • in a possible design, the second device may send first parameter information to the first device, where the first parameter information is used for autoencoder performance determination; the first parameter information includes one or more of the following: one or more reference thresholds used to measure the difference between encoded information and re-encoded information; a scaling factor used to characterize the ratio between the difference between at least two pieces of original information and the difference between the encoded information respectively corresponding to the at least two pieces of original information; a scaling factor used to characterize the ratio between the difference between original information and decoded information and the difference between encoded information and re-encoded information; the number of pieces of encoded information or re-encoded information; and a value range corresponding to the performance of the autoencoder.
  • the re-encoded information is obtained by the first device inputting the encoded information into the decoder and then inputting the output into the encoder for re-encoding.
  • in a possible design, the second device may send P pieces of third encoded information to the first device, where the P pieces of third encoded information are used to determine the validity of the performance determination method of the first autoencoder; the differences between the P pieces of third original information used to generate the P pieces of third encoded information satisfy the first preset relationship and P is a positive integer greater than or equal to 3; or the differences between the P pieces of third original information used to generate the P pieces of third encoded information and the reference original information satisfy the second preset relationship and P is a positive integer greater than or equal to 2.
  • the second device may also send third information to the first device, where the third information indicates the correspondence between the P pieces of third original information and the P pieces of third encoded information.
  • in a third aspect, the present disclosure provides a communication method applied to a first device, including: receiving P pieces of third encoded information from a second device, where the P pieces of third encoded information correspond to P pieces of third original information, the differences between the P pieces of third original information satisfy a first preset relationship, and P is a positive integer greater than or equal to 2; and determining that the differences between the P pieces of third encoded information satisfy the first preset relationship, or that the differences between the P pieces of third encoded information do not satisfy the first preset relationship.
  • in a possible design, when the differences between the P pieces of third encoded information satisfy the first preset relationship, the first device sends sixth information to the second device, where the sixth information is used to trigger the performance determination of the first autoencoder; or, when the differences between the P pieces of third encoded information do not satisfy the first preset relationship, the first device sends seventh information to the second device, where the seventh information is used to instruct the second device to stop using the first encoder or to stop determining the performance of the first autoencoder.
  • in this way, the performance of the autoencoder is judged indirectly, and it can then be determined whether to further determine the performance of the autoencoder or whether to keep using the current autoencoder.
  • in a possible design, the P pieces of third encoded information are the output of the first encoder of the second device for the P pieces of third original information input to the first encoder, and the first device includes a first decoder corresponding to the first encoder.
  • a first autoencoder includes the first encoder and the first decoder.
  • the first device sends second information to the second device, where the second information is used to request the P third encoded information.
  • the first device receives third information from the second device, the third information indicating the correspondence between the P pieces of third original information and the P pieces of third encoded information.
  • in a possible design, the third information may be the index of the third original information corresponding to the third encoded information, carried within the third encoded information; or it may be the index of the third original information corresponding to the third encoded information, transmitted outside the third encoded information but through a preset transmission method, for example, one index of the third original information followed by one piece of third encoded information. It can be understood that, in a possible manner, the correspondence between the P pieces of third encoded information and the P pieces of third original information can also be determined in a time-division manner; in this case, the third information does not need to be indicated to the first device.
  • in a possible design, all or part of the P pieces of third original information belong to the M pieces of first original information described in the first aspect, or the P pieces of third original information do not include any of the M pieces of first original information.
  • in a possible design, the first device determines the performance of the first autoencoder according to the first aspect and the possible designs of the first aspect.
  • in a fourth aspect, the present disclosure provides a communication method applied to a second device, including: determining P pieces of third original information, where the differences between the P pieces of third original information satisfy a first preset relationship and P is a positive integer greater than or equal to 2; and sending P pieces of third encoded information to a first device, where the P pieces of third encoded information correspond to the P pieces of third original information and are used for performance determination of an autoencoder.
  • in a possible design, the second device receives sixth information from the first device, where the sixth information is used to indicate that the differences between the P pieces of third encoded information satisfy the first preset relationship and to trigger the performance determination of the first autoencoder; or the second device receives seventh information from the first device, where the seventh information is used to indicate that the differences between the P pieces of third encoded information do not satisfy the first preset relationship and to instruct the second device to stop using the first encoder or to stop determining the performance of the first autoencoder.
  • in a fifth aspect, the present disclosure provides a communication method applied to a first device, including: receiving P pieces of third encoded information from a second device, where the P pieces of third encoded information correspond to P pieces of third original information, the differences between the P pieces of third original information and reference original information satisfy a second preset relationship, and P is a positive integer; and determining that the differences between the P pieces of third encoded information and the reference encoded information corresponding to the reference original information satisfy the second preset relationship, or that the differences between the P pieces of third encoded information and the reference encoded information corresponding to the reference original information do not satisfy the second preset relationship.
  • in a possible design, when the differences between the P pieces of third encoded information and the reference encoded information corresponding to the reference original information satisfy the second preset relationship, the first device sends eighth information to the second device, where the eighth information is used to trigger the performance determination of the first autoencoder; or, when the differences between the P pieces of third encoded information and the reference encoded information corresponding to the reference original information do not satisfy the second preset relationship, the first device sends ninth information to the second device, where the ninth information is used to instruct the second device to stop using the first encoder or to stop determining the performance of the first autoencoder.
  • in this way, the performance of the autoencoder is judged indirectly, and it can then be determined whether to further determine the performance of the autoencoder or whether to keep using the current autoencoder.
  • in a possible design, when the differences between the P pieces of third encoded information and the reference encoded information corresponding to the reference original information satisfy the second preset relationship, the first device may determine the performance of the first autoencoder according to the first aspect and the possible designs of the first aspect.
  • in a sixth aspect, the present disclosure provides a communication method applied to a second device, including: determining P pieces of third original information, where the differences between the P pieces of third original information and reference original information satisfy a second preset relationship and P is a positive integer; and sending P pieces of third encoded information to a first device, where the P pieces of third encoded information correspond to the P pieces of third original information and are used to determine the performance of an autoencoder.
  • in a possible design, the second device receives eighth information from the first device, where the eighth information is used to indicate that the differences between the P pieces of third encoded information and the reference encoded information corresponding to the reference original information satisfy the second preset relationship and to trigger the performance determination of the first autoencoder; or the second device receives ninth information from the first device, where the ninth information is used to indicate that the differences between the P pieces of third encoded information and the reference encoded information corresponding to the reference original information do not satisfy the second preset relationship and to instruct the second device to stop using the first encoder or to stop determining the performance of the first autoencoder.
  • in a seventh aspect, the present disclosure provides a communication method applied to a first device, including: receiving first parameter information from a second device, where the first parameter information is used to determine the performance of an autoencoder; and determining, by the first device, the performance of the autoencoder according to the first parameter information.
  • in a possible design, the first parameter information includes one or more of the following: one or more reference thresholds used to measure the difference between encoded information and re-encoded information; a scaling factor used to characterize the ratio between the difference between at least two pieces of original information and the difference between the encoded information respectively corresponding to the at least two pieces of original information; a scaling factor used to characterize the ratio between the difference between original information and decoded information and the difference between encoded information and re-encoded information; the number of pieces of encoded information or re-encoded information; and a value range corresponding to the performance of the autoencoder. The re-encoded information is obtained by the first device inputting the encoded information into the decoder and then inputting the output into the encoder for re-encoding.
  • in an eighth aspect, the present disclosure provides a communication method applied to a second device, including: determining first parameter information; and sending the first parameter information to a first device, where the first parameter information is used for performance determination of an autoencoder.
  • in a possible design, the first parameter information includes one or more of the following: one or more reference thresholds used to measure the difference between encoded information and re-encoded information; a scaling factor used to characterize the ratio between the difference between at least two pieces of original information and the difference between the encoded information respectively corresponding to the at least two pieces of original information; a scaling factor used to characterize the ratio between the difference between original information and decoded information and the difference between encoded information and re-encoded information; the number of pieces of encoded information or re-encoded information; and a value range corresponding to the performance of the autoencoder. The re-encoded information is obtained by the first device inputting the encoded information into the decoder and then inputting the output into the encoder for re-encoding. A hypothetical container grouping these items is sketched below.
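A hypothetical container for the first parameter information, for illustration only; the field names are assumptions and are not terms defined by the disclosure:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class FirstParameterInfo:
    """Illustrative grouping of the items listed above."""
    reference_thresholds: Optional[List[float]] = None    # thresholds on the encoded-vs-re-encoded difference
    original_to_code_scale: Optional[float] = None        # ratio of original-information differences to encoded-information differences
    decoded_to_recode_scale: Optional[float] = None       # ratio of original-vs-decoded to encoded-vs-re-encoded differences
    num_pieces: Optional[int] = None                       # number of pieces of encoded or re-encoded information
    performance_value_range: Optional[Tuple[float, float]] = None  # value range corresponding to autoencoder performance
```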
  • the present disclosure provides a communication device.
  • the communication device may be a first device, a device, a module or a chip in the first device, or a device that can be used in conjunction with the first device.
  • the communication device may include a module that performs one-to-one correspondence with the method/operation/step/action described in the first aspect.
  • the module may be a hardware circuit, software, or a combination of hardware circuit and software.
  • the communication device may include a processing module and a communication module.
  • a communication module used to receive M pieces of first coded information from the second device, where M is a positive integer
  • a processing module, configured to: use a first decoder, with the M pieces of first encoded information as its input, to determine M pieces of first decoded information, where the first decoder and a first encoder belong to a first autoencoder and the first encoder is used to process M pieces of input first original information to determine the M pieces of first encoded information; use a second encoder, with the M pieces of first decoded information as its input, to determine M pieces of first re-encoded information, where the M pieces of first re-encoded information correspond one-to-one to the M pieces of first encoded information; and determine the performance of the first autoencoder according to the difference between each piece of first encoded information among the M pieces of first encoded information and the first re-encoded information corresponding to that piece of first encoded information.
  • in a possible design, the first encoder and the second encoder are the same, or the functions of the first encoder and the second encoder are the same.
  • the specific introduction can be understood with reference to the description in the first aspect, which will not be described again in this disclosure.
  • the processing module is specifically configured to: if the difference parameter between the first encoded information and the first re-encoded information is greater than or equal to the first threshold , then it is determined that the performance of the first autoencoder corresponds to the first value; or if the difference parameter between the one first encoding information and the one first re-encoding information is less than or equal to the second threshold, then It is determined that the performance of the first autoencoder corresponds to a second value.
  • in a possible design, the processing module is specifically configured to: if the difference parameter between each piece of first encoded information among K pieces of first encoded information and the first re-encoded information corresponding to that piece of first encoded information is greater than or equal to the first threshold, and the proportion of the K pieces of first encoded information to the M pieces of first encoded information is greater than or equal to a first proportion threshold, determine that the performance of the first autoencoder corresponds to the first value; or, if the difference parameter between each piece of first encoded information among the K pieces of first encoded information and the first re-encoded information corresponding to that piece of first encoded information is less than or equal to the second threshold, and the proportion of the K pieces of first encoded information to the M pieces of first encoded information is greater than or equal to the first proportion threshold, determine that the performance of the first autoencoder corresponds to the second value; where the K pieces of first encoded information are included in the M pieces of first encoded information and K is a positive integer less than or equal to M.
  • in a possible design, the processing module is specifically configured to: calculate the difference parameter between each piece of first encoded information among the M pieces of first encoded information and the first re-encoded information corresponding to that piece of first encoded information, to obtain M difference parameters; if the average of the M difference parameters is greater than or equal to the first threshold, determine that the performance of the first autoencoder corresponds to the first value, or, if the average of the M difference parameters is less than or equal to the second threshold, determine that the performance of the first autoencoder corresponds to the second value.
  • first threshold and the second threshold can be understood with reference to the description in the first aspect, which will not be described again in this disclosure.
  • the involved sending and receiving operations can be performed by the communication module, and the remaining operations are performed by the processing module.
  • the situation in which the processing module starts the performance determination of the first autoencoder can be understood with reference to the description in the first aspect, which will not be described again in this disclosure.
  • the sending and receiving operations involved can be performed by the communication module, and the remaining operations are performed by the processing module.
  • the present disclosure provides a communication device.
  • the communication device may be a second device, a device, a module or a chip in the second device, or a device that can be used in conjunction with the second device.
  • the communication device may include a module that performs one-to-one correspondence with the method/operation/step/action described in the second aspect.
  • the module may be a hardware circuit, software, or a combination of hardware circuit and software.
  • the communication device may include a processing module and a communication module.
  • a processing module configured to use the first encoder to process the input M pieces of first original information and determine the M pieces of first encoded information, where M is a positive integer;
  • a communication module configured to send the M first encoding information to the first device, the M first encoding information being used for performance determination of a first autoencoder, the first autoencoder including the first Encoder.
  • the communication module is also configured to send fourth information to the first device, where the fourth information is used to trigger performance determination of the first autoencoder.
  • in a possible design, the communication module is also configured to send first parameter information to the first device, where the first parameter information is used to determine the performance of the autoencoder; the definition of the first parameter information may be understood with reference to the second aspect and will not be described again in this disclosure.
  • in a possible design, the communication module is also configured to send P pieces of third encoded information to the first device, where the P pieces of third encoded information are used to determine the validity of the performance determination method of the first autoencoder; the differences between the P pieces of third original information used to generate the P pieces of third encoded information satisfy the first preset relationship and P is a positive integer greater than or equal to 3; or the differences between the P pieces of third original information used to generate the P pieces of third encoded information and the reference original information satisfy the second preset relationship and P is a positive integer greater than or equal to 2.
  • the communication module is also configured to send third information to the first device, where the third information indicates the correspondence between the P pieces of third original information and the P pieces of third encoded information.
  • the present disclosure provides a communication device.
  • the communication device may be a first device, a device, a module or a chip in the first device, or a device that can be used in conjunction with the first device.
  • the communication device may include a module that performs one-to-one correspondence with the method/operation/step/action described in the third aspect.
  • the module may be a hardware circuit, software, or a combination of hardware circuit and software.
  • the communication device may include a processing module and a communication module.
  • a communication module configured to receive P pieces of third encoded information from the second device, the P pieces of third encoded information corresponding to P pieces of third original information; wherein the difference between the P pieces of third original information satisfies The first preset relationship, P is a positive integer greater than or equal to 2;
  • a processing module configured to determine that the difference between the P pieces of third coded information satisfies the first preset relationship, or that the difference between the P pieces of third coded information does not satisfy the first preset relationship.
  • in a possible design, when the differences between the P pieces of third encoded information satisfy the first preset relationship, the communication module is also configured to send sixth information to the second device, where the sixth information is used to trigger the performance determination of the first autoencoder; or, when the differences between the P pieces of third encoded information do not satisfy the first preset relationship, the communication module is also configured to send seventh information to the second device, where the seventh information is used to instruct the second device to stop using the first encoder or to stop determining the performance of the first autoencoder.
  • in a possible design, the P pieces of third encoded information are the output of the first encoder of the second device for the P pieces of third original information input to the first encoder, and the first device includes a first decoder corresponding to the first encoder.
  • a first autoencoder includes the first encoder and the first decoder.
  • the communication module is also configured to send second information to the second device, where the second information is used to request the P third encoded information.
  • the communication module is also configured to receive third information from the second device, where the third information indicates the correspondence between the P pieces of third original information and the P pieces of third encoded information.
  • in a possible design, all or part of the P pieces of third original information belong to the M pieces of first original information described in the first aspect, or the P pieces of third original information do not include any of the M pieces of first original information.
  • in a possible design, the first device determines the performance of the first autoencoder according to the first aspect and the possible designs of the first aspect.
  • the present disclosure provides a communication device.
  • the communication device may be a second device, a device, a module or a chip in the second device, or a device that can be used in conjunction with the second device.
  • the communication device may include a module that performs one-to-one correspondence with the method/operation/step/action described in the fourth aspect.
  • the module may be a hardware circuit, software, or a combination of hardware circuit and software.
  • the communication device may include a processing module and a communication module.
  • a processing module configured to determine P third pieces of original information, the difference between the P pieces of third original information satisfies the first preset relationship, and P is a positive integer greater than or equal to 2;
  • a communication module configured to send P third coded information to the first device, the P third coded information corresponding to P third original information, and the P third coded information used to determine the performance of the autoencoder .
  • the communication module is also configured to receive sixth information from the first device, where the sixth information is used to indicate that the difference between the P pieces of third coded information satisfies the first preset relationship. , the sixth information is used to trigger the performance determination of the first autoencoder; or, the communication module is also used to receive the seventh information from the first device, the seventh information is used to indicate the P third The difference between the encoding information does not satisfy the first preset relationship, and the seventh information is used to instruct the second device to stop using the first encoder or to stop determining the performance of the first autoencoder.
  • the present disclosure provides a communication device applied to a first device.
  • the communication device may be the first device, or may be a device, module or chip in the first device, or may be a device that can be used in conjunction with the first device.
  • the communication device may include a module that performs one-to-one correspondence with the method/operation/step/action described in the fifth aspect.
  • the module may be a hardware circuit, software, or a combination of hardware circuit and software.
  • the communication device may include a processing module and a communication module.
  • a communication module configured to receive P pieces of third encoded information from the second device, the P pieces of third encoded information corresponding to P pieces of third original information, wherein the P pieces of third original information and the reference original information The difference between satisfies the second preset relationship, and P is a positive integer;
  • a processing module configured to determine that the difference between the P pieces of third coded information and the reference coded information corresponding to the reference original information satisfies the second preset relationship, or that the difference between the P pieces of third coded information and the reference coded information corresponding to the reference original information satisfies the second preset relationship. The difference between does not satisfy the second preset relationship.
  • in a possible design, when the differences between the P pieces of third encoded information and the reference encoded information corresponding to the reference original information satisfy the second preset relationship, the communication module is also configured to send eighth information to the second device, where the eighth information is used to trigger the performance determination of the first autoencoder; or, when the differences between the P pieces of third encoded information and the reference encoded information corresponding to the reference original information do not satisfy the second preset relationship, the communication module is also configured to send ninth information to the second device, where the ninth information is used to instruct the second device to stop using the first encoder or to stop determining the performance of the first autoencoder.
  • in a possible design, when the differences between the P pieces of third encoded information and the reference encoded information corresponding to the reference original information satisfy the second preset relationship, the processing module determines the performance of the first autoencoder according to the first aspect and the possible designs of the first aspect.
  • the present disclosure provides a communication device applied to a second device.
  • the communication device may be the second device, or may be a device, module or chip in the second device, or may be a device that can be used in conjunction with the second device.
  • the communication device may include a module that performs one-to-one correspondence with the method/operation/step/action described in the sixth aspect.
  • the module may be a hardware circuit, software, or a combination of hardware circuit and software.
  • the communication device may include a processing module and a communication module.
  • a processing module configured to determine that the difference between the P pieces of third original information and the reference original information satisfies the second preset relationship, and P is a positive integer
  • a communication module configured to send P third coded information to the first device, the P third coded information corresponding to P third original information, and the P third coded information used to determine the performance of the autoencoder .
  • the communication module is also configured to receive eighth information from the first device, where the eighth information is used to indicate the difference between the P pieces of third encoding information and the reference encoding information corresponding to the reference original information. The difference satisfies the second preset relationship, and the eighth information is used to trigger the performance determination of the first autoencoder; or, the communication module is also used to receive ninth information from the first device, the ninth information is used to Indicates that the difference between the P third encoding information and the reference encoding information corresponding to the reference original information does not satisfy the second preset relationship, and the ninth information is used to instruct the second device to stop using the first encoder or Stop the performance determination of the first autoencoder.
  • the present disclosure provides a communication device applied to a first device.
  • the communication device may be the first device, or may be a device, module or chip in the first device, or may be a device that can be used in conjunction with the first device.
  • the communication device may include a module that performs one-to-one correspondence with the method/operation/step/action described in the seventh aspect.
  • the module may be a hardware circuit, software, or a combination of hardware circuit and software.
  • the communication device may include a processing module and a communication module.
  • a communication module configured to receive first parameter information from the second device, where the first parameter information is used to determine the performance of the autoencoder;
  • a processing module configured to determine the performance of the autoencoder according to the first parameter information.
  • the present disclosure provides a communication device applied to a second device.
  • the communication device may be the second device, or may be a device, module or chip in the second device, or may be a device that can be used in conjunction with the second device.
  • the communication device may include a module that performs one-to-one correspondence with the method/operation/step/action described in the eighth aspect.
  • the module may be a hardware circuit, software, or a combination of hardware circuit and software.
  • the communication device may include a processing module and a communication module.
  • a processing module used to determine the first parameter information
  • a communication module configured to send the first parameter information to the first device, where the first parameter information is used for performance determination of the autoencoder.
  • the present disclosure provides a communication device, which includes a processor for implementing the method described in the first, third, fifth or seventh aspect.
  • the processor is coupled to a memory, and the memory is used to store instructions and data.
  • the communication device may also include a memory.
  • the communication device may further include a communication interface, and the communication interface is used for the device to communicate with other devices.
  • the communication interface may be a transceiver, a circuit, a bus, a module, a pin, or other types of communication interfaces.
  • the present disclosure provides a communication device, which includes a processor for implementing the method described in the above second, fourth, sixth or eighth aspect.
  • the processor is coupled to a memory, and the memory is used to store instructions and data.
  • the communication device may also include a memory.
  • the communication device may also include a communication interface, which is used for the device to communicate with other devices.
  • the communication interface may be a transceiver, a circuit, a bus, a module, a pin, or other types of communication interfaces.
  • the present disclosure provides a communication system, including the communication device described in any one of the ninth to sixteenth aspects; or the communication devices described in the ninth and tenth aspects; or the communication devices described in the eleventh and twelfth aspects; or the communication devices described in the thirteenth and fourteenth aspects; or the communication devices described in the fifteenth and sixteenth aspects.
  • the present disclosure also provides a computer program, which when the computer program is run on a computer, causes the computer to execute the method provided in any one of the above-mentioned first to eighth aspects.
  • the present disclosure also provides a computer program product, including instructions, which, when run on a computer, cause the computer to execute the method provided in any one of the above-mentioned first to eighth aspects.
  • the present disclosure also provides a computer-readable storage medium, where a computer program or instructions are stored in the computer-readable storage medium; when the computer program or instructions are run on a computer, the computer is caused to execute the method provided in any one of the above first to eighth aspects.
  • the present disclosure also provides a chip, which is used to execute the method provided in any one of the above-mentioned first to eighth aspects.
  • the chip is used to read the computer program stored in the memory and execute the method provided in any one of the above-mentioned first to eighth aspects.
  • the present disclosure also provides a chip system.
  • the chip system includes a processor and is used to support a computer device to implement the method provided in any one of the above-mentioned first to eighth aspects.
  • the chip system further includes a memory, and the memory is used to store necessary programs and data of the computer device.
  • the chip system can be composed of chips or include chips and other discrete devices.
  • Figure 1 is a schematic structural diagram of a communication system
  • Figure 2A is a schematic diagram of the structure of a neuron
  • Figure 2B is a schematic diagram of the layer relationship of the neural network
  • Figure 2C is a schematic diagram of an AI application framework provided by the present disclosure.
  • Figure 3 is a schematic structural diagram of another communication system
  • FIGS. 4A to 4D are schematic diagrams of several network architectures
  • Figures 5A to 5B are schematic diagrams of several autoencoder-based CSI feedback frameworks provided by the present disclosure
  • Figure 6 is a schematic diagram of a spatial mapping relationship provided by the present disclosure.
  • Figure 7 is a schematic diagram of the correlation relationships of several differential changes provided by the present disclosure.
  • Figure 8 is one of the flow diagrams of a communication method provided by the present disclosure.
  • Figure 9 is a schematic diagram of another spatial mapping relationship provided by the present disclosure.
  • Figure 10 is one of the flow diagrams of a communication method provided by the present disclosure.
  • Figure 11 is one of the flow diagrams of a communication method provided by the present disclosure.
  • Figure 12 is one of the structural schematic diagrams of the communication device provided by the present disclosure.
  • Figure 13 is one of the structural schematic diagrams of the communication device provided by the present disclosure.
  • in the present disclosure, “at least one (item)” means one (item) or more (items), and “multiple (items)” means two (items) or more than two (items).
  • “and/or” describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A exists alone, both A and B exist, or B exists alone. The character “/” generally indicates an “or” relationship between the associated objects.
  • although the terms “first”, “second”, etc. may be used to describe various objects in this disclosure, these objects should not be limited by these terms; these terms are only used to distinguish objects from each other.
  • the communication system can be a third generation (3G) communication system (such as a universal mobile telecommunication system (UMTS)), a fourth generation (4th generation, 4G) communication system (such as long term evolution (LTE) system), fifth generation (5th generation, 5G) communication system, global interoperability for microwave access (WiMAX) or wireless Local area network (wireless local area network, WLAN) system, or integration system of multiple systems, or future communication system, such as 6G communication system, etc.
  • the 5G communication system can also be called a new radio (NR) system.
  • One network element in a communication system can send signals to or receive signals from another network element.
  • the signal may include information, signaling or data, etc.
  • the network element can also be replaced by an entity, a network entity, a device, a communication device, a communication module, a node, a communication node, etc.
  • a network element is taken as an example for description.
  • the communication system may include at least one terminal device and at least one access network device.
  • the access network device can send downlink signals to the terminal device, and/or the terminal device can send uplink signals to the access network device.
  • the communication system includes multiple terminal devices, multiple terminal devices can also communicate with each other.
  • Mutual signaling means that both the signal sending network element and the signal receiving network element can be terminal devices.
  • FIG. 1 is a simplified schematic diagram of the wireless communication system provided by the present disclosure.
  • the wireless communication system includes a wireless access network 100.
  • the radio access network 100 may be a next-generation (eg, 6G or higher) radio access network, or a legacy (eg, 5G, 4G, 3G or 2G) radio access network.
  • the radio access network 100 may include one or more communication devices 120a-120j (collectively, communication devices 120).
  • the wireless communication system may also include other equipment, such as core network equipment, wireless relay equipment and/or wireless backhaul equipment, etc., which are not shown in Figure 1 .
  • the wireless communication system may include multiple network devices (also called access network devices) at the same time, or may include multiple communication devices at the same time.
  • a network device can serve one or more communication devices at the same time.
  • a communication device can also access one or more network devices at the same time. This disclosure does not limit the number of communication devices and network devices included in the wireless communication system.
  • the network device may be an entity on the network side that is used to transmit or receive signals.
  • the network device may be an access device through which the communication device wirelessly accesses the wireless communication system.
  • the network device may be a base station.
  • Base stations can broadly cover, or be replaced by, various names such as: Node B (NodeB), evolved NodeB (eNB), next generation NodeB (gNB), open radio access network equipment, relay station, access point, transmission and receiving point (TRP), transmitting point (TP), master station (MeNB), secondary station (SeNB), multi-standard radio (MSR) node, home base station, network controller, access node, wireless node, access point (AP), transmission node, transceiver node, baseband unit (BBU), remote radio unit (RRU), active antenna unit (AAU), remote radio head (RRH), central unit (CU), distributed unit (DU), radio unit (RU), centralized unit, etc.
  • the base station may be a macro base station, a micro base station, a relay node, a donor node or the like, or a combination thereof.
  • Network equipment may also refer to communication modules, modems or chips provided in the aforementioned equipment or devices.
  • Network equipment can also be a mobile switching center, or equipment responsible for device-to-device (D2D), vehicle-to-everything (V2X), and machine-to-machine (M2M) communication.
  • Network devices can support networks with the same or different access technologies. The embodiments of this application do not limit the specific technology and specific equipment form used by the network equipment.
  • Network equipment can be fixed or mobile.
  • Base stations 110a and 110b are stationary and are responsible for wireless transmission to and reception from the communication devices 120 in one or more cells.
  • the helicopter or drone 120i shown in Figure 1 may be configured to act as a mobile base station, and one or more cells may move based on the location of the mobile base station 120i.
  • helicopters or drones (120i) may be configured to function as a communications device that communicates with base station 110b.
  • the communication device used to implement the above access network function may be an access network device, or may be a network device with partial functions of the access network, or may be a device capable of supporting the implementation of the access network function, such as Chip system, hardware circuit, software module, or hardware circuit plus software module. This device can be installed in access network equipment or used in conjunction with access network equipment.
  • the communication device used to implement the function of the access network device is an access network device as an example for description.
  • a communication device may be an entity on the user side that is used to receive or transmit signals, such as a mobile phone.
  • Communication devices can be used to connect people, things and machines.
  • the communication device may communicate with one or more core networks through network devices.
  • Communication devices include handheld devices with wireless connectivity, other processing devices connected to wireless modems, or vehicle-mounted devices.
  • the communication device may be a portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile device.
  • the communication device 120 can be widely used in various scenarios, such as cellular communication, device-to-device D2D, vehicle-to-everything V2X, end-to-end P2P, machine-to-machine M2M, machine type communication MTC, Internet of Things IOT, virtual reality VR, and augmented reality AR, industrial control, autonomous driving, telemedicine, smart grid, smart furniture, smart office, smart wear, smart transportation, smart city, drone, robot, remote sensing, passive sensing, positioning, navigation and tracking, autonomous delivery and Mobile etc.
  • Some examples of communication devices 120 are: 3GPP standard user equipment (UE), fixed devices, mobile devices, handheld devices, wearable devices, cellular phones, smartphones, session initiation protocol (SIP) phones, laptops, personal computers, smart books, vehicles, satellites, global positioning system (GPS) equipment, target tracking equipment, drones, helicopters, aircraft, ships, remote control equipment, smart home equipment, industrial equipment, personal communication service (PCS) phones, wireless local loop (WLL) stations, personal digital assistants (PDA), wireless network cameras, tablet computers, handheld computers, mobile internet devices (MID), wearable devices such as smart watches, virtual reality (VR) equipment, augmented reality (AR) equipment, wireless terminals in industrial control, terminals in Internet of Vehicles systems, wireless terminals in self-driving, wireless terminals in smart grid, wireless terminals in transportation safety, wireless terminals in smart city such as smart refuelers and terminal equipment on high-speed rail, and wireless terminals in smart home.
  • the communication device 120 may be a wireless device in the above various scenarios or a device provided in the wireless device, for example, a communication module, a modem or a chip in the above device.
  • Communication equipment can also be called terminal, terminal equipment, user equipment (UE), mobile station (MS), mobile terminal (mobile terminal, MT), etc.
  • the communication device may also be a communication device in a future wireless communication system.
  • Communication equipment can be used in dedicated network equipment or general equipment. The embodiments of this application do not limit the specific technology and specific equipment form used in the communication equipment.
  • the communication device can be used to act as a base station.
  • a UE may act as a scheduling entity that provides sidelink signals between UEs in V2X, D2D or P2P, etc.
  • cell phone 120a and car 120b communicate with each other using sidelink signals.
  • Cell phone 120a and smart home device 120e communicate without relaying communication signals through base station 110b.
  • The communication device used to implement the functions of the communication device may be a terminal device, a device having part of the functions of the above communication device, or a device that can support the realization of the functions of the above communication device, such as a chip system; the device can be installed in a terminal device or used in conjunction with the terminal device.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the communication device is a terminal device or a UE as an example for description.
  • A wireless communication system is usually composed of cells.
  • a base station provides management of the cell.
  • the base station provides communication services to multiple mobile stations (MS) in the cell.
  • the base station includes a baseband unit (BBU) and a remote radio unit (RRU).
  • the BBU and RRU can be placed in different places.
  • the RRU is remote and placed in a high traffic area, and the BBU is placed in the central computer room.
  • BBU and RRU can also be placed in the same computer room.
  • the BBU and RRU can also be different components under the same rack.
  • a cell may correspond to a carrier or component carrier.
  • the present disclosure can be applied between a network device and a communication device, between a network device and a network device, or between a communication device and a communication device, that is, between a primary device and a secondary device.
  • The primary device can be a network device or a communication device.
  • the secondary device can be another network device or communication device.
  • For example, when the main device is a communication device, the secondary device can be another communication device.
  • The following description takes the primary device being a network device (such as an access network device) and the secondary device being a communication device (such as a terminal device) as an example.
  • the communication direction corresponding to the downlink is from the primary device to the secondary device
  • the communication direction corresponding to the uplink is from the secondary device to the primary device.
  • Protocol layer structure between access network equipment and terminal equipment
  • the protocol layer structure may include a control plane protocol layer structure and a user plane protocol layer structure.
  • the control plane protocol layer structure may include radio resource control (RRC) layer, packet data convergence protocol (PDCP) layer, radio link control (RLC) layer, media access control (MAC) layer and physical layer.
  • the user plane protocol layer structure may include the functions of protocol layers such as the PDCP layer, RLC layer, MAC layer and physical layer.
  • Above the PDCP layer, a service data adaptation protocol (SDAP) layer may also be included.
  • the protocol layer structure between the access network device and the terminal may also include an artificial intelligence (artificial intelligence, AI) layer for transmitting data related to the AI function.
  • Protocol layers such as the SDAP layer, PDCP layer, RLC layer, MAC layer and physical layer can also be collectively referred to as the access layer.
  • According to the transmission direction of data, processing is divided into sending or receiving, and each layer mentioned above is correspondingly divided into a sending part and a receiving part.
  • After the PDCP layer obtains data from an upper layer, it passes the data to the RLC layer and the MAC layer; the MAC layer then generates a transport block, which is wirelessly transmitted through the physical layer.
  • Data is encapsulated accordingly in each layer.
  • The data received by a certain layer from the layer above it is regarded as the service data unit (SDU) of that layer, and the data that the layer encapsulates and passes to the layer below it is regarded as the protocol data unit (PDU) of that layer.
  • the terminal device may also have an application layer and a non-access layer.
  • the application layer can be used to provide services to applications installed in the terminal device.
  • The downlink data received by the terminal device can be transmitted sequentially from the physical layer up to the application layer and then provided to the application program by the application layer; conversely, the application layer can obtain the data generated by an application program, pass the data down through the layers to the physical layer, and send it to other communication devices.
  • the non-access layer can be used to forward user data, such as forwarding uplink data received from the application layer to the SDAP layer or forwarding downlink data received from the SDAP layer to the application layer.
  • Access network equipment can include centralized units (central unit, CU) and distributed units (distributed unit, DU). Multiple DUs can be centrally controlled by one CU.
  • the interface between the CU and the DU may be called the F1 interface.
  • The control plane (CP) interface can be F1-C, and the user plane (UP) interface can be F1-U.
  • CU and DU can be divided according to the protocol layers of the wireless network: for example, the functions of the PDCP layer and the layers above it are set in the CU, and the functions of the protocol layers below the PDCP layer (such as the RLC layer and MAC layer) are set in the DU; for another example, the functions of the protocol layers above the PDCP layer are set in the CU, and the functions of the PDCP layer and the protocol layers below it are set in the DU.
  • The division is not limited to the above: a CU or DU may be divided to have the functions of more protocol layers, or to have part of the processing functions of a protocol layer.
  • part of the functions of the RLC layer and the functions of the protocol layer above the RLC layer are set in the CU, and the remaining functions of the RLC layer and the functions of the protocol layer below the RLC layer are set in the DU.
  • The functions of the CU or DU can also be divided according to service types or other system requirements, for example according to delay: functions whose processing time needs to meet the delay requirement are set in the DU, and functions whose processing time does not need to meet the delay requirement are set in the CU.
  • the CU may also have one or more functions of the core network.
  • the CU can be set on the network side to facilitate centralized management.
  • Optionally, the RU of the DU is deployed remotely, where the RU has the radio frequency function.
  • DU and RU can be divided at the physical layer (PHY).
  • DU can implement high-level functions in the PHY layer
  • RU can implement low-level functions in the PHY layer.
  • the functions of the PHY layer can include adding cyclic redundancy check (CRC) code, channel coding, rate matching, scrambling, modulation, layer mapping, precoding, resource mapping, physical antenna Mapping, and/or RF transmitting functions.
  • the functions of the PHY layer may include CRC, channel decoding, derate matching, descrambling, demodulation, delayer mapping, channel detection, resource demapping, physical antenna demapping, and/or radio frequency reception functions.
  • the high-level functions in the PHY layer may include part of the functions of the PHY layer, for example, this part of the function is closer to the MAC layer, and the lower-layer functions of the PHY layer may include another part of the function of the PHY layer, for example, this part of the function is closer to the radio frequency function.
  • high-level functions in the PHY layer may include adding CRC codes, channel coding, rate matching, scrambling, modulation, and layer mapping
  • low-level functions in the PHY layer may include precoding, resource mapping, physical antenna mapping, and radio frequency transmission.
  • the high-level functions in the PHY layer may include adding CRC codes, channel coding, rate matching, scrambling, modulation, layer mapping, and precoding
  • the low-layer functions in the PHY layer may include resource mapping, physical antenna mapping, and radio frequency transmission functions.
  • the functions of the CU may be implemented by one entity, or may be implemented by different entities.
  • the functions of the CU can be further divided, that is, the control plane and the user plane are separated and implemented through different entities, namely the control plane CU entity (i.e., CU-CP entity) and the user plane CU entity (i.e., CU-UP entity).
  • the CU-CP entity and the CU-UP entity can be coupled with the DU to jointly complete the functions of the access network equipment.
  • the signaling generated by the CU can be sent to the terminal device through DU, or the signaling generated by the terminal device can be sent to the CU through DU.
  • RRC or PDCP layer signaling will eventually be processed into physical layer signaling and sent to the terminal device, or it will be converted from the received physical layer signaling.
  • the RRC or PDCP layer signaling can be considered to be sent through DU, or sent through DU and RU.
  • any one of the above DU, CU, CU-CP, CU-UP and RU can be a software module, a hardware structure, or a software module + hardware structure, without limitation.
  • the existence forms of different entities can be different and are not limited.
  • DU, CU, CU-CP, and CU-UP are software modules
  • RU is a hardware structure.
  • The communication system may also include more terminal devices and more access network equipment, and may also include other network elements, such as core network equipment, network management, and/or network elements used to implement artificial intelligence functions.
  • network management can also be called operation administration and maintenance (OAM) network element, referred to as OAM.
  • Operations mainly complete the analysis, prediction, planning and configuration of daily networks and services; maintenance mainly involves daily operational activities such as testing and fault management of the network and its services.
  • Through network management, the network operating status can be monitored, and network connections and performance can be optimized, which improves network operation stability and reduces network maintenance costs.
  • the method provided by the present disclosure can be used for communication between access network equipment and terminal equipment, and can also be used for communication between other communication equipment, such as communication between macro base stations and micro base stations in wireless backhaul links.
  • It can also be used for communication between two terminal devices over a sidelink (SL), or communication between a terminal device and the network management (OAM), etc.; this is not restricted.
  • the AI model is the specific implementation of the AI technical function.
  • the AI model represents the mapping relationship between the input and output of the model.
  • AI models can be neural networks, linear regression models, decision tree models, support vector machines (SVM), Bayesian networks, Q learning models or other machine learning (ML) models.
  • An autoencoder is a kind of unsupervised learning neural network, characterized in that the input data is used as the label data; therefore, an autoencoder can also be understood as a self-supervised learning neural network. Autoencoders can be used for data compression and recovery.
  • An autoencoder can generally refer to a network structure composed of two sub-models.
  • An autoencoder can also be called a bilateral model, a two-terminal model or a collaborative model.
  • An autoencoder includes an encoder and a decoder, where the encoder and decoder are used in a matched manner; it can be understood that the encoder and decoder are matching AI models.
  • The encoder in the autoencoder can encode (for example, compress) original information A to obtain encoded information B; the decoder in the autoencoder can decode the encoded information B to obtain decoded information A' corresponding to the original information A, and the decoded information A' can also be described as the restored original information A.
  • The operation of the decoder can be understood as the inverse operation of the encoder; for example, when the encoding process of the encoder includes compression processing, the decoding process of the decoder includes decompression processing.
  • the encoder and decoder included in the same autoencoder can be deployed on different devices respectively.
  • the terminal device side can use the encoder to compress the original channel information
  • the access network device side can use the decoder to restore the compressed original channel information and obtain the restored channel information.
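  • As a minimal illustrative sketch (not the specific model of this disclosure), the following PyTorch-style code shows an autoencoder whose encoder compresses original channel information into a low-dimensional representation and whose decoder restores it; the layer sizes, dimensions and library choice are assumptions made only for illustration.
```python
# Hypothetical autoencoder sketch for channel-information compression and recovery.
# The dimensions (64-dim original CSI, 16-dim compressed code) are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):          # e.g., deployed on the terminal-device side
    def __init__(self, csi_dim: int = 64, code_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(csi_dim, 32), nn.ReLU(), nn.Linear(32, code_dim))

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # v: original channel information; output: encoded (compressed) information
        return self.net(v)

class Decoder(nn.Module):          # e.g., deployed on the access-network-device side
    def __init__(self, csi_dim: int = 64, code_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(code_dim, 32), nn.ReLU(), nn.Linear(32, csi_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: encoded information; output: restored channel information
        return self.net(z)
```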
  • Neural network is a specific implementation form of AI or machine learning technology. According to the universal approximation theorem, neural networks can theoretically approximate any continuous function, which enables neural networks to have the ability to learn arbitrary mappings.
  • each neuron performs a weighted sum operation on its input values and outputs the operation result through an activation function.
  • For example, a neuron performs weighted summation on its input values x_i according to weights w_i, with a bias of, for example, b; there are many possible forms of the activation function. The values of b, w_i and x_i can be decimals, integers (such as 0, positive integers or negative integers), complex numbers, or other possible values.
  • the activation functions of different neurons in a neural network can be the same or different.
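  • Expressed as a formula (a generic illustration consistent with the description above, in which the concrete activation function is only an example), the output y of a single neuron with inputs x_i, weights w_i and bias b is:
```latex
y = \varphi\left(\sum_{i} w_i x_i + b\right),\qquad
\text{e.g. } \varphi(t)=\max(0,t)\ \text{(ReLU)}\ \text{or}\ \varphi(t)=\frac{1}{1+e^{-t}}\ \text{(sigmoid)}
```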
  • Neural networks generally include multiple layers, and each layer may include one or more neurons. By increasing the depth and/or width of a neural network, the expressive ability of the neural network can be improved, providing more powerful information extraction and abstract modeling capabilities for complex systems.
  • the depth of the neural network may refer to the number of layers included in the neural network, and the number of neurons included in each layer may be called the width of the layer.
  • the neural network includes an input layer and an output layer. The input layer of the neural network processes the received input information through neurons, and passes the processing results to the output layer, which obtains the output results of the neural network.
  • the neural network includes an input layer, a hidden layer and an output layer, as shown in Figure 2B.
  • the input layer of the neural network processes the received input information through neurons and passes the processing results to the intermediate hidden layer.
  • the hidden layer calculates the received processing results and obtains the calculation results.
  • The hidden layer passes the calculation results to the output layer or to the adjacent hidden layer, and the output result of the neural network is finally obtained from the output layer.
  • a neural network may include one hidden layer, or multiple hidden layers connected in sequence, without limitation.
  • the AI model used by the encoder or decoder in this disclosure can be a deep neural network (DNN).
  • DNNs can include feedforward neural networks (FNN), convolutional neural networks (CNN), or recurrent neural networks (RNN).
  • the training data set is used for training the AI model.
  • the training data set can include the input of the AI model, or include the input and target output of the AI model.
  • the training data set includes one or more training data.
  • the training data may be training samples input to the AI model, or may be the target output of the AI model. Among them, the target output can also be called a label or a label sample.
  • The training data set is one of the important parts of machine learning. Model training is essentially learning certain features from the training data so that the output of the AI model is as close as possible to the target output, for example, so that the difference between the output of the AI model and the target output is as small as possible.
  • the composition and selection of training data sets can, to a certain extent, determine the performance of the trained AI model.
  • For the autoencoder, the training data only includes the training samples input to the encoder; in other words, the training samples themselves serve as the label samples.
  • a loss function can be defined during the training process of AI models (such as neural networks).
  • the loss function describes the gap or difference between the output value of the AI model and the target output value. This disclosure does not limit the specific form of the loss function.
  • the training process of the AI model is the process of adjusting the model parameters of the AI model so that the value of the loss function is less than the threshold, or the value of the loss function meets the target requirements.
  • the AI model is a neural network, and adjusting the model parameters of the neural network includes adjusting at least one of the following parameters: the number of layers of the neural network, the width, the weight of the neuron, or the parameters in the activation function of the neuron.
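  • As a hedged sketch of the training procedure described above (adjusting model parameters until the value of the loss function meets the target requirement), the following illustrative code trains the encoder and decoder sketched earlier with an MSE loss; the optimizer, learning rate, threshold and epoch count are assumptions, not values specified by this disclosure.
```python
# Illustrative training loop: adjust model parameters until the loss is below a threshold.
import torch

def train_autoencoder(encoder, decoder, training_samples, threshold: float = 1e-3, max_epochs: int = 100):
    params = list(encoder.parameters()) + list(decoder.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)     # assumed optimizer and learning rate
    loss_fn = torch.nn.MSELoss()                      # loss: difference between model output and label
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for v in training_samples:                    # v serves as both input and label (self-supervised)
            restored = decoder(encoder(v))
            loss = loss_fn(restored, v)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / max(len(training_samples), 1) < threshold:   # stop when the loss meets the target
            break
    return encoder, decoder
```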
  • Inference data can be used as input to the trained AI model for inference of the AI model.
  • the inference data is input into the AI model, and the corresponding output can be obtained, which is the inference result.
  • the design of AI models mainly includes data collection links (such as collecting training data and/or inference data), model training links, and model inference links. It may further include an application link of the reasoning results.
  • FIG. 2C illustrates an AI application framework.
  • data sources are used to provide training data sets and inference data.
  • the AI model is obtained by analyzing or training the training data (training data) provided by the data source.
  • the AI model represents the mapping relationship between the input and output of the model. Learning an AI model through model training nodes is equivalent to using training data to learn the mapping relationship between the input and output of the model.
  • the AI model trained through the model training link is used to perform inference based on the inference data provided by the data source to obtain the inference results.
  • This link can also be understood as: input the inference data into the AI model, obtain the output through the AI model, and the output is the inference result.
  • the inference result may indicate: configuration parameters used (executed) by the execution object, and/or operations performed by the execution object.
  • the inference results are released in the inference result application link.
  • the inference results can be planned uniformly by the execution (actor) entity.
  • The execution entity can send the inference results to one or more execution objects (for example, core network equipment, access network equipment, terminal equipment, or network management, etc.) for execution.
  • the execution entity can also feed back the performance of the model to the data source to facilitate the subsequent update and training of the model.
  • network elements with artificial intelligence functions may be included in the communication system.
  • the above steps related to AI model design can be executed by one or more network elements with artificial intelligence functions.
  • AI functions (such as AI modules or AI entities) can be configured in existing network elements in the communication system to implement AI-related operations, such as AI model training and/or inference.
  • the existing network element may be access network equipment (such as gNB), terminal equipment, core network equipment, or network management, etc.
  • Operations mainly complete the analysis, prediction, planning and configuration of daily networks and services; maintenance mainly involves daily operational activities such as testing and fault management of the network and its services.
  • Through network management, the network operating status can be monitored, and network connections and performance can be optimized, which improves network operation stability and reduces network maintenance costs.
  • independent network elements can also be introduced into the communication system to perform AI-related operations, such as training AI models.
  • the independent network element may be called an AI network element or an AI node, and this disclosure does not limit this name.
  • The AI network element can be directly connected to the access network equipment in the communication system, or can be indirectly connected to the access network equipment through a third-party network element.
  • The third-party network element can be a core network element such as an access and mobility management function (AMF) network element or a user plane function (UPF) network element, a network management, a cloud server, or another network element; this is not limited.
  • FIG. 3 shows a communication system.
  • the communication system includes an access network device 110, a terminal device 120 and a terminal device 130; and, the AI network element 140 is also introduced into the communication system shown in Figure 3.
  • a model can infer one parameter or multiple parameters.
  • the training processes of different models can be deployed in different devices or nodes, or they can be deployed in the same device or node.
  • the inference processes of different models can be deployed in different devices or nodes, or they can be deployed in the same device or node.
  • the terminal device can train the supporting encoder and decoder and then send the model parameters of the decoder to the access network device.
  • the access network equipment can indicate the model parameters of the encoder to the terminal equipment after training the supporting encoder and decoder.
  • For example, the AI network element can train the supporting encoder and decoder, and then send the model parameters of the encoder to the terminal device and the model parameters of the decoder to the access network device; the model inference step corresponding to the encoder is then performed in the terminal device, and the model inference step corresponding to the decoder is performed in the access network device. Taking the case where another existing network element (such as network management or a core network element) other than the terminal device and the access network device completes the model training process as an example, after training the supporting encoder and decoder, that network element sends the model parameters of the encoder to the terminal device and the model parameters of the decoder to the access network device.
  • the model inference step corresponding to the encoder is performed in the terminal equipment, and the model inference step corresponding to the decoder is performed in the access network equipment.
  • The model parameters may include one or more of the following: the structural parameters of the model (such as the number of layers and/or weights of the model), the input parameters of the model (such as the input dimensions and the number of input ports), or the output parameters of the model (such as the output dimensions and the number of output ports).
  • the input dimension may refer to the size of an input data.
  • the input dimension corresponding to the sequence may indicate the length of the sequence.
  • the number of input ports may refer to the amount of input data.
  • the output dimension can refer to the size of an output data.
  • the output dimension corresponding to the sequence can indicate the length of the sequence.
  • the number of output ports can refer to the amount of output data.
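  • Purely as a hypothetical illustration of how such model parameters might be grouped when they are exchanged between nodes, the following structure mirrors the three categories above; every field name and value is an assumption, not a format defined by this disclosure.
```python
# Hypothetical grouping of exchanged model parameters (field names/values are illustrative only).
model_parameters = {
    "structure": {"num_layers": 4, "weights": "encoder_weights.bin"},  # structural parameters of the model
    "input": {"dimension": 64, "num_ports": 1},                        # input dimension / number of input ports
    "output": {"dimension": 16, "num_ports": 1},                       # output dimension / number of output ports
}
```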
  • The present disclosure also provides several network architectures, as shown in Figures 4A to 4D, which take performing model training and/or inference in access network equipment as an example and divide the corresponding functional modules in the access network equipment.
  • the access network equipment includes a near-real-time access network intelligent control (RAN intelligent controller, RIC) module for model learning and/or inference.
  • the near real-time RIC may obtain network side and/or terminal side information from at least one of CU, DU and RU, and the information may include training data or inference data.
  • near-real-time RIC can be used for model training, and the trained model can also be used for inference.
  • the near real-time RIC may submit the inference results to at least one of CU, DU and RU.
  • Inference results can be exchanged between the CU and the DU.
  • inference results can be exchanged between DU and RU.
  • near-real-time RIC submits inference results to DU, and DU submits them to RU.
  • a non-real-time RIC can be included outside the access network equipment.
  • The non-real-time RIC can be located in the OAM or in core network equipment.
  • This non-real-time RIC is used for model learning and inference.
  • the non-real-time RIC may obtain network-side and/or terminal-side information from at least one of CU, DU, and RU, and the information may include training data or inference data.
  • non-real-time RIC is used for model training, and the trained model can also be used for inference.
  • the non-real-time RIC may submit the inference results to at least one of CU, DU and RU.
  • Inference results can be exchanged between the CU and the DU.
  • inference results can be exchanged between DU and RU.
  • non-real-time RIC submits inference results to DU, and DU submits them to RU.
  • The access network equipment includes a near-real-time RIC, and a non-real-time RIC is also included outside the access network equipment.
  • the non-real-time RIC can be located in the OAM or core network equipment.
  • non-real-time RIC can be used for model training.
  • The near-real-time RIC can obtain the model parameters of the trained AI model from the non-real-time RIC, obtain network-side and/or terminal-side information from at least one of the CU, DU and RU, and use this information together with the trained AI model to obtain inference results.
  • the near real-time RIC can also submit the inference results to at least one of CU, DU and RU.
  • CU and DU can exchange inference results.
  • Inference results can be exchanged between the DU and the RU; for example, the near-real-time RIC submits the inference results to the DU, and the DU submits them to the RU.
  • In the above description, either the near-real-time RIC is used for model training and for inference with the trained model while the non-real-time RIC does not participate in the training or inference of the model, or the non-real-time RIC is used for model training and for inference with the trained model while the near-real-time RIC does not participate in the training or inference of the model.
  • Alternatively, the near-real-time RIC is used to train the model and sends the model parameters of the trained AI model to the non-real-time RIC, and the non-real-time RIC utilizes the trained model for inference.
  • FIG. 4B shows an example diagram of a network architecture to which the method provided by the present disclosure can be applied. Compared with (c) in Figure 4A, the CU is separated into CU-CP and CU-UP in Figure 4B.
  • Figure 4C shows an example diagram of a network architecture to which the method provided by the present disclosure can be applied.
  • the access network equipment includes one or more AI entities, and the function of the AI entity is similar to the above-mentioned near real-time RIC.
  • OAM includes one or more AI entities whose functions are similar to the above-mentioned non-real-time RIC.
  • the core network equipment includes one or more AI entities whose functions are similar to the above-mentioned non-real-time RIC.
  • differences in models may include at least one of the following differences: structural parameters of the model (such as the number of layers and/or weights of the model, etc.), input parameters of the model, or output parameters of the model.
  • Figure 4D shows an example diagram of a network architecture to which the method provided by the present disclosure can be applied.
  • the access network equipment in Figure 4D is separated into CU and DU.
  • the CU may include an AI entity whose functions are similar to the above-mentioned near real-time RIC.
  • the DU may include an AI entity whose functions are similar to the above-mentioned near real-time RIC.
  • both CU and DU include AI entities, the models trained by their respective AI entities are different, and/or the models used for inference are different.
  • the CU in Figure 4D can be further divided into CU-CP and CU-UP.
  • one or more AI models can be deployed in CU-CP.
  • one or more AI models may be deployed in CU-UP.
  • the OAM of the access network device and the OAM of the core network device can be deployed independently.
  • the present disclosure relates to feedback technology of channel state information (CSI).
  • the access network device needs to obtain the channel state information CSI of the channel.
  • Based on the CSI, the access network device can decide the resource scheduling of the downlink data channel of the terminal device and configurations such as the modulation and coding scheme (MCS) and precoding.
  • CSI belongs to a kind of channel information, which is a kind of information that can reflect channel characteristics and channel quality.
  • the channel information may also be called channel response.
  • the CSI may be represented by a channel matrix.
  • the CSI may include a channel matrix, or the CSI may be composed of a feature vector of a channel.
  • Since the uplink and downlink channels do not have reciprocity, or the reciprocity of the uplink and downlink channels cannot be guaranteed, the access network equipment usually sends downlink reference signals to the terminal equipment, and the terminal equipment performs channel measurement and interference measurement based on the received downlink reference signals to estimate the downlink channel information.
  • The downlink channel information includes the CSI, and the terminal equipment then feeds back the CSI to the access network equipment.
  • the terminal device can generate a CSI report based on the estimated CSI and feed it back to the access network device in a predefined or configured manner by the access network device.
  • the downlink reference signal includes channel state information-reference signal (CSI-RS) or synchronizing signal block (synchronizing signal/physical broadcast channel block, SSB).
  • the CSI report includes feedback quantities such as rank indicator (rank indicator, RI), channel quality indicator (channel quality indicator, CQI), and precoding matrix indicator (precoding matrix indicator, PMI).
  • RI is used to indicate the number of downlink transmission layers recommended by the terminal equipment
  • CQI is used to indicate the modulation and coding method that the current channel conditions judged by the terminal equipment can support
  • PMI is used to indicate the precoding recommended by the terminal equipment
  • The number of precoding layers indicated by the PMI corresponds to the RI. For example, if the RI is 1, the PMI indicates one layer of precoding; if the RI is N, the PMI indicates N layers of precoding, where N is a positive integer greater than 1. It can be understood that the N layers of precoding correspond to the N downlink transmission layers indicated by the RI. It can be seen that the larger the number of downlink transmission layers indicated by the RI, the higher the PMI feedback overhead, resulting in larger CSI feedback overhead.
  • Introducing artificial intelligence AI into wireless communication networks creates an AI-based CSI feedback method.
  • the CSI in the downlink channel information measured and estimated by the terminal equipment is recorded as the original CSI.
  • The terminal equipment uses an AI model to compress and quantize the original CSI and then sends it to the access network equipment; the access network equipment uses an AI model to dequantize and decompress the received quantized CSI to obtain the restored CSI.
  • the autoencoder can be introduced into CSI feedback to construct a CSI feedback method based on the autoencoder.
  • the autoencoder is trained based on the downlink channel information or the original CSI in the downlink channel information.
  • the training data set of the autoencoder includes several original CSIs.
  • the training process of the autoencoder can be completed in model training nodes such as access network equipment, network management OAM or core network equipment, and then the model training node can deploy the encoder in the trained autoencoder on the terminal device, and The decoder in the trained autoencoder is deployed on the access network device.
  • Figure 5A illustrates a CSI feedback method based on an autoencoder.
  • the encoder of the autoencoder has compression and quantization functions.
  • The terminal equipment inputs the original CSI to the encoder, and the encoder outputs quantized CSI based on the original CSI; the decoder of the autoencoder, which has dequantization and decompression functions, is deployed in the access network equipment.
  • FIG. 5B illustrates a CSI feedback method based on an autoencoder.
  • The encoder of the autoencoder deployed in the terminal equipment has a compression function and a quantization function, but the compression function and the quantization function are implemented by a compressor and a quantizer respectively. The terminal device inputs the original CSI to the encoder, the compressor in the encoder outputs compressed CSI based on the original CSI, and the terminal device then uses the quantizer to quantize the compressed CSI based on the original CSI to obtain quantized CSI based on the original CSI.
  • the decoder has decompression and dequantization functions, but the dequantization and decompression functions are implemented by the dequantizer and the decompressor respectively.
  • The access network device uses the dequantizer to dequantize the received quantized CSI based on the original CSI to obtain the dequantized compressed CSI, and then inputs the dequantized compressed CSI to the decompressor, and the decompressor outputs the restored CSI.
  • the compressor can also be described as a compression module, compression unit or compression device;
  • the quantizer can also be described as a quantization module, quantization unit or quantization device;
  • the dequantizer can also be described as a dequantization module, dequantization unit or dequantization device;
  • a decompressor may alternatively be described as a decompression module, decompression unit or decompression device. This disclosure is not limiting in this regard.
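  • As a minimal sketch of the split processing described above (compressor plus quantizer on the terminal side, dequantizer plus decompressor on the access-network side), the following illustrative code uses a simple uniform scalar quantizer; the quantizer design, bit width and clipping range are assumptions, and compressor/decompressor stand for the AI models sketched earlier.
```python
# Illustrative uniform scalar quantizer/dequantizer for the compressed CSI (floating-point sequence).
import numpy as np

def quantize(compressed_csi, num_bits: int = 4):
    """Map each float to one of 2**num_bits levels and return integer indices (a bit-level representation)."""
    levels = 2 ** num_bits
    clipped = np.clip(np.asarray(compressed_csi, dtype=float), -1.0, 1.0)   # assumed value range [-1, 1]
    return np.round((clipped + 1.0) / 2.0 * (levels - 1)).astype(int)

def dequantize(indices, num_bits: int = 4):
    """Approximate inverse of quantize(): indices back to floats in [-1, 1]."""
    levels = 2 ** num_bits
    return np.asarray(indices, dtype=float) / (levels - 1) * 2.0 - 1.0

# Terminal side:       quantized_csi = quantize(compressor(original_csi))
# Access-network side: restored_csi  = decompressor(dequantize(quantized_csi))
```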
  • AI models are sensitive to scene changes.
  • For example, when the scenario changes, the performance of the AI model may decrease sharply, and the performance of the autoencoder may accordingly be reduced. The performance of the autoencoder will directly affect the feedback and recovery accuracy of the CSI, so how to monitor the performance of the autoencoder has become an issue worth studying.
  • the performance of the autoencoder is generally determined by comparing the difference between the input of the encoder (original CSI) and the output of the decoder (recovered CSI).
  • In one possible approach, a decoder is deployed in the terminal device in addition to the encoder, where the encoder and decoder are matched and form an autoencoder. The terminal device inputs the original CSI to the autoencoder, the output of the autoencoder is the recovered CSI, and the terminal device compares the difference between the original CSI and the recovered CSI to determine the performance of the autoencoder.
  • the decoder is more complex than the encoder.
  • Running both the encoder and the decoder on the terminal device increases the processing complexity of the terminal device, and the decoder may involve the algorithm privacy of the access network device, so deploying the decoder in the terminal device would reduce communication security. Alternatively, the terminal device sends the original CSI to the access network device, and the access network device compares the difference between the original CSI and the CSI restored by the decoder to determine the performance of the autoencoder; however, such a design causes the CSI feedback overhead to increase.
  • the present disclosure provides a performance monitoring solution for an autoencoder.
  • In this solution, the input of the first encoder includes original information, and the output of the first encoder includes encoded information; the input of the first decoder includes encoded information, and the output of the first decoder includes decoded information.
  • the output of the first decoder is processed by the second encoder to obtain the output of the second encoder, and the output of the second encoder includes re-encoding information.
  • the performance of the autoencoder is indirectly determined by comparing the difference between the re-encoding information output by the second encoder and the encoding information output by the first encoder.
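  • The following is a minimal, hypothetical sketch of the monitoring principle just described: the device holding the first decoder and the second encoder re-encodes each decoded output and compares it with the received encoded information; the use of MSE as the difference measure and the threshold value are illustrative assumptions only.
```python
# Illustrative autoencoder performance monitoring based on re-encoded information.
import numpy as np

def monitor_autoencoder(first_decoder, second_encoder, received_encoded_infos, threshold: float = 0.05):
    """Indirectly judge autoencoder performance from M pieces of received first encoded information."""
    differences = []
    for z in received_encoded_infos:            # z: one piece of first encoded information
        decoded = first_decoder(z)              # first decoded information
        z_re = second_encoder(decoded)          # first re-encoded information
        differences.append(float(np.mean((np.asarray(z) - np.asarray(z_re)) ** 2)))  # per-pair MSE
    average_difference = float(np.mean(differences))
    performance_ok = average_difference <= threshold    # assumed decision rule
    return performance_ok, average_difference
```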
  • the autoencoder performance monitoring solution provided by this disclosure can be applied to autoencoder-based CSI feedback scenarios, or other scenarios that require the deployment of autoencoders.
  • the original information includes original CSI.
  • The first encoder is deployed in the terminal device, and the first decoder and the second encoder are deployed in the access network device. There is no need to deploy a decoder in the terminal device, which avoids increasing the processing complexity of the terminal device and improves communication security; there is also no need to transmit a large amount of original CSI for autoencoder performance monitoring, which reduces the CSI overhead.
  • The CSI feedback in this application is not limited to downlink CSI feedback; it may also be applied to uplink CSI feedback, or to CSI feedback between devices, such as over a sidelink (SL). CSI in this application can be replaced by channel information.
  • the second encoder may be the first encoder itself, or an encoder with the same function as the first encoder.
  • The first encoder and the second encoder having the same function can be understood as follows: when the input data of the first encoder and the second encoder are the same, the outputs of the first encoder and the second encoder are the same, or the difference between the outputs of the first encoder and the second encoder is less than a preset threshold, even though the model parameters and/or the applied AI model types of the first encoder and the second encoder may be different.
  • The functions of the first encoder and the second encoder being the same may also be understood as follows: when the first encoder has compression and quantization functions, the second encoder also needs to have compression and quantization functions; or, when the first encoder has the compression function but does not have the quantization function, the second encoder also has the compression function but does not have the quantization function.
  • The aforementioned original information includes original channel information, such as original CSI; the encoded information includes quantized CSI based on the original CSI; the decoded information includes restored CSI; and the re-encoded information includes quantized CSI based on the restored CSI.
  • the set of original CSI and restored CSI is called channel space, and the set of quantized CSI based on original CSI and quantized CSI based on restored CSI is called latent variable space.
  • One sample point in the channel space represents one original CSI or one restored CSI
  • one sample point in the latent variable space represents one quantized CSI based on the original CSI or one quantized CSI based on the restored CSI.
  • the first encoder or the second encoder can be regarded as a function f(*).
  • The function f(*) can be understood as mapping one sample point in the channel space to one sample point in the latent variable space; equivalently, a sample point in the latent variable space is the projection of a sample point in the channel space.
  • If the original CSI is denoted as V and the quantized CSI based on the original CSI is denoted as z, there is a functional relationship z = f(V).
  • a mapping relationship diagram is shown.
  • The difference between the restored CSI and the original CSI can be expressed as the distance between the corresponding sample points in the channel space; correspondingly, the difference between the quantized CSI based on the restored CSI and the quantized CSI based on the original CSI can be expressed as the distance between the corresponding sample points in the latent variable space. Therefore, by monitoring the difference between z and the quantized CSI based on the restored CSI, the difference between the restored CSI and the original CSI V can be determined indirectly, thereby enabling performance monitoring of the autoencoder.
  • calculation methods such as Euclidean distance, minimum mean square error (MSE) distance, or cosine similarity can be used to measure the distance between two sample points in the channel space or latent variable space.
  • the Euclidean distance refers to the distance between two points in the Euclidean space.
  • The Euclidean distance between point r(r_1, r_2, ..., r_n) and point q(q_1, q_2, ..., q_n) is expressed as d(r, q) = sqrt((r_1 − q_1)^2 + (r_2 − q_2)^2 + ... + (r_n − q_n)^2), and the MSE distance is expressed as ((r_1 − q_1)^2 + (r_2 − q_2)^2 + ... + (r_n − q_n)^2) / n. The smaller the Euclidean distance or MSE distance of two sample points in the same space, the smaller the distance or difference between the two sample points.
  • the cosine similarity of two sample points in the same space refers to the cosine value of the angle between the two sample points.
  • The cosine value of an angle of 0 degrees is 1, the cosine value of any other angle is not greater than 1, and the minimum cosine value is -1.
  • the cosine values corresponding to two sample points can determine whether the two sample points point roughly in the same direction. The greater the cosine similarity between the two sample points in the same space, the smaller the distance or difference between the two sample points.
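  • As an illustrative (non-normative) sketch, the three measures mentioned above can be computed as follows for two sample points r and q in the same space:
```python
# Illustrative distance/similarity measures between two sample points r and q.
import numpy as np

def euclidean_distance(r, q):
    return float(np.sqrt(np.sum((np.asarray(r, dtype=float) - np.asarray(q, dtype=float)) ** 2)))

def mse_distance(r, q):
    return float(np.mean((np.asarray(r, dtype=float) - np.asarray(q, dtype=float)) ** 2))

def cosine_similarity(r, q):
    r, q = np.asarray(r, dtype=float), np.asarray(q, dtype=float)
    return float(np.dot(r, q) / (np.linalg.norm(r) * np.linalg.norm(q)))

# A smaller Euclidean/MSE distance, or a larger cosine similarity, means the two
# sample points are closer, i.e. their difference is smaller.
```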
  • Such a correlation can also be described as follows: the distances between sample points in the channel space and the distances between their corresponding projections in the latent variable space are consistent, or, in other words, the distances between sample points in the channel space and the distances between their corresponding projections in the latent variable space remain positively correlated.
  • Figure 7 shows several situations illustrating the correlation between the distance between two sample points in the channel space and the distance between their projections in the latent variable space; the solid line in Figure 7 represents the channel space, and the dotted line represents the latent variable space. (a) in Figure 7 illustrates the distance correlation corresponding to the training data set, (b) in Figure 7 illustrates the distance correlation corresponding to test data set 1, and (c) in Figure 7 illustrates the distance correlation corresponding to test data set 2.
  • the training data set includes several pieces of downlink channel information or the original CSI in the downlink channel information.
  • the test data set 1 includes several pieces of downlink channel information or the original CSI in the downlink channel information that have the same distribution as the training data set, such as the same channel environment.
  • Test data set 2 includes several pieces of downlink channel information, or original CSI in the downlink channel information, whose distribution is different from that of the training data set, for example, the channel environment is different.
  • the first device is a device responsible for monitoring the performance of the autoencoder.
  • the first device can be a model inference node using a decoder or a model training node used to train the autoencoder.
  • The second encoder and the first decoder in the first autoencoder are deployed in the first device; the second device is a model inference node using the encoder, and the first encoder in the first autoencoder is deployed in the second device.
  • the first device can be an access network device or other network elements used to train the autoencoder, such as network management OAM, core network equipment or independent AI network elements, etc.
  • the second device may be a terminal device.
  • Figure 8 illustrates a communication method, which mainly includes the following processes.
  • the first device receives M pieces of first coded information from the second device.
  • The process by which the second device determines any one piece of the M pieces of first encoded information can be understood as follows: the second device uses the first encoder and the first original information input to the first encoder to determine the piece of first encoded information.
  • the M pieces of first coded information correspond to the M pieces of first original information in a one-to-one manner.
  • In one design, the first encoder has both compression and quantization functions, and the original information input to the first encoder directly yields compressed and quantized information as output. That is, the first encoder can compress and quantize one piece of input first original information and output one piece of first encoded information corresponding to that piece of first original information, where the piece of first encoded information is quantized information.
  • The aforementioned first original information includes the first original channel information estimated by the second device by measuring a reference signal, such as the downlink reference signal;
  • the first original channel information includes original CSI.
  • The first encoded information may include quantized CSI based on the original CSI, and the first encoded information is specifically a bit sequence. In this case, what the first device specifically receives is the M pieces of first encoded information.
  • In another design, the first encoder has a compression function and a quantization function, but the compression function and the quantization function are completed respectively by the first compressor and the first quantizer included in the first encoder; the original information input to the first encoder first passes through the first compressor to obtain compressed information and then passes through the first quantizer to obtain quantized information.
  • the first coded information in this application may refer to information after compression without quantization, or may refer to information after compression and quantization.
  • When the first encoded information refers to compressed but unquantized information, the first compressor may perform compression processing on one piece of first original information and output one piece of first encoded information corresponding to that piece of first original information, where the piece of first encoded information is unquantized information; furthermore, the second device can use the first quantizer to perform quantization processing on the piece of first encoded information to obtain a piece of first quantized information corresponding to the piece of first encoded information.
  • the input of the first quantizer includes the first encoding information, and the output of the first quantizer includes the first quantization information.
  • the first original information includes the first original channel information estimated by the second device by measuring the downlink reference signal; wherein the first original channel information includes the original CSI.
  • The first encoded information can be understood as compressed CSI based on the original CSI, and the first encoded information is specifically a floating point number sequence. In this case, what the first device specifically receives is the M pieces of first quantized information corresponding to the M pieces of first encoded information.
  • the first coded information may refer to compressed and quantized information
  • The first compressor can perform compression processing on one piece of input first original information and output one piece of first compressed information corresponding to that piece of first original information, where the piece of first compressed information is unquantized information; then, the second device can use the first quantizer to perform quantization processing on the piece of first compressed information and obtain a piece of first quantized information corresponding to the piece of first compressed information.
  • the input of the first quantizer includes the first compressed information, and the output of the first quantizer includes the first quantized information.
  • the first original information includes the first original channel information estimated by the second device by measuring the downlink reference signal; wherein the first original channel information includes the original CSI.
  • the first compressed information can be understood as compressed CSI based on the original CSI, and the first compressed information is specifically a floating point number sequence.
  • what the first device specifically receives is M pieces of first quantized information corresponding to M pieces of first compressed information, and the M pieces of first quantized information are M pieces of first encoded information.
  • the quantizer may be one of the following: a non-uniform quantizer, a scalar quantizer, a vector quantizer, a quantizer designed based on experience, or a quantizer obtained through AI training.
  • the quantizer may be combined with the first encoder Train together.
  • Quantization processing can also be called quantization operation. Quantization processing can be understood as finding the item closest to the quantity to be quantified in the quantization dictionary and outputting the index of the item.
  • the quantity to be quantized in this design is the first compressed information, and the index output by querying the quantization dictionary is included in the first quantized information corresponding to the first compressed information.
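  • As a concrete illustration of this quantization operation, the following sketch assumes a small, hypothetical quantization dictionary (codebook); the codebook contents, names, and the use of Euclidean distance are illustrative assumptions and are not defined by the present disclosure.

```python
import numpy as np

# Hypothetical quantization dictionary (codebook): each row is one candidate vector.
codebook = np.array([
    [0.0, 0.0],
    [0.5, 0.5],
    [1.0, 0.0],
    [0.0, 1.0],
])

def quantize(x: np.ndarray) -> int:
    """Find the codebook item closest to the quantity to be quantized and output its index."""
    distances = np.linalg.norm(codebook - x, axis=1)
    return int(np.argmin(distances))

def dequantize(index: int) -> np.ndarray:
    """Inverse operation: look up the codeword corresponding to a received index."""
    return codebook[index]

# The first compressed information is quantized to an index; the index is what is fed back.
compressed = np.array([0.45, 0.52])
idx = quantize(compressed)        # first quantized information (an index, i.e., a bit pattern)
recovered = dequantize(idx)       # dequantized information recovered at the receiver
```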
  • the first quantizer used by the second device may be pre-configured, for example, in a protocol-defined manner, or the second device and the first device may pre-negotiate a first quantizer.
  • the second device can also decide on its own the first quantizer to use, and send information indicating the first quantizer to use to the first device.
  • the M pieces of first coded information (or M pieces of first quantized information) sent by the second device are transmitted to the first device through the channel, and there may be a certain transmission loss between the M pieces of first coded information (or M pieces of first quantized information) received by the first device and those sent by the second device; in other words, there may be a certain difference between the M pieces of first coded information (or M pieces of first quantized information) received by the first device and the M pieces of first coded information (or M pieces of first quantized information) sent by the second device.
  • the present disclosure ignores the transmission loss or difference, or the method provided by the present disclosure is not limited by the transmission loss or difference.
  • M is a positive integer.
  • the value of M can be determined by the first device.
  • the first device may monitor the performance of the autoencoder based on continuously received first encoded information, and the value of M may specifically correspond to the frequency with which the first device determines the performance of the autoencoder. For example, every time the first device receives one piece of first encoded information, it judges the performance of the autoencoder, and M is 1. In another example, when the first device determines the performance of the autoencoder once for every specified number of pieces of first encoded information it receives, the value of M is the aforementioned specified number.
  • if the first device judges the performance of the autoencoder once every specified time period, the value of M is the number of pieces of first encoded information sent by the second device within that specified time period.
  • the first device may also monitor the performance of the autoencoder based on the discontinuously received first encoding information. For example, M pieces of first encoding information are received every specified time period or every set number of pieces of first encoding information, and the performance of the autoencoder is judged based on the M pieces of first encoding information, where M is greater than or equal to 1.
  • the first device uses the first decoder and the M pieces of first encoding information to determine the M pieces of first decoding information.
  • the first decoder and the first encoder belong to the same autoencoder, for example denoted as the first autoencoder. Understandably, it may be predefined that the first device uses the first decoder in the first autoencoder and the second device uses the first encoder in the first autoencoder. Alternatively, when sending the M pieces of first encoded information, the second device indicates one or more of the following to the first device: the first encoder, the first decoder, or the first autoencoder.
  • the M pieces of first decoded information are in one-to-one correspondence with the M pieces of first encoded information, and the M pieces of first decoded information are in one-to-one correspondence with the M pieces of first original information.
  • the i-th piece of first decoded information among the M pieces of first decoded information corresponds to the i-th piece of first original information among the M pieces of first original information, and the i-th piece of first decoded information is the i-th piece of first original information recovered by the first device, where i is any positive integer from 1 to M.
  • the first decoding information may specifically refer to the recovered CSI.
  • the M pieces of first coded information sent by the second device are quantized information.
  • the first decoder has dequantization and decompression functions.
  • the first device may input M pieces of first encoded information to the first decoder in parallel or serially, and the first decoder may output M pieces of first decoded information.
  • the first decoder has a decompression function and a dequantization function, and the dequantization function and the decompression function are respectively implemented by a first dequantizer and a first decompressor included in the first decoder.
  • the first device may first use the first dequantizer to dequantize the M pieces of first coded information to obtain M pieces of dequantized information corresponding to the M pieces of first coded information; then, the first device inputs the M pieces of dequantized information in parallel or serially to the first decompressor, and the first decompressor outputs the corresponding M pieces of first decoded information.
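  • A minimal sketch of this two-stage first decoder is given below; `dequantize` and `decompress` are placeholder callables for the first dequantizer and the first decompressor, and their internal structure is not specified by the present disclosure.

```python
from typing import Callable, Sequence
import numpy as np

def decode_all(received: Sequence[int],
               dequantize: Callable[[int], np.ndarray],
               decompress: Callable[[np.ndarray], np.ndarray]) -> list:
    """First decoder realized as dequantization followed by decompression: each received
    piece of first coded (quantized) information is first dequantized and then decompressed
    to obtain the corresponding piece of first decoded information."""
    return [decompress(dequantize(q)) for q in received]
```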
  • the dequantization information corresponding to the first encoding information includes dequantized compressed CSI.
  • the second device sends M pieces of first quantized information corresponding to the M pieces of first coded information.
  • the first decoder has dequantization and decompression functions.
  • the first device may input M pieces of first quantized information to the first decoder in parallel or serially, and the first decoder may output M pieces of first decoded information.
  • the first decoder has a decompression function and a dequantization function, but the decompression function and dequantization function are respectively implemented by the first decompressor and the first dequantizer included in the first decoder.
  • the first device may first use a first dequantizer to dequantize the M pieces of first quantized information and restore the M pieces of first compressed information; furthermore, the first device may input the M pieces of first compressed information in parallel or serially to The first decompressor outputs corresponding M pieces of first decompressed information.
  • the M pieces of first decompressed information are also M pieces of first decoded information.
  • the first encoding information obtained by the second device using the first encoder includes compressed CSI based on the original CSI; the first quantization information includes quantized CSI based on the original CSI;
  • the first encoded information restored by the first device using the dequantizer includes dequantized compressed CSI.
  • the aforementioned first dequantizer is used in conjunction with the first quantizer in S801, and the first dequantizer can be understood as the inverse operation of the first quantizer.
  • the input of the first dequantizer includes the M pieces of first quantized information, and the output includes the M pieces of first encoded information.
  • the first dequantizer may also be preconfigured; alternatively, when the second device sends information indicating the first quantizer, the first device can determine the matching first dequantizer based on the information indicating the first quantizer.
  • the first device uses the second encoder and the M pieces of first decoding information input to the second encoder to determine M pieces of first re-encoding information.
  • the second encoder may be the first encoder or an encoder with the same function as the first encoder.
  • the definition of the same function can be understood with reference to the foregoing description, and will not be repeated in this disclosure.
  • the M pieces of first re-encoded information correspond to the M pieces of first decoded information on a one-to-one basis, and the M pieces of first re-encoded information correspond to the M pieces of first encoded information on a one-to-one basis.
  • the first re-encoded information is information comparable to the first encoded information: when the first encoded information is quantized information, the first re-encoded information is quantized information; when the first encoded information is unquantized information, the first re-encoded information is unquantized information.
  • the second encoder has compression and quantization functions and the compression and quantization functions are implemented by the same module.
  • the first device can input M pieces of first decoded information to the second encoder in parallel or serially,
  • the second encoder may output M pieces of first re-encoded information, where the M pieces of first re-encoded information are quantized information.
  • one of the M first re-encoding information may include quantized CSI based on one recovered CSI.
  • the second encoder has a compression function and a quantization function, and the compression function and the quantization function are respectively implemented by a second compressor and a second quantizer included in the second encoder.
  • the first device may input the M pieces of first decoded information to the second compressor in parallel or serially, and the second compressor may output M pieces of first re-compressed information.
  • the first device may then use the second quantizer to perform quantization processing on the M pieces of first re-compressed information to obtain M pieces of first re-quantized information.
  • the M pieces of first re-encoded information may be the M pieces of first re-compressed information or the M pieces of first re-quantized information, specifically corresponding to whether the first encoded information is unquantized or quantized information.
  • for example, in a CSI feedback scenario, one of the M pieces of first re-compressed information may include compressed CSI based on one recovered CSI, and one of the M pieces of first re-quantized information may include quantized CSI based on that recovered CSI.
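  • To make the relationship between the received first encoded information and the first re-encoded information concrete, the following sketch outlines the processing at the first device; `decoder` and `encoder` are placeholder callables for the first decoder and the second encoder, and nothing about their internal structure is implied by the present disclosure.

```python
from typing import Callable, Sequence
import numpy as np

def reencode_all(received_encoded: Sequence[np.ndarray],
                 decoder: Callable[[np.ndarray], np.ndarray],
                 encoder: Callable[[np.ndarray], np.ndarray]) -> list:
    """For each received piece of first encoded information, recover the first decoded
    information with the first decoder and re-encode it with the second encoder, so that
    each piece of first re-encoded information is directly comparable to the piece of
    first encoded information it corresponds to."""
    reencoded = []
    for z in received_encoded:
        decoded = decoder(z)      # first decoded information (e.g., recovered CSI)
        z_re = encoder(decoded)   # first re-encoded information
        reencoded.append(z_re)
    return reencoded
```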
  • the first device determines the performance of the first autoencoder based on the difference between the M pieces of first encoding information and the M pieces of first re-encoding information.
  • when the M pieces of first encoded information and the M pieces of first re-encoded information in step S804 are both quantized information, the first device calculates the difference between the M pieces of first encoded information received by the first device and the M pieces of first re-encoded information.
  • one of the M pieces of first original information includes first original channel information, such as original CSI.
  • One first coding information among the M pieces of first coding information includes quantized CSI based on the original CSI; one piece of first recoding information among the M pieces of first recoding information includes quantized CSI based on the recovered CSI.
  • the first device may determine the performance of the first autoencoder by comparing differences between the M quantized CSIs based on the original CSI and the M quantized CSIs based on the recovered CSIs.
  • alternatively, the M pieces of first encoded information and the M pieces of first re-encoded information in step S804 may all be unquantized information, or may all be quantized information.
  • in this case, the first device receives the M pieces of first quantized information corresponding to the M pieces of first coded information.
  • the first device may use a dequantizer to process the M pieces of first quantized information to obtain the M pieces of first coded information.
  • what the first device calculates is the difference between the M pieces of first coded information obtained by the first device through dequantization processing and the M pieces of first re-encoded information.
  • one of the M pieces of first original information includes first original channel information, such as original CSI.
  • One first coding information among the M pieces of first coding information includes dequantized compressed CSI; one piece of first recoding information among the M pieces of first recoding information includes compressed CSI based on the restored CSI.
  • the first device may determine the performance of the first autoencoder by comparing differences between the M dequantized compressed CSIs and the M compressed CSIs based on the recovered CSIs.
  • alternatively, the first device can use a quantizer to perform quantization processing on the M pieces of first re-compressed information to obtain M pieces of first re-quantized information, that is, the M pieces of first re-encoded information.
  • the first device may calculate differences between the M pieces of first quantized information and the M pieces of first re-quantized information to determine the performance of the autoencoder. For example, in a CSI feedback scenario, one of the M pieces of first original information includes first original channel information, such as original CSI.
  • one of the M pieces of first quantized information includes quantized CSI based on the original CSI; one of the M pieces of first re-compressed information includes compressed CSI based on the recovered CSI, and one of the M pieces of first re-quantized information includes quantized CSI based on the recovered CSI.
  • the first device may determine the performance of the first autoencoder by comparing differences between the M quantized CSIs based on the original CSI and the M quantized CSIs based on the recovered CSIs.
  • the performance determination method of the first autoencoder will be described below on a case-by-case basis.
  • Case 1: M is 1. If the difference parameter between the first encoded information and the first re-encoded information is greater than or equal to the first threshold, it is determined that the performance of the first autoencoder corresponds to the first value; or, if the difference parameter between the first encoded information and the first re-encoded information is less than or equal to the second threshold, it is determined that the performance of the first autoencoder corresponds to the second value.
  • the first threshold and the second threshold may be equal or unequal.
  • the performance of the first autoencoder corresponds to the first value or the second value.
  • the performance of the first autoencoder can correspond to other values in addition to the first value or the second value. Different values correspond to different performance definitions, and this disclosure is not limiting.
  • the first threshold and the second threshold are thresholds set to measure the difference between the first encoded information and the first re-encoded information.
  • the difference parameter between one piece of first coded information and the one first re-coded information includes the difference between one piece of first coded information and the one first re-coded information.
  • illustratively, taking sample points in the latent variable space as an example for description, one piece of first coded information is represented as z and one piece of first re-coded information is represented as z'.
  • the first threshold is equal to the second threshold
  • the first threshold or the second threshold can be understood as the distance difference threshold D1 between sample points in the latent variable space.
  • the first device can calculate the distance d(z, z') and compare its magnitude with D1 to determine the performance of the first autoencoder.
  • if d(z, z') is greater than or equal to D1, the first device may determine that the performance of the first autoencoder corresponds to the first value, and the first value may be used to indicate that the performance of the first autoencoder does not meet the first requirement.
  • if d(z, z') is less than or equal to D1, the first device may determine that the performance of the first autoencoder corresponds to the second value, and the second value may be used to indicate that the performance of the first autoencoder meets the first requirement.
  • the first threshold and the second threshold are thresholds set to measure the difference between the first original information and the first decoded information.
  • the difference parameter between the first encoded information and the first re-encoded information may be determined based on the scaling factor and the difference between the first encoded information and the first re-encoded information.
  • the scaling factor may be understood as a coefficient used to adjust the difference between the first encoded information and the first re-encoded information to the same level as the difference between the original information and the decoded information.
  • the difference parameter between one first encoded information and the one first re-encoded information may be equal to the product of the scaling factor and the difference between one first encoded information and the one first re-encoded information.
  • the difference parameter between the first encoded information and the first re-encoded information is a value of the same magnitude as the difference between the first original information and the first decoded information.
  • sample points in the channel space are used as an example for description.
  • the first threshold can be understood as the distance difference threshold D2 between sample points in the channel space.
  • one piece of first encoded information is represented as z, one piece of first re-encoded information is represented as z', and the scaling factor is denoted k.
  • if k·d(z, z') is greater than or equal to D2, the first device may determine that the performance of the first autoencoder corresponds to the first value, and the first value may be used to indicate that the performance of the first autoencoder does not meet the first requirement.
  • if k·d(z, z') is less than or equal to D2, the first device may determine that the performance of the first autoencoder corresponds to the second value, and the second value may be used to indicate that the performance of the first autoencoder meets the first requirement.
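  • The threshold comparison described above can be summarized by the following sketch; the use of Euclidean distance as the difference, the assumption that the first and second thresholds are equal, and the function names are illustrative assumptions only (with k set to 1 the sketch also covers the unscaled comparison against D1).

```python
import numpy as np

def autoencoder_meets_requirement(z: np.ndarray, z_re: np.ndarray,
                                  k: float, threshold: float) -> bool:
    """Compute the difference parameter as the product of the scaling factor k and the
    difference between one piece of first encoded information z and the corresponding
    first re-encoded information z_re, and compare it with the threshold (e.g., D2).
    Returns True for the second value (performance meets the first requirement) and
    False for the first value (performance does not meet the first requirement)."""
    diff_param = k * np.linalg.norm(z - z_re)
    return diff_param <= threshold
```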
  • the aforementioned scaling factor may be preconfigured to the first device, or the scaling factor may be determined by the first device based on multiple pieces of original information and multiple pieces of coded information corresponding to the multiple pieces of original information.
  • the scale factor is determined by the second device based on multiple pieces of original information and multiple pieces of coded information corresponding to the pieces of original information, and then the first device obtains the scale factor from the second device.
  • the scale factor is variable, and the first device can periodically update the scale factor when determining the scale factor.
  • when the second device determines the scaling factor, the first device can configure the timing for the second device to report the scaling factor, for example periodic reporting, or one-shot reporting triggered by a dynamic indication.
  • the scaling factor reported by the second device each time may be the same or different. This disclosure is not limiting in this regard.
  • first device and the second device determine the scaling factor in the same manner.
  • the following is an example of how the first device determines the scaling factor.
  • the first device may receive N pieces of second original information from the second device and N pieces of second encoded information corresponding to the N pieces of second original information. Wherein, all or part of the N pieces of second original information belong to the M pieces of first original information, or the N pieces of second original information do not include any of the M pieces of first original information.
  • N is a positive integer greater than 1.
  • the first device may determine the scaling factor based on the ratio between the difference between pairs of second original information in the N pieces of second original information and the difference between pairs of second encoded information corresponding to the pair of second original information.
  • the first device may divide the N pieces of second original information into multiple groups of original information, where each group of original information includes two pieces of second original information among the N pieces of second original information.
  • the N pieces of second coded information can also be divided into multiple groups of coded information, where each group of coded information includes two pieces of second coded information among the N pieces of second coded information.
  • the N second original information includes N second original channel information in the current communication environment.
  • N may be 3, and the 3 second original information may include 3 original CSIs in the current environment.
  • the three second original information are represented as V1, V2, and V3 respectively.
  • the three second coded information corresponding to the three second original information are respectively represented as z1, z2, and z3 in the latent variable space.
  • the scaling factor k can then be expressed in terms of the ratios between the pairwise differences of V1, V2, V3 and the corresponding pairwise differences of z1, z2, z3, for example as the average of those ratios; one possible form is sketched below.
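  • One plausible realization of this computation is sketched below, under the assumption that k is taken as the average, over all pairs, of the channel-space distance divided by the corresponding latent-space distance; the averaging rule and the Euclidean distance metric are illustrative choices rather than requirements of the present disclosure.

```python
from itertools import combinations
import numpy as np

def scaling_factor_pairwise(originals, encodings) -> float:
    """Estimate k as the mean, over all pairs, of the distance between two pieces of second
    original information (channel space) divided by the distance between the corresponding
    two pieces of second encoded information (latent variable space)."""
    ratios = []
    for i, j in combinations(range(len(originals)), 2):
        d_v = np.linalg.norm(np.asarray(originals[i]) - np.asarray(originals[j]))
        d_z = np.linalg.norm(np.asarray(encodings[i]) - np.asarray(encodings[j]))
        if d_z > 0:
            ratios.append(d_v / d_z)
    return float(np.mean(ratios))

# Example with 3 pieces of second original information V1, V2, V3 and encodings z1, z2, z3:
# k = scaling_factor_pairwise([V1, V2, V3], [z1, z2, z3])
```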
  • alternatively, the first device may determine the scaling factor based on the ratio between the differences between the N pieces of second original information and the reference original information and the differences between the N pieces of second encoded information and the reference encoded information corresponding to the reference original information.
  • the reference original information may include one or more original information in the training data set of the first encoder.
  • the reference encoded information may be label data corresponding to the reference original information in the training data set of the first encoder, or the reference encoded information may be sent by the second device to the first device.
  • N can be 1 or an integer greater than 1.
  • the N second original information includes N second original channel information in the current communication environment.
  • the 3 second original information includes 3 original CSIs
  • the reference original information includes 1 original CSI in the training data set of the first autoencoder.
  • V1 represents the reference original information
  • the reference coding information corresponding to V1 is represented as z1 in the latent variable space
  • V2, V3, and V4 represent 3 original CSIs among the M pieces of first original information, and the three second coded information corresponding to these 3 second original information are represented as z2, z3, and z4 respectively in the latent variable space.
  • the scaling factor k can then be expressed in terms of the ratios between the differences of each of V2, V3, V4 from V1 and the corresponding differences of z2, z3, z4 from z1, for example as the average of those ratios.
  • in another example, N is 2, the two pieces of second original information may include two original CSIs in the current environment, and the reference original information includes 2 original CSIs in the training data set of the first autoencoder.
  • V1 and V2 represent the two original CSIs in the training data set of the first autoencoder
  • V3 and V4 represent the two original CSIs in the M first original information
  • V1, V2, V3, and V4 are represented as z1, z2, z3, and z4 respectively in the latent variable space.
  • the scaling factor k can then be expressed in terms of the ratios between the differences of each of V3, V4 from each of V1, V2 and the corresponding differences of z3, z4 from z1, z2, for example as the average of those ratios.
  • alternatively, the first device may determine the scaling factor based on the ratio of the difference between each piece of second original information among the N pieces of second original information and the corresponding piece of second decoded information among the N pieces of second decoded information to the difference between the second encoded information corresponding to that second original information and the second re-encoded information corresponding to that second encoded information; wherein the N pieces of second decoded information correspond one-to-one to the N pieces of second original information.
  • the first device can use the first decoder to decode the N pieces of second encoded information corresponding to the N pieces of second original information to obtain the N pieces of second decoded information, and then the first device can use the second encoder to re-encode the N pieces of second decoded information to obtain N pieces of second re-encoded information.
  • the N second original information includes N second original channel information in the current communication environment.
  • for example, N is 3 and the 3 pieces of second original information include 3 original CSIs, denoted as V1, V2, and V3.
  • the three second coded information corresponding to the three second original information are recorded as z1, z2, and z3 respectively.
  • the three second decoded information corresponding to the three second original information are represented as V1', V2', V3'.
  • the three second re-encoded information corresponding to the three second decoded information are recorded as z1', z2', z3'
  • the scaling factor k can then be expressed in terms of the ratios between the differences d(V1, V1'), d(V2, V2'), d(V3, V3') and the corresponding differences d(z1, z1'), d(z2, z2'), d(z3, z3'), for example as the average of those ratios; one possible form is sketched below.
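  • One plausible realization of this third way of obtaining k is sketched below, assuming k is the average over the N samples of the distance between each second original information and its second decoded information divided by the distance between the corresponding second encoded information and second re-encoded information; the averaging rule and distance metric are again illustrative assumptions.

```python
import numpy as np

def scaling_factor_reconstruction(originals, decoded, encodings, reencodings) -> float:
    """Estimate k as the mean ratio of d(Vi, Vi') to d(zi, zi') over the N samples, where Vi'
    is the second decoded information recovered from zi and zi' is the second re-encoded
    information obtained by re-encoding Vi'."""
    ratios = []
    for v, v_dec, z, z_re in zip(originals, decoded, encodings, reencodings):
        d_v = np.linalg.norm(np.asarray(v) - np.asarray(v_dec))
        d_z = np.linalg.norm(np.asarray(z) - np.asarray(z_re))
        if d_z > 0:
            ratios.append(d_v / d_z)
    return float(np.mean(ratios))
```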
  • in another case, the first device can measure, according to the first threshold, the second threshold, or other thresholds, the difference parameter between each piece of first encoded information among K pieces of first encoded information and the first re-encoded information corresponding to that piece of first encoded information.
  • the K pieces of first encoded information are included in the M pieces of first encoded information, that is, the K pieces of first encoded information include part or all of the M pieces of first encoded information, and K is an integer less than or equal to M.
  • if the difference parameters between the K pieces of first encoded information and the first re-encoded information corresponding to the K pieces of first encoded information are greater than or equal to the first threshold, it is determined that the performance of the first autoencoder corresponds to the first value; or, if the difference parameters between the K pieces of first encoded information and the first re-encoded information corresponding to the K pieces of first encoded information are less than or equal to the second threshold, it is determined that the performance of the first autoencoder corresponds to the second value.
  • the proportion of the K pieces of first coding information to the M pieces of first coding information is greater than or equal to a first proportion threshold, and the first proportion threshold may be preconfigured.
  • K can be expressed as K = ⌈β·M⌉, where β represents the first proportion threshold, the value of β can be 0.6, 0.7 or another value, and ⌈·⌉ indicates rounding up. Equivalently, if the number of pieces of first encoded information whose difference parameter is greater than or equal to the first threshold reaches ⌈β·M⌉, it can be determined that the performance of the first autoencoder corresponds to the first value.
  • the first device can calculate the difference parameter between each of the M first encoded information and the first re-encoded information corresponding to each of the first encoded information, and obtain M difference parameters; then the first device can determine the average value of the M difference parameters.
  • the first device may measure the average of the M difference parameters based on the first threshold, the second threshold, or other thresholds.
  • if the average of the M difference parameters is greater than or equal to the first threshold, it is determined that the performance of the first autoencoder corresponds to the first value; or, if the average of the M difference parameters is less than or equal to the second threshold, it is determined that the performance of the first autoencoder corresponds to the second value.
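  • The two aggregation rules described above (counting how many of the M difference parameters reach the threshold, and averaging the M difference parameters) can be sketched as follows; the encoding of the result as 1 for the first value and 2 for the second value is an illustrative convention only.

```python
import math

def performance_value(diff_params, threshold: float, beta: float) -> int:
    """Rule 1: if at least K = ceil(beta * M) of the M difference parameters are greater than
    or equal to the threshold, the performance corresponds to the first value (returned as 1);
    otherwise it corresponds to the second value (returned as 2)."""
    m = len(diff_params)
    k = math.ceil(beta * m)
    exceeded = sum(1 for d in diff_params if d >= threshold)
    return 1 if exceeded >= k else 2

def performance_value_by_average(diff_params, threshold: float) -> int:
    """Rule 2: compare the average of the M difference parameters with the threshold."""
    average = sum(diff_params) / len(diff_params)
    return 1 if average >= threshold else 2
```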
  • for the first threshold, the second threshold, the first value, the second value, and the difference parameter between a piece of first encoded information and a corresponding piece of first re-encoded information, reference may be made to the foregoing description, and details are not repeated here.
  • the first device may determine that the performance of the first autoencoder meets the first requirement or determine that the performance of the first autoencoder does not meet the first requirement. Further, when the performance of the first autoencoder is poor, the first device can also perform one or more of the following operations: update the first autoencoder; instruct the second device to stop using the first autoencoder; instruct the second device to use an encoder in another autoencoder; or instruct the second device to use a traditional CSI feedback method for CSI feedback.
  • Figure 8 further illustrates an optional step S805.
  • the above communication method provided by the present disclosure introduces an encoder at the decoder end of the deployed autoencoder, and the difference between the encoded information output by the encoder included in the autoencoder and the re-encoded information output by the encoder introduced at the decoder end can be used to indirectly judge the performance of the autoencoder. It can safely and effectively monitor the performance of the autoencoder while reducing information transmission overhead.
  • the disclosure also provides another performance monitoring solution for the autoencoder, which determines the performance of the autoencoder mainly by judging whether the difference relationship between pieces of encoded information is consistent with the difference relationship between the corresponding pieces of original information, or by judging whether the difference relationship between the encoded information and the re-encoded information is consistent with the difference relationship between the corresponding original information and decoded information.
  • this solution mainly uses the distance relationship between sample points in the latent variable space and the distance relationship between sample points in the channel space to measure the performance of the autoencoder.
  • Figure 10 illustrates a communication method, which mainly includes the following processes.
  • the second device sends P pieces of third coded information to the first device.
  • the second device can use the first encoder to process the P pieces of third original information and determine the P pieces of third encoded information, that is, the P pieces of third encoded information are generated based on the P pieces of third original information.
  • the P third coded information can be P projections of the P third original information in the channel space into the latent variable space.
  • the difference between the P pieces of third original information satisfies the first preset relationship
  • P is a positive integer greater than or equal to 3.
  • pairs of the P pieces of third original information form groups of third original information, each group including two pieces of third original information.
  • the P pieces of third original information are divided into multiple groups of third original information.
  • one set of third original information among the multiple sets of third original information corresponds to one piece of difference information, and the difference information corresponding to one set of third original information is used to indicate the difference between the two pieces of third original information in that set.
  • the magnitude relationship between the plurality of difference information corresponding to the plurality of sets of third original information satisfies the first preset relationship.
  • P third original information corresponds to P sample points in the channel space. When P is 3, the three third original information are recorded as V1, V2, and V3 respectively.
  • the first preset relationship may include: d(V1, V2)>d(V2, V3).
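  • As an illustration of how a set of third original information satisfying this relationship might be chosen, the following sketch scans candidate triples of stored channel samples; the exhaustive search strategy and all names are illustrative assumptions and are not part of the present disclosure.

```python
from itertools import permutations
import numpy as np

def pick_triple(samples):
    """Return one ordered triple (V1, V2, V3) of third original information such that
    d(V1, V2) > d(V2, V3), i.e., a triple satisfying the first preset relationship,
    or None if no such triple exists."""
    for i, j, k in permutations(range(len(samples)), 3):
        v1, v2, v3 = samples[i], samples[j], samples[k]
        if np.linalg.norm(v1 - v2) > np.linalg.norm(v2 - v3):
            return v1, v2, v3
    return None
```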
  • the second device may actively send P pieces of third coded information to the first device, or may send P pieces of third coded information triggered by a request from the first device.
  • Figure 10 also illustrates an optional step S1000 before S1001: the first device sends second information to the second device, and the second information is used to request P pieces of third original information that satisfy the first preset relationship, or the second information is used to request P pieces of third coded information generated based on P pieces of third original information that satisfy the first preset relationship. Furthermore, in S1001, the second device sends the P pieces of third coded information to the first device.
  • first preset relationships corresponding to the P pieces of third original information may be predefined.
  • the second device and the first device can pre-agree on the resources occupied by sending the P pieces of third coded information corresponding to the P pieces of third original information.
  • the first device can determine the P pieces of third original information after receiving the P pieces of third coded information.
  • the second device may send third information to the first device, where the third information is used to indicate the correspondence between the P pieces of third original information and the P pieces of third encoded information.
  • the third information includes the identifier, resource ID, etc. of the third original information corresponding to each piece of channel state indication information among the P pieces of third coded information.
  • S1002 The first device determines whether the differences between the P pieces of third coded information satisfy the first preset relationship.
  • pairs of P pieces of third coded information form a group of third coded information
  • the P pieces of third coded information are divided into multiple groups of third coded information.
  • One group of third encoding information among multiple groups of third encoding information corresponds to one piece of difference information
  • the difference information corresponding to one group of third encoding information is used to indicate the difference between two third encoding information in the group of third encoding information.
  • the first device may specifically determine that the magnitude relationship between the plurality of difference information corresponding to the plurality of sets of third encoding information satisfies the first preset relationship.
  • P third original information corresponds to P sample points in the channel space.
  • the three third original information are recorded as V1, V2, and V3 respectively.
  • the projections of the P third original information in the latent variable space are recorded as z1, z2, and z3 respectively.
  • the first device can determine whether z1, z2, z3 conform to the following relationship included in the first preset relationship: d(z1, z2) > d(z2, z3).
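  • A minimal sketch of this check at the first device is given below; Euclidean distance in the latent variable space is an illustrative choice of metric.

```python
import numpy as np

def first_relationship_holds(z1: np.ndarray, z2: np.ndarray, z3: np.ndarray) -> bool:
    """Check whether the three pieces of third encoded information satisfy the first preset
    relationship d(z1, z2) > d(z2, z3), mirroring d(V1, V2) > d(V2, V3) in the channel space."""
    return np.linalg.norm(z1 - z2) > np.linalg.norm(z2 - z3)
```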
  • if the differences between the P pieces of third encoded information satisfy the first preset relationship, the first device may determine that the performance of the first autoencoder meets the second requirement, and the first device may not perform any operation; if the differences between the P pieces of third encoded information do not satisfy the first preset relationship, the first device can determine that the performance of the first autoencoder does not meet the second requirement, and the first device can then send fifth information to the second device, the fifth information being used to instruct the second device to stop using the first encoder.
  • the method in Figure 10 and the method in Figure 8 can be used together. First, the principle of using the method in Figure 10 and the method in Figure 8 together will be explained below.
  • a key point in judging the performance of the autoencoder is to use the difference between the encoded information and the re-encoded information to reflect the difference between the original information and the decoded information, or to use the distance between sample points in the latent variable space to reflect the distance between sample points in the channel space. It can also be known from the several relationship diagrams illustrated in Figure 7 that, when the distance between sample points in the channel space is consistent with the distance between the corresponding projections in the latent variable space, using the difference between the encoded information and the re-encoded information to reflect the difference between the original information and the decoded information is reliable.
  • the method in Figure 10 can be regarded as a scheme for judging the effectiveness of the performance monitoring scheme of the autoencoder described in Figure 8.
  • by judging whether the difference relationship between the encoded information and the re-encoded information is consistent with the difference relationship between the original information and the decoded information, it is determined whether the performance monitoring scheme of the autoencoder described in Figure 8 can be applied.
  • Such a design can improve the effectiveness and accuracy of the autoencoder performance monitoring solution.
  • Figure 10 illustrates S1003a: when the difference between the P pieces of third encoding information satisfies the first preset relationship, the first device performs performance monitoring of the first autoencoder based on the encoding information and the re-encoding information.
  • the first device may determine that the solution illustrated in Figure 8 is invalid.
  • FIG. 10 illustrates S1003b: when the difference between the P pieces of third encoded information does not satisfy the first preset relationship, the first device stops performance monitoring of the first autoencoder.
  • Figure 11 illustrates another communication method, which mainly includes the following processes.
  • the second device sends P pieces of third coded information to the first device.
  • the second device can use the first encoder to process the P pieces of third original information and determine the P pieces of third encoded information, that is, the P pieces of third encoded information are generated based on the P pieces of third original information.
  • the P third coded information can be P projections of the P third original information in the channel space into the latent variable space.
  • the reference original information may include at least one reference original information in the training data set of the first autoencoder.
  • the difference between 1 third original information in the P third original information and the reference original information can be recorded as 1 difference information.
  • the differences between the P pieces of third original information and the reference original information can be recorded as P pieces of difference information, and the magnitude relationship between the P pieces of difference information satisfies the second preset relationship.
  • take the autoencoder-based CSI feedback scenario as an example.
  • One sample point corresponding to the reference original information in the channel space is recorded as V1.
  • P third original information corresponds to P sample points in the channel space.
  • the two third original information are recorded as V2 and V3 respectively.
  • the second preset relationship may include: d(V1,V2)>d(V1,V3).
  • when the reference original information includes Q pieces of reference original information in the training data set of the first autoencoder, where Q is a positive integer greater than 1, the difference between one piece of third original information among the P pieces of third original information and one piece of the Q pieces of reference original information can be recorded as one piece of difference information.
  • in this case, the differences between the P pieces of third original information and the reference original information can be recorded as Q×P pieces of difference information, and the magnitude relationship between the Q×P pieces of difference information satisfies the second preset relationship.
  • the two sample points corresponding to the reference original information in the channel space are denoted V1 and V2.
  • P third original information corresponds to P sample points in the channel space.
  • the two third original information are recorded as V3 and V4 respectively.
  • the second preset relationship may include: d(V1,V3)>d(V1,V4), and d(V2,V3)>d(V2,V4).
  • the second device may actively send P pieces of third coded information to the first device, or may send P pieces of third coded information triggered by a request from the first device.
  • Figure 11 also illustrates an optional step S1100 before S1101: the first device sends second information to the second device, and the second information is used to request P pieces of third original information that satisfy the second preset relationship, or the second information is used to request P pieces of third coded information generated based on P pieces of third original information that satisfy the second preset relationship.
  • the second device sends P pieces of third coded information to the first device.
  • a second preset relationship corresponding to the P pieces of third original information and the reference original information may be predefined.
  • the second device and the first device can pre-agree on the resources occupied by sending the P pieces of third coded information corresponding to the P pieces of third original information.
  • the first device can determine the P pieces of third original information after receiving the P pieces of third coded information.
  • the second device may send third information to the first device, where the third information is used to indicate the correspondence between the P pieces of third original information and the P pieces of third encoded information.
  • the third information includes the identifier, resource ID, etc. of the third original information corresponding to each of the P pieces of third coded information.
  • the first device determines whether the difference between the P pieces of third coding information and the reference coding information generated based on the reference original information satisfies the second preset relationship.
  • the difference between one piece of third piece of coded information and the reference coded information among the P pieces of third coded information is recorded as one piece of difference information.
  • the differences between the P pieces of third coded information and the reference coded information can be recorded as P pieces of difference information.
  • the magnitude relationship between the P pieces of difference information satisfies the second preset relationship.
  • the first device can specifically determine whether the magnitude relationship between the plurality of difference information corresponding to the P pieces of third coded information satisfies the second preset relationship.
  • the reference original information refers to the reference channel information.
  • One reference channel information corresponds to one sample point in the channel space, denoted as V1.
  • P third original information corresponds to P sample points in the channel space.
  • the two third original information are recorded as V2 and V3 respectively.
  • the projection of the reference channel information in the latent variable space is denoted as z1
  • the projection of the P third original information in the latent variable space is denoted as z2 and z3 respectively.
  • the first device can determine whether z1, z2, and z3 comply with the following relationship included in the second preset relationship: d(z1, z2) > d(z1, z3).
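  • A minimal sketch of this check against the reference encoded information is given below; as before, the distance metric is an illustrative assumption.

```python
import numpy as np

def second_relationship_holds(z1: np.ndarray, z2: np.ndarray, z3: np.ndarray) -> bool:
    """Check whether the differences between the third encoded information (z2, z3) and the
    reference encoded information z1 satisfy the second preset relationship
    d(z1, z2) > d(z1, z3)."""
    return np.linalg.norm(z1 - z2) > np.linalg.norm(z1 - z3)
```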
  • if the differences between the P pieces of third encoded information and the reference encoded information satisfy the second preset relationship, the first device may determine that the performance of the first autoencoder meets the second requirement, and the first device may not perform any operation; if the differences between the P pieces of third encoded information and the reference encoded information do not satisfy the second preset relationship, the first device may determine that the performance of the first autoencoder does not meet the second requirement, and then the first device may send fifth information to the second device, the fifth information being used to instruct the second device to stop using the first encoder.
  • the method in Figure 11 and the method in Figure 8 can be used together.
  • the principle of combining the method of Fig. 11 and the method of Fig. 8 will be explained below. It can be seen from the description in S804 that a key point in judging the performance of the autoencoder is to use the difference between the encoded information and the re-encoded information to reflect the difference between the original information and the decoded information, or to use the sample points in the latent variable space The distance between reflects the distance between sample points in the channel space.
  • the method in Figure 11 can be regarded as a scheme to determine the effectiveness of the performance monitoring scheme of the autoencoder described in Figure 8, by judging the difference between the encoded information and the re-encoded information, and the difference between the original information and the decoded information. Whether the difference relationship is consistent, determine whether the performance monitoring scheme of the autoencoder described in Figure 8 can be applied. Such a design can improve the effectiveness and accuracy of the autoencoder performance monitoring solution.
  • FIG. 11 illustrates S1103a: when the difference between the P third encoding information and the reference encoding information satisfies the second preset relationship, the first device performs the first autoencoder according to the encoding information and the re-encoding information. Performance monitoring.
  • the first device may determine that the solution illustrated in Figure 8 is invalid.
  • FIG. 11 illustrates S1103b: when the difference between the P pieces of third encoded information and the reference encoded information does not satisfy the second preset relationship, the first device stops performance monitoring of the first autoencoder.
  • FIG. 10 and FIG. 11 are only examples, illustrating the way in which the first device determines the effectiveness of the monitoring solution described in FIG. 8 .
  • the method of determining the validity of the performance monitoring scheme of the autoencoder in the present disclosure can be performed by the second device.
  • the second device and the first device can use the same validity determination method, which can be implemented with reference to the solution in Figure 10 or Figure 11 , which will not be described again in this disclosure.
  • the second device when determining that the solution described in Figure 8 is valid, the second device sends fourth information to the first device, and the fourth information is used to trigger the performance determination of the first autoencoder.
  • the present disclosure provides a communication device 1200 , which includes a processing module 1201 and a communication module 1202 .
  • the communication device 1200 may be a second device, or may be a communication device applied to the second device or used in conjunction with the second device, capable of implementing a communication method executed by the second device; or, the communication device 1200 may be a first device.
  • the device may also be a communication device applied to the first device or used in conjunction with the first device, capable of implementing the communication method executed by the first device.
  • the communication module may also be called a transceiver module, a transceiver, a transceiver, or a transceiver device, etc.
  • the processing module may also be called a processor, a processing board, a processing unit, or a processing device.
  • the communication module is used to perform the sending operation and receiving operation on the second device side or the first device side in the above method.
  • the device used to implement the receiving function in the communication module can be regarded as a receiving unit, and the device used to implement the sending function in the communication module can be regarded as a sending unit; that is, the communication module includes a receiving unit and a sending unit.
  • the processing module 1201 can be used to implement the processing functions of the first device in the examples described in Figures 8, 10 and 11, and the communication module 1202 can be used to implement the receiving and sending functions of the first device in the examples of Figures 8, 10 and 11.
  • alternatively, the processing module 1201 can be used to implement the processing functions of the second device in the examples described in Figures 8, 10 and 11, and the communication module 1202 can be used to implement the receiving and sending functions of the second device in the examples of Figures 8, 10 and 11.
  • the aforementioned communication module and/or processing module can be implemented through a virtual module.
  • the processing module can be implemented through a software functional unit or a virtual device, and the communication module can be implemented through a software functional unit or a virtual device.
  • the processing module or the communication module can also be implemented by a physical device.
  • the communication module can be an input/output circuit and/or a communication interface to perform input operations (corresponding to the aforementioned receiving operations) and output operations (corresponding to the aforementioned sending operations); the processing module is an integrated processor, microprocessor, or integrated circuit.
  • each functional module in each example of this disclosure may be integrated into one processor. It can also exist physically alone, or two or more modules can be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or software function modules.
  • the present disclosure also provides a communication device 1300.
  • the communication device 1300 may be a chip or a chip system.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the communication device 1300 can be used to implement the functions of any network element in the communication system described in the foregoing examples.
  • Communication device 1300 may include at least one processor 1310.
  • the processor 1310 is coupled to a memory, and the memory may be located within the device, or the memory may be integrated with the processor, or the memory may be located outside the device.
  • the communication device 1300 may further include at least one memory 1320.
  • the memory 1320 stores the computer programs or instructions and/or data necessary to implement any of the above examples; the processor 1310 may execute the computer program stored in the memory 1320 to complete the method in any of the above examples.
  • the communication device 1300 may also include a communication interface 1330, and the communication device 1300 may interact with other devices through the communication interface 1330.
  • the communication interface 1330 may be a transceiver, a circuit, a bus, a module, a pin, or other types of communication interfaces.
  • the communication interface 1330 in the device 1300 can also be an input-output circuit, which can input information (or receive information) and output information (or send information)
  • the processor is an integrated processor, a microprocessor, an integrated circuit, or a logic circuit, and the processor can determine output information based on input information.
  • Coupling in this disclosure is an indirect coupling or communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information interaction between devices, units or modules.
  • the processor 1310 may cooperate with the memory 1320 and the communication interface 1330.
  • the present disclosure does not limit the specific connection medium between the above-mentioned processor 1310, memory 1320 and communication interface 1330.
  • the bus 1340 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus can be divided into address bus, data bus, control bus, etc. For ease of presentation, only one thick line is used in Figure 13, but it does not mean that there is only one bus or one type of bus.
  • a processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which may implement or execute the methods, steps and logical block diagrams disclosed in the present disclosure.
  • a general-purpose processor may be a microprocessor or any conventional processor, etc.
  • the steps of the method disclosed in conjunction with the present disclosure can be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the memory may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or it may be a volatile memory, such as a random-access memory (RAM).
  • the memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • the memory in the present disclosure can also be a circuit or any other device capable of realizing a storage function, used to store program instructions and/or data.
  • the communication device 1300 can be applied to the first device.
  • the communication device 1300 can be the first device, or a device capable of supporting the first device in implementing the functions of the first device in any of the above examples. The memory 1320 stores computer programs (or instructions) and/or data that implement the functions of the first device in any of the above examples.
  • the processor 1310 can execute the computer program stored in the memory 1320 to complete the method executed by the first device in any of the above examples.
  • the communication interface in the communication device 1300 can be used to interact with the second device, send information to the second device or receive information from the second device.
  • the communication device 1300 can be applied to the second device; specifically, the communication device 1300 can be the second device, or a device capable of supporting the second device in realizing the functions of the second device in any of the above examples.
  • the memory 1320 stores computer programs (or instructions) and/or data that implement the functions of the second device in any of the above examples.
  • the processor 1310 can execute the computer program stored in the memory 1320 to complete the method executed by the second device in any of the above examples.
  • the communication interface in the communication device 1300 can be used to interact with the first device, send information to the first device or receive information from the first device.
  • the communication device 1300 provided in this example can be applied to a first device to complete the method performed by the first device, or applied to a second device to complete the method performed by the second device. Therefore, the technical effects that can be obtained can be referred to the above method examples and will not be described again here.
  • the present disclosure provides a communication system, including a first device and a second device, wherein the first device and the second device can implement the communication methods provided in the examples shown in Figures 8, 10 and 11.
  • the technical solutions provided by this disclosure can be implemented in whole or in part through software, hardware, firmware, or any combination thereof.
  • when implemented using software, the technical solutions may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • when the computer program instructions are loaded and executed on a computer, the processes or functions described in the present disclosure are produced in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, a second device, a first device, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center through wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave).
  • the computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the available media may be magnetic media (e.g., a floppy disk, hard disk, or tape), optical media (e.g., a digital video disc (DVD)), or semiconductor media, etc.
  • the examples of the present disclosure may refer to each other; for example, methods and/or terms between method examples may refer to each other, functions and/or terms between device examples may refer to each other, and functions and/or terms between device examples and method examples may refer to each other.
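
Purely for orientation, and not as part of the published application, the following minimal Python sketch mirrors the processor/memory/communication-interface split of the communication device 1300 described in this list; all class and method names here are hypothetical.

```python
class LoopbackInterface:
    """Stand-in communication interface, used only to make the sketch runnable;
    a real interface would exchange information with the peer device."""

    def __init__(self, incoming):
        self.incoming = incoming   # information "received" from the peer device
        self.outgoing = None       # information "sent" to the peer device

    def receive(self):
        return self.incoming

    def send(self, data):
        self.outgoing = data


class CommunicationDevice1300:
    """Hypothetical sketch: the memory holds a program (or instructions) and/or data,
    the processor executes that program, and the communication interface is used to
    interact with the peer device."""

    def __init__(self, stored_program, interface):
        self.memory = {"program": stored_program}  # plays the role of memory 1320
        self.interface = interface                 # plays the role of the communication interface

    def run(self):
        # plays the role of processor 1310: execute the stored program on received information
        received = self.interface.receive()
        result = self.memory["program"](received)
        self.interface.send(result)
        return result


# Usage: a trivial stored program that simply echoes the received information
device = CommunicationDevice1300(lambda info: info, LoopbackInterface("first encoded information"))
device.run()
```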

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure relates to a communication method and apparatus, which are applied to the technical field of communications. The communication method may be performed by a first device and includes: receiving M pieces of first encoded information from a second device; determining M pieces of first decoded information by using a first decoder and the M pieces of first encoded information input into the first decoder, wherein the first decoder and a first encoder belong to a first autoencoder, and the first encoder is used to process M pieces of input first original information to determine the M pieces of first encoded information; determining M pieces of first re-encoded information by using a second encoder and the M pieces of first decoded information input into the second encoder; and determining the performance of the first autoencoder according to the difference between each of the M pieces of first encoded information and the first re-encoded information corresponding to each piece of first encoded information. According to the present disclosure, the accuracy of autoencoder performance monitoring can be improved while reducing channel state information feedback overhead.
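
Purely as an illustrative aid, and not as part of the published application, the following minimal Python/NumPy sketch shows one way the re-encoding-based monitoring summarized above could be computed at the first device; the stand-in linear decoder and re-encoder, the squared-error difference measure, the threshold, and all names are assumptions made only for this example.

```python
import numpy as np

def monitor_first_autoencoder(first_encoded, first_decoder, second_encoder, threshold):
    """Estimate first-autoencoder performance from M pieces of first encoded information.

    first_encoded  : array of shape (M, K), received from the second device.
    first_decoder  : callable mapping encoded information to decoded information.
    second_encoder : callable mapping decoded information to re-encoded information.
    The squared-error difference measure and the threshold are illustrative assumptions.
    """
    first_decoded = first_decoder(first_encoded)        # M pieces of first decoded information
    first_reencoded = second_encoder(first_decoded)      # M pieces of first re-encoded information
    # difference between each piece of first encoded information and its re-encoded counterpart
    per_piece_diff = np.mean((first_encoded - first_reencoded) ** 2, axis=1)
    average_diff = float(per_piece_diff.mean())
    return average_diff, average_diff <= threshold       # smaller difference -> better performance

# Stand-in linear "models" used only so the sketch runs end to end
rng = np.random.default_rng(0)
W_dec = rng.standard_normal((16, 64))                    # decoder: 16-dim code -> 64-dim information
W_enc = rng.standard_normal((64, 16))                    # re-encoder: 64-dim information -> 16-dim code
decoder = lambda z: z @ W_dec
reencoder = lambda x: x @ W_enc
first_encoded = rng.standard_normal((8, 16))             # M = 8 pieces of first encoded information
print(monitor_first_autoencoder(first_encoded, decoder, reencoder, threshold=1.0))
```

With these random stand-ins the computed difference will typically exceed the threshold; with a trained first decoder and a second encoder matched to the first encoder, a small difference would indicate good first-autoencoder performance.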
PCT/CN2023/116088 2022-08-31 2023-08-31 Procédé et appareil de communication WO2024046419A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211056078.6 2022-08-31
CN202211056078.6A CN117676630A (zh) 2022-08-31 2022-08-31 一种通信方法及装置

Publications (1)

Publication Number Publication Date
WO2024046419A1 true WO2024046419A1 (fr) 2024-03-07

Family

ID=90064933

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/116088 WO2024046419A1 (fr) 2022-08-31 2023-08-31 Procédé et appareil de communication

Country Status (2)

Country Link
CN (1) CN117676630A (fr)
WO (1) WO2024046419A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178523A (zh) * 2019-08-02 2020-05-19 腾讯科技(深圳)有限公司 一种行为检测方法、装置、电子设备及存储介质
US20210273707A1 (en) * 2020-02-28 2021-09-02 Qualcomm Incorporated Neural network based channel state information feedback
WO2022040655A1 (fr) * 2020-08-18 2022-02-24 Qualcomm Incorporated Apprentissage fédéré de paires d'autocodeur pour communication sans fil

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SAMSUNG: "Views on Evaluation of AI/ML for CSI feedback enhancement", 3GPP DRAFT; R1-2203897, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. RAN WG1, no. e-Meeting; 20220509 - 20220520, 29 April 2022 (2022-04-29), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France, XP052153235 *

Also Published As

Publication number Publication date
CN117676630A (zh) 2024-03-08

Similar Documents

Publication Publication Date Title
US11477770B2 (en) Asynchronous multi-point transmission schemes
US20200136776A1 (en) Facilitating incremental downlink control information design to support downlink control information scheduling
US20240137082A1 (en) Communication method and apparatus
WO2024046419A1 (fr) Procédé et appareil de communication
WO2023125660A1 (fr) Procédé et dispositif de communication
WO2023098860A1 (fr) Procédé de communication et appareil de communication
CN113039732A (zh) Acqi解码置信度检测
WO2024008004A1 (fr) Procédé et appareil de communication
WO2024046215A1 (fr) Procédé et appareil de communication
CN114492784A (zh) 神经网络的测试方法和装置
WO2024131900A1 (fr) Procédé de communication et appareil de communication
WO2023125996A1 (fr) Procédé et appareil de précodage de liaison montante
WO2024094038A1 (fr) Procédé de commutation ou de mise à jour d'un modèle d'ia et appareil de communication
WO2023202514A1 (fr) Procédé et appareil de communication
CN118118133A (zh) 一种通信方法及装置
WO2023231881A1 (fr) Procédé et appareil d'application de modèle
US20240211770A1 (en) Communication method and apparatus
US20240211769A1 (en) Communication method and apparatus
WO2023036280A1 (fr) Procédé et appareil de test de modèle
WO2023198018A1 (fr) Procédé et appareil d'envoi de signal de liaison montante, ainsi que procédé et appareil de réception de signal de liaison montante
WO2023036323A1 (fr) Procédé et appareil de communication
WO2023279947A1 (fr) Procédé et appareil de communication
WO2024051789A1 (fr) Procédé de gestion de faisceau
US20240211810A1 (en) Model test method and apparatus
US20240048207A1 (en) Method and apparatus for transmitting and receiving feedback information based on artificial neural network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23859448

Country of ref document: EP

Kind code of ref document: A1