CN117648966A - AI model training method, apparatus, device and readable storage medium
- Publication number: CN117648966A
- Application number: CN202210981397.1A
- Authority: CN (China)
- Prior art keywords: model, training, information, channel state information
- Legal status: Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B7/00—Radio transmission systems, i.e. using radiation field
- H04B7/02—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
- H04B7/04—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
- H04B7/0413—MIMO systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/02—Arrangements for detecting or preventing errors in the information received by diversity reception
- H04L1/06—Arrangements for detecting or preventing errors in the information received by diversity reception using space diversity
- H04L1/0618—Space-time coding
- H04L1/0675—Space-time coding characterised by the signaling
- H04L1/0693—Partial feedback, e.g. partial channel state information [CSI]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L5/00—Arrangements affording multiple use of the transmission path
Abstract
The embodiments of the present application provide an AI model training method, apparatus, device and readable storage medium. The AI model training method includes: acquiring uplink channel state information; training an AI model according to the uplink channel state information, wherein the AI model includes a first AI model and/or a second AI model; and/or acquiring downlink channel state information and a first result, wherein the first result represents a calculation result obtained by performing inference on the downlink channel state information in the first AI model, and training the second AI model according to the first result and the downlink channel state information. The first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
Description
Technical Field
The embodiments of the present application relate to the field of communication technologies, and in particular to an AI model training method, apparatus, device and readable storage medium.
Background
Channel state information (Channel State Information, CSI) describes the current channel environment. In a mobile communication network, the base station transmits a channel state information reference signal (CSI-RS), and the terminal evaluates the channel state information and feeds it back to the base station in quantized form. By introducing CSI feedback, the base station side can adjust the transmission of the channel state information reference signal in time, thereby reducing the error rate at the terminal and obtaining an optimal received signal.
In wireless communication, channel prediction can be used to compensate for the delay between channel measurement and actual scheduling, thereby improving throughput. How to reduce the transmission overhead of the data required for training while guaranteeing model performance is a problem to be solved.
Disclosure of Invention
The embodiments of the present application aim to provide an AI model training method, apparatus, device and readable storage medium, so as to solve the problem of reducing the transmission overhead of the data required for training while guaranteeing AI model performance.
In a first aspect, an AI model training method is provided, applied to a network side device, and includes:
acquiring uplink channel state information;
training an AI model according to the uplink channel state information, wherein the AI model comprises a first AI model and/or a second AI model;
and/or,
acquiring downlink channel state information and a first result, wherein the first result represents a calculation result obtained by performing inference on the downlink channel state information in a first AI model;
training a second AI model according to the first result and the downlink channel state information;
the first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
Optionally, the method further comprises:
and transmitting the trained first AI model to a terminal.
Optionally, the acquiring the uplink channel state information includes:
receiving one or more SRS or DMRS from one or more terminals;
and acquiring uplink channel state information according to the one or more SRS or the DMRS.
Optionally, training the second AI model according to the first result and the downlink channel state information includes:
obtaining a calculation result of second AI model inference according to the first result and the second AI model;
acquiring a loss value between the calculation result of the second AI model inference and the downlink channel state information;
and training the second AI model according to the loss value.
Optionally, training the second AI model according to the loss value includes:
determining first information according to the loss value, wherein the first information comprises: training depth information and/or training mode information;
and training the second AI model according to the first information.
Optionally, the training the second AI model according to the first information includes:
acquiring gradient information, wherein the gradient information is obtained by back-propagating the loss value in the second AI model;
and training the second AI model according to the first information and the gradient information.
Optionally, the method further comprises:
and sending second information to the terminal according to the first information, wherein the second information is used for indicating the end of training.
Optionally, the method further comprises:
sending the gradient information to a terminal according to the first information, so that the terminal continues back propagation of the gradient information in the first AI model and updates the weights of the first AI model;
and/or the number of the groups of groups,
and receiving third information according to the first information, wherein the third information is used for indicating whether training is finished or not.
Optionally, the method further comprises:
configuring a first parameter of the AI model;
wherein the first parameter comprises one or more of:
a training interval;
a number of training samples;
a number of training iterations.
Optionally, the method further comprises:
and sending at least one of the number of training iterations, the training interval, and the number of training samples to the terminal.
Optionally, the method further comprises:
determining whether to reconfigure the training interval;
if it is determined that the training interval needs to be reconfigured, reconfiguring the training interval;
and sending the reconfigured training interval to the terminal.
Optionally, determining whether to reconfigure the training interval includes:
acquiring communication quality;
and judging whether to reconfigure the training interval according to the communication quality.
Optionally, determining whether to reconfigure the training interval according to the communication quality includes:
obtaining the highest bit error rate obtained when the current first AI model performs channel compression feedback, and a bit error rate threshold;
comparing the highest bit error rate with the bit error rate threshold;
if the highest bit error rate is less than or equal to the bit error rate threshold, increasing the training interval;
or,
and if the highest bit error rate is greater than the bit error rate threshold, reducing the training interval.
In a second aspect, an AI model training method is provided, applied to a terminal, and includes:
transmitting uplink channel state information to network side equipment, wherein the uplink channel state information is used for training an AI model, and the AI model comprises a first AI model and/or a second AI model;
and/or,
transmitting downlink channel state information and a first result to a network side device, wherein the first result represents a calculation result obtained by performing inference on the downlink channel state information in a first AI model, and the first result and the downlink channel state information are used for training a second AI model;
the first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
Optionally, the method further comprises:
the trained first AI model is received.
Optionally, the method further comprises:
receiving second information, wherein the second information is used for indicating the end of training;
the second information is sent by the network side device according to first information, the first information is determined according to a loss value, and the first information includes: information of a training depth and/or information of a training mode; the loss value represents the loss between a calculation result of second AI model inference and the downlink channel state information, and the calculation result of the second AI model inference is obtained by computing the first result in the second AI model.
Optionally, the method further comprises:
receiving gradient information, wherein the gradient information is obtained by back-propagating a loss value in the second AI model, and the loss value represents the loss between the calculation result of second AI model inference and the downlink channel state information;
and continuing back propagation of the gradient information in the first AI model, and updating the weight of the first AI model.
Optionally, the method further comprises:
and sending third information, wherein the third information is used for indicating whether training is finished or not.
Optionally, the method further comprises:
at least one of a training iteration number, a training interval, and a training sample number is received.
In a third aspect, an AI model training apparatus is provided, which is applied to a network side device, and includes:
the first acquisition module is used for acquiring the uplink channel state information;
the first training module is used for training an AI model according to the uplink channel state information, and the AI model comprises a first AI model and/or a second AI model;
and/or,
the second acquisition module is used for acquiring downlink channel state information and a first result, wherein the first result represents a calculation result obtained by performing inference on the downlink channel state information in a first AI model;
The second training module is used for training a second AI model according to the first result and the downlink channel state information;
the first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
In a fourth aspect, an AI model training apparatus is provided, applied to a terminal, and includes:
a sixth sending module, configured to send uplink channel state information to a network side device, where the uplink channel state information is used to train an AI model, and the AI model includes a first AI model and/or a second AI model;
and/or,
a seventh sending module, configured to send downlink channel state information and a first result to a network side device, where the first result represents a calculation result obtained by performing inference on the downlink channel state information in a first AI model, and the first result and the downlink channel state information are used to train a second AI model;
the first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
In a fifth aspect, there is provided a communication device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the method described in the first aspect or the second aspect.
In a sixth aspect, there is provided a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the method described in the first aspect or the second aspect.
In one implementation of the embodiments of the present application, the network side device acquires uplink channel state information sent by the terminal and trains the first AI model and/or the second AI model by exploiting the reciprocity of the uplink and downlink channels. This avoids the procedure required when training with downlink channel state information, in which the network side device sends a downlink channel reference signal to the terminal, the terminal performs downlink channel measurement, and the terminal then feeds the downlink channel measurement result back to the network side. The transmission overhead of the data required for training can therefore be effectively reduced while the performance of the AI model is guaranteed.
In another embodiment of the present application, the network side device acquires downlink channel state information and a first result sent by the terminal, and trains the second AI model according to the first result and the downlink channel state information. Apart from the downlink channel state information itself, the training data sent by the terminal to the network side device amounts to the data size of one feedback of the first result per training iteration. Compared with the terminal feeding back all layer parameters of the trained first AI model to the network side device, the transmission overhead is significantly reduced, so the transmission overhead of the data required for training the second AI model can be effectively reduced while the performance of the second AI model is guaranteed.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 is a schematic diagram of a communication system provided in an embodiment of the present application;
FIG. 2 is one of the flowcharts of the AI model training method provided by the embodiments of the present application;
FIG. 3 is a second flowchart of an AI model training method provided by an embodiment of the application;
FIG. 4 is a third flowchart of an AI model training method provided by an embodiment of the application;
FIG. 5 is a fourth flowchart of an AI model training method provided by an embodiment of the application;
FIG. 6 is a schematic illustration of selection of model retraining depths;
FIG. 7 is a schematic illustration of the effect of model retraining depth on model performance;
FIG. 8 is a schematic diagram of the reciprocity of the uplink and downlink channels;
FIG. 9 is one of the schematic diagrams of the AI model training apparatus provided in the embodiments of the application;
FIG. 10 is a second schematic diagram of an AI model training apparatus provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of a communication device provided by an embodiment of the present application;
fig. 12 is a schematic diagram of pre-training provided by an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Furthermore, the use of "and/or" in the specification and claims means at least one of the connected objects; for example, A and/or B covers three cases: A alone, B alone, and both A and B.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Referring to fig. 1, an architecture diagram of a wireless communication system according to an embodiment of the present application is provided. The wireless communication system may include a network side device 11 and a terminal 12, and the terminal 12 may communicate (transmit signaling or data) with the network side device 11. In practical applications, the connection between the devices may be wireless; for convenience and an intuitive representation of the connection relationship between the devices, a solid line is used in fig. 1.
The terminal referred to in the present application may be a mobile phone, a tablet personal computer, a laptop computer (also called a notebook), a personal digital assistant (PDA), a palmtop computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, a vehicle-mounted device (VUE), a pedestrian terminal (PUE), a smart home device (a home device with a wireless communication function, such as a refrigerator, a television, a washing machine, or furniture), a game machine, a personal computer (PC), a teller machine, a self-service machine, or another terminal-side device. The wearable device includes: a smart watch, smart bracelet, smart earphones, smart glasses, smart jewelry (smart bangle, smart ring, smart necklace, smart anklet, etc.), smart wristband, smart clothing, and the like. It should be noted that the embodiments of the present application do not limit the specific type of terminal.
The network side device referred to in the present application may include an access network device, which may also be referred to as a radio access network device, a radio access network (Radio Access Network, RAN), a radio access network function, or a radio access network element. The access network device may include a base station, a WLAN access point, a WiFi node, or the like; the base station may be referred to as a Node B, an evolved Node B (eNB), an access point, a base transceiver station (Base Transceiver Station, BTS), a radio base station, a radio transceiver, a basic service set (Basic Service Set, BSS), an extended service set (Extended Service Set, ESS), a home Node B, a home evolved Node B, a transmitting receiving point (Transmitting Receiving Point, TRP), or some other suitable term in the art.
Referring to fig. 2, an embodiment of the present application provides an AI model training method, which is applied to a network side device, and specifically includes the steps of: step 201, step 202, and/or step 203, step 204.
Step 201: acquiring uplink channel state information;
the uplink channel state information may also be referred to as uplink channel data;
Optionally, step 201 may include: receiving one or more sounding reference signals (Sounding Reference Signal, SRS) or demodulation reference signals (Demodulation Reference Signal, DMRS) from one or more terminals, and acquiring the uplink channel state information according to the one or more SRS or DMRS. For example, the network side device obtains measured uplink channel state information based on multiple SRS or DMRS from one terminal, or performs uplink channel measurement based on one or more SRS or DMRS sent by multiple terminals. In this way the network side can obtain enough uplink channel data samples, which improves the accuracy of the AI model obtained through training.
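For illustration, the following is a minimal Python sketch of how an uplink channel estimate might be formed from a received SRS; the least-squares estimator, the placeholder data, and all names (ls_channel_estimate, srs_pilots) are assumptions for illustration, not taken from the application:

```python
import torch

def ls_channel_estimate(y: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Per-subcarrier least-squares estimate H = Y / X for one antenna pair."""
    return y / x

# Known SRS pilots and the symbols received at the network side (toy data);
# estimates collected over many SRS/DMRS occasions (and/or many terminals)
# form the uplink channel data samples used as the training set.
srs_pilots = torch.randn(64, dtype=torch.complex64)
received = srs_pilots * torch.randn(64, dtype=torch.complex64)  # toy channel
h_uplink = ls_channel_estimate(received, srs_pilots)
```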
Step 202: training an AI model according to the uplink channel state information;
wherein the AI model may comprise a first AI model and/or a second AI model.
The first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
In step 202, the network side device may train the AI model of the network side device using the received uplink channel state information as training data, that is, adjust parameters in the AI model using the training data. It can be appreciated that the training of the AI model in combination with the training data determined in the embodiments of the present application may be performed in an existing manner, which is not described herein.
The first AI model may be referred to as the encoder, and the second AI model as the decoder, of the AI model at the network side device.
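For concreteness, the following is a minimal sketch of such an encoder/decoder pair in Python (PyTorch); the fully connected structure and all layer sizes are illustrative assumptions, not taken from the application:

```python
import torch.nn as nn

CSI_DIM, CODE_DIM = 2048, 64   # flattened CSI size and compressed code size (assumed)

class Encoder(nn.Module):
    """First AI model: compresses downlink CSI (deployed at the terminal)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(CSI_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, CODE_DIM))
    def forward(self, csi):
        return self.net(csi)

class Decoder(nn.Module):
    """Second AI model: decompresses downlink CSI (kept at the network side)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(CODE_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, CSI_DIM))
    def forward(self, code):
        return self.net(code)
```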
Optionally, after the training process in step 202 is finished, the first AI model may be sent to a terminal.
Step 203: acquiring downlink channel state information and a first result, wherein the first result represents a calculation result obtained by performing inference on the downlink channel state information in a first AI model;
For example, the network side device receives the downlink channel state information and the first result from the terminal: the network side device sends a channel state information reference signal (CSI-RS) to the terminal, the terminal performs downlink channel measurement according to the CSI-RS, and the terminal sends the measured downlink channel state information to the network side device.
Optionally, the first result is a forward propagation intermediate result obtained by the terminal forward-propagating (Forward Propagation) the measured downlink channel state information in the first AI model at the first iteration. Forward propagation refers to the process of computing progressively from the input feature vector, through low-level features to abstract high-level features, to the output, and finally to the cost function, until the loss is obtained. Forward propagation computes and stores the intermediate variables of the neural network in order from the input layer to the output layer.
Alternatively, the downlink channel state information may be compressed and decompressed in an existing manner.
Step 204: and training a second AI model according to the first result and the downlink channel state information.
It will be appreciated that in one embodiment, the method illustrated in fig. 2 may include steps 201 and 202, and that in this embodiment training of the model may be understood as pre-training.
In this embodiment, the pre-training process utilizes the reciprocity between the uplink and downlink channels: pre-training at the network side with the uplink channel state information can greatly reduce the transmission overhead of the training data. Further, since the pre-training data comes from measured data, the adaptability of the model to the actual application scenario is improved.
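A minimal sketch of this network-side pre-training, assuming the Encoder/Decoder sketch above; uplink_samples stands in for the uplink channel data obtained from SRS/DMRS processing, and the loss, optimizer, iteration count, and batch size are illustrative choices:

```python
import torch

encoder, decoder = Encoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                       lr=1e-3)
loss_fn = torch.nn.MSELoss()

# Placeholder for measured uplink channel samples; by uplink/downlink
# reciprocity they serve as coarse-grained training data for the autoencoder.
uplink_samples = torch.randn(10000, CSI_DIM)

for it in range(5000):                      # pre-configured iteration count
    idx = torch.randint(0, len(uplink_samples), (64,))
    batch = uplink_samples[idx]
    loss = loss_fn(decoder(encoder(batch)), batch)   # reconstruction loss
    opt.zero_grad(); loss.backward(); opt.step()
```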
In another embodiment, the method shown in fig. 2 may include steps 203 and 204, in which case training of the model may be understood as retraining.
In this embodiment, during retraining, the first AI model may be sent to the terminal while the second AI model is retrained at the network side. Considering that the number of iterations required for retraining is small, compared with retraining the whole model at the terminal or the network side and then transmitting it, this embodiment only needs to exchange a small number of intermediate training results, which reduces the data overhead of the retraining stage.
In yet another embodiment, the method shown in fig. 2 may include step 201, step 202, step 203 and step 204, where the order of step 201 and step 203 is not limited, i.e. step 201 and step 202 may be performed first, then step 203 and step 204 may be performed, or step 201 and step 203 may be performed simultaneously, and training of the model in this embodiment includes pre-training and retraining.
In this embodiment, the transmission overhead during AI model training can be reduced while the performance of the first AI model and/or the second AI model is guaranteed. Based on the reciprocity of the uplink and downlink channels, the first AI model and/or the second AI model are pre-trained with uplink channel state information, a small number of iterations are then performed with measured downlink channel state information, and the second AI model is retrained at the network side. Provided the channel scenario does not change rapidly and severely, this mechanism allows pre-training at a very low frequency and retraining at a low frequency while keeping the first AI model and/or the second AI model usable.
Further, the retraining data come from the measured data in the retraining process, so that the adaptability of the second AI model to the actual application scene is improved.
In one embodiment of the present application, step 204 may include:
step 2041: obtaining a calculation result of second AI model inference according to the first result and the second AI model;
that is, the network side device forward-propagates the first result in the second AI model to obtain the calculation result of second AI model inference;
the forward propagation (Forward propagation) refers to: the data starts from the input layer, passes through the hidden layer (if any) in sequence, and finally reaches the output layer.
Each time the data propagates through one more layer, the level of information represented by the node outputs becomes higher. The value output by a node is the weighted sum of the output values of all the nodes in the previous layer connected to it.
The calculation result of the second AI model inference may also be referred to as the forward propagation intermediate layer result of the second AI model.
Step 2042: acquiring a loss value between the calculation result of the second AI model inference and the downlink channel state information;
that is, the network side device calculates a loss value (Loss value) between the calculation result of the second AI model inference and the downlink channel state information;
step 2043: and training a second AI model according to the loss value.
In this embodiment, the second AI model is retrained via the loss value; only a small number of retraining iterations and a small amount of retraining data are needed to obtain good second AI model performance.
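A sketch of steps 2041-2043, reusing the decoder and loss_fn from the pre-training sketch above; the decoder-only Adam optimizer is an illustrative choice:

```python
import torch

opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-4)  # update decoder only

def retrain_step(first_result: torch.Tensor, downlink_csi: torch.Tensor) -> float:
    recon = decoder(first_result)        # step 2041: decoder inference on the first result
    loss = loss_fn(recon, downlink_csi)  # step 2042: loss against measured downlink CSI
    opt_dec.zero_grad()
    loss.backward()                      # step 2043: back-propagate the loss ...
    opt_dec.step()                       # ... and update the decoder weights
    return loss.item()
```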
In one embodiment of the present application, step 2043 includes:
step 20431: determining first information according to the loss value, wherein the first information comprises: training depth information and/or training mode information;
The information of the training depth is used to indicate the model type involved in training (e.g., retraining), the number of model layers involved, or the like.
The information of the training mode is used to indicate mode 1), in which only the second AI model participates in training (e.g., retraining), or mode 2), in which both the first AI model and the second AI model participate in training (e.g., retraining).
In the training process of step 203 and step 204, for a second AI model of a certain depth, taking a fully connected model as an example, not all layers need to participate in the training (e.g., retraining) process; by freezing some of the layer parameters, only a small number of layers may be allowed to participate in training (e.g., retraining), as shown in fig. 6.
From the results in fig. 7, it can be seen that as the number of layers participating in the training (e.g., retraining) process increases, the performance gain obtained by the second AI model keeps decreasing; that is, selecting too deep a training (e.g., retraining) depth brings very limited performance improvement to the second AI model. Taking the difference in training depth into account, the separate training of the first AI model and the second AI model is divided into two cases, according to whether the training (e.g., retraining) depth reaches the first AI model: the mode in which only the second AI model participates in training, and the mode in which both the first AI model and the second AI model participate in training, as shown in fig. 4 and fig. 5.
As the basis for judging which training mode to use, the Loss value calculated in the first iteration can be considered; the network side device can preset Loss thresholds according to the type of the first AI model, as shown in Table 1.
Table 1: basis for selecting the number of training (e.g., retraining) layers.
According to Table 1, the network side may determine the number of model layers that need to participate in the training (e.g., retraining) process from the first AI model type currently in use and the Loss value calculated in the first iteration, and determine which training (e.g., retraining) mode to adopt according to whether those layers involve the first AI model.
For example, when the type of the first AI model is model A, if the Loss belongs to (A1, A2), the number of retraining layers is determined to be 1. For another example, when the type of the first AI model is model B, if the Loss belongs to (B2, B3), the number of retraining layers is determined to be 2.
Step 20432: training (e.g., retraining) the second AI model according to the first information, that is, training the second AI model according to the information of the training depth and/or the information of the training mode.
In this embodiment, the selection of the training (e.g., retraining) mode is flexible and can be adjusted according to the Loss value, so that the data transmission overhead of the training (e.g., retraining) process is kept as small as possible.
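A sketch of the Table 1 lookup and of the layer freezing, assuming the Decoder sketch above; the thresholds and the loss-to-layer-count mapping are hypothetical placeholders for the preset per-model values:

```python
LOSS_THRESHOLDS = [0.02, 0.05, 0.10]   # hypothetical per-model Loss thresholds

def select_retrain_layers(first_loss: float) -> int:
    """Map the first-iteration Loss to a number of layers (illustrative rule:
    the larger the Loss interval it falls in, the more layers retrain)."""
    return 1 + sum(first_loss > t for t in LOSS_THRESHOLDS)

def freeze_all_but_last(decoder, n_trainable: int) -> None:
    """Freeze every parameterized layer except the last n_trainable ones."""
    layers = [m for m in decoder.net if hasattr(m, "weight")]
    for layer in layers[:len(layers) - n_trainable]:
        for p in layer.parameters():
            p.requires_grad_(False)
```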
In one embodiment of the present application, step 20432 includes:
step 204321: acquiring gradient information, wherein the gradient information is obtained by back-propagating the loss value in the second AI model;
for example, back propagation (Back Propagation) may calculate the gradient of the loss value with respect to each parameter through the chain rule of derivatives, and the parameters are then updated according to the gradient information.
Step 204322: and training the second AI model according to the first information and the gradient information.
It will be appreciated that after step 204322 is performed, the process may return to the step in which the terminal obtains downlink channel state information according to the channel state information reference signal (Channel State Information-Reference Signal, CSI-RS), and then the training steps in step 203 and step 204 are performed, until training ends.
In one embodiment of the present application, the method further comprises:
and sending second information to the terminal according to the first information, wherein the second information is used for indicating that training (such as retraining) is finished.
For example, if the information of the training mode in the first information indicates that only the second AI model participates in training (e.g., retraining), the network side device needs to indicate to the terminal that training is completed. Further, if the information of the training mode indicates that only the second AI model participates in training and the Loss value is lower than or equal to a preset value, the network side device may indicate to the terminal that training is completed.
In the training (e.g., retraining) mode in which the first AI model is frozen and only the second AI model participates, the network side device notifies the terminal that the second AI model at the network side has completed training (e.g., retraining).
The second information may also be referred to as retraining iterative acknowledgement signaling.
In another embodiment of the present application, the method further comprises:
and sending the gradient information to the terminal according to the first information, so that the terminal continues back propagation of the gradient information in the first AI model and updates the weights of the first AI model.
For example, the gradient information is sent to the terminal in the case where both the first AI model and the second AI model participate in retraining. In this embodiment, during the retraining of the first AI model and the second AI model, the network side device may determine the required retraining depth according to the Loss value calculated in the first iteration, and determine the specific retraining mode according to that depth. When the retraining depth does not involve the first AI model, the retraining of the first AI model and the second AI model is completed entirely at the network side; when the retraining depth involves the first AI model and the second AI model, the network side needs to send the gradient information and the Loss value to the terminal, and the terminal continues to back-propagate the gradient information in the first AI model and updates the weights of the first AI model.
It can be appreciated that, during the retraining of the first AI model and the second AI model, the terminal needs to feed back the forward propagation intermediate result of the first AI model to the network side. When the retraining depth does not involve the first AI model, this feedback occurs only in the first retraining iteration; when the retraining depth involves the first AI model, the terminal feeds back the forward propagation intermediate result of the first AI model in every retraining iteration. In addition, the terminal feeds back channel state information (CSI) to the network side in every retraining iteration.
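Where the training depth reaches the first AI model, the gradient handoff can be sketched as follows, reusing the encoder, decoder, and loss_fn from the sketches above; downlink_csi and measured_downlink_csi are illustrative placeholder tensors, and the over-the-air transport of the gradient is abstracted away:

```python
import torch

# Network side: back-propagate through the decoder and read the gradient
# at its input, i.e. at the first result fed back by the terminal.
first_result = first_result.detach().requires_grad_(True)
loss = loss_fn(decoder(first_result), downlink_csi)
loss.backward()
grad_to_terminal = first_result.grad       # gradient information to send

# Terminal side: continue back-propagation through the encoder.
encoder.zero_grad()
code = encoder(measured_downlink_csi)      # forward pass in the first AI model
code.backward(grad_to_terminal)            # resume backprop from the received gradient
with torch.no_grad():                      # plain SGD update (illustrative)
    for p in encoder.parameters():
        p -= 1e-4 * p.grad
```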
Further optionally, the method further comprises:
based on the first information, third information is received, the third information being used to indicate whether training (e.g., retraining) is finished, such as receiving third information in the case where both the first AI model and the second AI model are involved in retraining.
For example, if the information of the training manner in the first information indicates that both the first AI model and the second AI model participate in training (e.g., retraining), the network side device is required to receive third information indicating whether the training is finished from the terminal.
The above-described third information may also be referred to as retraining CSI-RS request signaling.
In this embodiment, if the third information is used to indicate that retraining is not completed, the network side device may send the CSI-RS to the terminal, and then repeat the retraining step.
In this embodiment, in the mode in which both the first AI model and the second AI model participate in training (e.g., retraining), the terminal notifies the network side device that it has completed back propagation and indicates whether the retraining iteration has been completed.
It can be appreciated that when the training (e.g., retraining) depth relates to the first AI model, the terminal needs to send third information, such as retraining CSI-RS request signaling, to the network side after each training (e.g., retraining) iteration is completed.
In another embodiment of the present application, the method further comprises:
configuring a first parameter of an AI model; wherein the first parameter comprises one or more of:
(1) a training interval, which may also be referred to as a retraining interval;
optionally, the training interval is the number of feedbacks of downlink channel state information between the completion of one retraining of the AI model and the start of the next training.
(2) a number of training samples;
optionally, the number of training samples includes: the number of pre-training samples and the number of retraining samples;
(3) a number of training iterations;
optionally, the number of training iterations includes: the number of pre-training iterations and the number of retraining iterations.
The number of retraining iterations refers to the number of iterations required to retrain the AI model with the measured downlink channel state information after the AI model has been pre-trained with the uplink channel state information.
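A minimal sketch of the first parameter as a plain configuration container; the field names and values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class TrainingConfig:
    training_interval: int     # feedbacks to wait between two trainings
    pretrain_samples: int      # number of pre-training samples
    retrain_samples: int       # number of retraining samples
    pretrain_iterations: int   # number of pre-training iterations
    retrain_iterations: int    # number of retraining iterations

config = TrainingConfig(training_interval=100, pretrain_samples=10000,
                        retrain_samples=200, pretrain_iterations=5000,
                        retrain_iterations=10)
```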
In one embodiment of the present application, the method further comprises:
and sending at least one of the number of training iterations, the training interval, and the number of training samples to the terminal.
The network side device may determine the required amount of uplink channel state information according to the number of training samples, and the terminal may determine the number of SRS or DMRS to transmit according to the number of training samples.
The network side device may determine, according to the number of training iterations, how many times the downlink channel state information and the first result need to be acquired, and the terminal may determine, according to the number of training iterations, how many times the downlink channel state information and the first result need to be reported.
According to the training interval, the network side device and the terminal can determine the interval (for example, in time) between two adjacent trainings.
Optionally, the pre-training is performed at the network side, and the number of pre-training iterations and the numbers of samples required for pre-training and retraining are pre-configured at the network side. The pre-training process is completed at the network side alone and is independent of the terminal, so the number of pre-training iterations need not be synchronized with the terminal. However, the pre-training samples required at the network side must be acquired with the assistance of sounding reference signals (SRS) sent by the terminal, and retraining must be completed cooperatively by the network side and the terminal, so the numbers of samples required for pre-training and retraining configured at the network side need to be synchronized with the terminal.
In one embodiment of the present application, the method further comprises:
judging whether to reconfigure the training interval;
if it is determined that the training interval needs to be reconfigured, reconfiguring the training (e.g., retraining) interval;
and sending the reconfigured training (e.g., retraining) interval to the terminal.
Further, optionally, the manner of determining whether to reconfigure the training interval includes: acquiring communication quality; and judging whether to reconfigure the training interval according to the communication quality.
Further, optionally, determining whether to reconfigure the training interval according to the communication quality includes:
obtaining the highest bit error rate obtained when the current first AI model performs channel compression feedback, and a bit error rate threshold;
comparing the highest bit error rate with a bit error rate threshold;
if the highest bit error rate is less than or equal to the bit error rate threshold, increasing the training (e.g., retraining) interval;
or,
if the highest bit error rate is greater than the bit error rate threshold, the training (e.g., retraining) interval is reduced.
In this embodiment, the network side device may determine whether to adjust the training (e.g., retraining) interval according to information such as the transmission bit error rate; if adjustment is needed, the network side device may reconfigure the training (e.g., retraining) interval according to higher-layer information and send the new training (e.g., retraining) interval to the terminal.
In this embodiment, the frequency of training (e.g., retraining) is adjusted flexibly by judging the transmission quality at the network side, which reduces the waste of computation and data transmission resources as much as possible while the communication quality is guaranteed.
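A sketch of the reconfiguration rule; the application only specifies increase versus decrease, so the threshold value and the doubling/halving policy here are illustrative assumptions:

```python
BER_THRESHOLD = 1e-3   # hypothetical bit error rate threshold

def reconfigure_interval(interval: int, highest_ber: float) -> int:
    if highest_ber <= BER_THRESHOLD:
        return interval * 2          # link is healthy: retrain less often
    return max(1, interval // 2)     # link degraded: retrain more often
```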
In one implementation of the embodiments of the present application, the network side device acquires uplink channel state information sent by the terminal and trains the first AI model and/or the second AI model by exploiting the reciprocity of the uplink and downlink channels. During training, the terminal only needs to send an uplink channel reference signal (such as SRS or DMRS) to the network side device. This avoids the procedure required when training with downlink channel state information, in which the network side device sends a downlink channel reference signal to the terminal, the terminal performs downlink channel measurement, and the terminal then feeds the downlink channel measurement result back to the network side. The transmission overhead of the data required for training can therefore be effectively reduced while the performance of the AI model is guaranteed.
In another embodiment of the present application, the network side device acquires downlink channel state information and a first result sent by the terminal, and trains the second AI model according to the first result and the downlink channel state information. Apart from the downlink channel state information itself, the amount of training data sent by the terminal to the network side device is the number of training iterations multiplied by the data size of one feedback of the first result. Compared with the terminal feeding back all layer parameters of the trained first AI model to the network side device, the transmission overhead of the data required for training in this embodiment is significantly reduced, so the transmission overhead of the data required for training the second AI model can be effectively reduced while the performance of the second AI model is guaranteed.
Referring to fig. 3, an embodiment of the present application provides an AI model training method, which is applied to a terminal, and specifically includes the steps of: step 301 and/or step 302.
Step 301: transmitting uplink channel state information to network side equipment, wherein the uplink channel state information is used for training an AI model, and the AI model comprises a first AI model and/or a second AI model;
step 302: transmitting downlink channel state information and a first result to a network side device, wherein the first result represents a calculation result obtained by performing inference on the downlink channel state information in a first AI model, and the first result and the downlink channel state information are used for training a second AI model;
the first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
In one embodiment of the present application, the method further comprises:
the trained first AI model is received.
In one embodiment of the present application, the method further comprises:
receiving second information, wherein the second information is used for indicating the end of training;
the second information is sent by the network side device according to first information, the first information is determined according to a loss value, and the first information includes: information of a training depth and/or information of a training mode; the loss value represents the loss between a calculation result of second AI model inference and the downlink channel state information, and the calculation result of the second AI model inference is obtained by computing the first result in the second AI model.
In one embodiment of the present application, the method further comprises:
receiving gradient information, wherein the gradient information is obtained by back-propagating a loss value in the second AI model, and the loss value represents the loss between the calculation result of second AI model inference and the downlink channel state information;
and continuing back propagation of the gradient information in the first AI model, and updating the weight of the first AI model.
In one embodiment of the present application, the method further comprises:
and sending third information, wherein the third information is used for indicating whether training is finished or not.
In one embodiment of the present application, the method further comprises:
at least one of a training iteration number, a training interval, and a training sample number is received.
In one implementation of the embodiments of the present application, the terminal only needs to send an uplink channel reference signal (such as SRS or DMRS) to the network side device during training. This avoids the procedure required when the network side device trains with downlink channel state information, in which the network side device sends a downlink channel reference signal to the terminal, the terminal performs downlink channel measurement, and the terminal then feeds the downlink channel measurement result back to the network side. The transmission overhead of the data required for training can therefore be effectively reduced while the performance of the AI model is guaranteed.
In another embodiment of the present application, apart from the downlink channel state information sent to the network side device, the amount of training data sent by the terminal is the number of training iterations multiplied by the data size of one feedback of the first result. Compared with the terminal feeding back all layer parameters of the trained first AI model to the network side device, the transmission overhead of the data required for training in this embodiment is significantly reduced, so the transmission overhead of the data required for training the second AI model can be effectively reduced while the performance of the second AI model is guaranteed.
For ease of understanding, the implementation of the present application is described below in connection with embodiment one and embodiment two, where the first AI model is the encoder and the second AI model is the decoder.
Embodiment one: a mode in which only the decoder participates in retraining
Referring to fig. 4, the specific steps are as follows:
step 0: the terminal and the network side synchronize the encoder and decoder required for channel compression;
step 1: the higher layer at the network side configures the retraining interval N_d, namely the number of feedbacks to wait after training is completed, and the numbers of samples required for pre-training and/or retraining;
step 2: the higher layer at the network side configures the numbers of pre-training and/or retraining iterations;
step 3: the terminal and the network side synchronize the retraining iteration times, the retraining interval N_d, and the number of samples required by pre-training and/or retraining;
step 4: the network side randomly sets initial weight parameters of the encoder and the decoder based on the synchronized encoder and decoder framework;
step 5: the terminal sends SRS or DMRS to the network side;
step 6: the network side processes and analyzes the SRS or the DMRS to obtain uplink channel state information;
step 7: the network side pre-trains the initial weights W1 of the network-side encoder and decoder by using the uplink channel state information and the reciprocity between the uplink and downlink channels;
Considering that the center frequencies of the uplink and downlink channels are close, their channel characteristics are quite similar, i.e., they exhibit a certain reciprocity, as shown in fig. 8. As can be seen from fig. 8, although the uplink and downlink channels differ greatly in value (mainly because the phases of the uplink and downlink channels are mutually independent), the two are highly similar in the distribution of components in the delay domain and the angle domain. It can therefore be considered to use the uplink channel state information to provide coarse-grained measured raw data for the pre-training of the encoder and decoder.
The process of pre-training the encoder and decoder for compressing and decompressing the downlink channel with the uplink channel state information corresponds to steps 5-7 of the flow; the steps in the flowchart are partially simplified, and the full flow is shown in fig. 12.
In the pre-training process, a terminal is required to send a large number of SRS or DMRS to the network side to obtain a sufficient number of uplink channel data samples, and the number of samples is already configured by the network side.
Step 8: the network side transmits the pre-trained encoder to the terminal;
step 9: the network side sends CSI-RS to the terminal;
step 10: the terminal performs downlink channel measurement according to the CSI-RS;
step 11: the terminal carries out forward propagation (only first iterative execution) on the actually measured downlink channel state information in the encoder;
step 12: the terminal feeds back the forward propagation intermediate result (only executed for the first time) and the actually measured downlink channel state information (compressed according to the existing standard scheme) to the network side;
step 13: the network side forward propagates the received forward propagation intermediate result in the decoder;
step 14: the network side calculates a Loss value between the decoder's forward propagation output and the received measured downlink channel state information;
step 15: judging the retraining depth and the retraining mode to be used according to the Loss value;
step 16: the network-side decoder performs back propagation based on the calculated Loss value and updates the model weights; return to step 10 until retraining ends, then execute step 17;
step 17: the network side sends a retraining iteration confirmation signaling to the terminal to inform the terminal that retraining is finished;
step 18: the network side sends CSI-RS to the terminal;
step 19: the terminal completes downlink channel measurement based on the CSI-RS;
step 20: the terminal compresses and quantizes the downlink channel state information by using an encoder;
step 21: the terminal feeds the quantized bit stream back to the network side;
step 22: the network side uses a decoder to dequantize and decompress the downlink channel state information;
step 23: the network side performs precoding according to the downlink channel state information;
step 24: the network side judges whether the retraining interval needs to be reconfigured according to the communication quality;
step 25: the terminal and the network side synchronize and wait the retraining interval of N_d feedbacks; after completion, return to step 9.
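A hedged sketch of the retraining core of steps 11-16 (decoder-only) follows; the loss and optimizer are assumptions, and for simplicity the first result fed back in step 12 is reused across iterations, whereas the actual flow re-measures via steps 9-12:

```python
import torch

def retrain_decoder_only(decoder: torch.nn.Module,
                         first_result: torch.Tensor,
                         measured_dl_csi: torch.Tensor,
                         iterations: int) -> None:
    """Embodiment one: only the network-side decoder is updated; the
    terminal's encoder stays frozen."""
    opt = torch.optim.Adam(decoder.parameters())
    for _ in range(iterations):
        opt.zero_grad()
        recon = decoder(first_result)                                 # step 13
        loss = torch.nn.functional.mse_loss(recon, measured_dl_csi)   # step 14
        # step 15 (choosing retraining depth/mode from the Loss) is omitted here.
        loss.backward()                                               # step 16: back propagation
        opt.step()                                                    # step 16: weight update
    # step 17: the network side then signals the terminal that retraining ended.
```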
Embodiment two: a mode in which both the encoder and the decoder participate in retraining
Referring to fig. 5, the specific steps are as follows:
step 0: the terminal and the network side synchronize the encoder and decoder required for channel compression;
step 1: the higher layer of the network side configures the retraining interval N_d, i.e. the number of feedbacks to wait after training completes, and the number of samples required for pre-training and/or retraining;
step 2: the higher layer of the network side configures the number of pre-training and/or retraining iterations;
step 3: the terminal and the network side synchronize the retraining iteration times, the retraining interval N_d, and the number of samples required by pre-training and/or retraining;
step 4: the network side randomly sets initial weight parameters of the encoder and the decoder based on the synchronized encoder and decoder framework;
step 5: the terminal sends an uplink channel reference signal SRS or DMRS to a network side;
step 6: the network side processes and analyzes the SRS or DMRS to obtain uplink channel data;
step 7: the network side pre-trains the initial weights W1 of the network-side encoder and decoder using the uplink channel state information and the reciprocity between the uplink and downlink channels;
step 8: the network side only transmits the pre-trained encoder to the terminal;
step 9: the network side sends CSI-RS to the terminal;
step 10: the terminal performs downlink channel measurement according to the CSI-RS;
step 11: the terminal carries out forward propagation (only first iterative execution) on the actually measured downlink channel state information in the encoder;
step 12: the terminal feeds back the forward propagation intermediate result (only executed for the first time) and the actually measured downlink channel state information (compressed according to the existing standard scheme) to the network side;
step 13: the network side forward propagates the received forward propagation intermediate result in the decoder;
step 14: the network side calculates a Loss value between the decoder's forward propagation output and the received measured downlink channel state information;
step 15: judging the retraining depth and the retraining mode to be used according to the Loss value;
step 16: the decoder at the network side performs back propagation based on the calculated Loss value and updates the model weight;
step 17: the network side transmits back-propagation intermediate information (gradient information) to the terminal;
step 18: the terminal encoder continues back propagation based on the back propagation intermediate information, and the encoder weight is updated;
step 19: the terminal sends retraining CSI-RS request signaling to the network side to indicate whether retraining has ended; if not, return to step 9; if so, continue to step 20;
step 20: the network side sends CSI-RS to the terminal;
step 21: the terminal completes downlink channel measurement based on the CSI-RS;
step 22: the terminal compresses and quantizes the downlink channel state information using the encoder;
step 23: the terminal feeds the quantized bit stream back to the network side;
step 24: the network side uses a decoder to dequantize and decompress the downlink channel state information;
step 25: the network side performs precoding according to the downlink channel state information;
step 26: the network side judges whether the retraining interval needs to be reconfigured according to the communication quality;
step 27: the terminal and the network side synchronize and wait the retraining interval of N_d feedbacks; after completion, return to step 9.
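Steps 13-18 of this embodiment split one backward pass across the air interface; below is a minimal sketch under assumed shapes and optimizers (the gradient exchange itself would of course go through signaling, not a function call):

```python
import torch

def network_side_step(decoder, first_result, measured_dl_csi, decoder_opt):
    """Steps 13-17: decoder forward pass, Loss, decoder update, and the
    gradient w.r.t. the intermediate result that is sent to the terminal."""
    z = first_result.detach().requires_grad_(True)
    decoder_opt.zero_grad()
    loss = torch.nn.functional.mse_loss(decoder(z), measured_dl_csi)  # steps 13-14
    loss.backward()        # step 16: fills decoder gradients and z.grad
    decoder_opt.step()     # step 16: update decoder weights
    return z.grad          # step 17: back-propagation intermediate information

def terminal_side_step(encoder, dl_csi, grad_from_network, encoder_opt):
    """Step 18: continue back propagation through the encoder with the
    received gradient information and update the encoder weights."""
    encoder_opt.zero_grad()
    z = encoder(dl_csi)              # recompute the forward pass locally
    z.backward(grad_from_network)    # chain rule with the network-side gradient
    encoder_opt.step()
```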
Referring to fig. 9, an embodiment of the present application provides an AI model training apparatus, applied to a network side device, where the apparatus 900 includes:
a first obtaining module 901, configured to obtain uplink channel state information;
a first training module 902, configured to train an AI model according to the uplink channel state information, where the AI model includes a first AI model and/or a second AI model;
and/or,
the second obtaining module 903 is configured to obtain downlink channel state information and a first result, where the first result represents a calculation result obtained by reasoning the downlink channel state information in the first AI model;
a second training module 904, configured to train a second AI model according to the first result and the downlink channel state information;
the first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
In one embodiment of the present application, the apparatus further comprises:
and the first sending module is used for sending the trained first AI model to a terminal.
In one embodiment of the present application, the first obtaining module 901 includes:
a receiving unit, configured to receive one or more SRSs or DMRSs from one or more terminals;
and the acquisition unit is used for acquiring the uplink channel state information according to the one or more SRS or the DMRS.
In one embodiment of the present application, the second training module 904 includes:
the first processing unit is used for obtaining a calculation result of reasoning of the second AI model according to the first result and the second AI model;
the first acquisition unit is used for acquiring a calculation result of the second AI model reasoning and a loss value between the downlink channel state information;
and the first training unit is used for training the second AI model according to the loss value.
In one embodiment of the present application, the first training unit is further configured to:
determine first information according to the loss value, where the first information includes: training depth information and/or training mode information;
and train the second AI model according to the first information.
In one embodiment of the present application, the first training unit is further configured to:
acquire gradient information, where the gradient information is obtained by back-propagating the loss value in the second AI model;
and train the second AI model according to the first information and the gradient information.
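How a loss value translates into training depth and mode information is left open here; a purely hypothetical thresholding rule (all thresholds and depth values below are assumptions) could look like:

```python
def first_information(loss_value: float,
                      low: float = 0.01, high: float = 0.1) -> dict:
    """Hypothetical mapping from a Loss value to first information."""
    if loss_value <= low:
        return {"depth": 0, "mode": "none"}           # model still fits: skip retraining
    if loss_value <= high:
        return {"depth": 1, "mode": "decoder_only"}   # shallow update, embodiment one
    return {"depth": -1, "mode": "joint"}             # full-depth update, embodiment two
```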
In one embodiment of the present application, the apparatus further comprises:
and the second sending module is used for sending second information to the terminal according to the first information, and the second information is used for indicating the end of training.
In one embodiment of the present application, the apparatus further comprises:
and the third sending module is used for sending the gradient information to a terminal according to the first information, enabling the terminal to continue back propagation of the gradient information in the first AI model, and updating the weight of the first AI model.
In one embodiment of the present application, the apparatus further comprises:
the first receiving module is used for receiving third information according to the first information, and the third information is used for indicating whether training is finished or not.
In one embodiment of the present application, the apparatus further comprises:
a configuration module for configuring a first parameter of the AI model;
Wherein the first parameter comprises one or more of:
training intervals;
training the number of samples;
training the number of iterations.
In one embodiment of the present application, the apparatus further comprises:
and the fourth sending module is used for sending at least one of training iteration times, training intervals and training sample numbers to the terminal.
In one embodiment of the present application, the apparatus further comprises:
and the judging module is used for judging whether the training interval is reconfigured or not.
In one embodiment of the present application, the judging module is further configured to: acquire the communication quality, and judge whether to reconfigure the training interval according to the communication quality.
In one embodiment of the present application, the judging module is further configured to: obtain the highest bit error rate observed when the current first AI model performs channel compression feedback, and compare it with a bit error rate threshold; if the highest bit error rate is less than or equal to the bit error rate threshold, increase the training interval; or, if the highest bit error rate is greater than the bit error rate threshold, reduce the training interval.
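A minimal sketch of this interval adaptation follows; the step size is an assumption, as the application does not specify by how much the interval changes:

```python
def reconfigure_training_interval(interval: int, peak_ber: float,
                                  ber_threshold: float, step: int = 1) -> int:
    """Adjust the retraining interval from the highest bit error rate seen
    during channel compression feedback (step size is assumed)."""
    if peak_ber <= ber_threshold:
        return interval + step          # link healthy: retrain less often
    return max(1, interval - step)      # link degraded: retrain more often
```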
In one embodiment of the present application, the apparatus further comprises:
the reconfiguration module is used for reconfiguring the training interval if it is judged that the training interval needs to be reconfigured;
and a fifth sending module, configured to send the reconfigured training interval to the terminal.
The apparatus provided in this embodiment of the present application can implement each process of the method embodiment shown in fig. 2 and achieve the same technical effects; to avoid repetition, details are not repeated here.
Referring to fig. 10, an embodiment of the present application provides an AI model training apparatus applied to a terminal, the apparatus including:
a sixth sending module 1001, configured to send uplink channel state information to a network side device, where the uplink channel state information is used to train an AI model, and the AI model includes a first AI model and/or a second AI model;
and/or,
a seventh sending module 1002, configured to send downlink channel state information and a first result to a network side device, where the first result represents a calculation result obtained by reasoning the downlink channel state information in a first AI model, and the first result and the downlink channel state information are used to train a second AI model;
the first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
In one embodiment of the present application, the apparatus further comprises:
and the second receiving module is used for receiving the trained first AI model.
In one embodiment of the present application, the apparatus further comprises:
the third receiving module is used for receiving second information, and the second information is used for indicating the end of training;
the second information is sent by the network side equipment according to first information, the first information is determined according to a loss value, and the first information comprises: training depth information and/or training mode information; the loss value represents a loss condition between a calculation result of a second AI model reasoning and the downlink channel state information, and the calculation result of the second AI model reasoning is a result obtained by calculating the first result in the second AI model reasoning.
In one embodiment of the present application, the apparatus further comprises:
the fourth receiving module is used for receiving gradient information, the gradient information is obtained by back propagation of a loss value in the second AI model, and the loss value represents the loss condition between the calculation result of the second AI model reasoning and the downlink channel state information;
And the second processing module is used for continuing back propagation of the gradient information in the first AI model and updating the weight of the first AI model.
In one embodiment of the present application, the apparatus further comprises:
and the eighth sending module is used for sending third information, and the third information is used for indicating whether training is finished or not.
In one embodiment of the present application, the apparatus further comprises:
and the fifth receiving module is used for receiving at least one of training iteration times, training intervals and training sample numbers.
The apparatus provided in this embodiment of the present application can implement each process of the method embodiment shown in fig. 3 and achieve the same technical effects; to avoid repetition, details are not repeated here.
As shown in fig. 11, an embodiment of the present application further provides a communication device 1100, including a processor 1101, a memory 1102, and a program or instruction stored in the memory 1102 and executable on the processor 1101, where the program or instruction, when executed by the processor 1101, implements each process of the embodiment shown in fig. 2 or fig. 3 and achieves the same technical effects. To avoid repetition, details are omitted here.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the processes of the embodiment shown in fig. 2 or fig. 3 are implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is provided herein.
Wherein the processor is a processor in the terminal described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware, or in software instructions executed by a processor. The software instructions may consist of corresponding software modules stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a read-only optical disk, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may be carried in a core network interface device. The processor and the storage medium may also reside as discrete components in a core network interface device.
Those of skill in the art will appreciate that in one or more of the examples described above, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing embodiments have been provided for the purpose of illustrating the technical solution and advantageous effects of the present application in further detail, and it should be understood that the foregoing embodiments are merely illustrative of the present application and are not intended to limit the scope of the present application, and any modifications, equivalents, improvements, etc. made on the basis of the technical solution of the present application should be included in the scope of the present application.
Those skilled in the art will appreciate that the present embodiments may be provided as a method, system, or computer program product. Accordingly, the present embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to encompass such modifications and variations.
Claims (23)
1. An artificial intelligence AI model training method applied to network side equipment is characterized by comprising the following steps:
acquiring uplink channel state information;
training an AI model according to the uplink channel state information, wherein the AI model comprises a first AI model and/or a second AI model;
and/or,
acquiring downlink channel state information and a first result, wherein the first result represents a calculation result obtained by reasoning the downlink channel state information in a first AI model;
Training a second AI model according to the first result and the downlink channel state information;
the first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
2. The method according to claim 1, wherein the method further comprises:
and transmitting the trained first AI model to a terminal.
3. The method of claim 1, wherein the obtaining uplink channel state information comprises:
receiving one or more Sounding Reference Signals (SRS) or demodulation reference signals (DMRS) from one or more terminals;
and acquiring uplink channel state information according to the one or more SRS or the DMRS.
4. The method of claim 1, wherein training a second AI model based on the first result and downlink channel state information comprises:
obtaining a calculation result of reasoning of the second AI model according to the first result and the second AI model;
acquiring a calculation result of the second AI model reasoning and a loss value between the downlink channel state information;
and training the second AI model according to the loss value.
5. The method of claim 4, wherein training the second AI model based on the loss value comprises:
determining first information according to the loss value, wherein the first information comprises: training depth information and/or training mode information;
and training the second AI model according to the first information.
6. The method of claim 5, wherein the training the second AI model based on the first information comprises:
acquiring gradient information, wherein the gradient information is obtained by back-propagating the loss value in the second AI model;
and training the second AI model according to the first information and the gradient information.
7. The method of claim 5, wherein the method further comprises:
and sending second information to the terminal according to the first information, wherein the second information is used for indicating the end of training.
8. The method of claim 6, wherein the method further comprises:
according to the first information, the gradient information is sent to a terminal, the terminal continues to conduct back propagation on the gradient information in the first AI model, and the weight of the first AI model is updated;
and/or,
and receiving third information according to the first information, wherein the third information is used for indicating whether training is finished or not.
9. The method according to claim 1, wherein the method further comprises:
configuring a first parameter of the AI model;
wherein the first parameter comprises one or more of:
training intervals;
training the number of samples;
training the number of iterations.
10. The method according to claim 9, wherein the method further comprises:
and sending at least one of training iteration times, training intervals and training sample numbers to the terminal.
11. The method according to claim 9, wherein the method further comprises:
judging whether to reconfigure the training interval;
if it is judged that the training interval needs to be reconfigured, reconfiguring the training interval;
and sending the reconfigured training interval to the terminal.
12. The method of claim 11, wherein determining whether to reconfigure a training interval comprises:
acquiring communication quality;
and judging whether to reconfigure the training interval according to the communication quality.
13. The method of claim 12, wherein determining whether to reconfigure a training interval based on the communication quality comprises:
obtaining the highest bit error rate observed when the current first AI model performs channel compression feedback;
comparing the highest bit error rate with a bit error rate threshold;
if the highest bit error rate is less than or equal to the bit error rate threshold, increasing the training interval;
or,
if the highest bit error rate is greater than the bit error rate threshold, reducing the training interval.
14. An AI model training method applied to a terminal is characterized by comprising the following steps:
transmitting uplink channel state information to network side equipment, wherein the uplink channel state information is used for training an AI model, and the AI model comprises a first AI model and/or a second AI model;
and/or,
transmitting downlink channel state information and a first result to network side equipment, wherein the first result represents a calculation result obtained by reasoning the downlink channel state information in a first AI model, and the first result and the downlink channel state information are used for training a second AI model;
the first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
15. The method of claim 14, wherein the method further comprises:
receiving the trained first AI model.
16. The method of claim 14, wherein the method further comprises:
receiving second information, wherein the second information is used for indicating the end of training;
the second information is sent by the network side equipment according to first information, the first information is determined according to a loss value, and the first information comprises: training depth information and/or training mode information; the loss value represents a loss condition between a calculation result of a second AI model reasoning and the downlink channel state information, and the calculation result of the second AI model reasoning is a result obtained by calculating the first result in the second AI model reasoning.
17. The method of claim 14, wherein the method further comprises:
receiving gradient information, wherein the gradient information is obtained by back-propagating a loss value in the second AI model, and the loss value represents the loss condition between the calculation result of second AI model reasoning and the downlink channel state information;
and continuing back propagation of the gradient information in the first AI model, and updating the weight of the first AI model.
18. The method of claim 17, wherein the method further comprises:
and sending third information, wherein the third information is used for indicating whether training is finished or not.
19. The method of claim 14, wherein the method further comprises:
at least one of a training iteration number, a training interval, and a training sample number is received.
20. An AI model training apparatus applied to a network side device, comprising:
the first acquisition module is used for acquiring the uplink channel state information;
the first training module is used for training an AI model according to the uplink channel state information, and the AI model comprises a first AI model and/or a second AI model;
and/or,
the second acquisition module is used for acquiring downlink channel state information and a first result, wherein the first result represents a calculation result obtained by reasoning the downlink channel state information in a first AI model;
the second training module is used for training a second AI model according to the first result and the downlink channel state information;
the first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
21. An AI model training apparatus applied to a terminal, characterized by comprising:
a sixth sending module, configured to send uplink channel state information to a network side device, where the uplink channel state information is used to train an AI model, and the AI model includes a first AI model and/or a second AI model;
and/or,
a seventh sending module, configured to send downlink channel state information and a first result to a network side device, where the first result represents a calculation result obtained by reasoning the downlink channel state information in a first AI model, and the first result and the downlink channel state information are used to train a second AI model;
the first AI model is used for compressing the downlink channel state information, and the second AI model is used for decompressing the downlink channel state information.
22. A communication device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which program or instruction when executed by the processor implements the steps of the method of any of claims 1 to 19.
23. A readable storage medium, characterized in that it has stored thereon a program or instructions which, when executed by a processor, implement the steps of the method according to any of claims 1 to 19.
Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210981397.1A CN117648966A (en) | 2022-08-16 | 2022-08-16 | AI model training method, apparatus, device and readable storage medium
PCT/CN2023/110199 WO2024037321A1 (en) | 2022-08-16 | 2023-07-31 | AI model training method and apparatus, and device and readable storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210981397.1A CN117648966A (en) | 2022-08-16 | 2022-08-16 | AI model training method, apparatus, device and readable storage medium
Publications (1)

Publication Number | Publication Date
---|---
CN117648966A | 2024-03-05

Family

ID=89940664

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202210981397.1A (pending) | AI model training method, apparatus, device and readable storage medium | 2022-08-16 | 2022-08-16

Country Status (2)

Country | Link
---|---
CN (1) | CN117648966A (en)
WO (1) | WO2024037321A1 (en)
Also Published As

Publication number | Publication date
---|---
WO2024037321A1 | 2024-02-22
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination