WO2022237822A1 - Training data set acquisition method, wireless transmission method, apparatus and communication device - Google Patents
Training data set acquisition method, wireless transmission method, apparatus and communication device
- Publication number
- WO2022237822A1 (PCT/CN2022/092144)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- interface signaling
- training data
- training
- transmission
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/098—Distributed learning, e.g. federated learning
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/02—Arrangements for optimising operational condition
Definitions
- The present application belongs to the technical field of communication, and in particular relates to a training data set acquisition method, a wireless transmission method, an apparatus and a communication device.
- Generalization means that the neural network can also obtain reasonable output for data not encountered in the training (learning) process.
- A common neural network can be trained based on mixed data, so that the parameters of the neural network do not need to be switched as the environment changes.
- However, such a neural network cannot achieve optimal performance under every transmission condition.
- Embodiments of the present application provide a training data set acquisition method, wireless transmission method, device, and communication equipment, which can solve problems such as insufficient generalization ability of neural networks in existing wireless transmission.
- In a first aspect, a method for obtaining a training data set is provided, comprising: determining the data amount of training data under each transmission condition based on the contribution degree of each transmission condition to the neural network optimization objective; and obtaining the training data under each transmission condition based on that data amount, to form a training data set for training the neural network;
- the contribution degree of the transmission condition to the neural network optimization objective indicates the degree of influence of the transmission condition on the value of the neural network optimization objective.
- In another aspect, a training data set acquisition device is provided, including:
- the first processing module is used to determine the amount of training data under each transmission condition based on the contribution of each transmission condition to the neural network optimization goal;
- the second processing module is used to obtain the training data under each of the transmission conditions based on the data amount of the training data under each of the transmission conditions, so as to form a training data set for training the neural network;
- the contribution degree of the transmission condition to the neural network optimization objective indicates the degree of influence of the transmission condition on the value of the neural network optimization objective.
- In a third aspect, a wireless transmission method is provided, including: performing wireless transmission operations based on a neural network model to realize wireless transmission;
- the neural network model is obtained by using a training data set for training in advance, and the training data set is obtained based on the method for obtaining a training data set as described in the first aspect.
- In another aspect, a wireless transmission device is provided, including:
- the third processing module is used to perform wireless transmission calculation based on the neural network model to realize the wireless transmission;
- the neural network model is obtained by using a training data set for training in advance, and the training data set is obtained based on the method for obtaining a training data set as described in the first aspect.
- A communication device is provided, which includes a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method described in the first aspect or the steps of the method described in the third aspect.
- A communication device is provided, including a processor and a communication interface, wherein the processor is configured to determine the data amount of training data under each transmission condition based on the contribution degree of each transmission condition to the neural network optimization objective, and to obtain the training data under each transmission condition based on that data amount, so as to form a training data set for training the neural network; wherein the contribution degree of a transmission condition to the neural network optimization objective indicates the degree of influence of the transmission condition on the value of the neural network optimization objective.
- A communication device is provided, including a processor and a communication interface, wherein the processor is configured to perform wireless transmission operations based on a neural network model to realize wireless transmission; wherein the neural network model is obtained in advance by training with a training data set, and the training data set is obtained based on the method for obtaining a training data set described in the first aspect.
- A readable storage medium is provided, on which programs or instructions are stored; when the programs or instructions are executed by a processor, the steps of the method described in the first aspect or the steps of the method described in the third aspect are realized.
- In a ninth aspect, a chip is provided; the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement the method described in the first aspect or the method described in the third aspect.
- A computer program/program product is provided, stored in a non-transitory storage medium; the program/program product is executed by at least one processor to implement the steps of the method described in the first aspect or the third aspect.
- By selecting data under a variety of transmission conditions and constructing a mixed training data set, the generalization ability of the neural network can be effectively improved.
- FIG. 1 is a structural diagram of a wireless communication system applicable to an embodiment of the present application
- FIG. 2 is a schematic flowchart of the training data set acquisition method provided by an embodiment of the present application.
- FIG. 3 is a schematic structural diagram of a training data set acquisition device provided in an embodiment of the present application.
- FIG. 4 is a schematic flowchart of a wireless transmission method provided in an embodiment of the present application.
- FIG. 5 is a schematic flow diagram of constructing a neural network model in a wireless transmission method provided according to an embodiment of the present application
- FIG. 6 is a schematic flow diagram of determining the proportion of training data in the training data set acquisition method provided according to an embodiment of the present application
- FIG. 7 is a schematic structural diagram of a neural network used for DMRS channel estimation in a wireless transmission method provided according to an embodiment of the present application.
- FIG. 8 is a schematic structural diagram of a wireless transmission device provided by an embodiment of the present application.
- FIG. 9 is a schematic structural diagram of a communication device provided by an embodiment of the present application.
- FIG. 10 is a schematic diagram of a hardware structure of a terminal implementing an embodiment of the present application.
- FIG. 11 is a schematic diagram of a hardware structure of an access network device implementing an embodiment of the present application.
- FIG. 12 is a schematic diagram of a hardware structure of a core network device implementing an embodiment of the present application.
- The terms "first", "second" and the like in the specification and claims of the present application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. In addition, objects distinguished by "first" and "second" are usually of one category, and the number of objects is not limited; for example, there may be one or more first objects.
- "And/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
- LTE Long Term Evolution
- LTE-A Long Term Evolution-Advanced
- CDMA Code Division Multiple Access
- TDMA Time Division Multiple Access
- FDMA Frequency Division Multiple Access
- OFDMA Orthogonal Frequency Division Multiple Access
- SC-FDMA Single-carrier Frequency-Division Multiple Access
- "System" and "network" in the embodiments of the present application are often used interchangeably, and the described techniques can be used for the above-mentioned systems and radio technologies as well as for other systems and radio technologies.
- NR New Radio
- The following description describes the New Radio (NR) system for illustrative purposes and uses NR terminology in most of the description, but these techniques can also be applied to systems other than NR, such as the 6th Generation (6G) communication system.
- 6G 6th Generation
- FIG. 1 shows a structural diagram of a wireless communication system to which this embodiment of the present application is applicable.
- the wireless communication system includes a terminal 101 and a network side device 102 .
- The terminal 101 can also be called a terminal device or user equipment (User Equipment, UE). The terminal 101 can be a terminal-side device such as a mobile phone, a tablet computer (Tablet Personal Computer), a laptop computer (Laptop Computer, also called a notebook computer), a personal digital assistant (Personal Digital Assistant, PDA), a handheld computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (Mobile Internet Device, MID), a wearable device (Wearable Device), a vehicle-mounted UE (VUE), or a pedestrian UE (PUE); wearable devices include smart watches, bracelets, earphones, glasses, etc.
- the network side device 102 may be an access network device 1021 or a core network device 1022 or a data network (data network, DN) device 1023.
- The access network device 1021 may also be called a radio access network device or a radio access network (Radio Access Network, RAN), and may be a base station or a node responsible for neural network training on the RAN side. The base station may be called a Node B, an evolved Node B (eNB), an access point, a base transceiver station (Base Transceiver Station, BTS), a radio base station, a radio transceiver, a basic service set (Basic Service Set, BSS), an extended service set (Extended Service Set, ESS), a home Node B, a home evolved Node B, a WLAN access point, a WiFi node, a transmitting receiving point (Transmitting Receiving Point, TRP), or some other suitable term in the field.
- TRP Transmitting Receiving Point
- The core network device 1022 may also be called a core network (Core Network, CN) or a 5G core (5G Core, 5GC) network device, and may include but is not limited to at least one of the following: a core network node, a core network function, a mobility management entity (Mobility Management Entity, MME), an access and mobility management function (Access and Mobility Management Function, AMF), a session management function (Session Management Function, SMF), a user plane function (User Plane Function, UPF), a policy control function (Policy Control Function, PCF), a policy and charging rules function (Policy and Charging Rules Function, PCRF), an edge application server discovery function (Edge Application Server Discovery Function, EASDF), an application function (Application Function, AF), etc.
- MME Mobility Management Entity
- AMF Access and Mobility Management Function
- SMF Session Management Function
- UPF User Plane Function
- PCF Policy Control Function
- PCRF Policy and Charging Rules Function
- EASDF Edge Application Server Discovery Function
- AF Application Function
- The data network device 1023 may include but is not limited to at least one of the following: the network data analytics function (Network Data Analytics Function, NWDAF), unified data management (Unified Data Management, UDM), the unified data repository (Unified Data Repository, UDR), and the unstructured data storage function (Unstructured Data Storage Function, UDSF). It should be noted that, in the embodiments of the present application, only the data network equipment in the 5G system is taken as an example, but it is not limited thereto.
- NWDAF Network Data Analytics Function
- UDM Unified Data Management
- UDR Unified Data Repository
- UDSF Unstructured Data Storage Function
- FIG. 2 is a schematic flow diagram of a method for obtaining a training data set provided by an embodiment of the present application.
- The method can be executed by a terminal and/or a network-side device; the terminal can be the terminal 101 shown in FIG. 1, and the network-side device can specifically be the network side device 102 shown in FIG. 1.
- the method includes:
- Step 201 based on the contribution of each transmission condition to the neural network optimization goal, determine the data volume of training data under each transmission condition.
- the contribution degree of the transmission condition to the neural network optimization objective indicates the degree of influence of the transmission condition on the value of the neural network optimization objective.
- The degree of influence of each transmission condition on the value of the neural network optimization target can be tested in advance, that is, the influence of the transmission condition on the value of the optimization target can be tested. Generally, the higher the degree of influence on the optimization goal of the neural network, the larger the contribution value corresponding to the transmission condition; conversely, the smaller the contribution value.
- When the optimization target is expressed as a function of the transmission condition: if the function is an increasing function of the transmission condition (for example, throughput is an increasing function of SNR), the transmission conditions under which the optimization target takes large values have high contribution; if the function is a decreasing function of the transmission condition (for example, NMSE is a decreasing function of SNR), the transmission conditions under which the optimization target takes small values have high contribution.
- the contribution of the transmission condition to the optimization objective may be determined based on the attainable optimal value of the optimization objective under the transmission condition. That is to say, the contribution of a given transmission condition to the optimization objective can be measured by the size of the attainable optimal value of the optimization objective under the given transmission condition. The greater the impact of the given transmission conditions on the optimization objective, the greater the corresponding contribution.
- Based on the size of the contribution value corresponding to each transmission condition, the embodiment of the present application can adjust the data volume of the training data corresponding to each transmission condition; that is, the data volume of the training data under each transmission condition is associated with the contribution of that transmission condition to the optimization objective.
- The data amount here is the reference data amount of the training data that needs to be prepared for each transmission condition, also called the set data amount; when actually obtaining the training data set, this data amount is referred to when preparing the training data for each transmission condition.
- Specifically, the amount of data for transmission conditions with a large contribution can be reduced to weaken the influence of those transmission conditions, while the amount of data for transmission conditions with a small contribution can be increased to strengthen their influence; that is, the impact of each transmission condition on the optimization goal is balanced through the amount of data.
- the data volume may refer to the absolute value of the data volume or the relative proportion of the data volume.
- the transmission condition is a parameter of a transmission medium, a transmission signal, a transmission environment, etc. involved in an actual wireless transmission environment.
- The type of the transmission condition includes at least one of the following:
- Signal-to-noise ratio (SNR) or signal-to-interference-plus-noise ratio (SINR)
- Reference signal receiving power (RSRP)
- Interference intensity
- Channel parameters
- Cell type
- The distance between the terminal and the base station
- Station spacing
- Weather and environmental factors
- Antenna configuration information at the transmitting end or receiving end
- UE capability/type
- The types of transmission conditions involved in this embodiment of the application may include, but are not limited to, one of the above types or a combination of multiple types.
- The interference intensity can indicate the intensity of co-channel interference between cells, or the magnitude of other interference. Channel parameters include, for example, the number of paths (or LOS/NLOS scenario), delay (or maximum delay), Doppler (or maximum Doppler), arrival angle range (horizontal and vertical), departure angle range (horizontal and vertical), or the channel correlation coefficient. Cell types include, for example, indoor cell, outdoor cell, macro cell, micro cell, or pico cell. Station spacing can be divided, for example, into within 200 meters, 200-500 meters, or more than 500 meters. Weather and environmental factors include, for example, the temperature and/or humidity of the network environment where the training data is located. The antenna configuration information of the transmitting end or receiving end may be, for example, the number of antennas and/or the antenna polarization. The UE capability/type may be, for example, RedCap UE and/or normal UE.
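- As a purely illustrative sketch (not part of the claimed method), the transmission conditions above could be represented programmatically as a descriptor per data bucket; all field names below are hypothetical choices, assuming Python:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransmissionCondition:
    """Hypothetical descriptor for one transmission-condition bucket.

    Only the fields relevant to a given scenario need to be set; the text
    above lists SNR/SINR, RSRP, interference intensity, channel parameters,
    cell type, station spacing, weather, antenna configuration, and UE
    capability/type as possible condition types.
    """
    snr_db: Optional[float] = None            # signal-to-noise ratio
    rsrp_dbm: Optional[float] = None          # reference signal receiving power
    num_paths: Optional[int] = None           # channel parameter: path count
    cell_type: Optional[str] = None           # e.g. "indoor", "macro", "pico"
    station_spacing_m: Optional[float] = None # inter-site distance bucket
    num_antennas: Optional[int] = None        # antenna configuration
    ue_type: Optional[str] = None             # e.g. "redcap", "normal"

# Example bucket: outdoor macro-cell data at 10 dB SNR with 4 antennas.
cond = TransmissionCondition(snr_db=10.0, cell_type="macro", num_antennas=4)
```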
- Step 202 based on the data volume of the training data under each of the transmission conditions, acquire the training data under each of the transmission conditions to form a training data set for training the neural network.
- That is, the reference data volume corresponding to each transmission condition is used as a reference or setting, and the training data under each transmission condition is acquired so that the data amount of the finally acquired training data under each transmission condition conforms to that reference or setting.
- The obtained training data under the various transmission conditions are non-uniformly mixed to obtain a data set, namely the training data set, which can be used to train the above-mentioned neural network or neural network model in the wireless transmission environment.
- Optionally, acquiring the training data under each of the transmission conditions to form a training data set for training the neural network includes: collecting data under each transmission condition based on the data amount of training data determined for that condition and calibrating (labeling) it, to form the training data set under each transmission condition; or collecting a set amount of data under each transmission condition and, based on the data volume of the training data under each transmission condition, selecting part of the data from the set amount of data and calibrating it, or supplementing the set amount of data and calibrating it, to form the training data set under each transmission condition.
- That is, the data volume required for each transmission condition can be calculated first, and the data for each transmission condition then obtained according to that volume; alternatively, a large amount of data can first be obtained for each transmission condition (the set amount of data), the required data volume calculated afterwards, and the data then selected from, or supplemented to, the data obtained in advance.
- For example, suppose the total amount of data acquired in advance under the k-th transmission condition is M_k, and the required data amount determined by calculation is N_k. If M_k >= N_k, N_k samples are randomly selected from the M_k samples and put into the training data set; if M_k < N_k, the data is supplemented until N_k samples are obtained, which are then put into the training data set. The required data amount here is the data volume of the training data under each transmission condition determined in the previous step.
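- The select-or-supplement step above can be sketched as follows; this is a minimal illustration, and the collect_more hook is a hypothetical stand-in for further collection or data augmentation:

```python
import random

def build_condition_subset(available, required_n, collect_more=None):
    """Form the training subset for one transmission condition.

    available    -- pre-collected samples for this condition (M_k items)
    required_n   -- data amount N_k determined from the contribution degree
    collect_more -- optional hook returning extra samples when M_k < N_k
    """
    if len(available) >= required_n:
        # M_k >= N_k: randomly select N_k samples out of the M_k samples.
        return random.sample(available, required_n)
    # M_k < N_k: supplement the data until N_k samples are obtained.
    subset = list(available)
    while collect_more is not None and len(subset) < required_n:
        extra = collect_more(required_n - len(subset))
        if not extra:
            break  # nothing more to collect; keep what we have
        subset.extend(extra)
    return subset[:required_n]
```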
- For example, for data under a given transmission condition, the label added to a DMRS signal sample is the true value of the channel corresponding to that DMRS signal.
- The embodiments of the present application can be applied in any scenario where machine learning can replace the functions of one or more modules in an existing wireless transmission network; that is, whenever machine learning is used to train a neural network, the training data set acquisition method of the embodiments of the application can be used to construct the training data set.
- Application scenarios include, for example, pilot design, channel estimation, signal detection, user pairing, HARQ, and positioning at the physical layer; resource allocation, handover, and mobility management at the higher layers; and scheduling or slicing at the network layer. The specific wireless transmission application scenario is not limited.
- In the embodiments of the present application, data under various transmission conditions are selected in different proportions according to the contribution of data with different transmission conditions to the neural network optimization goal (or objective function, loss function), and used to construct a mixed training data set, which can effectively improve the generalization ability of the neural network.
- Optionally, determining the amount of training data under each transmission condition based on the contribution of each transmission condition to the neural network optimization target includes: sorting the contribution degrees of the transmission conditions; and, on the basis of the sorting, performing at least one of reducing the data volume of training data under the transmission conditions corresponding to larger contribution degrees in the sorting and increasing the data volume of training data under the transmission conditions corresponding to smaller contribution degrees in the sorting.
- That is, the contributions of data with different transmission conditions to the neural network optimization goal (or objective function, loss function) when mixed in equal proportions can first be calculated and sorted. Afterwards, when constructing the mixed training data set, the smaller and larger contribution degrees can be determined based on this sorting, so as to identify the transmission conditions with lower contribution and those with higher contribution; then, under the premise of ensuring sufficient data for all transmission conditions, the data volume of transmission conditions with low contribution is increased and/or the data volume of transmission conditions with high contribution is reduced.
- A threshold (that is, a preset threshold) can be set: the ratio of the data volume of any transmission condition to the total data volume should not be lower than this threshold, so as to satisfy the above premise of ensuring sufficient data for all transmission conditions.
- Optionally, performing at least one of reducing the data volume of training data under the transmission conditions corresponding to larger contribution degrees in the ranking and increasing the data volume of training data under the transmission conditions corresponding to smaller contribution degrees includes performing the operations according to the following rules: the greater the value of the larger contribution degree, the greater the magnitude of the reduction; the smaller the value of the smaller contribution degree, the greater the magnitude of the increase.
- That is, the goal is that, as the contribution degree of a transmission condition increases, the data amount of its training data gradually decreases. Therefore, the lower the contribution degree of a transmission condition, the more its data amount is increased; the higher the contribution degree, the more its data amount is reduced.
- The embodiment of the present application increases or decreases the data amount of the training data corresponding to each transmission condition in proportion to its contribution value, so that the data amount of the training data decreases gradually as the contribution degree of the transmission condition increases; this better balances the influence of each transmission condition on the final neural network and is more conducive to improving the generalization ability of the neural network.
- Specifically, when the sorting result is from small to large, the data volume of training data under the transmission conditions decreases along the sorting direction; when the sorting result is from large to small, the data volume of training data under the transmission conditions increases along the sorting direction.
- When decreasing, the proportion of the corresponding transmission condition's data to the total data can decrease in any manner, such as a linear decrease, arithmetic decrease, proportional decrease, exponential decrease, or power-function decrease; when increasing, the proportion of the corresponding transmission condition's data to the total data can increase in any manner, such as a linear increase, arithmetic increase, proportional increase, exponential increase, or power-function increase.
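- One admissible allocation rule is sketched below: an arithmetic (linearly decreasing) assignment over the sorted contributions, with a preset floor so every condition keeps enough data. The linear rule and the 5% floor are illustrative assumptions; the text allows any decreasing or increasing rule:

```python
def allocate_by_contribution(contributions, total_n, floor_ratio=0.05):
    """Assign per-condition data amounts that decrease as the contribution
    degree C_k increases, keeping each share above floor_ratio.

    contributions -- {condition: C_k}
    total_n       -- total data amount N_all
    """
    ordered = sorted(contributions, key=contributions.get)  # C_k ascending
    k = len(ordered)
    # Arithmetic weights: smallest C_k gets weight k, largest gets weight 1.
    weights = {cond: k - i for i, cond in enumerate(ordered)}
    total_w = sum(weights.values())
    shares = {cond: w / total_w for cond, w in weights.items()}
    # Enforce the preset threshold so no condition is starved of data.
    shares = {cond: max(s, floor_ratio) for cond, s in shares.items()}
    norm = sum(shares.values())
    return {cond: round(total_n * s / norm) for cond, s in shares.items()}

# Example: three SNR buckets; the lowest-contribution bucket gets the most data.
print(allocate_by_contribution({"snr0": 0.2, "snr10": 0.5, "snr20": 0.9}, 9000))
```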
- Optionally, performing at least one of the reduction and increase operations includes: if the contribution degree of a transmission condition is greater than a reference contribution degree, determining that its contribution degree is a larger contribution degree, and reducing the amount of training data under that transmission condition; if the contribution degree of a transmission condition is not greater than the reference contribution degree, determining that its contribution degree is a smaller contribution degree, and increasing the amount of training data under that transmission condition.
- That is, an intermediate comparison reference for the contribution degree, called the reference contribution degree, may first be determined according to the ranking.
- Optionally, the reference contribution degree is the median of the ranking, or the contribution degree at a set position in the ranking, or the average of the contribution degrees in the ranking, or the contribution degree in the ranking closest to that average.
- The average may be an arithmetic mean, geometric mean, harmonic mean, weighted mean, quadratic mean, or exponential mean, etc.
- Then the contribution degree of each transmission condition is compared with the reference contribution degree in turn. If the contribution degree of the i-th transmission condition is greater than the reference contribution degree, the i-th transmission condition is determined to have the larger contribution degree described in the above embodiment, and its data volume is reduced; otherwise, if the contribution degree of the i-th transmission condition is not greater than the reference contribution degree, the i-th transmission condition is determined to have the smaller contribution degree described in the above embodiment, and its data volume is increased.
- By determining an intermediate comparison reference for the contribution degree, it is only necessary to compare the other contribution degrees with this reference and then decide, according to the comparison result, whether to increase or decrease the data amount corresponding to each transmission condition; the algorithm is simple and the calculation amount is small.
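- A minimal sketch of this reference-comparison rule follows, using the median as the reference contribution degree; the scaling amplitude `step` is a hypothetical choice, since the text only requires a decrease above the reference and an increase at or below it:

```python
import statistics

def adjust_around_reference(contributions, base_n, step=0.5):
    """Scale each condition's data amount by comparing its contribution
    degree with a reference contribution degree (here: the median).

    base_n -- equal-proportion data amount per condition before adjustment
    step   -- illustrative amplitude of the increase/decrease
    """
    ref = statistics.median(contributions.values())  # assumes ref > 0
    amounts = {}
    for cond, c in contributions.items():
        if c > ref:
            # Larger contribution degree: reduce the data amount.
            amounts[cond] = round(base_n * (1 - step * min(1.0, (c - ref) / ref)))
        else:
            # Smaller (or equal) contribution degree: increase the data amount.
            amounts[cond] = round(base_n * (1 + step * min(1.0, (ref - c) / ref)))
    return amounts

# Example: condition "c" exceeds the median (0.5) and is reduced.
print(adjust_around_reference({"a": 0.2, "b": 0.5, "c": 0.9}, 1000))
```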
- Optionally, determining the amount of training data under each of the transmission conditions based on the contribution of each transmission condition to the neural network optimization goal includes: determining the weighting coefficient corresponding to each transmission condition; and determining the data amount of training data under each transmission condition based on the contribution of each transmission condition to the optimization goal, combined with the weighting coefficient.
- That is, when determining the proportion of data under different transmission conditions to the total data volume, a weighting item can be designed based on the actual probability density of the different transmission conditions: the data volume of conditions with high probability density is increased, and the data volume of conditions with low probability density is reduced. For example, assuming that the probability density of the k-th SNR is p_k, the corresponding weighting item is f(p_k), where f(p_k) is an increasing function of p_k; the data amount of the k-th SNR after applying the weighting item is f(p_k)·N_k.
- The probability density reflects the occurrence probability of transmission conditions in different environments; transmission conditions in different environments do not occur with equal probability, some occurring slightly more often and some slightly less.
- Optionally, the weighting coefficient is an increasing function of the probability density. That is, the relationship between the weighting item and the probability density can be any increasing function; it is only necessary to ensure that the weighting item is large where the probability density is high and small where it is low.
- In this way, the weighting item is designed according to the actual probability density of the transmission conditions, which can better adapt to the actual environment.
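- The weighting step can be sketched as below; the square-root weighting is just one increasing function f, chosen here for illustration:

```python
import math

def apply_density_weighting(amounts, densities, f=math.sqrt):
    """Update each condition's data amount N_k to f(p_k) * N_k, where p_k
    is the condition's actual probability density and f is any increasing
    function (sqrt is an illustrative choice).
    """
    return {cond: round(f(densities[cond]) * n) for cond, n in amounts.items()}

# Example: mid SNRs occur more often in the field, so they are up-weighted.
amounts = {"snr0": 4500, "snr10": 3000, "snr20": 1500}
densities = {"snr0": 0.2, "snr10": 0.6, "snr20": 0.2}
print(apply_density_weighting(amounts, densities))
```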
- the method further includes: sending the training data set to a target device, where the target device is used to train the neural network based on the training data set.
- The training data set obtained in the embodiments of the present application can be used to train the neural network, and the method for obtaining the training data set can be applied to data transmission scenarios where the data acquisition end and the neural network training end are not the same execution end.
- That is, data collection is completed according to the determined ratio of the mixed data set, a training data set is built after calibration, and the training data set is fed back to the other device that needs to execute the neural network training process, namely the target device.
- the target device is a second device different from the current device, which may be a terminal or a network side device, and is used to complete the training of the neural network model by using the obtained training data set.
- Optionally, sending the training data set to the target device includes: directly sending the training data set to the target device, or sending the training data set to the target device after a setting transformation, where the setting transformation includes at least one of specific quantization, specific compression, and neural network processing according to a pre-agreement or configuration.
- the sending method may be direct sending or indirect sending.
- Indirect sending refers to transforming and feeding back the training data in the training data set.
- For example, a specific quantization method or a specific compression method can be adopted, or the training data to be sent can be processed by a pre-agreed or configured neural network before sending.
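- As one example of the "specific quantization" transformation, a uniform scalar quantizer is sketched below; the 8-bit width and the uniform scheme are illustrative assumptions, not mandated by the text:

```python
import numpy as np

def quantize_dataset(samples, n_bits=8):
    """Uniformly quantize training samples before feedback; returns the
    integer codes plus the (scale, offset) the receiver needs to dequantize.
    """
    x = np.asarray(samples, dtype=np.float64)
    lo, hi = x.min(), x.max()
    levels = 2 ** n_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((x - lo) / scale).astype(np.uint8 if n_bits <= 8 else np.uint16)
    return codes, scale, lo

def dequantize_dataset(codes, scale, offset):
    """Receiver-side inverse of quantize_dataset."""
    return codes.astype(np.float64) * scale + offset
```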
- Taking as an example an application scenario in which the neural network optimization objective (or objective function, loss function) is an index to be minimized, such as the mean square error (MSE) or the normalized mean square error (NMSE), and the transmission condition is the signal-to-noise ratio (SNR): consider K types of SNR data to be mixed, denote the total amount of data as N_all, the k-th signal-to-noise ratio as SNR_k, and the data volume of the k-th SNR as N_k.
- The contribution of the k-th SNR's data to the above index to be minimized is denoted C_k.
- The SNRs are sorted in order of C_k from small to large (or from large to small).
- According to the order of C_k from small to large, the corresponding N_k is made to decrease from large to small, where the decreasing rule can be any decreasing rule; or, according to the order of C_k from large to small, the corresponding N_k is made to increase from small to large, where the increasing rule can be any increasing rule.
- Optionally, when determining the proportion of data under different transmission conditions to the total data volume, weighting items may be designed based on the actual probability densities of the different transmission conditions. Assuming that the probability density of the k-th SNR is p_k, the corresponding weighting item is f(p_k), an increasing function of p_k; the data amount of the k-th SNR after applying the weighting item is f(p_k)·N_k.
- Similarly, taking as an example an application scenario in which the optimization objective (or objective function, loss function) of the neural network is an index to be maximized, such as the signal-to-interference-plus-noise ratio (SINR), spectral efficiency, or throughput, and the transmission condition is the signal-to-noise ratio (SNR): consider K types of SNR data to be mixed, denote the total amount of data as N_all, the k-th signal-to-noise ratio as SNR_k, and the data volume of the k-th SNR as N_k.
- The contribution of the k-th SNR's data to the above index to be maximized is denoted C_k.
- The SNRs are sorted in order of C_k from small to large (or from large to small).
- According to the order of C_k from small to large, the corresponding N_k is made to decrease from large to small, where the decreasing rule can be any decreasing rule; or, according to the order of C_k from large to small, the corresponding N_k is made to increase from small to large, where the increasing rule can be any increasing rule.
- Likewise, when determining the proportion of data under different transmission conditions to the total data volume, weighting items may be designed based on the actual probability densities of the different transmission conditions: assuming the probability density of the k-th SNR is p_k, the corresponding weighting item is f(p_k), an increasing function of p_k, and the data amount of the k-th SNR after applying the weighting item is f(p_k)·N_k.
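- Tying the two worked examples together, the following end-to-end sketch sorts SNR buckets by C_k, assigns N_k by an arithmetic decreasing rule, and then applies the density weighting f(p_k); the contribution and density values, and the choice f(p) = sqrt(p), are hypothetical:

```python
def mix_snr_dataset(contribs, densities, n_all, f=lambda p: p ** 0.5):
    """Sort by C_k, let N_k decrease as C_k increases (arithmetic rule),
    then apply the weighting f(p_k) and renormalize to the total N_all.
    """
    ordered = sorted(contribs, key=contribs.get)                 # C_k ascending
    raw = {s: len(ordered) - i for i, s in enumerate(ordered)}   # N_k descending
    weighted = {s: f(densities[s]) * raw[s] for s in ordered}    # f(p_k) * N_k
    total = sum(weighted.values())
    return {s: round(n_all * w / total) for s, w in weighted.items()}

# Hypothetical contribution degrees and field densities for 4 SNR buckets.
contribs = {"-5dB": 0.9, "0dB": 0.6, "10dB": 0.3, "20dB": 0.1}
densities = {"-5dB": 0.1, "0dB": 0.3, "10dB": 0.4, "20dB": 0.2}
print(mix_snr_dataset(contribs, densities, n_all=20000))
```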
- the training data set acquisition method provided in the embodiment of the present application may be executed by a training data set acquisition device, or a control module in the training data set acquisition device for executing the training data set acquisition method.
- the method for obtaining the training data set executed by the training data set obtaining device is taken as an example to illustrate the training data set obtaining device provided in the embodiment of the present application.
- the structure of the training data set acquisition device in the embodiment of the present application is shown in Figure 3, which is a schematic structural diagram of the training data set acquisition device provided in the embodiment of the present application, and the device can be used to implement the above-mentioned training data set acquisition method embodiments.
- The training data set acquisition device includes: a first processing module 301 and a second processing module 302, wherein:
- the first processing module 301 is used to determine the data volume of the training data under each transmission condition based on the contribution of each transmission condition to the neural network optimization goal;
- the second processing module 302 is configured to acquire the training data under each of the transmission conditions based on the data amount of the training data under each of the transmission conditions, so as to form a training data set for training the neural network;
- the contribution degree of the transmission condition to the neural network optimization objective indicates the degree of influence of the transmission condition on the value of the neural network optimization objective.
- Optionally, the first processing module is further configured to perform the sorting and data-volume adjustment operations described in the method embodiments above.
- Optionally, the type of the transmission condition includes at least one of the following:
- Signal-to-noise ratio (SNR) or signal-to-interference-plus-noise ratio (SINR)
- Reference signal receiving power (RSRP)
- Interference intensity
- Channel parameters
- Cell type
- The distance between the terminal and the base station
- Station spacing
- Weather and environmental factors
- Antenna configuration information at the transmitting end or receiving end
- UE capability/type
- Optionally, the second processing module is configured to:
- collect data under each of the transmission conditions based on the data volume of the training data under each transmission condition and calibrate it, to form the training data set under each transmission condition; or
- collect a set amount of data under each of the transmission conditions and, based on the data volume of the training data under each transmission condition, select part of the data from the set amount of data and calibrate it, or supplement the set amount of data and calibrate it, to form the training data set under each transmission condition.
- Optionally, when performing at least one of reducing the data volume of training data under transmission conditions corresponding to larger contribution degrees in the ranking and increasing the data volume of training data under transmission conditions corresponding to smaller contribution degrees, the first processing module is configured such that: when the sorting result is from small to large, the data volume of training data under the transmission conditions decreases along the sorting direction; and when the sorting result is from large to small, the data volume of training data under the transmission conditions increases along the sorting direction.
- Optionally, when performing at least one of the reduction and increase operations, the first processing module is configured to: if the contribution degree of a transmission condition is greater than the reference contribution degree, determine that its contribution degree is a larger contribution degree and reduce the amount of training data under that transmission condition; and if the contribution degree of a transmission condition is not greater than the reference contribution degree, determine that its contribution degree is a smaller contribution degree and increase the amount of training data under that transmission condition.
- Optionally, the reference contribution degree is the median of the ranking, or the contribution degree at a set position in the ranking, or the average of the contribution degrees in the ranking, or the contribution degree in the ranking closest to that average.
- Optionally, the first processing module is further configured to: determine the weighting coefficient corresponding to each transmission condition, and determine the data volume of the training data under each transmission condition based on the contribution of each transmission condition to the optimization goal, combined with the weighting coefficient.
- Optionally, the weighting coefficient is an increasing function of the probability density.
- the device also includes:
- a sending module configured to send the training data set to a target device, and the target device is used to train the neural network based on the training data set.
- Optionally, the sending module is configured to: directly send the training data set to the target device, or send the training data set to the target device after a setting transformation, where the setting transformation includes at least one of specific quantization, specific compression, and neural network processing according to a pre-agreement or configuration.
- The training data set acquisition device in the embodiments of the present application may be a device, a device with an operating system, an electronic device, or a component, integrated circuit, or chip in a terminal or network-side device.
- The device or electronic equipment may be a mobile terminal or a non-mobile terminal, and may also include but is not limited to the types of network-side equipment 102 listed above.
- the mobile terminal may include but not limited to the types of terminal 101 listed above
- The non-mobile terminal may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (Personal Computer, PC), a television (TV), a teller machine, or a self-service machine, etc., which is not specifically limited in the embodiments of the present application.
- NAS Network Attached Storage
- the training data set acquisition device provided by the embodiment of the present application can realize each process realized by the method embodiment in FIG. 2 and achieve the same technical effect. To avoid repetition, details are not repeated here.
- the embodiment of the present application also provides a wireless transmission method, which can be executed by a terminal and/or a network-side device.
- The terminal can be the terminal 101 shown in FIG. 1, and the network-side device can specifically be the network side device 102 shown in FIG. 1.
- Referring to FIG. 4, which is a schematic flowchart of the wireless transmission method provided by the embodiment of the present application, the method includes:
- Step 401 based on the neural network model, perform a wireless transmission operation to realize the wireless transmission.
- The neural network model is obtained by training in advance with a training data set, and the training data set is obtained based on the training data set acquisition methods described in the above embodiments.
- That is, the training data set (or the data ratio of each transmission condition) can be obtained in advance according to the above embodiments of the training data set acquisition method, and the training data set can then be used to train the constructed and initialized neural network to obtain the neural network model. Afterwards, the neural network model is applied to the wireless transmission calculation process of the embodiment of the present application, and the wireless transmission is finally realized through calculation.
- The wireless transmission application environment of the embodiments of the present application can be any environment in which machine learning can replace the functions of one or more modules in an existing wireless transmission network; that is, wherever machine learning is used to train neural networks for wireless transmission, the training data set can be constructed using the above training data set acquisition method embodiments, and the neural network model for wireless transmission can be trained with that training data set.
- Wireless transmission application environments include, for example, pilot design, channel estimation, signal detection, user pairing, HARQ, and positioning at the physical layer; resource allocation, handover, and mobility management at the higher layers; and scheduling or slicing at the network layer. The embodiments of this application do not limit the specific wireless transmission application scenario.
- In the embodiments of the present application, data under various transmission conditions are selected in different proportions to construct a non-uniformly mixed training data set, and a common neural network for wireless transmission under different actual transmission conditions is trained based on this non-uniformly mixed training data set, enabling the trained neural network to achieve higher performance under each transmission condition.
- Optionally, before performing the wireless transmission operation based on the neural network model, the wireless transmission method further includes: training the neural network model based on the training data set, using any of the training methods described below.
- That is, before the neural network model is used to perform wireless transmission operations, it must first be trained with the training data set.
- The training phase of the neural network can be performed offline, and the execution subject can be a network-side device, a terminal-side device, or a combination of network-side and terminal-side devices.
- The network-side device is an access network device, a core network device, or a data network device; that is, the network-side device in the embodiments of the present application may include one or more of a network-side device in the access network, a core network device, and a data network (Data Network, DN) device.
- The network-side device in the access network may be a base station, or a node responsible for AI training on the RAN side, and is not limited to the types of access network device 1021 listed in FIG. 1.
- the core network equipment is not limited to the type of core network equipment 1022 listed in FIG. 1 , and the data network equipment may be NWDAF, UDM, UDR, or UDSF.
- When the execution subject is a network-side device, the training may be centralized training based on a single network-side device, or distributed training (such as federated learning) based on multiple network-side devices.
- When the execution subject is a terminal-side device, the training may be centralized training based on a single terminal, or distributed training (such as federated learning) based on multiple terminals.
- When the execution subject is a network-side device-terminal-side device combination, it can be a single network-side device combined with multiple terminal devices, a single terminal device combined with multiple network-side devices, or multiple network-side devices combined with a terminal-side device. This application does not specifically limit the execution subject of the training process.
- the wireless transmission method further includes: during the distributed training process, sharing the proportion of the training data under each transmission condition among the subjects performing the distributed training.
- In this way, each execution subject can realize the training of the neural network model without sharing its own data, which can solve problems such as insufficient computing power or training capacity of a single device, the inability of devices to share data (due to privacy concerns), or the cost of transferring large amounts of data.
- Optionally, any one of the multiple network-side devices calculates and determines the proportion of training data under each of the transmission conditions, and sends the proportion, through first setting type interface signaling, to the other network-side devices among the multiple network-side devices.
- That is, the network-side interface signaling used is of a preset first setting type.
- the first setting type interface signaling includes Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling, or N22 interface signaling.
- For example, the Xn interface can be used between base stations to share the data proportion through Xn interface signaling, and the N1, N2, N3, N4, N5, N6, N7, N8, N9, N10, N11, N12, N13, N14, N15, or N22 interfaces can be used between core network devices to share the data proportion through the corresponding type of interface signaling.
- For example, a certain network-side device may calculate and determine the data proportion information, and then share the data proportion information with other network-side devices through Xn interface signaling (including but not limited to Xn).
- Optionally, any one of the multiple terminals calculates and determines the proportion of training data under each of the transmission conditions, and sends the proportion, through second setting type interface signaling, to the other terminals among the multiple terminals.
- That is, the terminal interface signaling used is of a preset second setting type.
- the second setting type interface signaling includes PC5 interface signaling or sidelink interface signaling.
- For example, a certain terminal device calculates and determines the data proportion information, and then shares the data proportion information with other terminal devices through PC5 interface signaling (including but not limited to PC5).
- Optionally, any network-side device or terminal in the combination of network-side devices and terminals calculates and determines the proportion of the training data, and sends the proportion, through third setting type signaling, to the other network-side devices and terminals in the combination.
- That is, when the execution subject is a network-side device-terminal-side device combination, all devices share the same data proportion: one terminal (or network-side device) in the combination can calculate the proportion of training data under each transmission condition and share it, through setting type interface signaling, with the other devices in the combination.
- The setting type interface signaling here is preset third setting type interface signaling.
- the third setting type signaling includes RRC, PDCCH layer 1 signaling, PDSCH, MAC CE, SIB, Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 Interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 Interface signaling, N15 interface signaling, N22 interface signaling, PUCCH layer 1 signaling, PUSCH, PRACH MSG1, PRACH MSG3, PRACH MSGA, PC5 interface signaling or sidelink interface signaling.
- the method further includes: acquiring real-time data under the transmission conditions, and adjusting the trained neural network model online based on the real-time data.
- That is, online data of the transmission environment collected in real time is used to fine-tune the pre-trained neural network parameters, so that the neural network adapts to the actual environment.
- Fine-tuning is a training process that uses the parameters of the pre-trained neural network as initialization.
- During fine-tuning, the parameters of some layers can be frozen; generally, the layers near the input end are frozen and the layers near the output end remain trainable, so as to ensure that the network can still converge.
- The smaller the amount of data in the fine-tuning stage, the more layers are suggested to be frozen, fine-tuning only a small number of layers close to the output.
- During fine-tuning, the data mixing ratio of the different transmission conditions from the model training stage can be reused; alternatively, the trained neural network can be taken to the actual wireless environment for fine-tuning, directly using the data generated in the actual wireless environment without controlling the proportion of data.
- In this way, online fine-tuning is performed on the trained neural network model to make the neural network more adaptable to the actual environment.
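- A minimal fine-tuning sketch follows, freezing the layers near the input and updating only a few layers near the output; PyTorch, the Sequential layout, and the MSE loss (e.g. for channel estimation) are illustrative assumptions:

```python
import torch
import torch.nn as nn

def fine_tune_online(model: nn.Sequential, loader, n_trainable_tail=1,
                     lr=1e-4, steps=100):
    """Online fine-tuning: freeze input-side layers, train the last
    n_trainable_tail layers on real-time data from the actual environment.
    """
    layers = list(model.children())
    for layer in layers[:len(layers) - n_trainable_tail]:
        for p in layer.parameters():
            p.requires_grad = False          # freeze layers near the input
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:                # restart the loader if exhausted
            it = iter(loader)
            x, y = next(it)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    return model
```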
- Optionally, acquiring the real-time data under the transmission conditions and adjusting the trained neural network model online based on the real-time data includes: obtaining the proportion of training data under each transmission condition and obtaining the real-time data under the transmission conditions; if the proportion of real-time data under any transmission condition is higher than the proportion of training data under that transmission condition, then, when adjusting the trained neural network online, the data exceeding the proportion of training data under that transmission condition is not input into the trained neural network model.
- the data mixing ratio of the different transmission conditions from the model training stage can be reused; that is, the proportion or data volume of real-time data under each transmission condition in the online fine-tuning stage is determined according to the proportion of training data under each transmission condition in the training stage, and the corresponding amount of real-time data under each transmission condition is acquired accordingly.
- when the mixing ratio from the training phase is used and the proportion of data from a certain transmission condition in the actual wireless environment exceeds that condition's proportion in the model training phase, the excess data is not input into the network for fine-tuning.
- when the data proportion from the training phase is used, data whose transmission condition exceeds its proportion is not input into the neural network model for fine-tuning, avoiding the unbalanced influence of the excess data volume; a sketch of this capping follows.
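- The following sketch, under assumed data structures (a dict from transmission condition to collected samples and a dict of training-phase proportions; both names are hypothetical), shows one way to discard online data that exceeds the training-phase ratio before fine-tuning; capping each condition at its training share of the collected batch is a simplification of the behavior described above.

```python
def cap_to_training_ratio(realtime: dict, train_ratio: dict) -> dict:
    """Drop real-time samples that exceed each condition's training proportion."""
    total = sum(len(samples) for samples in realtime.values())
    capped = {}
    for cond, samples in realtime.items():
        allowed = int(train_ratio[cond] * total)  # share allowed by the training mix
        capped[cond] = samples[:allowed]          # the excess is not fed to the network
    return capped

# Example: the 0 dB condition is over-represented online, so its excess is dropped.
batch = cap_to_training_ratio(
    realtime={"snr_0dB": list(range(70)), "snr_10dB": list(range(30))},
    train_ratio={"snr_0dB": 0.5, "snr_10dB": 0.5},
)
```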
- the acquiring of the real-time data under each of the transmission conditions includes: collecting online the data of at least one of the network-side device and the terminal under each of the transmission conditions, as the real-time data under each of the transmission conditions;
- the online adjustment of the trained neural network model includes: based on the data of at least one of the network-side device and the terminal under each of the transmission conditions, using the network-side device or the terminal to adjust the trained neural network model online.
- the execution subject is at the input end of the neural network, and may be the network-side device and/or the terminal-side device. That is, when the execution subject is a network-side device, real-time data of the various transmission conditions at the network side can be obtained online; when the execution subject is a terminal, real-time data of the various transmission conditions at the terminal side can be obtained online; and when the execution subject includes both the network-side device and the terminal, real-time data of each transmission condition needs to be obtained for both execution subjects.
- the network-side device or terminal performs online fine-tuning of the neural network model according to its corresponding real-time data, and updates the network parameters.
- before acquiring the real-time data under each of the transmission conditions, the wireless transmission method further includes:
- the network-side device obtains the proportion of training data under each of the transmission conditions from the network-side device of the training phase through any one of Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling and N22 interface signaling;
- or, the network-side device obtains the proportion of training data under each of the transmission conditions from the terminal of the training phase through any one of PUCCH layer 1 signaling, PUSCH, MSG1 of PRACH, MSG3 of PRACH, and MSG A of PRACH;
- the terminal obtains the proportion of training data under each of the transmission conditions from the terminal in the training phase through PC5 interface signaling or sidelink interface signaling;
- the terminal obtains the proportion of training data under each of the transmission conditions from the network side equipment in the training phase through any one of RRC, PDCCH layer 1 signaling, PUSCH, MAC CE and SIB signaling.
- through any one of N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling and N22 interface signaling, or PC5 interface signaling, or sidelink interface signaling, or interface signaling such as RRC, PDCCH layer 1 signaling, MAC CE or SIB, the data proportion information is obtained from the other execution subjects, so as to realize the sharing of the data proportion information in the online fine-tuning phase and ensure the generalization ability of the neural network.
- Fig. 5 is a schematic flow diagram of constructing a neural network model in the wireless transmission method provided according to the embodiment of the present application.
- Fig. 5 shows the model construction process involved in the wireless transmission method proposed in the embodiment of the present application, which can be divided into an offline training phase (the part marked 1 in the figure) and a fine-tuning stage in the actual transmission network (the part marked 2 in the figure), where the training data set can be constructed before the offline training.
- the data of all transmission conditions can first be mixed in equal proportions, so as to determine an initial equal data volume or data proportion for each transmission condition. All transmission conditions in the mixture are then sorted by their contribution to the neural network optimization objective. After that, on the premise of ensuring sufficient data for all transmission conditions, the data volume of transmission conditions with low contribution is increased and the data volume of transmission conditions with high contribution is reduced, so as to determine the data proportion of each transmission condition, and a mixed training data set is further constructed according to this ratio.
- "guaranteeing sufficient data for all transmission conditions” may refer to setting a threshold, and the ratio of the amount of data for any one transmission condition to the total amount of data must not be lower than the threshold.
- when the contribution degrees are sorted from small to large, the proportion of the corresponding transmission condition's data to the total data (that is, the ratio) can decrease in any manner, such as a linear decrease, an arithmetic (equal-difference) decrease, a proportional decrease, an exponential function decrease or a power function decrease.
- correspondingly, the proportion of the data corresponding to a transmission condition to the total data can be increased in any manner, such as a linear increase, an arithmetic increase, a proportional increase, an exponential function increase or a power function increase.
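- As a sketch of the decreasing schedules described above: the function below assigns a proportion to each condition after sorting by contribution (smallest contribution first, so it receives the most data); the decay forms and the floor value are illustrative assumptions rather than prescribed choices.

```python
def proportions(n: int, mode: str = "exp", floor: float = 0.05) -> list:
    """Monotonically decreasing data proportions over n sorted conditions."""
    if mode == "linear":                 # arithmetic (equal-difference) decrease
        raw = [float(n - i) for i in range(n)]
    elif mode == "exp":                  # exponential decrease
        raw = [2.0 ** (-i) for i in range(n)]
    else:
        raise ValueError(mode)
    p = [r / sum(raw) for r in raw]
    p = [max(x, floor) for x in p]       # keep every condition above a minimum share
    s = sum(p)
    return [x / s for x in p]            # re-normalize (the floor is then approximate)
```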
- Figure 6 is a schematic flow diagram of determining the proportion of training data in the training data set acquisition method provided according to the embodiment of the present application, mainly including:
- if the contribution of the i-th transmission condition is greater than the median contribution, then, on the premise of ensuring that the data volume of all transmission conditions is sufficient, the data volume of the i-th transmission condition is reduced, and the reduction is proportional to the difference between the contribution of the i-th transmission condition and the median contribution;
- if the contribution of the i-th transmission condition is less than the median contribution, then, on the premise of ensuring that the data volume of all transmission conditions is sufficient, the data volume of the i-th transmission condition is increased, and the increase is proportional to the difference between the median contribution and the contribution of the i-th transmission condition.
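- A minimal sketch of this median rule, under stated assumptions: starting from an equal split, each condition's data volume is shifted away from the equal share in proportion to the gap between its contribution and the median contribution; the step size `alpha` and the minimum volume are illustrative parameters.

```python
import statistics

def median_adjusted_volumes(contribs: list, total: int,
                            alpha: float = 0.5, min_volume: int = 10) -> list:
    """Reduce above-median-contribution conditions, increase below-median ones."""
    med = statistics.median(contribs)
    equal = total / len(contribs)
    volumes = []
    for c in contribs:
        # the gap to the median decides both the direction and the magnitude
        v = equal * (1.0 - alpha * (c - med) / max(abs(med), 1e-12))
        volumes.append(max(int(v), min_volume))  # keep every condition sufficient
    return volumes

# Conditions with high contribution get less data; low contribution, more.
print(median_adjusted_volumes([0.9, 0.5, 0.3, 0.1], total=10000))
```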
- after the training data set is obtained, it is used to iteratively train the neural network model offline until convergence, yielding the trained neural network model.
- the data in the actual wireless network is collected in real time, and the parameters of the pre-trained neural network model are fine-tuned, so that the neural network model can adapt to the actual environment.
- Online fine-tuning can be considered as a retraining process using the parameters of the pre-trained neural network as initialization.
- DMRS: Demodulation Reference Signal.
- Figure 7 is a schematic structural diagram of the neural network used for DMRS channel estimation in the wireless transmission method provided according to the embodiment of the present application, wherein the input of the neural network is the N_RE_DMRS symbols obtained after the DMRS passes through the channel and noise is added, and the output of the neural network is N_RE symbols, corresponding to the channel estimation results on all N_RE time-frequency resources.
- the training data is a labeled DMRS information pair, that is, a DMRS signal sample (comprising N_RE_DMRS symbols) paired with a label (the true value of the channel corresponding to the current DMRS signal sample, comprising N_RE symbols in total).
- a large number of labeled DMRS information pairs are used to adjust the parameters of the neural network during training, minimizing the normalized mean square error (NMSE) between the output of the neural network for a DMRS signal sample and its label.
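- The NMSE objective described above can be written, for example, as follows, assuming complex channel tensors in PyTorch; the tensor shapes and variable names are illustrative.

```python
import torch

def nmse(est: torch.Tensor, true: torch.Tensor) -> torch.Tensor:
    """Normalized mean square error between estimated and true channels."""
    err = torch.sum(torch.abs(est - true) ** 2, dim=-1)
    ref = torch.sum(torch.abs(true) ** 2, dim=-1)
    return torch.mean(err / ref)

# Example: a batch of 32 channel estimates, each over 48 time-frequency REs.
h_true = torch.randn(32, 48, dtype=torch.cfloat)
h_est = h_true + 0.1 * torch.randn(32, 48, dtype=torch.cfloat)
loss = nmse(h_est, h_true)  # scalar, usable directly as the training loss
```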
- the kth SNR is denoted as SNR_k;
- the data volume of the kth SNR is N_k.
- the neural network is trained based on federated learning, and there is one network-side device and multiple terminal-side devices participating in federated learning.
- the network-side device determines the data ratio of each SNR when the mixed training data set is constructed.
- the contribution of the kth SNR data to the above NMSE is denoted as C_k.
- the SNRs are sorted in the order of C_k from small to large (or from large to small). In this embodiment, data with a low SNR contributes more to the NMSE, and data with a high SNR contributes less. The median of all the sorted contributions is recorded.
- the network-side device sends the data ratio information of each SNR to all terminals participating in the joint training through interface signaling such as RRC, PDCCH layer 1 signaling, MAC CE or SIB, so as to perform federated learning of the neural network, that is, offline training.
- the trained neural network is fine-tuned online in the actual wireless network. Since the data proportion information has been shared with all terminals in the offline training phase, the data proportion can be used in the online fine-tuning phase. When the proportion of data from a certain SNR in the actual wireless environment exceeds the proportion of the SNR in the model training stage, the excess data will not be input into the neural network for fine-tuning.
- the embodiments of the present application can improve the generalization ability of a trained neural network model in a changing wireless environment.
- the wireless transmission method provided in the embodiment of the present application may be executed by a wireless transmission device, or a control module in the wireless transmission device for executing the wireless transmission method.
- the wireless transmission device provided in the embodiment of the present application is described by taking the wireless transmission device executing the wireless transmission method as an example.
- the structure of the wireless transmission device in the embodiment of the present application is shown in Figure 8, which is a schematic structural diagram of the wireless transmission device provided in the embodiment of the present application.
- the device can be used to implement the wireless transmission in the above wireless transmission method embodiments.
- the device includes:
- the third processing module 801 is configured to perform wireless transmission calculations based on the neural network model to realize the wireless transmission.
- the neural network model is obtained by training in advance using a training data set, and the training data set is obtained based on the methods for obtaining the training data set as described in the above-mentioned embodiments.
- the wireless transmission device further includes:
- a training module configured to use any of the following training methods to train and obtain the neural network model based on the training data set, and the following training methods include:
- the network side device is an access network device, a core network device or a data network device.
- the wireless transmission device further includes:
- the fourth processing module is configured to share the proportion of training data under each transmission condition among the subjects performing the distributed training during the distributed training process.
- the fourth processing module is configured such that any one network-side device among the multiple network-side devices calculates and determines the proportion of the training data under each of the transmission conditions, and sends the proportion, through the first setting type interface signaling, to the other network-side devices among the plurality of network-side devices.
- the first setting type interface signaling includes Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling, or N22 interface signaling.
- the fourth processing module is configured such that any one terminal among the multiple terminals calculates and determines the proportion of the training data under each of the transmission conditions, and sends the proportion, through the second setting type interface signaling, to the other terminals among the plurality of terminals.
- the second setting type interface signaling includes PC5 interface signaling or sidelink interface signaling.
- the fourth processing module is configured such that any network-side device or any terminal among the network-side devices and terminals calculates and determines the proportion of the training data under each of the transmission conditions, and sends the proportion, through the third setting type signaling, to the other network-side devices or terminals among the network-side devices and terminals.
- the third setting type signaling includes RRC, PDCCH layer 1 signaling, PDSCH, MAC CE, SIB, Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling, N22 interface signaling, PUCCH layer 1 signaling, PUSCH, PRACH MSG1, PRACH MSG3, PRACH MSG A, PC5 interface signaling or sidelink interface signaling.
- the wireless transmission device further includes:
- the fine-tuning module is used to obtain real-time data under the transmission conditions, and adjust the trained neural network model online based on the real-time data.
- the fine-tuning module is used for:
- if the proportion of real-time data under any of the transmission conditions is higher than the proportion of training data under that transmission condition, then, in the process of online adjustment of the trained neural network model, the data among the real-time data under that transmission condition that exceeds the proportion of the training data is not input into the trained neural network model.
- when acquiring the real-time data under each of the transmission conditions, the fine-tuning module is used for:
- when performing the online adjustment of the trained neural network model, the fine-tuning module is used for:
- the network-side device or the terminal is used to adjust the trained neural network model online.
- the wireless transmission device further includes:
- a communication module configured to, when the network-side device or the terminal does not obtain the proportion of training data under each of the transmission conditions during the training phase,
- the network-side device obtains the proportion of training data under each of the transmission conditions from the network-side device of the training phase through any one of Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling and N22 interface signaling;
- or, the network-side device obtains the proportion of training data under each of the transmission conditions from the terminal of the training phase through any one of PUCCH layer 1 signaling, PUSCH, MSG1 of PRACH, MSG3 of PRACH, and MSG A of PRACH;
- the terminal obtains the proportion of training data under each of the transmission conditions from the terminal in the training phase through PC5 interface signaling or sidelink interface signaling;
- the terminal obtains the proportion of training data under each of the transmission conditions from the network side equipment in the training phase through any one of RRC, PDCCH layer 1 signaling, PUSCH, MAC CE and SIB signaling.
- the wireless transmission device in the embodiment of the present application may be a device, a device with an operating system or an electronic device, and may also be a component, an integrated circuit, or a chip in a terminal or network-side device.
- the apparatus or electronic device may be a mobile terminal or a non-mobile terminal, and may also include, but is not limited to, the types of the network-side device 102 listed above.
- the mobile terminal may include, but is not limited to, the types of the terminal 101 listed above;
- the non-mobile terminal may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, etc., which is not specifically limited in this embodiment of the present application.
- the wireless transmission device in the embodiment of the present application may be a device with an operating system.
- the operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in this embodiment of the present application.
- the wireless transmission device provided in the embodiment of the present application can realize various processes realized by the wireless transmission method embodiments in FIG. 4 to FIG. 7 , and achieve the same technical effect. To avoid repetition, details are not repeated here.
- the embodiment of the present application also provides a communication device 900, including a processor 901, a memory 902, and a program or instruction stored in the memory 902 and operable on the processor 901; for example, the communication device 900 is a terminal or a network-side device. When the program or instruction is executed by the processor 901, the various processes of the above-mentioned training data set acquisition method embodiment or of the above-mentioned wireless transmission method embodiment can be realized, achieving the same technical effects; to avoid repetition, details are not repeated here.
- the embodiment of the present application also provides a communication device, which may be a terminal or a network-side device.
- the communication device includes a processor and a communication interface, wherein the processor is used to determine, based on the contribution of each transmission condition to the neural network optimization objective, the data volume of the training data under each of the transmission conditions, and, based on the data volume of the training data under each of the transmission conditions, to obtain the training data under each of the transmission conditions to form a training data set for training the neural network;
- the contribution degree of the transmission condition to the neural network optimization objective indicates the degree of influence of the transmission condition on the value of the neural network optimization objective.
- the embodiment of the present application also provides a communication device, which may be a terminal or a network-side device; the communication device includes a processor and a communication interface, wherein the processor is used to perform wireless transmission calculations based on a neural network model to realize the wireless transmission; the neural network model is obtained by training in advance with a training data set, and the training data set is obtained based on the training data set acquisition methods described in the above embodiments.
- this communication device embodiment corresponds to the above wireless transmission method embodiment, and each implementation process and implementation mode of the above method embodiment can be applied to this communication device embodiment and achieve the same technical effects.
- FIG. 10 is a schematic diagram of a hardware structure of a terminal implementing an embodiment of the present application.
- the terminal 1000 includes, but is not limited to, at least some of the following components: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
- the terminal 1000 can also include a power supply (such as a battery) for supplying power to the various components, and the power supply can be logically connected to the processor 1010 through a power management system, so as to manage charging, discharging, power consumption and other functions through the power management system.
- the terminal structure shown in FIG. 10 does not constitute a limitation on the terminal, and the terminal may include more or fewer components than shown in the figure, or combine certain components, or arrange different components, which will not be repeated here.
- the input unit 1004 may include a graphics processing unit (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of still pictures or videos obtained by an image capture device (such as a camera).
- the display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
- the user input unit 1007 includes a touch panel 10071 and other input devices 10072 .
- the touch panel 10071 is also called a touch screen.
- the touch panel 10071 may include two parts, a touch detection device and a touch controller.
- Other input devices 10072 may include, but are not limited to, physical keyboards, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, and joysticks, which will not be repeated here.
- the radio frequency unit 1001 receives downlink data from the network-side device and forwards it to the processor 1010 for processing; in addition, it sends uplink data to the network-side device.
- the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
- the memory 1009 can be used to store software programs or instructions as well as various data.
- the memory 1009 may mainly include a program or instruction storage area and a data storage area, wherein the program or instruction storage area may store an operating system, at least one application program or instruction required by a function (such as a sound playback function, an image playback function, etc.) and the like.
- the memory 1009 may include a high-speed random access memory, and may also include a non-volatile memory, wherein the non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory.
- the non-volatile memory may be, for example, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
- volatile memory can be random access memory (Random Access Memory, RAM), static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (Synchlink DRAM, SLDRAM) or direct Rambus random access memory (Direct Rambus RAM, DRRAM).
- the processor 1010 may include one or more processing units; optionally, the processor 1010 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interface, application programs or instructions, etc., and the modem processor, such as a baseband processor, mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
- the processor 1010 is configured to determine, based on the contribution of each transmission condition to the neural network optimization objective, the data volume of the training data under each of the transmission conditions, and, based on the data volume of the training data under each of the transmission conditions, to obtain the training data under each of the transmission conditions to form a training data set for training the neural network; wherein the contribution of a transmission condition to the neural network optimization objective represents the degree of influence of that transmission condition on the value of the neural network optimization objective.
- data under various transmission conditions are selected in different proportions according to the contribution of data with different transmission conditions to the neural network optimization objective (or objective function, or loss function), and the selected data are used to construct a mixed training data set, which can effectively improve the generalization ability of the neural network.
- the processor 1010 is further configured to sort the contributions of the transmission conditions and, on the basis of equal-proportion mixing, perform at least one of the operations of reducing the data volume of training data under the transmission conditions corresponding to larger contributions in the sorting and increasing the data volume of training data under the transmission conditions corresponding to smaller contributions in the sorting.
- the processor 1010 is further configured to collect and label the data under each of the transmission conditions based on the data volume of the training data under each of the transmission conditions, to form the training data set under each of the transmission conditions; or to collect a set quantity of data under each of the transmission conditions and, based on the data volume of the training data under each of the transmission conditions, select and label part of the set quantity of data, or supplement and label the set quantity of data, to form the training data set under each of the transmission conditions.
- the processor 1010 is further configured to perform, according to the following rules, at least one of the operations of reducing the data volume of the training data under the transmission conditions corresponding to larger contributions in the sorting and increasing the data volume of the training data under the transmission conditions corresponding to smaller contributions in the sorting.
- the rules include: the larger the value of the larger contribution, the greater the magnitude of the reduction; the smaller the value of the smaller contribution, the greater the magnitude of the increase.
- the embodiment of the present application increases or decreases the data volume of the training data corresponding to a transmission condition in proportion to its contribution value, so that as the contribution of a transmission condition increases, the data volume of its training data gradually decreases. This better balances the influence of each transmission condition on the final neural network and is more conducive to improving the generalization ability of the neural network.
- the processor 1010 is further configured to determine a reference contribution according to the sorting and compare the contribution of a transmission condition with the reference contribution; if the contribution of the transmission condition is greater than the reference contribution, the contribution of the transmission condition is determined to be the larger contribution and the data volume of training data under the transmission condition is reduced; otherwise, the contribution of the transmission condition is determined to be the smaller contribution and the data volume of training data under the transmission condition is increased.
- by determining an intermediate comparison reference for the contributions, it is only necessary to compare the other contributions with this reference and then decide, according to the comparison result, whether to increase or decrease the data volume corresponding to each transmission condition; the algorithm is simple and the calculation amount is small.
- the processor 1010 is further configured to determine, based on the probability density of each of the transmission conditions in actual application, a weighting coefficient corresponding to each of the transmission conditions, and to determine the data volume of the training data under each of the transmission conditions based on the contribution of each of the transmission conditions to the optimization objective in combination with the weighting coefficients.
- the weighting item is designed according to the actual probability density of the transmission condition, which can better adapt to the actual environment.
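- One possible form of such a weighting, sketched below under assumed inputs: contribution-based data volumes are rescaled by coefficients that grow with each condition's real-world probability density, then renormalized to the original data budget; the linear weighting function is an illustrative choice, not the embodiment's prescribed design.

```python
def weighted_volumes(base_volumes: list, densities: list) -> list:
    """Rescale contribution-based data volumes by probability-density weights."""
    weights = [1.0 + d for d in densities]   # coefficient increases with density
    scaled = [v * w for v, w in zip(base_volumes, weights)]
    budget = sum(base_volumes)
    s = sum(scaled)
    return [int(budget * x / s) for x in scaled]  # keep the total data budget

# Conditions that occur more often in the field receive relatively more data.
print(weighted_volumes([4000, 3000, 3000], densities=[0.6, 0.3, 0.1]))
```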
- the radio frequency unit 1001 is configured to send the training data set to a target device, and the target device is used to train the neural network based on the training data set.
- the radio frequency unit 1001 is configured to directly send the training data set to the target device, or send the training data set after setting transformation to the target device;
- the processor 1010 is further configured to perform setting transformation on the training data set, the setting transformation including at least one of specific quantization, specific compression, and neural network processing according to pre-agreement or configuration.
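- As one example of such a "setting transformation", the sketch below applies uniform quantization with a pre-agreed bit width before the data set is sent; the bit width and value range are assumed to be configured in advance at both ends, and this is only one of the transformations the text allows.

```python
import numpy as np

def quantize(x: np.ndarray, bits: int = 8, lo: float = -1.0, hi: float = 1.0):
    """Uniformly quantize values in [lo, hi] to 2**bits levels (indices sent)."""
    levels = 2 ** bits - 1
    q = np.round((np.clip(x, lo, hi) - lo) / (hi - lo) * levels)
    return q.astype(np.uint16)

def dequantize(q: np.ndarray, bits: int = 8, lo: float = -1.0, hi: float = 1.0):
    """Recover approximate values from the transmitted quantization indices."""
    levels = 2 ** bits - 1
    return q.astype(np.float64) / levels * (hi - lo) + lo
```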
- the processor 1010 is also configured to perform wireless transmission operations based on a neural network model to realize the wireless transmission; wherein the neural network model is obtained by training in advance with a training data set, and the training data set is obtained based on the training data set acquisition methods described in the above embodiments.
- data under various transmission conditions are selected in different proportions to construct a non-uniformly mixed training data set, and a common neural network for wireless transmission under different actual transmission conditions is trained on this mixed training data set, enabling the trained neural network to achieve higher performance under each transmission condition.
- the processor 1010 is further configured to use any of the following training methods to train and obtain the neural network model based on the training data set, and the following training methods include:
- the radio frequency unit 1001 is further configured to share the proportion of training data under each transmission condition among the subjects performing the distributed training during the distributed training process.
- each execution subject can thus take part in training the neural network model without sharing its own data, which addresses the insufficient computing power or training capacity of a single device, the inability to share data between devices (owing to privacy concerns), and the cost of transferring large amounts of data.
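- A minimal sketch of one way such distributed training can proceed (federated averaging), assuming each device reports its locally trained parameters as a flat list of floats; equal device weighting and this parameter layout are simplifying assumptions, not the embodiment's prescribed procedure.

```python
def federated_average(device_params: list) -> list:
    """Average the model parameters reported by all participating devices."""
    n = len(device_params)
    return [sum(vals) / n for vals in zip(*device_params)]

# Each device trains locally on data it never shares; only parameters travel.
global_params = federated_average([
    [0.10, -0.20, 0.30],  # parameters from device 1
    [0.12, -0.18, 0.28],  # parameters from device 2
    [0.08, -0.22, 0.32],  # parameters from device 3
])
```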
- the processor 1010 is further configured such that, when the distributed training is joint distributed training by multiple network-side devices, any one of the network-side devices calculates and determines the proportion of training data under each of the transmission conditions;
- the radio frequency unit 1001 is further configured to send the proportion to other network-side devices in the plurality of network-side devices except the one network-side device through the first setting type interface signaling.
- the processor 1010 is further configured such that, when the distributed training is joint distributed training by multiple terminals, any one of the multiple terminals calculates and determines the proportion of the training data under each of the transmission conditions;
- the radio frequency unit 1001 is further configured to send the proportion, through second setting type interface signaling, to the other terminals among the plurality of terminals.
- the processor 1010 is further configured such that, when the distributed training is joint distributed training by network-side devices and terminals, any network-side device or any terminal among them calculates and determines the proportion of training data under each of the transmission conditions;
- the radio frequency unit 1001 is further configured to send the proportion, through third setting type signaling, to the other network-side devices or terminals among the network-side devices and terminals.
- the processor 1010 is further configured to acquire real-time data under the transmission conditions, and adjust the trained neural network model online based on the real-time data.
- online fine-tuning is performed on the trained neural network model to make the neural network more adaptable to the actual environment.
- the processor 1010 is further configured to acquire real-time data under each of the transmission conditions based on the proportion of training data under each of the transmission conditions; and, if the proportion of real-time data under any of the transmission conditions is higher than the proportion of training data under that transmission condition, then, in the process of online adjustment of the trained neural network model, the data among the real-time data under that transmission condition that exceeds the proportion of the training data is not input into the trained neural network model.
- when the data proportion from the training phase is used, data whose transmission condition exceeds its proportion is not input into the neural network model for fine-tuning, avoiding the unbalanced influence of the excess data volume.
- the input unit 1004 is configured to collect online data under each of the transmission conditions of at least one of the network side device and the terminal, as real-time data under each of the transmission conditions;
- the processor 1010 is further configured to use the network-side device or the terminal to adjust the trained neural network model online, based on the data of at least one of the network-side device and the terminal under each of the transmission conditions.
- the radio frequency unit 1001 is also used to obtain the proportion of training data under each of the transmission conditions from the network-side device of the training phase through any one of Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling and N22 interface signaling, or to obtain the proportion of training data under each of the transmission conditions from the terminal of the training phase through any one of RRC, PDCCH layer 1 signaling, MAC CE and SIB;
- the radio frequency unit 1001 is further configured to obtain the proportion of training data under each of the transmission conditions from the terminal of the training phase through PC5 interface signaling or sidelink interface signaling, or to obtain the proportion of training data under each of the transmission conditions from the network-side device of the training phase through any one of RRC, PDCCH layer 1 signaling, MAC CE and SIB.
- the data proportion information is obtained from the other execution subjects through the setting type interface signaling, so as to realize the sharing of the data proportion information in the online fine-tuning phase and ensure the generalization ability of the neural network.
- FIG. 11 is a schematic diagram of a hardware structure of an access network device implementing an embodiment of the present application.
- the access network device 1100 includes: an antenna 1101, a radio frequency device 1102, and a baseband device 1103.
- the antenna 1101 is connected to the radio frequency device 1102 .
- the radio frequency device 1102 receives information through the antenna 1101, and sends the received information to the baseband device 1103 for processing.
- the baseband device 1103 processes the information to be sent and sends it to the radio frequency device 1102
- the radio frequency device 1102 processes the received information and sends it out through the antenna 1101 .
- the baseband processing may be located in the baseband device 1103; the method performed by the network-side device in the above embodiments may be implemented in the baseband device 1103, which includes a processor 1104 and a memory 1105.
- the baseband device 1103 may include, for example, at least one baseband board on which a plurality of chips are arranged, used to execute the operations of the network-side device shown in the above method embodiments.
- the baseband device 1103 may also include a network interface 1106 for exchanging information with the radio frequency device 1102, such as a common public radio interface (CPRI for short).
- the access network device in this embodiment of the present application further includes an instruction or program stored in the memory 1105 and operable on the processor 1104; the processor 1104 invokes the instruction or program in the memory 1105 to execute the methods executed by the modules shown in FIG. 3 or FIG. 8, achieving the same technical effects; to avoid repetition, details are not repeated here.
- FIG. 12 is a schematic diagram of a hardware structure of a core network device implementing an embodiment of the present application.
- the core network device 1200 includes: a processor 1201, a transceiver 1202, a memory 1203, a user interface 1204, and a bus interface, wherein:
- the core network device 1200 also includes a computer program stored in the memory 1203 and operable on the processor 1201.
- when the computer program is executed by the processor 1201, the processes of the modules shown in FIG. 3 or FIG. 8 are implemented, achieving the same technical effects; to avoid repetition, details are not repeated here.
- the bus architecture may include any number of interconnected buses and bridges, linking together one or more processors represented by the processor 1201 and various memory circuits represented by the memory 1203.
- the bus architecture can also link together various other circuits, such as peripheral devices, voltage regulators and power management circuits, which are well known in the art and are therefore not further described in the embodiments of this application.
- the bus interface provides an interface.
- Transceiver 1202 may be a plurality of elements, including a transmitter and a receiver, providing a means for communicating with various other devices over transmission media.
- the user interface 1204 may also be an interface capable of externally or internally connecting required devices, and the connected devices include but are not limited to a keypad, a display, a speaker, a microphone, a joystick, and so on.
- the processor 1201 is responsible for managing the bus architecture and general processing, and the memory 1203 can store data used by the processor 1201 when performing operations.
- the embodiment of the present application also provides a readable storage medium storing a program or instruction; when the program or instruction is executed by the processor, the processes of the above training data set acquisition method embodiment or of the above wireless transmission method embodiment are realized, achieving the same technical effects; to avoid repetition, details are not repeated here.
- the processor is the processor in the terminal or the network side device described in the foregoing embodiments.
- the readable storage medium includes computer readable storage medium, such as computer read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
- the embodiment of the present application further provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, and the processor being used to run a program or instruction to implement the above training data set acquisition method or the above wireless transmission method.
- the chip mentioned in the embodiment of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Mobile Radio Communication Systems (AREA)
Claims (55)
- A training data set acquisition method, comprising: determining, based on the contribution of each transmission condition to a neural network optimization objective, the data volume of training data under each of the transmission conditions; and acquiring, based on the data volume of training data under each of the transmission conditions, the training data under each of the transmission conditions to form a training data set for training the neural network; wherein the contribution of a transmission condition to the neural network optimization objective represents the degree of influence of the transmission condition on the value of the neural network optimization objective.
- The training data set acquisition method according to claim 1, wherein determining the data volume of training data under each of the transmission conditions based on the contribution of each transmission condition to the neural network optimization objective comprises: sorting the contributions of the transmission conditions; and, on the basis of equal-proportion mixing, performing at least one of the operations of reducing the data volume of training data under the transmission conditions corresponding to larger contributions in the sorting and increasing the data volume of training data under the transmission conditions corresponding to smaller contributions in the sorting.
- The training data set acquisition method according to claim 1 or 2, wherein the types of the transmission conditions include at least one of the following: signal-to-noise ratio or signal-to-interference-plus-noise ratio; reference signal received power; signal strength; interference strength; terminal moving speed; channel parameters; distance between the terminal and the base station; cell size; carrier frequency; modulation order or modulation and coding scheme; cell type; inter-site distance; weather and environmental factors; antenna configuration information of the transmitting end or receiving end; terminal capability or type; base station capability or type.
- The training data set acquisition method according to claim 1 or 2, wherein acquiring the training data under each of the transmission conditions to form a training data set for training the neural network comprises: based on the data volume of training data under each of the transmission conditions, collecting and labeling the data under each of the transmission conditions to constitute the training data set under each of the transmission conditions; or collecting a set quantity of data under each of the transmission conditions and, based on the data volume of training data under each of the transmission conditions, selecting and labeling part of the set quantity of data, or supplementing and labeling the set quantity of data, to constitute the training data set under each of the transmission conditions.
- The training data set acquisition method according to claim 2, wherein performing at least one of the operations of reducing the data volume of training data under the transmission conditions corresponding to larger contributions in the sorting and increasing the data volume of training data under the transmission conditions corresponding to smaller contributions in the sorting comprises: performing at least one of the operations according to the following rules: the larger the value of the larger contribution, the greater the magnitude of the reduction; and the smaller the value of the smaller contribution, the greater the magnitude of the increase.
- The training data set acquisition method according to claim 5, wherein, when the result of the sorting is from small to large, the data volume of the training data under the transmission conditions decreases in the direction of the sorting, and when the result of the sorting is from large to small, the data volume of the training data under the transmission conditions increases in the direction of the sorting.
- The training data set acquisition method according to claim 2, wherein performing at least one of the operations of reducing the data volume of training data under the transmission conditions corresponding to larger contributions in the sorting and increasing the data volume of training data under the transmission conditions corresponding to smaller contributions in the sorting comprises: determining a reference contribution according to the sorting, and comparing the contribution of a transmission condition with the reference contribution; and performing, according to the comparison result, at least one of the following operations: if the contribution of the transmission condition is greater than the reference contribution, determining that the contribution of the transmission condition is the larger contribution and reducing the data volume of training data under the transmission condition; and if the contribution of the transmission condition is not greater than the reference contribution, determining that the contribution of the transmission condition is the smaller contribution and increasing the data volume of training data under the transmission condition.
- The training data set acquisition method according to claim 7, wherein the reference contribution is the median of the sorting, or the contribution at a set position in the sorting, or the average of the contributions in the sorting, or the contribution closest to the average in the sorting.
- The training data set acquisition method according to any one of claims 1, 2 and 5-8, wherein determining the data volume of training data under each of the transmission conditions based on the contribution of each transmission condition to the neural network optimization objective comprises: determining, based on the probability density of each of the transmission conditions in actual application, a weighting coefficient corresponding to each of the transmission conditions; and determining the data volume of training data under each of the transmission conditions based on the contribution of each of the transmission conditions to the optimization objective in combination with the weighting coefficients.
- The training data set acquisition method according to claim 9, wherein the weighting coefficient increases as a function of the probability density.
- The training data set acquisition method according to any one of claims 1, 2, 5-8 and 10, further comprising: sending the training data set to a target device, the target device being used to train the neural network based on the training data set.
- The training data set acquisition method according to claim 11, wherein sending the training data set to the target device comprises: sending the training data set directly to the target device, or sending the training data set to the target device after a setting transformation, the setting transformation including at least one of specific quantization, specific compression, and neural network processing according to a pre-agreement or configuration.
- A training data set acquisition apparatus, comprising: a first processing module, configured to determine, based on the contribution of each transmission condition to a neural network optimization objective, the data volume of training data under each of the transmission conditions; and a second processing module, configured to acquire, based on the data volume of training data under each of the transmission conditions, the training data under each of the transmission conditions to form a training data set for training the neural network; wherein the contribution of a transmission condition to the neural network optimization objective represents the degree of influence of the transmission condition on the value of the neural network optimization objective.
- The training data set acquisition apparatus according to claim 13, wherein the first processing module is configured to: sort the contributions of the transmission conditions; and, on the basis of equal-proportion mixing, perform at least one of the operations of reducing the data volume of training data under the transmission conditions corresponding to larger contributions in the sorting and increasing the data volume of training data under the transmission conditions corresponding to smaller contributions in the sorting.
- The training data set acquisition apparatus according to claim 13 or 14, wherein the types of the transmission conditions include at least one of the following: signal-to-noise ratio or signal-to-interference-plus-noise ratio; reference signal received power; signal strength; interference strength; terminal moving speed; channel parameters; distance between the terminal and the base station; cell size; carrier frequency; modulation order or modulation and coding scheme; cell type; inter-site distance; weather and environmental factors; antenna configuration information of the transmitting end or receiving end; terminal capability or type; base station capability or type.
- The training data set acquisition apparatus according to claim 13 or 14, wherein the second processing module is configured to: based on the data volume of training data under each of the transmission conditions, collect and label the data under each of the transmission conditions to constitute the training data set under each of the transmission conditions; or collect a set quantity of data under each of the transmission conditions and, based on the data volume of training data under each of the transmission conditions, select and label part of the set quantity of data, or supplement and label the set quantity of data, to constitute the training data set under each of the transmission conditions.
- The training data set acquisition apparatus according to claim 14, wherein, when performing at least one of the operations of reducing the data volume of training data under the transmission conditions corresponding to larger contributions in the sorting and increasing the data volume of training data under the transmission conditions corresponding to smaller contributions in the sorting, the first processing module is configured to: perform at least one of the operations according to the following rules: the larger the value of the larger contribution, the greater the magnitude of the reduction; and the smaller the value of the smaller contribution, the greater the magnitude of the increase.
- The training data set acquisition apparatus according to claim 17, wherein, when the result of the sorting is from small to large, the data volume of the training data under the transmission conditions decreases in the direction of the sorting, and when the result of the sorting is from large to small, the data volume of the training data under the transmission conditions increases in the direction of the sorting.
- The training data set acquisition apparatus according to claim 14, wherein, when performing at least one of the operations of reducing the data volume of training data under the transmission conditions corresponding to larger contributions in the sorting and increasing the data volume of training data under the transmission conditions corresponding to smaller contributions in the sorting, the first processing module is configured to: determine a reference contribution according to the sorting, and compare the contribution of a transmission condition with the reference contribution; and perform, according to the comparison result, at least one of the following operations: if the contribution of the transmission condition is greater than the reference contribution, determine that the contribution of the transmission condition is the larger contribution and reduce the data volume of training data under the transmission condition; and if the contribution of the transmission condition is not greater than the reference contribution, determine that the contribution of the transmission condition is the smaller contribution and increase the data volume of training data under the transmission condition.
- The training data set acquisition apparatus according to claim 19, wherein the reference contribution is the median of the sorting, or the contribution at a set position in the sorting, or the average of the contributions in the sorting, or the contribution closest to the average in the sorting.
- The training data set acquisition apparatus according to any one of claims 13, 14 and 17-20, wherein the first processing module is further configured to: determine, based on the probability density of each of the transmission conditions in actual application, a weighting coefficient corresponding to each of the transmission conditions; and determine the data volume of training data under each of the transmission conditions based on the contribution of each of the transmission conditions to the optimization objective in combination with the weighting coefficients.
- The training data set acquisition apparatus according to claim 21, wherein the weighting coefficient increases as a function of the probability density.
- The training data set acquisition apparatus according to any one of claims 13, 14, 17-20 and 22, further comprising: a sending module, configured to send the training data set to a target device, the target device being used to train the neural network based on the training data set.
- The training data set acquisition apparatus according to claim 23, wherein the sending module is configured to: send the training data set directly to the target device, or send the training data set to the target device after a setting transformation, the setting transformation including at least one of specific quantization, specific compression, and neural network processing according to a pre-agreement or configuration.
- A wireless transmission method, comprising: performing wireless transmission calculations based on a neural network model to realize the wireless transmission; wherein the neural network model is obtained by training in advance with a training data set, and the training data set is obtained based on the training data set acquisition method according to any one of claims 1-12.
- The wireless transmission method according to claim 25, wherein, before performing the wireless transmission calculations based on the neural network model, the wireless transmission method further comprises: training to obtain the neural network model based on the training data set using any one of the following training modes: centralized training by a single terminal; centralized training by a single network-side device; joint distributed training by multiple terminals; joint distributed training by multiple network-side devices; joint distributed training by a single network-side device and multiple terminals; joint distributed training by multiple network-side devices and multiple terminals; and joint distributed training by multiple network-side devices and a single terminal.
- The wireless transmission method according to claim 26, wherein the network-side device is an access network device, a core network device or a data network device.
- The wireless transmission method according to claim 26 or 27, further comprising: in the process of the distributed training, sharing the proportion of training data under each of the transmission conditions among the subjects performing the distributed training.
- The wireless transmission method according to claim 28, wherein, when the distributed training is joint distributed training by multiple network-side devices, any one network-side device among the multiple network-side devices calculates and determines the proportion of training data under each of the transmission conditions, and sends the proportion, through first setting type interface signaling, to the other network-side devices among the multiple network-side devices.
- The wireless transmission method according to claim 29, wherein the first setting type interface signaling includes Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling or N22 interface signaling.
- The wireless transmission method according to claim 28, wherein, when the distributed training is joint distributed training by multiple terminals, any one terminal among the multiple terminals calculates and determines the proportion of training data under each of the transmission conditions, and sends the proportion, through second setting type interface signaling, to the other terminals among the multiple terminals.
- The wireless transmission method according to claim 31, wherein the second setting type interface signaling includes PC5 interface signaling or sidelink interface signaling.
- The wireless transmission method according to claim 28, wherein, when the distributed training is joint distributed training by network-side devices and terminals, any network-side device or any terminal among the network-side devices and terminals calculates and determines the proportion of training data under each of the transmission conditions, and sends the proportion, through third setting type signaling, to the other network-side devices or terminals among the network-side devices and terminals.
- The wireless transmission method according to claim 33, wherein the third setting type signaling includes RRC, PDCCH layer 1 signaling, PDSCH, MAC CE, SIB, Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling, N22 interface signaling, PUCCH layer 1 signaling, PUSCH, MSG1 of PRACH, MSG3 of PRACH, MSG A of PRACH, PC5 interface signaling or sidelink interface signaling.
- The wireless transmission method according to any one of claims 26, 27 and 29-34, further comprising: acquiring real-time data under the transmission conditions, and adjusting the trained neural network model online based on the real-time data.
- The wireless transmission method according to claim 35, wherein acquiring the real-time data under the transmission conditions and adjusting the trained neural network model online based on the real-time data comprises: acquiring real-time data under each of the transmission conditions based on the proportion of training data under each of the transmission conditions; and, if the proportion of real-time data under any one of the transmission conditions is higher than the proportion of training data under that transmission condition, in the process of adjusting the trained neural network model online, not inputting into the trained neural network model the data among the real-time data under that transmission condition that exceeds the proportion of training data under that transmission condition.
- The wireless transmission method according to claim 36, wherein acquiring the real-time data under each of the transmission conditions comprises: collecting online the data under each of the transmission conditions of at least one of a network-side device and a terminal, as the real-time data under each of the transmission conditions; and adjusting the trained neural network model online comprises: based on the data under each of the transmission conditions of at least one of the network-side device and the terminal, adjusting the trained neural network model online by means of the network-side device or the terminal.
- The wireless transmission method according to claim 37, wherein, when the network-side device or the terminal has not acquired the proportion of training data under each of the transmission conditions in the training phase, before acquiring the real-time data under each of the transmission conditions based on the proportion of training data under each of the transmission conditions, the wireless transmission method further comprises: acquiring, by the network-side device, the proportion of training data under each of the transmission conditions from the network-side device of the training phase through any one of Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling and N22 interface signaling; or acquiring, by the network-side device, the proportion of training data under each of the transmission conditions from the terminal of the training phase through any one of PUCCH layer 1 signaling, PUSCH, MSG1 of PRACH, MSG3 of PRACH and MSG A of PRACH; or acquiring, by the terminal, the proportion of training data under each of the transmission conditions from the terminal of the training phase through PC5 interface signaling or sidelink interface signaling; or acquiring, by the terminal, the proportion of training data under each of the transmission conditions from the network-side device of the training phase through any one of RRC, PDCCH layer 1 signaling, PUSCH, MAC CE and SIB.
- A wireless transmission apparatus, comprising: a third processing module, configured to perform wireless transmission calculations based on a neural network model to realize the wireless transmission; wherein the neural network model is obtained by training in advance with a training data set, and the training data set is obtained based on the training data set acquisition method according to any one of claims 1-12.
- The wireless transmission apparatus according to claim 39, further comprising: a training module, configured to train to obtain the neural network model based on the training data set using any one of the following training modes: centralized training by a single terminal; centralized training by a single network-side device; joint distributed training by multiple terminals; joint distributed training by multiple network-side devices; joint distributed training by a single network-side device and multiple terminals; joint distributed training by multiple network-side devices and multiple terminals; and joint distributed training by multiple network-side devices and a single terminal.
- The wireless transmission apparatus according to claim 40, wherein the network-side device is an access network device, a core network device or a data network device.
- The wireless transmission apparatus according to claim 40 or 41, further comprising: a fourth processing module, configured to share, in the process of the distributed training, the proportion of training data under each of the transmission conditions among the subjects performing the distributed training.
- The wireless transmission apparatus according to claim 42, wherein, when the distributed training is joint distributed training by multiple network-side devices, the fourth processing module is configured such that any one network-side device among the multiple network-side devices calculates and determines the proportion of training data under each of the transmission conditions and sends the proportion, through first setting type interface signaling, to the other network-side devices among the multiple network-side devices.
- The wireless transmission apparatus according to claim 43, wherein the first setting type interface signaling includes Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling or N22 interface signaling.
- The wireless transmission apparatus according to claim 42, wherein, when the distributed training is joint distributed training by multiple terminals, the fourth processing module is configured such that any one terminal among the multiple terminals calculates and determines the proportion of training data under each of the transmission conditions and sends the proportion, through second setting type interface signaling, to the other terminals among the multiple terminals.
- The wireless transmission apparatus according to claim 45, wherein the second setting type interface signaling includes PC5 interface signaling or sidelink interface signaling.
- The wireless transmission apparatus according to claim 42, wherein, when the distributed training is joint distributed training by network-side devices and terminals, the fourth processing module is configured such that any network-side device or any terminal among the network-side devices and terminals calculates and determines the proportion of training data under each of the transmission conditions and sends the proportion, through third setting type signaling, to the other network-side devices or terminals among the network-side devices and terminals.
- The wireless transmission apparatus according to claim 47, wherein the third setting type signaling includes RRC, PDCCH layer 1 signaling, PDSCH, MAC CE, SIB, Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling, N22 interface signaling, PUCCH layer 1 signaling, PUSCH, MSG1 of PRACH, MSG3 of PRACH, MSG A of PRACH, PC5 interface signaling or sidelink interface signaling.
- The wireless transmission apparatus according to any one of claims 40, 41 and 43-48, further comprising: a fine-tuning module, configured to acquire real-time data under the transmission conditions and adjust the trained neural network model online based on the real-time data.
- The wireless transmission apparatus according to claim 49, wherein the fine-tuning module is configured to: acquire real-time data under each of the transmission conditions based on the proportion of training data under each of the transmission conditions; and, if the proportion of real-time data under any one of the transmission conditions is higher than the proportion of training data under that transmission condition, in the process of adjusting the trained neural network model online, not input into the trained neural network model the data among the real-time data under that transmission condition that exceeds the proportion of training data under that transmission condition.
- The wireless transmission apparatus according to claim 50, wherein, when acquiring the real-time data under each of the transmission conditions, the fine-tuning module is configured to: collect online the data under each of the transmission conditions of at least one of a network-side device and a terminal, as the real-time data under each of the transmission conditions; and, when adjusting the trained neural network model online, the fine-tuning module is configured to: based on the data under each of the transmission conditions of at least one of the network-side device and the terminal, adjust the trained neural network model online by means of the network-side device or the terminal.
- The wireless transmission apparatus according to claim 51, further comprising: a communication module, configured such that, when the network-side device or the terminal has not acquired the proportion of training data under each of the transmission conditions in the training phase, the network-side device acquires the proportion of training data under each of the transmission conditions from the network-side device of the training phase through any one of Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling and N22 interface signaling; or the network-side device acquires the proportion of training data under each of the transmission conditions from the terminal of the training phase through any one of PUCCH layer 1 signaling, PUSCH, MSG1 of PRACH, MSG3 of PRACH and MSG A of PRACH; or the terminal acquires the proportion of training data under each of the transmission conditions from the terminal of the training phase through PC5 interface signaling or sidelink interface signaling; or the terminal acquires the proportion of training data under each of the transmission conditions from the network-side device of the training phase through any one of RRC, PDCCH layer 1 signaling, PUSCH, MAC CE and SIB.
- A communication device, comprising a processor, a memory, and a program or instruction stored in the memory and operable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the training data set acquisition method according to any one of claims 1 to 12, or implements the steps of the wireless transmission method according to any one of claims 25-38.
- A readable storage medium, storing a program or instruction which, when executed by a processor, implements the training data set acquisition method according to any one of claims 1-12, or implements the steps of the wireless transmission method according to any one of claims 25 to 38.
- A chip, comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to run a program or instruction to implement the training data set acquisition method according to any one of claims 1-12, or to implement the steps of the wireless transmission method according to any one of claims 25 to 38.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22806785.6A EP4339842A1 (en) | 2021-05-11 | 2022-05-11 | Training data set acquisition method, wireless transmission method, apparatus, and communication device |
JP2023569669A JP2024518483A (ja) | 2021-05-11 | 2022-05-11 | トレーニングデータセット取得方法、無線伝送方法、装置及び通信機器 |
US18/388,635 US20240078439A1 (en) | 2021-05-11 | 2023-11-10 | Training Data Set Obtaining Method, Wireless Transmission Method, and Communications Device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110513732.0A CN115329954A (zh) | 2021-05-11 | 2021-05-11 | 训练数据集获取方法、无线传输方法、装置及通信设备 |
CN202110513732.0 | 2021-05-11 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/388,635 Continuation US20240078439A1 (en) | 2021-05-11 | 2023-11-10 | Training Data Set Obtaining Method, Wireless Transmission Method, and Communications Device |
Publications (1)
Publication Number | Publication Date |
---|---|
- WO2022237822A1 (zh) | 2022-11-17
Family
ID=83912888
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/092144 WO2022237822A1 (zh) | 2021-05-11 | 2022-05-11 | 训练数据集获取方法、无线传输方法、装置及通信设备 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240078439A1 (zh) |
EP (1) | EP4339842A1 (zh) |
JP (1) | JP2024518483A (zh) |
CN (1) | CN115329954A (zh) |
WO (1) | WO2022237822A1 (zh) |
- 2021
  - 2021-05-11: CN application CN202110513732.0A, published as CN115329954A (zh), active, Pending
- 2022
  - 2022-05-11: WO application PCT/CN2022/092144, published as WO2022237822A1 (zh), active, Application Filing
  - 2022-05-11: EP application EP22806785.6A, published as EP4339842A1 (en), active, Pending
  - 2022-05-11: JP application JP2023569669A, published as JP2024518483A (ja), active, Pending
- 2023
  - 2023-11-10: US application US18/388,635, published as US20240078439A1 (en), active, Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102625322A (zh) * | 2012-02-27 | 2012-08-01 | 北京邮电大学 | Implementation method for multi-standard intelligently configurable wireless network optimization |
CN109472345A (zh) * | 2018-09-28 | 2019-03-15 | 深圳百诺名医汇网络技术有限公司 | Weight updating method and apparatus, computer device, and storage medium |
CN109359166A (zh) * | 2018-10-10 | 2019-02-19 | 广东国地规划科技股份有限公司 | Method for synchronously calculating spatial-growth dynamic simulation and driving-force factor contribution degree |
US20210049473A1 (en) * | 2019-08-14 | 2021-02-18 | The Board Of Trustees Of The Leland Stanford Junior University | Systems and Methods for Robust Federated Training of Neural Networks |
CN111652381A (zh) * | 2020-06-04 | 2020-09-11 | 深圳前海微众银行股份有限公司 | Data set contribution degree evaluation method, apparatus, device, and readable storage medium |
CN112329813A (zh) * | 2020-09-29 | 2021-02-05 | 中南大学 | Feature extraction method and system for energy consumption prediction |
Also Published As
Publication number | Publication date |
---|---|
JP2024518483A (ja) | 2024-05-01 |
EP4339842A1 (en) | 2024-03-20 |
CN115329954A (zh) | 2022-11-11 |
US20240078439A1 (en) | 2024-03-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022078276A1 (zh) | | AI network parameter configuration method and device |
US20220247472A1 (en) | | Coding method, decoding method, and device |
US20220247469A1 (en) | | Method and device for transmitting channel state information |
WO2021031812A1 (zh) | | Antenna panel state indication method and apparatus |
CN114765879A (zh) | | PUSCH transmission method, apparatus, device, and storage medium |
US20240073882A1 (en) | | Beam Control Method and Apparatus for Intelligent Surface Device and Electronic Device |
US20230244911A1 (en) | | Neural network information transmission method and apparatus, communication device, and storage medium |
US20240088970A1 (en) | | Method and apparatus for feeding back channel information of delay-doppler domain, and electronic device |
WO2023066288A1 (zh) | | Model request method, model request processing method, and related device |
WO2022237822A1 (zh) | | Training data set acquisition method, wireless transmission method, apparatus, and communication device |
WO2022083619A1 (zh) | | Communication information sending and receiving method and communication device |
US20240056989A1 (en) | | Precoding and power allocation for access points in a cell-free communication system |
WO2023169544A1 (zh) | | Quality information determination method, apparatus, terminal, and storage medium |
US20240224082A1 (en) | | Parameter selection method, parameter configuration method, terminal, and network side device |
WO2024041420A1 (zh) | | Measurement feedback processing method, apparatus, terminal, and network-side device |
WO2023088387A1 (zh) | | Channel prediction method, apparatus, UE, and system |
WO2023040886A1 (zh) | | Data collection method and apparatus |
WO2024032694A1 (zh) | | CSI prediction processing method, apparatus, communication device, and readable storage medium |
WO2024078405A1 (zh) | | Transmission method, apparatus, communication device, and readable storage medium |
WO2023207898A1 (zh) | | PMI feedback method for multi-TRP transmission, device, terminal, and network-side device |
WO2024017239A1 (zh) | | Data collection method and apparatus, and communication device |
WO2023174325A1 (zh) | | AI model processing method and device |
WO2024012285A1 (zh) | | Reference signal measurement method, apparatus, terminal, network-side device, and medium |
WO2023179540A1 (zh) | | Channel prediction method and apparatus, and wireless communication device |
WO2024093713A1 (zh) | | Resource configuration method, apparatus, communication device, and readable storage medium |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22806785; Country of ref document: EP; Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 2023569669; Country of ref document: JP |
WWE | Wipo information: entry into national phase | Ref document number: 2022806785; Country of ref document: EP |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 2022806785; Country of ref document: EP; Effective date: 20231211 |