WO2022237822A1 - Training data set acquisition method, wireless transmission method, apparatus and communication device

Training data set acquisition method, wireless transmission method, apparatus and communication device

Info

Publication number
WO2022237822A1
Authority
WO
WIPO (PCT)
Prior art keywords
interface signaling
training data
training
transmission
data under
Prior art date
Application number
PCT/CN2022/092144
Other languages
English (en)
French (fr)
Inventor
孙布勒
姜大洁
杨昂
纪子超
Original Assignee
维沃移动通信有限公司
Priority date
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Priority to EP22806785.6A (published as EP4339842A1)
Priority to JP2023569669A (published as JP2024518483A)
Publication of WO2022237822A1
Priority to US18/388,635 (published as US20240078439A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/098: Distributed learning, e.g. federated learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W24/00: Supervisory, monitoring or testing arrangements
    • H04W24/02: Arrangements for optimising operational condition

Definitions

  • the present application belongs to the technical field of communication, and in particular relates to a training data set acquisition method, a wireless transmission method, a device and a communication device.
  • Generalization means that the neural network can also obtain reasonable output for data not encountered in the training (learning) process.
  • A common neural network can be trained on mixed data, so that its parameters do not need to be switched as the environment changes; however, such a neural network cannot achieve optimal performance under every transmission condition.
  • Embodiments of the present application provide a training data set acquisition method, a wireless transmission method, an apparatus, and communication equipment, which can solve problems such as the insufficient generalization ability of neural networks in existing wireless transmission.
  • In a first aspect, a method for obtaining a training data set is provided, the method comprising:
  • the contribution degree of the transmission condition to the neural network optimization objective indicates the degree of influence of the transmission condition on the value of the neural network optimization objective.
  • a training data set acquisition device including:
  • the first processing module is used to determine the amount of training data under each transmission condition based on the contribution of each transmission condition to the neural network optimization goal;
  • the second processing module is used to obtain the training data under each of the transmission conditions based on the data amount of the training data under each of the transmission conditions, so as to form a training data set for training the neural network;
  • the contribution degree of the transmission condition to the neural network optimization objective indicates the degree of influence of the transmission condition on the value of the neural network optimization objective.
  • a wireless transmission method includes:
  • the neural network model is obtained by using a training data set for training in advance, and the training data set is obtained based on the method for obtaining a training data set as described in the first aspect.
  • a wireless transmission device including:
  • the third processing module is used to perform wireless transmission calculation based on the neural network model to realize the wireless transmission;
  • the neural network model is obtained by using a training data set for training in advance, and the training data set is obtained based on the method for obtaining a training data set as described in the first aspect.
  • A communication device is provided, which includes a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method described in the first aspect or the steps of the method described in the third aspect.
  • A communication device is provided, including a processor and a communication interface, where the processor is configured to determine the amount of training data under each transmission condition based on the contribution of each transmission condition to the neural network optimization goal, and to obtain, based on the amount of training data under each of the transmission conditions, the training data under each transmission condition so as to form a training data set for training the neural network; here the contribution of a transmission condition to the neural network optimization objective indicates the degree of influence of that condition on the value of the optimization objective.
  • A communication device is provided, including a processor and a communication interface, where the processor is configured to perform wireless transmission operations based on a neural network model to realize the wireless transmission; the neural network model is obtained in advance by training with a training data set, and the training data set is obtained based on the method for obtaining a training data set described in the first aspect.
  • A readable storage medium is provided, on which programs or instructions are stored; when the programs or instructions are executed by a processor, the steps of the method described in the first aspect or the steps of the method described in the third aspect are realized.
  • A ninth aspect provides a chip; the chip includes a processor and a communication interface coupled to the processor, and the processor is configured to run programs or instructions to implement the method described in the first aspect or the method described in the third aspect.
  • A computer program/program product is provided, stored in a non-transitory storage medium, and the program/program product is executed by at least one processor to implement the method described in the first aspect or the method described in the third aspect.
  • By selecting data under a variety of transmission conditions and constructing a mixed training data set, the generalization ability of the neural network can be effectively improved.
  • FIG. 1 is a structural diagram of a wireless communication system applicable to an embodiment of the present application
  • Fig. 2 is a schematic flow chart of the training data set acquisition method provided by the embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a training data set acquisition device provided in an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a wireless transmission method provided in an embodiment of the present application.
  • FIG. 5 is a schematic flow diagram of constructing a neural network model in a wireless transmission method provided according to an embodiment of the present application
  • FIG. 6 is a schematic flow diagram of determining the proportion of training data in the training data set acquisition method provided according to an embodiment of the present application
  • FIG. 7 is a schematic structural diagram of a neural network used for DMRS channel estimation in a wireless transmission method provided according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a wireless transmission device provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a communication device provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a hardware structure of a terminal implementing an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a hardware structure of an access network device implementing an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a hardware structure of a core network device implementing an embodiment of the present application.
  • The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects, not to describe a specific order or sequence. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in orders other than those illustrated or described herein. Moreover, the objects distinguished by "first" and "second" are usually of one category, and the number of objects is not limited; for example, there may be one or more first objects.
  • The term "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the related objects.
  • LTE: Long Term Evolution
  • LTE-A: Long Term Evolution-Advanced
  • CDMA: Code Division Multiple Access
  • TDMA: Time Division Multiple Access
  • FDMA: Frequency Division Multiple Access
  • OFDMA: Orthogonal Frequency Division Multiple Access
  • SC-FDMA: Single-Carrier Frequency-Division Multiple Access
  • The terms "system" and "network" in the embodiments of the present application are often used interchangeably, and the described technology can be used both for the systems and radio technologies mentioned above and for other systems and radio technologies.
  • NR: New Radio
  • The following description describes the New Radio (NR) system for illustrative purposes and uses NR terminology in most of what follows, but these techniques can also be applied beyond NR, for example to the 6th Generation (6G) communication system.
  • 6G: 6th Generation
  • FIG. 1 shows a structural diagram of a wireless communication system to which this embodiment of the present application is applicable.
  • the wireless communication system includes a terminal 101 and a network side device 102 .
  • The terminal 101 may also be called a terminal device or user equipment (User Equipment, UE). The terminal 101 may be a terminal-side device such as a mobile phone, a tablet computer (Tablet Personal Computer), a laptop or notebook computer, a personal digital assistant (Personal Digital Assistant, PDA), a handheld computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (Mobile Internet Device, MID), a wearable device (Wearable Device), a vehicle-mounted device (VUE), or a pedestrian terminal (PUE); wearable devices include smart watches, bracelets, earphones, glasses, and the like.
  • the network side device 102 may be an access network device 1021 or a core network device 1022 or a data network (data network, DN) device 1023.
  • The access network device 1021 may also be called a radio access network device or a radio access network (Radio Access Network, RAN), and may be a base station or a node responsible for neural network training on the RAN side. The base station may be called a Node B, an evolved Node B (eNB), an access point, a base transceiver station (Base Transceiver Station, BTS), a radio base station, a radio transceiver, a basic service set (Basic Service Set, BSS), an extended service set (Extended Service Set, ESS), a home Node B, a home evolved Node B, a WLAN access point, a WiFi node, a transmitting receiving point (Transmitting Receiving Point, TRP), or some other suitable term in the field.
  • TRP: Transmitting Receiving Point
  • The core network device 1022 may also be called the core network (Core Network, CN) or the 5G core (5G Core, 5GC) network. The core network device 1022 may include, but is not limited to, at least one of the following: a core network node, a core network function, a mobility management entity (Mobility Management Entity, MME), an access management function (Access Management Function, AMF), a session management function (Session Management Function, SMF), a user plane function (User Plane Function, UPF), a policy control function (Policy Control Function, PCF), a policy and charging rules function (Policy and Charging Rules Function, PCRF), an edge application server discovery function (Edge Application Server Discovery Function, EASDF), an application function (Application Function, AF), and the like.
  • MME: Mobility Management Entity
  • AMF: Access Management Function
  • SMF: Session Management Function
  • UPF: User Plane Function
  • PCF: Policy Control Function
  • PCRF: Policy and Charging Rules Function
  • EASDF: Edge Application Server Discovery Function
  • AF: Application Function
  • The data network device 1023 may include, but is not limited to, at least one of the following: a network data analytics function (Network Data Analytics Function, NWDAF), unified data management (Unified Data Management, UDM), a unified data repository (Unified Data Repository, UDR), and an unstructured data storage function (Unstructured Data Storage Function, UDSF). It should be noted that the embodiment of the present application takes the data network equipment in the 5G system only as an example, and is not limited thereto.
  • NWDAF: Network Data Analytics Function
  • UDM: Unified Data Management
  • UDR: Unified Data Repository
  • UDSF: Unstructured Data Storage Function
  • FIG. 2 is a schematic flow diagram of a method for obtaining a training data set provided by an embodiment of the present application.
  • the method can be executed by a terminal and/or a network-side device.
  • The terminal may be the terminal 101 shown in FIG. 1, and the network-side device may specifically be the network-side device 102 shown in FIG. 1.
  • the method includes:
  • Step 201: based on the contribution of each transmission condition to the neural network optimization goal, determine the amount of training data under each transmission condition.
  • the contribution degree of the transmission condition to the neural network optimization objective indicates the degree of influence of the transmission condition on the value of the neural network optimization objective.
  • The degree of influence of each transmission condition on the value of the neural network optimization target can be tested in advance, that is, it can be measured how strongly the value of each transmission condition affects the optimization target. Generally, the higher the degree of influence, the larger the contribution value corresponding to the transmission condition, and vice versa.
  • If the optimization target is expressed as a function of the transmission condition: when the function is increasing in the transmission condition (for example, throughput is an increasing function of SNR), the transmission conditions under which the optimization target takes large values have a high contribution; when the function is decreasing in the transmission condition (for example, NMSE is a decreasing function of SNR), the transmission conditions under which the optimization target takes small values have a high contribution.
  • The contribution of a transmission condition to the optimization objective may be determined based on the attainable optimal value of the optimization objective under that condition. That is to say, the contribution of a given transmission condition can be measured by the size of the optimal value that the optimization objective can attain under it; the greater the impact of the given transmission condition on the optimization objective, the greater the corresponding contribution.
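  • As an illustration only (not part of the patent text), the sketch below derives a contribution score from the attainable optimum of the objective under each condition; the toy NMSE-versus-SNR model and the sign convention are assumptions.

```python
import numpy as np

def contribution_from_optimum(best_value, minimise=True):
    """Turn the attainable optimum of the objective under a condition into a
    contribution score: for a minimised objective (e.g. NMSE), a lower
    attainable optimum means a higher contribution, so the sign is flipped."""
    best_value = np.asarray(best_value, dtype=float)
    return -best_value if minimise else best_value

snr_db = np.array([0, 5, 10, 15, 20])
best_nmse = 10.0 ** (-snr_db / 10)               # toy model: NMSE falls as SNR rises
contrib = contribution_from_optimum(best_nmse)   # higher SNR -> higher contribution
```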
  • The embodiment of the present application can adjust the amount of training data corresponding to each transmission condition based on the size of that condition's contribution value; that is, the amount of training data under each transmission condition is associated with the contribution of that condition to the optimization objective. This amount is the reference (or set) amount of training data that needs to be prepared for each transmission condition; when actually building the training data set, the training data for each condition is prepared with reference to it.
  • The amount of data for transmission conditions with a large contribution can be reduced to weaken their influence, while the amount of data for transmission conditions with a small contribution can be increased to strengthen theirs; that is, the impact of each transmission condition on the optimization goal is balanced through the amount of data.
  • the data volume may refer to the absolute value of the data volume or the relative proportion of the data volume.
  • the transmission condition is a parameter of a transmission medium, a transmission signal, a transmission environment, etc. involved in an actual wireless transmission environment.
  • the type of the transmission condition includes at least one of the following:
  • Signal-to-noise ratio or signal-to-interference-plus-noise ratio
  • RSRP: Reference Signal Receiving Power
  • Distance between the terminal and the base station (station distance)
  • Antenna configuration information at the transmitting end or receiving end
  • the types of transmission conditions involved in this embodiment of the application may include, but are not limited to, one or a combination of multiple types of transmission conditions listed above.
  • The interference intensity can indicate the intensity of co-channel interference between cells, or the magnitude of other interference. Channel parameters include, for example, the number of paths (or LOS/NLOS scenario), delay (or maximum delay), Doppler (or maximum Doppler), angle-of-arrival range (horizontal and vertical), angle-of-departure range (horizontal and vertical), or channel correlation coefficient. Cell types include, for example, indoor cells, outdoor cells, macro cells, micro cells, or pico cells. Station distance can be divided, for example, into within 200 meters, 200-500 meters, or more than 500 meters. Weather and environmental factors include, for example, the temperature and/or humidity of the network environment where the training data is collected. Antenna configuration information of the transmitting or receiving end may be, for example, the number of antennas and/or the antenna polarization. UE capability/type may be, for example, Redcap UE and/or normal UE.
  • Step 202: based on the amount of training data under each of the transmission conditions, acquire the training data under each of the transmission conditions to form a training data set for training the neural network.
  • That is, using the reference data amount corresponding to each transmission condition as a reference or setting, the training data under each transmission condition is acquired so that the amount finally obtained under each condition conforms to that reference or setting. The training data obtained under the various transmission conditions are then non-uniformly mixed into one data set, the training data set, which can be used to train the above neural network (or neural network model) for the wireless transmission environment.
  • Acquiring the training data under each of the transmission conditions to form a training data set for training the neural network includes: collecting data under each transmission condition based on the required amount of training data under that condition and calibrating it, to form the training data set under each transmission condition; or collecting a set amount of data under each transmission condition and, based on the required amount of training data under each condition, selecting part of the data from the set amount and calibrating it, or supplementing the set amount of data and calibrating it, to form the training data set under each transmission condition.
  • In other words, the amount of data required for each transmission condition can be calculated first and the data for each condition then obtained according to that amount; alternatively, a large amount of data can first be obtained for each transmission condition (the set amount of data), the required amount then calculated, and the data selected from or supplemented to the pre-acquired set.
  • For example, suppose the total amount of data acquired in advance under the k-th transmission condition is M_k and the required data amount determined by calculation is N_k. If M_k ≥ N_k, N_k samples are randomly selected from the M_k samples and put into the training data set; if M_k < N_k, the missing N_k - M_k samples are supplemented, and the resulting N_k samples are put into the training data set. Here the required data amount is the amount of training data under each transmission condition determined in the previous step.
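  • A minimal sketch of this select-or-supplement step; random duplication stands in for actually collecting the missing N_k - M_k samples.

```python
import random

def fill_condition_quota(collected, required_n, seed=0):
    """Return exactly required_n samples for one transmission condition:
    if M_k >= N_k, draw N_k samples at random from the M_k collected ones;
    otherwise keep all M_k and pad with random duplicates as a stand-in for
    collecting the missing N_k - M_k samples."""
    rng = random.Random(seed)
    if len(collected) >= required_n:
        return rng.sample(collected, required_n)
    padding = rng.choices(collected, k=required_n - len(collected))
    return list(collected) + padding

print(len(fill_condition_quota(list(range(50)), 80)))  # -> 80
```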
  • For example, for DMRS data collected under a transmission condition, the label added to a DMRS signal is the true value of the channel corresponding to that DMRS signal.
  • The embodiments of the present application can be applied in any scenario where machine learning can replace the functions of one or more modules in an existing wireless transmission network; that is, whenever machine learning is used to train a neural network, the training data set acquisition method of the embodiments of the application can be used to construct the training data set.
  • Application scenarios include, for example, pilot design, channel estimation, signal detection, user pairing, HARQ, and positioning at the physical layer; resource allocation, handover, and mobility management at the higher layers; and scheduling or slicing at the network layer. The specific wireless transmission application scenario is not limited.
  • In this way, data under various transmission conditions are selected in different proportions according to the contribution of data with different transmission conditions to the neural network optimization goal (or objective function, or loss function), and a mixed training data set is constructed from them, which can effectively improve the generalization ability of the neural network.
  • Determining the amount of training data under each transmission condition based on the contribution of each transmission condition to the neural network optimization target includes: sorting the contributions of the transmission conditions and, on the basis of the sorting, performing at least one of the following operations: reducing the amount of training data under the transmission conditions corresponding to the larger contributions in the sorting, and increasing the amount of training data under the transmission conditions corresponding to the smaller contributions in the sorting.
  • Specifically, the contributions of data with different transmission conditions to the neural network optimization goal (or objective function, loss function) when mixed in equal proportions can be calculated first and then sorted. Afterwards, when constructing the mixed training data set, the smaller and larger contributions are determined from this sorting, identifying the transmission conditions with lower and higher contributions; then, under the premise of ensuring sufficient data for all transmission conditions, the data amount of low-contribution conditions is increased and/or the data amount of high-contribution conditions is reduced.
  • A threshold (that is, a preset threshold) can be set so that the ratio of the data amount of any transmission condition to the total data amount is not lower than the threshold, thereby satisfying the above premise of ensuring sufficient data for all transmission conditions.
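  • A small sketch of enforcing such a floor (the 5% threshold is an arbitrary assumption): every condition first receives the floor share, and the remaining mass is split in proportion to the adjusted shares.

```python
import numpy as np

def enforce_floor(shares, floor=0.05):
    """Guarantee every transmission condition at least `floor` of the total
    data while keeping the shares normalised to 1."""
    p = np.asarray(shares, dtype=float)
    k = len(p)
    assert k * floor <= 1.0, "floor too large for this many conditions"
    return floor + (1.0 - k * floor) * p / p.sum()

print(enforce_floor([0.70, 0.25, 0.04, 0.01]))  # every share now >= 0.05
```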
  • Performing at least one of the operations of reducing the amount of training data under the transmission conditions corresponding to the larger contributions in the sorting and increasing the amount under those corresponding to the smaller contributions can follow this rule: the greater the value of a larger contribution, the greater the magnitude of the reduction; the smaller the value of a smaller contribution, the greater the magnitude of the increase.
  • The goal is that, as the contribution of a transmission condition increases, the amount of its training data gradually decreases. Therefore, the lower the contribution of a transmission condition, the more its data amount is increased; the higher the contribution, the more its data amount is reduced. The embodiment of the present application increases or decreases the amount of training data for each transmission condition in proportion to its contribution value, so that the data amount falls gradually as the contribution rises; this better balances the influence of each transmission condition on the final neural network and is more conducive to improving its generalization ability.
  • When the sorting result is in ascending order of contribution, the amount of training data under the transmission conditions decreases along the sorting direction; when the sorting result is in descending order, the amount of training data under the transmission conditions increases along the sorting direction.
  • When decreasing, the proportion of the corresponding transmission condition's data to the total data can decrease in any manner, such as a linear decrease, an arithmetic decrease, a proportional (geometric) decrease, an exponential-function decrease, or a power-function decrease; when increasing, the proportion can likewise increase in any manner, such as a linear, arithmetic, proportional (geometric), exponential-function, or power-function increase.
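  • The sketch below generates such monotone proportion schedules over conditions sorted by ascending contribution; the decay shapes and the 0.8 ratio are illustrative assumptions.

```python
import numpy as np

def decreasing_proportions(k, kind="proportional", ratio=0.8):
    """Data proportions over k conditions sorted by ascending contribution,
    so the share shrinks as the contribution grows."""
    n = np.arange(k)
    if kind == "linear":
        w = k - n                 # arithmetic decrease: k, k-1, ..., 1
    elif kind == "proportional":
        w = ratio ** n            # geometric decrease: 1, r, r^2, ...
    elif kind == "exponential":
        w = np.exp(-n)            # exponential-function decrease
    else:
        w = (n + 1.0) ** -2       # power-function decrease
    return w / w.sum()

print(decreasing_proportions(5, kind="linear"))
```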
  • Performing at least one of the operations of reducing the amount of training data under the transmission conditions with larger contributions and increasing the amount under those with smaller contributions may also proceed as follows: if the contribution of a transmission condition is greater than a reference contribution, that contribution is determined to be a larger contribution, and the amount of training data under the condition is reduced; if the contribution is not greater than the reference contribution, it is determined to be a smaller contribution, and the amount of training data under the condition is increased.
  • Specifically, an intermediate comparison reference for the contribution can first be determined from the ranking; it is called the reference contribution.
  • The reference contribution is the median of the ranking, or the contribution at a set position in the ranking, or the average of the contributions in the ranking, or the contribution in the ranking closest to that average.
  • the average may be an arithmetic mean, a geometric mean, a harmonic mean, a weighted mean, a square mean or an exponential mean, etc.
  • The contribution of each transmission condition is compared with the reference contribution in turn. If the contribution of the i-th transmission condition is greater than the reference contribution, the i-th transmission condition is determined to have the larger contribution described in the above embodiment, and its data amount is reduced; conversely, if the contribution of the i-th transmission condition is not greater than the reference contribution, it is determined to have the smaller contribution, and its data amount is increased.
  • By determining an intermediate comparison reference for the contribution, it is only necessary to compare the other contributions with this reference and then, according to the comparison result, decide whether to increase or decrease the data amount of the corresponding transmission condition; the algorithm is simple and the computation is small.
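  • A sketch of this reference-contribution rule, with the median as the reference; the 20% scaling step is an assumption, since the text leaves the adjustment magnitude open.

```python
import numpy as np

def adjust_by_reference(amounts, contributions, step=0.2):
    """Scale each condition's data amount down if its contribution exceeds
    the reference (here: the median of the ranking), and up otherwise."""
    ref = np.median(contributions)
    n = np.asarray(amounts, dtype=float)
    c = np.asarray(contributions, dtype=float)
    return np.where(c > ref, n * (1.0 - step), n * (1.0 + step))

print(adjust_by_reference([100, 100, 100, 100], [0.9, 0.6, 0.4, 0.1]))
# -> [ 80.  80. 120. 120.]
```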
  • Determining the amount of training data under each of the transmission conditions based on the contribution of each transmission condition to the neural network optimization goal may further include: determining the weighting coefficient corresponding to each transmission condition, and determining the amount of training data under each transmission condition based on the contribution of each condition to the optimization goal combined with the weighting coefficient.
  • When determining the proportion of data under different transmission conditions to the total data amount, the weighting item can be designed based on the actual probability density of the different transmission conditions: the data amount of conditions with high probability density is increased, and that of conditions with low probability density is reduced. For example, assuming the probability density of the k-th SNR is p_k, the corresponding weighting item is f(p_k), where f(p_k) is an increasing function of p_k; the data amount of the k-th SNR after applying the weighting item is f(p_k) × N_k.
  • The probability density reflects the probability with which transmission conditions occur in different environments; transmission conditions do not occur with equal probability, some occurring slightly more often and others slightly less.
  • The weighting coefficient has an increasing functional relationship with the probability density. That is, the relationship between the weighting item and the probability density can be any increasing function; it is only necessary to ensure that the weighting item is large where the probability density is high and small where it is low.
  • Designing the weighting item according to the actual probability density of the transmission conditions allows the data set to better match the actual environment.
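  • A sketch of the weighting update f(p_k) × N_k; the square root is just one admissible increasing choice of f.

```python
import numpy as np

def apply_density_weighting(amounts, densities, f=np.sqrt):
    """Update each condition's data amount to f(p_k) * N_k, where p_k is the
    real-world probability density of the condition and f is increasing."""
    return f(np.asarray(densities, dtype=float)) * np.asarray(amounts, dtype=float)

n_k = [1000, 1000, 1000]
p_k = [0.5, 0.3, 0.2]
print(apply_density_weighting(n_k, p_k))  # denser conditions keep more data
```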
  • the method further includes: sending the training data set to a target device, where the target device is used to train the neural network based on the training data set.
  • The training data set obtained in the embodiment of the present application can be used to train the neural network, and the method for obtaining it can be applied to data transmission scenarios in which the data acquisition end and the neural network training end are not the same execution end.
  • That is, data collection is completed according to the determined ratio of the mixed data set, a training data set is built after calibration, and the training data set is fed back to the other device that needs to execute the neural network training process, namely the target device.
  • the target device is a second device different from the current device, which may be a terminal or a network side device, and is used to complete the training of the neural network model by using the obtained training data set.
  • Sending the training data set to the target device includes: directly sending the training data set to the target device, or sending the training data set to the target device after a setting transformation, where the setting transformation includes at least one of specific quantization, specific compression, and neural network processing according to a pre-agreement or configuration.
  • That is, the sending method may be direct or indirect. Indirect sending refers to transforming the training data in the training data set before feeding it back: a specific quantization method or a specific compression method may be adopted, or the training data to be sent may be processed by a pre-agreed or configured neural network before sending.
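  • As a toy stand-in for the specific-quantization transform (an assumed 8-bit uniform scheme; the patent does not fix any particular codec), consider:

```python
import numpy as np

def quantize_for_feedback(x, num_bits=8):
    """Uniformly quantise float samples to num_bits-bit codes plus the
    (lo, hi) range needed to reconstruct them at the receiver."""
    x = np.asarray(x, dtype=float)
    lo, hi = float(x.min()), float(x.max())
    levels = 2 ** num_bits - 1
    codes = np.round((x - lo) / (hi - lo + 1e-12) * levels).astype(np.uint16)
    return codes, lo, hi

def dequantize(codes, lo, hi, num_bits=8):
    """Receiver-side reconstruction of the quantised training data."""
    return lo + codes.astype(float) / (2 ** num_bits - 1) * (hi - lo)
```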
  • Take as an example an application scenario in which the neural network optimization objective (or objective function, loss function) is an index to be minimized, such as the mean square error (MSE) or normalized mean square error (NMSE), and the transmission condition is the SNR: consider K types of SNR data to be mixed, denote the data amount of the k-th SNR by N_k, and record the contribution of the k-th SNR data to the above index to be minimized as C_k.
  • The SNRs are sorted in order of C_k from small to large (or from large to small). Following the order of C_k from small to large, the corresponding N_k is made to decrease, and the decreasing rule can be any decreasing rule; alternatively, following the order of C_k from large to small, the corresponding N_k is made to increase from small to large, and the increasing rule can be any increasing rule.
  • Optionally, when determining the proportion of data under different transmission conditions to the total data amount, weighting items may be designed based on the actual probability densities of the different transmission conditions. Assuming the probability density of the k-th SNR is p_k, the corresponding weighting item is f(p_k), an increasing function of p_k, and the data amount of the k-th SNR after applying the weighting item is f(p_k) × N_k.
  • Take as another example an application scenario in which the neural network optimization objective (or objective function, loss function) is an index to be maximized, such as the signal to interference plus noise ratio (SINR), spectral efficiency, or throughput, and the transmission condition is the signal-to-noise ratio (SNR): consider K types of SNR data to be mixed, record the total amount of data as N_all, the k-th signal-to-noise ratio as SNR_k, the data amount of the k-th SNR as N_k, and the contribution of the k-th SNR data to the above index to be maximized as C_k.
  • The SNRs are sorted in order of C_k from small to large (or from large to small). Following the order of C_k from small to large, the corresponding N_k is made to decrease, and the decreasing rule can be any decreasing rule; alternatively, following the order of C_k from large to small, the corresponding N_k is made to increase from small to large, and the increasing rule can be any increasing rule.
  • Likewise, weighting items may be designed based on the actual probability densities of the different transmission conditions; the data amount of the k-th SNR after applying the weighting item is f(p_k) × N_k.
  • The training data set acquisition method provided in the embodiment of the present application may be executed by a training data set acquisition device, or by a control module in that device for executing the method. In the embodiment of the present application, the device executing the method is taken as the example used to illustrate the training data set acquisition device provided herein.
  • the structure of the training data set acquisition device in the embodiment of the present application is shown in Figure 3, which is a schematic structural diagram of the training data set acquisition device provided in the embodiment of the present application, and the device can be used to implement the above-mentioned training data set acquisition method embodiments.
  • For training data set acquisition, the device includes a first processing module 301 and a second processing module 302, wherein:
  • the first processing module 301 is used to determine the data volume of the training data under each transmission condition based on the contribution of each transmission condition to the neural network optimization goal;
  • the second processing module 302 is configured to acquire the training data under each of the transmission conditions based on the data amount of the training data under each of the transmission conditions, so as to form a training data set for training the neural network;
  • the contribution degree of the transmission condition to the neural network optimization objective indicates the degree of influence of the transmission condition on the value of the neural network optimization objective.
  • the first processing module is configured to:
  • the type of the transmission condition includes at least one of the following:
  • Signal-to-noise ratio or signal-to-interference-plus-noise ratio
  • Distance between the terminal and the base station
  • Antenna configuration information at the transmitting end or receiving end
  • the second processing module is configured to:
  • collect and calibrate the data under each of the transmission conditions, to form a training data set under each transmission condition; or
  • collect a set amount of data under each of the transmission conditions and, based on the amount of training data required under each condition, select part of the data from the set amount and calibrate it, or supplement the set amount of data and calibrate it, to form a training data set under each transmission condition.
  • When performing at least one of the operations of reducing the amount of training data under the transmission conditions corresponding to the larger contributions in the ranking and increasing the amount under those corresponding to the smaller contributions, the first processing module is configured such that: when the sorting result is in ascending order, the amount of training data under the transmission conditions decreases along the sorting direction; when the sorting result is in descending order, the amount increases along the sorting direction.
  • When performing at least one of the operations of reducing the amount of training data under the transmission conditions with larger contributions in the ranking and increasing the amount under those with smaller contributions, the first processing module is further configured to: if the contribution of a transmission condition is greater than the reference contribution, determine it to be a larger contribution and reduce the amount of training data under that condition; if the contribution is not greater than the reference contribution, determine it to be a smaller contribution and increase the amount of training data under that condition.
  • The reference contribution is the median of the ranking, or the contribution at a set position in the ranking, or the average of the contributions in the ranking, or the contribution in the ranking closest to that average.
  • The first processing module is also used to: determine the weighting coefficient corresponding to each transmission condition, and determine the amount of training data under each of the transmission conditions based on the contribution of each condition to the optimization goal combined with the weighting coefficient.
  • the weighting coefficient and the probability density have an increasing function relationship.
  • the device also includes:
  • a sending module configured to send the training data set to a target device, and the target device is used to train the neural network based on the training data set.
  • the sending module is configured to:
  • The setting transformation includes at least one of specific quantization, specific compression, and pre-agreed or configured neural network processing.
  • the training data set acquisition device in the embodiment of the present application may be a device, a device with an operating system or an electronic device, or a component, an integrated circuit, or a chip in a terminal or a network-side device.
  • the apparatus or electronic equipment may be a mobile terminal or a non-mobile terminal, and may also include but not limited to the types of the network side equipment 102 listed above.
  • the mobile terminal may include but not limited to the types of terminal 101 listed above
  • The non-mobile terminal may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a television (television, TV), a teller machine, or a self-service machine, etc., which is not specifically limited in this embodiment of the present application.
  • NAS: Network Attached Storage
  • the training data set acquisition device provided by the embodiment of the present application can realize each process realized by the method embodiment in FIG. 2 and achieve the same technical effect. To avoid repetition, details are not repeated here.
  • the embodiment of the present application also provides a wireless transmission method, which can be executed by a terminal and/or a network-side device.
  • the terminal can be the terminal 101 shown in FIG. 1
  • The network-side device may specifically be the network-side device 102 shown in FIG. 1.
  • As shown in Figure 4, a schematic flowchart of the wireless transmission method provided by the embodiment of the present application, the method includes:
  • Step 401: based on the neural network model, perform a wireless transmission operation to realize the wireless transmission.
  • the neural network model is obtained by training in advance using a training data set, and the training data set is obtained based on the methods for obtaining the training data set as described in the above-mentioned embodiments.
  • Specifically, the training data set (or the data ratio of each transmission condition) can be obtained in advance according to the above embodiments of the training data set acquisition method, and the training data set can be used to train the built and initialized neural network to obtain the neural network model. Afterwards, the neural network model is applied in the wireless transmission computation of the embodiment of the present application, and the wireless transmission is finally realized through this computation.
  • The wireless transmission application environment of the embodiment of the present application can be any environment in which machine learning can replace the functions of one or more modules in an existing wireless transmission network; that is, wherever machine learning is used to train neural networks for wireless transmission, the training data set can be constructed using the above embodiments of the training data set acquisition method, and the neural network model for wireless transmission can then be trained with that training data set. Application environments include, for example, pilot design, channel estimation, signal detection, user pairing, HARQ, and positioning at the physical layer; resource allocation, handover, and mobility management at the higher layers; and scheduling or slicing at the network layer. The embodiment does not limit the specific wireless transmission application scenario.
  • In this way, data under various transmission conditions are selected in different proportions to construct a non-uniformly mixed training data set, and a common neural network for wireless transmission under different actual transmission conditions is trained on that data set, enabling the trained neural network to achieve higher performance under each transmission condition.
  • Before performing the wireless transmission operation based on the neural network model, the wireless transmission method further includes: training the neural network model based on the training data set using any of the following training methods, which include:
  • That is, before the neural network model is used to perform wireless transmission operations, it must first be trained using the training data set.
  • the training phase of the neural network can be performed offline, and the execution subject can be a network-side device, a terminal-side device, or a network-side device-terminal-side device combination.
  • the network side device is an access network device, a core network device or a data network device. That is to say, the network-side device in this embodiment of the present application may include one or more of a network-side device in an access network, a core network device, and a data network device (data network, DN).
  • The network-side device in the access network may be a base station or a node responsible for AI training on the RAN side, and is not limited to the types of access network device 1021 listed for FIG. 1.
  • the core network equipment is not limited to the type of core network equipment 1022 listed in FIG. 1 , and the data network equipment may be NWDAF, UDM, UDR, or UDSF.
  • When the execution subject is a network-side device, the training may be centralized training based on a single network-side device, or distributed training (such as federated learning) based on multiple network-side devices. When the execution subject is a terminal-side device, it may be centralized training based on a single terminal, or distributed training (such as federated learning) based on multiple terminals. When the execution subject is a network-side device-terminal-side device combination, it may be a single network-side device combined with multiple terminal devices, a single terminal device combined with multiple network-side devices, or multiple network-side devices combined with multiple terminal-side devices. This application does not specifically limit the execution subject of the training process.
  • the wireless transmission method further includes: during the distributed training process, sharing the proportion of the training data under each transmission condition among the subjects performing the distributed training.
  • In this way, each execution subject can contribute to training the neural network model without sharing its own data, which addresses the insufficient computing power or training capacity of a single device, the inability of devices to share data (owing to privacy concerns), and the cost of transferring large amounts of data.
  • Specifically, any one of the multiple network-side devices calculates and determines the proportion of training data under each of the transmission conditions, and sends the proportion to the other network-side devices among the multiple network-side devices through interface signaling of a first setting type, that is, a preset first setting type of network-side interface signaling.
  • the first setting type interface signaling includes Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling, or N22 interface signaling.
  • That is, the Xn interface can be used between base stations to share the data proportion through Xn interface signaling, and the N1, N2, N3, N4, N5, N6, N7, N8, N9, N10, N11, N12, N13, N14, N15, or N22 interfaces can be used between core network devices to share the data proportion through the corresponding type of interface signaling.
  • For example, a certain network-side device may calculate and determine the data proportion information and then share it with the other network-side devices through, for example (but not limited to), Xn interface signaling.
  • Similarly, any one of the multiple terminals calculates and determines the proportion of training data under each of the transmission conditions, and sends the proportion to the other terminals through interface signaling of a second setting type, that is, a preset second setting type of terminal interface signaling.
  • the second setting type interface signaling includes PC5 interface signaling or sidelink interface signaling.
  • For example, a certain terminal device calculates and determines the data proportion information and then shares it with the other terminal devices through, for example (but not limited to), PC5 interface signaling.
  • For a network-side device-terminal combination, any network-side device or any terminal among them calculates and determines the proportion of the training data and sends the proportion, through signaling of a third setting type, to the network-side devices and terminals other than itself.
  • That is, when the execution subject is a network-side device-terminal-side device combination, all devices share the same data proportion: one terminal (or network-side device) in the combination calculates the proportion of training data under each transmission condition and shares it, through interface signaling of a preset third setting type, with the other devices in the combination.
  • the third setting type signaling includes RRC, PDCCH layer 1 signaling, PDSCH, MAC CE, SIB, Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 Interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 Interface signaling, N15 interface signaling, N22 interface signaling, PUCCH layer 1 signaling, PUSCH, PRACH MSG1, PRACH MSG3, PRACH MSGA, PC5 interface signaling or sidelink interface signaling.
  • the method further includes: acquiring real-time data under the transmission conditions, and adjusting the trained neural network model online based on the real-time data.
  • That is, the online data of the transmission environment collected in real time is used to fine-tune the parameters of the pre-trained neural network so that the network adapts to the actual environment.
  • fine-tuning is a training process that uses the parameters of the pre-trained neural network as initialization.
  • During fine-tuning, the parameters of some layers can be frozen: generally, the layers near the input are frozen and the layers near the output remain active, which helps ensure that the network can still converge. The smaller the amount of data in the fine-tuning stage, the more layers should be frozen, fine-tuning only a small number of layers close to the output.
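  • A minimal PyTorch-style sketch of this freezing strategy; the toy network structure and the choice of one trainable output layer are assumptions.

```python
import torch.nn as nn

def freeze_for_finetuning(model: nn.Sequential, trainable_tail: int) -> None:
    """Freeze all but the last `trainable_tail` layers: the less fine-tuning
    data is available, the larger the frozen prefix should be."""
    layers = list(model)
    for layer in layers[: len(layers) - trainable_tail]:
        for p in layer.parameters():
            p.requires_grad = False

# Toy pre-trained network (structure is assumed, not taken from the patent).
net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 128))
freeze_for_finetuning(net, trainable_tail=1)  # only the output layer adapts
```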
  • For fine-tuning, the data mixing ratio of the different transmission conditions from the model training stage can be reused; alternatively, the trained neural network can be taken to the actual wireless environment for fine-tuning, or the data generated in the actual wireless environment can be used directly for fine-tuning without controlling the data proportions.
  • online fine-tuning is performed on the trained neural network model to make the neural network more adaptable to the actual environment.
  • Acquiring the real-time data under the transmission conditions and adjusting the trained neural network model online based on that data includes: obtaining the proportion of training data under each transmission condition and obtaining the real-time data under those conditions; if the proportion of real-time data under any transmission condition is higher than the proportion of training data under that condition, the trained neural network is adjusted online accordingly, and the real-time data under that condition which exceeds its training data proportion is not input into the trained neural network model.
  • the data mixing ratio of different transmission conditions in the model training stage can be used. That is, according to the proportion of training data under each transmission condition in the training stage, determine the proportion or data volume of real-time data under each transmission condition in the online fine-tuning stage, and obtain the corresponding amount of real-time data under each transmission condition accordingly.
  • That is, when the training-phase mixing ratio is used and the proportion of data from a certain transmission condition in the actual wireless environment exceeds that condition's proportion in the model training phase, the excess data is not input into the network for fine-tuning, so as to avoid the unbalancing influence of data beyond the proportion.
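  • A small sketch of capping the online data at the training-stage proportions; the dictionary layout and fixed sample budget are assumptions.

```python
def cap_online_data(online_data, train_share, budget):
    """Keep, per transmission condition, at most the share of fine-tuning
    samples that the condition had in the training stage; the excess is
    simply never fed to the network."""
    capped = {}
    for cond, samples in online_data.items():
        limit = int(train_share[cond] * budget)
        capped[cond] = samples[:limit]
    return capped

online = {"snr0": list(range(500)), "snr10": list(range(100))}
sizes = {k: len(v) for k, v in
         cap_online_data(online, {"snr0": 0.5, "snr10": 0.5}, 400).items()}
print(sizes)  # -> {'snr0': 200, 'snr10': 100}: snr0's excess is dropped
```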
  • Acquiring the real-time data under each of the transmission conditions includes: collecting online the data of at least one of the network-side device and the terminal under each transmission condition as the real-time data under each condition. Adjusting the trained neural network model online then includes: based on the data under each transmission condition of at least one of the network-side device and the terminal, using the network-side device or the terminal to adjust the trained neural network model online.
  • The execution subject sits at the input end of the neural network and may be a network-side device and/or a terminal-side device. That is, when the execution subject is a network-side device, real-time data for the various transmission conditions at the network side can be obtained online; when the execution subject is a terminal device, real-time data for the various transmission conditions at the terminal side can be obtained online; when both are included, real-time data for each transmission condition must be obtained for both execution subjects.
  • the network-side device or terminal performs online fine-tuning of the neural network model according to its corresponding real-time data, and updates the network parameters.
  • before acquiring the real-time data under each of the transmission conditions, the wireless transmission method further includes:
  • the network-side device obtains the proportion of training data under each of the transmission conditions from the network-side equipment in the training phase through any one of Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling and N22 interface signaling;
  • the network-side device obtains the proportion of training data under each of the transmission conditions from the terminal in the training phase through any one of PUCCH layer 1 signaling, PUSCH, MSG1 of PRACH, MSG3 of PRACH, and MSG A of PRACH;
  • the terminal obtains the proportion of training data under each of the transmission conditions from the terminal in the training phase through PC5 interface signaling or sidelink interface signaling;
  • the terminal obtains the proportion of training data under each of the transmission conditions from the network side equipment in the training phase through any one of RRC, PDCCH layer 1 signaling, PUSCH, MAC CE and SIB signaling.
  • the data proportion information is obtained from the other execution subjects through setting-type interface signaling, such as Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling and N22 interface signaling, or PC5 interface signaling, or sidelink interface signaling, or interface signaling such as RRC, PDCCH layer 1 signaling, MAC CE or SIB, so as to realize the sharing of the data proportion information in the online fine-tuning phase and ensure the generalization ability of the neural network.
  • Fig. 5 is a schematic flow diagram of constructing a neural network model in the wireless transmission method provided according to the embodiment of the present application.
  • Fig. 5 shows the model construction process involved in the wireless transmission method proposed in the embodiment of the present application, which can be divided into an offline training phase (part 1 in the figure) and a fine-tuning stage in the actual transmission network (part 2 in the figure), where the training data set is constructed and obtained before the offline training.
  • the data of all transmission conditions can first be mixed in equal proportions, so as to determine each transmission condition's data volume or data proportion under equal-proportion mixing. All transmission conditions in the mixture are then sorted by their contribution to the neural network optimization objective. After that, on the premise of ensuring sufficient data for all transmission conditions, the data volume of transmission conditions with low contributions is increased and the data volume of transmission conditions with high contributions is reduced, so as to determine the proportion of data for each transmission condition, and the mixed training data set is then constructed according to these proportions.
  • "guaranteeing sufficient data for all transmission conditions” may refer to setting a threshold, and the ratio of the amount of data for any one transmission condition to the total amount of data must not be lower than the threshold.
  • when the contributions are sorted from small to large, the proportion of the corresponding transmission condition's data to the total data may decrease in any monotone manner, such as linear decrease, arithmetic decrease, geometric decrease, exponential-function decrease or power-function decrease.
  • conversely, when the contributions are sorted from large to small, the proportion of the corresponding transmission condition's data to the total data may increase in any monotone manner, such as linear increase, arithmetic increase, geometric increase, exponential-function increase or power-function increase; a sketch of such schedules follows.
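A sketch of such monotone proportion schedules, assuming numpy and a hypothetical floor value gamma; the concrete decreasing rules shown (linear, geometric, power) are only examples of the arbitrary monotone rules mentioned above.

```python
import numpy as np

def mixing_proportions(contributions, scheme="linear", floor=0.02):
    """Assign data proportions that decrease as the contribution grows.

    contributions : array of contribution values C_k, one per condition
    scheme        : 'linear', 'geometric' or 'power' decrease over the
                    contribution ranking (any monotone rule is allowed)
    floor         : minimum share (the threshold) guaranteed per condition
    """
    order = np.argsort(contributions)          # small contribution first
    k = len(contributions)
    rank = np.empty(k)
    rank[order] = np.arange(k)                 # 0 = smallest contribution
    if scheme == "linear":
        w = k - rank                           # linearly decreasing weights
    elif scheme == "geometric":
        w = 0.8 ** rank                        # ratio-wise decrease
    else:                                      # 'power'
        w = 1.0 / (rank + 1.0) ** 2
    p = w / w.sum()
    p = np.maximum(p, floor)                   # enforce the threshold
    return p / p.sum()                         # renormalize to sum to 1
```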
  • Figure 6 is a schematic flow diagram of determining the proportion of training data in the training data set acquisition method provided according to the embodiment of the present application, mainly including:
  • if the contribution of the i-th transmission condition is greater than the median contribution, then on the premise of ensuring that the data volume of all transmission conditions is sufficient, the data volume of the i-th transmission condition is reduced, with the reduction proportional to the difference between the contribution of the i-th transmission condition and the median of the contributions;
  • if the contribution of the i-th transmission condition is less than the median contribution, then on the premise of ensuring that the data volume of all transmission conditions is sufficient, the data volume of the i-th transmission condition is increased, with the increase proportional to the difference between the median of the contributions and the contribution of the i-th transmission condition. A sketch of this rule follows.
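The median rule of Figure 6 can be sketched as follows; the step-size parameter alpha and the floor value are illustrative assumptions.

```python
import numpy as np

def adjust_by_median(contributions, n_total, alpha=0.5, floor=0.02):
    """Start from equal-proportion mixing and shift data volume away from
    high-contribution conditions, proportionally to the distance between
    each contribution and the median contribution (the rule of Figure 6).
    """
    c = np.asarray(contributions, dtype=float)
    c_med = np.median(c)
    p = np.full(len(c), 1.0 / len(c))          # equal-proportion start
    # above the median: reduce; below the median: increase, both in
    # proportion to |C_i - median|
    p -= alpha * (c - c_med) / (np.abs(c - c_med).sum() + 1e-12)
    p = np.maximum(p, floor)                   # keep every condition sufficient
    p /= p.sum()
    return np.round(p * n_total).astype(int)   # per-condition data volumes
```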
  • after obtaining the training data set, the training data set is used to iteratively train the neural network model offline until convergence, obtaining the trained neural network model.
  • the data in the actual wireless network is collected in real time, and the parameters of the pre-trained neural network model are fine-tuned, so that the neural network model can adapt to the actual environment.
  • Online fine-tuning can be considered as a retraining process using the parameters of the pre-trained neural network as initialization.
  • DMRS (Demodulation Reference Signal).
  • Figure 7 is a schematic structural diagram of the neural network used for DMRS channel estimation in the wireless transmission method provided according to the embodiment of the present application, wherein the input of the neural network is the N_RE_DMRS symbols of the DMRS after passing through the channel and having noise added, and the output of the neural network is N_RE symbols, corresponding to the channel estimation results on all N_RE time-frequency resources.
  • the training data is a labeled DMRS information pair, that is, a DMRS signal sample (comprising N_RE_DMRS symbols) paired with a label (the label is the true value of the channel corresponding to the current DMRS signal sample, a total of N_RE_DMRS symbols).
  • during training, a large number of labeled DMRS information pairs are used to adjust the parameters of the neural network, minimizing the normalized mean square error (NMSE) between the output of the neural network for a DMRS signal sample and its label; a minimal training sketch follows.
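A minimal NMSE training sketch for such a DMRS channel estimator, assuming PyTorch; the fully-connected layout, the concrete values of N_RE_DMRS and N_RE, and the real/imaginary stacking are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_RE_DMRS, N_RE = 48, 288   # example sizes; the text only names the symbols

# a hypothetical estimator: noisy DMRS symbols -> channel on all REs
estimator = nn.Sequential(nn.Linear(2 * N_RE_DMRS, 512), nn.ReLU(),
                          nn.Linear(512, 2 * N_RE))  # factor 2: real/imag parts

def nmse(h_hat, h_true):
    """Normalized mean square error between the estimate and the label."""
    return ((h_hat - h_true).pow(2).sum(dim=-1) /
            h_true.pow(2).sum(dim=-1).clamp_min(1e-12)).mean()

opt = torch.optim.Adam(estimator.parameters(), lr=1e-3)
x = torch.randn(64, 2 * N_RE_DMRS)   # stand-in batch of noisy DMRS samples
h = torch.randn(64, 2 * N_RE)        # stand-in channel ground-truth labels
loss = nmse(estimator(x), h)         # the objective minimized during training
loss.backward()
opt.step()
```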
  • the k-th SNR is denoted SNR_k, and the data volume of the k-th SNR is denoted N_k.
  • the neural network is trained based on federated learning, and there is one network-side device and multiple terminal-side devices participating in federated learning.
  • the network-side device determines the data ratio of each SNR when the mixed training data set is constructed.
  • the contribution of the k-th SNR's data to the above NMSE is denoted C_k. The SNRs are sorted in order of C_k from small to large (or from large to small). In this embodiment, data with a low SNR contribute more to the NMSE, and data with a high SNR contribute less. The median of all contributions in the sorted order is recorded as the reference contribution.
  • the network-side device sends the data ratio information of each SNR to all terminals participating in the joint training through interface signaling such as RRC, PDCCH layer 1 signaling, MAC CE or SIB, so as to perform federated learning of the neural network, that is, offline training.
  • the trained neural network is fine-tuned online in the actual wireless network. Since the data proportion information was shared with all terminals in the offline training phase, the data proportions can be reused in the online fine-tuning phase: when the proportion of data from a certain SNR in the actual wireless environment exceeds that SNR's proportion in the model training stage, the excess data are not input into the neural network for fine-tuning.
  • the embodiments of the present application can improve the generalization ability of a trained neural network model in a changing wireless environment.
  • the wireless transmission method provided in the embodiment of the present application may be executed by a wireless transmission device, or a control module in the wireless transmission device for executing the wireless transmission method.
  • the wireless transmission device provided in the embodiment of the present application is described by taking the wireless transmission device executing the wireless transmission method as an example.
  • the structure of the wireless transmission device in the embodiment of the present application is shown in Figure 8, which is a schematic structural diagram of the wireless transmission device provided in the embodiment of the present application.
  • the device can be used to implement the wireless transmission in the above wireless transmission method embodiments.
  • the device includes:
  • the third processing module 801 is configured to perform wireless transmission calculations based on the neural network model to realize the wireless transmission.
  • the neural network model is obtained by training in advance using a training data set, and the training data set is obtained based on the methods for obtaining the training data set as described in the above-mentioned embodiments.
  • the wireless transmission device further includes:
  • a training module configured to use any of the following training methods to train and obtain the neural network model based on the training data set, and the following training methods include:
  • the network side device is an access network device, a core network device or a data network device.
  • the wireless transmission device further includes:
  • the fourth processing module is configured to share the proportion of training data under each transmission condition among the subjects performing the distributed training during the distributed training process.
  • the fourth processing module is configured to have any one network-side device among the multiple network-side devices calculate and determine the proportion of the training data under each of the transmission conditions, and send the proportion, through first setting-type interface signaling, to the other network-side devices among the plurality of network-side devices except that network-side device.
  • the first setting type interface signaling includes Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling, or N22 interface signaling.
  • the fourth processing module is configured to have any one terminal among the multiple terminals calculate and determine the proportion of the training data under each of the transmission conditions, and send the proportion, through second setting-type interface signaling, to the other terminals among the plurality of terminals except that terminal.
  • the second setting type interface signaling includes PC5 interface signaling or sidelink interface signaling.
  • the fourth processing module is configured to have any network-side device or any terminal among the network-side devices and terminals calculate and determine the proportion of the training data under each of the transmission conditions, and send the proportion, through third setting-type signaling, to the other network-side devices or terminals among the network-side devices and terminals except the network-side device or terminal that performed the calculation.
  • the third setting type signaling includes RRC, PDCCH layer 1 signaling, PDSCH, MAC CE, SIB, Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 Interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 Interface signaling, N15 interface signaling, N22 interface signaling, PUCCH layer 1 signaling, PUSCH, PRACH MSG1, PRACH MSG3, PRACH MSG A, PC5 interface signaling or sidelink interface signaling.
  • the wireless transmission device further includes:
  • the fine-tuning module is used to obtain real-time data under the transmission conditions, and adjust the trained neural network model online based on the real-time data.
  • the fine-tuning module is used for:
  • if the proportion of real-time data under any of the transmission conditions is higher than the proportion of training data under that transmission condition, then in the process of online adjustment of the trained neural network model, the real-time data under that transmission condition which exceed the proportion of the training data are not input into the trained neural network model.
  • the fine-tuning module, when acquiring the real-time data under each of the transmission conditions, is used for:
  • the fine-tuning module, when performing the online adjustment of the trained neural network model, is used for:
  • the network-side device or the terminal is used to adjust the trained neural network model online.
  • the wireless transmission device further includes:
  • a communication module configured to, when the network-side device or the terminal does not obtain the proportion of training data under each of the transmission conditions during the training phase,
  • the network-side device obtains the proportion of training data under each of the transmission conditions from the network-side equipment in the training phase through any one of Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling and N22 interface signaling;
  • the network-side device obtains the proportion of training data under each of the transmission conditions from the terminal in the training phase through any one of PUCCH layer 1 signaling, PUSCH, MSG1 of PRACH, MSG3 of PRACH, and MSG A of PRACH;
  • the terminal obtains the proportion of training data under each of the transmission conditions from the terminal in the training phase through PC5 interface signaling or sidelink interface signaling;
  • the terminal obtains the proportion of training data under each of the transmission conditions from the network side equipment in the training phase through any one of RRC, PDCCH layer 1 signaling, PUSCH, MAC CE and SIB signaling.
  • the wireless transmission device in the embodiment of the present application may be a device, a device with an operating system or an electronic device, and may also be a component, an integrated circuit, or a chip in a terminal or network-side device.
  • the apparatus or electronic equipment may be a mobile terminal or a non-mobile terminal, and may also include but not limited to the types of the network side equipment 102 listed above.
  • the mobile terminal may include but not limited to the types of terminal 101 listed above
  • the non-mobile terminal may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, etc., which are not specifically limited in this embodiment of the present application.
  • the wireless transmission device in the embodiment of the present application may be a device with an operating system.
  • the operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in this embodiment of the present application.
  • the wireless transmission device provided in the embodiment of the present application can realize various processes realized by the wireless transmission method embodiments in FIG. 4 to FIG. 7 , and achieve the same technical effect. To avoid repetition, details are not repeated here.
  • the embodiment of the present application also provides a communication device 900, including a processor 901, a memory 902, and a program or instruction stored in the memory 902 and operable on the processor 901; the communication device 900 is, for example, a terminal or a network-side device. When the program or instruction is executed by the processor 901, the various processes of the above embodiment of the training data set acquisition method, or the various processes of the above embodiment of the wireless transmission method, are realized, achieving the same technical effects; to avoid repetition, they are not repeated here.
  • the embodiment of the present application also provides a communication device, which may be a terminal or a network-side device.
  • the communication device includes a processor and a communication interface, wherein the processor is used to determine, based on the contribution of each transmission condition to the neural network optimization goal, the data volume of the training data under each of the transmission conditions; and to obtain, based on the data volume of the training data under each of the transmission conditions, the training data under each of the transmission conditions, to form a training data set for training the neural network;
  • the contribution degree of the transmission condition to the neural network optimization objective indicates the degree of influence of the transmission condition on the value of the neural network optimization objective.
  • the embodiment of the present application also provides a communication device, which may be a terminal or a network side device, and the communication device includes a processor and a communication interface, wherein the processor is used to perform wireless transmission calculations based on a neural network model to realize the wireless Transmission; wherein, the neural network model is obtained by using a training data set for training in advance, and the training data set is obtained based on the methods for obtaining the training data set as described in the above-mentioned embodiments.
  • this embodiment of the communication device corresponds to the embodiment of the above-mentioned wireless transmission method, and each implementation process and implementation mode of the above-mentioned method embodiment can be applied to this embodiment of the communication device, and can achieve the same technical effect .
  • FIG. 10 is a schematic diagram of a hardware structure of a terminal implementing an embodiment of the present application.
  • the terminal 1000 includes, but is not limited to, at least some of the following components: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
  • the terminal 1000 can also include a power supply (such as a battery) for supplying power to the various components, and the power supply can be logically connected to the processor 1010 through a power management system, so as to realize functions such as charging management, discharging management and power consumption management through the power management system.
  • the terminal structure shown in FIG. 10 does not constitute a limitation on the terminal, and the terminal may include more or fewer components than shown in the figure, or combine certain components, or arrange different components, which will not be repeated here.
  • the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera).
  • the display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 1007 includes a touch panel 10071 and other input devices 10072 .
  • the touch panel 10071 is also called a touch screen.
  • the touch panel 10071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 10072 may include, but are not limited to, physical keyboards, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • the radio frequency unit 1001 receives downlink data from the network-side device and forwards it to the processor 1010 for processing; in addition, it sends uplink data to the network-side device.
  • the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the memory 1009 can be used to store software programs or instructions as well as various data.
  • the memory 1009 may mainly include a program or instruction storage area and a data storage area, wherein the program or instruction storage area may store an operating system, at least one application program or instruction required by a function (such as a sound playback function, an image playback function, etc.) and the like.
  • the memory 1009 may include a high-speed random access memory, and may also include a nonvolatile memory, wherein the nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory.
  • the non-volatile memory may include, for example, at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • Volatile memory can be random access memory (Random Access Memory, RAM), static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM) and direct Rambus random access memory (Direct Rambus RAM, DRRAM).
  • the processor 1010 may include one or more processing units; optionally, the processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs or instructions, etc., and the modem processor, such as a baseband processor, mainly handles wireless communication. It can be understood that the above modem processor may not be integrated into the processor 1010.
  • the processor 1010 is configured to determine the data volume of the training data under each of the transmission conditions based on the contribution of each transmission condition to the neural network optimization goal, and to obtain, based on the data volume of the training data under each of the transmission conditions, the training data under each of the transmission conditions, to form a training data set for training the neural network; wherein the contribution of a transmission condition to the neural network optimization goal represents the degree of influence of the transmission condition on the value of the neural network optimization goal.
  • data under the various transmission conditions are selected in different proportions according to the contribution of data with different transmission conditions to the neural network optimization goal (also called the objective function or loss function), and are used to construct a mixed training data set, which can effectively improve the generalization ability of the neural network.
  • the processor 1010 is further configured to sort the contributions of each of the transmission conditions, and, on the basis of equal-proportion mixing, perform at least one of the operations of reducing the data volume of the training data under the transmission conditions corresponding to larger contributions in the sorting and increasing the data volume of the training data under the transmission conditions corresponding to smaller contributions in the sorting.
  • the processor 1010 is further configured to collect and calibrate the data under each of the transmission conditions based on the data volume of training data under each of the transmission conditions, to form the training data set under each of the transmission conditions; or to collect a set amount of data under each of the transmission conditions and, based on the data volume of the training data under each of the transmission conditions, select part of the data from the set amount of data and calibrate it, or supplement the set amount of data and calibrate it, to form the training data set under each of the transmission conditions.
  • the processor 1010 is further configured to perform, according to the following rules, the reduction of the data volume of training data under the transmission conditions corresponding to larger contributions in the sorting and the increase of the data volume of training data under the transmission conditions corresponding to smaller contributions in the sorting: the greater the value of the larger contribution, the greater the magnitude of the reduction; the smaller the value of the smaller contribution, the greater the magnitude of the increase.
  • the embodiment of the present application increases or decreases the data amount of the training data corresponding to a transmission condition in proportion to its contribution value, so that as the contribution of a transmission condition gradually increases, the data amount of its training data gradually decreases, thereby better balancing the influence of each transmission condition on the final neural network, which is more conducive to improving the generalization ability of the neural network.
  • the processor 1010 is further configured to determine a reference contribution according to the ranking and compare the contribution of each transmission condition with the reference contribution; if the contribution of a transmission condition is greater than the reference contribution, it is determined to be a larger contribution and the data volume of the training data under that transmission condition is reduced; otherwise, it is determined to be a smaller contribution and the data volume of the training data under that transmission condition is increased.
  • by determining an intermediate comparison reference for the contributions, it is only necessary to compare the other contributions with this reference and then increase or decrease the data amount of the corresponding transmission conditions according to the comparison results; the algorithm is simple and the computation load is small.
  • the processor 1010 is further configured to determine a weighting coefficient corresponding to each of the transmission conditions based on the probability density of each transmission condition in actual applications, and to determine the data volume of the training data under each transmission condition based on the contribution of each transmission condition to the optimization goal, in combination with the weighting coefficients; a weighting sketch follows.
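A sketch of the probability-density weighting f(p_k)·N_k; the choice f = sqrt and the renormalization to a fixed total volume are illustrative assumptions (the text only requires f to be an increasing function).

```python
import numpy as np

def weight_by_probability_density(n_k, p_k, f=np.sqrt):
    """Scale each condition's data volume N_k by an increasing function of
    its real-world probability density p_k, i.e. f(p_k) * N_k.
    """
    n_k, p_k = np.asarray(n_k, float), np.asarray(p_k, float)
    n_weighted = f(p_k) * n_k          # f must be increasing in p_k
    # keep the total volume unchanged after weighting (one possible choice)
    return n_weighted * n_k.sum() / n_weighted.sum()
```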
  • the weighting item is designed according to the actual probability density of the transmission condition, which can better adapt to the actual environment.
  • the radio frequency unit 1001 is configured to send the training data set to a target device, and the target device is used to train the neural network based on the training data set.
  • the radio frequency unit 1001 is configured to directly send the training data set to the target device, or send the training data set after setting transformation to the target device;
  • the processor 1010 is further configured to perform setting transformation on the training data set, the setting transformation including at least one of specific quantization, specific compression, and neural network processing according to pre-agreement or configuration.
  • the processor 1010 is also configured to perform wireless transmission operations based on a neural network model to realize the wireless transmission; wherein the neural network model is obtained by training in advance using a training data set, and the training data set is acquired based on the training data set acquisition methods described in the above embodiments.
  • data under various transmission conditions are selected in different proportions to construct a non-uniformly mixed training data set, and a common neural network for wireless transmission under different actual transmission conditions is trained on this non-uniformly mixed training data set, enabling the trained neural network to achieve higher performance under each transmission condition.
  • the processor 1010 is further configured to use any of the following training methods to train and obtain the neural network model based on the training data set, and the following training methods include:
  • the radio frequency unit 1001 is further configured to share the proportion of training data under each transmission condition among the subjects performing the distributed training during the distributed training process.
  • each execution subject can realize the training of the neural network model without sharing its own data, which can solve the problem of insufficient computing power or training capacity of a single device or inability between devices. Problems with sharing data (involving privacy concerns) or the cost of transferring large amounts of data.
  • the processor 1010 is further configured such that, if the distributed training is joint distributed training of multiple network-side devices, any one of the multiple network-side devices calculates and determines the proportion of training data under each of the transmission conditions;
  • the radio frequency unit 1001 is further configured to send the proportion to other network-side devices in the plurality of network-side devices except the one network-side device through the first setting type interface signaling.
  • the processor 1010 is further configured to, in the case that the distributed training is joint distributed training of multiple terminals, calculate and determine, by any one of the multiple terminals, the proportion of the training data under each of the transmission conditions;
  • the radio frequency unit 1001 is further configured to send the ratio to other terminals in the plurality of terminals except for the any terminal through second setting type interface signaling.
  • the processor 1010 is further configured such that, if the distributed training is joint distributed training between network-side devices and terminals, any network-side device or any terminal among the network-side devices and terminals calculates and determines the proportion of training data under each of the transmission conditions;
  • the radio frequency unit 1001 is further configured to send the proportion, through third setting-type signaling, to the other network-side devices or terminals among the network-side devices and terminals except the network-side device or terminal that performed the calculation.
  • the processor 1010 is further configured to acquire real-time data under the transmission conditions, and adjust the trained neural network model online based on the real-time data.
  • online fine-tuning is performed on the trained neural network model to make the neural network more adaptable to the actual environment.
  • the processor 1010 is further configured to acquire real-time data under each of the transmission conditions based on the proportion of training data under each of the transmission conditions; and if the proportion of real-time data under any of the transmission conditions is higher than the proportion of training data under that transmission condition, then in the process of online adjustment of the trained neural network model, the real-time data under that transmission condition which exceed the proportion of the training data are not input into the trained neural network model.
  • when the data proportions of the training phase are used, data from transmission conditions that exceed their proportion are not input into the neural network model for fine-tuning, which avoids the unbalancing influence of out-of-proportion data volumes.
  • the input unit 1004 is configured to collect online data under each of the transmission conditions of at least one of the network side device and the terminal, as real-time data under each of the transmission conditions;
  • the processor 1010 is further configured to use the network-side device or the terminal to adjust the trained neural network model online based on the data of at least one of the network-side device and the terminal under each of the transmission conditions .
  • the radio frequency unit 1001 is also used to obtain the proportion of training data under each of the transmission conditions from the network-side equipment in the training phase through any one of Xn interface signaling, N1 interface signaling, N2 interface signaling, N3 interface signaling, N4 interface signaling, N5 interface signaling, N6 interface signaling, N7 interface signaling, N8 interface signaling, N9 interface signaling, N10 interface signaling, N11 interface signaling, N12 interface signaling, N13 interface signaling, N14 interface signaling, N15 interface signaling and N22 interface signaling, or to obtain the proportion of training data under each of the transmission conditions from the terminal in the training phase through any one of RRC, PDCCH layer 1 signaling, MAC CE and SIB signaling;
  • the radio frequency unit 1001 is further configured to obtain the proportion of training data under each of the transmission conditions from the terminal in the training phase through PC5 interface signaling or sidelink interface signaling, or to obtain the proportion of training data under each of the transmission conditions from the network-side equipment in the training phase through any one of RRC, PDCCH layer 1 signaling, MAC CE and SIB signaling.
  • the data proportion information is obtained from the other execution subjects through setting-type interface signaling, realizing the sharing of the data proportion information in the online fine-tuning phase and ensuring the generalization ability of the neural network.
  • FIG. 11 is a schematic diagram of a hardware structure of an access network device implementing an embodiment of the present application.
  • the access network device 1100 includes: an antenna 1101, a radio frequency device 1102, and a baseband device 1103.
  • the antenna 1101 is connected to the radio frequency device 1102 .
  • the radio frequency device 1102 receives information through the antenna 1101, and sends the received information to the baseband device 1103 for processing.
  • the baseband device 1103 processes the information to be sent and sends it to the radio frequency device 1102
  • the radio frequency device 1102 processes the received information and sends it out through the antenna 1101 .
  • the frequency band processing apparatus may be located in the baseband device 1103, and the method performed by the network-side device in the above embodiments may be implemented in the baseband device 1103, which includes a processor 1104 and a memory 1105.
  • the baseband device 1103 may include, for example, at least one baseband board on which a plurality of chips are arranged, as shown in FIG. 11; one of the chips is, for example, the processor 1104, which is connected to the memory 1105 to call the program in the memory 1105 and perform the operations of the network-side device shown in the above method embodiments.
  • the baseband device 1103 may also include a network interface 1106 for exchanging information with the radio frequency device 1102, such as a common public radio interface (CPRI for short).
  • the access network device in this embodiment of the present application further includes: instructions or programs stored in the memory 1105 and operable on the processor 1104, and the processor 1104 invokes the instructions or programs in the memory 1105 to execute the methods executed by the modules shown in FIG. 3 or FIG. 8, achieving the same technical effects; to avoid repetition, they are not repeated here.
  • FIG. 12 is a schematic diagram of a hardware structure of a core network device implementing an embodiment of the present application.
  • the core network device 1200 includes: a processor 1201, a transceiver 1202, a memory 1203, a user interface 1204, and a bus interface, wherein:
  • the core network device 1200 also includes: a computer program stored in the memory 1203 and operable on the processor 1201; when the computer program is executed by the processor 1201, the processes of the modules shown in FIG. 3 or FIG. 8 are implemented, achieving the same technical effects; to avoid repetition, they are not repeated here.
  • the bus architecture may include any number of interconnected buses and bridges; specifically, one or more processors represented by the processor 1201 and various memory circuits represented by the memory 1203 are linked together.
  • the bus architecture can also link together various other circuits such as peripheral devices, voltage regulators and power management circuits, which are well known in the art; therefore, the embodiments of the present application will not describe them further.
  • the bus interface provides the interface.
  • Transceiver 1202 may be a plurality of elements, including a transmitter and a receiver, providing a means for communicating with various other devices over transmission media.
  • the user interface 1204 may also be an interface capable of externally or internally connecting the required devices, the connected devices including but not limited to a keypad, a display, a speaker, a microphone, a joystick, and so on.
  • the processor 1201 is responsible for managing the bus architecture and general processing, and the memory 1203 can store data used by the processor 1201 when performing operations.
  • the embodiment of the present application also provides a readable storage medium storing a program or instruction; when the program or instruction is executed by a processor, each process of the above training data set acquisition method embodiment, or each process of the above wireless transmission method embodiment, is realized, achieving the same technical effects; to avoid repetition, details are not repeated here.
  • the processor is the processor in the terminal or the network side device described in the foregoing embodiments.
  • the readable storage medium includes computer readable storage medium, such as computer read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
  • the embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the above training data set acquisition method or the above wireless transmission method, achieving the same technical effects; to avoid repetition, details are not repeated here.
  • the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application discloses a training data set acquisition method, a wireless transmission method, an apparatus and a communication device, belonging to the technical field of communications. The training data set acquisition method of the embodiments of the present application includes: determining the data volume of training data under each transmission condition based on the contribution of each transmission condition to a neural network optimization objective; and acquiring the training data under each transmission condition based on the data volume of the training data under each transmission condition, to form a training data set for training the neural network; wherein the contribution of a transmission condition to the neural network optimization objective represents the degree of influence of the transmission condition on the value of the neural network optimization objective.

Description

Training data set acquisition method, wireless transmission method, apparatus and communication device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202110513732.0, filed on May 11, 2021 and entitled "Training data set acquisition method, wireless transmission method, apparatus and communication device", which is incorporated herein by reference in its entirety.
Technical field
The present application belongs to the technical field of communications, and specifically relates to a training data set acquisition method, a wireless transmission method, an apparatus and a communication device.
Background
Generalization means that a neural network can produce reasonable outputs for data not encountered during training (learning). At present, to achieve generalization over a changing wireless transmission environment, a common neural network can be trained on mixed data, so that the parameters of the neural network do not need to be switched as the environment changes. However, such a neural network cannot achieve optimal performance under every transmission condition.
Summary
Embodiments of the present application provide a training data set acquisition method, a wireless transmission method, an apparatus and a communication device, which can solve problems such as the insufficient generalization ability of neural networks in existing wireless transmission.
In a first aspect, a training data set acquisition method is provided, the method including:
determining the data volume of training data under each transmission condition based on the contribution of each transmission condition to a neural network optimization objective;
acquiring the training data under each transmission condition based on the data volume of the training data under each transmission condition, to form a training data set for training the neural network;
wherein the contribution of a transmission condition to the neural network optimization objective represents the degree of influence of the transmission condition on the value of the neural network optimization objective.
In a second aspect, a training data set acquisition apparatus is provided, including:
a first processing module, configured to determine the data volume of training data under each transmission condition based on the contribution of each transmission condition to a neural network optimization objective;
a second processing module, configured to acquire the training data under each transmission condition based on the data volume of the training data under each transmission condition, to form a training data set for training the neural network;
wherein the contribution of a transmission condition to the neural network optimization objective represents the degree of influence of the transmission condition on the value of the neural network optimization objective.
In a third aspect, a wireless transmission method is provided, the method including:
performing wireless transmission operations based on a neural network model, to realize the wireless transmission;
wherein the neural network model is obtained by training in advance using a training data set, and the training data set is acquired based on the training data set acquisition method described in the first aspect.
In a fourth aspect, a wireless transmission apparatus is provided, including:
a third processing module, configured to perform wireless transmission operations based on a neural network model, to realize the wireless transmission;
wherein the neural network model is obtained by training in advance using a training data set, and the training data set is acquired based on the training data set acquisition method described in the first aspect.
In a fifth aspect, a communication device is provided, the communication device including a processor, a memory, and a program or instruction stored in the memory and operable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method described in the first aspect or the steps of the method described in the third aspect.
In a sixth aspect, a communication device is provided, including a processor and a communication interface, where the processor is configured to determine the data volume of training data under each transmission condition based on the contribution of each transmission condition to a neural network optimization objective, and to acquire the training data under each transmission condition based on the data volume of the training data under each transmission condition, to form a training data set for training the neural network; wherein the contribution of a transmission condition to the neural network optimization objective represents the degree of influence of the transmission condition on the value of the neural network optimization objective.
In a seventh aspect, a communication device is provided, including a processor and a communication interface, where the processor is configured to perform wireless transmission operations based on a neural network model, to realize the wireless transmission; wherein the neural network model is obtained by training in advance using a training data set, and the training data set is acquired based on the training data set acquisition method described in the first aspect.
In an eighth aspect, a readable storage medium is provided, on which a program or instruction is stored, where the program or instruction, when executed by a processor, implements the steps of the method described in the first aspect or the steps of the method described in the third aspect.
In a ninth aspect, a chip is provided, the chip including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method described in the first aspect or the method described in the third aspect.
In a tenth aspect, a computer program/program product is provided, stored in a non-transitory storage medium, where the program/program product is executed by at least one processor to implement the steps of the training data set acquisition method described in the first aspect or the steps of the wireless transmission method described in the third aspect.
In the embodiments of the present application, when a training data set is constructed in an artificial-intelligence-based communication system, data under multiple transmission conditions are selected in different proportions according to the contribution of data with different transmission conditions to the neural network optimization objective (also called the objective function or loss function), to construct a mixed training data set, which can effectively improve the generalization ability of the neural network.
Brief description of the drawings
Fig. 1 is a structural diagram of a wireless communication system to which the embodiments of the present application are applicable;
Fig. 2 is a schematic flow diagram of the training data set acquisition method provided by the embodiments of the present application;
Fig. 3 is a schematic structural diagram of the training data set acquisition apparatus provided by the embodiments of the present application;
Fig. 4 is a schematic flow diagram of the wireless transmission method provided by the embodiments of the present application;
Fig. 5 is a schematic flow diagram of constructing a neural network model in the wireless transmission method provided according to the embodiments of the present application;
Fig. 6 is a schematic flow diagram of determining the proportions of training data in the training data set acquisition method provided according to the embodiments of the present application;
Fig. 7 is a schematic structural diagram of the neural network used for DMRS channel estimation in the wireless transmission method provided according to the embodiments of the present application;
Fig. 8 is a schematic structural diagram of the wireless transmission apparatus provided by the embodiments of the present application;
Fig. 9 is a schematic structural diagram of the communication device provided by the embodiments of the present application;
Fig. 10 is a schematic diagram of the hardware structure of a terminal implementing the embodiments of the present application;
Fig. 11 is a schematic diagram of the hardware structure of an access network device implementing the embodiments of the present application;
Fig. 12 is a schematic diagram of the hardware structure of a core network device implementing the embodiments of the present application.
Detailed description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in the present application fall within the scope of protection of the present application.
The terms "first", "second" and the like in the specification and claims of the present application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that the terms used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present application can be implemented in orders other than those illustrated or described here; the objects distinguished by "first" and "second" are usually of one type, and the number of objects is not limited, for example, there may be one or more first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
It is worth pointing out that the techniques described in the embodiments of the present application are not limited to Long Term Evolution (LTE)/LTE-Advanced (LTE-A) systems, and can also be used in other wireless communication systems, such as Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal Frequency Division Multiple Access (OFDMA), Single-carrier Frequency-Division Multiple Access (SC-FDMA) and other systems. The terms "system" and "network" in the embodiments of the present application are often used interchangeably, and the described techniques can be used both for the systems and radio technologies mentioned above and for other systems and radio technologies. The following description describes a New Radio (NR) system for illustrative purposes and uses NR terminology in most of the description below, but these techniques can also be applied to applications other than NR systems, such as 6th Generation (6G) communication systems.
Fig. 1 shows a structural diagram of a wireless communication system to which the embodiments of the present application are applicable. The wireless communication system includes a terminal 101 and a network-side device 102. The terminal 101 may also be called a terminal device or user equipment (UE), and may be a terminal-side device such as a mobile phone, a tablet personal computer, a laptop computer (also called a notebook computer), a personal digital assistant (PDA), a palmtop computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (MID), a wearable device, a vehicle-mounted device (VUE) or a pedestrian terminal (PUE); wearable devices include smart watches, wristbands, earphones, glasses, etc. It should be noted that the embodiments of the present application do not limit the specific type of the terminal 101. The network-side device 102 may be an access network device 1021, a core network device 1022 or a data network (DN) device 1023. The access network device 1021 may also be called a radio access network device or radio access network (RAN); it may be a base station or a RAN-side node responsible for neural network training, etc. The base station may be called a Node B, an evolved Node B, an access point, a base transceiver station (BTS), a radio base station, a radio transceiver, a basic service set (BSS), an extended service set (ESS), a B node, an evolved B node (eNB), a home B node, a home evolved B node, a WLAN access point, a WiFi node, a transmitting receiving point (TRP) or some other suitable term in the field; as long as the same technical effect is achieved, the base station is not limited to a specific technical term. It should be noted that in the embodiments of the present application, only the base station in the NR system is taken as an example, without limiting the specific type of the base station. The core network device 1022 may also be called the core network (CN) or the 5G core (5GC) network, and may include but is not limited to at least one of the following: a core network node, a core network function, a mobility management entity (MME), an access and mobility management function (AMF), a session management function (SMF), a user plane function (UPF), a policy control function (PCF), a policy and charging rules function (PCRF), an edge application server discovery function (EASDF), an application function (AF), etc. It should be noted that in the embodiments of the present application, only the core network device in the 5G system is taken as an example, without being limited thereto. The data network device 1023 may include but is not limited to at least one of the following: a network data analytics function (NWDAF), unified data management (UDM), a unified data repository (UDR) and an unstructured data storage function (UDSF). It should be noted that in the embodiments of the present application, only the data network device in the 5G system is taken as an example, without being limited thereto.
The training data set acquisition method, wireless transmission method, apparatus and communication device provided by the embodiments of the present application are described in detail below through some embodiments and their application scenarios with reference to the drawings.
Fig. 2 is a schematic flow diagram of the training data set acquisition method provided by the embodiments of the present application. The method may be executed by a terminal and/or a network-side device; the terminal may specifically be the terminal 101 shown in Fig. 1, and the network-side device may specifically be the network-side device 102 shown in Fig. 1. As shown in Fig. 2, the method includes:
Step 201: determining the data volume of training data under each transmission condition based on the contribution of each transmission condition to the neural network optimization objective.
Here, the contribution of a transmission condition to the neural network optimization objective represents the degree of influence of the transmission condition on the value of the neural network optimization objective.
It can be understood that, for the transmission conditions of the targeted wireless transmission environment, the embodiments of the present application may determine in advance the contribution of each transmission condition to the neural network optimization objective by testing the degree of influence of each transmission condition on the value of the optimization objective, that is, by testing the degree of influence of the value of the transmission condition on the magnitude of the optimization objective. Generally, the higher the degree of influence, the larger the contribution value of the corresponding transmission condition, and vice versa.
For example, when the optimization objective is expressed as a function of a transmission condition, if the function is increasing in the transmission condition (e.g., throughput is an increasing function of SNR), the transmission conditions that make the optimization objective take large values have high contributions; if the function is decreasing in the transmission condition (e.g., NMSE is a decreasing function of SNR), the transmission conditions that make the optimization objective take small values have high contributions.
Optionally, for any transmission condition, the contribution of the transmission condition to the optimization objective may be determined based on the optimal value that the optimization objective can reach under that transmission condition. That is, the contribution of a given transmission condition to the optimization objective can be measured by the magnitude of the reachable optimal value of the optimization objective under that transmission condition: the larger the reachable optimal value, the greater the influence of the given transmission condition on the optimization objective, and the greater the corresponding contribution.
On the basis of the determined contributions of the transmission conditions, the embodiments of the present application may adjust the data volume of the training data corresponding to each transmission condition based on the magnitude of its contribution value, that is, associate the data volume of the training data under each transmission condition with the contribution of that transmission condition to the optimization objective. Here, the data volume is the reference data volume (also called the set data volume) of training data to be prepared for each transmission condition; when the training data set is actually acquired, this data volume is referred to when preparing the training data for each transmission condition.
It can be understood that, to give the neural network better generalization over the various transmission conditions, the data volume of transmission conditions with large contributions can be reduced to lower their influence, while the data volume of transmission conditions with small contributions can be increased to raise their influence; that is, the influence of each transmission condition on the optimization objective is balanced through the data volume. Here, the data volume may refer either to the absolute value of the data volume or to the relative proportion of the data volume.
The transmission conditions are parameters of the transmission medium, transmitted signals, transmission environment and the like involved in the actual wireless transmission environment. Optionally, the types of the transmission conditions include at least one of the following:
signal-to-noise ratio or signal-to-interference-plus-noise ratio;
reference signal receiving power (RSRP);
signal strength;
interference strength;
terminal moving speed;
channel parameters;
distance between the terminal and the base station;
cell size;
carrier frequency;
modulation order or modulation and coding scheme;
cell type;
inter-site distance;
weather and environmental factors;
antenna configuration information of the transmitting or receiving end;
terminal capability or type;
base station capability or type.
It can be understood that the types of transmission conditions involved in the embodiments of the present application may include but are not limited to one or a combination of several of the transmission condition types listed above.
Here, the interference strength may, for example, represent the strength of inter-cell co-channel interference, or the magnitude of other interference; channel parameters include, for example, the number of paths (or LOS or NLOS scenario), delay (or maximum delay), Doppler (or maximum Doppler), the range of angles of arrival (horizontal and vertical), the range of angles of departure (horizontal and vertical) or channel correlation coefficients; cell types include, for example, indoor cells, outdoor cells, macro cells, micro cells or pico cells; inter-site distance can, for example, be divided into within 200 meters, 200-500 meters or above 500 meters; weather and environmental factors may, for example, be information such as the temperature and/or humidity of the network environment where the training data are located; antenna configuration information of the transmitting or receiving end may, for example, be the number of antennas and/or the antenna polarization mode; UE capability/type may, for example, be Redcap UE and/or normal UE.
Step 202: acquiring the training data under each transmission condition based on the data volume of the training data under each transmission condition, to form a training data set for training the neural network.
It can be understood that, on the basis of acquiring the data volume of training data under each transmission condition, that is, on the basis of acquiring the reference data volume, the embodiments of the present application acquire the training data under each transmission condition using the reference data volume corresponding to each transmission condition as a reference or setting, so that the finally acquired data volume of training data for each transmission condition matches that reference or setting. Finally, the acquired training data under the transmission conditions are mixed non-uniformly to obtain a data set, namely the training data set, which can be used to train the above neural network (or neural network model) in the wireless transmission environment.
Optionally, acquiring the training data under each transmission condition to form the training data set for training the neural network includes: collecting and calibrating the data under each transmission condition based on the data volume of training data under each transmission condition, to constitute the training data set under each transmission condition; or collecting a set quantity of data under each transmission condition and, based on the data volume of training data under each transmission condition, selecting part of the data from the set quantity of data and calibrating it, or supplementing the set quantity of data and calibrating it, to constitute the training data set under each transmission condition.
It can be understood that, when constructing the training data set according to the determined data volume of training data under each transmission condition, the embodiments of the present application may first calculate the data volume required for each transmission condition and then acquire the data of each transmission condition according to that data volume; or may first acquire a large amount of data for each transmission condition, that is, acquire a set quantity of data, then calculate the data volume required for each transmission condition, and afterwards select from or supplement the large amount of data acquired in advance.
For the latter case, suppose the total data volume acquired in advance under the k-th transmission condition is M_k, while the required data volume determined by calculation is N_k. If M_k ≥ N_k, N_k data need to be randomly selected from these M_k data and put into the training data set; if M_k < N_k, another N_k - M_k data need to be acquired to make up N_k data before being put into the training data set. The required data volume here is the data volume of training data under each transmission condition determined in the previous step; a sketch of this select-or-supplement rule follows.
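The select-or-supplement rule for M_k and N_k can be sketched as follows; the acquire_more callback is an illustrative assumption standing in for further data collection.

```python
import random

def fill_condition_pool(pool_k, n_k, acquire_more):
    """Build the k-th condition's training subset from M_k pre-collected
    samples: subsample when M_k >= N_k, otherwise acquire N_k - M_k more.

    pool_k       : list of pre-collected samples (M_k items)
    n_k          : required data volume N_k
    acquire_more : callable returning `count` freshly collected samples
    """
    if len(pool_k) >= n_k:
        return random.sample(pool_k, n_k)            # random selection of N_k items
    return pool_k + acquire_more(n_k - len(pool_k))  # supplement up to N_k items
```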
After the data under each transmission condition are acquired, these data need to be calibrated, that is, a label is added to the data under each transmission condition, the added label being the ground truth of the transmission environment corresponding to the data. For example, in the DMRS channel estimation scenario, the label added to a DMRS signal (the data under a transmission condition) is the channel ground truth corresponding to that DMRS signal.
It should be noted that the embodiments of the present application can be applied in any scenario where machine learning can replace the function of one or more modules in an existing wireless transmission network; that is, whenever machine learning is used to train a neural network, the training data set acquisition method of the embodiments of the present application can be used to construct the training data set. Application scenarios include, for example, pilot design, channel estimation, signal detection, user pairing, HARQ and positioning at the physical layer, resource allocation, handover and mobility management at higher layers, and scheduling or slicing at the network layer; the embodiments of the present application do not limit the specific wireless transmission application scenario.
When constructing a training data set in an artificial-intelligence-based communication system, the embodiments of the present application select data under multiple transmission conditions in different proportions according to the contribution of data with different transmission conditions to the neural network optimization objective (also called the objective function or loss function), to construct a mixed training data set, which can effectively improve the generalization ability of the neural network.
Optionally, determining the data volume of training data under each transmission condition based on the contribution of each transmission condition to the neural network optimization objective includes: sorting the contributions of the transmission conditions; and on the basis of equal-proportion mixing, performing at least one of the operations of reducing the data volume of training data under the transmission conditions corresponding to larger contributions in the sorting and increasing the data volume of training data under the transmission conditions corresponding to smaller contributions in the sorting.
It can be understood that, when determining the data volume of training data under each transmission condition, the embodiments of the present application may first sort the contributions of the data of the different transmission conditions, under equal-proportion mixing, to the neural network optimization objective (or objective function, loss function). Afterwards, when constructing the mixed training data set, the smaller and larger contributions can be determined based on the sorting, so as to further determine the transmission conditions with lower contributions and those with higher contributions, and, on the premise of ensuring sufficient data for all transmission conditions, the data volume of transmission conditions with lower contributions can be increased and/or the data volume of transmission conditions with higher contributions can be reduced.
Here, a threshold (i.e., a preset threshold) can be set such that the ratio of the data volume of any one transmission condition to the total data volume must not fall below the threshold, to satisfy the above premise of "ensuring sufficient data for all transmission conditions".
By sorting the contributions of the transmission conditions and adjusting the data volume of the corresponding transmission conditions according to the sorting, the embodiments of the present application can determine the adjustment strategy for the data volume of each transmission condition (including whether to increase or decrease the data volume, and the magnitude of the increase or decrease) more clearly and accurately, leading to higher efficiency and more accurate results.
Optionally, performing at least one of the operations of reducing the data volume of training data under the transmission conditions corresponding to larger contributions in the sorting and increasing the data volume of training data under the transmission conditions corresponding to smaller contributions in the sorting includes performing these operations according to the following rules: the greater the value of the larger contribution, the greater the magnitude of the reduction; the smaller the value of the smaller contribution, the greater the magnitude of the increase.
It can be understood that, when increasing the data volume of transmission conditions with lower contributions and reducing the data volume of transmission conditions with higher contributions, the goal of the embodiments of the present application is that the data volume of a transmission condition's training data gradually decreases as its contribution gradually increases. Therefore, the lower a transmission condition's contribution, the more data is added for it; the higher a transmission condition's contribution, the more data is removed for it.
By increasing or decreasing the data volume of the training data of the corresponding transmission conditions in proportion to the contribution values, the embodiments of the present application make the data volume of a transmission condition's training data decrease gradually as its contribution increases, thereby better balancing the influence of each transmission condition on the final neural network, which is more conducive to improving the generalization ability of the neural network.
Optionally, when the sorting result is from small to large, the data volume of training data under the transmission conditions decreases in the direction of the sorting; when the sorting result is from large to small, the data volume of training data under the transmission conditions increases in the direction of the sorting.
It can be understood that, when determining the data volume of training data for each transmission condition based on the above embodiments, if the contributions are sorted from small to large, the proportion of the corresponding transmission conditions' data to the total data may decrease in any manner, such as linear decrease, arithmetic decrease, geometric decrease, exponential-function decrease or power-function decrease. Conversely, if the contributions are sorted from large to small, the proportion of the corresponding transmission conditions' data to the total data may increase in any manner, such as linear increase, arithmetic increase, geometric increase, exponential-function increase or power-function increase.
Optionally, performing at least one of the operations of reducing the data volume of training data under the transmission conditions corresponding to larger contributions in the sorting and increasing the data volume of training data under the transmission conditions corresponding to smaller contributions in the sorting includes: determining a reference contribution according to the sorting, and comparing the contribution of each transmission condition with the reference contribution;
according to the comparison result, performing at least one of the following operations:
if the contribution of the transmission condition is greater than the reference contribution, determining the contribution of the transmission condition to be the larger contribution, and reducing the data volume of training data under the transmission condition;
if the contribution of the transmission condition is not greater than the reference contribution, determining the contribution of the transmission condition to be the smaller contribution, and increasing the data volume of training data under the transmission condition.
It can be understood that, when adjusting the data volume of training data under each transmission condition according to the sorting, the embodiments of the present application may first determine from the sorting an intermediate comparison reference of contribution, called the reference contribution. Optionally, the reference contribution is the median of the sorting, or the contribution at a set position in the sorting, or the mean of the contributions in the sorting, or the contribution closest to that mean in the sorting. The mean here may be the arithmetic mean, geometric mean, harmonic mean, weighted mean, quadratic mean, exponential mean, etc.
Then, the contribution of each transmission condition is compared with the reference contribution in turn. If the contribution of the i-th transmission condition is greater than the reference contribution, the i-th transmission condition is determined to have the larger contribution described in the above embodiments, and the data volume of the i-th transmission condition is reduced; otherwise, if the contribution of the i-th transmission condition is less than the median contribution, the i-th transmission condition is determined to have the smaller contribution described in the above embodiments, and the data volume of the i-th transmission condition is increased.
By determining the intermediate comparison reference of contribution, the embodiments of the present application only need to compare the other contributions with this reference and can then decide, according to the comparison results, whether to increase or decrease the data volume of the corresponding transmission conditions; the algorithm is simple and the computation load is small.
Optionally, determining the data volume of training data under each transmission condition based on the contribution of each transmission condition to the neural network optimization objective includes: determining a weighting coefficient corresponding to each transmission condition based on the probability density of each transmission condition in actual applications; and determining the data volume of training data under each transmission condition based on the contribution of each transmission condition to the optimization objective, in combination with the weighting coefficients.
It can be understood that, when determining the proportion of the total data volume taken by data of different transmission conditions, a weighting term can be designed based on the probability density of the different transmission conditions in practice, increasing the data volume of conditions with high probability density and reducing the data volume of conditions with low probability density. For example, suppose the probability density of the k-th SNR is p_k; then the corresponding weighting term is f(p_k), where f(p_k) is an increasing function of p_k. The data volume of the k-th SNR updated with this weighting term is f(p_k)·N_k. Here, the probability density reflects the occurrence probability of the transmission conditions in different environments; the transmission conditions do not occur with equal probability in different environments, some occurring with slightly higher probability and some slightly lower.
Optionally, the weighting coefficient is an increasing function of the probability density. That is, the relationship between the weighting term and the probability density can be any increasing functional relationship; it must be guaranteed that high probability densities get large weighting terms and low probability densities get small ones.
Designing the weighting term according to the actual probability density of the transmission conditions enables the embodiments of the present application to better adapt to the actual environment.
Optionally, the method further includes: sending the training data set to a target device, where the target device is configured to train the neural network based on the training data set.
It can be understood that the training data set acquired by the embodiments of the present application can be used to train a neural network, and the training data set acquisition method of the embodiments of the present application can be applied in data transmission scenarios where the data collection end and the neural network training end are not the same execution end. The embodiments of the present application complete data collection and calibration according to the determined mixed data set proportions, construct the training data set, and feed it back to another device that needs to execute the neural network training procedure, namely the target device. The target device is a second device different from the current device; it may be a terminal or a network-side device, and uses the obtained training data set to complete the training of the neural network model.
By sending the acquired training data set to a second device outside the current device, the embodiments of the present application enable data sharing and joint training among multiple devices, which can effectively reduce the computation load of a single device and effectively improve computational efficiency.
Optionally, sending the training data set to the target device includes: sending the training data set directly to the target device, or sending the training data set to the target device after a set transformation, the set transformation including at least one of specific quantization, specific compression, and processing by a pre-agreed or configured neural network.
It can be understood that, when the training data set is sent to another device, the sending manner may be direct or indirect. Indirect sending means transforming the training data in the training data set before feedback; for example, the training data to be sent may be processed with a specific quantization method, a specific compression method, or a pre-agreed or configured neural network before being sent.
To further illustrate the technical solutions of the embodiments of the present application, examples are given below, without limiting the scope of protection claimed by the present application.
Take as an example an application scenario where the neural network optimization objective (or objective function, loss function) is a metric to be minimized, such as the mean square error (MSE) or the normalized mean square error (NMSE), and the transmission condition is the signal-to-noise ratio (SNR). Consider that data of K SNRs are to be mixed; denote the total data volume as N_all, the k-th SNR as SNR_k, and the data volume of the k-th SNR as N_k.
First, the data of the K SNRs are sorted:
Assuming the data are mixed in equal proportions (i.e., N_k/N_all = 1/K), denote the contribution of the k-th SNR's data to the above metric to be minimized as C_k. Sort the SNRs in order of C_k from small to large (or from large to small).
Then, the mixing proportions are determined:
(1) For the k-th SNR, the larger C_k is, the smaller the adjusted value of N_k. In order of C_k from small to large, the corresponding values of N_k go from large to small, and the decreasing rule may be any decreasing rule; or, in order of C_k from large to small, the corresponding values of N_k go from small to large, and the increasing rule may be any increasing rule.
(2) After adjusting the data volumes, confirm that the proportion of any SNR's data to the total data is not less than the threshold γ.
Taking the optimized metrics MSE and NMSE as an example, generally speaking, low-SNR data contribute more to the MSE or NMSE, and high-SNR data contribute less. Therefore, in the finally determined mixing proportions, low-SNR data take the lowest share of the total data volume, and as the SNR grows, the corresponding data volume increases.
Optionally, when determining the proportion of the total data volume taken by data of different transmission conditions, a weighting term can be designed based on the probability density of the different transmission conditions in practice. Suppose the probability density of the k-th SNR is p_k; then the corresponding weighting term is f(p_k), where f(p_k) is an increasing function of p_k. The data volume of the k-th SNR updated with this weighting term is f(p_k)·N_k. A sketch combining these steps follows.
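Putting the steps of this example together, the following sketch computes per-SNR data volumes from contributions C_k, densities p_k, a total budget N_all and the threshold γ; the concrete decreasing rule and the choice f = sqrt are illustrative assumptions.

```python
import numpy as np

def snr_mixing_volumes(c, p, n_all, gamma=0.05):
    """Worked example: K SNR levels, contributions C_k to a minimized metric
    (MSE/NMSE), real-world densities p_k, total budget N_all, floor gamma.
    """
    c, p = np.asarray(c, float), np.asarray(p, float)
    w = c.max() - c + c.min()          # larger C_k -> smaller weight (one possible decreasing rule)
    share = w / w.sum()
    share = np.maximum(share, gamma)   # step (2): no SNR below the threshold gamma
    share /= share.sum()
    share *= np.sqrt(p)                # optional weighting term f(p_k), here f = sqrt
    share /= share.sum()
    return np.round(share * n_all).astype(int)

# four SNR levels: low SNR contributes most to the NMSE, so it gets the least data
print(snr_mixing_volumes(c=[0.8, 0.5, 0.3, 0.1], p=[0.25, 0.25, 0.25, 0.25], n_all=10000))
```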
As another example, take an application scenario where the neural network optimization objective (or objective function, loss function) is a metric to be maximized, such as the signal-to-interference-plus-noise ratio (SINR), spectral efficiency or throughput, and the transmission condition is the signal-to-noise ratio (SNR). Consider that data of K SNRs are to be mixed; denote the total data volume as N_all, the k-th SNR as SNR_k, and the data volume of the k-th SNR as N_k.
First, the data of the K SNRs are sorted:
Assuming the data are mixed in equal proportions (i.e., N_k/N_all = 1/K), denote the contribution of the k-th SNR's data to the above metric to be maximized as C_k. Sort the SNRs in order of C_k from small to large (or from large to small).
Then, the mixing proportions are determined:
(1) For the k-th SNR, the larger C_k is, the smaller the adjusted value of N_k. In order of C_k from small to large, the corresponding values of N_k go from large to small, and the decreasing rule may be any decreasing rule; or, in order of C_k from large to small, the corresponding values of N_k go from small to large, and the increasing rule may be any increasing rule.
(2) After adjusting the data volumes, confirm that the proportion of any SNR's data to the total data is not less than the threshold γ.
Taking the optimized metrics SINR, spectral efficiency or throughput as an example, generally speaking, low-SNR data contribute less to SINR, spectral efficiency and throughput, and high-SNR data contribute more. Therefore, low-SNR data take the highest share of the total data volume, and as the SNR increases, the corresponding data volume decreases.
Optionally, when determining the proportion of the total data volume taken by data of different transmission conditions, a weighting term can be designed based on the probability density of the different transmission conditions in practice. Suppose the probability density of the k-th SNR is p_k; then the corresponding weighting term is f(p_k), where f(p_k) is an increasing function of p_k. The data volume of the k-th SNR updated with this weighting term is f(p_k)·N_k.
It should be noted that the execution subject of the training data set acquisition method provided by the embodiments of the present application may be a training data set acquisition apparatus, or a control module in the training data set acquisition apparatus for executing the training data set acquisition method. In the embodiments of the present application, the training data set acquisition apparatus provided by the embodiments of the present application is described by taking the training data set acquisition apparatus executing the training data set acquisition method as an example.
The structure of the training data set acquisition apparatus of the embodiments of the present application is shown in Fig. 3, which is a schematic structural diagram of the training data set acquisition apparatus provided by the embodiments of the present application. The apparatus can be used to realize the training data set acquisition in the above training data set acquisition method embodiments, and includes: a first processing module 301 and a second processing module 302, where:
the first processing module 301 is configured to determine the data volume of training data under each transmission condition based on the contribution of each transmission condition to the neural network optimization objective;
the second processing module 302 is configured to acquire the training data under each transmission condition based on the data volume of the training data under each transmission condition, to form a training data set for training the neural network;
wherein the contribution of a transmission condition to the neural network optimization objective represents the degree of influence of the transmission condition on the value of the neural network optimization objective.
可选地,所述第一处理模块,用于:
对各所述传输条件的贡献度进行排序;
在等比例混合的基础上,执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一。
可选地,所述传输条件的类型包括如下至少之一:
信噪比或信干噪比;
参考信号接收功率;
信号强度;
干扰强度;
终端移动速度;
信道参数;
终端离基站的距离;
小区大小;
载频;
调制阶数或调制编码策略;
小区类型;
站间距;
天气和环境因素;
发端或收端的天线配置信息;
终端能力或类型;
基站能力或类型。
可选地,所述第二处理模块,用于:
基于各所述传输条件下训练数据的数据量,收集各所述传输条件下的数据并标定,构成各所述传输条件下的训练数据集;
或者,
收集各所述传输条件下设定数量的数据,并基于各所述传输条件下训练数据的数据量,由所述设定数量的数据中选取部分数据并标定,或对所述设定数量的数据进行补足并标定,构成各所述传输条件下的训练数据集。
可选地,所述第一处理模块在用于所述执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一时,用于:
按照如下规则,执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一,所述如下规则,包括:
所述较大贡献度的值越大,所述减少的幅度越大;所述较小贡献度的值越小,所述增加的幅度越大。
可选地,在所述排序的结果为由小到大的情况下,所述传输条件下的训练数据的数据量按所述排序的方向递减,在所述排序的结果为由大到小的情况下,所述传输条件下的训练数据的数据量按所述排序的方向递增。
可选地,所述第一处理模块在用于所述执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一时,用于:
根据所述排序,确定参照贡献度,并比较所述传输条件的贡献度与所述参照贡献度;
根据比较结果,执行如下操作中至少之一,所述如下操作包括:
若所述传输条件的贡献度大于所述参照贡献度,则确定所述传输条件的贡献度为所述较大贡献度,并减少所述传输条件下训练数据的数据量;
若所述传输条件的贡献度不大于所述参照贡献度,则确定所述传输条件的贡献度为所述较小贡献度,并增加所述传输条件下训练数据的数据量。
可选地,所述参照贡献度为所述排序的中位数,或所述排序中设定位置的贡献度,或所述排序中各贡献度的平均数,或所述排序中与所述平均数最接近的贡献度。
可选地,所述第一处理模块,还用于:
基于各所述传输条件在实际应用中的概率密度,确定各所述传输条件对应的加权系数;
基于各所述传输条件对所述优化目标的贡献度,结合所述加权系数,确定各所述传输条件下训练数据的数据量。
可选地,所述加权系数与所述概率密度呈函数递增关系。
可选地,所述装置还包括:
发送模块,用于将所述训练数据集发送至目标设备,所述目标设备用于基于所述训练数据集,训练所述神经网络。
可选地,所述发送模块,用于:
直接将所述训练数据集发送至所述目标设备,或者,对所述训练数据集进行设定变换后发送至所述目标设备,所述设定变换包括特定的量化、特定的压缩以及按照预先约定或配置的神经网络处理中的至少一种。
本申请实施例中的训练数据集获取装置可以是装置,具有操作系统的装置或电子设备,也可以是终端或网络侧设备中的部件、集成电路、或芯片。该装置或电子设备可以是移动终端,也可以为非移动终端,也可以包括但不限于上述所列举的网络侧设备102的类型。示例性的,移动终端可以包括但不限于上述所列举的终端101的类型,非移动终端可以为服务器、网络附属存储器(Network Attached Storage,NAS)、个人计算机(personal computer,PC)、电视机(television,TV)、柜员机或者自助机等,本申请实施例不作具体限定。
本申请实施例提供的训练数据集获取装置能够实现图2的方法实施例实现的各个过程,并达到相同的技术效果,为避免重复,这里不再赘述。
本申请实施例还提供一种无线传输方法,该方法可由终端和/或网络侧设备执行,该终端具体可以是图1中示出的终端101,该网络侧设备具体可以是图1中示出的网络侧设备102。如图4所示,为本申请实施例提供的无线传输方法的流程示意图,该方法包括:
步骤401,基于神经网络模型,进行无线传输运算,实现所述无线传输。
其中,所述神经网络模型为预先利用训练数据集进行训练获取的,所述训练数据集为基于如上述各实施例所述的训练数据集获取方法获取的。
可以理解为,本申请实施例事先可以根据上述各训练数据集获取方法实施例获取训练数据集(或者也可以获取各传输条件的数据占比),并利用该训练数据集训练初始化搭建的神经网络,得到神经网络模型。 之后,将该神经网络模型应用到本申请实施例的无线传输运算过程中,并通过运算最终实现本申请实施例的无线传输。
其中,本申请实施例的无线传输应用环境可以是任何能够用机器学习替换现有无线传输网络中某一个或多个模块的功能的无线传输环境,也即在无线传输中利用机器学习训练神经网络时,都可以用本申请上述训练数据集获取方法实施例构造训练数据集,并利用该训练数据集训练好神经网络模型后用于无线传输。无线传输应用环境例如,物理层的导频设计、信道估计、信号检测、用户配对、HARQ和定位等,高层的资源分配、切换和移动性管理等,以及网络层的调度或切片等,本申请实施例对具体的无线传输应用场景并不作限制。
本申请实施例根据不同传输条件的数据对神经网络优化目标(或称目标函数或损失函数)的贡献度,以不同比例选取多种传输条件下的数据,构造非均匀混合训练数据集,并基于该非均匀混合的训练数据集,训练出一个共性的神经网络,用于实际不同传输条件下的无线传输,能够使训练出的神经网络在每个传输条件下都能达到较高的性能。
可选地,在所述基于神经网络模型,进行无线传输运算之前,所述无线传输方法还包括:
基于所述训练数据集,利用如下训练方式中任一,训练获取所述神经网络模型,所述如下训练方式包括:
单个终端集中式训练;
单个网络侧设备集中式训练;
多个终端联合分布式训练;
多个网络侧设备联合分布式训练;
单个网络侧设备与多个终端联合分布式训练;
多个网络侧设备与多个终端联合分布式训练;
多个网络侧设备与单个终端联合分布式训练。
可以理解为,在利用神经网络模型进行无线传输运算之前,先要利用训练数据集训练获取该神经网络模型。具体的,神经网络的训练阶段可以是离线进行的,执行主体可以是网络侧设备,或终端侧设备,或者网络侧设备-终端侧设备联合。可选地,所述网络侧设备为接入网设备、核心网设备或数据网络设备。也就是说,本申请实施例的网络侧设备可以包括接入网中的网络侧设备、核心网设备和数据网络设备(data network,DN)中的一种或多种。接入网中的网络侧设备可以是基站,或RAN侧负责AI训练的节点,且不限于图1中列举的接入网设备1021的类型。核心网设备不限于图1中列举的核心网设备1022的类型,数据网络设备可以是NWDAF、UDM、UDR或UDSF等。
其中,当执行主体是网络侧设备时,可以是基于单个网络侧设备的集中式训练,也可以是基于多个网络侧设备的分布式训练(如联邦学习)。当执行主体是终端侧设备时,可以是基于单个终端的集中式训练,也可以是基于多个终端的分布式训练(如联邦学习)。当执行主体是网络侧设备-终端侧设备联合时,可以是单个网络侧设备联合多个终端设备,也可以是单个终端设备联合多个网络侧设备,也可以是多个网络侧设备联合多个终端侧设备。本申请对训练过程的执行主体并不作具体限定。
可选地,所述无线传输方法还包括:在所述分布式训练的过程中,将各所述传输条件下训练数据的占比在执行所述分布式训练的各主体间共享。
可以理解为,本申请实施例在利用多个网络侧设备或多个终端设备或网络侧设备与终端设备联合训练神经网络模型时,各执行设备间共享一套传输条件的训练数据的占比。
本申请实施例通过在各执行主体间共享训练数据的占比,可以使各执行主体无需共享自身数据即可实现神经网络模型的训练,能够解决单个设备计算能力或训练能力不足或者设备之间无法共享数据(涉及隐私问题)或传输大量数据的代价非常大的问题。
可选地,在所述分布式训练为多个网络侧设备联合分布式训练的情况下,由所述多个网络侧设备中任一网络侧设备计算确定各所述传输条件下训练数据的占比,并通过第一设定类型接口信令,将所述占比发送至所述多个网络侧设备中除所述任一网络侧设备外的其它网络侧设备。
可以理解为,在多个网络侧设备联合分布式训练神经网络模型时,所有网络侧设备共用同一种数据占比。且可由这多个网络侧设备中的一个网络侧设备计算获取各传输条件下训练数据的占比,并将该占比通过网络侧接口信令分享给这多个网络侧设备中其它的网络侧设备。该网络侧接口信令是事先设定的第一设定类型。可选地,所述第一设定类型接口信令包括Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令或N22接口信令。其中,基站之间可以利用Xn接口,通过Xn接口信令实现数据占比共享,核心网设备之间可以利用核心网设备间的N1、N2、N3、N4、N5、N6、N7、N8、N9、N10、N11、N12、N13、N14、N15或N22接口,通过对应类型接口信令实现数据占比共享。
例如,可以由某一个网络侧设备计算确定数据占比信息,再通过Xn接口信令(包括但不限于)将该数据占比信息共享给其他网络侧设备。
可选地,在所述分布式训练为多个终端联合分布式训练的情况下,由所述多个终端中任一终端计算确定各所述传输条件下训练数据的占比,并通过第二设定类型接口信令,将所述占比发送至所述多个终端中除所述任一终端外的其它终端。
可以理解为,在多个终端联合分布式训练神经网络模型时,所有终端共用同一种数据占比。且可由这多个终端中的一个终端计算获取各传输条件下训练数据的占比,并将该占比通过终端接口信令分享给这多个终端中其它的终端。该终端接口信令是事先设定的第二设定类型。可选地,所述第二设定类型接口信令包括PC5接口信令或sidelink接口信令。
例如,由某一个终端设备计算确定数据占比信息,再通过PC5接口信令(包括但不限于)将该数据占比信息共享给其他终端设备。
可选地,在所述分布式训练为网络侧设备与终端联合分布式训练的情况下,由所述网络侧设备与终端中任一网络侧设备或任一终端计算确定各所述传输条件下训练数据的占比,并通过第三设定类型信令,将所述占比发送至所述网络侧设备与终端中除所述任一网络侧设备或任一终端外的其它网络侧设备或终端。
可以理解为,在执行主体是网络侧设备-终端侧设备联合时,所有设备共用同一种数据占比。且可由联合的网络侧设备-终端侧设备中的一个终端(或网络侧设备)计算获取各传输条件下训练数据的占比,并将该占比通过设定类型接口信令分享给网络侧设备-终端侧设备联合中其它的设备。该设定类型接口信令是事先设定的第三设定类型接口信令。
可选地,所述第三设定类型信令包括RRC、PDCCH层1信令、PDSCH、MAC CE、SIB、Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令、N22接口信令、PUCCH层1信令、PUSCH、PRACH的MSG1、PRACH的MSG3、PRACH的MSG A、PC5接口信令或sidelink接口信令。
也就是说,参与训练的多个网络侧设备-终端侧设备间可通过RRC、PDCCH层1信令、PDSCH、MAC CE、SIB、Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令、N22接口信令、PUCCH层1信令、PUSCH、PRACH的MSG1、PRACH的MSG3、PRACH的MSG A、PC5接口信令或sidelink接口信令等信令(包括但不限于)实现数据占比信息的共享。
可选地,还包括:获取所述传输条件下的实时数据,并基于所述实时数据,在线调整训练完成的神经网络模型。
可以理解为,本申请实施例在根据上述各实施例基于离线收集的大量数据预训练网络,使其达到收敛的基础上,再用实时采集的传输环境在线数据对预训练的神经网络参数进行fine-tuning(也叫微调),使神经网络适配实际环境。可认为微调是用预训练的神经网络的参数作为初始化,进行的训练过程。微调阶段可以冻结部分层的参数,一般冻结靠近输入端的层,激活靠近输出端的层,这样可以保障网络仍然能收敛。微调阶段的数据量越少,建议冻结的层数越多,只微调靠近输出端的少量层。
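作为示意,下述基于PyTorch的代码草图给出了"冻结靠近输入端的层、仅微调靠近输出端的少量层"的一种可能做法(假设模型为nn.Sequential,可训练的尾部层数n_trainable_tail为示例参数):

```python
import torch.nn as nn

def freeze_for_finetune(model: nn.Sequential, n_trainable_tail: int = 2) -> None:
    """微调前冻结靠近输入端的层,仅保留靠近输出端的若干层可训练。"""
    layers = list(model.children())
    for layer in layers[:-n_trainable_tail]:   # 冻结靠近输入端的层
        for p in layer.parameters():
            p.requires_grad = False
    for layer in layers[-n_trainable_tail:]:   # 激活靠近输出端的层
        for p in layer.parameters():
            p.requires_grad = True

# 在线数据量越少,可将n_trainable_tail取得越小,即冻结更多层。
```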
可选地,将训练好的神经网络拿到实际无线环境中进行微调(fine tuning)时,可以沿用模型训练阶段不同传输条件的数据混合占比;也可以直接使用实际无线环境中产生的数据进行微调,不做数据占比的控制。
本申请实施例在完成神经网络模型离线训练的基础上,对完成训练的神经网络模型进行在线微调,可以使神经网络更能适配实际环境。
可选地,所述获取所述传输条件下的实时数据,并基于所述实时数据,在线调整训练完成的神经网络模型,包括:基于各所述传输条件下训练数据的占比,获取各所述传输条件下的实时数据;若所述传输条件中任一传输条件下的实时数据的占比高于所述任一传输条件下训练数据的占比,则在在线调整所述训练完成的神经网络模型的过程中,不将所述任一传输条件下的实时数据中超出所述任一传输条件下训练数据的占比的数据输入所述训练完成的神经网络模型。
可以理解为,根据上述实施例,将训练好的神经网络模型拿到实际无线环境中进行微调或fine tuning时,可以沿用模型训练阶段不同传输条件的数据混合占比。也即根据训练阶段各传输条件下训练数据的占比,确定在线微调阶段各传输条件下实时数据的占比或数据量,并据此获取各传输条件下对应数量的实时数据。当沿用训练阶段混合比例时,若来自实际无线环境的某一传输条件的数据占比超出模型训练阶段该传输条件的占比,则将超出的部分数据不输入网络进行微调。
本申请实施例在沿用训练阶段的数据占比时,不将超过该占比的传输条件的数据输入神经网络模型进行微调,能够避免超过占比的数据量的非均衡影响。
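例如,下述Python代码草图示意了按训练阶段占比截取在线实时数据的一种可能实现(其中的数据结构与配额计算方式均为示例假设):

```python
def cap_online_data(online_data, train_ratios, total_budget):
    """超出训练阶段占比的实时数据不输入网络进行微调。
    online_data: {传输条件: 样本列表};train_ratios: {传输条件: 训练阶段占比}。"""
    kept = {}
    for cond, samples in online_data.items():
        quota = int(train_ratios[cond] * total_budget)  # 该传输条件允许的最大样本数
        kept[cond] = samples[:quota]                    # 超出占比的部分被丢弃
    return kept
```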
可选地,所述获取各所述传输条件下的实时数据,包括:在线采集网络侧设备和终端中至少之一的各所述传输条件下的数据,作为各所述传输条件下的实时数据;所述在线调整训练完成的神经网络模型,包括:基于所述网络侧设备和终端中至少之一的各所述传输条件下的数据,利用所述网络侧设备或所述终端,在线调整所述训练完成的神经网络模型。
可以理解为,与神经网络的训练阶段类似,本申请实施例神经网络模型的微调或fine tuning阶段,若沿用训练阶段的数据占比,则执行主体也可以是网络侧设备和/或终端侧设备。也就是说,在执行主体是网络侧设备时,可以在线获取网络侧设备下各传输条件的实时数据;在执行主体是终端设备时,可以在线获取终端侧各传输条件的实时数据;当执行主体既包括网络侧设备又包括终端时,需对这两个执行主体均获取各传输条件的实时数据。
之后,在实际在线微调时,由网络侧设备或终端按照自身对应的实时数据进行神经网络模型的在线微调,更新网络参数。
可选地,在所述网络侧设备或所述终端在训练阶段未获取各所述传输条件下训练数据的占比的情况下,在所述基于各所述传输条件下训练数据的占比,获取各所述传输条件下的实时数据之前,所述无线传输方法还包括:
由所述网络侧设备通过Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令和N22接口信令中任一接口信令,从所述训练阶段的网络侧设备中获取各所述传输条件下训练数据的占比;
或者,由所述网络侧设备通过PUCCH层1信令、PUSCH、PRACH的MSG1、PRACH的MSG3和PRACH的MSG A中任一信令,从所述训练阶段的终端中获取各所述传输条件下训练数据的占比;
或者,由所述终端通过PC5接口信令或sidelink接口信令,从所述训练阶段的终端中获取各所述传输条件下训练数据的占比;
或者,由所述终端通过RRC、PDCCH层1信令、PUSCH、MAC CE和SIB中任一信令,从所述训练阶段的网络侧设备中获取各所述传输条件下训练数据的占比。
可以理解为,在神经网络模型的在线微调阶段,若执行主体在训练阶段没有获取数据占比信息,则根据不同的执行主体类型以及面向的训练阶段的目标执行主体类型,需先通过Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令及N22接口信令等接口信令,或PUCCH层1信令、PUSCH、PRACH的MSG1、PRACH的MSG3、PRACH的MSG A等信令,或PC5接口信令、sidelink接口信令,或RRC、PDCCH层1信令、MAC CE、SIB等接口信令(包括但不限于)获取数据占比信息,再进行微调或fine tuning。
本申请实施例在执行主体在训练阶段未获取数据占比信息的基础上,通过设定类型接口信令从其它执行主体获取数据占比信息,能够实现在线微调阶段数据占比信息的共享,保证神经网络的泛化能力。
为进一步说明本申请实施例的技术方案,以下将举例说明,但不对本申请要求保护的范围进行限定。
图5为根据本申请实施例提供的无线传输方法中构建神经网络模型的流程示意图,图5示出的是本申请实施例提出的无线传输方法中所涉及的模型构建流程,可分为离线(offline)训练阶段(图中①所示部分)和在实际传输网络中的微调或fine tuning阶段(图中②所示部分),其中在离线训练之前,可先构造获取训练数据集。
在构造获取训练数据集时,可以先将所有传输条件的数据等比例混合,以确定等比例下各传输条件的数据量或数据占比。之后将混合中所有传输条件按对神经网络优化目标的贡献度排序。再之后,在保障所有传输条件的数据足够的前提下,增加贡献度较低的传输条件的数据量,减少贡献度较高的传输条件的数据量,以确定各传输条件数据的占比,并进一步根据该占比构造混合训练数据集。
其中,在确定不同传输条件下的数据量时,“保障所有传输条件的数据足够”可以是指设定一个门限,任意一个传输条件的数据量占总数据量的比例不得低于该门限。
在“增加贡献度较低的传输条件的数据量,减少贡献度较高的传输条件的数据量”时,贡献度越低,增加的数据量越多,贡献度越高,减少的数据量越多。若在对贡献度进行排序时,贡献度排序方式为从小到大,对应的传输条件的数据占总数据的比例(即占比)可以是任意递减的方式,如线性减小、等差减小、等比减小、指数函数式减小或幂函数式减小等。反之,若贡献度排序方式为从大到小,对应的传输条件的数据占总数据的比例可以是任意递增的方式,如线性增加、等差增加、等比增加、指数函数式增加或幂函数式增加等。
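作为示意,下述Python代码草图给出了几种可选的递减规则(等比规则的公比等参数均为示例假设),用于在贡献度从小到大排序时生成沿排序方向递减的占比序列:

```python
import math

def decreasing_ratios(k, rule="linear"):
    """为按贡献度从小到大排序的k个传输条件生成递减的占比序列。"""
    if rule == "linear":          # 线性(等差)减小
        raw = [float(k - i) for i in range(k)]
    elif rule == "geometric":     # 等比减小,公比0.8为示例假设
        raw = [0.8 ** i for i in range(k)]
    elif rule == "exponential":   # 指数函数式减小
        raw = [math.exp(-i) for i in range(k)]
    else:
        raise ValueError("未知的递减规则")
    s = sum(raw)
    return [r / s for r in raw]   # 归一化为占总数据量的比例
```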
其中,确定各传输条件数据的占比的一种可行实现方式如图6所示,为根据本申请实施例提供的训练数据集获取方法中确定训练数据占比的流程示意图,主要包括:
排序后找到贡献度的中位数(中值),并依次将每个传输条件的贡献度与该中位数进行比较。
若第i个传输条件的贡献度大于贡献度中位数,则在保障所有传输条件的数据量足够的前提下,减少该第i个传输条件的数据量,减幅与该第i个传输条件的贡献度和贡献度中位数(中值)的差值成正比;
若第i个传输条件的贡献度小于贡献度中位数,则在保障所有传输条件的数据量足够的前提下,增加该第i个传输条件的数据量,增幅与贡献度中位数(中值)和该第i个传输条件的贡献度的差值成正比。
在获取训练数据集后,利用该训练数据集,离线循环迭代训练神经网络模型,使其达到收敛,得到训练完成的神经网络模型。
然后,实时采集实际无线网络中的数据,对预训练完成的神经网络模型的参数进行微调(fine-tuning),使神经网络模型能够适配实际环境。在线微调可认为是用预训练的神经网络的参数作为初始化,进行的再训练过程。
例如,以具体的应用场景是无线传输中的解调参考信号(Demodulation Reference Signal,DMRS)信道估计为例,考虑一个包含N_RB个RB的系统,每个RB包含N_SC个子载波、N_Sym个符号,即系统总共包含N_RE=N_RB*N_SC*N_Sym个时频资源。在每个RB的频域放置N_SC_DMRS个、时域放置N_Sym_DMRS个DMRS,用作信道估计,即DMRS总共占据N_RE_DMRS=N_RB*N_SC_DMRS*N_Sym_DMRS个时频资源。接收端基于接收到的N_RE_DMRS个时频资源位置的DMRS信号,恢复出所有N_RE个时频资源上的信道估计。
实现上述过程的神经网络的结构如图7所示,为根据本申请实施例提供的无线传输方法中用于DMRS信道估计的神经网络的结构示意图,其中,神经网络的输入信息为N_RE_DMRS个DMRS过信道加上噪声后的符号,神经网络的输出信息为N_RE个符号,对应所有N_RE个时频资源上的信道估计结果。
在训练神经网络时,训练数据是加标签的DMRS信息对,即一个DMRS信号样本(包含N_RE_DMRS个符号)对应一个标签(该标签为与当前DMRS信号样本对应的信道真值,共N_RE个符号)。训练时用大量的加标签的DMRS信息对来调整神经网络的参数,最小化基于DMRS信号样本的神经网络的输出与其标签之间的归一化均方误差NMSE。
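作为示意,下述基于PyTorch的代码草图给出了NMSE的一种计算方式(此处以实数张量表示,复数信道可将实部与虚部拼接后处理;张量维度约定为示例假设):

```python
import torch

def nmse_loss(h_est: torch.Tensor, h_true: torch.Tensor) -> torch.Tensor:
    """归一化均方误差:NMSE = ||h_est - h_true||^2 / ||h_true||^2,按batch取平均。"""
    # h_est/h_true的形状假设为[batch, N_RE]
    err = (h_est - h_true).pow(2).sum(dim=-1)
    ref = h_true.pow(2).sum(dim=-1).clamp_min(1e-12)  # 防止除零
    return (err / ref).mean()
```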
考虑训练数据中包含K种SNR下获得的DMRS信息对,第k个信噪比记为SNR_k,第k个信噪比的数据量为N_k,总共有 N_all=∑_{k=1}^{K}N_k 个训练数据。假设神经网络是基于联邦学习进行训练的,参与联邦学习的有1个网络侧设备和多个终端侧设备。
首先,在网络侧设备确定混合训练数据集构造时每个SNR的数据占比。
(1)按照传输条件对优化目标的贡献度,对K种信噪比的数据进行排序:
假设数据是等比例混合(即N_k/N_all=1/K),将第k个SNR的数据对上述NMSE的贡献度记为C_k。按照C_k从小到大(或从大到小)的顺序对SNR进行排序。在本实施例中,低SNR的数据对NMSE的贡献度较大,高SNR的数据对NMSE的贡献度较小。将排序中所有贡献度的中位数记作C_mid。
(2)确定混合比例:
1)对于第k个SNR:若C_k>C_mid,则减少第k个SNR的数据量,减幅与C_k-C_mid成正比;若C_k<C_mid,则增加第k个SNR的数据量,增幅与C_mid-C_k成正比;
2)假设调整后第k个信噪比的数据量为N′_k,总共有 N′_all=∑_{k=1}^{K}N′_k 个训练数据。确认调整后任意SNR的数据占总数据的比例不小于门限γ,即对于所有k,调整数据量后使N′_k/N′_all≥γ。
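作为示意,下述Python代码草图实现了上述"与贡献度中位数比较、按差值成正比增减、再校验门限γ"的流程(其中比例系数alpha与gamma的取值均为示例假设):

```python
import numpy as np

def adjust_snr_amounts(C, N, alpha=0.5, gamma=0.05):
    """按贡献度中位数调整K种SNR的数据量,并保障任一SNR占比不低于门限γ。"""
    C = np.asarray(C, dtype=float)
    N = np.asarray(N, dtype=float)
    c_mid = np.median(C)                    # 贡献度中位数C_mid
    N2 = N * (1.0 - alpha * (C - c_mid))    # C_k>C_mid时按差值成正比减少,反之增加
    N2 = np.maximum(N2, 0.0)
    ratios = N2 / N2.sum()
    if (ratios < gamma).any():              # 校验N′_k/N′_all≥γ
        ratios = np.maximum(ratios, gamma)
        ratios = ratios / ratios.sum()      # 简化的重归一化处理
    return ratios * N2.sum()                # 返回调整后的各SNR数据量N′_k
```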
然后,网络侧设备将每个SNR的数据占比信息通过RRC、PDCCH层1信令、MAC CE或SIB等接口信令,发送给所有参与联合训练的终端,进行神经网络的联邦学习,也即离线训练。
在完成神经网络的离线训练后,将完成训练的神经网络在实际无线网络中进行在线微调。由于数据占比信息已经在离线训练阶段共享给了所有终端,因此在线微调阶段可沿用该数据占比。当来自实际无线环境的某一SNR的数据占比超出模型训练阶段该SNR的占比时,将超出的部分数据不输入神经网络进行微调。
本申请实施例能够提高训练出的神经网络模型在变化的无线环境中的泛化能力。
需要说明的是,本申请实施例提供的无线传输方法,执行主体可以为无线传输装置,或者,该无线传输装置中的用于执行无线传输方法的控制模块。本申请实施例中以无线传输装置执行无线传输方法为例,说明本申请实施例提供的无线传输装置。
本申请实施例的无线传输装置的结构如图8所示,为本申请实施例提供的无线传输装置的结构示意图,该装置可以用于实现上述各无线传输方法实施例中的无线传输,该装置包括:
第三处理模块801,用于基于神经网络模型,进行无线传输运算,实现所述无线传输。
其中,所述神经网络模型为预先利用训练数据集进行训练获取的,所述训练数据集为基于如上述各实施例所述的训练数据集获取方法获取的。
可选地,所述无线传输装置还包括:
训练模块,用于基于所述训练数据集,利用如下训练方式中任一,训练获取所述神经网络模型,所述如下训练方式包括:
单个终端集中式训练;
单个网络侧设备集中式训练;
多个终端联合分布式训练;
多个网络侧设备联合分布式训练;
单个网络侧设备与多个终端联合分布式训练;
多个网络侧设备与多个终端联合分布式训练;
多个网络侧设备与单个终端联合分布式训练。
可选地,所述网络侧设备为接入网设备、核心网设备或数据网络设备。
可选地,所述无线传输装置还包括:
第四处理模块,用于在所述分布式训练的过程中,将各所述传输条件下训练数据的占比在执行所述分布式训练的各主体间共享。
可选地,在所述分布式训练为多个网络侧设备联合分布式训练的情况下,所述第四处理模块,用于由所述多个网络侧设备中任一网络侧设备计算确定各所述传输条件下训练数据的占比,并通过第一设定类型接口信令,将所述占比发送至所述多个网络侧设备中除所述任一网络侧设备外的其它网络侧设备。
可选地,所述第一设定类型接口信令包括Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令或N22接口信令。
可选地,在所述分布式训练为多个终端联合分布式训练的情况下,所述第四处理模块,用于由所述多个终端中任一终端计算确定各所述传输条件下训练数据的占比,并通过第二设定类型接口信令,将所述占比发送至所述多个终端中除所述任一终端外的其它终端。
可选地,所述第二设定类型接口信令包括PC5接口信令或sidelink接口信令。
可选地,在所述分布式训练为网络侧设备与终端联合分布式训练的情况下,所述第四处理模块,用于由所述网络侧设备与终端中任一网络侧设备或任一终端计算确定各所述传输条件下训练数据的占比,并通过第三设定类型信令,将所述占比发送至所述网络侧设备与终端中除所述任一网络侧设备或任一终端外的其它网络侧设备或终端。
可选地,所述第三设定类型信令包括RRC、PDCCH层1信令、PDSCH、MAC CE、SIB、Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令、N22接口信令、PUCCH层1信令、PUSCH、PRACH的MSG1、PRACH的MSG3、PRACH的MSG A、PC5接口信令或sidelink接口信令。
可选地,所述无线传输装置还包括:
微调模块,用于获取所述传输条件下的实时数据,并基于所述实时数据,在线调整训练完成的神经网络模型。
可选地,所述微调模块,用于:
基于各所述传输条件下训练数据的占比,获取各所述传输条件下的实时数据;
若所述传输条件中任一传输条件下的实时数据的占比高于所述任一传输条件下训练数据的占比,则在在线调整所述训练完成的神经网络模型的过程中,不将所述任一传输条件下的实时数据中超出所述任一传输条件下训练数据的占比的数据输入所述训练完成的神经网络模型。
可选地,所述微调模块,在用于所述获取各所述传输条件下的实时数据时,用于:
在线采集网络侧设备和终端中至少之一的各所述传输条件下的数据,作为各所述传输条件下的实时数据;
所述微调模块,在用于所述在线调整训练完成的神经网络模型时,用于:
基于所述网络侧设备和终端中至少之一的各所述传输条件下的数据,利用所述网络侧设备或所述终端,在线调整所述训练完成的神经网络模型。
可选地,所述无线传输装置还包括:
通信模块,用于在所述网络侧设备或所述终端在训练阶段未获取各所述传输条件下训练数据的占比的情况下,
由所述网络侧设备通过Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令和N22接口信令中任一接口信令,从所述训练阶段的网络侧设备中获取各所述传输条件下训练数据的占比;
或者,由所述网络侧设备通过PUCCH层1信令、PUSCH、PRACH的MSG1、PRACH的MSG3和PRACH的MSG A中任一信令,从所述训练阶段的终端中获取各所述传输条件下训练数据的占比;
或者,由所述终端通过PC5接口信令或sidelink接口信令,从所述训练阶段的终端中获取各所述传输条件下训练数据的占比;
或者,由所述终端通过RRC、PDCCH层1信令、PUSCH、MAC CE和SIB中任一信令,从所述训练阶段的网络侧设备中获取各所述传输条件下训练数据的占比。
本申请实施例中的无线传输装置可以是装置,具有操作系统的装置或电子设备,也可以是终端或网络侧设备中的部件、集成电路、或芯片。该装置或电子设备可以是移动终端,也可以为非移动终端,也可以包括但不限于上述所列举的网络侧设备102的类型。示例性的,移动终端可以包括但不限于上述所列举的终端101的类型,非移动终端可以为服务器、网络附属存储器(Network Attached Storage,NAS)、个人计算机(personal computer,PC)、电视机(television,TV)、柜员机或者自助机等,本申请实施例不作具体限定。
本申请实施例中的无线传输装置可以为具有操作系统的装置。该操作系统可以为安卓(Android)操作系统,可以为ios操作系统,还可以为其他可能的操作系统,本申请实施例不作具体限定。
本申请实施例提供的无线传输装置能够实现图4至图7的无线传输方法实施例实现的各个过程,并达到相同的技术效果,为避免重复,这里不再赘述。
如图9所示,本申请实施例还提供一种通信设备900,包括处理器901,存储器902,存储在存储器902上并可在所述处理器901上运行的程序或指令,例如,该通信设备900为终端或网络侧设备时,该程序或指令被处理器901执行时可实现上述训练数据集获取方法实施例的各个过程,且能达到相同的技术效果,或者实现上述无线传输方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本申请实施例还提供一种通信设备,该通信设备可以是终端或网络侧设备,该通信设备包括处理器和通信接口,其中处理器用于基于各传输条件对神经网络优化目标的贡献度,确定各所述传输条件下训练数据的数据量;以及基于各所述传输条件下训练数据的数据量,获取各所述传输条件下的训练数据,以形成用于训练所述神经网络的训练数据集;其中,所述传输条件对神经网络优化目标的贡献度,表示所述传输条件对所述神经网络优化目标的取值的影响程度。需要说明的是,该通信设备实施例是与上述训练数据集获取方法实施例对应的,上述方法实施例的各个实施过程和实现方式均可适用于该通信设备实施例中,且能达到相同的技术效果。
本申请实施例还提供一种通信设备,该通信设备可以是终端或网络侧设备,该通信设备包括处理器和通信接口,其中处理器用于基于神经网络模型,进行无线传输运算,实现所述无线传输;其中,所述神经网络模型为预先利用训练数据集进行训练获取的,所述训练数据集为基于如上述各实施例所述的训练数据集获取方法获取的。需要说明的是,该通信设备实施例是与上述无线传输方法实施例对应的,上述方法实施例的各个实施过程和实现方式均可适用于该通信设备实施例中,且能达到相同的技术效果。
具体地,图10为实现本申请实施例的一种终端的硬件结构示意图。该终端1000包括但不限于:射频单元1001、网络模块1002、音频输出单元1003、输入单元1004、传感器1005、显示单元1006、用户输入单元1007、接口单元1008、存储器1009、以及处理器1010等中的至少部分部件。
本领域技术人员可以理解,终端1000还可以包括给各个部件供电的电源(比如电池),电源可以通过电源管理系统与处理器1010逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。图10中示出的终端结构并不构成对终端的限定,终端可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置,在此不再赘述。
应理解的是,本申请实施例中,输入单元1004可以包括图形处理器(Graphics Processing Unit,GPU)10041和麦克风10042,图形处理器10041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。显示单元1006可包括显示面板10061,可以采用液晶显示器、有机发光二极管等形式来配置显示面板10061。用户输入单元1007包括触控面板10071以及其他输入设备10072。触控面板10071,也称为触摸屏。触控面板10071可包括触摸检测装置和触摸控制器两个部分。其他输入设备10072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
本申请实施例中,射频单元1001将来自网络侧设备的下行数据接收后,给处理器1010处理;另外,将上行的数据发送给网络侧设备。通常,射频单元1001包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。
存储器1009可用于存储软件程序或指令以及各种数据。存储器1009可主要包括存储程序或指令区和存储数据区,其中,存储程序或指令区可存储操作系统、至少一个功能所需的应用程序或指令(比如声音播放功能、图像播放功能等)等。此外,存储器1009可以包括高速随机存取存储器,还可以包括非易失性存储器,其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synch link DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请实施例中的存储器1009包括但不限于这些和任意其它适合类型的存储器。
处理器1010可包括一个或多个处理单元;可选的,处理器1010可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序或指令等,调制解调处理器主要处理无线通信,如基带处理器。可以理解的是,上述调制解调处理器也可以不集成到处理器1010中。
其中,处理器1010,用于基于各传输条件对神经网络优化目标的贡献度,确定各所述传输条件下训练数据的数据量;以及基于各所述传输条件下训练数据的数据量,获取各所述传输条件下的训练数据,以形成用于训练所述神经网络的训练数据集;其中,所述传输条件对神经网络优化目标的贡献度,表示所述传输条件对所述神经网络优化目标的取值的影响程度。
本申请实施例在基于人工智能的通信系统中构造训练数据集时,根据不同传输条件的数据对神经网络优化目标(或称目标函数或损失函数)的贡献度,以不同比例选取多种传输条件下的数据,构造混合训练数据集,能够有效提高神经网络的泛化能力。
可选的,处理器1010,还用于对各所述传输条件的贡献度进行排序;以及在等比例混合的基础上,执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一。
本申请实施例通过对各传输条件的贡献度进行排序,并根据该排序来调整对应传输条件的数据量,能够更明确且准确的确定对相应传输条件数据量的调整策略(包括需要增加还是减少数据量、增加或减少的幅度等),从而使得效率更高,结果更准确。
可选的,处理器1010,还用于基于各所述传输条件下训练数据的数据量,收集各所述传输条件下的数据并标定,构成各所述传输条件下的训练数据集;或者,收集各所述传输条件下设定数量的数据,并基于各所述传输条件下训练数据的数据量,由所述设定数量的数据中选取部分数据并标定,或对所述设定数量的数据进行补足并标定,构成各所述传输条件下的训练数据集。
可选的,处理器1010,还用于按照如下规则,执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一,所述如下规则,包括:所述较大贡献度的值越大,所述减少的幅度越大;所述较小贡献度的值越小,所述增加的幅度越大。
本申请实施例根据贡献度值的大小来按比例增加或减少对应传输条件训练数据的数据量,能够使得随着传输条件的贡献度的逐渐增高,传输条件的训练数据的数据量逐渐降低,从而更好的均衡各传输条件对最终神经网络的影响,更有利于改善神经网络的泛化能力。
可选的,处理器1010,还用于根据所述排序,确定参照贡献度,并比较所述传输条件的贡献度与所述参照贡献度,若所述传输条件的贡献度大于所述参照贡献度,则确定所述传输条件的贡献度为所述较大贡献度,并减少所述传输条件下训练数据的数据量,否则,确定所述传输条件的贡献度为所述较小贡献度,并增加所述传输条件下训练数据的数据量。
本申请实施例通过确定贡献度的中间对比参照量,只需将其它贡献度与该对比参照量进行比较,即可根据比较结果确定增加或减少对应传输条件的数据量,算法简单,运算量小。
可选的,处理器1010,还用于基于各所述传输条件在实际应用中的概率密度,确定各所述传输条件对应的加权系数;基于各所述传输条件对所述优化目标的贡献度,结合所述加权系数,确定各所述传输条件下训练数据的数据量。
本申请实施例根据传输条件在实际中的概率密度设计加权项,能够更好地适配实际环境。
可选的,射频单元1001,用于将所述训练数据集发送至目标设备,所述目标设备用于基于所述训练数据集,训练所述神经网络。
本申请实施例通过将获取的训练数据集发送至当前设备外的第二设备,能够使多个设备间实现数据共享和联合训练,从而能够有效降低单个设备的运算量,且能够有效提高运算效率。
可选的,射频单元1001,用于直接将所述训练数据集发送至所述目标设备,或者将设定变换后的训练数据集发送至所述目标设备;
处理器1010,还用于对所述训练数据集进行设定变换,所述设定变换包括特定的量化、特定的压缩以及按照预先约定或配置的神经网络处理中的至少一种。
可选的,处理器1010,还用于基于神经网络模型,进行无线传输运算,实现所述无线传输;其中,所述神经网络模型为预先利用训练数据集进行训练获取的,所述训练数据集为基于如上述各训练数据集获取方法实施例所述的训练数据集获取方法获取的。
本申请实施例根据不同传输条件的数据对神经网络优化目标(或称目标函数或损失函数)的贡献度,以不同比例选取多种传输条件下的数据,构造非均匀混合训练数据集,并基于该非均匀混合的训练数据集,训练出一个共性的神经网络,用于实际不同传输条件下的无线传输,能够使训练出的神经网络在每个传输条件下都能达到较高的性能。
可选的,处理器1010,还用于基于所述训练数据集,利用如下训练方式中任一,训练获取所述神经网络模型,所述如下训练方式包括:
单个终端集中式训练;
单个网络侧设备集中式训练;
多个终端联合分布式训练;
多个网络侧设备联合分布式训练;
单个网络侧设备与多个终端联合分布式训练;
多个网络侧设备与多个终端联合分布式训练;
多个网络侧设备与单个终端联合分布式训练。
可选的,射频单元1001,还用于在所述分布式训练的过程中,将各所述传输条件下训练数据的占比在执行所述分布式训练的各主体间共享。
本申请实施例通过在各执行主体间共享训练数据的占比,可以使各执行主体无需共享自身数据即可实现神经网络模型的训练,能够解决单个设备计算能力或训练能力不足或者设备之间无法共享数据(涉及隐私问题)或传输大量数据的代价非常大的问题。
可选的,处理器1010,还用于在所述分布式训练为多个网络侧设备联合分布式训练的情况下,由所述多个网络侧设备中任一网络侧设备计算确定各所述传输条件下训练数据的占比;
射频单元1001,还用于通过第一设定类型接口信令,将所述占比发送至所述多个网络侧设备中除所述任一网络侧设备外的其它网络侧设备。
可选的,处理器1010,还用于在所述分布式训练为多个终端联合分布式训练的情况下,由所述多个终端中任一终端计算确定各所述传输条件下训练数据的占比;
射频单元1001,还用于通过第二设定类型接口信令,将所述占比发送至所述多个终端中除所述任一终端外的其它终端。
可选的,处理器1010,还用于在所述分布式训练为网络侧设备与终端联合分布式训练的情况下,由所述网络侧设备与终端中任一网络侧设备或任一终端计算确定各所述传输条件下训练数据的占比;
射频单元1001,还用于通过第三设定类型信令,将所述占比发送至所述网络侧设备与终端中除所述任一网络侧设备或任一终端外的其它网络侧设备或终端。
可选的,处理器1010,还用于获取所述传输条件下的实时数据,并基于所述实时数据,在线调整训练完成的神经网络模型。
本申请实施例在完成神经网络模型离线训练的基础上,对完成训练的神经网络模型进行在线微调,可以使神经网络更能适配实际环境。
可选的,处理器1010,还用于基于各所述传输条件下训练数据的占比,获取各所述传输条件下的实时数据;以及若所述传输条件中任一传输条件下的实时数据的占比高于所述任一传输条件下训练数据的占比,则在在线调整所述训练完成的神经网络模型的过程中,不将所述任一传输条件下的实时数据中超出所述任一传输条件下训练数据的占比的数据输入所述训练完成的神经网络模型。
本申请实施例在沿用训练阶段的数据占比时,不将超过该占比的传输条件的数据输入神经网络模型进行微调,能够避免超过占比的数据量的非均衡影响。
可选的,输入单元1004,用于在线采集网络侧设备和终端中至少之一的各所述传输条件下的数据,作为各所述传输条件下的实时数据;
处理器1010,还用于基于所述网络侧设备和终端中至少之一的各所述传输条件下的数据,利用所述网络侧设备或所述终端,在线调整所述训练完成的神经网络模型。
可选的,在通信设备为网络侧设备的情况下,射频单元1001,还用于通过Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令和N22接口信令中任一网络侧接口信令,从所述训练阶段的网络侧设备中获取各所述传输条件下训练数据的占比,或者通过RRC、PDCCH层1信令、MAC CE和SIB中任一接口信令,从所述训练阶段的终端中获取各所述传输条件下训练数据的占比;
在通信设备为终端的情况下,射频单元1001,还用于通过PC5接口信令或sidelink接口信令,从所述训练阶段的终端中获取各所述传输条件下训练数据的占比,或者通过RRC、PDCCH层1信令、MAC CE和SIB中任一信令,从所述训练阶段的网络侧设备中获取各所述传输条件下训练数据的占比。
本申请实施例在执行主体在训练阶段未获取数据占比信息的基础上,通过设定类型接口信令从其它执行主体获取数据占比信息,能够实现在线微调阶段数据占比信息的共享,保证神经网络的泛化能力。
具体地,图11为实现本申请实施例的一种接入网设备的硬件结构示意图。如图11所示,该接入网设备1100包括:天线1101、射频装置1102、基带装置1103。天线1101与射频装置1102连接。在上行方向上,射频装置1102通过天线1101接收信息,将接收的信息发送给基带装置1103进行处理。在下行方向上,基带装置1103对要发送的信息进行处理,并发送给射频装置1102,射频装置1102对收到的信息进行处理后经过天线1101发送出去。
频带处理装置可以位于基带装置1103中,以上实施例中网络侧设备执行的方法可以在基带装置1103中实现,该基带装置1103包括处理器1104和存储器1105。
基带装置1103例如可以包括至少一个基带板,该基带板上设置有多个芯片,如图11所示,其中一个芯片例如为处理器1104,与存储器1105连接,以调用存储器1105中的程序,执行以上方法实施例中所示的网络侧设备操作。
该基带装置1103还可以包括网络接口1106,用于与射频装置1102交互信息,该接口例如为通用公共无线接口(common public radio interface,简称CPRI)。
具体地,本发明实施例的接入网设备还包括:存储在存储器1105上并可在处理器1104上运行的指令或程序,处理器1104调用存储器1105中的指令或程序执行图3或图8所示各模块执行的方法,并达到相同的技术效果,为避免重复,故不在此赘述。
具体地,图12为实现本申请实施例的一种核心网设备的硬件结构示意图。如图12所示,该核心网设备1200包括:处理器1201、收发机1202、存储器1203、用户接口1204和总线接口,其中:
在本申请实施例中,核心网设备1200还包括:存储在存储器1203上并可在处理器1201上运行的计算机程序,计算机程序被处理器1201执行时实现如图3或图8所示各模块执行的方法,并达到相同的技术效果,为避免重复,故不在此赘述。
在图12中,总线架构可以包括任意数量的互联的总线和桥,具体由处理器1201代表的一个或多个处理器和存储器1203代表的存储器的各种电路链接在一起。总线架构还可以将诸如外围设备、稳压器和功率管理电路等之类的各种其他电路链接在一起,这些都是本领域所公知的,因此,本申请实施例不再对其进行进一步描述。总线接口提供接口。收发机1202可以是多个元件,即包括发送机和接收机,提供用于在传输介质上与各种其他装置通信的单元。针对不同的用户设备,用户接口1204还可以是能够外接内接需要设备的接口,连接的设备包括但不限于小键盘、显示器、扬声器、麦克风、操纵杆等。
处理器1201负责管理总线架构和通常的处理,存储器1203可以存储处理器1201在执行操作时所使用的数据。
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时,实现上述训练数据集获取方法实施例的各个过程,或者实现上述无线传输方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的终端或网络侧设备中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
本申请实施例另提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现上述训练数据集获取方法实施例的各个过程,或者实现上述无线传输方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片,系统芯片,芯片系统或片上系统芯片等。
需要说明的是,在本文中,术语"包括"、"包含"或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句"包括一个......"限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本申请实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (55)

  1. 一种训练数据集获取方法,包括:
    基于各传输条件对神经网络优化目标的贡献度,确定各所述传输条件下训练数据的数据量;
    基于各所述传输条件下训练数据的数据量,获取各所述传输条件下的训练数据,以形成用于训练所述神经网络的训练数据集;
    其中,所述传输条件对神经网络优化目标的贡献度,表示所述传输条件对所述神经网络优化目标的取值的影响程度。
  2. 根据权利要求1所述的训练数据集获取方法,其中,所述基于各传输条件对神经网络优化目标的贡献度,确定各所述传输条件下训练数据的数据量,包括:
    对各所述传输条件的贡献度进行排序;
    在等比例混合的基础上,执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一。
  3. 根据权利要求1或2所述的训练数据集获取方法,其中,所述传输条件的类型包括如下至少之一:
    信噪比或信干噪比;
    参考信号接收功率;
    信号强度;
    干扰强度;
    终端移动速度;
    信道参数;
    终端离基站的距离;
    小区大小;
    载频;
    调制阶数或调制编码策略;
    小区类型;
    站间距;
    天气和环境因素;
    发端或收端的天线配置信息;
    终端能力或类型;
    基站能力或类型。
  4. 根据权利要求1或2所述的训练数据集获取方法,其中,所述获取各所述传输条件下的训练数据,以形成用于训练所述神经网络的训练数据集,包括:
    基于各所述传输条件下训练数据的数据量,收集各所述传输条件下的数据并标定,构成各所述传输条件下的训练数据集;
    或者,
    收集各所述传输条件下设定数量的数据,并基于各所述传输条件下训练数据的数据量,由所述设定数量的数据中选取部分数据并标定,或对所述设定数量的数据进行补足并标定,构成各所述传输条件下的训练数据集。
  5. 根据权利要求2所述的训练数据集获取方法,其中,所述执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一,包括:
    按照如下规则,执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一,所述如下规则,包括:
    所述较大贡献度的值越大,所述减少的幅度越大;所述较小贡献度的值越小,所述增加的幅度越大。
  6. 根据权利要求5所述的训练数据集获取方法,其中,在所述排序的结果为由小到大的情况下,所述传输条件下的训练数据的数据量按所述排序的方向递减,在所述排序的结果为由大到小的情况下,所述传输条件下的训练数据的数据量按所述排序的方向递增。
  7. 根据权利要求2所述的训练数据集获取方法,其中,所述执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一,包括:
    根据所述排序,确定参照贡献度,并比较所述传输条件的贡献度与所述参照贡献度;
    根据比较结果,执行如下操作中至少之一,所述如下操作包括:
    若所述传输条件的贡献度大于所述参照贡献度,则确定所述传输条件的贡献度为所述较大贡献度,并减少所述传输条件下训练数据的数据量;
    若所述传输条件的贡献度不大于所述参照贡献度,则确定所述传输条件的贡献度为所述较小贡献度,并增加所述传输条件下训练数据的数据量。
  8. 根据权利要求7所述的训练数据集获取方法,其中,所述参照贡献度为所述排序的中位数,或所述排序中设定位置的贡献度,或所述排序中各贡献度的平均数,或所述排序中与所述平均数最接近的贡献度。
  9. 根据权利要求1、2、5-8中任一所述的训练数据集获取方法,其中,所述基于各传输条件对神经网络优化目标的贡献度,确定各所述传输条件下训练数据的数据量,包括:
    基于各所述传输条件在实际应用中的概率密度,确定各所述传输条件对应的加权系数;
    基于各所述传输条件对所述优化目标的贡献度,结合所述加权系数,确定各所述传输条件下训练数据的数据量。
  10. 根据权利要求9所述的训练数据集获取方法,其中,所述加权系数与所述概率密度呈函数递增关系。
  11. 根据权利要求1、2、5-8、10中任一所述的训练数据集获取方法,其中,所述方法还包括:
    将所述训练数据集发送至目标设备,所述目标设备用于基于所述训练数据集,训练所述神经网络。
  12. 根据权利要求11所述的训练数据集获取方法,其中,所述将所述训练数据集发送至目标设备,包括:
    直接将所述训练数据集发送至所述目标设备,或者,对所述训练数据集进行设定变换后发送至所述目标设备,所述设定变换包括特定的量化、特定的压缩以及按照预先约定或配置的神经网络处理中的至少一种。
  13. 一种训练数据集获取装置,包括:
    第一处理模块,用于基于各传输条件对神经网络优化目标的贡献度,确定各所述传输条件下训练数据的数据量;
    第二处理模块,用于基于各所述传输条件下训练数据的数据量,获取各所述传输条件下的训练数据,以形成用于训练所述神经网络的训练数据集;
    其中,所述传输条件对神经网络优化目标的贡献度,表示所述传输条件对所述神经网络优化目标的取值的影响程度。
  14. 根据权利要求13所述的训练数据集获取装置,其中,所述第一处理模块,用于:
    对各所述传输条件的贡献度进行排序;
    在等比例混合的基础上,执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一。
  15. 根据权利要求13或14所述的训练数据集获取装置,其中,所述传输条件的类型包括如下至少之一:
    信噪比或信干噪比;
    参考信号接收功率;
    信号强度;
    干扰强度;
    终端移动速度;
    信道参数;
    终端离基站的距离;
    小区大小;
    载频;
    调制阶数或调制编码策略;
    小区类型;
    站间距;
    天气和环境因素;
    发端或收端的天线配置信息;
    终端能力或类型;
    基站能力或类型。
  16. 根据权利要求13或14所述的训练数据集获取装置,其中,所述第二处理模块,用于:
    基于各所述传输条件下训练数据的数据量,收集各所述传输条件下的数据并标定,构成各所述传输条件下的训练数据集;
    或者,
    收集各所述传输条件下设定数量的数据,并基于各所述传输条件下训练数据的数据量,由所述设定数量的数据中选取部分数据并标定,或对所述设定数量的数据进行补足并标定,构成各所述传输条件下的训练数据集。
  17. 根据权利要求14所述的训练数据集获取装置,其中,所述第一处理模块在用于所述执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一时,用于:
    按照如下规则,执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一,所述如下规则,包括:
    所述较大贡献度的值越大,所述减少的幅度越大;所述较小贡献度的值越小,所述增加的幅度越大。
  18. 根据权利要求17所述的训练数据集获取装置,其中,在所述排序的结果为由小到大的情况下,所述传输条件下的训练数据的数据量按所述排序的方向递减,在所述排序的结果为由大到小的情况下,所述传输条件下的训练数据的数据量按所述排序的方向递增。
  19. 根据权利要求14所述的训练数据集获取装置,其中,所述第一处理模块在用于所述执行减少所述排序中较大贡献度对应的传输条件下训练数据的数据量和增加所述排序中较小贡献度对应的传输条件下训练数据的数据量的操作中至少之一时,用于:
    根据所述排序,确定参照贡献度,并比较所述传输条件的贡献度与所述参照贡献度;
    根据比较结果,执行如下操作中至少之一,所述如下操作包括:
    若所述传输条件的贡献度大于所述参照贡献度,则确定所述传输条件的贡献度为所述较大贡献度,并减少所述传输条件下训练数据的数据量;
    若所述传输条件的贡献度不大于所述参照贡献度,则确定所述传输条件的贡献度为所述较小贡献度,并增加所述传输条件下训练数据的数据量。
  20. 根据权利要求19所述的训练数据集获取装置,其中,所述参照贡献度为所述排序的中位数,或所述排序中设定位置的贡献度,或所述排序中各贡献度的平均数,或所述排序中与所述平均数最接近的贡献度。
  21. 根据权利要求13、14、17-20中任一所述的训练数据集获取装置,其中,所述第一处理模块,还用于:
    基于各所述传输条件在实际应用中的概率密度,确定各所述传输条件对应的加权系数;
    基于各所述传输条件对所述优化目标的贡献度,结合所述加权系数,确定各所述传输条件下训练数据的数据量。
  22. 根据权利要求21所述的训练数据集获取装置,其中,所述加权系数与所述概率密度呈函数递增关系。
  23. 根据权利要求13、14、17-20、22中任一所述的训练数据集获取装置,其中,所述装置还包括:
    发送模块,用于将所述训练数据集发送至目标设备,所述目标设备用于基于所述训练数据集,训练所述神经网络。
  24. 根据权利要求23所述的训练数据集获取装置,其中,所述发送模块,用于:
    直接将所述训练数据集发送至所述目标设备,或者,对所述训练数据集进行设定变换后发送至所述目标设备,所述设定变换包括特定的量化、特定的压缩以及按照预先约定或配置的神经网络处理中的至少一种。
  25. 一种无线传输方法,包括:
    基于神经网络模型,进行无线传输运算,实现所述无线传输;
    其中,所述神经网络模型为预先利用训练数据集进行训练获取的,所述训练数据集为基于如权利要求1-12中任一所述的训练数据集获取方法获取的。
  26. 根据权利要求25所述的无线传输方法,其中,在所述基于神经网络模型,进行无线传输运算之前,所述无线传输方法还包括:
    基于所述训练数据集,利用如下训练方式中任一,训练获取所述神经网络模型,所述如下训练方式包括:
    单个终端集中式训练;
    单个网络侧设备集中式训练;
    多个终端联合分布式训练;
    多个网络侧设备联合分布式训练;
    单个网络侧设备与多个终端联合分布式训练;
    多个网络侧设备与多个终端联合分布式训练;
    多个网络侧设备与单个终端联合分布式训练。
  27. 根据权利要求26所述的无线传输方法,其中,所述网络侧设备为接入网设备、核心网设备或数据网络设备。
  28. 根据权利要求26或27所述的无线传输方法,其中,所述无线传输方法还包括:
    在所述分布式训练的过程中,将各所述传输条件下训练数据的占比在执行所述分布式训练的各主体间共享。
  29. 根据权利要求28所述的无线传输方法,其中,在所述分布式训练为多个网络侧设备联合分布式训练的情况下,由所述多个网络侧设备中任一网络侧设备计算确定各所述传输条件下训练数据的占比,并通过第一设定类型接口信令,将所述占比发送至所述多个网络侧设备中除所述任一网络侧设备外的其它网络侧设备。
  30. 根据权利要求29所述的无线传输方法,其中,所述第一设定类型接口信令包括Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令或N22接口信令。
  31. 根据权利要求28所述的无线传输方法,其中,在所述分布式训练为多个终端联合分布式训练的情况下,由所述多个终端中任一终端计算确定各所述传输条件下训练数据的占比,并通过第二设定类型接口信令,将所述占比发送至所述多个终端中除所述任一终端外的其它终端。
  32. 根据权利要求31所述的无线传输方法,其中,所述第二设定类型接口信令包括PC5接口信令或sidelink接口信令。
  33. 根据权利要求28所述的无线传输方法,其中,在所述分布式训练为网络侧设备与终端联合分布式训练的情况下,由所述网络侧设备与终端中任一网络侧设备或任一终端计算确定各所述传输条件下训练数据的占比,并通过第三设定类型信令,将所述占比发送至所述网络侧设备与终端中除所述任一网络侧设备或任一终端外的其它网络侧设备或终端。
  34. 根据权利要求33所述的无线传输方法,其中,所述第三设定类型信令包括RRC、PDCCH层1信令、PDSCH、MAC CE、SIB、Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令、N22接口信令、PUCCH层1信令、PUSCH、PRACH的MSG1、PRACH的MSG3、PRACH的MSG A、PC5接口信令或sidelink接口信令。
  35. 根据权利要求26、27、29-34中任一所述的无线传输方法,其中,还包括:
    获取所述传输条件下的实时数据,并基于所述实时数据,在线调整训练完成的神经网络模型。
  36. 根据权利要求35所述的无线传输方法,其中,所述获取所述传输条件下的实时数据,并基于所述实时数据,在线调整训练完成的神经网络模型,包括:
    基于各所述传输条件下训练数据的占比,获取各所述传输条件下的实时数据;
    若所述传输条件中任一传输条件下的实时数据的占比高于所述任一传输条件下训练数据的占比,则在在线调整所述训练完成的神经网络模型的过程中,不将所述任一传输条件下的实时数据中超出所述任一传输条件下训练数据的占比的数据输入所述训练完成的神经网络模型。
  37. 根据权利要求36所述的无线传输方法,其中,所述获取各所述传输条件下的实时数据,包括:
    在线采集网络侧设备和终端中至少之一的各所述传输条件下的数据,作为各所述传输条件下的实时数据;
    所述在线调整训练完成的神经网络模型,包括:
    基于所述网络侧设备和终端中至少之一的各所述传输条件下的数据,利用所述网络侧设备或所述终端,在线调整所述训练完成的神经网络模型。
  38. 根据权利要求37所述的无线传输方法,其中,在所述网络侧设备或所述终端在训练阶段未获取各所述传输条件下训练数据的占比的情况下,在所述基于各所述传输条件下训练数据的占比,获取各所述传输条件下的实时数据之前,所述无线传输方法还包括:
    由所述网络侧设备通过Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令和N22接口信令中任一接口信令,从所述训练阶段的网络侧设备中获取各所述传输条件下训练数据的占比;
    或者,由所述网络侧设备通过PUCCH层1信令、PUSCH、PRACH的MSG1、PRACH的MSG3和PRACH的MSG A中任一信令,从所述训练阶段的终端中获取各所述传输条件下训练数据的占比;
    或者,由所述终端通过PC5接口信令或sidelink接口信令,从所述训练阶段的终端中获取各所述传输条件下训练数据的占比;
    或者,由所述终端通过RRC、PDCCH层1信令、PUSCH、MAC CE和SIB中任一信令,从所述训练阶段的网络侧设备中获取各所述传输条件下训练数据的占比。
  39. 一种无线传输装置,包括:
    第三处理模块,用于基于神经网络模型,进行无线传输运算,实现所述无线传输;
    其中,所述神经网络模型为预先利用训练数据集进行训练获取的,所述训练数据集为基于如权利要求1-12中任一所述的训练数据集获取方法获取的。
  40. 根据权利要求39所述的无线传输装置,其中,所述无线传输装置还包括:
    训练模块,用于基于所述训练数据集,利用如下训练方式中任一,训练获取所述神经网络模型,所述如下训练方式包括:
    单个终端集中式训练;
    单个网络侧设备集中式训练;
    多个终端联合分布式训练;
    多个网络侧设备联合分布式训练;
    单个网络侧设备与多个终端联合分布式训练;
    多个网络侧设备与多个终端联合分布式训练;
    多个网络侧设备与单个终端联合分布式训练。
  41. 根据权利要求40所述的无线传输装置,其中,所述网络侧设备为接入网设备、核心网设备或数据网络设备。
  42. 根据权利要求40或41所述的无线传输装置,其中,所述无线传输装置还包括:
    第四处理模块,用于在所述分布式训练的过程中,将各所述传输条件下训练数据的占比在执行所述分布式训练的各主体间共享。
  43. 根据权利要求42所述的无线传输装置,其中,在所述分布式训练为多个网络侧设备联合分布式训练的情况下,所述第四处理模块,用于由所述多个网络侧设备中任一网络侧设备计算确定各所述传输条件下训练数据的占比,并通过第一设定类型接口信令,将所述占比发送至所述多个网络侧设备中除所述任一网络侧设备外的其它网络侧设备。
  44. 根据权利要求43所述的无线传输装置,其中,所述第一设定类型接口信令包括Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令或N22接口信令。
  45. 根据权利要求42所述的无线传输装置,其中,在所述分布式训练为多个终端联合分布式训练的情况下,所述第四处理模块,用于由所述多个终端中任一终端计算确定各所述传输条件下训练数据的占比,并通过第二设定类型接口信令,将所述占比发送至所述多个终端中除所述任一终端外的其它终端。
  46. 根据权利要求45所述的无线传输装置,其中,所述第二设定类型接口信令包括PC5接口信令或sidelink接口信令。
  47. 根据权利要求42所述的无线传输装置,其中,在所述分布式训练为网络侧设备与终端联合分布式训练的情况下,所述第四处理模块,用于由所述网络侧设备与终端中任一网络侧设备或任一终端计算确定各所述传输条件下训练数据的占比,并通过第三设定类型信令,将所述占比发送至所述网络侧设备与终端中除所述任一网络侧设备或任一终端外的其它网络侧设备或终端。
  48. 根据权利要求47所述的无线传输装置,其中,所述第三设定类型信令包括RRC、PDCCH层1信令、PDSCH、MAC CE、SIB、Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令、N22接口信令、PUCCH层1信令、PUSCH、PRACH的MSG1、PRACH的MSG3、PRACH的MSG A、PC5接口信令或sidelink接口信令。
  49. 根据权利要求40、41、43-48中任一所述的无线传输装置,其中,所述无线传输装置还包括:
    微调模块,用于获取所述传输条件下的实时数据,并基于所述实时数据,在线调整训练完成的神经网络模型。
  50. 根据权利要求49所述的无线传输装置,其中,所述微调模块,用于:
    基于各所述传输条件下训练数据的占比,获取各所述传输条件下的实时数据;
    若所述传输条件中任一传输条件下的实时数据的占比高于所述任一传输条件下训练数据的占比,则在在线调整所述训练完成的神经网络模型的过程中,不将所述任一传输条件下的实时数据中超出所述任一传输条件下训练数据的占比的数据输入所述训练完成的神经网络模型。
  51. 根据权利要求50所述的无线传输装置,其中,所述微调模块,在用于所述获取各所述传输条件下的实时数据时,用于:
    在线采集网络侧设备和终端中至少之一的各所述传输条件下的数据,作为各所述传输条件下的实时数据;
    所述微调模块,在用于所述在线调整训练完成的神经网络模型时,用于:
    基于所述网络侧设备和终端中至少之一的各所述传输条件下的数据,利用所述网络侧设备或所述终端,在线调整所述训练完成的神经网络模型。
  52. 根据权利要求51所述的无线传输装置,其中,所述无线传输装置还包括:
    通信模块,用于在所述网络侧设备或所述终端在训练阶段未获取各所述传输条件下训练数据的占比的情况下,
    由所述网络侧设备通过Xn接口信令、N1接口信令、N2接口信令、N3接口信令、N4接口信令、N5接口信令、N6接口信令、N7接口信令、N8接口信令、N9接口信令、N10接口信令、N11接口信令、N12接口信令、N13接口信令、N14接口信令、N15接口信令和N22接口信令中任一接口信令,从所述训练阶段的网络侧设备中获取各所述传输条件下训练数据的占比;
    或者,由所述网络侧设备通过PUCCH层1信令、PUSCH、PRACH的MSG1、PRACH的MSG3和PRACH的MSG A中任一信令,从所述训练阶段的终端中获取各所述传输条件下训练数据的占比;
    或者,由所述终端通过PC5接口信令或sidelink接口信令,从所述训练阶段的终端中获取各所述传输条件下训练数据的占比;
    或者,由所述终端通过RRC、PDCCH层1信令、PUSCH、MAC CE和SIB中任一信令,从所述训练阶段的网络侧设备中获取各所述传输条件下训练数据的占比。
  53. 一种通信设备,包括处理器,存储器及存储在所述存储器上并可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如权利要求1至12任一项所述的训练数据集获取方法的步骤,或者实现如权利要求25-38任一项所述的无线传输方法的步骤。
  54. 一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如权利要求1-12任一项所述的训练数据集获取方法,或者实现如权利要求25至38任一项所述的无线传输方法的步骤。
  55. 一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如权利要求1-12任一项所述的训练数据集获取方法,或者实现如权利要求25至38任一项所述的无线传输方法的步骤。
PCT/CN2022/092144 2021-05-11 2022-05-11 训练数据集获取方法、无线传输方法、装置及通信设备 WO2022237822A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP22806785.6A EP4339842A1 (en) 2021-05-11 2022-05-11 Training data set acquisition method, wireless transmission method, apparatus, and communication device
JP2023569669A JP2024518483A (ja) 2021-05-11 2022-05-11 トレーニングデータセット取得方法、無線伝送方法、装置及び通信機器
US18/388,635 US20240078439A1 (en) 2021-05-11 2023-11-10 Training Data Set Obtaining Method, Wireless Transmission Method, and Communications Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110513732.0A CN115329954A (zh) 2021-05-11 2021-05-11 训练数据集获取方法、无线传输方法、装置及通信设备
CN202110513732.0 2021-05-11

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/388,635 Continuation US20240078439A1 (en) 2021-05-11 2023-11-10 Training Data Set Obtaining Method, Wireless Transmission Method, and Communications Device

Publications (1)

Publication Number Publication Date
WO2022237822A1 true WO2022237822A1 (zh) 2022-11-17

Family

ID=83912888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/092144 WO2022237822A1 (zh) 2021-05-11 2022-05-11 训练数据集获取方法、无线传输方法、装置及通信设备

Country Status (5)

Country Link
US (1) US20240078439A1 (zh)
EP (1) EP4339842A1 (zh)
JP (1) JP2024518483A (zh)
CN (1) CN115329954A (zh)
WO (1) WO2022237822A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102625322A (zh) * 2012-02-27 2012-08-01 北京邮电大学 多制式智能可配的无线网络优化的实现方法
CN109359166A (zh) * 2018-10-10 2019-02-19 广东国地规划科技股份有限公司 一种空间增长动态模拟与驱动力因子贡献度同步计算方法
CN109472345A (zh) * 2018-09-28 2019-03-15 深圳百诺名医汇网络技术有限公司 一种权重更新方法、装置、计算机设备和存储介质
CN111652381A (zh) * 2020-06-04 2020-09-11 深圳前海微众银行股份有限公司 数据集贡献度评估方法、装置、设备及可读存储介质
CN112329813A (zh) * 2020-09-29 2021-02-05 中南大学 一种能耗预测用特征提取方法及系统
US20210049473A1 (en) * 2019-08-14 2021-02-18 The Board Of Trustees Of The Leland Stanford Junior University Systems and Methods for Robust Federated Training of Neural Networks


Also Published As

Publication number Publication date
JP2024518483A (ja) 2024-05-01
EP4339842A1 (en) 2024-03-20
CN115329954A (zh) 2022-11-11
US20240078439A1 (en) 2024-03-07

Similar Documents

Publication Publication Date Title
WO2022078276A1 (zh) Ai网络参数的配置方法和设备
US20220247472A1 (en) Coding method, decoding method, and device
US20220247469A1 (en) Method and device for transmitting channel state information
WO2021031812A1 (zh) 一种天线面板状态的指示方法及装置
CN114765879A (zh) Pusch传输方法、装置、设备及存储介质
US20240073882A1 (en) Beam Control Method and Apparatus for Intelligent Surface Device and Electronic Device
US20230244911A1 (en) Neural network information transmission method and apparatus, communication device, and storage medium
US20240088970A1 (en) Method and apparatus for feeding back channel information of delay-doppler domain, and electronic device
WO2023066288A1 (zh) 模型请求方法、模型请求处理方法及相关设备
WO2022237822A1 (zh) 训练数据集获取方法、无线传输方法、装置及通信设备
WO2022083619A1 (zh) 通信信息的发送、接收方法及通信设备
US20240056989A1 (en) Precoding and power allocation for access points in a cell-free communication system
WO2023169544A1 (zh) 质量信息确定方法、装置、终端及存储介质
US20240224082A1 (en) Parameter selection method, parameter configuration method, terminal, and network side device
WO2024041420A1 (zh) 测量反馈处理方法、装置、终端及网络侧设备
WO2023088387A1 (zh) 信道预测方法、装置、ue及系统
WO2023040886A1 (zh) 数据采集方法及装置
WO2024032694A1 (zh) Csi预测处理方法、装置、通信设备及可读存储介质
WO2024078405A1 (zh) 传输方法、装置、通信设备及可读存储介质
WO2023207898A1 (zh) 多trp传输的pmi的反馈方法、设备、终端及网络侧设备
WO2024017239A1 (zh) 数据采集方法及装置、通信设备
WO2023174325A1 (zh) Ai模型的处理方法及设备
WO2024012285A1 (zh) 参考信号测量方法、装置、终端、网络侧设备及介质
WO2023179540A1 (zh) 信道预测方法、装置及无线通信设备
WO2024093713A1 (zh) 资源配置方法、装置、通信设备及可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22806785

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023569669

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2022806785

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022806785

Country of ref document: EP

Effective date: 20231211