CN114492784A - Neural network testing method and device - Google Patents


Info

Publication number
CN114492784A
Authority
CN
China
Prior art keywords
neural network
performance
communication
network
dimensions
Prior art date
Legal status
Pending
Application number
CN202011163017.0A
Other languages
Chinese (zh)
Inventor
吴晔
金黄平
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority application: CN202011163017.0A
Publication: CN114492784A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Abstract

The embodiment of the application provides a method and an apparatus for testing a neural network, which can test the performance of a neural network in multiple dimensions. The method comprises the following steps: obtaining performance metric values of N dimensions of target information based on a communication connection between a first neural network and a second neural network, wherein the first neural network is trained based on the communication connection with the second neural network; and determining whether the performance of the first neural network meets the standard based on the performance metric values of the N dimensions, wherein N is an integer of 2 or more. Because the performance of the first neural network is tested against performance metric values of multiple dimensions, performance in all of those dimensions is taken into account and a balance can be struck among them, thereby improving system performance.

Description

Neural network testing method and device
Technical Field
The application relates to the field of artificial intelligence, in particular to a method and a device for testing a neural network.
Background
In order to improve the performance of communication systems, neural networks (NN) are increasingly applied in communication devices to jointly optimize the transmitting end and the receiving end, thereby improving overall performance. Taking the feedback and reconstruction of channel state information (CSI) as an example, a terminal device may quantize and compress the CSI based on a neural network to generate air interface information, and a network device may decompress the air interface information based on a neural network to reconstruct the channel.
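As a toy illustration of this pipeline, the following sketch stands in for the two neural networks with fixed linear maps. All names, dimensions, and the 1-bit quantizer are invented for illustration; a real system would use trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two neural networks: a linear "encoder"
# at the terminal device compresses and quantizes the CSI into air
# interface information, and a linear "decoder" at the network device
# reconstructs the channel from that feedback.
CSI_DIM, CODE_DIM = 32, 8
W_enc = rng.standard_normal((CODE_DIM, CSI_DIM)) / np.sqrt(CSI_DIM)
W_dec = np.linalg.pinv(W_enc)  # toy decoder: pseudo-inverse of the encoder

def terminal_compress(csi):
    """Compress and 1-bit quantize CSI into air interface information."""
    return np.sign(W_enc @ csi)

def network_reconstruct(feedback):
    """Decompress the air interface information to reconstruct the channel."""
    return W_dec @ feedback

csi = rng.standard_normal(CSI_DIM)
reconstructed = network_reconstruct(terminal_compress(csi))
feedback_overhead_bits = CODE_DIM  # one performance dimension
nmse = np.sum((csi - reconstructed) ** 2) / np.sum(csi ** 2)  # another
```

The two quantities computed at the end correspond to the two performance dimensions discussed next: feedback overhead (bits fed back) and feedback accuracy (here a normalized reconstruction error).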
However, in CSI feedback, feedback overhead and feedback accuracy are two opposing performance dimensions: keeping the feedback overhead low usually degrades the feedback accuracy, while improving the feedback accuracy increases the feedback overhead. How to obtain a neural network that achieves a good trade-off among multiple performance dimensions between a network device and a terminal device, for example a good trade-off between feedback overhead and feedback accuracy in a CSI feedback scenario, is a technical problem to be solved.
Disclosure of Invention
The embodiment of the application provides a method and an apparatus for testing a neural network, so that a good trade-off can be obtained among multiple performance dimensions, thereby improving the overall performance of the system.
In a first aspect, a method for testing a neural network is provided, the method including: obtaining performance metric values of N dimensions of target information based on a communication connection between a first neural network and a second neural network, wherein the first neural network is trained based on the communication connection with the second neural network; and determining whether the performance of the first neural network meets the standard based on the performance metric values of the N dimensions, wherein N is not less than 2 and is an integer.
Wherein the target information may be derived based on a communication connection between the first neural network and the second neural network. Performance testing of the first neural network may be performed based on N-dimensional performance metric values from the target information. The content of the target information may be different corresponding to different scenes. For example, in a feedback and reconstruction scenario of channel information, the target information may be channel information, such as CSI; in an encoding and decoding scenario, the target information may be a bitstream; in a modulation and demodulation scenario, the target information may be a symbol stream; in a precoding and detection reception scenario, the target information may be a precoded symbol stream. The embodiment of the present application does not limit the specific content of the target information.
It is understood that the performance metric values of N dimensions are obtained for different dimensions of the same target information. For example, in the feedback and reconstruction scenario of channel information, the performance metrics include, but are not limited to, feedback overhead and feedback accuracy; in an encoding and decoding scenario, the performance metrics include, but are not limited to, the decoding error probability and the complexity of the bitstream; in a modulation and demodulation scenario, the performance metrics include, but are not limited to, the symbol error rate and the complexity of the symbol stream; in a precoding and detection reception scenario, the performance metrics include, but are not limited to, two or more of throughput, decoding error probability, symbol error rate, and complexity.
The first neural network is a neural network trained based on a communication connection with the second neural network. The method in the embodiment of the present application is used to test whether the performance of the first neural network meets the standard.
The second neural network may be, for example, a reference neural network, or a neural network actually used in the communication system. Both the structure and the parameters of the reference neural network may be predefined, for example by a protocol. A neural network actually used in the communication system may be constructed based on the structure and parameters of the reference neural network, and thus has the same structure and parameters as the reference neural network.
Based on the above technical solution, performance metric values of multiple dimensions of the same target information can be obtained based on the communication connection between the first neural network and the second neural network, and the performance of the first neural network can then be evaluated based on those values. Performance in all of the dimensions is thereby taken into account and a balance can be struck among them, which helps improve system performance.
With reference to the first aspect, in certain implementations of the first aspect, the second neural network corresponds to performance indexes of N dimensions. Determining whether the performance of the first neural network meets the standard based on the performance metric values of the N dimensions includes: determining whether the performance of the first neural network meets the standard based on the performance indexes of the N dimensions and the performance metric values of the N dimensions, wherein each of the N dimensions corresponds to one performance metric value and one performance index, and the performance of the first neural network is determined to meet the standard in a case where the performance metric values of the N dimensions all reach or exceed the performance indexes of their corresponding dimensions.
The performance index proposed in the embodiment of the application is set for judging whether the performance of a neural network meets the standard. Corresponding to different scenarios and different target information, the performance index may comprise indexes of different dimensions. It should be understood that these dimensions correspond to the contents included in the performance metric values listed above; for brevity, they are not enumerated here.
When the performance metric values of the N dimensions can reach or exceed the performance index of the respective corresponding dimension, the performance of the first neural network can be determined to be up to standard.
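The check described above can be sketched as a per-dimension comparison. The dimension names and the direction of each comparison below are illustrative assumptions; for instance, feedback overhead should stay at or below its index while feedback accuracy should reach or exceed its index:

```python
def meets_standard(metrics, indexes, higher_is_better):
    """Return True iff every one of the N dimensions reaches or exceeds
    its performance index.

    metrics, indexes: dicts keyed by dimension name, one entry per dimension.
    higher_is_better: per-dimension direction of "reach or exceed".
    """
    for dim, value in metrics.items():
        if higher_is_better[dim]:
            if value < indexes[dim]:
                return False
        else:  # e.g. feedback overhead: lower values are better
            if value > indexes[dim]:
                return False
    return True

# Illustrative two-dimensional check for a CSI feedback scenario
# (dimension names and values invented for illustration).
metrics = {"feedback_overhead": 48, "feedback_accuracy": 0.92}
indexes = {"feedback_overhead": 64, "feedback_accuracy": 0.90}
direction = {"feedback_overhead": False, "feedback_accuracy": True}
```

With these values, `meets_standard(metrics, indexes, direction)` returns `True`, since the overhead stays below its index and the accuracy exceeds its index.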
With reference to the first aspect, in some implementations of the first aspect, determining that the performance of the first neural network meets the standard in a case where the performance metric values of the N dimensions all reach or exceed the performance indexes of their corresponding dimensions includes: in a case where the performance metric values of P of the N dimensions are fixed at the performance indexes of their corresponding dimensions, if the performance metric values of the remaining N-P dimensions also reach or exceed the performance indexes of their corresponding dimensions, determining that the performance of the first neural network meets the standard, wherein N > P ≥ 1 and P is an integer.
It should be understood that fixing the performance metric values of P dimensions at the performance indexes of their corresponding dimensions and evaluating the performance metric values of the remaining N-P dimensions is one possible implementation of determining whether the performance metric values of all N dimensions meet the standard. The value of P may be any value from 1 to N-1, and may also be traversed over the range from 1 to N-1. The embodiments of the present application do not limit this.
One possible design is that, in a case where the performance metric values of any P of the N dimensions are fixed at the performance indexes of their corresponding dimensions, if the performance metric values of the remaining N-P dimensions also reach or exceed the performance indexes of their corresponding dimensions, it is determined that the performance of the first neural network meets the standard.
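One way to sketch this design is to pin each choice of P dimensions at their indexes and evaluate the rest, traversing P where desired. The `measure` callback and the "higher is better" convention below are invented for illustration:

```python
from itertools import combinations

def passes_with_fixed_dims(dims, indexes, measure, p):
    """For every choice of p dimensions pinned at their performance
    indexes, check that the measured values of the remaining N-p
    dimensions reach or exceed their indexes (higher assumed better
    here for brevity). `measure(fixed)` is a hypothetical callback that
    operates the network with the `fixed` dimensions held at their
    indexes and returns {dim: value} for the other dimensions."""
    for fixed in combinations(dims, p):
        measured = measure(fixed)
        if any(measured[d] < indexes[d] for d in measured):
            return False
    return True

# Illustrative traversal of P over 1..N-1 with an invented callback that
# reports every non-fixed dimension at 1.2 against an index of 1.0.
dims = ["overhead", "accuracy", "complexity"]
indexes = {d: 1.0 for d in dims}
measure = lambda fixed: {d: 1.2 for d in dims if d not in fixed}
all_pass = all(passes_with_fixed_dims(dims, indexes, measure, p)
               for p in range(1, len(dims)))
```

Here `all_pass` is `True` because every remaining dimension exceeds its index for every choice of fixed dimensions.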
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: and training the first neural network based on the communication connection with the second neural network and the performance indexes of the N dimensionalities corresponding to the second neural network.
The first neural network may be obtained by training the first neural network based on the communication connection with the second neural network, with the performance indexes of the N dimensions corresponding to the second neural network as optimization targets.
With reference to the first aspect, in certain implementations of the first aspect, the first neural network is one of M neural networks configured in a first communication device to be networked, the performance indexes of the N dimensions are one of M standardized sets of performance indexes, the M sets of performance indexes correspond to the M neural networks, and each set of performance indexes is used to test the performance of a corresponding one of the neural networks, wherein M is not less than 1 and is an integer. The method further includes: allowing the first communication device to access the network for use in a case where the performance of each of the M neural networks reaches its corresponding set of performance indexes; or not allowing the first communication device to access the network for use in a case where the performance of at least one of the M neural networks does not reach its corresponding set of performance indexes.
The M neural networks in the first communication device may be tested according to the testing methods for the first neural network listed above. For each neural network, a set of performance indexes may be defined for the test. If each of the M neural networks reaches its corresponding set of performance indexes, the performance of the first communication device can be determined to meet the standard, and the first communication device is allowed to access the network for use.
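The network-access decision described above can be sketched as follows; the dicts and the "higher is better" convention are illustrative assumptions:

```python
def admit_device(all_metrics, index_sets):
    """Decide network access for a device configured with M neural
    networks. all_metrics[i] and index_sets[i] are dicts describing the
    i-th network; every metric is assumed 'higher is better' for
    brevity. The device is admitted only if each of its M networks
    reaches its own set of performance indexes."""
    for metrics, indexes in zip(all_metrics, index_sets):
        if any(metrics[d] < indexes[d] for d in indexes):
            return False  # at least one network failed: deny access
    return True

# Illustrative decision for a device with M = 2 networks.
admitted = admit_device(
    [{"accuracy": 0.95}, {"throughput": 120.0}],
    [{"accuracy": 0.90}, {"throughput": 100.0}],
)
```

In this example both networks exceed their indexes, so `admitted` is `True`; a single failing network would deny access for the whole device.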
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: and under the condition that the performances of the M neural networks reach the standard, storing the corresponding relation between the M neural networks and the M sets of performance indexes into the first communication equipment.
Storing the correspondence between the M neural networks and the M sets of performance indexes in the first communication device makes it convenient, at a subsequent network access, to select a matching neural network to work with according to the neural network used by the peer communication device.
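A minimal sketch of such a stored correspondence, with invented identifiers, might look like:

```python
# Hypothetical on-device table pairing each verified neural network with
# the set of performance indexes it was tested against (all identifiers
# are invented for illustration).
correspondence = {
    "nn_0": "index_set_A",
    "nn_1": "index_set_B",
}

def select_matching_network(peer_index_set):
    """On a later network access, pick the stored neural network whose
    set of performance indexes matches the one used by the peer
    communication device; return None if no stored network matches."""
    for nn, index_set in correspondence.items():
        if index_set == peer_index_set:
            return nn
    return None
```

For example, if the peer indicates `"index_set_B"`, the device would select `"nn_1"` to work with.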
With reference to the first aspect, in certain implementations of the first aspect, the second neural network is one of Z reference neural networks used for performance testing of the plurality of neural networks; m is more than or equal to Z and more than or equal to 1, and Z is an integer.
The Z reference neural networks may be used for the performance testing of the M neural networks described above. Z < M means that the same reference neural network may correspond to two or more sets of performance indexes; Z = M means that each reference neural network corresponds to exactly one set of performance indexes.
One possible scenario is that each of the Z reference neural networks corresponds to one of the M sets of performance indicators and one of the M neural networks.
In this case, the M reference neural networks, the M neural networks, and the M sets of performance indicators correspond one to one. The corresponding relation between the M neural networks and the M sets of performance indexes can also be converted into the corresponding relation between the M neural networks and the M reference neural networks.
A correspondence between the Z reference neural networks and the M sets of performance indexes may be pre-stored in a second communication device, where the second communication device is a device that communicates with the first communication device in the communication system.
The second communication device may be a communication device already in network use in the communication system. The neural network in the second communication device may be constructed based on the structure and parameters of a reference neural network (e.g., one of the Z reference neural networks described above), and thus has the same structure and parameters as that reference neural network. Therefore, the neural network in the second communication device corresponds to one of the M sets of performance indexes described above.
By pre-storing the corresponding relation between the Z reference neural networks and the M sets of performance indexes in the second communication equipment, the first communication equipment can conveniently determine the matched neural network to work by sending the indication information of the performance indexes to the second communication equipment when communicating with the newly-accessed communication equipment, such as the first communication equipment.
In a second aspect, a communication method is provided, the method including: generating first information, wherein the first information is used to determine a fourth neural network matching a third neural network, the third neural network and the fourth neural network are respectively configured in two communication devices having a communication connection, and the third neural network and the fourth neural network correspond to the same set of performance indexes; and sending the first information.
The method provided by the second aspect may be executed by a communication device (for example, referred to as a third communication device), or may also be executed by a component (such as a chip, a chip system, or the like) configured in the communication device. The embodiments of the present application do not limit this.
In a third aspect, a communication method is provided, the method including: receiving first information, wherein the first information is used to determine a fourth neural network matching a third neural network, the third neural network and the fourth neural network are respectively configured in two communication devices having a communication connection, and the third neural network and the fourth neural network correspond to the same set of performance indexes; and determining the fourth neural network according to the first information.
The method provided by the third aspect may be performed by a communication device (for example, denoted as a fourth communication device), or may also be performed by a component (such as a chip, a system of chips, and the like) configured in the communication device. The embodiments of the present application do not limit this.
The third neural network and the fourth neural network corresponding to the same set of performance indexes may specifically mean that target information in communication connection between the third neural network and the fourth neural network satisfies the set of performance indexes corresponding to the target information.
It should be understood that the neural network in the third communication device may be constructed based on the structure and parameters of a reference neural network (e.g., one of the Z reference neural networks described above), and thus has the same structure and parameters as that reference neural network; or it may be trained based on a communication connection with a reference neural network (e.g., one of the Z reference neural networks described above). Therefore, the neural network in the third communication device corresponds to one of the M sets of performance indexes described above.
The third communication device may send, to the fourth communication device, first information for determining a fourth neural network matching the currently used third neural network, based on the communication connection with the fourth communication device, so that the fourth communication device selects the matching fourth neural network to work with. Because the third neural network and the fourth neural network correspond to the same set of performance indexes, the requirements of those performance indexes on the various performance dimensions can be satisfied when the two neural networks work jointly, which helps improve system performance.
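This exchange can be sketched as follows, assuming (as one of the options described for the first information) that it indicates the set of performance indexes; all identifiers are invented for illustration:

```python
def make_first_info(third_nn_index_set):
    """Third device: generate first information indicating the set of
    performance indexes that its currently used (third) neural network
    corresponds to."""
    return {"index_set": third_nn_index_set}

def pick_fourth_network(first_info, local_table):
    """Fourth device: choose the local neural network corresponding to
    the same set of performance indexes as the peer's network, or None
    if no such network is stored."""
    return local_table.get(first_info["index_set"])

# Illustrative exchange: the fourth device keeps a table of its own
# networks keyed by index set (names invented for illustration).
local_table = {"index_set_A": "nn_a", "index_set_B": "nn_b"}
chosen = pick_fourth_network(make_first_info("index_set_B"), local_table)
```

Here the fourth device selects `"nn_b"`, the network verified against the same index set as the peer's third network.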
With reference to the second aspect or the third aspect, in some possible implementations, the first information is used to indicate a set of performance indicators corresponding to a third neural network.
The first information may be, for example, an indication of the set of performance indexes, an indication of a reference neural network corresponding to the third neural network, or an indication of the fourth neural network. The embodiments of the present application do not limit this.
With reference to the second aspect or the third aspect, in some possible implementations, a third neural network is configured in the network device, and a fourth neural network is configured in the terminal device.
That is, the third communication device described above may be a network device, and the fourth communication device may be a terminal device.
With reference to the second aspect or the third aspect, in some possible implementations, the third neural network is configured in the terminal device, and the fourth neural network is configured in the network device.
That is, the third communication device described above may be a terminal device, and the fourth communication device may be a network device.
In a fourth aspect, a testing apparatus for a neural network is provided. In one design, the apparatus may include a module corresponding to one or more of the methods/operations/steps/actions described in the first aspect, where the module may be implemented by hardware circuit, software, or a combination of hardware circuit and software. In one design, the apparatus may include an acquisition module and a determination module.
Illustratively, the obtaining module may be configured to obtain N-dimensional performance metric values of the target information based on a communication connection of the first neural network and the second neural network; the first neural network is trained based on communication connection with the second neural network; the determination module may be configured to determine whether performance of the first neural network meets a standard based on the N-dimensional performance metric values; n is not less than 2 and is an integer.
In a fifth aspect, a testing apparatus for a neural network is provided. The apparatus comprises a processor for implementing the method described in the first aspect above. The apparatus may also include a memory for storing instructions and data. The memory is coupled to the processor, and the processor, when executing the instructions stored in the memory, may implement the method described in the first aspect above. The apparatus may also include a communication interface for the apparatus to communicate with other devices, such as a transceiver, a circuit, a bus, a module, or another type of communication interface; the other devices may be, for example, network devices.
In one possible arrangement, the apparatus comprises:
a memory for storing program instructions;
a processor, coupled to the memory, configured to execute the program instructions to implement the method described in the first aspect, and to communicate with the second neural network through the communication interface.
In a sixth aspect, a communications apparatus is provided. In one design, the apparatus may include a module corresponding to one or more of the methods/operations/steps/actions described in the second aspect, where the module may be implemented by hardware circuit, software, or a combination of hardware circuit and software. In one design, the apparatus may include a processing module and a communication module.
Illustratively, the processing module may be operative to generate first information for determining a fourth neural network that matches the third neural network; the third neural network and the fourth neural network are respectively configured in two communication devices with communication connection, and the third neural network and the fourth neural network correspond to the same set of performance indexes; the communication module may be configured to transmit the first information.
In a seventh aspect, a communication apparatus is provided. The apparatus comprises a processor for implementing the method described in the second aspect above. The apparatus may also include a memory for storing instructions and data. The memory is coupled to the processor, and the processor, when executing the instructions stored in the memory, may implement the method described in the second aspect above. The apparatus may also include a communication interface for the apparatus to communicate with other devices, such as a transceiver, a circuit, a bus, a module, or another type of communication interface; the other devices may be, for example, network devices.
In one possible arrangement, the apparatus comprises:
a memory for storing program instructions;
a processor configured to send first information through the communication interface, the first information being used to determine a fourth neural network matching the third neural network. For the specific content of the first information, reference may be made to the description of the first information in the second aspect; it is not specifically limited here.
In an eighth aspect, a communication apparatus is provided. In one design, the apparatus may include a module corresponding to one or more of the methods/operations/steps/actions described in the third aspect, where the module may be implemented by hardware circuit, software, or a combination of hardware circuit and software. In one design, the apparatus may include a processing module and a communication module.
Illustratively, the communication module may be operable to receive first information for determining a fourth neural network that matches the third neural network; the third neural network and the fourth neural network are respectively configured in two communication devices with communication connection, and the third neural network and the fourth neural network correspond to the same set of performance indexes; the processing module may be operative to determine a fourth neural network based on the first information.
In a ninth aspect, a communication apparatus is provided. The apparatus comprises a processor for implementing the method described in the third aspect above. The apparatus may also include a memory for storing instructions and data. The memory is coupled to the processor, and the processor, when executing the instructions stored in the memory, may implement the method described in the third aspect above. The apparatus may also include a communication interface for the apparatus to communicate with other devices, such as a transceiver, a circuit, a bus, a module, or another type of communication interface; the other devices may be, for example, network devices.
In one possible arrangement, the apparatus comprises:
a memory for storing program instructions;
a processor configured to receive first information through the communication interface, the first information being used to determine a fourth neural network matching the third neural network. It should be understood that for the specific content of the first information, reference may be made to the description of the first information in the second aspect; it is not specifically limited here.
In a tenth aspect, an embodiment of the present application further provides a computer-readable storage medium. The computer storage medium has stored thereon a computer program (which may also be referred to as code, or instructions) that, when executed by a processor, causes the method of any of the possible implementations of the first to third aspects described above to be performed.
In an eleventh aspect, the present application further provides a computer program product. The computer program product comprises: computer program (also called code, or instructions), which when executed, causes the method in any of the possible implementations of the first to third aspects described above to be performed.
In a twelfth aspect, an embodiment of the present application further provides a communication system. The communication system may comprise the aforementioned third and fourth communication devices.
Drawings
Fig. 1 is a schematic diagram of a wireless communication system provided by an embodiment of the present application;
fig. 2 is a schematic diagram of an architecture of a neural network respectively configured in a terminal device and a network device according to an embodiment of the present application;
fig. 3 and fig. 4 are schematic flow charts of a testing method of a neural network provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a test curve generated based on M sets of performance indicators provided by an embodiment of the present application;
fig. 6 is a schematic flow chart of a communication method provided by an embodiment of the present application;
fig. 7 and 8 are schematic block diagrams of a testing apparatus of a neural network provided by an embodiment of the present application;
fig. 9 and 10 are schematic block diagrams of a communication device provided by an embodiment of the present application;
fig. 11 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The communication method provided by the application can be applied to various communication systems, for example: a Long Term Evolution (LTE) system, an LTE frequency division duplex (FDD) system, an LTE time division duplex (TDD) system, a Universal Mobile Telecommunications System (UMTS), a Worldwide Interoperability for Microwave Access (WiMAX) communication system, a fifth generation (5G) mobile communication system, or a new radio (NR) access technology. The 5G mobile communication system may include a non-standalone (NSA) network and/or a standalone (SA) network.
The communication method provided by the application can also be applied to machine type communication (MTC), Long Term Evolution-Machine (LTE-M), a device to device (D2D) network, a machine to machine (M2M) network, an Internet of things (IoT) network, or other networks. The IoT network may comprise, for example, an Internet of vehicles network. The communication modes in the Internet of vehicles system are collectively referred to as vehicle to everything (V2X, where X may represent anything); for example, V2X may include: vehicle to vehicle (V2V) communication, vehicle to infrastructure (V2I) communication, vehicle to pedestrian (V2P) communication, or vehicle to network (V2N) communication, etc.
The communication method provided by the application can also be applied to future communication systems, such as a sixth generation mobile communication system and the like. This is not a limitation of the present application.
In the embodiment of the present application, the network device may be any device having a wireless transceiving function. Network devices include, but are not limited to: an evolved NodeB (eNB), a radio network controller (RNC), a NodeB (NB), a base station controller (BSC), a base transceiver station (BTS), a home base station (e.g., a home evolved NodeB or home NodeB, HNB), a baseband unit (BBU), an access point (AP) in a wireless fidelity (WiFi) system, a wireless relay node, a wireless backhaul node, a transmission point (TP), or a transmission and reception point (TRP). The network device may also be a gNB or a transmission point (TRP or TP) in a 5G system such as NR, one antenna panel or a group of antenna panels (including multiple antenna panels) of a base station in a 5G system, or a network node constituting a gNB or a transmission point, such as a baseband unit (BBU) or a distributed unit (DU).
In some deployments, the gNB may include a Centralized Unit (CU) and a DU. The gNB may also include an Active Antenna Unit (AAU). The CU implements part of the function of the gNB, and the DU implements part of the function of the gNB, for example, the CU may be responsible for processing non-real-time protocols and services, for example, may implement functions of a Radio Resource Control (RRC) layer, a Service Data Adaptation Protocol (SDAP) layer, and/or a Packet Data Convergence Protocol (PDCP) layer. The DU may be responsible for handling physical layer protocols and real-time services. For example, the functions of a Radio Link Control (RLC) layer, a Medium Access Control (MAC) layer, and a Physical (PHY) layer may be implemented. One DU may be connected to only one CU or to a plurality of CUs, and one CU may be connected to a plurality of DUs, and communication between CUs and DUs may be performed through the F1 interface. The AAU may implement portions of the physical layer processing functions, radio frequency processing, and active antenna related functions. Since the information of the RRC layer is eventually delivered to the PHY layer to become the information of the PHY layer, or is converted from the information of the PHY layer, under this architecture, the higher layer signaling, such as the RRC layer signaling, can also be considered to be sent by the DU, or sent by the DU + AAU.
It is to be understood that the network device may be a device comprising one or more of a CU node, a DU node, an AAU node. In addition, the CU may be divided into network devices in an access network (RAN), or may be divided into network devices in a Core Network (CN), which is not limited in this application.
The network device provides a service for a cell, and a terminal device communicates with the cell through transmission resources (e.g., frequency domain resources, or spectrum resources) allocated by the network device. The cell may belong to a macro base station (e.g., a macro eNB or a macro gNB), or may belong to a base station corresponding to a small cell (small cell). The small cells here may include: metro cells (metro cell), micro cells (micro cell), pico cells (pico cell), femto cells (femto cell), and the like. These small cells have the characteristics of small coverage area and low transmission power, and are suitable for providing high-rate data transmission services.
In the embodiments of the present application, a terminal device may also be referred to as a User Equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or a user equipment.
The terminal device may be a device providing voice/data connectivity to a user, e.g., a handheld device or a vehicle-mounted device with wireless connection capability. Currently, some examples of terminals are: a mobile phone (mobile phone), a tablet computer (pad), a computer with wireless transceiving function (e.g., a laptop or a palmtop computer), a Mobile Internet Device (MID), a Virtual Reality (VR) device, an Augmented Reality (AR) device, a wireless terminal in industrial control (industrial control), a wireless terminal in self driving (self driving), a wireless terminal in remote medical (remote medical), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a 5G network, or a terminal device in a future evolved Public Land Mobile Network (PLMN), etc.
A wearable device may also be referred to as a wearable smart device, which is a general term for devices that can be worn, developed by applying wearable technology to the intelligent design of daily wear, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the clothing or accessories of the user. A wearable device is not only a hardware device, but also realizes powerful functions through software support, data interaction, and cloud interaction. Broadly speaking, wearable smart devices include full-featured, large-sized devices that can implement complete or partial functions without relying on a smart phone, such as smart watches or smart glasses, as well as devices that focus only on a certain type of application function and need to be used in cooperation with other devices such as smart phones, for example, various smart bracelets and smart jewelry for physical sign monitoring.
In addition, the terminal device may also be a terminal device in an internet of things (IoT) system. IoT is an important component of future information technology development; its main technical feature is connecting things to a network through communication technologies, thereby realizing an intelligent network of human-machine interconnection and thing-thing interconnection. IoT technology can achieve massive connection, deep coverage, and terminal power saving through, for example, narrowband (NB) technology.
In addition, the terminal device may also include sensors such as those in an intelligent printer, a train detector, or a gas station, and its main functions include collecting data (for some terminal devices), receiving control information and downlink data from the network device, and sending electromagnetic waves to transmit uplink data to the network device.
For the convenience of understanding the embodiments of the present application, a communication system suitable for the communication method provided by the embodiments of the present application will be first described in detail with reference to fig. 1. As shown, the communication system 100 may include at least one network device, such as the network device 101 shown in fig. 1; the communication system 100 may further comprise at least one terminal device, such as the terminal devices 102 to 107 shown in fig. 1. The terminal devices 102 to 107 may be mobile or stationary. Network device 101 and one or more of terminal devices 102-107 may each communicate over a wireless link. Each network device may provide communication coverage for a particular geographic area and may communicate with terminal devices located within that coverage area. For example, the network device may send configuration information to the terminal device, and the terminal device may send uplink data to the network device based on the configuration information; for another example, the network device may send downlink data to the terminal device. Thus, the network device 101 and the terminal devices 102 to 107 in fig. 1 constitute one communication system.
Alternatively, the terminal devices may communicate directly with each other. Direct communication between terminal devices may be achieved, for example, using device to device (D2D) technology or the like. As shown in the figure, direct communication between terminal devices 105 and 106, and between terminal devices 105 and 107 may be performed using D2D technology. Terminal device 106 and terminal device 107 may communicate with terminal device 105 separately or simultaneously.
The terminal apparatuses 105 to 107 may also communicate with the network apparatus 101, respectively. For example, it may communicate directly with network device 101, such as terminal devices 105 and 106 in the figure may communicate directly with network device 101; it may also communicate indirectly with network device 101, such as terminal device 107 communicating with network device 101 via terminal device 105.
It should be understood that fig. 1 exemplarily shows one network device and a plurality of terminal devices, and communication links between the respective communication devices. Alternatively, the communication system 100 may include a plurality of network devices, and each network device may include other numbers of terminal devices within its coverage area, such as more or fewer terminal devices. This is not a limitation of the present application.
The above-described respective communication devices, such as the network device 101 and the terminal devices 102 to 107 in fig. 1, may be configured with a plurality of antennas. The plurality of antennas may include at least one transmit antenna for transmitting signals and at least one receive antenna for receiving signals. Additionally, each communication device may additionally include a transmitter and a receiver, each of which may include various components associated with signal transmission and reception (e.g., processors, modulators, multiplexers, demodulators, demultiplexers, antennas, etc.), as will be appreciated by one skilled in the art. Therefore, the network equipment and the terminal equipment can communicate through the multi-antenna technology.
Optionally, the wireless communication system 100 may further include other network entities such as a network controller, a mobility management entity, and the like, which is not limited thereto.
Both the network device and the terminal device may operate using a neural network. The neural network may be used to generate a transmitted signal or to process a signal to be transmitted; it may also be used to receive and interpret received signals. For example, in a modulation and demodulation scenario, a neural network in the network device may be used to modulate a signal, and a neural network in the terminal device may be used to demodulate the modulated signal; in a channel encoding and decoding scenario, a neural network in the network device may be used to encode a signal, and a neural network in the terminal device may be used to decode (also referred to as decipher) the encoded signal; in a precoding and detection reception scenario, a neural network in the network device may be used to precode signals, and a neural network in the terminal device may be used to detect and receive signals; in the feedback and reconstruction scenario of channel information, a neural network in the terminal device may be used to quantize and compress the channel information, and a neural network in the network device may be used to reconstruct the channel information.
The neural network in the network device and the neural network in the terminal device can be used in pairs to form a dual network (or, referred to as a dual architecture), so that the effect of joint optimization can be achieved. Dual networks include, but are not limited to, neural networks for transmitting signals and neural networks for receiving signals that are jointly constructed based on an auto-encoder (auto-encoder) structure, variations of the auto-encoder structure, or combinations of other neural network structures.
An autoencoder is a neural network that can be applied end-to-end. Under the self-encoder structure, the neural network at the end where the signal is transmitted may be referred to as an encoder neural network (encoder neural network), and the neural network at the end where the signal is received may be referred to as a decoder (decoder) neural network (decoder neural network). The two are mutually restricted and work cooperatively. It is noted that the encoder and decoder described herein are not the same as the encoder and decoder described above for channel encoding and decoding, respectively.
Fig. 2 is a schematic diagram of an architecture of a neural network respectively configured in a terminal device and a network device according to an embodiment of the present application. Fig. 2 illustrates a training process and a testing process of a terminal device and a network device, which are respectively configured with a neural network, on a signal by taking a feedback and reconstruction scenario of channel information as an example.
As shown in fig. 2, the neural network configured in the terminal device may include a feature encoding module, a quantization module, and an entropy encoding module. A neural network configured in a network device may include an entropy decoding module and a feature decoding module.
In the training process, one training of the neural network may include one forward propagation (forward propagation) and one backward propagation (back propagation), and the training process of the neural network model may include at least one training.
In the forward propagation, the terminal device may perform feature extraction on the channel information for training through the feature coding module to obtain a feature coefficient of the channel information, and then input the extracted feature coefficient of the channel information to the quantization module. The quantization module may perform bit quantization on the characteristic coefficients of the channel information to obtain a quantized bit stream. The quantized bit stream may be input to an entropy coding module, which may further bit compress the quantized bit stream. The compressed bit stream may be sent to the network device over the air interface after being processed by other physical layers (e.g., modulation, etc.).
After receiving the air interface information, the network device may obtain a compressed bitstream through physical layer processing. It should be understood that this compressed bitstream may correspond to the compressed bitstream sent by the terminal device. The compressed bitstream is input to the neural network configured in the network device, and the channel information can be reconstructed. Illustratively, the compressed bitstream is input to the entropy decoding module, which may be used to perform bit decompression on the compressed bitstream to obtain a decompressed bitstream. The decompressed bitstream is input to the feature decoding module, which may reconstruct the channel information based on the decompressed bitstream. It should be understood that the channel information reconstructed by the feature decoding module may correspond to the channel information input to the neural network at the transmitting end described above. The network device may obtain gradient information (e.g., gradients) of the neural network through a gradient descent method, and feed the gradient information back to the terminal device in the back propagation, thereby completing one training of the neural network.

In the testing process, the terminal device may perform feature extraction on the channel information for testing through the feature encoding module to obtain feature coefficients of the channel information, and then input the extracted feature coefficients to the quantization module. The quantization module may perform bit quantization on the feature coefficients to obtain a quantized bit stream. The quantized bit stream may be input to the entropy encoding module, which may further bit-compress the quantized bit stream. The compressed bit stream may be sent to the network device over the air interface after modulation and other processing.
After receiving the air interface information, the network device may obtain a compressed bit stream through processing such as demodulation. It should be understood that the compressed bitstream may correspond to the compressed bitstream transmitted by the terminal device. The compressed bitstream may be input to an entropy decoding module, which may be configured to perform bit decompression on the compressed bitstream to obtain a decompressed bitstream. The decompressed bitstream is input to a feature decoding module. The feature decoding module may reconstruct the channel information based on the decompressed bitstream. It should be understood that the channel information reconstructed by the feature decoding module may correspond to the channel information sent to the originating neural network above, and may be referred to as a recovered value of the channel information.
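The bit quantization step performed by the quantization module above can be sketched as follows. This is an illustrative assumption (a uniform scalar quantizer over coefficients in [0, 1)), not the quantizer specified by the application; the function names and bit width are invented for demonstration.

```python
import numpy as np

def quantize(coeffs, n_bits=4):
    """Uniform scalar quantization of feature coefficients assumed in [0, 1):
    each coefficient is mapped to one of 2**n_bits integer indices."""
    levels = 2 ** n_bits
    return np.clip((coeffs * levels).astype(int), 0, levels - 1)

def dequantize(indices, n_bits=4):
    """Reconstruct each coefficient as the midpoint of its quantization interval."""
    levels = 2 ** n_bits
    return (indices + 0.5) / levels

# Round-trip: the error per coefficient is bounded by half the step size, 1/(2*levels).
coeffs = np.array([0.12, 0.5, 0.93])
recovered = dequantize(quantize(coeffs))
```

With 4 bits per coefficient the maximum round-trip error is 1/32, illustrating the overhead-versus-accuracy trade-off discussed later: more bits shrink the error but raise the feedback overhead.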
The neural network may be tested based on the channel information reconstructed by the feature decoding module and the channel information for testing (i.e., an example of the true value). For example, the channel information input by the terminal device to the feature encoding module may be a channel matrix H. The channel information reconstructed by the feature decoding module in the network device may then be the recovered value Ĥ of the channel matrix H.
For another example, the channel information input by the terminal device to the feature encoding module may be channel characteristic information derived from the channel matrix H, such as an eigenvector obtained by performing singular value decomposition on the channel matrix H. The channel information reconstructed by the feature decoding module in the network device may be channel characteristic information, such as the above eigenvector, or the recovered value Ĥ of the channel matrix H.
It can be seen that the channel information input to the neural network by the terminal device and the channel information reconstructed by the network device based on the neural network are not necessarily completely symmetrical. However, the channel information input into the neural network by the terminal device is correlated with the channel information reconstructed by the network device based on the neural network, and as in the above example, the channel characteristic information and the channel matrix can be converted into each other through mathematical transformation.
It should be understood that the architecture of the neural network shown above in connection with fig. 2 is merely an example, and should not constitute any limitation to the present application. For example, in some possible designs, the neural network configured in the terminal device may not include an entropy encoding module, and the neural network configured in the network device may not include an entropy decoding module. In still other possible designs, the neural network configured in the terminal device and the neural network configured in the network device may further include other more modules. The embodiment of the present application is not limited to this.
In the feedback and reconstruction scenario of channel information, the feedback overhead and the feedback accuracy of the channel information are two important performances. However, the two are conflicting performances in two dimensions: when the feedback overhead is high, the feedback accuracy is generally high; when the feedback overhead is low, the feedback accuracy is usually low. Typically, testing of a neural network focuses only on the performance in one dimension, without taking the other dimensions into account. For example, in the feedback and reconstruction scenario of channel information, testing of the neural network usually focuses on the feedback overhead, which may result in the feedback accuracy not being guaranteed. It is therefore desirable for a neural network to achieve an efficient trade-off between feedback overhead and feedback accuracy, or more generally, a good balance between the performances in multiple dimensions, thereby improving the overall performance.
In view of this, the present application provides a method and an apparatus for testing a neural network. In the method, a neural network is tested based on performance metric values of multiple dimensions and performance indexes of multiple dimensions. And the performance of the neural network is considered to reach the standard under the condition that the performance metric values of the multiple dimensions respectively reach the performance indexes of the corresponding dimensions. Therefore, better balance can be obtained among the performances of multiple dimensions, and the overall performance is favorably improved.
Before describing the methods provided by the embodiments of the present application, a brief description of several terms referred to herein will be provided to facilitate an understanding of the embodiments of the present application.
1. Neural network: as an important branch of artificial intelligence, a neural network is a network structure that performs information processing by emulating the behavioral characteristics of animal neural networks. The structure of a neural network is formed by a large number of nodes (or neurons) connected to one another, and the purpose of processing information is achieved by learning and training on input information based on a specific operation model. A neural network comprises an input layer, a hidden layer, and an output layer. The input layer is responsible for receiving input signals, the output layer is responsible for outputting the calculation results of the neural network, and the hidden layer is responsible for complex functions such as feature expression; the function of the hidden layer is represented by a weight matrix and corresponding activation functions.
A deep neural network (DNN) typically has a multi-layer structure. Increasing the depth and width of a neural network can improve its expression capability and provide stronger information extraction and abstract modeling capability for complex systems.
The DNN may be constructed in various manners, and may include, but is not limited to, a Recurrent Neural Network (RNN), a Convolutional Neural Network (CNN), a fully-connected neural network, and the like.
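The input layer/hidden layer/output layer structure described above can be sketched as a minimal fully-connected forward pass. The layer sizes, random weights, and ReLU activation below are illustrative assumptions, not parameters from the application.

```python
import numpy as np

def relu(x):
    """A common activation function applied at hidden layers."""
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Propagate input x through each (weight matrix, bias) pair.
    Hidden layers apply an activation; the output layer is left linear."""
    for i, (w, b) in enumerate(zip(weights, biases)):
        x = w @ x + b
        if i < len(weights) - 1:  # activation on hidden layers only
            x = relu(x)
    return x

# 4-dimensional input -> 8-unit hidden layer -> 2-dimensional output.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((2, 8))]
biases = [np.zeros(8), np.zeros(2)]
y = forward(np.ones(4), weights, biases)
```

Here the weight matrices and activation function together play the role the text assigns to the hidden layer: a learned feature expression between input and output.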
2. Training (training): also known as learning. Training refers to a process in which a model learns to perform a particular task by optimizing parameters in the model, such as weight values. The embodiments of the present application are applicable to, but not limited to, one or more of the following training methods: supervised learning, unsupervised learning, reinforcement learning, transfer learning, and the like. Supervised learning is training using a set of correctly labeled training samples (correctly labeled meaning that each sample has an expected output value). Unsupervised learning, as opposed to supervised learning, refers to a method of automatically classifying or clustering input data without previously labeled training samples.
3. Testing (test): also referred to as verification or prediction. After the neural network is trained, the trained neural network may be tested. Specifically, a task may be executed by the trained neural network, and based on the result output by the neural network, it is evaluated whether the trained neural network can meet the performance requirement.
In the embodiment of the present application, the testing specifically may include: based on the communicative connection of a neural network to be tested (a first neural network as shown below) to another neural network (a second neural network as shown below), performance metric values for a plurality of dimensions are obtained. And comparing the performance metric values with the performance indexes of the corresponding dimensions respectively to see whether the performance indexes are respectively reached or exceeded.
4. Performance index: an index set for judging whether the performance of the neural network reaches the standard. Corresponding to the different scenarios to which the neural network is applied, the performance indexes may include, but are not limited to: decoding error probability and complexity, corresponding to a channel encoding and decoding scenario; symbol error rate and complexity, corresponding to a constellation modulation and demodulation scenario; two or more of throughput, decoding error probability, symbol error rate, and complexity, corresponding to a precoding and detection reception scenario; and feedback overhead and feedback accuracy, corresponding to a feedback and reconstruction scenario of channel information. For the sake of brevity, examples are not listed one by one.
For a certain performance, the performance index may be an upper bound or a lower bound. The upper and lower bounds are understood to be maximum and minimum values set within a performance acceptable range. For the upper bound, if the performance metric is above the upper bound, the performance is considered to be poor and unacceptable; for the lower bound, if the performance metric is below the lower bound, the performance is considered to be poor and unacceptable.
For example, for feedback overhead, if measured in overhead bits, the lower the better, and the performance index may be an upper bound; if measured in compression rate, the lower the better, and the performance index may likewise be an upper bound.
As another example, for feedback accuracy, if measured in MSE, the lower the better, and the performance index may be an upper bound; if measured in terms of correlation coefficient, the higher the better, and the performance index may be a lower bound.
It is noted that in the following, when describing the relationship between the performance metric value and the performance index, different understandings are possible based on different definitions of the performance index. If the performance metric value reaches the performance index, it may mean that the performance metric value is equal to the performance index. If the performance metric value exceeds the performance index, it may mean that the performance metric value is below an upper-bound performance index, or above a lower-bound performance index.
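The upper-bound/lower-bound comparison described above can be sketched as a small pass/fail check across N dimensions. The function names and the example thresholds (MSE upper bound 0.1, correlation lower bound 0.9) are hypothetical, introduced only for illustration.

```python
def meets_index(metric_value, index_value, bound_type):
    """True if the metric value reaches or exceeds the performance index.
    For an 'upper' bound the metric must not exceed the index (e.g. MSE,
    overhead bits); for a 'lower' bound it must not fall below it
    (e.g. correlation coefficient)."""
    if bound_type == "upper":
        return metric_value <= index_value
    if bound_type == "lower":
        return metric_value >= index_value
    raise ValueError("bound_type must be 'upper' or 'lower'")

def performance_up_to_standard(metrics, indexes):
    """The performance meets the standard only if all N dimensions
    reach their respective per-dimension indexes."""
    return all(meets_index(m, idx, bt) for m, (idx, bt) in zip(metrics, indexes))

# N = 2 dimensions: MSE (upper bound 0.1) and correlation (lower bound 0.9).
result = performance_up_to_standard(
    metrics=[0.05, 0.95],
    indexes=[(0.1, "upper"), (0.9, "lower")],
)
```

The `all(...)` aggregation mirrors the method of the application: the first neural network is only considered up to standard when every dimension reaches its index, so no single dimension can be optimized at the expense of the others.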
5. Metric function: can be understood as a function for characterizing performance. In the embodiments of the present application, a metric function may be used to obtain a performance metric value. By way of example and not limitation, the metric function may include, but is not limited to, one or more of the following: mean square error (MSE), normalized mean square error (NMSE), mean absolute error (MAE), maximum absolute error (also called absolute error bound), correlation coefficient, cross entropy, mutual information, symbol error rate, or bit error rate, among others.
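A few of the metric functions listed above can be written out explicitly for a true channel vector x and its recovered value x_hat. The formulas below are the common textbook definitions; the application does not fix a particular formula, so treat these as illustrative assumptions.

```python
import numpy as np

def mse(x, x_hat):
    """Mean square error: expectation of the squared difference."""
    return np.mean(np.abs(x - x_hat) ** 2)

def nmse(x, x_hat):
    """Normalized mean square error: squared error energy over signal energy."""
    return np.sum(np.abs(x - x_hat) ** 2) / np.sum(np.abs(x) ** 2)

def correlation_coefficient(x, x_hat):
    """Magnitude of the normalized inner product; 1 means perfect alignment."""
    x, x_hat = x.ravel(), x_hat.ravel()
    return np.abs(np.vdot(x, x_hat)) / (np.linalg.norm(x) * np.linalg.norm(x_hat))

x = np.array([1.0, 2.0, 3.0])        # "true value" of the channel information
x_hat = np.array([1.1, 1.9, 3.0])    # recovered value
```

Note the directions match the bound discussion above: MSE and NMSE are upper-bound style metrics (lower is better), while the correlation coefficient is a lower-bound style metric (higher is better).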
The following describes in detail a method for testing a neural network according to an embodiment of the present application with reference to the drawings. It should be understood that the process may be done off-line (offline). The device for performing the test method may be, for example, a communication device or a computing device different from the communication device. The embodiments of the present application do not limit this. For convenience of explanation, the apparatus for performing the test method will be hereinafter referred to as a test apparatus. The test equipment can test the target information based on the communication connection between the first neural network and the second neural network to determine whether the performance of the first neural network meets the standard.
Wherein the first neural network may be a dedicated neural network applied to a certain scene. For example, the method can be applied to various processing stages of signals in a communication system, for example, including but not limited to coding and decoding, constellation modulation and demodulation, precoding and detection reception, and compression, quantization and reconstruction of channel information. In particular, if the first neural network is configured in the terminal device, the first neural network may be used for decoding, or may also be used for demodulation, or may also be used for detecting reception, or may also be used for feedback (including compression and quantization, for example) of channel information. If the first neural network is configured in the network device, the first neural network may be used for encoding, or may also be used for modulation, or may also be used for precoding, or may also be used for reconstruction of channel information.
The second neural network may be a reference neural network or may be a neural network actually used in the communication system. Both the structure and the parameters of the reference neural network may be predefined, such as by a protocol. In other words, the reference neural network may be considered as a normalized neural network. The neural network actually used in the communication system may be a neural network constructed based on the structure and parameters of the reference neural network, and thus, also has the same structure and parameters as the reference neural network.
For the sake of convenience of understanding and explanation, it is assumed that the first neural network is a neural network configured in the terminal device, and the second neural network is a neural network configured in the network device.
Fig. 3 is a schematic flow chart of a testing method 300 of a neural network provided in an embodiment of the present application. As shown in fig. 3, the method 300 may include steps 310 and 320. The various steps in method 300 are described in detail below.
In step 310, performance metric values of N (N ≧ 2 and integer) dimensions of the target information are obtained based on the communication connection between the first neural network and the second neural network.
Wherein the first neural network may be a neural network trained based on a communication connection with the second neural network. The communication connection described herein may refer to a communication connection simulated by a computer, that is, a virtual communication connection; it may also refer to a communication connection established through a network, such as a new radio (NR) access technology, that is, a real communication connection.
The device for training the first neural network may be the same device as the test device or a different device. The embodiments of the present application do not limit this.
In the training process, a function may be constructed with the N-dimensional performance indexes as optimization targets, such as, but not limited to, a loss function (loss function), a cost function (cost function), or an objective function (objective function). The constructed function measures the difference between the output value of the neural network and the target value. Taking the loss function as an example, the higher the value of the loss function (which may be called the loss value (loss)), the greater the difference; training the neural network then becomes a process of minimizing the loss value.
Take the feedback and reconstruction scenario of the channel information as an example. In this scenario, the target information is channel information. For channel information, feedback overhead and feedback accuracy are optimization targets of two different dimensions, i.e., N-2. Based on the two optimization objectives, a loss function can be constructed. An example of a loss function is shown below:
L(A1,A2,A3)=λ1×f(A1,A3)+λ2×g(A1,A2)。
wherein L (A1, A2, A3) is a loss function. Where a1 is the neural network parameter of the compression module, which may be, for example, the entropy coding module shown in fig. 2; a2 is a neural network parameter of a decompression module, which may be, for example, the entropy decoding module shown in fig. 2; a3 is the neural network parameters of the quantization module.
f (a1, A3) is a function for measuring feedback overhead, and may be, for example, a compression rate, which may specifically be a ratio of bit overhead after compression to bit overhead before compression. It will be appreciated that the greater the compression rate, the greater the feedback overhead.
g (a1, a2) is a function for measuring feedback accuracy, such as MSE, which may be specifically an expectation of the square of the difference between the real value and the output value of the neural network, or may be a logarithm of the expectation to obtain a value in dB, and so on, which is not limited in the present application. It will be appreciated that the larger the MSE, the lower the feedback accuracy.
{λ1, λ2} is a combination of balance parameters for the function on feedback overhead and the function on feedback accuracy. In the loss function L(A1, A2, A3) = λ1×f(A1, A3) + λ2×g(A1, A2), the balance parameter of the function f(A1, A3) is fixed at λ1, and the balance parameter of the function g(A1, A2) is fixed at λ2.
It should be understood that {λ1, λ2} is only one possible representation of the balance parameter combination, which may also be represented as {1, λ}. In this case, the loss function exemplified above may also be expressed as: L(A1, A2, A3) = f(A1, A3) + λ×g(A1, A2), where the balance parameter of the function f(A1, A3) is fixed at 1 and the balance parameter of the function g(A1, A2) is fixed at λ.
It should also be understood that the above-mentioned loss functions are only examples, and should not constitute any limitation to the embodiments of the present application. The application is not limited to the specific form of the loss function. The loss function may be predefined, such as protocol predefined; or defined by each manufacturer; or may be invisible to the outside. In addition, the balance parameter set varies with different values of N and different performances of N dimensions.
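The example loss function L(A1, A2, A3) = λ1×f(A1, A3) + λ2×g(A1, A2) can be evaluated numerically as below, with f taken as a compression rate (compressed bits over uncompressed bits) and g as an MSE, consistent with the descriptions of f and g above. All concrete bit counts, values, and balance parameters are assumptions for demonstration only.

```python
def compression_rate(bits_after, bits_before):
    """f: ratio of bit overhead after compression to bit overhead before."""
    return bits_after / bits_before

def loss(f_value, g_value, lam1=1.0, lam2=1.0):
    """Weighted sum of the overhead term f and the accuracy term g."""
    return lam1 * f_value + lam2 * g_value

f_val = compression_rate(bits_after=64, bits_before=2048)  # 1/32
g_val = 0.02                                               # example MSE
L = loss(f_val, g_val, lam1=1.0, lam2=10.0)
```

The balance parameters steer the trade-off: enlarging lam2 relative to lam1 makes training favor feedback accuracy over feedback overhead, and vice versa, which is why the balance parameter combination varies with the N dimensions being optimized.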
It should also be understood that the above description, which takes channel information as an example of the target information, is only intended to facilitate understanding of the process of training the first neural network based on a plurality of optimization targets. It should not be construed as limiting the application in any way. In different application scenarios, the target information may be other information, and the optimization targets and their metric functions may vary accordingly. For example, in a channel coding and decoding scenario, the target information may be a bit stream, the optimization targets may be the decoding error probability and the complexity, and the metric functions may be the bit error rate (BER) and the number of floating-point operations (FLOPs), respectively. For the sake of brevity, these are not illustrated individually.
It can be understood that in different application scenarios, the target information, the optimization targets, and the metric functions may all differ. In different application scenarios, the target information, the optimization targets, and their metric functions may be predefined, such as predefined by a protocol; or may be partially predefined, for example the target information and the optimization targets are predefined while the metric functions are defined by different vendors. The embodiments of the present application do not limit this.
Through the training of the first neural network, the loss value tends to be minimized. After the loss function reaches a minimum, the first neural network may be output. Thereafter, the first neural network may be tested to determine whether its performance reaches the standard.
In an embodiment of the present application, the first neural network is also tested based on the communication connection with the second neural network. Specifically, performance metric values of N dimensions of the target information may be obtained based on the communication connection between the first neural network and the second neural network. The performance metric values of the N dimensions may be derived based on predefined functions for characterizing the performance of the N dimensions. For example, in the above example scenario of feedback and reconstruction of channel information, the compression rate and the MSE of the channel information may be obtained based on the communication connection between the first neural network and the second neural network. The compression rate and the MSE of the channel information thus obtained are an example of performance metric values of N dimensions.
It should be understood that the performance metric values of the N dimensions exemplified above are shown only for ease of understanding and should not be construed as limiting the application in any way. The application does not limit the value of N in different application scenarios, the metric functions used to obtain the performance metric values of the N dimensions, and so on.
In step 320, it is determined whether the performance of the first neural network is up to standard based on the N-dimensional performance metric values.
When the performance of the first neural network in each of the N dimensions falls within an acceptable range, the performance of the first neural network can be considered to reach the standard. It should be understood that the acceptable range described herein may refer, for example, to meeting the requirements for the various capabilities specified by the current protocol.
For example, a corresponding performance index may be set for the performance of each dimension. As mentioned above, the performance index may be an upper bound or a lower bound, which is not limited in the embodiments of the present application. If the performance metric value of each of the N dimensions reaches or exceeds the performance index of the corresponding dimension, it may be determined that the performance of the first neural network reaches the standard. If the performance metric value of one or more of the N dimensions does not reach the performance index of the corresponding dimension, it may be determined that the performance of the first neural network does not reach the standard.
In connection with the above example, the performance indexes corresponding to the performance in the two (i.e., N = 2) dimensions of feedback overhead and feedback accuracy may be an upper bound of the compression rate (e.g., denoted as C0) and an upper bound of the MSE (e.g., denoted as M0), respectively. That is, when, based on the communication connection between the first neural network and the second neural network, the obtained compression rate (e.g., denoted as C) is less than or equal to the index of the corresponding dimension (i.e., C ≤ C0) and the obtained MSE (e.g., denoted as M) is less than or equal to the index of the corresponding dimension (i.e., M ≤ M0), the performance of the first neural network may be determined to reach the standard. If either of them is greater than the index of the corresponding dimension (i.e., C0 or M0), the performance of the first neural network may be determined not to reach the standard.
It should be understood that the above is one specific example in which N = 2. In the case where N is greater than 2, whether the performance of the first neural network reaches the standard may still be evaluated according to the same principle.
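The per-dimension check in step 320 can be sketched as a single comparison over all N dimensions. This is a minimal sketch under the assumption that every performance index is an upper bound; for a lower-bound index the comparison would be inverted.

```python
def performance_up_to_standard(metric_values, upper_bounds) -> bool:
    """Return True only if each of the N metric values is at or below
    the performance index (upper bound) of its corresponding dimension."""
    assert len(metric_values) == len(upper_bounds)
    return all(v <= b for v, b in zip(metric_values, upper_bounds))
```

For the N = 2 example, `performance_up_to_standard([C, M], [C0, M0])` returns `True` exactly when C ≤ C0 and M ≤ M0.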
In one possible implementation, the test device may fix the performance metric values of P (1 ≤ P < N, P an integer) of the N dimensions at the performance indexes of the corresponding dimensions, and evaluate the performance metric values of the remaining N-P dimensions. That is, the performance metric values of some dimensions are fixed at the performance indexes of the corresponding dimensions, and the performance metric values of the other dimensions are evaluated. If the performance metric values of the remaining N-P dimensions each reach or exceed the performance indexes of their corresponding dimensions, it can be determined that the performance of the first neural network reaches the standard; if the performance metric value of any of the remaining N-P dimensions fails to reach or exceed the performance index of its corresponding dimension, it can be determined that the performance of the first neural network does not reach the standard.
Fixing the performance metric values of some dimensions at the performance indexes of the corresponding dimensions can be realized by adjusting the parameters of the first neural network. That is, the parameters of the first neural network are adjusted so that the performance metric values of one or more of the N dimensions reach the performance indexes of the corresponding dimensions; then, with these parameters kept unchanged, the performance metric values of the other dimensions are obtained and compared with the performance indexes of their corresponding dimensions, so as to judge whether the performance of the first neural network reaches the standard.
For ease of understanding, a specific example is given below.
Still in conjunction with the feedback and reconstruction scenario of the channel information, assume that in the performance indexes of the target information (in this example, the target information is the channel information), the upper bound C0 of the compression rate is 0.03 (or expressed as 3%), and the upper bound M0 of the MSE is -23 (in dB).
For example, based on the communication connection between the first neural network and the second neural network, the compression rate and the MSE of the target information are obtained. When the compression rate of the target information is 0.03 (i.e., equal to the upper bound C0), the MSE of the target information is obtained. If the MSE is greater than the upper bound M0, for example, the MSE is -20, the MSE does not reach the standard, or the feedback accuracy does not reach the standard; that is, it may be determined that the performance of the first neural network does not reach the standard. If the MSE is less than or equal to the upper bound M0, for example, the MSE is -24, the MSE reaches the standard, or the feedback accuracy reaches the standard; that is, it may be determined that the performance of the first neural network reaches the standard.
For another example, based on the communication connection between the first neural network and the second neural network, the compression rate and the MSE of the target information are obtained. When the MSE of the target information is -23 (i.e., equal to the upper bound M0), the compression rate of the target information is obtained. If the compression rate is greater than the upper bound C0, for example, the compression rate is 4%, the compression rate does not reach the standard, or the feedback overhead does not reach the standard; that is, it may be determined that the performance of the first neural network does not reach the standard. If the compression rate is less than or equal to the upper bound C0, for example, the compression rate is 0.03, the compression rate reaches the standard, or the feedback overhead reaches the standard; that is, it may be determined that the performance of the first neural network reaches the standard.
In order to obtain higher reliability, the performance metric values of any P (1 ≤ P < N, P an integer) of the N dimensions may be fixed at the performance indexes of the corresponding dimensions, and the performance metric values of the remaining N-P dimensions may be evaluated. If, for every choice of P dimensions whose performance metric values are fixed at the performance indexes of the corresponding dimensions, the performance metric values of the remaining N-P dimensions all reach or exceed the performance indexes of their corresponding dimensions, it can be determined that the performance of the first neural network reaches the standard. If, for some choice of P dimensions whose performance metric values are fixed at the performance indexes of the corresponding dimensions, a performance metric value of the remaining N-P dimensions fails to reach the performance index of its corresponding dimension, it can be determined that the performance of the first neural network does not reach the standard.
In connection with the above example, based on the communication connection between the first neural network and the second neural network, the compression rate and the MSE of the target information are obtained. The MSE of the target information is obtained when the compression rate of the target information is 0.03, to determine whether the MSE reaches the standard; and the compression rate of the target information is obtained when the MSE of the target information is -23, to determine whether the compression rate reaches the standard. If both the MSE obtained in the former case and the compression rate obtained in the latter case reach the standard, it is determined that the performance of the first neural network reaches the standard; if the MSE obtained in the former case and/or the compression rate obtained in the latter case do not reach the standard, it is determined that the performance of the first neural network does not reach the standard.
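The two-way check just described can be sketched as below. The `measure_remaining` callback is a hypothetical stand-in for running the first/second neural network pair with one metric pinned at its index (by adjusting the first neural network's parameters); it is not an interface from the publication.

```python
C0 = 0.03   # upper bound of the compression rate
M0 = -23.0  # upper bound of the MSE, in dB

def up_to_standard_with_fixed_dims(measure_remaining) -> bool:
    # Fix the compression rate at C0 and obtain the resulting MSE ...
    mse_at_c0 = measure_remaining(fixed="compression_rate", value=C0)
    # ... then fix the MSE at M0 and obtain the resulting compression rate.
    rate_at_m0 = measure_remaining(fixed="mse", value=M0)
    # Up to standard only if both remaining metrics meet their bounds.
    return mse_at_c0 <= M0 and rate_at_m0 <= C0
```

With the numbers above, an MSE of -24 at the fixed compression rate and a compression rate of 0.02 at the fixed MSE would pass; an MSE of -20 in the first measurement would fail.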
It should be understood that the compression rate and the MSE exemplified above as metric functions for the feedback overhead and the feedback accuracy are shown only for ease of understanding. The compression rate may also be expressed, for example, as a percentage, and the MSE may also be expressed, for example, as a specific value. In addition, the metric functions of the feedback overhead and the feedback accuracy may also be other functions; for example, the feedback accuracy may also be measured by a correlation coefficient, MSA, and the like. The embodiments of the present application do not limit this.
The value of P may be any value from 1 to N-1. For example, the value of P may be predefined, or may be defined by different vendors. Alternatively, P may be traversed in the range of 1 to N-1. The embodiments of the present application do not limit this.
For example, when N is 3, P may take the value 1 or 2. That is, the performance metric values of one or two of the three dimensions are fixed at the performance indexes of the corresponding dimensions, and then the performance metric values of the remaining two dimensions or the remaining one dimension are evaluated. For the sake of brevity, a detailed description is omitted here.
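The traversal variant, in which every choice of P dimensions is fixed in turn, can be sketched by enumerating the subsets of dimensions. The `evaluate` callback is hypothetical: it stands for fixing the chosen dimensions at their indexes and checking whether the remaining metric values reach the standard.

```python
from itertools import combinations

def traverse_all_fixings(n_dims: int, evaluate) -> bool:
    """Up to standard only if every fixing of P dimensions passes,
    for every P from 1 to n_dims - 1."""
    for p in range(1, n_dims):
        for fixed_dims in combinations(range(n_dims), p):
            if not evaluate(fixed_dims):
                return False
    return True
```

For N = 3 this evaluates the three single-dimension fixings (P = 1) and the three two-dimension fixings (P = 2), six combinations in total.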
Based on the above technical solution, performance metric values of multiple dimensions of the same target information are obtained based on the communication connection between the first neural network and the second neural network, and the performance of the first neural network is evaluated based on these performance metric values. The performance of multiple dimensions is thus taken into consideration, and a balance is obtained among the performance of the multiple dimensions, which is beneficial to improving the system performance.
Based on the above-described testing of the first neural network, a neural network with qualified performance may be applied in a communication device. In general, however, there may be differences between devices shipped by different manufacturers; for example, different manufacturers may adopt different neural network parameters, so the neural networks they build may satisfy different performance indexes. In addition, different performance indexes may be selected based on different communication conditions to meet the communication demand. The communication conditions may include, for example, but are not limited to, channel conditions, interference conditions, and the like. It can be understood that the communication conditions are influenced by many factors and may differ greatly. For example, the channel states differ among areas such as outdoor dense urban areas, outdoor villages, outdoor mountainous areas, indoor offices, and indoor plants; a single interference source, multiple interference sources, and multiple interference signal types of different strengths also affect the interference state; and so on, which are not enumerated one by one here.
In order to match different devices and different performance indexes, a plurality of neural networks, or a plurality of sets of neural network parameters, may be configured in the same communication device, and the matching neural network is selected for communication based on the neural network used in the communication device at the opposite end.
Before a communication device is connected to a network, performance tests on a plurality of neural networks in the communication device are also needed. The method for testing a plurality of neural networks in a communication device will be described in detail below with reference to fig. 4.
Fig. 4 is a schematic flow chart of a testing method 400 of a neural network provided in another embodiment of the present application. As shown in fig. 4, the method 400 may include steps 410 and 420. The various steps in method 400 are described in detail below.
In step 410, M sets of performance metric values of the target information are obtained based on the communication connections between M (M ≥ 1, M an integer) neural networks in the first communication device and Z (M ≥ Z ≥ 1, Z an integer) reference neural networks.
The M neural networks may be neural networks configured in the first communication device to be networked. The M neural networks may correspond to M sets of performance indexes. Each of the M neural networks may be trained based on one of the M sets of performance indexes, in order to obtain a neural network that satisfies that set of performance indexes.
It should be understood that the M sets of performance indicators may be predefined, e.g., may be protocol predefined. Each manufacturer may train and test the neural network deployed in the communication device based on the M sets of performance indicators.
Table 1 shows an example of M (M = 3) sets of performance indexes. These M sets of performance indexes are an example of performance indexes of two dimensions, based on the feedback overhead and the feedback accuracy of the channel information in the above exemplary feedback and reconstruction scenario of the channel information.
TABLE 1
[Table 1 is rendered as an image in the original publication. It lists, for each of the M = 3 index values, a set of performance indexes of two dimensions: a feedback overhead index (an upper bound of the compression rate) and a feedback accuracy index (an upper bound of the MSE).]
Each set of performance indexes may correspond to one row of the table, and each row has one index value. Each index value may be understood as the index of a set of performance indexes, and may be used to indicate that set of performance indexes. In the M sets of performance indexes, different feedback overheads are matched with different feedback accuracies.
Further, each of the M neural networks may be trained based on a communication connection with a reference neural network. The number of reference neural networks used to train the M neural networks may be Z. Since M ≥ Z, multiple ones of the M neural networks may be trained based on communication connections with the same reference neural network. It can be understood that when multiple ones of the M neural networks are trained based on communication connections with the same reference neural network, the performance indexes they are based on are different. In other words, the M neural networks may correspond one-to-one to the M sets of performance indexes, but each of the Z reference neural networks may correspond to one or more of the M sets of performance indexes.
One possible design is Z = M, i.e., the M neural networks correspond one-to-one to the M sets of performance indexes, and the M reference neural networks correspond one-to-one to the M neural networks. It is to be understood that in the case of Z = M, the index values in Table 1 can also be understood as the indexes of the neural networks.
It has been mentioned above that, in constructing the loss function for training the neural network, a balance parameter combination is introduced to take into account the balance between the feedback overhead and the feedback accuracy. In the embodiment of the present application, the balance parameter combination may be fixed, for example predefined by a protocol; alternatively, the balance parameter combination may be configured with different values based on different performance indexes, for example as predefined by a protocol; alternatively, the balance parameter combination may be defined by each manufacturer.
Table 2 shows another example of the M sets of performance indicators.
TABLE 2
[Table 2 is rendered as an image in the original publication. Compared with Table 1, each row additionally records the balance parameter combination used for the corresponding set of performance indexes.]
It should be understood that the balance parameter combination may also be invisible. In that case, Table 2 is the same as Table 1.
Based on the communication connection between each of the M neural networks and a reference neural network, one set of performance metric values of the target information may be obtained. Based on the communication connections between the M neural networks and the Z reference neural networks, M sets of performance metric values may be obtained. Each of the M sets of performance metric values may include performance metric values of one or more dimensions.
In this embodiment, the set of performance metric values corresponding to each neural network may be obtained based on the testing method for the first neural network described in the method 300 above, or may be obtained based on other testing methods. This is not a limitation of the present application. It will be appreciated that if the M sets of performance metric values were obtained based on the testing method for the first neural network described above in method 300, each set of performance metric values may include N dimensions of performance metric values, respectively.
Based on the above-mentioned correspondence of the M neural networks, the M sets of performance indicators, and the Z reference neural networks, M sets of performance metric values of the target information can be obtained.
In step 420, it is determined, based on the M sets of performance metric values, whether the first communication device is allowed to access the network for use.
If the performance of each of the M neural networks configured in the first communication device reaches or exceeds its corresponding set of performance indexes, the first communication device may be allowed to access the network for use; if the performance of at least one of the M neural networks in the first communication device fails to reach its corresponding set of performance indexes, the first communication device is not allowed to access the network for use.
In this embodiment, based on the above-mentioned correspondence between the M neural networks and the M sets of performance indexes, and the M sets of performance metric values obtained in step 410, it may be determined whether the performance of the M neural networks reaches the standard, and further, whether the first communication device is allowed to use for network access is determined.
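The admission decision of step 420 can be sketched as requiring every network to pass its own multi-dimensional check. This is an illustrative sketch under the assumption that metric sets and index sets are listed in the same order, one per neural network, and that every index is an upper bound.

```python
def allow_network_access(metric_sets, index_sets) -> bool:
    """Allow the first communication device onto the network only if each
    of the M neural networks meets its corresponding set of performance
    indexes in every dimension."""
    return all(
        all(v <= b for v, b in zip(metrics, bounds))
        for metrics, bounds in zip(metric_sets, index_sets)
    )
```

A single failing dimension of a single neural network is enough to deny access, matching the "at least one fails" condition above.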
In one implementation, a corresponding test curve may be generated based on the M sets of performance indexes. For example, taking the M sets of performance indexes shown in Table 2 above as an example, the test curve shown in fig. 5 can be obtained. As shown in the figure, the horizontal axis represents the compression rate, as an example of the performance index of the feedback overhead; the vertical axis represents the MSE, as an example of the performance index of the feedback accuracy. Based on the data in each row of the table, a curve as shown in fig. 5 can be obtained.
It can be seen that the MSE gradually decreases as the compression rate increases. As mentioned above, the performance indexes shown in Table 1 are the upper bound of the feedback overhead and the upper bound of the feedback accuracy, respectively. That is, points above the curve do not reach the standard, and points on or below the curve reach the standard. In other words, if the points corresponding to the M sets of performance metric values obtained by testing the M neural networks in the first communication device are all located on or below the curve, the performance of the M neural networks reaches the standard, and the first communication device is allowed to access the network for use; if any such point is located above the curve, the performance of the M neural networks does not fully reach the standard, and the first communication device is not allowed to access the network for use.
It should be understood that the curve shown in fig. 5 is one possible expression of the M sets of performance indexes in Table 1 and should not constitute any limitation to the embodiments of the present application. For example, the abscissa and the ordinate may be interchanged to generate the curve.
It should also be understood that determining whether the performance of the M neural networks reaches the standard based on the positional relationship between the curve and the points corresponding to the M sets of performance metric values is also only one possible implementation, and should not constitute any limitation to the embodiments of the present application.
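The graphical check can be sketched by treating the curve as the sequence of (compression-rate bound, MSE bound) pairs from the performance-index table: a tested point passes when it lies at or below the curve point for its index set. The three (C0, M0) pairs below are invented for illustration; the publication's table values are not reproduced.

```python
# Hypothetical rows of the performance-index table: (C0, M0) per index set.
# MSE bound decreases (improves) as the compression-rate bound increases.
CURVE = [(0.02, -18.0), (0.03, -23.0), (0.05, -26.0)]

def point_up_to_standard(index: int, rate: float, mse_db: float) -> bool:
    """A point on or below the curve passes; a point above it fails."""
    c0, m0 = CURVE[index]
    return rate <= c0 and mse_db <= m0
```

The first communication device would be admitted only if every one of its M tested points passes this per-index check.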
Further, if the performance of all of the M neural networks in the first communication device reaches the standard, the correspondence between the M neural networks and the M sets of performance indexes can be stored in the first communication device. For example, the test device may copy the correspondence to the first communication device. The first communication device may store the correspondence locally, so that after it subsequently accesses the network, it can select a matching neural network based on the neural network of the opposite end.
In combination with the M sets of performance indexes shown in Table 1, Table 3 shows the correspondence between the M sets of performance indexes and the M neural networks, where M = 3.
TABLE 3
[Table 3 is rendered as an image in the original publication. It records, for each index value, the set of performance indexes and the corresponding neural network, i.e., the correspondence between the M sets of performance indexes and the M neural networks.]
It should be understood that Table 3 is only one possible form; the correspondence between the M sets of performance indexes and the M neural networks can be simplified to Table 4:
TABLE 4
Index of performance indexes    Index of neural networks
1 1
2 2
3 3
In addition, in the case where Z = M, the correspondence between the M sets of performance indexes and the M neural networks in Tables 3 and 4 may also be replaced by the correspondence between the M reference neural networks and the M neural networks, shown for example in Table 5:
TABLE 5
Index of reference neural networks    Index of neural networks
1 1
2 2
3 3
It is understood that in the case of Z = M, the M sets of performance indexes correspond one-to-one to the M reference neural networks, and Tables 4 and 5 can be considered equivalent.
Alternatively, the test device may store the correspondence among the Z reference neural networks, the M neural networks, and the M sets of performance indexes in the first communication device. This correspondence is shown, for example, in Table 6:
TABLE 6
[Table 6 is rendered as an image in the original publication. It records the correspondence among the indexes of the Z reference neural networks, the indexes of the M sets of performance indexes, and the indexes of the M neural networks, for the case Z ≠ M.]
Table 6 shows the case where Z ≠ M. If Z = M, the correspondence among the Z reference neural networks, the M neural networks, and the M sets of performance indexes is shown, for example, in Table 7:
TABLE 7
Index of reference neural networks    Index of performance indexes    Index of neural networks
1 1 1
2 2 2
3 3 3
The index values are not limited to those listed in the above tables. For example, when M is 3, the index values of the performance indexes may be 0, 1, and 2, the index values of the neural networks may be 10, 11, and 12, and the index values of the reference neural networks may be 20, 21, and 22. The embodiments of the present application do not limit this.
Further, those skilled in the art may make other possible variations based on any of Tables 4 to 7 listed above. These variations are based on the same concept and are therefore intended to fall within the scope of protection of the present application.
On the other hand, if the first communication device is used for accessing the network, the first communication device can establish a communication connection with one or more communication devices in the communication system. For convenience of distinction and explanation, a communication device establishing a communication connection with a first communication device in a communication system is referred to as a second communication device. For ease of understanding, it is assumed that the second communication device is a communication device already present in the communication system. For example, the second communication device may have established a communication connection with another communication device.
Optionally, the first communication device is a terminal device, and the second communication device is a network device. Optionally, the first communication device is a network device, and the second communication device is a terminal device. The embodiments of the present application do not limit this.
It should be understood that the second communication device has a neural network deployed therein that can match the neural network in the first communication device. In a possible case, a neural network matching the M neural networks is deployed in the second communication device, or in a communication connection between the first communication device and the second communication device, the target information can satisfy the M sets of performance indexes. In this case, the Z reference neural networks or Z neural networks constructed based on the parameters of the Z reference neural networks may be deployed in the second communication device. The correspondence between the Z neural networks in the second communication device and the M sets of performance indicators may be pre-stored in the second communication device. It is understood that the correspondence between the Z neural networks in the second communication device and the M sets of performance indicators, that is, the correspondence between the Z reference neural networks and the M sets of performance indicators, is as shown in the above examples in table 6 and table 7.
Based on the corresponding relationship between the M sets of performance indexes and the M neural networks in the first communication device and the corresponding relationship between the M sets of performance indexes and the Z neural networks in the second communication device, the corresponding relationship between the M neural networks in the first communication device and the Z neural networks in the second communication device can be established. In the case where Z is M, M neural networks in the first communication device correspond one-to-one to M neural networks in the second communication device.
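The correspondence just described can be sketched as two stored mappings keyed by the performance-index set, composed at communication time to pick the local network that matches the peer's. The index values and network names below are invented for illustration, in the spirit of Tables 4 and 5 with Z = M.

```python
# Hypothetical stored correspondences: performance-index-set index -> network.
FIRST_DEVICE = {1: "nn_a", 2: "nn_b", 3: "nn_c"}     # M networks, first device
SECOND_DEVICE = {1: "ref_1", 2: "ref_2", 3: "ref_3"}  # Z networks, second device

def matching_network(second_device_nn: str) -> str:
    """Given the neural network currently used by the second communication
    device, select the matching neural network of the first device by
    composing the two correspondences through the shared index."""
    for idx, nn in SECOND_DEVICE.items():
        if nn == second_device_nn:
            return FIRST_DEVICE[idx]
    raise KeyError(second_device_nn)
```

Because both mappings share the performance-index-set index, the composition yields the one-to-one network correspondence described above for Z = M.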
Thereafter, the first communication device may communicate using a neural network that matches the neural network currently in use in the second communication device. The communication flow between the first communication device and the second communication device will be described in the following method 600 with reference to the accompanying drawings, and for brevity, will not be described in detail here.
Based on the above technical solution, M sets of performance metric values of the same target information can be obtained based on the communication connections between the M neural networks in the first communication device and the Z reference neural networks, and the performance of the first communication device is then evaluated based on the M sets of performance metric values, so that only communication devices that can satisfy the M sets of performance indexes access the network for use. Moreover, each neural network can be evaluated based on performance metric values of multiple dimensions, so that a balance among the performance of multiple dimensions can be obtained, which is beneficial to improving the system performance.
Fig. 6 is a schematic flow chart of a communication method 600 provided in an embodiment of the present application. As shown in fig. 6, the method 600 may include steps 610 through 630. The various steps in method 600 are described in detail below.
In step 610, the third communication device generates first information for determining a fourth neural network that matches the third neural network.
Wherein the third neural network may be a neural network configured in the third communication device. The fourth neural network may be a neural network configured in the fourth communication device. The third communication device and the fourth communication device have a communication connection therebetween. For example, the third communication device is a network device, and the fourth communication device is a terminal device. In other words, the third neural network may be a neural network configured in the network device, and the fourth neural network may be a neural network configured in the terminal device. For another example, the third communication device is a terminal device, and the fourth communication device is a network device. In other words, the third neural network may be a neural network configured in the terminal device, and the fourth neural network may be a neural network configured in the network device. The embodiments of the present application do not limit this.
It is understood that a plurality of neural networks may be configured in each of the third communication device and the fourth communication device to correspond to a plurality of sets of performance indicators. The third neural network may be one of a plurality of neural networks configured in a third communication device, and the fourth neural network may be one of a plurality of neural networks configured in a fourth communication device.
That the third neural network matches the fourth neural network may mean that the third neural network and the fourth neural network correspond to the same set of performance indicators; or that the target information transmitted over the communication connection between the third neural network and the fourth neural network satisfies the same set of performance indicators.
Here, the correspondence between a neural network and a set of performance indicators can be understood as follows: the neural network may be a neural network that is put into use after being trained and tested to reach the standard based on the corresponding set of performance indicators, such as the correspondence between the first neural network and the performance indicators described in the method 300 above. The neural network may also be a neural network used to train and test another neural network, where the training and testing are performed based on the corresponding set of performance indicators, such as the correspondence between the second neural network and the performance indicators described above. In short, if a neural network corresponds to a certain set of performance indicators, then after the neural network establishes a communication connection with the neural network of the opposite end, the target information transmitted based on the communication connection may satisfy the corresponding set of performance indicators.
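The correspondence described above can be sketched roughly as a lookup, as in the minimal example below. All names, indices, and threshold values are illustrative assumptions, not taken from the embodiment; the point is only that two peer neural networks "match" when they map to the same set of performance indicators.

```python
# Hypothetical sketch of the correspondence between neural networks and sets
# of performance indicators. Indicator names and values are illustrative.

# Each set of performance indicators bounds several dimensions of the
# target information (e.g., similarity must be high, feedback overhead low).
PERFORMANCE_INDICATOR_SETS = {
    1: {"min_similarity": 0.90, "max_feedback_bits": 128},
    2: {"min_similarity": 0.95, "max_feedback_bits": 256},
}

# A neural network "corresponds to" the set it was trained and tested against.
NETWORK_TO_INDICATOR_SET = {
    "nn_a": 1,
    "nn_b": 2,
}

def networks_match(local_nn: str, peer_nn: str) -> bool:
    """Two peer neural networks match when they correspond to the same set
    of performance indicators, so target information transmitted over their
    communication connection can satisfy that set."""
    return NETWORK_TO_INDICATOR_SET[local_nn] == NETWORK_TO_INDICATOR_SET[peer_nn]
```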
In this embodiment, the fourth communication device may be, for example, a communication device that can be used in a network after the neural network is tested based on the method described above with reference to fig. 3 and 4, such as the first communication device described above.
After the fourth communication device is put into use in the network, the fourth communication device can establish a communication connection with a third communication device in the communication network. Based on the establishment of the communication connection with the fourth communication device, the third communication device may generate first information for determining a fourth neural network that matches a third neural network currently used by the third communication device.
The fourth neural network matching the third neural network may be determined by the third communication device and indicated to the fourth communication device by signaling; alternatively, the fourth communication device may determine it based on the third neural network, or based on a performance indicator. The embodiments of the present application do not limit this.
The first information will be described in detail with reference to specific examples, and for brevity, the detailed description will not be provided here.
In step 620, the third communication device sends the first information to the fourth communication device.
The third communication device may send the first information to the fourth communication device over an air interface. The first information may be carried in existing signaling or in newly added signaling. The embodiments of the present application do not limit this.
In step 630, the fourth communications device determines a fourth neural network from the first information.
The fourth communication device may perform different operations based on the specific content included in the first information.
In one example, the first information is an index of performance indicators.
Since the third communication device can determine to which set of performance indicators the third neural network currently in use corresponds, the third communication device can send the index of the performance indicator corresponding to the third neural network to the fourth communication device through the first information. The fourth communication device may determine the fourth neural network according to the pre-stored correspondence between the plurality of neural networks and the plurality of sets of performance indicators and the index in the first information.
Taking table 4 shown above as an example, assuming that the third neural network corresponds to the performance index having the index value of 2 in table 4, that is, the first information indicates the index value of 2, the fourth communication device may determine the fourth neural network according to the index value of 2.
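The first example can be sketched as a single table lookup. The index values and network names below are illustrative assumptions standing in for the content of table 4, which is not reproduced in this section.

```python
# Hypothetical sketch: the first information carries the index of a set of
# performance indicators, and the fourth communication device resolves it
# against its pre-stored correspondence between its own neural networks and
# the sets of performance indicators.

FOURTH_DEVICE_NETWORKS = {
    1: "fourth_nn_1",  # neural network corresponding to indicator set 1
    2: "fourth_nn_2",  # neural network corresponding to indicator set 2
    3: "fourth_nn_3",  # neural network corresponding to indicator set 3
}

def determine_fourth_nn(indicator_index: int) -> str:
    """Step 630: select the fourth neural network whose set of performance
    indicators matches the index signaled in the first information."""
    return FOURTH_DEVICE_NETWORKS[indicator_index]
```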
In another example, the first information is an index of a reference neural network.
Assuming that the plurality of neural networks in the third communication device correspond to the plurality of sets of performance indicators one to one, the first information may be the index of the reference neural network corresponding to the third neural network. It can be understood that, since the plurality of reference neural networks correspond to the plurality of sets of performance indicators, the index of a reference neural network is equivalent to the index of a performance indicator. The fourth communication device may determine the fourth neural network according to the pre-stored correspondence between the plurality of neural networks and the plurality of sets of performance indicators and the index in the first information; or, the fourth communication device may determine the fourth neural network according to the pre-stored correspondence among the plurality of reference neural networks, the plurality of neural networks, and the plurality of sets of performance indicators, and the index in the first information.
Taking table 7 shown above as an example, assuming that the third neural network corresponds to the reference neural network with index value 3 in table 7, i.e., the first information indicates index value 3, the fourth communication device may determine the fourth neural network according to the index value 3.
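Because the reference neural networks correspond one to one with the sets of performance indicators, the second example is a two-hop lookup: reference-network index to indicator-set index, then to one of the fourth communication device's own neural networks. The indices and names below are illustrative assumptions, not the content of table 7.

```python
# Hypothetical sketch of resolving the first information when it carries the
# index of a reference neural network.

REFERENCE_NN_TO_INDICATOR_SET = {1: 1, 2: 2, 3: 3}  # one-to-one mapping
INDICATOR_SET_TO_FOURTH_NN = {
    1: "fourth_nn_1",
    2: "fourth_nn_2",
    3: "fourth_nn_3",
}

def determine_fourth_nn_by_reference(ref_nn_index: int) -> str:
    """Map the signaled reference-network index to an indicator set, then to
    the fourth neural network corresponding to that set."""
    indicator_set = REFERENCE_NN_TO_INDICATOR_SET[ref_nn_index]
    return INDICATOR_SET_TO_FOURTH_NN[indicator_set]
```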
As yet another example, the first information is an index of a fourth neural network.
If the third communication device stores therein correspondence between a plurality of neural networks and a plurality of sets of performance indicators, or stores therein correspondence between a plurality of reference neural networks, a plurality of neural networks, and a plurality of sets of performance indicators, or stores therein correspondence between a plurality of reference neural networks and a plurality of neural networks, the third communication device may determine a fourth neural network by itself according to the correspondence, and send an index of the fourth neural network to the fourth communication device through the first information. That is, the first information may be an index of the fourth neural network. The fourth communication device may determine a fourth neural network based on information in the first information.
Taking table 5 shown above as an example, assuming that the third neural network corresponds to the reference neural network having an index value of 1 in table 5, the third communication device may determine that the index value of the fourth neural network is also 1. That is, the first information indicates an index value of 1. The fourth communication device may then determine the fourth neural network from the index value of 1.
It should be understood that an index is only one possible form for indicating a neural network, a reference neural network, or a performance indicator. The first information may also be other information that can be used to identify a neural network, a reference neural network, or a performance indicator, which is not limited in the embodiments of the present application.
Based on the above technical solution, the third communication device may send, based on the communication connection with the fourth communication device and the currently used third neural network, the first information for determining the fourth neural network matching the third neural network to the fourth communication device, so that the fourth communication device selects the fourth neural network matching the third neural network to operate. Because the third neural network and the fourth neural network correspond to the same set of performance indicators, the requirements of the performance indicators on multiple performance dimensions can be satisfied when the two neural networks work jointly, thereby helping to improve system performance.
In the embodiments provided in the present application, the method provided in the embodiments of the present application is introduced from the perspective of the testing device, the perspective of interaction between the first communication device and the second communication device, and the perspective of interaction between the third communication device and the fourth communication device, respectively. In order to implement the functions in the method provided by the embodiments of the present application, each device may include a hardware structure and/or a software module, and the functions are implemented in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether any of the above-described functions is implemented as a hardware structure, a software module, or a hardware structure plus a software module depends upon the particular application and design constraints imposed on the technical solution.
The method provided by the embodiment of the present application is described in detail above with reference to fig. 3 to 6. Hereinafter, the apparatus provided in the embodiment of the present application will be described in detail with reference to fig. 7 to 10.
Fig. 7 is a schematic block diagram of a testing apparatus of a neural network provided in an embodiment of the present application. As shown in fig. 7, the apparatus 700 may include: an acquisition module 710 and a determination module 720. The obtaining module 710 may be configured to obtain N-dimensional performance metric values of the target information based on a communication connection between the first neural network and the second neural network; the first neural network is trained based on communication connection with the second neural network; the determination module 720 may be configured to determine whether the performance of the first neural network is up to standard based on the performance metric values for the N dimensions; n is not less than 2 and is an integer.
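The logic split across the obtaining module 710 and the determination module 720 can be sketched roughly as follows. The dimension names, thresholds, and the higher-is-better convention are illustrative assumptions, not taken from the embodiment (a real set of indicators may also bound dimensions from above, e.g., feedback overhead).

```python
# Hypothetical sketch of determining whether the first neural network is up
# to standard based on performance metric values of N >= 2 dimensions.
from typing import Dict

def determine_up_to_standard(metrics: Dict[str, float],
                             indicators: Dict[str, float]) -> bool:
    """Module 720: the first neural network meets the standard only when the
    metric of every one of the N dimensions reaches its performance indicator.
    For simplicity, every dimension here is assumed to be higher-is-better."""
    assert len(metrics) >= 2, "N must be an integer greater than or equal to 2"
    return all(metrics[dim] >= threshold for dim, threshold in indicators.items())

# Example: two dimensions of the target information, as obtained by module 710.
metrics = {"similarity": 0.96, "throughput": 0.88}
indicators = {"similarity": 0.95, "throughput": 0.90}
```

Considering all N dimensions jointly, rather than any single one, is what lets the test trade the dimensions off against each other, as described in the abstract.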
It should be understood that the apparatus 700 may correspond to the test device in the embodiment shown in fig. 3, and may include modules for performing the method performed by the test device in fig. 3. The obtaining module 710 may be configured to perform step 310 of the method 300 above, and the determining module 720 may be configured to perform step 320 of the method 300 above.
The apparatus 700 may also correspond to the test device in the embodiment shown in fig. 4, and may include modules for performing the method performed by the test device in the method 400 in fig. 4. The obtaining module 710 may be configured to perform step 410 of the method 400 above, and the determining module 720 may be configured to perform step 420 of the method 400 above.
The steps performed by the obtaining module 710 and the determining module 720 may be implemented by one or more processors executing corresponding programs.
In one possible design, the apparatus 700 may be deployed on a chip.
It should be understood that the division of the modules in the embodiments of the present application is illustrative, and is only one logical function division, and there may be other division manners in actual implementation. In addition, functional modules in the embodiments of the present application may be integrated into one processor, may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Fig. 8 is another schematic block diagram of a testing apparatus of a neural network provided in an embodiment of the present application. The device can be used for realizing the function of the test equipment in the method. The device can be a test device or a device in the test device. Wherein the apparatus may be a system-on-a-chip. In the embodiment of the present application, the chip system may be composed of a chip, and may also include a chip and other discrete devices.
As shown in fig. 8, the apparatus 800 may include at least one processor 810 for implementing the functions of the test device in the methods provided by the embodiments of the present application. Illustratively, the processor 810 may be configured to obtain N-dimensional performance metric values of the target information based on the communication connection of the first neural network with the second neural network, and may be configured to determine whether the performance of the first neural network is up to standard based on the N-dimensional performance metric values; the first neural network is obtained based on communication connection training with the second neural network, N is larger than or equal to 2, and N is an integer. For details, reference is made to the detailed description in the method example, which is not repeated herein.
The apparatus 800 may also include at least one memory 820 for storing program instructions and/or data. The memory 820 is coupled to the processor 810. The coupling in the embodiments of the present application is an indirect coupling or a communication connection between devices, units or modules, and may be an electrical, mechanical or other form for information interaction between the devices, units or modules. The processor 810 may cooperate with the memory 820. Processor 810 may execute program instructions stored in memory 820. At least one of the at least one memory may be included in the processor.
The apparatus 800 may also include a communication interface 830 for communicating with other devices over a transmission medium, so that the apparatus 800 may communicate with other devices. Illustratively, the other device may be the device in which the second neural network is configured. The communication interface 830 may be, for example, a transceiver, an interface, a bus, a circuit, or a device capable of performing transceiving functions. The processor 810 may use the communication interface 830 to send and receive data and to implement the methods performed by the test equipment described in the embodiments corresponding to fig. 3 to 4.
The specific connection medium among the processor 810, the memory 820, and the communication interface 830 is not limited in the embodiments of the present application. In fig. 8, the processor 810, the memory 820, and the communication interface 830 are connected by a bus 840. The bus 840 is represented by a thick line in fig. 8; the manner of connection between the other components is merely illustrative and is not limiting. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 8, but this does not mean that there is only one bus or one type of bus.
It is to be understood that the apparatus 800 may correspond to the test device in the embodiments shown in fig. 3 or fig. 4, and may include means for performing the method performed by the test device in the method 300 in fig. 3 or the method 400 in fig. 4. Optionally, the memory 820 may include a read-only memory and a random access memory, and provide instructions and data to the processor. A portion of the memory may also include a non-volatile random access memory. The memory 820 may be a separate device or may be integrated into the processor 810. The processor 810 may be configured to execute the instructions stored in the memory 820, and when the processor 810 executes the instructions stored in the memory, the processor 810 is configured to perform the steps and/or processes of the method embodiments corresponding to the test equipment described above.
Fig. 9 is a schematic block diagram of a communication device provided in an embodiment of the present application. As shown in fig. 9, the apparatus 900 may include: a processing module 910 and a communication module 920.
Alternatively, the communication apparatus 900 may correspond to the third communication device in the above method embodiments, and may be, for example, the third communication device, or a component (e.g., a chip or a system of chips, etc.) configured in the third communication device.
It should be understood that the communication apparatus 900 may correspond to the third communication device in the method 600 according to the embodiment of the present application, and the communication apparatus 900 may include a module for performing the method performed by the third communication device in the method 600 in fig. 6. Also, the modules and other operations and/or functions described above in the communication apparatus 900 are respectively for implementing the corresponding flows of the method 600 in fig. 6.
Wherein, when the communication device 900 is configured to execute the method 600 in fig. 6, the processing module 910 is configured to execute the step 610 in the method 600, and the communication module 920 is configured to execute the step 620 in the method 600. It should be understood that the specific processes of the modules for executing the corresponding steps are already described in detail in the above method embodiments, and therefore, for brevity, detailed descriptions thereof are omitted.
Alternatively, the communication apparatus 900 may correspond to the fourth communication device in the above method embodiments, and may be, for example, the fourth communication device itself, or a component (e.g., a chip or a chip system) configured in the fourth communication device.
It should be understood that the communication apparatus 900 may correspond to the fourth communication device in the method 600 according to the embodiment of the present application, and the communication apparatus 900 may include modules for executing the method executed by the fourth communication device in the method 600 in fig. 6. Also, the modules and other operations and/or functions described above in the communication device 900 are respectively for implementing the corresponding flows of the method 600 in fig. 6.
Wherein, when the communication device 900 is configured to execute the method 600 in fig. 6, the processing module 910 is configured to execute the step 630 in the method 600, and the communication module 920 is configured to execute the step 620 in the method 600. It should be understood that the specific processes of the modules for executing the corresponding steps are already described in detail in the above method embodiments, and therefore, for brevity, detailed descriptions thereof are omitted.
When the communication apparatus 900 is a communication device (e.g., a third communication device or a fourth communication device), the communication module 920 in the communication apparatus 900 may be implemented by a transceiver, and may correspond, for example, to the communication interface 1030 in the communication apparatus 1000 shown in fig. 10, the RRU 1110 in the network device 1100 shown in fig. 11, or the transceiver 1202 in the terminal device 1200 shown in fig. 12. The processing module 910 in the communication apparatus 900 may be implemented by at least one processor, and may correspond, for example, to the processor 1010 in the communication apparatus 1000 shown in fig. 10, the BBU 1120 in the network device 1100 shown in fig. 11, or the processor 1201 in the terminal device 1200 shown in fig. 12.
When the communication apparatus 900 is a chip or a chip system configured in a communication device (e.g., a third communication device or a fourth communication device), the communication module 920 in the communication apparatus 900 may be implemented by an input/output interface, a circuit, etc., and the processing module 910 in the communication apparatus 900 may be implemented by a processor, a microprocessor, an integrated circuit, etc., integrated on the chip or the chip system.
It should be understood that the division of the modules in the embodiments of the present application is illustrative, and is only one logical function division, and there may be other division manners in actual implementation. In addition, functional modules in the embodiments of the present application may be integrated into one processor, may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Fig. 10 is another schematic block diagram of a communication apparatus provided in an embodiment of the present application. The apparatus can be used for realizing the function of the third communication device or the fourth communication device in the above method. The apparatus may be a communication device or an apparatus in a communication device. The apparatus may also be a chip system. In the embodiment of the present application, the chip system may be composed of a chip, or may include a chip and other discrete devices.
As shown in fig. 10, the apparatus 1000 may include at least one processor 1010 for implementing the functions of the third communication device or the fourth communication device in the methods provided in the embodiments of the present application.
For example, if the apparatus 1000 is used to implement the function of the third communication device in the method provided by the embodiment of the present application, the processor 1010 may be configured to generate first information, where the first information is used to determine a fourth neural network matching the third neural network; the third neural network and the fourth neural network are respectively configured in two communication devices with communication connection, and the third neural network and the fourth neural network correspond to the same set of performance indexes; the processor 1010 is also operable to control the communication interface 1030 to transmit the first information. For details, reference is made to the detailed description in the method example, which is not repeated herein.
If the apparatus 1000 is configured to implement the function of the fourth communication device in the method provided in the embodiment of the present application, the processor 1010 may be configured to control the communication interface 1030 to receive first information, where the first information is used to determine a fourth neural network matching the third neural network; the third neural network and the fourth neural network are respectively configured in two communication devices having a communication connection, and the third neural network and the fourth neural network correspond to the same set of performance indicators; the processor 1010 is also operable to determine the fourth neural network based on the first information.
The apparatus 1000 may also include at least one memory 1020 for storing program instructions and/or data. The memory 1020 is coupled to the processor 1010. The coupling in the embodiments of the present application is an indirect coupling or a communication connection between devices, units or modules, and may be an electrical, mechanical or other form for information interaction between the devices, units or modules. The processor 1010 may operate in conjunction with the memory 1020. Processor 1010 may execute program instructions stored in memory 1020. At least one of the at least one memory may be included in the processor.
The apparatus 1000 may also include a communication interface 1030 for communicating with other devices over a transmission medium, so that the apparatus 1000 may communicate with other devices. Illustratively, the apparatus 1000 may be the third communication device, and the other device may be the fourth communication device; alternatively, the apparatus 1000 may be the fourth communication device, and the other device may be the third communication device. The communication interface 1030 may be, for example, a transceiver, an interface, a bus, a circuit, or a device capable of performing transceiving functions. The processor 1010 may use the communication interface 1030 to send and receive data, and may be configured to implement the method performed by the third communication device or the fourth communication device described in the embodiment corresponding to fig. 6.
The specific connection medium among the processor 1010, the memory 1020, and the communication interface 1030 is not limited in the embodiments of the present application. In fig. 10, the processor 1010, the memory 1020, and the communication interface 1030 are connected by a bus 1040. The bus 1040 is represented by a thick line in fig. 10; the manner of connection between the other components is merely illustrative and is not limiting. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 10, but this does not mean that there is only one bus or one type of bus.
It should be understood that the communication apparatus 1000 may correspond to the third communication device or the fourth communication device in the above method embodiments, and may be configured to perform each step and/or flow performed by the third communication device or the fourth communication device in the above method embodiments. Optionally, the memory 1020 may include a read-only memory and a random access memory, and provide instructions and data to the processor. A portion of the memory may also include a non-volatile random access memory. The memory 1020 may be a separate device or may be integrated into the processor 1010. The processor 1010 may be configured to execute the instructions stored in the memory 1020, and when the processor 1010 executes the instructions stored in the memory, the processor 1010 is configured to perform the steps and/or processes of the method embodiments described above corresponding to the third communication device or the fourth communication device.
In one possible design, the third communication device is a network device, and the fourth communication device is a terminal device. In another possible design, the third communication device is a terminal device, and the fourth communication device is a network device. The following describes the structures of the network device and the terminal device in detail with reference to fig. 11 and 12, respectively.
Fig. 11 is a schematic structural diagram of a network device provided in an embodiment of the present application, which may be, for example, a schematic structural diagram of a base station. The base station 1100 may be used in a system as shown in fig. 1. As shown in fig. 11, the base station 1100 may include one or more radio frequency units, such as a Remote Radio Unit (RRU) 1110, and one or more baseband units (BBUs) (which may also be referred to as Distributed Units (DUs)) 1120. The RRU 1110 may be referred to as a transceiver unit, and may correspond to the communication module 920 in fig. 9 or the communication interface 1030 in fig. 10. Optionally, the RRU 1110 may also be referred to as a transceiver, a transceiver circuit, or the like, and may include at least one antenna 1111 and a radio frequency unit 1112. Optionally, the RRU 1110 may include a receiving unit and a sending unit; the receiving unit may correspond to a receiver (or a receiver and a receiving circuit), and the sending unit may correspond to a transmitter (or a transmitter and a transmitting circuit). The RRU 1110 is mainly used for transceiving radio frequency signals and converting between radio frequency signals and baseband signals, for example, for performing the operation procedure related to the third communication device in the above method embodiment, e.g., sending the first information to the fourth communication device; or the operation procedure related to the fourth communication device in the above method embodiment, e.g., receiving the first information from the third communication device. The BBU 1120 is mainly used for performing baseband processing, controlling the base station, and the like. The RRU 1110 and the BBU 1120 may be physically disposed together, or may be physically disposed separately, i.e., forming a distributed base station.
The BBU 1120 is the control center of the base station, and may also be referred to as a processing unit; it may correspond to the processing module 910 in fig. 9 or the processor 1010 in fig. 10, and is mainly used for performing baseband processing functions, such as channel coding, multiplexing, modulation, and spreading. For example, the BBU (processing unit) may be configured to control the base station to perform the operation procedure related to the third communication device in the above method embodiment, for example, to generate the first information described above. Alternatively, the BBU (processing unit) may be configured to control the base station to perform the operation procedure related to the fourth communication device in the above method embodiment, for example, to determine the fourth neural network.
In an example, the BBU 1120 may be formed by one or more boards, and the boards may jointly support a radio access network of a single access standard (e.g., an LTE network), or may respectively support radio access networks of different access standards (e.g., an LTE network, a 5G network, or other networks). The BBU 1120 also includes a memory 1121 and a processor 1122. The memory 1121 is used for storing necessary instructions and data. The processor 1122 is configured to control the base station to perform necessary actions, for example, to control the base station to execute the operation procedure of the above method embodiment with respect to the third communication device. The memory 1121 and the processor 1122 may serve one or more boards; that is, a memory and a processor may be provided separately on each board, or multiple boards may share the same memory and processor. In addition, each board may be provided with necessary circuits.
It should be understood that the base station 1100 shown in fig. 11 can implement the respective processes involving the third communication device or the fourth communication device in the method embodiment shown in fig. 6. The operations and/or functions of the respective modules in the base station 1100 are respectively for implementing the corresponding flows in the above-described method embodiments. Reference may be made specifically to the description of the above method embodiments, and a detailed description is appropriately omitted herein to avoid redundancy.
When the base station 1100 is configured to perform the operation procedure related to the third communication device in the foregoing method embodiment, the BBU 1120 may be configured to perform the actions implemented inside the third communication device described in the foregoing method embodiment, and the RRU 1110 may be configured to perform the sending actions from the third communication device to the fourth communication device described in the foregoing method embodiment. For details, refer to the description of the foregoing method embodiment; details are not repeated herein.
When the base station 1100 is configured to perform the operation procedure related to the fourth communication device in the foregoing method embodiment, the BBU 1120 may be configured to perform the actions implemented inside the fourth communication device described in the foregoing method embodiment, and the RRU 1110 may be configured to perform the receiving actions of the fourth communication device from the third communication device described in the foregoing method embodiment. For details, refer to the description of the foregoing method embodiment; details are not repeated herein.
It should be understood that the base station 1100 shown in fig. 11 is merely one possible form of a network device and shall not limit this application in any way. The method provided in this application is also applicable to network devices in other forms, for example, a network device that includes an AAU and may further include a CU and/or a DU, a network device that includes a BBU and an adaptive radio unit (ARU), or a BBU alone. The network device may alternatively be customer premises equipment (CPE) or take another form; the specific form of the network device is not limited in this application.
Fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of this application. The terminal device may be applied to a system such as the one shown in fig. 1. As shown in fig. 12, the terminal device 1200 includes a processor 1201 and a transceiver 1202. Optionally, the terminal device 1200 further includes a memory 1203. The processor 1201, the transceiver 1202, and the memory 1203 may communicate with one another through an internal connection path to transfer control and/or data signals. The memory 1203 is configured to store a computer program, and the processor 1201 is configured to invoke and run the computer program from the memory 1203 to control the transceiver 1202 to send and receive signals. Optionally, the terminal device 1200 may further include an antenna 1204, configured to send, by using a radio signal, uplink data or uplink control signaling output by the transceiver 1202.
The processor 1201 and the memory 1203 may be combined into a processing device, and the processor 1201 is configured to execute the program code stored in the memory 1203 to implement the above-described functions. In particular implementations, the memory 1203 may also be integrated with the processor 1201 or may be separate from the processor 1201. The processor 1201 may correspond to the processing module 910 of fig. 9 or the processor 1010 of fig. 10.
The transceiver 1202 may correspond to the communication module 920 in fig. 9 or the communication interface 1030 in fig. 10. The transceiver 1202 may include a receiver (or a receiving circuit) and a transmitter (or a transmitting circuit). The receiver is configured to receive signals, and the transmitter is configured to send signals.
Optionally, the terminal device 1200 may further include a power supply 1205 for supplying power to various devices or circuits in the terminal device 1200.
In addition, in order to further improve the functions of the terminal device, the terminal device 1200 may further include one or more of an input unit 1206, a display unit 1207, an audio circuit 1208, a camera 1209, a sensor 1210, and the like, and the audio circuit may further include a speaker 1208a, a microphone 1208b, and the like.
It should be understood that the terminal device 1200 shown in fig. 12 can implement the respective processes involving the third communication device or the fourth communication device in the method embodiment shown in fig. 6. The operations and/or functions of the modules in the terminal device 1200 are respectively to implement the corresponding flows in the above method embodiments. Reference may be made specifically to the description of the above method embodiments, and a detailed description is appropriately omitted herein to avoid redundancy.
When the terminal device 1200 is configured to perform the operation procedure related to the third communication device in the foregoing method embodiment, the processor 1201 may be configured to perform the actions implemented inside the third communication device described in the foregoing method embodiment, and the transceiver 1202 may be configured to perform the sending actions from the third communication device to the fourth communication device described in the foregoing method embodiment. For details, refer to the description of the foregoing method embodiment; details are not repeated herein.
When the terminal device 1200 is configured to perform the operation procedure related to the fourth communication device in the foregoing method embodiment, the processor 1201 may be configured to perform the actions implemented inside the fourth communication device described in the foregoing method embodiment, and the transceiver 1202 may be configured to perform the receiving actions of the fourth communication device from the third communication device described in the foregoing method embodiment. For details, refer to the description of the foregoing method embodiment; details are not repeated herein.

This application further provides a processing apparatus, including at least one processor. The at least one processor is configured to execute a computer program stored in a memory, so that the processing apparatus performs the method performed by the testing device, the method performed by the third communication device, or the method performed by the fourth communication device in the foregoing method embodiments.
In this embodiment, the processor may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a system on chip (SoC), a Central Processing Unit (CPU), a Network Processor (NP), a Microcontroller (MCU) or other Programmable Logic Device (PLD), a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in this embodiment. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
In the embodiments of this application, the memory may be a nonvolatile memory, a volatile memory, or a combination of both. The nonvolatile memory may be, for example, a hard disk drive (HDD), a solid-state drive (SSD), a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be, for example, a random access memory (RAM). By way of example rather than limitation, many forms of RAM are available, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchronous link DRAM (SLDRAM), and a direct Rambus RAM (DR RAM).
The memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of this application may alternatively be a circuit or any other apparatus capable of implementing a storage function, and is configured to store program instructions and/or data.
It should be noted that the memory of the systems and methods described herein is intended to include, but is not limited to, the foregoing types and any other suitable types of memory.

According to the method provided in the embodiments of this application, this application further provides a computer program product, including computer program code. When the computer program code is run on a computer, the computer is caused to perform the method performed by the testing device in the embodiment shown in fig. 3 or fig. 4, the method performed by the third communication device in the embodiment shown in fig. 6, or the method performed by the fourth communication device.
According to the method provided by the embodiment of the present application, the present application further provides a computer-readable storage medium storing a program code, which, when running on a computer, causes the computer to execute the method executed by the test device in the embodiment shown in fig. 3 or fig. 4, the method executed by the third communication device in the embodiment shown in fig. 6, or the method executed by the fourth communication device.
According to the method provided by the embodiment of the present application, the present application further provides a system, which includes the third communication device and the fourth communication device.
The technical solutions provided in the embodiments of this application may be wholly or partially implemented by software, hardware, firmware, or any combination thereof. When software is used for implementation, the technical solutions may be wholly or partially implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are wholly or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a terminal device, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (e.g., infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), a semiconductor medium, or the like.
In the embodiments of the present application, the embodiments may refer to each other, for example, methods and/or terms between the embodiments of the method may refer to each other, for example, functions and/or terms between the embodiments of the apparatus and the embodiments of the method may refer to each other, without logical contradiction.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (22)

1. A method for testing a neural network, comprising:
obtaining performance metric values of N dimensions of target information based on a communication connection between a first neural network and a second neural network, wherein the first neural network is obtained through training based on the communication connection between the first neural network and the second neural network; and
determining, based on the performance metric values of the N dimensions, whether the performance of the first neural network meets a standard; wherein N is an integer greater than or equal to 2.
2. The method of claim 1, wherein the second neural network corresponds to performance indicators of the N dimensions; and
the determining whether the performance of the first neural network meets the standard based on the performance metric values of the N dimensions comprises:
determining, based on the performance indicators of the N dimensions and the performance metric values of the N dimensions, whether the performance of the first neural network meets the standard;
wherein the performance of the first neural network is determined to meet the standard when the performance metric values of the N dimensions all reach or exceed the performance indicators of the respective corresponding dimensions.
3. The method of claim 2, wherein the determining that the performance of the first neural network meets the standard when the performance metric values of the N dimensions all reach or exceed the performance indicators of the respective corresponding dimensions comprises:
in a case in which the performance metric values of P of the N dimensions reach the performance indicators of the respective corresponding dimensions, determining that the performance of the first neural network meets the standard if the performance metric values of the remaining N-P dimensions also reach or exceed the performance indicators of the respective corresponding dimensions; wherein N > P ≥ 1, and P is an integer.
4. The method of any one of claims 1 to 3, further comprising:
training the first neural network based on the communication connection with the second neural network and the performance indicators of the N dimensions corresponding to the second neural network.
5. The method of any one of claims 1 to 4, wherein the first neural network is one of M neural networks configured in a first communication device to be connected to a network, the performance indicators of the N dimensions are one of M standardized sets of performance indicators corresponding to the M neural networks, and each set of performance indicators is used to test the performance of a corresponding one of the M neural networks, wherein M is an integer greater than or equal to 1; and
the method further comprises:
allowing the first communication device to access the network for use in a case in which the performance of each of the M neural networks reaches the corresponding performance indicators; or
not allowing the first communication device to access the network for use in a case in which the performance of at least one of the M neural networks does not reach the corresponding performance indicators.
6. The method of claim 5, wherein the method further comprises:
storing, in a case in which the performance of all of the M neural networks meets the standard, a correspondence between the M neural networks and the M sets of performance indicators in the first communication device.
7. The method of claim 5 or 6, wherein the second neural network is one of Z reference neural networks used for performance testing of the M neural networks, wherein M ≥ Z ≥ 1, and Z is an integer.
8. The method of claim 7, wherein M = Z, and each of the Z reference neural networks corresponds to one of the M sets of performance indicators and one of the M neural networks.
9. The method of claim 7, wherein the correspondence of the Z reference neural networks to the M sets of performance indicators is pre-stored in a second communication device, the second communication device being a device in a communication system that communicates with the first communication device.
10. The method of any one of claims 1 to 9, wherein the target information is channel information, the first neural network is used for feedback of the channel information, and the second neural network is used for reconstruction of the channel information.
11. The method of claim 10, wherein the performance metric values of the N dimensions comprise a feedback overhead of the channel information and a feedback accuracy of the channel information.
12. A method of communication, comprising:
generating first information, wherein the first information is used to determine a fourth neural network that matches a third neural network, the third neural network and the fourth neural network are respectively configured in two communication devices having a communication connection, and the third neural network and the fourth neural network correspond to a same set of performance indicators; and
and sending the first information.
13. The method of claim 12, wherein the first information indicates the set of performance indicators corresponding to the third neural network.
14. The method of claim 12 or 13, wherein the third neural network is configured in a network device and the fourth neural network is configured in a terminal device; or, the third neural network is configured in the terminal device, and the fourth neural network is configured in the network device.
15. A method of communication, comprising:
receiving first information, wherein the first information is used to determine a fourth neural network that matches a third neural network, the third neural network and the fourth neural network are respectively configured in two communication devices having a communication connection, and the third neural network and the fourth neural network correspond to a same set of performance indicators; and
determining the fourth neural network from the first information.
16. The method of claim 15, wherein the first information indicates the set of performance indicators corresponding to the third neural network.
17. The method of claim 15 or 16, wherein the third neural network is configured in a network device and the fourth neural network is configured in a terminal device; or, the third neural network is configured in the terminal device, and the fourth neural network is configured in the network device.
18. A testing device for a neural network, configured to implement the method of any one of claims 1 to 11.
19. A communication device, configured to implement the method of any one of claims 12 to 17.
20. A testing apparatus for a neural network, comprising a processor and a memory, the memory coupled to the processor, the processor configured to perform the method of any one of claims 1 to 11.
21. A communication device, comprising a processor and a memory, wherein the memory is coupled to the processor, and the processor is configured to perform the method of any one of claims 12 to 17.
22. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1 to 17.
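The pass/fail rule recited in claims 1 to 3 and 11 — the first neural network meets the standard only when every one of the N dimensions reaches or exceeds its performance indicator — can be illustrated with a short, non-normative Python sketch. All names here are hypothetical, the two example dimensions are the feedback accuracy and feedback overhead named in claim 11, and the assumption that a lower overhead is better is an illustrative convention not prescribed by the claims.

```python
# Hypothetical sketch of the N-dimension performance check of claims 1-3.
# "reach or exceed the performance indicator" is interpreted per dimension:
# higher-is-better dimensions (e.g. feedback accuracy) must be >= the target,
# lower-is-better dimensions (e.g. feedback overhead) must be <= the target
# (an illustrative assumption, not part of the claims).

def performance_meets_standard(metrics, indicators, higher_is_better):
    """metrics[i] / indicators[i]: measured value and target for dimension i.
    higher_is_better[i]: True if a larger value is better for dimension i."""
    # Claim 1 requires N >= 2 dimensions.
    assert len(metrics) == len(indicators) == len(higher_is_better) >= 2
    for value, target, hib in zip(metrics, indicators, higher_is_better):
        ok = (value >= target) if hib else (value <= target)
        if not ok:
            # A single failing dimension fails the whole test (claim 2).
            return False
    return True

# Example with the two dimensions of claim 11: feedback accuracy
# (higher is better) and feedback overhead in bits (assumed lower is better).
metrics = [0.92, 48]       # measured accuracy, measured overhead
indicators = [0.90, 64]    # required accuracy, maximum allowed overhead
print(performance_meets_standard(metrics, indicators, [True, False]))  # True
```

The early-exit loop also reflects claim 3: once P of the N dimensions are known to reach their indicators, only the remaining N-P dimensions still need to be checked.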
CN202011163017.0A 2020-10-27 2020-10-27 Neural network testing method and device Pending CN114492784A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011163017.0A CN114492784A (en) 2020-10-27 2020-10-27 Neural network testing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011163017.0A CN114492784A (en) 2020-10-27 2020-10-27 Neural network testing method and device

Publications (1)

Publication Number Publication Date
CN114492784A true CN114492784A (en) 2022-05-13

Family

ID=81470129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011163017.0A Pending CN114492784A (en) 2020-10-27 2020-10-27 Neural network testing method and device

Country Status (1)

Country Link
CN (1) CN114492784A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024022007A1 (en) * 2022-07-26 2024-02-01 华为技术有限公司 Method and apparatus for communication in wireless local area network


Similar Documents

Publication Publication Date Title
US20220149904A1 (en) Compression and Decompression of Downlink Channel Estimates
WO2021217519A1 (en) Method and apparatus for adjusting neural network
WO2021253937A1 (en) Terminal and base station of wireless communication system, and methods executed by terminal and base station
CN114079493A (en) Channel state information measurement feedback method and related device
WO2021142605A1 (en) Method and apparatus for channel measurement
US11956031B2 (en) Communication of measurement results in coordinated multipoint
WO2021083157A1 (en) Precoding matrix processing method and communication apparatus
CN114642019A (en) Method for acquiring channel information
CN114614955A (en) Method and device for transmitting data
CN114492784A (en) Neural network testing method and device
US20230136416A1 (en) Neural network obtaining method and apparatus
WO2023123062A1 (en) Quality evaluation method for virtual channel sample, and device
CN113938907A (en) Communication method and communication device
WO2024031456A1 (en) Communication method, apparatus and device, storage medium, chip, and program product
WO2024008004A1 (en) Communication method and apparatus
WO2023236143A1 (en) Information transceiving method and apparatus
US20230403587A1 (en) Method and apparatus for monitoring and reporting ai model in wireless communication system
WO2023115254A1 (en) Data processing method and device
WO2024032775A1 (en) Quantization method and apparatus
WO2023279947A1 (en) Communication method and apparatus
WO2024067258A1 (en) Communication method and communication apparatus
WO2023221061A1 (en) Method and apparatus for acquiring channel quality, storage medium and chip
WO2024002003A1 (en) Channel feedback model determination method, terminal, and network side device
WO2023006096A1 (en) Communication method and apparatus
WO2022151064A1 (en) Information sending method and apparatus, information receiving method and apparatus, device, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination