CN111210005A - Equipment operation method and device, storage medium and electronic equipment


Info

Publication number
CN111210005A
CN111210005A
Authority
CN
China
Prior art keywords
network model
neural network
environments
operating
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911416475.8A
Other languages
Chinese (zh)
Other versions
CN111210005B (en)
Inventor
周明君
方攀
陈岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911416475.8A priority Critical patent/CN111210005B/en
Publication of CN111210005A publication Critical patent/CN111210005A/en
Application granted granted Critical
Publication of CN111210005B publication Critical patent/CN111210005B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application discloses an equipment operation method, an equipment operation device, a storage medium, and electronic equipment. The equipment operation method comprises the following steps: acquiring a neural network model and determining the operator corresponding to each neural layer in the neural network model; configuring an interface corresponding to each operator; acquiring configuration parameters corresponding to a plurality of groups of different operating environments, wherein the configuration parameters corresponding to each operating environment are used for specifying the operating environment of each neural layer when the neural network model is operated in the electronic equipment; operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments, wherein, when the neural network model is operated in different operating environments, the corresponding operators are called through the interfaces corresponding to the operators; and outputting operation result data of the neural network model in the corresponding operating environment based on the configuration parameters corresponding to each group of operating environments, so as to compare the operation efficiency of the neural network model in different operating environments. The method and the device can thereby compare the operation efficiency of the neural network model in different operating environments.

Description

Equipment operation method and device, storage medium and electronic equipment
Technical Field
The present application belongs to the technical field of electronic devices, and in particular, to a device operation method, apparatus, storage medium, and electronic device.
Background
Artificial Neural Networks (ANNs) and deep learning methods based on the same have been increasingly applied to the fields of Artificial intelligence such as image recognition, scene judgment, intelligent recommendation, and the like, and exhibit their superiority in various aspects. Hardware acceleration for artificial neural networks is also a popular area of research. However, in the related art, when designing hardware acceleration for an artificial neural network, the operation efficiency of the artificial neural network in different operation environments cannot be compared.
Disclosure of Invention
The embodiment of the application provides an equipment operation method, an equipment operation device, a storage medium and electronic equipment, which can compare the operation efficiency of an artificial neural network in different operation environments.
In a first aspect, an embodiment of the present application provides an apparatus operation method, including:
acquiring a neural network model, and determining operators corresponding to each neural layer in the neural network model;
configuring an interface corresponding to each operator;
acquiring configuration parameters corresponding to a plurality of groups of different operating environments, wherein the configuration parameters corresponding to each operating environment are used for specifying the operating environment of each neural layer when the neural network model is operated in the electronic equipment;
operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments, wherein when the neural network model is operated in different operating environments, corresponding operators are called through interfaces corresponding to the operators;
and outputting operation result data of the neural network model in the corresponding operating environment based on the configuration parameters corresponding to each group of operating environments, so as to compare the operation efficiency of the neural network model in different operating environments.
In a second aspect, an embodiment of the present application provides an apparatus for operating a device, including:
the first acquisition module is used for acquiring a neural network model and determining operators corresponding to each neural layer in the neural network model;
the setting module is used for configuring the interface corresponding to each operator;
the second acquisition module is used for acquiring a plurality of groups of configuration parameters corresponding to different operating environments, wherein the configuration parameters corresponding to each operating environment are used for specifying the operating environment of each neural layer when the neural network model is operated in the electronic equipment;
the operation module is used for operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operation environments, wherein when the neural network model is operated in different operation environments, corresponding operators are called through interfaces corresponding to the operators;
and the output module is used for outputting the operation result data of the neural network model in the corresponding operating environment based on the configuration parameters corresponding to each group of operating environments, so as to compare the operation efficiency of the neural network model in different operating environments.
In a third aspect, an embodiment of the present application provides a storage medium on which a computer program is stored, wherein, when the computer program is executed on a computer, it causes the computer to execute the flow in the device operation method provided in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute a flow in the device operation method provided in the embodiment of the present application by calling a computer program stored in the memory.
In this embodiment, the electronic device may configure a corresponding interface for each operator in the neural network model, where the interface is used to call the function of the corresponding operator, so that the electronic device may operate the neural network model in different operating environments and obtain operation result data in those environments, so as to compare the operation efficiency of the neural network model in different operating environments. That is, the present application can realize comparison of the operation efficiency of the artificial neural network model in different operating environments.
Drawings
The technical solutions and advantages of the present application will become apparent from the following detailed description of specific embodiments of the present application when taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic flow chart of an apparatus operation method provided in an embodiment of the present application.
Fig. 2 is another schematic flow chart of an apparatus operation method provided in an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an apparatus operating device according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Fig. 5 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
It is understood that the execution subject of the embodiment of the present application may be an electronic device such as a smart phone or a tablet computer.
Referring to fig. 1, fig. 1 is a schematic flow chart of an apparatus operation method according to an embodiment of the present application, where the flow chart may include:
101. and acquiring a neural network model, and determining operators corresponding to each neural layer in the neural network model.
In this embodiment, for example, the electronic device may first obtain the neural network model. After obtaining the neural network model, the electronic device may determine the operator corresponding to each neural layer in the neural network model. The neural layers comprise the input layer, hidden layers, output layer, and the like of the neural network model. The operators corresponding to each neural layer may refer to the computation nodes and operation rules in a neural network computation graph, wherein a computation node in the neural network computation graph may be a node such as convolution (Convolution), pooling (Pooling), or activation (Activation), and an operation rule may refer to an operation such as partial convolution or two-vector addition.
It should be noted that the neural network model is an artificial neural network (ANN) model. Artificial neural networks have been a research hotspot in the field of artificial intelligence since the 1980s. They abstract the neuron network of the human brain from the perspective of information processing, establish simple models, and form different networks according to different connection modes. In recent decades, research on artificial neural networks has deepened and made great progress. They have successfully solved many practical problems that are difficult for modern computers in fields such as pattern recognition, intelligent robots, automatic control, predictive estimation, biology, medicine, and economics, and have shown good intelligent characteristics.
102. And configuring an interface corresponding to each operator.
For example, after determining the operator corresponding to each neural layer in the neural network model, the electronic device may configure a corresponding interface for each operator.
For example, the electronic device may obtain code for implementing an operator and then encapsulate the implementation code into the form of an interface. Therefore, when the operator needs to be realized in different operation environments, the electronic equipment only needs to call the interface corresponding to the operator to realize the function of the operator.
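The encapsulation described above can be sketched in Python as a minimal object-oriented wrapper: the operator's implementation code sits behind a single interface, so callers in any operating environment invoke it the same way. The class names and the trivial addition operator are hypothetical illustrations, not taken from the patent.

```python
from abc import ABC, abstractmethod

class OperatorInterface(ABC):
    """Interface wrapping an operator's implementation code."""
    @abstractmethod
    def run(self, inputs):
        """Invoke the operator's implementation on the given inputs."""

class AddOperator(OperatorInterface):
    """A trivial element-wise addition operator used as a stand-in."""
    def run(self, inputs):
        a, b = inputs
        return [x + y for x, y in zip(a, b)]

# Callers see only the interface, never the implementation details.
op: OperatorInterface = AddOperator()
result = op.run(([1, 2, 3], [4, 5, 6]))  # [5, 7, 9]
```

With this wrapping, realizing the operator in a different operating environment reduces to calling the same `run` method.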
103. And acquiring configuration parameters corresponding to a plurality of groups of different operating environments, wherein the configuration parameters corresponding to each operating environment are used for specifying the operating environment of each neural layer when the neural network model is operated in the electronic equipment.
For example, the electronic device may obtain multiple sets of configuration parameters corresponding to different operating environments, where the configuration parameters corresponding to each set of operating environment may be used to specify the operating environment of each neural layer when the neural network model is operated in the electronic device.
For example, suppose the neural network model includes an input layer, a first hidden layer, a second hidden layer, and an output layer. The configuration parameters corresponding to operating environment A may specify that, when the neural network model is operated in the electronic device, the input layer of the neural network model is operated in the CPU environment, the first hidden layer is operated in the FPGA environment, and the second hidden layer and the output layer are operated in the simulator environment. As another example, the configuration parameters corresponding to operating environment B may specify that, when the neural network model is operated in the electronic device, the input layer is operated on the CPU and the first hidden layer, the second hidden layer, and the output layer are all operated on the FPGA. Note that CPU refers to a central processing unit. FPGA refers to a Field Programmable Gate Array, which can serve as an artificial intelligence (AI) chip. The simulator may refer to a simulator written using SystemC.
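The per-layer configuration parameters for environments A and B described above might look like the following sketch. The layer keys and environment labels are assumptions for illustration, not the patent's actual data format.

```python
# Each configuration maps a neural layer to the environment it runs in.
CONFIG_A = {
    "input_layer": "CPU",
    "hidden_layer_1": "FPGA",
    "hidden_layer_2": "SIMULATOR",
    "output_layer": "SIMULATOR",
}
CONFIG_B = {
    "input_layer": "CPU",
    "hidden_layer_1": "FPGA",
    "hidden_layer_2": "FPGA",
    "output_layer": "FPGA",
}

def environments_used(config):
    """Return the distinct environments a configuration assigns."""
    return sorted(set(config.values()))
```

For example, `environments_used(CONFIG_B)` shows that configuration B exercises only the CPU and FPGA environments.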
104. And operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments, wherein when the neural network model is operated in different operating environments, corresponding operators are called through interfaces corresponding to the operators.
For example, after obtaining multiple sets of configuration parameters corresponding to different operating environments, the electronic device may sequentially operate the neural network model in the electronic device according to the configuration parameters corresponding to each set of operating environments. When the neural network model is operated under different operation environments, the electronic equipment can call the corresponding operator through the interface corresponding to each operator.
For example, the electronic device may first run the neural network model according to the configuration parameters corresponding to operating environment A: run the input layer of the neural network model in the CPU environment, run the first hidden layer in the FPGA environment, and run the second hidden layer and the output layer in the simulator environment.
Then, the electronic device may run the neural network model according to the configuration parameters corresponding to operating environment B: run the input layer in the CPU environment, and run the first hidden layer, the second hidden layer, and the output layer in the FPGA environment.
When the neural network model is operated under different operation environments, the electronic equipment can call the corresponding operator through the interface corresponding to each operator. For example, the second hidden layer of the neural network model includes a convolution operator, and its corresponding interface is J1. Then, when the neural network model is operated according to the configuration parameters corresponding to the operation environment a, the electronic device can realize the function of the convolution operator by calling the interface J1 corresponding to the convolution operator in the simulator environment. When the neural network model is operated according to the configuration parameters corresponding to the operation environment B, the electronic device can realize the function of the convolution operator by calling the interface J1 corresponding to the convolution operator in the FPGA environment. That is, in different operating environments, the electronic device may implement the function of the corresponding operator through the same interface, thereby simplifying the implementation of the function of the operator in different operating environments.
105. And outputting operation result data of the neural network model in the corresponding operation environment based on the configuration parameters corresponding to each group of operation environments so as to compare the operation efficiency of the neural network model in different operation environments.
For example, after the neural network model is run in the electronic device according to a set of configuration parameters, the electronic device may output the operation result data obtained in the operating environment specified by that set of configuration parameters. Based on the configuration parameters corresponding to each group of operating environments, the electronic device can obtain multiple groups of operation result data, so as to compare the operation efficiency of the neural network model in different operating environments.
For example, the electronic device obtains the operation result data D1 corresponding to the configuration parameter corresponding to the operation environment a and obtains the operation result data D2 corresponding to the configuration parameter corresponding to the operation environment B, so that the electronic device can compare the operation efficiency of the neural network model in two different operation environments according to the operation result data D1 and D2.
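A comparison over the output data (the D1 and D2 mentioned above) could be sketched as follows. The dictionary layout and the single timing field are assumptions for illustration, not the patent's actual result format.

```python
# Hypothetical operation result data for two configurations.
results = {
    "environment_A": {"total_ms": 42.0},   # stand-in for D1
    "environment_B": {"total_ms": 35.5},   # stand-in for D2
}

def fastest(results):
    """Pick the configuration with the smallest total running time."""
    return min(results, key=lambda name: results[name]["total_ms"])
```

Here `fastest(results)` would report that configuration B ran the model more efficiently.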
It can be understood that, in the embodiment of the present application, the electronic device may configure a corresponding interface for each operator in the neural network model, where the interface is used to implement the function of the corresponding operator, so that the electronic device may operate the neural network model in different operating environments and obtain operation result data in those environments, so as to compare the operation efficiency of the neural network model in different operating environments. That is, the present application can realize comparison of the operation efficiency of the artificial neural network model in different operating environments.
Referring to fig. 2, fig. 2 is another schematic flow chart of an apparatus operation method according to an embodiment of the present application, where the flow chart may include:
201. the electronic equipment acquires a neural network model, determines operators corresponding to each neural layer in the neural network model, and acquires parameters corresponding to the operators.
For example, the electronic device may first obtain a neural network model. Then, the electronic device may determine the operator corresponding to each neural layer in the neural network model and further obtain the parameters corresponding to each operator. The operators in each neural layer may refer to the computation nodes and operation rules in a neural network computation graph, where a computation node may be a node such as convolution (Convolution), pooling (Pooling), or activation (Activation), and an operation rule may refer to an operation such as partial convolution or two-vector addition. The parameters corresponding to an operator may include the shapes and formats of input and output data, weight matrix values, specific parameters of particular operators, and the like. The specific parameters of a particular operator may be, for example, the convolution kernel size and shape, padding, and stride of a convolutional layer.
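The operator parameters listed above (input/output shapes and formats, plus convolution-specific fields such as kernel size, padding, and stride) can be sketched as a simple record. The field names and default values are illustrative assumptions, not the patent's data structure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ConvParams:
    """Hypothetical parameter record for a convolution operator."""
    input_shape: List[int]
    output_shape: List[int]
    data_format: str = "NCHW"   # assumed data layout
    kernel_size: int = 3
    padding: str = "same"
    stride: int = 1

p = ConvParams(input_shape=[1, 3, 224, 224], output_shape=[1, 16, 224, 224])
```

A record like this carries everything an operating environment needs to pick and run a matching implementation.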
202. And the electronic equipment configures an interface corresponding to each operator.
For example, after determining the operators corresponding to each neural layer in the neural network model and obtaining the parameters corresponding to each operator, the electronic device may configure a corresponding interface for each operator.
For example, the electronic device may obtain code for implementing an operator and then encapsulate the implementation code into the form of an interface. Therefore, when the operator needs to be realized in different operating environments, the electronic device only needs to call the interface corresponding to the operator in the different operating environments to realize the function of the operator.
203. The electronic equipment acquires a plurality of groups of configuration parameters corresponding to different operating environments, wherein the configuration parameters corresponding to each operating environment are used for specifying the operating environment of each neural layer when the neural network model is operated in the electronic equipment, and the operating environments at least comprise a CPU operating environment, an FPGA operating environment, and a simulator operating environment.
For example, the electronic device may obtain multiple sets of configuration parameters corresponding to different operating environments, where the configuration parameters corresponding to each set of operating environment may be used to specify the operating environment of each neural layer when the neural network model is operated in the electronic device. The operating environment in the electronic device may at least include a CPU operating environment, an FPGA operating environment, a simulator operating environment, and the like.
For example, suppose the neural network model includes an input layer, a first hidden layer, a second hidden layer, and an output layer. The configuration parameters corresponding to operating environment A may specify that, when the neural network model is operated in the electronic device, the input layer of the neural network model is operated in the CPU environment, the first hidden layer is operated in the FPGA environment, and the second hidden layer and the output layer are operated in the simulator environment. As another example, the configuration parameters corresponding to operating environment B may specify that, when the neural network model is operated in the electronic device, the input layer is operated on the CPU and the first hidden layer, the second hidden layer, and the output layer are all operated on the FPGA. Of course, in other embodiments, the neural network model is not limited to an input layer, a first hidden layer, a second hidden layer, and an output layer; it may further include other neural layers, for example a third hidden layer, a fourth hidden layer, and so on, which is not specifically limited in this embodiment.
Note that CPU refers to a central processing unit. FPGA refers to a Field Programmable Gate Array, which can serve as an artificial intelligence (AI) chip. The simulator may refer to a simulator written using SystemC.
204. And the electronic equipment operates the neural network model according to the configuration parameters corresponding to each group of operation environments, wherein when the neural network model is operated in different operation environments, the parameters corresponding to the operators are obtained, the corresponding target operators are selected according to the parameters corresponding to the operators, and the corresponding target operators are called through the interfaces corresponding to the target operators.
For example, after obtaining multiple sets of configuration parameters corresponding to different operating environments, the electronic device may sequentially operate the neural network model in the electronic device according to the configuration parameters corresponding to each set of operating environments. When the neural network model is operated under different operating environments, the electronic device can firstly acquire parameters corresponding to each operator, then select a corresponding target operator according to the parameters corresponding to each operator, and then call the corresponding target operator through an interface corresponding to each target operator.
It should be noted that, in flow 201 of the embodiment of the present application, the electronic device may obtain only the parameters of the operator corresponding to each neural layer, without determining which specific operator each one is. Instead, when the neural network model is operated, the electronic device automatically selects which operator to use according to the operating environment and the parameters corresponding to the operator. Since the parameters corresponding to each operator have a format specific to that operator, the corresponding operator can be selected according to the parameters and their format.
That is, when a certain operator needs to be implemented, each operating environment can inherit the interface corresponding to the operator. For example, the electronic device may set a main function, and the interface corresponding to each operator may include a first sub-interface and a second sub-interface. Through the main function, the electronic device can first call the first sub-interface of the operator and transfer the parameters corresponding to the operator to the current operating environment through the first sub-interface. The operating environment then selects a suitable operator according to those parameters. It should be noted that the main function refers to the main (entry) function of an executable file set by the electronic device, which may call other functions. In other words, each operating environment implements the first sub-interface; the main function transfers the parameters corresponding to the operator to the specific operating environment by calling the first sub-interface, and the operating environment decides how to process the parameters according to its own characteristics: they may be sent to hardware or temporarily stored in software.
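The two-sub-interface scheme above can be sketched as follows: the main function calls the first sub-interface to hand the operator's parameters to the concrete operating environment (which stores or forwards them as it sees fit), then calls the second sub-interface to execute. All class, method, and parameter names here are hypothetical illustrations.

```python
class Environment:
    """Base interface that every operating environment inherits."""
    def prepare(self, params):   # first sub-interface: receive parameters
        raise NotImplementedError
    def execute(self, inputs):   # second sub-interface: run the operator
        raise NotImplementedError

class SoftwareEnvironment(Environment):
    """Temporarily stores the parameters in software, as the text allows."""
    def prepare(self, params):
        self.params = params           # keep for later execution
    def execute(self, inputs):
        scale = self.params.get("scale", 1)
        return [x * scale for x in inputs]

def main(env, params, inputs):
    """Entry function: drives any environment through the same interface."""
    env.prepare(params)   # transfer the operator's parameters
    return env.execute(inputs)

out = main(SoftwareEnvironment(), {"scale": 2}, [1, 2, 3])  # [2, 4, 6]
```

A hardware-backed environment would implement `prepare` by sending the parameters to the device instead of storing them, while `main` stays unchanged.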
The process of selecting a specific operator by the electronic device may have different implementations.
Taking convolution operators as an example: in the CPU environment, all convolutions are implemented by one program. In this case, the parameters of the operator are converted into the parameters of the convolution implementation, and a function is called to complete the calculation. That is, the operator realized by the CPU performs some simple conversions on the operator's parameters and then passes them as function arguments to a certain function to complete the calculation. For example, a convolution function generally has a form such as conv(input_data, padding, stride, scaling, …); different input sizes and parameters can be calculated with this one function, so the CPU only needs to fill in appropriate arguments and call the function.
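As a toy stand-in for the general convolution function described above, the sketch below shows the CPU path: convert the operator's parameters into function arguments and call a single function. The 1-D, no-padding implementation is an assumption for illustration, not the patent's code.

```python
def conv1d(input_data, kernel, stride=1):
    """Valid (no-padding) 1-D convolution over a list of numbers."""
    n, k = len(input_data), len(kernel)
    return [
        sum(input_data[i + j] * kernel[j] for j in range(k))
        for i in range(0, n - k + 1, stride)
    ]

# The CPU environment fills in the arguments from the operator's
# parameters and calls the one general function.
out = conv1d([1, 2, 3, 4], kernel=[1, 1], stride=1)  # [3, 5, 7]
```

The point is that one general-purpose function covers every parameter combination, so the CPU path needs no per-shape programs.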
In the FPGA environment, different programs may be used to implement convolutions with different parameters; in this case, there are two selection methods:
Firstly, a hardware program may be directly specified for each layer, so it is only necessary to check whether the parameter format of the operator matches that of the hardware program. For example, assuming a hardware program provided by the hardware supports only a 3x3 kernel with padding equal to same, stride equal to 1, and translation equal to 1, and a certain neural layer specifies that this hardware function be used, then the electronic device needs to check whether the neural layer matches the capability provided by the hardware.
Secondly, a database may be established for all hardware programs, and the most suitable operator selected from the database according to the operator's parameters. That is, the operator realized by the FPGA selects a suitable hardware program from the database according to the parameters, sends it to the hardware, and waits for the hardware to complete the calculation of the operator. For example, the electronic device may manage the various hardware programs provided by all hardware and automatically select which hardware program to use based on the operator parameters.
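The database approach can be sketched as a lookup table keyed by the parameter combination each hardware program supports; selection is a match against the operator's parameters. The program names and records below are hypothetical.

```python
# Hypothetical registry of the hardware programs the FPGA provides.
HARDWARE_PROGRAMS = [
    {"name": "conv3x3_s1", "kernel": 3, "padding": "same", "stride": 1},
    {"name": "conv1x1_s1", "kernel": 1, "padding": "same", "stride": 1},
    {"name": "conv3x3_s2", "kernel": 3, "padding": "same", "stride": 2},
]

def select_program(kernel, padding, stride):
    """Return the first hardware program matching the operator parameters."""
    for prog in HARDWARE_PROGRAMS:
        if (prog["kernel"], prog["padding"], prog["stride"]) == (kernel, padding, stride):
            return prog["name"]
    return None  # no suitable hardware program: fall back or report an error
```

A `None` result corresponds to the mismatch case in the first method, where the layer's parameters exceed the capability the hardware provides.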
After the execution environment has selected the appropriate operator, the main function may call the second sub-interface to execute it. The execution modes of different execution environments can differ. For example, in the CPU environment the operator may be a simply wrapped C-language function, and the electronic device can directly call the wrapped function to execute the corresponding operator. In the simulator environment, the electronic device needs to write data and parameters into certain specific files in advance, call a previously prepared executable file, and finally parse the output file to extract specific data, thereby completing execution of the operator. In the FPGA environment, the electronic device needs to dynamically send data to the FPGA through a hardware interface (such as USB/PCIe) and read the result back through the hardware interface after the computation is completed, thereby completing execution of the operator.
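A minimal sketch of the "same interface, different execution modes" idea: the class names are invented, and each environment's real I/O (file writing, simulator invocation, USB/PCIe transfers) is reduced to a placeholder computation noted in comments:

```python
class CpuEnv:
    def execute(self, op, data):
        # CPU: a direct, simply wrapped function call.
        return [x * op["scale"] for x in data]

class SimulatorEnv:
    def execute(self, op, data):
        # Simulator: would write data/parameters to files, run a
        # prepared executable, and parse its output file.
        return [x * op["scale"] for x in data]

class FpgaEnv:
    def execute(self, op, data):
        # FPGA: would send data over USB/PCIe and read the result back
        # once the hardware finishes.
        return [x * op["scale"] for x in data]

def run_layer(env, op, data):
    # The main function calls every environment the same way.
    return env.execute(op, data)

outputs = [run_layer(env, {"scale": 2}, [1, 2])
           for env in (CpuEnv(), SimulatorEnv(), FpgaEnv())]
```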
That is, through the above implementation, the main function calls the operator and the operating environment implements it; the two interact through the same interface. Because different operating environments implement the operator's function behind the same interface, the method of calling operators implemented in different environments is identical from the main function's point of view.
205. Based on the configuration parameters corresponding to each group of operating environments, the electronic device outputs operating result data of the neural network model in the corresponding operating environment, wherein the operating result data includes the operating time T of each neural layer. When a neural layer runs in the CPU environment, its operating time T is calculated according to the following formula: T = F1/F2 × T1, where F1 is a preset frequency value, F2 is the actual CPU frequency value, and T1 is the actual operating duration. When a neural layer runs in an FPGA or simulator environment, its operating time T is calculated according to the following formula: T = C × F3 + T2, where C is the number of actual running clocks, F3 is the preset running frequency, and T2 is the preset running duration.
For example, after the neural network model is run in the electronic device according to the configuration parameters corresponding to each set of operating environments, the electronic device may output the operating result data in the operating environment corresponding to that set of configuration parameters. The operating result data includes the operating time of each neural layer. When a neural layer runs in the CPU environment, its operating time T is calculated according to the following formula: T = F1/F2 × T1, where F1 is a preset frequency value, F2 is the actual CPU frequency value, and T1 is the actual operating duration. When a neural layer runs in an FPGA or simulator environment, its operating time T is calculated according to the following formula: T = C × F3 + T2, where C is the number of actual running clocks, F3 is the preset running frequency, and T2 is the preset running duration.
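The two timing formulas translate directly into code; this is a literal transcription of the formulas as stated above, with no assumptions beyond the function names:

```python
def cpu_layer_time(f1, f2, t1):
    # CPU environment: T = F1/F2 * T1, where F1 is a preset frequency
    # value, F2 the actual CPU frequency, and T1 the actual duration.
    return f1 / f2 * t1

def hw_layer_time(c, f3, t2):
    # FPGA/simulator environment: T = C * F3 + T2, where C is the
    # number of actual running clocks, F3 the preset running frequency,
    # and T2 the preset running duration (formula as given in the text).
    return c * f3 + t2
```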
That is, through the process 205, the electronic device may obtain the operation time of each neural layer in the neural network model under the operation environment specified by the configuration parameters corresponding to the operation environment.
206. The electronic equipment compares the operation efficiency of the neural network model under different operation environments according to the sum of the operation time of each neural layer, determines the optimal operation environment of each corresponding neural layer according to the operation time of each neural layer under different operation environments, and determines the optimal operation environment of the neural network model according to the optimal operation environment of each neural layer.
For example, the electronic device may compare the operating efficiency of the neural network model under different operating environments according to the sum of the operating times of the neural layers in the neural network model. In this embodiment, the electronic device may further determine the optimal operating environment of each corresponding neural layer according to the operating time of each neural layer in different operating environments, and determine the optimal operating environment of the neural network model according to the optimal operating environment of each neural layer. For example, the electronic device may determine the operating environment with the shortest operating time as the optimal operating environment of each corresponding neural layer, and determine the optimal operating environment of the neural network model according to the optimal operating environment of each neural layer.
For example, the neural network model includes an input layer, a first hidden layer, a second hidden layer, and an output layer. The electronic device obtains the operation result data D1 corresponding to the configuration parameter corresponding to the operation environment a and obtains the operation result data D2 corresponding to the configuration parameter corresponding to the operation environment B, so that the electronic device can determine the operation efficiency of the neural network model in two different operation environments according to the operation result data D1 and D2. For example, the operation result data D1 shows that the total operation time of the neural network model in the operation environment corresponding to the configuration parameters corresponding to the operation environment a (i.e., the sum of the operation times of the respective neural layers) is 1 second. The operation result data D2 shows that the total operation time (i.e., the sum of the operation times of the neural layers) of the neural network model in the operation environment corresponding to the configuration parameters corresponding to the operation environment B is 1.5 seconds. Then, because the time for the electronic device to operate the neural network model in the operating environment corresponding to the configuration parameter corresponding to the operating environment a is shorter, the electronic device can determine that the operating environment corresponding to the configuration parameter corresponding to the operating environment a is better than the operating environment corresponding to the configuration parameter corresponding to the operating environment B.
In this embodiment, the electronic device may determine the operation environment with the shortest operation time as the optimal operation environment of the corresponding neural layer, and determine the optimal operation environment of the neural network model according to the optimal operation environment of each neural layer. For example, according to the operation result data of the operation environment corresponding to the configuration parameters corresponding to each operation environment, the electronic device determines that the input layer of the neural network model has the shortest operation time in the CPU environment, the first hidden layer of the neural network model has the shortest operation time in the FPGA environment, the second hidden layer of the neural network model has the shortest operation time in the FPGA environment, and the output layer of the neural network model has the shortest operation time in the simulator, so that the electronic device can determine that the optimal operation environment of the neural network model is: the input layer runs in a CPU environment, the first hidden layer and the second hidden layer run in an FPGA environment, and the output layer runs in a simulator environment. That is, in one embodiment, the electronic device may combine the optimal operating environments of the neural layers to obtain the optimal operating environment of the neural network model.
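The per-layer selection described above amounts to an argmin over environments; the following sketch reproduces the example just given, with made-up timing values chosen to match the described outcome:

```python
def optimal_environments(layer_times):
    # layer_times: {layer_name: {env_name: run_time}}.
    # For each layer, pick the environment with the shortest run time;
    # the combination of per-layer choices is the model's optimal
    # operating environment.
    return {layer: min(envs, key=envs.get)
            for layer, envs in layer_times.items()}

# Illustrative timings matching the example in the text.
times = {
    "input":    {"CPU": 0.1, "FPGA": 0.3, "simulator": 0.5},
    "hidden_1": {"CPU": 0.4, "FPGA": 0.2, "simulator": 0.6},
    "hidden_2": {"CPU": 0.5, "FPGA": 0.1, "simulator": 0.7},
    "output":   {"CPU": 0.3, "FPGA": 0.4, "simulator": 0.2},
}
best = optimal_environments(times)
```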
In one embodiment, the process of running the neural network model in the electronic device according to the configuration parameters corresponding to each set of running environments may include:
operating a neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments, wherein an output data format of a previous neural layer and an input data format of a next neural layer are obtained when the neural network model is operated;
if the output data format of the previous neural layer is different from the input data format of the next neural layer, the electronic device converts the output data of the previous neural layer into the input data format of the next neural layer and then inputs the converted data to the next neural layer.
Different operating environments impose different input and output shape requirements. For example, in the CPU environment, the input and output are generally data in NHWC or NCHW format. Hardware-related environments, such as the simulator and FPGA environments, typically use data in the NCHWc format due to performance and hardware limitations.
In this embodiment, the electronic device may obtain the input and output formats of the operator. In the main function, the electronic device may compare the output format of the previous neural layer with the input format of the next neural layer. If the two formats differ, the electronic device can perform an automatic conversion. There are three forms of automatic conversion:
First, dimension transposition: for example, NHWC to NCHW.
Second, dimension splitting: for example, NHWC to NCHWc, where the original C channels are split into C/c groups of c channels each.
Third, padding conversions: for example, NHWC (1, 224, 224, 3) to NHWC (1, 256, 256, 3), adding padding in the H and W dimensions. The electronic device can automatically determine which conversion to perform according to the input and output formats (such as "NHWC") and shapes (such as (1, 224, 224, 3)).
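The three conversions can be illustrated at the level of tensor shapes alone (the actual data movement is omitted); the function names are assumptions for illustration:

```python
def transpose_shape(shape, src="NHWC", dst="NCHW"):
    # 1) Dimension transposition, e.g. NHWC -> NCHW: reorder the axes.
    return tuple(shape[src.index(axis)] for axis in dst)

def split_channels(nchw_shape, c_block):
    # 2) Dimension splitting, e.g. NCHW -> NCHWc: the C channels are
    # split into C/c groups of c channels each.
    n, c, h, w = nchw_shape
    assert c % c_block == 0, "C must divide evenly into blocks of c"
    return (n, c // c_block, h, w, c_block)

def pad_hw(nhwc_shape, target_h, target_w):
    # 3) Padding conversion, e.g. (1,224,224,3) -> (1,256,256,3):
    # enlarge the H and W dimensions.
    n, h, w, c = nhwc_shape
    return (n, max(h, target_h), max(w, target_w), c)
```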
In one embodiment, after obtaining the neural network model, when the electronic device executes determining operators corresponding to each neural layer in the neural network model, the method may include:
the electronic equipment converts the format of the neural network model into a preset format, and determines operators corresponding to each neural layer in the neural network model during format conversion.
For example, after the neural network model is obtained, the electronic device may perform format conversion on the neural network model, convert the format of the neural network model into a preset format that is customized in advance, and determine an operator corresponding to each neural layer in the neural network model during format conversion.
That is, the above format conversion process is responsible for parsing a model in a standard format, obtaining the complete structure of the whole model and the parameters of each layer's operators, and storing them in a predefined data format. In one embodiment, the electronic device may use the JSON format to save the operator parameters.
The purpose of format conversion is to unify models of different formats into a pre-defined format for subsequent processing.
The model may generate a new model file after format conversion, and a default operating environment, such as a CPU or FPGA or simulator, may be specified during conversion. In one embodiment, the electronic device may specify a default operating environment at format conversion via a command line parameter. After the model conversion, the operating environment of a single operator at a certain layer in the model file can be changed. For example, a user or a developer can specify, through code, that an input layer runs on a CPU environment, a hidden layer runs on an FPGA environment, and an output layer runs on a simulator environment.
It should be noted that the format conversion of the neural network model is intended to adapt to models trained by different training frameworks, such as TensorFlow, Caffe, or PyTorch. Moreover, format conversion allows effective data to be extracted from the neural network model, making subsequent processing more convenient.
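A hypothetical version of the unified format, assuming the JSON storage mentioned above; the field names (`layers`, `op`, `env`) and the command-line-style default are invented for illustration, including the per-layer environment override described in the text:

```python
import json

def convert_model(layers, default_env="CPU"):
    # Unify a parsed model into a predefined JSON format; each layer
    # records its operator, parameters, and operating environment.
    # Layers without an explicit "env" fall back to the default
    # environment specified at conversion time.
    model = {"layers": [
        {"name": layer["name"],
         "op": layer["op"],
         "params": layer.get("params", {}),
         "env": layer.get("env", default_env)}
        for layer in layers
    ]}
    return json.dumps(model)

converted = convert_model(
    [{"name": "input", "op": "conv", "env": "CPU"},
     {"name": "hidden", "op": "conv"},                 # uses the default
     {"name": "output", "op": "fc", "env": "simulator"}],
    default_env="FPGA")
```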
In an embodiment, the device operation method provided by this embodiment may also be used to verify the correctness of the operation result of the neural network model. For example, since the electronic device can specify the operating environment of each neural layer in the neural network model, multiple operating environments can be used to operate the same operator and compare the results.
For example, for the output of a certain neural layer, if the operation result in a certain operating environment is determined to be reliable, the electronic device may use that environment as a reference environment, and the results from other operating environments can be compared against it to detect whether they are consistent. If the results are consistent, the output of the neural layer, or the design of the neural layer, is correct. If they are not consistent, it may indicate that the neural layer runs incorrectly in the other operating environments or is not suitable for running in them.
For another example, without a reference environment, the neural network model or a certain neural layer in it may be run in multiple (for example, three or more) operating environments, and the operating result obtained in each. The electronic device can then score each result according to the consistency of the results. For example, if running the neural layer in the CPU environment and in the FPGA environment both produce result S1, then S1 scores 2; if running it in the simulator environment produces result S2, then S2 scores 1. Through this scoring, the electronic device can determine the highest-scoring result as the correct operating result of the neural layer, and adjust the operating-environment design of the neural network model or the neural layer accordingly. For example, the electronic device may determine an optimal operating environment of the neural network model or a neural layer in it according to the operating results of the different operating environments.
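The consistency-based scoring can be sketched as counting how many environments agree on each result; all names are illustrative:

```python
from collections import Counter

def score_results(results):
    # results: {env_name: operating_result}. Each result's score is the
    # number of environments that produced that same result.
    counts = Counter(results.values())
    return {env: counts[out] for env, out in results.items()}

def most_trusted(results):
    # The highest-scoring result is taken as the correct one; here we
    # return an environment that produced it.
    scores = score_results(results)
    return max(results, key=scores.get)

# Example from the text: CPU and FPGA agree on S1, the simulator says S2.
runs = {"CPU": "S1", "FPGA": "S1", "simulator": "S2"}
```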
It can be understood that, the device operation method provided in the embodiment of the present application can compare the operation efficiencies of the neural network model in different operation environments, and can verify the correctness of the operation environment of the neural network model according to the comparison result of the operation efficiencies, so as to optimize the operation environment of the neural network model, and facilitate acceleration of hardware design corresponding to the operation environment of the neural network model.
In the embodiment of the present application, since the electronic device can specify the operating environment of the neural layer in the neural network model, the embodiment can very simply add the operating environment to the neural network model. In addition, the operation efficiency and the operation result of the neural network model under different operation environments can be rapidly compared, so that the accuracy and the efficiency of hardware designed for the neural network model can be rapidly verified, and meanwhile, reliable guidance is provided for the verification and the tuning of the neural network model.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an apparatus operating device according to an embodiment of the present disclosure. The device operating apparatus 300 may include: the device comprises a first acquisition module 301, a setting module 302, a second acquisition module 303, an operation module 304 and an output module 305.
The first obtaining module 301 is configured to obtain a neural network model, and determine an operator corresponding to each neural layer in the neural network model.
A setting module 302, configured to configure an interface corresponding to each operator.
A second obtaining module 303, configured to obtain multiple sets of configuration parameters corresponding to different operating environments, where the configuration parameters corresponding to the operating environments are used to specify operating environments of each neural layer when the neural network model is operated in an electronic device.
An operation module 304, configured to operate the neural network model in the electronic device according to the configuration parameters corresponding to each set of the operation environments, where when the neural network model is operated in different operation environments, a corresponding operator is called through an interface corresponding to each operator.
An output module 305, configured to output, based on the configuration parameters corresponding to each set of the operating environments, operating result data of the neural network model in the corresponding operating environment, so as to compare operating efficiencies of the neural network model in different operating environments.
In one embodiment, the first obtaining module 301 may further be configured to: and acquiring parameters corresponding to the operators.
The execution module 304 may be configured to: and when the neural network model is operated under different operation environments, acquiring parameters corresponding to each operator, selecting a corresponding target operator according to the parameters corresponding to each operator, and calling the corresponding target operator through an interface corresponding to each target operator.
In one embodiment, the operating environments include at least a CPU operating environment, an FPGA operating environment, and a simulator operating environment.
The output module 305 may be configured to: and outputting operation result data of the neural network model under the corresponding operation environment based on the configuration parameters corresponding to each group of operation environments, wherein the operation result data comprises the operation time of each neural layer, so that the operation efficiency of the neural network model under different operation environments is compared according to the sum of the operation time of each neural layer.
In one embodiment, the execution module 304 may be configured to:
operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of the operating environments, wherein the output data format of a previous neural layer and the input data format of a next neural layer are obtained when the neural network model is operated;
and if the output data format of the previous neural layer is different from the input data format of the next neural layer, converting the output data format of the previous neural layer into the input data format of the next neural layer and inputting the converted data to the next neural layer.
In one embodiment, the first obtaining module 301 may be configured to:
and converting the format of the neural network model into a preset format, and determining operators corresponding to each neural layer in the neural network model during format conversion.
In one embodiment, the output module 305 may be further configured to:
and determining the optimal operating environment of each corresponding neural layer according to the operating time of each neural layer in different operating environments.
In one embodiment, the output module 305 may be further configured to:
and determining the optimal operation environment of the neural network model according to the optimal operation environment of each neural layer.
In one embodiment, the output module 305 may be configured to:
and combining the optimal operating environments of the neural layers to obtain the optimal operating environment of the neural network model.
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed on a computer, the computer is caused to execute the flow in the device operation method provided in this embodiment.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the flow in the device operation method provided in this embodiment by calling the computer program stored in the memory.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
The electronic device 400 may include components such as a sensor 401, a memory 402, a processor 403, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 4 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The sensors 401 may include a gyro sensor (e.g., a three-axis gyro sensor), an acceleration sensor, and the like.
The memory 402 may be used to store applications and data. The memory 402 stores applications containing executable code. The application programs may constitute various functional modules. The processor 403 executes various functional applications and data processing by running an application program stored in the memory 402.
The processor 403 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing an application program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
In this embodiment, the processor 403 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 403 runs the application programs stored in the memory 402, so as to execute:
acquiring a neural network model, and determining operators corresponding to each neural layer in the neural network model;
configuring an interface corresponding to each operator;
acquiring multiple sets of configuration parameters corresponding to different operating environments, wherein the configuration parameters corresponding to the operating environments are used to specify the operating environment of each neural layer when the neural network model is operated in the electronic device;
operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments, wherein when the neural network model is operated in different operating environments, corresponding operators are called through interfaces corresponding to the operators;
and outputting operation result data of the neural network model under the corresponding operation environment based on the configuration parameters corresponding to each group of operation environment so as to compare the operation efficiency of the neural network model under different operation environments.
Referring to fig. 5, the electronic device 400 may include a sensor 401, a memory 402, a processor 403, an input unit 404, an output unit 405, a speaker 406, and the like.
The sensors 401 may include a gyro sensor (e.g., a three-axis gyro sensor), an acceleration sensor, and the like.
The memory 402 may be used to store applications and data. The memory 402 stores applications containing executable code. The application programs may constitute various functional modules. The processor 403 executes various functional applications and data processing by running an application program stored in the memory 402.
The processor 403 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing an application program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
The input unit 404 may be used to receive input numbers, character information, or user characteristic information, such as a fingerprint, and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The output unit 405 may be used to display information input by or provided to a user and various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. The output unit may include a display panel.
In this embodiment, the processor 403 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 403 runs the application programs stored in the memory 402, so as to execute:
acquiring a neural network model, and determining operators corresponding to each neural layer in the neural network model;
configuring an interface corresponding to each operator;
acquiring multiple sets of configuration parameters corresponding to different operating environments, wherein the configuration parameters corresponding to the operating environments are used to specify the operating environment of each neural layer when the neural network model is operated in the electronic device;
operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments, wherein when the neural network model is operated in different operating environments, corresponding operators are called through interfaces corresponding to the operators;
and outputting operation result data of the neural network model under the corresponding operation environment based on the configuration parameters corresponding to each group of operation environment so as to compare the operation efficiency of the neural network model under different operation environments.
In one embodiment, processor 403 may further perform: acquiring parameters corresponding to the operators;
then, when the processor 403 executes the neural network model in different operating environments, and calls the corresponding operator through the interface corresponding to each operator, it may perform: and when the neural network model is operated under different operation environments, acquiring parameters corresponding to each operator, selecting a corresponding target operator according to the parameters corresponding to each operator, and calling the corresponding target operator through an interface corresponding to each target operator.
In one embodiment, the operating environments include at least a CPU operating environment, an FPGA operating environment, and a simulator operating environment.
Then, when the processor 403 executes the configuration parameters corresponding to each set of the operating environments and outputs the operating result data of the neural network model in the corresponding operating environment, so as to compare the operating efficiency of the neural network model in different operating environments, it may execute:
and outputting operation result data of the neural network model under the corresponding operation environment based on the configuration parameters corresponding to each group of operation environments, wherein the operation result data comprises the operation time of each neural layer, so that the operation efficiency of the neural network model under different operation environments is compared according to the sum of the operation time of each neural layer.
In one embodiment, the processor 403 executes the configuration parameters corresponding to each set of the operating environments, and when the neural network model is operated in the electronic device, the following steps may be executed:
operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of the operating environments, wherein the output data format of a previous neural layer and the input data format of a next neural layer are obtained when the neural network model is operated;
and if the output data format of the previous neural layer is different from the input data format of the next neural layer, converting the output data format of the previous neural layer into the input data format of the next neural layer and inputting the converted data to the next neural layer.
In one embodiment, when processor 403 executes the operator for determining each neural layer in the neural network model, it may execute:
and converting the format of the neural network model into a preset format, and determining operators corresponding to each neural layer in the neural network model during format conversion.
In one embodiment, processor 403 may further perform:
and determining the optimal operating environment of each corresponding neural layer according to the operating time of each neural layer in different operating environments.
In one embodiment, processor 403 may further perform:
and determining the optimal operation environment of the neural network model according to the optimal operation environment of each neural layer.
In one embodiment, when the processor 403 executes the determining of the optimal operating environment of the neural network model according to the optimal operating environment of each neural layer, it may execute: and combining the optimal operating environments of the neural layers to obtain the optimal operating environment of the neural network model.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the device operation method, and are not described herein again.
The apparatus operation device provided in the embodiment of the present application and the apparatus operation method in the above embodiment belong to the same concept, and any method provided in the apparatus operation method embodiment may be operated on the apparatus operation device, and a specific implementation process thereof is described in the apparatus operation method embodiment in detail, and is not described herein again.
It should be noted that, for the device operation method described in the embodiment of the present application, those skilled in the art can understand that all or part of the process of implementing the device operation method may be completed by controlling the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor, and the execution process may include the flow of the above device operation method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
In the device operation apparatus of the embodiments of the present application, the functional modules may be integrated into one processing chip, may each exist physically alone, or two or more of them may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The device operation method, apparatus, storage medium, and electronic device provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementation of the present application, and the description of the above embodiments is intended only to help understand the method and its core idea. Meanwhile, those skilled in the art may, following the idea of the present application, vary the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (11)

1. A device operation method, comprising:
acquiring a neural network model, and determining operators corresponding to each neural layer in the neural network model;
configuring an interface corresponding to each operator;
acquiring configuration parameters corresponding to a plurality of groups of different operating environments, wherein the configuration parameters corresponding to each operating environment are used for specifying the operating environment of each neural layer when the neural network model is operated in the electronic equipment;
operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments, wherein, when the neural network model is operated in different operating environments, the corresponding operators are called through the interfaces corresponding to the operators;
and outputting operation result data of the neural network model under each corresponding operating environment based on the configuration parameters corresponding to each group of operating environments, so as to compare the operating efficiency of the neural network model under different operating environments.
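The overall flow of claim 1 can be sketched as follows; the layer names, the dictionary-based "operator interface" representation, and the stub operators are all hypothetical (real back ends would replace the identity functions):

```python
import time

# Sketch of the benchmarking flow of claim 1 (hypothetical names).
# Each "operator interface" maps an environment name to a callable,
# so the same neural layer can be dispatched to different back ends.

def run_model(layers, operator_ifaces, env_config):
    """Run the model once, dispatching each layer to the environment
    named in env_config, and record per-layer wall-clock time."""
    timings = {}
    data = None
    for layer in layers:
        op = operator_ifaces[layer][env_config[layer]]  # call via interface
        start = time.perf_counter()
        data = op(data)
        timings[layer] = time.perf_counter() - start
    return timings

# A hypothetical two-layer model with stub (identity) operators.
layers = ["conv1", "fc1"]
identity = lambda x: x
operator_ifaces = {l: {"cpu": identity, "simulator": identity} for l in layers}

# Operate the model once per group of configuration parameters and
# output the per-configuration totals for comparison.
for config in ({"conv1": "cpu", "fc1": "cpu"},
               {"conv1": "simulator", "fc1": "simulator"}):
    result = run_model(layers, operator_ifaces, config)
    print(config, "total:", sum(result.values()))
```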
2. The device operation method according to claim 1, wherein the method further comprises: acquiring parameters corresponding to the operators;
and the calling, when the neural network model is operated under different operating environments, the corresponding operators through the interfaces corresponding to the operators comprises: when the neural network model is operated under different operating environments, acquiring the parameters corresponding to each operator, selecting a corresponding target operator according to the parameters corresponding to each operator, and calling the corresponding target operator through an interface corresponding to each target operator.
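The parameter-based selection of a target operator might be sketched as follows; the registry, the "stride" parameter, and the stub kernels are all hypothetical illustrations, not the patented implementation:

```python
# Sketch of claim 2: select the target operator according to the
# operator's parameters, then call it through a uniform interface.
OPERATOR_REGISTRY = {
    ("conv2d", 1): lambda x: x,  # stub stride-1 convolution kernel
    ("conv2d", 2): lambda x: x,  # stub stride-2 convolution kernel
}

def call_operator(op_type, params, data):
    """Pick the target operator matching the layer parameters and invoke it."""
    target = OPERATOR_REGISTRY[(op_type, params["stride"])]
    return target(data)

out = call_operator("conv2d", {"stride": 2}, [1, 2, 3])
print(out)  # the stub kernel returns its input unchanged
```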
3. The device operation method according to claim 1, wherein the operating environments comprise at least a CPU operating environment, an FPGA operating environment, and a simulator operating environment;
and the outputting operation result data of the neural network model under each corresponding operating environment based on the configuration parameters corresponding to each group of operating environments, so as to compare the operating efficiency of the neural network model under different operating environments, comprises:
outputting operation result data of the neural network model under each corresponding operating environment based on the configuration parameters corresponding to each group of operating environments, wherein the operation result data comprises the operation time of each neural layer, so that the operating efficiency of the neural network model under different operating environments is compared according to the sum of the operation times of the neural layers.
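The comparison by summed per-layer operation times can be sketched as follows; the environment names match those in claim 3, but the timing numbers are hypothetical measurements:

```python
# Sketch of claim 3: compare operating environments by the sum of the
# per-layer operation times (hypothetical numbers, in milliseconds).
results = {
    "cpu":       {"conv1": 4.2, "pool1": 0.6, "fc1": 3.0},
    "fpga":      {"conv1": 1.1, "pool1": 0.9, "fc1": 2.5},
    "simulator": {"conv1": 9.8, "pool1": 2.3, "fc1": 7.1},
}

# Sum the per-layer times for each environment and pick the smallest total.
totals = {env: sum(t.values()) for env, t in results.items()}
fastest = min(totals, key=totals.get)
print(totals, "fastest:", fastest)  # the FPGA environment wins here
```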
4. The device operation method according to claim 1, wherein the operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments comprises:
operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments, wherein the output data format of a previous neural layer and the input data format of a next neural layer are acquired when the neural network model is operated;
and if the output data format of the previous neural layer is different from the input data format of the next neural layer, converting the output data format of the previous neural layer into the input data format of the next neural layer and inputting the converted data to the next neural layer.
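The inter-layer format-adaptation step can be sketched as follows; the CHW/HWC layout names are common tensor-layout conventions, not taken from the claim text:

```python
# Sketch of claim 4: convert the previous layer's output format only
# when it differs from the next layer's input format.

def chw_to_hwc(t):
    """Transpose a [C][H][W] nested list into [H][W][C]."""
    C, H, W = len(t), len(t[0]), len(t[0][0])
    return [[[t[c][h][w] for c in range(C)]
             for w in range(W)]
            for h in range(H)]

def adapt(prev_out, prev_fmt, next_fmt):
    """Hand data to the next layer, converting the layout if needed."""
    if prev_fmt == next_fmt:
        return prev_out
    assert (prev_fmt, next_fmt) == ("CHW", "HWC")  # only case sketched here
    return chw_to_hwc(prev_out)

# A 2-channel 1x2 feature map in CHW layout.
x = [[[1, 2]], [[3, 4]]]
y = adapt(x, "CHW", "HWC")
print(y)  # [[[1, 3], [2, 4]]]
```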
5. The device operation method according to claim 1, wherein the determining operators corresponding to each neural layer in the neural network model comprises:
converting the format of the neural network model into a preset format, and determining the operators corresponding to each neural layer in the neural network model during the format conversion.
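Recording the operator of each layer while converting the model into a preset internal format might look like the following sketch; the source description and the internal format are hypothetical:

```python
# Sketch of claim 5: while converting a model description into a preset
# internal format, record the operator corresponding to each layer.
source_model = [
    {"name": "conv1", "type": "Conv2D"},
    {"name": "relu1", "type": "ReLU"},
    {"name": "fc1",   "type": "Dense"},
]

converted, layer_ops = [], {}
for layer in source_model:
    op = layer["type"].lower()               # normalized operator name
    converted.append({"layer": layer["name"], "op": op})
    layer_ops[layer["name"]] = op            # determined during conversion

print(layer_ops)  # {'conv1': 'conv2d', 'relu1': 'relu', 'fc1': 'dense'}
```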
6. The device operation method according to claim 3, wherein the method further comprises:
determining the optimal operating environment of each neural layer according to the operation time of each neural layer under different operating environments.
7. The device operation method according to claim 6, further comprising:
determining the optimal operating environment of the neural network model according to the optimal operating environment of each neural layer.
8. The device operation method according to claim 7, wherein the determining the optimal operating environment of the neural network model according to the optimal operating environment of each neural layer comprises:
combining the optimal operating environments of the neural layers to obtain the optimal operating environment of the neural network model.
9. A device operation apparatus, comprising:
a first acquisition module, configured to acquire a neural network model and determine operators corresponding to each neural layer in the neural network model;
a setting module, configured to configure an interface corresponding to each operator;
a second acquisition module, configured to acquire configuration parameters corresponding to a plurality of groups of different operating environments, wherein the configuration parameters corresponding to each operating environment are used for specifying the operating environment of each neural layer when the neural network model is operated in the electronic equipment;
an operation module, configured to operate the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments, wherein, when the neural network model is operated in different operating environments, the corresponding operators are called through the interfaces corresponding to the operators;
and an output module, configured to output operation result data of the neural network model under each corresponding operating environment based on the configuration parameters corresponding to each group of operating environments, so as to compare the operating efficiency of the neural network model under different operating environments.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed on a computer, causes the computer to perform the method according to any one of claims 1 to 8.
11. An electronic device, comprising a memory and a processor, wherein the processor is configured to perform the method according to any one of claims 1 to 8 by invoking a computer program stored in the memory.
CN201911416475.8A 2019-12-31 2019-12-31 Equipment operation method and device, storage medium and electronic equipment Active CN111210005B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911416475.8A CN111210005B (en) 2019-12-31 2019-12-31 Equipment operation method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911416475.8A CN111210005B (en) 2019-12-31 2019-12-31 Equipment operation method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111210005A true CN111210005A (en) 2020-05-29
CN111210005B CN111210005B (en) 2023-07-18

Family

ID=70788368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911416475.8A Active CN111210005B (en) 2019-12-31 2019-12-31 Equipment operation method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111210005B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111753973A (en) * 2020-06-22 2020-10-09 深圳鲲云信息科技有限公司 Optimization method, system, equipment and storage medium of neural network chip
CN111882038A (en) * 2020-07-24 2020-11-03 深圳力维智联技术有限公司 Model conversion method and device
CN112130896A (en) * 2020-08-17 2020-12-25 深圳云天励飞技术股份有限公司 Neural network model migration method and device, electronic equipment and storage medium
CN113052305A (en) * 2021-02-19 2021-06-29 展讯通信(上海)有限公司 Method for operating a neural network model, electronic device and storage medium
CN113342631A (en) * 2021-07-02 2021-09-03 厦门美图之家科技有限公司 Distribution management optimization method and device and electronic equipment
CN114492737A (en) * 2021-12-31 2022-05-13 北京百度网讯科技有限公司 Data processing method, data processing device, electronic equipment, storage medium and program product

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564170A (en) * 2018-04-26 2018-09-21 福州瑞芯微电子股份有限公司 A kind of restructural neural network computing method and circuit based on NOC
CN109359732A (en) * 2018-09-30 2019-02-19 阿里巴巴集团控股有限公司 A kind of chip and the data processing method based on it
CN109740725A (en) * 2019-01-25 2019-05-10 网易(杭州)网络有限公司 Neural network model operation method and device and storage medium
CN109902819A (en) * 2019-02-12 2019-06-18 Oppo广东移动通信有限公司 Neural computing method, apparatus, mobile terminal and storage medium
CN110210605A (en) * 2019-05-31 2019-09-06 Oppo广东移动通信有限公司 Hardware operator matching process and Related product
US20190318231A1 (en) * 2018-04-11 2019-10-17 Hangzhou Flyslice Technologies Co., Ltd. Method for acceleration of a neural network model of an electronic euqipment and a device thereof related appliction information
US10452974B1 (en) * 2016-11-02 2019-10-22 Jasmin Cosic Artificially intelligent systems, devices, and methods for learning and/or using a device's circumstances for autonomous device operation

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10452974B1 (en) * 2016-11-02 2019-10-22 Jasmin Cosic Artificially intelligent systems, devices, and methods for learning and/or using a device's circumstances for autonomous device operation
US20190318231A1 (en) * 2018-04-11 2019-10-17 Hangzhou Flyslice Technologies Co., Ltd. Method for acceleration of a neural network model of an electronic euqipment and a device thereof related appliction information
CN108564170A (en) * 2018-04-26 2018-09-21 福州瑞芯微电子股份有限公司 A kind of restructural neural network computing method and circuit based on NOC
CN109359732A (en) * 2018-09-30 2019-02-19 阿里巴巴集团控股有限公司 A kind of chip and the data processing method based on it
CN109740725A (en) * 2019-01-25 2019-05-10 网易(杭州)网络有限公司 Neural network model operation method and device and storage medium
CN109902819A (en) * 2019-02-12 2019-06-18 Oppo广东移动通信有限公司 Neural computing method, apparatus, mobile terminal and storage medium
CN110210605A (en) * 2019-05-31 2019-09-06 Oppo广东移动通信有限公司 Hardware operator matching process and Related product

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KAMALEDIN GHIASI-SHIRAZI: "Generalizing the Convolution Operator in Convolutional Neural Networks" *
DING, Lide et al.: "FPGA-based CNN Application Acceleration Technology" *
CHEN, Qiang et al.: "Model-based Software Interface Fault Injection Test Platform Technology" *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111753973A (en) * 2020-06-22 2020-10-09 深圳鲲云信息科技有限公司 Optimization method, system, equipment and storage medium of neural network chip
WO2021259106A1 (en) * 2020-06-22 2021-12-30 深圳鲲云信息科技有限公司 Method, system, and device for optimizing neural network chip, and storage medium
CN111882038A (en) * 2020-07-24 2020-11-03 深圳力维智联技术有限公司 Model conversion method and device
CN112130896A (en) * 2020-08-17 2020-12-25 深圳云天励飞技术股份有限公司 Neural network model migration method and device, electronic equipment and storage medium
CN113052305A (en) * 2021-02-19 2021-06-29 展讯通信(上海)有限公司 Method for operating a neural network model, electronic device and storage medium
CN113342631A (en) * 2021-07-02 2021-09-03 厦门美图之家科技有限公司 Distribution management optimization method and device and electronic equipment
CN113342631B (en) * 2021-07-02 2022-08-26 厦门美图之家科技有限公司 Distribution management optimization method and device and electronic equipment
CN114492737A (en) * 2021-12-31 2022-05-13 北京百度网讯科技有限公司 Data processing method, data processing device, electronic equipment, storage medium and program product
CN114492737B (en) * 2021-12-31 2022-12-09 北京百度网讯科技有限公司 Data processing method, data processing device, electronic equipment, storage medium and program product
US11983086B2 (en) 2021-12-31 2024-05-14 Beijing Baidu Netcom Science Technology Co., Ltd. Method for processing data, and electronic device, storage medium and program product

Also Published As

Publication number Publication date
CN111210005B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
CN111210005B (en) Equipment operation method and device, storage medium and electronic equipment
CN110473141B (en) Image processing method, device, storage medium and electronic equipment
US8712939B2 (en) Tag-based apparatus and methods for neural networks
US9117176B2 (en) Round-trip engineering apparatus and methods for neural networks
WO2021190597A1 (en) Processing method for neural network model, and related device
CN109376852B (en) Arithmetic device and arithmetic method
US20130073500A1 (en) High level neuromorphic network description apparatus and methods
EP2825974A1 (en) Tag-based apparatus and methods for neural networks
WO2022068623A1 (en) Model training method and related device
US10809981B2 (en) Code generation and simulation for graphical programming
CN111966361B (en) Method, device, equipment and storage medium for determining model to be deployed
CN108364068B (en) Deep learning neural network construction method based on directed graph and robot system
CN111428645A (en) Method and device for detecting key points of human body, electronic equipment and storage medium
WO2021232958A1 (en) Method and apparatus for executing operation, electronic device, and storage medium
CN112784959A (en) Deep learning model rapid building system compatible with multiple frames
CN110569984A (en) configuration information generation method, device, equipment and storage medium
CN115115048A (en) Model conversion method, device, computer equipment and storage medium
CN112506503A (en) Programming method, device, terminal equipment and storage medium
CN110312990A (en) Configuration method and system
CN116312489A (en) Model training method and related equipment thereof
CN112749364B (en) Webpage generation method, device, equipment and storage medium based on artificial intelligence
CN113760380B (en) Method, device, equipment and storage medium for determining running code of network model
CN111443897B (en) Data processing method, device and storage medium
CN115762515B (en) Processing and application method, device and equipment for neural network for voice recognition
WO2023220867A1 (en) Neural network with point grid convolutional layer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant