CN111210005B - Equipment operation method and device, storage medium and electronic equipment - Google Patents


Info

Publication number: CN111210005B (granted from application CN201911416475.8A)
Authority: CN (China)
Other versions: CN111210005A (Chinese-language application publication)
Inventors: 周明君, 方攀, 陈岩
Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application discloses a device operation method and apparatus, a storage medium, and an electronic device. The device operation method includes: acquiring a neural network model and determining the operator corresponding to each neural layer in the model; configuring an interface corresponding to each operator; acquiring multiple sets of configuration parameters corresponding to different operation environments, where the configuration parameters for each operation environment specify the operation environment of each neural layer when the neural network model is operated in the electronic device; operating the neural network model in the electronic device according to each set of configuration parameters, where, in each operation environment, the corresponding operators are called through their interfaces; and outputting, for each set of configuration parameters, the operation result data of the neural network model in the corresponding operation environments, so that the operation efficiency of the neural network model in different operation environments can be compared.

Description

Equipment operation method and device, storage medium and electronic equipment
Technical Field
The application belongs to the technical field of electronic equipment, and particularly relates to an equipment operation method, an equipment operation device, a storage medium and electronic equipment.
Background
Artificial neural networks (Artificial Neural Network, ANN) and deep-learning methods based on them are increasingly applied in artificial intelligence fields such as image recognition, scene determination, and intelligent recommendation, and have demonstrated advantages in many respects. Hardware acceleration for artificial neural networks is also an active research area. In the related art, however, when hardware acceleration is designed for an artificial neural network, there is no way to compare the operation efficiency of the network in different operation environments.
Disclosure of Invention
The embodiment of the application provides a device operation method, a device operation apparatus, a storage medium, and an electronic device, which make it possible to compare the operation efficiency of an artificial neural network in different operation environments.
In a first aspect, an embodiment of the present application provides a method for operating a device, including:
acquiring a neural network model, and determining operators corresponding to each neural layer in the neural network model;
configuring interfaces corresponding to the operators;
acquiring a plurality of groups of configuration parameters corresponding to different operation environments, wherein the configuration parameters corresponding to the operation environments are used for designating the operation environments of each neural layer when the neural network model is operated in the electronic equipment;
Operating the neural network model in the electronic equipment according to configuration parameters corresponding to each group of operating environments, wherein when the neural network model is operated in different operating environments, corresponding operators are called through interfaces corresponding to the operators;
and outputting operation result data of the neural network model under the corresponding operation environments based on the configuration parameters corresponding to each group of operation environments so as to compare the operation efficiency of the neural network model under different operation environments.
In a second aspect, an embodiment of the present application provides an apparatus running device, including:
the first acquisition module is used for acquiring a neural network model and determining operators corresponding to each neural layer in the neural network model;
the setting module is used for configuring interfaces corresponding to the operators;
the second acquisition module is used for acquiring a plurality of groups of configuration parameters corresponding to different operation environments, wherein the configuration parameters corresponding to the operation environments are used for designating the operation environments of each neural layer when the neural network model is operated in the electronic equipment;
the operation module is used for operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operation environments, wherein when the neural network model is operated under different operation environments, corresponding operators are called through interfaces corresponding to the operators;
And the output module is used for outputting operation result data of the neural network model under the corresponding operation environments based on the configuration parameters corresponding to each group of operation environments so as to compare the operation efficiency of the neural network model under different operation environments.
In a third aspect, embodiments of the present application provide a storage medium having a computer program stored thereon, which when executed on a computer causes the computer to perform the flow in the apparatus operation method provided in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including a memory, and a processor, where the processor is configured to execute a flow in the device operation method provided in the embodiment of the present application by calling a computer program stored in the memory.
In this embodiment, the electronic device may configure a corresponding interface for each operator in the neural network model, where the interface is configured to invoke the function of the corresponding operator. The electronic device may thus operate the neural network model in different operation environments and obtain the operation result data in each, so as to compare the operation efficiency of the neural network model in different operation environments. That is, the embodiment of the application makes it possible to compare the operation efficiency of an artificial neural network model in different operation environments.
Drawings
The technical solution of the present application and the advantageous effects thereof will be made apparent from the following detailed description of the specific embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of an operation method of a device according to an embodiment of the present application.
Fig. 2 is another flow chart of the method for operating the device according to the embodiment of the present application.
Fig. 3 is a schematic structural diagram of an apparatus operation device provided in an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 5 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numerals refer to like elements throughout, the principles of the present application are illustrated as embodied in a suitable computing environment. The following description is based on the illustrated embodiments of the present application and should not be taken as limiting other embodiments not described in detail herein.
It is understood that the execution subject of the embodiments of the present application may be an electronic device such as a smart phone or tablet computer.
Referring to fig. 1, fig. 1 is a schematic flow chart of an operation method of an apparatus provided in an embodiment of the present application, where the flow may include:
101. Acquire a neural network model, and determine the operator corresponding to each neural layer in the neural network model.
In the embodiment of the application, the electronic device may first acquire the neural network model. After obtaining the model, the electronic device may determine the operator corresponding to each neural layer, where the neural layers include the input layer, hidden layers, and output layer of the neural network model. An operator corresponding to a neural layer may refer to a computation node in the neural network computation graph, such as a convolution, pooling, or activation node, or to a computation rule, such as a partial convolution or the addition of two vectors.
The neural network model here is an artificial neural network (Artificial Neural Network, ANN) model. Artificial neural networks have been a research focus in artificial intelligence since the 1980s. They abstract the network of nerve cells in the human brain from an information-processing perspective, build simple models of it, and form different networks according to different connection schemes. In recent decades, research on artificial neural networks has advanced rapidly; artificial neural networks have solved many practical problems that are difficult for conventional computers in fields such as pattern recognition, intelligent robotics, automatic control, predictive estimation, biology, medicine, and economics, and have shown good intelligent characteristics.
102. Configure the interface corresponding to each operator.
For example, after determining operators corresponding to each neural layer in the neural network model, the electronic device may configure a corresponding interface for each operator.
For example, the electronic device may obtain code for implementing the operator and then encapsulate the implementation code in the form of an interface. In this way, when the operator needs to be implemented in different operation environments, the electronic device only needs to call the interface corresponding to the operator to implement the function of the operator.
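The encapsulation described above can be sketched as follows; this is a minimal Python illustration, and the class and function names are assumptions rather than anything specified by the patent:

```python
class OperatorInterface:
    """Common interface that wraps an operator's implementation code."""

    def __init__(self, name, impl):
        self.name = name
        self._impl = impl  # the encapsulated implementation code

    def call(self, *args, **kwargs):
        # Callers in any operation environment use this single entry point.
        return self._impl(*args, **kwargs)


# Encapsulate a toy "vector addition" operator behind an interface.
add_op = OperatorInterface("vector_add", lambda a, b: [x + y for x, y in zip(a, b)])
print(add_op.call([1, 2], [3, 4]))  # -> [4, 6]
```

Because every environment goes through `call`, realizing the operator elsewhere only requires invoking the same interface.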
103. Acquire multiple sets of configuration parameters corresponding to different operation environments, where the configuration parameters corresponding to each operation environment are used to specify the operation environment of each neural layer when the neural network model is operated in the electronic device.
For example, the electronic device may obtain a plurality of sets of configuration parameters corresponding to different operation environments, where the configuration parameters corresponding to each set of operation environments may be used to specify operation environments of each neural layer when the neural network model is operated in the electronic device.
For example, suppose the neural network model includes an input layer, a first hidden layer, a second hidden layer, and an output layer. The configuration parameters corresponding to operation environment A may specify that, when the neural network model is operated in the electronic device, the input layer is operated in the CPU environment, the first hidden layer in the FPGA environment, and the second hidden layer and output layer in the simulator environment. As another example, the configuration parameters corresponding to operation environment B may specify that the input layer is operated on the CPU while the first hidden layer, second hidden layer, and output layer are all operated on the FPGA. Note that CPU refers to the central processing unit; FPGA refers to a field-programmable gate array (Field Programmable Gate Array), a chip often used for AI acceleration; and the simulator may be one written in SystemC.
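One way to picture a set of configuration parameters is as a mapping from each neural layer to an operation environment (a sketch; the dictionary layout and layer names are assumptions, not the patent's format):

```python
# Each set of configuration parameters maps every neural layer to an operation environment.
config_a = {"input": "cpu", "hidden1": "fpga", "hidden2": "simulator", "output": "simulator"}
config_b = {"input": "cpu", "hidden1": "fpga", "hidden2": "fpga", "output": "fpga"}


def environments_used(config):
    """Return the distinct operation environments a configuration exercises."""
    return sorted(set(config.values()))


print(environments_used(config_a))  # -> ['cpu', 'fpga', 'simulator']
print(environments_used(config_b))  # -> ['cpu', 'fpga']
```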
104. Operate the neural network model in the electronic device according to the configuration parameters corresponding to each set of operation environments, where, when the neural network model is operated in different operation environments, the corresponding operators are called through their corresponding interfaces.
For example, after obtaining configuration parameters corresponding to multiple groups of different operation environments, the electronic device may operate the neural network model in the electronic device in turn according to the configuration parameters corresponding to each group of operation environments. When the neural network model is operated under different operation environments, the electronic equipment can call the corresponding operators through the interfaces corresponding to the operators.
For example, the electronic device may first run the neural network model according to the configuration parameters corresponding to the running environment a, for example, the electronic device may run the input layer of the neural network model on the CPU environment, the first hidden layer of the neural network model on the FPGA environment, and the second hidden layer and the output layer of the neural network model on the simulator environment.
For another example, the electronic device may then run the neural network model according to the configuration parameters corresponding to the run environment B, such as running the input layer of the neural network model on a CPU environment, and running the first hidden layer, the second hidden layer, and the output layer of the neural network model on an FPGA environment.
When the neural network model is operated in different operation environments, the electronic device may call the corresponding operators through their corresponding interfaces. For example, suppose the second hidden layer of the neural network model includes a convolution operator whose corresponding interface is J1. When the neural network model is operated according to the configuration parameters for operation environment A, the electronic device implements the convolution operator by calling interface J1 in the simulator environment; when it is operated according to the configuration parameters for operation environment B, the electronic device implements the same convolution operator by calling J1 in the FPGA environment. That is, in different operation environments the electronic device realizes the corresponding operator's function through the same interface, which simplifies implementing operators across operation environments.
105. Based on the configuration parameters corresponding to each set of operation environments, output the operation result data of the neural network model in the corresponding operation environments, so as to compare the operation efficiency of the neural network model in different operation environments.
For example, after the neural network model is operated in the electronic device according to a set of configuration parameters, the electronic device may output the operation result data for the operation environments that this set of configuration parameters specifies. Across all sets of configuration parameters, the electronic device obtains multiple groups of operation result data, from which the operation efficiency of the neural network model in different operation environments can be compared.
For example, the electronic device obtains operation result data D1 under the configuration parameters corresponding to operation environment A and operation result data D2 under those corresponding to operation environment B; it can then compare the operation efficiency of the neural network model in the two operation environments from D1 and D2.
It can be understood that, in the embodiment of the application, the electronic device may configure a corresponding interface for each operator in the neural network model, where the interface is used to implement the function of the corresponding operator. The electronic device may thus operate the neural network model in different operation environments and obtain the operation result data in each, so as to compare the operation efficiency of the neural network model in different operation environments. That is, the embodiment of the application makes it possible to compare the operation efficiency of an artificial neural network model in different operation environments.
Referring to fig. 2, fig. 2 is another flow chart of an operation method of an apparatus provided in an embodiment of the present application, where the flow may include:
201. The electronic device acquires a neural network model, determines the operator corresponding to each neural layer in the neural network model, and obtains the parameters corresponding to each operator.
For example, the electronic device may first obtain the neural network model, then determine the operator corresponding to each neural layer, and further obtain the parameters corresponding to each operator. As before, an operator in a neural layer may refer to a computation node in the neural network computation graph, such as a convolution, pooling, or activation node, or to a computation rule such as a partial convolution or the addition of two vectors. The parameters corresponding to an operator may include the shape and format of its input and output data, weight matrix values, and parameters specific to the operator type, such as the convolution kernel size and shape, padding, and stride of a convolution layer.
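A record of operator parameters of this kind might be sketched as follows (the field names and example values are illustrative assumptions, not the patent's format):

```python
from dataclasses import dataclass


@dataclass
class OperatorParams:
    """Parameters describing one operator: I/O shapes plus operator-specific fields."""
    input_shape: tuple
    output_shape: tuple
    kernel_size: tuple = None  # convolution-specific
    padding: str = None        # e.g. "same"
    stride: int = None


# Parameters for a hypothetical 3x3 convolution layer.
conv_params = OperatorParams(
    input_shape=(1, 32, 32, 3),
    output_shape=(1, 32, 32, 16),
    kernel_size=(3, 3),
    padding="same",
    stride=1,
)
print(conv_params.kernel_size)  # -> (3, 3)
```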
202. The electronic equipment configures interfaces corresponding to the operators.
For example, after determining operators corresponding to each neural layer in the neural network model and acquiring parameters corresponding to each operator, the electronic device may configure a corresponding interface for each operator.
For example, the electronic device may obtain code for implementing the operator and then encapsulate the implementation code in the form of an interface. In this way, when the operator needs to be implemented in different operation environments, the electronic device can implement the function of the operator only by calling the interface corresponding to the operator in different operation environments.
203. The electronic device acquires multiple sets of configuration parameters corresponding to different operation environments, where the configuration parameters corresponding to each operation environment are used to specify the operation environment of each neural layer when the neural network model is operated in the electronic device, and the operation environments at least include a CPU operation environment, an FPGA operation environment, and a simulator operation environment.
For example, the electronic device may obtain a plurality of sets of configuration parameters corresponding to different operation environments, where the configuration parameters corresponding to each set of operation environments may be used to specify operation environments of each neural layer when the neural network model is operated in the electronic device. The running environment in the electronic equipment at least comprises a CPU running environment, an FPGA running environment, a simulator running environment and the like.
For example, suppose the neural network model includes an input layer, a first hidden layer, a second hidden layer, and an output layer. The configuration parameters corresponding to operation environment A may specify that, when the neural network model is operated in the electronic device, the input layer is operated in the CPU environment, the first hidden layer in the FPGA environment, and the second hidden layer and output layer in the simulator environment. As another example, the configuration parameters corresponding to operation environment B may specify that the input layer is operated on the CPU while the first hidden layer, second hidden layer, and output layer are all operated on the FPGA. Of course, in other embodiments the neural network model is not limited to an input layer, a first hidden layer, a second hidden layer, and an output layer; it may also include other neural layers, for example a third hidden layer, a fourth hidden layer, and so on, which this embodiment does not specifically limit.
Note that CPU refers to the central processing unit; FPGA refers to a field-programmable gate array (Field Programmable Gate Array), a chip often used for AI acceleration; and the simulator may be one written in SystemC.
204. According to the configuration parameters corresponding to each group of operation environments, the electronic equipment operates the neural network model, wherein when the neural network model is operated under different operation environments, parameters corresponding to each operator are obtained, corresponding target operators are selected according to the parameters corresponding to each operator, and the corresponding target operators are called through interfaces corresponding to each target operator.
For example, after obtaining configuration parameters corresponding to multiple groups of different operation environments, the electronic device may operate the neural network model in the electronic device in turn according to the configuration parameters corresponding to each group of operation environments. When the neural network model is operated under different operation environments, the electronic equipment can firstly acquire parameters corresponding to each operator, then select a corresponding target operator according to the parameters corresponding to each operator, and then call the corresponding target operator through an interface corresponding to each target operator.
It should be noted that, in flow 201 of the embodiment of the application, the electronic device may obtain only the parameters of the operator corresponding to each neural layer, without obtaining information about which concrete operator it is. When the neural network model is operated, the electronic device automatically selects which operator to use according to the operation environment and the parameters corresponding to the operator. Because the parameters corresponding to each operator have their own format, the corresponding operator can be selected according to those parameters and their format.
That is, when a certain operator needs to be implemented, each operation environment can inherit the interface corresponding to that operator. For example, the electronic device may set a main function, and the interface corresponding to each operator may include a first sub-interface and a second sub-interface. Through the main function, the electronic device first calls the operator's first sub-interface, which passes the operator's parameters to the current operation environment; the operation environment then selects a suitable operator according to those parameters. Here the main function is the entry function of an executable file set by the electronic device, and it may call other functions. In other words, every operation environment implements the first sub-interface; by calling it, the main function passes the operator's parameters to a concrete operation environment, and that environment decides how to handle them according to its own characteristics, for example by sending the data to hardware or temporarily storing it in software.
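The two-sub-interface protocol described above can be sketched as follows; the names `configure`, `execute`, and `CpuBackend` are assumptions used only for illustration:

```python
class Backend:
    """One operation environment; inherits the operator interface."""

    def configure(self, params):   # first sub-interface: receive the parameters
        raise NotImplementedError

    def execute(self, data):       # second sub-interface: run the operator
        raise NotImplementedError


class CpuBackend(Backend):
    def configure(self, params):
        # The environment decides, from the parameters, how to realize the operator.
        self.scale = params.get("scale", 1)

    def execute(self, data):
        return [x * self.scale for x in data]


def main(backend, params, data):
    """Entry function: call the first sub-interface, then the second."""
    backend.configure(params)  # pass operator parameters to the environment
    return backend.execute(data)


print(main(CpuBackend(), {"scale": 2}, [1, 2, 3]))  # -> [2, 4, 6]
```

An FPGA or simulator backend would inherit the same two sub-interfaces, so the main function's calling code never changes.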
The process of selecting a specific operator by the electronic device may have different implementations.
Taking a convolution operator as an example: in the CPU environment, a single program implements all convolutions. In this case the operator's parameters are converted into the parameters of that convolution implementation, and the function is called to complete the computation. That is, an operator implemented on the CPU performs some simple conversion of its parameters, passes them to a function as arguments, and completes the operator's computation. For example, a convolution function generally has a form such as conv(input_data, padding, stride, dilation, ...); the same function can handle different input sizes and parameters, so the CPU side only needs to fill in appropriate arguments and call it.
In the FPGA environment, different programs implement convolutions with different parameters. In this case one of two methods may be chosen:
First, a hardware program may be directly specified for each layer, so the electronic device only needs to check whether the operator's parameters match that hardware program's parameter format. For example, suppose the hardware provides a program that supports only 3x3 kernels with padding=same, stride=1, and dilation=1, and a certain neural layer is designated to use this hardware function; the electronic device then needs to check whether this neural layer matches the capability the hardware provides.
Second, a database may be established for all hardware programs, and the most suitable one selected from it according to the operator's parameters. That is, an operator implemented on the FPGA selects a suitable hardware program from the database according to its parameters, sends the selected program to the hardware, and waits for the hardware to complete the operator's computation. For example, the electronic device may manage all the hardware programs the hardware provides and automatically choose which one to use based on the operator's parameters.
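The database-lookup method can be sketched as follows (the field names and the exact matching rule are assumptions, not the patent's):

```python
# A small "database" of hardware programs, each described by its supported parameters.
HW_PROGRAMS = [
    {"id": "conv3x3_s1", "kernel": (3, 3), "padding": "same", "stride": 1, "dilation": 1},
    {"id": "conv5x5_s2", "kernel": (5, 5), "padding": "same", "stride": 2, "dilation": 1},
]


def select_program(params, database=HW_PROGRAMS):
    """Return the first hardware program whose capabilities match the operator."""
    for prog in database:
        if all(prog[k] == params[k] for k in ("kernel", "padding", "stride", "dilation")):
            return prog["id"]
    return None  # no hardware program supports this operator


op = {"kernel": (3, 3), "padding": "same", "stride": 1, "dilation": 1}
print(select_program(op))  # -> 'conv3x3_s1'
```

The same exact-match check also covers the first method above: verifying that a layer's parameters fit a single designated hardware program.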
After the running environment has selected the appropriate operator, the main function may invoke the second sub-interface to execute it. The manner of execution can differ between running environments. For example, in the CPU environment the operator may be a C language function; after some simple wrapping, the electronic device directly invokes the wrapped function to execute the corresponding operator. In the simulator environment, the electronic device needs to write data and parameters into specific files in advance, call a prepared executable file, and then parse the output file and extract the result data from it, thereby executing the operator. In the FPGA environment, the electronic device needs to send data to the FPGA dynamically through a hardware interface (such as USB or PCIe) and read the results back through the same interface after the computation is completed.
That is, with the above implementation, the main function calls the operator, the running environment implements it, and the two interact through the same interface. Because different running environments implement operators behind the same interface, the main function calls operators implemented by different environments in exactly the same way.
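A minimal sketch of this unified-interface arrangement, assuming a two-call interface (select and execute) and illustrative backend classes; real backends would wrap a C function or a USB/PCIe transfer instead of the toy arithmetic used here:

```python
# Every environment implements the same two calls, so the main function
# is environment-agnostic. All class and method names are assumptions.

class OperatorBackend:
    def select(self, op_params):     # first sub-interface: pick an implementation
        raise NotImplementedError
    def execute(self, impl, data):   # second sub-interface: run it
        raise NotImplementedError

class CpuBackend(OperatorBackend):
    def select(self, op_params):
        return ("generic_op", op_params)      # one program for all parameters
    def execute(self, impl, data):
        _, params = impl
        return [x * params.get("scale", 1) for x in data]

class FpgaBackend(OperatorBackend):
    def select(self, op_params):
        return ("hw_prog", op_params)         # matched hardware program
    def execute(self, impl, data):
        # A real backend would send data over USB/PCIe and read results back;
        # here we mimic numerically identical behaviour.
        _, params = impl
        return [x * params.get("scale", 1) for x in data]

def main_function(backend, op_params, data):
    impl = backend.select(op_params)
    return backend.execute(impl, data)

out_cpu = main_function(CpuBackend(), {"scale": 2}, [1, 2, 3])
out_fpga = main_function(FpgaBackend(), {"scale": 2}, [1, 2, 3])
```

The main function's call sequence is identical for both backends, which is the point of routing every environment through the same interface.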
205. Based on the configuration parameters corresponding to each group of operation environments, the electronic device outputs operation result data of the neural network model in the corresponding operation environments, where the operation result data includes the operation time T of each neural layer. When a neural layer operates in the CPU environment, its operation time T is calculated according to the formula T = F1 / F2 × T1, where F1 is a preset frequency value, F2 is the actual CPU frequency value, and T1 is the actual operation duration. When a neural layer operates in the FPGA or simulator environment, its operation time T is calculated according to the formula T = C × F3 + T2, where C is the actual number of operation clocks, F3 is the preset operation frequency, and T2 is a preset operation time.
For example, after the neural network model is run in the electronic device according to the configuration parameters corresponding to each set of operation environments, the electronic device may output operation result data for the operation environment corresponding to that set of configuration parameters. The operation result data includes the operation time of each neural layer. When a neural layer operates in the CPU environment, its operation time T is calculated according to the formula T = F1 / F2 × T1, where F1 is a preset frequency value, F2 is the actual CPU frequency value, and T1 is the actual operation duration. When a neural layer operates in the FPGA or simulator environment, its operation time T is calculated according to the formula T = C × F3 + T2, where C is the actual number of operation clocks, F3 is the preset operation frequency, and T2 is a preset operation time.
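The two timing formulas can be written out as follows, reproduced exactly as stated above; the concrete frequency and time values in the usage line are illustrative, not from the patent:

```python
# Per-layer timing formulas, transcribed as given in the text.

def cpu_layer_time(f1, f2, t1):
    """T = F1 / F2 * T1: scale the measured duration to a preset
    reference frequency, so CPUs at different clocks are comparable."""
    return f1 / f2 * t1

def hw_layer_time(c, f3, t2):
    """T = C * F3 + T2: derive the FPGA/simulator time from the counted
    clocks, the preset frequency, and a preset offset."""
    return c * f3 + t2

# e.g. 0.02 s measured at an actual 2.4 GHz, normalised to a 1.2 GHz preset:
t_cpu = cpu_layer_time(f1=1.2e9, f2=2.4e9, t1=0.02)
```

The CPU formula normalizes away frequency scaling, so a layer measured on a throttled CPU is still comparable across runs.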
That is, through step 205, the electronic device can obtain the operation time of each neural layer of the neural network model in the operation environment specified by the corresponding configuration parameters.
206. The electronic equipment compares the operation efficiency of the neural network model under different operation environments according to the sum of the operation time of each neural layer, determines the optimal operation environment of each corresponding neural layer according to the operation time of each neural layer under different operation environments, and determines the optimal operation environment of the neural network model according to the optimal operation environment of each neural layer.
For example, the electronic device may compare the operating efficiency of the neural network model under different operating environments based on the sum of the operating times of the neural layers in the neural network model. In the embodiment of the application, the electronic device may further determine an optimal operation environment of each corresponding neural layer according to the operation time of each neural layer in different operation environments, and determine an optimal operation environment of the neural network model according to the optimal operation environment of each neural layer. For example, the electronic device may determine the running environment with the shortest running time as the optimal running environment of each corresponding neural layer, and determine the optimal running environment of the neural network model according to the optimal running environment of each neural layer.
For example, suppose the neural network model includes an input layer, a first hidden layer, a second hidden layer, and an output layer. The electronic device obtains operation result data D1 for the configuration parameters corresponding to operation environment A, and operation result data D2 for the configuration parameters corresponding to operation environment B, so the electronic device can compare the operation efficiency of the neural network model in the two operation environments according to D1 and D2. For example, D1 shows that the total operation time of the neural network model (i.e., the sum of the operation times of the neural layers) in the environment corresponding to A is 1 second, and D2 shows that the total operation time in the environment corresponding to B is 1.5 seconds. Because the neural network model runs in less time in the environment corresponding to A, the electronic device can determine that operation environment A is better than operation environment B.
In the embodiment of the application, the electronic device may determine the operation environment with the shortest operation time as the optimal operation environment of the corresponding neural layer, and determine the optimal operation environment of the neural network model according to the optimal operation environment of each neural layer. For example, suppose that, according to the operation result data for each set of configuration parameters, the electronic device determines that the input layer of the neural network model runs fastest in the CPU environment, the first hidden layer runs fastest in the FPGA environment, the second hidden layer runs fastest in the FPGA environment, and the output layer runs fastest in the simulator environment. The electronic device can then determine that the optimal operation environment of the neural network model is: the input layer operates in the CPU environment, the first and second hidden layers operate in the FPGA environment, and the output layer operates in the simulator environment. That is, in one embodiment, the electronic device may combine the optimal operation environments of the various neural layers to obtain the optimal operation environment of the neural network model.
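The per-layer selection and combination described above amount to taking, for each layer, the environment with the smallest measured time; the layer names and timing values below follow this example and are otherwise illustrative:

```python
# Assumed data layout: per-layer operation time (seconds) per environment.
layer_times = {
    "input":    {"CPU": 0.10, "FPGA": 0.30, "simulator": 0.50},
    "hidden_1": {"CPU": 0.40, "FPGA": 0.20, "simulator": 0.60},
    "hidden_2": {"CPU": 0.35, "FPGA": 0.15, "simulator": 0.55},
    "output":   {"CPU": 0.25, "FPGA": 0.30, "simulator": 0.20},
}

def optimal_environments(times):
    """For each layer, keep the environment with the shortest operation time;
    the combined mapping is the model's optimal operation environment."""
    return {layer: min(envs, key=envs.get) for layer, envs in times.items()}

best = optimal_environments(layer_times)
```

Comparing whole environments, by contrast, sums each column before comparing; the per-layer minimum can beat every single-environment total.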
In one embodiment, the process of running the neural network model in the electronic device according to the configuration parameters corresponding to each set of running environments may include:
Operating a neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments, wherein the output data format of the previous neural layer and the input data format of the next neural layer are obtained when the neural network model is operated;
if the output data format of the previous neural layer is different from the input data format of the next neural layer, the electronic device converts the output data format of the previous neural layer into the input data format of the next neural layer and then inputs the converted output data format of the previous neural layer into the next neural layer.
For example, different running environments impose different shape requirements on input and output data. In a CPU environment, data is typically input and output in NHWC or NCHW format. Hardware-related environments, such as the simulator and FPGA environments, typically use data in the NCHWc format due to performance and hardware limitations.
In the embodiment of the application, the electronic device can acquire the input and output formats of an operator. In the main function, the electronic device may compare the output format of the previous neural layer with the input format of the subsequent neural layer. If the two formats differ, the electronic device can perform an automatic conversion. Automatic conversion takes three forms:
First, dimension transformation: for example, converting NHWC to NCHW.
Second, dimension splitting: for example, converting NHWC to NCHWc, where the original C channels are split into C/c groups of c channels.
Third, adding padding: for example, converting NHWC (1, 224, 224, 3) to NHWC (1, 256, 256, 3) adds padding in the H and W dimensions. The electronic device can automatically determine which conversion to perform according to the input and output formats (such as "NHWC") and shapes (such as (1, 224, 224, 3)).
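The three conversions can be sketched at the level of tensor shapes (moving the actual data would follow the same indexing); the function names are assumptions:

```python
# Shape-level sketches of the three automatic conversions.

def nhwc_to_nchw(shape):
    """Dimension transformation: reorder NHWC axes into NCHW."""
    n, h, w, c = shape
    return (n, c, h, w)

def nchw_to_nchwc(shape, c_block):
    """Dimension splitting: break the C channels into C/c groups of c."""
    n, c, h, w = shape
    assert c % c_block == 0, "channel count must divide evenly"
    return (n, c // c_block, h, w, c_block)

def pad_hw(shape, target_h, target_w):
    """Padding: grow the H and W dimensions up to the target sizes."""
    n, h, w, c = shape
    return (n, max(h, target_h), max(w, target_w), c)

s1 = nhwc_to_nchw((1, 224, 224, 3))
s2 = nchw_to_nchwc((1, 16, 224, 224), 4)
s3 = pad_hw((1, 224, 224, 3), 256, 256)
```

Comparing the declared format strings and shapes of adjacent layers is enough to decide which of the three functions to apply.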
In one embodiment, after the neural network model is acquired, the step in which the electronic device determines the operator corresponding to each neural layer in the neural network model may include:
the electronic equipment converts the format of the neural network model into a preset format, and determines operators corresponding to each neural layer in the neural network model during format conversion.
For example, after the neural network model is obtained, the electronic device may perform format conversion on the neural network model, convert the format of the neural network model into a preset format that is customized in advance, and determine operators corresponding to each neural layer in the neural network model during format conversion.
That is, the above-mentioned format conversion process is responsible for parsing the model in a standard format, obtaining the complete structure of the whole model and the parameters of each layer's operator, and storing these parameters in a predefined data format. In one embodiment, the electronic device may use the JSON format to save the parameters of the operators.
The purpose of format conversion is to unify models of different formats into a pre-custom format for subsequent processing.
The model generates a new model file after format conversion, and a default running environment, such as CPU, FPGA, or simulator, can be specified during conversion. In one embodiment, the electronic device may provide a command-line parameter to specify the default running environment at conversion time. After model conversion, the running environment of a single operator of a certain layer in the model file can also be changed. For example, a user or developer may specify via code that the input layer runs in the CPU environment, the hidden layers run in the FPGA environment, and the output layer runs in the simulator environment.
It should be noted that the format conversion of the neural network model is performed to adapt to models trained by different training frameworks, such as neural network models trained with TensorFlow, Caffe, or PyTorch. Performing format conversion on the neural network model also allows the effective data to be extracted, making subsequent processing more convenient.
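A sketch of what the converted model file might look like when operator parameters are saved in JSON, with a default running environment plus a per-layer override; all field names are illustrative assumptions, not the patent's actual schema:

```python
import json

# Hypothetical predefined data format produced by format conversion.
model = {
    "default_env": "CPU",                  # default running environment
    "layers": [
        {"name": "input",  "op": "conv2d",
         "params": {"kernel": [3, 3], "padding": "same", "stride": 1}},
        {"name": "hidden", "op": "conv2d", "env": "FPGA",  # per-layer override
         "params": {"kernel": [1, 1], "padding": "valid", "stride": 1}},
    ],
}

# Round-trip through JSON, as a converted model file would be written and read.
restored = json.loads(json.dumps(model))

# A layer's environment is its override if present, else the default.
env_of_hidden = restored["layers"][1].get("env", restored["default_env"])
env_of_input = restored["layers"][0].get("env", restored["default_env"])
```

The default-plus-override lookup is what lets a single operator's running environment be changed after conversion without touching the rest of the file.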
In one implementation manner, the device operation method provided in this embodiment may also be used to verify the correctness of the operation result of the neural network model. For example, since the electronic device may specify the operating environment of each neural layer in the neural network model, multiple operating environments may be used to run the same operator and compare the results.
For example, for the output of a certain neural layer, if the operation result in a certain operation environment is known to be reliable, the electronic device may use that environment as a reference environment and compare the operation results of the other environments against it to detect whether the results are consistent. If the results are consistent, the output of that neural layer, or its design, is correct. If the results are inconsistent, the neural layer may be operating incorrectly in the other environments, or may not be suitable for operating in them.
For another example, when there is no reference environment, the neural network model or one of its neural layers may be run in multiple (e.g., three or more) operation environments and the results collected. The electronic device may then score each result according to its consistency with the others. For example, if the neural layer produces result S1 in both the CPU and FPGA environments, S1 scores 2 points; if it produces result S2 in the simulator environment, S2 scores 1 point. By this scoring scheme, the electronic device can determine the highest-scoring result as the correct operation result of the neural layer, and adjust the neural network model, or the operation-environment design of its neural layers, accordingly. For example, based on the results from the different operation environments, the electronic device may determine the optimal operation environment for the neural network model or a neural layer in it.
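The scoring scheme without a reference environment can be sketched as a vote count over the per-environment results; the environment and result names are assumptions:

```python
from collections import Counter

# Each environment's operation result casts one vote; the result produced
# by the most environments is taken as correct.

def score_results(results_by_env):
    """results_by_env maps environment name -> operator output (hashable).
    Returns the highest-scoring result and its score."""
    counts = Counter(results_by_env.values())
    best_result, votes = counts.most_common(1)[0]
    return best_result, votes

result, votes = score_results({"CPU": "S1", "FPGA": "S1", "simulator": "S2"})
```

With three or more environments, a lone disagreeing result is outvoted, which is why the text suggests running in at least three environments when no reference is available.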
It can be understood that the device operation method provided by the embodiment of the application can not only compare the operation efficiency of the neural network model in different operation environments, but also verify the correctness of the neural network model's operation environments according to the comparison of operation results, which helps accelerate the hardware design corresponding to the operation environments of the neural network model.
In the embodiment of the application, the electronic device can specify the operation environment of each neural layer in the neural network model, so this embodiment makes it very simple to add an operation environment to the neural network model. In addition, the embodiment of the application can quickly compare the operation efficiency and results of the neural network model in different operation environments, thereby quickly verifying the correctness and efficiency of hardware designed for the neural network model, while also providing reliable guidance for the verification and tuning of the neural network model.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an apparatus operation device according to an embodiment of the present application. The device operation apparatus 300 may include: a first acquisition module 301, a setting module 302, a second acquisition module 303, a running module 304, and an output module 305.
The first obtaining module 301 is configured to obtain a neural network model, and determine operators corresponding to each neural layer in the neural network model.
The setting module 302 is configured to configure an interface corresponding to each operator.
The second obtaining module 303 is configured to obtain configuration parameters corresponding to multiple groups of different operation environments, where the configuration parameters corresponding to the operation environments are used to specify operation environments of each neural layer when the neural network model is operated in the electronic device.
And an operation module 304, configured to operate the neural network model in the electronic device according to the configuration parameters corresponding to each set of operation environments, where when the neural network model is operated in different operation environments, a corresponding operator is invoked through an interface corresponding to each operator.
And the output module 305 is configured to output operation result data of the neural network model in the corresponding operation environments based on the configuration parameters corresponding to each set of operation environments, so as to compare operation efficiency of the neural network model in different operation environments.
In one embodiment, the first obtaining module 301 may further be configured to: and obtaining parameters corresponding to the operators.
The operation module 304 may be configured to: when the neural network model is operated under different operation environments, parameters corresponding to each operator are obtained, corresponding target operators are selected according to the parameters corresponding to each operator, and the corresponding target operators are called through interfaces corresponding to each target operator.
In one embodiment, the operating environments include at least a CPU operating environment, an FPGA operating environment, and a simulator operating environment.
The output module 305 may be configured to: based on the configuration parameters corresponding to each group of the operation environments, outputting operation result data of the neural network model in the corresponding operation environments, wherein the operation result data comprises operation time of each neural layer, and comparing the operation efficiency of the neural network model in different operation environments according to the sum of the operation time of each neural layer.
In one embodiment, the operation module 304 may be configured to:
operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments, wherein the output data format of the previous neural layer and the input data format of the next neural layer are obtained when the neural network model is operated;
if the output data format of the previous neural layer is different from the input data format of the next neural layer, converting the output data of the previous neural layer into the input data format of the next neural layer and then inputting the converted data into the next neural layer.
In one embodiment, the first obtaining module 301 may be configured to:
Converting the format of the neural network model into a preset format, and determining operators corresponding to each neural layer in the neural network model during format conversion.
In one embodiment, the output module 305 may also be configured to:
and determining the optimal operation environment of each corresponding neural layer according to the operation time of each neural layer in different operation environments.
In one embodiment, the output module 305 may also be configured to:
and determining the optimal operation environment of the neural network model according to the optimal operation environment of each neural layer.
In one embodiment, the output module 305 may be configured to:
and combining the optimal operation environments of the neural layers to obtain the optimal operation environment of the neural network model.
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed on a computer, causes the computer to execute a flow in an apparatus operation method as provided in the present embodiment.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the processor is used for executing the flow in the equipment operation method provided by the embodiment by calling the computer program stored in the memory.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device 400 may include a sensor 401, a memory 402, a processor 403, and the like. Those skilled in the art will appreciate that the electronic device structure shown in fig. 4 does not limit the electronic device; it may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The sensor 401 may include a gyro sensor (e.g., a three-axis gyro sensor), an acceleration sensor, or the like.
Memory 402 may be used to store applications and data. The memory 402 stores application programs including executable code. Applications may constitute various functional modules. Processor 403 executes various functional applications and data processing by running application programs stored in memory 402.
The processor 403 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing application programs stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
In this embodiment, the processor 403 in the electronic device loads executable codes corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 403 executes the application programs stored in the memory 402, so as to execute:
acquiring a neural network model, and determining operators corresponding to each neural layer in the neural network model;
configuring interfaces corresponding to the operators;
acquiring a plurality of groups of configuration parameters corresponding to different operation environments, wherein the configuration parameters corresponding to the operation environments are used for designating the operation environments of each neural layer when the neural network model is operated in the electronic equipment;
operating the neural network model in the electronic equipment according to configuration parameters corresponding to each group of operating environments, wherein when the neural network model is operated in different operating environments, corresponding operators are called through interfaces corresponding to the operators;
and outputting operation result data of the neural network model under the corresponding operation environments based on the configuration parameters corresponding to each group of operation environments so as to compare the operation efficiency of the neural network model under different operation environments.
Referring to fig. 5, the electronic device 400 may include a sensor 401, a memory 402, a processor 403, an input unit 404, an output unit 405, a speaker 406, and the like.
The sensor 401 may include a gyro sensor (e.g., a three-axis gyro sensor), an acceleration sensor, or the like.
Memory 402 may be used to store applications and data. The memory 402 stores application programs including executable code. Applications may constitute various functional modules. Processor 403 executes various functional applications and data processing by running application programs stored in memory 402.
The processor 403 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing application programs stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
The input unit 404 may be used to receive input numbers, character information, or user characteristic information (such as a fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The output unit 405 may be used to display information input by a user or information provided to a user and various graphical user interfaces of an electronic device, which may be composed of graphics, text, icons, video, and any combination thereof. The output unit may include a display panel.
In this embodiment, the processor 403 in the electronic device loads executable codes corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 403 executes the application programs stored in the memory 402, so as to execute:
acquiring a neural network model, and determining operators corresponding to each neural layer in the neural network model;
configuring interfaces corresponding to the operators;
acquiring a plurality of groups of configuration parameters corresponding to different operation environments, wherein the configuration parameters corresponding to the operation environments are used for designating the operation environments of each neural layer when the neural network model is operated in the electronic equipment;
operating the neural network model in the electronic equipment according to configuration parameters corresponding to each group of operating environments, wherein when the neural network model is operated in different operating environments, corresponding operators are called through interfaces corresponding to the operators;
And outputting operation result data of the neural network model under the corresponding operation environments based on the configuration parameters corresponding to each group of operation environments so as to compare the operation efficiency of the neural network model under different operation environments.
In one embodiment, the processor 403 may also perform: acquiring parameters corresponding to each operator;
then, when the processor 403 executes the operation of the neural network model under different operation environments and invokes the corresponding operator through the interface corresponding to each operator, the method may be executed: when the neural network model is operated under different operation environments, parameters corresponding to each operator are obtained, corresponding target operators are selected according to the parameters corresponding to each operator, and the corresponding target operators are called through interfaces corresponding to each target operator.
In one embodiment, the operating environments include at least a CPU operating environment, an FPGA operating environment, and a simulator operating environment.
Then, when the processor 403 executes the configuration parameters corresponding to each set of the operation environments and outputs operation result data of the neural network model in the corresponding operation environments to compare operation efficiency of the neural network model in different operation environments, the method may be performed:
Based on the configuration parameters corresponding to each group of the operation environments, outputting operation result data of the neural network model in the corresponding operation environments, wherein the operation result data comprises operation time of each neural layer, and comparing the operation efficiency of the neural network model in different operation environments according to the sum of the operation time of each neural layer.
In one embodiment, when the processor 403 executes the configuration parameters corresponding to each set of the running environments and runs the neural network model in the electronic device, the method may be performed:
operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operating environments, wherein the output data format of the previous neural layer and the input data format of the next neural layer are obtained when the neural network model is operated;
if the output data format of the previous neural layer is different from the input data format of the next neural layer, converting the output data of the previous neural layer into the input data format of the next neural layer and then inputting the converted data into the next neural layer.
In one embodiment, when the processor 403 executes the determining the operator of each neural layer in the neural network model, it may execute:
Converting the format of the neural network model into a preset format, and determining operators corresponding to each neural layer in the neural network model during format conversion.
In one embodiment, the processor 403 may also perform:
and determining the optimal operation environment of each corresponding neural layer according to the operation time of each neural layer in different operation environments.
In one embodiment, the processor 403 may also perform:
and determining the optimal operation environment of the neural network model according to the optimal operation environment of each neural layer.
In one embodiment, when the processor 403 executes the determining of the optimal operation environment of the neural network model according to the optimal operation environment of each neural layer, it may execute: combining the optimal operation environments of the neural layers to obtain the optimal operation environment of the neural network model.
In the foregoing embodiments, each embodiment is described with its own emphasis; for the portions of an embodiment that are not described in detail, reference may be made to the detailed description of the device operation method above, which is not repeated here.
The device operation apparatus provided in the embodiment of the present application belongs to the same concept as the device operation method in the foregoing embodiment, and any method provided in the device operation method embodiment may be operated on the device operation apparatus, and a specific implementation process of the device operation apparatus is detailed in the device operation method embodiment and will not be described herein.
It should be noted that, for the device operation method of the embodiment of the present application, those of ordinary skill in the art will understand that all or part of the flow of implementing the device operation method may be completed by a computer program controlling the related hardware. The computer program may be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor, and its execution may include the flow of the embodiments of the device operation method as described herein. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
For the device operation apparatus in the embodiments of the present application, the functional modules may be integrated into one processing chip, may each exist physically separately, or two or more modules may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If the integrated module is implemented as a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium, such as a read-only memory, magnetic disk, or optical disk.
The foregoing describes in detail the device operation method, apparatus, storage medium and electronic device provided in the embodiments of the present application. Specific examples have been used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is intended only to help understand the method of the present application and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in light of the ideas of the present application; in view of the above, the contents of this description should not be construed as limiting the present application.

Claims (11)

1. A method of operating a device, comprising:
acquiring a neural network model, and determining operators corresponding to each neural layer in the neural network model;
configuring interfaces corresponding to the operators, wherein each interface is code implementing an operator, the code implementing the operator being packaged into interface form;
acquiring a plurality of groups of configuration parameters corresponding to different operation environments, wherein the configuration parameters corresponding to an operation environment are used for specifying the operation environment of each neural layer when the neural network model is operated in the electronic equipment;
operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operation environments, wherein, when the neural network model is operated under different operation environments, the corresponding operators are called through the interfaces corresponding to the operators;
and outputting operation result data of the neural network model under the corresponding operation environments based on the configuration parameters corresponding to each group of operation environments, so as to compare the operation efficiency of the neural network model under different operation environments.
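The flow of claim 1 can be sketched in Python as follows. This is a minimal illustrative assumption, not the patent's implementation: the toy operators, the `OPERATOR_INTERFACES` registry, and the `CONFIGS` dictionary are all hypothetical names. Operator code is packaged behind a uniform interface, and the same model is run once per group of environment configuration parameters, producing per-layer result data.

```python
import time

def conv_op(x):      # stand-in operator implementation (doubles each value)
    return [v * 2 for v in x]

def relu_op(x):      # stand-in operator implementation (clamps negatives to 0)
    return [max(v, 0) for v in x]

# Interface layer: operator code is "packaged into interface form" --
# callers invoke operators only through this registry, never directly.
OPERATOR_INTERFACES = {"conv": conv_op, "relu": relu_op}

# One group of configuration parameters per operation environment; each group
# assigns an environment to every neural layer of the model.
CONFIGS = {
    "cpu":       {"layer0": "cpu", "layer1": "cpu"},
    "fpga":      {"layer0": "fpga", "layer1": "fpga"},
    "simulator": {"layer0": "simulator", "layer1": "simulator"},
}

MODEL = [("layer0", "conv"), ("layer1", "relu")]  # layer -> operator mapping

def run_model(config, data):
    """Run the model under one environment config, timing each neural layer."""
    layer_times = {}
    for layer_name, op_name in MODEL:
        start = time.perf_counter()
        data = OPERATOR_INTERFACES[op_name](data)  # call via the interface
        layer_times[layer_name] = time.perf_counter() - start
    return data, layer_times

# Operation result data under every environment, ready for comparison.
results = {env: run_model(cfg, [-1, 2, 3]) for env, cfg in CONFIGS.items()}
```

In this toy setup every "environment" dispatches to the same host code, so only the timing bookkeeping differs; in the patent's scheme the configuration would route each layer to genuinely different back ends (CPU, FPGA, simulator).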
2. The device operation method of claim 1, further comprising: acquiring parameters corresponding to each operator, wherein the parameters corresponding to an operator are the parameters used at runtime by the computation nodes and operation rules in the neural network computation graph;
wherein, when the neural network model is operated under different operation environments, calling the corresponding operators through the interfaces corresponding to the operators comprises: when the neural network model is operated under different operation environments, acquiring the parameters corresponding to each operator, selecting the corresponding target operator according to the parameters corresponding to each operator, and calling the corresponding target operator through the interface corresponding to each target operator.
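The operator selection of claim 2 amounts to a parameter-driven dispatch. A minimal sketch, under the assumption that a node's parameters (here just a hypothetical `kernel_size`) key into a registry of concrete target operators; the names `TARGET_OPERATORS` and `call_operator` are illustrative, not from the patent:

```python
def conv3x3(x):  # stand-in target operator for kernel_size == 3
    return [v + 3 for v in x]

def conv5x5(x):  # stand-in target operator for kernel_size == 5
    return [v + 5 for v in x]

# Interface registry keyed by (operator type, distinguishing parameter):
# each entry is the interface through which one target operator is called.
TARGET_OPERATORS = {
    ("conv", 3): conv3x3,
    ("conv", 5): conv5x5,
}

def call_operator(op_type, params, data):
    """Select the target operator matching the node's runtime parameters,
    then invoke it through its registered interface."""
    target = TARGET_OPERATORS[(op_type, params["kernel_size"])]
    return target(data)
```

A 3x3 convolution node and a 5x5 convolution node thus share one operator type but resolve, via their parameters, to different target implementations.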
3. The device operation method of claim 1, wherein the operation environments include at least a CPU operation environment, an FPGA operation environment, and a simulator operation environment;
wherein outputting the operation result data of the neural network model under the corresponding operation environments based on the configuration parameters corresponding to each group of operation environments, to compare the operation efficiency of the neural network model under different operation environments, comprises:
based on the configuration parameters corresponding to each group of operation environments, outputting operation result data of the neural network model under the corresponding operation environments, wherein the operation result data includes the operation time of each neural layer, and comparing the operation efficiency of the neural network model under different operation environments according to the sum of the operation times of the neural layers.
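The comparison rule of claim 3 — rank environments by the sum of per-layer operation times — can be sketched as below. The timing numbers are made-up placeholders for illustration only:

```python
# Per-layer operation times recorded under each operation environment
# (seconds; placeholder values, not measured data).
layer_times = {
    "cpu":       {"layer0": 0.012, "layer1": 0.008},
    "fpga":      {"layer0": 0.004, "layer1": 0.005},
    "simulator": {"layer0": 0.030, "layer1": 0.025},
}

def total_time(env):
    """Whole-model operation time in one environment: sum over its layers."""
    return sum(layer_times[env].values())

# Rank environments from most to least efficient (smallest total time first).
ranking = sorted(layer_times, key=total_time)
```

With these placeholder numbers the FPGA environment is fastest overall (0.009 s) and the simulator slowest (0.055 s).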
4. The device operation method according to claim 1, wherein operating the neural network model in the electronic device according to the configuration parameters corresponding to each group of operation environments comprises:
operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operation environments, wherein the output data format of a previous neural layer and the input data format of a next neural layer are acquired when the neural network model is operated;
if the output data format of the previous neural layer is different from the input data format of the next neural layer, converting the output data of the previous neural layer into the input data format of the next neural layer before inputting it into the next neural layer.
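The format conversion of claim 4 can be illustrated with the common NHWC/NCHW tensor-layout pair. This is an assumed example — the patent does not name specific formats — using plain nested lists so the sketch stays self-contained:

```python
def nhwc_to_nchw(t):
    """Transpose a nested-list tensor from [N][H][W][C] to [N][C][H][W]."""
    return [[[[t[n][h][w][c] for w in range(len(t[n][h]))]
              for h in range(len(t[n]))]
             for c in range(len(t[n][0][0]))]
            for n in range(len(t))]

def feed_forward(prev_output, prev_format, next_format):
    """Convert the previous layer's output only when its format differs
    from the next layer's expected input format."""
    if prev_format == next_format:
        return prev_output  # formats match: pass data through unchanged
    if (prev_format, next_format) == ("NHWC", "NCHW"):
        return nhwc_to_nchw(prev_output)
    raise ValueError(f"unsupported conversion {prev_format} -> {next_format}")
```

If the previous layer emits NHWC and the next expects NCHW, the tensor is transposed in between; if the formats already match, no conversion cost is paid.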
5. The device operation method according to claim 1, wherein the determining operators corresponding to each neural layer in the neural network model comprises:
converting the format of the neural network model into a preset format, and determining the operators corresponding to each neural layer in the neural network model during the format conversion.
6. The device operation method according to claim 3, further comprising:
determining the optimal operation environment of each corresponding neural layer according to the operation time of each neural layer under different operation environments.
7. The device operation method of claim 6, further comprising:
determining the optimal operation environment of the neural network model according to the optimal operation environment of each neural layer.
8. The device operation method according to claim 7, wherein the determining the optimal operation environment of the neural network model according to the optimal operation environment of each neural layer comprises:
combining the optimal operation environments of the neural layers to obtain the optimal operation environment of the neural network model.
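Claims 6-8 together describe a per-layer selection followed by a combination step. A minimal sketch under assumed placeholder timings (the data and names are illustrative, not measured results from the patent):

```python
# Operation time of each neural layer under each operation environment
# (seconds; made-up placeholder values).
layer_times = {
    "layer0": {"cpu": 0.012, "fpga": 0.004, "simulator": 0.030},
    "layer1": {"cpu": 0.008, "fpga": 0.009, "simulator": 0.025},
}

def optimal_environment_per_layer(times):
    """Claim 6: for each layer, pick the environment with the smallest
    operation time."""
    return {layer: min(envs, key=envs.get) for layer, envs in times.items()}

# Claims 7-8: combining the per-layer optima yields the optimal (possibly
# mixed) operation environment for the whole neural network model -- here
# layer0 runs on the FPGA while layer1 stays on the CPU.
optimal_config = optimal_environment_per_layer(layer_times)
```

The combined result is itself a group of configuration parameters in the sense of claim 1, so the model can subsequently be operated with each layer on its individually fastest back end.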
9. A device operation apparatus, comprising:
the first acquisition module is used for acquiring a neural network model and determining operators corresponding to each neural layer in the neural network model;
the setting module is used for configuring interfaces corresponding to the operators;
the second acquisition module is used for acquiring a plurality of groups of configuration parameters corresponding to different operation environments, wherein the configuration parameters corresponding to an operation environment are used for specifying the operation environment of each neural layer when the neural network model is operated in the electronic equipment;
the operation module is used for operating the neural network model in the electronic equipment according to the configuration parameters corresponding to each group of operation environments, wherein when the neural network model is operated under different operation environments, corresponding operators are called through interfaces corresponding to the operators;
and the output module is used for outputting operation result data of the neural network model under the corresponding operation environments based on the configuration parameters corresponding to each group of operation environments so as to compare the operation efficiency of the neural network model under different operation environments.
10. A storage medium having a computer program stored thereon, wherein, when the computer program is executed on a computer, the computer is caused to perform the method of any one of claims 1 to 8.
11. An electronic device comprising a memory and a processor, wherein the processor is configured to perform the method of any one of claims 1 to 8 by invoking a computer program stored in the memory.
CN201911416475.8A 2019-12-31 2019-12-31 Equipment operation method and device, storage medium and electronic equipment Active CN111210005B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911416475.8A CN111210005B (en) 2019-12-31 2019-12-31 Equipment operation method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911416475.8A CN111210005B (en) 2019-12-31 2019-12-31 Equipment operation method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111210005A CN111210005A (en) 2020-05-29
CN111210005B true CN111210005B (en) 2023-07-18

Family

ID=70788368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911416475.8A Active CN111210005B (en) 2019-12-31 2019-12-31 Equipment operation method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111210005B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111753973A (en) * 2020-06-22 2020-10-09 深圳鲲云信息科技有限公司 Optimization method, system, equipment and storage medium of neural network chip
CN111882038A (en) * 2020-07-24 2020-11-03 深圳力维智联技术有限公司 Model conversion method and device
CN112130896B (en) * 2020-08-17 2022-03-25 深圳云天励飞技术股份有限公司 Neural network model migration method and device, electronic equipment and storage medium
CN113052305B (en) * 2021-02-19 2022-10-21 展讯通信(上海)有限公司 Method for operating a neural network model, electronic device and storage medium
CN113342631B (en) * 2021-07-02 2022-08-26 厦门美图之家科技有限公司 Distribution management optimization method and device and electronic equipment
CN114492737B (en) 2021-12-31 2022-12-09 北京百度网讯科技有限公司 Data processing method, data processing device, electronic equipment, storage medium and program product

Citations (6)

Publication number Priority date Publication date Assignee Title
CN108564170A (en) * 2018-04-26 2018-09-21 福州瑞芯微电子股份有限公司 A kind of restructural neural network computing method and circuit based on NOC
CN109359732A (en) * 2018-09-30 2019-02-19 阿里巴巴集团控股有限公司 A kind of chip and the data processing method based on it
CN109740725A (en) * 2019-01-25 2019-05-10 网易(杭州)网络有限公司 Neural network model operation method and device and storage medium
CN109902819A (en) * 2019-02-12 2019-06-18 Oppo广东移动通信有限公司 Neural computing method, apparatus, mobile terminal and storage medium
CN110210605A (en) * 2019-05-31 2019-09-06 Oppo广东移动通信有限公司 Hardware operator matching process and Related product
US10452974B1 (en) * 2016-11-02 2019-10-22 Jasmin Cosic Artificially intelligent systems, devices, and methods for learning and/or using a device's circumstances for autonomous device operation

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN108710941A (en) * 2018-04-11 2018-10-26 杭州菲数科技有限公司 The hard acceleration method and device of neural network model for electronic equipment

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US10452974B1 (en) * 2016-11-02 2019-10-22 Jasmin Cosic Artificially intelligent systems, devices, and methods for learning and/or using a device's circumstances for autonomous device operation
CN108564170A (en) * 2018-04-26 2018-09-21 福州瑞芯微电子股份有限公司 A kind of restructural neural network computing method and circuit based on NOC
CN109359732A (en) * 2018-09-30 2019-02-19 阿里巴巴集团控股有限公司 A kind of chip and the data processing method based on it
CN109740725A (en) * 2019-01-25 2019-05-10 网易(杭州)网络有限公司 Neural network model operation method and device and storage medium
CN109902819A (en) * 2019-02-12 2019-06-18 Oppo广东移动通信有限公司 Neural computing method, apparatus, mobile terminal and storage medium
CN110210605A (en) * 2019-05-31 2019-09-06 Oppo广东移动通信有限公司 Hardware operator matching process and Related product

Non-Patent Citations (3)

Title
Kamaledin Ghiasi-Shirazi. Generalizing the Convolution Operator in Convolutional Neural Networks. https://arxiv.org/pdf/1707.09864. 2017, pp. 1-17. *
Ding Lide et al. FPGA-based CNN Application Acceleration Technology. Information Technology. 2019, Vol. 43 (No. 43), pp. 110-115. *
Chen Qiang et al. Model-based Software Interface Fault Injection Test Platform Technology. Computer Measurement & Control. 2016, Vol. 24 (No. 24), pp. 52-55, 59. *

Also Published As

Publication number Publication date
CN111210005A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
CN111210005B (en) Equipment operation method and device, storage medium and electronic equipment
CN107451663B (en) Algorithm componentization, modeling method and device based on algorithm components and electronic equipment
US20230008597A1 (en) Neural network model processing method and related device
CN109376852B (en) Arithmetic device and arithmetic method
US10809981B2 (en) Code generation and simulation for graphical programming
CN111275199A (en) Conversion method and system of deep learning model file, computer equipment and computer readable storage medium
WO2021253743A1 (en) Method and system for switching scene functions of robot, storage medium and smart robot
EP2825974A1 (en) Tag-based apparatus and methods for neural networks
CN111966361B (en) Method, device, equipment and storage medium for determining model to be deployed
JPH11513512A (en) Method of manufacturing digital signal processor
CN110569984B (en) Configuration information generation method, device, equipment and storage medium
CN112070202B (en) Fusion graph generation method and device and computer readable storage medium
CN111158465A (en) Force touch vibration feedback method and system
TW202145079A (en) Operation execution method and device, electronic equipment and storage medium
CN111651989B (en) Named entity recognition method and device, storage medium and electronic device
CN115115048A (en) Model conversion method, device, computer equipment and storage medium
CN111443897B (en) Data processing method, device and storage medium
CN110458285A (en) Data processing method, device, computer equipment and storage medium
CN112749364B (en) Webpage generation method, device, equipment and storage medium based on artificial intelligence
CN111832714B (en) Operation method and device
CN113760380A (en) Method, device, equipment and storage medium for determining running code of network model
CN115081628B (en) Method and device for determining adaptation degree of deep learning model
CN115762515B (en) Processing and application method, device and equipment for neural network for voice recognition
US20210064990A1 (en) Method for machine learning deployment
US11526780B2 (en) Converting nonnative skills for conversational computing interfaces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant