CN111694557A - Data processing method and device, image processing method and device, and electronic device - Google Patents

Info

Publication number
CN111694557A
Authority
CN
China
Prior art keywords
code
intermediate representation
neural network
function object
representation code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910199174.8A
Other languages
Chinese (zh)
Other versions
CN111694557B (en
Inventor
李周洋
张行程
颜深根
何家傲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN201910199174.8A priority Critical patent/CN111694557B/en
Publication of CN111694557A publication Critical patent/CN111694557A/en
Application granted granted Critical
Publication of CN111694557B publication Critical patent/CN111694557B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/40 Transformation of program code
    • G06F 8/41 Compilation
    • G06F 8/42 Syntactic analysis
    • G06F 8/427 Parsing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/37 Compiler construction; Parser generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining

Abstract

The present disclosure relates to a data processing method and apparatus, an image processing method and apparatus, and an electronic device, wherein the data processing method includes: acquiring a neural network code; performing code translation processing on the neural network code to obtain an intermediate representation code, wherein the intermediate representation code abstracts the implementations of the same operation on different hardware devices; and executing the intermediate representation code by an execution engine. The present disclosure can improve the extensibility of neural network code.

Description

Data processing method and device, image processing method and device, and electronic device
Technical Field
The present disclosure relates to the field of data processing, and in particular, to a data processing method and apparatus, an image processing method and apparatus, and an electronic device.
Background
In the field of machine learning, the implementation process of neural networks (such as deep learning networks) is a process of solving a complex mathematical model. The structured representation of complex mathematical models is a prerequisite for the implementation of machine learning tasks.
At present, typical deep learning frameworks use Python or other languages as the programming interface, and the upper-layer description language corresponds directly to a specific back end, so that conversion to a different back end is difficult.
Disclosure of Invention
The embodiments of the disclosure provide a data processing method and device, an image processing method and device, and an electronic device, which can improve the extensibility of neural network code.
According to an aspect of the present disclosure, there is provided a data processing method including:
acquiring a neural network code;
performing code translation processing on the neural network code to obtain an intermediate representation code, wherein the intermediate representation code abstracts the implementations of the same operation on different hardware devices;
executing the intermediate representation code by an execution engine.
In some possible embodiments, the performing a code translation process on the neural network code to obtain an intermediate representation code includes:
running the neural network code;
in response to a function object in the neural network code being executed, a generator interface is invoked to generate intermediate representation code corresponding to the function object.
In some possible embodiments, the invoking a generator interface in response to a function object in the neural network code being executed comprises:
in response to a function object in the neural network code being executed, based on the type of the function object, calling a generator interface corresponding to the function object to generate intermediate representation code corresponding to the function object.
In some possible embodiments, the generating, by the generator interface, the intermediate representation code corresponding to the function object includes:
in response to the same function object being repeatedly executed, calling the generator interface to generate the intermediate representation code corresponding to the same function object each time the same function object is executed.
In some possible embodiments, the generating, by the generator interface, the intermediate representation code corresponding to the function object includes:
when the function object is repeatedly called, running the saved intermediate representation code corresponding to the function object,
the saved intermediate representation code corresponding to the function object is the intermediate representation code generated by calling the generator interface when the function object is called for the first time or the intermediate representation code generated by calling the generator interface before the function object is actually called.
In some possible embodiments, the executing the neural network code comprises: the neural network code is interpreted and executed by an interpreter and/or compiled and executed by a compiler.
In some possible embodiments, the intermediate representation code corresponding to the function object in the neural network code includes: an operation name, an operation additional attribute, an operation input, and an operation output corresponding to the function object.
In some possible embodiments, the intermediate representation code is configured to represent a function object included in the neural network code in an abstract symbolic manner, and the intermediate representation code corresponding to the function object is independent of a context of the function object.
In some possible embodiments, the executing the intermediate representation code by the execution engine includes:
calling, by the execution engine, a computing module and/or a communication module to execute the intermediate representation code.
In some possible embodiments, the method further comprises:
optimizing the intermediate representation code;
the executing, by the execution engine, the intermediate representation code, comprising:
executing the optimized intermediate representation code by an execution engine.
In some possible embodiments, the optimizing the intermediate representation code comprises at least one of:
optimizing a memory use mode corresponding to the intermediate representation code;
merging at least a portion of the code for arithmetic operations in the intermediate representation code;
merging at least a portion of the code for communication operations in the intermediate representation code.
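As an illustration only, the arithmetic-merging optimization listed above might look like the following Python sketch. The fusion rule (an addition whose result feeds only a subsequent multiplication is merged into a single fused operation), the record layout, and the names `fuse` and `muladd` are assumptions for illustration, not part of the disclosure:

```python
# Hypothetical sketch: merge adjacent arithmetic IR records (operator
# fusion) when the output of one record feeds only the next record.
def fuse(ir_code):
    fused, i = [], 0
    while i < len(ir_code):
        cur = ir_code[i]
        nxt = ir_code[i + 1] if i + 1 < len(ir_code) else None
        if (nxt and cur["op"] == "add" and nxt["op"] == "mul"
                and cur["outputs"][0] in nxt["inputs"]):
            # Replace the add/mul pair with one fused "muladd" record.
            fused.append({"op": "muladd",
                          "inputs": cur["inputs"] + [x for x in nxt["inputs"]
                                                     if x != cur["outputs"][0]],
                          "outputs": nxt["outputs"]})
            i += 2
        else:
            fused.append(cur)
            i += 1
    return fused

ir = [{"op": "add", "inputs": ["a", "b"], "outputs": ["t"]},
      {"op": "mul", "inputs": ["t", "c"], "outputs": ["y"]}]
print([r["op"] for r in fuse(ir)])  # ['muladd']
```

Merging two records into one reduces both dispatch overhead and the intermediate memory needed for the value `t`, which is the point of the memory-use and merging optimizations enumerated above.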
In some possible embodiments, the method further comprises:
storing the intermediate representation code according to the generation order of the intermediate representation code.
In some possible embodiments, the method further comprises:
forming a visualization structure corresponding to the intermediate representation code using a visualization tool;
displaying the visualization structure.
In some possible embodiments, the method further comprises:
acquiring an image to be processed;
the performing code translation processing on the neural network code to obtain an intermediate representation code includes:
running the neural network code by taking the image to be processed as an operation object to obtain an intermediate representation code corresponding to the neural network code;
the executing, by the execution engine, the intermediate representation code, comprising:
executing the intermediate representation code by the execution engine to obtain a processing result of the image to be processed.
In some possible embodiments, the image to be processed is a sample image, and the method further includes:
determining network loss based on the processing result of the image to be processed;
adjusting network parameters of the neural network based on the network loss.
According to a second aspect of the present disclosure, there is provided an image processing method including:
acquiring an image to be processed;
taking the image to be processed as an operation object, and running an intermediate representation code corresponding to a neural network code to obtain a processing result of the image to be processed, wherein the intermediate representation code is generated by the method according to any one of the first aspect.
According to a fifth aspect of the present disclosure, there is provided an electronic apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: perform the method of any one of the first aspect, or perform the method of the second aspect.
According to a sixth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method of any one of the first aspect or the method of the second aspect.
According to the embodiments of the present disclosure, when the neural network code is run, intermediate representation code corresponding to the function objects in the neural network code can be generated at the same time. The intermediate representation code abstracts the implementations of the same operation on different hardware devices; that is, it abstracts the neural network code so that it can be applied to different types of hardware devices. Because the intermediate representation code is independent of the type of hardware on which the neural network back end executes, it can be applied to different back ends. The embodiments of the present disclosure are simple and convenient, and can improve the extensibility of neural network code.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of a data processing method according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram for code translation in a data processing method according to an embodiment of the present disclosure;
FIG. 3 illustrates a flow chart for visualizing intermediate representation code in a data processing method of an embodiment of the present disclosure;
FIG. 4 shows another flow diagram of a data processing method according to an embodiment of the present disclosure;
FIG. 5 shows another flow diagram of a data processing method according to an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of one example of a data processing method according to an embodiment of the present disclosure;
FIG. 7 shows a flow diagram of an image processing method according to an embodiment of the present disclosure;
FIG. 8 shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure;
fig. 9 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 10 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure;
FIG. 11 shows another block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, components and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
The embodiments of the disclosure provide a data processing method, which can convert neural network code into intermediate representation code that is independent of the back-end execution hardware of the neural network, thereby decoupling the front end and the back end of the neural network and being suitable for different back-end processing. The embodiments of the disclosure are simple and convenient, can improve code applicability, and are more flexible. In addition, the data processing method provided by the embodiments of the present disclosure may be applied to any device with a machine learning function, such as a mobile terminal device (e.g., a mobile phone or a tablet), a computer device, an intelligent wearable device (e.g., a smart band), or a server device, which are not enumerated here.
It should be understood that the data processing method provided by the embodiment of the present disclosure may be applied to a training process of a neural network, and may also be applied to a testing process of the neural network or a process of applying the neural network to an actual scene for image processing, which is not limited in the embodiment of the present disclosure.
Fig. 1 shows a flow chart of a data processing method according to an embodiment of the present disclosure, wherein the data processing method may include:
S100: acquiring a neural network code;
In the embodiments of the present disclosure, an abstract representation of neural network code may be implemented. Because the programming languages of neural network code are diverse, neural network code in different language forms cannot be executed on different hardware devices at the same time. The embodiments of the present disclosure therefore provide a data processing method capable of abstracting and translating neural network code written in different programming languages into intermediate representation code.
First, the neural network code that needs to be translated into the intermediate representation code may be obtained. In some possible embodiments, the neural network code may be neural network code implementing any function, such as neural network code performing image processing (image recognition, image segmentation, image optimization, etc.), neural network code performing language and text processing, or neural network code performing audio processing.
Additionally, in some possible implementations, the neural network code may be a code program written in a high-level language. For example, the high-level language includes at least one of the following languages: Java, C++, C#, Python, lisplus. In other embodiments, the neural network code may be written in other languages, and the disclosure is not limited thereto.
In some possible embodiments, the manner of obtaining the neural network code may include: reading the neural network code to be translated from a database, or receiving the neural network code transmitted by other equipment, wherein the database can comprise a local database and/or a cloud database. The above is merely exemplary, and in other embodiments, the neural network code may be obtained in other manners.
S200: performing code translation processing on the neural network code to obtain an intermediate representation code, wherein the intermediate representation code abstracts the implementations of the same operation on different hardware devices;
As described in the foregoing embodiments, the embodiments of the present disclosure may implement translation of neural network code; that is, the neural network code may be translated from its native language form into the language form of the intermediate representation code. The intermediate representation code of the embodiments of the disclosure may be a self-defined language structure whose running front end is independent of the inference back end; this removes the coupling between the front end (model parsing) and the back end (mathematical computation) in a deep learning framework, so that the same network model can conveniently be executed on different back ends. In addition, the intermediate representation code of the embodiments of the disclosure can represent the function objects contained in the neural network code in an abstract symbolic manner, so that the intermediate representation code can be executed on different types of hardware devices, increasing the applicability and extensibility of the neural network code.
In some possible embodiments, the translation of the function object in the neural network code may be implemented when the neural network code is executed, for example, when the function object in the neural network code is called, an intermediate representation code corresponding to the called function object may be generated. The intermediate representation code may be used to interpret the corresponding function object.
In the embodiments of the present disclosure, the neural network code may include at least one function object, where a function object may be an operation function, a control function, an indication function, or any other function performing an arbitrary operation, which are not enumerated here; any function included in the neural network code may serve as a function object in the embodiments of the present disclosure.
In the embodiments of the present disclosure, when the neural network code is run, the function objects in the neural network code are called, and the operation corresponding to each function object may then be translated into intermediate representation code. In other words, during the running of the neural network code, the embodiments of the present disclosure may generate intermediate representation code corresponding to the executed operation of each function object according to the call of each function object of the neural network code. The neural network code in high-level-language form can thus be converted into the intermediate representation code through the translation process; by means of the running mechanism of the high-level language (such as Python), the control and calculation logic described by the high-level language can be translated into the predefined intermediate representation code in real time. The intermediate representation code is self-defined code describing each operation of the neural network code; that is, the function objects in the neural network code are represented in an abstract symbolic manner, and the intermediate representation code formed by the translation process is independent of the context of the function objects, so the coupling between the front end and the back end of the neural network can be removed, and the translated code can be suitable for different back-end processing.
S300: executing the intermediate representation code by an execution engine.
In the embodiments of the present disclosure, after the neural network code is translated into the intermediate representation code, the intermediate representation code may be executed by the execution engine. The intermediate representation code may implement different functions for different neural network codes; for example, it may be used for performing one or more of an arithmetic operation, a communication operation, and a control operation, or it may be used for implementing or performing other operations, which is not limited by the present disclosure. According to the operation corresponding to the intermediate representation code, the execution engine may control the corresponding device or module to execute the corresponding operation. For example, for intermediate representation code performing an arithmetic operation, the execution engine may execute the intermediate representation code through a computing module (e.g., a CPU or GPU); for intermediate representation code performing a communication operation, the execution engine may execute the intermediate representation code through a communication module. That is, the execution engine may execute the corresponding operation through any device, module, or the like capable of executing the operation corresponding to the intermediate representation code.
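As a rough, hypothetical illustration of the dispatch just described, the following Python sketch routes each intermediate-representation record to a compute or communication module according to its operation type. The record layout, the operation sets, and the module names are assumptions for illustration, not the disclosure's actual implementation:

```python
# Hypothetical execution engine: dispatch each IR record to a compute
# module or a communication module based on the operation name.
COMPUTE_OPS = {"add", "mul"}
COMM_OPS = {"allreduce", "send", "recv"}

def compute_module(record, env):
    # Stand-in for a CPU/GPU kernel: evaluate the arithmetic operation.
    a, b = (env[x] for x in record["inputs"])
    env[record["outputs"][0]] = {"add": a + b, "mul": a * b}[record["op"]]

def comm_module(record, env):
    # Placeholder: a real back end would move data between devices here.
    env[record["outputs"][0]] = env[record["inputs"][0]]

def execute(ir_code, env):
    for record in ir_code:
        if record["op"] in COMPUTE_OPS:
            compute_module(record, env)
        elif record["op"] in COMM_OPS:
            comm_module(record, env)
        else:
            raise ValueError(f"unknown op {record['op']}")
    return env

env = execute(
    [{"op": "add", "inputs": ["a", "b"], "outputs": ["t"]},
     {"op": "mul", "inputs": ["t", "c"], "outputs": ["y"]}],
    {"a": 2, "b": 3, "c": 4},
)
print(env["y"])  # 20
```

Because the IR records carry only abstract operation names, swapping the compute or communication module for a different hardware back end requires no change to the records themselves, which is the decoupling the disclosure describes.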
In addition, in some possible embodiments, after the neural network code is translated into the predefined intermediate representation code, the translated intermediate representation code may be stored for subsequent operations such as visualization conversion or execution. The translated intermediate representation code can be exported for deployment on different back ends and for visualization of models, and may be stored in text form and/or picture form. The intermediate representation code may be stored according to the order in which it is generated during the running of the neural network code, and the stored intermediate representation code may later be run by the execution engine.
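A minimal sketch of storing the IR in generation order as text, for later export or visualization, might look as follows. The JSON-lines layout and the function names are illustrative choices, not specified by the disclosure:

```python
# Hypothetical export of IR records, one JSON object per line, preserving
# the order in which the records were generated.
import json
import tempfile

def save_ir(ir_code, path):
    with open(path, "w") as f:
        for record in ir_code:          # records kept in generation order
            f.write(json.dumps(record) + "\n")

def load_ir(path):
    with open(path) as f:
        return [json.loads(line) for line in f]

ir = [{"op": "add", "inputs": ["a", "b"], "outputs": ["t"]},
      {"op": "mul", "inputs": ["t", "c"], "outputs": ["y"]}]

with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as tmp:
    path = tmp.name
save_ir(ir, path)
assert load_ir(path) == ir              # order survives the round trip
```

A text serialization of this kind is enough for both re-execution by the engine and conversion into a visual graph by an external tool.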
Through the embodiments of the present disclosure, the neural network code can be conveniently converted into the form of the intermediate representation code. Because the intermediate representation code abstracts the implementations of the same operation on different devices, the neural network code can be executed on different hardware devices, which increases its applicability; moreover, the translation process is simple and convenient.
The embodiments of the present disclosure are explained in detail below. Fig. 2 illustrates a flowchart of code translation in a data processing method according to an embodiment of the present disclosure. Wherein, the performing code translation processing on the neural network code based on the above to obtain an intermediate representation code (step S200) may include:
S201: running the neural network code;
S202: in response to a function object in the neural network code being executed, invoking a generator interface to generate intermediate representation code corresponding to the function object.
As described in the foregoing embodiments, the disclosed embodiments may generate intermediate representation code corresponding to a function object when the function object in the neural network code is called. I.e. the original code in the neural network code can be represented by a predefined intermediate representation code.
In some possible implementations, the neural network code in the embodiments of the present disclosure may be interpreted and executed by an interpreter. The interpreter can directly translate and run the high-level programming language corresponding to the neural network code. Different types of high-level languages may be interpreted and executed by different types of interpreters. For example, when the neural network code obtained in step S100 is written in the Python language, the neural network code may be interpreted and executed by a Python interpreter. Other types of languages may likewise be interpreted and executed by corresponding interpreters, which are not enumerated here. Alternatively, in other embodiments, the neural network code may be compiled and executed by a compiler; for example, neural network code in the C++ language may be compiled into executable code by the compiler, and the executable code is then executed. The following embodiments take an interpreter as an example; the process of calling the generator interface when the neural network code is compiled and executed by a compiler is the same as when it is interpreted and executed by an interpreter, and the description is not repeated.
When the neural network code is interpreted and executed by the interpreter, each function object in the neural network code can be translated in sequence into the corresponding intermediate representation code according to the execution order of the neural network code. The neural network code may include a plurality of function objects, and each function object may be used to perform a different operation, such as an arithmetic operation, a control operation, or an indication operation. The arithmetic operations may include addition, subtraction, multiplication, division, convolution, and the like, and the operand may be a scalar or a tensor. The control operations may include loop operations, branch operations, and the like, which may correspond to logical constructs such as if and while in a high-level language. The indication operations may include resource allocation and recycling indications such as memory allocation, memory transfer, and memory release.
Correspondingly, when the code segment corresponding to each function object is called during interpretation by the interpreter, the intermediate representation code corresponding to the function object can be generated by calling the generator interface. That is, when a function object in the neural network code is called at run time, the generator can call its generator interface to generate the intermediate representation code corresponding to the called function object.
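The run-time translation just described can be sketched in Python as intercepting each function-object call and emitting an IR record alongside the actual computation. All names here (`IRGenerator`, `traced`, the record layout) are hypothetical illustrations, not the disclosure's actual interfaces:

```python
# Hypothetical sketch: as the network code runs, each executed function
# object invokes a generator interface that records corresponding IR.
class IRGenerator:
    """Generator that emits one IR record per executed function object."""

    def __init__(self):
        self.ir_code = []  # IR records, in generation order

    def emit(self, op_name, inputs, outputs, attrs=None):
        record = {"op": op_name, "inputs": list(inputs),
                  "outputs": list(outputs), "attrs": attrs or {}}
        self.ir_code.append(record)
        return record

generator = IRGenerator()

def traced(op_name, **attrs):
    """Decorator standing in for the generator-interface call made when
    the wrapped function object is executed."""
    def wrap(fn):
        def inner(*inputs):
            outputs = fn(*inputs)                       # run the operation
            generator.emit(op_name, inputs, [outputs], attrs)
            return outputs
        return inner
    return wrap

@traced("add")
def add(a, b):
    return a + b

@traced("mul")
def mul(a, b):
    return a * b

# Running the "network code" yields both the result and the IR trace.
y = mul(add(2, 3), 4)                        # y == 20
print([r["op"] for r in generator.ir_code])  # ['add', 'mul']
```

Because the records are emitted in execution order, the trace naturally captures the control and calculation logic the high-level language actually ran, which is the real-time translation mechanism described above.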
When the interpreter executes the neural network code, the number of lines of the code executed each time of interpretation may be determined according to the type of the high-level language, and may be one line or multiple lines, which is not specifically limited by the present disclosure.
In some possible implementations, multiple types of generator interfaces may be included in embodiments of the present disclosure, each type of generator interface corresponding to a type of function object. For example, in embodiments of the present disclosure, the generator interfaces defined in the generator may include at least one of a generator interface of an arithmetic operation type, a generator interface of a control operation type, and a generator interface of a communication operation type; other types of generator interfaces may also be included in other embodiments. The types of generator interfaces included in the embodiments of the present disclosure may correspond to the types of function objects in the neural network code. When a function object is executed, the generator may call the generator interface corresponding to the type of the function object to generate the intermediate representation code corresponding to the function object. That is, when different types of function objects are run, the corresponding intermediate representation code is generated through the generator interface corresponding to the type of each function object.
In some possible embodiments, the generator interface may define the generated intermediate representation code in a preset manner. The intermediate representation code generated by calling the generator interface may include an operation name, an operation input, and an operation output corresponding to the function object. In other embodiments, additional attributes of the operation, or other parameter information related to the neural network code, may also be included in the intermediate representation code.
The operation name may include at least one of an arithmetic operation, a control operation, a communication operation, and the like, or may also be a name of a function object, and the like. The additional attribute of the operation refers to an operation parameter adopted for executing the function object, such as a convolution parameter, a weight, a coefficient and the like, and can be specifically determined according to a neural network parameter. The input and the output can be the input and the output of the function object respectively, and can be determined according to the function of the function object and the operation process of the neural network.
When generating the intermediate representation code, the generator interface may generate it according to the above definition, where the identifiers of each operation name, attribute, input, and output may be determined according to a preset correspondence. The preset correspondence may include the intermediate symbol corresponding to each arithmetic operation, each control operation, each communication operation, and each operand and attribute parameter, and so on. That is, the embodiment of the present disclosure may query, through the preset correspondence, the intermediate symbol corresponding to each code in a function object of the neural network code, and determine the intermediate representation code for the code segment corresponding to each function based on the queried intermediate symbols.
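A minimal sketch of such a preset correspondence and the resulting IR record is given below. The table contents, symbol names, and the `to_ir` helper are assumptions for illustration, not the actual symbols used by the disclosure.

```python
# Illustrative sketch of the preset correspondence between operations and
# intermediate symbols; the table contents and names here are assumptions,
# not the disclosure's actual symbols.
SYMBOL_TABLE = {
    "add": "IR_ADD",
    "matmul": "IR_MATMUL",
    "send": "IR_SEND",
}

def to_ir(op_name, attrs, inputs, outputs):
    """Build one IR record: operation name, additional attributes, inputs, outputs."""
    symbol = SYMBOL_TABLE[op_name]  # query the preset correspondence
    return {"op": symbol, "attrs": attrs, "inputs": inputs, "outputs": outputs}

rec = to_ir("matmul", {"transpose_b": False}, ["a", "b"], ["c"])
```

Each record thus carries exactly the fields named in the definition above: operation name (as an intermediate symbol), attributes, inputs, and outputs.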
In addition, in some possible embodiments, the same function object may be called and run multiple times when running the neural network code. In this case, the generator interface may be called each time the function object is run, generating intermediate representation code for the same function object anew. The intermediate representation code generated each time may overwrite the previously generated intermediate representation code, reducing memory occupation.
Alternatively, in other embodiments, when the same function object is called and run multiple times, the saved intermediate representation code corresponding to that function object may be run, where the saved intermediate representation code is the code generated by calling the generator interface when the function object was called for the first time, or the code generated by calling the generator interface before the function object is actually called. In some embodiments, when the function object is executed for the first time, the intermediate representation code corresponding to it is generated by calling the generator interface and saved; when the function object is called again, the saved intermediate representation code is executed. Alternatively, before the function object is actually called, the corresponding generator interface may be called to generate the intermediate representation code for the function object, and that code may be saved so that it can be executed directly in actual application. That is, in the embodiment of the present disclosure, for the same function object, the intermediate representation code is generated only when the function object is executed for the first time; on subsequent calls the saved intermediate representation code may be executed, which increases execution speed.
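The generate-once-then-reuse strategy can be sketched with a simple cache. All names here are illustrative assumptions; the counter exists only to make the saving visible.

```python
# Sketch of the caching strategy described above: generate IR only on the
# first run of a function object, then reuse the saved IR on repeated
# calls (all names here are illustrative assumptions).
_ir_cache = {}
generator_calls = 0

def call_generator_interface(func_name):
    global generator_calls
    generator_calls += 1                 # counts real translation work
    return f"IR<{func_name}>"

def run_function_object(func_name):
    if func_name not in _ir_cache:       # first call: translate and save
        _ir_cache[func_name] = call_generator_interface(func_name)
    return _ir_cache[func_name]          # repeated calls: run saved IR

for _ in range(3):
    run_function_object("conv")          # translated once, reused twice
```

Despite three runs of the same function object, the generator interface is invoked only once, which is the speed-up the embodiment describes.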
In some possible implementations, after the neural network code is translated into the corresponding intermediate representation code, the disclosed embodiments may further perform optimization processing on the intermediate representation code. In the embodiments of the disclosure, optimizing the intermediate representation code refers to performing an equivalent transformation on it, that is, a transformation that does not change the operation result of the program: the transformed code produces the same operation result and performs the same function as the code before transformation. In other words, embodiments of the present disclosure provide room for static analysis and optimization of the model before the neural network model translated into intermediate representation code is actually used. For example, the translated intermediate representation code may undergo runtime-independent static analysis and optimization, such as memory allocation optimization, computation merging, and communication merging. By optimizing the intermediate representation code, the disclosed embodiments can reduce the resources it occupies during execution; for example, the generated intermediate representation code can be optimized to be shorter (shorter running time, smaller occupied space), or space-time efficiency optimization can be performed. The optimization of the intermediate representation code in the embodiments of the present disclosure may include at least one of the following:
a: optimizing a memory use mode corresponding to the intermediate representation code;
Optimizing the memory usage mode corresponding to the code in the embodiment of the present disclosure may include adjusting the allocation and release of memory resources, the order of memory allocation, transfer codes, memory multiplexing, and the like, so that the timing and order of memory allocation and release, as well as reasonable multiplexing, can be arranged properly and memory resource occupation can be reduced. In other embodiments, the memory layout of the intermediate representation code may be optimized in other ways.
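One such memory optimization, buffer reuse, can be sketched as follows. This is a toy illustration assuming the lifetime of each IR value is known from static analysis; the function and its greedy policy are assumptions, not the disclosure's algorithm.

```python
# Toy sketch of one memory-usage optimization: reuse a buffer released by
# a dead value instead of allocating a fresh one. Value lifetimes are
# assumed known from static analysis of the IR (an assumption of this sketch).
def count_buffers(lifetimes):
    """lifetimes: (alloc_step, free_step) per IR value; greedy reuse."""
    events = sorted(
        [(e, 0, i) for i, (s, e) in enumerate(lifetimes)] +  # frees sort first
        [(s, 1, i) for i, (s, e) in enumerate(lifetimes)]
    )
    free, assigned, total = [], {}, 0
    for _, kind, i in events:
        if kind == 1:                        # value becomes live: allocate
            if free:
                assigned[i] = free.pop()     # reuse a released buffer
            else:
                assigned[i] = total          # open a new buffer
                total += 1
        else:                                # value dies: buffer is reusable
            free.append(assigned[i])
    return total

# three values, but at most two live at once: two buffers suffice
buffers_needed = count_buffers([(0, 2), (1, 3), (2, 4)])
```

Without reuse, three values would need three buffers; arranging the timing of allocation and release lets two suffice, which is the resource reduction described above.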
b: code for merging at least a portion of the intermediate representation code for arithmetic operations;
By merging the codes of arithmetic operations, multiple arithmetic operations can be combined into a single operation with fewer computations, shortening the code and increasing its running speed. The operation result is preserved while the length of the code is reduced, and the resources occupied by the operations can also be reduced. Merging the codes of arithmetic operations may include combining multiple code segments that implement multiple arithmetic operations into one piece of code that achieves the same operation result.
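A minimal sketch of such computation merging is folding a run of consecutive constant additions in the IR into one equivalent addition. The IR encoding and operation names are assumptions for illustration.

```python
# Toy sketch of computation merging: fold a run of consecutive constant
# "add" operations in the IR into a single equivalent add (illustrative).
def merge_adds(ir):
    merged, total = [], 0
    for op, val in ir:
        if op == "add":
            total += val                     # fold the constants together
        else:
            if total:
                merged.append(("add", total))
                total = 0
            merged.append((op, val))
    if total:
        merged.append(("add", total))        # flush a trailing run
    return merged

optimized = merge_adds([("add", 1), ("add", 2), ("add", 3), ("mul", 2)])
# same result as the original sequence, but one add instead of three
```

The transformation is equivalent in the sense defined above: the operation result is unchanged, while the code is shorter and performs fewer operations.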
c: code that merges at least a portion of the intermediate representation code for a communication operation.
By merging the codes of communication operations, the communication requirement can be met while the length of the code is reduced; at the same time, the resources occupied by communication operations can be reduced, and the delay caused by frequent communication can be lowered. Merging the codes of communication operations may include combining multiple code segments that implement multiple communication operations into one piece of code that achieves the same communication purpose.
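Communication merging can be sketched by batching consecutive sends to the same destination into one message. The device names and list-based encoding are assumptions for illustration only.

```python
# Toy sketch of communication merging: batch consecutive sends to the same
# destination into one message (device names are illustrative).
def merge_sends(ops):
    merged = []
    for dest, payload in ops:
        if merged and merged[-1][0] == dest:
            # extend the previous message instead of sending a new one
            merged[-1] = (dest, merged[-1][1] + payload)
        else:
            merged.append((dest, payload))
    return merged

batched = merge_sends([("gpu1", [1]), ("gpu1", [2, 3]), ("gpu2", [4])])
# two messages instead of three; the same data reaches each destination
```

Fewer messages means fewer fixed per-message costs, which is where the reduction in communication delay comes from.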
In the above manner, code optimization can be performed, further reducing the length of the intermediate representation code and the resources it occupies, while improving its running efficiency.
In addition, when step S300 is executed, the optimized intermediate representation code may be executed by the execution engine, so that the effects of increasing the operation speed, optimizing the memory usage, and reducing the communication delay may be achieved according to the optimized intermediate representation code.
In another embodiment of the present disclosure, a visualization effect of the intermediate representation code may also be achieved. Fig. 3 shows a flowchart for visualizing intermediate representation code in a data processing method according to an embodiment of the present disclosure. The data processing method of the embodiment of the present disclosure may further include:
S401: forming a visualization structure corresponding to the intermediate representation code using a visualization tool;
S402: displaying the visualization structure.
As described in the foregoing embodiments, after the intermediate representation code corresponding to the neural network code is generated, it may be visualized to obtain a corresponding visualization structure. The embodiment of the present disclosure may form a visualization structure corresponding to each intermediate representation code by using a visualization tool. A visualization tool may be used to create a dialog box or other interface that displays variables or objects in a manner appropriate to their data type. For example, a visualization tool of an embodiment of the present disclosure may include TensorBoard. The visualization of the intermediate representation code may also be achieved by other visualization tools in other embodiments.
After the visualization structure of the intermediate representation code is obtained, the obtained visualization structure may be displayed, so as to realize the visualization display of the intermediate representation code.
In addition, as described in the above embodiments, the neural network code according to the embodiments of the present disclosure may be a code that implements an arbitrary function, and an intermediate representation code corresponding to the neural network code may be generated by translating the neural network code. The following describes an application of the data processing method according to the embodiment of the present disclosure based on an example in the field of image processing.
Fig. 4 shows another flow diagram of a data processing method according to an embodiment of the present disclosure. The data processing method comprises the following steps:
S501: acquiring an image to be processed;
The neural network code of the embodiments of the present disclosure may implement image processing operations, for example, image feature extraction, image deblurring, and image recognition, which are not specifically limited by the present disclosure. Correspondingly, the image to be processed may be obtained first; the image to be processed is then used as an operand of the neural network code, and the neural network code of the embodiment of the present disclosure is executed, thereby implementing the processing of the image to be processed.
S502: performing code translation processing on the neural network code to obtain an intermediate representation code, wherein the intermediate representation code comprises an intermediate representation code corresponding to the neural network code obtained by operating the neural network code with the image to be processed as an operand;
According to the embodiment of the disclosure, the acquired image to be processed can be input to the neural network as an operand while the neural network code is executed; at this time, when a function object of the neural network code is called, the intermediate representation code corresponding to the function object can be generated through the generator interface. The specific process is the same as described in the above embodiments and will not be repeated here.
S503: and executing the intermediate representation code through an execution engine, wherein the execution of the intermediate representation code by the execution engine is utilized to obtain a processing result of the image to be processed.
After the corresponding intermediate representation code is generated, it may be stored, for example, in a local database or a cloud database, which is not specifically limited by the present disclosure.
In addition, the intermediate representation code obtained or stored in step S502 may be executed by the execution engine, so as to obtain a processing result of the image to be processed.
After generating the intermediate representation codes corresponding to the function objects of the neural network code, the execution engine may be used to execute the intermediate representation codes, and since the disclosed embodiments use the operands corresponding to the image to be processed as the processing objects, after executing the intermediate representation codes, the processing result corresponding to the image to be processed may be generated.
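The execution-engine step can be sketched as a loop that walks the saved IR records and dispatches each one to a backend kernel. The kernel table, operation names, and the list-of-numbers "image" are assumptions for illustration, not the disclosure's engine.

```python
# Minimal sketch of an execution engine: walk the saved IR records in
# order and dispatch each operation to a backend kernel. The kernel table
# and operation names are assumptions for illustration.
KERNELS = {
    "IR_SCALE": lambda img, k: [p * k for p in img],
    "IR_ADD":   lambda img, b: [p + b for p in img],
}

def execute(ir, image):
    result = image
    for op, arg in ir:
        result = KERNELS[op](result, arg)  # backend-specific implementation
    return result

ir = [("IR_SCALE", 2), ("IR_ADD", 1)]
processed = execute(ir, [1, 2, 3])         # a 3-pixel "image" as operand
```

Because only the kernel table is backend-specific, the same IR can in principle be executed on different hardware by swapping in a different set of kernels, which is the portability the disclosure describes.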
In addition, in other embodiments of the present disclosure, the neural network code may be a code of a neural network test phase, and may also be a code of an application phase, which is not specifically limited in this embodiment of the present disclosure.
Fig. 5 shows another flow chart of a data processing method according to an embodiment of the present disclosure, wherein the method further comprises:
S601: determining network loss based on the processing result of the image to be processed;
S602: adjusting network parameters of the neural network based on the network loss.
In some possible embodiments, the images to be processed may be used as training samples to train the neural network corresponding to the neural network code. That is, a plurality of images to be processed may be used as operands and input to the neural network, the neural network code is translated into intermediate representation code, and the intermediate representation code is executed by the execution engine to obtain a prediction result (processing result) for each image to be processed. The loss between the processing result and the real result used for supervision determines the loss value of the neural network: when the network loss value is smaller than a loss threshold, training of the network is stopped; when the network loss value is greater than or equal to the loss threshold, the parameters of the neural network are adjusted, and training continues until the network loss is smaller than the loss threshold.
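The supervision loop above can be sketched on a toy one-parameter model that predicts y = w * x. The model, data, learning rate, and threshold are all assumptions for illustration, not part of the disclosure.

```python
# Hedged sketch of the supervision loop above on a toy one-parameter
# model (predict y = w * x); the model, data, and learning rate are
# assumptions, not part of the disclosure.
def train(samples, labels, w=0.0, lr=0.1, loss_threshold=0.01):
    while True:
        preds = [w * x for x in samples]   # run the network (via its IR)
        loss = sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(samples)
        if loss < loss_threshold:          # loss below threshold: stop
            return w, loss
        grad = sum(2 * (p - y) * x
                   for p, x, y in zip(preds, samples, labels)) / len(samples)
        w -= lr * grad                     # otherwise adjust the parameter

w, loss = train([1, 2, 3], [2, 4, 6])      # true relation is y = 2x
```

Each pass through the loop corresponds to steps S601 and S602: compute the loss from the processing result, then adjust the parameter if the loss is still above the threshold.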
To illustrate the embodiments of the present disclosure more clearly, the translation process is exemplified below. Fig. 6 shows a schematic diagram of one example of a data processing method according to an embodiment of the present disclosure. A neural network written in the Python language is taken as an example for the description.
First, the neural network code written in the Python language is obtained from a code library or database. The neural network code is then interpreted and executed by a Python interpreter; during interpretation, the running of a function object in the neural network code triggers a call to an API (generator interface) in the IR (intermediate representation code) generator, and the IR generator, by calling the generator interface, generates the intermediate representation code corresponding to the called function object. The intermediate representation codes are then stored in the order in which they were generated. Meanwhile, each intermediate representation code can be visualized through a visualization tool to obtain a visualization structure. Further, the functions of the intermediate representation codes can be realized by executing them with the execution engine. In other embodiments, the generated intermediate representation code may be optimized by an IR optimization module, so as to improve the running efficiency of the intermediate representation code, reduce its length, reduce the memory space required, and so on. When visualization is performed and the intermediate representation code is executed by the execution engine, the optimized intermediate representation code may be visualized and executed.
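The end-to-end flow of Fig. 6 can be sketched in a few lines: interpreting the Python network code triggers the generator interface as a side effect, leaving IR ready for the execution engine. All names here are assumptions for illustration.

```python
# Illustrative end-to-end sketch of the flow in Fig. 6: interpreting the
# Python network code triggers the generator interface as a side effect,
# leaving IR ready for the execution engine (all names are assumptions).
ir = []

def op(name, x):
    ir.append((name, x))   # generator interface: record an IR entry
    return x               # the interpreter still runs the code normally

def network(x):            # neural network code written in Python
    x = op("conv", x)
    x = op("relu", x)
    return x

network([0.5])             # one interpreted run yields the full IR trace
```

The key point is that no separate compilation step is needed: one interpreted run of the Python code produces the IR records in execution order, ready to be stored, visualized, optimized, or handed to the execution engine.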
In summary, in the embodiment of the present disclosure, when the neural network code is run, the intermediate representation code corresponding to the function objects in the neural network code may be generated at the same time. The intermediate representation code abstracts the implementation of the same operation by different hardware devices, that is, it abstracts the neural network code and may be applicable to different types of hardware devices. Because the intermediate representation code is unrelated to the type of hardware on which the neural network backend executes, it may be applied to different backends. The embodiment of the disclosure is simple and convenient, and can improve the extensibility of the neural network code.
In addition, the embodiment of the disclosure also provides an image processing method, and the image processing method can obtain a processing result through the execution of the intermediate representation code corresponding to the neural network code. The image processing method of the embodiment of the present disclosure may be applied to any device having an image processing function, such as a mobile phone, a camera device, and an intelligent wearable device, or may also be applied to a server, which is not specifically limited by the present disclosure.
Fig. 7 shows a flowchart of an image processing method according to an embodiment of the present disclosure, wherein the image processing method includes:
s10: acquiring an image to be processed;
in the embodiment of the present disclosure, the image may be processed by executing the intermediate representation code, wherein the image to be processed may be acquired first. The image to be processed can be acquired by directly acquiring the image, or can be acquired by reading the stored image, or can be received from other equipment to be used as the image to be processed. In other embodiments, the image to be processed may be obtained in other manners, which are not illustrated in this disclosure.
S20: and taking the image to be processed as an operation object, and running an intermediate representation code corresponding to the neural network code to obtain a processing result of the image to be processed, wherein the intermediate representation code is generated by the method in any one of the embodiments.
As in the above embodiments, the disclosed embodiments may translate the neural network code into an intermediate representation code, and correspondingly may convert the neural network code with the image processing function into a corresponding intermediate representation code.
Correspondingly, the acquired image to be processed can be used as an operation object, and the intermediate representation code is executed, so as to obtain a processing result corresponding to the image to be processed, wherein the processing result can be determined according to the specific function of the neural network. With the above-described embodiment, since the intermediate representation code is in a code form independent of the backend, it can be applied to various types of image processing apparatuses, and scalability is better.
It will be understood by those skilled in the art that, in the above methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principles and logic; due to space limitations, detailed descriptions of such combinations are omitted in the present disclosure.
In addition, the present disclosure also provides a data processing apparatus, an image processing apparatus, an electronic device, a computer-readable storage medium, and a program, which can be used to implement any image/data processing method provided by the present disclosure, and the corresponding technical solutions and descriptions and corresponding descriptions in the method section are omitted for brevity.
Fig. 8 shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure, which, as shown in fig. 8, includes:
a first obtaining module 10, configured to obtain a neural network code;
a generating module 20, configured to perform code translation processing on the neural network code to obtain an intermediate representation code, where the intermediate representation code abstracts the implementation of the same operation by different hardware devices;
an execution module 30 for executing the intermediate representation code by an execution engine.
In some possible embodiments, the generating module comprises:
an execution unit for executing the neural network code;
a generating unit, configured to, in response to a function object in the neural network code being executed, call a generator interface to generate intermediate representation code corresponding to the function object.
In some possible embodiments, the generating unit is further configured to, in response to a function object in the neural network code being executed, call a generator interface corresponding to the function object based on a type of the function object to generate the intermediate representation code corresponding to the function object.
In some possible embodiments, the generating unit is further configured to, in response to the same function object being repeatedly executed, call the generator interface to generate the intermediate representation code corresponding to the same function object each time the same function object is executed.
In some possible embodiments, the generating unit is further configured to execute the saved intermediate representation code corresponding to the function object when the function object is repeatedly called,
the saved intermediate representation code corresponding to the function object is the intermediate representation code generated by calling the generator interface when the function object is called for the first time or the intermediate representation code generated by calling the generator interface before the function object is actually called.
In some possible embodiments, the execution unit is further configured to execute the neural network code by interpreting the neural network code by an interpreter, and/or by compiling the neural network code by a compiler.
In some possible embodiments, the intermediate representation code corresponding to the function object in the neural network includes the operation name, operation input, and operation output corresponding to the function object.
In some possible embodiments, the intermediate representation code is configured to represent a function object included in the neural network code in an abstract symbolic manner, and the intermediate representation code corresponding to the function object is independent of a context of the function object.
In some possible embodiments, the execution module is further configured to invoke, by the execution engine, a computation module and/or a communication module to execute the intermediate representation code.
In some possible embodiments, the apparatus further comprises an optimization module for optimizing the intermediate representation code;
the execution module is further configured to execute the optimized intermediate representation code via an execution engine.
In some possible embodiments, the optimizing module optimizing the intermediate representation code comprises at least one of:
optimizing a memory use mode corresponding to the intermediate representation code;
code for merging at least a portion of the intermediate representation code for arithmetic operations;
code that merges at least a portion of the intermediate representation code for a communication operation.
In some possible embodiments, the apparatus further includes a storage module configured to store the intermediate representation codes in an order in which the intermediate representation codes are generated.
In some possible embodiments, the apparatus further comprises:
a visualization module for forming a visualization structure corresponding to the intermediate representation code using a visualization tool and displaying the visualization structure.
In some possible embodiments, the first obtaining module is further configured to obtain an image to be processed;
the generating module is further configured to run the neural network code with the image to be processed as an operation object to obtain an intermediate representation code corresponding to the neural network code;
the execution module is further configured to execute the intermediate representation code by using the execution engine to obtain a processing result of the image to be processed.
In some possible embodiments, the image to be processed is a sample image, and the apparatus further includes a training module configured to determine a network loss based on a processing result of the image to be processed; and
adjusting network parameters of the neural network based on the network loss.
Fig. 9 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure, wherein the image processing apparatus includes:
a second obtaining module 100, configured to obtain an image to be processed;
an image processing module 200, configured to take the image to be processed as an operation object, run an intermediate representation code corresponding to a neural network code, and obtain a processing result of the image to be processed, where the intermediate representation code is generated by using the method according to any one of the first aspect.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured as the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 10 shows a block diagram of an electronic device according to an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 10, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in its temperature. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
FIG. 11 shows another block diagram of an electronic device in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to FIG. 11, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by a memory 1932, for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each of which corresponds to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described methods.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A data processing method, comprising:
acquiring a neural network code;
performing code translation processing on the neural network code to obtain an intermediate representation code, wherein the intermediate representation code abstracts implementations of a same operation on different hardware devices;
executing the intermediate representation code by an execution engine.
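The method of claim 1 can be sketched as follows. This is a hypothetical illustration only; all class and function names (`IRInstruction`, `translate`, `ExecutionEngine`, the kernel table) are invented for this sketch and do not come from the patent, which leaves the concrete IR format and engine unspecified.

```python
# Hypothetical sketch of claim 1: translate neural network code into a
# hardware-agnostic intermediate representation (IR), then execute the IR
# with an engine that dispatches each abstract op to a device-specific kernel.

class IRInstruction:
    """One hardware-agnostic intermediate-representation instruction."""
    def __init__(self, op, args):
        self.op = op      # abstract operation name, e.g. "add"
        self.args = args  # operands: input names or literal values

def translate(neural_network_code):
    """Translate neural network code (modelled as (op, args) steps) into IR.

    A real implementation would trace or parse the framework's program;
    here the translation is a direct one-to-one lowering.
    """
    return [IRInstruction(op, args) for op, args in neural_network_code]

class ExecutionEngine:
    """Executes IR; the same IR runs on any device with registered kernels."""
    def __init__(self, kernels):
        self.kernels = kernels  # abstract op name -> concrete implementation

    def run(self, ir_code, inputs):
        env = dict(inputs)
        result = None
        for instr in ir_code:
            # Resolve named operands from the environment; pass literals through.
            values = [env.get(a, a) for a in instr.args]
            result = self.kernels[instr.op](*values)
        return result

# Usage: a "CPU" kernel table; a GPU table with the same op names could be
# swapped in without changing the IR.
cpu_engine = ExecutionEngine({"add": lambda x, y: x + y,
                              "scale": lambda x, k: x * k})
ir = translate([("add", ["a", "b"]), ("scale", [3, 2])])
print(cpu_engine.run(ir, {"a": 1, "b": 2}))  # 6
```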
2. The method of claim 1, wherein performing code translation processing on the neural network code to obtain an intermediate representation code comprises:
running the neural network code;
invoking, in response to a function object in the neural network code being executed, a generator interface to generate intermediate representation code corresponding to the function object.
3. The method of claim 2, wherein said invoking a generator interface in response to a function object in the neural network code being executed comprises:
invoking, in response to a function object in the neural network code being executed and based on a type of the function object, a generator interface corresponding to the function object to generate intermediate representation code corresponding to the function object.
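The type-based dispatch of claims 2 and 3 can be illustrated as a tracing translator: while the neural network code runs, each executed function object triggers the generator interface registered for its type, which emits the corresponding IR. All names here (`GENERATORS`, `register_generator`, the `ir.*` strings) are illustrative assumptions, not the patent's API.

```python
# Illustrative sketch of claims 2-3 (hypothetical names): pick the generator
# interface by the executed function object's type and emit IR for it.

GENERATORS = {}  # function-object type -> generator interface

def register_generator(op_type):
    """Decorator registering a generator interface for one object type."""
    def deco(fn):
        GENERATORS[op_type] = fn
        return fn
    return deco

@register_generator("conv")
def gen_conv(obj):
    return f"ir.conv(kernel={obj['kernel']})"

@register_generator("relu")
def gen_relu(obj):
    return "ir.relu()"

def run_and_translate(function_objects):
    """'Run' the code; as each function object executes, generate its IR."""
    ir_code = []
    for obj in function_objects:
        gen = GENERATORS[obj["type"]]  # interface chosen by object type
        ir_code.append(gen(obj))
    return ir_code

print(run_and_translate([{"type": "conv", "kernel": 3}, {"type": "relu"}]))
# ['ir.conv(kernel=3)', 'ir.relu()']
```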
4. The method of claim 2 or 3, wherein generating, by the generator interface, the intermediate representation code corresponding to the function object comprises:
invoking, in response to a same function object being repeatedly executed, the generator interface to generate intermediate representation code corresponding to the same function object each time the same function object is executed.
5. The method of claim 2 or 3, wherein generating, by the generator interface, the intermediate representation code corresponding to the function object comprises:
running, when the function object is repeatedly invoked, saved intermediate representation code corresponding to the function object,
wherein the saved intermediate representation code corresponding to the function object is intermediate representation code generated by invoking the generator interface when the function object is invoked for the first time, or intermediate representation code generated by invoking the generator interface before the function object is actually invoked.
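The caching behaviour of claim 5 can be sketched as follows, under the assumption (mine, not the patent's) that function objects are identified by a key: the generator interface runs only on the first invocation, and later invocations reuse the saved IR.

```python
# Sketch of claim 5 (hypothetical names): generate IR once, on first
# invocation of a function object; reuse the saved IR on repeats.

class CachingTranslator:
    def __init__(self, generator):
        self.generator = generator
        self.cache = {}     # function-object key -> saved IR
        self.generated = 0  # how many times the generator interface ran

    def ir_for(self, obj_key, obj):
        if obj_key not in self.cache:
            self.cache[obj_key] = self.generator(obj)
            self.generated += 1
        return self.cache[obj_key]

t = CachingTranslator(lambda obj: f"ir.{obj['op']}()")
for _ in range(3):  # the same function object invoked three times
    ir = t.ir_for("layer0", {"op": "relu"})
print(ir, t.generated)  # ir.relu() 1 -- the generator ran only once
```

The same cache slot could equally be filled ahead of time, covering the claim's alternative of generating the IR before the function object is actually invoked.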
6. An image processing method, comprising:
acquiring an image to be processed;
taking the image to be processed as an operation object, and running an intermediate representation code corresponding to a neural network code to obtain a processing result of the image to be processed, wherein the intermediate representation code is generated by the method according to any one of claims 1 to 5.
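The image processing method of claim 6 can be sketched by threading the image through the IR's operations in order. The pipeline, the kernel names, and the toy pixel-list image are all assumptions for illustration; the patent does not fix any particular operations.

```python
# Hypothetical sketch of claim 6: the image to be processed is the operation
# object, and the IR corresponding to the neural network code is run over it
# to obtain a processing result.

def run_ir_on_image(ir_code, image, kernels):
    """Run each IR instruction over the image, threading the result through."""
    result = image
    for op in ir_code:
        result = kernels[op](result)
    return result

# Toy "image" as a flat list of pixel intensities, with two toy kernels.
kernels = {"normalize": lambda img: [p / 255 for p in img],
           "threshold": lambda img: [1 if p > 0.5 else 0 for p in img]}
print(run_ir_on_image(["normalize", "threshold"], [0, 128, 255], kernels))
# [0, 1, 1]
```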
7. A data processing apparatus, comprising:
a first obtaining module for obtaining a neural network code;
a generation module for performing code translation processing on the neural network code to obtain an intermediate representation code, wherein the intermediate representation code abstracts implementations of a same operation on different hardware devices;
an execution module to execute the intermediate representation code by an execution engine.
8. An image processing apparatus, comprising:
a second acquisition module for acquiring an image to be processed;
an image processing module, configured to take the image to be processed as an operation object, and run an intermediate representation code corresponding to a neural network code to obtain a processing result of the image to be processed, where the intermediate representation code is generated by the method according to any one of claims 1 to 5.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: performing the method of any one of claims 1 to 5, or performing the method of claim 6.
10. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 5 or the method of claim 6.
CN201910199174.8A 2019-03-15 2019-03-15 Data processing method and device, image processing method and device and electronic equipment Active CN111694557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910199174.8A CN111694557B (en) 2019-03-15 2019-03-15 Data processing method and device, image processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111694557A true CN111694557A (en) 2020-09-22
CN111694557B CN111694557B (en) 2024-04-16

Family

ID=72475465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910199174.8A Active CN111694557B (en) 2019-03-15 2019-03-15 Data processing method and device, image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111694557B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107851002A (en) * 2015-08-31 2018-03-27 华为技术有限公司 A kind of code compiling method and code encoder
US20180136912A1 (en) * 2016-11-17 2018-05-17 The Mathworks, Inc. Systems and methods for automatically generating code for deep learning systems
CN108694694A (en) * 2017-04-10 2018-10-23 英特尔公司 Abstraction library for allowing for scalable distributed machine learning
CN108734649A (en) * 2017-04-24 2018-11-02 英特尔公司 Neural network training mechanism
CN109242096A (en) * 2017-07-01 2019-01-18 英特尔公司 Techniques for training deep neural networks
US20190050715A1 (en) * 2018-09-28 2019-02-14 Intel Corporation Methods and apparatus to improve data training of a machine learning model using a field programmable gate array
CN109344959A (en) * 2018-08-27 2019-02-15 联想(北京)有限公司 Neural network training method, nerve network system and computer system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHAO, TIAN et al.: "Design and implementation of DeepDSL: A DSL for deep learning", Computer Languages, Systems & Structures, 22 January 2019 (2019-01-22) *
夏文超; 刘建平; 戴瑜兴: "Neural Network Implementation Method Based on Matlab and Linux", Computer Engineering, no. 14, 20 July 2010 (2010-07-20) *
尹杰: "Research on Software Runtime Reliability Based on Compiler Intermediate Language", China Doctoral Dissertations Full-text Database (Information Science and Technology), 15 July 2016 (2016-07-15) *

Also Published As

Publication number Publication date
CN111694557B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN111222637B (en) Neural network model deployment method and device, electronic equipment and storage medium
JP2022517914A (en) Face-to-hand association detection methods and devices, electronics, storage media and computer programs
CN110909815B (en) Neural network training method, neural network training device, neural network processing device, neural network training device, image processing device and electronic equipment
CN110874217B (en) Interface display method and device for quick application and storage medium
CN112947935A (en) Operation method and device, electronic device and storage medium
CN109981787B (en) Method and device for displaying information
CN113806054A (en) Task processing method and device, electronic equipment and storage medium
CN110851108A (en) Electronic equipment operation method and device, electronic equipment and storage medium
CN113065591A (en) Target detection method and device, electronic equipment and storage medium
CN111767058A (en) Program compiling method and device, electronic equipment and storage medium
CN114356336A (en) Neural network model deployment method and device, electronic equipment and storage medium
CN114741292A (en) Test script management method and device, electronic equipment and storage medium
CN114035902A (en) Application program development platform and method, electronic device and storage medium
CN104991857A (en) Method and apparatus for trace debugging
CN110163372B (en) Operation method, device and related product
CN111694571B (en) Compiling method and device
CN109635926B (en) Attention feature acquisition method and device for neural network and storage medium
CN111046780A (en) Neural network training and image recognition method, device, equipment and storage medium
CN111694557B (en) Data processing method and device, image processing method and device and electronic equipment
CN114020264A (en) Operator processing method and device, electronic equipment and storage medium
CN112988194B (en) Program optimization method and device based on equipment information, electronic equipment and storage medium
CN111488964A (en) Image processing method and device and neural network training method and device
CN113378893B (en) Data management method and device, electronic equipment and storage medium
CN113052942B (en) Chart generation method, device, storage medium and electronic equipment
CN114035804A (en) Code conversion method, device, medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant