CN111898761B - Service model generation method, image processing method, device and electronic equipment - Google Patents

Service model generation method, image processing method, device and electronic equipment

Info

Publication number
CN111898761B
Authority
CN
China
Prior art keywords
model
image processing
processing model
image
post
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010809792.2A
Other languages
Chinese (zh)
Other versions
CN111898761A (en)
Inventor
Zhang Bo (张博)
Lyu Zhuoheng (吕灼恒)
Yuan Shuai (原帅)
Zhang Jinfeng (张晋锋)
Yuan Wei (袁伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Shuguang International Information Industry Co ltd
Dawning Information Industry Beijing Co Ltd
Dawning Information Industry Co Ltd
Original Assignee
Zhongke Shuguang International Information Industry Co ltd
Dawning Information Industry Beijing Co Ltd
Dawning Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Shuguang International Information Industry Co ltd, Dawning Information Industry Beijing Co Ltd, and Dawning Information Industry Co Ltd
Priority to CN202010809792.2A
Publication of CN111898761A
Application granted
Publication of CN111898761B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Abstract

The application provides a service model generation method, an image processing method, an apparatus, and an electronic device. The method includes: acquiring an image processing model; analyzing the image processing model to obtain model features of the image processing model; configuring a predefined preprocessing template according to the model features to obtain a preprocessing program; configuring a predefined post-processing template according to the model features to obtain a post-processing program; and packaging the preprocessing program, the image processing model, and the post-processing program to generate an inference service. This can improve the efficiency of constructing the inference service.

Description

Service model generation method, image processing method, device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a service model generation method, an image processing method and apparatus, and an electronic device.
Background
Obtaining a deep learning model capable of image recognition generally requires several stages: converting the data set, defining the model, training the model, inference verification, publishing the inference service, and so on.
However, because the data that can be fed into a deep learning model file is in one format (for example, a tensor) while the data to be recognized is in another format (for example, a picture), the trained deep learning model file alone is not enough to provide an inference service. Therefore, to provide an inference service, not only the deep learning model file but also processing modules that convert the data must be written and built. This approach can result in a long publication cycle for the inference service.
Disclosure of Invention
The application aims to provide a service model generation method, an image processing method and apparatus, and an electronic device, which can solve the problem of a long publication cycle for inference services.
In a first aspect, an embodiment of the present application provides a service model generation method, including:
acquiring an image processing model;
analyzing the image processing model to obtain model characteristics of the image processing model;
configuring a pre-defined preprocessing template according to the model characteristics to obtain a preprocessing program;
configuring a pre-defined post-processing template according to the model characteristics to obtain a post-processing program;
and packaging the preprocessing program, the image processing model, and the post-processing program to generate an inference service.
In an optional embodiment, the model features include input parameter information, and configuring the predefined preprocessing template according to the model features to obtain a preprocessing program includes:
and filling the predefined preprocessing template with the input parameter information to obtain a preprocessing program.
In the embodiment of the application, the input parameter information is filled directly into the preprocessing template, which simplifies the generation of the preprocessing program and improves its efficiency.
In an optional embodiment, the model feature includes output parameter information, and the configuring a predefined post-processing template according to the model feature to obtain a post-processing program includes:
acquiring a label mapping file;
and filling the post-processing template according to the output parameter information and the label mapping file to obtain a post-processing program.
In the embodiment of the application, for inference services whose recognition results need labels, the labeling rule for the output recognition results is defined through the label mapping file. By using the label mapping file and the output parameter information directly, the post-processing program can be obtained more quickly and simply while meeting the output requirements.
In an optional embodiment, before the step of acquiring the image processing model, the method further comprises:
after the training of the image processing model is finished, storing the image processing model by using a specified storage mode;
when the image processing model is stored by using the specified storage mode, defining an input layer and an output layer of a specified signature in the image processing model;
the analyzing the image processing model to obtain the model characteristics of the image processing model comprises: analyzing the designated signature of the image processing model to obtain model features of the image processing model.
In the embodiment of the application, storing the image processing model in the designated storage mode makes it convenient to acquire the parameter information of the model's inputs and outputs whenever it is needed, which lays a foundation for generating the preprocessing program and the post-processing program and improves the efficiency of generating them.
In an optional embodiment, the analyzing the image processing model to obtain model features of the image processing model includes:
and analyzing the image processing model through an interface provided by TF-Serving to obtain the model characteristics of the image processing model.
In the embodiment of the application, the model features of the image processing model are obtained directly through the interface provided by TF-Serving, so that the required information can be acquired in a simpler and more convenient way for generating the preprocessing program and the post-processing program.
In a second aspect, an embodiment of the present application provides an image processing method, including:
inputting an image to be processed into an inference service generated by the above service model generation method, and performing image processing on the image to be processed to obtain a processing result.
In a third aspect, an embodiment of the present application provides a service model generation apparatus, including:
the acquisition module is used for acquiring an image processing model;
the analysis module is used for analyzing the image processing model to obtain the model characteristics of the image processing model;
the first configuration module is used for configuring a predefined preprocessing template according to the model characteristics so as to obtain a preprocessing program;
the second configuration module is used for configuring a predefined post-processing template according to the model characteristics to obtain a post-processing program;
and the generating module is used for packaging the preprocessing program, the image processing model and the post-processing program to generate an inference service.
In a fourth aspect, an embodiment of the present application provides an image processing apparatus, including:
a processing module, configured to input an image to be processed into an inference service generated by the above service model generation method, and to perform image processing on the image to be processed to obtain a processing result.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing machine-readable instructions executable by the processor; when the electronic device runs, the machine-readable instructions are executed by the processor to perform the steps of the method described above.
In a sixth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the above method.
The beneficial effects of the embodiments of the application are as follows: by acquiring the model features, the preprocessing template and the post-processing template are configured directly, so that the time spent writing the preprocessing program and the post-processing program can be reduced, the period for generating the whole inference service is shortened, and the efficiency of publishing the inference service is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a schematic flowchart of deep learning according to an embodiment of the present disclosure.
Fig. 3 is a flowchart of a service model generation method according to an embodiment of the present application.
FIG. 4 is a schematic diagram of an operation interface for generating a service model.
Fig. 5 is another flowchart of a service model generation method according to an embodiment of the present disclosure.
Fig. 6 is a flowchart of an image classification service generation method according to an embodiment of the present application.
Fig. 7 is a schematic functional module diagram of a service model generation apparatus according to an embodiment of the present application.
Detailed Description
The technical solution in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Example one
To facilitate understanding of the present embodiment, an electronic device that executes the service model generation method or the image processing method disclosed in the embodiments of the present application will be described in detail first.
Fig. 1 is a schematic block diagram of an electronic device. The electronic device 100 may include a memory 110, a memory controller 120, a processor 130, a peripheral interface 140, an input-output unit 150, and a display unit 160. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely an illustration and is not intended to limit the structure of the electronic device 100. For example, electronic device 100 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The aforementioned components of the memory 110, the memory controller 120, the processor 130, the peripheral interface 140, the input/output unit 150, and the display unit 160 are electrically connected to each other directly or indirectly, so as to implement data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The processor 130 is used to execute the executable modules stored in the memory.
The Memory 110 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 110 is configured to store a program; the processor 130 executes the program after receiving an execution instruction. The method performed by the electronic device 100, as defined by the processes disclosed in any embodiment of the present application, may be applied to or implemented by the processor 130.
The processor 130 may be an integrated circuit chip having signal processing capability. The processor 130 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor can implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and so on.
The peripheral interface 140 couples various input/output devices to the processor 130 and to the memory 110. In some embodiments, the peripheral interface 140, the processor 130, and the memory controller 120 may be implemented in a single chip. In other examples, they may each be implemented in a separate chip.
The input/output unit 150 is used for a user to input data. The input/output unit 150 may be, but is not limited to, a mouse, a keyboard, and the like.
The display unit 160 provides an interactive interface (e.g., a user operation interface) between the electronic device 100 and a user or is used for displaying image data to the user for reference. In this embodiment, the display unit may be a liquid crystal display or a touch display. In the case of a touch display, the display can be a capacitive touch screen or a resistive touch screen, which supports single-point and multi-point touch operations. The support of single-point and multi-point touch operations means that the touch display can sense touch operations simultaneously generated from one or more positions on the touch display, and the sensed touch operations are sent to the processor for calculation and processing.
It can be understood that, when the inference service obtained by the service model generation method provided in the embodiments of the present application is used to provide different services, the electronic device may be provided with additional components according to the service provided. For example, if the inference service is used to implement face-recognition clock-in, the electronic device in this embodiment may further include a camera unit for capturing face images.
The electronic device 100 in this embodiment may be configured to perform each step in each method provided in this embodiment. The following describes in detail the implementation of the service model generation method and the image processing method by several embodiments.
Example two
First, the process of constructing a deep learning model in a TensorFlow scenario, which the embodiments of the present application rely on, is described, as shown in fig. 2. The deep learning process can be summarized as: data set conversion 210, model training 220, and model inference 230.
The data that a neural network model can recognize is generally in a specified format. For example, the open-source software library TensorFlow supports the TFRecord data format, while typical training data are in a picture format (e.g., jpg, png, etc.). Therefore, the pictures and their annotation information need to be converted into the TFRecord format supported by TensorFlow. The annotation information may be, for example, coordinate information of the object to be identified in the picture.
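Illustratively, the following minimal Python sketch packs a picture and its annotation into a TFRecord file using TensorFlow's tf.train.Example; the feature keys, file names, and bounding-box layout are assumptions made for illustration, not fields prescribed by this application.

import tensorflow as tf

def to_tfrecord(picture_path, bbox, writer):
    # Read the raw picture bytes (e.g., a jpg or png file).
    with open(picture_path, 'rb') as f:
        image_bytes = f.read()
    # Pack the picture and its annotation (object coordinates) into an Example.
    example = tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[image_bytes])),
        'image/object/bbox': tf.train.Feature(
            float_list=tf.train.FloatList(value=bbox)),  # [xmin, ymin, xmax, ymax]
    }))
    writer.write(example.SerializeToString())

with tf.io.TFRecordWriter('train.tfrecord') as writer:
    to_tfrecord('sample.jpg', [0.1, 0.2, 0.8, 0.9], writer)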
The model training 220 described above may include the definition of a model and the training of a model.
The definition of the model includes: defining an input layer, an output layer, a processing algorithm, and learning parameters. For example, the processing algorithm may define an initial network model based on some neural network model, such as a Convolutional Neural Network (CNN) or a Deep Convolutional Neural Network (DCNN).
The training of the model may consist of inputting pictures with annotation information into the initial network model and performing iterative computation to adjust the parameters of the initial network model until a set condition is reached, thereby obtaining a trained image processing model.
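For illustration only, the following sketch shows how an input layer, a CNN processing algorithm, and an output layer might be defined and trained with TensorFlow's Keras API; the layer sizes, class count, and stand-in data set are assumptions, not the model of this application.

import tensorflow as tf

# Define the initial network model: input layer, CNN processing algorithm, output layer.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu',
                           input_shape=(224, 224, 3)),   # input layer + convolution
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation='softmax'),      # output layer, 3 classes
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Stand-in for a data set of annotated pictures; a real pipeline would read
# the TFRecord file produced by the data set conversion step.
train_dataset = tf.data.Dataset.from_tensor_slices(
    (tf.zeros([8, 224, 224, 3]), tf.zeros([8], dtype=tf.int32))).batch(4)

# Iteratively adjust the parameters until a set condition (here, 10 epochs) is reached.
model.fit(train_dataset, epochs=10)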
The model inference 230 described above may include: model validation and publication inference services.
The model verification is to verify the trained image processing model by using test data so as to determine the difference between the inference result obtained by the image processing model and an expected inference result.
Publishing the inference service merges the preprocessing program, the post-processing program, and the image processing model together for release.
Illustratively, the deep learning model constructed in the TensorFlow scenario can be used to realize target detection, image classification, semantic segmentation, and the like.
In the embodiment of the present application, model inference 230 in the deep learning process can be simplified by defining the preprocessing template and the post-processing template in advance. The service model generation method provided in this embodiment realizes this simplification and is described below.
Please refer to fig. 3, which is a flowchart illustrating a method for generating a service model according to an embodiment of the present application. The specific flow shown in fig. 3 will be described in detail below.
Step 302, an image processing model is obtained.
For example, the image processing model may be obtained from a path of the image processing model received in the set window.
Illustratively, FIG. 4 is a schematic view of an operation interface for generating a service model. The figure includes a plurality of windows for receiving input information. The operation interface includes a "model path" window for inputting the path of the image processing model. The operation interface may also include a "signature" window for receiving model signatures. The operation interface may further include scene buttons for selecting the corresponding usage scene; only three scene buttons are shown in the figure: "image classification", "object detection", and "semantic segmentation". The operation interface may further include buttons for selecting the image type of the image to be recognized; only two are shown in the figure: "color" and "black and white". The operation interface may also include a "category details" window for loading the label mapping file, a "preprocessing template" window for loading preprocessing templates, a "post-processing template" window for loading post-processing templates, and an "image path" window for inputting an image path.
In the above example, the image processing model is obtained through the path of the image processing model in the "model path" window.
Illustratively, as shown in fig. 4, a query button and an upload button are also provided beside the "model path" window. The query button is used to open a local file directory for the user to select a storage path for the image processing model. The upload button is used for uploading the image processing model file in the storage path of the image processing model to the server. The server is connected with the electronic equipment displaying the operation interface.
Step 304, analyzing the image processing model to obtain model features of the image processing model.
In this embodiment, the input parameter information of the input layer and the output parameter information of the output layer of the image processing model may be obtained by analyzing the image processing model.
Illustratively, the input parameter information of the input data required by the image processing model is obtained by analyzing the input layer of the image processing model. The input parameter information may include, but is not limited to, parameter type, parameter dimension, parameter name, and the like.
Illustratively, the parameter type of the input parameter information may be integer type, floating point type, character string type, or the like.
For example, the input data may be a vector or a matrix, and the parameter dimension of the input parameter information may be a dimension of the vector or the matrix.
Illustratively, the output parameter information of the image processing model output data is obtained by analyzing the output layer of the image processing model. The output parameter information may include, but is not limited to, parameter type, parameter dimension, parameter name, parameter category, and the like.
Illustratively, the parameter type of the output parameter information may be integer type, floating point type, character string type, or the like.
Illustratively, the output data may be a vector or a matrix, and the parameter dimension of the output parameter information may be a dimension of the vector or the matrix.
Illustratively, the parameter categories may be detection classes, detection scores, detection boxes, and the like.
Step 306, configuring a pre-defined preprocessing template according to the model characteristics to obtain a preprocessing program.
In this embodiment, the preprocessing program is used to perform data conversion on the picture to be recognized to obtain input data that can be recognized by the image processing model.
Optionally, the predefined preprocessing template is filled with the input parameter information to obtain a preprocessing program.
Illustratively, the preprocessing template may be a program that reserves positions for the input parameter information that needs to be filled in.
Illustratively, as shown in FIG. 4, the content of a written or loaded preprocessing template may be received in the "preprocessing template" window. Illustratively, "save" and "undo" buttons may also be provided above the "preprocessing template" window, for the user to perform the corresponding operations after writing the content of the preprocessing template.
In one example, the content of the pre-processing template may be:
from PIL import Image
import numpy as np

def preprocess(picture_path):
    img = Image.open(picture_path)
    img = img.convert('')                # to fill: picture type
    np_img = np.asarray(img).astype()    # to fill: parameter type of the conversion
    input_dict = {'': np_img}            # to fill: model input layer name
    return input_dict
The above code reserves positions to be filled at "img.convert()", "np_img = np.asarray(img).astype()", and "input_dict = {}".
The picture type is filled in at "img.convert()" in the preprocessing template; the parameter type of the required conversion is filled in at "np_img = np.asarray(img).astype()"; the name of the required model input layer is filled in at "input_dict = {}".
In the above example, after the input parameter information is filled into the preprocessing template, the preprocessing program is obtained. (The filled program appears in the original publication only as an image; a reconstruction follows the description below.)
In the above example, the filled picture type is "RGB", the filled parameter type of the required conversion is "np.uint8", the filled extended-dimension step is "np_img = np.expand_dims(np_img, axis=0)", and the filled model input layer name is "inputs".
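Based on the fills just listed, the filled preprocessing program can be reconstructed approximately as follows; this is a sketch, not a verbatim copy of the original figure.

from PIL import Image
import numpy as np

def preprocess(picture_path):
    img = Image.open(picture_path)
    img = img.convert('RGB')                    # filled picture type: RGB
    np_img = np.asarray(img).astype(np.uint8)   # filled conversion type: np.uint8
    np_img = np.expand_dims(np_img, axis=0)     # filled extended-dimension step
    input_dict = {'inputs': np_img}             # filled model input layer name
    return input_dict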
Illustratively, the information filled into the pre-processing template may be input parameter information obtained by parsing the image processing model in step 304.
It is to be appreciated that the preprocessing template and the populated input parameter information in the above example are merely exemplary. For example, the populated input parameter information may differ according to the requirements of the model file. For another example, the contents of the preprocessing template may differ according to different data conversion requirements.
In this embodiment, steps and algorithms required for data conversion may be predefined in the preprocessing template, so that when the input parameter information is filled into the preprocessing template, conversion from data in one format to data in another format can be achieved.
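One hedged way to realize this filling step, assuming the template is held as a string with named placeholders (the placeholder and field names here are hypothetical), is plain string substitution:

PRE_TEMPLATE = """\
from PIL import Image
import numpy as np

def preprocess(picture_path):
    img = Image.open(picture_path).convert('{pic_mode}')
    np_img = np.asarray(img).astype({dtype})
    return {{'{input_name}': np_img}}
"""

def build_preprocess_program(model_features):
    # model_features: input parameter information parsed from the model in step 304.
    return PRE_TEMPLATE.format(
        pic_mode=model_features['pic_mode'],      # e.g. 'RGB'
        dtype=model_features['dtype'],            # e.g. 'np.uint8'
        input_name=model_features['input_name'])  # e.g. 'inputs'

program_text = build_preprocess_program(
    {'pic_mode': 'RGB', 'dtype': 'np.uint8', 'input_name': 'inputs'})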
Step 308, configuring a predefined post-processing template according to the model features to obtain a post-processing program.
In one embodiment, if some annotation information needs to be displayed on the recognized image after the image is recognized, the corresponding label information may be configured when the post-processing template is configured.
Illustratively, step 308 may include: and acquiring a label mapping file, and filling the post-processing template according to the output parameter information and the label mapping file to obtain a post-processing program.
Optionally, the obtaining of the label mapping file may be implemented as: obtaining the label mapping file from a set path.
In one example, as shown in FIG. 4, the operation interface includes a "category details" window. Alternatively, the "category details" window may be used to load the label mapping file. For example, a query button and an upload button are displayed next to the "category details" window. The query button is used to open a local file directory for the user to select the storage path of the label mapping file. The upload button is used to upload the label mapping file in that storage path to the server. The server is connected to the electronic device displaying the operation interface.
Optionally, the "category details" window may also be used to receive an input label mapping file.
Illustratively, as shown in FIG. 4, the content of a written or loaded post-processing template may be received in the "post-processing template" window. Illustratively, "save" and "undo" buttons may also be provided above the "post-processing template" window, for the user to perform the corresponding operations after writing the content of the post-processing template.
In one example, the content of the post-processing template may include:
(The post-processing template appears in the original publication only as an image.)
In the above example, "label_map = label_map_util.load_labelmap()" in the post-processing template is used to fill the path of the label mapping file, and the arguments of "vis_util.visualize_boxes_and_labels_on_image_array(image_np, np.squeeze(output_dict[...]), np.squeeze(output_dict[...]).astype(...), np.squeeze(output_dict[...]), ...)" are used to fill the parameter categories.
The partial contents of the post-processing template described above are merely exemplary. Specifically, the content of the post-processing template may also be different depending on the output requirements of the inference service.
In the above example, the post-processing program can be obtained by filling the label mapping file and the output parameters into the post-processing template. Illustratively, the post-processing program may include:
(The filled post-processing program appears in the original publication only as an image; a hedged reconstruction follows the description below.)
In the above example, the filled path of the label mapping file is "/public/home/sugon/SothisAI/tensorflow/tf_interference/independent_interference/label.txt", and the filled parameter categories include "detection_boxes", "detection_classes", and "detection_scores".
It will be appreciated that the post-processing template, the populated output parameter information, and the label mapping file in the above example are merely exemplary. For example, the populated output parameter information may differ according to different model requirements. For another example, the populated label mapping file may differ according to the presentation mode of the output result. For another example, the content of the post-processing template may differ according to different data conversion requirements.
In this embodiment, steps and algorithms required for data conversion may be predefined in the post-processing template, so that when the output parameter information and the tag mapping file are filled in the post-processing template, conversion from the output data of the image processing model to data of an identification result recognizable by a user can be achieved.
Step 310, packaging the preprocessing program, the image processing model, and the post-processing program to generate an inference service.
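The application does not fix a packaging layout; as one hedged illustration, the generated programs and the model could be assembled into a single service directory like this (all names hypothetical):

import shutil
from pathlib import Path

def package_service(pre_program, post_program, model_dir,
                    service_dir='inference_service'):
    out = Path(service_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / 'preprocess.py').write_text(pre_program)    # generated preprocessing program
    (out / 'postprocess.py').write_text(post_program)  # generated post-processing program
    # Copy the SavedModel files next to the programs (Python 3.8+ for dirs_exist_ok).
    shutil.copytree(model_dir, out / 'model', dirs_exist_ok=True)
    return out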
In the example shown in fig. 4, a "run" button may also be included in the operation interface shown in fig. 4. After filling the various windows with the required information, when the "run" button is clicked, steps 302-310 in this embodiment can be performed to generate the inference service.
In this example, when the selected scenario is different in the operation interface shown in fig. 4, the generated inference service may be used to implement different recognition services. For example, when a storage path of a model file for image classification is loaded in the "model path" window, and at this time, an "image classification" button may be selected among the scene buttons shown in the drawing, the generated inference service may be used to provide a service for image classification. For another example, when the storage address of the model file for semantic segmentation is loaded in the "model path" window, and at this time, the "semantic segmentation" button can be selected from the scene buttons shown in the figure, the generated inference service can be used to provide a service for semantic segmentation.
Illustratively, the operation interface shown in fig. 4 may further include a "reset" button, and when the "reset" button is clicked, the inference generating service may be stopped, so that each window in the operation interface is in an editable state, so that the user may reset information required by the inference generating service.
In this embodiment, before step 302, as shown in fig. 5, the service model generation method may further include: step 301, after the training of the image processing model is completed, storing the image processing model by using a specified storage mode.
When the image processing model is stored by using the specified storage mode, an input layer and an output layer of a specified signature in the image processing model are defined.
Alternatively, the SavedModel storage format (via SavedModelBuilder) recommended by TF-Serving may be used for storage.
In this embodiment, the image processing model may include a plurality of signatures, each signature may be used to implement a function. Illustratively, the function may be: image classification, target detection, semantic segmentation and the like.
In this embodiment, when the image processing model is stored, the input and output layers of the designated signature may be defined.
Illustratively, the specified signature may be a sub-model in the image processing model that is used to implement the desired functionality.
Illustratively, the model file in which the image processing model is stored by the SavedModelBuilder described above comes with a specific API (Application Programming Interface). Through this API, the input parameter information of the input layer and the output parameter information of the output layer under the specified signature of the image processing model can be obtained. Illustratively, the API may be the saved_model_cli interface.
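As a minimal sketch of step 301 under these conventions, a TF1-style SavedModelBuilder export might look as follows; the tensor names, export path, and the stand-in graph are assumptions for illustration:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Stand-in graph: define the input layer and output layer of the signature.
x = tf.placeholder(tf.uint8, [None, 224, 224, 3], name='inputs')
y = tf.identity(tf.cast(x, tf.float32), name='scores')

builder = tf.saved_model.Builder('/tmp/my_model/1')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    signature = tf.saved_model.predict_signature_def(
        inputs={'inputs': x}, outputs={'scores': y})
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={'serving_default': signature})
builder.save()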
Illustratively, step 304 may include: analyzing the designated signature of the image processing model to obtain model features of the image processing model.
Optionally, step 304 may include: and analyzing the image processing model through an interface provided by TF-Serving to obtain the model characteristics of the image processing model.
Illustratively, the interface provided by TF-Serving may be the saved_model_cli interface.
In this embodiment, when parsing through the saved_model_cli interface, the signature in the image processing model is specified, so that the parsing is restricted to the specified signature, and the input parameter information of the input layer and the output parameter information of the output layer under that signature can be obtained.
For example, when the input parameter information needs to be obtained, the signature of the input layer in the image processing model is specified when parsing through the saved_model_cli interface, so that the input layer can be parsed to obtain the input parameter information.
Similarly, when the output parameter information needs to be obtained, the signature of the output layer in the image processing model is specified when parsing through the saved_model_cli interface, so that the output layer can be parsed to obtain the output parameter information.
For example, in the model feature analysis of an image processing model, the signature may be specified by adding the name of the signature and the storage path of the image processing model to the logic code.
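For illustration, the same parsing can be done from Python with the utility behind the saved_model_cli tool; the tag set 'serve' and the signature name 'serving_default' are conventional assumptions:

from tensorflow.python.tools import saved_model_utils

def parse_model_features(saved_model_dir, signature_name='serving_default'):
    meta_graph = saved_model_utils.get_meta_graph_def(saved_model_dir, 'serve')
    signature = meta_graph.signature_def[signature_name]
    # TensorInfo carries the name, dtype (as an enum), and shape of each tensor.
    inputs = {name: (info.dtype, [d.size for d in info.tensor_shape.dim])
              for name, info in signature.inputs.items()}
    outputs = {name: (info.dtype, [d.size for d in info.tensor_shape.dim])
               for name, info in signature.outputs.items()}
    return inputs, outputs

inputs, outputs = parse_model_features('/tmp/my_model/1')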
For example, the input layer of the image processing model may be the input layer under a signature of the image processing model, and the output layer may be the output layer under a signature of the image processing model, where the signature corresponding to the input layer and the signature corresponding to the output layer are the same signature.
In the service model generation method of the embodiment of the application, the preprocessing template and the post-processing template are configured directly from the acquired model features, so that the time spent writing the preprocessing program and the post-processing program can be reduced, the period for generating the whole inference service is shortened, and the efficiency of publishing the inference service is improved.
Furthermore, because the preprocessing program is generated automatically from the preprocessing template and the post-processing program is generated automatically from the post-processing template, publishing and sharing the inference service becomes more convenient.
EXAMPLE III
The embodiment of the present application provides a method for generating an image classification service, which is similar to the method in the second embodiment, and the difference is that the method in the present embodiment is specifically used for constructing the image classification service.
As shown in fig. 6, the image classification service generation method in the present embodiment may include the following steps.
Step 401, an image classification model is obtained.
Alternatively, the image classification model may be a model obtained by training a classification model with a labeled training data set. Illustratively, the label may be the category corresponding to each item of training data; this category is one of the multiple categories that the image classification model can output.
Step 402, analyzing the image classification model to obtain model characteristics of the image classification model.
Illustratively, the model features may include: input parameter information of an input layer of the image classification model and output parameter information of an output layer of the image classification model.
Step 403, configuring a pre-defined pre-processing template according to the model features to obtain a pre-processing program.
Step 404, configuring a predefined post-processing template according to the model features to obtain a post-processing program.
In this embodiment, the output information of the image classification service includes the classification category corresponding to the identified image. Thus, when configuring the post-processing template, the configured label mapping file may include mapping the classification category into the identified image.
In one example, the classification categories of the image classification model may include three categories: motorcycles, automobiles, and people. The output of the image classification model may be the probabilities of the respective categories. For example, when the output of the image classification model is (a, b, c), a represents the probability that the object in the image to be recognized is a motorcycle, b the probability that it is a car, and c the probability that it is a person. In this example, when the classification result is a person, the output may be the probabilities (0.3, 0.2, 0.5), where the probability value of classification category 3 is highest; when the classification result is a car, the output may be (0.3, 0.6, 0.1), where the probability value of classification category 2 is highest; when the classification result is a motorcycle, the output may be (0.5, 0.2, 0.3), where the probability value of classification category 1 is highest. In this example, the content of the label mapping file may include the rules for converting the numeric category corresponding to the classification result output by the output layer into the corresponding text category. For example, the output probabilities (0.3, 0.2, 0.5) are converted into the word "person", and the classification result is displayed as "person"; for instance, the text "classification result: person" may be displayed next to the corresponding image to be recognized.
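A minimal sketch of this mapping step (the file name and its one-name-per-line format are assumptions for illustration):

import numpy as np

# label.txt (assumed format), one category name per line:
#   motorcycle
#   car
#   person
with open('label.txt') as f:
    id_to_name = [line.strip() for line in f]

def map_classification(probs):
    # Pick the class with the highest probability and map it to its text category.
    return id_to_name[int(np.argmax(probs))]

print('classification result: ' + map_classification([0.3, 0.2, 0.5]))  # -> person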
Step 405, packaging the pre-processing program, the image classification model and the post-processing program to generate an image classification service.
Optionally, the image classification service may be released to a software platform for downloading by a user, so as to provide the user with the image classification service.
The image classification service generation method in this embodiment is similar to the service model generation method provided in the second embodiment, so the implementation of the method in this embodiment can be referred to the description in the embodiment of the service model generation method provided in the second embodiment, and repeated descriptions are omitted.
By using the method in this embodiment, the preprocessing template and the post-processing template make it possible to obtain the preprocessing program and the post-processing program in a more convenient and faster manner. Furthermore, the image classification service can be generated more quickly, and the publication efficiency of the image classification service is also improved.
Example four
Based on the same application concept, a service model generation device corresponding to the service model generation method is also provided in the embodiments of the present application, and since the principle of solving the problem of the device in the embodiments of the present application is similar to that in the embodiments of the service model generation method, the implementation of the device in the embodiments of the present application may refer to the description in the embodiments of the method, and repeated details are omitted.
Please refer to fig. 7, which is a schematic diagram of the functional modules of a service model generation apparatus according to an embodiment of the present application. Each module in the service model generation apparatus in this embodiment is configured to execute each step in the service model generation method embodiment provided in the second embodiment. The service model generation apparatus includes: an acquisition module 501, an analysis module 502, a first configuration module 503, a second configuration module 504, and a generation module 505; wherein:
an obtaining module 501, configured to obtain an image processing model;
an analysis module 502, configured to analyze the image processing model to obtain a model feature of the image processing model;
a first configuration module 503, configured to configure a predefined preprocessing template according to the model features to obtain a preprocessing program;
a second configuration module 504, configured to configure a predefined post-processing template according to the model features to obtain a post-processing program;
a generating module 505, configured to package the pre-processing program, the image processing model, and the post-processing program to generate an inference service.
In a possible implementation, the model features include input parameter information, and the first configuration module 503 is configured to:
and filling the predefined preprocessing template with the input parameter information to obtain a preprocessing program.
In a possible implementation, the model features include output parameter information, and the second configuration module 504 is configured to:
acquiring a label mapping file;
and filling the post-processing template according to the output parameter information and the label mapping file to obtain a post-processing program.
In a possible implementation manner, the service model generating apparatus in this embodiment further includes:
the storage module is used for storing the image processing model by using a specified storage mode after the training of the image processing model is finished;
when the image processing model is stored by using the specified storage mode, an input layer and an output layer of a specified signature in the image processing model are defined.
The analysis module 502 is configured to analyze the specified signature of the image processing model to obtain a model feature of the image processing model.
In a possible implementation, the parsing module 502 is configured to:
and analyzing the image processing model through an interface provided by TF-Serving to obtain the model characteristics of the image processing model.
EXAMPLE five
The embodiment of the application provides an image processing method. The image processing method in this embodiment includes: inputting an image to be processed into the inference service, and performing image processing on the image to be processed to obtain a processing result.
For example, the inference service used in this embodiment may be the inference service generated by the service model generation method provided in the second embodiment, and for the obtaining manner of the inference service, reference may be made to the description in the second embodiment, which is not described herein again.
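As one hedged illustration of submitting an image to such a published inference service, assuming it is exposed through TF-Serving's REST predict API (the host, port, and model name below are hypothetical):

import base64
import json
import requests

def run_inference(image_path,
                  url='http://localhost:8501/v1/models/my_model:predict'):
    # The packaged preprocessing/post-processing normally run on the server;
    # here the raw image bytes are sent base64-encoded for illustration.
    with open(image_path, 'rb') as f:
        payload = {'instances': [{'b64': base64.b64encode(f.read()).decode()}]}
    resp = requests.post(url, data=json.dumps(payload))
    return resp.json()

result = run_inference('sample.jpg')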
In one example, the image processing method in this embodiment may be used to classify images, for example, distinguishing images that contain a person from images that do not. In this instance, the processing result may be whether the image to be processed includes a portrait.
In one example, the image processing method in this embodiment may be used to implement attendance checking. In this example, the processing result may be whether the clock-in is successful. For example, the processing result may further indicate whether a live person is included in the image to be processed and whether that person belongs to the designated user group.
In one example, the image processing method in the present embodiment may be used to implement semantic segmentation. In this example, the processing result may be object information included in the image to be processed. In one example, the image to be processed includes a face image area, a vehicle image area, and a building image area. In this example, the object information is face position information, face size information, vehicle position information, vehicle size information, building position information, building size information, and the like.
EXAMPLE six
Based on the same application concept, an image processing apparatus corresponding to the image processing method is further provided in the embodiments of the present application, and since the principle of the apparatus in the embodiments of the present application for solving the problem is similar to that in the embodiments of the image processing method, the implementation of the apparatus in the embodiments of the present application may refer to the description in the embodiments of the image processing method, and repeated details are omitted.
The embodiment of the application provides an image processing apparatus. Each module in the image processing apparatus in this embodiment is configured to execute each step in the image processing method embodiment provided in the fifth embodiment. The image processing apparatus includes: a processing module, configured to input an image to be processed into the inference service and perform image processing on the image to be processed to obtain a processing result.
For example, the inference service used in this embodiment may be the inference service generated by the service model generation method provided in the second embodiment, and for the obtaining manner of the inference service, reference may be made to the description in the second embodiment, and details are not described here.
Furthermore, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the service model generation method or the image processing method described in the above method embodiment.
The computer program product of the service model generation method and the image processing method provided in the embodiments of the present application includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute steps of the service model generation method or the image processing method described in the above method embodiments, which may be specifically referred to in the above method embodiments and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute over the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. It should be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another like element in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method for generating a service model, comprising:
acquiring an image processing model;
analyzing the image processing model to obtain model characteristics of the image processing model, including: analyzing an input layer of the image processing model to obtain input parameter information of input data required by the image processing model, and analyzing an output layer of the image processing model to obtain output parameter information of output data of the image processing model;
configuring a pre-defined preprocessing template according to the model features to obtain a preprocessing program, which comprises the following steps: filling the predefined preprocessing template with the input parameter information to obtain a preprocessing program;
configuring a predefined post-processing template according to the model features to obtain a post-processing program, comprising: acquiring a label mapping file; filling the post-processing template according to the output parameter information and the label mapping file to obtain a post-processing program;
and packaging the preprocessing program, the image processing model and the post-processing program to generate an inference service.
2. The method of claim 1, wherein prior to the step of obtaining an image processing model, the method further comprises:
after the training of the image processing model is finished, storing the image processing model by using a specified storage mode;
when the image processing model is stored by using the specified storage mode, defining an input layer and an output layer of a specified signature in the image processing model;
the analyzing the image processing model to obtain the model characteristics of the image processing model includes: analyzing the designated signature of the image processing model to obtain model features of the image processing model.
3. The method of claim 1, wherein the parsing the image processing model to obtain model features of the image processing model comprises:
and analyzing the image processing model through an interface provided by TF-Serving to obtain the model characteristics of the image processing model.
4. An image processing method, characterized by comprising:
inputting an image to be processed into an inference service generated by using the service model generation method of any one of claims 1 to 3, and performing image processing on the image to be processed to obtain a processing result.
5. A service model generation apparatus, comprising:
the acquisition module is used for acquiring an image processing model;
the analysis module is used for analyzing an input layer of the image processing model to obtain input parameter information of input data required by the image processing model and analyzing an output layer of the image processing model to obtain output parameter information of output data of the image processing model;
the first configuration module is used for filling the input parameter information into a predefined preprocessing template to obtain a preprocessing program;
the second configuration module is used for acquiring a label mapping file and filling a post-processing template according to the output parameter information and the label mapping file to obtain a post-processing program;
and the generating module is used for packaging the preprocessing program, the image processing model and the post-processing program to generate an inference service.
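The apparatus of claim 5 maps naturally onto a small object composition, one callable per claimed module; every class, attribute, and parameter name below is hypothetical.

```python
# Hypothetical composition of claim 5's modules; all names are illustrative.
from typing import Any, Callable

class ServiceModelGenerator:
    def __init__(self,
                 acquire: Callable[[str], Any],         # acquisition module
                 parse: Callable[[Any], tuple],         # analysis module
                 make_pre: Callable[[Any], str],        # first configuration module
                 make_post: Callable[[Any, str], str],  # second configuration module
                 package: Callable[..., Any]):          # generating module
        self.acquire, self.parse = acquire, parse
        self.make_pre, self.make_post, self.package = make_pre, make_post, package

    def generate(self, model_uri: str, label_map_path: str):
        model = self.acquire(model_uri)
        in_info, out_info = self.parse(model)
        pre = self.make_pre(in_info)
        post = self.make_post(out_info, label_map_path)
        return self.package(pre, model, post)  # the packaged inference service
```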
6. An image processing apparatus, comprising:
a processing module, configured to input an image to be processed into an inference service generated by using the service model generation method according to any one of claims 1 to 3, and perform image processing on the image to be processed to obtain a processing result.
7. An electronic device, comprising: a processor and a memory storing machine-readable instructions executable by the processor, wherein the machine-readable instructions, when executed by the processor while the electronic device runs, perform the steps of the method of any one of claims 1 to 4.
8. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any one of claims 1 to 4.
CN202010809792.2A 2020-08-12 2020-08-12 Service model generation method, image processing method, device and electronic equipment Active CN111898761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010809792.2A CN111898761B (en) 2020-08-12 2020-08-12 Service model generation method, image processing method, device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010809792.2A CN111898761B (en) 2020-08-12 2020-08-12 Service model generation method, image processing method, device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111898761A CN111898761A (en) 2020-11-06
CN111898761B true CN111898761B (en) 2022-11-22

Family

ID=73229290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010809792.2A Active CN111898761B (en) 2020-08-12 2020-08-12 Service model generation method, image processing method, device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111898761B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893861A * 2004-05-04 2010-11-24 Fisher-Rosemount Systems, Inc. Graphics integrated into a process configuration and control environment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050011745A (en) * 2002-04-19 2005-01-29 Computer Associates Think, Inc. System and method for providing inferencing services
CN103294475B (en) * 2013-06-08 2016-01-13 Beijing University of Posts and Telecommunications System and method for automatic service creation based on graphical service scenarios and domain templates
CN108037961B (en) * 2017-12-07 2021-01-19 北京锐安科技有限公司 Application program configuration method, device, server and storage medium
US20200175396A1 (en) * 2018-12-04 2020-06-04 Intel Corporation Deterministic neural networking interoperability
CN110532098B (en) * 2019-08-30 2022-03-08 广东星舆科技有限公司 Method and system for providing GPU (graphics processing unit) service
CN110928553A (en) * 2019-10-16 2020-03-27 中国平安人寿保险股份有限公司 Deployment method, device and system of deep learning model
CN111461332B (en) * 2020-03-24 2021-08-20 北京五八信息技术有限公司 Deep learning model online reasoning method and device, electronic equipment and storage medium
CN111488211A (en) * 2020-04-09 2020-08-04 北京嘀嘀无限科技发展有限公司 Task processing method, device, equipment and medium based on deep learning framework


Also Published As

Publication number Publication date
CN111898761A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN110785736B (en) Automatic code generation
US10235602B1 (en) Machine learning artificial intelligence system for identifying vehicles
EP3869385B1 (en) Method for extracting structural data from image, apparatus and device
CN111767228B (en) Interface testing method, device, equipment and medium based on artificial intelligence
US11164306B2 (en) Visualization of inspection results
CN113011186A (en) Named entity recognition method, device, equipment and computer readable storage medium
CN111523421A (en) Multi-user behavior detection method and system based on deep learning and fusion of various interaction information
CN110781925A (en) Software page classification method and device, electronic equipment and storage medium
WO2022247112A1 (en) Task processing method and apparatus, device, storage medium, computer program, and program product
JP2019133303A (en) Information processor and information processing program and information processing method
CN111522979B (en) Picture sorting recommendation method and device, electronic equipment and storage medium
US20210279279A1 (en) Automated graph embedding recommendations based on extracted graph features
CN111124863A (en) Intelligent equipment performance testing method and device and intelligent equipment
US20240086165A1 (en) Systems and methods for building and deploying machine learning applications
CN111898761B (en) Service model generation method, image processing method, device and electronic equipment
CN111368709A (en) Picture text recognition method, device and equipment and readable storage medium
CN110766460A (en) User portrait drawing method and device, storage medium and computer equipment
CN115631374A (en) Control operation method, control detection model training method, device and equipment
US20180052699A1 (en) Generating Computer Executable Instructions
US20220092406A1 (en) Meta-feature training models for machine learning algorithms
Sánchez-Morales et al. Generation of user interfaces for mobile applications using neuronal networks
CN113327132A (en) Multimedia recommendation method, device, equipment and storage medium
Kumar et al. Image classification in python using Keras
Sharma et al. DLC re-builder: Sketch based recognition and 2-D conversion of digital logic circuit
KR102648613B1 (en) Method, apparatus and computer-readable recording medium for generating product images displayed in an internet shopping mall based on an input image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20220110
Address after: 100089 building 36, courtyard 8, Dongbeiwang West Road, Haidian District, Beijing
Applicant after: Dawning Information Industry (Beijing) Co.,Ltd.
Applicant after: DAWNING INFORMATION INDUSTRY Co.,Ltd.
Applicant after: Zhongke Shuguang International Information Industry Co.,Ltd.
Address before: Building 36, yard 8, Dongbei Wangxi Road, Haidian District, Beijing
Applicant before: Dawning Information Industry (Beijing) Co.,Ltd.
Applicant before: WUXI CITY CLOUD COMPUTING CENTER CO.,LTD.
Applicant before: DAWNING INFORMATION INDUSTRY Co.,Ltd.
Applicant before: Zhongke Shuguang International Information Industry Co.,Ltd.
GR01 Patent grant