Detailed Description
The technical solution in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Example one
To facilitate understanding of the present embodiment, an electronic device that executes the service model generation method or the image processing method disclosed in the embodiments of the present application will be described in detail first.
Fig. 1 is a block schematic diagram of an electronic device. The electronic device 100 may include a memory 110, a memory controller 120, a processor 130, a peripheral interface 140, an input/output unit 150, and a display unit 160. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely an illustration and is not intended to limit the structure of the electronic device 100. For example, the electronic device 100 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The aforementioned memory 110, memory controller 120, processor 130, peripheral interface 140, input/output unit 150, and display unit 160 are electrically connected to each other, directly or indirectly, to implement data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The processor 130 is used to execute the executable modules stored in the memory.
The Memory 110 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 110 is configured to store a program; the processor 130 executes the program after receiving an execution instruction. The method performed by the electronic device 100 defined by the processes disclosed in any embodiment of the present application may be applied to, or implemented by, the processor 130.
The processor 130 may be an integrated circuit chip having signal processing capability. The Processor 130 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The peripheral interface 140 couples various input/output devices to the processor 130 and to the memory 110. In some embodiments, the peripheral interface 140, the processor 130, and the memory controller 120 may be implemented in a single chip. In other examples, they may each be implemented by an individual chip.
The input/output unit 150 is used for a user to provide input data. The input/output unit 150 may be, but is not limited to, a mouse, a keyboard, and the like.
The display unit 160 provides an interactive interface (e.g., a user operation interface) between the electronic device 100 and a user or is used for displaying image data to the user for reference. In this embodiment, the display unit may be a liquid crystal display or a touch display. In the case of a touch display, the display can be a capacitive touch screen or a resistive touch screen, which supports single-point and multi-point touch operations. The support of single-point and multi-point touch operations means that the touch display can sense touch operations simultaneously generated from one or more positions on the touch display, and the sensed touch operations are sent to the processor for calculation and processing.
It can be understood that, when the inference service obtained by the service model generation method provided in the embodiment of the present application is used to provide different services, the electronic device may further be provided with more components according to the different services provided. For example, if the inference service is used to implement face-recognition check-in (face card punching), the electronic device in this embodiment may further include a camera unit for acquiring a face image.
The electronic device 100 in this embodiment may be configured to perform each step in each method provided in this embodiment. The following describes in detail the implementation of the service model generation method and the image processing method by several embodiments.
Example two
First, the process of constructing a deep learning model in a TensorFlow scenario, which is used in the embodiments of the present application, is described, as shown in fig. 2. The deep learning process can be summarized as: data set transformation 210, model training 220, and model inference 230.
The data that a neural network model can recognize is generally data in a specified format. For example, the open-source software library TensorFlow supports the TFRecord data format, while general training data is in a picture format (e.g., jpg, png, etc.). Therefore, the pictures and the annotation information need to be converted into the TFRecord data format supported by TensorFlow. The annotation information may be the coordinate information of the object to be identified contained in the picture.
The model training 220 described above may include the definition of a model and the training of a model.
Wherein the definition of the model comprises: defining an input layer, an output layer, a processing algorithm, and learning parameters. For example, the processing algorithm may define an initial network model based on certain neural network models. For example, the neural network model may be: Convolutional Neural Networks (CNN), Deep Convolutional Neural Networks (DCNN), and the like.
The training of the model can be to input some pictures with labeled information into the initial network model to perform iterative computation so as to adjust parameters in the initial network model until set conditions are reached, thereby obtaining a trained image processing model.
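The iterative adjustment of parameters until a set condition is reached can be illustrated with a minimal sketch (plain NumPy gradient descent on a hypothetical one-parameter model, not the actual TensorFlow training code; the data, learning rate, and stopping condition are all illustrative):

```python
import numpy as np

# Hypothetical labeled training data; the true underlying parameter is 2.0.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x

w = 0.0    # parameter of the "initial network model"
lr = 0.01  # learning rate
for step in range(1000):                     # iterative computation
    grad = np.mean(2 * (w * x - y) * x)      # gradient of the mean squared error
    w -= lr * grad                           # adjust the parameter
    if abs(grad) < 1e-6:                     # "set condition" reached: stop training
        break

print(round(w, 3))  # the trained parameter approaches 2.0
```

The loop mirrors the description above: each pass adjusts the parameters of the initial model using the labeled data, and training stops once the set condition (a near-zero gradient here) is satisfied.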
The model inference 230 described above may include: model verification and inference service publication.
The model verification is to verify the trained image processing model by using test data so as to determine the difference between the inference result obtained by the image processing model and an expected inference result.
Publishing the inference service refers to merging the preprocessing program, the post-processing program, and the image processing model together for release.
Illustratively, the deep learning model constructed in the TensorFlow scenario can be used to realize target detection, image classification, semantic segmentation, and the like.
In the embodiment of the present application, the model inference 230 in the deep learning process can be simplified by defining a pre-processing template and a post-processing template in advance. The service model generation method that achieves this is described below.
Please refer to fig. 3, which is a flowchart illustrating a method for generating a service model according to an embodiment of the present application. The specific flow shown in fig. 3 will be described in detail below.
Step 302, an image processing model is obtained.
For example, the image processing model may be obtained from a path of the image processing model received in the set window.
Illustratively, fig. 4 shows a schematic view of an operation interface for service model generation. The figure includes a plurality of windows for receiving input information. The operation interface comprises a "model path" window for inputting the path of the image processing model. The operation interface may also include a "signature" window for receiving a model signature. The operation interface may further include scene buttons for selecting the corresponding usage scene; only three scene buttons are shown in the figure: "image classification", "object detection", and "semantic segmentation". The operation interface may further include buttons for selecting the image type of the image to be identified; only two type buttons are shown in the figure: "color" and "black and white". The operation interface may also include a "category details" window for loading a label mapping file, a "pre-processing template" window for loading a pre-processing template, a "post-processing template" window for loading a post-processing template, and an "image path" window for inputting an image path.
In the above example, the image processing model is obtained through the path of the image processing model in the "model path" window.
Illustratively, as shown in fig. 4, a query button and an upload button are also provided beside the "model path" window. The query button is used to open a local file directory for the user to select a storage path for the image processing model. The upload button is used for uploading the image processing model file in the storage path of the image processing model to the server. The server is connected with the electronic equipment displaying the operation interface.
Step 304, analyzing the image processing model to obtain model features of the image processing model.
In this embodiment, the input parameter information of the input layer and the output parameter information of the output layer of the image processing model may be obtained by analyzing the image processing model.
Illustratively, the input parameter information of the input data required by the image processing model is obtained by analyzing the input layer of the image processing model. The input parameter information may include, but is not limited to, parameter type, parameter dimension, parameter name, and the like.
Illustratively, the parameter type of the input parameter information may be integer type, floating point type, character string type, or the like.
For example, the input data may be a vector or a matrix, and the parameter dimension of the input parameter information may be a dimension of the vector or the matrix.
Illustratively, the output parameter information of the image processing model output data is obtained by analyzing the output layer of the image processing model. The output parameter information may include, but is not limited to, parameter type, parameter dimension, parameter name, parameter category, and the like.
Illustratively, the parameter type of the output parameter information may be integer type, floating point type, character string type, or the like.
Illustratively, the output data may be a vector or a matrix, and the parameter dimension of the output parameter information may be a dimension of the vector or the matrix.
Illustratively, the parameter categories may be detection classes, detection scores, detection boxes, and the like.
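By way of illustration, the model features obtained by such parsing might be organized as follows; the names, types, and dimensions shown are hypothetical examples in the style of an object-detection model, not values fixed by the method:

```python
# Hypothetical result of parsing an image processing model's input and output layers.
model_features = {
    "inputs": {                       # input layer
        "name": "inputs",
        "dtype": "uint8",             # parameter type
        "shape": [1, -1, -1, 3],      # parameter dimension (-1 marks a variable size)
    },
    "outputs": {                      # output layer, keyed by parameter category
        "detection_boxes":   {"dtype": "float32", "shape": [1, 100, 4]},
        "detection_scores":  {"dtype": "float32", "shape": [1, 100]},
        "detection_classes": {"dtype": "float32", "shape": [1, 100]},
    },
}

# The parameter categories available in the output layer:
print(sorted(model_features["outputs"]))
```

Structuring the parsed features this way makes it straightforward to fill a template: the input entry drives the preprocessing program, and the output entries drive the post-processing program.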
Step 306, configuring a pre-defined preprocessing template according to the model characteristics to obtain a preprocessing program.
In this embodiment, the preprocessing program is used to perform data conversion on the picture to be recognized to obtain input data that can be recognized by the image processing model.
Optionally, the predefined preprocessing template is filled with the input parameter information to obtain the preprocessing program.
Illustratively, the pre-processing template may be a program that reserves input parameter information that needs to be filled in.
Illustratively, as shown in fig. 4, the content of a written or loaded pre-processing template may be received in the "pre-processing template" window. Illustratively, "save" and "withdraw" buttons may also be provided above the "pre-processing template" window. The "save" and "withdraw" buttons are used for the user to perform the corresponding operations after writing the content of the pre-processing template.
In one example, the content of the pre-processing template may be:
from PIL import Image
import numpy as np
def preprocess(picture_path):
    img = Image.open(picture_path)
    img = img.convert("")
    np_img = np.asarray(img).astype()
    input_dict = {"": np_img}
    return input_dict
The above template reserves the information to be filled in: the picture type at "img.convert()", the parameter type of the required conversion at "np_img = np.asarray(img).astype()", and the name of the required model input layer at "input_dict = {}".
In the above example, after the input parameter information is filled into the preprocessing template, the preprocessing program is obtained. The filled picture type is the "RGB" type, the filled parameter type of the required conversion is "np.uint8", the filled extended-dimension processing is "np_img = np.expand_dims(np_img, axis = 0)", and the filled name of the required model input layer is "inputs".
Illustratively, the information filled into the pre-processing template may be input parameter information obtained by parsing the image processing model in step 304.
It is to be appreciated that the pre-processing template and the populated input parameter information in the above example are merely exemplary. For example, the populated input parameter information may differ according to the requirements of the image processing model file. For another example, the contents of the preprocessing template may differ according to different data conversion requirements.
In this embodiment, steps and algorithms required for data conversion may be predefined in the preprocessing template, so that when the input parameter information is filled into the preprocessing template, conversion from data in one format to data in another format can be achieved.
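By way of illustration only, a preprocessing program filled in with the example values above ("RGB", "np.uint8", the expand_dims step, and the layer name "inputs") might take the following form, assuming the Pillow and NumPy libraries:

```python
from PIL import Image
import numpy as np

def preprocess(picture_path):
    img = Image.open(picture_path)
    img = img.convert("RGB")                    # filled picture type
    np_img = np.asarray(img).astype(np.uint8)   # filled parameter type of the conversion
    np_img = np.expand_dims(np_img, axis=0)     # filled extended-dimension processing
    input_dict = {"inputs": np_img}             # filled model input layer name
    return input_dict
```

Given a picture path, this sketch returns a batch of one uint8 image tensor keyed by the input layer name, i.e. data in a form the image processing model can recognize.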
Step 308, configuring a predefined post-processing template according to the model features to obtain a post-processing program.
In one embodiment, if some annotation information needs to be displayed on the recognized image after the image is recognized, the corresponding label information may be configured when the post-processing template is configured.
Illustratively, step 308 may include: obtaining a label mapping file, and filling the post-processing template according to the output parameter information and the label mapping file to obtain the post-processing program.
Optionally, obtaining the label mapping file may be implemented as: obtaining the label mapping file from a set path.
In one example, fig. 4 shows a "category details" window. Optionally, the "category details" window may be used to load the label mapping file. For example, a query button and an upload button are displayed next to the "category details" window. The query button is used to open a local file directory for the user to select the storage path of the label mapping file. The upload button is used to upload the label mapping file in that storage path to the server. The server is connected with the electronic device displaying the operation interface.
Optionally, the "category details" window may also be used to receive an input label mapping file.
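For illustration, a label mapping file loaded through this window might map numeric categories to readable labels, for example in a hypothetical id:name format (the format and entries shown are illustrative, not prescribed by the method):

```text
1:motorcycle
2:car
3:person
```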
Illustratively, as shown in FIG. 4, the contents of a written or loaded post-processing template may be received in the "post-processing template" window. Illustratively, a "save" and "withdraw" button may also be provided above the "post-processing template" window. The "save" and "recall" buttons are used for the user to perform corresponding operations after writing the contents of the post-processing template.
In one example, the content of the post-processing template may include the following reserved items: "label_map = label_map_util.load_labelmap()" in the post-processing template is used to fill in the path of the label mapping file, and the reserved blanks in "vis_util.visualize_boxes_and_labels_on_image_array(image_np, np.squeeze(output_dict[""]), np.squeeze(output_dict[""]).astype(), np.squeeze(output_dict[""]))" are used to fill in the parameter categories.
The partial contents of the post-processing template described above are merely exemplary. Specifically, the content of the post-processing template may also be different depending on the output requirements of the inference service.
In the above example, the post-processing program can be obtained by filling the label mapping file and the output parameter information into the post-processing template. Illustratively, in the resulting post-processing program, the filled path of the label mapping file is "/public/home/sugon/SothisAI/tensorflow/tf_inference/independent_inference/label.txt", and the filled parameter categories include "detection_boxes", "detection_classes", and "detection_scores".
It will be appreciated that the post-processing template and the populated output parameter information and label mapping file in the above example are merely exemplary. For example, the populated output parameter information may differ according to different model requirements. For another example, the populated label mapping file may also differ according to the presentation mode of the output result. For another example, the content of the post-processing template may also differ according to different data conversion requirements.
In this embodiment, the steps and algorithms required for data conversion may be predefined in the post-processing template, so that when the output parameter information and the label mapping file are filled into the post-processing template, the output data of the image processing model can be converted into recognition result data recognizable by a user.
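The post-processing flow described above can be sketched in simplified form. The sketch below replaces the TensorFlow object-detection utilities (label_map_util, vis_util) with plain Python and NumPy; every name in it is illustrative rather than the actual template code:

```python
import numpy as np

def load_label_map(path):
    """Parse lines of a hypothetical 'id:name' label mapping file into a dict."""
    label_map = {}
    with open(path) as f:
        for line in f:
            idx, name = line.strip().split(":")
            label_map[int(idx)] = name
    return label_map

def postprocess(output_dict, label_map):
    # Filled parameter categories: detection_boxes, detection_classes, detection_scores.
    boxes = np.squeeze(output_dict["detection_boxes"])
    classes = np.squeeze(output_dict["detection_classes"]).astype(np.int32)
    scores = np.squeeze(output_dict["detection_scores"])
    # Convert the model's numeric output into user-recognizable result data.
    return [
        {"box": box.tolist(), "label": label_map.get(int(c), "unknown"), "score": float(s)}
        for box, c, s in zip(boxes, classes, scores)
    ]
```

The label mapping file turns numeric class indices into readable labels, which is the essence of what the filled post-processing template does before drawing the results onto the image.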
Step 310, packaging the pre-processing program, the image processing model and the post-processing program to generate reasoning service.
In the example shown in fig. 4, a "run" button may also be included in the operation interface shown in fig. 4. After filling the various windows with the required information, when the "run" button is clicked, steps 302-310 in this embodiment can be performed to generate the inference service.
In this example, when a different scene is selected in the operation interface shown in fig. 4, the generated inference service can be used to implement a different recognition service. For example, when the storage path of a model file for image classification is loaded in the "model path" window and the "image classification" button is selected among the scene buttons shown in the figure, the generated inference service can be used to provide an image classification service. For another example, when the storage path of a model file for semantic segmentation is loaded in the "model path" window and the "semantic segmentation" button is selected among the scene buttons, the generated inference service can be used to provide a semantic segmentation service.
Illustratively, the operation interface shown in fig. 4 may further include a "reset" button. When the "reset" button is clicked, the generation of the inference service may be stopped, so that each window in the operation interface returns to an editable state and the user can reset the information required for generating the inference service.
In this embodiment, before step 302, as shown in fig. 5, the service model generation method may further include: step 301, after the training of the image processing model is completed, storing the image processing model by using a specified storage mode.
When the image processing model is stored by using the specified storage mode, an input layer and an output layer of a specified signature in the image processing model are defined.
Alternatively, the SavedModelBuilder model storage mode recommended by TF-Serving can be used for storage.
In this embodiment, the image processing model may include a plurality of signatures, each signature may be used to implement a function. Illustratively, the function may be: image classification, target detection, semantic segmentation and the like.
In this embodiment, when the image processing model is stored, the input and output layers of the designated signature may be defined.
Illustratively, the specified signature may be a sub-model in the image processing model that is used to implement the desired functionality.
Illustratively, a model file in which the image processing model is stored by the SavedModelBuilder described above is provided with a specific API (Application Programming Interface). The API can acquire the input parameter information of the input layer and the output parameter information of the output layer under the specified signature of the image processing model. Illustratively, the API may be the saved_model_cli interface.
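For instance, assuming TensorFlow is installed, the input and output parameter information under a signature can typically be inspected from the command line with the saved_model_cli tool; the model directory and signature name below are placeholders:

```shell
# Show the inputs and outputs of the signature "serving_default"
# in a SavedModel stored under ./saved_model (placeholder path).
saved_model_cli show --dir ./saved_model \
    --tag_set serve --signature_def serving_default
```

The command prints each input and output tensor's name, dtype, and shape, which is exactly the model feature information described in step 304.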
Illustratively, step 304 may include: analyzing the designated signature of the image processing model to obtain model features of the image processing model.
Optionally, step 304 may include: analyzing the image processing model through an interface provided by TF-Serving to obtain the model features of the image processing model.
Illustratively, the interface provided by TF-Serving may be the saved_model_cli interface.
In this embodiment, when parsing is performed through the saved_model_cli interface, the signature in the image processing model is specified, so that the parsing is limited to the specified signature, and the input parameter information of the input layer and the output parameter information of the output layer under that signature can be obtained.
For example, when the input parameter information needs to be obtained, the signature of the input layer in the image processing model is specified when parsing through the saved_model_cli interface, so that the input layer can be parsed to obtain the input parameter information.
Likewise, when the output parameter information needs to be obtained, the signature of the output layer in the image processing model is specified when parsing through the saved_model_cli interface, so that the output layer can be parsed to obtain the output parameter information.
For example, in the model feature analysis of the image processing model, the signature may be specified by adding the name of the signature and the storage path of the image processing model to the logic code.
For example, the input layer of the image processing model may be the input layer under a signature of the image processing model, and the output layer of the image processing model may be the output layer under a signature of the image processing model. The signature corresponding to the input layer and the signature corresponding to the output layer are the same signature.
In the service model generation method of the embodiment of the present application, the pre-processing template and the post-processing template are configured directly according to the acquired model features, so the time spent writing the pre-processing program and the post-processing program can be reduced, the time taken to generate the whole inference service is shortened, and the efficiency of publishing the inference service can be further improved.
Furthermore, the preprocessing program is automatically generated based on the preprocessing template, and the post-processing program is automatically generated based on the post-processing template, so that publishing and sharing the inference service are more convenient.
Example three
The embodiment of the present application provides a method for generating an image classification service, which is similar to the method in the second embodiment, and the difference is that the method in the present embodiment is specifically used for constructing the image classification service.
As shown in fig. 6, the image classification service generation method in the present embodiment may include the following steps.
Step 401, an image classification model is obtained.
Alternatively, the image classification model may be a model obtained by training a classification model using a labeled training data set. Illustratively, the label may be a category corresponding to each item of training data. The class corresponding to the label of each item of training data is one of multiple classes which can be output by the image classification model.
Step 402, analyzing the image classification model to obtain model characteristics of the image classification model.
Illustratively, the model features may include: input parameter information of an input layer of the image classification model and output parameter information of an output layer of the image classification model.
Step 403, configuring a pre-defined pre-processing template according to the model features to obtain a pre-processing program.
Step 404, configuring a predefined post-processing template according to the model features to obtain a post-processing program.
In this embodiment, the output information of the image classification service includes the classification category corresponding to the identified image. Therefore, when the post-processing template is configured, the configured label mapping file may include the mapping for converting the classification result into the category displayed with the identified image.
In one example, the classification categories of the image classification model may include three categories: motorcycle, car, and person. The output of the image classification model may be the probabilities of the respective categories. For example, the output of the image classification model may be (a, b, c), where a represents the probability that the object in the image to be recognized is a motorcycle, b represents the probability that it is a car, and c represents the probability that it is a person. In this example, when the classification result is a person, the output of the image classification model may be the probabilities (0.3, 0.2, 0.5), where the probability value of classification category 3 is highest; when the classification result is a car, the output may be the probabilities (0.3, 0.6, 0.1), where the probability value of classification category 2 is highest; when the classification result is a motorcycle, the output may be the probabilities (0.5, 0.2, 0.3), where the probability value of classification category 1 is highest. In this example, the content of the label mapping file may include the content for converting the numeric category corresponding to the classification result output by the output layer into the corresponding character category. For example, the output probabilities (0.3, 0.2, 0.5) are converted into the word "person", and the classification result is displayed as "person". For example, the text "classification result: person" may be displayed next to the corresponding image to be recognized.
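The conversion from the probability vector to the displayed character category can be sketched as follows (a minimal illustration of the label-mapping step; the category names follow the example above and the function name is hypothetical):

```python
# Hypothetical label mapping for the three-category example.
label_map = {1: "motorcycle", 2: "car", 3: "person"}

def map_classification(probs):
    """Convert the model's output probabilities to the displayed character category."""
    top = max(range(len(probs)), key=lambda i: probs[i])  # index of the highest probability
    return "classification result: " + label_map[top + 1]  # categories are 1-based

print(map_classification((0.3, 0.2, 0.5)))  # → classification result: person
```

This is exactly the role of the label mapping file in the post-processing program: the numeric category with the highest probability is replaced by its readable name before being shown next to the image.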
Step 405, packaging the pre-processing program, the image classification model and the post-processing program to generate an image classification service.
Optionally, the image classification service may be released to a software platform for downloading by a user, so as to provide the user with the image classification service.
The image classification service generation method in this embodiment is similar to the service model generation method provided in the second embodiment, so the implementation of the method in this embodiment can be referred to the description in the embodiment of the service model generation method provided in the second embodiment, and repeated descriptions are omitted.
By using the method in this embodiment, the pre-processing template and the post-processing template make it more convenient and faster to obtain the pre-processing program and the post-processing program. Furthermore, the image classification service can be generated more quickly, and the publishing efficiency of the image classification service can also be improved.
Example four
Based on the same application concept, a service model generation device corresponding to the service model generation method is also provided in the embodiments of the present application, and since the principle of solving the problem of the device in the embodiments of the present application is similar to that in the embodiments of the service model generation method, the implementation of the device in the embodiments of the present application may refer to the description in the embodiments of the method, and repeated details are omitted.
Please refer to fig. 7, which is a functional module diagram of a service model generation apparatus according to an embodiment of the present application. Each module in the service model generation apparatus in this embodiment is configured to execute each step in the service model generation method embodiment provided in the second embodiment. The service model generation apparatus includes: an acquisition module 501, an analysis module 502, a first configuration module 503, a second configuration module 504, and a generation module 505, wherein:
an obtaining module 501, configured to obtain an image processing model;
an analysis module 502, configured to analyze the image processing model to obtain a model feature of the image processing model;
a first configuration module 503, configured to configure a predefined preprocessing template according to the model features to obtain a preprocessing program;
a second configuration module 504, configured to configure a predefined post-processing template according to the model features to obtain a post-processing program;
a generating module 505, configured to package the pre-processing program, the image processing model, and the post-processing program to generate an inference service.
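The cooperation of the modules above can be illustrated with a minimal packaging sketch (plain Python; the three stages are hypothetical stand-ins for the actual pre-processing program, image processing model, and post-processing program):

```python
def package_inference_service(preprocess, model, postprocess):
    """Package the three stages into a single inference service callable."""
    def inference_service(raw_input):
        # pre-processing -> image processing model -> post-processing
        return postprocess(model(preprocess(raw_input)))
    return inference_service

# Usage with hypothetical stand-in stages:
service = package_inference_service(
    preprocess=lambda path: {"inputs": path.upper()},  # stand-in data conversion
    model=lambda d: d["inputs"] + "!",                 # stand-in model
    postprocess=lambda out: {"result": out},           # stand-in result formatting
)
print(service("cat.png"))  # → {'result': 'CAT.PNG!'}
```

The generation module's packaging step amounts to this composition: once the configured pre-processing program, the model, and the configured post-processing program are chained, the whole pipeline can be published as one inference service.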
In a possible implementation, the model features include input parameter information, and the first configuration module 503 is configured to:
and filling the predefined preprocessing template with the input parameter information to obtain a preprocessing program.
In a possible implementation, the model features include output parameter information, and the second configuration module 504 is configured to:
acquiring a label mapping file;
and filling the post-processing template according to the output parameter information and the label mapping file to obtain a post-processing program.
In a possible implementation manner, the service model generating apparatus in this embodiment further includes:
a storage module, configured to store the image processing model in a specified storage format after training of the image processing model is completed;
when the image processing model is stored in the specified storage format, an input layer and an output layer of a specified signature in the image processing model are defined.
The analysis module 502 is configured to analyze the specified signature of the image processing model to obtain the model features of the image processing model.
In a possible implementation, the analysis module 502 is configured to:
analyze the image processing model through an interface provided by TF-Serving to obtain the model features of the image processing model.
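TF-Serving exposes model metadata (including the signatures of a SavedModel) over its REST interface at /v1/models/&lt;name&gt;/metadata. A sketch of pulling the input and output parameter info out of such a metadata response follows; the exact nesting of the response dict is an assumption modeled on that endpoint, and the "serving_default" key is TensorFlow's conventional default signature name.

```python
def parse_model_features(metadata: dict,
                         signature_key: str = "serving_default") -> dict:
    """Extract input/output parameter info from model metadata.

    The dict shape approximates what TF-Serving's
    GET /v1/models/<name>/metadata endpoint returns; treat the
    exact nesting as an assumption, not a guaranteed schema.
    """
    sig = metadata["metadata"]["signature_def"]["signature_def"][signature_key]
    return {"inputs": sig["inputs"], "outputs": sig["outputs"]}
```

Because the signature names its input and output tensors along with their dtypes and shapes, this one lookup yields everything the two configuration modules need.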
Example five
An embodiment of the present application provides an image processing method. The image processing method in this embodiment includes: inputting an image to be processed into the inference service, and performing image processing on the image to be processed to obtain a processing result.
For example, the inference service used in this embodiment may be the inference service generated by the service model generation method provided in the second embodiment, and for the obtaining manner of the inference service, reference may be made to the description in the second embodiment, which is not described herein again.
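When the inference service is backed by TF-Serving, submitting an image typically means posting JSON to the REST predict endpoint (POST /v1/models/&lt;name&gt;:predict). The sketch below builds such a request body; whether the model accepts a base64-encoded image under the "b64" key depends on how its serving signature was exported, so treat that convention as an assumption.

```python
import base64
import json


def build_predict_request(image_bytes: bytes) -> str:
    """Build a JSON body for a TF-Serving REST predict call.

    Uses the common convention of sending binary image data
    base64-encoded under a "b64" key inside "instances".
    """
    payload = {
        "instances": [
            {"b64": base64.b64encode(image_bytes).decode("ascii")}
        ]
    }
    return json.dumps(payload)
```

The returned string is what an HTTP client would send as the request body; the service's response then carries the processing result.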
In one example, the image processing method in the present embodiment may be used to classify images. For example, images that contain a person are distinguished from images that do not. In this example, the processing result may indicate whether the image to be processed contains a portrait.
In one example, the image processing method in the present embodiment may be used to implement attendance checking. In this example, the processing result may indicate whether the attendance check-in succeeded. For example, the processing result may further indicate whether the image to be processed contains a live person, and whether that person belongs to the designated user group.
In one example, the image processing method in the present embodiment may be used to implement semantic segmentation. In this example, the processing result may be object information included in the image to be processed. In one example, the image to be processed includes a face image area, a vehicle image area, and a building image area. In this example, the object information may include face position information, face size information, vehicle position information, vehicle size information, building position information, building size information, and the like.
Example six
Based on the same application concept, an image processing apparatus corresponding to the image processing method is further provided in the embodiments of the present application, and since the principle of the apparatus in the embodiments of the present application for solving the problem is similar to that in the embodiments of the image processing method, the implementation of the apparatus in the embodiments of the present application may refer to the description in the embodiments of the image processing method, and repeated details are omitted.
An embodiment of the present application provides an image processing apparatus. Each module in the image processing apparatus in this embodiment is configured to execute a corresponding step in the image processing method provided in the fourth embodiment. The image processing apparatus includes: a processing module, configured to input an image to be processed into the inference service and perform image processing on the image to be processed to obtain a processing result.
For example, the inference service used in this embodiment may be the inference service generated by the service model generation method provided in the second embodiment, and for the obtaining manner of the inference service, reference may be made to the description in the second embodiment, and details are not described here.
Furthermore, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the service model generation method or the image processing method described in the above method embodiment.
The computer program product of the service model generation method and the image processing method provided in the embodiments of the present application includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute steps of the service model generation method or the image processing method described in the above method embodiments, which may be specifically referred to in the above method embodiments and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another like element in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto; any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes or substitutions shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.