CN114120090A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium Download PDF

Info

Publication number
CN114120090A
CN114120090A (application CN202111417707.9A)
Authority
CN
China
Prior art keywords
image
target
target class
processed
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111417707.9A
Other languages
Chinese (zh)
Inventor
丁拥科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongan Online P&C Insurance Co., Ltd.
Original Assignee
Zhongan Online P&C Insurance Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongan Online P&C Insurance Co., Ltd.
Priority to CN202111417707.9A
Publication of CN114120090A
Legal status: Pending

Classifications

    • G06F 18/2431 Pattern recognition; classification techniques relating to the number of classes; multiple classes
    • G06N 3/045 Neural networks; architecture; combinations of networks
    • G06N 3/08 Neural networks; learning methods
    • G06T 7/11 Image analysis; segmentation; region-based segmentation
    • G06T 7/194 Image analysis; segmentation involving foreground-background segmentation
    • G06T 7/62 Image analysis; analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/90 Image analysis; determination of colour characteristics
    • G06T 2207/10024 Image acquisition modality; color image
    • G06T 2207/20081 Special algorithmic details; training; learning
    • G06T 2207/20084 Special algorithmic details; artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image processing method, apparatus, device and storage medium. When an image to be processed contains a target object, the image to be processed is first input into a pre-trained image segmentation model for semantic segmentation to obtain a pixel region map of the target object, and the pixel region map is then input into a pre-trained color evaluation model to determine the color of the target object. In this technical scheme, semantic segmentation is first performed on the target object in the image to be processed, removing most of the interference information in the image, and color evaluation is performed afterwards, which improves the accuracy of the target object's color evaluation and, in turn, the accuracy of subsequent identity authentication.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a storage medium.
Background
With the rapid development of image processing technology, computers are used to process, analyze and understand images, so that target objects in an image can be identified or image anti-counterfeiting can be performed. Color evaluation of the target object is an important component of image processing and assists in target object identity authentication and image anti-counterfeiting.
Existing image recognition is mainly based on deep learning. Specifically, an image classification model is used to extract and evaluate features of the target object in the image, such as its type and color, in order to authenticate the target object's identity and provide a precondition for subsequent practical applications. For example, in a pet insurance claim settlement scenario, accurately authenticating the identity of an individual pet can effectively speed up pet insurance claim settlement, reduce the risk of automatic claim settlement, and save labor costs.
However, in practical applications the image may contain interference besides the target object, for example the background of the image or clothing worn by the target object, all of which can interfere with recognizing the target object's type and color. This may cause identity authentication of the target object to fail and affect subsequent applications that rely on that identity.
Disclosure of Invention
The application provides an image processing method, an image processing device, image processing equipment and a storage medium, which are used for solving the problem of inaccurate object color identification in the prior art.
In a first aspect, the present application provides an image processing method, including:
acquiring an image to be processed;
when the image to be processed contains a target object, inputting the image to be processed into a pre-trained image segmentation model for semantic segmentation to obtain a pixel region map of the target object;
and inputting the pixel region map into a pre-trained color evaluation model to determine the color of the target object.
In one possible design of the first aspect, the method further includes:
obtaining a first set of image samples, the first set of image samples comprising: a plurality of positive sample images, wherein each positive sample image comprises at least one target class object marked with a detection frame;
processing the plurality of positive sample images to obtain a semantic segmentation sample set of the at least one class of target class object, wherein the semantic segmentation sample set comprises a plurality of target class images, each target class image comprising a pixel region and a background region of at least one class of target class object;
and training a preset semantic segmentation network by using the semantic segmentation sample set to obtain the image segmentation model, wherein the image segmentation model is used for segmenting each target class image, removing the background region in each target class image, and outputting a pixel region map containing only the pixel region of the target class object.
Optionally, the processing the multiple positive sample images to obtain a semantic segmentation sample set of the at least one class of target object includes:
cropping each positive sample image according to the labeled detection frame in that image, retaining the region inside the labeled detection frame, to obtain a target class image corresponding to each positive sample image;
and performing pixel labeling on the target class objects in each target class image, and determining the pixel region and the background region of at least one class of target class object in each target class image.
Optionally, the method further includes:
performing semantic segmentation on the semantic segmentation sample set of the at least one class of target object to obtain at least one pixel region map of the at least one class of target object;
acquiring the color type of the target class object in each pixel region map;
and training a preset neural network according to the color type of the target class object in the at least one pixel region map and a preset set of multiple color types to obtain the color evaluation model.
In another possible design of the first aspect, after the acquiring the image to be processed, the method further includes:
inputting the image to be processed into a pre-trained object detection model, and determining whether the image to be processed contains a target class object;
when it is determined that the image to be processed contains at least two target class objects, determining the area of each target class object's detection frame;
taking the target class object with the largest detection frame area among the at least two target class objects as the target object;
and issuing an image checking prompt when the image to be processed does not contain a target class object.
Optionally, before the acquiring the image to be processed, the method further includes:
obtaining a second image sample set, the second image sample set comprising: a plurality of positive sample images and at least one negative sample image, wherein each positive sample image contains at least one target class object labeled with a detection frame, and each negative sample image does not contain the target class object;
and performing model training on a preset target detection network by using the second image sample set, and obtaining the object detection model when the error of an object detection frame output by the target detection network is smaller than a preset error threshold value.
In a second aspect, the present application provides an image processing apparatus comprising:
the acquisition module is used for acquiring an image to be processed;
a processing module to:
when the image to be processed contains a target object, inputting the image to be processed into a pre-trained image segmentation model for semantic segmentation to obtain a pixel region map of the target object;
and inputting the pixel region map into a pre-trained color evaluation model to determine the color of the target object.
In one possible design of the second aspect, the obtaining module is further configured to obtain a first set of image samples, where the first set of image samples includes: a plurality of positive sample images, wherein each positive sample image comprises at least one target class object marked with a detection frame;
a processing module further configured to:
processing the plurality of positive sample images to obtain a semantic segmentation sample set of the at least one class of target class object, wherein the semantic segmentation sample set comprises a plurality of target class images, each target class image comprising a pixel region and a background region of at least one class of target class object;
and training a preset semantic segmentation network by using the semantic segmentation sample set to obtain the image segmentation model, wherein the image segmentation model is used for segmenting each target class image, removing the background region in each target class image, and outputting a pixel region map containing only the pixel region of the target class object.
Optionally, the processing module is configured to process the multiple positive sample images to obtain a semantic segmentation sample set of the at least one class of target objects, and specifically includes:
the processing module is specifically configured to:
cropping each positive sample image according to the labeled detection frame in that image, retaining the region inside the labeled detection frame, to obtain a target class image corresponding to each positive sample image;
and performing pixel labeling on the target class objects in each target class image, and determining the pixel region and the background region of at least one class of target class object in each target class image.
Optionally, the processing module is further configured to perform semantic segmentation on the semantic segmentation sample set of the at least one class of target object to obtain at least one pixel region map of the at least one class of target object;
the obtaining module is further configured to obtain the color type of the target class object in each pixel region map;
the processing module is further configured to train a preset neural network according to the color type of the target class object in the at least one pixel region map and a preset set of multiple color types, so as to obtain the color evaluation model.
In another possible design of the second aspect, the processing module is further configured to:
inputting the image to be processed into a pre-trained object detection model, and determining whether the image to be processed contains a target class object;
when it is determined that the image to be processed contains at least two target class objects, determining the area of each target class object's detection frame;
taking the target class object with the largest detection frame area among the at least two target class objects as the target object;
and issuing an image checking prompt when the image to be processed does not contain a target class object.
Optionally, the obtaining module is further configured to obtain a second image sample set, where the second image sample set includes: a plurality of positive sample images and at least one negative sample image, wherein each positive sample image contains at least one target class object labeled with a detection frame, and each negative sample image does not contain the target class object;
the processing module is further configured to perform model training on a preset target detection network by using the second image sample set, and obtain the object detection model when an error of an object detection frame output by the target detection network is smaller than a preset error threshold.
In a third aspect, the present application provides an electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method according to the first aspect and each possible design of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon computer-executable instructions for implementing a method as described in the first aspect and each possible design of the first aspect when executed by a processor.
In a fifth aspect, the present application provides a computer program product, comprising: a computer program which, when executed by a processor, implements the method described in the first aspect and each possible design of the first aspect.
According to the image processing method, apparatus, device and storage medium provided by the application, when the image to be processed contains the target object, the image to be processed is first input into a pre-trained image segmentation model for semantic segmentation to obtain a pixel region map of the target object, and the pixel region map is then input into a pre-trained color evaluation model to determine the color of the target object. In this technical scheme, semantic segmentation is first performed on the target object in the image to be processed to obtain a pixel region map with the background removed, and color evaluation is then performed, which improves the color evaluation accuracy for the target object and, in turn, the accuracy of subsequent identity authentication.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic view of an application scenario of an image processing method provided in an embodiment of the present application;
FIG. 2 is a flowchart illustrating a first embodiment of an image processing method provided in the present application;
fig. 3 is a schematic flowchart of a second embodiment of an image processing method provided in the present application;
fig. 4 is a schematic flowchart of a third embodiment of an image processing method provided in the present application;
FIG. 5 is a schematic structural diagram of a first embodiment of an image processing apparatus provided in the present application;
fig. 6 is a schematic structural diagram of an embodiment of an electronic device provided in the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Artificial intelligence (AI) is a comprehensive technology in computer science that, by studying the design principles and implementation methods of various intelligent machines, gives machines the functions of perception, reasoning and decision making. Artificial intelligence is a comprehensive discipline covering a wide range of fields, for example natural language processing and machine learning/deep learning; as the technology develops, it will be applied in ever more fields and deliver ever greater value.
Optionally, image processing is one of the important applications in the field of artificial intelligence. With the excellent performance of deep learning methods in natural image classification, trained classification models are used to extract and evaluate features such as the type and color of targets in images, and are increasingly applied to automatic target authentication.
For example, the image processing can be applied to a plurality of scenes, such as the recognition of people and animals in the image, the recognition of certificates or invoices in the image, and the like. The following scenario is explained with the identification of a pet in an image as an example.
At present, with the improvement of living standards, people pay more attention to spiritual companionship, and pets have increasingly become people's emotional support; accordingly, pet accident insurance and pet medical insurance have become a demand. To meet the growing demand for pet accident and medical insurance, many insurance companies offer pet insurance, but how to settle pet claims, and how to authenticate that the pet in a claim is the pet that was insured, are important problems.
In the prior art, deep learning methods perform excellently in natural image classification, and more and more companies have begun to use classification models to extract and evaluate features of the pet in an image, such as its type and color, thereby realizing automatic authentication of the pet's identity. This speeds up pet insurance claim settlement, enables automatic claim settlement, and saves labor costs.
However, color evaluation of pets is often disturbed by factors such as the background and pet clothing in the image. On the one hand, when the pet appears against a complex background, it is difficult to accurately extract the pet's color features, which affects the pet identity authentication result; on the other hand, if the pet in the uploaded image wears clothing, the clothing color interferes with the color evaluation, and the claim settlement result may be wrong.
In view of the above technical problems, an embodiment of the present application provides an image processing method belonging to the fields of computer vision and artificial intelligence. The specific scheme is as follows: for an acquired image to be processed, when the image contains a target object, the image is first input into a pre-trained image segmentation model for semantic segmentation to obtain a pixel region map of the target object, and the pixel region map is then input into a pre-trained color evaluation model to determine the color of the target object. Optionally, the target object may include, but is not limited to, a pet, an invoice, a certificate, etc., which can be determined according to actual needs. This technical scheme is in effect a method for evaluating the color of a target object in an image; in the process of identifying a pet's identity, it can automatically evaluate the overall color of the insured pet, thereby authenticating the pet's identity and effectively improving the reliability of automatic pet insurance claim settlement.
Exemplarily, fig. 1 is a schematic view of an application scenario of an image processing method provided in an embodiment of the present application. As shown in fig. 1, the application scenario may include: an electronic device 11, and at least one terminal device used by at least one user.
Alternatively, fig. 1 shows two users (user 121 and user 131) and the two terminal devices they use (terminal device 122 and terminal device 132, respectively). The embodiment of the invention does not limit the number of users and terminal devices, which can be determined according to the actual scenario and is not described herein again.
Among them, the electronic device 11 is a device having an image processing function, capable of processing the acquired image to be processed. Optionally, in the embodiment of the present application, the electronic device 11 is loaded with a pre-trained image segmentation model and color evaluation model, so that it can perform semantic segmentation and color evaluation on the acquired image to be processed. It is understood that the electronic device 11 may also have other functions, for example an object detection function, i.e., the electronic device can detect whether the image to be processed contains the target object, which is not described herein again.
In the embodiment of the application, the terminal devices are provided with human-computer interaction interfaces. For example, in the scenario shown in fig. 1, the user 121 may perform information interaction with the electronic device 11 through the human-computer interaction interface of the terminal device 122, for example, to transmit the image to be processed to the electronic device 11. Similarly, the user 131 may also perform information interaction with the electronic device 11 through the human-computer interaction interface of the terminal device 132.
Optionally, referring to fig. 1, the application scenario schematic diagram may further include: a terminal device 142 connected to the electronic device, and a customer service person 141 using the terminal device 142.
Optionally, the user 121 may also send the image to be processed through the terminal device 122 to the terminal device 142 used by the customer service person 141; when the customer service person 141 determines that the image to be processed contains the target object, the image is transmitted to the electronic device 11 for processing.
It is to be understood that the scene diagram shown in fig. 1 is only an exemplary illustration. In practical application, the scene schematic diagram may further include other devices, for example, a storage device, and the like, which may be specifically adjusted according to actual requirements, and the embodiment of the present application is not limited thereto.
In the scene diagram shown in fig. 1, the embodiment of the present application does not limit the concrete form of each device; for example, the electronic device 11 may be a terminal device having image processing capability, or a server having image processing capability, and the terminal device 122, terminal device 132 or terminal device 142 may be a mobile phone, a tablet computer, a PC, etc., which are not illustrated here one by one.
The following describes the technical solution of the present invention in detail by using a specific embodiment with reference to the scene diagram shown in fig. 1. It should be noted that the following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Exemplarily, fig. 2 is a schematic flowchart of a first embodiment of an image processing method provided in the present application. The method is explained by taking the electronic device in the scene diagram shown in fig. 1 as an execution subject. As shown in fig. 2, the image processing method may include the steps of:
s201, acquiring an image to be processed.
As an example, a user may send a captured image to be processed to an electronic device that performs image processing through a terminal device, and accordingly, the electronic device may obtain the image to be processed from the terminal device of the user so as to process the image.
As another example, the user may further send the acquired to-be-processed image to a terminal device used by the customer service staff through the terminal device, and accordingly, the customer service staff determines that the to-be-processed image is correct and then transmits the to-be-processed image to the electronic device, so that the electronic device processes the to-be-processed image.
As still another example, the electronic device may also periodically read the image to be processed from a storage device that stores the image to be processed.
It is understood that the embodiment of the present application does not limit the specific way in which the electronic device obtains the image to be processed, and may be determined according to actual requirements, which is not described herein again.
Optionally, in an embodiment of the present application, the image to be processed may be an original image input by a user, or may be a detection frame image obtained by cutting the image to be processed through object detection. The embodiment of the application does not limit the type of the image to be processed.
S202, when the image to be processed contains the target object, the image to be processed is input into a pre-trained image segmentation model for semantic segmentation, and a pixel area image of the target object is obtained.
In the embodiment of the present application, the purpose of image processing is to verify the identity of the target object in the image to be processed. Therefore, when the image to be processed contains the target object, a pre-trained image segmentation model loaded on the electronic device is used: when the electronic device acquires the image to be processed, it can perform semantic analysis on the image by means of the image segmentation model to determine the pixel region of the target object, then crop the image to be processed based on that pixel region, removing most of the background, to obtain a pixel region map of the target object.
It can be understood that the pixel region map of the target object can be interpreted as the image obtained by matting the target object out of the image to be processed and filling the background with a specific color; optionally, the background fill color may be black, white or another color, which is not limited in this embodiment.
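For illustration, a minimal sketch of this matting step, assuming the segmentation model yields a binary mask of the target object (the function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def extract_pixel_region(image: np.ndarray, mask: np.ndarray,
                         fill_color=(0, 0, 0)) -> np.ndarray:
    """Keep only the target object's pixels; paint the background a fixed color
    (black here; the embodiment allows white or other colors as well)."""
    keep = mask.astype(bool)                  # H x W, True on the target object
    out = np.full_like(image, fill_color)     # background filled with fill_color
    out[keep] = image[keep]
    return out
```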
Optionally, the target object usually has a specific color, so in order to accurately determine the identity of the target object, its color needs to be determined. In practical applications, however, the target object usually appears against a complex background, so it may be difficult to extract the target object's color features from the image to be processed, which affects the target object's identity authentication. In addition, interference from factors such as the background in the uploaded image may lead to wrong recognition results, and therefore semantic segmentation needs to be performed on the image to be processed.
It can be understood that the image segmentation model is obtained by training a lightweight semantic segmentation network (e.g., EfficientNetb3-PAN) with a pre-processed data set, and for the training process, reference may be made to the description in the following embodiments, which are not described herein again.
Optionally, in an embodiment of the present application, before the image to be processed is input to the image segmentation model, the image to be processed may be processed first, for example, operations such as object detection, object positioning, noise reduction, cropping, and the like, which are not limited in this application. For example, the image size input to the image segmentation model is made to meet the set requirements by cropping the image to be processed.
And S203, inputting the pixel region map into a pre-trained color evaluation model, and determining the color of the target object.
Optionally, when the electronic device obtains the pixel region map of the target object by performing semantic segmentation on the image to be processed, in order to accurately determine the color of the target object, the pixel region map may be input into a color evaluation model trained in advance. The color evaluation model can be obtained by training a lightweight neural network (such as MobileNetv 2) based on a preprocessed training sample, and is preloaded in the electronic device.
Illustratively, the color evaluation model may be a predefined K-class color classification model, with classes such as black, white, gray, brown, black-and-white stripes, etc. It is understood that the embodiments of the present application do not limit the color types.
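As an illustration of such a K-class prediction, a sketch assuming a PyTorch classifier and an example color list (both are assumptions, not fixed by the patent):

```python
import torch

# Example K=5 color set; the actual classes are defined by the embodiment.
COLOR_CLASSES = ["black", "white", "gray", "brown", "black-and-white stripes"]

def predict_color(model: torch.nn.Module, pixel_region: torch.Tensor) -> str:
    """pixel_region: a 1 x 3 x H x W tensor built from the segmented image."""
    model.eval()
    with torch.no_grad():
        logits = model(pixel_region)      # shape 1 x K
    return COLOR_CLASSES[int(logits.argmax(dim=1))]
```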
Further, in the embodiment of the present application, the electronic device may directly receive the image to be processed sent by the user through a terminal device, may receive the image transmitted by a customer service person through a terminal device, or may periodically obtain images from a storage device. Therefore, before performing image processing, the electronic device first needs to verify the image and determine whether the image to be processed contains the target object.
Optionally, after the step S201, the image processing method may further include the steps of:
inputting the image to be processed into a pre-trained object detection model, and determining whether the image to be processed contains a target class object;
when it is determined that the image to be processed contains at least two target class objects, determining the area of each target class object's detection frame;
taking the target class object with the largest detection frame area among the at least two target class objects as the target object;
and issuing an image checking prompt when the image to be processed does not contain a target class object.
In this step, the target class object may be a predefined class of objects, for example, in a pet claims scenario, the target class object may be a cat, a dog, etc. That is, whether or not a pet such as a cat or a dog is included in the image to be processed can be identified by the pre-trained object detection model.
In this embodiment, the object detection model trained in advance is utilized, so that the purpose of automatically detecting whether the image to be processed includes the target object can be achieved, further, the detection frame of the target object in the image to be processed can be determined, the step of manual verification can be omitted, and the image processing efficiency is improved.
For example, in a pet claim settlement scenario, a pre-trained object detection model is loaded on the electronic device, and the image to be processed uploaded at application or claim time is input into the object detection model to obtain a detection result. Optionally, if the detection result shows that the image contains more than one pet, the area of each pet's detection frame may be calculated, and the target with the largest area is taken as the insured pet; if no pet is detected, an image checking prompt is issued, after which the case enters manual review or the user is asked to provide a picture again.
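A sketch of this selection logic, under the assumption that the detection model returns a list of class/box records (the record layout is hypothetical):

```python
def select_target(detections):
    """detections: list of records like {"cls": "dog", "box": (x1, y1, x2, y2)}.
    Returns the detection with the largest frame area, or None so the caller
    can issue the image checking prompt."""
    if not detections:
        return None                               # no pet detected
    def area(d):
        x1, y1, x2, y2 = d["box"]
        return max(0, x2 - x1) * max(0, y2 - y1)
    return max(detections, key=area)              # insured pet = largest frame
```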
Alternatively, the following briefly explains the training process of the object detection model.
For example, in an embodiment of the present application, before acquiring the image to be processed, the image processing method may further include:
obtaining a second image sample set, the second image sample set comprising: a plurality of positive sample images and at least one negative sample image, wherein each positive sample image contains at least one target class object labeled with a detection frame, and each negative sample image does not contain the target class object.
And performing model training on a preset target detection network by using a second image sample set, and obtaining the object detection model when the error of an object detection frame output by the target detection network is smaller than a preset error threshold value.
For example, in the pet claims scenario, the second image sample set is a pet detection training set, where the positive sample images are sample images that contain the target class object and are labeled with target object detection frames, and the negative sample images are images of non-target-class objects. In this embodiment, the target class objects may include two classes, cat and dog, while the non-target-class objects may be other animal species, such as cattle, horses, rabbits, etc.
Optionally, the preset target detection network may be a lightweight yolov5m detection network, whose model size is about one quarter of the standard yolov5x network and whose inference time on CPU is reduced by 50%; by training the lightweight yolov5m detection network with the customized second image sample set, its accuracy on the target class object detection problem is substantially consistent with that of the standard yolov5x network.
For example, first, detection frames are labeled on collected pet images (covering cats and dogs) to form a pet detection data set, i.e., a plurality of positive sample images; negative sample images of other animal species (covering other animals such as cattle, horses and rabbits) are then added to form a new data set, i.e., the second image sample set in the embodiment of the present application. The electronic device then trains the lightweight yolov5m detection network with the second image sample set, i.e., the labeled pictures, and the object detection model is obtained when the error of the object detection frame output by the target detection network is controlled to be smaller than a preset error threshold.
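The patent names yolov5m but gives no training code; the sketch below only illustrates the stopping criterion it describes, a detection-frame error falling below a preset threshold, around a generic PyTorch detector. Every name here is an assumption:

```python
import torch

def train_detector(net, loader, box_loss_fn, error_threshold=0.05, max_epochs=100):
    """Train a generic detector until the mean detection-frame error per epoch
    drops below the preset threshold, mirroring the criterion in the embodiment."""
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    for epoch in range(max_epochs):
        epoch_error, batches = 0.0, 0
        for images, target_boxes in loader:   # negative samples carry empty box sets
            pred_boxes = net(images)
            loss = box_loss_fn(pred_boxes, target_boxes)  # detection-frame error
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            epoch_error += loss.item()
            batches += 1
        if epoch_error / max(batches, 1) < error_threshold:
            return net                        # error below preset threshold: done
    return net
```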
According to the above analysis, in the pet identity authentication scenario, detecting and segmenting the pet image before it enters the color classification model removes a large amount of invalid interference information in the image, including the background, clothing and other irrelevant pets, and improves the accuracy of the classification network in evaluating the pet's color. This not only improves the accuracy of pet identity authentication and the efficiency of pet insurance claim settlement, but also reduces the difficulty of pet image acquisition, giving a better user experience.
It can be understood that the technical scheme of the application can also be applied to other object color evaluation scenes, such as invoice color evaluation, certificate color evaluation and the like, and plays an auxiliary role in identity authentication and image anti-counterfeiting for calculating object similarity in such scenes.
According to the image processing method provided by the embodiment of the application, when the image to be processed contains the target object, the image to be processed is firstly input into a pre-trained image segmentation model for semantic segmentation to obtain the pixel area image of the target object, and then the pixel area image is input into a pre-trained color evaluation model to determine the color of the target object. According to the technical scheme, before color evaluation is carried out on a target object in an image to be processed, semantic segmentation is carried out firstly, a large amount of invalid interference information in the image to be processed can be removed, then color evaluation is carried out, the color evaluation accuracy of the target object can be improved, and further the subsequent identity authentication accuracy is improved.
Optionally, on the basis of the foregoing embodiment, fig. 3 is a schematic flowchart of a second embodiment of the image processing method provided in the present application. The embodiment is mainly used for explaining the generation process of the image segmentation model and the color evaluation model. As shown in fig. 3, the image processing method may further include the steps of:
s301, obtaining a first image sample set, where the first image sample set includes: and each positive sample image comprises at least one target class object marked with a detection frame.
Optionally, in an embodiment of the present application, the first image sample set is a training sample of an image segmentation model, and the positive sample images are sample images that contain the target class object and are labeled by the target object detection box. The meaning of the positive sample image is the same as that of the positive sample image included in the second image sample set in the above embodiment, and is not repeated here.
S302, processing the positive sample images to obtain a semantic segmentation sample set of at least one type of target object.
The semantic segmentation sample set comprises a plurality of target class images, each target class image comprising a pixel region and a background region of at least one class of target class object.
For example, in this embodiment, since each positive sample image includes at least one target class object labeled with a detection frame, the target class object in each positive sample image may be processed based on the labeled detection frame in each positive sample image.
In a possible design, the electronic device may crop each positive sample image according to its labeled detection frame, retaining the region inside the frame, to obtain a target class image corresponding to each positive sample image; it may then perform pixel labeling on the target class objects in each target class image and determine the pixel region and background region of at least one class of target class object in each target class image.
For example, in a pet claim settlement scenario, the electronic device may crop the region enclosed by the labeled detection frame in each positive sample image to obtain a pet image, label the pixel regions of the different pets in the pet image, and segment three classes of objects: cat, dog and background, where clothing worn on the pet is labeled as background, thereby obtaining a semantic segmentation data set of pet images. Illustratively, in the embodiments of the present application, cats and dogs belong to different target classes.
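A minimal sketch of this sample-preparation step, assuming the pixel annotations have already been rasterized into a label map (the names and class-id scheme are illustrative):

```python
import numpy as np

CLASS_IDS = {"background": 0, "cat": 1, "dog": 2}   # clothing is labeled background

def build_segmentation_sample(image: np.ndarray, det_box, label_map: np.ndarray):
    """Crop a positive sample to its labeled detection frame and pair the crop
    with the per-pixel class labels for the same region."""
    x1, y1, x2, y2 = det_box
    target_class_image = image[y1:y2, x1:x2]        # keep only the box region
    pixel_labels = label_map[y1:y2, x1:x2]          # values taken from CLASS_IDS
    return target_class_image, pixel_labels
```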
S303, training a preset semantic segmentation network by using the semantic segmentation sample set to obtain an image segmentation model.
The image segmentation model is used for segmenting each target class image, removing a background area in each target class image and outputting a pixel area image only containing a pixel area of a target class object.
In this step, the preset semantic segmentation network may be a lightweight semantic segmentation network (e.g., EfficientNetb3-PAN). Each target class image in the semantic segmentation sample set is used as the input of the semantic segmentation network, and a pixel region map containing only the pixel region of the target class object is used as its target output; the network is trained until the error between its actual output and the target output is smaller than a certain threshold, yielding the image segmentation model. Optionally, by cropping out the pixel region of the target class object in each positive sample image, a large amount of invalid information can be removed.
Optionally, in the present embodiment, setting the input image size of the image segmentation model to 384 × 384 maintains high accuracy, and the model's inference time on CPU is only 30 ms. Therefore, in practical applications, the image to be processed can also be resized to 384 × 384 to improve cropping accuracy and efficiency. It is understood that the image size is not limited in the embodiments of the present application and may be determined according to the actual settings, which is not described herein again.
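If the EfficientNetb3-PAN network were built with the segmentation_models_pytorch library (an assumption; the patent does not name an implementation), the setup might look like:

```python
import torch
import segmentation_models_pytorch as smp

# Three output classes as in the pet example: background, cat, dog.
model = smp.PAN(encoder_name="efficientnet-b3",
                encoder_weights="imagenet",
                classes=3)

dummy = torch.randn(1, 3, 384, 384)   # the 384 x 384 input size discussed above
with torch.no_grad():
    logits = model(dummy)             # 1 x 3 x 384 x 384 per-pixel class logits
```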
Further, as shown in fig. 3, the image processing method may further include the steps of:
s304, performing semantic segmentation on the semantic segmentation sample set of the at least one type of target object to obtain at least one pixel area image of the at least one type of target object.
Optionally, in an embodiment of the present application, after obtaining the semantic segmentation sample set of at least one class of target objects, the electronic device may perform semantic segmentation on each target class image based on the pixel region of each target class object, cropping out at least one pixel region map of the at least one class of target objects.
S305, acquiring the color type of the target class object in each pixel region graph.
For example, for each pixel region map obtained in S304, the color type of the target class object in each pixel region map may be manually marked and used as a true value for use in training the color evaluation model.
S306, training a preset neural network according to the color type of the target class object in the at least one pixel region map and a preset set of multiple color types to obtain the color evaluation model.
Optionally, the preset neural network may be a lightweight neural network (e.g., MobileNetv2). During training, a set of multiple color types is defined based on human experience, for example K pet color types (such as black, white, gray, brown, black-and-white stripes, etc.). The at least one pixel region map obtained by the above segmentation is used as the network training samples of the color evaluation model, with the manually marked color type of the target class object in each pixel region map as the ground truth, and the preset neural network is trained to obtain the color evaluation model, e.g., a K-class classification model. Correspondingly, during color prediction, the electronic device can feed the detected and segmented image to be processed into the trained color evaluation model for color evaluation of the target object.
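A sketch of this training setup, assuming MobileNetv2 from torchvision (recent versions accept the weights string shown) with its classifier head replaced by a K-way color head; the K=5 color set is only an example:

```python
import torch
import torchvision

K = 5  # example number of predefined color types (black, white, gray, brown, striped)

# MobileNetv2 backbone with its 1000-way ImageNet head swapped for a K-way color head.
model = torchvision.models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier[1] = torch.nn.Linear(model.last_channel, K)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(pixel_region_batch, color_labels):
    """One supervised step: segmented pixel region maps in, color classes out."""
    logits = model(pixel_region_batch)
    loss = criterion(logits, color_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```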
According to the image processing method provided in this embodiment of the present application, a first image sample set is obtained, comprising a plurality of positive sample images, each containing at least one target class object labeled with a detection frame; the positive sample images are processed to obtain a semantic segmentation sample set of at least one class of target class object, and a preset semantic segmentation network is trained with this sample set to obtain the image segmentation model. The semantic segmentation sample set is then semantically segmented to obtain at least one pixel region map of the at least one class of target class object, the color type of the target class object in each pixel region map is obtained, and a preset neural network is trained according to these color types and a preset set of multiple color types to obtain the color evaluation model. In this technical scheme, the semantic segmentation model and the color evaluation model are trained in advance, laying a foundation for the subsequent color evaluation of target class objects.
On the basis of the foregoing embodiments, fig. 4 is a schematic flowchart of a third embodiment of the image processing method provided in the present application. The method embodiment is a complete implementation of image processing. As shown in fig. 4, the image processing method may include the steps of:
s401, acquiring an image to be processed;
s402, inputting the image to be processed into an object detection model to obtain an object detection result.
The object detection model can be referred to as a pet detection model in the pet claim settlement scene.
S403, judging whether the image to be processed contains a target class object according to the object detection result; if not, executing S404; if yes, executing S405;
s404, determining that the image to be processed is an irrelevant image, and sending an image checking prompt;
s405, judging whether the number of the target class objects in the image to be processed is 1; if yes, go to step S406; if not, executing S407;
s406, determining a target class object in the image to be processed as a target object;
s407, determining the area of a detection frame in each target class object;
s408, taking the target class object with the largest detection frame area in the at least two target class objects as a target object;
s409, inputting the image to be processed into a pre-trained image segmentation model for semantic segmentation to obtain a pixel region map of the target object;
and S410, inputting the pixel region map into a color evaluation model trained in advance, and determining the color of the target object.
For specific implementation of each step in the present application, reference may be made to the descriptions in the foregoing embodiments, and details are not described herein.
In the embodiment of the application, before color evaluation is performed on a target object in an image to be processed, target detection and detection frame segmentation are performed on the image to be processed to obtain a detection frame image, and then the detection frame image is input into an image segmentation model to obtain a pixel area map of the target object.
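Pulling the steps together, a hedged end-to-end sketch of the flow in fig. 4 (the four callables are assumed wrappers around the pre-trained models; names and return structures are illustrative):

```python
def box_area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def process_image(image, detector, segmenter, color_model):
    """End-to-end flow of fig. 4: detect, pick the target, segment, classify."""
    detections = detector(image)                 # S402: object detection
    if not detections:                           # S403/S404: irrelevant image
        return {"status": "recheck", "message": "no target class object found"}
    target = max(detections, key=lambda d: box_area(d["box"]))   # S405-S408
    pixel_region = segmenter(image, target["box"])               # S409: segmentation
    color = color_model(pixel_region)                            # S410: color evaluation
    return {"status": "ok", "class": target["cls"], "color": color}
```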
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 5 is a schematic structural diagram of a first embodiment of an image processing apparatus according to the present application. The image processing apparatus may be integrated in an electronic device, or may be implemented by an electronic device. Referring to fig. 5, the image processing apparatus may include:
an obtaining module 501, configured to obtain an image to be processed;
a processing module 502 for:
when the image to be processed contains a target object, inputting the image to be processed into a pre-trained image segmentation model for semantic segmentation to obtain a pixel region map of the target object;
and inputting the pixel region map into a pre-trained color evaluation model to determine the color of the target object.
In a possible design of this embodiment of the present application, the obtaining module 501 is further configured to obtain a first set of image samples, where the first set of image samples includes: a plurality of positive sample images, wherein each positive sample image comprises at least one target class object marked with a detection frame;
the processing module 502 is further configured to:
processing the plurality of positive sample images to obtain a semantic segmentation sample set of the at least one class of target class object, wherein the semantic segmentation sample set comprises a plurality of target class images, each target class image comprising a pixel region and a background region of at least one class of target class object;
and training a preset semantic segmentation network by using the semantic segmentation sample set to obtain the image segmentation model, wherein the image segmentation model is used for segmenting each target class image, removing the background region in each target class image, and outputting a pixel region map containing only the pixel region of the target class object.
Optionally, the processing module 502 is configured to process the multiple positive sample images to obtain a semantic segmentation sample set of the at least one class of target objects, specifically:
the processing module 502 is specifically configured to:
cropping each positive sample image according to the labeled detection frame in that image, retaining the region inside the labeled detection frame, to obtain a target class image corresponding to each positive sample image;
and performing pixel labeling on the target class objects in each target class image, and determining the pixel region and the background region of at least one class of target class object in each target class image.
Optionally, the processing module 502 is further configured to perform semantic segmentation on the semantic segmentation sample set of the at least one class of target object to obtain at least one pixel region map of the at least one class of target object;
the obtaining module 501 is further configured to obtain the color type of the target class object in each pixel region map;
the processing module 502 is further configured to train a preset neural network according to the color type of the target class object in the at least one pixel region map and a preset set of multiple color types, so as to obtain the color evaluation model.
In another possible design of this embodiment of the present application, the processing module 502 is further configured to:
inputting the image to be processed into a pre-trained object detection model, and determining whether the image to be processed contains a target class object;
when it is determined that the image to be processed contains at least two target class objects, determining the area of each target class object's detection frame;
taking the target class object with the largest detection frame area among the at least two target class objects as the target object;
and issuing an image checking prompt when the image to be processed does not contain a target class object.
Optionally, the obtaining module 501 is further configured to obtain a second image sample set, where the second image sample set includes: a plurality of positive sample images and at least one negative sample image, wherein each positive sample image contains at least one target class object labeled with a detection frame, and each negative sample image does not contain the target class object;
the processing module 502 is further configured to perform model training on a preset target detection network by using the second image sample set, and obtain the object detection model when an error of an object detection frame output by the target detection network is smaller than a preset error threshold.
The apparatus provided in the embodiment of the present application may be configured to implement the technical solution of the method embodiment, and the implementation principle and the technical effect are similar, which are not described herein again.
It should be noted that the division of the modules of the above apparatus is only a logical division, and the actual implementation may be wholly or partially integrated into one physical entity, or may be physically separated. And these modules can be realized in the form of software called by processing element; or may be implemented entirely in hardware; and part of the modules can be realized in the form of calling software by the processing element, and part of the modules can be realized in the form of hardware. For example, the processing module may be a processing element separately set up, or may be implemented by being integrated in a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code, and a function of the processing module may be called and executed by a processing element of the apparatus. Other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
Fig. 6 is a schematic structural diagram of an embodiment of an electronic device provided in the present application. As shown in fig. 6, the electronic device may include: a processor 601, a memory 602, a transceiver 603, and a computer program stored on the memory 602 and executable on the processor 601; when the computer program is executed by the processor 601, it implements the solutions of the above method embodiments.
Optionally, in this embodiment, the transceiver 603 is used for communication with other devices. The electronic device may also include a system bus 604. The memory 602 and the transceiver 603 are coupled to the processor 601 via the system bus 604 and communicate with each other.
Optionally, in fig. 6, the processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory may comprise random access memory (RAM), read-only memory (ROM), and non-volatile memory, such as at least one disk memory.
The transceiver may also be referred to as a communication interface for enabling communication between the database access device and other devices, such as clients, read-write libraries, and read-only libraries.
The system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
According to an embodiment of the present application, there is also provided a computer-readable storage medium having computer-executable instructions stored therein which, when executed by a processor, implement the technical solutions of the above method embodiments.
According to an embodiment of the present application, there is also provided a computer program product, including a computer program stored in a readable storage medium; the computer program is adapted to be executed by a processor to implement the technical solutions of the above method embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. An image processing method, comprising:
acquiring an image to be processed;
when the image to be processed contains a target object, inputting the image to be processed into a pre-trained image segmentation model for semantic segmentation to obtain a pixel area map of the target object;
and inputting the pixel area map into a pre-trained color evaluation model to determine the color of the target object.
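Purely as an illustrative sketch of this claimed flow (and not part of the claims), under assumed model interfaces, namely a segmentation model returning a per-pixel boolean mask and a color model consuming the masked pixel area map, the end-to-end processing could look like:

```python
# Hedged end-to-end sketch of the method of claim 1; all model interfaces
# (mask shape, callable models) are assumptions, not disclosed details.
import numpy as np

def process_image(image: np.ndarray, segmentation_model, color_model):
    mask = segmentation_model(image)          # HxW boolean target-object mask (assumed)
    if not mask.any():
        return None                           # no target object: nothing to evaluate
    pixel_area_map = image * mask[..., None]  # zero out the background area
    return color_model(pixel_area_map)        # predicted color type
```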
2. The method of claim 1, further comprising:
obtaining a first set of image samples, the first set of image samples comprising: a plurality of positive sample images, wherein each positive sample image comprises at least one target class object marked with a detection frame;
processing the plurality of positive sample images to obtain a semantic segmentation sample set of the at least one class of target objects, wherein the semantic segmentation sample set comprises: a plurality of target class images, each target class image comprising: a pixel area and a background area of at least one target class object;
and training a preset semantic segmentation network by using the semantic segmentation sample set to obtain the image segmentation model, wherein the image segmentation model is used for segmenting each target class image, removing the background area in each target class image, and outputting a pixel area map containing only the pixel area of the target class object.
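A minimal, non-authoritative training sketch for such an image segmentation model follows (an illustration, not part of the claims), treating the task as binary pixel-area-versus-background labeling; seg_net, the loader, and the loss choice are illustrative assumptions.

```python
# Assumed sketch: fit a preset semantic segmentation network so that it
# separates the pixel area of the target class object (label 1) from the
# background area (label 0) in each target class image.
import torch
import torch.nn as nn

def train_segmentation_model(seg_net, loader, epochs: int = 20, lr: float = 1e-3):
    opt = torch.optim.Adam(seg_net.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # per-pixel foreground/background loss
    for _ in range(epochs):
        for target_class_image, pixel_mask in loader:
            logits = seg_net(target_class_image)  # per-pixel logits (assumed shape)
            loss = loss_fn(logits, pixel_mask.float())
            opt.zero_grad()
            loss.backward()
            opt.step()
    return seg_net
```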
3. The method according to claim 2, wherein the processing the plurality of positive sample images to obtain the semantic segmentation sample set of the at least one class of target objects comprises:
cropping each positive sample image according to the labeled detection frame in each positive sample image, and retaining the content within the labeled detection frame in each positive sample image, to obtain a target class image corresponding to each positive sample image;
and performing pixel labeling on the target class object in each target class image, and determining the pixel area and the background area of the at least one class of target class objects in each target class image.
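The cropping step can be illustrated by the following hedged sketch (an illustration, not part of the claims), which cuts one target class image per labelled detection frame; the (x1, y1, x2, y2) box layout is an assumption, and pixel labeling of the crops would then be done separately.

```python
# Illustrative sketch only: crop each positive sample image to its labelled
# detection frames so that each crop (a "target class image") retains the
# region inside one frame and discards the rest of the image.
from typing import List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # assumed (x1, y1, x2, y2) pixel coordinates

def crop_target_class_images(image: np.ndarray, boxes: List[Box]) -> List[np.ndarray]:
    h, w = image.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes:
        x1, y1 = max(0, x1), max(0, y1)   # clamp frame to image bounds
        x2, y2 = min(w, x2), min(h, y2)
        if x2 > x1 and y2 > y1:
            crops.append(image[y1:y2, x1:x2].copy())
    return crops
```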
4. The method of claim 2, further comprising:
performing semantic segmentation on the semantic segmentation sample set of the at least one class of target objects to obtain at least one pixel area map of the at least one class of target objects;
acquiring the color type of the target class object in each pixel area map;
and training a preset neural network according to the color type of the target class object in the at least one pixel area map and a preset set of multiple color types to obtain the color evaluation model.
5. The method according to any one of claims 1-4, wherein after said acquiring the image to be processed, the method further comprises:
inputting the image to be processed into a pre-trained object detection model, and determining whether the image to be processed contains a target class object;
when it is determined that the image to be processed contains at least two target class objects, determining the area of the detection frame of each target class object;
taking the target class object with the largest detection frame area among the at least two target class objects as the target object;
and issuing an image check prompt when the image to be processed does not contain any target class object.
6. The method of claim 5, wherein prior to said acquiring an image to be processed, the method further comprises:
obtaining a second image sample set, the second image sample set comprising: a plurality of positive sample images and at least one negative sample image, each positive sample image containing at least one target class object marked with a detection frame, and each negative sample image containing no target class object;
and performing model training on a preset target detection network by using the second image sample set, and obtaining the object detection model when the error of an object detection frame output by the target detection network is smaller than a preset error threshold value.
7. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring an image to be processed;
a processing module to:
when the image to be processed contains a target object, inputting the image to be processed into a pre-trained image segmentation model for semantic segmentation to obtain a pixel area map of the target object;
and inputting the pixel area map into a pre-trained color evaluation model to determine the color of the target object.
8. The apparatus of claim 7, wherein the obtaining module is further configured to obtain a first set of image samples, the first set of image samples comprising: a plurality of positive sample images, wherein each positive sample image comprises at least one target class object marked with a detection frame;
the processing module is further configured to:
process the plurality of positive sample images to obtain a semantic segmentation sample set of the at least one class of target objects, wherein the semantic segmentation sample set comprises: a plurality of target class images, each target class image comprising: a pixel area and a background area of at least one target class object;
and train a preset semantic segmentation network by using the semantic segmentation sample set to obtain the image segmentation model, wherein the image segmentation model is used for segmenting each target class image, removing the background area in each target class image, and outputting a pixel area map containing only the pixel area of the target class object.
9. The apparatus according to claim 8, wherein, in processing the plurality of positive sample images to obtain the semantic segmentation sample set of the at least one class of target objects, the processing module is specifically configured to:
crop each positive sample image according to the labeled detection frame in each positive sample image, retaining the content within the labeled detection frame in each positive sample image, to obtain a target class image corresponding to each positive sample image;
and perform pixel labeling on the target class object in each target class image, and determine the pixel area and the background area of the at least one class of target class objects in each target class image.
10. The apparatus according to claim 8, wherein the processing module is further configured to perform semantic segmentation on the semantic segmentation sample set of the at least one class of target objects to obtain at least one pixel area map of the at least one class of target objects;
the obtaining module is further configured to obtain the color type of the target class object in each pixel area map;
the processing module is further configured to train a preset neural network according to the color type of the target class object in the at least one pixel area map and a preset set of multiple color types, so as to obtain the color evaluation model.
11. The apparatus of any of claims 7-10, wherein the processing module is further configured to:
inputting the image to be processed into a pre-trained object detection model, and determining whether the image to be processed contains a target class object;
when it is determined that the image to be processed contains at least two target class objects, determining the area of the detection frame of each target class object;
taking the target class object with the largest detection frame area among the at least two target class objects as the target object;
and issuing an image check prompt when the image to be processed does not contain any target class object.
12. The apparatus of claim 11, wherein the obtaining module is further configured to obtain a second image sample set, the second image sample set comprising: a plurality of positive sample images and at least one negative sample image, each positive sample image containing at least one target class object marked with a detection frame, and each negative sample image containing no target class object;
the processing module is further configured to perform model training on a preset target detection network by using the second image sample set, and obtain the object detection model when an error of an object detection frame output by the target detection network is smaller than a preset error threshold.
13. An electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any one of claims 1-6 when executing the computer program.
14. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-6.
15. A computer program product, comprising: a computer program stored in a readable storage medium, the computer program being adapted to be executed by a processor to implement the method according to any one of claims 1-6.
CN202111417707.9A 2021-11-25 2021-11-25 Image processing method, device, equipment and storage medium Pending CN114120090A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111417707.9A CN114120090A (en) 2021-11-25 2021-11-25 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111417707.9A CN114120090A (en) 2021-11-25 2021-11-25 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114120090A true CN114120090A (en) 2022-03-01

Family

ID=80373722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111417707.9A Pending CN114120090A (en) 2021-11-25 2021-11-25 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114120090A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116206759A (en) * 2023-01-17 2023-06-02 西安电子科技大学 Mental health assessment device, equipment and storage medium based on image analysis
CN116206759B (en) * 2023-01-17 2023-11-28 西安电子科技大学 Mental health assessment device, equipment and storage medium based on image analysis
CN116681957A (en) * 2023-08-03 2023-09-01 富璟科技(深圳)有限公司 Image recognition method based on artificial intelligence and computer equipment
CN116681957B (en) * 2023-08-03 2023-10-17 富璟科技(深圳)有限公司 Image recognition method based on artificial intelligence and computer equipment

Similar Documents

Publication Publication Date Title
CN112232293B (en) Image processing model training method, image processing method and related equipment
CN110705405B (en) Target labeling method and device
US20220051404A1 (en) Pathological section image processing method and apparatus, system, and storage medium
CN114120090A (en) Image processing method, device, equipment and storage medium
CN105121620A (en) Image processing device, image processing method, program, and storage medium
CN111161265A (en) Animal counting and image processing method and device
US20210406607A1 (en) Systems and methods for distributed data analytics
CN114648680B (en) Training method, device, equipment and medium of image recognition model
CN111507403A (en) Image classification method and device, computer equipment and storage medium
CN112418167A (en) Image clustering method, device, equipment and storage medium
CN108182444A (en) The method and device of video quality diagnosis based on scene classification
CN113177554B (en) Thyroid nodule identification and segmentation method, system, storage medium and equipment
CN112966687B (en) Image segmentation model training method and device and communication equipment
CN115294505B (en) Risk object detection and training method and device for model thereof and electronic equipment
CN115546845A (en) Multi-view cow face identification method and device, computer equipment and storage medium
CN115458100A (en) Knowledge graph-based follow-up method and device, electronic equipment and storage medium
CN111753722B (en) Fingerprint identification method and device based on feature point type
CN111401348B (en) Living body detection method and system for target object
CN111079617A (en) Poultry identification method and device, readable storage medium and electronic equipment
CN115587896B (en) Method, device and equipment for processing canine medical insurance data
CN112766387B (en) Training data error correction method, device, equipment and storage medium
CN115240230A (en) Canine face detection model training method and device, and detection method and device
US20230298753A1 (en) Method for annotating pathogenic site of disease by means of semi- supervised learning, and diagnosis system for performing same
CN111914820B (en) Qualification auditing method and device
CN111199547B (en) Image segmentation method and device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination