CN110349161B - Image segmentation method, image segmentation device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110349161B
CN110349161B
Authority
CN
China
Prior art keywords
class
image
probability matrix
category
segmented
Prior art date
Legal status
Active
Application number
CN201910621621.4A
Other languages
Chinese (zh)
Other versions
CN110349161A (en
Inventor
何茜
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910621621.4A priority Critical patent/CN110349161B/en
Publication of CN110349161A publication Critical patent/CN110349161A/en
Application granted granted Critical
Publication of CN110349161B publication Critical patent/CN110349161B/en

Classifications

All classifications fall under G (Physics) > G06 (Computing; Calculating or Counting) > G06T (Image Data Processing or Generation, in General):

    • G06T7/00 Image analysis > G06T7/10 Segmentation; Edge detection
    • G06T7/10 > G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/20 Special algorithmic details > G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing > G06T2207/30196 Human being; Person > G06T2207/30201 Face


Abstract

The embodiments of the present disclosure disclose an image segmentation method, an image segmentation apparatus, an electronic device and a storage medium. The method includes: acquiring an image to be segmented; inputting the image to be segmented into a pre-trained multi-class recognition model; obtaining a first class probability matrix according to output result information of a first target layer of the multi-class recognition model; and obtaining a second class probability matrix according to output result information of a second target layer of the multi-class recognition model. The second class is a subcategory of the first class, elements of the first class probability matrix represent the probability that the pixel at the corresponding position in the image to be segmented belongs to the first class, and elements of the second class probability matrix represent the probability that the pixel at the corresponding position belongs to the second class. According to the technical solution of the embodiments of the present disclosure, three-classification semantic segmentation can be performed on an image quickly, and image segmentation efficiency can be improved.

Description

Image segmentation method, image segmentation device, electronic equipment and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of machine learning, in particular to an image segmentation method, an image segmentation device, electronic equipment and a storage medium.
Background
At present, semantic segmentation of images is required in various application scenarios (for example, photo retouching and beauty photography). The purpose of image segmentation is to classify each pixel in the image, i.e., to label each pixel with a category. Image segmentation sometimes involves three categories: for example, in beauty photography, elements that affect appearance, such as speckles, acne and common nevi on the face, need to be removed, while classical nevi need to be retained.
For image segmentation problems of this kind, the current common practice is to label the training samples with three labels (for example, 0, 1 and 2), train a model, and generate three probability maps, one per class, giving the probability that each pixel belongs to each of the three classes. The model training stage of this approach is undoubtedly more complex and computationally demanding, so image segmentation efficiency is low.
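The conventional three-label approach described above can be pictured as a per-pixel softmax over three channels. The patent gives no code, so the following is an illustrative sketch only, with hypothetical names:

```python
import numpy as np

def softmax_probability_maps(logits):
    """Turn per-pixel logits of shape (3, H, W) into three probability
    maps (background, first class, second class) that sum to 1 at
    every pixel, as in the conventional three-label approach."""
    shifted = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=0, keepdims=True)

# Toy 2x2 image: uniform logits give each of the 3 classes probability 1/3.
probs = softmax_probability_maps(np.zeros((3, 2, 2)))
```

Training such a model needs a full three-way labelled mask per image, which is the source of the extra complexity the disclosure seeks to avoid.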
Disclosure of Invention
In view of this, embodiments of the present disclosure provide an image segmentation method, an image segmentation apparatus, an electronic device and a storage medium, so as to quickly perform three-classification semantic segmentation on an image.
Additional features and advantages of the disclosed embodiments will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosed embodiments.
In a first aspect, an embodiment of the present disclosure provides an image segmentation method, including:
acquiring an image to be segmented;
inputting the image to be segmented into a pre-trained multi-class identification model, obtaining a first class probability matrix according to output result information of a first target layer of the multi-class identification model, and obtaining a second class probability matrix according to output result information of a second target layer of the multi-class identification model, wherein the second class belongs to a sub-class of the first class, the first class probability matrix, the second class probability matrix and the image to be segmented have the same size, elements of the first class probability matrix represent probability values of pixels at corresponding positions in the image to be segmented belonging to the first class, and elements of the second class probability matrix represent probability values of pixels at corresponding positions in the image to be segmented belonging to the second class.
In a second aspect, an embodiment of the present disclosure further provides an image segmentation apparatus, including:
the image to be segmented acquiring unit is used for acquiring an image to be segmented;
the class identification unit is used for inputting the image to be segmented into a pre-trained multi-class identification model, obtaining a first class probability matrix according to output result information of a first target layer of the multi-class identification model, and obtaining a second class probability matrix according to output result information of a second target layer of the multi-class identification model, wherein the second class belongs to a sub-class of the first class, the first class probability matrix, the second class probability matrix and the image to be segmented have the same size, elements of the first class probability matrix represent probability values that pixels at corresponding positions in the image to be segmented belong to the first class, and elements of the second class probability matrix represent probability values that pixels at corresponding positions in the image to be segmented belong to the second class.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of the first aspect.
In a fourth aspect, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the method according to any one of the first aspect.
According to the image segmentation method and apparatus of the embodiments of the present disclosure, the image to be segmented is input into a pre-trained multi-class recognition model, a first class probability matrix is obtained according to the output result information of a first target layer of the multi-class recognition model, and a second class probability matrix is obtained according to the output result information of a second target layer, so that three-classification semantic segmentation can be performed on the image quickly and image segmentation efficiency can be improved.
Drawings
To illustrate the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some of the embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image segmentation method provided in an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another image segmentation method provided in the embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating a method for training a multi-class recognition model according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an image segmentation apparatus provided in an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of another image segmentation apparatus provided in the embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a training apparatus for multi-class recognition models according to an embodiment of the present disclosure;
FIG. 7 illustrates a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
To make the technical problems solved, the technical solutions adopted and the technical effects achieved by the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those skilled in the art without creative effort on the basis of these embodiments fall within the protection scope of the present disclosure.
It should be noted that the terms "system" and "network" are often used interchangeably in the embodiments of the present disclosure. "And/or" in the embodiments of the present disclosure means any and all combinations of one or more of the associated listed items. The terms "first", "second", and the like in the description, the claims and the drawings of the present disclosure are used to distinguish different objects, not to define a particular order.
It should also be noted that, in the embodiments of the present disclosure, each of the following embodiments may be executed alone, or may be executed in combination with each other, and the embodiments of the present disclosure are not limited specifically.
The technical solutions of the embodiments of the present disclosure are further described by the following detailed description in conjunction with the accompanying drawings.
Fig. 1 shows a flowchart of an image segmentation method provided by an embodiment of the present disclosure. This embodiment is applicable to performing three-classification semantic segmentation on an image, and the method may be performed by an image segmentation apparatus configured in an electronic device. As shown in Fig. 1, the image segmentation method of this embodiment includes:
in step S110, an image to be segmented is acquired.
For example, the image to be segmented is a face image, the first category is the speckle mole category, and the second category is the category of moles with a set characteristic, for example, classical moles. The image segmentation method can thus be used to remove speckles and common moles from a face while retaining classical moles.
Taking the image to be segmented as a face image as an example, the operation of acquiring the image to be segmented comprises the following steps: acquiring an original image comprising a human face; and acquiring a face image to be processed in the original image as the image to be segmented.
Acquiring the face image to be processed from the original image may include performing face contour analysis on the original image to obtain face contour information.
The original image including the face may be a previously shot image, or a photo captured by a camera in real time and cached in a buffer as the original image including the face. For the former, the technical solution of this embodiment can be used for post-capture retouching of the picture; for the latter, it can be used as a real-time filter during shooting, so that the captured photo or video has common nevi removed while classical nevi are retained.
In step S120, the image to be segmented is input into a multi-class recognition model trained in advance, a first class probability matrix is obtained according to output result information of a first target layer of the multi-class recognition model, and a second class probability matrix is obtained according to output result information of a second target layer of the multi-class recognition model.
The second category belongs to a subcategory of the first category, the first category probability matrix, the second category probability matrix and the image to be segmented have the same size, elements of the first category probability matrix represent probability values that corresponding position pixels in the image to be segmented belong to the first category, and elements of the second category probability matrix represent probability values that corresponding position pixels in the image to be segmented belong to the second category.
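A minimal sketch of this two-head inference, assuming each target layer emits an H x W logit map that a sigmoid converts into a probability matrix (the patent does not specify the activation; all names below are hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def class_probability_matrices(first_layer_output, second_layer_output):
    """Map the output result information (H x W logit maps) of the first
    and second target layers to the first class and second class
    probability matrices; each has the same size as the input image."""
    return sigmoid(first_layer_output), sigmoid(second_layer_output)

first, second = class_probability_matrices(np.zeros((2, 3)), np.zeros((2, 3)))
# Zero logits give probability 0.5 at every pixel of both matrices.
```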
According to the image segmentation method and apparatus of the embodiments of the present disclosure, the image to be segmented is input into the pre-trained multi-class recognition model, the first class probability matrix is obtained according to the output result information of the first target layer of the multi-class recognition model, and the second class probability matrix is obtained according to the output result information of the second target layer, so that three-classification semantic segmentation can be performed on the image quickly and image segmentation efficiency can be improved.
Fig. 2 is a schematic flowchart of another image segmentation method provided in an embodiment of the present disclosure; this embodiment is an optimization on the basis of the foregoing embodiment. As shown in Fig. 2, the image segmentation method of this embodiment includes:
in step S210, an image to be segmented is acquired.
In step S220, the image to be segmented is input into a multi-class recognition model trained in advance, a first class probability matrix is obtained according to output result information of a first target layer of the multi-class recognition model, and a second class probability matrix is obtained according to output result information of a second target layer of the multi-class recognition model.
The second category belongs to a subcategory of the first category, the first category probability matrix, the second category probability matrix and the image to be segmented have the same size, elements of the first category probability matrix represent probability values that corresponding position pixels in the image to be segmented belong to the first category, and elements of the second category probability matrix represent probability values that corresponding position pixels in the image to be segmented belong to the second category.
Steps S210 to S220 are the same as steps S110 to S120 in the foregoing embodiment and are not repeated here.
In step S230, inverse transformation is performed on the first class probability matrix to obtain a first class probability matrix corresponding to the original image, and inverse transformation is performed on the second class probability matrix to obtain a second class probability matrix corresponding to the original image.
After the inverse transformation, the first class probability matrix corresponding to the original image, the second class probability matrix corresponding to the original image, and the original image all have the same size.
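The patent does not specify the inverse transformation; if the image to be segmented was produced from the original image by cropping and scaling, a nearest-neighbour resize back to the original size is one plausible stand-in. This is an illustrative sketch only:

```python
import numpy as np

def inverse_transform(prob_matrix, original_shape):
    """Nearest-neighbour resize of an h x w probability matrix to the
    original image's H x W, so that matrix and original image have the
    same size after the inverse transformation."""
    h, w = prob_matrix.shape
    H, W = original_shape
    rows = np.arange(H) * h // H  # source row for each target row
    cols = np.arange(W) * w // W  # source column for each target column
    return prob_matrix[np.ix_(rows, cols)]
```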
In step S240, the original image is repaired according to the first class probability matrix corresponding to the original image and the second class probability matrix corresponding to the original image.
For example, the first class probability matrix corresponding to the original image may be thresholded with a first set threshold to obtain a binary first class threshold matrix, the second class probability matrix corresponding to the original image may be thresholded with a second set threshold to obtain a binary second class threshold matrix, and the image to be segmented may then be repaired according to the first class threshold matrix and/or the second class threshold matrix.
For example, a pixel in the original image is repaired when the first class threshold matrix indicates that the pixel belongs to the nevus maculatus category and the second class threshold matrix indicates that it does not belong to the category of nevi with the set characteristic (i.e., it is not a classical nevus).
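The thresholding and mask combination just described can be sketched as follows; thresholds of 0.5 are assumed, since the patent leaves the set thresholds open:

```python
import numpy as np

def repair_mask(first_prob, second_prob, t1=0.5, t2=0.5):
    """Binary mask of pixels to repair: 1 where the binary first class
    threshold matrix marks the pixel as a speckle/nevus AND the binary
    second class threshold matrix marks it as NOT a nevus with the set
    characteristic."""
    first_thresh = (first_prob >= t1).astype(np.uint8)
    second_thresh = (second_prob >= t2).astype(np.uint8)
    return first_thresh & (1 - second_thresh)
```

The resulting mask could then be handed to any inpainting routine (e.g. OpenCV's `cv2.inpaint`) to repair the marked pixels; the patent does not prescribe a particular repair algorithm.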
In the technical solution of this embodiment, on the basis of the foregoing embodiment, the first class probability matrix is further inversely transformed to obtain the first class probability matrix corresponding to the original image, the second class probability matrix is inversely transformed to obtain the second class probability matrix corresponding to the original image, and the original image is repaired according to these two matrices, so that image distortion can be avoided during the repair.
Fig. 3 is a schematic flow chart of a method for training a multi-class recognition model according to an embodiment of the present disclosure, where the multi-class recognition model includes a first class recognition submodel and a second class recognition submodel, and as shown in fig. 3, the method for training the multi-class recognition model according to the embodiment includes:
in step S310, a set of training samples is obtained.
The training sample comprises a sample image and an annotation class probability matrix used for representing the class of each pixel in the sample image, wherein the class comprises a background class, a first class and a second class, and the second class belongs to a sub-class of the first class.
In step S320, training a first class recognition submodel according to the training sample set, including:
and resetting the element value with the element value as the background class in the labeling class probability matrix corresponding to each sample image to 0, and resetting the element value with the element value as the first class or the second class to 1 to obtain the labeling first class probability matrix corresponding to each sample image.
An initialized first class recognition submodel is determined, where the initialized first class recognition submodel includes a first target layer for outputting the probability that each pixel in a target image belongs to the first class. Using a machine learning method, the sample images in the training sample set are taken as the input of the initialized first class recognition submodel and the annotated first class probability matrices corresponding to the input sample images are taken as its expected output, and the first class recognition submodel is obtained by training.
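The relabelling step for the first class recognition submodel (background stays 0; first or second class becomes 1) can be sketched as below; the function name is hypothetical:

```python
import numpy as np

def to_first_class_labels(annotation):
    """Annotation class probability matrix with values 0 (background),
    1 (first class), 2 (second class) -> annotated first class
    probability matrix: background stays 0, first or second class
    becomes 1."""
    return (annotation > 0).astype(np.uint8)

labels = to_first_class_labels(np.array([[0, 1], [2, 0]]))
# -> [[0, 1], [1, 0]]
```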
in step S330, training a second class identification submodel according to the training sample set includes:
The element values of the background class in the annotation class probability matrix corresponding to each sample image are reset to a set value, the element values of the first class are reset to 0, and the element values of the second class are reset to 1, so as to obtain the annotated second class probability matrix corresponding to each sample image. The set value is greater than 1; for example, it is set to 255.
An initialized second class recognition submodel is determined, where the initialized second class recognition submodel includes a second target layer for outputting the probability that each pixel in a target image belongs to the second class. Using a machine learning method, the sample images in the training sample set are taken as the input of the initialized second class recognition submodel and the annotated second class probability matrices corresponding to the input sample images are taken as its expected output, and the second class recognition submodel is obtained by training.
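The corresponding relabelling for the second class recognition submodel can be sketched as below, with the set value assumed to be 255 (as in the example above) so that background pixels can be excluded from training:

```python
import numpy as np

SET_VALUE = 255  # background marker; must be > 1, here assumed to be 255

def to_second_class_labels(annotation):
    """Annotation values 0/1/2 -> annotated second class probability
    matrix: background -> SET_VALUE, first class -> 0, second class -> 1."""
    out = np.full(annotation.shape, SET_VALUE, dtype=np.int32)
    out[annotation == 1] = 0
    out[annotation == 2] = 1
    return out

labels = to_second_class_labels(np.array([[0, 1], [2, 1]]))
# -> [[255, 0], [1, 0]]
```

In a typical training loop the set value would be passed as an ignore index to the loss (e.g. `ignore_index=255` in PyTorch's `CrossEntropyLoss`), so background pixels do not contribute to the second submodel's gradient; the patent does not specify the loss function.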
Further, in the annotation class probability matrix, a pixel belonging to the background class in the sample image may be represented by 0, a pixel belonging to the first class by 1, and a pixel belonging to the second class by 2.
The technical solution of this embodiment discloses a training method for a multi-class recognition model. The annotated first class probability matrix corresponding to each sample image in the sample set is obtained by resetting the background-class element values of the annotation class probability matrix to 0 and the first-class or second-class element values to 1; using a machine learning method, the sample images in the training sample set are taken as the input of the initialized first class recognition submodel, the annotated first class probability matrices corresponding to the input sample images are taken as its expected output, and the first class recognition submodel is obtained by training. The annotated second class probability matrix corresponding to each sample image is obtained by resetting the background-class element values to the set value, the first-class element values to 0 and the second-class element values to 1; using a machine learning method, the sample images in the training sample set are taken as the input of the initialized second class recognition submodel, the annotated second class probability matrices are taken as its expected output, and the second class recognition submodel is obtained by training. When segmenting an image, the image to be segmented is input into the pre-trained multi-class recognition model, the first class probability matrix is obtained according to the output result information of the first class recognition submodel, and the second class probability matrix is obtained according to the output result information of the second class recognition submodel, so that three-classification semantic segmentation can be performed on the image quickly, thereby improving image segmentation efficiency.
Fig. 4 is a schematic structural diagram of an image segmentation apparatus provided in an embodiment of the present disclosure, and as shown in fig. 4, the image segmentation apparatus according to this embodiment includes an image to be segmented acquisition unit 410 and a category identification unit 420.
The image to be segmented acquisition unit 410 is configured to acquire an image to be segmented.
The class recognition unit 420 is configured to input the image to be segmented into a multi-class recognition model trained in advance, obtain a first class probability matrix according to output result information of a first target layer of the multi-class recognition model, and obtain a second class probability matrix according to output result information of a second target layer of the multi-class recognition model.
The second category belongs to a subcategory of the first category, the first category probability matrix, the second category probability matrix and the image to be segmented have the same size, elements of the first category probability matrix represent probability values that corresponding position pixels in the image to be segmented belong to the first category, and elements of the second category probability matrix represent probability values that corresponding position pixels in the image to be segmented belong to the second category.
Further, the image to be segmented is a face image; the first category is the nevus maculatus category, and the second category is the category of nevi with a set characteristic.
Further, the image to be segmented acquisition unit 410 is configured to acquire an original image including a human face; and acquiring a face image to be processed in the original image as the image to be segmented.
Further, the image to be segmented acquiring unit 410 is configured to acquire a photo captured by a camera, and cache the photo in a buffer as the original image including the face.
The image segmentation apparatus provided by this embodiment can execute the image segmentation methods provided by the method embodiments of the present disclosure, and has the corresponding functional modules and beneficial effects of the executed method.
Fig. 5 is a schematic structural diagram of another image segmentation apparatus provided in the embodiment of the present disclosure, and as shown in fig. 5, the image segmentation apparatus according to the embodiment includes an image to be segmented acquisition unit 510, a category identification unit 520, an inverse transformation unit 530, and a repair processing unit 540.
The image to be segmented acquisition unit 510 is configured to acquire an image to be segmented;
the class recognition unit 520 is configured to input the image to be segmented into a pre-trained multi-class recognition model, obtain a first class probability matrix according to output result information of a first target layer of the multi-class recognition model, and obtain a second class probability matrix according to output result information of a second target layer of the multi-class recognition model.
The second category belongs to a subcategory of the first category, the first category probability matrix, the second category probability matrix and the image to be segmented have the same size, elements of the first category probability matrix represent probability values that corresponding position pixels in the image to be segmented belong to the first category, and elements of the second category probability matrix represent probability values that corresponding position pixels in the image to be segmented belong to the second category.
The inverse transformation unit 530 is configured to perform inverse transformation on the first category probability matrix to obtain a first category probability matrix corresponding to the original image, and perform inverse transformation on the second category probability matrix to obtain a second category probability matrix corresponding to the original image, where the first category probability matrix corresponding to the original image, the second category probability matrix corresponding to the original image, and the size of the original image are the same;
the restoration processing unit 540 is configured to perform restoration processing on the original image according to the first class probability matrix corresponding to the original image and the second class probability matrix corresponding to the original image.
Further, the image to be segmented is a face image; the first category is the nevus maculatus category, and the second category is the category of nevi with a set characteristic.
Further, the image to be segmented acquisition unit 510 is configured to: acquiring an original image comprising a human face; and acquiring a face image to be processed in the original image as the image to be segmented.
Further, the image to be segmented obtaining unit 510 is configured to obtain a photo collected by a camera, and cache the photo in a buffer as the original image including the face.
Further, the repair processing unit 540 includes a first thresholding subunit (not shown in fig. 5), a second thresholding subunit (not shown in fig. 5), and a repair subunit (not shown in fig. 5).
The first thresholding subunit is configured to threshold a first-class probability matrix corresponding to the original image according to a first set threshold to obtain a binary first-class threshold matrix; and/or
The second thresholding subunit is configured to threshold a second class probability matrix corresponding to the original image according to a second set threshold to obtain a binary second class threshold matrix;
the repairing subunit is configured to perform repairing processing on the image to be segmented according to the first category threshold matrix and/or the second category threshold matrix.
Further, the repairing subunit is configured to repair a pixel in the original image when the first category threshold matrix indicates that the pixel belongs to the nevus maculatus category and the second category threshold matrix indicates that it does not belong to the category of nevi with the set characteristic.
The image segmentation apparatus provided by this embodiment can execute the image segmentation methods provided by the method embodiments of the present disclosure, and has the corresponding functional modules and beneficial effects of the executed method.
Fig. 6 is a schematic structural diagram of a training apparatus for a multi-class recognition model according to an embodiment of the present disclosure, as shown in fig. 6, the multi-class recognition model according to this embodiment includes a first class recognition submodel and a second class recognition submodel, and the training apparatus for the multi-class recognition model includes a sample obtaining module 610, a first class recognition submodel training module 620, and a second class recognition submodel training module 630.
Wherein the sample obtaining module 610 is configured to obtain a training sample set, wherein a training sample includes a sample image and an annotated class probability matrix for representing a class to which each pixel in the sample image belongs, the class including a background class, a first class, and a second class, wherein the second class belongs to a sub-class of the first class;
The first class recognition submodel training module 620 includes a first labeling submodule 621 and a first training submodule 622, and is configured to train the first class recognition submodel according to the training sample set and the following submodules.
The first labeling submodule 621 is configured to reset to 0 each element indicating the background class in the labeling class probability matrix corresponding to each sample image, and reset to 1 each element indicating the first class or the second class, so as to obtain a labeled first class probability matrix corresponding to each sample image;
the first training submodule 622 is configured to determine an initialized first class recognition submodel, wherein the initialized first class recognition submodel comprises a first target layer for outputting the probability that each pixel in a target image belongs to the first class; and to train the first class recognition submodel by using a machine learning method, taking the sample images in the training sample set as the input of the initialized first class recognition submodel and taking the labeled first class probability matrix corresponding to the input sample images as the expected output of the initialized first class recognition submodel;
The second class recognition submodel training module 630 includes a second labeling submodule 631 and a second training submodule 632, and is configured to train the second class recognition submodel according to the training sample set and the following submodules.
The second labeling submodule 631 is configured to reset to a set value each element indicating the background class in the labeling class probability matrix corresponding to each sample image, reset to 0 each element indicating the first class, and reset to 1 each element indicating the second class, so as to obtain a labeled second class probability matrix corresponding to each sample image, where the set value is greater than 1;
the second training submodule 632 is configured to determine an initialized second class recognition submodel, wherein the initialized second class recognition submodel comprises a second target layer for outputting the probability that each pixel in the target image belongs to the second class; and to train the second class recognition submodel by using a machine learning method, taking the sample images in the training sample set as the input of the initialized second class recognition submodel and taking the labeled second class probability matrix corresponding to the input sample images as the expected output of the initialized second class recognition submodel.
Further, in the labeling class probability matrix, a pixel of the sample image belonging to the background class is represented by 0, a pixel belonging to the first class is represented by 1, and a pixel belonging to the second class is represented by 2.
Further, the set value is 255.
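Under the labeling convention just described (0 for background, 1 for the first class, 2 for the second class, set value 255), the two relabeling steps performed by the labeling submodules can be sketched as below; the function names are hypothetical stand-ins introduced only for illustration.

```python
import numpy as np

def remap_for_first_submodel(labels):
    # background (0) -> 0; first class (1) or second class (2) -> 1
    return np.where(labels == 0, 0, 1).astype(np.uint8)

def remap_for_second_submodel(labels, set_value=255):
    # background (0) -> set value; first class (1) -> 0; second class (2) -> 1
    out = np.full_like(labels, set_value)
    out[labels == 1] = 0
    out[labels == 2] = 1
    return out

labels = np.array([[0, 1], [2, 0]], dtype=np.uint8)
print(remap_for_first_submodel(labels))   # [[0 1] [1 0]]
print(remap_for_second_submodel(labels))  # [[255 0] [1 255]]
```

Choosing a set value such as 255, far outside {0, 1}, plausibly lets the second submodel's training loss treat background pixels as an ignore label, though the disclosure does not spell this out.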
The training apparatus for the multi-class recognition model provided by this embodiment can execute the training method for the multi-class recognition model provided by the method embodiments of the present disclosure, and has functional modules corresponding to, and the beneficial effects of, the executed method.
Referring now to FIG. 7, shown is a schematic diagram of an electronic device 700 suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, electronic device 700 may include a processing means (e.g., central processing unit, graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium described above in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the disclosed embodiments, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the disclosed embodiments, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
acquiring an image to be segmented;
inputting the image to be segmented into a pre-trained multi-class identification model, obtaining a first class probability matrix according to output result information of a first target layer of the multi-class identification model, and obtaining a second class probability matrix according to output result information of a second target layer of the multi-class identification model, wherein the second class belongs to a sub-class of the first class, the first class probability matrix, the second class probability matrix and the image to be segmented have the same size, elements of the first class probability matrix represent probability values of pixels at corresponding positions in the image to be segmented belonging to the first class, and elements of the second class probability matrix represent probability values of pixels at corresponding positions in the image to be segmented belonging to the second class.
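A minimal sketch of this two-matrix inference step is shown below. The model object and its named layer outputs are hypothetical stand-ins, since the disclosure does not fix the network architecture or its programming interface.

```python
import numpy as np

def segment(image, model):
    """Run the multi-class recognition model once and read both target layers."""
    outputs = model(image)                         # single forward pass
    first_probs = outputs["first_target_layer"]    # H x W, P(pixel belongs to first class)
    second_probs = outputs["second_target_layer"]  # H x W, P(pixel belongs to second class)
    # both matrices must match the spatial size of the image to be segmented
    assert first_probs.shape == second_probs.shape == image.shape[:2]
    return first_probs, second_probs

class DummyModel:
    """Stand-in for a trained model exposing two per-pixel output layers."""
    def __call__(self, image):
        h, w = image.shape[:2]
        return {"first_target_layer": np.full((h, w), 0.8),
                "second_target_layer": np.full((h, w), 0.3)}

first, second = segment(np.zeros((4, 4, 3)), DummyModel())
print(first.shape, second.shape)  # (4, 4) (4, 4)
```

Because the second class is a sub-class of the first, the two matrices can afterwards be compared per pixel, as in the repair processing described above.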
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
According to one or more embodiments of the present disclosure, in the image segmentation method, the image to be segmented is a face image; the first category is a nevus maculatus category, and the second category is a nevus category with set characteristics.
According to one or more embodiments of the present disclosure, in the image segmentation method, the operation of acquiring an image to be segmented includes:
acquiring an original image comprising a human face;
and acquiring a face image to be processed in the original image as the image to be segmented.
According to one or more embodiments of the present disclosure, in the image segmentation method, the operation of acquiring an original image including a human face includes: and acquiring a photo collected by a camera, and caching the photo into a buffer area to be used as the original image comprising the human face.
According to one or more embodiments of the present disclosure, in the image segmentation method, after obtaining the first category probability matrix and the second category probability matrix corresponding to the image to be segmented, the method further includes:
performing inverse transformation on the first category probability matrix to obtain a first category probability matrix corresponding to the original image, and performing inverse transformation on the second category probability matrix to obtain a second category probability matrix corresponding to the original image, wherein the first category probability matrix corresponding to the original image and the second category probability matrix corresponding to the original image each have the same size as the original image;
and repairing the original image according to the first class probability matrix corresponding to the original image and the second class probability matrix corresponding to the original image.
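The inverse transformation can be illustrated as a resize of each probability matrix back to the original image size. This sketch assumes nearest-neighbour interpolation and ignores any face-crop offset that a full pipeline would also have to undo; both are assumptions, as the disclosure does not fix the transformation details.

```python
import numpy as np

def inverse_transform(prob_matrix, original_hw):
    """Nearest-neighbour resize of an H x W probability matrix to the
    original image's height and width (illustrative assumption)."""
    h0, w0 = original_hw
    h, w = prob_matrix.shape
    rows = np.arange(h0) * h // h0   # source row for each output row
    cols = np.arange(w0) * w // w0   # source column for each output column
    return prob_matrix[np.ix_(rows, cols)]

p = np.array([[0.2, 0.8], [0.6, 0.4]])
print(inverse_transform(p, (4, 4)).shape)  # (4, 4)
```

After this step, the two resized matrices align pixel-for-pixel with the original image, so the repair processing can index the original image directly.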
According to one or more embodiments of the present disclosure, in the image segmentation method, the operation of performing a repairing process on the original image according to the first class probability matrix corresponding to the original image and the second class probability matrix corresponding to the original image includes:
thresholding a first class probability matrix corresponding to the original image according to a first set threshold value to obtain a binary first class threshold value matrix; and/or
Thresholding a second class probability matrix corresponding to the original image according to a second set threshold value to obtain a binary second class threshold value matrix;
and repairing the image to be segmented according to the first category threshold matrix and/or the second category threshold matrix.
According to one or more embodiments of the present disclosure, in the image segmentation method, the operation of performing the repairing process on the image to be segmented according to the first category threshold matrix and/or the second category threshold matrix includes:
and performing repair processing on a pixel in the original image for which the first category threshold matrix indicates that the pixel is of the nevus maculatus category and the second category threshold matrix indicates that the pixel is not of the nevus category with set characteristics.
According to one or more embodiments of the present disclosure, in the image segmentation method, the multi-class recognition model includes a first class recognition submodel and a second class recognition submodel, and is obtained by training through the following steps:
acquiring a training sample set, wherein the training sample comprises a sample image and an annotated class probability matrix used for representing the class to which each pixel in the sample image belongs, the class to which the pixel belongs comprises a background class, a first class and a second class, and the second class belongs to a sub-class of the first class;
training the first class recognition submodel according to the training sample set and the following steps:
resetting to 0 each element indicating the background class in the labeling class probability matrix corresponding to each sample image, and resetting to 1 each element indicating the first class or the second class, to obtain a labeled first class probability matrix corresponding to each sample image;
determining an initialized first class identification submodel, wherein the initialized first class identification submodel comprises a first target layer for outputting a probability that each pixel in a target image belongs to a first class; by utilizing a machine learning method, taking the sample images in the training sample set as the input of the initialized first-class recognition submodel, taking the labeled first-class probability matrix corresponding to the input sample images as the expected output of the initialized first-class recognition submodel, and training to obtain the first-class recognition submodel;
training the second category identification submodel according to the training sample set and the following steps:
resetting to a set value each element indicating the background class in the labeling class probability matrix corresponding to each sample image, resetting to 0 each element indicating the first class, and resetting to 1 each element indicating the second class, to obtain a labeled second class probability matrix corresponding to each sample image, wherein the set value is greater than 1;
determining an initialized second class identification submodel, wherein the initialized second class identification submodel comprises a second target layer for outputting a probability that each pixel in a target image belongs to a second class; and training to obtain the second class recognition submodel by using a machine learning method and taking the sample images in the training sample set as the input of the initialized second class recognition submodel and taking the labeled second class probability matrix corresponding to the input sample images as the expected output of the initialized second class recognition submodel.
According to one or more embodiments of the present disclosure, in the image segmentation method, in the labeled class probability matrix, a pixel belonging to a class of a background class in a sample image is represented by 0, a pixel belonging to a class of a first class in the sample image is represented by 1, and a pixel belonging to a class of a second class in the sample image is represented by 2.
According to one or more embodiments of the present disclosure, in the image segmentation method, the set value is 255.
According to one or more embodiments of the present disclosure, in the image segmentation apparatus, the image to be segmented is a face image;
the first category is a nevus maculatus category, and the second category is a nevus category with set characteristics.
According to one or more embodiments of the present disclosure, in the image segmentation apparatus, the image to be segmented acquiring unit is configured to:
acquiring an original image comprising a human face;
and acquiring a face image to be processed in the original image as the image to be segmented.
According to one or more embodiments of the present disclosure, in the image segmentation apparatus, the image to be segmented acquiring unit is configured to: and acquiring a photo collected by a camera, and caching the photo into a buffer area to be used as the original image comprising the human face.
According to one or more embodiments of the present disclosure, in the image segmentation apparatus, the apparatus further includes an inverse transform unit and a repair processing unit;
the inverse transformation unit is configured to, after the first category probability matrix and the second category probability matrix corresponding to the image to be segmented are obtained, perform inverse transformation on the first category probability matrix to obtain a first category probability matrix corresponding to the original image, and perform inverse transformation on the second category probability matrix to obtain a second category probability matrix corresponding to the original image, wherein the first category probability matrix corresponding to the original image and the second category probability matrix corresponding to the original image each have the same size as the original image;
and the repair processing unit is configured to perform repair processing on the original image according to the first class probability matrix corresponding to the original image and the second class probability matrix corresponding to the original image.
According to one or more embodiments of the present disclosure, in the image segmentation apparatus, the repair processing unit includes a first thresholding subunit, a second thresholding subunit, and a repair subunit;
the first thresholding subunit is used for thresholding a first class probability matrix corresponding to the original image according to a first set threshold to obtain a binary first class threshold matrix; and/or
The second thresholding subunit is used for thresholding a second category probability matrix corresponding to the original image according to a second set threshold to obtain a binary second category threshold matrix;
and the repairing subunit is used for repairing the image to be segmented according to the first category threshold matrix and/or the second category threshold matrix.
According to one or more embodiments of the present disclosure, in the image segmentation apparatus, the repair subunit is configured to:
perform repair processing on a pixel in the original image for which the first category threshold matrix indicates that the pixel is of the nevus maculatus category and the second category threshold matrix indicates that the pixel is not of the nevus category with set characteristics.
According to one or more embodiments of the present disclosure, in the image segmentation apparatus, the multi-class recognition model in the class recognition unit includes a first class recognition sub-model and a second class recognition sub-model, and the multi-class recognition model is obtained by training through the following modules:
the system comprises a sample acquisition module, a comparison module and a comparison module, wherein the sample acquisition module is used for acquiring a training sample set, the training sample comprises a sample image and an annotation class probability matrix used for representing the class to which each pixel in the sample image belongs, the class to which each pixel belongs comprises a background class, a first class and a second class, and the second class belongs to a sub-class of the first class;
the first class recognition submodel training module is used for obtaining the first class recognition submodel by training according to the training sample set and the following submodules:
the first labeling submodule is configured to reset to 0 each element indicating the background class in the labeling class probability matrix corresponding to each sample image, and reset to 1 each element indicating the first class or the second class, to obtain a labeled first class probability matrix corresponding to each sample image;
a first training submodule, configured to determine an initialized first class recognition submodel, wherein the initialized first class recognition submodel comprises a first target layer for outputting the probability that each pixel in a target image belongs to the first class; and to train the first class recognition submodel by using a machine learning method, taking the sample images in the training sample set as the input of the initialized first class recognition submodel and taking the labeled first class probability matrix corresponding to the input sample images as the expected output of the initialized first class recognition submodel;
the second category identification submodel training module is used for obtaining the second category identification submodel through training according to the training sample set and the following submodules:
the second labeling submodule is configured to reset to a set value each element indicating the background class in the labeling class probability matrix corresponding to each sample image, reset to 0 each element indicating the first class, and reset to 1 each element indicating the second class, to obtain a labeled second class probability matrix corresponding to each sample image, wherein the set value is greater than 1;
a second training submodule, configured to determine an initialized second class recognition submodel, wherein the initialized second class recognition submodel comprises a second target layer for outputting the probability that each pixel in the target image belongs to the second class; and to train the second class recognition submodel by using a machine learning method, taking the sample images in the training sample set as the input of the initialized second class recognition submodel and taking the labeled second class probability matrix corresponding to the input sample images as the expected output of the initialized second class recognition submodel.
According to one or more embodiments of the present disclosure, in the image segmentation apparatus, in the labeled class probability matrix, a pixel belonging to a class of a background class in the sample image is represented by 0, a pixel belonging to a class of a first class in the sample image is represented by 1, and a pixel belonging to a class of a second class in the sample image is represented by 2.
According to one or more embodiments of the present disclosure, in the image segmentation apparatus, the set value is 255.
The foregoing description presents only preferred embodiments of the present disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure in the embodiments of the present disclosure is not limited to the particular combinations of the above-described features, and also encompasses other technical solutions formed by any combination of the above-described features or their equivalents without departing from the concept of the present disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (12)

1. An image segmentation method, comprising:
acquiring an image to be segmented;
inputting the image to be segmented into a pre-trained multi-class identification model, obtaining a first class probability matrix according to output result information of a first target layer of the multi-class identification model, and obtaining a second class probability matrix according to output result information of a second target layer of the multi-class identification model, wherein the second class belongs to a sub-class of the first class, the first class probability matrix, the second class probability matrix and the image to be segmented have the same size, elements of the first class probability matrix represent probability values of pixels at corresponding positions in the image to be segmented belonging to the first class, and elements of the second class probability matrix represent probability values of pixels at corresponding positions in the image to be segmented belonging to the second class;
the multi-class recognition model comprises a first class recognition submodel and a second class recognition submodel, and is obtained by training the following steps:
acquiring a training sample set, wherein the training sample comprises a sample image and an annotated class probability matrix used for representing the class to which each pixel in the sample image belongs, the class to which the pixel belongs comprises a background class, a first class and a second class, and the second class belongs to a sub-class of the first class;
training the first class recognition submodel according to the training sample set and the following steps:
resetting to 0 each element indicating the background class in the labeling class probability matrix corresponding to each sample image, and resetting to 1 each element indicating the first class or the second class, to obtain a labeled first class probability matrix corresponding to each sample image;
determining an initialized first class identification submodel, wherein the initialized first class identification submodel comprises a first target layer for outputting a probability that each pixel in a target image belongs to a first class; by utilizing a machine learning method, taking the sample images in the training sample set as the input of the initialized first-class recognition submodel, taking the labeled first-class probability matrix corresponding to the input sample images as the expected output of the initialized first-class recognition submodel, and training to obtain the first-class recognition submodel;
training the second category identification submodel according to the training sample set and the following steps:
resetting to a set value each element indicating the background class in the labeling class probability matrix corresponding to each sample image, resetting to 0 each element indicating the first class, and resetting to 1 each element indicating the second class, to obtain a labeled second class probability matrix corresponding to each sample image, wherein the set value is greater than 1;
determining an initialized second class identification submodel, wherein the initialized second class identification submodel comprises a second target layer for outputting a probability that each pixel in a target image belongs to a second class; and training to obtain the second class recognition submodel by using a machine learning method and taking the sample images in the training sample set as the input of the initialized second class recognition submodel and taking the labeled second class probability matrix corresponding to the input sample images as the expected output of the initialized second class recognition submodel.
2. The method according to claim 1, wherein the image to be segmented is a face image;
the first category is a nevus maculatus category, and the second category is a nevus category with set characteristics.
3. The method of claim 2, wherein the operation of acquiring an image to be segmented comprises:
acquiring an original image comprising a human face;
and acquiring a face image to be processed in the original image as the image to be segmented.
4. The method of claim 3, wherein the operation of obtaining an original image comprising a human face comprises:
acquiring a photo captured by a camera, and caching the photo in a buffer as the original image comprising the human face.
5. The method according to claim 3, further comprising, after obtaining the first class probability matrix and the second class probability matrix corresponding to the image to be segmented:
performing inverse transformation on the first class probability matrix to obtain a first class probability matrix corresponding to the original image, and performing inverse transformation on the second class probability matrix to obtain a second class probability matrix corresponding to the original image, wherein the first class probability matrix corresponding to the original image and the second class probability matrix corresponding to the original image each have the same size as the original image;
and repairing the original image according to the first class probability matrix corresponding to the original image and the second class probability matrix corresponding to the original image.
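The inverse transformation of claim 5 can be sketched as pasting the resized probability matrix back into the original image grid. The patent does not specify the forward transform, so this assumes a crop-and-resize of the face region; the function name, the crop-box convention, and the nearest-neighbor resampling are all illustrative choices.

```python
import numpy as np


def inverse_transform(prob_matrix, crop_box, original_shape):
    """Map a probability matrix computed on a cropped and resized face
    region back onto the original image grid. Pixels outside the face
    crop are assigned probability 0 (nearest-neighbor resampling)."""
    top, left, bottom, right = crop_box  # face-crop region in the original image
    crop_h, crop_w = bottom - top, right - left
    h, w = prob_matrix.shape
    # Nearest-neighbor source indices for resizing (h, w) back to the crop size.
    rows = np.arange(crop_h) * h // crop_h
    cols = np.arange(crop_w) * w // crop_w
    resized = prob_matrix[np.ix_(rows, cols)]
    full = np.zeros(original_shape, dtype=prob_matrix.dtype)
    full[top:bottom, left:right] = resized
    return full
```

The result has the same size as the original image, as claim 5 requires, so the repair step can index it with original-image pixel coordinates.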
6. The method according to claim 5, wherein the operation of performing the repairing process on the original image according to the first class probability matrix corresponding to the original image and the second class probability matrix corresponding to the original image comprises:
thresholding a first class probability matrix corresponding to the original image according to a first set threshold value to obtain a binary first class threshold value matrix; and/or
thresholding a second class probability matrix corresponding to the original image according to a second set threshold value to obtain a binary second class threshold value matrix;
and repairing the image to be segmented according to the first category threshold matrix and/or the second category threshold matrix.
7. The method according to claim 6, wherein the operation of performing the repairing process on the image to be segmented according to the first category threshold matrix and/or the second category threshold matrix comprises:
performing repair processing on the pixels of the original image for which the first category threshold matrix indicates the speckle nevus category and the second category threshold matrix indicates that the pixel does not belong to the nevus category having set characteristics.
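The thresholding of claim 6 and the repair condition of claim 7 combine into a single boolean mask. The sketch below is one way to read the claims; the threshold values 0.5 and the function name are assumptions, and the actual repair operation (e.g. inpainting or blending) is outside the claims' detail.

```python
import numpy as np


def repair_mask(first_prob, second_prob, t1=0.5, t2=0.5):
    """Mark for repair the pixels that the first-class threshold matrix
    places in the speckle nevus category but the second-class threshold
    matrix does NOT place in the nevus category having set
    characteristics (claims 6-7)."""
    first_thresh = first_prob >= t1    # binary first category threshold matrix
    second_thresh = second_prob >= t2  # binary second category threshold matrix
    return first_thresh & ~second_thresh
```

Pixels where the mask is True would be repaired, while nevi of the second (set-characteristic) category are left untouched, which matches the claims' distinction between the two nevus classes.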
8. The method according to claim 1, wherein the labeled class probability matrix represents pixels in the sample image belonging to the background class with 0, pixels belonging to the first class with 1, and pixels belonging to the second class with 2.
9. The method of claim 1, wherein the set value is 255.
10. An image segmentation apparatus, comprising:
the image to be segmented acquiring unit is used for acquiring an image to be segmented;
the class identification unit is used for inputting the image to be segmented into a pre-trained multi-class identification model, obtaining a first class probability matrix according to output result information of a first target layer of the multi-class identification model, and obtaining a second class probability matrix according to output result information of a second target layer of the multi-class identification model, wherein the second class belongs to a sub-class of the first class, the first class probability matrix, the second class probability matrix and the image to be segmented have the same size, elements of the first class probability matrix represent probability values that pixels at corresponding positions in the image to be segmented belong to the first class, and elements of the second class probability matrix represent probability values that pixels at corresponding positions in the image to be segmented belong to the second class;
the multi-class recognition model comprises a first class recognition submodel and a second class recognition submodel, and is obtained by training the following steps:
acquiring a training sample set, wherein the training sample comprises a sample image and an annotated class probability matrix used for representing the class to which each pixel in the sample image belongs, the class to which the pixel belongs comprises a background class, a first class and a second class, and the second class belongs to a sub-class of the first class;
training the first class recognition submodel according to the training sample set and the following steps:
resetting, in the labeled class probability matrix corresponding to each sample image, elements whose value indicates the background class to 0, and elements whose value indicates the first class or the second class to 1, to obtain a labeled first-class probability matrix corresponding to each sample image;
determining an initialized first-class recognition submodel, wherein the initialized first-class recognition submodel comprises a first target layer for outputting the probability that each pixel in a target image belongs to the first class; and using a machine learning method, taking the sample images in the training sample set as the input of the initialized first-class recognition submodel and the labeled first-class probability matrices corresponding to the input sample images as the expected output of the initialized first-class recognition submodel, training to obtain the first-class recognition submodel;
training the second-class recognition submodel on the training sample set according to the following steps:
resetting, in the labeled class probability matrix corresponding to each sample image, elements whose value indicates the background class to a set value, elements whose value indicates the first class to 0, and elements whose value indicates the second class to 1, to obtain a labeled second-class probability matrix corresponding to each sample image, wherein the set value is greater than 1;
determining an initialized second-class recognition submodel, wherein the initialized second-class recognition submodel comprises a second target layer for outputting the probability that each pixel in a target image belongs to the second class; and using a machine learning method, taking the sample images in the training sample set as the input of the initialized second-class recognition submodel and the labeled second-class probability matrices corresponding to the input sample images as the expected output of the initialized second-class recognition submodel, training to obtain the second-class recognition submodel.
11. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to carry out the method of any one of claims 1-9.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 9.
CN201910621621.4A 2019-07-10 2019-07-10 Image segmentation method, image segmentation device, electronic equipment and storage medium Active CN110349161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910621621.4A CN110349161B (en) 2019-07-10 2019-07-10 Image segmentation method, image segmentation device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110349161A CN110349161A (en) 2019-10-18
CN110349161B true CN110349161B (en) 2021-11-23

Family

ID=68174765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910621621.4A Active CN110349161B (en) 2019-07-10 2019-07-10 Image segmentation method, image segmentation device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110349161B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837789B (en) * 2019-10-31 2023-01-20 北京奇艺世纪科技有限公司 Method and device for detecting object, electronic equipment and medium
CN111046944A (en) * 2019-12-10 2020-04-21 北京奇艺世纪科技有限公司 Method and device for determining object class, electronic equipment and storage medium
CN111310815A (en) * 2020-02-07 2020-06-19 北京字节跳动网络技术有限公司 Image recognition method and device, electronic equipment and storage medium
CN114565759A (en) * 2022-02-22 2022-05-31 北京百度网讯科技有限公司 Image semantic segmentation model optimization method and device, electronic equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN107203999A (en) * 2017-04-28 2017-09-26 北京航空航天大学 Automatic skin lens image segmentation method based on fully convolutional neural networks
CN108010021A (en) * 2017-11-30 2018-05-08 上海联影医疗科技有限公司 A magnetic resonance imaging system and method
US10096122B1 (en) * 2017-03-28 2018-10-09 Amazon Technologies, Inc. Segmentation of object image data from background image data
CN109784283A (en) * 2019-01-21 2019-05-21 陕西师范大学 Remote sensing image target extraction method based on a scene recognition task
CN109886986A (en) * 2019-01-23 2019-06-14 北京航空航天大学 Skin lens image segmentation method based on multi-branch convolutional neural networks

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN104346620B (en) * 2013-07-25 2017-12-29 佳能株式会社 To the method and apparatus and image processing system of the pixel classifications in input picture
US9990472B2 (en) * 2015-03-23 2018-06-05 Ohio State Innovation Foundation System and method for segmentation and automated measurement of chronic wound images
CN108256476A (en) * 2018-01-17 2018-07-06 百度在线网络技术(北京)有限公司 For identifying the method and apparatus of fruits and vegetables
CN109344752B (en) * 2018-09-20 2019-12-10 北京字节跳动网络技术有限公司 Method and apparatus for processing mouth image

Non-Patent Citations (2)

Title
Automated Segmentation of Left Ventricle Using Local and Global Intensity Based Active Contour and Dynamic Programming; Dharanibai, G. et al.; International Journal of Automation and Computing; 2018; pp. 673-688 *
A Survey of Content-Based Image Segmentation Methods; Jiang Feng et al.; Journal of Software; January 2017; pp. 160-183 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant