CN112419331A - Image segmentation method, device, terminal, storage medium and processor - Google Patents

Image segmentation method, device, terminal, storage medium and processor

Info

Publication number
CN112419331A
CN112419331A (application CN202011268751.3A)
Authority
CN
China
Prior art keywords
segmented
image
target object
target
contour feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011268751.3A
Other languages
Chinese (zh)
Inventor
钟成堡
刘志昌
张亚昇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN202011268751.3A
Publication of CN112419331A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image segmentation method, device, terminal, storage medium and processor for multiple target objects of the same type. The method comprises the following steps: collecting an image to be segmented, the image containing at least one complete image of a target object to be segmented among the multiple target objects of the same type; extracting the contour feature of the target object to be segmented from the image to be segmented; determining, according to the extracted contour feature, a region of the image to be segmented that contains the contour feature of the target object, this region serving as the individual target region of the target object to be segmented; and performing image segmentation according to the individual target region and a pre-trained image segmentation model to determine the segmented image of the target object to be segmented. With this scheme, every complete target object can be segmented individually, and a single chosen target object can be segmented on its own, so the segmentation capability is improved.

Description

Image segmentation method, device, terminal, storage medium and processor
Technical Field
The invention belongs to the field of computer technology and relates to an image segmentation method, device, terminal, storage medium and processor; in particular, to an image segmentation method, device, terminal, storage medium and processor for multiple target objects of the same type; and more particularly, to a deep-learning-based method, device, terminal, storage medium and processor for segmenting multiple target objects of the same type.
Background
With the rapid development of science and technology, industrial automation production lines are becoming increasingly widespread, and more and more industrial production lines use machine vision technology in place of human workers, for tasks such as target detection, defect detection and target object segmentation. For target object segmentation, different labels must first be defined for different target objects, after which the target objects are segmented using techniques such as image segmentation.
In related schemes, deep-learning segmentation networks mainly address multi-target, multi-class image segmentation. Within a single class, when several complete target objects are present in one image at the same time, all targets of that class are in effect segmented out together, and no individual target can be segmented on its own.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention aims to provide an image segmentation method, device, terminal, storage medium and processor for multiple target objects of the same type, to solve the following problem: in related schemes, when several complete target objects of the same type are present at once, different labels are defined for different target objects and the targets are then segmented by an image segmentation network, so all target objects are segmented out at the same time, no complete target object can be segmented individually, and the segmentation capability is weak. The invention enables every complete target object to be segmented individually, and a single chosen target object to be segmented on its own, thereby improving the segmentation capability.
The invention provides an image segmentation method for multiple target objects of the same type, comprising the following steps: collecting an image to be segmented, the image containing at least one complete image of a target object to be segmented among the multiple target objects of the same type; extracting the contour feature of the target object to be segmented in the image to be segmented; determining, according to the extracted contour feature of the target object to be segmented, a region of the image to be segmented that contains the contour feature of the target object, the region serving as the individual target region of the target object to be segmented; and performing image segmentation according to the individual target region and a pre-trained image segmentation model to determine the segmented image of the target object to be segmented.
In some embodiments, the contour feature of a target object to be segmented in the image to be segmented includes: an external contour feature or an internal contour feature of the target object to be segmented in the image to be segmented. The image segmentation model comprises a pre-trained image segmentation model.
In some embodiments, determining, according to the extracted contour feature of one target object to be segmented in the image to be segmented, a region in the image to be segmented that includes the contour feature of the one target object to be segmented includes: determining coordinate position information of a target object to be segmented according to the extracted contour feature of the target object to be segmented in the image to be segmented; determining an outer frame of the target object to be segmented according to the coordinate position information of the target object to be segmented; and determining a region containing the contour feature of the target object to be segmented in the image to be segmented according to the external frame of the target object to be segmented.
In some embodiments, determining, according to a bounding box of the one target object to be segmented, a region in the image to be segmented that includes a contour feature of the one target object to be segmented includes: determining the external frames of all the target objects in the image to be segmented according to the mode of determining the external frame of the target object to be segmented; calculating the distance between the center coordinate of the circumscribed frame of each target object and the center coordinate of the original image of the image to be segmented; according to the set relationship between the target object to be segmented and the original image of the image to be segmented, selecting an external frame of the target object with the distance conforming to the set relationship from the distance between the center coordinate of the external frame of each target object and the center coordinate of the original image of the image to be segmented as an area containing the contour feature of the target object to be segmented in the image to be segmented.
In some embodiments, the image segmentation is performed according to the single target region and a pre-trained image segmentation model, and determining a segmented image of the target object to be segmented includes: inputting the single target area into a pre-trained image segmentation model to obtain an image Mask of the target object to be segmented; converting the pixel coordinates of the image Mask of the target object to be segmented into coordinates in the original image of the image to be segmented; and fusing the image Mask of the target object to be segmented with the original image of the image to be segmented to obtain the segmented image of the target object to be segmented.
In accordance with the above method, another aspect of the present invention provides an image segmentation apparatus for multi-target objects of the same type, comprising: an acquisition unit configured to acquire an image to be segmented; the images to be segmented at least have a complete image of one target object to be segmented in the multi-target objects of the same type; an extracting unit configured to extract a contour feature of the one target object to be segmented in the image to be segmented; the determining unit is configured to determine a region, which contains the contour feature of one target object to be segmented, in the image to be segmented as an individual target region of the one target object to be segmented according to the extracted contour feature of the one target object to be segmented in the image to be segmented; the determining unit is further configured to perform image segmentation according to the single target area and a pre-trained image segmentation model, and determine a segmented image of the target object to be segmented.
In some embodiments, the contour feature of a target object to be segmented in the image to be segmented includes: an external contour feature or an internal contour feature of the target object to be segmented in the image to be segmented. The image segmentation model comprises a pre-trained image segmentation model.
In some embodiments, the determining unit, according to the extracted contour feature of one target object to be segmented in the image to be segmented, determines a region in the image to be segmented, where the region includes the contour feature of the one target object to be segmented, including: determining coordinate position information of a target object to be segmented according to the extracted contour feature of the target object to be segmented in the image to be segmented; determining an outer frame of the target object to be segmented according to the coordinate position information of the target object to be segmented; and determining a region containing the contour feature of the target object to be segmented in the image to be segmented according to the external frame of the target object to be segmented.
In some embodiments, the determining unit, according to the bounding box of the one target object to be segmented, determines the region in the image to be segmented, which includes the contour feature of the one target object to be segmented, including: determining the external frames of all the target objects in the image to be segmented according to the mode of determining the external frame of the target object to be segmented; calculating the distance between the center coordinate of the circumscribed frame of each target object and the center coordinate of the original image of the image to be segmented; according to the set relationship between the target object to be segmented and the original image of the image to be segmented, selecting an external frame of the target object with the distance conforming to the set relationship from the distance between the center coordinate of the external frame of each target object and the center coordinate of the original image of the image to be segmented as an area containing the contour feature of the target object to be segmented in the image to be segmented.
In some embodiments, the determining unit, performing image segmentation according to the single target region and a pre-trained image segmentation model, and determining a segmented image of the target object to be segmented, includes: inputting the single target area into a pre-trained image segmentation model to obtain an image Mask of the target object to be segmented; converting the pixel coordinates of the image Mask of the target object to be segmented into coordinates in the original image of the image to be segmented; and fusing the image Mask of the target object to be segmented with the original image of the image to be segmented to obtain the segmented image of the target object to be segmented.
In accordance with the above apparatus, a further aspect of the present invention provides a terminal, including: the image segmentation device for the same type of multi-target objects is described above.
In accordance with the foregoing method, a further aspect of the present invention provides a storage medium, where the storage medium includes a stored program, and when the program runs, the storage medium is controlled to execute the image segmentation method for multiple target objects of the same type.
In accordance with the above method, a further aspect of the present invention provides a processor for executing a program, wherein the program executes the above image segmentation method for multi-target objects of the same type.
Therefore, according to the scheme of the invention, the contour is extracted firstly, then the coordinate position of the target object is judged according to the contour characteristics, the area where the target object is located is determined according to the coordinate position, and finally the image segmentation is carried out, so that all complete target objects can be segmented independently, and only a certain single target object can be selected to be segmented, and the segmentation capability is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a flowchart illustrating an embodiment of a method for image segmentation of multiple target objects of the same type according to the present invention;
fig. 2 is a schematic flow chart of an embodiment of determining a region containing a contour feature of a target object to be segmented in the image to be segmented according to the extracted contour feature of the target object to be segmented in the image to be segmented in the method of the present invention;
fig. 3 is a schematic flow chart illustrating an embodiment of determining a region including a contour feature of a target object to be segmented in the image to be segmented according to an outer bounding box of the target object to be segmented in the method of the present invention;
FIG. 4 is a flowchart illustrating an embodiment of determining a segmented image of the target object to be segmented according to the image segmentation performed by the single target region and the pre-trained image segmentation model in the method of the present invention;
FIG. 5 is a schematic structural diagram illustrating an embodiment of an image segmentation apparatus for multi-target objects of the same type according to the present invention;
FIG. 6 is a schematic diagram of an embodiment of a same type of multi-target object;
FIG. 7 is a schematic structural view of another embodiment of the same type of multi-target object;
FIG. 8 is a schematic diagram illustrating a segmentation process of an embodiment of the same type of multi-target objects.
The reference numbers in the embodiments of the present invention are as follows, in combination with the accompanying drawings:
10-a first target object; 20-a second target object; 30-third target object.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
According to an embodiment of the present invention, a method for segmenting images of multiple target objects of the same type is provided, as shown in fig. 1, which is a schematic flow chart of an embodiment of the method of the present invention. The image segmentation method of the same type of multi-target object can comprise the following steps: step S110 to step S140.
At step S110, an image to be segmented is acquired. The image to be segmented contains at least one complete image of a target object to be segmented among the multiple target objects of the same type. That is, image acquisition is performed. During acquisition, attention must be paid to the definition (sharpness) of the target object, and the image must contain at least one complete target; otherwise no target object can be segmented.
At step S120, the contour feature of the target object to be segmented is extracted from the image to be segmented. That is, contour extraction is performed on the target object in the captured image (i.e. the original image). The contour feature of a target object to be segmented includes an external contour feature or an internal contour feature of the target object in the image to be segmented, where the external contour is the object's outer boundary and the internal contour is the boundary of any interior region of the object. In other words, either the outer contour or the inner contour of the target object can be obtained by the contour extraction method.
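As an illustration of this contour-extraction step, the following is a minimal sketch, not the patent's implementation: it marks foreground pixels of a binary image that touch the background. In practice a library routine such as OpenCV's findContours would typically be used; all names here are illustrative.

```python
def contour_pixels(img):
    """Return (row, col) foreground pixels of a binary 2D-list image
    that touch the background (a simple external-contour criterion)."""
    h, w = len(img), len(img[0])
    contour = []
    for r in range(h):
        for c in range(w):
            if img[r][c] != 1:
                continue
            # A pixel is on the contour if any 4-neighbour is background
            # (or the pixel lies on the image border).
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            on_edge = any(
                not (0 <= nr < h and 0 <= nc < w) or img[nr][nc] == 0
                for nr, nc in neighbours
            )
            if on_edge:
                contour.append((r, c))
    return contour

# A 5x5 image with a 3x3 square object: every object pixel except the
# centre lies on the contour.
img = [[0] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        img[r][c] = 1
print(len(contour_pixels(img)))  # 8 boundary pixels around the centre
```

The same traversal with the neighbour test inverted (foreground pixels adjacent to an interior hole) would yield an internal contour.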
At step S130, according to the extracted contour feature of one target object to be segmented in the image to be segmented, determining a region containing the contour feature of the one target object to be segmented in the image to be segmented as an individual target region of the one target object to be segmented.
In step S140, image segmentation is performed according to the single target region and a pre-trained image segmentation model, and a segmented image of the target object to be segmented is determined. Wherein the image segmentation model comprises: and (4) pre-training an image segmentation model.
Thus, by first extracting the contour, then determining the coordinate position of the target object from the contour features, determining the region where the target object is located from that coordinate position, and finally performing image segmentation, every complete target object can be segmented individually and a single target object can be chosen for segmentation. One or more individual targets of interest in the image can be segmented directly, interference from other target objects is removed, and the segmentation speed of the model is improved.
In some embodiments, with reference to a flowchart of an embodiment of determining a region including a contour feature of a target object to be segmented in the image to be segmented according to the extracted contour feature of the target object to be segmented in the image to be segmented shown in fig. 2, a specific process of determining a region including a contour feature of a target object to be segmented in the image to be segmented according to the extracted contour feature of the target object to be segmented in the image to be segmented in step S130 may include: step S210 to step S230.
Step S210, determining coordinate position information of a target object to be segmented according to the extracted contour feature of the target object to be segmented in the image to be segmented.
Step S220, determining an external frame of the target object to be segmented according to the coordinate position information of the target object to be segmented.
Step S230, determining a region containing the contour feature of the target object to be segmented in the image to be segmented according to the outer frame of the target object to be segmented.
Namely, according to the extracted contour features, the coordinate position information of the target object is judged, and an area only containing a complete single target object is obtained, namely the single target area is obtained. Specifically, coordinate position information of the target object is obtained through the target contour, and a minimum circumscribed rectangle frame or a minimum circumscribed circle frame of the target object is drawn according to the coordinate position information of the target object. The circumscribed rectangle frame or the circumscribed circle frame of the target object may be determined according to the shape of the target object, and are herein collectively referred to as the circumscribed frame. In which case the smallest bounding box contains only one complete individual object. Of course, by performing this operation on all the target objects in the image, the bounding boxes of all the target objects can be obtained.
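The bounding-box construction described in the steps above can be sketched as follows. This is an illustrative approximation with hypothetical names, not the patent's method: the axis-aligned box is exact, while the enclosing circle simply takes the box center and the farthest contour point as radius.

```python
import math

def bounding_box(points):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max)
    of a list of contour points (x, y)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def enclosing_circle(points):
    """Approximate enclosing circle: box centre as the circle centre,
    maximum distance to any contour point as the radius."""
    x0, y0, x1, y1 = bounding_box(points)
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    r = max(math.hypot(x - cx, y - cy) for x, y in points)
    return (cx, cy), r

pts = [(2, 3), (8, 3), (8, 9), (2, 9)]   # corners of a square contour
print(bounding_box(pts))                  # (2, 3, 8, 9)
print(enclosing_circle(pts))              # centre (5.0, 6.0), r = half diagonal
```

Whether a rectangular or circular frame fits better depends, as the text notes, on the shape of the target object.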
In some embodiments, with reference to the flowchart of an embodiment of determining, according to the bounding box of the one target object to be segmented, the region including the contour feature of the one target object to be segmented in the image to be segmented in the method shown in fig. 3, a specific process of determining, according to the bounding box of the one target object to be segmented, the region including the contour feature of the one target object to be segmented in the image to be segmented in step S230 may include: step S310 to step S330.
Step S310, determining the external frames of all the target objects in the image to be segmented according to the mode of determining the external frame of the target object to be segmented.
Step S320, calculating a distance between the center coordinate of the circumscribed frame of each target object and the center coordinate of the original image of the image to be segmented.
Step S330, according to the set relationship between the one target object to be segmented and the original image of the image to be segmented, selecting, from the distance between the center coordinate of the circumscribed frame of each target object and the center coordinate of the original image of the image to be segmented, the circumscribed frame of the one target object whose distance matches the set relationship, as the region in the image to be segmented, which includes the contour feature of the one target object to be segmented.
That is, when selecting an individual target object, the selection can be made by calculating the distance between the center of each target object and the center of the image. Specifically, let the height and width of the original image be height and width respectively, so that the center of the original image is (height/2, width/2); let the centers of the bounding boxes of the target objects be (x1, y1), (x2, y2), (x3, y3), ... with radii r1, r2, r3, .... To ensure that each bounding box contains a complete target object, the radii R1, R2, R3, ... of the bounding boxes are enlarged by a factor of 1.2, i.e.:

R1 = r1 * 1.2, R2 = r2 * 1.2, R3 = r3 * 1.2, ....

The distance di between the center of the i-th target object and the center of the image is then:

di = sqrt((xi - height/2)^2 + (yi - width/2)^2).

Having obtained the distance di for each target object, each complete individual target object can be selected according to its di. For example, to select the individual target object closest to the center of the original image, the bounding box with the smallest di in each image is chosen.
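The center-distance selection of steps S310-S330 can be sketched as follows, with illustrative names following the text's symbols (box centers (xi, yi), radii ri, enlargement factor 1.2); this is a sketch of the selection rule, not the patent's code.

```python
import math

def select_nearest_box(boxes, height, width, scale=1.2):
    """boxes: list of ((cx, cy), r) bounding circles.
    Returns (index of the box nearest the image centre, enlarged radius)."""
    icx, icy = height / 2, width / 2          # image centre per the text
    best_i, best_d = None, float("inf")
    for i, ((cx, cy), r) in enumerate(boxes):
        d = math.hypot(cx - icx, cy - icy)    # distance d_i
        if d < best_d:
            best_i, best_d = i, d
    (_, _), r = boxes[best_i]
    return best_i, r * scale                  # R_i = r_i * 1.2

boxes = [((50, 50), 10), ((30, 70), 8), ((55, 45), 12)]
idx, R = select_nearest_box(boxes, height=100, width=100)
print(idx, R)   # box 0 sits exactly at the centre (50, 50); R = 12.0
```

Other selection rules (e.g. the k nearest boxes, or all boxes within a threshold distance) fit the same "set relationship" described in step S330.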
In some embodiments, an embodiment of a flowchart illustrating an example of determining a segmented image of the target object to be segmented according to the image segmentation performed by the single target region and the pre-trained image segmentation model in the method shown in fig. 4 may be further described, where the specific process of determining the segmented image of the target object to be segmented according to the image segmentation performed by the single target region and the pre-trained image segmentation model in step S140 may include: step S410 to step S430.
And step S410, inputting the single target area into a pre-trained image segmentation model to obtain an image Mask of the target object to be segmented. Namely, the obtained single target area is input into the image segmentation model, and an image Mask (Mask) of a complete single target object can be segmented, so that the single target segmentation Mask is obtained. Specifically, after the individual target object is selected, the outer frame region corresponding to the individual target object may be input into the trained image segmentation model, and the image segmentation model may output an image Mask of the individual target object.
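Step S410 can be sketched as follows. The `segment` function here is a stand-in intensity threshold used purely for illustration; in the patent's scheme it would be a pre-trained deep segmentation model, and all names are hypothetical.

```python
def crop(img, box):
    """box = (r0, c0, r1, c1): half-open crop of a 2D-list image,
    i.e. the bounding-box region fed to the segmentation model."""
    r0, c0, r1, c1 = box
    return [row[c0:c1] for row in img[r0:r1]]

def segment(region, thresh=128):
    """Placeholder 'model': binary mask where intensity exceeds thresh."""
    return [[1 if v > thresh else 0 for v in row] for row in region]

img = [[0] * 6 for _ in range(6)]
for r in range(2, 5):
    for c in range(2, 5):
        img[r][c] = 200                  # bright 3x3 object
mask = segment(crop(img, (1, 1, 6, 6)))  # crop the selected box, then segment
print(sum(map(sum, mask)))               # 9 foreground pixels in the mask
```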
Step S420, converting the pixel coordinates of the image Mask of the target object to be segmented into coordinates in the original image of the image to be segmented. That is, the pixel coordinates of the obtained individual target division Mask are converted into the original image.
And step S430, fusing the image Mask of the target object to be segmented with the original image of the image to be segmented to obtain a segmented image of the target object to be segmented. That is, after the pixel coordinates of the individual target division Mask are converted into the original image, the individual target division Mask is fused with the original image, so that a divided image including only the complete individual target object can be obtained, that is, the divided original image including only the individual target object is output. Specifically, the coordinates of each pixel in the Mask of the individual target object are converted into coordinates in the original image, and then the Mask and the original image are fused. The fused image only retains the single target object, so that the segmentation of the single target object under the multiple targets of the same type is completed.
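Steps S420 and S430 can be sketched as follows, assuming the Mask came from a crop whose top-left corner gives the coordinate offset; the names and 2D-list image representation are illustrative, not the patent's implementation.

```python
def fuse(original, mask, offset):
    """Shift the mask's pixel coordinates by the crop offset back into
    the original image frame (step S420), then fuse (step S430):
    keep original pixels where the shifted mask is 1, zero elsewhere."""
    r_off, c_off = offset
    out = [[0] * len(row) for row in original]
    for mr, mask_row in enumerate(mask):
        for mc, v in enumerate(mask_row):
            if v == 1:
                out[mr + r_off][mc + c_off] = original[mr + r_off][mc + c_off]
    return out

original = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
mask = [[1, 0], [0, 1]]                  # mask from a crop at offset (1, 1)
print(fuse(original, mask, (1, 1)))      # only pixels 50 and 90 survive
```

The fused image retains only the selected individual target object, which is exactly the output described for step S430.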
Thus, the scheme acquires an image, performs contour extraction on the target objects in the acquired image, determines the coordinate position information of each target object from its contour features, obtains a region containing only one complete individual target object, inputs that region into the image segmentation model to segment out the image Mask of the complete individual target object, converts the pixel coordinates of that Mask back into the original image, and fuses the Mask with the original image to obtain a segmented image containing only the complete individual target object. A single target can be segmented directly, and the overall segmentation efficiency is improved.
Through a large number of tests, the technical scheme of the embodiment is adopted, contour extraction is firstly carried out, then the judgment of the coordinate position of the target object is carried out according to the contour characteristics, the area where the target object is located is determined through the coordinate position, finally, image segmentation is carried out, all complete target objects can be separately segmented, and only one single target object can be selected to be segmented, so that the segmentation capability is improved.
According to the embodiment of the invention, the image segmentation device of the same type of multi-target objects is also provided, which corresponds to the image segmentation method of the same type of multi-target objects. Referring to fig. 5, a schematic diagram of an embodiment of the apparatus of the present invention is shown. The image segmentation device for the same type of multi-target objects may include: the device comprises an acquisition unit, an extraction unit and a determination unit.
Wherein, the acquisition unit is configured to acquire an image to be segmented. And the image to be segmented at least has a complete image of one target object to be segmented in the multi-target objects of the same type. I.e. image acquisition is performed. When the image is collected, the definition of the target object needs to be noticed, and the image at least has one complete target, otherwise, the target object cannot be segmented. The specific function and processing of the acquisition unit are shown in step S110.
An extracting unit configured to extract the contour feature of the target object to be segmented in the image to be segmented. That is, contour extraction is performed on the target object in the captured image (i.e. the original image). The specific function and processing of the extracting unit are described in step S120. The contour feature of a target object to be segmented includes an external contour feature or an internal contour feature of the target object in the image to be segmented, where the external contour is the object's outer boundary and the internal contour is the boundary of any interior region of the object. In other words, either the outer contour or the inner contour of the target object can be obtained by the contour extraction means.
The determining unit is configured to determine a region containing the contour feature of one target object to be segmented in the image to be segmented as an individual target region of the one target object to be segmented according to the extracted contour feature of the one target object to be segmented in the image to be segmented. The specific function and processing of the determination unit are referred to in step S130.
In some embodiments, the determining unit, according to the extracted contour feature of one target object to be segmented in the image to be segmented, determines a region in the image to be segmented, where the region includes the contour feature of the one target object to be segmented, including:
the determining unit is specifically configured to determine coordinate position information of one target object to be segmented according to the extracted contour feature of the one target object to be segmented in the image to be segmented. The specific function and processing of the determination unit are also referred to step S210.
The determining unit is specifically configured to determine an outer bounding box of the one target object to be segmented according to the coordinate position information of the one target object to be segmented. The specific function and processing of the determination unit are also referred to in step S220.
The determining unit is specifically configured to determine, according to an outer bounding box of the one target object to be segmented, a region in the image to be segmented, where the region includes a contour feature of the one target object to be segmented. The specific function and processing of the determination unit are also referred to step S230.
Namely, according to the extracted contour features, the coordinate position information of the target object is judged, and an area only containing a complete single target object is obtained, namely the single target area is obtained. Specifically, coordinate position information of the target object is obtained through the target contour, and a minimum circumscribed rectangle frame or a minimum circumscribed circle frame of the target object is drawn according to the coordinate position information of the target object. The circumscribed rectangle frame or the circumscribed circle frame of the target object may be determined according to the shape of the target object, and are herein collectively referred to as the circumscribed frame. In which case the smallest bounding box contains only one complete individual object. Of course, by performing this operation on all the target objects in the image, the bounding boxes of all the target objects can be obtained.
In some embodiments, the determining unit, according to the bounding box of the one target object to be segmented, determines the region in the image to be segmented, which includes the contour feature of the one target object to be segmented, including:
the determining unit is specifically configured to determine the bounding boxes of all the target objects in the image to be segmented in a manner of determining the bounding box of the target object to be segmented. The specific function and processing of the determination unit are also referred to step S310.
The determining unit is specifically further configured to calculate a distance between a center coordinate of an outer bounding box of each target object and a center coordinate of an original image of the image to be segmented. The specific function and processing of the determination unit are also referred to step S320.
The determining unit is specifically configured to, according to a set relationship between the one target object to be segmented and the original image of the image to be segmented, select, from distances between center coordinates of an outline frame of each target object and center coordinates of the original image of the image to be segmented, an outline frame of the one target object whose distance matches the set relationship, as an area in the image to be segmented, where the area includes a contour feature of the one target object to be segmented. The specific function and processing of the determination unit are also referred to step S330.
That is, when selecting an individual target object, the selection may be made by calculating the distance between the center of each target object and the center of the image. Specifically, assuming the height and width of the original image are height and width respectively, the center coordinates of the original image can be expressed as (height/2, width/2), and the center of each target object can be expressed by the center coordinates of its circumscribing frame, (x_1, y_1), (x_2, y_2), (x_3, y_3), ..., with radii r_1, r_2, r_3, .... To ensure that each circumscribing frame can contain a complete target object, the circumscribing-frame radii R_1, R_2, R_3, ... are obtained by enlarging the original radii by a factor of 1.2, i.e.:

R_1 = r_1 * 1.2, R_2 = r_2 * 1.2, R_3 = r_3 * 1.2, ....

The distance d_i between the center of each target object and the center of the image is then:

d_i = sqrt((x_i - height/2)^2 + (y_i - width/2)^2).

After the distance d_i between the center of each target object and the center of the image is obtained, each complete individual target object can be selected according to d_i; for example, to select the individual target object closest to the center of the original image, the circumscribing frame with the smallest d_i in the image can be chosen.
The determining unit is further configured to perform image segmentation according to the single target area and a pre-trained image segmentation model, and determine a segmented image of the target object to be segmented. For the specific function and processing of the determining unit, refer to step S140. Here the image segmentation model is a pre-trained image segmentation model.
Therefore, by extracting the contour first, then judging the coordinate position of the target object from the contour features, determining the region where the target object is located from the coordinate position, and finally performing image segmentation, all complete target objects can be segmented separately, and a single target object can also be selected and segmented alone. One or more individual targets of interest in the image can thus be segmented directly, the interference of other target objects is removed, and the model segmentation speed is improved.
In some embodiments, the determining unit, performing image segmentation according to the single target region and a pre-trained image segmentation model, and determining a segmented image of the target object to be segmented, includes:
the determining unit is specifically configured to input the single target region into a pre-trained image segmentation model, so as to obtain an image Mask of the target object to be segmented. Namely, the obtained single target area is input into the image segmentation model, and an image Mask (Mask) of a complete single target object can be segmented, so that the single target segmentation Mask is obtained. Specifically, after the individual target object is selected, the outer frame region corresponding to the individual target object may be input into the trained image segmentation model, and the image segmentation model may output an image Mask of the individual target object. The specific function and processing of the determination unit are also referred to step S410.
The determining unit is specifically configured to convert the pixel coordinates of the image Mask of the one target object to be segmented into coordinates in the original image of the image to be segmented. That is, the pixel coordinates of the obtained individual target division Mask are converted into the original image. The specific function and processing of the determination unit are also referred to step S420.
The determining unit is specifically configured to fuse the image Mask of the one target object to be segmented with the original image of the image to be segmented to obtain a segmented image of the one target object to be segmented. That is, after the pixel coordinates of the individual target division Mask are converted into the original image, the individual target division Mask is fused with the original image, so that a divided image including only the complete individual target object can be obtained, that is, the divided original image including only the individual target object is output. Specifically, the coordinates of each pixel in the Mask of the individual target object are converted into coordinates in the original image, and then the Mask and the original image are fused. The fused image only retains the single target object, so that the segmentation of the single target object under the multiple targets of the same type is completed. The specific function and processing of the determination unit are also referred to step S430.
Therefore, image acquisition is performed first; the contour of the target object in the acquired image is then extracted, and the coordinate position information of the target object is judged from the contour features; a region containing only a complete individual target object is then obtained, and this region is input into the image segmentation model to segment an image Mask of the complete individual target object; finally, the pixel coordinates of this image Mask are converted into the original image, and the Mask is fused with the original image to obtain a segmented image containing only the complete individual target object. In this way, the individual target is segmented directly and the overall segmentation efficiency is improved.
Since the processes and functions implemented by the apparatus of this embodiment substantially correspond to the embodiments, principles and examples of the method shown in fig. 1 to 4, the description of this embodiment is not detailed, and reference may be made to the related descriptions in the foregoing embodiments, which are not repeated herein.
Through a large number of tests, it is verified that with the technical scheme of the invention, in the case where a plurality of complete target objects of the same type exist at the same time, each target object is segmented directly by judging the position relationship of the multiple targets in the image, without performing target detection first and then segmenting, so that the complexity of target segmentation is reduced and the segmentation efficiency is improved.
According to an embodiment of the invention, a terminal corresponding to the above image segmentation apparatus for multi-target objects of the same type is also provided. The terminal may include the image segmentation apparatus for multi-target objects of the same type described above.
In the related scheme, when a single target is segmented from a multi-target image, firstly, target detection is carried out, and then, semantic segmentation is carried out on a target detection area. The operation steps of the segmentation mode are complex, two deep learning models are needed, the segmentation efficiency is not high, and the segmentation condition of a plurality of target objects of the same type is not considered.
In the related scheme, multi-target segmentation recognition can be performed based on the image, and specifically, each object region is segmented by extracting the contour features of the object. However, the segmentation effect of the segmentation mode depends on the result of binarization, the edge segmentation effect cannot be guaranteed, and the condition of the same type of multi-target object is not considered.
In some embodiments, the invention provides a same-type multi-target object segmentation method based on deep learning, and the method is used for segmenting the same-type multi-target object.
Specifically, under the condition that a plurality of complete target objects of the same type exist simultaneously, the image segmentation network in the related scheme cannot segment the complete target objects independently, and only can segment all the target objects simultaneously. The segmentation method provided by the scheme of the invention can be used for independently segmenting all complete target objects and selecting only a certain independent target object. Therefore, one or more interested single targets in the image can be directly segmented, the interference of other target objects is removed, and the model segmentation speed is improved.
In addition, the multi-target segmentation method used in the related scheme adopts a method of target detection and image segmentation, and the method needs to train two or more deep neural network models, so that the overall complexity is high. The multi-target independent segmentation method provided by the scheme of the invention does not need to detect the target and then segment the target, and directly segments each target object by judging the position relation of a plurality of targets in the image, thereby reducing the complexity of target segmentation and improving the segmentation efficiency.
Specifically, according to the scheme of the invention, contour extraction is firstly carried out, then the judgment of the coordinate position of the target object is carried out according to the contour characteristics, the area where the target object is located is determined according to the coordinate position, and finally image segmentation is carried out. Therefore, the problem that the image segmentation technology in the related scheme can only segment all targets is solved on the whole, and the technical effect of independently segmenting any object in the image is achieved. The position of the single target object is judged by adopting the information such as the contour of the single target object, the target detection steps are reduced, the single target object can be directly segmented, the target segmentation scheme is simplified, and the target segmentation efficiency is improved.
The method for determining the target object region by contour extraction and position judgment is simple and effective, and has higher efficiency compared with a method for determining the position of a target object by using a target detection model.
Compared with the multi-target segmentation method of target detection and image segmentation in the related scheme, the method for independently segmenting the same type of multi-target objects provided by the scheme has the advantages that the process is simpler, a target detection model does not need to be trained, the independent target can be directly segmented, and the integral segmentation efficiency is improved.
For example, in the correlation scheme, the target detection model + the image segmentation model are adopted, that is, two models need to be trained. The main function of the target detection model is to determine the position of each target in the image and give a target position frame, but the scheme does not use the target detection model, obtains target coordinate information by searching the outline of the target, and then determines which target is selected according to the relation between the target coordinate information and the image center. It is not necessary to train the target detection model here.
In some embodiments, the scheme of the invention provides a solution for segmenting each individual target object when the same type of multi-target objects exist simultaneously.
The following describes an exemplary implementation process of the scheme of the present invention with reference to the examples shown in fig. 6, fig. 7, and fig. 8.
Fig. 6 and 7 are schematic structural views of the same type of multi-target object. As shown in fig. 6 and 7, in the case where a plurality of complete target objects of the same type coexist, the same type of multi-target objects, such as the first target object 10, the second target object 20, and the third target object 30, are all single complete target objects.
FIG. 8 is a schematic diagram illustrating a segmentation process of an embodiment of the same type of multi-target objects. As shown in fig. 8, when multiple target objects of the same type exist simultaneously, the method for segmenting each individual target object from the multiple target objects of the same type may include:
and step 1, image acquisition is carried out.
In step 1, attention must be paid to the clarity of the target object during image acquisition, and the image must contain at least one complete target; otherwise the target object cannot be segmented.
And 2, extracting the contour of the target object in the collected image (namely the original image).
For example: the contour extraction uses the HoughCircles() function, and circular contours of different sizes can be extracted by setting its parameters. In the experiments, the parameters HoughCircles(dp=1.5, minDist=300, param1=220, param2=100, minRadius=180, maxRadius=300) were adopted.
As shown in fig. 6 and 7, the inner contour refers to a circular contour within the target object and can be obtained by HoughCircles(); the outer contour refers to a contour that completely contains the target object, obtained from the center coordinates of the inner contour together with a radius set large enough that the target object is completely contained.
In step 2, the outer contour or the inner contour of the target object can be obtained by a contour extraction method.
And 3, judging the coordinate position information of the target object according to the extracted contour features to obtain an area only containing a complete single target object, namely the single target area.
In step 3, coordinate position information of the target object is obtained through the target contour, and a minimum circumscribed rectangle frame or a minimum circumscribed circle frame of the target object is drawn according to the coordinate position information of the target object. The circumscribed rectangle frame or the circumscribed circle frame of the target object may be determined according to the shape of the target object, and are herein collectively referred to as the circumscribed frame. In which case the smallest bounding box contains only one complete individual object. Of course, by performing this operation on all the target objects in the image, the bounding boxes of all the target objects can be obtained.
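A minimal sketch of deriving the circumscribed frames from contour point coordinates (pure NumPy; both function names are illustrative, and the circle here is a simple centroid-based approximation rather than the exact smallest enclosing circle):

```python
import numpy as np

def min_bounding_rect(contour):
    """Axis-aligned minimum circumscribed rectangle (x, y, w, h)
    of an (N, 2) array of contour points."""
    xs, ys = contour[:, 0], contour[:, 1]
    return xs.min(), ys.min(), xs.max() - xs.min(), ys.max() - ys.min()

def min_bounding_circle(contour):
    """Approximate circumscribed circle: centroid plus the largest
    centroid-to-point distance."""
    center = contour.mean(axis=0)
    radius = np.hypot(*(contour - center).T).max()
    return center, radius
```

For production use, OpenCV's `cv2.boundingRect` and `cv2.minEnclosingCircle` provide exact equivalents.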
In step 3, in selecting the individual target objects, the individual target objects may be selected by calculating the distances of the centers of the respective target objects from the center of the image.
For example: assuming that the height and width of the original image are height and width, respectively, the center coordinates of the original image can be expressed by (height/2, width/2), and the center coordinates of the target object can be expressed using the center coordinates (x) of the circumscribing frame1,y1),(x2,y2),(x3,y3) .., radius r1,r2,r3.... In order to ensure that each circumscribing frame can contain a complete target object, the radius R of the circumscribing frame is defined1、R2、R3Magnification was 1.2 fold, i.e.:
R1=r1*1.2,R2=r2*1.2,R3=r3*1.2...。
then the distance d between the center of each target object and the center of the imageiComprises the following steps:
Figure BDA0002776944620000151
obtaining the distance d between the center of each target object and the center of the imageiThen, the distance d between the center of each target object and the center of the image can be selectediTo select each complete individual target object, e.g. by selecting the individual target object closest to the center of the original image, the distance d between the center of each target object and the image center may be selected in each imageiThe smallest circumscribing frame.
For example: if there are multiple objects in the image, the distance d of each object from the center of the image is different, and if we only focus on the object closest to the center of the image, we can select the object with the smallest d, and similarly select the object with the largest d if we want to select the object closest to the edge of the image.
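As a minimal sketch of this selection rule (pure NumPy; the function name and the `mode` argument are illustrative, not from the patent):

```python
import numpy as np

def select_target(centers, height, width, mode="nearest"):
    """Pick one circumscribing-frame index by its distance to the image
    center: 'nearest' chooses the smallest distance, anything else the
    largest (i.e. the object closest to the image edge)."""
    d = [np.hypot(x - height / 2, y - width / 2) for x, y in centers]
    return int(np.argmin(d)) if mode == "nearest" else int(np.argmax(d))
```

For example, with frame centers at (100, 100), (300, 300) and (550, 560) in a 600x600 image, "nearest" picks the middle object and the other mode picks the one nearest the edge.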
And 4, inputting the obtained single target area into an image segmentation model, and segmenting an image Mask (Mask) of a complete single target object to obtain the single target segmentation Mask.
For example: the image segmentation model is obtained through data training. Firstly, a model network structure needs to be selected, and the scheme adopts a U-Net network. Firstly, data needs to be labeled, namely, the outline of an object is labeled in an image, then the image and a labeling file are input into a U-Net network for model training, and an image segmentation model can be obtained after the model training.
Specifically, after the individual target object is selected, the outer frame region corresponding to the individual target object may be input into the trained image segmentation model, and the image segmentation model may output an image Mask of the individual target object.
And 5, converting the obtained pixel coordinates of the single target segmentation Mask into an original image.
For example: the conversion means expanding the individual target area mask obtained in step 4 to be consistent with the original image size. When the single target area is extracted, the program records its position relative to the whole image; since the target area is consistent with the mask in size, the mask can be padded outside the target area with pixel value 0 until it reaches the original image size.
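A minimal sketch of this zero-padding step (pure NumPy; the function name and the `(top, left)` position convention are illustrative assumptions):

```python
import numpy as np

def expand_mask(mask, top, left, orig_h, orig_w):
    """Pad a single-target mask with zeros out to the original image size,
    placing it at its recorded (top, left) position in the full image."""
    full = np.zeros((orig_h, orig_w), dtype=mask.dtype)
    h, w = mask.shape
    full[top:top + h, left:left + w] = mask
    return full
```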
And 6, converting the pixel coordinates of the single target segmentation Mask into the original image, and fusing the single target segmentation Mask and the original image to obtain a segmentation image only containing a complete single target object, namely outputting the segmentation original image only containing the single target object.
For example: in step 5 the mask of the single target area has already been expanded to the size of the original image. As explained above, the pixel value at the position of the target object is 1, so the original image and the expanded mask can be multiplied element-wise; all pixel values except those of the target object then become 0, and the image segmentation is completed.
Specifically, the coordinates of each pixel in the Mask of the individual target object are converted into coordinates in the original image, and then the Mask and the original image are fused. The fused image only retains the single target object, so that the segmentation of the single target object under the multiple targets of the same type is completed.
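The fusion step above can be sketched as follows (pure NumPy; the function name is illustrative, and a 0/1 mask already expanded to the original image size is assumed):

```python
import numpy as np

def fuse(original, full_mask):
    """Multiply the original image by the expanded 0/1 mask so that every
    pixel outside the selected target object becomes 0."""
    if original.ndim == 3:          # color image: broadcast over channels
        full_mask = full_mask[..., None]
    return original * full_mask
```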
In summary, according to the scheme of the present invention, image acquisition is performed first; the contour of the target object in the acquired image is then extracted, and the coordinate position information of the target object is determined from the contour features; a region containing only a complete individual target object is obtained and input into the image segmentation model to segment an image Mask of the complete individual target object; finally, the pixel coordinates of the image Mask are converted into the original image, and the Mask is fused with the original image, so that a segmented image containing only the complete individual target object is obtained.
Since the processes and functions implemented by the terminal of this embodiment substantially correspond to the embodiments, principles, and examples of the apparatus shown in fig. 5, reference may be made to the related descriptions in the foregoing embodiments for details which are not described in detail in the description of this embodiment, and no further description is given here.
Through a large number of tests, it is verified that with the technical scheme of the invention, in the case where a plurality of complete target objects of the same type exist at the same time, target detection followed by segmentation is not needed; the position of the individual target object is judged using information such as its contour, so that the target detection step is removed, the individual target object can be segmented directly, the target segmentation scheme is simplified, and the target segmentation efficiency is improved.
According to an embodiment of the present invention, there is also provided a storage medium corresponding to an image segmentation method for multi-target objects of the same type, the storage medium including a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to execute the above image segmentation method for multi-target objects of the same type.
Since the processing and functions implemented by the storage medium of this embodiment substantially correspond to the embodiments, principles, and examples of the methods shown in fig. 1 to fig. 4, details are not described in the description of this embodiment, and reference may be made to the related descriptions in the foregoing embodiments, which are not described herein again.
Through a large number of tests, it is verified that with the technical scheme of the invention, by extracting the contour first, then judging the coordinate position of the target object from the contour features, determining the region of the target object from the coordinate position, and finally performing image segmentation, any object in the image can be segmented independently with higher efficiency.
According to an embodiment of the present invention, there is also provided a processor corresponding to an image segmentation method for multiple target objects of the same type, the processor being configured to run a program, wherein the program is run to execute the above-described image segmentation method for multiple target objects of the same type.
Since the processing and functions implemented by the processor of this embodiment substantially correspond to the embodiments, principles, and examples of the methods shown in fig. 1 to fig. 4, details are not described in the description of this embodiment, and reference may be made to the related descriptions in the foregoing embodiments, which are not described herein again.
Through a large number of tests, it is verified that with the technical scheme of the invention, by extracting the contour first, then judging the coordinate position of the target object from the contour features, determining the region of the target object from the coordinate position, and finally performing image segmentation, the individual target can be segmented directly and the overall segmentation efficiency is improved.
In summary, it is readily understood by those skilled in the art that the advantageous modes described above can be freely combined and superimposed without conflict.
The above description is only an example of the present invention, and is not intended to limit the present invention, and it is obvious to those skilled in the art that various modifications and variations can be made in the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (13)

1. An image segmentation method for multi-target objects of the same type is characterized by comprising the following steps:
collecting an image to be segmented; the images to be segmented at least have a complete image of one target object to be segmented in the multi-target objects of the same type;
extracting the contour feature of the target object to be segmented in the image to be segmented;
determining a region containing the contour feature of a target object to be segmented in the image to be segmented according to the extracted contour feature of the target object to be segmented in the image to be segmented, wherein the region is used as an independent target region of the target object to be segmented;
and performing image segmentation according to the single target area and a pre-trained image segmentation model, and determining a segmentation image of the target object to be segmented.
2. The image segmentation method for multi-target objects of the same type as set forth in claim 1,
the contour feature of a target object to be segmented in the image to be segmented comprises: an external contour feature or an internal contour feature of a target object to be segmented in an image to be segmented;
the image segmentation model comprises: a pre-trained image segmentation model.
3. The image segmentation method for multiple target objects of the same type as in claim 1 or 2, wherein determining the region of the image to be segmented, which contains the contour feature of one target object to be segmented, according to the extracted contour feature of the one target object to be segmented in the image to be segmented comprises:
determining coordinate position information of a target object to be segmented according to the extracted contour feature of the target object to be segmented in the image to be segmented;
determining an outer frame of the target object to be segmented according to the coordinate position information of the target object to be segmented;
and determining a region containing the contour feature of the target object to be segmented in the image to be segmented according to the external frame of the target object to be segmented.
4. The image segmentation method of multiple target objects of the same type as in claim 3, wherein determining the region containing the contour feature of one target object to be segmented in the image to be segmented according to the bounding box of the one target object to be segmented comprises:
determining the external frames of all the target objects in the image to be segmented according to the mode of determining the external frame of the target object to be segmented;
calculating the distance between the center coordinate of the circumscribed frame of each target object and the center coordinate of the original image of the image to be segmented;
according to the set relationship between the target object to be segmented and the original image of the image to be segmented, selecting an external frame of the target object with the distance conforming to the set relationship from the distance between the center coordinate of the external frame of each target object and the center coordinate of the original image of the image to be segmented as an area containing the contour feature of the target object to be segmented in the image to be segmented.
5. The image segmentation method for multiple target objects of the same type according to claim 1 or 2, wherein performing image segmentation according to the single target region and a pre-trained image segmentation model and determining the segmented image of the target object to be segmented comprises:
inputting the single target region into the pre-trained image segmentation model to obtain an image Mask of the target object to be segmented;
converting pixel coordinates of the image Mask of the target object to be segmented into coordinates in the original image to be segmented;
and fusing the image Mask of the target object to be segmented with the original image to be segmented to obtain the segmented image of the target object to be segmented.
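The coordinate-conversion step of claim 5 is a translation from the cropped single target region back into the original image frame. The following minimal sketch assumes the mask is represented as a list of foreground (x, y) pixels and the crop by its top-left corner (x0, y0); both representations are illustrative assumptions, not part of the claim.

```python
def mask_pixels_to_original(mask_pixels, crop_origin):
    """Shift foreground pixel coordinates of a mask computed on the cropped
    single target region back into the coordinate frame of the original
    image, given the crop's top-left corner (x0, y0)."""
    x0, y0 = crop_origin
    return [(x + x0, y + y0) for (x, y) in mask_pixels]
```

With the shifted pixels, the fusion step amounts to overlaying or extracting those pixel locations in the original image.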
6. An image segmentation apparatus for multiple target objects of the same type, comprising:
an acquisition unit configured to acquire an image to be segmented, wherein the image to be segmented contains at least a complete image of one target object to be segmented among the multiple target objects of the same type;
an extracting unit configured to extract a contour feature of the target object to be segmented in the image to be segmented;
a determining unit configured to determine, according to the extracted contour feature of the target object to be segmented in the image to be segmented, a region containing the contour feature of the target object to be segmented in the image to be segmented as a single target region of the target object to be segmented;
the determining unit being further configured to perform image segmentation according to the single target region and a pre-trained image segmentation model, and to determine a segmented image of the target object to be segmented.
7. The image segmentation apparatus for multiple target objects of the same type according to claim 6, wherein
the contour feature of the target object to be segmented in the image to be segmented comprises: an external contour feature or an internal contour feature of the target object to be segmented in the image to be segmented; and
the image segmentation model comprises: a pre-trained image segmentation model.
8. The image segmentation apparatus for multiple target objects of the same type according to claim 6 or 7, wherein the determining unit determining, according to the extracted contour feature of the target object to be segmented in the image to be segmented, the region containing the contour feature of the target object to be segmented in the image to be segmented comprises:
determining coordinate position information of the target object to be segmented according to the extracted contour feature of the target object to be segmented in the image to be segmented;
determining a bounding box of the target object to be segmented according to the coordinate position information of the target object to be segmented;
and determining the region containing the contour feature of the target object to be segmented in the image to be segmented according to the bounding box of the target object to be segmented.
9. The image segmentation apparatus for multiple target objects of the same type according to claim 8, wherein the determining unit determining, according to the bounding box of the target object to be segmented, the region containing the contour feature of the target object to be segmented in the image to be segmented comprises:
determining bounding boxes of all target objects in the image to be segmented in the same manner as the bounding box of the target object to be segmented is determined;
calculating the distance between the center coordinate of each target object's bounding box and the center coordinate of the original image to be segmented;
and, according to a set relationship between the target object to be segmented and the original image to be segmented, selecting from these distances the bounding box of the target object whose distance conforms to the set relationship as the region containing the contour feature of the target object to be segmented in the image to be segmented.
10. The image segmentation apparatus for multiple target objects of the same type according to claim 6 or 7, wherein the determining unit performing image segmentation according to the single target region and a pre-trained image segmentation model to determine the segmented image of the target object to be segmented comprises:
inputting the single target region into the pre-trained image segmentation model to obtain an image Mask of the target object to be segmented;
converting pixel coordinates of the image Mask of the target object to be segmented into coordinates in the original image to be segmented;
and fusing the image Mask of the target object to be segmented with the original image to be segmented to obtain the segmented image of the target object to be segmented.
11. A terminal, comprising: the image segmentation apparatus for multiple target objects of the same type according to any one of claims 6 to 10.
12. A storage medium comprising a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to execute the image segmentation method for multiple target objects of the same type according to any one of claims 1 to 5.
13. A processor configured to run a program, wherein when the program runs, the image segmentation method for multiple target objects of the same type according to any one of claims 1 to 5 is executed.
CN202011268751.3A 2020-11-13 2020-11-13 Image segmentation method, device, terminal, storage medium and processor Pending CN112419331A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011268751.3A CN112419331A (en) 2020-11-13 2020-11-13 Image segmentation method, device, terminal, storage medium and processor


Publications (1)

Publication Number Publication Date
CN112419331A true CN112419331A (en) 2021-02-26

Family

ID=74831787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011268751.3A Pending CN112419331A (en) 2020-11-13 2020-11-13 Image segmentation method, device, terminal, storage medium and processor

Country Status (1)

Country Link
CN (1) CN112419331A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114219791A (en) * 2021-12-17 2022-03-22 盛视科技股份有限公司 Road ponding detection method based on vision, electronic equipment and vehicle alarm system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570434A (en) * 2018-06-06 2019-12-13 杭州海康威视数字技术股份有限公司 image segmentation and annotation method and device
CN110751659A (en) * 2019-09-27 2020-02-04 北京小米移动软件有限公司 Image segmentation method and device, terminal and storage medium
CN110930419A (en) * 2020-02-13 2020-03-27 北京海天瑞声科技股份有限公司 Image segmentation method and device, electronic equipment and computer storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHOU Xiang et al., "A machine-vision-based tile positioning and segmentation method", China Ceramics *
WANG Rui, "Research on kiwifruit recognition methods in natural scenes", China Master's Theses Full-text Database, Agricultural Science and Technology *
SHAN Shuo et al., "Multi-object tracking based on instance segmentation", Chinese Journal of Stereology and Image Analysis *


Similar Documents

Publication Publication Date Title
CN110598609B (en) Weak supervision target detection method based on significance guidance
CN108334881B (en) License plate recognition method based on deep learning
JP6188400B2 (en) Image processing apparatus, program, and image processing method
WO2018103608A1 (en) Text detection method, device and storage medium
JP2018200685A (en) Forming of data set for fully supervised learning
CN110751154B (en) Complex environment multi-shape text detection method based on pixel-level segmentation
CN111832659B (en) Laser marking system and method based on feature point extraction algorithm detection
CN112418216A (en) Method for detecting characters in complex natural scene image
CN113435240A (en) End-to-end table detection and structure identification method and system
CN113673338A (en) Natural scene text image character pixel weak supervision automatic labeling method, system and medium
CN113159024A (en) License plate recognition technology based on improved YOLOv4
CN112101386B (en) Text detection method, device, computer equipment and storage medium
CN109447117A (en) The double-deck licence plate recognition method, device, computer equipment and storage medium
CN115424017B (en) Building inner and outer contour segmentation method, device and storage medium
JP4926266B2 (en) Learning data creation device, learning data creation method and program
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium
CN117557784B (en) Target detection method, target detection device, electronic equipment and storage medium
CN117495891B (en) Point cloud edge detection method and device and electronic equipment
CN112419331A (en) Image segmentation method, device, terminal, storage medium and processor
CN116523916B (en) Product surface defect detection method and device, electronic equipment and storage medium
JP2011258036A (en) Three-dimensional shape search device, three-dimensional shape search method, and program
CN110889418A (en) Gas contour identification method
CN108564020B (en) Micro-gesture recognition method based on panoramic 3D image
CN113436251B (en) Pose estimation system and method based on improved YOLO6D algorithm
Li et al. Arbitrary shape scene text detector with accurate text instance generation based on instance-relevant contexts

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210226