CN110910334B - Instance segmentation method, image processing device and computer readable storage medium - Google Patents

Instance segmentation method, image processing device and computer readable storage medium

Info

Publication number
CN110910334B
CN110910334B
Authority
CN
China
Prior art keywords
instance
segmentation
instances
image
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811079789.9A
Other languages
Chinese (zh)
Other versions
CN110910334A
Inventor
杨爽 (Yang Shuang)
胡志强 (Hu Zhiqiang)
李嘉辉 (Li Jiahui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201811079789.9A
Publication of CN110910334A
Application granted
Publication of CN110910334B
Active legal status
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

An instance segmentation method, an image processing apparatus, and a computer-readable storage medium are disclosed. The method comprises the following steps: acquiring a target image; performing instance segmentation on the target image with each of a preset number of instance segmentation techniques to obtain an instance set, wherein the instance set comprises at least the preset number of instances, and the preset number is an integer greater than 1; determining similar instances in the instance set according to the degree of overlap between the instances in the instance set; and fusing the similar instances together to obtain at least one instance image of the target image. The method performs instance segmentation on the target image with multiple instance segmentation techniques and fuses the resulting segmentations to obtain the instance images of the target image.

Description

Instance segmentation method, image processing device and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an instance segmentation method, an image processing device, and a computer-readable storage medium.
Background
When processing an image, it is often necessary to locate and distinguish the various instances contained in it. For example, an object detection method can frame different instances, and a semantic segmentation method can label, pixel by pixel, the regions occupied by instances of different categories, thereby distinguishing the categories. If instances within the same category also need to be told apart, instance segmentation is applied to the image: instance segmentation not only distinguishes categories, but also distinguishes the individual instances within a single category.
Specifically, an instance segmentation framework based on candidate regions can be applied to the image to obtain an instance segmentation result directly. Although such a framework is good at handling targets with overlapping regions, it has numerous hyper-parameters and is prone to missed detections and false detections.
It can be seen that current instance segmentation methods are prone to missed and false detections, and their accuracy is therefore limited.
Disclosure of Invention
The embodiments of the present application provide an instance segmentation method and an image processing device that combine the advantages of multiple instance segmentation techniques to improve the accuracy of instance segmentation of an image.
In a first aspect, an embodiment of the present application provides an instance segmentation method, including:
acquiring a target image;
performing instance segmentation on the target image with each of a preset number of instance segmentation techniques to obtain an instance set, wherein the instance set comprises at least the preset number of instances, and the preset number is an integer greater than 1;
determining similar instances in the instance set according to the degree of overlap between the instances in the instance set;
and fusing the similar instances together to obtain at least one instance image of the target image.
With reference to the first aspect, in a first implementation manner of the first aspect, the determining similar instances in the instance set according to the degree of overlap between the instances in the instance set includes:
determining an intersection-over-union ratio between every two instances in the instance set;
acquiring the instance pairs whose intersection-over-union ratio is greater than a preset value;
and combining the instance pairs that contain the same instance into an instance group to obtain the similar instances.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the combining the instance pairs that contain the same instance into an instance group to obtain the similar instances includes:
combining the instance pairs that contain the same instance to obtain at least one instance group;
and acquiring, from the at least one instance group, an instance group whose number of instances is greater than or equal to a preset quantity as the similar instances.
With reference to the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the preset quantity is greater than half of the preset number, i.e. greater than half of the number of instance segmentation techniques employed (for example, at least three instances when four techniques are used).
With reference to the first aspect, in a fourth implementation manner of the first aspect, the fusing the similar instances together to obtain at least one instance image of the target image includes:
acquiring an intersection between every two instances among the similar instances;
and taking the union of the intersections between every two instances to obtain at least one instance image of the target image.
With reference to the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, after the obtaining of the at least one instance image of the target image, the method further includes:
displaying the target image containing the at least one instance image, wherein the areas where different instance images are located are distinguished from one another.
With reference to the first aspect through the fifth implementation manner of the first aspect, in a sixth implementation manner of the first aspect, the instance segmentation techniques include at least one of two types of techniques, namely candidate-region-based instance segmentation techniques and semantics-based instance segmentation techniques.
In a second aspect, an embodiment of the present application provides an image processing apparatus including means for performing the method of the first aspect, the image processing apparatus including:
an obtaining module, configured to obtain a target image;
a segmentation module, configured to perform instance segmentation on the target image with each of a preset number of instance segmentation techniques to obtain an instance set, wherein the instance set comprises at least the preset number of instances, and the preset number is an integer greater than 1;
a determining module, configured to determine similar instances in the instance set according to the degree of overlap between the instances in the instance set;
and a fusion module, configured to fuse the similar instances together to obtain at least one instance image of the target image.
With reference to the second aspect, in a first implementation manner of the second aspect, the determining module includes a determining unit, a first obtaining unit, and a combining unit, specifically:
the determining unit is configured to determine an intersection-over-union ratio between every two instances in the instance set;
the first obtaining unit is configured to obtain the instance pairs whose intersection-over-union ratio is greater than a preset value;
and the combining unit is configured to combine the instance pairs that contain the same instance into an instance group to obtain the similar instances.
With reference to the first implementation manner of the second aspect, in a second implementation manner of the second aspect, the combining unit includes a combining subunit and an obtaining subunit, specifically:
the combining subunit is configured to combine the instance pairs that contain the same instance together to obtain at least one instance group;
and the obtaining subunit is configured to obtain, from the at least one instance group, an instance group whose number of instances is greater than or equal to a preset quantity as the similar instances.
With reference to the second implementation manner of the second aspect, in a third implementation manner of the second aspect, the preset quantity is greater than half of the preset number of instance segmentation techniques employed.
With reference to the second aspect, in a fourth implementation manner of the second aspect, the fusion module includes a second obtaining unit and a third obtaining unit, specifically:
the second obtaining unit is configured to obtain an intersection between every two instances among the similar instances;
and the third obtaining unit is configured to take the union of the intersections between every two instances to obtain at least one instance image of the target image.
With reference to the second aspect, in a fifth implementation manner of the second aspect, the image processing apparatus further includes a display module, configured to display the target image containing the at least one instance image, where the areas in which different instance images are located are distinguished.
With reference to the second aspect through the fifth implementation manner of the second aspect, in a sixth implementation manner of the second aspect, the instance segmentation techniques include at least one of two types of techniques, namely candidate-region-based instance segmentation techniques and semantics-based instance segmentation techniques.
In a third aspect, an embodiment of the present application provides another image processing device, including a processor, an output device, and a memory, which are connected to one another through a bus. The memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method of the first aspect or any implementation manner thereof.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program that comprises program instructions which, when executed by a processor, perform the method of the first aspect or any implementation manner thereof.
According to the present application, a target image is acquired, instance segmentation is performed on it with a preset number of instance segmentation techniques to obtain an instance set containing at least the preset number of instances, similar instances in the instance set are then detected, and the similar instances are fused together to obtain the instance images of the target image. In this way, different instance segmentation methods each segment the target image, the several segmentation results are fused, and the advantages of the different methods are combined to segment the target image more accurately, improving both the efficiency and the accuracy of instance segmentation.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a schematic flow chart diagram of an example segmentation method provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of an example segmentation method provided in another embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of an example segmentation method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a method for rapidly marking different instances in a target image according to an embodiment of the present application;
fig. 5A is a schematic block diagram of an image processing apparatus provided in an embodiment of the present application;
fig. 5B is a schematic partial block diagram of an image processing apparatus provided in an embodiment of the present application;
fig. 5C is a schematic partial block diagram of an image processing apparatus provided in an embodiment of the present application;
fig. 6 is a structural block diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In particular implementations, the image processing devices described in the embodiments of the present application include, but are not limited to, terminal devices and servers having touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads). The terminal device may be a device such as a mobile phone, a laptop, or a tablet computer; the server may likewise be an image processing apparatus having a touch-sensitive surface, or a desktop computer.
In the discussion that follows, an image processing device is described that includes a display and a touch sensitive surface. However, it should be understood that the image processing device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
When performing instance segmentation on an image, two types of methods are commonly used. One is the candidate-region-based instance segmentation architecture, for example Mask R-CNN (Mask Region-based Convolutional Neural Network); the other is the semantics-based instance segmentation architecture, for example the various Fully Convolutional Network (FCN) variants. The two have complementary strengths and weaknesses: Mask R-CNN obtains an instance segmentation result directly and handles targets with overlapping regions well, but it has numerous hyper-parameters that must be tuned carefully for different data and problems, and it is prone to missed and false detections; FCN-based methods have fewer hyper-parameters, run more efficiently, and produce fewer missed and false detections, but they require complex post-processing to obtain the final instance segmentation result and are not accurate enough when segmenting targets with overlapping regions.
To address these problems, embodiments of the present application provide an instance segmentation method that combines the advantages of multiple instance segmentation methods; compared with any single one of the methods above, the results obtained by segmenting an image with the method provided by the present application are more accurate.
To better understand the embodiments of the present application, an application scenario is described below; the embodiments of the present application may be applied to a scene in which an image processing model performs instance segmentation on an image.
As shown in fig. 3, the target image is segmented by four instance segmentation methods A, B, C, and D to obtain four instance segmentation results, each containing several instances. All of the obtained instances are pooled into an instance set, the similarity between every two instances in the set is computed, and the pairs of mutually similar instances are extracted. Instance pairs containing the same instance are combined into instance groups; one such group, consisting of instance 11, instance 21, and instance 31, is shown in fig. 3. All of the instances in a group are fused, yielding the diagram in fig. 3 where the three instances intersect: the shaded portion is one instance image of the target image. The other instance images of the target image are obtained from the other instance groups in the same way, all of the obtained instance images are assembled back into the target image, and the individual instance images are distinguished from one another.
More specifically, assume that the target image contains 7 instances and that four instance segmentation methods are applied to it, giving four instance segmentation results. Method A processes the target image to obtain instance 11, instance 12, instance 13, instance 14, instance 15, instance 16, and instance 17; method B processes the target image to obtain instance 21 through instance 27; method C obtains instance 31 through instance 37; and method D obtains instance 41 through instance 47. In total 4 × 7 = 28 instances are obtained and collected into one instance set.
It should be noted that the instances within one segmentation result describe different objects (for example instance 11 through instance 17), whereas instances that describe the same object across the several segmentation results are likely to be similar: for example, instance 11, instance 21, and instance 31 describe the same object, are similar to one another, and come from different segmentation results.
When detecting similar instances in the instance set, the two instances of a similar pair must come from different segmentation results, so the instances in a group built from such pairs each come from a different segmentation result; the size of a group is therefore at most the number of segmentation methods employed. The group shown in the figure contains 3 instances, drawn from the results of methods A, B, and C respectively.
It follows that the number of instances in a group represents the number of segmentation methods that voted for that object. Since several groups may describe the same object (for example the group consisting of instance 11, instance 21, and instance 31, and the group containing only instance 41), the group whose instance count exceeds half the number of segmentation methods employed is selected from among them, which ensures the correctness of the instance image obtained after fusing the group. A group that is not selected is discarded, and the object it describes is treated as background.
After the best instance group is selected, its instances are fused to obtain the instance image shown shaded in the figure: the intersection (1)+(4) of instances 11 and 21, the intersection (2)+(4) of instances 11 and 31, and the intersection (3)+(4) of instances 21 and 31 are computed, and the union of these pairwise intersections, i.e. the shaded region (1)+(2)+(3)+(4) in the figure, is taken as one instance image of the target image. The instance images obtained in this way are assembled into one figure to give the target image containing several instance images; the portion other than the instance images is background, and the areas occupied by the instance images are distinguished from one another.
Referring to fig. 1, fig. 1 is a schematic flow chart of an instance segmentation method disclosed in an embodiment of the present application, where the method includes:
101: acquiring a target image.
In this embodiment of the present application, an image that requires instance segmentation is acquired as the target image.
102: performing instance segmentation on the target image with each of a preset number of instance segmentation techniques to obtain an instance set, wherein the instance set comprises at least the preset number of instances, and the preset number is an integer greater than 1.
In this embodiment of the present application, each of the preset number of instance segmentation techniques is applied to the target image to produce its own instances, and all of the obtained instances are combined into an instance set; because a preset number of techniques is used, the set contains at least the preset number of instances. Here an instance refers to an object in the target image other than the background, such as a living being and/or an article, and the preset number is an integer greater than one.
It should be noted that an instance segmentation technique refers to a method for segmenting the instances in a target image, and the techniques include at least one of two types: candidate-region-based instance segmentation techniques and semantics-based instance segmentation techniques. Candidate-region-based instance segmentation first generates candidate regions (region proposals) in the image and then classifies them; it includes the region-based Convolutional Neural Network (R-CNN) series of algorithms, such as R-CNN, Fast R-CNN, Faster R-CNN, and Mask R-CNN. Semantics-based instance segmentation classifies the pixel points of the image, for example with a Fully Convolutional Network (FCN).
It should be noted that candidate-region-based instance segmentation methods obtain the instance segmentation result directly and are good at handling targets with overlapping regions, but they have numerous hyper-parameters, require careful tuning for different data and problems, and are prone to missed and false detections. Both types of methods therefore have their respective advantages and disadvantages.
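For illustration only, the following Python sketch shows one possible way to realize step 102, assuming that each instance segmentation technique is available as a callable mapping an image to a list of boolean instance masks (the callables and the dictionary layout are assumptions of this sketch, not part of the claimed method):

```python
def build_instance_set(image, segmenters):
    """Pool the instances predicted by every segmentation technique.

    `segmenters` is assumed to be a list of callables, each mapping an
    H x W image to a list of H x W boolean numpy masks (one mask per
    predicted instance); the callables themselves are placeholders.
    """
    instance_set = []
    for source_id, segment in enumerate(segmenters):
        for mask in segment(image):
            # Record which technique produced the mask: later steps
            # only pair up instances from different segmentation results.
            instance_set.append({"mask": mask.astype(bool), "source": source_id})
    return instance_set
```

Recording the source of each mask matters because, as noted above, only instances from different segmentation results can describe the same object.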
103: determining similar instances in the instance set according to the degree of overlap between the instances in the instance set.
In this embodiment of the present application, similar instances in the instance set are detected: the similar instances are determined according to the degree of overlap, i.e. the intersection-over-union (IoU), between the instances in the set, and when the IoU between two instances is greater than a preset value, the two instances are regarded as similar.
It should be noted that similarity propagates: an instance similar to a similar instance is itself treated as similar. For example, if instance A and instance B are similar and instance B and instance C are similar, then instance A and instance C are also treated as similar, regardless of the degree of overlap between instance A and instance C.
Specifically: determine the intersection-over-union ratio between every two instances in the instance set; acquire the instance pairs whose ratio is greater than a preset value; and combine the instance pairs that contain the same instance into an instance group to obtain the similar instances.
In this embodiment of the present application, the similarity between instances is measured by the size of their intersection-over-union. The intersection-over-union is the overlap ratio (IoU) of two regions, originally defined as the overlap of a generated candidate box with the ground-truth box, i.e. the ratio of their intersection to their union; the ideal case is complete overlap, a ratio of 1. The IoU between every two instances in the instance set is computed, and if it exceeds the preset value the two instances are similar and form an instance pair. Because one instance may have an IoU above the preset value with several different instances, the instance pairs containing the same instance are collected into at least one instance group, each group containing at least a preset quantity of instances, and such a group is taken as the similar instances.
It should be noted that every instance in an instance group has at least one similar instance within the group, whereas no similar instances exist between different instance groups; the groups are mutually independent.
For example, if there is an intersection between instance A and instance B, then the intersection-over-union between them is IoU = area(A ∩ B) / area(A ∪ B), i.e. the ratio of the area of the overlap of instance A and instance B to the area of their union.
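As a minimal numpy sketch of this computation over pixel masks (the boolean-mask representation is an assumption carried over from the previous sketch):

```python
import numpy as np

def mask_iou(mask_a, mask_b):
    """IoU of two boolean masks: |A ∩ B| / |A ∪ B| (1.0 means identical)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / float(union) if union else 0.0

# Example: two 2 x 2 masks overlapping in one pixel out of three -> IoU = 1/3.
a = np.array([[True, True], [False, False]])
b = np.array([[True, False], [True, False]])
assert abs(mask_iou(a, b) - 1 / 3) < 1e-9
```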
More specifically: combine the instance pairs containing the same instance to obtain at least one instance group; and, from the at least one instance group, acquire an instance group whose number of instances is greater than or equal to a preset quantity as the similar instances.
In this embodiment of the present application, several instance groups may be generated; an instance group whose instance count is greater than or equal to the preset quantity is then selected as the similar instances, where the preset quantity is greater than half of the number of instance segmentation techniques employed.
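One possible realization of this pair-to-group step, continuing the assumptions of the previous sketches, treats each above-threshold pair as an edge and merges connected components with a small union-find; here `min_votes` plays the role of the preset quantity and would be set to more than half the number of techniques:

```python
def group_similar_instances(instance_set, iou_threshold, min_votes):
    """Merge instance pairs whose IoU exceeds the threshold into groups
    (connected components), then keep only groups supported by at least
    `min_votes` instances."""
    n = len(instance_set)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(n):
        for j in range(i + 1, n):
            # Only instances from different segmentation results can
            # describe the same object.
            if (instance_set[i]["source"] != instance_set[j]["source"]
                    and mask_iou(instance_set[i]["mask"],
                                 instance_set[j]["mask"]) > iou_threshold):
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(instance_set[i])
    # Majority vote: keep only groups backed by enough methods; singleton
    # groups (objects seen by one method only) fall away as background.
    return [g for g in groups.values() if len(g) >= min_votes]
```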
104: fusing the similar instances together to obtain at least one instance image of the target image.
In this embodiment of the present application, after the similar instances are obtained, they are fused together to obtain a final instance image of the target image.
Specifically: acquire the intersection between every two instances among the similar instances, and take the union of these pairwise intersections to obtain at least one instance image of the target image.
In this embodiment, the similar instances may include more than two instances; the intersection of every two of them is taken, and the union of the resulting intersections is taken as an instance image of the target image.
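A sketch of this fusion rule under the same assumptions as above: the fused instance image is the union of the pairwise intersections of the masks in one group.

```python
import numpy as np
from itertools import combinations

def fuse_group(group):
    """Fuse a group of similar instances: intersect every pair of masks,
    then take the union of all those intersections."""
    masks = [inst["mask"] for inst in group]
    fused = np.zeros_like(masks[0], dtype=bool)
    for mask_a, mask_b in combinations(masks, 2):
        fused |= np.logical_and(mask_a, mask_b)
    return fused
```

Taking the union of pairwise intersections keeps every pixel on which at least two methods agree, which is what balances missed detections against false detections.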
Further, after at least one instance image of the target image is obtained, the target image containing the at least one instance image is displayed, and the areas where different instance images are located are distinguished from one another.
In this embodiment of the present application, after the instance images in the target image have been segmented out, the target image is displayed and the instance images it contains are given different marks to distinguish the areas occupied by different instance graphics, for example by filling different instance images with different semi-transparent colors.
For example, assume that the target image contains 7 objects: instance 1 through instance 7. Four instance segmentation techniques, instance segmentation A, B, C, and D, are applied to the target image, giving the four segmentation results shown in FIG. 4, each of which segments the 7 instances of the target image. Instance segmentation A yields instance 11 through instance 17; instance segmentation B yields instance 21 through instance 27; instance segmentation C yields instance 31 through instance 37; and instance segmentation D yields instance 41 through instance 47. The similar instances among them are then detected to obtain seven instance groups, instance group 1 through instance group 7, and the 7 groups are each fused to obtain the instance image of instance 1 through the instance image of instance 7. For example, instance group 1 shown in FIG. 3 contains instance 11, instance 21, and instance 31, and the union of the pairwise intersections between these instances (the shaded portion) is taken as the instance image of instance 1. After the instance images corresponding to the 7 instances are obtained, the target image is displayed and the different instance images are marked on it so as to distinguish them, for example by filling different instance images with different semi-transparent colors.
Further, after the 7 instance images are obtained and before the target image is displayed, each pixel point in the target image is numbered according to the segmentation result, with different numbers indicating the instance image to which a pixel belongs, and a position map of the instance images within the target image is generated. This map abstracts the positions of the different instance images and of the background: as shown in fig. 4, the numbers 1 to 7 indicate the positions of the instance images of instance 1 through instance 7 respectively, and 0 indicates the background. When the target image is displayed, there is therefore no need to fetch the segmentation result again or to re-segment the image; it suffices, against the position map, to fill the pixels that share a number with the same semi-transparent color and pixels with different numbers with different semi-transparent colors. In this way, the embodiment of the present application can mark and distinguish the instance images in the target image quickly when it is displayed.
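A sketch of such a position map and of the display step (the color table and the alpha value 0.5 are illustrative assumptions; `fused_masks` is the list of fused instance images from the previous sketches):

```python
import numpy as np

def build_label_map(shape, fused_masks):
    """Number each pixel: 0 for background, k for the k-th instance image,
    so display needs no re-segmentation."""
    label_map = np.zeros(shape, dtype=np.int32)
    for k, mask in enumerate(fused_masks, start=1):
        label_map[mask] = k
    return label_map

def overlay(image, label_map, colors, alpha=0.5):
    """Fill the pixels of each instance with a distinct semi-transparent
    color; background pixels (label 0) are left unchanged."""
    out = image.astype(np.float32).copy()
    for k in range(1, int(label_map.max()) + 1):
        region = label_map == k
        color = np.asarray(colors[k - 1], dtype=np.float32)  # assumed RGB triple
        out[region] = (1 - alpha) * out[region] + alpha * color
    return out.astype(np.uint8)
```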
According to this embodiment of the present application, a target image is acquired and segmented with a preset number of instance segmentation techniques to obtain an instance set containing at least the preset number of instances; the similar instances in the set are then detected and fused together to obtain the instance images of the target image. Different instance segmentation methods thus each produce a segmentation result, the several results are fused, and the advantages of the different methods are combined to segment the target image more accurately, improving both the efficiency and the accuracy of instance segmentation.
Referring to fig. 2, fig. 2 is a schematic flow chart of an instance segmentation method disclosed in another embodiment of the present application, where the method includes:
201: acquiring a target image.
In this embodiment of the present application, an image that requires instance segmentation is acquired as the target image.
202: performing instance segmentation on the target image with each of a preset number of instance segmentation techniques to obtain an instance set, wherein the instance set comprises at least the preset number of instances.
In this embodiment of the present application, each of the preset number of instance segmentation techniques is applied to the target image, and all of the obtained instances are combined into an instance set; because a preset number of techniques is used, the set contains at least the preset number of instances. An instance refers to an object in the target image other than the background, such as a living being and/or an article, and the preset number is an integer greater than one.
It should be noted that the instance segmentation techniques include at least one of candidate-region-based instance segmentation techniques and semantics-based instance segmentation techniques. Candidate-region-based instance segmentation first generates candidate regions (region proposals) in the image and then classifies them; it includes the region-based Convolutional Neural Network (R-CNN) series of algorithms, such as R-CNN, Fast R-CNN, Faster R-CNN, and Mask R-CNN. Semantics-based instance segmentation classifies the pixel points of the image, for example with a Fully Convolutional Network (FCN).
It should be noted that candidate-region-based instance segmentation methods obtain the instance segmentation result directly and are good at handling targets with overlapping regions, but they have numerous hyper-parameters, require careful tuning for different data and problems, and are prone to missed and false detections. Both types of methods therefore have their respective advantages and disadvantages.
203: determining the intersection-over-union ratio between every two instances in the instance set.
In this embodiment of the present application, the intersection-over-union between every two instances in the instance set is computed, and the similarity between instances is measured by its size. The intersection-over-union is the overlap ratio (IoU) of two regions, originally defined as the overlap of a generated candidate box with the ground-truth box, i.e. the ratio of their intersection to their union; the ideal case is complete overlap, a ratio of 1.
It should be noted that similarity propagates: an instance similar to a similar instance is itself treated as similar. For example, if instance A and instance B are similar and instance B and instance C are similar, then instance A and instance C are also treated as similar, regardless of the degree of overlap between instance A and instance C.
204: acquiring the instance pairs whose intersection-over-union ratio is greater than the preset value.
In this embodiment of the present application, the instance pairs whose intersection-over-union exceeds the preset value are acquired. Specifically, if the intersection-over-union of a pair of instances is greater than the preset value, the two instances are similar.
For example, if there is an intersection between instance A and instance B, then the intersection-over-union between them is IoU = area(A ∩ B) / area(A ∪ B), i.e. the ratio of the area of the overlap of instance A and instance B to the area of their union.
205: example sets containing the same example are combined to produce at least one example set.
In the embodiment of the application, two similar examples are combined into an example pair, and because the intersection ratio between the existing example and different examples is greater than a preset value, the example pairs containing the same example are collected to form at least one example group, and the example group contains at least a preset number of examples.
It should be noted that every instance in an instance group has at least one similar instance within the group, whereas no similar instances exist between different instance groups; the groups are mutually independent.
206: in the at least one example group, an example group containing the number of examples greater than or equal to a preset number is obtained as the similar example.
In the embodiment of the present application, when an instance group is generated, a plurality of instance groups may be generated, and then an instance group containing an instance whose number is greater than or equal to a preset number is selected as the similar instance, where the preset number is greater than half of the number of the instances contained in the instance set.
207: acquiring the intersection between every two instances among the similar instances.
In this embodiment of the present application, the similar instances may include more than two instances; the intersection between every two of them is taken, yielding the pairwise intersections.
For example, as shown in fig. 3, four instance segmentation methods are applied to the target image to obtain four instances A, B, C, and D, and the IoU between every two of the four instances is computed. Suppose that the IoU between instance A and instance B exceeds the preset value, that the IoU between instance B and instance C exceeds the preset value, and that the IoU between instance D and any other instance does not. Instances A, B, and C then form an instance group, i.e. the similar instances, and fusing them gives the intersection (1)+(4) of instances A and B, the intersection (2)+(4) of instances A and C, and the intersection (3)+(4) of instances B and C.
208: taking the union of the intersections between every two instances to obtain at least one instance image of the target image.
In this embodiment of the present application, once the pairwise intersections have been obtained, their union is taken as an instance image of the target image.
For example, as shown in fig. 3, the union of the pairwise intersections, i.e. the shaded portion (1)+(2)+(3)+(4), is taken as an instance image of the target image, and the portion of the target image other than the instance images is the background.
209: displaying the target image containing the at least one instance image, wherein the areas where different instance images are located are distinguished.
In this embodiment of the present application, after the instance images in the target image have been segmented out, the target image is displayed and the instance images it contains are given different marks to distinguish the areas occupied by different instance graphics, for example by filling different instance images with different semi-transparent colors.
For example, assume that the target image contains 7 objects: instance 1 through instance 7. Four instance segmentation techniques, instance segmentation A, B, C, and D, are applied to the target image, giving the four segmentation results shown in FIG. 4, each of which segments the 7 instances of the target image. Instance segmentation A yields instance 11 through instance 17; instance segmentation B yields instance 21 through instance 27; instance segmentation C yields instance 31 through instance 37; and instance segmentation D yields instance 41 through instance 47. The similar instances are then detected to obtain seven instance groups, instance group 1 through instance group 7, and the 7 groups are each fused to obtain the instance image of instance 1 through the instance image of instance 7. For example, instance group 1 shown in FIG. 3 contains instance 11, instance 21, and instance 31, and the union of the pairwise intersections between these instances (the shaded portion) is taken as the instance image of instance 1. After the instance images corresponding to the 7 instances are obtained, the target image is displayed and the different instance images are marked on it so as to distinguish them, for example by filling different instance images with different semi-transparent colors.
Further, after the 7 instance images are obtained and before the target image is displayed, each pixel point in the target image is numbered according to the segmentation result, with different numbers indicating the instance image to which a pixel belongs, and a position map of the instance images within the target image is generated. This map abstracts the positions of the different instance images and of the background: as shown in fig. 4, the numbers 1 to 7 indicate the positions of the instance images of instance 1 through instance 7 respectively, and 0 indicates the background. When the target image is displayed, there is therefore no need to fetch the segmentation result again or to re-segment the image; it suffices, against the position map, to fill the pixels that share a number with the same semi-transparent color and pixels with different numbers with different semi-transparent colors. In this way, the embodiment of the present application can mark and distinguish the instance images in the target image quickly when it is displayed.
In this embodiment of the present application, different instance segmentation methods are applied to the target image, yielding several predictions, and an IoU-based voting method fuses the predictions, balancing missed detections against false detections to the greatest extent. In the instance segmentation problem, the predictions of different methods differ: a candidate-region-based method predicts the probability that the pixel points inside a bounding box belong to the target, while a semantics-based method predicts the probability that every pixel point of the whole image belongs to the target. The embodiment of the present application can therefore combine the advantages of various segmentation methods; it is simple and easy to implement, and because it fuses the final outputs of the various instance segmentation methods rather than attempting a difficult in-model ensemble, it achieves high accuracy while placing no restriction on the number or type of instance segmentation methods employed. In summary, the embodiment of the present application provides a simple and accurate instance segmentation method.
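Composing the previous sketches gives a minimal end-to-end picture of the method of fig. 2 (again purely illustrative; the IoU threshold of 0.5 is an assumed value, not one fixed by the application):

```python
def ensemble_instance_segmentation(image, segmenters, iou_threshold=0.5):
    """Run several instance segmentation techniques on one image, group
    similar instances by IoU, keep the majority-voted groups, and fuse
    each kept group into one instance image of the target image."""
    min_votes = len(segmenters) // 2 + 1  # more than half of the methods
    instance_set = build_instance_set(image, segmenters)
    groups = group_similar_instances(instance_set, iou_threshold, min_votes)
    return [fuse_group(group) for group in groups]
```

The returned masks can then be turned into a position map with `build_label_map` and displayed with `overlay`, as sketched above.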
The embodiment of the present application also provides an image processing apparatus configured to execute the units of the method of the foregoing embodiments. Specifically, referring to fig. 5A, which is a schematic block diagram of an image processing apparatus provided in an embodiment of the present application, the image processing apparatus of this embodiment includes: an obtaining module 510, a segmentation module 520, a determining module 530, and a fusion module 540. Specifically:
an obtaining module 510, configured to obtain a target image;
a segmentation module 520, configured to perform instance segmentation on the target image with each of a preset number of instance segmentation techniques to obtain an instance set, wherein the instance set comprises at least the preset number of instances, and the preset number is an integer greater than 1;
a determining module 530, configured to determine similar instances in the instance set according to the degree of overlap between the instances in the instance set.
Specifically, as shown in fig. 5B, the determining module 530 includes a determining unit 531, a first obtaining unit 532, and a combining unit 533, where the determining unit 531 is configured to determine the intersection-over-union ratio between every two instances in the instance set; the first obtaining unit 532 is configured to obtain the instance pairs whose intersection-over-union ratio is greater than a preset value; and the combining unit 533 is configured to combine the instance pairs that contain the same instance into an instance group to obtain the similar instances.
More specifically, as shown in fig. 5B, the combining unit 533 includes a combining subunit 5331 and an obtaining subunit 5332, where the combining subunit 5331 is configured to combine the instance pairs containing the same instance together to obtain at least one instance group; and the obtaining subunit 5332 is configured to obtain, from the at least one instance group, an instance group whose number of instances is greater than or equal to a preset quantity as the similar instances.
A fusion module 540, configured to fuse the similar instances together to obtain at least one instance image of the target image.
Specifically, as shown in fig. 5C, the fusion module 540 includes a second obtaining unit 541 and a third obtaining unit 542, where the second obtaining unit 541 is configured to obtain the intersection between every two instances among the similar instances, and the third obtaining unit 542 is configured to take the union of the pairwise intersections to obtain at least one instance image of the target image.
Further, the image processing apparatus includes a display module 550, configured to display the target image containing the at least one instance image, where the areas in which different instance images are located are distinguished.
It should be noted that the preset quantity is greater than half of the preset number of instance segmentation techniques employed.
It is further noted that the instance segmentation techniques include at least one of candidate-region-based instance segmentation techniques and semantics-based instance segmentation techniques.
According to this embodiment of the present application, the obtaining module acquires a target image; the segmentation module performs instance segmentation on it with a preset number of instance segmentation techniques to obtain an instance set containing at least the preset number of instances; the determining module detects the similar instances in the set; and the fusion module fuses the similar instances together to obtain the instance images of the target image. Different instance segmentation methods thus each produce a segmentation result, the several results are fused, and the advantages of the different methods are combined to segment the target image more accurately, improving both the efficiency and the accuracy of instance segmentation.
Referring to fig. 6, another image processing device provided in an embodiment of the present application includes one or more processors 610, an output device 620, and a memory 630, which are connected to one another through a bus 640. The memory 630 is configured to store a computer program comprising program instructions, and the processor 610 is configured to invoke the program instructions to perform the method of the foregoing embodiments, specifically:
the processor 610 is configured to execute the function of the obtaining module 510, for acquiring a target image.
The processor 610 is further configured to execute the function of the segmentation module 520, for performing instance segmentation on the target image with each of a preset number of instance segmentation techniques to obtain an instance set, wherein the instance set comprises at least the preset number of instances, and the preset number is an integer greater than 1.
The processor 610 is further configured to execute the function of the determining module 530, for determining similar instances in the instance set according to the degree of overlap between the instances in the instance set.
Specifically, the processor 610 is configured to execute the function of the determining unit 531, for determining the intersection-over-union ratio between every two instances in the instance set; it further executes the function of the first obtaining unit 532, for obtaining the instance pairs whose intersection-over-union ratio is greater than a preset value; and it further executes the function of the combining unit 533, for combining the instance pairs that contain the same instance into an instance group to obtain the similar instances.
More specifically, the processor 610 is configured to execute the function of the combining subunit 5331, for combining the instance pairs containing the same instance to obtain at least one instance group; and it further executes the function of the obtaining subunit 5332, for obtaining, from the at least one instance group, an instance group whose number of instances is greater than or equal to a preset quantity as the similar instances.
The processor 610 is further configured to execute the function of the fusion module 540, for fusing the similar instances together to obtain at least one instance image of the target image.
Specifically, the processor 610 is further configured to execute the function of the second obtaining unit 541, for obtaining the intersection between every two instances among the similar instances; and it further executes the function of the third obtaining unit 542, for taking the union of the pairwise intersections to obtain at least one instance image of the target image.
Further, the output device 620 is configured to execute the function of the display module 550, namely to display the target image containing the at least one instance image, wherein the areas where different instance images are located are distinguished.
It should be noted that the preset count (the minimum number of instances an instance group must contain in order to be kept as similar instances) is greater than half of the preset number of instance segmentation technologies, so that only instance groups supported by a majority of the segmentation results are retained.
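A sketch of this majority rule, under the same assumptions as above (keep_majority is a hypothetical helper): with num_techniques segmentation technologies, an instance group survives only if more than half of them produced a matching instance.

    def keep_majority(groups, num_techniques):
        # Keep only instance groups supported by more than half of the
        # segmentation techniques, e.g. at least 2 of 3.
        threshold = num_techniques // 2 + 1
        return [g for g in groups if len(g) >= threshold]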
It should further be noted that the instance segmentation techniques include at least one of candidate-region-based instance segmentation techniques and semantics-based instance segmentation techniques.
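Putting the sketches above together, a hypothetical end-to-end flow (with toy masks standing in for the outputs of three segmentation techniques, and reusing group_similar, keep_majority, and fuse_group from the sketches above) might read:

    import numpy as np

    # Toy data: one 8x8 instance found by techniques A and B, missed by C.
    m = np.zeros((8, 8), dtype=bool)
    m[2:6, 2:6] = True
    all_masks = [m, m.copy(), np.zeros((8, 8), dtype=bool)]

    groups = keep_majority(group_similar(all_masks), num_techniques=3)
    instance_images = [fuse_group(all_masks, g) for g in groups]
    # -> one instance image: the region on which techniques A and B agree.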
It should be understood that in the embodiments of the present application, the processor 610 may be a Central Processing Unit (CPU), or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 630 may include both read-only memory and random access memory, and provides instructions and data to the processor 610. A portion of the memory 630 may also include non-volatile random access memory. For example, the memory 630 may also store device type information.
In a specific implementation, the processor 610 and the memory 630 described in this embodiment of the present application may carry out the implementations described in the first and second embodiments of the instance segmentation method provided in the embodiments of the present application, and may also carry out the implementation of the image processing apparatus described in the embodiments of the present application, which is not repeated here.
An embodiment of the present application further provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, the computer program including program instructions that, when executed by a processor, perform the method of the embodiments of the present application.
The computer-readable storage medium may be an internal storage unit of the image processing apparatus of any of the foregoing embodiments, such as a hard disk or a memory of the image processing apparatus. The computer-readable storage medium may also be an external storage device of the image processing apparatus, such as a plug-in hard disk equipped on the image processing apparatus, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, or the like. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the image processing apparatus. The computer-readable storage medium is used to store the computer program and other programs and data required by the image processing apparatus, and may also be used to temporarily store data that has been output or is to be output.
An embodiment of the present application further provides a computer program product, which includes a computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to execute the method of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To illustrate the interchangeability of hardware and software clearly, the components and steps of the examples have been described above in general terms of their functions. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the server and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed server and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is merely a logical division of functions, and an actual implementation may use another division: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the elements may be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (14)

1. An instance segmentation method, the method comprising:
acquiring a target image;
performing instance segmentation on the target image respectively by using a preset number of instance segmentation technologies to obtain an instance set, wherein the instance set comprises at least the preset number of instances, and the preset number is an integer greater than 1;
determining similar instances in the instance set according to the overlapping degree of the instances in the instance set;
acquiring an intersection between every two instances in the similar instances;
and taking a union of the intersections between every two instances to obtain at least one instance image of the target image.
2. The method of claim 1, wherein determining similar instances in the set of instances according to a degree of overlap between the instances in the set of instances comprises:
determining an intersection-over-union ratio between every two instances in the instance set;
acquiring the instance pairs whose intersection-over-union ratio is greater than a preset value;
and combining the instance pairs containing the same instance into an instance group to obtain the similar instances.
3. The method of claim 2, wherein combining the instance pairs containing the same instance into an instance group to obtain the similar instances comprises:
combining the instance groups containing the same instance to obtain at least one instance group;
and acquiring, from the at least one instance group, an instance group in which the number of instances is greater than or equal to a preset count, as the similar instances.
4. The method of claim 3, wherein the preset count is greater than half of the preset number.
5. The method of claim 1, wherein after obtaining the at least one instance image of the target image, the method further comprises:
displaying the target image containing the at least one instance image, wherein the areas where different instance images are located are distinguished.
6. The method of any of claims 1 to 5, wherein the instance segmentation techniques comprise at least one of candidate region-based instance segmentation techniques and semantic-based instance segmentation techniques.
7. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring a target image;
the segmentation module is configured to perform instance segmentation on the target image respectively by using a preset number of instance segmentation technologies to obtain an instance set, wherein the instance set comprises at least the preset number of instances, and the preset number is an integer greater than 1;
a determining module, configured to determine similar instances in the instance set according to overlapping degrees between the instances in the instance set;
a fusion module, comprising a second obtaining unit and a third obtaining unit;
the second obtaining unit is configured to obtain an intersection between every two instances in the similar instances;
the third obtaining unit is configured to take a union of the intersections between every two instances to obtain at least one instance image of the target image.
8. The image processing apparatus according to claim 7, wherein the determining module comprises a determining unit, a first obtaining unit, and a combining unit, specifically:
the determining unit is configured to determine an intersection-over-union ratio between every two instances in the instance set;
the first obtaining unit is configured to obtain the instance pairs whose intersection-over-union ratio is greater than a preset value;
the combining unit is configured to combine the instance pairs containing the same instance into an instance group to obtain the similar instances.
9. The image processing apparatus according to claim 8, wherein the combining unit comprises a combining subunit and an obtaining subunit, in particular:
the combining subunit is configured to combine the instance groups containing the same instance to obtain at least one instance group;
the obtaining subunit is configured to obtain, from the at least one instance group, an instance group in which the number of instances is greater than or equal to a preset count, as the similar instances.
10. The apparatus according to claim 9, wherein the preset count is greater than half of the preset number.
11. The apparatus according to claim 7, wherein the image processing apparatus further comprises a display module configured to display the target image including the at least one instance image, wherein the areas where different instance images are located are distinguished.
12. The apparatus according to any one of claims 7 to 11, wherein the instance segmentation technique comprises at least one of a candidate-region-based instance segmentation technique and a semantic-based instance segmentation technique.
13. An image processing device comprising a processor, an output device and a memory, the processor, the output device and the memory being interconnected by a bus, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1-6.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, perform the method according to any one of claims 1-6.
CN201811079789.9A 2018-09-15 2018-09-15 Instance segmentation method, image processing device and computer readable storage medium Active CN110910334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811079789.9A CN110910334B (en) 2018-09-15 2018-09-15 Instance segmentation method, image processing device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811079789.9A CN110910334B (en) 2018-09-15 2018-09-15 Instance segmentation method, image processing device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110910334A CN110910334A (en) 2020-03-24
CN110910334B true CN110910334B (en) 2023-03-21

Family

ID=69813474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811079789.9A Active CN110910334B (en) 2018-09-15 2018-09-15 Instance segmentation method, image processing device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110910334B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582263A (en) * 2020-05-12 2020-08-25 上海眼控科技股份有限公司 License plate recognition method and device, electronic equipment and storage medium
CN114375460A (en) * 2020-07-31 2022-04-19 华为技术有限公司 Data enhancement method and training method of instance segmentation model and related device
CN112132832B (en) 2020-08-21 2021-09-28 苏州浪潮智能科技有限公司 Method, system, device and medium for enhancing image instance segmentation
CN112348828A (en) * 2020-10-27 2021-02-09 浙江大华技术股份有限公司 Example segmentation method and device based on neural network and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013004093A (en) * 2011-06-16 2013-01-07 Fujitsu Ltd Search method and system by multi-instance learning
EP2645329A1 (en) * 2012-03-27 2013-10-02 Westfälische Wilhelms-Universität Münster Method and system for image segmentation
CN107025457A (en) * 2017-03-29 2017-08-08 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN107193800A (en) * 2017-05-18 2017-09-22 苏州黑云信息科技有限公司 A kind of semantic goodness of fit evaluating method and device towards third party's language text
WO2017206400A1 (en) * 2016-05-30 2017-12-07 乐视控股(北京)有限公司 Image processing method, apparatus, and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226708B (en) * 2013-04-07 2016-06-29 华南理工大学 A kind of multi-model fusion video hand division method based on Kinect
JP6664163B2 (en) * 2015-08-05 2020-03-13 キヤノン株式会社 Image identification method, image identification device, and program
US9972092B2 (en) * 2016-03-31 2018-05-15 Adobe Systems Incorporated Utilizing deep learning for boundary-aware image segmentation
CN107341506A (en) * 2017-06-12 2017-11-10 华南理工大学 A kind of Image emotional semantic classification method based on the expression of many-sided deep learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013004093A (en) * 2011-06-16 2013-01-07 Fujitsu Ltd Search method and system by multi-instance learning
EP2645329A1 (en) * 2012-03-27 2013-10-02 Westfälische Wilhelms-Universität Münster Method and system for image segmentation
WO2017206400A1 (en) * 2016-05-30 2017-12-07 乐视控股(北京)有限公司 Image processing method, apparatus, and electronic device
CN107025457A (en) * 2017-03-29 2017-08-08 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN107193800A (en) * 2017-05-18 2017-09-22 苏州黑云信息科技有限公司 A kind of semantic goodness of fit evaluating method and device towards third party's language text

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Lesion Segmentation in Dynamic Contrast Enhanced MRI of Breast; Xi Liang et al.; 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA); full text *
Semi-supervised Learning and Its Application in MR Image Segmentation; Cai Jiaxin; CNKI Outstanding Master's Theses Full-text Database, Information Science & Technology; full text *
Object Region Segmentation Algorithm Fusing Conditional Random Field Pixel Modeling and Deep Features; Li Zongmin et al.; Journal of Computer-Aided Design & Computer Graphics, No. 6; full text *

Also Published As

Publication number Publication date
CN110910334A (en) 2020-03-24

Similar Documents

Publication Publication Date Title
CN110910334B (en) Instance segmentation method, image processing device and computer readable storage medium
US11842438B2 (en) Method and terminal device for determining occluded area of virtual object
CN109508681B (en) Method and device for generating human body key point detection model
CN109255352B (en) Target detection method, device and system
CN107545262B (en) Method and device for detecting text in natural scene image
CN109192054B (en) Data processing method and device for map region merging
CN112184738B (en) Image segmentation method, device, equipment and storage medium
US7928978B2 (en) Method for generating multi-resolution three-dimensional model
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
US20210350115A1 (en) Methods and apparatus for identifying surface features in three-dimensional images
CN109711427A (en) Object detection method and Related product
CN106463000A (en) Information processing device, superimposed information image display device, marker display program, and superimposed information image display program
CN110490839A (en) The method, apparatus and computer equipment of failure area in a kind of detection highway
CN111540027B (en) Detection method, detection device, electronic equipment and storage medium
CN116484036A (en) Image recommendation method, device, electronic equipment and computer readable storage medium
CN113537026B (en) Method, device, equipment and medium for detecting graphic elements in building plan
CN114332809A (en) Image identification method and device, electronic equipment and storage medium
CN114049488A (en) Multi-dimensional information fusion remote weak and small target detection method and terminal
CN116186354B (en) Method, apparatus, electronic device, and computer-readable medium for displaying regional image
CN113763419A (en) Target tracking method, target tracking equipment and computer-readable storage medium
CN115830073A (en) Map element reconstruction method, map element reconstruction device, computer equipment and storage medium
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
CN116052175A (en) Text detection method, electronic device, storage medium and computer program product
Roch et al. Car pose estimation through wheel detection
CN112001247A (en) Multi-target detection method, equipment and storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant