CN109035279B - Image segmentation method and device - Google Patents

Image segmentation method and device

Info

Publication number
CN109035279B
Authority
CN
China
Prior art keywords
image
target object
shooting visual
segmented
visual angle
Prior art date
Legal status
Active
Application number
CN201810950326.9A
Other languages
Chinese (zh)
Other versions
CN109035279A (en)
Inventor
吴一黎
Current Assignee
Yi Tunnel Beijing Technology Co Ltd
Original Assignee
Yi Tunnel Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Yi Tunnel Beijing Technology Co Ltd filed Critical Yi Tunnel Beijing Technology Co Ltd
Priority to CN201810950326.9A priority Critical patent/CN109035279B/en
Publication of CN109035279A publication Critical patent/CN109035279A/en
Application granted granted Critical
Publication of CN109035279B publication Critical patent/CN109035279B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/12 - Edge-based segmentation

Abstract

The invention discloses an image segmentation method and device. The method comprises the following steps: acquiring an image to be segmented and a background image under a first shooting visual angle, wherein the first shooting visual angle is one of a plurality of shooting visual angles; acquiring images to be segmented and background images under the remaining shooting visual angles of the plurality of shooting visual angles; obtaining a visible shell from the acquired images to be segmented and background images under all the shooting visual angles; acquiring the actual silhouette of the target object from the projection of the visible shell in the image coordinate system corresponding to the first shooting visual angle; and obtaining an image of the target object under the first shooting visual angle from the actual silhouette and the image to be segmented under the first shooting visual angle. The device comprises: a first shooting visual angle image acquisition module, a residual shooting visual angle image acquisition module, a visible shell generation module, an actual silhouette acquisition module and a target object image obtaining module. The technical scheme improves the accuracy of image segmentation.

Description

Image segmentation method and device
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an image segmentation method and device.
Background
With the rapid development of deep learning in the field of computer vision, more and more computer vision problems are obtaining mature and reliable solutions supported by deep learning. Because the solution space in deep learning is large, a large amount of data is usually needed to train the parameters, and the cost of manually collecting and labeling data is high. An automatic image segmentation method is therefore needed to assist data collection and labeling. Existing image segmentation methods have low accuracy.
Disclosure of Invention
In order to at least solve the problem of low image segmentation accuracy in the prior art, the invention provides an image segmentation method, which comprises the following steps: step S1, acquiring an image to be segmented and a background image at a first shooting angle of view, where the first shooting angle of view is one of the multiple shooting angles of view; step S2, acquiring images to be segmented and background images under the rest shooting visual angles in the plurality of shooting visual angles; step S3, obtaining a visible shell according to the images to be segmented and the background images under all the acquired shooting visual angles; step S4, acquiring the actual silhouette of the target object according to the projection of the visual shell in the image coordinate system corresponding to the first shooting visual angle; step S5, obtaining an image of the target object at the first shooting visual angle according to the actual silhouette and the image to be segmented at the first shooting visual angle; the material of the bearing plate bearing the target object is transparent, and the viewpoints of the plurality of shooting visual angles include viewpoints above and below the bearing plate.
In the image segmentation method as described above, it is preferable that after step S1, the method further includes: step S6, judging whether a target object is inverted in the image to be segmented under the first shooting visual angle, if not, executing step S7 and step S5, otherwise, jumping to step S2; step S7, acquiring an initial silhouette of the target object according to the image to be segmented and the background image at the first shooting angle, and taking the initial silhouette as an actual silhouette.
In the image segmentation method as described above, preferably, the step S6 includes: and judging whether the viewpoint of the first shooting visual angle is located above the plane of the bearing plate, if so, judging that the target object is inverted in the image to be segmented under the first shooting visual angle.
In the image segmentation method as described above, preferably, the step S3 includes: step S31, acquiring an initial silhouette of the target object at each shooting visual angle according to the image to be segmented at each shooting visual angle and the background image; step S32, initializing a three-dimensional body, wherein the three-dimensional body envelops the target object under a world coordinate system; step S33, counting the times that the projection of the voxel of the three-dimensional body in the image coordinate system corresponding to each shooting visual angle is positioned in the corresponding initial silhouette; step S34, judging whether the times is not less than a preset times threshold, and if so, determining the voxel of the three-dimensional body as a voxel of the target object; and repeating steps S33 to S34 to determine all voxels of the target object to form a visible shell.
In the image segmentation method as described above, preferably, the step S31 includes: step S311, obtaining a dense prior field of the target object in the image to be segmented according to the background image and the image to be segmented which are respectively obtained under the first shooting visual angle; step S312, processing the dense prior field and the image to be segmented by using a graph cut algorithm to obtain an initial silhouette of the target object in the image to be segmented; and repeating the steps S311 to S312 to obtain an initial silhouette of the target object at each shooting angle.
Another aspect of the present invention provides an image segmentation apparatus, including: a first shooting visual angle image acquisition module, configured to acquire an image to be segmented and a background image under a first shooting visual angle, wherein the first shooting visual angle is one of a plurality of shooting visual angles, the bearing plate bearing the target object is made of a transparent material, and the viewpoints of the plurality of shooting visual angles comprise viewpoints above and below the bearing plate; the residual shooting visual angle image acquisition module is used for acquiring images to be segmented and background images under the residual shooting visual angles in the plurality of shooting visual angles; the visible shell generation module is used for obtaining a visible shell according to the acquired images to be segmented and the background images under all the shooting visual angles; the actual silhouette acquisition module is used for acquiring the actual silhouette of the target object according to the projection of the visible shell in the image coordinate system corresponding to the first shooting visual angle; and the target object image obtaining module is used for obtaining the image of the target object under the first shooting visual angle according to the actual silhouette and the image to be segmented under the first shooting visual angle.
In the image segmentation apparatus as described above, preferably, the image segmentation apparatus further includes: the judging module is used for judging whether the target object is inverted in the image to be segmented under the first shooting visual angle; if not, the functions of the determining module and the target object image obtaining module are executed; otherwise, processing jumps to the residual shooting visual angle image acquisition module; and the determining module is used for acquiring an initial silhouette of the target object according to the image to be segmented and the background image under the first shooting visual angle when the judging module judges that the target object is not inverted, and taking the initial silhouette as an actual silhouette.
In the image segmentation apparatus as described above, preferably, the determining module is specifically configured to: and judging whether the viewpoint of the first shooting visual angle is located above the plane of the bearing plate, if so, judging that the target object is inverted in the image to be segmented under the first shooting visual angle.
In the image segmentation apparatus as described above, preferably, the visual shell generation module includes: the initial silhouette acquiring unit is used for acquiring an initial silhouette of the target object under each shooting visual angle according to the image to be segmented under each shooting visual angle and the background image; a three-dimensional body unit for initializing a three-dimensional body, which envelopes the target object under a world coordinate system; the number counting unit is used for counting the number of times that the projection of the voxel of the three-dimensional body in the image coordinate system corresponding to each shooting visual angle is positioned in the corresponding initial silhouette; the judging unit is used for judging whether the frequency is not less than a preset frequency threshold value, and if so, determining the voxel of the three-dimensional body as the voxel of the target object; and the repeated execution unit is used for repeatedly executing the functions of the times counting unit and the judging unit so as to determine all voxels of the target object to form a visible shell.
In the image segmentation apparatus as described above, preferably, the initial silhouette contour acquisition unit includes: the dense prior field obtaining subunit is configured to obtain a dense prior field of the target object in the image to be segmented according to a background image and the image to be segmented, which are respectively obtained under the first shooting view angle; a first initial silhouette obtaining unit, configured to apply a graph cut algorithm to process the dense prior field and the image to be segmented, so as to obtain an initial silhouette of the target object in the image to be segmented; and the repeated execution subunit is used for repeatedly executing the functions of the dense prior field obtaining subunit and the first initial silhouette obtaining unit so as to obtain an initial silhouette of the target object under each shooting visual angle.
Yet another aspect of the present invention provides an image segmentation apparatus, including: a bearing plate for bearing a target object, the bearing plate being made of a transparent material; a plurality of image acquisition devices for forming a plurality of shooting visual angles; a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the above method.
Yet another aspect of the present invention provides a computer readable storage medium having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by a processor to implement the above method.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
the visible shell is obtained, the actual silhouette of the target object is then obtained according to the projection of the visible shell in the image coordinate system corresponding to the first shooting visual angle, and the image of the target object is then obtained according to the actual silhouette and the image to be segmented under the first shooting visual angle, so that the accuracy of image segmentation is improved.
Drawings
Fig. 1 is a schematic flowchart of an image segmentation method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of another image segmentation method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a method for obtaining a visible shell according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for obtaining an initial silhouette according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating an image segmentation method according to another embodiment of the present invention;
FIG. 6 is a flowchart illustrating an image segmentation method according to another embodiment of the present invention;
fig. 7 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present invention;
fig. 8 shows a segmentation optimization result based on the visible shell constraint when the total number of shooting visual angles is 2, according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present invention provides an image segmentation method, which includes the following steps:
step S1, acquiring an image to be segmented and a background image at a first shooting angle of view, where the first shooting angle of view is one of multiple shooting angles of view.
Step S2, acquiring the image to be segmented and the background image at the remaining shooting angles of view from the plurality of shooting angles.
And step S3, obtaining a visible shell according to the acquired images to be segmented and the background images under all the shooting visual angles.
Step S4, acquiring the actual silhouette of the target object according to the projection of the visible shell in the image coordinate system corresponding to the first shooting visual angle;
step S5, obtaining an image of the target object under the first shooting visual angle according to the actual silhouette and the image to be segmented under the first shooting visual angle;
the material of the bearing plate for bearing the target object is transparent, and the viewpoints of the plurality of shooting visual angles comprise a viewpoint above the bearing plate and a viewpoint below the bearing plate.
It should be noted that, in the embodiment of the present invention, the execution sequence of steps S1 and S2 is not limited, and the steps may be performed simultaneously, or step S1 may be performed first, and then step S2 is performed.
Referring to fig. 2, as an alternative embodiment, after step S1, the method further includes:
step S6, judging whether the target object is inverted in the image to be segmented under the first shooting visual angle; if not, executing step S7 and step S5; otherwise (i.e., if the judgment is yes), jumping to step S2; step S7, acquiring an initial silhouette of the target object according to the image to be segmented and the background image under the first shooting visual angle, and taking the initial silhouette as the actual silhouette.
as an alternative embodiment, step S6 includes:
and judging whether the viewpoint of the first shooting visual angle is located above the plane of the bearing plate, if so, judging that the target object is inverted in the image to be segmented under the first shooting visual angle.
Referring to fig. 3, as an alternative embodiment, step S3 includes:
step S31, acquiring an initial silhouette of the target object at each shooting visual angle according to the image to be segmented at each shooting visual angle and the background image;
step S32, initializing a three-dimensional body, wherein the three-dimensional body envelops the target object in the world coordinate system;
step S33, counting the number of times that the projection of each voxel of the three-dimensional body in the image coordinate system corresponding to each shooting visual angle falls within the corresponding initial silhouette;
step S34, judging whether the frequency is not less than a preset frequency threshold, if so, determining the voxel of the three-dimensional body as the voxel of the target object;
steps S33 through S34 are repeated to determine all voxels of the target object to form a visible shell.
Referring to fig. 4, as an alternative embodiment, step S31 includes:
step S311, obtaining a dense prior field of the target object in the image to be segmented according to the background image and the image to be segmented which are respectively obtained under the first shooting visual angle;
step S312, processing the dense prior field and the image to be segmented by using an image segmentation algorithm to obtain an initial silhouette of the target object under the first shooting visual angle;
and repeating the steps S311 to S312 to obtain an initial silhouette of the target object at each shooting angle.
As an alternative embodiment, step S311 includes: acquiring a background image and an image to be segmented under a first shooting visual angle; acquiring a difference image of an image to be segmented relative to a background image; acquiring a binary image according to the pixel value of the difference image and a preset difference threshold; and sequentially carrying out denoising operation, opening and closing operation and connected domain screening operation on the binary image to obtain a dense prior field of the target object in the image to be segmented under the first shooting visual angle.
According to the method provided by the embodiment of the invention, the visual shell is obtained, the actual silhouette of the target object is obtained according to the projection of the visual shell in the image coordinate system corresponding to the first shooting visual angle, and the image of the target object under the first shooting visual angle is obtained according to the actual silhouette and the image to be segmented under the first shooting visual angle, so that the accuracy of image segmentation is improved.
Another embodiment of the present invention provides an image segmentation method, which combines the contents of the first embodiment, with reference to fig. 5, and the method flow is as follows:
in step S501, a plurality of shooting angles are preset.
Specifically, a plurality of image acquisition devices [C1, C2, …, Cn] are respectively fixed at different shooting positions to form a plurality of shooting visual angles, where Cm denotes the m-th image acquisition device, 1 ≤ m ≤ n, n ≥ 2, and m and n are positive integers. That is, each image acquisition device corresponds to one shooting visual angle, so n image acquisition devices form n shooting visual angles in total. The shooting positions (or viewpoints) corresponding to the multiple shooting visual angles include viewpoints above the bearing plate and viewpoints below the bearing plate. The multiple image acquisition devices may lie on an arc centred on a common point, and that point may lie on, above, or below the bearing plate; the image acquisition devices may also lie on arcs with different centres. The target object is placed on the bearing plate. In other embodiments, a single image acquisition device may be used, and multiple shooting visual angles may be realized by rotating and lifting it; the way in which the multiple shooting visual angles are realized is not limited in this embodiment. The image acquisition device may be a camera.
Step S502, acquiring an image to be segmented and a background image under each shooting visual angle, and taking one of the shooting visual angles as a first shooting visual angle.
Specifically, if an image of the target object at a certain shooting visual angle needs to be acquired, that shooting visual angle is taken as the first shooting visual angle; here "first" does not limit which shooting visual angle it is, but only indicates that the image of the target object at that shooting visual angle is to be acquired, and each of the multiple shooting visual angles can be determined as the first shooting visual angle. Taking the first shooting visual angle as an example, the method for acquiring the image to be segmented and the background image is as follows: before the target object is placed into the scene of the first shooting visual angle, the current scene is photographed to acquire a background image that does not contain the target object. Then the target object is placed in the current scene and photographed, and the acquired image containing the target object is used as the image to be segmented (also called the foreground image). The images to be segmented and the background images at the other shooting visual angles can be acquired in the same way, and details are not repeated here. For ease of description, the background images acquired at the multiple shooting visual angles are denoted by [I_bg_c1, I_bg_c2, …, I_bg_cn], where I_bg_cm is the background image at the shooting visual angle corresponding to image acquisition device Cm, and the images to be segmented acquired at the multiple shooting visual angles are denoted by [I_f_c1, I_f_c2, …, I_f_cn], where I_f_cm is the image to be segmented at the shooting visual angle corresponding to image acquisition device Cm. The image acquisition device corresponding to the first shooting visual angle is denoted by C1, so I_bg_c1 and I_f_c1 denote the background image and the image to be segmented under the first shooting visual angle, respectively.
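As an illustration of the acquisition order described above, the sketch below captures one background image and then one image to be segmented per image acquisition device. It is a minimal example under assumed conditions: the camera device indices, the single-frame capture, and the I_bg_cm / I_f_cm naming are illustrative assumptions rather than details taken from the patent.

```python
import cv2

def capture_view_pairs(camera_indices, place_object):
    """Capture a background image, then an image to be segmented, for every view.

    camera_indices: device indices of the n cameras C1..Cn (assumed).
    place_object:   callback that waits until the target object is on the bearing plate.
    """
    backgrounds, foregrounds = {}, {}
    # First pass: photograph the current scene without the target object.
    for m, idx in enumerate(camera_indices, start=1):
        cap = cv2.VideoCapture(idx)
        ok, frame = cap.read()
        if ok:
            backgrounds[f"I_bg_c{m}"] = frame
        cap.release()
    # Place the target object on the bearing plate, then photograph it from every view.
    place_object()
    for m, idx in enumerate(camera_indices, start=1):
        cap = cv2.VideoCapture(idx)
        ok, frame = cap.read()
        if ok:
            foregrounds[f"I_f_c{m}"] = frame
        cap.release()
    return backgrounds, foregrounds
```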
Step S503, obtaining an initial silhouette of the target object under each shooting visual angle according to the acquired image to be segmented under each shooting visual angle and the background image.
Taking the first shooting visual angle as an example, the method for obtaining the initial silhouette of the target object under the first shooting visual angle from the image to be segmented and the background image is as follows: a dense prior field of the target object in the image to be segmented is obtained from the background image and the image to be segmented acquired under the first shooting visual angle; the dense prior field and the image to be segmented are then processed with a graph cut algorithm to obtain the initial silhouette of the target object under the first shooting visual angle. These steps are repeated for each of the other shooting visual angles to obtain the initial silhouette of the target object at every shooting visual angle.
The method for obtaining the dense prior field includes, but is not limited to:
First, the difference image of the image to be segmented I_f_c1 relative to the background image I_bg_c1 is obtained; that is, a background subtraction operation is performed on the image to be segmented I_f_c1 to obtain the difference image I_diff_c1. The formula involved is as follows:
I_diff_c1 = I_f_c1 - I_bg_c1
A binary image is then obtained according to the pixel values of the difference image and a preset difference threshold. For example, a difference threshold thre_d is preset; if the pixel value of the difference image I_diff_c1 at a certain position is greater than the difference threshold thre_d, the pixel value at the corresponding position of the binary image (also called the initial mask image) M_init_c1 is set to 1, otherwise it is set to 0. Comparing the pixel values at all positions of the difference image with the difference threshold in this way yields the pixel values at all positions of the binary image, i.e. the binary image itself. The formula involved is as follows:
M_init_c1(i, j) = 1 if I_diff_c1(i, j) > thre_d, and M_init_c1(i, j) = 0 otherwise
where M_init_c1(i, j) denotes the value of the binary image M_init_c1 at spatial position (i, j).
Finally, a denoising operation, an opening-and-closing operation and a connected-domain screening operation are performed on the binary image in sequence to obtain the dense prior field of the target object in the image to be segmented.
Specifically, the opening-and-closing operation means that an opening operation is performed first and a closing operation is then performed on its result. The denoising and opening-and-closing operations eliminate noise points, smooth the edges of the binary image, and break narrow connections between connected domains. If F1(·) denotes the denoising function and F2(·) denotes the opening-and-closing function, the connected-domain set S = F2(F1(M_init_c1(i, j))) is obtained after the binary image is denoised, opened and closed. The connected domains in S are then screened with an area threshold thre_area to obtain the dense prior field M_cut_c1 of the target object in the image to be segmented, which is used for image segmentation. The formula involved is as follows:
M_cut_c1 = { s_i | s_i ∈ S, Area(s_i) > thre_area }
where Area(·) denotes the area of a connected domain.
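A minimal sketch of the dense prior field computation described above is given below, assuming 8-bit colour images of equal size. The concrete threshold values, the use of an absolute per-channel difference, the median filter as the denoising operation and the 5×5 elliptical structuring element are illustrative assumptions, not values specified in the patent.

```python
import cv2
import numpy as np

def dense_prior_field(I_f, I_bg, thre_d=30, thre_area=500):
    # Background subtraction: difference image I_diff of I_f relative to I_bg
    # (absolute difference, maximum over the colour channels).
    I_diff = cv2.absdiff(I_f, I_bg).max(axis=2)

    # Binarization with the difference threshold thre_d -> initial mask M_init.
    M_init = (I_diff > thre_d).astype(np.uint8)

    # Denoising (median filter), then opening followed by closing: F2(F1(M_init)).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    M = cv2.medianBlur(M_init, 5)
    M = cv2.morphologyEx(M, cv2.MORPH_OPEN, kernel)
    M = cv2.morphologyEx(M, cv2.MORPH_CLOSE, kernel)

    # Connected-domain screening: keep components whose area exceeds thre_area.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(M, connectivity=8)
    M_cut = np.zeros_like(M)
    for i in range(1, n):                      # label 0 is the image background
        if stats[i, cv2.CC_STAT_AREA] > thre_area:
            M_cut[labels == i] = 1
    return M_cut
```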
Methods for obtaining the initial silhouette include, but are not limited to:
the method comprises the following steps of obtaining an initial silhouette of a target object in an image to be segmented by applying a graph cut algorithm to a dense prior field and the image to be segmented, wherein the graph cut algorithm has an energy optimization function:
E(If_c1)=aR(If_c1)+B(If_c1)
wherein R (I)f_c1) Is a region term, B (I)f_c1) As a boundary term, by applying energy E (I)f_c1) At the minimum, the classification of each pixel in the image to be segmented, namely belonging to the background or the foreground, namely the image to be segmented I can be obtainedf_C1Distinguishing the front background to obtain an initial silhouette M under a first shooting visual anglefinal_c1For the method of obtaining the initial silhouette of the target object at each of the other shooting perspectives of the multiple shooting perspectives, reference may be made to the above-mentioned method regarding the initial silhouette of the target object at the first shooting perspective, and details thereof are not repeated herein.
Before the energy optimization function is computed, a background pixel set P_bg_c1 and a foreground pixel set P_fg_c1 need to be obtained from the dense prior field M_cut_c1 and the image to be segmented I_f_c1. The formulas defining the two sets appear only as images in the original publication and are not reproduced here.
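The patent does not name a particular graph cut implementation, so the sketch below uses OpenCV's grabCut as a stand-in energy-minimising segmenter, seeded with the dense prior field M_cut_c1. Treating pixels inside the prior field as probable foreground and all remaining pixels as probable background is an assumption made only for illustration.

```python
import cv2
import numpy as np

def initial_silhouette(I_f, M_cut, iterations=5):
    # Seed labels derived from the dense prior field (assumed seeding scheme).
    mask = np.where(M_cut > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)

    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(I_f, mask, None, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_MASK)

    # The initial silhouette M_final marks the pixels classified as foreground.
    M_final = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
    return M_final.astype(np.uint8)
```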
step S504, a visual shell is generated based on the initial silhouette of the target object.
Methods for accomplishing this step include, but are not limited to:
first, a three-dimensional volume is initialized, and voxels of the three-dimensional volume are projected on each imaging plane.
A three-dimensional volume is assumed based on the approximate size of the target object; it is able to envelop the target object in the world coordinate system. The three-dimensional volume can be divided into small cubic cells, each of which is called a voxel. The imaging plane is the image coordinate system corresponding to a shooting visual angle; different shooting visual angles correspond to different image coordinate systems, and the voxels of the three-dimensional volume are projected into these coordinate systems. The formula involved is as follows:
Let the spatial coordinates of the center point P of a voxel be (X, Y, Z) and the coordinates of the projection of P onto the imaging plane be (x, y), which can be obtained according to the following formula:
s · [x, y, 1]^T = K · [R | T] · [X, Y, Z, 1]^T
where s is a scale factor, [x, y, 1]^T and [X, Y, Z, 1]^T are the corresponding homogeneous coordinates, K is the internal parameter matrix of the image acquisition device, R is the rotation matrix, and T is the translation matrix.
Then, for each voxel of the three-dimensional volume, the number of times its projection falls within the initial silhouettes is counted, and it is judged whether this number is not less than a preset number threshold; if so, the voxel is determined to be a voxel of the target object. The counting is repeated until all voxels of the three-dimensional volume have been processed, and the visible shell is formed from all the voxels of the target object.
That is, when the number of times is not less than the preset threshold, the voxel falls inside the silhouettes of enough shooting visual angles, actually belongs to the target object, and is therefore retained; when the number of times is less than the preset threshold, the voxel does not correspond to the target object and is eliminated. Performing this operation on every voxel determines all the voxels of the target object.
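The voxel-voting procedure above can be sketched as follows. The voxel grid, the per-view calibration data (K, R, T) and the vote threshold are assumptions supplied by the caller; the projection follows the formula given earlier in this step.

```python
import numpy as np

def visual_hull(silhouettes, cameras, grid_points, vote_thresh):
    """Return the voxel centers retained as voxels of the target object.

    silhouettes:  binary masks M_final (one per shooting visual angle), H x W each.
    cameras:      list of (K, R, T) tuples; K is 3x3, R is 3x3, T is 3x1.
    grid_points:  (N, 3) array of voxel-center coordinates X, Y, Z in world space.
    vote_thresh:  minimum number of views whose silhouette must contain the projection.
    """
    votes = np.zeros(len(grid_points), dtype=int)
    homog = np.hstack([grid_points, np.ones((len(grid_points), 1))])  # [X, Y, Z, 1]

    for mask, (K, R, T) in zip(silhouettes, cameras):
        P = K @ np.hstack([R, T])              # 3x4 projection matrix K[R | T]
        proj = (P @ homog.T).T                 # s*[x, y, 1] for every voxel center
        x = proj[:, 0] / proj[:, 2]
        y = proj[:, 1] / proj[:, 2]
        h, w = mask.shape
        inside = (x >= 0) & (x < w) & (y >= 0) & (y < h)
        xi, yi = x[inside].astype(int), y[inside].astype(int)
        hits = mask[yi, xi] > 0                # projection falls inside this silhouette
        votes[np.flatnonzero(inside)[hits]] += 1

    # Voxels voted for by at least vote_thresh views form the visible shell.
    return grid_points[votes >= vote_thresh]
```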
In other implementations, the visible shell may be generated by other methods; for example, the initial silhouette at the first shooting visual angle can be back-projected to obtain a viewing cone VS_C1, and likewise for the other shooting visual angles, so that the visible shell of the object is the intersection VH = ∩{ VS_Cm, m = 1, 2, …, n }.
And step S505, acquiring the actual silhouette of the target object according to the projection of the visible shell in the image coordinate system corresponding to the first shooting visual angle.
The method of projecting into the image coordinate system can refer to the related formula in step S504 above and is not described in detail here. Since the visible shell corresponds to the solid portion of the target object, the actual silhouette of the target object is obtained after projection.
In deep learning, image data of a large number of target objects need to be collected to train parameters. To facilitate the collection and labeling of image data, a bearing plate made of a transparent material can be used to carry the target object: the target object does not need to be turned over, because upright image data and inverted image data of the target object can be collected from different shooting visual angles. However, because the material is transparent, the upright image data collected also contain an inverted image (reflection) of the target object, which affects the accuracy of the data and thus the segmentation effect. The actual silhouette of the target object is therefore generated through the visible shell to eliminate the influence of the inverted image.
And S506, obtaining an image of the target object under the first shooting visual angle according to the actual silhouette and the image to be segmented.
Specifically, a point-wise operation is performed on the actual silhouette and the image to be segmented to obtain the image I_obj_c1 of the target object under the first shooting visual angle. The formula involved is as follows:
I_obj_c1 = I_f_c1 · M_final_c1
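As a small illustration of the point operation above, the actual silhouette (a 0/1 mask) can be broadcast over the colour channels of the image to be segmented; the array shapes assumed here are H×W×3 for the image and H×W for the mask.

```python
import numpy as np

def extract_target_image(I_f, M_final):
    # I_obj = I_f * M_final, applied per colour channel.
    return I_f * M_final[:, :, np.newaxis].astype(I_f.dtype)
```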
according to the method provided by the embodiment of the invention, the visual shell is obtained, the actual silhouette of the target object is obtained according to the projection of the visual shell in the image coordinate system corresponding to the first shooting visual angle, and the image of the target object is obtained according to the actual silhouette and the image to be segmented at the first shooting visual angle, so that the accuracy of image segmentation is improved.
Another embodiment of the present invention provides an image segmentation method, which combines the contents of the above embodiments, with reference to fig. 6, and the method flow is as follows:
in step S601, a plurality of shooting angles are preset.
Step S602, an image to be segmented and a background image under a first shooting angle of view are acquired, where the first shooting angle of view is one of multiple shooting angles of view.
Step S603, judging whether a target object is inverted in the image to be segmented under the first shooting visual angle, if not, executing step S604 and step S606, otherwise, skipping to step S605;
methods for accomplishing this step include, but are not limited to:
and judging whether the viewpoint of the first shooting visual angle is located above the plane of the bearing plate, if so, judging that the target object is inverted in the image to be segmented under the first shooting visual angle. The amount of calculation can be reduced by this step.
Step S604, acquiring an initial silhouette of the target object according to the image to be segmented and the background image under the first shooting visual angle, and taking the initial silhouette as an actual silhouette.
And step S605, obtaining a visual shell according to the acquired images to be segmented and the background images under all the shooting visual angles, and acquiring the actual silhouette of the target object according to the projection of the visual shell in the image coordinate system corresponding to the first shooting visual angle.
And step S606, obtaining an image of the target object under the first shooting visual angle according to the actual silhouette and the image to be segmented.
It should be noted that, for the implementation of step S601, reference may be made to the related description of step S501 in the foregoing embodiment; for step S602, to the related description of step S502; for step S604, to the related description of step S503; for step S605, to the related descriptions of steps S504 to S505; and for step S606, to the related description of step S506. Details are not repeated here.
According to the method provided by the embodiment of the invention, the visual shell is obtained, the actual silhouette of the target object is obtained according to the projection of the visual shell in the image coordinate system corresponding to the first shooting visual angle, and the image of the target object is obtained according to the actual silhouette and the image to be segmented at the first shooting visual angle, so that the accuracy of image segmentation is improved.
Referring to fig. 7, an embodiment of the present invention provides an image segmentation apparatus, configured to perform the method provided in the foregoing embodiment, where the method includes: a first shooting visual angle image acquisition module 701, a residual shooting visual angle image acquisition module 702, a visible shell generation module 703, an actual silhouette outline acquisition module 704 and a target object image obtaining module 705.
The first shooting visual angle image acquisition module 701 is configured to acquire an image to be segmented and a background image at a first shooting visual angle, where the first shooting visual angle is one of multiple shooting visual angles, a material of a bearing plate bearing a target object is a transparent material, and viewpoints at which the multiple shooting visual angles are located include viewpoints above and below the bearing plate. The remaining shooting angle image acquiring module 702 is configured to acquire an image to be segmented and a background image at a remaining shooting angle in a plurality of shooting angles. The visible shell generation module 703 is configured to obtain a visible shell according to the acquired image to be segmented and the background image at all the shooting angles. The actual silhouette acquisition module 704 is configured to acquire an actual silhouette of the target object according to a projection of the visible shell in an image coordinate system corresponding to the first capturing view. The target object image obtaining module 705 is configured to obtain an image of a target object at a first shooting view angle according to the actual silhouette and the image to be segmented at the first shooting view angle.
As an alternative embodiment, the image segmentation apparatus further includes: a judging module, configured to judge whether the target object is inverted in the image to be segmented under the first shooting visual angle; if not, the functions of the determining module and the target object image obtaining module are executed, otherwise processing jumps to the residual shooting visual angle image acquisition module; and a determining module, configured to acquire an initial silhouette of the target object according to the image to be segmented and the background image under the first shooting visual angle when the judging module judges that the target object is not inverted, and to take the initial silhouette as the actual silhouette.
As an optional embodiment, the determining module is specifically configured to: and judging whether the viewpoint of the first shooting visual angle is located above the plane of the bearing plate, if so, judging that the target object is inverted in the image to be segmented under the first shooting visual angle.
As an alternative embodiment, the visual shell generation module comprises: the initial silhouette acquiring unit is used for acquiring an initial silhouette of the target object under each shooting visual angle according to the image to be segmented under each shooting visual angle and the background image; the three-dimensional body unit is used for initializing a three-dimensional body, and the three-dimensional body envelopes the target object under a world coordinate system; the number counting unit is used for counting the number of times that the projection of the voxel of the three-dimensional body in the image coordinate system corresponding to each shooting visual angle is positioned in the corresponding initial silhouette; the judging unit is used for judging whether the times are larger than a preset time threshold value or not, and if so, determining the voxel of the three-dimensional body as the voxel of the target object; and the repeated execution unit is used for repeatedly executing the functions of the times counting unit and the judging unit so as to determine all voxels of the target object to form the visible shell.
As an alternative embodiment, the initial silhouette obtaining unit includes: the dense prior field obtaining subunit is used for obtaining a dense prior field of the target object in the image to be segmented according to the background image and the image to be segmented which are respectively obtained under the first shooting visual angle; the first initial silhouette obtaining unit is used for processing the dense prior field and the image to be segmented by using an image segmentation algorithm to obtain an initial silhouette of the target object in the image to be segmented; and the repeated execution subunit is used for repeatedly executing the functions of the dense prior field obtaining subunit and the first initial silhouette obtaining unit so as to obtain the initial silhouette of the target object under each shooting visual angle.
The processing manner of the first capturing view image acquiring module 701 may specifically refer to the related descriptions of steps S1 and S501 to S502 in the foregoing embodiment, the processing manner of the remaining capturing view image acquiring module 702 may specifically refer to the related descriptions of step S2 and steps S501 to S502 in the foregoing embodiment, the processing manner of the visible shell generating module 703 may specifically refer to the related descriptions of step S3 and steps S503 to 504 in the foregoing embodiment, the processing manner of the actual silhouette acquiring module 704 may specifically refer to the related descriptions of step S4 and step S505 in the foregoing embodiment, and the processing manner of the target object image acquiring module 705 may specifically refer to the related descriptions of step S5 and step S506 in the foregoing embodiment, which is not described in detail herein.
According to the device provided by the embodiment of the invention, the visual shell is obtained, the actual silhouette of the target object is obtained according to the projection of the visual shell in the image coordinate system corresponding to the first shooting visual angle, and the image of the target object under the first shooting visual angle is obtained according to the actual silhouette and the image to be segmented under the first shooting visual angle, so that the accuracy of image segmentation is improved.
An embodiment of the present invention provides an image segmentation apparatus, including: a bearing plate, a plurality of image acquisition devices, a processor and a memory. The bearing plate is used for bearing a target object and is made of a transparent material. The plurality of image acquisition devices are used for forming a plurality of shooting visual angles; each image acquisition device is used for acquiring an image to be segmented and a background image, and the image to be segmented contains the target object. The processor is configured to perform the aforementioned image segmentation method. The memory is used for storing executable instructions of the processor. In order to facilitate acquisition of images of the target object, the image segmentation apparatus further includes a turntable connected with the bearing plate to drive the bearing plate to rotate about its axis, so that the image acquisition devices can conveniently acquire images of the target object from multiple angles.
An embodiment of the present invention provides a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the foregoing image segmentation method.
It will be appreciated by those skilled in the art that the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The embodiments disclosed above are therefore to be considered in all respects as illustrative and not restrictive. All changes which come within the scope of or equivalence to the invention are intended to be embraced therein.

Claims (10)

1. An image segmentation method, characterized in that the image segmentation method comprises:
step S1, acquiring an image to be segmented and a background image under a first shooting visual angle, wherein the first shooting visual angle is one of a plurality of shooting visual angles;
step S2, acquiring images to be segmented and background images under the rest shooting visual angles in the plurality of shooting visual angles;
step S3, obtaining a visible shell according to the images to be segmented and the background images under all the acquired shooting visual angles;
step S4, acquiring the actual silhouette of the target object according to the projection of the visual shell in the image coordinate system corresponding to the first shooting visual angle;
step S5, obtaining an image of the target object at the first shooting visual angle according to the actual silhouette and the image to be segmented at the first shooting visual angle;
the material of the bearing plate bearing the target object is transparent, and the viewpoints of the plurality of shooting visual angles comprise viewpoints above and below the bearing plate;
the step S3 includes:
step S31, acquiring an initial silhouette of the target object at each shooting visual angle according to the image to be segmented at each shooting visual angle and the background image;
step S32, initializing a three-dimensional body, wherein the three-dimensional body envelops the target object under a world coordinate system;
step S33, counting the times that the projection of the voxel of the three-dimensional body in the image coordinate system corresponding to each shooting view angle is positioned in the corresponding initial silhouette;
step S34, judging whether the frequency is not less than a preset frequency threshold, if so, determining the voxel of the three-dimensional body as the voxel of the target object;
repeating steps S33 to S34 to determine all voxels of the target object to form a visible shell.
2. The image segmentation method according to claim 1, further comprising, after step S1:
step S6, judging whether a target object is inverted in the image to be segmented under the first shooting visual angle, if not, executing step S7 and step S5, otherwise, jumping to step S2;
step S7, acquiring an initial silhouette of the target object according to the image to be segmented and the background image at the first shooting angle, and taking the initial silhouette as an actual silhouette.
3. The image segmentation method according to claim 2, wherein the step S6 includes:
and judging whether the viewpoint of the first shooting visual angle is located above the plane of the bearing plate, if so, judging that the target object is inverted in the image to be segmented under the first shooting visual angle.
4. The image segmentation method according to claim 1, wherein the step S31 includes:
step S311, obtaining a dense prior field of the target object in the image to be segmented according to the background image and the image to be segmented which are respectively obtained under the first shooting visual angle;
step S312, processing the dense prior field and the image to be segmented by using a graph cut algorithm to obtain an initial silhouette of the target object in the image to be segmented;
and repeating the steps S311 to S312 to obtain an initial silhouette of the target object at each shooting angle.
5. An image segmentation apparatus, characterized in that the image segmentation apparatus comprises:
the device comprises a first shooting visual angle image acquisition module, a second shooting visual angle image acquisition module and a first segmentation module, wherein the first shooting visual angle is one of a plurality of shooting visual angles, a bearing plate bearing a target object is made of a transparent material, and viewpoints of the plurality of shooting visual angles comprise viewpoints above and below the bearing plate;
the residual shooting visual angle image acquisition module is used for acquiring images to be segmented and background images under the residual shooting visual angles in the plurality of shooting visual angles;
the visible shell generation module is used for obtaining a visible shell according to the acquired images to be segmented and the background images under all the shooting visual angles;
the actual silhouette acquisition module is used for acquiring the actual silhouette of the target object according to the projection of the visible shell in the image coordinate system corresponding to the first shooting visual angle;
the target object image obtaining module is used for obtaining an image of the target object under the first shooting visual angle according to the actual silhouette and the image to be segmented under the first shooting visual angle;
wherein the visual shell generation module comprises:
the initial silhouette acquiring unit is used for acquiring an initial silhouette of the target object under each shooting visual angle according to the image to be segmented under each shooting visual angle and the background image;
a three-dimensional body unit for initializing a three-dimensional body, which envelopes the target object under a world coordinate system;
the number counting unit is used for counting the number of times that the projection of the voxel of the three-dimensional body in the image coordinate system corresponding to each shooting visual angle is positioned in the corresponding initial silhouette;
the judging unit is used for judging whether the frequency is not less than a preset frequency threshold value, and if so, determining the voxel of the three-dimensional body as the voxel of the target object;
and the repeated execution unit is used for repeatedly executing the functions of the times counting unit and the judging unit so as to determine all voxels of the target object to form a visible shell.
6. The image segmentation apparatus according to claim 5, further comprising:
the judging module is used for judging whether a target object is inverted in the image to be segmented under the first shooting visual angle, if not, the functions of the determining module and the target object image obtaining module are executed, and if not, the function jumps to the residual shooting visual angle image obtaining module;
and the determining module is used for acquiring an initial silhouette of the target object according to the image to be segmented and the background image under the first shooting visual angle when the judging module judges that the target object is sometimes found, and taking the initial silhouette as an actual silhouette.
7. The image segmentation apparatus according to claim 6, wherein the determination module is specifically configured to: and judging whether the viewpoint of the first shooting visual angle is located above the plane of the bearing plate, if so, judging that the target object is inverted in the image to be segmented under the first shooting visual angle.
8. The image segmentation apparatus according to claim 5, wherein the initial silhouette acquisition unit includes:
the dense prior field obtaining subunit is configured to obtain a dense prior field of the target object in the image to be segmented according to a background image and the image to be segmented, which are respectively obtained under the first shooting view angle;
a first initial silhouette obtaining unit, configured to apply a graph cut algorithm to process the dense prior field and the image to be segmented, so as to obtain an initial silhouette of the target object in the image to be segmented;
and the repeated execution subunit is used for repeatedly executing the functions of the dense prior field obtaining subunit and the first initial silhouette obtaining unit so as to obtain an initial silhouette of the target object under each shooting visual angle.
9. An image segmentation apparatus, comprising:
the bearing plate is used for bearing a target object and is made of a transparent material;
a plurality of image acquisition devices, wherein the image acquisition devices are used for forming a plurality of shooting visual angles;
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-4.
10. A computer readable storage medium having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by a processor to implement the method of any of claims 1 to 4.
CN201810950326.9A 2018-08-20 2018-08-20 Image segmentation method and device Active CN109035279B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810950326.9A CN109035279B (en) 2018-08-20 2018-08-20 Image segmentation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810950326.9A CN109035279B (en) 2018-08-20 2018-08-20 Image segmentation method and device

Publications (2)

Publication Number Publication Date
CN109035279A CN109035279A (en) 2018-12-18
CN109035279B (en) 2022-04-12

Family

ID=64632197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810950326.9A Active CN109035279B (en) 2018-08-20 2018-08-20 Image segmentation method and device

Country Status (1)

Country Link
CN (1) CN109035279B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651357A (en) * 2020-12-30 2021-04-13 浙江商汤科技开发有限公司 Segmentation method of target object in image, three-dimensional reconstruction method and related device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6915006B2 (en) * 1998-01-16 2005-07-05 Elwin M. Beaty Method and apparatus for three dimensional inspection of electronic components
CN101334900B (en) * 2008-08-01 2011-07-27 北京大学 Image based plotting method
CN103854301A (en) * 2012-11-29 2014-06-11 沈阳工业大学 3D reconstruction method of visible shell in complex background
CN107976441A (en) * 2017-11-20 2018-05-01 广东泰安模塑科技股份有限公司 A kind of product automatic identification equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Recognition and positioning of strawberry fruits for harvesting robot based on convex hull; Lijiao Zhang et al.; ASABE; 2014-12-31; Vol. 2014; pp. 1-10 *

Also Published As

Publication number Publication date
CN109035279A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
Reinbacher et al. Real-time panoramic tracking for event cameras
JP6902028B2 (en) Methods and systems for large scale determination of RGBD camera orientation
Weise et al. In-hand scanning with online loop closure
US20200234397A1 (en) Automatic view mapping for single-image and multi-view captures
Strecha et al. On benchmarking camera calibration and multi-view stereo for high resolution imagery
RU2642167C2 (en) Device, method and system for reconstructing 3d-model of object
EP2383699B1 (en) Method for estimating a pose of an articulated object model
US8363926B2 (en) Systems and methods for modeling three-dimensional objects from two-dimensional images
EP3598385B1 (en) Face deblurring method and device
US11783443B2 (en) Extraction of standardized images from a single view or multi-view capture
Kehl et al. Real-time 3D model tracking in color and depth on a single CPU core
WO2015017941A1 (en) Systems and methods for generating data indicative of a three-dimensional representation of a scene
CN111640145B (en) Image registration method and related model training method, equipment and device thereof
WO2019122205A1 (en) Method and apparatus for generating a three-dimensional model
US20150147047A1 (en) Simulating tracking shots from image sequences
CN112991458B (en) Rapid three-dimensional modeling method and system based on voxels
CN110910431B (en) Multi-view three-dimensional point set recovery method based on monocular camera
CN112598789A (en) Image texture reconstruction method, device and equipment and storage medium
Li et al. 3d reconstruction and texture optimization using a sparse set of rgb-d cameras
CN109035279B (en) Image segmentation method and device
CN110766731A (en) Method and device for automatically registering panoramic image and point cloud and storage medium
Wang et al. 3D modeling from wide baseline range scans using contour coherence
Malleson et al. Single-view RGBD-based reconstruction of dynamic human geometry
CN111192308A (en) Image processing method and device, electronic equipment and computer storage medium
Lu et al. Multi-view stereo reconstruction with high dynamic range texture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant