CN111144212A - Depth image target segmentation method and device
- Publication number: CN111144212A (application CN201911173140.8A)
- Authority
- CN
- China
- Prior art keywords: palm, upper limb, depth image, arm, area
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/113 — Recognition of static hand signs (G Physics › G06 Computing; calculating or counting › G06V Image or video recognition or understanding › G06V40/00 Biometric, human-related or animal-related patterns › G06V40/10 Human or animal bodies; body parts, e.g. hands › G06V40/107 Static hand or arm)
- G06T7/187 — Segmentation; edge detection involving region growing, region merging, or connected component labelling (G Physics › G06T Image data processing or generation, in general › G06T7/00 Image analysis › G06T7/10 Segmentation; edge detection)
- G06T7/66 — Analysis of geometric attributes of image moments or centre of gravity (G Physics › G06T Image data processing or generation, in general › G06T7/00 Image analysis › G06T7/60 Analysis of geometric attributes)
- G06V10/267 — Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds (G Physics › G06V Image or video recognition or understanding › G06V10/00 Arrangements for image or video recognition or understanding › G06V10/20 Image preprocessing › G06V10/26 Segmentation of patterns in the image field)
Abstract
The application provides a depth image target segmentation method and device. The method comprises the following steps: determining the palm-area centroid position and the arm-area centroid position in an upper limb depth image according to the palm center position in that image; determining the palm bottom edge according to the two centroid positions; determining the arm front-end edge according to the palm bottom edge, and determining the midpoint position of the arm front-end edge; determining the boundary position between the palm area and the arm area according to the midpoint position of the palm bottom edge and the midpoint position of the arm front-end edge; and segmenting the palm image area from the upper limb depth image based on that boundary position. With this technical scheme, the palm image area can be segmented from the upper limb depth image accurately.
Description
Technical Field
The present application relates to the field of depth image processing technologies, and in particular, to a depth image target segmentation method and apparatus.
Background
With the development of machine vision, applications and markets related to 3D camera technology have grown explosively. For example, 3D gesture interaction relies on a depth camera to let a user experience interaction with virtual three-dimensional objects, which necessarily involves technologies such as depth-image-based hand detection and hand tracking.
In the conventional depth image processing technology, palm center coordinates can be accurately detected, but the hand region cannot be accurately segmented from the depth image.
Disclosure of Invention
Based on the above requirements, the present application provides a depth image target segmentation method and device, which can segment a hand region from a depth image.
A depth image object segmentation method comprises the following steps:
determining a palm area centroid position and an arm area centroid position in the upper limb depth image according to the palm center position in the upper limb depth image; the upper limb depth image comprises an image obtained by performing depth imaging on the palm and the arm of the same upper limb;
determining the bottom edge of the palm according to the centroid position of the palm area and the centroid position of the arm area;
determining the edge of the front end of an arm according to the bottom edge of the palm, and determining the midpoint position of the edge of the front end of the arm;
determining a boundary position between a palm region and an arm region according to the midpoint position of the bottom edge of the palm and the midpoint position of the front edge of the arm;
and segmenting a palm image area from the upper limb depth image based on the boundary position between the palm area and the arm area.
Optionally, the determining the palm area centroid position and the arm area centroid position in the upper limb depth image according to the palm center position in the upper limb depth image includes:
and taking the palm center position in the upper limb depth image as a seed point, and determining the palm area centroid position and the arm area centroid position by performing region growing processing in the upper limb depth image.
Optionally, the determining the palm area centroid position and the arm area centroid position by using the palm center position in the upper limb depth image as a seed point and performing region growing processing in the upper limb depth image includes:
determining a palm region and an arm region in an upper limb depth image by taking a palm center position in the upper limb depth image as a seed point and performing unconditional region growing processing in the upper limb depth image and region growing processing within a set distance range from the seed point; the set distance is determined according to the camera focal length of the upper limb depth image obtained by shooting and the depth value of the palm center in the upper limb depth image;
and respectively calculating the mass centers of the palm area and the arm area, and determining the mass center position of the palm area and the mass center position of the arm area.
Optionally, the determining the palm region and the arm region in the upper limb depth image by using the palm center position in the upper limb depth image as a seed point and performing unconditional region growing processing in the upper limb depth image and region growing processing within a set distance range from the seed point includes:
taking the palm center position in the upper limb depth image as a seed point, and carrying out unconditional region growing treatment in the upper limb depth image to obtain an upper limb mask image;
determining an effective upper limb area in the upper limb depth image according to the upper limb mask image and the upper limb depth image;
taking the palm center position in the upper limb depth image as a seed point, and performing region growing processing within a set distance range from the seed point in the upper limb depth image to obtain a palm region mask image;
and determining a palm area and an arm area in the upper limb depth image according to the palm area mask image and the effective upper limb area.
Optionally, the determining the palm bottom edge according to the palm region centroid position and the arm region centroid position includes:
determining the position of the middle point of the wrist according to the position of the mass center of the palm area and the position of the mass center of the arm area;
and determining the bottom edge of the palm according to the centroid position of the palm area and the middle point position of the wrist.
Optionally, the determining a boundary position between the palm region and the arm region according to the midpoint position of the bottom edge of the palm and the midpoint position of the front edge of the arm includes:
determining a pixel row where the central axis of the arm is located according to the position of the middle point of the bottom edge of the palm and the position of the middle point of the front edge of the arm;
and determining the boundary position between the palm area and the arm area according to the middle point position of the bottom edge of the palm and the pixel row where the central axis of the arm is located.
A depth image object segmentation apparatus comprising:
the position determining unit is used for determining the palm area centroid position and the arm area centroid position in the upper limb depth image according to the palm center position in the upper limb depth image; the upper limb depth image comprises an image obtained by performing depth imaging on the palm and the arm of the same upper limb;
the first calculation unit is used for determining the palm bottom edge according to the palm area centroid position and the arm area centroid position;
the second computing unit is used for determining the edge of the front end of the arm according to the bottom edge of the palm and determining the midpoint position of the edge of the front end of the arm;
the third calculation unit is used for determining the boundary position between the palm region and the arm region according to the midpoint position of the bottom edge of the palm and the midpoint position of the front edge of the arm;
and the image segmentation unit is used for segmenting a palm image area from the upper limb depth image based on the boundary position between the palm area and the arm area.
Optionally, when the position determining unit determines the palm area centroid position and the arm area centroid position in the upper limb depth image according to the palm center position in the upper limb depth image, the position determining unit is specifically configured to:
and taking the palm center position in the upper limb depth image as a seed point, and determining the palm area centroid position and the arm area centroid position by performing region growing processing in the upper limb depth image.
Optionally, the position determining unit is specifically configured to, when determining the palm area centroid position and the arm area centroid position by performing region growing processing in the upper limb depth image with the palm center position in the upper limb depth image as a seed point:
determining a palm region and an arm region in an upper limb depth image by taking a palm center position in the upper limb depth image as a seed point and performing unconditional region growing processing in the upper limb depth image and region growing processing within a set distance range from the seed point; the set distance is determined according to the camera focal length of the upper limb depth image obtained by shooting and the depth value of the palm center in the upper limb depth image;
and respectively calculating the mass centers of the palm area and the arm area, and determining the mass center position of the palm area and the mass center position of the arm area.
Optionally, the position determining unit is configured to determine the palm region and the arm region in the upper limb depth image by using a palm center position in the upper limb depth image as a seed point and performing unconditional region growing processing in the upper limb depth image and region growing processing within a set distance range from the seed point, and is specifically configured to:
taking the palm center position in the upper limb depth image as a seed point, and carrying out unconditional region growing treatment in the upper limb depth image to obtain an upper limb mask image;
determining an effective upper limb area in the upper limb depth image according to the upper limb mask image and the upper limb depth image;
taking the palm center position in the upper limb depth image as a seed point, and performing region growing processing within a set distance range from the seed point in the upper limb depth image to obtain a palm region mask image;
and determining a palm area and an arm area in the upper limb depth image according to the palm area mask image and the effective upper limb area.
The depth image target segmentation method provided by the application performs palm-region segmentation on an upper limb depth image. First, the palm-area centroid position and the arm-area centroid position in the upper limb depth image are determined according to the palm center coordinates in that image. Then, the palm bottom edge and the midpoint position of the arm front edge are determined from the two centroid positions. Finally, the boundary position between the palm region and the arm region is determined from the midpoint of the palm bottom edge and the midpoint of the arm front edge, and the palm image region is segmented from the upper limb depth image according to that boundary position. Throughout this process, the boundary between the palm region and the arm region is determined starting from the palm center coordinates, so accurate segmentation of the palm image region is realized.
Drawings
To illustrate the embodiments of the present application or the prior-art technical solutions more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a depth image target segmentation method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an upper limb depth image provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of another depth image object segmentation method provided in an embodiment of the present application;
FIG. 4 is a schematic flowchart of another depth image object segmentation method provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of another depth image object segmentation apparatus according to an embodiment of the present application.
Detailed Description
The technical scheme of the embodiment of the application is suitable for the application scene of segmenting the palm image area in the upper limb depth image. By adopting the technical scheme of the embodiment of the application, the palm image area can be segmented from the upper limb depth image.
For example, the technical solution of the present application may run on a hardware device such as a hardware processor, or be packaged into a software program for execution. When the hardware processor executes the processing procedure of the technical solution, or the software program is run, the palm image region and the arm image region can be distinguished in the upper limb depth image, so that the palm image region can be separated from it. The embodiment only introduces the specific processing procedure by way of example and does not limit the specific execution form; any technical implementation capable of executing this processing procedure may be adopted.
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
The embodiment of the application provides a depth image target segmentation method, and as shown in fig. 1, the method includes:
s101, determining a palm area centroid position and an arm area centroid position in the upper limb depth image according to a palm center position in the upper limb depth image; the upper limb depth image comprises an image obtained by performing depth imaging on the palm and the arm of the same upper limb;
specifically, the upper limb depth image refers to a depth image obtained by performing depth imaging on an upper limb of a human body, and particularly to a depth image obtained by performing depth imaging on a forearm part of the human body. The depth image includes a human arm and a palm. The upper limb depth image can be seen in fig. 2.
The position coordinates of the palm center K in the upper limb depth image are determined in advance in the embodiment of the application. For example, the coordinates of the palm center K may be determined from a preliminary hand detection result by calculating its centroid or a DenseSIFT cluster center. Alternatively, known palm center coordinate data for the upper limb depth image may be read directly.
Then, based on the palm center coordinates, determining a palm area and an arm area in the upper limb depth image, and further determining a mass center position of the palm area and a mass center position of the arm area.
It should be noted that the palm region and the arm region determined in this step are not the exact palm region and arm region, but are only roughly determined, and especially, the boundary between the two regions cannot be determined exactly. The embodiment of the application further determines the boundary between the two through subsequent processing.
For example, in the embodiment of the present application, the palm region and the arm region in the upper limb depth image are determined by performing region growing processing on the upper limb depth image with the position of the palm center K as a seed point. Further, calculating the mass centers of the palm area and the arm area, and respectively determining the mass center positions of the palm area and the arm area.
S102, determining the bottom edge of the palm according to the centroid position of the palm area and the centroid position of the arm area;
specifically, the bottom edge of the palm refers to a boundary between the palm and the arm.
The embodiment of the application determines the junction position of the palm and the arm, namely the position of the middle point W of the wrist by calculating the middle point position of the connecting line between the palm area centroid position and the arm area centroid position.
Then, the pixel row containing the palm central axis Lcw is determined, and at the palm–arm junction position W a pixel row LTcw perpendicular to the row containing Lcw is determined. The set Sw of pixels of LTcw that fall within the palm-area range is the palm bottom edge.
S103, determining the edge of the front end of an arm according to the edge of the bottom of the palm, and determining the position of the middle point of the edge of the front end of the arm;
specifically, after the palm bottom edge Sw is determined, in the embodiment of the present application, a pixel row PLTcw parallel to the palm bottom edge Sw and located at a distance b from the pixel row LTcw in which the palm bottom edge Sw is located is determined from the upper limb depth image.
The pixel line PLTcw is the pixel line where the arm front edge is located. The distance b may be flexibly set according to actual conditions or may be set according to experience.
The set Spw of pixels of the pixel row PLTcw that fall within the arm-region range is the arm front edge, and the midpoint position of Spw, that is, the position of the arm front-edge midpoint Mpw, is determined.
Illustratively, after determining the palm bottom edge Sw and the pixel row LTcw in which the palm bottom edge is located, the parallel pixel row PLTcw of the pixel row LTcw is calculated:
A(PLTcw)x+B(PLTcw)y+C(PLTcw)=0
where A, B and C are the coefficients of the straight-line equation of the pixel row.
The pixel line PLTcw and the pixel line LTcw satisfy the following distance requirement:
|C(PLTcw) - C(LTcw)| / (A(LTcw)^2 + B(LTcw)^2)^(1/2) = b
Moreover, the pixel row PLTcw and the palm-area centroid lie on opposite sides of the pixel row LTcw. The value of b may be set flexibly according to actual conditions, or set from experience.
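As an illustrative sketch of the relation above (assuming the parallel lines share the coefficients A and B, as they do for parallel lines in normal form; the function name is not from the patent text), the coefficient C(PLTcw) can be obtained by offsetting C(LTcw) by b times the normal's length, with the sign chosen so the new row lies on the opposite side of LTcw from the palm-area centroid:

```python
import math

def parallel_line_at_distance(A, B, C_ltcw, b, centroid):
    # Coefficient C of the line A*x + B*y + C = 0 parallel to the input
    # line, at distance b from it, on the opposite side from `centroid`.
    norm = math.hypot(A, B)
    xc, yc = centroid
    side = A * xc + B * yc + C_ltcw    # signed position of the centroid
    sign = 1.0 if side > 0 else -1.0   # shift away from the centroid's side
    return C_ltcw + sign * b * norm
```

For example, with the horizontal row y = 2 (A = 0, B = 1, C = -2) and a centroid at (0, 5), an offset of b = 3 yields C = 1, i.e. the row y = -1, below the original row while the centroid is above it.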
S104, determining a boundary position between a palm region and an arm region according to the middle point position of the bottom edge of the palm and the middle point position of the front edge of the arm;
specifically, the boundary position between the palm region and the arm region specifically refers to a pixel row where the boundary between the palm region and the arm region is located.
For example, in the embodiment of the present application, the pixel row containing the line through the palm bottom-edge midpoint Mw(xmw, ymw) and the arm front-edge midpoint Mpw(xmpw, ympw) is determined first, and then a pixel row perpendicular to it is determined as the boundary position between the palm region and the arm region.
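The two line constructions in this step can be sketched as follows (a minimal sketch, assuming the perpendicular is anchored at a given point, which the text leaves implicit; function names are illustrative):

```python
def line_through(p, q):
    # (A, B, C) of the line A*x + B*y + C = 0 through points p and q
    (x1, y1), (x2, y2) = p, q
    A, B = y2 - y1, x1 - x2
    return A, B, -(A * x1 + B * y1)

def perpendicular_through(line, p):
    # line perpendicular to `line`, passing through point p
    A, B, _ = line
    x0, y0 = p
    Ap, Bp = B, -A    # the perpendicular's normal is the original direction
    return Ap, Bp, -(Ap * x0 + Bp * y0)
```

For instance, with Mw = (2, 0) and Mpw = (4, 0) on a horizontal axis, the perpendicular through Mw is the vertical row x = 2, which then serves as the palm–arm boundary row.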
And S105, segmenting a palm image area from the upper limb depth image based on the boundary position between the palm area and the arm area.
Specifically, on the basis of determining the boundary position between the palm region and the arm region in the above-described upper limb depth image and determining the palm region and the arm region in step S101, the palm region and the arm region can be accurately divided in the upper limb depth image, so that the palm image region can be divided from the upper limb depth image.
For example, the portion of the effective upper limb image region lying on the same side of the palm–arm boundary as the palm center coordinates is determined as the palm image region, and segmenting it out of the effective upper limb image region yields the palm image region.
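The side test just described can be sketched as follows (an illustrative sketch, not the patent's implementation; pixels are kept when their signed distance to the boundary line has the same sign as the palm center's):

```python
def on_palm_side(boundary, palm_center, pixel):
    # True if `pixel` lies strictly on the same side of the boundary
    # line A*x + B*y + C = 0 as the palm center
    A, B, C = boundary
    xk, yk = palm_center
    xp, yp = pixel
    return (A * xk + B * yk + C) * (A * xp + B * yp + C) > 0

def segment_palm(valid_pixels, boundary, palm_center):
    # keep only the effective upper-limb pixels on the palm side
    return [p for p in valid_pixels if on_palm_side(boundary, palm_center, p)]
```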
As can be seen from the above description, the depth image target segmentation method provided in the embodiment of the present application performs palm-region segmentation on an upper limb depth image. First, the palm-area centroid position and the arm-area centroid position in the upper limb depth image are determined according to the palm center coordinates in that image. Then, the palm bottom edge and the midpoint position of the arm front edge are determined from the two centroid positions. Finally, the boundary position between the palm region and the arm region is determined from the midpoint of the palm bottom edge and the midpoint of the arm front edge, and the palm image region is segmented from the upper limb depth image according to that boundary position. Throughout this process, the boundary between the palm region and the arm region is determined starting from the palm center coordinates, so accurate segmentation of the palm image region is realized.
As an exemplary implementation manner, the determining the palm area centroid position and the arm area centroid position in the upper limb depth image according to the palm center position in the upper limb depth image further disclosed in an embodiment of the present application includes:
and taking the palm center position in the upper limb depth image as a seed point, and determining the palm area centroid position and the arm area centroid position by performing region growing processing in the upper limb depth image.
Referring to fig. 3, the processing procedure specifically includes:
s301, carrying out unconditional region growing treatment in the upper limb depth image by taking the palm center position in the upper limb depth image as a seed point to obtain an upper limb mask image;
Specifically, with the palm center position (x0, y0) as the seed point, the eight-neighborhood (or four-neighborhood: up, down, left, right) of the current reference seed point is visited. If the absolute difference between a neighbor's pixel value and the current reference seed point's pixel value is within the set range, the neighbor's coordinates are added to the seed-point queue and marked valid; otherwise the point is skipped. Coordinates that have been visited are marked and are not visited again.
Then, coordinates are read from the seed-point queue in sequence until the queue is empty; as each coordinate is read, its neighborhood pixel values are checked against the pixel-value-difference condition. Finally, all coordinates meeting the condition are returned, generating a mask image satisfying the growth condition.
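The queue-based growing of step S301 can be sketched as follows (a minimal NumPy sketch under the stated rules — threshold against the current reference seed, visit each pixel once; the function name and threshold parameter are illustrative, not from the patent):

```python
from collections import deque
import numpy as np

def region_grow(depth, seed, diff_thresh, four_neighbors=False):
    # Grow a binary mask from `seed` over pixels whose value differs from
    # the current reference seed point by at most `diff_thresh`.
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    visited = np.zeros((h, w), dtype=bool)
    if four_neighbors:
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0)]
    y0, x0 = seed
    queue = deque([(y0, x0)])
    visited[y0, x0] = True
    mask[y0, x0] = 1
    while queue:
        y, x = queue.popleft()            # current reference seed point
        for dy, dx in offsets:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                visited[ny, nx] = True    # never visit a coordinate twice
                if abs(int(depth[ny, nx]) - int(depth[y, x])) <= diff_thresh:
                    mask[ny, nx] = 1      # mark valid, enqueue as new seed
                    queue.append((ny, nx))
    return mask
```

Note that, as in the text, a rejected coordinate is still marked visited and is not reconsidered from a later reference seed.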
S302, determining an effective upper limb area in the upper limb depth image according to the upper limb mask image and the upper limb depth image;
specifically, the mask image generated in step S301 and the upper limb depth image are subjected to bitwise and (&) operation, so as to obtain an effective upper limb area in the upper limb depth image.
The above-mentioned & operation can keep the depth value of the position of the upper limb depth image corresponding to the effective position marked in the mask image, and the depth value of other ineffective positions is 0. After this operation, the pixel position of the upper limb depth image where the depth value is retained is the effective upper limb pixel position, the pixel position of the upper limb depth image where the depth value is 0 is the ineffective upper limb pixel position, and all the effective pixel positions constitute the effective upper limb area.
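The effect of this masking step can be sketched in NumPy as follows (an equivalent-in-effect sketch of the bitwise AND described above, assuming a 0/1 mask; the function name is illustrative):

```python
import numpy as np

def effective_upper_limb(depth, mask):
    # keep depth values at positions marked valid in the mask; all other
    # (invalid) positions get depth value 0, as described in step S302
    return np.where(mask > 0, depth, 0)
```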
S303, taking the palm center position in the upper limb depth image as a seed point, and performing region growing processing within a set distance range from the seed point in the upper limb depth image to obtain a palm region mask image;
Specifically, the palm position coordinates (x0, y0) are used as the seed point (this point is also the original seed point), a planar distance threshold λ from (x0, y0) is set, and region growing is performed only within the distance λ of the original seed.
During each growing step, two conditions are checked for the eight-neighborhood (or four-neighborhood) coordinates (x', y') of the current seed point (x, y): the pixel-value error between (x', y') and (x, y) must satisfy the set condition, and the distance from the original seed point (the palm center position) (x0, y0) must satisfy (x' - x0)^2 + (y' - y0)^2 <= λ^2. If both conditions are met, the point is marked valid and added to the seed-point queue, waiting to be read in turn as a reference seed point; otherwise it is skipped. Visited points are marked as read and are not accessed again.
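The two per-step conditions can be expressed as a small predicate (an illustrative sketch; parameter names are not from the patent text):

```python
def grow_condition(depth_neighbor, depth_seed, diff_thresh,
                   neighbor, original_seed, lam):
    # Both conditions of the distance-limited growing step: pixel-value
    # error within the threshold AND within lambda of the original seed.
    xn, yn = neighbor
    x0, y0 = original_seed
    value_ok = abs(depth_neighbor - depth_seed) <= diff_thresh
    dist_ok = (xn - x0) ** 2 + (yn - y0) ** 2 <= lam * lam
    return value_ok and dist_ok
```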
After the area growth stops, sequentially reading the coordinates in the seed point queue until the dequeuing is finished, calculating whether the pixel value of the neighborhood coordinate meets the pixel value difference condition when reading one coordinate, and finally returning all the coordinates meeting the condition to generate a mask image meeting the growth condition, namely obtaining the mask image of the palm area.
The value of λ may be determined with reference to λ = 250 × (fx + fy) / 2 / d.
where fx and fy are the camera's horizontal and vertical intrinsic focal lengths, and d is the depth value of the palm center (in mm).
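As a sketch of this reference formula (by the pinhole model, pixels ≈ size × f / depth, so the constant 250 presumably encodes a physical extent of roughly 250 mm; this interpretation is an assumption, not stated in the text):

```python
def palm_radius_lambda(fx, fy, d_mm):
    # lambda = 250 * (fx + fy) / 2 / d: pixel radius corresponding to
    # about a 250 mm extent at depth d, using the mean focal length
    return 250.0 * (fx + fy) / 2.0 / d_mm
```

For example, with fx = fy = 500 px and the palm at d = 500 mm, λ is 250 pixels.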
S304, determining a palm area and an arm area in the upper limb depth image according to the palm area mask image and the effective upper limb area;
specifically, the palm area mask image and the effective upper limb area are subjected to bitwise and (&) operation, a coordinate point of a position marked with a mask as an effective position is regarded as a palm area, and the palm area is classified into a hand point set, namely a palm area; the other invalid regions are regarded as arm regions and classified as arm point sets, namely arm regions. Note that here the depth values of hand and all points in the arm point set are within the set range, excluding the 0 value and the value larger than the farthest point.
It is understood that the processing in steps S301 to S304 determines the palm region and the arm region in the upper limb depth image by performing the unconditional region growing processing in the upper limb depth image with the palm center position in the upper limb depth image as the seed point and the region growing processing within the set distance range from the seed point.
S305, calculating the mass centers of the palm area and the arm area respectively, and determining the mass center position of the palm area and the mass center position of the arm area.
Specifically, the centroid of the set is calculated in the palm region, that is, the hand point set, to obtain the centroid position of the palm region; and calculating the mass center of the set in the arm area, namely the arm point set, and obtaining the mass center position of the arm area.
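Computing the centroid of each point set is a simple mean of coordinates (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def centroid(points):
    # centroid (mean position) of a point set given as (x, y) pairs
    pts = np.asarray(points, dtype=float)
    return tuple(pts.mean(axis=0))
```

Applied once to the hand point set and once to the arm point set, this yields the palm-area and arm-area centroid positions C and A.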
Steps S306 to S309 in this embodiment correspond to steps S102 to S105 in the method embodiment shown in fig. 1, and for details, please refer to the contents of the method embodiment shown in fig. 1, which is not described herein again.
Fig. 4 shows a processing procedure of another embodiment of the present application, in which the processing procedure of the depth image object segmentation method proposed in the present application is shown in detail from another perspective.
Referring to fig. 4, the depth image target segmentation method provided by the present application specifically includes:
s401, determining a palm area centroid position and an arm area centroid position in the upper limb depth image according to the palm center position in the upper limb depth image;
specifically, the specific processing procedure of this step can be referred to as the processing procedure shown in fig. 3, and is not repeated here.
The palm area centroid position and the arm area centroid position are respectively shown as a point C and a point a in fig. 2.
S402, determining the position of the middle point of the wrist according to the position of the mass center of the palm area and the position of the mass center of the arm area;
specifically, as shown in fig. 2, assuming that the coordinates of the palm region centroid position C are (xc, yc) and the coordinates of the arm region centroid position a are (xa, ya), the midpoint position between the point C and the point a is the wrist midpoint position in the first person's perspective, and assuming that the position is W (xw, yw), xw is (xc + xa)/2, and yw is (yc + ya)/2.
According to the position relation, the position of the middle point of the wrist can be calculated and determined.
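The wrist midpoint computation is plain coordinate arithmetic; for instance (the centroid coordinates are made up):

```python
def wrist_midpoint(c, a):
    """Wrist midpoint W of palm centroid C = (xc, yc) and arm centroid A = (xa, ya)."""
    (xc, yc), (xa, ya) = c, a
    return ((xc + xa) / 2.0, (yc + ya) / 2.0)

W = wrist_midpoint((10.0, 20.0), (30.0, 40.0))  # -> (20.0, 30.0)
```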
S403, determining the bottom edge of the palm according to the centroid position of the palm area and the middle point position of the wrist;
specifically, as shown in fig. 2, a straight line equation Lcw of a pixel row passing through the palm area centroid position C point and the wrist midpoint position W point is first calculated: a (lcw) x + b (lcw) y + c (lcw) 0; wherein A, B, C are all equation coefficients.
Then, the straight-line equation LTcw of the pixel row passing through point W and perpendicular to Lcw is calculated: A(LTcw)·x + B(LTcw)·y + C(LTcw) = 0, and the pixel row LTcw is regarded as the pixel row where the palm bottom edge is located.
Finally, the pixel set Sw formed by the pixels of the effective upper limb area determined in step S401 that lie in the pixel row LTcw is determined to be the palm bottom edge.
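The line constructions of this step can be sketched with the two-point form of A·x + B·y + C = 0. The helper names, the example coordinates, and the pixel-selection tolerance are illustrative assumptions:

```python
import numpy as np

def line_through(p1, p2):
    """Coefficients (A, B, C) of the line A*x + B*y + C = 0 through p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1, x1 - x2, x2 * y1 - x1 * y2)

def perpendicular_at(line, p):
    """Line perpendicular to `line` that passes through point p."""
    a, b, _ = line
    px, py = p
    return (b, -a, a * py - b * px)  # normals (A,B) and (B,-A) are orthogonal

def pixels_on_line(points, line, tol=0.7):
    """Subset of the given pixels lying within distance `tol` of the line."""
    a, b, c = line
    pts = np.asarray(points, dtype=float)
    dist = np.abs(a * pts[:, 0] + b * pts[:, 1] + c) / np.hypot(a, b)
    return pts[dist <= tol]

# Toy coordinates: C = (0, 0) and W = (4, 0), so Lcw is the x-axis and
# LTcw is the vertical line x = 4 through W.
Lcw = line_through((0.0, 0.0), (4.0, 0.0))
LTcw = perpendicular_at(Lcw, (4.0, 0.0))
Sw = pixels_on_line([(4.0, 0.0), (4.0, 1.0), (3.0, 0.0)], LTcw)
```

Here only the two pixels with x = 4 fall on LTcw and form the palm-bottom-edge set Sw.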
S404, determining the edge of the front end of an arm according to the edge of the bottom of the palm, and determining the position of the middle point of the edge of the front end of the arm;
specifically, according to the pixel row LTcw where the palm bottom edge Sw is located, a pixel row linear equation PLTcw parallel to LTcw is calculated: a (PLTcw) x + b (PLTcw) y + C (PLTcw) ═ 0, so that | C (PLTcw) |/(a (LTcw) × a (LTcw) + b (LTcw) × b (LTcw))1/2 ═ b, that is, so that the distance between the two line equations is b, and the pixel row PLTcw and the palm region centroid position C point are located on both sides of the line LTcw, respectively. The pixel row where the above-mentioned linear equation PLTcw is located is the pixel row where the front edge of the arm is located.
Then, the pixel set Spw formed by the pixels of the effective upper limb area determined in step S401 that lie in the pixel row PLTcw is determined to be the arm front end edge.
Finally, the midpoint coordinates Mpw (xmpw, ympw) of the pixel set Spw are calculated, i.e., the midpoint position of the front edge of the arm is obtained.
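The parallel-line construction at distance b on the side opposite the palm centroid can be sketched as follows. The sign convention, the mean-coordinate midpoint, and all names are assumptions; lines are given as (A, B, C) for A·x + B·y + C = 0:

```python
import numpy as np

def parallel_at_distance(line, b, away_from):
    """Line parallel to `line`, at distance b from it, on the opposite side
    of `line` from the point `away_from` (the palm centroid C)."""
    a, bb, c = line
    norm = float(np.hypot(a, bb))
    side = np.sign(a * away_from[0] + bb * away_from[1] + c)  # which side C is on
    return (a, bb, c + side * b * norm)  # shift away from C's side

def midpoint_of(points):
    """Midpoint Mpw of an edge pixel set, taken here as its mean coordinate."""
    pts = np.asarray(points, dtype=float)
    return tuple(pts.mean(axis=0))

# LTcw is the vertical line x = 4 with coefficients (1, 0, -4); with the
# palm centroid at the origin, the parallel line at distance 2 is x = 6.
PLTcw = parallel_at_distance((1.0, 0.0, -4.0), 2.0, (0.0, 0.0))
Mpw = midpoint_of([(6.0, 0.0), (6.0, 2.0), (6.0, 4.0)])
```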
S405, determining a pixel row where the central axis of the arm is located according to the middle point position of the bottom edge of the palm and the middle point position of the front edge of the arm;
specifically, the straight line equation LPcw: a (LPcw) x + b (LPcw) y + c (LPcw) 0 for the pixel row passing through the midpoint position Mw (xmw, ymw) of the bottom edge of the palm and the midpoint position Mpw (xmpw, ympw) of the front edge of the arm is calculated, i.e., the pixel row where the central axis of the arm is located.
S406, determining a boundary position between a palm area and an arm area according to the middle point position of the bottom edge of the palm and the pixel row where the central axis of the arm is located;
specifically, a (LPTcw) x + b (LPTcw) + c (LPTcw) 0 is calculated as a linear equation LPTcw of a pixel row passing through the middle point position Mw (xmw, ymw) of the palm bottom edge and perpendicular to the pixel row LPcw in which the central axis of the arm is located, and the position of the pixel row LPTcw is defined as the boundary position between the palm region and the arm region.
And S407, segmenting a palm image area from the upper limb depth image based on the boundary position between the palm area and the arm area.
Specifically, after the boundary position LPTcw between the palm region and the arm region is determined, the part of the effective upper limb region determined in step S401 that lies on the same side of the boundary LPTcw as the palm region centroid position C is the palm image region; the remaining image region in the effective upper limb region is the arm image region.
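The final split reduces to a side-of-line test against the boundary LPTcw; a sketch under the same (A, B, C) line convention, with all names and coordinates assumed:

```python
import numpy as np

def split_at_boundary(points, boundary, palm_centroid):
    """Classify upper-limb pixels against the boundary line (A, B, C):
    pixels on the same side as the palm centroid form the palm image
    region, the remaining pixels the arm image region."""
    a, b, c = boundary
    pts = np.asarray(points, dtype=float)
    side = np.sign(a * pts[:, 0] + b * pts[:, 1] + c)
    palm_side = np.sign(a * palm_centroid[0] + b * palm_centroid[1] + c)
    return pts[side == palm_side], pts[side != palm_side]

# Boundary x = 4 given as (1, 0, -4), palm centroid at the origin:
palm_px, arm_px = split_at_boundary(
    [(1, 1), (2, 2), (6, 1), (7, 3)], (1.0, 0.0, -4.0), (0.0, 0.0))
```

With this boundary, the two pixels left of x = 4 join the palm image region and the two to the right join the arm image region.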
The specific processing contents of steps S401, S404, and S407 in this embodiment may also refer to steps S301 to S305 in the method embodiment shown in fig. 3, and steps S102 and S104 in the method embodiment shown in fig. 1, respectively, which are not described herein again.
In the depth image object segmentation method proposed in the embodiment of the present application, in order to facilitate the representation of the image positions where some pixel rows are located, the pixel rows are expressed in the form of linear equations, and the coordinates, lengths, and the like of each linear equation in the embodiment of the present application are calculated based on the image pixel coordinates. It is understood that each of the linear equations in the above embodiments of the present application is a representation of a corresponding pixel row in the upper limb depth image.
Corresponding to the above depth image object segmentation method, an embodiment of the present application further provides a depth image object segmentation apparatus, as shown in fig. 5, the apparatus includes:
a position determining unit 100, configured to determine a palm region centroid position and an arm region centroid position in the upper limb depth image according to a palm center position in the upper limb depth image; the upper limb depth image comprises an image obtained by performing depth imaging on the palm and the arm of the same upper limb;
a first calculating unit 110, configured to determine a palm bottom edge according to the palm region centroid position and the arm region centroid position;
a second computing unit 120, configured to determine, according to the palm bottom edge, an arm front end edge, and determine a midpoint position of the arm front end edge;
a third calculating unit 130, configured to determine a boundary position between the palm region and the arm region according to the midpoint position of the bottom edge of the palm and the midpoint position of the front edge of the arm;
an image segmentation unit 140, configured to segment a palm image region from the upper limb depth image based on a boundary position between the palm region and an arm region.
As an exemplary implementation manner, when the position determining unit 100 determines the palm area centroid position and the arm area centroid position in the upper limb depth image according to the palm center position in the upper limb depth image, it is specifically configured to:
and taking the palm center position in the upper limb depth image as a seed point, and determining the palm area centroid position and the arm area centroid position by performing region growing processing in the upper limb depth image.
As an exemplary implementation manner, when the position determination unit 100 determines the palm area centroid position and the arm area centroid position by performing the region growing process in the upper limb depth image with the palm center position in the upper limb depth image as the seed point, it is specifically configured to:
determining a palm region and an arm region in an upper limb depth image by taking a palm center position in the upper limb depth image as a seed point and performing unconditional region growing processing in the upper limb depth image and region growing processing within a set distance range from the seed point; the set distance is determined according to the camera focal length of the upper limb depth image obtained by shooting and the depth value of the palm center in the upper limb depth image;
and respectively calculating the mass centers of the palm area and the arm area, and determining the mass center position of the palm area and the mass center position of the arm area.
As an exemplary implementation manner, when the position determination unit 100 determines the palm region and the arm region in the upper limb depth image by performing unconditional region growing processing in the upper limb depth image and region growing processing within a set distance range from the seed point with the palm center position in the upper limb depth image as the seed point, it is specifically configured to:
taking the palm center position in the upper limb depth image as a seed point, and performing unconditional region growing processing in the upper limb depth image to obtain an upper limb mask image;
determining an effective upper limb area in the upper limb depth image according to the upper limb mask image and the upper limb depth image;
taking the palm center position in the upper limb depth image as a seed point, and performing region growing processing within a set distance range from the seed point in the upper limb depth image to obtain a palm region mask image;
and determining a palm area and an arm area in the upper limb depth image according to the palm area mask image and the effective upper limb area.
As an exemplary implementation manner, when the first computing unit 110 determines the palm bottom edge according to the palm area centroid position and the arm area centroid position, it is specifically configured to:
determining the position of the middle point of the wrist according to the position of the mass center of the palm area and the position of the mass center of the arm area;
and determining the bottom edge of the palm according to the centroid position of the palm area and the middle point position of the wrist.
As an exemplary implementation manner, when determining the boundary position between the palm region and the arm region according to the midpoint position of the palm bottom edge and the midpoint position of the arm front edge, the third computing unit 130 is specifically configured to:
determining a pixel row where the central axis of the arm is located according to the position of the middle point of the bottom edge of the palm and the position of the middle point of the front edge of the arm;
and determining the boundary position between the palm area and the arm area according to the middle point position of the bottom edge of the palm and the pixel row where the central axis of the arm is located.
Specifically, please refer to the contents of the method embodiments for the specific working contents of each unit of the depth image target segmentation apparatus, which is not described herein again.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present application is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The steps in the method of the embodiments of the present application may be sequentially adjusted, combined, and deleted according to actual needs.
The modules and sub-modules in the device and the terminal in the embodiments of the application can be combined, divided and deleted according to actual needs.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal, apparatus and method may be implemented in other manners. For example, the above-described terminal embodiments are merely illustrative, and for example, the division of a module or a sub-module is only one logical division, and there may be other divisions when the terminal is actually implemented, for example, a plurality of sub-modules or modules may be combined or integrated into another module, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules or sub-modules described as separate parts may or may not be physically separate, and parts that are modules or sub-modules may or may not be physical modules or sub-modules, may be located in one place, or may be distributed over a plurality of network modules or sub-modules. Some or all of the modules or sub-modules can be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, each functional module or sub-module in the embodiments of the present application may be integrated into one processing module, or each module or sub-module may exist alone physically, or two or more modules or sub-modules may be integrated into one module. The integrated modules or sub-modules may be implemented in the form of hardware, or may be implemented in the form of software functional modules or sub-modules.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A depth image object segmentation method is characterized by comprising the following steps:
determining a palm area mass center position and an arm area mass center position in the upper limb depth image according to the palm center position in the upper limb depth image; the upper limb depth image comprises an image obtained by performing depth imaging on the palm and the arm of the same upper limb;
determining the bottom edge of the palm according to the centroid position of the palm area and the centroid position of the arm area;
determining the edge of the front end of an arm according to the bottom edge of the palm, and determining the midpoint position of the edge of the front end of the arm;
determining a boundary position between a palm region and an arm region according to the midpoint position of the bottom edge of the palm and the midpoint position of the front edge of the arm;
and segmenting a palm image area from the upper limb depth image based on the boundary position between the palm area and the arm area.
2. The method of claim 1, wherein determining the palm region centroid position and the arm region centroid position in the upper limb depth image from the palm center position in the upper limb depth image comprises:
and taking the palm center position in the upper limb depth image as a seed point, and determining the palm area centroid position and the arm area centroid position by performing region growing processing in the upper limb depth image.
3. The method according to claim 2, wherein the determining the palm region centroid position and the arm region centroid position by performing region growing processing in the upper limb depth image with the palm center position in the upper limb depth image as a seed point comprises:
determining a palm region and an arm region in an upper limb depth image by taking a palm center position in the upper limb depth image as a seed point and performing unconditional region growing processing in the upper limb depth image and region growing processing within a set distance range from the seed point; the set distance is determined according to the camera focal length of the upper limb depth image obtained by shooting and the depth value of the palm center in the upper limb depth image;
and respectively calculating the mass centers of the palm area and the arm area, and determining the mass center position of the palm area and the mass center position of the arm area.
4. The method according to claim 3, wherein the determining the palm region and the arm region in the upper limb depth image by performing unconditional region growing processing in the upper limb depth image and region growing processing within a set distance range from the seed point with the palm center position in the upper limb depth image as a seed point comprises:
taking the palm center position in the upper limb depth image as a seed point, and performing unconditional region growing processing in the upper limb depth image to obtain an upper limb mask image;
determining an effective upper limb area in the upper limb depth image according to the upper limb mask image and the upper limb depth image;
taking the palm center position in the upper limb depth image as a seed point, and performing region growing processing within a set distance range from the seed point in the upper limb depth image to obtain a palm region mask image;
and determining a palm area and an arm area in the upper limb depth image according to the palm area mask image and the effective upper limb area.
5. The method of claim 1, wherein determining the palm bottom edge from the palm region centroid position and the arm region centroid position comprises:
determining the position of the middle point of the wrist according to the position of the mass center of the palm area and the position of the mass center of the arm area;
and determining the bottom edge of the palm according to the centroid position of the palm area and the middle point position of the wrist.
6. The method of claim 1, wherein determining a boundary position between a palm region and an arm region based on the position of the midpoint of the palm bottom edge and the position of the midpoint of the arm front edge comprises:
determining a pixel row where the central axis of the arm is located according to the position of the middle point of the bottom edge of the palm and the position of the middle point of the front edge of the arm;
and determining the boundary position between the palm area and the arm area according to the middle point position of the bottom edge of the palm and the pixel row where the central axis of the arm is located.
7. A depth image object segmentation apparatus, comprising:
the position determining unit is used for determining the palm area centroid position and the arm area centroid position in the upper limb depth image according to the palm center position in the upper limb depth image; the upper limb depth image comprises an image obtained by performing depth imaging on the palm and the arm of the same upper limb;
the first calculation unit is used for determining the palm bottom edge according to the palm area centroid position and the arm area centroid position;
the second computing unit is used for determining the edge of the front end of the arm according to the bottom edge of the palm and determining the midpoint position of the edge of the front end of the arm;
the third calculation unit is used for determining the boundary position between the palm region and the arm region according to the midpoint position of the bottom edge of the palm and the midpoint position of the front edge of the arm;
and the image segmentation unit is used for segmenting a palm image area from the upper limb depth image based on the boundary position between the palm area and the arm area.
8. The apparatus according to claim 7, wherein the position determining unit is specifically configured to, when determining the palm region centroid position and the arm region centroid position in the upper limb depth image according to the palm center position in the upper limb depth image:
and taking the palm center position in the upper limb depth image as a seed point, and determining the palm area centroid position and the arm area centroid position by performing region growing processing in the upper limb depth image.
9. The apparatus according to claim 8, wherein the position determination unit is configured to determine the palm region centroid position and the arm region centroid position by performing region growing processing in the upper limb depth image with the palm center position in the upper limb depth image as a seed point, and is specifically configured to:
determining a palm region and an arm region in an upper limb depth image by taking a palm center position in the upper limb depth image as a seed point and performing unconditional region growing processing in the upper limb depth image and region growing processing within a set distance range from the seed point; the set distance is determined according to the camera focal length of the upper limb depth image obtained by shooting and the depth value of the palm center in the upper limb depth image;
and respectively calculating the mass centers of the palm area and the arm area, and determining the mass center position of the palm area and the mass center position of the arm area.
10. The apparatus according to claim 9, wherein the position determination unit is configured to, when determining the palm region and the arm region in the upper limb depth image by performing unconditional region growing processing in the upper limb depth image and region growing processing within a set distance range from the seed point with the palm center position in the upper limb depth image as the seed point, specifically:
taking the palm center position in the upper limb depth image as a seed point, and performing unconditional region growing processing in the upper limb depth image to obtain an upper limb mask image;
determining an effective upper limb area in the upper limb depth image according to the upper limb mask image and the upper limb depth image;
taking the palm center position in the upper limb depth image as a seed point, and performing region growing processing within a set distance range from the seed point in the upper limb depth image to obtain a palm region mask image;
and determining a palm area and an arm area in the upper limb depth image according to the palm area mask image and the effective upper limb area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911173140.8A CN111144212B (en) | 2019-11-26 | 2019-11-26 | Depth image target segmentation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111144212A true CN111144212A (en) | 2020-05-12 |
CN111144212B CN111144212B (en) | 2023-06-23 |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100034457A1 (en) * | 2006-05-11 | 2010-02-11 | Tamir Berliner | Modeling of humanoid forms from depth maps |
US20100197400A1 (en) * | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Visual target tracking |
CN103226387A (en) * | 2013-04-07 | 2013-07-31 | 华南理工大学 | Video fingertip positioning method based on Kinect |
US9256780B1 (en) * | 2014-09-22 | 2016-02-09 | Intel Corporation | Facilitating dynamic computations for performing intelligent body segmentations for enhanced gesture recognition on computing devices |
CN107341811A (en) * | 2017-06-20 | 2017-11-10 | 上海数迹智能科技有限公司 | The method that hand region segmentation is carried out using MeanShift algorithms based on depth image |
US20180047175A1 (en) * | 2016-08-12 | 2018-02-15 | Nanjing Huajie Imi Technology Co., Ltd | Method for implementing human skeleton tracking system based on depth data |
CN108564063A (en) * | 2018-04-27 | 2018-09-21 | 北京华捷艾米科技有限公司 | Centre of the palm localization method based on depth information and system |
CN109190516A (en) * | 2018-08-14 | 2019-01-11 | 东北大学 | A kind of static gesture identification method based on volar edge contour vectorization |
CN109948461A (en) * | 2019-02-27 | 2019-06-28 | 浙江理工大学 | A kind of sign language image partition method based on center coordination and range conversion |
Non-Patent Citations (2)
Title |
---|
CUI JIALI; XIE WEI; WANG YIDING; JIA RUIMING: "Static hand gesture digit recognition based on shape features" *
ZHANG LI: "Research on dynamic hand gesture recognition in complex backgrounds" *
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |