CN107341811B - Method for segmenting hand region by using MeanShift algorithm based on depth image - Google Patents

Method for segmenting hand region by using MeanShift algorithm based on depth image

Info

Publication number
CN107341811B
Authority
CN
China
Prior art keywords
area
polygon
hand
region
hand region
Prior art date
Legal status
Active
Application number
CN201710471608.6A
Other languages
Chinese (zh)
Other versions
CN107341811A (en)
Inventor
邹耀
应忍冬
金柯
马燕辉
鄢青山
Current Assignee
Shanghai Data Miracle Intelligent Technology Co ltd
Original Assignee
Shanghai Data Miracle Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Data Miracle Intelligent Technology Co ltd
Priority to CN201710471608.6A
Publication of CN107341811A
Application granted
Publication of CN107341811B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/13 Edge detection
    • G06T7/136 Segmentation involving thresholding
    • G06T7/155 Segmentation involving morphological operators
    • G06T7/187 Segmentation involving region growing, region merging or connected component labelling
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06T2207/10028 Range image; depth image; 3D point clouds
    • G06T2207/30196 Human being; person

Abstract

The invention discloses a method for segmenting a hand region from a depth image using the MeanShift algorithm, comprising the following steps: 1. read a depth image; 2. preprocess the depth image and preliminarily extract a hand region that may contain redundant contours; 3. select an initial iteration point within the preliminarily extracted hand region and compute an iteration radius; 4. iterate from the initial iteration point and radius with the MeanShift algorithm to obtain the circular region closest to the palm region; 5. remove the redundant contours from the preliminarily extracted hand region according to that circular region and update it to obtain an accurate hand region contour. Segmenting the hand region with the MeanShift algorithm effectively removes redundant contour information such as the arm, provides a more accurate input data source for the subsequent steps of feature extraction and classification learning, and improves the stability and accuracy of the final gesture recognition and interaction system.

Description

Method for segmenting hand region by using MeanShift algorithm based on depth image
Technical Field
The invention relates to the technical field of computer pattern recognition and computer vision, and in particular to a method for segmenting a hand region from a depth image using the MeanShift algorithm.
Background
Gesture interaction is an important mode of interaction in novel human-computer interaction research. It is contact-free and natural, and better matches people's natural behavior, so gesture-based interaction is a trend in the future development of human-computer interaction. Gesture recognition technology involves many disciplines, such as artificial intelligence, pattern recognition, machine learning and computer graphics; the study of gestures further touches on mathematics, robot kinematics and medicine. Research on gesture recognition therefore has great research value and significance. Current research on gesture interaction mainly focuses on processing RGB optical images and comprises three parts: hand detection, target tracking and gesture recognition.
Gesture detection detects the gestures that acquire control, in two modes: static and dynamic. Static gestures are detected with object detection methods based on region features, such as Haar features, HOG features, skin color features and shape features; dynamic gestures are detected mainly with motion detection algorithms, which recognize certain predefined gestures from the features of motion regions. Gesture detection research is by now fairly mature, but it remains sensitive to illumination, background and similar factors.
For example, Chinese patent application No. 201510282688.1 discloses a method for detecting hand feature points based on depth maps, comprising the following steps: (1) hand segmentation: acquire a human motion video sequence with a Kinect to extract the hand, obtain hand position information from the depth map via OpenNI, and preliminarily locate the hand center by setting a search area and a depth threshold; obtain the hand contour with the findContours function of OpenCV; determine the precise hand center point by finding the center of the maximum inscribed circle of the hand contour, computing for each interior point the shortest distance m to the contour and taking the interior point with the maximum such distance M as the hand center point, with inscribed circle radius R = M; (2) feature point extraction: repeatedly Gaussian-smooth the hand contour and, combined with a curvature threshold, obtain a CSS curvature map; obtain the coordinates of fingertip and finger valley points from the CSS contour analysis limits, and complete the finger valley points that cannot be obtained from the CSS curvature map; (3) missing finger completion: combine an angle threshold with depth jumps to fill in missing fingers and thereby find the fingertip points of bent fingers.
However, the hand contour obtained by setting a search area and a depth threshold may carry contour information from the arm or other obstacles, and this redundant contour information interferes with subsequent steps such as feature extraction and classification learning, making the final gesture recognition and interaction system unstable. The applicant therefore carried out research and experiments to solve the above problems, in the course of which the technical solution described below was created.
Disclosure of Invention
The technical problem to be solved by the invention is: in view of the defects of the prior art, to provide a method for segmenting the hand region from a depth image using the MeanShift algorithm, which guarantees that the finally obtained hand region starts from the wrist, removes redundant contour information caused by the arm and other obstacles, and ensures the stability of the gesture recognition and interaction system.
The technical problem solved by the invention can be realized by adopting the following technical scheme:
a method for segmenting a hand region by using a MeanShift algorithm based on a depth image comprises the following steps:
step S10, reading a depth image;
step S20, preprocessing the depth image and preliminarily extracting a hand area containing redundant outlines;
step S30, selecting an initial iteration point from the hand area obtained by the preliminary extraction and calculating an iteration radius;
step S40, performing iterative operation on the initial iteration point and the iteration radius by using a MeanShift algorithm to obtain a circular area closest to the palm area;
step S50, removing the redundant contours from the preliminarily extracted hand region according to the acquired circular region closest to the palm region, and updating it to obtain an accurate hand region contour.
In a preferred embodiment of the present invention, in the step S20, the preprocessing the depth image refers to performing depth cutting, graphical filtering and calculating a maximum connected region on the depth image by using a depth image preprocessing module.
In a preferred embodiment of the present invention, the depth image preprocessing module for performing depth cutting and graphical filtering on the depth image and calculating the maximum connected region includes the following steps:
step S21, performing depth cutting on the depth image by using a depth image preprocessing module, extracting a hand region containing redundant contours according to a depth threshold value, and mapping the extracted hand region into a binary image, wherein the hand region is white and the background region is black;
step S22, applying morphological operations to the binary image: first an opening operation, smoothing the contour of the binary image and removing background noise, then a closing operation, filling small holes in the binary image;
step S23, finding the contour of largest area in the binary image after the morphological operations, treating it as the hand region contour (possibly including redundant contours), and filling the holes inside it.
In a preferred embodiment of the present invention, in step S30, the selecting an initial iteration point and calculating an iteration radius in the hand region obtained by the preliminary extraction includes the following steps:
step S31, representing the preliminarily extracted hand region as a polygon, and repairing polygons that contain inner rings (holes);
step S32, calculating the minimum bounding rectangle of the polygon, comparing the minimum bounding rectangle with the image boundary, and performing the following classification and discussion according to the number of overlapped edges:
(1) if the number of overlapping edges is 3 or more, the hand is too close to the camera and the image cannot show the complete hand region, so the algorithm ends;
(2) if the number of overlapping edges is 2 and the two overlapping edges are parallel, the hand crosses the frame horizontally or vertically, the image cannot show the complete hand region, and the algorithm ends;
(3) if the number of overlapping edges is 0, no arm contour intersects the image boundary and there is no redundant contour; the centroid of the hand region is returned as the initial iteration point, and the initial iteration radius is chosen from an empirical palm size;
(4) if the number of overlapping edges is 1, or it is 2 and the two overlapping edges are adjacent, proceed to step S33;
step S33, among the four vertices of the minimum bounding rectangle of the polygon, finding the vertex nearest to the polygon, and computing its projection point on the polygon, i.e. the polygon point nearest to that vertex, ensuring the projection point is valid and does not intersect the image boundary;
step S34, taking the midpoint of the line connecting the projection point and the centroid of the polygon as the initial iteration point and half the length of that line as the initial iteration radius; if the initial iteration point lies outside the polygon, taking its projection point on the polygon as the new initial iteration point.
In a preferred embodiment of the present invention, in the step S40, the iterative operation of the initial iteration point and the iteration radius by using the MeanShift algorithm includes the following sub-steps:
step S41, obtaining an initial circular area according to the initial iteration point and the initial iteration radius;
step S42, finding the intersection region of the current circular region and the hand region polygon, and calculating the centroid of the intersection region;
step S43, comparing the position of the centroid of the intersection region with the center of the circle: if the distance between them exceeds the iteration threshold of the MeanShift algorithm, proceeding to step S44; if the distance is within the iteration threshold, proceeding to step S45;
step S44, moving the center of the current circular region to the centroid of the intersection region, setting the radius to the minimum distance from that centroid to the boundary of the hand region polygon, and returning to step S42;
step S45, if the ratio of the intersection area to the circle area exceeds 1.1 times the effective-area pixel threshold, increasing the circle radius and returning to step S42; if the ratio is below 0.9 times the effective-area pixel threshold, decreasing the circle radius and returning to step S42; otherwise, ending the iteration and outputting the circle center c and radius r of the final circular region.
In a preferred embodiment of the present invention, in the step S50, the updating to obtain the precise hand region contour includes the following steps:
step S51, according to how the iterated circular region intersects the hand region polygon, dividing the hand region polygon into an intersecting region I and a non-intersecting region P, where P consists of several independent polygons p;
step S52, for each independent polygon p in the non-intersecting region P, computing the length of the line segment where p coincides with the image boundary; if that length exceeds the coincidence threshold, cutting the corresponding part of p out of the original hand region polygon and proceeding to step S54; if no independent polygon p has a boundary-coincident segment longer than the threshold, proceeding to step S53;
step S53, for each independent polygon p in the non-intersecting region P, computing the centroid of p; starting from the center c of the iterated circular region, extending the line cp to the image boundary and computing the length of the segment where this extension coincides with p; if that length exceeds 0.4 × |cp|, cutting the corresponding part of p out of the original hand region polygon;
step S54, determining whether the resulting hand region polygon still consists of several independent parts; if so, keeping the part with the largest area as the final hand region polygon, and returning the outer contour of the final hand region polygon.
Owing to the above technical scheme, the invention has the following beneficial effect: compared with traditional hand region segmentation algorithms, segmenting the hand region with the MeanShift algorithm effectively removes redundant contour information such as the arm, provides a more accurate input data source for the subsequent steps of feature extraction and classification learning, and improves the stability and accuracy of the final gesture recognition and interaction system.
Drawings
To illustrate the embodiments of the invention and the prior-art solutions more clearly, the drawings used in their description are briefly introduced below. The drawings show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a general algorithmic flow diagram of the present invention.
FIG. 2 is a flow diagram of depth image pre-processing of the present invention.
FIG. 3 is a block flow diagram of the present invention for calculating an initial iteration point and an initial iteration radius.
Fig. 4 is a block diagram of a flow of iteratively finding an optimal palm region according to the MeanShift algorithm of the present invention.
FIG. 5 is a block flow diagram of the hand region contour update of the present invention.
Detailed Description
To make the technical means, creative features, objectives and effects of the invention easy to understand, the invention is further explained below with reference to the drawings.
Referring to fig. 1, a method for performing hand region segmentation by using a MeanShift algorithm based on a depth image according to the present invention is shown, and includes the following steps:
step S10, reading a depth image;
step S20, preprocessing the depth image and preliminarily extracting a hand area containing redundant outlines;
step S30, selecting an initial iteration point from the hand area obtained by the preliminary extraction and calculating an iteration radius;
step S40, performing iterative operation on the initial iteration point and the iteration radius by using a MeanShift algorithm to obtain a circular area closest to the palm area;
step S50, removing the redundant contours from the preliminarily extracted hand region according to the acquired circular region closest to the palm region, and updating it to obtain an accurate hand region contour.
In step S20, the preprocessing the depth image means performing depth cutting and graphical filtering on the depth image by using a depth image preprocessing module, and calculating a maximum connected region. Specifically, referring to fig. 2, the preprocessing of the depth image includes the steps of:
step S21, performing depth cutting on the depth image by using a depth image preprocessing module, extracting a hand region containing redundant contours according to a depth threshold value, and mapping the extracted hand region into a binary image, wherein the hand region is white and the background region is black;
step S22, applying morphological operations to the binary image: first an opening operation, smoothing the contour of the binary image and removing background noise, then a closing operation, filling small holes in the binary image;
step S23, finding the contour of largest area in the binary image after the morphological operations, treating it as the hand region contour (possibly including redundant contours), and filling the holes inside it.
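By way of illustration, the depth cut and maximum-connected-region parts of steps S21 to S23 can be sketched in Python on a raw depth array. This is a minimal sketch under stated assumptions — the depth bounds `near`/`far` are invented example values, connectivity is 4-neighbour, and the opening/closing and hole-filling operations of steps S22 and S23 are omitted — not the patented implementation:

```python
from collections import deque
import numpy as np

def preprocess(depth, near=300, far=600):
    """Binarize a depth image and keep only the largest connected blob.

    `near`/`far` are assumed depth bounds in millimetres; the patent's
    morphological opening/closing and hole filling are omitted here.
    """
    mask = (depth >= near) & (depth <= far)          # depth cut -> binary image
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = None                                      # largest component so far
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])      # BFS flood fill
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if best is None or len(comp) > len(best):
                    best = comp
    out = np.zeros_like(mask)
    if best:
        ys, xs = zip(*best)
        out[list(ys), list(xs)] = True
    return out
```

In a real pipeline the BFS would be replaced by a library connected-component routine; the point here is only the order of operations: threshold first, then keep the single largest region as the candidate hand.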
In step S30, referring to fig. 3, selecting an initial iteration point from the preliminarily extracted hand region and calculating an iteration radius includes the following steps:
step S31, representing the preliminarily extracted hand region as a polygon, and repairing polygons that contain inner rings (holes);
step S32, calculating the minimum bounding rectangle of the polygon, comparing the minimum bounding rectangle with the image boundary, and performing the following classification and discussion according to the number of overlapped edges:
(1) if the number of overlapping edges is 3 or more, the hand is too close to the camera and the image cannot show the complete hand region, so the algorithm ends;
(2) if the number of overlapping edges is 2 and the two overlapping edges are parallel, the hand crosses the frame horizontally or vertically, the image cannot show the complete hand region, and the algorithm ends;
(3) if the number of overlapping edges is 0, no arm contour intersects the image boundary and there is no redundant contour; the centroid of the hand region is returned as the initial iteration point, and the initial iteration radius is chosen from an empirical palm size;
(4) if the number of overlapping edges is 1, or it is 2 and the two overlapping edges are adjacent, proceed to step S33;
step S33, among the four vertices of the minimum bounding rectangle of the polygon, finding the vertex nearest to the polygon, and computing its projection point on the polygon, i.e. the polygon point nearest to that vertex, ensuring the projection point is valid and does not intersect the image boundary;
step S34, taking the midpoint of the line connecting the projection point and the centroid of the polygon as the initial iteration point and half the length of that line as the initial iteration radius; if the initial iteration point lies outside the polygon, taking its projection point on the polygon as the new initial iteration point.
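A hedged sketch of this initial-point selection, operating on a binary mask rather than the patent's polygon representation: the axis-aligned bounding box stands in for the minimum bounding rectangle, `PALM_RADIUS` is an invented empirical value, and the validity check of step S33 is skipped, so this only illustrates the case analysis:

```python
import numpy as np

PALM_RADIUS = 40.0  # assumed empirical palm radius in pixels

def initial_point_and_radius(mask):
    """Return (seed point, radius) for the MeanShift iteration, or None
    when no complete hand is visible (cases (1)-(2) of step S32)."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    top, bottom, left, right = ys.min(), ys.max(), xs.min(), xs.max()
    # which bounding-box sides lie on the image boundary
    overlaps = [top == 0, bottom == h - 1, left == 0, right == w - 1]
    n = sum(overlaps)
    centroid = np.array([ys.mean(), xs.mean()])
    if n >= 3 or (n == 2 and ((overlaps[0] and overlaps[1])
                              or (overlaps[2] and overlaps[3]))):
        return None                      # hand too close / spans the frame
    if n == 0:
        return centroid, PALM_RADIUS     # case (3): no arm crosses the border
    # case (4): bounding-box corner nearest the region, projected onto it
    corners = np.array([[top, left], [top, right],
                        [bottom, left], [bottom, right]], dtype=float)
    pts = np.stack([ys, xs], axis=1).astype(float)
    d = np.linalg.norm(pts[None, :, :] - corners[:, None, :], axis=2)
    ci, pi = np.unravel_index(d.argmin(), d.shape)
    proj = pts[pi]                       # nearest region point to that corner
    seed = (proj + centroid) / 2.0       # step S34: midpoint of proj-centroid
    radius = np.linalg.norm(proj - centroid) / 2.0
    if not mask[int(round(seed[0])), int(round(seed[1]))]:
        # seed fell outside the region: snap it to the nearest region pixel
        seed = pts[np.linalg.norm(pts - seed, axis=1).argmin()]
    return seed, radius
```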
In step S40, referring to fig. 4, the initial iteration point and the iteration radius are iteratively operated by using the MeanShift algorithm, which includes the following sub-steps:
step S41, obtaining an initial circular area according to the initial iteration point and the initial iteration radius;
step S42, finding the intersection region of the current circular region and the hand region polygon, and calculating the centroid of the intersection region;
step S43, comparing the position of the centroid of the intersection region with the center of the circle: if the distance between them exceeds the iteration threshold of the MeanShift algorithm, proceeding to step S44; if the distance is within the iteration threshold, proceeding to step S45;
step S44, moving the center of the current circular region to the centroid of the intersection region, setting the radius to the minimum distance from that centroid to the boundary of the hand region polygon, and returning to step S42;
step S45, if the ratio of the intersection area to the circle area exceeds 1.1 times the effective-area pixel threshold, increasing the circle radius and returning to step S42; if the ratio is below 0.9 times the effective-area pixel threshold, decreasing the circle radius and returning to step S42; otherwise, ending the iteration and outputting the circle center c and radius r of the final circular region.
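The MeanShift loop of step S40 can be sketched on a binary hand mask instead of the patent's polygon representation. The convergence threshold `EPS`, the fill-ratio bounds `LO`/`HI`, and the ×1.1/×0.9 radius updates are assumptions (the "1.1 / 0.9 times the effective-area pixel threshold" wording does not fully survive translation), and the radius reset of step S44 is only noted in a comment, so treat this as an illustrative sketch:

```python
import numpy as np

EPS = 1.0          # convergence threshold on the centre shift (assumed)
LO, HI = 0.7, 0.95 # fill-ratio bounds driving the radius update (assumed)

def palm_disk(mask, center, radius, max_iter=100):
    """Iterate a circular window over a binary hand mask, MeanShift style."""
    yy, xx = np.indices(mask.shape)
    c = np.asarray(center, dtype=float)
    r = float(radius)
    for _ in range(max_iter):
        disk = (yy - c[0]) ** 2 + (xx - c[1]) ** 2 <= r * r   # step S41
        inter = mask & disk                                   # step S42
        if not inter.any():
            break
        centroid = np.array([yy[inter].mean(), xx[inter].mean()])
        if np.linalg.norm(centroid - c) > EPS:                # steps S43-S44
            c = centroid   # (the patent also resets r to the distance from
            continue       #  the centroid to the region boundary here)
        fill = inter.sum() / disk.sum()                       # step S45
        if fill > HI:
            r *= 1.1       # circle sits well inside the hand: grow it
        elif fill < LO:
            r *= 0.9       # circle sticks far outside the hand: shrink it
        else:
            break
    return c, r
```

The fixed-point of this loop is a disk whose centre coincides with the centroid of its overlap with the hand mask and whose fill ratio lies between `LO` and `HI` — i.e. a disk hugging the palm.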
In step S50, referring to fig. 5, the method for updating the precise hand region contour includes the following steps:
step S51, according to how the iterated circular region intersects the hand region polygon, dividing the hand region polygon into an intersecting region I and a non-intersecting region P, where P consists of several independent polygons p;
step S52, for each independent polygon p in the non-intersecting region P, computing the length of the line segment where p coincides with the image boundary; if that length exceeds the coincidence threshold, cutting the corresponding part of p out of the original hand region polygon and proceeding to step S54; if no independent polygon p has a boundary-coincident segment longer than the threshold, proceeding to step S53;
step S53, for each independent polygon p in the non-intersecting region P, computing the centroid of p; starting from the center c of the iterated circular region, extending the line cp to the image boundary and computing the length of the segment where this extension coincides with p; if that length exceeds 0.4 × |cp|, cutting the corresponding part of p out of the original hand region polygon;
step S54, determining whether the resulting hand region polygon still consists of several independent parts; if so, keeping the part with the largest area as the final hand region polygon, and returning the outer contour of the final hand region polygon.
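A hedged sketch of this contour-update stage on a binary mask: connected components of the hand mask lying outside the palm disk are discarded when they touch the image border over more than `BORDER_THRESH` pixels, a stand-in for the "coincident line segment" test of step S52. `BORDER_THRESH` is an invented value, and the 0.4 × |cp| ray test of step S53 and the largest-part selection of step S54 are omitted:

```python
from collections import deque
import numpy as np

BORDER_THRESH = 3  # assumed coincidence threshold, in border pixels

def update_region(mask, center, radius):
    """Remove arm-like pieces: components of mask outside the palm disk
    that coincide with the image border beyond BORDER_THRESH pixels."""
    h, w = mask.shape
    yy, xx = np.indices(mask.shape)
    disk = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius * radius
    rest = mask & ~disk                  # the non-intersecting region P
    out = mask.copy()
    seen = np.zeros_like(mask, dtype=bool)
    for sy, sx in zip(*np.nonzero(rest)):
        if seen[sy, sx]:
            continue
        comp, q = [], deque([(sy, sx)])  # BFS over one component p of P
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            comp.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and rest[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        border = sum(1 for y, x in comp if y in (0, h - 1) or x in (0, w - 1))
        if border > BORDER_THRESH:       # arm-like piece crossing the frame
            for y, x in comp:
                out[y, x] = False
    return out
```

The design intuition follows the patent: only pieces that both lie outside the palm circle and run along the image boundary can be the forearm, so everything else (fingers poking out of the circle) survives.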
The foregoing shows and describes the basic principles, principal features and advantages of the invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above, which merely illustrate its principle; various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (6)

1. A method for segmenting a hand region by using a MeanShift algorithm based on a depth image is characterized by comprising the following steps:
step S10, reading a depth image;
step S20, preprocessing the depth image and preliminarily extracting a hand area containing redundant outlines;
step S30, selecting an initial iteration point from the hand area obtained by the preliminary extraction and calculating an iteration radius;
step S40, performing iterative operation on the initial iteration point and the iteration radius by using a MeanShift algorithm to obtain a circular area closest to the palm area;
step S50, removing the redundant contours from the preliminarily extracted hand region according to the acquired circular region closest to the palm region, and updating it to obtain an accurate hand region contour.
2. The method for hand region segmentation using the MeanShift algorithm based on depth image as claimed in claim 1, wherein in step S20, the preprocessing the depth image means depth cutting, graphical filtering and maximum connected region calculation for the depth image by using a depth image preprocessing module.
3. The method for hand region segmentation by using the MeanShift algorithm based on depth image as claimed in claim 2, wherein the depth image preprocessing module is used for performing depth cutting, graphical filtering and maximum connected region calculation on the depth image, and comprises the following steps:
step S21, performing depth cutting on the depth image by using a depth image preprocessing module, extracting a hand region containing redundant contours according to a depth threshold value, and mapping the extracted hand region into a binary image, wherein the hand region is white and the background region is black;
step S22, applying morphological operations to the binary image: first an opening operation, smoothing the contour of the binary image and removing background noise, then a closing operation, filling small holes in the binary image;
step S23, finding the contour of largest area in the binary image after the morphological operations, treating it as the hand region contour (possibly including redundant contours), and filling the holes inside it.
4. The method for hand region segmentation using the MeanShift algorithm based on depth image as claimed in claim 3, wherein in step S30, the step of selecting an initial iteration point and calculating an iteration radius in the preliminarily extracted hand region comprises the following steps:
step S31, representing the preliminarily extracted hand region as a polygon, and repairing polygons that contain inner rings (holes);
step S32, calculating the minimum bounding rectangle of the polygon, comparing the minimum bounding rectangle with the image boundary, and performing the following classification and discussion according to the number of overlapped edges:
(1) if the number of the overlapped edges is larger than or equal to 3, the fact that the hand is too close to the lens is indicated, the image cannot display a complete hand area, and the algorithm is ended;
(2) if the number of the overlapped edges is 2 and the two overlapped edges are parallel edges, the fact that the hand transversely or longitudinally penetrates through the lens is indicated, the image cannot display a complete hand area, and the algorithm is ended;
(3) if the number of the overlapped edges is 0, the fact that no contour of the arm part is intersected with the image boundary and no redundant contour exists is indicated, the returned initial iteration point is the mass center of the hand area, and the initial iteration radius is selected according to the actual palm experience value;
(4) if the number of overlapping sides is 1 or the number of overlapping sides is 2 and the two overlapping sides are intersecting sides, then the process proceeds to step S33;
step S33, calculating the vertex nearest to the polygon in the four vertices of the minimum bounding rectangle of the polygon, and calculating the projection point of the vertex on the polygon, namely the point of the vertex nearest to the polygon, to ensure that the vertex is effective and does not intersect with the image boundary;
and step S34, taking the middle point of the connecting line of the projection point and the centroid of the polygon as an initial iteration point, taking half of the length of the connecting line as an initial iteration radius, and taking the projection point of the point on the polygon as a new initial iteration point if the initial iteration point is outside the polygon.
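The case analysis of step S32 can be sketched as below. This is an illustrative reading of the claim, not the patented code: the polygon is assumed to be a vertex list, the image a `w` x `h` grid, and "overlapping" is taken to mean the bounding rectangle touching an image border.

```python
def bounding_rect(poly):
    """Axis-aligned bounding rectangle of a polygon given as (x, y) vertices."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return min(xs), min(ys), max(xs), max(ys)

def overlapping_edges(poly, w, h):
    """Which image borders the polygon's bounding rectangle touches."""
    x0, y0, x1, y1 = bounding_rect(poly)
    edges = []
    if x0 <= 0:
        edges.append('left')
    if y0 <= 0:
        edges.append('top')
    if x1 >= w - 1:
        edges.append('right')
    if y1 >= h - 1:
        edges.append('bottom')
    return edges

def classify(poly, w, h):
    e = overlapping_edges(poly, w, h)
    if len(e) >= 3:
        return 'too_close'   # case (1): hand too close to the lens, abort
    if len(e) == 2 and set(e) in ({'left', 'right'}, {'top', 'bottom'}):
        return 'crosses'     # case (2): hand spans the image, abort
    if len(e) == 0:
        return 'centroid'    # case (3): use the centroid and an empirical radius
    return 'project'         # case (4): continue with steps S33-S34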
5. The method for hand region segmentation using the MeanShift algorithm based on a depth image as claimed in claim 4, wherein in step S40, iterating from the initial iteration point and iteration radius with the MeanShift algorithm comprises the following sub-steps:
step S41, obtaining an initial circular region from the initial iteration point and the initial iteration radius;
step S42, finding the intersection region of the current circular region and the hand region polygon, and calculating the centroid of the intersection region;
step S43, comparing the positions of the centroid of the intersection region and the circle center: if the distance between them exceeds the MeanShift iteration threshold, proceed to step S44; if the distance is within the MeanShift iteration threshold, proceed to step S45;
step S44, setting the center of the current circular region to the centroid of the intersection region and the radius to the minimum distance from that centroid to the hand region polygon boundary, and returning to step S42;
step S45, if the ratio of the intersection area to the circle area exceeds 1.1 times the effective-area pixel threshold, increasing the circle radius and returning to step S42; if the ratio is below 0.9 times the effective-area pixel threshold, decreasing the circle radius and returning to step S42; otherwise, ending the iteration and outputting the center c and radius r of the circular region at termination.
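The loop of steps S41-S45 can be sketched as a MeanShift-style iteration. This is a simplified stand-in for the claim: the hand region is represented as a set of pixels rather than a polygon, and the `target` fill ratio, `eps` convergence threshold, and the multiplicative radius updates are illustrative assumptions, not values from the patent.

```python
import math

def mean_shift_circle(region, start, radius, target=0.8, eps=0.5, max_iter=100):
    """region: iterable of (x, y) hand pixels; start: initial center; radius: initial r."""
    cx, cy = start
    r = radius
    for _ in range(max_iter):
        # Step S42: pixels of the region inside the current circle.
        inside = [(x, y) for (x, y) in region
                  if (x - cx) ** 2 + (y - cy) ** 2 <= r * r]
        if not inside:
            break
        mx = sum(x for x, _ in inside) / len(inside)
        my = sum(y for _, y in inside) / len(inside)
        # Step S43/S44: recentre on the centroid while the shift is large.
        if math.hypot(mx - cx, my - cy) > eps:
            cx, cy = mx, my
            continue
        # Step S45: compare the fill ratio against a 0.9-1.1 band.
        fill = len(inside) / (math.pi * r * r)
        if fill > 1.1 * target:
            r *= 1.1   # circle too small for the region: grow
        elif fill < 0.9 * target:
            r *= 0.9   # circle mostly empty: shrink
        else:
            return (cx, cy), r
    return (cx, cy), r
```

Run on a synthetic disk of hand pixels, the circle drifts to the disk center and settles on a radius that roughly covers it.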
6. The method for hand region segmentation using the MeanShift algorithm based on a depth image as claimed in claim 5, wherein in step S50, updating to obtain the precise hand region contour comprises the following steps:
step S51, according to how the iterated circular region intersects the hand region polygon, dividing the hand region polygon into an intersection region I and a disjoint region P, where the disjoint region P consists of several independent polygons p;
step S52, for each independent polygon p in the disjoint region P, calculating the length of the line segment where p coincides with the image boundary; if this length exceeds the coincident-segment threshold, cutting the independent polygon p out of the original hand region polygon and proceeding to step S54; if no independent polygon p has a coincident segment longer than the threshold, proceeding to step S53;
step S53, for each independent polygon p in the disjoint region P, calculating the centroid of p; taking the center c of the iterated circular region as the starting point, extending the line cp to the image boundary, and calculating the length of the segment where p coincides with the extension of cp; if this length exceeds 0.4 times the length of cp, cutting the independent polygon p out of the original hand region polygon;
step S54, determining whether the resulting hand region polygon consists of several independent polygon parts; if so, taking the part with the largest area as the final hand region polygon, and returning the final hand region polygon contour.
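The pruning logic of steps S51-S54 can be outlined schematically. This sketch abstracts away the geometry: each disjoint sub-region p is summarised by its area and its length of contact with the image border, the `border_threshold` value is a hypothetical parameter, and the ray test of step S53 is omitted for brevity.

```python
def prune_hand_region(subregions, border_threshold):
    """subregions: list of dicts {'area': float, 'border_overlap': float},
    one per independent polygon p in the disjoint region P.

    Step S52: drop sub-regions whose contact with the image border exceeds
    the threshold (these are arm/background parts entering from the edge).
    Step S54: of the parts that remain, keep the one with the largest area
    as the final hand region."""
    kept = [s for s in subregions if s['border_overlap'] <= border_threshold]
    if not kept:
        return None
    return max(kept, key=lambda s: s['area'])
```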
CN201710471608.6A 2017-06-20 2017-06-20 Method for segmenting hand region by using MeanShift algorithm based on depth image Active CN107341811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710471608.6A CN107341811B (en) 2017-06-20 2017-06-20 Method for segmenting hand region by using MeanShift algorithm based on depth image

Publications (2)

Publication Number Publication Date
CN107341811A CN107341811A (en) 2017-11-10
CN107341811B true CN107341811B (en) 2020-11-13

Family

ID=60220811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710471608.6A Active CN107341811B (en) 2017-06-20 2017-06-20 Method for segmenting hand region by using MeanShift algorithm based on depth image

Country Status (1)

Country Link
CN (1) CN107341811B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509837A (en) * 2018-01-29 2018-09-07 上海数迹智能科技有限公司 A kind of finger tip recognition methods with rotational invariance based on depth image
CN108563329B (en) * 2018-03-23 2021-04-27 上海数迹智能科技有限公司 Human body arm position parameter extraction algorithm based on depth map
CN108520264A (en) * 2018-03-23 2018-09-11 上海数迹智能科技有限公司 A kind of hand contour feature optimization method based on depth image
CN110007754B (en) * 2019-03-06 2020-08-28 清华大学 Real-time reconstruction method and device for hand-object interaction process
CN110163208B (en) * 2019-05-22 2021-06-29 长沙学院 Scene character detection method and system based on deep learning
CN110310336B (en) * 2019-06-10 2021-08-06 青岛小鸟看看科技有限公司 Touch projection system and image processing method
CN111127535B (en) * 2019-11-22 2023-06-20 北京华捷艾米科技有限公司 Method and device for processing hand depth image
CN111144212B (en) * 2019-11-26 2023-06-23 北京华捷艾米科技有限公司 Depth image target segmentation method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6363160B1 (en) * 1999-01-22 2002-03-26 Intel Corporation Interface using pattern recognition and tracking
CN102521567A (en) * 2011-11-29 2012-06-27 Tcl集团股份有限公司 Human-computer interaction fingertip detection method, device and television
CN103984928A (en) * 2014-05-20 2014-08-13 桂林电子科技大学 Finger gesture recognition method based on field depth image
CN104809430A (en) * 2015-04-02 2015-07-29 海信集团有限公司 Palm region recognition method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9747306B2 (en) * 2012-05-25 2017-08-29 Atheer, Inc. Method and apparatus for identifying input features for later recognition

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"An Overview of Kinect-Based Gesture Tracking"; Liu Jia et al.; Application Research of Computers; 2015-07-31; Vol. 32, No. 7; pp. 1921-1925 *
"Gesture Detection and Tracking Based on Depth Information"; Chen Zihao; China Master's Theses Full-text Database, Information Science and Technology; 2013-01-15; No. 01; pp. 16-17, Section 2.3.2 *
"Real-time Gesture Tracking Based on an Adaptive Active Contour Model"; Qi Sumin et al.; Computer Science; 2006-12-31; Vol. 33, No. 11; pp. 192-193, Section 2.2 *


Similar Documents

Publication Publication Date Title
CN107341811B (en) Method for segmenting hand region by using MeanShift algorithm based on depth image
CN110232311B (en) Method and device for segmenting hand image and computer equipment
JP6079832B2 (en) Human computer interaction system, hand-to-hand pointing point positioning method, and finger gesture determination method
US10963066B2 (en) Keyboard input system and keyboard input method using finger gesture recognition
Geetha et al. A vision based recognition of indian sign language alphabets and numerals using b-spline approximation
CN104978012B (en) One kind points to exchange method, apparatus and system
JP2003346162A (en) Input system by image recognition of hand
Li et al. Fully automatic 3D facial expression recognition using polytypic multi-block local binary patterns
Krejov et al. Multi-touchless: Real-time fingertip detection and tracking using geodesic maxima
JP6651388B2 (en) Gesture modeling device, gesture modeling method, program for gesture modeling system, and gesture modeling system
CN108520264A (en) A kind of hand contour feature optimization method based on depth image
US20160239702A1 (en) Image processing device, image display device, image processing method, and medium
WO2021196013A1 (en) Word recognition method and device, and storage medium
CN107610236B (en) Interaction method and system based on graph recognition
CN114445853A (en) Visual gesture recognition system recognition method
Aksaç et al. Real-time multi-objective hand posture/gesture recognition by using distance classifiers and finite state machine for virtual mouse operations
Elakkiya et al. Intelligent system for human computer interface using hand gesture recognition
CN109325387B (en) Image processing method and device and electronic equipment
CN107169449A (en) Chinese sign language interpretation method based on depth transducer
CN110349111B (en) Correction method and device for two-dimensional code image
CN112434632A (en) Pattern recognition method, intelligent terminal and storage medium
KR20160097513A (en) Paired-edge based hand tracking method using depth image
CN110737364A (en) Control method for touch writing acceleration under android systems
Babu et al. Touchless User Interface for Sketching Using Hand Gesture Recognition
CN114596582B (en) Augmented reality interaction method and system with vision and force feedback

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant