WO2018032700A1 - Method for tracking finger-web position and device therefor (跟踪指蹼位置的方法及其装置) - Google Patents

Method for tracking finger-web position and device therefor (跟踪指蹼位置的方法及其装置)

Info

Publication number
WO2018032700A1
WO2018032700A1 (Application PCT/CN2016/113492)
Authority
WO
WIPO (PCT)
Prior art keywords
contour
point
hand
fingerprint
depth
Prior art date
Application number
PCT/CN2016/113492
Other languages
English (en)
French (fr)
Inventor
杨铭
Original Assignee
广州视源电子科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州视源电子科技股份有限公司 filed Critical 广州视源电子科技股份有限公司
Publication of WO2018032700A1 publication Critical patent/WO2018032700A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Definitions

  • The present invention relates to the field of image processing and, in particular, to a method and device for tracking the position of a finger web (the web of skin between adjacent fingers).
  • The finger web refers to the three fibrous gaps enclosed, at the metacarpal heads, between the transverse fibers of the deep palmar aponeurosis and the four bundles of longitudinal fibers it sends distally; it is the passage between the palm, the back of the hand, and the palmar and dorsal sides of the fingers. In virtual ring try-on and other application scenarios that need to track the positions of the finger webs of a user's hand, it is difficult to accurately extract the position information of the finger webs from image data captured by a camera.
  • An embodiment of the invention provides a method for extracting the position of a finger web, which can accurately track the position information of the finger web from image data.
  • In a first aspect, an embodiment of the present invention provides a method for tracking the position of a finger web, comprising: acquiring a depth image and a color image recording the same view of a user's hand; extracting the outer contour of the hand from the depth image; selecting, from the contour points between adjacent fingertips of the outer contour, the contour point farthest from the line connecting the adjacent fingertips as a contour inflection point; and, taking the contour inflection point as the current position of the finger web, iteratively correcting the current position using the color image to obtain the output position of the finger web.
  • Extracting the outer contour of the hand from the depth image specifically comprises: calculating the depth of each joint point of the hand from the depth image according to a preset hand joint-point model; taking the median of the joint-point depths as a reference depth d_ref; extracting from the depth image the outer contours of the regions whose depth lies within the hand depth range [d_ref - δ, d_ref + δ], where δ is a parameter measuring the thickness between the back and the palm of the hand; and selecting, from those outer contours, the one whose centroid has the smallest average distance to the joint points and whose contour curve has the greatest total length as the outer contour of the hand.
  • Taking the contour inflection point as the current position of the finger web and iteratively correcting that position using the color image of the hand to obtain the output position of the finger web specifically comprises: extracting from the color image a local region centered on the current position; offsetting the current position to obtain a plurality of offset positions and, for each offset position, extracting from the color image a region of the same shape as the local region, centered on that offset position, as a candidate region; computing the structural deviation of each candidate region from the local region; if the structural deviation of every candidate region exceeds a preset threshold, taking the current position as the output position of the finger web; otherwise, selecting the offset position corresponding to the candidate region with the smallest structural deviation from the local region to update the current position, and updating the local region and the candidate regions.
  • For each candidate region, the structural deviation of the candidate region from the local region is d(P, Q), where P is the set of pixel values of the pixels in the local region, Q is the set of pixel values of the pixels in the candidate region, μP is the mean of all pixel values in the set P, μQ is the mean of all pixel values in the set Q, σPQ is the covariance of the sets P and Q, σP is the variance of the set P, σQ is the variance of the set Q, and c1 and c2 are preset constants (the formula for d(P, Q) is given as an image in the original publication).
  • The current position is (x, y), and the offset positions are (x + δx, y + δy), where δx ∈ {-1, 0, 1}, δy ∈ {-1, 0, 1}, and δx and δy are not both 0.
  • Before taking the contour inflection point as the current position of the finger web, the method further includes: revising the abscissa of the contour inflection point to the median of the abscissas of the M consecutive contour points on its left and the M consecutive contour points on its right, and likewise revising its ordinate to the median of the corresponding ordinates; and performing Gaussian blurring on the color image.
  • Before the contour inflection point is selected, the method further includes: for each contour point of the outer contour, revising the abscissa of the contour point to the mean of the abscissas of the N consecutive contour points on its left and the N consecutive contour points on its right, and revising the ordinate of the contour point to the mean of the corresponding ordinates.
  • Correspondingly, the present invention also provides a device for tracking the position of a finger web, comprising:
  • an image acquisition module, configured to acquire a depth image and a color image recording the same view of the user's hand;
  • an outer-contour extraction module, configured to extract the outer contour of the hand from the depth image;
  • a contour-inflection-point selection module, configured to select, from the contour points between adjacent fingertips of the outer contour, the contour point farthest from the line connecting the adjacent fingertips as a contour inflection point;
  • a finger-web position determination module, configured to take the contour inflection point as the current position of the finger web and iteratively correct the current position using the color image to obtain the output position of the finger web.
  • The outer-contour extraction module specifically includes:
  • a joint-point depth calculation unit, configured to calculate the depth of each joint point of the hand from the depth image according to a preset hand joint-point model;
  • a reference-depth determination unit, configured to take the median of the joint-point depths as a reference depth d_ref;
  • a contour extraction unit, configured to extract from the depth image the outer contours of the regions whose depth lies within the hand depth range [d_ref - δ, d_ref + δ], where δ is a parameter measuring the thickness between the back and the palm of the hand;
  • a contour selection unit, configured to select, from those outer contours, the one whose centroid has the smallest average distance to the joint points and whose contour curve has the greatest total length as the outer contour of the hand.
  • The finger-web position determination module includes:
  • a local-region determination unit, configured to take the contour inflection point as the current position of the finger web and extract from the color image of the hand a local region centered on the current position;
  • a candidate-region determination unit, configured to offset the current position to obtain a plurality of offset positions and, for each offset position, extract from the color image of the hand a region of the same shape as the local region, centered on that offset position, as a candidate region;
  • a deviation-degree calculation unit, configured to compute the structural deviation of each candidate region from the local region;
  • an output-position determination unit, configured to take the current position as the output position of the finger web when the structural deviation of every candidate region from the local region exceeds a preset threshold;
  • a current-position update unit, configured to, when there is a candidate region whose structural deviation from the local region does not exceed the preset threshold, select the offset position corresponding to the candidate region with the smallest structural deviation to update the current position, and then update the local region and the candidate regions.
  • The finger-web position determination module further includes:
  • a contour-inflection-point revision unit, configured to revise the abscissa of the contour inflection point to the median of the abscissas of the M consecutive contour points on its left and the M consecutive contour points on its right, and to revise the ordinate of the contour inflection point to the median of the corresponding ordinates;
  • a Gaussian-blur processing unit, configured to perform Gaussian blurring on the color image.
  • The device for tracking the finger-web position further includes:
  • a contour-point revision module, configured to, for each contour point of the outer contour, revise the abscissa of the contour point to the mean of the abscissas of the N consecutive contour points on its left and the N consecutive contour points on its right, and revise the ordinate of the contour point to the mean of the corresponding ordinates.
  • The method and device for tracking the position of a finger web provided by the embodiments of the present invention acquire a depth image and a color image recording the same view of a user's hand; extract the outer contour of the hand from the depth image; select, from the contour points between adjacent fingertips of the outer contour, the contour point farthest from the line connecting the adjacent fingertips as a contour inflection point; and, taking the contour inflection point as the current position of the finger web, iteratively correct the current position using the color image to obtain the output position of the finger web, thereby accurately tracking the position information of the finger web from image data.
  • FIG. 1 is a schematic flowchart of an embodiment of the method for tracking the finger-web position provided by the present invention;
  • FIG. 2 is a schematic flowchart of an embodiment of step S4 in the method of FIG. 1;
  • FIG. 3 is a schematic flowchart of another embodiment of step S4 in the method of FIG. 1;
  • FIG. 4 is a schematic structural diagram of an embodiment of the device for tracking the finger-web position provided by the present invention;
  • FIG. 5 is a schematic structural diagram of an embodiment of the outer-contour extraction module of the device;
  • FIG. 6 is a schematic structural diagram of an embodiment of the finger-web position determination module of the device.
  • Referring to FIG. 1, a schematic flowchart of an embodiment of the method for tracking the finger-web position provided by the present invention, the method includes steps S1 to S4: S1, acquire a depth image and a color image recording the same view of the user's hand; S2, extract the outer contour of the hand from the depth image; S3, from the contour points between adjacent fingertips of the outer contour, select the contour point farthest from the line connecting the adjacent fingertips as a contour inflection point; S4, taking the contour inflection point as the current position of the finger web, iteratively correct the current position using the color image to obtain the output position of the finger web.
  • The depth image is an image captured by a depth camera; the value of each of its pixels reflects the distance between the camera and the point on the photographed object corresponding to that pixel. The color image is an image captured by an ordinary camera; the value of each of its pixels reflects the apparent color of the corresponding point on the object.
  • A hand generally has four finger webs, and the output position of each of them can be obtained by steps S1 to S4 above.
  • The outer contour obtained in step S2 is a set of coordinate points in which adjacent points are joined by straight segments, so the contour is a chain of many short polylines. It therefore needs to be slightly smoothed, as follows: for each contour point of the outer contour, revise the abscissa of the contour point to the mean of the abscissas of the N consecutive contour points on its left and the N consecutive contour points on its right, and revise the ordinate of the contour point to the mean of the corresponding ordinates. The value of N can be set according to actual needs.
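The smoothing step above amounts to a small moving-average filter over the contour's coordinate list. The sketch below is illustrative only, not the patent's code; it assumes a closed contour (indices wrap around) and takes the mean over the N neighbours on each side, excluding the point itself, as the text describes:

```python
def smooth_contour(points, n=2):
    """Replace each contour point by the mean of the n consecutive
    neighbours on its left and the n on its right (the contour is
    treated as closed, so indices wrap around)."""
    m = len(points)
    out = []
    for i in range(m):
        neighbours = [points[(i + k) % m] for k in range(-n, n + 1) if k != 0]
        out.append((sum(p[0] for p in neighbours) / len(neighbours),
                    sum(p[1] for p in neighbours) / len(neighbours)))
    return out
```

Whether the point itself is included in the average is ambiguous in the text; including it merely changes the weighting slightly.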
  • The contour inflection point can be determined in two ways. In the first, a convex hull of the outer contour is computed, the hull edge connecting adjacent fingertips is selected, and the contour point between the adjacent fingertips that lies farthest from that line is chosen as the contour inflection point.
  • In the second, each fingertip is represented as the contour point farthest from its corresponding joint point among the contour points surrounding that joint point; adjacent fingertips are then connected to obtain a line, and the contour point between them that lies farthest from that line is chosen as the contour inflection point.
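Both selection methods share the same geometric core: among the contour points between two adjacent fingertips, pick the one with the greatest perpendicular distance from the line through the fingertips. A minimal sketch (function and parameter names are ours, not the patent's):

```python
import math

def contour_inflection_point(between_points, tip_a, tip_b):
    """Return the contour point farthest from the line through the two
    adjacent fingertips tip_a and tip_b, using the cross-product form
    of the point-to-line distance."""
    ax, ay = tip_a
    bx, by = tip_b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)

    def dist(p):
        px, py = p
        return abs(dx * (py - ay) - dy * (px - ax)) / length

    return max(between_points, key=dist)
```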
  • Note that the contour inflection point obtained by the above operation cannot be used directly as the output position of the finger web: the extracted contour is easily affected by the image background and the finger posture, and the contour points on the outer contour carry noise, so taking the inflection point directly as the output position is quite ambiguous. For example, the inflection point may lie somewhere in a finger seam rather than at the finger web; in that case it must be corrected through step S4 to obtain the true position of the finger web (its output position).
  • In step S2 the outer contour of the hand is extracted from the depth image; the specific implementation is as follows: according to a preset hand joint-point model, calculate the depth of each joint point of the hand from the depth image; take the median of the joint-point depths as a reference depth d_ref; extract from the depth image the outer contours of the regions whose depth lies within the hand depth range [d_ref - δ, d_ref + δ], where δ is a parameter measuring the thickness between the back and the palm of the hand; and from those outer contours select, as the outer contour of the hand, the one whose centroid has the smallest average distance to the joint points and whose contour curve has the greatest total length.
  • The hand joint-point model is a model trained in advance on a large training set of recorded hand depth images; such models include a Kinect-based hand joint-point tracking model and a multi-random-forest model. It is generated by training on the information in hand depth images, and is preferably trained with a random-forest algorithm.
  • The joint points give the approximate position of each joint of the hand, and the depth range of the whole hand can be estimated from the depths of the joint points. In a few cases some computed joint points may fall outside the hand region owing to limited accuracy, or may have large depth errors owing to noise in the depth image; to reduce the influence of such abnormal joint points, the median of the joint-point depths is taken as the reference depth, so that the depth of the entire hand lies within the hand depth range [d_ref - δ, d_ref + δ], and the outer contour of the region within this range is extracted as the outer contour of the hand.
  • Because of noise or other interfering regions, several outer contours may be extracted; in that case it suffices to select the one whose centroid has the smallest average distance to the joint points and whose contour curve has the greatest total length.
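The depth-range segmentation above can be sketched as follows (an illustration under our own assumptions, not the patent's implementation): take the median of the joint-point depths as d_ref and keep the pixels whose depth lies in [d_ref - δ, d_ref + δ]; the contours of the resulting mask could then be traced with, e.g., OpenCV's findContours:

```python
from statistics import median

def hand_depth_mask(depth_rows, joint_depths, delta):
    """Binary mask (list of lists of bool) of pixels whose depth lies in
    [d_ref - delta, d_ref + delta], where d_ref is the median of the
    joint-point depths (the median damps abnormal joint points)."""
    d_ref = median(joint_depths)
    return [[d_ref - delta <= d <= d_ref + delta for d in row]
            for row in depth_rows]
```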
  • FIG. 2 is a schematic flowchart of an embodiment of step S4 in the method of FIG. 1; step S4 is specifically implemented as follows:
  • S41: taking the contour inflection point as the current position of the finger web, extract from the color image of the hand a local region centered on the current position. S42: offset the current position to obtain a plurality of offset positions and, for each offset position, extract from the color image a region of the same shape as the local region, centered on that offset position, as a candidate region; for example, when the current position is (x, y), the offset positions are (x + δx, y + δy), where δx ∈ {-1, 0, 1}, δy ∈ {-1, 0, 1}, and δx and δy are not both 0; the settings of δx and δy are not limited to these values and can be adjusted to the actual situation. S43: compute the structural deviation of each candidate region from the local region. S44: if the structural deviation of every candidate region exceeds a preset threshold, take the current position as the output position of the finger web. S45: otherwise, select the offset position of the candidate region with the smallest structural deviation to update the current position, and update the local region and the candidate regions.
  • When the local region lies on a finger web, its color distribution (i.e. pixel-value distribution) differs greatly from that of every neighboring candidate region; when the local region lies in a finger seam, its difference from the neighboring candidate regions along the seam direction is relatively small, while its difference from the other candidate regions is large. Hence, when the structural deviation of every candidate region from the local region exceeds the preset threshold, the local region can be judged to lie on the finger web, and its center (the current position above) is taken as the output position of the finger web, completing the correction; otherwise the local region can be judged to lie in a finger seam and the current position must be corrected further, by updating it to the center (the offset position above) of the candidate region with the smallest structural deviation, which ensures that the updated current position stays in the finger seam rather than drifting off it.
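The decision rule just described condenses into a loop: score the eight one-pixel offsets against the patch at the current position, stop when every offset deviates more than the threshold (the patch sits on the web), otherwise move to the least-deviating offset. The sketch below abstracts the patch comparison into a caller-supplied `score(current, candidate)` function, since the exact deviation formula is given only as an image in the original; all names are illustrative:

```python
def refine_web_position(score, start, threshold, max_iters=50):
    """Iteratively correct the finger-web position.  score(p, c) returns
    the structural deviation between the patch at position p and the
    patch at the offset position c."""
    pos = start
    for _ in range(max_iters):
        x, y = pos
        candidates = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        best_dev, best = min((score(pos, c), c) for c in candidates)
        if best_dev > threshold:   # every neighbour differs strongly: on the web
            return pos
        pos = best                 # still in the finger seam: keep moving
    return pos
```

`max_iters` is our own safeguard against non-terminating drift; the patent does not specify one.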
  • FIG. 3 is a schematic flowchart of another embodiment of step S4 in the method of FIG. 1. On the basis of the previous embodiment, before step S41 the method further includes:
  • S46: revise the abscissa of the contour inflection point to the median of the abscissas of the M consecutive contour points on its left and the M consecutive contour points on its right, and revise its ordinate to the median of the corresponding ordinates. S47: perform Gaussian blurring on the color image; this filters out the influence of noise on the color image and eases the judgments in the subsequent steps.
  • The method of tracking the finger-web position provided above is not limited to tracking a human hand; it can also track feet, hands or feet formed from a mold, or even the hands or feet of animals.
  • The method for tracking the finger-web position can also be applied to VR (Virtual Reality), for example virtual ring try-on: the user raises a hand in front of the camera of a client device; the camera captures the user's hand and transmits the captured depth image and color image to a receiving end; the receiving end tracks the exact positions of the finger webs of the user's hand from the depth image and the color image according to the method provided above, determines the wearing position of the ring on the finger from the positions of two adjacent finger webs, returns an image of the ring worn at that position on the user's finger to the client, and displays it on the client's display, so that the user can see the effect of wearing the ring from the image on the display.
  • The method for tracking the position of a finger web acquires a depth image and a color image recording the same view of a user's hand; extracts the outer contour of the hand from the depth image; selects, from the contour points between adjacent fingertips of the outer contour, the contour point farthest from the line connecting the adjacent fingertips as a contour inflection point; and, taking the contour inflection point as the current position of the finger web, iteratively corrects the current position using the color image to obtain the output position of the finger web, thereby accurately tracking the position information of the finger web from image data.
  • Correspondingly, the present invention also provides a device for tracking the finger-web position, capable of implementing the entire flow of the method provided by the above embodiments. Referring to FIG. 4, a schematic structural diagram of an embodiment of the device provided by the present invention, the device specifically includes:
  • an image acquisition module 10, configured to acquire a depth image and a color image recording the same view of the user's hand;
  • an outer-contour extraction module 20, configured to extract the outer contour of the hand from the depth image;
  • a contour-inflection-point selection module 30, configured to select, from the contour points between adjacent fingertips of the outer contour, the contour point farthest from the line connecting the adjacent fingertips as a contour inflection point;
  • a finger-web position determination module 40, configured to take the contour inflection point as the current position of the finger web and iteratively correct the current position using the color image to obtain the output position of the finger web.
  • FIG. 5 is a schematic structural diagram of an embodiment of the outer-contour extraction module of the device for tracking the finger-web position provided by the present invention; the outer-contour extraction module 20 specifically includes:
  • a joint-point depth calculation unit 21, configured to calculate the depth of each joint point of the hand from the depth image according to a preset hand joint-point model;
  • a reference-depth determination unit 22, configured to take the median of the joint-point depths as a reference depth d_ref;
  • a contour extraction unit 23, configured to extract from the depth image the outer contours of the regions whose depth lies within the hand depth range [d_ref - δ, d_ref + δ], where δ is a parameter measuring the thickness between the back and the palm of the hand;
  • a contour selection unit 24, configured to select, from those outer contours, the one whose centroid has the smallest average distance to the joint points and whose contour curve has the greatest total length as the outer contour of the hand.
  • FIG. 6 is a schematic structural diagram of an embodiment of the finger-web position determination module of the device for tracking the finger-web position; the finger-web position determination module 40 specifically includes:
  • a local-region determination unit 41, configured to take the contour inflection point as the current position of the finger web and extract from the color image of the hand a local region centered on the current position;
  • a candidate-region determination unit 42, configured to offset the current position to obtain a plurality of offset positions and, for each offset position, extract from the color image of the hand a region of the same shape as the local region, centered on that offset position, as a candidate region; for example, when the current position is (x, y), the offset positions are (x + δx, y + δy), where δx ∈ {-1, 0, 1}, δy ∈ {-1, 0, 1}, and δx and δy are not both 0.
  • a deviation-degree calculation unit 43, configured to compute the structural deviation of each candidate region from the local region; for each candidate region, the structural deviation of the candidate region from the local region is d(P, Q), where:
  • P is the set of pixel values of the pixels in the local region;
  • Q is the set of pixel values of the pixels in the candidate region;
  • μP is the mean of all pixel values in the set P;
  • μQ is the mean of all pixel values in the set Q;
  • σPQ is the covariance of the sets P and Q;
  • σP is the variance of the set P;
  • σQ is the variance of the set Q;
  • c1 and c2 are preset constants.
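The deviation formula itself appears only as an image in the original publication, but the symbols listed above (means, variances, covariance, two stabilizing constants) match the structure of the SSIM index. Purely as a hypothesis, a deviation of the form 1 - SSIM(P, Q) would be consistent with those symbols; the sketch below implements that guess, with the usual 8-bit SSIM constants as assumed defaults:

```python
def structural_deviation(p, q, c1=6.5025, c2=58.5225):
    """Hypothetical reconstruction of d(P, Q) as one minus the SSIM
    index: 0 for identical patches, larger for dissimilar ones.
    c1 and c2 default to the standard SSIM constants for 8-bit data
    ((0.01*255)**2 and (0.03*255)**2) -- an assumption, not a value
    taken from the patent."""
    n = len(p)
    mu_p = sum(p) / n
    mu_q = sum(q) / n
    var_p = sum((v - mu_p) ** 2 for v in p) / n
    var_q = sum((v - mu_q) ** 2 for v in q) / n
    cov_pq = sum((a - mu_p) * (b - mu_q) for a, b in zip(p, q)) / n
    ssim = ((2 * mu_p * mu_q + c1) * (2 * cov_pq + c2)) / \
           ((mu_p ** 2 + mu_q ** 2 + c1) * (var_p + var_q + c2))
    return 1.0 - ssim
```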
  • an output-position determination unit 44, configured to take the current position as the output position of the finger web when the structural deviation of every candidate region from the local region exceeds a preset threshold;
  • a current-position update unit 45, configured to, when there is a candidate region whose structural deviation from the local region does not exceed the preset threshold, select the offset position corresponding to the candidate region with the smallest structural deviation to update the current position, thereby updating the local region and the candidate regions.
  • The finger-web position determination module 40 further includes:
  • a contour-inflection-point revision unit 46, configured to revise the abscissa of the contour inflection point to the median of the abscissas of the M consecutive contour points on its left and the M consecutive contour points on its right, and to revise the ordinate of the contour inflection point to the median of the corresponding ordinates;
  • a Gaussian-blur processing unit 47, configured to perform Gaussian blurring on the color image.
  • The device for tracking the finger-web position further includes:
  • a contour-point revision module 50, configured to, for each contour point of the outer contour, revise the abscissa of the contour point to the mean of the abscissas of the N consecutive contour points on its left and the N consecutive contour points on its right, and revise the ordinate of the contour point to the mean of the corresponding ordinates.
  • The device for tracking the position of a finger web provided by the embodiment of the present invention acquires a depth image and a color image recording the same view of the user's hand; extracts the outer contour of the hand from the depth image; selects, from the contour points between adjacent fingertips of the outer contour, the contour point farthest from the line connecting the adjacent fingertips as a contour inflection point; and, taking the contour inflection point as the current position of the finger web, iteratively corrects the current position using the color image to obtain the output position of the finger web, thereby accurately tracking the position information of the finger web from image data.
  • The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A method for tracking the position of a finger web, comprising: acquiring a depth image and a color image recording the same view of a user's hand (S1); extracting the outer contour of the hand from the depth image (S2); selecting, from the contour points between adjacent fingertips of the outer contour, the contour point farthest from the line connecting the adjacent fingertips as a contour inflection point (S3); and, taking the contour inflection point as the current position of the finger web, iteratively correcting the current position of the finger web using the color image to obtain the output position of the finger web (S4). The method can accurately track the position information of the finger web from image data.

Description

Method for tracking finger-web position and device therefor
Technical Field
The present invention relates to the field of image processing, and in particular to a method and device for tracking the position of a finger web.
Background
The finger web refers to the three fibrous gaps enclosed, at the metacarpal heads, between the transverse fibers of the deep palmar aponeurosis and the four bundles of longitudinal fibers it sends distally; it is the passage between the palm, the back of the hand, and the palmar and dorsal sides of the fingers. In virtual ring try-on and other application scenarios that need to track the positions of the finger webs of a user's hand, it is difficult to accurately extract the position information of the finger webs from image data captured by a camera.
Summary of the Invention
An embodiment of the present invention provides a method for extracting the position of a finger web, which can accurately track the position information of the finger web from image data.
In a first aspect, an embodiment of the present invention provides a method for tracking the position of a finger web, comprising:
acquiring a depth image and a color image recording the same view of a user's hand;
extracting the outer contour of the hand from the depth image;
selecting, from the contour points between adjacent fingertips of the outer contour, the contour point farthest from the line connecting the adjacent fingertips as a contour inflection point;
taking the contour inflection point as the current position of the finger web, and iteratively correcting the current position of the finger web using the color image to obtain the output position of the finger web.
With reference to the first aspect, in a first possible implementation of the first aspect, extracting the outer contour of the hand from the depth image specifically comprises:
calculating the depth of each joint point of the hand from the depth image according to a preset hand joint-point model;
taking the median of the joint-point depths as a reference depth d_ref;
extracting from the depth image the outer contours of the regions whose depth lies within the hand depth range [d_ref - δ, d_ref + δ], where δ is a parameter measuring the thickness between the back and the palm of the hand;
selecting, from those outer contours, the one whose centroid has the smallest average distance to the joint points and whose contour curve has the greatest total length as the outer contour of the hand.
With reference to the first aspect, in a second possible implementation of the first aspect, taking the contour inflection point as the current position of the finger web and iteratively correcting the current position of the finger web using the color image of the hand to obtain the output position of the finger web specifically comprises:
taking the contour inflection point as the current position of the finger web, extracting from the color image of the hand a local region centered on the current position;
offsetting the current position to obtain a plurality of offset positions and, for each offset position, extracting from the color image of the hand a region of the same shape as the local region, centered on that offset position, as a candidate region;
computing the structural deviation of each candidate region from the local region;
if the structural deviation of every candidate region from the local region exceeds a preset threshold, taking the current position as the output position of the finger web;
if there is a candidate region whose structural deviation from the local region does not exceed the preset threshold, selecting the offset position corresponding to the candidate region with the smallest structural deviation from the local region to update the current position, and updating the local region and the candidate regions.
With reference to the second possible implementation of the first aspect, in a third possible implementation of the first aspect, for each candidate region the structural deviation of the candidate region from the local region is d(P, Q),
[formula for d(P, Q), given as an image in the original publication]
where P is the set of pixel values of the pixels in the local region, Q is the set of pixel values of the pixels in the candidate region, μP is the mean of all pixel values in the set P, μQ is the mean of all pixel values in the set Q, σPQ is the covariance of the sets P and Q, σP is the variance of the set P, σQ is the variance of the set Q, and c1 and c2 are preset constants.
With reference to the second possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the current position is (x, y) and the offset positions are (x + δx, y + δy), where δx ∈ {-1, 0, 1}, δy ∈ {-1, 0, 1}, and δx and δy are not both 0.
With reference to the second possible implementation of the first aspect, in a fifth possible implementation of the first aspect, before taking the contour inflection point as the current position of the finger web, the method further comprises:
revising the abscissa of the contour inflection point to the median of the abscissas of the M consecutive contour points on its left and the M consecutive contour points on its right, and revising the ordinate of the contour inflection point to the median of the ordinates of the M consecutive contour points on its left and the M consecutive contour points on its right;
performing Gaussian blurring on the color image.
With reference to the first aspect, in a sixth possible implementation of the first aspect, before selecting, from the contour points between adjacent fingertips of the outer contour, the contour point farthest from the line connecting the adjacent fingertips as the contour inflection point, the method further comprises:
for each contour point of the outer contour, revising the abscissa of the contour point to the mean of the abscissas of the N consecutive contour points on its left and the N consecutive contour points on its right, and revising the ordinate of the contour point to the mean of the ordinates of the N consecutive contour points on its left and the N consecutive contour points on its right.
Correspondingly, in a second aspect, the present invention also provides a device for tracking the position of a finger web, comprising:
an image acquisition module, configured to acquire a depth image and a color image recording the same view of a user's hand;
an outer-contour extraction module, configured to extract the outer contour of the hand from the depth image;
a contour-inflection-point selection module, configured to select, from the contour points between adjacent fingertips of the outer contour, the contour point farthest from the line connecting the adjacent fingertips as a contour inflection point;
a finger-web position determination module, configured to take the contour inflection point as the current position of the finger web and iteratively correct the current position of the finger web using the color image to obtain the output position of the finger web.
With reference to the second aspect, in a first possible implementation of the second aspect, the outer-contour extraction module specifically comprises:
a joint-point depth calculation unit, configured to calculate the depth of each joint point of the hand from the depth image according to a preset hand joint-point model;
a reference-depth determination unit, configured to take the median of the joint-point depths as a reference depth d_ref;
a contour extraction unit, configured to extract from the depth image the outer contours of the regions whose depth lies within the hand depth range [d_ref - δ, d_ref + δ], where δ is a parameter measuring the thickness between the back and the palm of the hand;
a contour selection unit, configured to select, from those outer contours, the one whose centroid has the smallest average distance to the joint points and whose contour curve has the greatest total length as the outer contour of the hand.
With reference to the second aspect, in a second possible implementation of the second aspect, the finger-web position determination module specifically comprises:
a local-region determination unit, configured to take the contour inflection point as the current position of the finger web and extract from the color image of the hand a local region centered on the current position;
a candidate-region determination unit, configured to offset the current position to obtain a plurality of offset positions and, for each offset position, extract from the color image of the hand a region of the same shape as the local region, centered on that offset position, as a candidate region;
a deviation-degree calculation unit, configured to compute the structural deviation of each candidate region from the local region;
an output-position determination unit, configured to take the current position as the output position of the finger web when the structural deviation of every candidate region from the local region exceeds a preset threshold;
a current-position update unit, configured to, when there is a candidate region whose structural deviation from the local region does not exceed the preset threshold, select the offset position corresponding to the candidate region with the smallest structural deviation from the local region to update the current position, and then update the local region and the candidate regions.
With reference to the second possible implementation of the second aspect, in a third possible implementation of the second aspect, the finger-web position determination module further comprises:
a contour-inflection-point revision unit, configured to revise the abscissa of the contour inflection point to the median of the abscissas of the M consecutive contour points on its left and the M consecutive contour points on its right, and to revise the ordinate of the contour inflection point to the median of the ordinates of the M consecutive contour points on its left and the M consecutive contour points on its right;
a Gaussian-blur processing unit, configured to perform Gaussian blurring on the color image.
With reference to the second aspect, in a fourth possible implementation of the second aspect, the device for tracking the position of a finger web further comprises:
a contour-point revision module, configured to, for each contour point of the outer contour, revise the abscissa of the contour point to the mean of the abscissas of the N consecutive contour points on its left and the N consecutive contour points on its right, and revise the ordinate of the contour point to the mean of the ordinates of the N consecutive contour points on its left and the N consecutive contour points on its right.
Implementing the embodiments of the present invention has the following beneficial effects:
The method and device for tracking the position of a finger web provided by the embodiments of the present invention acquire a depth image and a color image recording the same view of a user's hand; extract the outer contour of the hand from the depth image; select, from the contour points between adjacent fingertips of the outer contour, the contour point farthest from the line connecting the adjacent fingertips as a contour inflection point; and, taking the contour inflection point as the current position of the finger web, iteratively correct the current position of the finger web using the color image to obtain the output position of the finger web, thereby accurately tracking the position information of the finger web from image data.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an embodiment of the method for tracking the position of a finger web provided by the present invention;
FIG. 2 is a schematic flowchart of an embodiment of step S4 in the method of FIG. 1;
FIG. 3 is a schematic flowchart of another embodiment of step S4 in the method of FIG. 1;
FIG. 4 is a schematic structural diagram of an embodiment of the device for tracking the position of a finger web provided by the present invention;
FIG. 5 is a schematic structural diagram of an embodiment of the outer-contour extraction module of the device;
FIG. 6 is a schematic structural diagram of an embodiment of the finger-web position determination module of the device.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to FIG. 1, a schematic flowchart of an embodiment of the method for tracking the position of a finger web provided by the present invention, the method comprises steps S1 to S4, as follows:
S1: acquire a depth image and a color image recording the same view of a user's hand;
S2: extract the outer contour of the hand from the depth image;
S3: from the contour points between adjacent fingertips of the outer contour, select the contour point farthest from the line connecting the adjacent fingertips as a contour inflection point;
S4: taking the contour inflection point as the current position of the finger web, iteratively correct the current position of the finger web using the color image to obtain the output position of the finger web.
It should be noted that the depth image is an image of the photographed object captured by a depth camera; the value of each of its pixels reflects the distance between the camera and the point on the object corresponding to that pixel. The color image is an image of the photographed object captured by an ordinary camera; the value of each of its pixels reflects the apparent color of the corresponding point on the object. In addition, a hand generally has four finger webs, and the output position of each of them can be obtained by steps S1 to S4 above.
When the outer contour of the hand is obtained in step S2, it is a set of coordinate points in which adjacent points are joined by straight segments, so the contour is a chain of many short polylines and needs to be slightly smoothed, as follows:
For each contour point of the outer contour, revise the abscissa of the contour point to the mean of the abscissas of the N consecutive contour points on its left and the N consecutive contour points on its right, and revise the ordinate of the contour point to the mean of the corresponding ordinates. The value of N can be set according to actual needs.
The contour inflection point can be determined in two ways:
First, obtain the convex hull of the outer contour, select the hull edge connecting adjacent fingertips, and then pick, from the contour points between the adjacent fingertips, the point farthest from that line as the contour inflection point.
Second, represent each fingertip as the contour point farthest from its corresponding joint point among the contour points surrounding that joint point, connect adjacent fingertips to obtain a line, and then pick, from the contour points between the adjacent fingertips, the point farthest from that line as the contour inflection point.
It should be noted that the contour inflection point obtained above cannot be used directly as the output position of the finger web: the extracted contour is easily affected by the image background and the finger posture, and the contour points on the outer contour carry noise, so directly taking the inflection point as the output position is quite ambiguous. For example, the inflection point may lie somewhere in a finger seam rather than at the finger web, in which case it must be corrected through step S4 to obtain the true position of the finger web (its output position).
Step S2, extracting the outer contour of the hand from the depth image, is specifically implemented as follows:
according to a preset hand joint-point model, calculate the depth of each joint point of the hand from the depth image;
take the median of the joint-point depths as the reference depth d_ref;
extract from the depth image the outer contours of the regions whose depth lies within the hand depth range [d_ref - δ, d_ref + δ], where δ is a parameter measuring the thickness between the back and the palm of the hand;
from those outer contours select, as the outer contour of the hand, the one whose centroid has the smallest average distance to the joint points and whose contour curve has the greatest total length.
It should be noted that the hand joint-point model is trained in advance on a large training set of recorded hand depth images; such models include a Kinect-based hand joint-point tracking model and a multi-random-forest model. It is generated by training on the information of hand depth images, preferably with a random-forest algorithm. The joint points give the approximate position of each joint of the hand, and the depth range of the whole hand can be estimated from their depths. In a few cases some computed joint points may fall outside the hand region owing to limited accuracy, or may have large depth errors owing to noise in the depth image; to reduce the influence of such abnormal joint points, the median of the joint-point depths is taken as the reference depth, so that the depth of the entire hand lies within [d_ref - δ, d_ref + δ], δ being a parameter measuring the thickness between the back and the palm of the hand; the outer contour of the edge of the region within this range is then extracted as the outer contour of the hand. However, owing to noise or other interfering regions, several outer contours may be extracted; in that case it suffices to select the one whose centroid has the smallest average distance to the joint points and whose contour curve has the greatest total length.
As shown in FIG. 2, which is a schematic flowchart of an embodiment of step S4 of the method for tracking finger web positions of FIG. 1, step S4 is implemented as follows:
S41: taking the contour inflection point as the current position of the finger web, extract from the color image of the hand a local region centered on the current position;
S42: offset the current position to obtain several offset positions, and for each offset position extract from the color image of the hand a region centered on that offset position and of the same shape as the local region, as a candidate region. For example, when the current position is (x, y), the offset positions are (x+δx, y+δy), where δx∈{-1,0,1}, δy∈{-1,0,1}, and δx and δy are not both 0; δx and δy are not limited to these values and can be adjusted according to the actual situation.
S43: compute the degree of structural deviation between each candidate region and the local region. For each candidate region, the degree of structural deviation between the candidate region and the local region is d(P,Q),
d(P,Q) = 1 − ((2μPμQ + c1)(2σPQ + c2)) / ((μP² + μQ² + c1)(σP + σQ + c2))
where P is the set of pixel values of all pixels of the local region, Q is the set of pixel values of all pixels of the candidate region, μP is the mean of all pixel values in set P, μQ is the mean of all pixel values in set Q, σPQ is the covariance of sets P and Q, σP is the variance of set P, σQ is the variance of set Q, and c1 and c2 are preset constants. Since both the candidate region and the local region are extracted from the color image of the hand, this degree of structural deviation accurately describes the color distribution (pixel-value distribution) at each point of the two regions.
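The formula image for d(P,Q) is not reproduced in this text; given the listed terms (two means, two variances, a covariance, and two stabilising constants), a deviation of the form 1 − SSIM is consistent with them. A sketch under that assumption, with the common SSIM constants for 8-bit images as illustrative defaults:

```python
import numpy as np

def structural_deviation(P, Q, c1=6.5025, c2=58.5225):
    """d(P, Q) as 1 minus an SSIM-style similarity: near 0 for patches
    with identical statistics, growing as the distributions diverge."""
    P = np.asarray(P, dtype=float).ravel()
    Q = np.asarray(Q, dtype=float).ravel()
    mu_p, mu_q = P.mean(), Q.mean()
    var_p, var_q = P.var(), Q.var()
    cov_pq = ((P - mu_p) * (Q - mu_q)).mean()
    sim = ((2 * mu_p * mu_q + c1) * (2 * cov_pq + c2)) / \
          ((mu_p ** 2 + mu_q ** 2 + c1) * (var_p + var_q + c2))
    return 1.0 - sim
```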
S44: if the degree of structural deviation between every candidate region and the local region is greater than a preset threshold, take the current position as the output position of the finger web;
S45: if there is a candidate region whose degree of structural deviation from the local region is not greater than the preset threshold, select the offset position corresponding to the candidate region with the smallest degree of structural deviation from the local region to update the current position, and update the local region and the candidate regions accordingly.
It should be noted that when the local region lies on the finger web, its color distribution (i.e. pixel-value distribution) differs considerably from that of every neighboring candidate region; when the local region lies in a finger gap, its color distribution differs relatively little from the neighboring candidate regions along the direction of the gap and considerably from the other candidate regions. Therefore, when the degree of structural deviation between every candidate region and the local region exceeds the preset threshold, the local region is judged to lie on the finger web, and its center (the current position above) is taken as the output position of the finger web, completing the correction of the output position. Otherwise, the local region is judged to lie in a finger gap, and the current position of the finger web must be corrected further; choosing the center of the candidate region with the smallest structural deviation from the local region (the offset position above) as the new current position ensures that the updated current position stays within the finger gap instead of drifting to a position off the gap.
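Steps S41 to S45 amount to a small hill-climbing loop. A sketch, in which the patch extraction and the deviation measure are passed in as callables, and the iteration cap is a practical safeguard not stated in the patent:

```python
def refine_web_position(pos, extract_patch, deviation, threshold, max_iters=100):
    """Iteratively correct the web position.  Stop when every candidate
    patch deviates from the local patch by more than the threshold
    (S44); otherwise move to the least-deviating offset (S45)."""
    offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0)]
    for _ in range(max_iters):
        local = extract_patch(pos)
        candidates = []
        for dx, dy in offsets:
            cand_pos = (pos[0] + dx, pos[1] + dy)
            candidates.append((deviation(local, extract_patch(cand_pos)), cand_pos))
        best_dev, best_pos = min(candidates)
        if best_dev > threshold:   # every neighbour differs: on the web
            return pos
        pos = best_pos             # still in the finger gap: keep moving
    return pos
```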
As shown in FIG. 3, which is a schematic flowchart of another embodiment of step S4 of the method for tracking finger web positions of FIG. 1, on the basis of the previous embodiment the following steps precede step S41:
S46: revise the x-coordinate of the contour inflection point to the median of the x-coordinates of the M consecutive contour points on its left and the M consecutive contour points on its right, and revise the y-coordinate of the contour inflection point to the median of the y-coordinates of the same 2M points;
S47: apply Gaussian blur to the color image. This step filters out the influence of noise on the color image and facilitates the judgments in the subsequent steps.
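The median revision of S46 could be sketched as follows (the wrap-around indexing for a closed contour and the function name are assumptions):

```python
import numpy as np

def median_revise(contour, i, m=2):
    """Revise the inflection point contour[i] coordinate-wise to the
    median of the m contour points on each side of it."""
    pts = np.asarray(contour, dtype=float)
    idx = [(i + k) % len(pts) for k in range(-m, m + 1) if k != 0]
    return np.median(pts[idx], axis=0)
```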
The method of tracking finger web positions provided above is not limited to tracking a human hand; it can also track a foot, a hand or foot formed from a mold, or even the paw of an animal. In addition, the method can be applied to VR (Virtual Reality), for example virtual ring try-on: the user raises a hand in front of the camera of a client, the camera photographs the user's hand and transmits the captured depth image and color image to a receiving end; the receiving end tracks the exact positions of the finger webs of the user's hand from the depth image and the color image by the method provided above, determines the wearing position of a ring on a finger from the positions of two adjacent finger webs, returns to the client an image of the ring worn at that position on the user's finger, and displays it on the client's display, so that the user can see from the displayed image the effect of wearing the ring.
The method for tracking finger web positions provided by the embodiments of the present invention acquires a depth image and a color image recording the same view of a user's hand; extracts the outer contour of the hand from the depth image; selects, from the contour points between adjacent fingertips on the outer contour, the contour point farthest from the straight line connecting the adjacent fingertips as the contour inflection point; and takes the contour inflection point as the current position of the finger web, iteratively correcting it using the color image to obtain the output position of the finger web, thereby accurately tracking the position of the finger web from image data.
Correspondingly, the present invention also provides a device for tracking finger web positions capable of implementing the entire flow of the method provided in the above embodiments. Referring to FIG. 4, which is a schematic structural diagram of an embodiment of the device for tracking finger web positions provided by the present invention, the device specifically includes:
an image acquisition module 10, configured to acquire a depth image and a color image recording the same view of a user's hand;
an outer contour extraction module 20, configured to extract the outer contour of the hand from the depth image;
a contour inflection point selection module 30, configured to select, from the contour points between adjacent fingertips on the outer contour, the contour point farthest from the straight line connecting the adjacent fingertips as the contour inflection point;
a finger web position determination module 40, configured to take the contour inflection point as the current position of the finger web and iteratively correct the current position of the finger web using the color image to obtain the output position of the finger web.
With reference to the second aspect, in a first possible implementation of the second aspect, referring to FIG. 5, which is a schematic structural diagram of an embodiment of the outer contour extraction module of the device for tracking finger web positions provided by the present invention, the outer contour extraction module 20 specifically includes:
a joint depth computation unit 21, configured to compute the depth of each joint point of the hand from the depth image according to a preset hand joint model;
a reference depth determination unit 22, configured to take the median of the depths of all the joint points as the reference depth d_ref;
a contour extraction unit 23, configured to extract from the depth image the outer contours of the regions whose depth lies within the hand depth range [d_ref−δ, d_ref+δ], where δ is a parameter measuring the thickness of the hand between its back and its palm;
a contour selection unit 24, configured to select from these outer contours, as the outer contour of the hand, the one whose centroid has the smallest average distance to the joint points and whose total curve length is the longest.
With reference to the second aspect, in a second possible implementation of the second aspect, referring to FIG. 6, which is a schematic structural diagram of an embodiment of the finger web position determination module of the device for tracking finger web positions provided by the present invention, the finger web position determination module 40 specifically includes:
a local region determination unit 41, configured to take the contour inflection point as the current position of the finger web and extract from the color image of the hand a local region centered on the current position;
a candidate region determination unit 42, configured to offset the current position to obtain several offset positions and, for each offset position, extract from the color image of the hand a region centered on that offset position and of the same shape as the local region, as a candidate region; for example, when the current position is (x, y), the offset positions are (x+δx, y+δy), where δx∈{-1,0,1}, δy∈{-1,0,1}, and δx and δy are not both 0;
a deviation degree computation unit 43, configured to compute the degree of structural deviation between each candidate region and the local region; for each candidate region, the degree of structural deviation between the candidate region and the local region is d(P,Q),
d(P,Q) = 1 − ((2μPμQ + c1)(2σPQ + c2)) / ((μP² + μQ² + c1)(σP + σQ + c2))
where P is the set of pixel values of all pixels of the local region, Q is the set of pixel values of all pixels of the candidate region, μP is the mean of all pixel values in set P, μQ is the mean of all pixel values in set Q, σPQ is the covariance of sets P and Q, σP is the variance of set P, σQ is the variance of set Q, and c1 and c2 are preset constants;
an output position determination unit 44, configured to take the current position as the output position of the finger web when the degree of structural deviation between every candidate region and the local region is greater than a preset threshold;
a current position update unit 45, configured to, when there is a candidate region whose degree of structural deviation from the local region is not greater than the preset threshold, select the offset position corresponding to the candidate region with the smallest degree of structural deviation from the local region to update the current position, and then update the local region and the candidate regions.
With reference to the second possible implementation of the second aspect, in a third possible implementation of the second aspect, referring to FIG. 6, the finger web position determination module 40 further includes:
a contour inflection point revision unit 46, configured to revise the x-coordinate of the contour inflection point to the median of the x-coordinates of the M consecutive contour points on its left and the M consecutive contour points on its right, and to revise the y-coordinate of the contour inflection point to the median of the y-coordinates of the same 2M points;
a Gaussian blur processing unit 47, configured to apply Gaussian blur to the color image.
With reference to the second aspect, in a fourth possible implementation of the second aspect, as shown in FIG. 4, the device for tracking finger web positions further includes:
a contour point revision module 50, configured to, for each contour point of the outer contour, revise its x-coordinate to the mean of the x-coordinates of the N consecutive contour points on its left and the N consecutive contour points on its right, and revise its y-coordinate to the mean of the y-coordinates of the same 2N points.
The device for tracking finger web positions provided by the embodiments of the present invention acquires a depth image and a color image recording the same view of a user's hand; extracts the outer contour of the hand from the depth image; selects, from the contour points between adjacent fingertips on the outer contour, the contour point farthest from the straight line connecting the adjacent fingertips as the contour inflection point; and takes the contour inflection point as the current position of the finger web, iteratively correcting it using the color image to obtain the output position of the finger web, thereby accurately tracking the position of the finger web from image data.
A person of ordinary skill in the art will understand that all or part of the flows of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above are preferred embodiments of the present invention. It should be pointed out that a person of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements are also regarded as falling within the protection scope of the present invention.

Claims (12)

  1. A method for tracking finger web positions, characterized by comprising:
    acquiring a depth image and a color image recording the same view of a user's hand;
    extracting the outer contour of the hand from the depth image;
    selecting, from the contour points between adjacent fingertips on the outer contour, the contour point farthest from the straight line connecting the adjacent fingertips as the contour inflection point;
    taking the contour inflection point as the current position of the finger web, and iteratively correcting the current position of the finger web using the color image to obtain the output position of the finger web.
  2. The method for tracking finger web positions according to claim 1, wherein extracting the outer contour of the hand from the depth image specifically comprises:
    computing the depth of each joint point of the hand from the depth image according to a preset hand joint model;
    taking the median of the depths of all the joint points as the reference depth d_ref;
    extracting from the depth image the outer contours of the regions whose depth lies within the hand depth range [d_ref−δ, d_ref+δ], where δ is a parameter measuring the thickness of the hand between its back and its palm;
    selecting from these outer contours, as the outer contour of the hand, the one whose centroid has the smallest average distance to the joint points and whose total curve length is the longest.
  3. The method for tracking finger web positions according to claim 1, wherein taking the contour inflection point as the current position of the finger web and iteratively correcting the current position of the finger web using the color image of the hand to obtain the output position of the finger web specifically comprises:
    taking the contour inflection point as the current position of the finger web, extracting from the color image of the hand a local region centered on the current position;
    offsetting the current position to obtain several offset positions and, for each offset position, extracting from the color image of the hand a region centered on that offset position and of the same shape as the local region, as a candidate region;
    computing the degree of structural deviation between each candidate region and the local region;
    if the degree of structural deviation between every candidate region and the local region is greater than a preset threshold, taking the current position as the output position of the finger web;
    if there is a candidate region whose degree of structural deviation from the local region is not greater than the preset threshold, selecting the offset position corresponding to the candidate region with the smallest degree of structural deviation from the local region to update the current position, and updating the local region and the candidate regions.
  4. The method for tracking finger web positions according to claim 3, wherein,
    for each candidate region, the degree of structural deviation between the candidate region and the local region is d(P,Q),
    d(P,Q) = 1 − ((2μPμQ + c1)(2σPQ + c2)) / ((μP² + μQ² + c1)(σP + σQ + c2))
    where P is the set of pixel values of all pixels of the local region, Q is the set of pixel values of all pixels of the candidate region, μP is the mean of all pixel values in set P, μQ is the mean of all pixel values in set Q, σPQ is the covariance of sets P and Q, σP is the variance of set P, σQ is the variance of set Q, and c1 and c2 are preset constants.
  5. The method for tracking finger web positions according to claim 3, wherein the current position is (x, y) and the offset positions are (x+δx, y+δy), where δx∈{-1,0,1}, δy∈{-1,0,1}, and δx and δy are not both 0.
  6. The method for tracking finger web positions according to claim 3, further comprising, before taking the contour inflection point as the current position of the finger web:
    revising the x-coordinate of the contour inflection point to the median of the x-coordinates of the M consecutive contour points on its left and the M consecutive contour points on its right, and revising the y-coordinate of the contour inflection point to the median of the y-coordinates of the same 2M points;
    applying Gaussian blur to the color image.
  7. The method for tracking finger web positions according to claim 1, further comprising, before selecting, from the contour points between adjacent fingertips on the outer contour, the contour point farthest from the straight line connecting the adjacent fingertips as the contour inflection point:
    for each contour point of the outer contour, revising its x-coordinate to the mean of the x-coordinates of the N consecutive contour points on its left and the N consecutive contour points on its right, and revising its y-coordinate to the mean of the y-coordinates of the same 2N points.
  8. A device for tracking finger web positions, characterized by comprising:
    an image acquisition module, configured to acquire a depth image and a color image recording the same view of a user's hand;
    an outer contour extraction module, configured to extract the outer contour of the hand from the depth image;
    a contour inflection point selection module, configured to select, from the contour points between adjacent fingertips on the outer contour, the contour point farthest from the straight line connecting the adjacent fingertips as the contour inflection point;
    a finger web position determination module, configured to take the contour inflection point as the current position of the finger web and iteratively correct the current position of the finger web using the color image to obtain the output position of the finger web.
  9. The device for tracking finger web positions according to claim 8, wherein the outer contour extraction module specifically comprises:
    a joint depth computation unit, configured to compute the depth of each joint point of the hand from the depth image according to a preset hand joint model;
    a reference depth determination unit, configured to take the median of the depths of all the joint points as the reference depth d_ref;
    a contour extraction unit, configured to extract from the depth image the outer contours of the regions whose depth lies within the hand depth range [d_ref−δ, d_ref+δ], where δ is a parameter measuring the thickness of the hand between its back and its palm;
    a contour selection unit, configured to select from these outer contours, as the outer contour of the hand, the one whose centroid has the smallest average distance to the joint points and whose total curve length is the longest.
  10. The device for tracking finger web positions according to claim 8, wherein the finger web position determination module specifically comprises:
    a local region determination unit, configured to take the contour inflection point as the current position of the finger web and extract from the color image of the hand a local region centered on the current position;
    a candidate region determination unit, configured to offset the current position to obtain several offset positions and, for each offset position, extract from the color image of the hand a region centered on that offset position and of the same shape as the local region, as a candidate region;
    a deviation degree computation unit, configured to compute the degree of structural deviation between each candidate region and the local region;
    an output position determination unit, configured to take the current position as the output position of the finger web when the degree of structural deviation between every candidate region and the local region is greater than a preset threshold;
    a current position update unit, configured to, when there is a candidate region whose degree of structural deviation from the local region is not greater than the preset threshold, select the offset position corresponding to the candidate region with the smallest degree of structural deviation from the local region to update the current position, and then update the local region and the candidate regions.
  11. The device for tracking finger web positions according to claim 10, wherein the finger web position determination module further comprises:
    a contour inflection point revision unit, configured to revise the x-coordinate of the contour inflection point to the median of the x-coordinates of the M consecutive contour points on its left and the M consecutive contour points on its right, and to revise the y-coordinate of the contour inflection point to the median of the y-coordinates of the same 2M points;
    a Gaussian blur processing unit, configured to apply Gaussian blur to the color image.
  12. The device for tracking finger web positions according to claim 10, wherein the device for tracking finger web positions further comprises:
    a contour point revision module, configured to, for each contour point of the outer contour, revise its x-coordinate to the mean of the x-coordinates of the N consecutive contour points on its left and the N consecutive contour points on its right, and revise its y-coordinate to the mean of the y-coordinates of the same 2N points.
PCT/CN2016/113492 2016-08-16 2016-12-30 Method and device for tracking finger web positions WO2018032700A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610675192.5A CN106327486B (zh) 2016-08-16 2016-08-16 Method and device for tracking finger web positions
CN201610675192.5 2016-08-16

Publications (1)

Publication Number Publication Date
WO2018032700A1 true WO2018032700A1 (zh) 2018-02-22

Family

ID=57739990


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117393485A (zh) * 2023-11-01 2024-01-12 东莞触点智能装备有限公司 基于深度学习的芯片高精固晶机视觉定位系统
CN117495967A (zh) * 2023-12-29 2024-02-02 四川高速公路建设开发集团有限公司 一种隧道掌子面位移场监测方法


Also Published As

Publication number Publication date
CN106327486B (zh) 2018-12-28
CN106327486A (zh) 2017-01-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16913452

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/04/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16913452

Country of ref document: EP

Kind code of ref document: A1