WO2022002133A1 - Gesture tracking method and device - Google Patents

Gesture tracking method and device

Info

Publication number
WO2022002133A1
Authority
WO
WIPO (PCT)
Prior art keywords
wearable
information
tracking
feature
optical
Prior art date
Application number
PCT/CN2021/103545
Other languages
English (en)
French (fr)
Inventor
吴涛
Original Assignee
青岛小鸟看看科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛小鸟看看科技有限公司 filed Critical 青岛小鸟看看科技有限公司
Publication of WO2022002133A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • the present invention relates to the technical field of gesture recognition, and in particular, to a gesture tracking method and device.
  • computer-vision-based tracking and recognition technology recognizes and restores the hand's posture, position and finger joint poses.
  • generally, a tracking camera is set up in the environment, or mounted on the VR head-mounted display device, and image recognition technology is used to track, in real time, the position, rotation information and finger joint pose information of the hand under the camera.
  • in actual use, some complex gestures of the user, such as crossed or overlapping hands, produce little recognizable information in the image captured by the camera, resulting in a large error between the gesture recognition result and the user's actual gesture; or
  • when the user's scene background is relatively complex, segmentation of the user's hand in the image is prone to false detections, making the user's gesture recognition accuracy unstable.
  • using inertial sensors for hand motion capture has the advantages of simple operation, high portability, immunity to external light, insensitivity to complex hand gestures, high capture accuracy, and a high data sampling rate.
  • however, the fine movements of the hand joints cannot be restored, and the positional movement of the hand cannot be located; since the hand posture data is obtained by integrating the inertial sensor output, it accumulates drift over time; moreover, it is easily affected by surrounding ferromagnetic objects, which accelerates the accumulation of drift errors.
  • the purpose of the present invention is to provide a gesture tracking method and device to solve the problems of unstable recognition accuracy and easy influence by environmental factors in the existing gesture recognition technology.
  • One aspect of the present invention is to provide a gesture tracking method, comprising:
  • wearing a wearable wristband on the wrist, where the surface of the wearable wristband is provided with optical pattern marker points and an inertial navigation sensor is built into the wearable wristband;
  • capturing the motion state of the optical pattern marker points in space in real time to obtain an optical tracking image, and obtaining the inertial navigation data of the wearable wristband through the inertial navigation sensor;
  • acquiring the 4DoF information of the wearable wristband through computer vision technology according to the optical tracking image and the inertial navigation data, the 4DoF information representing the positional movement of the hand along the x-, y- and z-axes of three-dimensional space and the rotation of the hand about the z-axis;
  • the other DoF information of the hand is calculated by using the 4DoF information, so as to obtain the position and posture information of the hand.
  • Another aspect of the present invention is to provide a gesture tracking device, comprising:
  • a wearable wristband, the surface of which is provided with optical pattern marker points, with an inertial navigation sensor built in; the inertial navigation sensor is used to obtain the inertial navigation data of the wearable wristband;
  • a tracking camera, used to capture the motion state of the optical pattern marker points in space in real time and obtain an optical tracking image;
  • a 4DoF information acquisition module, which acquires the 4DoF information of the wearable wristband through computer vision technology according to the optical tracking image and the inertial navigation data, the 4DoF information representing the positional movement of the hand along the x-, y- and z-axes of three-dimensional space and the rotation of the hand about the z-axis;
  • a hand information acquisition module, which calculates the other DoF information of the hand from the 4DoF information, so as to obtain the position and posture information of the hand.
  • the present invention has the following advantages and beneficial effects:
  • by providing optical pattern marker points on the wearable wristband and combining them with the built-in inertial navigation sensor, computer vision technology is used to track the position and flip angle of the wristband in space as the 4DoF information of the hand. The other DoFs of the hand are then calculated and tracked from this 4DoF information, which greatly improves the accuracy and stability of hand DoF tracking, improves the precision of gesture tracking and recognition, and enables precise positioning of the hand's position and posture.
  • FIG. 1 is a schematic flowchart of a gesture tracking method according to the present invention
  • FIG. 2 is a schematic diagram of the 26DoF positions of the hand in the present invention.
  • FIG. 1 is a schematic flowchart of the gesture tracking method of the present invention. As shown in FIG. 1 , the gesture tracking method of the present invention includes:
  • the wearable wristband is worn on the wrist; its surface is provided with optical pattern marker points so that the movement state of the wristband can be tracked through them, and an inertial navigation sensor is built into the wristband so that inertial navigation data can be obtained through it, facilitating the acquisition of the spatial positions of the optical pattern marker points on the wristband, where the inertial navigation sensor can be an inertial measurement unit (IMU);
  • the optical tracking image can be captured by a tracking camera set on the head-mounted display device, and the positions of the optical pattern marker points can be captured in real time through the optical tracking image; of course, the tracking camera can also be set on another tracking device instead of the head-mounted display device;
  • the 4DoF (Degrees of Freedom) information of the wearable wristband is obtained through computer vision technology; it includes the position of the wristband in space and its flip angle (the angle of rotation about the z-axis), and represents the positional movement of the hand along the x-, y- and z-axes of three-dimensional space and the rotation of the hand about the z-axis;
  • the other DoF information of the hand is calculated by using the 4DoF information, so as to obtain the position and posture information of the hand.
  • hand position and pose can be identified by 26 degrees of freedom.
  • FIG. 2 shows a schematic diagram of the 26DoF positions of the hand in the present invention; the degree-of-freedom information of each hand position is obtained according to the hand features. By tracking the 26DoF information of the hand, the position and posture of the hand can be accurately identified.
  • calculating the other 22DoF information from the hand's 4DoF information is conventional technology and is not described in detail in the present invention.
  • the following takes the application of the gesture tracking method of the present invention in the field of VR/AR/MR as an example for detailed description.
  • multiple tracking cameras are built into the head-mounted display device.
  • to give the wearable wristband a camera tracking range of 180°*180° (H*V), preferably 4 tracking cameras are built into the VR/AR/MR head-mounted display device, and wide-angle tracking is achieved by stitching the fields of view of the 4 tracking cameras.
  • the step of obtaining the 4DoF information of the wearable bracelet by computer vision technology includes:
  • the step of detecting the feature points of the optical pattern marking points on the wearable wristband on the optical tracking image includes:
  • a feature matching database is established; it includes each optical pattern marker point on the wearable wristband and its corresponding feature vectors. Specifically, the distribution of the optical pattern marker points on the wristband is preset; feature points are detected by an image feature point detection algorithm, and their feature vectors are extracted by a feature vector calculation algorithm, so as to establish, for each marker point on the wristband, the corresponding feature vector on the optical tracking image. For the various motion postures of the wristband relative to the tracking camera, the feature vector corresponding to each optical pattern marker point can be obtained; since the wristband may sit at many angles relative to the VR head-mounted display, each optical pattern marker point on the wristband has multiple feature vectors. The marker points and the corresponding feature vectors are stored to form the feature matching database, which can be saved in a local file according to a certain data structure;
  • the present invention can use the FAST (Features from Accelerated Segment Test) detection algorithm, the scale-invariant feature transform (SIFT), SURF (Speeded-Up Robust Features) and other feature point extraction methods for feature point detection.
  • preferably, the FAST detection algorithm is used to detect the feature points of the optical tracking image according to the distribution of the optical pattern marker points on the wearable wristband, and a feature vector is extracted for each detected feature point by a region extraction method. Specifically, within the 5*5 neighborhood window of a feature point, a pixel gradient is computed between each pixel gray value in the window and the gray value corresponding to the feature point; all pixels in the window are traversed in turn to compute the gradients of all window pixels, and the gradient values are then normalized to obtain the feature vector of the feature point.
  • the step of matching the feature points of the optical tracking image with the feature points in the feature matching database includes:
  • the feature points on the optical tracking image are spatially sorted, so that, as far as possible, feature points that are spatially adjacent remain adjacent after sorting on the optical tracking image;
  • in spatial order, a sliding-window traversal is performed in groups of a preset number, and the feature points are matched against the corresponding spatial order of the feature points in the feature matching database.
  • feature point matching can use Hamming distance matching, KNN (K-Nearest Neighbor) matching, RANSAC (RANdom SAmple Consensus) matching and other methods, preferably KNN matching.
  • the step of predicting the position coordinates of the wearable bracelet on the next frame of optical tracking images according to the detected feature points and the inertial navigation data includes:
  • according to the three-dimensional position coordinates of the feature points on the wearable wristband and their two-dimensional position coordinates on the optical tracking image, the PnP (perspective-n-point) algorithm is used to calculate the position and rotation information of the wristband relative to the tracking camera, i.e., the 4DoF information of the wristband in the current frame of the optical tracking image is obtained;
  • the position coordinates of the wearable bracelet on the next frame of optical tracking images are predicted according to the position and rotation information and the inertial navigation data.
  • before the step of obtaining, according to the image data, the predicted position on the current frame of the optical tracking image of the feature points corresponding to each wearable wristband in the feature point tracking queue, the method further includes: determining, according to the image data of the optical tracking image, the number of wearable wristbands in the feature point tracking queue; if the number of wearable wristbands is 0, returning to the step of detecting, on the optical tracking image, the feature points of the optical pattern marker points on the wristband; if the number is 1, returning to that detection step while also performing the step of obtaining the predicted position of the feature points corresponding to the wristband on the current frame; if the number is 2, performing the step of obtaining the predicted position of the feature points corresponding to the wristbands on the current frame.
  • after the step of obtaining, according to the image data, the predicted position on the current frame of the optical tracking image of the feature points corresponding to each wearable wristband in the feature point tracking queue, the method further includes:
  • taking the predicted position as the origin center, an area of a set size is used as the pixel window, for example a 5*5 pixel window;
  • within the pixel window, the absolute two-dimensional position of the feature point on the current frame of the optical tracking image is obtained through an NCC (Normalized Cross-Correlation) matching algorithm;
  • the 4DoF information of the wearable wristband is obtained according to the absolute two-dimensional position and the corresponding inertial navigation data. With the above prediction information, the absolute two-dimensional position of each tracked feature point will not fall outside the 5*5 pixel area around the predicted position.
  • the gesture tracking method of the present invention is preferably limited to tracking the user's own left and right hands, without considering other hands appearing in the tracking field of view.
  • preferably, after the step of acquiring the 4DoF information of the wearable wristband through computer vision technology, the method further includes: determining the number of hands appearing in the optical tracking image captured by the tracking camera; if more than 2 hands appear, judging the distance between each hand and the tracking camera according to the 4DoF information, and taking the two hands with the smallest distances as the gesture tracking targets.
  • the present invention also provides a gesture tracking device, comprising:
  • a wearable wristband, the surface of which is provided with optical pattern marker points, with an inertial navigation sensor built in; the inertial navigation sensor is used to obtain the inertial navigation data of the wearable wristband;
  • a tracking camera, used to capture the motion state of the optical pattern marker points in space in real time and obtain an optical tracking image; the tracking camera can be set in the head-mounted display device, or can be an independent camera tracking device;
  • a 4DoF information acquisition module, which acquires the 4DoF information of the wearable wristband through computer vision technology according to the optical tracking image and the inertial navigation data, the 4DoF information representing the positional movement of the hand along the x-, y- and z-axes of three-dimensional space and the rotation of the hand about the z-axis;
  • a hand information acquisition module, which calculates the other DoF information of the hand from the 4DoF information, so as to obtain the position and posture information of the hand.
  • the 4DoF information acquisition module includes:
  • a feature point detection unit, for detecting, on the optical tracking image, the feature points of the optical pattern marker points on the wearable wristband;
  • a tracking queue construction unit, which predicts the position coordinates of the wearable wristband on the next frame of the optical tracking image according to the detected feature points and the inertial navigation data, and builds a feature point tracking queue from the detected feature point information and the prediction information for the next frame;
  • a position prediction unit, which obtains the image data of the optical tracking image in real time and, according to the image data, obtains the predicted position on the current frame of the optical tracking image of the feature points corresponding to each wearable wristband in the feature point tracking queue;
  • an information acquisition unit, which acquires the 4DoF information of the wearable wristband according to the predicted position and the corresponding inertial navigation data.
  • the feature point detection unit detects the feature points on the optical tracking image of the optical pattern marking points on the wearable wristband in the following manner, including:
  • a feature matching database is established; it includes each optical pattern marker point on the wearable wristband and its corresponding feature vectors. Specifically, the distribution of the optical pattern marker points on the wristband is preset; feature points are detected by an image feature point detection algorithm, and their feature vectors are extracted by a feature vector calculation algorithm, so as to establish, for each marker point on the wristband, the corresponding feature vector on the optical tracking image. For the various motion postures of the wristband relative to the tracking camera, the feature vectors corresponding to each optical pattern marker point on the wristband can be obtained.
  • since the wearable wristband may sit at many angles relative to the VR head-mounted display, each optical pattern marker point on the wristband has multiple feature vectors.
  • the optical pattern marker points and the corresponding feature vectors are stored to form the feature matching database, which can be saved in a local file according to a certain data structure;
  • the tracking queue construction unit predicts the position coordinates of the wearable bracelet on the next frame of optical tracking images according to the detected feature points and the inertial navigation data in the following manner, including:
  • the position coordinates of the wearable bracelet on the next frame of optical tracking images are predicted according to the position and rotation information and the inertial navigation data.
  • the specific implementation of the gesture tracking device of the present invention is substantially the same as that of the above gesture tracking method and is not repeated here.
  • the gesture tracking device of the present invention can be applied to the field of VR/AR/MR.
  • the wearable wristband is used as a gesture tracker to track the position and posture information of the hand in the three-dimensional environment space in real time, which helps solve the problem of high-precision restoration of the finger joints.
  • the present invention can improve the tracking accuracy of the 26DoF information of the hand, improve the stability of the gesture tracking, and further improve the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention discloses a gesture tracking method and device. The method includes: wearing a wearable wristband on the wrist, where the surface of the wearable wristband is provided with optical pattern marker points and an inertial navigation sensor is built into the wearable wristband; capturing the motion state of the optical pattern marker points in space in real time to obtain an optical tracking image, and obtaining the inertial navigation data of the wearable wristband through the inertial navigation sensor; acquiring the 4DoF information of the wearable wristband through computer vision technology according to the optical tracking image and the inertial navigation data, the 4DoF information representing the positional movement of the hand along the x-, y- and z-axes of three-dimensional space and the rotation of the hand about the z-axis; and calculating the other DoF information of the hand from the 4DoF information, so as to obtain the position and posture information of the hand. The present invention can improve the accuracy of tracking the hand's position and posture.

Description

Gesture tracking method and device
Technical Field
The present invention relates to the technical field of gesture recognition and, in particular, to a gesture tracking method and device.
Background
At present, in the VR/AR/MR field there are many gesture recognition methods; gesture recognition is usually handled with computer-vision-based tracking and recognition technology or inertial-sensor-based motion capture technology, but both approaches have problems.
Computer-vision-based tracking and recognition technology recognizes and restores the hand's posture, position and finger joint poses, generally by setting up a tracking camera in the environment or on a VR head-mounted display device and using image recognition technology to track, in real time, the position, rotation information and finger joint pose information of the hand under the camera. In actual use, some complex gestures of the user, such as crossed or overlapping hands, produce little recognizable information in the image captured by the camera, resulting in large errors between the gesture recognition result and the user's actual gesture; and when the user's scene background is relatively complex, segmentation of the user's hand in the image is prone to false detections, making the user's gesture recognition accuracy unstable.
Using inertial sensors for hand motion capture is simple to operate, highly portable, immune to external light, unaffected by complex hand gestures, and offers high capture accuracy and a high data sampling rate. However, it cannot restore the fine movements of the hand joints or locate the positional movement of the hand; since the hand posture data is obtained by integrating the inertial sensor output, the posture data accumulates drift over time; and it is easily affected by ferromagnetic objects in the surrounding environment, which accelerates the accumulation of drift errors.
Therefore, whether based on computer vision tracking and recognition or on inertial-sensor motion capture, gesture recognition accuracy is unstable and easily affected by the actual application scenario or the surrounding environment.
Summary of the Invention
In view of the above problems, the purpose of the present invention is to provide a gesture tracking method and device, so as to solve the problems of unstable recognition accuracy and susceptibility to environmental factors in existing gesture recognition technology.
To achieve the above purpose, the present invention adopts the following technical solutions:
One aspect of the present invention is to provide a gesture tracking method, comprising:
wearing a wearable wristband on the wrist, where the surface of the wearable wristband is provided with optical pattern marker points and an inertial navigation sensor is built into the wearable wristband;
capturing the motion state of the optical pattern marker points in space in real time to obtain an optical tracking image, and obtaining the inertial navigation data of the wearable wristband through the inertial navigation sensor;
acquiring the 4DoF information of the wearable wristband through computer vision technology according to the optical tracking image and the inertial navigation data, the 4DoF information representing the positional movement of the hand along the x-, y- and z-axes of three-dimensional space and the rotation of the hand about the z-axis;
calculating the other DoF information of the hand from the 4DoF information, so as to obtain the position and posture information of the hand.
Another aspect of the present invention is to provide a gesture tracking device, comprising:
a wearable wristband, the surface of which is provided with optical pattern marker points, with an inertial navigation sensor built into the wristband; the inertial navigation sensor is used to obtain the inertial navigation data of the wearable wristband;
a tracking camera, used to capture the motion state of the optical pattern marker points in space in real time and obtain an optical tracking image;
a 4DoF information acquisition module, which acquires the 4DoF information of the wearable wristband through computer vision technology according to the optical tracking image and the inertial navigation data, the 4DoF information representing the positional movement of the hand along the x-, y- and z-axes of three-dimensional space and the rotation of the hand about the z-axis;
a hand information acquisition module, which calculates the other DoF information of the hand from the 4DoF information, so as to obtain the position and posture information of the hand.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
By providing optical pattern marker points on the wearable wristband and combining them with the wristband's built-in inertial navigation sensor, the present invention uses computer vision technology to track the position and flip angle of the wristband in space as the 4DoF information of the hand, and calculates and tracks the other DoFs of the hand from this 4DoF information. This greatly improves the accuracy and stability of hand DoF tracking, improves the precision of gesture tracking and recognition, and enables precise positioning of the hand's position and posture.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the gesture tracking method of the present invention;
FIG. 2 is a schematic diagram of the 26DoF positions of the hand in the present invention.
Detailed Description
Embodiments of the present invention will be described below with reference to the accompanying drawings. Those of ordinary skill in the art will recognize that the described embodiments can be modified in various different ways, or in combinations thereof, without departing from the spirit and scope of the present invention. Therefore, the drawings and the description are illustrative in nature and are not intended to limit the scope of protection of the claims. In addition, in this specification the drawings are not drawn to scale, and the same reference numerals denote the same parts.
FIG. 1 is a schematic flowchart of the gesture tracking method of the present invention. As shown in FIG. 1, the gesture tracking method of the present invention includes:
wearing a wearable wristband on the wrist, where the surface of the wearable wristband is provided with optical pattern marker points so that the movement state of the wristband can be tracked through the marker points, and an inertial navigation sensor is built into the wristband so that inertial navigation data can be obtained through it, facilitating the acquisition of the spatial positions of the optical pattern marker points on the wristband; the inertial navigation sensor may be an inertial measurement unit (IMU);
capturing the motion state of the optical pattern marker points in space in real time to obtain an optical tracking image, and obtaining the inertial navigation data of the wearable wristband through the inertial navigation sensor, where the optical tracking image can be captured by a tracking camera set on the head-mounted display device, and the positions of the optical pattern marker points can be captured in real time through the optical tracking image; of course, the tracking camera can also be set on another tracking device instead of the head-mounted display device;
acquiring the 4DoF (Degrees of Freedom) information of the wearable wristband through computer vision technology according to the optical tracking image and the inertial navigation data, where the 4DoF information of the wristband includes its position in space and its flip angle (the angle of rotation about the z-axis), and represents the positional movement of the hand along the x-, y- and z-axes of three-dimensional space and the rotation of the hand about the z-axis;
calculating the other DoF information of the hand from the 4DoF information, so as to obtain the position and posture information of the hand.
For example, the position and posture of the hand can be identified by 26 degrees of freedom. Referring to FIG. 2, which shows a schematic diagram of the 26DoF positions of the hand in the present invention, the degree-of-freedom information of each hand position is obtained according to the hand features; by tracking the 26DoF information of the hand, the position and posture of the hand can be accurately identified.
It should be noted that the method of calculating the other 22DoF information from the hand's 4DoF information is conventional technology and is not described in detail in the present invention.
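For concreteness, the following is a minimal sketch of one way such a 26DoF hand state could be organized in code. The split into 4 measured DoF and 22 estimated DoF follows the description above; the field names and the flat 22-element joint layout are assumptions for illustration, not the patent's definition:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class HandState:
    """26DoF hand state: 4 DoF measured from the wristband, 22 DoF estimated."""
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))  # x, y, z (3 DoF)
    yaw: float = 0.0                                                   # rotation about z (1 DoF)
    # remaining 22 DoF of the wrist and finger joints, estimated from the 4DoF above
    joint_angles: np.ndarray = field(default_factory=lambda: np.zeros(22))

    def dof_count(self) -> int:
        return 3 + 1 + len(self.joint_angles)  # = 26
```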
The application of the gesture tracking method of the present invention in the VR/AR/MR field is described in detail below as an example. In application, multiple tracking cameras are built into the head-mounted display device; in order to give the wearable wristband a camera tracking range of 180°*180° (H*V), preferably 4 tracking cameras are built into the VR/AR/MR head-mounted display device, and wide-angle tracking is achieved by stitching the fields of view of the 4 tracking cameras. The following takes the tracking of the wearable wristband by one of the tracking cameras as an example; the tracking by the other tracking cameras is substantially the same and is not repeated here.
In one embodiment, the step of acquiring the 4DoF information of the wearable wristband through computer vision technology according to the motion state and the inertial navigation data includes:
detecting, on the optical tracking image, the feature points of the optical pattern marker points on the wearable wristband;
predicting the position coordinates of the wearable wristband on the next frame of the optical tracking image according to the detected feature points and the inertial navigation data, and building a feature point tracking queue from the detected feature point information and the prediction information for the next frame;
obtaining the image data of the optical tracking image in real time and, according to the image data, obtaining the predicted position on the current frame of the optical tracking image of the feature points corresponding to each wearable wristband in the feature point tracking queue;
acquiring the 4DoF information of the wearable wristband according to the predicted position and the corresponding inertial navigation data.
Further, the step of detecting, on the optical tracking image, the feature points of the optical pattern marker points on the wearable wristband includes:
establishing a feature matching database, the feature matching database including each optical pattern marker point on the wearable wristband and its corresponding feature vectors. Specifically, the distribution of the optical pattern marker points on the wristband is preset; feature points are detected by an image feature point detection algorithm, and the feature vectors of the feature points are extracted by a feature vector calculation algorithm, so as to establish, for each marker point on the wristband, the corresponding feature vector on the optical tracking image. For the various motion postures of the wristband relative to the tracking camera, the feature vector corresponding to each optical pattern marker point can be obtained; since the wristband may sit at many angles relative to the VR head-mounted display, each optical pattern marker point on the wristband has multiple feature vectors. The marker points and the corresponding feature vectors are stored to form the feature matching database, which can be saved in a local file according to a certain data structure;
obtaining the optical tracking image in real time, detecting the feature points of the optical tracking image, and extracting the feature vectors corresponding to the feature points, where the methods for detecting feature points and extracting feature vectors are the same as those used when building the feature matching database;
matching the feature points of the optical tracking image against the feature points in the feature matching database to obtain a preset number of feature points with the highest matching degree and spatially continuous corresponding positions.
It should be noted that the present invention can use the FAST (Features from Accelerated Segment Test) detection algorithm, the scale-invariant feature transform (SIFT), SURF (Speeded-Up Robust Features) and other feature point extraction methods for feature point detection. Preferably, the FAST detection algorithm is used to detect the feature points of the optical tracking image according to the distribution of the optical pattern marker points on the wearable wristband, and a feature vector is extracted for each detected feature point by a region extraction method. Specifically, within the 5*5 neighborhood window of a feature point, a pixel gradient is computed between each pixel gray value in the window and the gray value corresponding to the feature point; all pixels in the window are traversed in turn to compute the gradients of all window pixels, and the gradient values are then normalized to obtain the feature vector of the feature point.
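As a concrete illustration of this step, the sketch below detects FAST corners with OpenCV and builds the 5*5 neighborhood-gradient descriptor just described. The FAST threshold is an assumed value; a real implementation would additionally filter candidates using the known distribution of the marker points:

```python
import cv2
import numpy as np

def detect_and_describe(gray):
    """Detect FAST corners and build the 5*5 neighborhood-gradient descriptor."""
    fast = cv2.FastFeatureDetector_create(threshold=40)  # threshold is illustrative
    keypoints = fast.detect(gray, None)

    kept, descriptors = [], []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        # skip points whose 5*5 neighborhood window falls outside the image
        if x < 2 or y < 2 or x >= gray.shape[1] - 2 or y >= gray.shape[0] - 2:
            continue
        window = gray[y - 2:y + 3, x - 2:x + 3].astype(np.float32)
        grad = (window - float(gray[y, x])).ravel()  # pixel gradient vs. the feature point
        norm = np.linalg.norm(grad)
        kept.append(kp)
        descriptors.append(grad / norm if norm > 0 else grad)  # normalized feature vector
    return kept, np.asarray(descriptors, dtype=np.float32)
```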
In one embodiment, the step of matching the feature points of the optical tracking image against the feature points in the feature matching database includes:
calibrating and aligning the coordinate system of the inertial navigation sensor built into the wearable wristband with the coordinate system of the tracking camera; by aligning the timestamps of the inertial navigation sensor and the tracking camera, the rotation angle of each optical pattern marker point on the current wristband relative to the tracking camera can be obtained from the attitude data of the inertial navigation sensor;
obtaining, through the inertial navigation data, the rotation angle of each optical pattern marker point on the wearable wristband relative to the tracking camera, and thereby the physical position relationship of two adjacent feature points on the optical tracking image; from the physical position relationship of two adjacent optical pattern marker points on the wristband, the rotation angle and the imaging principle of the camera, it can be confirmed that if two marker points are physically adjacent in space, their images must also be adjacent; the feature points on the wristband can therefore be matched with the feature points on the optical tracking image while their relative spatial positions remain unchanged, which reduces mismatching;
spatially sorting the feature points on the optical tracking image according to the physical position relationship, so that, as far as possible, feature points that are spatially adjacent remain adjacent after sorting on the optical tracking image;
performing, in spatial order, a sliding-window traversal in groups of a preset number, and matching the feature points against the corresponding spatial order of the feature points in the feature matching database. Feature point matching can use Hamming distance matching, KNN (K-Nearest Neighbor) matching, RANSAC (RANdom SAmple Consensus) matching and other methods, preferably KNN matching. When the sliding-window traversal is performed in groups of four, the four feature points with the highest matching degree and spatially continuous corresponding positions can be obtained.
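Under simplifying assumptions, the sliding-window KNN matching could look like the sketch below: each frame descriptor is KNN-matched against the database, ambiguous matches are discarded with a ratio test (the ratio value is an assumed parameter; the text only names KNN as the preferred matcher), and a window of four spatially sorted points is accepted when its database matches are also consecutive, i.e. spatially continuous:

```python
import cv2
import numpy as np

def match_in_spatial_order(frame_desc, db_desc, group=4, ratio=0.8):
    """Sliding-window KNN matching over spatially sorted float32 descriptors.
    `db_desc` is assumed to be stored in the database's spatial order."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(frame_desc, db_desc, k=2)

    db_idx = np.full(len(frame_desc), -1, dtype=int)  # -1 marks "no reliable match"
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            db_idx[pair[0].queryIdx] = pair[0].trainIdx

    for start in range(len(db_idx) - group + 1):
        window = db_idx[start:start + group]
        if -1 not in window and np.all(np.diff(window) == 1):  # consecutive in the database
            return start, window  # spatially continuous group of four matches
    return None
```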
In one embodiment, the step of predicting the position coordinates of the wearable wristband on the next frame of the optical tracking image according to the detected feature points and the inertial navigation data includes:
obtaining the two-dimensional position coordinates of the detected feature points on the optical tracking image;
calculating the position and rotation information of the wearable wristband relative to the tracking camera by the PnP (perspective-n-point) algorithm according to the three-dimensional position coordinates of the feature points on the wearable wristband and the two-dimensional position coordinates, i.e., obtaining the 4DoF information of the wearable wristband in the current frame of the optical tracking image;
predicting the position coordinates of the wearable wristband on the next frame of the optical tracking image according to the position and rotation information and the inertial navigation data.
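A minimal sketch of this step with OpenCV's solvePnP is given below. The camera intrinsics are assumed to come from prior calibration, at least four matched marker points are assumed, and the constant-velocity prediction stands in for the IMU-based prediction, whose exact fusion scheme the text does not spell out:

```python
import cv2
import numpy as np

def wristband_pose(object_points, image_points, camera_matrix, dist_coeffs):
    """Recover the wristband pose relative to the tracking camera from matched
    3D marker coordinates (wristband frame) and 2D image coordinates."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float32),
        np.asarray(image_points, dtype=np.float32),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation of the wristband w.r.t. the camera
    return rotation, tvec

def predict_next_position(tvec, velocity, dt):
    """Constant-velocity prediction of the wristband position for the next
    frame; in practice the velocity term would be derived from the IMU data."""
    return tvec + velocity * dt
```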
In one embodiment, before the step of obtaining, according to the image data, the predicted position on the current frame of the optical tracking image of the feature points corresponding to each wearable wristband in the feature point tracking queue, the method further includes: determining, according to the image data of the optical tracking image, the number of wearable wristbands in the feature point tracking queue; if the number of wearable wristbands is 0, returning to the step of detecting, on the optical tracking image, the feature points of the optical pattern marker points on the wearable wristband; if the number is 1, returning to the step of detecting, on the optical tracking image, the feature points of the optical pattern marker points on the wearable wristband, while also performing the step of obtaining the predicted position of the feature points corresponding to the wristband on the current frame of the optical tracking image; if the number is 2, performing the step of obtaining the predicted position of the feature points corresponding to the wristbands on the current frame of the optical tracking image.
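Expressed as code, this per-frame branching might look like the following sketch; `detect_wristbands` and `track_predicted_positions` are hypothetical helpers standing in for the detection and tracking steps already described:

```python
def per_frame_dispatch(queue, frame):
    """Branch on the number of wristbands already in the tracking queue
    (at most two hands are tracked)."""
    n = len(queue.wristbands)
    if n == 0:
        detect_wristbands(frame, queue)          # nothing tracked yet: detect only
    elif n == 1:
        detect_wristbands(frame, queue)          # keep looking for the second wristband
        track_predicted_positions(frame, queue)  # while tracking the first one
    else:  # n == 2: both wristbands tracked, no further detection needed
        track_predicted_positions(frame, queue)
```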
Further, after the step of obtaining, according to the image data, the predicted position on the current frame of the optical tracking image of the feature points corresponding to each wearable wristband in the feature point tracking queue, the method further includes:
taking the predicted position as the origin center and an area of a set size as the pixel window, for example a 5*5 pixel window;
obtaining, within the pixel window, the absolute two-dimensional position of the feature point on the current frame of the optical tracking image through an NCC (Normalized Cross-Correlation) matching algorithm;
acquiring the 4DoF information of the wearable wristband according to the absolute two-dimensional position and the corresponding inertial navigation data. With the above prediction information, the absolute two-dimensional position of each tracked feature point will not fall outside the 5*5 pixel area around the predicted position.
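The sketch below illustrates this NCC refinement with OpenCV's matchTemplate, whose TM_CCORR_NORMED mode computes a normalized cross-correlation. The per-feature template patch and the omitted boundary handling are assumptions of this illustration:

```python
import cv2
import numpy as np

def refine_with_ncc(frame, template, predicted_xy, half_window=2):
    """Search a 5*5 pixel window (half_window=2) around the predicted position
    for the best NCC match with the feature's template patch; boundary checks
    are omitted for brevity."""
    px, py = predicted_xy
    th, tw = template.shape
    # search region: the template footprint grown by the pixel window
    x0, y0 = px - tw // 2 - half_window, py - th // 2 - half_window
    region = frame[y0:y0 + th + 2 * half_window, x0:x0 + tw + 2 * half_window]
    scores = cv2.matchTemplate(region.astype(np.float32),
                               template.astype(np.float32),
                               cv2.TM_CCORR_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)  # (x, y) of the highest NCC score
    # convert back to the absolute image coordinates of the feature point
    return (x0 + best[0] + tw // 2, y0 + best[1] + th // 2)
```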
The gesture tracking method of the present invention is preferably limited to tracking the user's own left and right hands, without considering other hands appearing in the tracking field of view. Preferably, after the step of acquiring the 4DoF information of the wearable wristband through computer vision technology, the method further includes:
determining the number of hands appearing in the optical tracking image captured by the tracking camera; if the number of appearing hands is greater than 2, judging the distance between each hand and the tracking camera according to the 4DoF information, and taking the two hands with the smallest distances as the gesture tracking targets.
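As a small illustration of this target selection: with each candidate hand's translation expressed in the camera frame (taken from the wristband 4DoF information), keeping the two nearest hands reduces to a sort; the dictionary layout is an assumption of this sketch:

```python
import numpy as np

def select_tracking_targets(hand_positions, max_hands=2):
    """Keep the `max_hands` hands closest to the tracking camera.
    `hand_positions` maps a hand id to its 3D translation in the camera frame."""
    nearest = sorted(hand_positions.items(), key=lambda kv: np.linalg.norm(kv[1]))
    return dict(nearest[:max_hands])
```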
The present invention also provides a gesture tracking device, comprising:
a wearable wristband, the surface of which is provided with optical pattern marker points, with an inertial navigation sensor built into the wristband; the inertial navigation sensor is used to obtain the inertial navigation data of the wearable wristband;
a tracking camera, used to capture the motion state of the optical pattern marker points in space in real time and obtain an optical tracking image; the tracking camera can be set in the head-mounted display device, or can be an independent camera tracking device;
a 4DoF information acquisition module, which acquires the 4DoF information of the wearable wristband through computer vision technology according to the optical tracking image and the inertial navigation data, the 4DoF information representing the positional movement of the hand along the x-, y- and z-axes of three-dimensional space and the rotation of the hand about the z-axis;
a hand information acquisition module, which calculates the other DoF information of the hand from the 4DoF information, so as to obtain the position and posture information of the hand.
In one embodiment, the 4DoF information acquisition module includes:
a feature point detection unit, for detecting, on the optical tracking image, the feature points of the optical pattern marker points on the wearable wristband;
a tracking queue construction unit, which predicts the position coordinates of the wearable wristband on the next frame of the optical tracking image according to the detected feature points and the inertial navigation data, and builds a feature point tracking queue from the detected feature point information and the prediction information for the next frame;
a position prediction unit, which obtains the image data of the optical tracking image in real time and, according to the image data, obtains the predicted position on the current frame of the optical tracking image of the feature points corresponding to each wearable wristband in the feature point tracking queue;
an information acquisition unit, which acquires the 4DoF information of the wearable wristband according to the predicted position and the corresponding inertial navigation data.
Further, the feature point detection unit detects, on the optical tracking image, the feature points of the optical pattern marker points on the wearable wristband in the following manner:
establishing a feature matching database, the feature matching database including each optical pattern marker point on the wearable wristband and its corresponding feature vectors. Specifically, the distribution of the optical pattern marker points on the wristband is preset; feature points are detected by an image feature point detection algorithm, and the feature vectors of the feature points are extracted by a feature vector calculation algorithm, so as to establish, for each marker point on the wristband, the corresponding feature vector on the optical tracking image. For the various motion postures of the wristband relative to the tracking camera, the feature vectors corresponding to each optical pattern marker point on the wristband can be obtained; since the wristband may sit at many angles relative to the VR head-mounted display, each optical pattern marker point on the wristband has multiple feature vectors. The marker points and the corresponding feature vectors are stored to form the feature matching database, which can be saved in a local file according to a certain data structure;
obtaining the optical tracking image in real time, detecting the feature points of the optical tracking image, and extracting the feature vectors corresponding to the feature points, where the methods for detecting feature points and extracting feature vectors are the same as those used when building the feature matching database;
matching the feature points of the optical tracking image against the feature points in the feature matching database to obtain a preset number of feature points with the highest matching degree and spatially continuous corresponding positions.
In one embodiment, the tracking queue construction unit predicts the position coordinates of the wearable wristband on the next frame of the optical tracking image according to the detected feature points and the inertial navigation data in the following manner:
obtaining the two-dimensional position coordinates of the detected feature points on the optical tracking image;
calculating the position and rotation information of the wearable wristband relative to the tracking camera by the PnP algorithm according to the three-dimensional position coordinates of the feature points on the wearable wristband and the two-dimensional position coordinates;
predicting the position coordinates of the wearable wristband on the next frame of the optical tracking image according to the position and rotation information and the inertial navigation data.
It should be noted that the specific implementation of the gesture tracking device of the present invention is substantially the same as that of the above gesture tracking method and is not repeated here.
The gesture tracking device of the present invention can be applied in the VR/AR/MR field; the wearable wristband serves as a gesture tracker that tracks the position and posture information of the hand in the three-dimensional environment in real time, helping to solve the problem of high-precision restoration of the finger joints. Moreover, the present invention can improve the tracking accuracy of the hand's 26DoF information and the stability of gesture tracking, thereby improving the user experience.
The above are only specific embodiments of the present invention; with the above teaching of the present invention, those skilled in the art can make further improvements or modifications on the basis of the above embodiments. Those skilled in the art should understand that the above detailed description only serves the purpose of better explaining the present invention, and the scope of protection of the present invention is subject to the scope of the claims.

Claims (10)

  1. A gesture tracking method, characterized by comprising:
    wearing a wearable wristband on the wrist, the surface of the wearable wristband being provided with optical pattern marker points, and an inertial navigation sensor being built into the wearable wristband;
    capturing the motion state of the optical pattern marker points in space in real time to obtain an optical tracking image, and obtaining inertial navigation data of the wearable wristband through the inertial navigation sensor;
    acquiring 4DoF information of the wearable wristband through computer vision technology according to the optical tracking image and the inertial navigation data, the 4DoF information representing the positional movement of the hand along the x-, y- and z-axes of three-dimensional space and the rotation of the hand about the z-axis;
    calculating other DoF information of the hand from the 4DoF information, so as to obtain the position and posture information of the hand.
  2. The gesture tracking method according to claim 1, characterized in that the step of acquiring the 4DoF information of the wearable wristband through computer vision technology according to the motion state and the inertial navigation data comprises:
    detecting, on the optical tracking image, feature points of the optical pattern marker points on the wearable wristband;
    predicting position coordinates of the wearable wristband on the next frame of the optical tracking image according to the detected feature points and the inertial navigation data, and building a feature point tracking queue from the detected feature point information and the prediction information for the next frame;
    obtaining image data of the optical tracking image in real time and, according to the image data, obtaining a predicted position, on the current frame of the optical tracking image, of the feature points corresponding to each wearable wristband in the feature point tracking queue;
    acquiring the 4DoF information of the wearable wristband according to the predicted position and the corresponding inertial navigation data.
  3. The gesture tracking method according to claim 2, characterized in that the step of detecting, on the optical tracking image, the feature points of the optical pattern marker points on the wearable wristband comprises:
    establishing a feature matching database, the feature matching database including each optical pattern marker point on the wearable wristband and its corresponding feature vectors;
    obtaining the optical tracking image in real time, detecting feature points of the optical tracking image, and extracting feature vectors corresponding to the feature points;
    matching the feature points of the optical tracking image against the feature points in the feature matching database to obtain a preset number of feature points with the highest matching degree and spatially continuous corresponding positions.
  4. The gesture tracking method according to claim 3, characterized in that the feature points of the optical tracking image are detected by the FAST detection algorithm according to the distribution of the optical pattern marker points on the wearable wristband, and a feature vector is extracted for each detected feature point by a region extraction method.
  5. The gesture tracking method according to claim 3, characterized in that the step of matching the feature points of the optical tracking image against the feature points in the feature matching database comprises:
    calibrating and aligning the coordinate system of the inertial navigation sensor built into the wearable wristband with the coordinate system of the tracking camera;
    obtaining, through the inertial navigation data, the rotation angle of each optical pattern marker point on the wearable wristband relative to the tracking camera, so as to obtain the physical position relationship of two adjacent feature points on the optical tracking image;
    spatially sorting the feature points on the optical tracking image according to the physical position relationship;
    performing, in spatial order, a sliding-window traversal in groups of a preset number, and matching the feature points against the corresponding spatial order of the feature points in the feature matching database.
  6. The gesture tracking method according to claim 2, characterized in that the step of predicting the position coordinates of the wearable wristband on the next frame of the optical tracking image according to the detected feature points and the inertial navigation data comprises:
    obtaining two-dimensional position coordinates of the detected feature points on the optical tracking image;
    calculating the position and rotation information of the wearable wristband relative to the tracking camera by the PnP algorithm according to the three-dimensional position coordinates of the feature points on the wearable wristband and the two-dimensional position coordinates;
    predicting the position coordinates of the wearable wristband on the next frame of the optical tracking image according to the position and rotation information and the inertial navigation data.
  7. The gesture tracking method according to claim 2, characterized in that, before the step of obtaining, according to the image data, the predicted position on the current frame of the optical tracking image of the feature points corresponding to each wearable wristband in the feature point tracking queue, the method further comprises:
    determining, according to the image data of the optical tracking image, the number of wearable wristbands in the feature point tracking queue;
    if the number of wearable wristbands is 0, returning to the step of detecting, on the optical tracking image, the feature points of the optical pattern marker points on the wearable wristband;
    if the number of wearable wristbands is 1, returning to the step of detecting, on the optical tracking image, the feature points of the optical pattern marker points on the wearable wristband, while performing the step of obtaining the predicted position of the feature points corresponding to the wearable wristband on the current frame of the optical tracking image;
    if the number of wearable wristbands is 2, performing the step of obtaining the predicted position of the feature points corresponding to the wearable wristbands on the current frame of the optical tracking image.
  8. The gesture tracking method according to claim 2, characterized in that, after the step of obtaining, according to the image data, the predicted position on the current frame of the optical tracking image of the feature points corresponding to each wearable wristband in the feature point tracking queue, the method further comprises:
    taking the predicted position as the origin center and an area of a set size as a pixel window;
    obtaining, within the pixel window, the absolute two-dimensional position of the feature point on the current frame of the optical tracking image through an NCC matching algorithm;
    acquiring the 4DoF information of the wearable wristband according to the absolute two-dimensional position and the corresponding inertial navigation data.
  9. The gesture tracking method according to claim 1, characterized in that, after the step of acquiring the 4DoF information of the wearable wristband through computer vision technology, the method further comprises:
    determining the number of hands appearing in the optical tracking image captured by the tracking camera; if the number of appearing hands is greater than 2, judging the distance between each hand and the tracking camera according to the 4DoF information, and taking the two hands with the smallest distances as the gesture tracking targets.
  10. A gesture tracking device, characterized by comprising:
    a wearable wristband, the surface of the wearable wristband being provided with optical pattern marker points, an inertial navigation sensor being built into the wearable wristband, the inertial navigation sensor being used to obtain inertial navigation data of the wearable wristband;
    a tracking camera, used to capture the motion state of the optical pattern marker points in space in real time and obtain an optical tracking image;
    a 4DoF information acquisition module, which acquires 4DoF information of the wearable wristband through computer vision technology according to the optical tracking image and the inertial navigation data, the 4DoF information representing the positional movement of the hand along the x-, y- and z-axes of three-dimensional space and the rotation of the hand about the z-axis;
    a hand information acquisition module, which calculates other DoF information of the hand from the 4DoF information, so as to obtain the position and posture information of the hand.
PCT/CN2021/103545 2020-07-01 2021-06-30 Gesture tracking method and device WO2022002133A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010627539.5A CN111930226A (zh) 2020-07-01 2020-07-01 Gesture tracking method and device
CN202010627539.5 2020-07-01

Publications (1)

Publication Number Publication Date
WO2022002133A1 true WO2022002133A1 (zh) 2022-01-06

Family

ID=73317575

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/103545 WO2022002133A1 (zh) 2020-07-01 2021-06-30 Gesture tracking method and device

Country Status (2)

Country Link
CN (1) CN111930226A (zh)
WO (1) WO2022002133A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111930226A (zh) Gesture tracking method and device
CN112527102B (zh) * 2020-11-16 2022-11-08 青岛小鸟看看科技有限公司 Head-mounted all-in-one system and 6DoF tracking method and apparatus thereof
CN113158845A (zh) Gesture recognition method, head-mounted display device, and non-volatile storage medium
CN113368486B (zh) * 2021-05-17 2023-03-14 青岛小鸟看看科技有限公司 Optical tracker for a VR head-mounted device, and exercise and fitness system
CN113538514B (zh) * 2021-07-14 2023-08-08 厦门大学 Ankle joint motion tracking method, system and storage medium
CN114363513A (zh) Head-mounted display device and camera tracking method, apparatus, system and storage medium thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109313497A (zh) * 2016-06-09 2019-02-05 Modular extension of an inertial controller for six-DoF mixed reality input
US20190087021A1 (en) * 2016-06-09 2019-03-21 Microsoft Technology Licensing, Llc Passive optical and inertial tracking in slim form-factor
WO2018151449A1 (en) * 2017-02-17 2018-08-23 Samsung Electronics Co., Ltd. Electronic device and methods for determining orientation of the device
CN110275603A (zh) * 2018-03-13 2019-09-24 脸谱科技有限责任公司 Distributed artificial reality system, bracelet device, and head-mounted display
CN111930226A (zh) * 2020-07-01 2020-11-13 青岛小鸟看看科技有限公司 Gesture tracking method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114569250A (zh) * 2022-02-21 2022-06-03 Master-end control system of an interventional robot operated by gestures
CN114569250B (zh) * 2022-02-21 2023-11-17 北京唯迈医疗设备有限公司 Master-end control system of an interventional robot operated by gestures
WO2023207345A1 (zh) * 2022-04-29 2023-11-02 惠州Tcl移动通信有限公司 Data interaction method and apparatus, computer device, and computer-readable storage medium
CN116784837A (zh) * 2023-08-07 2023-09-22 北京工业大学 Method and apparatus for assessing upper-limb movement disorders

Also Published As

Publication number Publication date
CN111930226A (zh) 2020-11-13

Similar Documents

Publication Publication Date Title
WO2022002133A1 (zh) Gesture tracking method and device
US7489806B2 (en) Motion detection apparatus
US10353482B2 (en) Systems and methods for tracking motion and gesture of heads and eyes
EP3067861B1 (en) Determination of a coordinate conversion parameter
Han A low-cost visual motion data glove as an input device to interpret human hand gestures
US10755422B2 (en) Tracking system and method thereof
WO2016199605A1 (ja) Image processing apparatus and method, and program
CN111353355B (zh) Motion tracking system and method
CN109242887A (zh) Real-time human upper-limb motion capture method based on multiple cameras and an IMU
Fang et al. Multi-sensor based real-time 6-DoF pose tracking for wearable augmented reality
JP2008309595A (ja) Object recognition device and program used therefor
KR20190036864A (ko) Virtual reality telescope for sightseeing, method for driving sightseeing virtual reality using the same, and application recorded on a medium
JP5863034B2 (ja) Information terminal device
Li et al. A hybrid pose tracking approach for handheld augmented reality
KR102456872B1 (ko) Hand motion tracking system and method using tightly coupled fusion of an image sensor and an inertial sensor
JP6810442B2 (ja) Camera assembly, finger shape detection system using the camera assembly, finger shape detection method using the camera assembly, program for implementing the detection method, and storage medium for the program
CN111489376B (zh) Method and apparatus for tracking an interactive device, terminal device, and storage medium
Ogata et al. A robust position and posture measurement system using visual markers and an inertia measurement unit
JP2832333B2 (ja) Object shape and posture detection device
Ruppel et al. Low-cost multi-view pose tracking using active markers
Kempfle et al. Quaterni-On: Calibration-free Matching of Wearable IMU Data to Joint Estimates of Ambient Cameras
EP4292777A1 (en) Assistance system, image processing device, assistance method and program
JP4027294B2 (ja) Moving object detection apparatus, moving object detection method, and moving object detection program
Avni et al. Recovery of 3D animal motions using cameras and mirrors
Kam Robust Combined Approach for Human Action Recognition and Medical Diagnostics

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21833002

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.04.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21833002

Country of ref document: EP

Kind code of ref document: A1