CN111241971A - Three-dimensional tracking gesture observation likelihood modeling method - Google Patents

Three-dimensional tracking gesture observation likelihood modeling method

Info

Publication number
CN111241971A
CN111241971A · Application CN202010010969.2A
Authority
CN
China
Prior art keywords
gesture
dimensional
information
classic
observation likelihood
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010010969.2A
Other languages
Chinese (zh)
Inventor
Zhou Zhi (周智)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unicloud Technology Co Ltd
Original Assignee
Unicloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unicloud Technology Co Ltd filed Critical Unicloud Technology Co Ltd
Priority to CN202010010969.2A priority Critical patent/CN111241971A/en
Publication of CN111241971A publication Critical patent/CN111241971A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding

Abstract

The invention provides a three-dimensional tracking gesture observation likelihood modeling method. A gesture state model and the scene information of the gesture are first defined as similarity-measurement likelihood models: the gesture state model adopts a classic three-dimensional gesture modeling method, and the scene information mainly concerns depth-information similarity-measurement extraction for the three-dimensional scene. The gesture observation likelihood model is then built from the foreground information of the classic three-dimensional gesture modeling method and the high-dimensional well depth information matched with the Chamfer distance, and the three-dimensional tracking gesture observation likelihood model is obtained according to the gesture outline and the classic three-dimensional gesture modeling result.

Description

Three-dimensional tracking gesture observation likelihood modeling method
Technical Field
The invention belongs to the field of gesture estimation, and particularly relates to a gesture observation likelihood modeling method based on three-dimensional tracking.
Background
With the development and popularization of artificial intelligence technology, modeling and recognition of hand gestures are increasingly applied to human emotion recognition and intelligent traffic control. An efficient gesture model needs to be established to obtain better understanding and recognition results, and gesture state tracking in particular is the most widely used technique in gesture understanding and recognition.
Disclosure of Invention
In view of the above, the present invention is directed to a three-dimensional tracking gesture observation likelihood modeling method, so as to solve the problems mentioned in the background art.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a three-dimensional tracking gesture observation likelihood modeling method utilizes foreground information of a classic three-dimensional gesture modeling method and high-dimensional well depth information matched with a Chamfer distance to perform gesture observation likelihood modeling, and obtains a three-dimensional tracking gesture observation likelihood model according to a gesture outline and a classic three-dimensional gesture modeling result.
Further, the gesture observation likelihood modeling method comprises preprocessing of high-dimensional well depth image information and gesture similarity measurement of the high-dimensional well depth image information.
Further, the preprocessing of the high-dimensional well depth image information comprises gesture similarity measurement preprocessing, specifically comprises acquiring a gesture data source, and then performing gesture segmentation, gesture enhancement, gesture modeling and edge detection.
Further, a capture window for the data source is created by calling the CreatCaptureWindow() method, the connection between the gesture camera and VS is established by calling the CraetWebDriver() method, the DlgCammmorsource() method is called to set the relevant parameters for capturing the gesture image with the camera, and finally the callback pointer of the CameraCallbackImg() method is pointed to the capture window to complete the transfer of the digital image.
Further, the high-dimensional well depth image information gesture similarity measurement specifically comprises the following. First, two data points are defined:

[Formula images: definitions of the two data points]

The Chamfer distance transformation can then be performed on the binarized discontinuous edges; the transformation is defined as shown in equation (1-2):

[Formula image: equation (1-2), definition of the Chamfer distance transformation]
then constructing the scanning and traversing process of each digital primitive: we first define P and P to represent a high-dimensional well depth pixel set and a three-dimensional gesture modeling pixel set in a binary image, and perform one traversal by using a binary primitive method as shown in equation (1-3), that is, when a point coordinate element of a next edge image does not belong to an edge any more, a result of one scan is recorded as 0, and when a point coordinate element of the next edge image belongs to an edge, a result of one scan is +1 in a previous state, and finally, a minimization process is used, which is to ensure that a Chamfer distance has a certain digital frame gradient.
f1(p)=min{f1(q)+1:q∈B(p)}fp∈<P>(1-3)
Figure BDA0002357147860000024
We then define the coordinate of p as (x, y); its four-neighbourhood contains the four element points (x+1, y), (x-1, y), (x, y+1) and (x, y-1). For the edge points obtained after the first scan, a second scan is performed as shown in equation (1-4); this ensures that the well depth of the second-order gradient can match the model obtained by the classic three-dimensional gesture modeling. To ensure that the accuracies of the two scans can reinforce each other, the second scan is performed in the direction opposite to the first.
f2(p)=min{f1(p),f2(q)+1:q∈A(p)} (1-4)
After the edges are introduced into the Chamfer distance transformation, a gesture similarity measurement between the edges of the classic three-dimensional gesture model and the gesture outline can be defined, as shown in equation (1-5), so the gesture observation likelihood information of the three-dimensional tracking can be obtained.

p_edge = exp(-d_chamfer(edge, contour))   (1-5).
Further, the similarity measurement of the classic three-dimensional gesture model includes defining a similarity likelihood function between the foreground information of the classic three-dimensional gesture model and the high-order projected gesture information, as shown in equation (1-6), where the union of the two represents the maximum merged pixel region of the foreground-information pixel points of the classic three-dimensional gesture model and the high-order projected gesture information pixel points, and the intersection represents the region common to both. The similarity measurement formed in this way is denoted P_foreground; the expression shows that this similarity measurement adds high-order projected gesture information on top of the classic model.

P_foreground = exp{ -[S_foreground ∪ S_projection] - [S_foreground ∩ S_projection] }   (1-6)
Compared with the prior art, the three-dimensional tracking gesture observation likelihood modeling method has the following advantages: the three-dimensional tracking gesture observation likelihood model is established through the three-dimensional tracking gesture observation likelihood modeling method and the high-dimensional well depth image information gesture similarity measurement method, the effectiveness of the model is verified, and applying the model can improve the efficiency and precision of gesture recognition.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram illustrating a gesture similarity metric preprocessing according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a Canny edge detection result according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a similarity measurement result obtained by adding high-order projection gesture information based on a classical model according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an experimental result according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art through specific situations.
The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
1.1 Preprocessing of high-dimensional well depth image information
In this method, the gesture state model and the scene information of the gesture are defined as similarity-measurement likelihood models. The gesture state model is the classic three-dimensional gesture modeling method presented in chapter II, and the scene information of the gesture mainly concerns depth-information similarity-measurement extraction for the three-dimensional scene. The gesture observation likelihood model is then built from the foreground information of the classic three-dimensional gesture modeling method and the high-dimensional well depth information matched with the Chamfer distance, and the three-dimensional tracking gesture observation likelihood model can be obtained according to the gesture outline and the classic three-dimensional gesture modeling result.
For the gesture similarity measurement, basic preprocessing is first carried out as illustrated in fig. 1. The original gesture signal is obtained from a calibrated camera, but interference such as overexposure and non-Gaussian environmental disturbance can occur during image acquisition, so the acquired gesture digital image can be severely distorted. This distortion would greatly affect the later recognition of the gesture posture, so a preprocessing operation must be applied to the interfered signal in advance to meet the requirements of high-quality digital image processing.
Acquiring the gesture image is the first step of the preprocessing process. The acquisition is built on the OpenCV library under a C++ integrated development environment, so the CreatCaptureWindow() method can be called in turn to create a capture window for the data source, the CraetWebDriver() method is called to establish the connection between the gesture camera and VS, the DlgCammmorsource() method is called to set the relevant parameters for capturing the gesture image with the camera, and finally the callback pointer of the CameraCallbackImg() method is pointed to the capture window to complete the transfer of the digital image. After each transfer, VS automatically judges whether the captured object meets the minimum frame-frequency rule; if it does, the digital signal continues to be captured, otherwise the image is sampled and extracted again.
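The named capture methods above appear to be project-specific wrappers rather than public OpenCV calls, so the following is only a minimal sketch of the same capture loop using the standard cv::VideoCapture API; the minimum frame-rate threshold and the re-sampling behaviour are assumptions made for illustration.

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    const double kMinFps = 15.0;                 // assumed minimum frame-frequency rule
    cv::VideoCapture cap(0);                     // open the gesture camera
    if (!cap.isOpened()) {
        std::cerr << "gesture camera not available\n";
        return 1;
    }
    cv::Mat frame;
    while (cap.read(frame)) {                    // transfer one captured digital image
        double fps = cap.get(cv::CAP_PROP_FPS);  // query the capture frame rate
        if (fps > 0.0 && fps < kMinFps) {
            cap.release();                       // frame-frequency rule violated:
            cap.open(0);                         // re-open and sample the source again
            continue;
        }
        cv::imshow("gesture capture", frame);    // hand the frame to later preprocessing
        if (cv::waitKey(1) == 27) break;         // ESC stops the capture loop
    }
    return 0;
}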
Gesture image serialization adds a fourth, temporal dimension to the three-dimensional RGB digital image signal and is usually set with the SetTimeCapture() method; once the image carries this fourth time-related dimension, segmentation, enhancement and other digital image processing methods can be applied. Since digital image processing is based on gray-scale images, the digital image must first be converted to gray scale. The Laplacian is a commonly used operator for correcting color RGB images, and we state here without proof that a color gesture image can be grayed using the Laplacian gray-scale transformation shown in equation (1-1):
Gray(x,y)=0.286*R(x,y)+0.584*G(x,y)+0.128*B(x,y) (1-1)
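A minimal sketch of this graying step, using the weighting coefficients given in equation (1-1); note that OpenCV stores color images in BGR channel order, so the indices below pick out R, G and B accordingly.

#include <opencv2/opencv.hpp>

// Weighted graying per equation (1-1): Gray = 0.286*R + 0.584*G + 0.128*B
cv::Mat grayFromColor(const cv::Mat& bgr) {
    cv::Mat gray(bgr.rows, bgr.cols, CV_8UC1);
    for (int y = 0; y < bgr.rows; ++y) {
        for (int x = 0; x < bgr.cols; ++x) {
            const cv::Vec3b& px = bgr.at<cv::Vec3b>(y, x);   // px = {B, G, R}
            double v = 0.286 * px[2] + 0.584 * px[1] + 0.128 * px[0];
            gray.at<uchar>(y, x) = cv::saturate_cast<uchar>(v);
        }
    }
    return gray;
}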
Canny edge detection is then performed: a Gaussian filter smooths the image, the magnitude and direction of the gradient are computed with finite differences of the first-order partial derivatives, depth information is extracted from the corner points, non-maximum suppression is applied to the combined Canny detection image, and finally a dual-threshold algorithm detects and connects the edges with hysteresis thresholding, so the high-dimensional well depth information of the edge detection is obtained. Although edge detection can extract the depth information of the background to the greatest extent, insufficient corner-point extraction can still leave discontinuous edges, so a Chamfer distance transformation built from hand-shape topological information is proposed; the missing contours can be completed through the complementary effect of the topology. The edge detection results are shown in fig. 2.
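A short sketch of this smoothing and dual-threshold edge-detection chain with OpenCV; the Gaussian kernel size and the two Canny thresholds are assumed values chosen only for illustration.

#include <opencv2/opencv.hpp>

// Gaussian smoothing followed by Canny with dual-threshold hysteresis,
// producing the binary edge map that feeds the Chamfer distance transform.
cv::Mat detectGestureEdges(const cv::Mat& gray) {
    cv::Mat smoothed, edges;
    cv::GaussianBlur(gray, smoothed, cv::Size(5, 5), 1.4);   // suppress noise
    cv::Canny(smoothed, edges, 50, 150);                     // low/high thresholds
    return edges;                                            // 255 on edges, 0 elsewhere
}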
1.2 High-dimensional well depth image information gesture similarity measurement
We first define two data points:

[Formula images: definitions of the two data points]

The Chamfer distance transformation can then be performed on the binarized discontinuous edges; the transformation is defined as shown in equation (1-2):

[Formula image: equation (1-2), definition of the Chamfer distance transformation]
then construct each figureThe scanning and traversing process of the element: we first define P and
Figure BDA0002357147860000064
the method comprises the steps of respectively representing a high-dimensional well depth pixel set and a three-dimensional gesture modeling pixel set in a binary image, performing one-time traversal by using a binary primitive method shown as a formula (1-3), namely recording a scanning result of one time when a point coordinate element of a next edge image does not belong to an edge as 0, and performing a minimum processing on a scanning result of one time when the point coordinate element of the next edge image belongs to the edge in a previous state by +1, wherein the minimum processing is adopted to ensure that a Chamfer distance has a certain digital frame gradient.
f1(p)=min{f1(q)+1:q∈B(p)}fp∈<P>(1-3)
Figure BDA0002357147860000071
We then define the coordinate of p as (x, y); its four-neighbourhood contains the four element points (x+1, y), (x-1, y), (x, y+1) and (x, y-1). For the edge points obtained after the first scan, a second scan is performed as shown in equation (1-4); this ensures that the well depth of the second-order gradient can match the model obtained by the classic three-dimensional gesture modeling. To ensure that the accuracies of the two scans can reinforce each other, the second scan is performed in the direction opposite to the first.
f2(p)=min{f1(p),f2(q)+1:q∈A(p)} (1-4)
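A hedged sketch of the two-pass scan in equations (1-3) and (1-4): a forward raster scan followed by a reverse scan over the 4-neighbourhood, starting from the binary edge map. Following the common formulation of this transform, edge pixels are assigned distance 0 and every step to a neighbour adds 1, with each pass keeping the minimum; the exact definitions of the sets B(p) and A(p) appear only as formula images in the original, so the forward/backward split used here is an assumption.

#include <algorithm>
#include <limits>
#include <vector>

// Two-pass (forward/backward) Chamfer distance transform over a binary edge map.
std::vector<std::vector<int>> chamferTransform(const std::vector<std::vector<bool>>& edge) {
    const int H = static_cast<int>(edge.size());
    const int W = static_cast<int>(edge[0].size());
    const int INF = std::numeric_limits<int>::max() / 2;
    std::vector<std::vector<int>> d(H, std::vector<int>(W, INF));

    // First scan, equation (1-3): top-left to bottom-right, neighbours already visited.
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            if (edge[y][x]) { d[y][x] = 0; continue; }               // edge point: distance 0
            if (y > 0) d[y][x] = std::min(d[y][x], d[y - 1][x] + 1); // previous state + 1
            if (x > 0) d[y][x] = std::min(d[y][x], d[y][x - 1] + 1);
        }
    // Second scan, equation (1-4): opposite direction, bottom-right to top-left.
    for (int y = H - 1; y >= 0; --y)
        for (int x = W - 1; x >= 0; --x) {
            if (y + 1 < H) d[y][x] = std::min(d[y][x], d[y + 1][x] + 1);
            if (x + 1 < W) d[y][x] = std::min(d[y][x], d[y][x + 1] + 1);
        }
    return d;   // city-block distance from every pixel to the nearest edge pixel
}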
After the edges are introduced into the Chamfer distance transformation, a gesture similarity measurement between the edges of the classic three-dimensional gesture model and the gesture outline can be defined, as shown in equation (1-5), so the gesture observation likelihood information of the three-dimensional tracking can be obtained.

p_edge = exp(-d_chamfer(edge, contour))   (1-5)
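A sketch of equation (1-5) on top of the distance transform above: the Chamfer distance between the image edge map and the projected model contour is read off the transform and mapped through exp(-d). Averaging the distance over the contour points is an assumption; the text only states that the Chamfer distance between the edge map and the contour is used.

#include <cmath>
#include <vector>

struct Point2i { int x, y; };

// p_edge = exp(-d_chamfer(edge, contour)), with d_chamfer taken as the mean
// distance-transform value sampled along the projected model contour.
double edgeLikelihood(const std::vector<std::vector<int>>& dist,   // from chamferTransform()
                      const std::vector<Point2i>& contour) {       // projected gesture outline
    if (contour.empty()) return 0.0;
    double sum = 0.0;
    for (const Point2i& p : contour)
        sum += dist[p.y][p.x];               // distance to the nearest image edge
    const double dChamfer = sum / static_cast<double>(contour.size());
    return std::exp(-dChamfer);
}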
1.3 Classic three-dimensional gesture model similarity measurement
After the similarity measurement of the high-order projected gesture information is obtained, the similarity measurement of the classic three-dimensional gesture model is carried out to obtain the observation likelihood model. A similarity likelihood function between the foreground information of the classic three-dimensional gesture model and the high-order projected gesture information is first defined, as shown in equation (1-6), where the union of the two represents the maximum merged pixel region of the foreground-information pixel points of the classic three-dimensional gesture model and the high-order projected gesture information pixel points, and the intersection represents the region common to both. The similarity measurement formed in this way is denoted P_foreground; the expression shows that this similarity measurement adds high-order projected gesture information on top of the classic model.

P_foreground = exp{ -[S_foreground ∪ S_projection] - [S_foreground ∩ S_projection] }   (1-6)
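A sketch of equation (1-6), reading the exponent as the area of the union minus the area of the intersection of the two binary masks, which matches the textual description (the exact bracketing appears only as a formula image); the masks are assumed to be equally sized binary images, and any normalising scale factor is left out because the text does not specify one.

#include <cmath>
#include <opencv2/opencv.hpp>

// P_foreground = exp{-(|S_foreground ∪ S_projection| - |S_foreground ∩ S_projection|)}
double foregroundLikelihood(const cv::Mat& foreground, const cv::Mat& projection) {
    CV_Assert(foreground.size() == projection.size());
    CV_Assert(foreground.type() == CV_8UC1 && projection.type() == CV_8UC1);
    cv::Mat uni, inter;
    cv::bitwise_or(foreground, projection, uni);      // maximum merged pixel region
    cv::bitwise_and(foreground, projection, inter);   // region common to both masks
    const double diff = cv::countNonZero(uni) - cv::countNonZero(inter);
    return std::exp(-diff);
}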
As shown in fig. 3, the first graph shows the original gesture information and the second graph shows the gesture projection with high-order information added; the green region marks the maximum merged pixel region of the foreground-information pixel points of the classic three-dimensional gesture model and the high-order projected gesture information pixel points, and the blue region marks the region common to both.
1.4 Experimental results of the invention
In the test, the user keeps the palm posture unchanged while the index finger performs translational motion along the x coordinate axis, slow rotation about the y coordinate axis, and slow rotation about the z coordinate axis. Three groups of representative hand motions are then selected according to the characteristics of the test, representing the translation of the palm and the rotation of the fingers about the two axes respectively, as shown in fig. 4. The left side of each figure shows the classic three-dimensional tracking gesture and the gesture information with high-order projection. It can be observed from the figures that the result of the newly fused three-dimensional tracking gesture observation likelihood model matches the original gesture information of the three-dimensional tracking gesture observation likelihood model base platform well, giving a better modeling effect.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

1. A three-dimensional tracking gesture observation likelihood modeling method is characterized in that: gesture observation likelihood modeling is performed by using foreground information of a classic three-dimensional gesture modeling method and high-dimensional well depth information matched with a Chamfer distance, and a three-dimensional tracking gesture observation likelihood model is obtained according to a gesture outline and a classic three-dimensional gesture modeling result.
2. The three-dimensional tracked gesture observation likelihood modeling method according to claim 1, characterized by: the gesture observation likelihood modeling method comprises preprocessing of high-dimensional well depth image information and gesture similarity measurement of the high-dimensional well depth image information.
3. The three-dimensional tracking gesture observation likelihood modeling method according to claim 2, characterized in that: the preprocessing of the high-dimensional well depth image information comprises gesture similarity measurement preprocessing, specifically comprises the steps of acquiring a gesture data source, and then performing gesture segmentation, gesture enhancement, gesture modeling and edge detection.
4. The three-dimensional tracked gesture observation likelihood modeling method according to claim 3, characterized by: the method comprises the steps of establishing a capture window of a data source by calling a CreatCaptureWindow () method, establishing connection between a gesture camera and a VS by calling a CraetWebDriver () method, setting relevant parameters of the camera capture gesture image by calling a DlgCammmorsource () method, and finally finishing the transmission of the digital image by using a callback pointer of a CameraCallbackImg () method to point to the capture window, wherein the VS automatically judges whether a captured object meets a minimum frame frequency rule after each transmission, continues to capture the digital signal if the captured object meets the minimum frame frequency rule, and otherwise, samples and extracts the image again.
5. The three-dimensional tracked gesture observation likelihood modeling method according to claim 1, characterized by: the high-dimensional well depth image information gesture similarity measurement specifically comprises first defining two data points:

[Formula images: definitions of the two data points]

The Chamfer distance transformation can then be performed on the binarized discontinuous edge; the transformation is defined as shown in equation (1-2):

[Formula image: equation (1-2), definition of the Chamfer distance transformation]
then constructing the scanning and traversing process of each digital primitive: we first define P and P to represent a high-dimensional well depth pixel set and a three-dimensional gesture modeling pixel set in a binary image, and perform one traversal by using a binary primitive method as shown in equation (1-3), that is, when a point coordinate element of a next edge image does not belong to an edge any more, a result of one scan is recorded as 0, and when a point coordinate element of the next edge image belongs to an edge, a result of one scan is +1 in a previous state, and finally, a minimization process is used, which is to ensure that a Chamfer distance has a certain digital frame gradient.
f1(p)=min{f1(q)+1:q∈B(p)}fp∈<P>(1-3)
Figure FDA0002357147850000022
We then define the coordinate of p as (x, y); its four-neighbourhood contains the four element points (x+1, y), (x-1, y), (x, y+1) and (x, y-1). For the edge points obtained after the first scan, a second scan is performed as shown in equation (1-4); this ensures that the well depth of the second-order gradient can match the model obtained by the classic three-dimensional gesture modeling. To ensure that the accuracies of the two scans can reinforce each other, the second scan is performed in the direction opposite to the first.
f2(p)=min{f1(p),f2(q)+1:q∈A(p)}(1-4)
After the edges are introduced into the Chamfer distance transformation, a gesture similarity measurement between the edges of the classic three-dimensional gesture model and the gesture outline can be defined, as shown in equation (1-5), so the gesture observation likelihood information of the three-dimensional tracking can be obtained.

p_edge = exp(-d_chamfer(edge, contour))   (1-5).
6. The three-dimensional tracked gesture observation likelihood modeling method according to claim 1, characterized by: the similarity measurement of the classic three-dimensional gesture model comprises defining a similarity likelihood function between the foreground information of the classic three-dimensional gesture model and the high-order projected gesture information, as shown in equation (1-6), where the union represents the maximum merged pixel region of the foreground-information pixel points of the classic three-dimensional gesture model and the high-order projected gesture information pixel points, and the intersection represents the region common to both; the similarity measurement formed in this way is denoted P_foreground, and the expression shows that this similarity measurement adds high-order projected gesture information on top of the classic model.

P_foreground = exp{ -[S_foreground ∪ S_projection] - [S_foreground ∩ S_projection] }   (1-6).
CN202010010969.2A 2020-01-06 2020-01-06 Three-dimensional tracking gesture observation likelihood modeling method Pending CN111241971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010010969.2A CN111241971A (en) 2020-01-06 2020-01-06 Three-dimensional tracking gesture observation likelihood modeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010010969.2A CN111241971A (en) 2020-01-06 2020-01-06 Three-dimensional tracking gesture observation likelihood modeling method

Publications (1)

Publication Number Publication Date
CN111241971A true CN111241971A (en) 2020-06-05

Family

ID=70874281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010010969.2A Pending CN111241971A (en) 2020-01-06 2020-01-06 Three-dimensional tracking gesture observation likelihood modeling method

Country Status (1)

Country Link
CN (1) CN111241971A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236793A (en) * 2010-04-29 2011-11-09 比亚迪股份有限公司 Method for rapidly detecting skin color
CN102789568A (en) * 2012-07-13 2012-11-21 浙江捷尚视觉科技有限公司 Gesture identification method based on depth information
CN103649967A (en) * 2011-06-23 2014-03-19 阿尔卡特朗讯 Dynamic gesture recognition process and authoring system
CN103679154A (en) * 2013-12-26 2014-03-26 中国科学院自动化研究所 Three-dimensional gesture action recognition method based on depth images
US20140211991A1 (en) * 2013-01-30 2014-07-31 Imimtek, Inc. Systems and methods for initializing motion tracking of human hands
CN108256421A (en) * 2017-12-05 2018-07-06 盈盛资讯科技有限公司 A kind of dynamic gesture sequence real-time identification method, system and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236793A (en) * 2010-04-29 2011-11-09 比亚迪股份有限公司 Method for rapidly detecting skin color
CN103649967A (en) * 2011-06-23 2014-03-19 阿尔卡特朗讯 Dynamic gesture recognition process and authoring system
CN102789568A (en) * 2012-07-13 2012-11-21 浙江捷尚视觉科技有限公司 Gesture identification method based on depth information
US20140211991A1 (en) * 2013-01-30 2014-07-31 Imimtek, Inc. Systems and methods for initializing motion tracking of human hands
CN103679154A (en) * 2013-12-26 2014-03-26 中国科学院自动化研究所 Three-dimensional gesture action recognition method based on depth images
CN108256421A (en) * 2017-12-05 2018-07-06 盈盛资讯科技有限公司 A kind of dynamic gesture sequence real-time identification method, system and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Bingchao: "Research on the observation likelihood model in gesture tracking" (手势跟踪中的观测似然模型研究) *

Similar Documents

Publication Publication Date Title
CN110363858B (en) Three-dimensional face reconstruction method and system
JP3735344B2 (en) Calibration apparatus, calibration method, and calibration program
US8269722B2 (en) Gesture recognition system and method thereof
JP5167248B2 (en) Modeling of humanoid shape by depth map
KR101616926B1 (en) Image processing apparatus and method
CN110443205B (en) Hand image segmentation method and device
KR20170008638A (en) Three dimensional content producing apparatus and three dimensional content producing method thereof
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
JP2009536731A5 (en)
KR20050022306A (en) Method and Apparatus for image-based photorealistic 3D face modeling
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
US11908081B2 (en) Method and system for automatic characterization of a three-dimensional (3D) point cloud
CN111739071B (en) Initial value-based rapid iterative registration method, medium, terminal and device
CN111915723A (en) Indoor three-dimensional panorama construction method and system
KR20110021500A (en) Method for real-time moving object tracking and distance measurement and apparatus thereof
CN111199169A (en) Image processing method and device
CN111583386A (en) Multi-view human body posture reconstruction method based on label propagation algorithm
JP3144400B2 (en) Gesture recognition device and method
JP2016009448A (en) Determination device, determination method, and determination program
JP6127958B2 (en) Information processing apparatus, information processing method, and program
CN113689365B (en) Target tracking and positioning method based on Azure Kinect
CN111241971A (en) Three-dimensional tracking gesture observation likelihood modeling method
CN113140031B (en) Three-dimensional image modeling system and method and oral cavity scanning equipment applying same
CN115841602A (en) Construction method and device of three-dimensional attitude estimation data set based on multiple visual angles
CN111079618A (en) Three-dimensional tracking gesture observation likelihood modeling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination