CN108846348B - Human behavior recognition method based on three-dimensional skeleton characteristics - Google Patents


Info

Publication number
CN108846348B
CN108846348B (application CN201810577437.XA)
Authority
CN
China
Prior art keywords
projection
dimensional
human body
coordinate system
joint point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810577437.XA
Other languages
Chinese (zh)
Other versions
CN108846348A (en)
Inventor
冯子亮
齐凌云
黄潇逸
董朋林
丁健伟
邓茜文
韩震博
闫秋芳
赵洋
赵伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201810577437.XA priority Critical patent/CN108846348B/en
Publication of CN108846348A publication Critical patent/CN108846348A/en
Application granted granted Critical
Publication of CN108846348B publication Critical patent/CN108846348B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a human behavior recognition method based on three-dimensional skeleton features. It uses the three-dimensional coordinates of the human skeletal joint points in a three-dimensional behavior sequence to build skeletal joint point motion projection feature vectors, then trains and classifies these vectors with a classification algorithm to recognize human behaviors. The method uses only the three-dimensional coordinates of the skeleton, and the spatio-temporal projection of the joint points expresses the spatio-temporal relationship of all joints during the motion accurately. It has a small data volume and low computational complexity, which helps real-time performance in practical applications.

Description

Human behavior recognition method based on three-dimensional skeleton characteristics
Technical Field
The invention relates to the technical field of computer vision and the technical field of human behavior recognition, in particular to a human behavior recognition method based on three-dimensional bone characteristics.
Background
Human behavior recognition aims to interpret human behaviors automatically by computer, and thereby to enable various kinds of interaction and processing. By complexity, human behaviors fall roughly into three categories: individual behavior, interactive behavior, and group behavior; the present invention concerns the recognition of individual behaviors.
Early behavior recognition was mainly based on color image sequences captured by ordinary cameras. Because an ordinary camera maps the three-dimensional scene onto a two-dimensional image, three-dimensional information is lost; recognition is therefore easily affected by complex background interference, changes in illumination, camera motion, and similar factors, and the results are often unsatisfactory.
With the advent of depth sensors such as the Kinect, three-dimensional information such as depth became available, and researchers began to use the captured depth and three-dimensional skeleton data for human behavior recognition; however, many problems remain in practice.
At present, the main challenges and open problems in human behavior recognition include:
(1) depth data is voluminous, which makes behavior recognition computationally complex and slow;
(2) the accuracy of depth data depends strongly on the scene, and is affected by the shooting angle, occlusion of the behavior, and similar factors.
Disclosure of Invention
The invention provides a human behavior recognition method based on three-dimensional skeleton features. By projecting the skeletal joint points in space and time and dividing the projection maps into sub-grids, it reduces the amount of data to be processed and lowers the computational complexity.
A human behavior recognition method based on three-dimensional skeleton features comprises the following steps.
Step S1, obtaining three-dimensional human behavior sequence data of a single behavior, wherein the sequence data contains the three-dimensional coordinates of the human skeletal joint points.
Step S2, converting the skeletal joint point coordinates of each frame of the sequence from the camera coordinate system to the human body coordinate system, and then projecting them onto the three orthogonal planes formed by the coordinate axes of the body coordinate system.
Step S3, accumulating the projection points of every frame of the sequence on the three orthogonal planes of the body coordinate system to form three skeletal joint point motion projection maps.
Step S4, normalizing the three projection maps and dividing each into sub-grids.
Step S5, counting the number of projected joint points in each sub-grid to form a skeletal joint point motion projection feature vector, and normalizing it to the interval [0,1].
Step S6, training the normalized feature vectors with a classification algorithm to obtain the classification parameters, and using them to recognize human behaviors.
The camera coordinate system:
the camera coordinate system is a three-dimensional coordinate system with the camera at the origin; usually the imaging plane of the camera is the xy plane and the optical axis of the camera is the z axis, forming a right-handed coordinate system.
The human body coordinate system:
the human body coordinate system is a three-dimensional coordinate system centered on the human body. Usually the lowest spine joint point is the origin, the direction of the body perpendicular to the ground is the z axis, the left-right direction of the body is the y axis, and the front-back direction is the x axis, forming a right-handed coordinate system. It is a relative coordinate system: if during the motion only the position of the whole body changes while the relative positions of the joints do not, the joint values change in the camera coordinate system but remain unchanged in the body coordinate system.
The skeletal joint point motion projection maps:
the projection maps are three two-dimensional images; each is formed by accumulating, on the corresponding plane, the two-dimensional projections of the three-dimensional joint points of every frame of the sequence. In essence each map is a spatio-temporal projection of the three-dimensional joint data onto that plane.
Accumulating the two-dimensional projection points on the corresponding plane means:
each projection adds 1 to the value at its projected position in the map; that is, the planar projection is treated as a two-dimensional image, and the value at a given coordinate of the image is the number of joint points projected there.
Normalizing and sub-grid division of the projection maps:
normalize each of the three projection maps to an image of fixed size;
divide each normalized map into m × n rectangular sub-grids of equal size.
Forming the skeletal joint point motion projection feature vector:
count the number of joint points in each sub-grid of the three projection maps to form a feature vector of dimension m × n × 3;
counting the joint points of a sub-grid means summing the values of all points within the sub-grid, since the value at each point equals the number of joint points projected there.
Normalization to the interval [0,1]:
divide the m × n × 3 feature vector by the number of frames of the behavior sequence and then by the number of joint points per frame; each element of the resulting normalized vector lies in [0,1].
In step S6, training the normalized feature vectors with a classification algorithm to obtain classification parameters and recognize behaviors includes:
computing the normalized skeletal joint point motion projection feature vectors for a three-dimensional behavior dataset containing multiple subjects and multiple behaviors; splitting the data into a training set and a test set; training the classifier on the training set and evaluating it on the test set, thereby obtaining the best classification parameters; and finally using the trained classifier to recognize human behaviors.
Compared with the prior art, the invention has the following advantages:
(1) it uses only the three-dimensional coordinates of the skeletal joint points of the behavior sequence, so the data volume is small and the computational complexity low;
(2) the projection maps and feature vectors are in essence spatio-temporal projections of the joint points, and accurately express the spatio-temporal relationship of all joints during the motion;
(3) generating the projection maps and dividing them into sub-grids reduces the computational complexity, which helps real-time performance in practical applications.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the invention clearer, the technical solutions of the embodiments are described below in detail; obviously, the described embodiments are some, not all, of the embodiments of the invention.
A human behavior recognition method based on three-dimensional bone features, as shown in fig. 1, includes the following steps.
Step S1, using a public three-dimensional human behavior dataset:
for example, with the MSR Action3D dataset, the three-dimensional coordinates of 20 skeletal joint points are available per frame.
Step S2, projecting the skeletal joint point data of each frame:
for each frame of the behavior sequence, convert the three-dimensional joint coordinates from the camera coordinate system to the human body coordinate system, and project them onto the three orthogonal planes formed by the coordinate axes of the body coordinate system.
The conversion from the camera coordinate system to the body coordinate system:
let the forward optical axis of the camera be the positive z axis, the vertically upward direction the positive y axis, and the direction to the right when facing the camera the positive x axis;
let the body coordinate system take the direction perpendicular to the ground as the positive z axis, the forward direction of the body as the positive x axis, and the leftward direction as the positive y axis;
let the axes of the camera coordinate system be ox, oy, oz, the axes of the body coordinate system be ox', oy', oz', and the origin of the body coordinate system be (x0, y0, z0) in camera coordinates. A point (x, y, z) in the camera coordinate system is converted to (x', y', z') in the body coordinate system by:
x' = (x - x0)·cos a1 + (y - y0)·cos b1 + (z - z0)·cos c1
y' = (x - x0)·cos a2 + (y - y0)·cos b2 + (z - z0)·cos c2
z' = (x - x0)·cos a3 + (y - y0)·cos b3 + (z - z0)·cos c3
where a1, b1, c1 (and likewise a2, b2, c2 and a3, b3, c3) are the included angles between the body axis ox' (respectively oy', oz') and the camera axes ox, oy, oz.
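As an illustrative sketch (not part of the patent; NumPy and the function name `camera_to_body` are assumptions), the direction-cosine conversion above can be written as:

```python
import numpy as np

def camera_to_body(points, R, origin):
    """Convert Nx3 camera-coordinate points to the body coordinate system.

    R is the 3x3 matrix of direction cosines: row i holds the cosines of
    the angles between body axis i and the camera axes (row 1 is
    (cos a1, cos b1, cos c1), etc.).  origin is the body origin (the
    lowest spine joint) expressed in camera coordinates.
    """
    points = np.asarray(points, dtype=float)
    R = np.asarray(R, dtype=float)
    origin = np.asarray(origin, dtype=float)
    return (points - origin) @ R.T

# Example: a body frame rotated 90 degrees about z relative to the camera.
R = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
body = camera_to_body([[1.0, 2.0, 3.0]], R, origin=[0.0, 0.0, 0.0])
```

Each row of R being a unit vector makes the matrix product equivalent to the three cosine sums of the formula above.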
Step S3, accumulating the projected joint points of every frame to form the skeletal joint point motion projection maps:
all projection points of every frame of the behavior sequence on the three orthogonal planes of the body coordinate system are accumulated, forming the three projection maps of the sequence.
If each frame contains 20 joint points and the sequence has n frames, each plane accumulates 20 × n projection points, i.e. 3 × 20 × n points over the three planes.
Accumulating the projection points on the three orthogonal planes of the body coordinate system to form the three projection maps of the sequence means:
the projection maps are three two-dimensional images, each formed by accumulating on the corresponding plane the two-dimensional projections of the three-dimensional joint points of every frame; in essence each map is a spatio-temporal projection of the three-dimensional joint data onto that plane;
each projection adds 1 to the value at its projected position; that is, the planar projection is treated as a two-dimensional image, and the value at a given coordinate of the image is the number of joint points projected there.
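The accumulation just described can be sketched in Python (not part of the patent; NumPy, the function name, and the bounding-box mapping are illustrative assumptions):

```python
import numpy as np

def accumulate_projection(points_2d, size=512, bounds=None):
    """Accumulate 2D projected joint points into one projection map.

    points_2d: (N, 2) joint projections on one plane over all frames.
    bounds ((min_u, max_u), (min_v, max_v)) maps the continuous
    coordinates onto a size x size image; if None, the bounding box of
    the points is used (the per-sequence size normalization of step S4).
    """
    pts = np.asarray(points_2d, dtype=float)
    if bounds is None:
        lo, hi = pts.min(axis=0), pts.max(axis=0)
    else:
        lo = np.array([bounds[0][0], bounds[1][0]], dtype=float)
        hi = np.array([bounds[0][1], bounds[1][1]], dtype=float)
    span = np.where(hi > lo, hi - lo, 1.0)          # guard degenerate axes
    idx = ((pts - lo) / span * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=int)
    np.add.at(img, (idx[:, 1], idx[:, 0]), 1)       # +1 per projected point
    return img

img = accumulate_projection([[0.0, 0.0], [1.0, 1.0], [1.0, 1.0]], size=4)
```

`np.add.at` is used instead of `img[...] += 1` so that repeated hits on the same pixel all count, which is exactly the "add 1 per projection" rule.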
Step S4, normalizing each projection map and dividing it into sub-grids:
normalize each of the three projection maps to an image of fixed size, e.g. 512 × 512;
divide each normalized map into m × n rectangular sub-grids of equal size, i.e. m × n sub-grids per image, e.g. 4 × 4 sub-grids of 128 × 128 pixels each;
this removes differences in the size of the projection maps caused by different body sizes and different distances from the lens, making the three maps comparable.
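A minimal sketch of the sub-grid division (not from the patent; NumPy and the function name are assumptions) sums each of the m × n blocks of a fixed-size map:

```python
import numpy as np

def subgrid_counts(img, m=4, n=4):
    """Sum the values of each of the m x n sub-grids of a projection map.

    Because every pixel holds the number of joint points projected
    there, the per-sub-grid sum equals the joint-point count of that
    sub-grid (as used in step S5).  img's height must be divisible by m
    and its width by n.
    """
    img = np.asarray(img)
    h, w = img.shape
    # Reshape into (m, cell_h, n, cell_w) blocks and sum each block.
    return img.reshape(m, h // m, n, w // n).sum(axis=(1, 3))

demo = np.zeros((8, 8), dtype=int)
demo[0, 0] = 2   # two joints in the top-left sub-grid
demo[7, 7] = 3   # three joints in the bottom-right sub-grid
counts = subgrid_counts(demo, 2, 2)
```

The reshape trick relies on NumPy's row-major layout, so no explicit loop over sub-grids is needed.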
Step S5, generating the normalized skeletal joint point motion projection feature vector:
count the projected joint points of each sub-grid by summing the values of all points within it; concatenating the counts of the three maps gives a feature vector of dimension m × n × 3, e.g. a 4 × 4 × 3 vector;
normalization to the interval [0,1] means dividing this vector by the number of frames of the behavior sequence and then by the number of joint points per frame, so that each element lies in [0,1];
for example, if the sequence has K frames of 20 skeletal joint points each, the vector is divided by K × 20 to obtain the normalized feature vector.
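Putting the counting and normalization together (an illustrative sketch, not the patent's code; NumPy and the function name are assumptions). Since each map accumulates exactly K × 20 points, dividing by the map's total performs the K × 20 normalization:

```python
import numpy as np

def motion_feature_vector(maps, m=4, n=4):
    """Build the normalized m*n*3 feature vector from three projection maps.

    maps: the three 2D projection maps of one behavior sequence.  The
    sum over any one map equals frames * joints_per_frame (e.g. K * 20),
    so dividing by it puts every element into [0, 1].
    """
    feats = []
    total = None
    for img in maps:
        img = np.asarray(img)
        h, w = img.shape
        cells = img.reshape(m, h // m, n, w // n).sum(axis=(1, 3))
        feats.append(cells.ravel())
        total = img.sum()  # = K * 20 for 20-joint MSR Action3D frames
    return np.concatenate(feats).astype(float) / total

# Toy maps: 4x4 images, one "joint" per pixel, split into 2x2 sub-grids.
vec = motion_feature_vector([np.ones((4, 4), dtype=int)] * 3, m=2, n=2)
```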
Step S6, training the projection feature vectors with a classification algorithm:
using a cross-validation protocol, take half of the data as the training set and the other half as the test set;
train the classifier on the training set and evaluate it on the test set, thereby obtaining the best classification parameters.
Step S7, recognizing the behavior of a new three-dimensional behavior sequence with the trained classifier:
generate the normalized skeletal joint point motion projection feature vector of the sequence with steps S2 to S5, then classify it using the previously obtained classification parameters.
A support vector machine (SVM) may be used as the classification algorithm.
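A minimal training/recognition sketch with scikit-learn's SVM (an assumption: the patent names only "a support vector machine", not a library, and the random vectors below are hypothetical stand-ins for the 48-dimensional projection features):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical stand-in data: in the method, each row would be a
# normalized m*n*3 (e.g. 4*4*3 = 48) projection feature vector.
rng = np.random.default_rng(0)
X = rng.random((100, 48))           # feature vectors in [0, 1]
y = rng.integers(0, 5, size=100)    # behavior class labels

# Half for training, half for testing, as in step S6.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)   # train classifier
pred = clf.predict(X_te)                          # step S7: recognize
```

On real data, `C` and the kernel parameters would be tuned on the test split to obtain the "optimal classification parameters" the text refers to.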
Finally, it should be noted that the above embodiments only illustrate the technical solution of the invention and do not limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in them may still be modified, and some or all of the technical features may be replaced by equivalents, without such modifications or substitutions departing from the scope of the technical solutions of the embodiments of the invention.

Claims (3)

1. A human behavior recognition method based on three-dimensional skeleton features, comprising the following steps:
step S1, obtaining three-dimensional human body behavior sequence data of a single behavior, wherein the sequence data comprises three-dimensional coordinate information of human body skeletal joint points;
step S2, converting the skeleton joint point coordinates of each frame in the three-dimensional human behavior sequence data from a camera coordinate system to a human coordinate system, and then projecting the coordinates to three orthogonal planes formed by coordinate axes in the human coordinate system;
step S3, respectively accumulating the projection diagrams of each frame data in the three orthogonal planes of the human body coordinate system in the three-dimensional human body behavior sequence data to form three bone joint point motion projection diagrams;
the bone articulation point motion projection view comprising:
the bone joint point movement projection graph comprises three two-dimensional projection graphs, each projection graph is formed by accumulating two-dimensional projection coordinate points of three-dimensional bone points of each frame in three-dimensional human body behavior sequence data on a corresponding plane, and the essence is space-time projection of three-dimensional bone node data on the corresponding plane;
the two-dimensional projection coordinate point accumulation on the corresponding plane comprises the following steps:
adding 1 to the value at the projection position in the projection image once per projection; regarding the plane projection as a two-dimensional image, wherein the value of a certain coordinate position of the image is the number of the joint points falling on the projection point;
step S4, performing normalization and sub-grid division on the three bone joint point motion projection views, including:
respectively normalizing the three bone joint point motion projection drawings to form an image with a fixed size;
dividing the normalized bone joint point motion projection graph into m × n rectangular sub grids with the same size;
step S5, counting the number of projection points of skeletal joint points in each sub lattice, forming a projection feature vector of skeletal joint point movement, and normalizing to a [0,1] interval; the method comprises the following steps:
respectively counting the number of skeletal joint points in each sub-grid in the three skeletal joint point motion projection drawings, namely accumulating the values of all the points in the range of the sub-grid to form a skeletal joint point motion projection feature vector of m x n x 3 of three dimensions;
dividing the projection feature vector of the bone joint point motion of three dimensions by the frame number of the behavior sequence data and then dividing by the number of the bone points in each frame to obtain the projection feature vector of the bone joint point motion normalized to the [0,1] interval;
and step S6, training and classifying the normalized bone joint point motion projection feature vectors by using a classification algorithm.
2. The method of claim 1, comprising:
the camera coordinate system is a three-dimensional coordinate system taking a camera as an origin, an imaging plane of the camera is an xy plane, and the optical axis direction of the camera is the z-axis direction, so that a right-hand coordinate system is formed;
the human body coordinate system is a three-dimensional coordinate system taking a human body as a center, takes a lowest spinal joint point as an origin, and forms a right-hand coordinate system, wherein the direction of the human body perpendicular to the ground is the z-axis direction, the left-right direction of the human body is the y-direction, and the front-back direction of the human body is the x-direction; the coordinate system is a relative coordinate system.
3. The method of claim 1, wherein the training and classifying of the normalized bone joint motion projection feature vectors using a classification algorithm comprises:
computing normalized skeletal joint point motion projection feature vectors for a three-dimensional human behavior dataset containing multiple subjects and multiple behaviors; splitting the data into a training set and a test set; training the classifier on the training set and testing it on the test set, thereby obtaining the best classification parameters and recognizing the human behaviors.
CN201810577437.XA 2018-06-07 2018-06-07 Human behavior recognition method based on three-dimensional skeleton characteristics Active CN108846348B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810577437.XA CN108846348B (en) 2018-06-07 2018-06-07 Human behavior recognition method based on three-dimensional skeleton characteristics


Publications (2)

Publication Number Publication Date
CN108846348A, published 2018-11-20
CN108846348B, granted 2022-02-11

Family

ID=64210529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810577437.XA Active CN108846348B (en) 2018-06-07 2018-06-07 Human behavior recognition method based on three-dimensional skeleton characteristics

Country Status (1)

Country Link
CN (1) CN108846348B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3953857A4 (en) * 2019-04-12 2022-11-16 INTEL Corporation Technology to automatically identify the frontal body orientation of individuals in real-time multi-camera video feeds
CN111931804B (en) * 2020-06-18 2023-06-27 南京信息工程大学 Human body action automatic scoring method based on RGBD camera
CN111898576B (en) * 2020-08-06 2022-06-24 电子科技大学 Behavior identification method based on human skeleton space-time relationship
CN111914796B (en) * 2020-08-17 2022-05-13 四川大学 Human body behavior identification method based on depth map and skeleton points
CN112836824B (en) * 2021-03-04 2023-04-18 上海交通大学 Monocular three-dimensional human body pose unsupervised learning method, system and medium
CN114550308B (en) * 2022-04-22 2022-07-05 成都信息工程大学 Human skeleton action recognition method based on space-time diagram

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103810496A (en) * 2014-01-09 2014-05-21 江南大学 3D (three-dimensional) Gaussian space human behavior identifying method based on image depth information
CN104598890A (en) * 2015-01-30 2015-05-06 南京邮电大学 Human body behavior recognizing method based on RGB-D video
CN105787469A (en) * 2016-03-25 2016-07-20 广州市浩云安防科技股份有限公司 Method and system for pedestrian monitoring and behavior recognition
CN107742097A (en) * 2017-09-30 2018-02-27 长沙湘计海盾科技有限公司 A kind of Human bodys' response method based on depth camera

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US20140347479A1 (en) * 2011-11-13 2014-11-27 Dor Givon Methods, Systems, Apparatuses, Circuits and Associated Computer Executable Code for Video Based Subject Characterization, Categorization, Identification, Tracking, Monitoring and/or Presence Response
CN103747196B (en) * 2013-12-31 2017-08-01 北京理工大学 A kind of projecting method based on Kinect sensor
US9489570B2 (en) * 2013-12-31 2016-11-08 Konica Minolta Laboratory U.S.A., Inc. Method and system for emotion and behavior recognition
KR101554677B1 (en) * 2014-04-11 2015-10-01 주식회사 제론헬스케어 System for sensoring body behavior
CN104866860A (en) * 2015-03-20 2015-08-26 武汉工程大学 Indoor human body behavior recognition method
CN106228109A (en) * 2016-07-08 2016-12-14 天津大学 A kind of action identification method based on skeleton motion track

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN103810496A (en) * 2014-01-09 2014-05-21 江南大学 3D (three-dimensional) Gaussian space human behavior identifying method based on image depth information
CN104598890A (en) * 2015-01-30 2015-05-06 南京邮电大学 Human body behavior recognizing method based on RGB-D video
CN105787469A (en) * 2016-03-25 2016-07-20 广州市浩云安防科技股份有限公司 Method and system for pedestrian monitoring and behavior recognition
CN107742097A (en) * 2017-09-30 2018-02-27 长沙湘计海盾科技有限公司 A kind of Human bodys' response method based on depth camera

Non-Patent Citations (4)

Title
Diogo Carbonera Luvizon et al.; "Learning features combination for human action recognition from skeleton sequences"; Pattern Recognition Letters, vol. 99, Nov. 2017, pp. 13-20. *
Wanqing Li et al.; "Action Recognition Based on A Bag of 3D Points"; 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, 2010. *
Deng Meiling; "Research on a behavior recognition algorithm based on skeletal joint points"; China Master's Theses Full-text Database (Information Science and Technology), Feb. 2017, pp. I138-2654. *
Lu Zhongqiu et al.; "Behavior recognition based on depth images and skeleton data"; Journal of Computer Applications, vol. 36, no. 11, Nov. 2016, pp. 2979-2984, 2992. *

Also Published As

Publication number Publication date
CN108846348A (en) 2018-11-20

Similar Documents

Publication Publication Date Title
CN108846348B (en) Human behavior recognition method based on three-dimensional skeleton characteristics
CN104156937B (en) shadow detection method and device
US10789765B2 (en) Three-dimensional reconstruction method
Bodor et al. View-independent human motion classification using image-based reconstruction
CN102971768B (en) Posture state estimation unit and posture state method of estimation
EP3646244A1 (en) Method and system for performing simultaneous localization and mapping using convolutional image transformation
CN106910242A (en) The method and system of indoor full scene three-dimensional reconstruction are carried out based on depth camera
CN106155299B (en) A kind of pair of smart machine carries out the method and device of gesture control
US20210044787A1 (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, and computer
CN110220493A (en) A kind of binocular distance measuring method and its device
CN102063607B (en) Method and system for acquiring human face image
CN109559332B (en) Sight tracking method combining bidirectional LSTM and Itracker
CN110567441B (en) Particle filter-based positioning method, positioning device, mapping and positioning method
CN112070782B (en) Method, device, computer readable medium and electronic equipment for identifying scene contour
CN113850865A (en) Human body posture positioning method and system based on binocular vision and storage medium
Nonaka et al. Dynamic 3d gaze from afar: Deep gaze estimation from temporal eye-head-body coordination
CN108537214B (en) Automatic construction method of indoor semantic map
CN112379773B (en) Multi-person three-dimensional motion capturing method, storage medium and electronic equipment
CN114766042A (en) Target detection method, device, terminal equipment and medium
US10791321B2 (en) Constructing a user's face model using particle filters
JP2018195070A (en) Information processing apparatus, information processing method, and program
Lin et al. Extracting 3D facial animation parameters from multiview video clips
CN106096516A (en) The method and device that a kind of objective is followed the tracks of
Hu et al. R-CNN based 3D object detection for autonomous driving
CN108694348B (en) Tracking registration method and device based on natural features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant