CN106980116B - High-precision indoor figure ranging method based on Kinect camera


Info

Publication number: CN106980116B
Application number: CN201710226448.9A
Authority: CN (China)
Prior art keywords: data, fitting, kinect, distance, camera
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN106980116A (en)
Inventors: 陈方飞 (Chen Fangfei), 冯瑞 (Feng Rui)
Current Assignee: Fudan University
Original Assignee: Fudan University
Filing date: 2017-04-09 (application filed by Fudan University)
Publication of CN106980116A: 2017-07-25
Publication of CN106980116B (grant): 2021-06-22

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88: Radar or analogous systems specially adapted for specific applications
    • G01S13/89: Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person


Abstract

The invention belongs to the technical field of digital image processing, and specifically relates to a high-precision indoor person ranging method based on a Kinect camera. The hardware configuration is a Kinect v2 somatosensory camera and a PC with USB 3.0, a Core i3 or better processor, and Windows 8 or a later version as the operating system. The method first obtains the three-dimensional positions of the main skeleton joints of the human body from the Kinect somatosensory camera; it then fits and calibrates the acquired skeleton position data against a calibrated position-data model; next, it smooths the calibrated position data with a Kalman filter using each skeleton point's historical positions, the filtered data giving the final position of each skeleton point of the human body; finally, the distance from each skeleton point to the target point is obtained by computing the distance between the two sets of position coordinates. The person ranging accuracy achieved by the method is within 20 mm.

Description

High-precision indoor figure ranging method based on Kinect camera
Technical Field
The invention belongs to the field of digital image processing, and particularly relates to an indoor person ranging method.
Background
With the application and development of machine-vision-based intelligent monitoring in practice, camera-based people detection and tracking have advanced greatly, but spatial positioning of the human body still needs more research. Current indoor positioning methods fall mainly into AGPS (Assisted GPS) positioning, wireless positioning, and computer-vision positioning.
GPS is an outdoor positioning technology widely used in aviation, marine navigation, vehicle navigation and similar fields; measured against indoor requirements, its positioning error is large. AGPS builds on GPS, improving signal sensitivity by obtaining the satellite ephemeris and extending the delay time of each code. Although AGPS raises positioning accuracy to some extent, it still cannot meet the requirement of current indoor precise-positioning systems (within 20 mm).
Wireless positioning mainly determines position coordinates from information such as the arrival time, arrival time difference, arrival angle, and arrival signal strength of signals between beacon nodes; it includes infrared-based indoor positioning, RFID-based indoor positioning algorithms, ultrasonic indoor positioning, the RADAR indoor positioning system, and others. Although these wireless indoor positioning methods serve certain applications, with the best accuracy among them controlled to within 15 cm, that still falls short of the ideal accuracy of 10 mm, and the wireless-positioning environment is complex and costly to build.
Machine-vision positioning mainly collects depth data of the indoor environment with a camera and uses it for indoor spatial positioning. After examining several mature somatosensory cameras, the Kinect somatosensory camera developed by Microsoft was found to give good results: Kinect obtains depth images of the scene by projecting infrared structured light and imaging it with an infrared camera, and then estimates the positions of the parts of the human body with a random-forest method, providing fairly accurate human spatial positions [1].
The Kinect camera can reliably detect pedestrians in its field of view and locate the three-dimensional coordinates of the human body, but the accuracy and stability of its data need further improvement. The invention therefore proposes a series of methods for processing the Kinect measurement data so that the accuracy of the processed data is controlled to within 20 mm.
Disclosure of Invention
The invention aims to provide a high-precision (within 20 mm) indoor person ranging method.
The method is based on a Kinect somatosensory camera: position data of the main joints of the human body are acquired by the Kinect somatosensory camera and then processed so that the indoor person ranging accuracy is within 20 mm.
The hardware comprises a Kinect v2 somatosensory camera and a PC with USB 3.0, a Core i3 or better processor, and Windows 8 or a later version as the operating system. The method comprises the following steps:
firstly, acquiring three-dimensional position data of main bones of a human body by a Kinect somatosensory camera;
then, fitting and calibrating the acquired three-dimensional bone position data by the PC according to the calibrated position data model;
then, according to the historical position data of each bone point, smoothing the bone three-dimensional position data after fitting and calibration by adopting a Kalman filter, wherein the position data after Kalman filtering is the final position information of each bone point of the human body;
and finally, the distance between each skeleton point of the person and the target point is obtained by computing the distance between the filtered position coordinates and the target point's position coordinates. The method thus comprises four processing stages: data acquisition, fitting calibration, Kalman filtering, and distance calculation.
1. Data acquisition
The Kinect camera monitors the indoor scene in real time, identifies the people in the scene, computes the three-dimensional position coordinates of the main joint points of each human body, and transmits them to the PC over the USB 3.0 data interface. The main joint points computed by Kinect are shown in Fig. 1.
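As a concrete illustration of the data this stage yields, the sketch below models one frame as a mapping from joint name to camera-space coordinates. It is a minimal sketch in Python/NumPy: the 25 joint names follow the Kinect v2 SDK's JointType enumeration, while `parse_frame` and the array layout are illustrative assumptions, not the patent's actual interface.

```python
import numpy as np

# Kinect v2 reports 25 skeleton joints per tracked body, each with a
# three-dimensional position in camera space (metres).
JOINTS = [
    "SpineBase", "SpineMid", "Neck", "Head",
    "ShoulderLeft", "ElbowLeft", "WristLeft", "HandLeft",
    "ShoulderRight", "ElbowRight", "WristRight", "HandRight",
    "HipLeft", "KneeLeft", "AnkleLeft", "FootLeft",
    "HipRight", "KneeRight", "AnkleRight", "FootRight",
    "SpineShoulder", "HandTipLeft", "ThumbLeft",
    "HandTipRight", "ThumbRight",
]

def parse_frame(raw: np.ndarray) -> dict:
    """Turn a (25, 3) array of camera-space coordinates into named joints."""
    assert raw.shape == (len(JOINTS), 3)
    return {name: raw[i] for i, name in enumerate(JOINTS)}
```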
2. Fitting calibration
The process comprises two parts: fitting-model training and fitting calibration.
Fitting-model training: the distance deviation is computed by comparing Kinect person-ranging data with the results of a laboratory-calibrated ranging device, and the multivariate fitting parameters relating the Kinect ranging deviation to the person's three-dimensional position coordinates are computed.
Fitting calibration: after the raw person-position data from the Kinect camera are collected, the position deviation at that location is computed from the trained multivariate fitting parameters; the raw data minus the deviation is the calibrated data:

$$ d_c = d_0 - \Delta d $$

where the first term on the right, $d_0$, is the raw data and the second term, $\Delta d$, is the deviation.
Specifically, in the training stage of the fitting model, the system uses multiple regression analysis to fit the data linearly. Multiple regression analysis is an important statistical method for computing the relation between a target variable and known variables. Let the predicted object be w and the m influencing factors be $x_1, x_2, \ldots, x_m$; between them there is the linear relationship

$$ w = b_0 + b_1 x_1 + b_2 x_2 + \cdots + b_m x_m \qquad (1) $$

Following the multiple-regression approach, given n groups of statistical data $(w_i;\ x_{i1}, x_{i2}, \ldots, x_{im})$, $i = 1, \ldots, n$, the parameter values are obtained by setting the partial derivative of the squared-error objective with respect to each parameter to zero and solving the resulting system of linear equations. Based on this model, the system takes the error as the prediction object w and the spatial position information (x, y, z) as the influencing factors; experiments show that w is influenced most by z and that the dependence is not linear, so the square and cubic terms of z are added as further influencing factors to make the model linear in its parameters.
In the fitting calibration stage, the system computes, at each calibration point, the difference between the distance detected by the Kinect camera and the actual distance as the detection error, and then computes the coefficient of each influencing factor by the multivariate regression method described above, obtaining a relational expression between the error and the detected position. During actual detection, the detected position is substituted into this expression to obtain the corresponding error.
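A minimal sketch of both stages under the stated model (Python/NumPy). The feature set (a constant term, x, y, z, plus the z-squared and z-cubed terms that linearize the z-dependence) follows the description above; the function names and data layout are assumptions for illustration.

```python
import numpy as np

def features(pos: np.ndarray) -> np.ndarray:
    """Design matrix: [1, x, y, z, z^2, z^3] per sample.

    The square and cube of z are included because the error w depends on z
    nonlinearly; adding them restores a model that is linear in its
    parameters, so ordinary least squares applies."""
    x, y, z = pos[:, 0], pos[:, 1], pos[:, 2]
    return np.column_stack([np.ones_like(x), x, y, z, z**2, z**3])

def train_fit_model(kinect_pos, kinect_dist, true_dist):
    """Multivariate regression of the detection error against position.

    kinect_pos  -- (n, 3) raw Kinect coordinates at the calibration points
    kinect_dist -- (n,)  distances measured by Kinect
    true_dist   -- (n,)  distances from the laboratory reference device
    """
    w = np.asarray(kinect_dist) - np.asarray(true_dist)  # detection error
    beta, *_ = np.linalg.lstsq(features(np.asarray(kinect_pos)), w, rcond=None)
    return beta

def calibrate(raw_dist, raw_pos, beta):
    """Calibrated data = raw data minus the predicted deviation."""
    return np.asarray(raw_dist) - features(np.asarray(raw_pos)) @ beta
```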
3. Kalman filtering
In real-time scene detection, the Kinect camera's results are stable on the whole but still contain some large noise. To reduce this noise and make the measured data smoother and more accurate, the method introduces Kalman filtering. The Kalman filter is an optimal recursive (autoregressive) data-processing algorithm. In the spirit of differentials, a pedestrian can be treated as moving at approximately constant velocity within each very short time interval, and this serves as the prior for the filter, which proceeds in two steps: (1) computing a predicted value and (2) updating the predicted value.
The predicted value is computed from the previous frame's estimate $d_{t-1}$ and the current statistical motion variable $V_{t-1}$; the current motion model may be expressed as

$$ \hat{d}_t = d_{t-1} + V_{t-1}\,\Delta t \qquad (2) $$

In the experimental system the predicted state value and the previous state value are linearly related with a multiplicative factor of 1, so the predicted value of the covariance is

$$ P_t^- = P_{t-1} + Q \qquad (3) $$

where Q is the system process covariance.

According to the Kalman filtering principle, the updated estimate is

$$ d_t = \hat{d}_t + K_g\,(d_z - \hat{d}_t) \qquad (5) $$

where $d_z$ is the measurement and $K_g$ is the Kalman gain, computed as

$$ K_g = \frac{P_t^-}{P_t^- + R} \qquad (4) $$

where $P_t^-$ is the predicted covariance and R is the covariance between the observed value and the true value.

The covariance is updated at the same time:

$$ P_t = (1 - K_g)\,P_t^- \qquad (6) $$
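A one-dimensional sketch of equations (2)-(6) in Python. The noise parameters Q and R, the initial covariance, and the simple differencing used to refresh the motion variable are assumptions to be tuned, not values specified by the invention.

```python
class ScalarKalman:
    """Constant-velocity Kalman filter for one coordinate, eqs. (2)-(6)."""

    def __init__(self, d0, q=1e-4, r=1e-2):
        self.d = d0    # state estimate (position along one axis)
        self.v = 0.0   # statistical motion variable (velocity estimate)
        self.p = 1.0   # estimate covariance
        self.q = q     # system process covariance Q
        self.r = r     # measurement covariance R

    def step(self, d_z, dt):
        d_prev = self.d
        d_pred = self.d + self.v * dt          # (2) predict from d_{t-1}, V_{t-1}
        p_pred = self.p + self.q               # (3) covariance prediction
        kg = p_pred / (p_pred + self.r)        # (4) Kalman gain
        self.d = d_pred + kg * (d_z - d_pred)  # (5) blend in the measurement d_z
        self.p = (1.0 - kg) * p_pred           # (6) covariance update
        self.v = (self.d - d_prev) / dt        # refresh the motion variable
        return self.d
```

In practice each skeleton point would carry three such filters, one per coordinate axis.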
4. Distance calculation
The distance between each skeleton point of the person and the target point is obtained by computing the distance between the filtered position coordinates and the target point's position coordinates.
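For instance, a minimal sketch of this final step (the function name is illustrative):

```python
import numpy as np

def joint_to_target_distance(joint_xyz, target_xyz) -> float:
    """Euclidean distance between a filtered skeleton point and the target:
    sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)."""
    return float(np.linalg.norm(np.asarray(joint_xyz) - np.asarray(target_xyz)))
```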
The indoor person ranging realized through the above steps computes the position information of each part of the human body, with a ranging accuracy within 20 mm for each part.
Drawings
Fig. 1 shows the distribution of the human skeleton points whose position information is obtained from Kinect.
Fig. 2 shows the indoor person-ranging interface.
Fig. 3 is a flow chart of the method of the invention.
Fig. 4 is a schematic diagram of the coordinate-deviation geometry (α in the figure is the camera's elevation angle).
Fig. 5 shows the X, Y, Z coordinate measurement residuals of the laser comparison experiment.
Fig. 6 shows the X, Y, Z coordinate measurement residuals of the self-verification experiment.
Detailed Description
1. Hardware preparation
The Kinect v2 somatosensory camera is powered on and its data cable is connected to a USB 3.0 port on the PC.
2. Starting indoor figure ranging
The person ranging method is used together with a radar security system, as an auxiliary subsystem for indoor radar safety holographic imaging. When the radar system senses pedestrians in the scene area, the indoor person detection system developed from this method is started as a subprocess; the interface shown after the program opens is given in Fig. 2. The indoor person ranging system receives commands from the main process through pipeline communication and begins detecting the positions of people in the indoor scene in real time. The right side of Fig. 2 lists the distances from each joint point of the person to the camera, and the real-time position of each joint point is written to a file under a target directory for use by the main process.
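A minimal sketch of how the parent radar process might launch the ranging subsystem and issue commands over a pipe. The executable name, the command string, and the output directory are hypothetical; the patent does not specify this interface.

```python
import subprocess

# Launch the indoor person-ranging system as a child process. Commands are
# sent over its stdin (pipeline communication); the subsystem writes each
# joint's real-time distance to files under the target directory, where the
# parent radar process reads them.
proc = subprocess.Popen(
    ["person_ranging.exe", "--outdir", "ranging_output"],  # hypothetical CLI
    stdin=subprocess.PIPE,
    text=True,
)
proc.stdin.write("START\n")  # hypothetical command: begin real-time detection
proc.stdin.flush()
```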
Distance measurement precision contrast test
Two methods are used to verify the ranging accuracy. The first is a comparison experiment against a laser rangefinder; the second runs two identical systems built in this project simultaneously, taking one device's data as the reference against which the other's is checked.
1. Laser rangefinder comparison verification
A human model moves in uniform linear motion on a motion platform while a laser rangefinder measures the platform's distance in real time and the detection system simultaneously measures the three-dimensional coordinates of each joint point of the model; after the experiment, the uniform linear motion and the laser rangefinder data are used to verify the accuracy of the detected data. The experiment is as follows:
Verification principle: because the laser rangefinder can measure distance in only one direction, only the Z (movement-direction) position can be compared against the laser rangefinder directly; for X (horizontal) and Y (vertical), the error is verified through the geometric relationship induced during the motion by the angle between the camera direction and the horizontal. The geometry of this angle difference is shown in Fig. 4.
Consider the same object moving toward the camera along its track: the position change in the vertical direction and the position change in the movement direction obey a geometric relationship. As the object moves from z1 to z2 along the camera direction, a displacement Δz, the vertical position of the same physical point changes from y1 to y2, a displacement Δy. From the trigonometric relationship in Fig. 4:
Let the model move a distance d along the track between two instants; projecting this displacement onto the camera's axes gives

$$ \Delta z = d \cos\alpha \qquad (7) $$

$$ \Delta y = d \sin\alpha \qquad (8) $$

From (7) and (8) it can be derived that

$$ \frac{\Delta y}{\Delta z} = \tan\alpha \qquad (9) $$

and, by definition,

$$ \Delta z = z_1 - z_2 \qquad (10) $$

$$ \Delta y = y_1 - y_2 \qquad (11) $$

From (9), (10), (11) it follows that

$$ y_1 - y_2 = (z_1 - z_2)\,\tan\alpha \qquad (12) $$

Generalizing formula (12): if at time t the position of a joint point of the model is z in the movement direction and y in the vertical direction, then

$$ y = z \tan\alpha + c \qquad (13) $$

for some constant c. Equation (13) shows that y is linearly related to z; similarly, x is linearly related to z, so this correlation can be used to verify the accuracy of the detected x and y coordinates.
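A sketch of this check in Python/NumPy; the function name and the choice to report the Pearson correlation alongside the residuals are illustrative, consistent with the experiment described below.

```python
import numpy as np

def linear_check(z: np.ndarray, y: np.ndarray):
    """Fit y = a*z + b and report the fit quality.

    By equation (13), y (and likewise x) should be linear in z for a target
    moving straight toward the camera, so a correlation near 1 and small
    residuals indicate consistent x/y measurements."""
    a, b = np.polyfit(z, y, 1)
    residuals = y - (a * z + b)
    corr = np.corrcoef(z, y)[0, 1]
    return a, b, corr, residuals
```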
Experimental equipment: one Kinect v2 camera, one high-precision turbine-motor-driven track, one human model, and one desktop PC with the positioning-system software installed.
Experimental procedure: the model is mounted on the high-precision motor-driven track platform; the Kinect camera is placed at the middle of the track, facing it, and connected to the desktop. The model moves slowly along the track under motor drive while the laser rangefinder system measures the model's displacement along the track in real time; the positioning system is started at the same time to detect the model's three-dimensional data in the moving scene, and the two systems record their positioning data independently in real time.
Experimental results: following the verification principle, MATLAB is used for data fitting. First, x = az + b and y = az + b are fitted to obtain the fitting correlations and residuals; all fitting correlations exceed 0.95, which, together with the residual results in Fig. 6, confirms that the system measures the spatial position of the moving human body correctly. For the z direction, the required accuracy of within 10 mm is verified against the laser sensor's measurements: per the analysis in 4.1, real_z and deal_z are first linearly interpolated in MATLAB and the difference diff at corresponding positions is computed; since the laser sensor and the system's zero point are separated by a fixed absolute distance d, diff - d at each measuring point gives the z-direction measurement accuracy. The X, Y, Z coordinate measurement residuals are shown in Fig. 5, and the positioning errors in each coordinate direction obtained from the residuals are:
X: 1.1508 mm, Y: 1.1708 mm, Z: 2.2589 mm.
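The z-direction verification just described can be sketched as follows (Python/NumPy). Variable names mirror the text's real_z, diff and d; alignment by timestamps is an assumption about how the interpolation was set up.

```python
import numpy as np

def z_direction_error(t_sys, z_sys, t_laser, z_laser, d):
    """Interpolate the laser track onto the system's timestamps, then
    subtract the fixed zero-point offset d between the two devices;
    the result is the per-sample z-direction measurement error."""
    real_z = np.interp(t_sys, t_laser, z_laser)  # laser reference, aligned
    diff = z_sys - real_z                        # raw difference
    return diff - d                              # remove the constant offset
```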
2. self-verification experiment
The measurement error is verified by running two identical systems simultaneously; the experiment is as follows:
Verification principle: because the two Kinect cameras have different coordinate systems, their data cannot be compared directly. The transformation between the two coordinate systems consists mainly of a rotation and a translation, and once the camera positions are fixed the transformation matrix no longer changes:

$$ P_B = R\,P_A + T $$

Assuming a point has coordinates (x1, y1, z1) in camera A and the corresponding point has coordinates (x2, y2, z2) in camera B, then

$$ \begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} = R \begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} + T $$

so the coordinates in A and the coordinates in B form a multivariate linear relation, which can be used for data verification: a Matlab multivariate linear fit is performed between the coordinates in A and the coordinates in B, and the residual expresses the data difference between the two cameras.
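A sketch of this fit in Python/NumPy: a general multivariate linear (affine) fit, as in the text, rather than a rigid-body fit that would constrain R to be orthogonal; names are illustrative.

```python
import numpy as np

def fit_camera_transform(pts_a: np.ndarray, pts_b: np.ndarray):
    """Fit pts_b ~= R @ p_a + T by multivariate linear least squares.

    pts_a, pts_b -- (n, 3) synchronized coordinates of the same joint seen
    by cameras A and B. The residuals measure the disagreement between the
    two devices, as in the self-verification experiment."""
    design = np.hstack([pts_a, np.ones((len(pts_a), 1))])  # rows [x1 y1 z1 1]
    sol, *_ = np.linalg.lstsq(design, pts_b, rcond=None)   # (4, 3) solution
    R, T = sol[:3].T, sol[3]
    residuals = pts_b - (pts_a @ sol[:3] + T)
    return R, T, residuals
```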
Experimental equipment: two Kinect v2 cameras, one motor-driven track, one human model, and two desktop PCs with the positioning-system software installed.
Experimental procedure: the model is mounted on the motor-driven track platform; the two Kinect cameras are placed on the left and right sides of the track and connected to the two desktops respectively. The model moves slowly along the track under motor drive, the positioning systems are started, and the two systems independently record positioning data in real time.
Experimental results: in this experiment a person moves freely in the scene. Because the two cameras' clocks are not synchronized, the two measurement curves are displayed and the time offset is adjusted gradually until the curves agree as closely as possible, giving an approximately synchronized result. The common data segment is then resampled by linear interpolation at 5 ms intervals, and a multivariate linear fitting function is used to obtain the residuals. In this experiment one group of data comprised 301 samples from camera A and 317 samples from camera B, and the residuals for the x, y and z coordinates are shown in Fig. 6.
The positioning errors in each coordinate direction obtained from the residuals are:
X: 10.5268 mm, Y: 3.7675 mm, Z: 12.8482 mm
With indoor pedestrians walking freely, the standard deviation of the coordinate detection error at each joint point is shown in Table 1:
Table 1. Standard deviation of the detection error at each joint
References
[1] J. Shotton, A. Fitzgibbon, M. Cook, et al. Real-Time Human Pose Recognition in Parts from Single Depth Images. CVPR 2011, pp. 1297-1304.

Claims (1)

1. A high-precision indoor figure ranging method based on a Kinect camera, characterized by comprising the following specific steps:
(1) data acquisition, namely acquiring three-dimensional position data of main bones of a human body by a Kinect somatosensory camera;
(2) fitting and calibrating, namely fitting and calibrating the acquired three-dimensional bone position data by the PC according to the calibrated position data model;
(3) performing Kalman filtering, namely smoothing the bone three-dimensional position data after fitting and calibration by adopting a Kalman filter according to historical position data of each bone point, wherein the position data after Kalman filtering is the final position information of each bone point of the human body;
(4) distance calculation, namely computing the distance between each skeleton point's position coordinates and the target point's position coordinates to obtain the distance between each skeleton point of the person and the target point;
the specific process of data acquisition is as follows: the Kinect camera monitors the indoor scene in real time, identifies the people in the scene, computes the three-dimensional position coordinates of the main joint points of the human body, and transmits them to the PC through the USB 3.0 data interface; the unified programming interface (API) of the Kinect camera is called to obtain, in real time, the three-dimensional position data of the main bones of the human body processed by Kinect;
the specific process of the fitting calibration comprises two parts of fitting model training and fitting calibration:
training a fitting model: the distance deviation is computed by comparing Kinect person-ranging data with the results of the laboratory-calibrated ranging device, and the multivariate fitting parameters relating the Kinect ranging deviation to the person's three-dimensional position coordinates are computed;
fitting and calibrating: after the raw person-position data from the Kinect camera are collected, the position deviation at that location is computed from the trained multivariate fitting parameters, and the deviation is subtracted from the raw data to obtain the calibrated data:

$$ d_c = d_0 - \Delta d $$

wherein $d_0$ is the raw data and $\Delta d$ is the deviation;
wherein:
in the training stage of the fitting model, the system performs a linear fit of the data by multiple regression analysis, an important statistical method for computing the relation between a target variable and known variables; the predicted object is set as w and the m influencing factors are $x_1, x_2, \ldots, x_m$, with the linear relationship

$$ w = b_0 + b_1 x_1 + b_2 x_2 + \cdots + b_m x_m $$

based on this model, the system takes the error as the predicted object w and the spatial position information (x, y, z) as the influencing factors; w is influenced most by z and does not satisfy a linear relation, so, to convert the nonlinear relation between z and w into a linear one, the square and cubic terms of z are added as further influencing factors of w;
in the fitting calibration stage, the system computes, at each calibration point, the difference between the distance detected by the Kinect camera and the actual distance as the detection error, and then computes the coefficient of each influencing factor by the multivariate regression method, obtaining a relational expression between the error and the detected position; during formal detection, the detected position is substituted into this expression to obtain the corresponding error;
the Kalman filtering process comprises two steps: (1) computing a predicted value, (2) updating the predicted value;
the predicted value is computed from the previous frame's estimate $d_{t-1}$ and the current statistical motion variable $V_{t-1}$, the current motion model being expressed as

$$ \hat{d}_t = d_{t-1} + V_{t-1}\,\Delta t $$

the predicted state value and the previous state value in the experimental system present a linear relationship with a multiplicative factor of 1, so the predicted value of the covariance is

$$ P_t^- = P_{t-1} + Q $$

wherein Q is the system process covariance;
according to the Kalman filtering principle, the updated estimate is

$$ d_t = \hat{d}_t + K_g\,(d_z - \hat{d}_t) $$

wherein $d_z$ is the measurement and $K_g$ is the Kalman gain, computed as

$$ K_g = \frac{P_t^-}{P_t^- + R} $$

wherein $P_t^-$ is the predicted covariance and R is the covariance between the observed value and the true value;
the covariance is updated at the same time:

$$ P_t = (1 - K_g)\,P_t^- $$

Priority Applications (1)

CN201710226448.9A, priority and filing date 2017-04-09: High-precision indoor figure ranging method based on Kinect camera (granted as CN106980116B)


Publications (2)

CN106980116A (en): published 2017-07-25
CN106980116B (en): granted 2021-06-22

Family

ID: 59346091

Family Applications (1)

CN201710226448.9A (filed 2017-04-09, Active): High-precision indoor figure ranging method based on Kinect camera

Country Status (1)

CN: CN106980116B (en), granted

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919983B (en) * 2019-03-16 2021-05-14 哈尔滨理工大学 Kinect doctor visual angle tracking-oriented Kalman filter
CN110208780B (en) * 2019-05-14 2021-10-19 北京华捷艾米科技有限公司 Method and device for measuring distance based on somatosensory camera and storage medium
CN110414339A (en) * 2019-06-21 2019-11-05 武汉倍特威视系统有限公司 Hearing room personnel's close contact recognition methods based on video stream data
CN114522410B (en) * 2022-02-14 2023-04-25 复旦大学 Badminton net passing height detection method


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103529944A (en) * 2013-10-17 2014-01-22 合肥金诺数码科技股份有限公司 Human body movement identification method based on Kinect
CN103760976A (en) * 2014-01-09 2014-04-30 华南理工大学 Kinect based gesture recognition smart home control method and Kinect based gesture recognition smart home control system
CN104390645A (en) * 2014-12-09 2015-03-04 重庆邮电大学 Intelligent wheelchair indoor navigation method based on visual information
CN105045263A (en) * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 Kinect-based robot self-positioning method
CN105512621A (en) * 2015-11-30 2016-04-20 华南理工大学 Kinect-based badminton motion guidance system
CN106292710A (en) * 2016-10-20 2017-01-04 西北工业大学 Four rotor wing unmanned aerial vehicle control methods based on Kinect sensor

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Hand Grasped Object Segmentation Method with Kinect by using Body Dimension Database; Naruyuki Hisatsuka et al.; The Japan Society of Mechanical Engineers; 2013-05-25; pp. 1-4 *
Research on real-time environment reconstruction and autonomous navigation for a Kinect2-based photovoltaic-panel cleaning robot; Zhang Mingming; China Master's Theses Full-text Database, Information Science and Technology; 2017-02-15 (No. 02); full text *
Research on real-time sign language recognition based on Kinect; Ye Ping; China Master's Theses Full-text Database, Information Science and Technology; 2017-03-15 (No. 03); pp. 6-17, 47-48 *
A pixel-level depth measurement error compensation method for Kinect; Niu Zhenqi et al.; Journal of Optoelectronics·Laser; 2016-11; Vol. 27, No. 11; pp. 1169-1174 *
A moving-hand detection and fingertip tracking algorithm based on depth images; Liu Weihua et al.; Journal of Computer Applications; 2014-05-10 (No. 5); pp. 1442-1447 *



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant