CN113077519B - Multi-camera external parameter automatic calibration method based on human skeleton extraction - Google Patents

Multi-camera external parameter automatic calibration method based on human skeleton extraction

Info

Publication number
CN113077519B
CN113077519B (application number CN202110289301.0A)
Authority
CN
China
Prior art keywords
camera
image
points
cameras
joint points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110289301.0A
Other languages
Chinese (zh)
Other versions
CN113077519A (en)
Inventor
关俊志
耿虎军
高峰
柴兴华
陈彦桥
张泽勇
李晨阳
王雅涵
彭会湘
陈韬亦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 54 Research Institute filed Critical CETC 54 Research Institute
Priority to CN202110289301.0A priority Critical patent/CN113077519B/en
Publication of CN113077519A publication Critical patent/CN113077519A/en
Application granted granted Critical
Publication of CN113077519B publication Critical patent/CN113077519B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Abstract

The invention discloses a multi-camera external parameter automatic calibration method based on human skeleton extraction, belonging to the technical field of computer vision. Each frame of image is processed, and the positions of the human skeletal joint points in the image are extracted by a deep learning method; any one camera coordinate system is selected as the world coordinate system, and the external parameters of the other cameras are calculated through the essential matrix; the scale of the translation vector is calculated using the human body size information. The method takes the human skeletal joint points as feature points and the point cloud formed by their motion trajectories as a virtual calibration object, then calculates the essential matrix between cameras, obtains the relative pose between the cameras through essential matrix decomposition, and completes real-time online accurate external parameter calibration of a multi-camera system.

Description

Multi-camera external parameter automatic calibration method based on human skeleton extraction
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a method for calibrating online external parameters of a multi-camera system through pedestrian skeleton extraction.
Background
In computer vision and artificial intelligence, the applications of multi-camera systems in scene reconstruction, smart-city safety monitoring, airport monitoring, motion capture, sports video analysis, industrial measurement and other fields require accurate and fast external parameter calibration. The camera external parameters are the set of parameters describing attributes such as the camera's position and rotation in a world coordinate system, so calibration must be performed after the cameras are installed. Multi-camera external parameter calibration is the process of obtaining these external parameters.
Conventional calibration methods rely on known scene structure information; they usually involve manufacturing an accurate calibration object, a complex calibration procedure and high-precision known calibration information, and require skilled operation by a professional. Moreover, every subsequent change of the camera positions requires recalibration.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a multi-camera external parameter automatic calibration method, which takes the pedestrians commonly present in a scene as the calibration object, can realize online real-time calibration of a camera system, and provides a basis for later applications such as scene understanding and monitoring.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a multi-phase external parameter automatic calibration method based on human skeleton extraction comprises the following steps:
(1) Enabling a single pedestrian to walk in a camera monitoring area, and simultaneously recording videos by a plurality of cameras to obtain synchronized videos;
(2) Intercepting images of pedestrians with the same frame number at different positions from each video;
(3) Processing each frame of image, and extracting pedestrian bone joint points in the image by using a deep learning algorithm to obtain an image pixel coordinate of each bone joint point;
(4) Calculating the image physical dimension coordinates of each bone joint point by using the image pixel coordinates of each bone joint point according to the known camera internal parameters;
(5) And selecting any one camera coordinate system as a world coordinate system, and calculating the external parameters of other cameras by using the image physical dimension coordinates of the skeletal joint points and the essential matrix.
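Step (4) amounts to left-multiplying each homogeneous pixel coordinate by the inverse of the intrinsic matrix. The patent gives no code, so the following NumPy sketch (the intrinsic values are illustrative assumptions, not values from the patent) shows the conversion:

```python
import numpy as np

def pixel_to_normalized(pts_px, K):
    """Convert pixel coordinates to normalized (physical-size) image
    coordinates by applying the inverse of the intrinsic matrix K."""
    pts = np.asarray(pts_px, dtype=float)       # (N, 2) pixel coordinates
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])              # (N, 3) homogeneous pixels
    norm = (np.linalg.inv(K) @ homog.T).T       # x_norm = K^-1 x_px
    return norm[:, :2] / norm[:, 2:3]           # back to inhomogeneous form

# Example with an assumed intrinsic matrix (fx = fy = 1000, cx = 320, cy = 240)
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
joints_px = [(320.0, 240.0), (420.0, 340.0)]
print(pixel_to_normalized(joints_px, K))
# the principal point maps to (0, 0); (420, 340) maps to (0.1, 0.1)
```

These normalized coordinates are the "image physical-size coordinates" that the essential-matrix constraint of step (5) is written in.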
Wherein, the specific mode of the step (3) is as follows:
(301) Performing neural network prediction on each frame of image to obtain a heatmap and part affinity fields for each skeletal joint point;
(302) Extracting the specific image positions and confidence values of the joints from the heatmaps by applying a non-maximum suppression algorithm;
(303) Finding limb links by using the extracted joint information and the part affinity fields to obtain all connections, each connection being regarded as a limb;
(304) Limbs sharing a joint are regarded as limbs of the same person; the limbs are assembled to form a person, and the image pixel coordinates of all the skeletal joint points are obtained.
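The peak extraction of step (302) can be sketched as follows. This is a minimal NumPy illustration with a 4-neighbour comparison and an assumed confidence threshold; the patent itself does not fix these details:

```python
import numpy as np

def heatmap_peaks(heatmap, thresh=0.1):
    """Non-maximum suppression on a joint heatmap: keep pixels that
    exceed `thresh` and are strictly greater than their 4 neighbours.
    Returns (row, col, confidence) tuples."""
    h = np.pad(heatmap, 1, mode="constant")     # pad so borders compare safely
    c = h[1:-1, 1:-1]
    peaks = ((c > thresh) &
             (c > h[:-2, 1:-1]) & (c > h[2:, 1:-1]) &   # up / down neighbours
             (c > h[1:-1, :-2]) & (c > h[1:-1, 2:]))    # left / right neighbours
    ys, xs = np.nonzero(peaks)
    return [(int(y), int(x), float(heatmap[y, x])) for y, x in zip(ys, xs)]

# Toy heatmap with one clear maximum at row 2, column 3
hm = np.zeros((5, 5))
hm[2, 3] = 0.9
hm[2, 2] = 0.4
print(heatmap_peaks(hm))   # only (2, 3) survives the suppression
```

The surviving peaks are the joint candidates that steps (303)-(304) link into limbs and persons.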
Wherein, the specific mode of the step (5) is as follows:
(501) Recording the discrete three-dimensional point cloud of the three-dimensional positions of the human skeletal joint points under the different cameras as $P_{i,t}^{k}$, where $k$ is the camera index, $i$ is the skeletal joint index, and $t$ denotes the time instant;
(502) The camera coordinate system of one camera is arbitrarily selected as the world coordinate system, and the discrete three-dimensional point cloud of this camera is $P_{i,t}^{1}$; then
$$P_{i,t}^{1}=R_k P_{i,t}^{k}+c_k$$
where $R_k$ and $c_k$, the rotation matrix and the translation vector, are the external parameters of camera $k$;
(503) Selecting the matching points of at least eight pairs of skeletal joint points to calculate the essential matrix $E_k$, and then decomposing the essential matrix $E_k$ to obtain $c_k$ and $R_k$; wherein the essential matrix $E_k$ is calculated as follows:
An imaged skeletal joint point and the optical centers of the two cameras form a plane, i.e. the three vectors $P_{i,t}^{1}$, $R_k P_{i,t}^{k}$ and $c_k$ lie in the same plane, which gives:
$$\left(P_{i,t}^{1}\right)^{T}\left(c_k \times R_k P_{i,t}^{k}\right)=0$$
Substituting $P_{i,t}^{1}=z_{i,t}^{1}\, p_{i,t}^{1}$ and $P_{i,t}^{k}=z_{i,t}^{k}\, p_{i,t}^{k}$ into the above formula and eliminating $z_{i,t}^{1}$ and $z_{i,t}^{k}$ yields:
$$\left(p_{i,t}^{1}\right)^{T} E_k\, p_{i,t}^{k}=0$$
where $E_k=[c_k]_{\times} R_k$ is the essential matrix, $[c_k]_{\times}$ is the antisymmetric matrix of the vector $c_k$, $p_{i,t}^{1}$ and $p_{i,t}^{k}$ are respectively the image physical-size coordinates of skeletal joint point $i$ at time $t$ in the selected camera and in the camera labeled $k$, and $z_{i,t}^{1}$ and $z_{i,t}^{k}$ are respectively the depth coordinates of skeletal joint point $i$ at time $t$ in the selected camera and in the camera labeled $k$;
(504) By triangulation with the $c_k$ and $R_k$ calculated in step (503), calculating the coordinates $P_{i,t}^{k}$ and $P_{j,t}^{k}$ of two different skeletal joint points in the coordinate system of the camera labeled $k$; the distance between the two skeletal joint points is $\|P_{i,t}^{k}-P_{j,t}^{k}\|$, and using the known actual physical length $L_{ij}$ between the two skeletal joint points, the scale information $\lambda_k$ is calculated:
$$\lambda_k=\frac{L_{ij}}{\left\|P_{i,t}^{k}-P_{j,t}^{k}\right\|}$$
(505) Applying the scale information to the translation vector gives the actual translation vector of each camera as $\lambda_k c_k$.
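Step (503) can be realized with the classical linear eight-point algorithm followed by an SVD decomposition of the essential matrix. The sketch below is one standard NumPy implementation of that technique, not the patent's own code; the function names are illustrative, and the cheirality test that selects among the four candidate poses is left out:

```python
import numpy as np

def estimate_essential(p1, pk):
    """Linear eight-point estimate of E_k from >= 8 pairs of normalized
    image coordinates satisfying p1^T E_k pk = 0 (a sketch of step 503)."""
    h1 = np.c_[p1, np.ones(len(p1))]             # homogeneous coords, camera 1
    hk = np.c_[pk, np.ones(len(pk))]             # homogeneous coords, camera k
    A = np.stack([np.kron(a, b) for a, b in zip(h1, hk)])
    E = np.linalg.svd(A)[2][-1].reshape(3, 3)    # null vector of A -> vec(E)
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt     # enforce essential-matrix form

def decompose_essential(E):
    """The four (R, c) candidates of E = [c]x R; in practice the candidate
    placing the triangulated joints in front of both cameras is kept."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    c = U[:, 2]                                  # translation direction, unit scale
    return [(U @ W @ Vt, c), (U @ W @ Vt, -c),
            (U @ W.T @ Vt, c), (U @ W.T @ Vt, -c)]
```

The recovered $c_k$ is only a direction (unit length), which is exactly why steps (504)-(505) need the human-body length to fix the scale.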
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides an effective multi-camera system calibration method, which can obtain a good calibration effect without additional calibration objects or a tedious calibration process.
2. The method is simple and easy to implement, and can perform automatic online calibration without shutting down the multi-camera system, thereby greatly improving calibration efficiency.
3. Feature point matching, scale calculation and online calibration of multi-camera systems have long been research hotspots in the field. Current methods fall roughly into two types. The first is calibration based on a traditional calibration object, which achieves good results but demands high manufacturing precision of the calibration object, involves a tedious calibration process and cannot be performed online. The second is self-calibration, which needs no specially made calibration object and establishes correspondences between cameras through image feature points; however, it cannot establish feature correspondences when the viewing angle between cameras is large, which makes it difficult to apply in real scenes, and its translation vector carries no actual scale information. In view of this, the invention first uses human skeletal joint points as feature points and the point cloud formed by their motion trajectories as a virtual calibration object, calculates the relative pose between cameras by the essential matrix principle, and proposes a scale calculation method based on human physical dimensions to solve the scale-uncertainty problem in camera calibration. This is an important innovation over the prior art.
Drawings
Fig. 1 is a flowchart of a calibration method of a multi-camera system according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a human skeleton extracted by a deep learning algorithm in the embodiment of the present invention.
Fig. 3 is a schematic diagram of an essential matrix adopted in the embodiment of the present invention.
Detailed description of the invention
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
A multi-camera external parameter automatic calibration method based on human body skeleton extraction comprises the following steps:
step 1, after a multi-camera system is installed, enabling a single pedestrian to walk in a camera monitoring area, and simultaneously recording videos by multiple cameras to obtain synchronized videos;
step 2, intercepting images of pedestrians with the same frame number at different positions from each video;
step 3, processing each frame of image, extracting pedestrian skeleton joint points in the image by using a convolutional neural network, and obtaining the pixel coordinate of each joint point:
step 3.1, carrying out neural network prediction on the image to obtain a thermodynamic diagram (Heatmap) and a partial Affinity Field (Part Affinity Field) of each skeletal joint point;
3.2, extracting the specific image position and the confidence coefficient of the joint from the thermodynamic diagram by applying a non-maximum suppression (NMS) algorithm;
3.3, finding the limb links by utilizing the joint information and part of the affinity fields to obtain all connections, wherein each connection can be regarded as a limb;
and 3.4, after all the limbs are obtained, regarding the limbs with the same joint as the limbs of the same person, assembling the limbs to form a person, and obtaining image pixel coordinates of the bone joint points of the person.
Step 4, calculating the image physical-size coordinates of each skeletal joint point according to the known camera internal parameters, denoted $p_{i,t}^{k}$, where $k$ is the camera index, $i$ is the skeletal joint index, and $t$ denotes the time instant;
Step 5, selecting any one camera coordinate system as the world coordinate system, and calculating the external parameters of the other cameras through the essential matrix:
Step 5.1, recording the discrete three-dimensional point cloud of the three-dimensional positions of the human skeletal joint points under the different cameras as $P_{i,t}^{k}$, where $k$ is the camera index, $i$ is the skeletal joint index, and $t$ denotes the time instant;
Step 5.2, selecting the camera coordinate system of the camera labeled 1 as the world coordinate system, the discrete three-dimensional point cloud of this camera being $P_{i,t}^{1}$; then
$$P_{i,t}^{1}=R_k P_{i,t}^{k}+c_k$$
where $R_k$ and $c_k$, the rotation matrix and the translation vector, are the external parameters of camera $k$;
Step 5.3, selecting several pairs of matched skeletal joint points to calculate the essential matrix $E_k$, and then decomposing the essential matrix $E_k$ to obtain $c_k$ and $R_k$; wherein the essential matrix $E_k$ is calculated as follows:
The imaged skeletal joint point and the optical centers of the two cameras form a plane, i.e. the three vectors $P_{i,t}^{1}$, $R_k P_{i,t}^{k}$ and $c_k$ lie in the same plane, so that:
$$\left(P_{i,t}^{1}\right)^{T}\left(c_k \times R_k P_{i,t}^{k}\right)=0$$
Substituting $P_{i,t}^{1}=z_{i,t}^{1}\, p_{i,t}^{1}$ and $P_{i,t}^{k}=z_{i,t}^{k}\, p_{i,t}^{k}$ into the above formula and eliminating $z_{i,t}^{1}$ and $z_{i,t}^{k}$ yields:
$$\left(p_{i,t}^{1}\right)^{T} E_k\, p_{i,t}^{k}=0$$
where $E_k=[c_k]_{\times} R_k$ is the essential matrix and $[c_k]_{\times}$ is the antisymmetric matrix of the vector $c_k$;
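The coplanarity argument above can be checked numerically. The small NumPy sketch below (the rotation angle and translation are arbitrary illustrative values, not from the patent) builds $E_k = [c_k]_{\times} R_k$ and verifies that the constraint vanishes for a synthetic joint:

```python
import numpy as np

def skew(c):
    """Antisymmetric matrix [c]x, so that skew(c) @ v == np.cross(c, v)."""
    return np.array([[0.0, -c[2], c[1]],
                     [c[2], 0.0, -c[0]],
                     [-c[1], c[0], 0.0]])

# Assumed pose of camera k relative to camera 1 (illustrative values only)
theta = 0.2
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
c = np.array([1.0, 0.1, 0.05])
E = skew(c) @ R                      # essential matrix E_k = [c_k]x R_k

Pk = np.array([0.3, -0.2, 5.0])      # a joint in camera-k coordinates
P1 = R @ Pk + c                      # the same joint in camera-1 coordinates
p1, pk = P1 / P1[2], Pk / Pk[2]      # the depths z cancel in the constraint
print(p1 @ E @ pk)                   # ~ 0: the coplanarity constraint holds
```

The residual is zero up to floating-point error because $(R P^k + c)^T (c \times R P^k) = 0$ identically, and dividing both sides by the depths does not change that.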
Step 5.4, the $c_k$ obtained in step 5.3 is normalized to unit length, i.e. $\|c_k\|=1$; in practice, generally $\|c_k\| \neq \|c_m\|$, i.e. the distances from the different cameras to camera 1 are unequal, so the scale information must be calculated. Take two different skeletal joint points; with the $c_k$ and $R_k$ obtained above, triangulation gives their coordinates in the coordinate system of camera $k$ as $P_{i,t}^{k}$ and $P_{j,t}^{k}$, and the distance between them is $\|P_{i,t}^{k}-P_{j,t}^{k}\|$. If the actual physical length between the two skeletal joint points is known to be $L_{ij}$, for example the average length of a human arm, then:
$$\lambda_k=\frac{L_{ij}}{\left\|P_{i,t}^{k}-P_{j,t}^{k}\right\|}$$
Step 5.5, finally, the calculated scale information is applied to the translation vector to obtain the translation vector $\lambda_k c_k$ of each camera.
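Steps 5.4 and 5.5 — triangulating two joints and recovering the scale from a known limb length — can be sketched as follows. This is a minimal linear triangulation in NumPy; the pose and joint values in the usage example are assumptions, and a robust system would average $\lambda_k$ over many frames and joint pairs:

```python
import numpy as np

def triangulate_depth(p1, pk, R, c):
    """Solve p1 x (z * R @ [pk, 1] + c) = 0 in least squares for the depth z
    of a joint in camera k's frame, given normalized image coordinates p1
    (reference camera) and pk (camera k) and the unit-scale pose P1 = R Pk + c."""
    h1 = np.append(p1, 1.0)
    hk = np.append(pk, 1.0)
    a = np.cross(h1, R @ hk)              # the constraint reads z * a + b = 0
    b = np.cross(h1, c)
    return -float(a @ b) / float(a @ a)

def scale_factor(joint_a, joint_b, known_length):
    """Step 5.4: lambda_k = known physical length / reconstructed distance."""
    return known_length / np.linalg.norm(joint_a - joint_b)

# Illustrative pose and two ground-truth joints (assumed values)
th = 0.2
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
c = np.array([1.0, 0.1, 0.05])
Pa, Pb = np.array([0.3, -0.2, 5.0]), np.array([0.5, 0.4, 6.0])
joints = []
for P in (Pa, Pb):
    P1 = R @ P + c                                    # project into both views
    z = triangulate_depth(P1[:2] / P1[2], P[:2] / P[2], R, c)
    joints.append(z * np.append(P[:2] / P[2], 1.0))   # back-project at depth z
lam = scale_factor(joints[0], joints[1], np.linalg.norm(Pa - Pb))
print(lam)   # recovers scale 1.0 here, since the "known" length is the true one
```

With a real known length such as an average arm length, the same `scale_factor` call yields the $\lambda_k$ that step 5.5 multiplies into the translation vector.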
The following is a more specific embodiment:
Referring to fig. 1, a multi-camera external parameter automatic calibration method based on human skeleton extraction comprises the following steps:
step 1, after a multi-camera system is installed, enabling a single pedestrian to walk in a camera monitoring area, and simultaneously recording videos by multiple cameras to obtain synchronized videos;
step 2, intercepting images of pedestrians with the same frame number at different positions from each video;
step 3, processing each frame of image, extracting pedestrian skeleton joint points in the image by using a convolutional neural network, and obtaining pixel coordinates of each joint point, as shown in fig. 2, the method comprises the following substeps:
Step 3.1, performing neural network prediction on the image to obtain a heatmap (Heatmap) and part affinity fields (Part Affinity Fields) for each joint point;
Step 3.2, extracting the specific image positions and confidence values of the joints from the heatmaps by applying a non-maximum suppression (NMS) algorithm;
Step 3.3, finding limb links by using the joint information and the part affinity fields to obtain all connections, each connection being regarded as a limb;
Step 3.4, after all limbs are obtained, limbs sharing a joint are regarded as limbs of the same person; the limbs are assembled to form a person, and the image pixel coordinates of the human skeletal joint points are obtained. The specific skeleton extraction algorithm is given in document [1]: Z. Cao, G. Hidalgo Martinez, T. Simon, S. Wei and Y. A. Sheikh, "OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, doi: 10.1109/TPAMI.2019.2929257.
Step 4, calculating the image physical-size coordinates of each skeletal joint point according to the known camera internal parameters, denoted $p_{i,t}^{k}$, where $k$ is the camera index, $i$ is the skeletal joint index, and $t$ denotes the time instant;
Step 5, selecting any one camera coordinate system as the world coordinate system, and calculating the external parameters of the other cameras through the essential matrix, comprising the following substeps:
Step 5.1, recording the discrete three-dimensional point cloud of the three-dimensional positions of the human skeletal joint points under the different cameras as $P_{i,t}^{k}$, where $k$ is the camera index, $i$ is the skeletal joint index, and $t$ denotes the time instant;
Step 5.2, selecting the camera coordinate system of the camera labeled 1 as the world coordinate system, the discrete three-dimensional point cloud of this camera being $P_{i,t}^{1}$; then
$$P_{i,t}^{1}=R_k P_{i,t}^{k}+c_k$$
where $R_k$ and $c_k$, the rotation matrix and the translation vector, are the external parameters of camera $k$;
Step 5.3, obtaining the essential matrix $E_k$ by calculating from at least eight pairs of matched skeletal joint points, and then decomposing the essential matrix $E_k$ to obtain $c_k$ and $R_k$; the specific algorithm is given in document [2]:
[2] H. C. Longuet-Higgins. A computer algorithm for reconstructing a scene from two projections. Nature, vol. 293, pages 133-135, September 1981.
The essential matrix $E_k$ is calculated as follows:
The imaged skeletal joint point and the optical centers of the two cameras form a plane, i.e. the three vectors $P_{i,t}^{1}$, $R_k P_{i,t}^{k}$ and $c_k$ lie in the same plane, as shown in fig. 3, and thus:
$$\left(P_{i,t}^{1}\right)^{T}\left(c_k \times R_k P_{i,t}^{k}\right)=0$$
Substituting $P_{i,t}^{1}=z_{i,t}^{1}\, p_{i,t}^{1}$ and $P_{i,t}^{k}=z_{i,t}^{k}\, p_{i,t}^{k}$ into the above formula and eliminating $z_{i,t}^{1}$ and $z_{i,t}^{k}$ yields:
$$\left(p_{i,t}^{1}\right)^{T} E_k\, p_{i,t}^{k}=0$$
where $E_k=[c_k]_{\times} R_k$ is the essential matrix and $[c_k]_{\times}$ is the antisymmetric matrix of the vector $c_k$;
Step 5.4, the $c_k$ obtained in step 5.3 is normalized to unit length, i.e. $\|c_k\|=1$; in practice, generally $\|c_k\| \neq \|c_m\|$, where $k$ and $m$ are two different camera labels, i.e. the distances from the different cameras to camera 1 are unequal, so the scale information must be calculated. Take two different skeletal joint points; with the $c_k$ and $R_k$ obtained above, triangulation gives their coordinates in the coordinate system of camera $k$ as $P_{i,t}^{k}$ and $P_{j,t}^{k}$, and the distance between them is $\|P_{i,t}^{k}-P_{j,t}^{k}\|$. If the actual physical length between the two skeletal joint points is known to be $L_{ij}$, for example the average length of a human arm, then:
$$\lambda_k=\frac{L_{ij}}{\left\|P_{i,t}^{k}-P_{j,t}^{k}\right\|}$$
Step 5.5, finally, the calculated scale information is applied to the translation vector to obtain the translation vector $\lambda_k c_k$ of each camera. The reprojection error of the method's calibration is 1.4 pixels, the attitude error is 0.5 degrees and the offset error is 1.0%, i.e. the calibration result is accurate.
In summary, the method processes each frame of image and extracts the positions of the human skeletal joint points by a deep learning method; any one camera coordinate system is selected as the world coordinate system, and the external parameters of the other cameras are calculated through the essential matrix; the scale of the translation vector is calculated using the human body size information. The invention takes human skeletal joint points as feature points and the point cloud formed by their motion trajectories as a virtual calibration object, solves the camera rotation matrix and translation vector with the essential matrix, and proposes a translation-vector scale calculation method based on human body dimension information to complete real-time online accurate external parameter calibration of a multi-camera system.
The above description is only one embodiment of the present invention, and is not intended to limit the present invention. Any modification, improvement or the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (2)

1. A multi-camera external parameter automatic calibration method based on human skeleton extraction, characterized by comprising the following steps:
(1) Enabling a single pedestrian to walk in a camera monitoring area, and simultaneously recording videos by a plurality of cameras to obtain synchronized videos;
(2) Intercepting images of pedestrians with the same frame number at different positions from each video;
(3) Processing each frame of image, and extracting pedestrian bone joint points in the image by using a deep learning algorithm to obtain image pixel coordinates of each bone joint point;
(4) Calculating the image physical dimension coordinates of each bone joint point by using the image pixel coordinates of each bone joint point according to the known camera internal parameters;
(5) Selecting any one camera coordinate system as a world coordinate system, and calculating external parameters of other cameras by using the image physical dimension coordinates of the skeletal joint points and the essential matrix;
wherein, the concrete mode of the step (5) is as follows:
(501) Recording the discrete three-dimensional point cloud of the three-dimensional positions of the human skeletal joint points under the different cameras as $P_{i,t}^{k}$, where $k$ is the camera index, $i$ is the skeletal joint index, and $t$ denotes the time instant;
(502) The camera coordinate system of one camera is arbitrarily selected as the world coordinate system, and the discrete three-dimensional point cloud of this camera is $P_{i,t}^{1}$; then
$$P_{i,t}^{1}=R_k P_{i,t}^{k}+c_k$$
where $R_k$ and $c_k$, the rotation matrix and the translation vector, are the external parameters of camera $k$;
(503) Selecting matching points of a plurality of pairs of skeletal joint points to calculate the essential matrix $E_k$, and then decomposing the essential matrix $E_k$ to obtain $c_k$ and $R_k$; wherein the essential matrix $E_k$ is calculated as follows:
the imaged skeletal joint point and the optical centers of the two cameras form a plane, i.e. the three vectors $P_{i,t}^{1}$, $R_k P_{i,t}^{k}$ and $c_k$ lie in the same plane, which gives:
$$\left(P_{i,t}^{1}\right)^{T}\left(c_k \times R_k P_{i,t}^{k}\right)=0$$
substituting $P_{i,t}^{1}=z_{i,t}^{1}\, p_{i,t}^{1}$ and $P_{i,t}^{k}=z_{i,t}^{k}\, p_{i,t}^{k}$ into the above formula and eliminating $z_{i,t}^{1}$ and $z_{i,t}^{k}$ yields:
$$\left(p_{i,t}^{1}\right)^{T} E_k\, p_{i,t}^{k}=0$$
wherein $E_k=[c_k]_{\times} R_k$ is the essential matrix, $[c_k]_{\times}$ is the antisymmetric matrix of the vector $c_k$, $p_{i,t}^{1}$ and $p_{i,t}^{k}$ are respectively the image physical-size coordinates of skeletal joint point $i$ at time $t$ in the selected camera and in the camera labeled $k$, and $z_{i,t}^{1}$ and $z_{i,t}^{k}$ are respectively the depth coordinates of skeletal joint point $i$ at time $t$ in the selected camera and in the camera labeled $k$;
(504) By triangulation with the $c_k$ and $R_k$ calculated in step (503), calculating the coordinates $P_{i,t}^{k}$ and $P_{j,t}^{k}$ of two different skeletal joint points in the coordinate system of the camera labeled $k$; the distance between the two skeletal joint points is $\|P_{i,t}^{k}-P_{j,t}^{k}\|$, and using the known actual physical length $L_{ij}$ between the two skeletal joint points, the scale information $\lambda_k$ is calculated:
$$\lambda_k=\frac{L_{ij}}{\left\|P_{i,t}^{k}-P_{j,t}^{k}\right\|}$$
(505) Applying the scale information to the translation vector to obtain the actual translation vector of each camera as $\lambda_k c_k$.
2. The multi-camera external parameter automatic calibration method based on human skeleton extraction according to claim 1, characterized in that the specific manner of step (3) is as follows:
(301) Performing neural network prediction on each frame of image to obtain a heatmap and part affinity fields for each skeletal joint point;
(302) Extracting the specific image positions and confidence values of the joints from the heatmaps by applying a non-maximum suppression algorithm;
(303) Finding limb links by using the extracted joint information and the part affinity fields to obtain all connections, each connection being regarded as a limb;
(304) Limbs sharing a joint are regarded as limbs of the same person; the limbs are assembled to form a person, and the image pixel coordinates of all the skeletal joint points are obtained.
CN202110289301.0A 2021-03-18 2021-03-18 Multi-camera external parameter automatic calibration method based on human skeleton extraction Active CN113077519B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110289301.0A CN113077519B (en) 2021-03-18 2021-03-18 Multi-camera external parameter automatic calibration method based on human skeleton extraction


Publications (2)

Publication Number Publication Date
CN113077519A CN113077519A (en) 2021-07-06
CN113077519B true CN113077519B (en) 2022-12-09

Family

ID=76612748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110289301.0A Active CN113077519B (en) 2021-03-18 2021-03-18 Multi-camera external parameter automatic calibration method based on human skeleton extraction

Country Status (1)

Country Link
CN (1) CN113077519B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113925497B (en) * 2021-10-22 2023-09-15 吉林大学 Binocular vision measurement system-based automobile passenger riding posture extraction method
CN116030137A (en) * 2021-10-27 2023-04-28 华为技术有限公司 Parameter determination method and related equipment
CN114758016B (en) * 2022-06-15 2022-09-13 超节点创新科技(深圳)有限公司 Camera equipment calibration method, electronic equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103456016A (en) * 2013-09-06 2013-12-18 同济大学 View-independent calibration method for motion-sensing camera networks
CN108288291A (en) * 2018-06-07 2018-07-17 北京轻威科技有限责任公司 Multi-camera calibration based on a single-point calibration object
CN110458897A (en) * 2019-08-13 2019-11-15 北京积加科技有限公司 Multi-camera automatic calibration method and system, and monitoring method and system
CN111667540A (en) * 2020-06-09 2020-09-15 中国电子科技集团公司第五十四研究所 Multi-camera system calibration method based on pedestrian head recognition

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034238B (en) * 2010-12-13 2012-07-18 西安交通大学 Multi-camera system calibration method based on optical imaging probe and visual graph structure
CN110969668B (en) * 2019-11-22 2023-05-02 大连理工大学 Stereo calibration algorithm of long-focus binocular camera
CN111028271B (en) * 2019-12-06 2023-04-14 浩云科技股份有限公司 Multi-camera personnel three-dimensional positioning and tracking system based on human skeleton detection
CN111739103A (en) * 2020-06-18 2020-10-02 苏州炫感信息科技有限公司 Multi-camera calibration system based on single-point calibration object
CN112001926B (en) * 2020-07-04 2024-04-09 西安电子科技大学 RGBD multi-camera calibration method, system and application based on multi-dimensional semantic mapping


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Automatic Multi-Camera Extrinsic Parameter Calibration Based on Pedestrian Torsors; Anh Minh Truong et al; Sensors; 2019-11-15; full text *


Similar Documents

Publication Publication Date Title
Xiang et al. Monocular total capture: Posing face, body, and hands in the wild
CN113077519B (en) Multi-phase external parameter automatic calibration method based on human skeleton extraction
Bogo et al. Dynamic FAUST: Registering human bodies in motion
CN108154550B (en) RGBD camera-based real-time three-dimensional face reconstruction method
CN107194991B (en) Three-dimensional global visual monitoring system construction method based on skeleton point local dynamic update
CN111126304A (en) Augmented reality navigation method based on indoor natural scene image deep learning
CN109919141A (en) A pedestrian re-identification method based on skeleton pose
US20150243035A1 (en) Method and device for determining a transformation between an image coordinate system and an object coordinate system associated with an object of interest
CN109758756B (en) Gymnastics video analysis method and system based on 3D camera
CN112907631B (en) Real-time multi-RGB-camera human body motion capture system with a feedback mechanism
CN112330813A (en) Clothed three-dimensional human body model reconstruction method based on a monocular depth camera
CN108073855A (en) A facial expression recognition method and system
CN111998862A (en) Dense binocular SLAM method based on BNN
CN115376034A (en) Motion video acquisition and editing method and device based on human body three-dimensional posture space-time correlation action recognition
CN113255487A (en) Three-dimensional real-time human body posture recognition method
CN111667540B (en) Multi-camera system calibration method based on pedestrian head recognition
Rosenhahn et al. Automatic human model generation
CN116051648A (en) Camera internal and external parameter calibration method based on cross visual angle multi-human semantic matching
CN113284249B (en) Multi-view three-dimensional human body reconstruction method and system based on graph neural network
CN109631850B (en) Inclined camera shooting relative positioning method based on deep learning
CN114548224A (en) 2D human body pose generation method and device for strong interaction human body motion
Morency et al. Fast 3d model acquisition from stereo images
CN112767481A (en) High-precision positioning and mapping method based on visual edge features
JP3860287B2 (en) Motion extraction processing method, motion extraction processing device, and program storage medium
CN117671738B (en) Human body posture recognition system based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant