CN113077519B - Multi-camera external parameter automatic calibration method based on human skeleton extraction - Google Patents
Multi-camera external parameter automatic calibration method based on human skeleton extraction
- Publication number
- CN113077519B (granted publication of application CN202110289301.0A)
- Authority
- CN
- China
- Prior art keywords
- camera
- image
- points
- cameras
- joint points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Abstract
The invention discloses a multi-camera external parameter automatic calibration method based on human skeleton extraction, belonging to the technical field of computer vision. Each frame of image is processed, and the positions of the human skeletal joint points in the image are extracted with a deep learning method; any one camera coordinate system is selected as the world coordinate system, and the external parameters of the other cameras are calculated through the essential matrix; the scale of the translation vector is calculated from known human body dimensions. The method uses human skeletal joint points as feature points and the point cloud formed by their motion trajectories as a virtual calibration object; it then calculates the essential matrix between cameras, obtains the relative pose between cameras by decomposing the essential matrix, and thereby completes real-time, online, accurate external parameter calibration of a multi-camera system.
Description
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a method for online calibration of the external parameters of a multi-camera system through pedestrian skeleton extraction.
Background
In computer vision and artificial intelligence, applications of multi-camera systems in scene reconstruction, smart-city safety monitoring, airport monitoring, motion capture, sports video analysis, industrial measurement and related fields require accurate and fast external parameter calibration. The camera external parameters are a set of parameters describing attributes such as the position and orientation of a camera in a world coordinate system, so calibration must be performed after the cameras are installed. Multi-camera external parameter calibration is the process of obtaining the external parameters of every camera in the system.
Conventional calibration methods rely on known scene structure information; they usually involve manufacturing an accurate calibration object, a complex calibration procedure and high-precision prior calibration data, and require skilled operation by a professional. Moreover, every subsequent repositioning of the camera set requires recalibration.
Disclosure of Invention
To solve these technical problems, the invention provides a multi-camera external parameter automatic calibration method that uses the pedestrians commonly present in a scene as the calibration object, enables online real-time calibration of the camera system, and provides a basis for later applications such as scene understanding and monitoring.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
A multi-camera external parameter automatic calibration method based on human skeleton extraction comprises the following steps:
(1) Have a single pedestrian walk through the camera monitoring area while all cameras record video simultaneously, obtaining synchronized videos;
(2) From each video, extract frames with the same frame indices, capturing the pedestrian at different positions;
(3) Process each frame of image and extract the pedestrian's skeletal joint points with a deep learning algorithm, obtaining the image pixel coordinates of each skeletal joint point;
(4) Calculate the image physical coordinates of each skeletal joint point from its image pixel coordinates using the known camera internal parameters;
(5) Select any one camera coordinate system as the world coordinate system, and calculate the external parameters of the other cameras using the image physical coordinates of the skeletal joint points and the essential matrix.
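As a brief illustrative sketch of step (4) (not part of the patent; the intrinsic matrix values below are assumed), the image physical (normalized) coordinates are obtained from pixel coordinates as $\hat{p} = K^{-1}(u, v, 1)^{\mathsf T}$:

```python
import numpy as np

# Assumed intrinsic matrix (focal length 800 px, principal point (320, 240));
# these values are illustrative, not from the patent.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pixel_to_normalized(K, u, v):
    """Map a pixel coordinate (u, v) to the image physical (normalized)
    coordinate (x, y, 1)^T = K^-1 (u, v, 1)^T used by the epipolar constraint."""
    p = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return p / p[2]

p = pixel_to_normalized(K, 480.0, 360.0)
# x = (480 - 320) / 800 = 0.2 and y = (360 - 240) / 800 = 0.15
```

With the assumed $K$, a pixel at (480, 360) maps to the normalized point (0.2, 0.15, 1).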
Wherein, step (3) is carried out as follows:
(301) Run neural network prediction on each frame of image to obtain a heatmap and a part affinity field for each skeletal joint;
(302) Apply a non-maximum suppression algorithm to the heatmap to extract the image position and confidence of each joint;
(303) Find the limb links using the extracted joint information and the part affinity fields to obtain all connections, each connection being regarded as a limb;
(304) Regard limbs sharing a joint as belonging to the same person; assembling the limbs forms the person and yields the image pixel coordinates of all the skeletal joint points.
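The non-maximum suppression of step (302) can be sketched as follows; this is a minimal illustration assuming the heatmap arrives as a single-channel NumPy array, not the patent's implementation:

```python
import numpy as np

def nms_peaks(heatmap, threshold=0.1):
    """Return (row, col, confidence) for every pixel that exceeds the
    threshold and is the maximum of its 3x3 neighbourhood, i.e. a joint
    candidate with its confidence, as in step (302)."""
    peaks = []
    H, W = heatmap.shape
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            v = heatmap[r, c]
            if v > threshold and v >= heatmap[r - 1:r + 2, c - 1:c + 2].max():
                peaks.append((r, c, float(v)))
    return peaks

# A toy heatmap with one blurred joint response around (4, 6):
hm = np.zeros((9, 9))
hm[4, 6] = 0.9
hm[3:6, 5:8] += 0.1      # small plateau around the peak
joints = nms_peaks(hm)    # one candidate at (4, 6)
```

A real system would run this per joint channel and typically refine the peak to sub-pixel accuracy; the brute-force scan above is only for clarity.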
Wherein, step (5) is carried out as follows:
(501) Record the three-dimensional positions of the human skeletal joint points under the different cameras as a discrete three-dimensional point cloud $P_{k,i}^t$, where $k$ is the camera index, $i$ is the skeletal joint index, and $t$ denotes the time instant;
(502) Arbitrarily select the camera coordinate system of one camera as the world coordinate system; its discrete three-dimensional point cloud is $P_{1,i}^t$, and then $P_{1,i}^t = R_k P_{k,i}^t + c_k$, where the rotation matrix $R_k$ and the translation vector $c_k$ are the external parameters of camera $k$;
(503) Select at least eight pairs of matched skeletal joint points to calculate the essential matrix $E_k$, then decompose $E_k$ to obtain $c_k$ and $R_k$. The essential matrix $E_k$ is calculated as follows:
An imaged skeletal joint point and the optical centers of the two cameras form a plane, i.e. the three vectors $P_{1,i}^t$, $R_k P_{k,i}^t$ and $c_k$ are coplanar, which gives
$$P_{1,i}^t \cdot \left( c_k \times R_k P_{k,i}^t \right) = 0.$$
Substituting $P_{1,i}^t = z_{1,i}^t \hat{p}_{1,i}^t$ and $P_{k,i}^t = z_{k,i}^t \hat{p}_{k,i}^t$ into the above formula and eliminating $z_{1,i}^t$ and $z_{k,i}^t$ yields
$$\left( \hat{p}_{1,i}^t \right)^{\mathsf T} E_k \, \hat{p}_{k,i}^t = 0,$$
where $E_k = [c_k]_\times R_k$ is the essential matrix, $[c_k]_\times$ is the antisymmetric matrix of the vector $c_k$, $\hat{p}_{1,i}^t$ and $\hat{p}_{k,i}^t$ are the image physical coordinates of skeletal joint point $i$ at time $t$ in the selected camera and in the camera indexed $k$ respectively, and $z_{1,i}^t$ and $z_{k,i}^t$ are the depth coordinates of skeletal joint point $i$ at time $t$ in those two cameras;
(504) Using $c_k$ and $R_k$ calculated in step (503), triangulate two different skeletal joint points to obtain their coordinates $P_{k,a}^t$ and $P_{k,b}^t$ in the coordinate system of camera $k$; the distance between the two joint points is $d = \lVert P_{k,a}^t - P_{k,b}^t \rVert$, and with the known actual physical length $L$ between the two skeletal joint points, the scale information is calculated as $\lambda_k = L / d$;
(505) Applying the scale information to the translation vector gives the actual translation vector of each camera as $\lambda_k c_k$.
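The coplanarity derivation in step (503) can be checked numerically. The sketch below uses an arbitrarily assumed rotation and unit translation (not values from the patent), builds $E_k = [c_k]_\times R_k$, and verifies that synthetic joint projections satisfy the epipolar constraint:

```python
import numpy as np

def skew(c):
    """Antisymmetric matrix [c]_x such that skew(c) @ v == np.cross(c, v)."""
    return np.array([[0.0, -c[2], c[1]],
                     [c[2], 0.0, -c[0]],
                     [-c[1], c[0], 0.0]])

# Assumed ground-truth pose of camera k in the camera-1 (world) frame.
theta = 0.3
R_k = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                [0.0, 1.0, 0.0],
                [-np.sin(theta), 0.0, np.cos(theta)]])
c_k = np.array([1.0, 0.2, 0.1])
c_k /= np.linalg.norm(c_k)          # decomposition only fixes c_k up to scale

E_k = skew(c_k) @ R_k               # essential matrix E_k = [c_k]_x R_k

# Synthetic skeletal joint points in front of camera 1 (world frame).
rng = np.random.default_rng(0)
P1 = rng.uniform([-1.0, -1.0, 3.0], [1.0, 1.0, 6.0], size=(8, 3))
Pk = (P1 - c_k) @ R_k               # camera-k coordinates: R_k^T (P1 - c_k)
p1 = P1 / P1[:, 2:3]                # normalized image coordinates, camera 1
pk = Pk / Pk[:, 2:3]                # normalized image coordinates, camera k
residual = np.abs(np.einsum('ni,ij,nj->n', p1, E_k, pk)).max()
```

The maximum residual of $\hat{p}_1^{\mathsf T} E_k \hat{p}_k$ comes out at machine precision, confirming the eliminated-depth form of the constraint.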
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides an effective multi-camera system calibration method that achieves a good calibration result without additional calibration objects or a tedious calibration procedure.
2. The method is simple and easy to implement, and can carry out automatic online calibration under the condition that a multi-camera system does not shut down, thereby greatly improving the calibration efficiency.
3. Feature point matching, scale calculation and online calibration of multi-camera systems have long been research hotspots in this field. Current methods fall roughly into two classes. One is calibration based on a traditional calibration object; it can achieve good results but places high demands on the manufacturing precision of the calibration object, involves a tedious calibration procedure, and cannot be performed online. The other is self-calibration, which needs no purpose-built calibration object and establishes correspondences between cameras from feature points in the images; however, it cannot establish feature correspondences when the viewing angle between cameras is large, which makes it difficult to apply in real scenes, and its translation vectors carry no actual scale information. In view of this, the invention is the first to use human skeletal joint points as feature points and the point cloud formed by their motion trajectories as a virtual calibration object; it calculates the relative pose between cameras through the essential matrix and proposes a scale calculation method based on human physical dimensions to resolve the scale ambiguity in camera calibration. This approach is an important innovation over the prior art.
Drawings
Fig. 1 is a flowchart of a calibration method of a multi-camera system according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a human skeleton extracted by a deep learning algorithm in the embodiment of the present invention.
Fig. 3 is a schematic diagram of an essential matrix adopted in the embodiment of the present invention.
Detailed description of the invention
To help those of ordinary skill in the art understand and implement the present invention, it is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the embodiments described here are merely illustrative and explanatory and do not restrict the invention.
A multi-camera external parameter automatic calibration method based on human body skeleton extraction comprises the following steps:
Step 1, after the multi-camera system is installed, have a single pedestrian walk through the camera monitoring area while all cameras record video simultaneously, obtaining synchronized videos;
Step 2, from each video, extract frames with the same frame indices, capturing the pedestrian at different positions;
Step 3, process each frame of image, extract the pedestrian's skeletal joint points with a convolutional neural network, and obtain the pixel coordinates of each joint point:
Step 3.1, run neural network prediction on the image to obtain a heatmap and a Part Affinity Field for each skeletal joint point;
Step 3.2, apply a non-maximum suppression (NMS) algorithm to the heatmap to extract the image position and confidence of each joint;
Step 3.3, find the limb links using the joint information and the part affinity fields to obtain all connections, each of which can be regarded as a limb;
Step 3.4, after all limbs are obtained, regard limbs sharing a joint as belonging to the same person, assemble the limbs into a person, and obtain the image pixel coordinates of the person's skeletal joint points.
Step 4, calculate the image physical (normalized) coordinates of each skeletal joint point from the known camera internal parameters, recorded as $\hat{p}_{k,i}^t$, where $k$ is the camera index, $i$ is the skeletal joint index, and $t$ denotes the time instant;
Step 5, select any one camera coordinate system as the world coordinate system, and calculate the external parameters of the other cameras through the essential matrix:
Step 5.1, record the three-dimensional positions of the human skeletal joint points under the different cameras as a discrete three-dimensional point cloud $P_{k,i}^t$, where $k$ is the camera index, $i$ is the skeletal joint index, and $t$ denotes the time instant;
Step 5.2, select the camera coordinate system of the camera indexed 1 as the world coordinate system; its discrete three-dimensional point cloud is $P_{1,i}^t$, and $P_{1,i}^t = R_k P_{k,i}^t + c_k$, where the rotation matrix $R_k$ and the translation vector $c_k$ are the external parameters of camera $k$;
Step 5.3, select several pairs of matched skeletal joint points to calculate the essential matrix $E_k$, then decompose $E_k$ to obtain $c_k$ and $R_k$. The essential matrix $E_k$ is calculated as follows:
An imaged skeletal joint point and the optical centers of the two cameras form a plane: the three vectors $P_{1,i}^t$, $R_k P_{k,i}^t$ and $c_k$ are coplanar, so
$$P_{1,i}^t \cdot \left( c_k \times R_k P_{k,i}^t \right) = 0.$$
Substituting $P_{1,i}^t = z_{1,i}^t \hat{p}_{1,i}^t$ and $P_{k,i}^t = z_{k,i}^t \hat{p}_{k,i}^t$ into the above formula and eliminating the depths $z_{1,i}^t$ and $z_{k,i}^t$ gives
$$\left( \hat{p}_{1,i}^t \right)^{\mathsf T} E_k \, \hat{p}_{k,i}^t = 0,$$
where $E_k = [c_k]_\times R_k$ is the essential matrix and $[c_k]_\times$ is the antisymmetric matrix of the vector $c_k$;
Step 5.4, the $c_k$ obtained in step 5.3 are all normalized to length 1, i.e. $\lVert c_k \rVert = 1$; in practice, generally $\lVert c_k \rVert \neq \lVert c_m \rVert$ (the distances from the different cameras to camera 1 are unequal), so the scale information must be calculated. Take two different skeletal joint points; with the $c_k$ and $R_k$ obtained above, triangulation yields their coordinates $P_{k,a}^t$ and $P_{k,b}^t$ in the coordinate system of camera $k$, and the distance between them is $d = \lVert P_{k,a}^t - P_{k,b}^t \rVert$. If the actual physical length between the two skeletal joint points is known to be $L$, for example the average length of a human arm, the scale can be calculated as $\lambda_k = L / d$;
Step 5.5, finally, applying the scale information to the translation vector gives the translation vector of each camera as $\lambda_k c_k$.
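Steps 5.4 and 5.5 can be illustrated with a toy scale-recovery example. All numbers below are assumed (identity rotation, a true baseline of 2 units, a limb length of 0.6 m; none come from the patent): triangulating at the unit baseline and dividing the known limb length by the reconstructed one recovers $\lambda_k$:

```python
import numpy as np

def triangulate(p1, pk, R_k, c_k):
    """Linear (DLT) triangulation of one point from its normalized image
    coordinates p1 in camera 1 (world) and pk in camera k, where a world
    point X has camera-k coordinates R_k^T (X - c_k)."""
    P_1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P_k = np.hstack([R_k.T, (-R_k.T @ c_k).reshape(3, 1)])
    A = np.vstack([p1[0] * P_1[2] - P_1[0],
                   p1[1] * P_1[2] - P_1[1],
                   pk[0] * P_k[2] - P_k[0],
                   pk[1] * P_k[2] - P_k[1]])
    X = np.linalg.svd(A)[2][-1]       # null vector of the linear system
    return X[:3] / X[3]

def project(X, R, c):
    """Normalized image coordinates of world point X in a camera (R, c)."""
    P = R.T @ (X - c)
    return P / P[2]

R_k = np.eye(3)
c_true = np.array([2.0, 0.0, 0.0])        # true baseline, unknown to the method
c_unit = c_true / np.linalg.norm(c_true)  # what essential-matrix decomposition yields

A3d = np.array([0.0, 0.0, 5.0])           # e.g. shoulder joint (assumed position)
B3d = np.array([0.6, 0.0, 5.0])           # e.g. elbow joint, 0.6 m away (assumed)

Ahat = triangulate(project(A3d, np.eye(3), np.zeros(3)),
                   project(A3d, R_k, c_true), R_k, c_unit)
Bhat = triangulate(project(B3d, np.eye(3), np.zeros(3)),
                   project(B3d, R_k, c_true), R_k, c_unit)

lam = 0.6 / np.linalg.norm(Ahat - Bhat)   # lambda_k = L / d
# lam recovers the true baseline norm, and the metric translation is lam * c_unit
```

Because the reconstruction at the unit baseline is a uniformly shrunk copy of the true scene, the ratio of known to reconstructed limb length equals the true baseline norm.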
The following is a more specific example:
Referring to fig. 1, a multi-camera external parameter calibration method based on human skeleton extraction comprises the following steps:
Step 1, after the multi-camera system is installed, have a single pedestrian walk through the camera monitoring area while all cameras record video simultaneously, obtaining synchronized videos;
Step 2, from each video, extract frames with the same frame indices, capturing the pedestrian at different positions;
Step 3, process each frame of image, extract the pedestrian's skeletal joint points with a convolutional neural network, and obtain the pixel coordinates of each joint point, as shown in fig. 2; this comprises the following substeps:
Step 3.1, run neural network prediction on the image to obtain a heatmap and a Part Affinity Field for each joint point;
Step 3.2, apply a non-maximum suppression (NMS) algorithm to the heatmap to extract the image position and confidence of each joint;
Step 3.3, find the limb links using the joint information and the part affinity fields to obtain all connections, each of which can be regarded as a limb;
Step 3.4, after all limbs are obtained, regard limbs sharing a joint as belonging to the same person, assemble the limbs into a person, and obtain the image pixel coordinates of the human skeletal joint points. The specific skeleton extraction algorithm is given in document [1]: Z. Cao, G. Hidalgo Martinez, T. Simon, S. Wei and Y. A. Sheikh, "OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields", IEEE Transactions on Pattern Analysis and Machine Intelligence, doi: 10.1109/TPAMI.2019.2929257.
Step 4, calculate the image physical (normalized) coordinates of each skeletal joint point from the known camera internal parameters, recorded as $\hat{p}_{k,i}^t$, where $k$ is the camera index, $i$ is the skeletal joint index, and $t$ denotes the time instant;
Step 5, select any one camera coordinate system as the world coordinate system and calculate the external parameters of the other cameras through the essential matrix; this comprises the following substeps:
Step 5.1, record the three-dimensional positions of the human skeletal joint points under the different cameras as a discrete three-dimensional point cloud $P_{k,i}^t$, where $k$ is the camera index, $i$ is the skeletal joint index, and $t$ denotes the time instant;
Step 5.2, select the camera coordinate system of the camera indexed 1 as the world coordinate system; its discrete three-dimensional point cloud is $P_{1,i}^t$, and $P_{1,i}^t = R_k P_{k,i}^t + c_k$, where the rotation matrix $R_k$ and the translation vector $c_k$ are the external parameters of camera $k$;
Step 5.3, calculate the essential matrix $E_k$ from at least eight pairs of matched skeletal joint points, then decompose $E_k$ to obtain $c_k$ and $R_k$; the specific algorithm is given in document [2]:
[2] H. C. Longuet-Higgins. A computer algorithm for reconstructing a scene from two projections. Nature, vol. 293, pages 133-135, September 1981.
The essential matrix $E_k$ is calculated as follows:
An imaged skeletal joint point and the optical centers of the two cameras form a plane: the three vectors $P_{1,i}^t$, $R_k P_{k,i}^t$ and $c_k$ are coplanar, as shown in fig. 3, and therefore
$$P_{1,i}^t \cdot \left( c_k \times R_k P_{k,i}^t \right) = 0.$$
Substituting $P_{1,i}^t = z_{1,i}^t \hat{p}_{1,i}^t$ and $P_{k,i}^t = z_{k,i}^t \hat{p}_{k,i}^t$ into the above formula and eliminating the depths $z_{1,i}^t$ and $z_{k,i}^t$ gives
$$\left( \hat{p}_{1,i}^t \right)^{\mathsf T} E_k \, \hat{p}_{k,i}^t = 0,$$
where $E_k = [c_k]_\times R_k$ is the essential matrix and $[c_k]_\times$ is the antisymmetric matrix of the vector $c_k$;
Step 5.4, the $c_k$ obtained in step 5.3 are all normalized to length 1, i.e. $\lVert c_k \rVert = 1$; in practice, generally $\lVert c_k \rVert \neq \lVert c_m \rVert$, where $k$ and $m$ are two different camera indices (the distances from the different cameras to camera 1 are unequal), so the scale information must be calculated. Take two different skeletal joint points; with the $c_k$ and $R_k$ obtained above, triangulation yields their coordinates $P_{k,a}^t$ and $P_{k,b}^t$ in the coordinate system of camera $k$, and the distance between them is $d = \lVert P_{k,a}^t - P_{k,b}^t \rVert$. If the actual physical length between the two skeletal joint points is known to be $L$, for example the average length of a human arm, the scale can be calculated as $\lambda_k = L / d$;
Step 5.5, finally, applying the scale information to the translation vector gives the translation vector of each camera as $\lambda_k c_k$. With this method, the calibration reprojection error is 1.4 pixels, the attitude error is 0.5 degrees and the offset error is 1.0 percent; the calibration result is accurate.
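For reference, the linear eight-point estimate of document [2] used in step 5.3 can be sketched as follows; this is an illustrative, noise-free reimplementation with assumed pose values, not the patent's code:

```python
import numpy as np

def skew(v):
    """Antisymmetric matrix [v]_x such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def eight_point_essential(p1, pk):
    """Linear eight-point estimate of E from N >= 8 matched pairs of
    normalized image coordinates: stack the constraints
    p1^T E pk = (p1 kron pk) . vec(E) = 0, take the null vector, then
    project onto the essential manifold (singular values 1, 1, 0)."""
    A = np.stack([np.kron(a, b) for a, b in zip(p1, pk)])
    E = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# Assumed ground-truth pose to test against (not values from the patent).
theta = 0.4
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
c = np.array([1.0, 0.5, 0.2]); c /= np.linalg.norm(c)
E_true = skew(c) @ R

rng = np.random.default_rng(1)
P1 = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(12, 3))  # joint points
Pk = (P1 - c) @ R                                                   # camera-k frame
E_est = eight_point_essential(P1 / P1[:, 2:3], Pk / Pk[:, 2:3])
err = min(np.linalg.norm(E_est - E_true), np.linalg.norm(E_est + E_true))
# with exact data, E is recovered up to sign
```

Decomposing the recovered $E$ into $c_k$ and $R_k$ (with the cheirality check to pick among the four candidate poses) then proceeds as in document [2].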
In summary, the method processes each frame of image and extracts the positions of the human skeletal joint points with a deep learning method; selects any one camera coordinate system as the world coordinate system and calculates the external parameters of the other cameras through the essential matrix; and calculates the scale of the translation vector using human body size information. The invention uses human skeletal joint points as feature points and the point cloud formed by their motion trajectories as a virtual calibration object, solves for the camera rotation matrix and translation vector with the essential matrix, and proposes a translation vector scale calculation method based on human body dimension information, completing real-time, online, accurate external parameter calibration of a multi-camera system.
The above description is only one embodiment of the present invention, and is not intended to limit the present invention. Any modification, improvement or the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (2)
1. A multi-camera external parameter automatic calibration method based on human skeleton extraction, characterized by comprising the following steps:
(1) Have a single pedestrian walk through the camera monitoring area while all cameras record video simultaneously, obtaining synchronized videos;
(2) From each video, extract frames with the same frame indices, capturing the pedestrian at different positions;
(3) Process each frame of image and extract the pedestrian's skeletal joint points with a deep learning algorithm, obtaining the image pixel coordinates of each skeletal joint point;
(4) Calculate the image physical coordinates of each skeletal joint point from its image pixel coordinates using the known camera internal parameters;
(5) Select any one camera coordinate system as the world coordinate system, and calculate the external parameters of the other cameras using the image physical coordinates of the skeletal joint points and the essential matrix;
wherein, step (5) is carried out as follows:
(501) Record the three-dimensional positions of the human skeletal joint points under the different cameras as a discrete three-dimensional point cloud $P_{k,i}^t$, where $k$ is the camera index, $i$ is the skeletal joint index, and $t$ denotes the time instant;
(502) Arbitrarily select the camera coordinate system of one camera as the world coordinate system; its discrete three-dimensional point cloud is $P_{1,i}^t$, and then $P_{1,i}^t = R_k P_{k,i}^t + c_k$, where the rotation matrix $R_k$ and the translation vector $c_k$ are the external parameters of camera $k$;
(503) Select several pairs of matched skeletal joint points to calculate the essential matrix $E_k$, then decompose $E_k$ to obtain $c_k$ and $R_k$; the essential matrix $E_k$ is calculated as follows:
an imaged skeletal joint point and the optical centers of the two cameras form a plane, i.e. the three vectors $P_{1,i}^t$, $R_k P_{k,i}^t$ and $c_k$ are coplanar, which gives
$$P_{1,i}^t \cdot \left( c_k \times R_k P_{k,i}^t \right) = 0;$$
substituting $P_{1,i}^t = z_{1,i}^t \hat{p}_{1,i}^t$ and $P_{k,i}^t = z_{k,i}^t \hat{p}_{k,i}^t$ into the above formula and eliminating $z_{1,i}^t$ and $z_{k,i}^t$ yields
$$\left( \hat{p}_{1,i}^t \right)^{\mathsf T} E_k \, \hat{p}_{k,i}^t = 0,$$
where $E_k = [c_k]_\times R_k$ is the essential matrix, $[c_k]_\times$ is the antisymmetric matrix of the vector $c_k$, $\hat{p}_{1,i}^t$ and $\hat{p}_{k,i}^t$ are the image physical coordinates of skeletal joint point $i$ at time $t$ in the selected camera and in the camera indexed $k$ respectively, and $z_{1,i}^t$ and $z_{k,i}^t$ are the depth coordinates of skeletal joint point $i$ at time $t$ in those two cameras;
(504) Using $c_k$ and $R_k$ calculated in step (503), triangulate two different skeletal joint points to obtain their coordinates $P_{k,a}^t$ and $P_{k,b}^t$ in the coordinate system of camera $k$; the distance between the two joint points is $d = \lVert P_{k,a}^t - P_{k,b}^t \rVert$, and with the known actual physical length $L$ between the two skeletal joint points, the scale information is calculated as $\lambda_k = L / d$;
(505) Applying the scale information to the translation vector gives the actual translation vector of each camera as $\lambda_k c_k$.
2. The multi-camera external parameter automatic calibration method based on human skeleton extraction as claimed in claim 1, characterized in that step (3) is carried out as follows:
(301) Run neural network prediction on each frame of image to obtain a heatmap and a part affinity field for each skeletal joint;
(302) Apply a non-maximum suppression algorithm to the heatmap to extract the image position and confidence of each joint;
(303) Find the limb links using the extracted joint information and the part affinity fields to obtain all connections, each connection being regarded as a limb;
(304) Regard limbs sharing a joint as belonging to the same person; assembling the limbs forms the person and yields the image pixel coordinates of all the skeletal joint points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110289301.0A CN113077519B (en) | 2021-03-18 | 2021-03-18 | Multi-camera external parameter automatic calibration method based on human skeleton extraction
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110289301.0A CN113077519B (en) | 2021-03-18 | 2021-03-18 | Multi-camera external parameter automatic calibration method based on human skeleton extraction
Publications (2)
Publication Number | Publication Date |
---|---|
CN113077519A CN113077519A (en) | 2021-07-06 |
CN113077519B (en) | 2022-12-09
Family
ID=76612748
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110289301.0A Active CN113077519B (en) | 2021-03-18 | 2021-03-18 | Multi-camera external parameter automatic calibration method based on human skeleton extraction
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113077519B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113925497B (en) * | 2021-10-22 | 2023-09-15 | 吉林大学 | Binocular vision measurement system-based automobile passenger riding posture extraction method |
CN116030137A (en) * | 2021-10-27 | 2023-04-28 | 华为技术有限公司 | Parameter determination method and related equipment |
CN114758016B (en) * | 2022-06-15 | 2022-09-13 | 超节点创新科技(深圳)有限公司 | Camera equipment calibration method, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103456016A (en) * | 2013-09-06 | 2013-12-18 | 同济大学 | Body-feeling camera network calibration method unrelated to visual angles |
CN108288291A (en) * | 2018-06-07 | 2018-07-17 | 北京轻威科技有限责任公司 | Polyphaser calibration based on single-point calibration object |
CN110458897A (en) * | 2019-08-13 | 2019-11-15 | 北京积加科技有限公司 | Multi-cam automatic calibration method and system, monitoring method and system |
CN111667540A (en) * | 2020-06-09 | 2020-09-15 | 中国电子科技集团公司第五十四研究所 | Multi-camera system calibration method based on pedestrian head recognition |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034238B (en) * | 2010-12-13 | 2012-07-18 | 西安交通大学 | Multi-camera system calibrating method based on optical imaging probe and visual graph structure |
CN110969668B (en) * | 2019-11-22 | 2023-05-02 | 大连理工大学 | Stereo calibration algorithm of long-focus binocular camera |
CN111028271B (en) * | 2019-12-06 | 2023-04-14 | 浩云科技股份有限公司 | Multi-camera personnel three-dimensional positioning and tracking system based on human skeleton detection |
CN111739103A (en) * | 2020-06-18 | 2020-10-02 | 苏州炫感信息科技有限公司 | Multi-camera calibration system based on single-point calibration object |
CN112001926B (en) * | 2020-07-04 | 2024-04-09 | 西安电子科技大学 | RGBD multi-camera calibration method, system and application based on multi-dimensional semantic mapping |
- 2021-03-18 CN CN202110289301.0A patent/CN113077519B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103456016A (en) * | 2013-09-06 | 2013-12-18 | 同济大学 | Body-feeling camera network calibration method unrelated to visual angles |
CN108288291A (en) * | 2018-06-07 | 2018-07-17 | 北京轻威科技有限责任公司 | Polyphaser calibration based on single-point calibration object |
CN110458897A (en) * | 2019-08-13 | 2019-11-15 | 北京积加科技有限公司 | Multi-cam automatic calibration method and system, monitoring method and system |
CN111667540A (en) * | 2020-06-09 | 2020-09-15 | 中国电子科技集团公司第五十四研究所 | Multi-camera system calibration method based on pedestrian head recognition |
Non-Patent Citations (1)
Title |
---|
Anh Minh Truong et al., "Automatic Multi-Camera Extrinsic Parameter Calibration Based on Pedestrian Torsors", Sensors, 2019-11-15, full text *
Also Published As
Publication number | Publication date |
---|---|
CN113077519A (en) | 2021-07-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Xiang et al. | Monocular total capture: Posing face, body, and hands in the wild | |
CN113077519B (en) | Multi-camera external parameter automatic calibration method based on human skeleton extraction | |
Bogo et al. | Dynamic FAUST: Registering human bodies in motion | |
CN108154550B (en) | RGBD camera-based real-time three-dimensional face reconstruction method | |
CN107194991B (en) | Three-dimensional global visual monitoring system construction method based on skeleton point local dynamic update | |
CN111126304A (en) | Augmented reality navigation method based on indoor natural scene image deep learning | |
CN109919141A (en) | A kind of recognition methods again of the pedestrian based on skeleton pose | |
US20150243035A1 (en) | Method and device for determining a transformation between an image coordinate system and an object coordinate system associated with an object of interest | |
CN109758756B (en) | Gymnastics video analysis method and system based on 3D camera | |
CN112907631B (en) | Multi-RGB camera real-time human body motion capture system introducing feedback mechanism | |
CN112330813A (en) | Wearing three-dimensional human body model reconstruction method based on monocular depth camera | |
CN108073855A (en) | A kind of recognition methods of human face expression and system | |
CN111998862A (en) | Dense binocular SLAM method based on BNN | |
CN115376034A (en) | Motion video acquisition and editing method and device based on human body three-dimensional posture space-time correlation action recognition | |
CN113255487A (en) | Three-dimensional real-time human body posture recognition method | |
CN111667540B (en) | Multi-camera system calibration method based on pedestrian head recognition | |
Rosenhahn et al. | Automatic human model generation | |
CN116051648A (en) | Camera internal and external parameter calibration method based on cross visual angle multi-human semantic matching | |
CN113284249B (en) | Multi-view three-dimensional human body reconstruction method and system based on graph neural network | |
CN109631850B (en) | Inclined camera shooting relative positioning method based on deep learning | |
CN114548224A (en) | 2D human body pose generation method and device for strong interaction human body motion | |
Morency et al. | Fast 3d model acquisition from stereo images | |
CN112767481A (en) | High-precision positioning and mapping method based on visual edge features | |
JP3860287B2 (en) | Motion extraction processing method, motion extraction processing device, and program storage medium | |
CN117671738B (en) | Human body posture recognition system based on artificial intelligence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||