CN111667540A - Multi-camera system calibration method based on pedestrian head recognition - Google Patents


Info

Publication number
CN111667540A
CN111667540A
Authority
CN
China
Prior art keywords
camera
coordinate system
ellipse
calculating
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010520089.XA
Other languages
Chinese (zh)
Other versions
CN111667540B (en)
Inventor
关俊志
耿虎军
高峰
柴兴华
陈彦桥
王雅涵
张泽勇
彭会湘
陈韬亦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 54 Research Institute filed Critical CETC 54 Research Institute
Priority to CN202010520089.XA priority Critical patent/CN111667540B/en
Publication of CN111667540A publication Critical patent/CN111667540A/en
Application granted granted Critical
Publication of CN111667540B publication Critical patent/CN111667540B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a multi-camera system calibration method based on pedestrian head recognition, and belongs to the technical field of computer vision. Each frame of image is processed, and the ellipse of the head in the image is extracted using a CNN (convolutional neural network) method; the three-dimensional position of the human head in each frame under the camera coordinate system is calculated according to the position and size of the ellipse; any one camera coordinate system is selected as the world coordinate system, and the external parameters of the other cameras are calculated; finally, the obtained camera external parameters are optimized, and the camera world coordinate system is aligned with the selected world coordinate system. The invention takes the human head as a feature point and the point cloud formed by its motion track as a virtual calibration object, and provides a method for calculating the three-dimensional coordinates of the head from a monocular single-frame image, so that the external parameter calibration problem of multiple cameras is converted into a three-dimensional point cloud alignment problem. Real-time, online, accurate external parameter calibration of the multi-camera system is thus completed by calculating the relative pose between the three-dimensional point clouds.

Description

Multi-camera system calibration method based on pedestrian head recognition
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a multi-camera system calibration method based on pedestrian head recognition.
Background
With the rapid development of computer vision and related fields of artificial intelligence, multi-camera systems are more and more widely applied in scene reconstruction, security monitoring for smart cities, airport monitoring, motion capture, sports video analysis, industrial measurement and other fields. In recent years, solutions that use cameras as input have rapidly gained a strong market position by virtue of their high performance, convenience and stability. Although multiple cameras have great advantages in information processing and integration, the stable and normal operation of a multi-camera system requires an accurate and fast calibration process.
Conventional calibration methods use known scene structure information and usually involve the manufacture of an accurate calibration object, a complex calibration process and high-precision known calibration information, requiring complex operation by a professional. Moreover, every time the position of the camera set changes, the calibration operation needs to be carried out again.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a multi-camera system calibration method based on pedestrian head recognition, which takes the people frequently present in a scene as calibration objects, realizes online real-time calibration of the camera system, and provides a basis for later applications such as monitoring-scene understanding.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a multi-camera system calibration method based on pedestrian head recognition comprises the following steps:
(1) enabling a single pedestrian to walk in a camera monitoring area, and simultaneously recording videos by a plurality of cameras to obtain synchronized videos;
(2) intercepting at least three frames of images from the video of each camera;
(3) processing each frame of image, and extracting the head ellipse of the pedestrian in the image by using a convolutional neural network to obtain the central point position of the ellipse and the lengths of the long axis and the short axis;
(4) calculating the three-dimensional position of the ellipse in each frame of image under the coordinate system of the camera according to the position of the central point of the ellipse and the lengths of the long axis and the short axis;
(5) selecting any one camera coordinate system as a first world coordinate system, and calculating external parameters of other cameras;
(6) optimizing the obtained camera external parameters, establishing a second world coordinate system with a chosen point in the room as the origin, and aligning the second world coordinate system with the first world coordinate system to obtain the positions of all cameras in the second world coordinate system, completing the calibration of the multi-camera system.
Further, the specific manner of the step (3) is as follows:
(301) segmenting the image and selecting rectangular frames with an aspect ratio in the interval [2/3, 3/2] as candidate frames;
(302) performing a convolution operation on all candidate frames with a convolutional neural network, and selecting the highest-scoring candidate frame as the image of the pedestrian's head;
(303) converting the highest-scoring candidate frame into an ellipse to obtain the pedestrian's head ellipse, the position of the ellipse center, and the lengths of the long and short axes.
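A minimal sketch of the box-to-ellipse conversion in step (303), assuming the detector returns an axis-aligned box as (x, y, w, h) with (x, y) the top-left corner; the function name and conventions below are illustrative assumptions, not prescribed by the method:

```python
def box_to_ellipse(x, y, w, h):
    """Inscribed ellipse of a detection box: center plus long/short semi-axes."""
    cx, cy = x + w / 2.0, y + h / 2.0        # ellipse center = box center
    a, b = max(w, h) / 2.0, min(w, h) / 2.0  # long and short semi-axes
    return (cx, cy), a, b

center, a, b = box_to_ellipse(100, 80, 40, 60)
print(center, a, b)  # (120.0, 110.0) 30.0 20.0
```

Note that with the [2/3, 3/2] aspect-ratio constraint of step (301), the two semi-axes differ by at most 50 percent, which fits the roughly spherical head model used in step (4).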
Further, the specific manner of step (4) is as follows:
(401) obtaining the pixel coordinates (u_s, v_s) of the ellipse center in the image coordinate system from the position of the ellipse center and the lengths of the long and short axes, then converting the pixel coordinates into physical coordinates (x_s, y_s) according to the camera's internal parameters;
(402) calculating the ellipse area A from the position of the ellipse center and the lengths of the long and short axes, and further the Z-axis coordinate of the ellipse in the camera coordinate system:
Z_s = R_s·sqrt(π/(A·cosθ))
wherein R_s is the radius of the sphere used to model the pedestrian's head, and θ is an intermediate variable:
θ = arctan(sqrt(x_s² + y_s²))
(403) calculating the X-axis and Y-axis coordinates of the ellipse in the camera coordinate system, X_s = x_s·Z_s and Y_s = y_s·Z_s, obtaining the three-dimensional coordinates (X_s, Y_s, Z_s) of the ellipse in the camera coordinate system.
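Steps (401)-(403) can be sketched as follows, assuming a pinhole camera with intrinsics (f_x, f_y, c_x, c_y), a head modeled as a sphere of radius R_s, and the depth relation Z_s = R_s·sqrt(π/(A·cosθ)) with θ = arctan(sqrt(x_s² + y_s²)); the function name, parameter names, and default head radius are illustrative assumptions:

```python
import math

def head_position_from_ellipse(u, v, a, b, fx, fy, cx, cy, R_s=0.10):
    """3D head position in the camera frame from its projected ellipse.
    (u, v): ellipse center in pixels; a, b: semi-axes in pixels;
    R_s: assumed head radius in metres."""
    # (401) pixel coordinates -> normalized (physical) image coordinates
    xs, ys = (u - cx) / fx, (v - cy) / fy
    # (402) ellipse area in normalized coordinates, then the depth Z_s
    A = math.pi * (a / fx) * (b / fy)
    theta = math.atan(math.hypot(xs, ys))   # angle off the optical axis
    Zs = R_s * math.sqrt(math.pi / (A * math.cos(theta)))
    # (403) back-project the center ray to depth Z_s
    return xs * Zs, ys * Zs, Zs
```

For an on-axis head (u = c_x, v = c_y) with a = b and f_x = f_y = f, this reduces to Z_s = R_s·f/a, the familiar pinhole similar-triangles relation.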
Further, the specific manner of the step (5) is as follows:
(501) recording the discrete three-dimensional point clouds r_k(t) of the three-dimensional positions of the pedestrian's head under the different cameras, where k is the camera index;
(502) selecting as the first world coordinate system the camera coordinate system of the camera denoted 1, whose discrete three-dimensional point cloud is r_1(t);
(503) calculating the center points m_1 and m_k of the point clouds of camera 1 and camera k:
m_1 = (1/N)·Σ_t r_1(t),  m_k = (1/N)·Σ_t r_k(t)
wherein N is the total number of discrete points;
(504) moving the coordinate origins of the two point clouds to the point-cloud centers of step (503):
g_1(t) = r_1(t) - m_1,  g_k(t) = r_k(t) - m_k
wherein g_1(t), g_k(t) are the discrete three-dimensional point clouds after moving the coordinate origin;
(505) from the equation g_k(t) = R_k·g_1(t), obtaining the rotation matrix R_k of camera k relative to camera 1 by singular value decomposition;
(506) calculating the offset vector c_k of camera k relative to camera 1 from the point-cloud centers of step (503):
c_k = m_k - R_k·m_1
R_k and c_k are the external parameters of camera k, namely:
r_k(t) = R_k·r_1(t) + c_k
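Steps (501)-(506) amount to the classical SVD-based rigid alignment of two matched point sets (the least-squares fitting approach of Arun et al., cited as document [3] later in the description). A sketch, in which the function name is an assumption and the determinant guard against reflections is a standard addition not spelled out in the text:

```python
import numpy as np

def extrinsics_from_point_clouds(r1, rk):
    """Rotation R_k and offset c_k of camera k relative to camera 1 from
    matched head trajectories (N x 3 arrays), so that rk = r1 @ Rk.T + ck."""
    m1, mk = r1.mean(axis=0), rk.mean(axis=0)   # (503) point-cloud centers
    g1, gk = r1 - m1, rk - mk                   # (504) move origins to centers
    H = g1.T @ gk                               # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)                 # (505) singular value decomposition
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    Rk = Vt.T @ D @ U.T
    ck = mk - Rk @ m1                           # (506) offset vector
    return Rk, ck
```

At least three non-collinear head positions are needed for a unique rotation, which is consistent with step (2) intercepting at least three frames per camera.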
compared with the prior art, the invention has the beneficial effects that:
1. the invention provides an effective multi-camera system calibration method that achieves a good calibration effect without additional calibration objects or a complicated calibration process.
2. The method is simple and easy to implement, and can carry out automatic online calibration under the condition that a multi-camera system does not shut down, thereby greatly improving the calibration efficiency.
3. Online calibration of multi-camera systems and monocular depth measurement have long been research hotspots in the field, and common methods fall roughly into two types. One is calibration based on a traditional calibration object; although it can obtain good results, it places high demands on the manufacturing precision of the calibration object, the calibration procedure is cumbersome, and online calibration cannot be realized. The other is self-calibration, which needs no specially made calibration object and establishes the correspondence between cameras from feature points in the images; however, it cannot establish feature-point correspondences when the viewing angle between cameras is large, making it difficult to apply in real scenes. In view of this, the invention is the first to use the human head as a feature point and the point cloud formed by its motion track as a virtual calibration object, and provides a method for calculating the three-dimensional coordinates of the head from a monocular single-frame image, converting the external parameter calibration problem of multiple cameras into a three-dimensional point cloud alignment problem. Real-time, online, accurate external parameter calibration of the multi-camera system is then completed by calculating the relative pose between the three-dimensional point clouds. This approach is an important innovation over the prior art.
Drawings
Fig. 1 is a flowchart of a calibration method of a multi-camera system according to an embodiment of the present invention.
Fig. 2 is a schematic projection diagram of a ball in an image according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a human head extracted by a convolutional neural network in the embodiment of the present invention.
Detailed description of the invention
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
A multi-camera system calibration method based on pedestrian head recognition comprises the following steps:
step 1, after a multi-camera system is installed, firstly, a single pedestrian walks in a camera monitoring area, and then, a plurality of cameras record videos simultaneously to obtain synchronized videos;
step 2, at least three frames of images are intercepted from each video;
step 3, processing each frame of image and extracting the ellipse of the head in the image using a convolutional neural network (CNN) method, comprising the following steps:
step 3.1, carrying out a segmentation operation on the image and selecting rectangular frames with an aspect ratio in the interval [2/3, 3/2] as candidate frames;
step 3.2, performing a convolution operation on all candidate frames with a convolutional neural network, and selecting the highest-scoring candidate frame as the image of the head in the image;
step 3.3, converting the candidate frame into an ellipse.
Step 4, calculating the three-dimensional position of the human head in each frame under the camera coordinate system according to the position and the size of the ellipse, and the method comprises the following steps:
step 4.1, obtaining the coordinates (u_s, v_s) of the ellipse center in the image coordinate system from the ellipse parameters, then obtaining (x_s, y_s) from the camera internal parameters, and further the intermediate angle
θ = arctan(sqrt(x_s² + y_s²))
step 4.2, calculating the ellipse area A from the ellipse parameters; combining A = π·ε²/cosθ with Z_s = R_s/ε gives
Z_s = R_s·sqrt(π/(A·cosθ))
Step 4.3, finally calculating X_s = x_s·Z_s and Y_s = y_s·Z_s.
Step 5, selecting any one of the camera coordinate systems as the world coordinate system and calculating the external parameters of the other cameras.
Step 6, optimizing the obtained camera external parameters and aligning the camera world coordinate system with the selected world coordinate system.
The following is a more specific example:
referring to fig. 1, a method for calibrating a multi-camera system based on pedestrian head recognition includes the following steps:
step 1, after a multi-camera system is installed, firstly, a single pedestrian walks in a camera monitoring area, and then, a plurality of cameras record videos simultaneously to obtain synchronized videos;
step 2, at least three frames of images are intercepted from each video;
step 3, processing each frame of image and extracting the ellipse of the head in the image using a CNN method, obtaining the detection result shown in fig. 3, including the following substeps:
step 3.1, performing a segmentation operation on the image with an image segmentation algorithm and selecting rectangular frames with an aspect ratio in the interval [2/3, 3/2] as candidate frames; the specific image segmentation algorithm is described in document [1]:
[1] K.E.A. van de Sande, J.R.R. Uijlings, T. Gevers & A.W.M. Smeulders. Segmentation as selective search for object recognition. In International Conference on Computer Vision, pages 1879-1886, Nov 2011.
step 3.2, performing a convolution operation on all rectangular frames with a convolutional neural network and selecting the highest-scoring rectangular frame as the image of the head; the specific convolutional-neural-network head detection algorithm is described in document [2]:
[2] T.H. Vu, A. Osokin & I. Laptev. Context-Aware CNNs for Person Head Detection. In IEEE International Conference on Computer Vision, pages 2893-2901, Dec 2015.
step 3.3, converting the detected rectangular frame into an ellipse.
Step 4, calculating the three-dimensional position of the human head in each frame under the camera coordinate system according to the position and the size of the ellipse, and comprising the following substeps:
step 4.1, the projection of the ball in the image is illustrated in fig. 2; the coordinates (u_s, v_s) of the ellipse center in the image coordinate system are obtained from the ellipse parameters, then (x_s, y_s) from the camera internal parameters, and further the intermediate angle
θ = arctan(sqrt(x_s² + y_s²))
step 4.2, calculating the ellipse area A from the ellipse parameters; combining A = π·ε²/cosθ with Z_s = R_s/ε gives
Z_s = R_s·sqrt(π/(A·cosθ))
step 4.3, finally calculating the three-dimensional position of the human head in the camera coordinate system: X_s = x_s·Z_s, Y_s = y_s·Z_s.
Step 5, calculating the external parameters of the cameras by adopting a singular value decomposition algorithm, selecting any one of the camera coordinate systems as a world coordinate system, and calculating the external parameters of other cameras, wherein the method comprises the following substeps:
step 5.1, as the person moves, the three-dimensional positions of the head form a series of discrete three-dimensional point clouds r_k(t), r_1(t) under the different cameras, where k is the camera index. The conversion relationship between the two point clouds is shown in formula (1), wherein R_k and c_k, the rotation and offset of camera k relative to the first camera, are the external parameters of camera k:
r_k(t) = R_k·r_1(t) + c_k    (1)
First, the center points of the two point clouds are calculated:
m_1 = (1/N)·Σ_t r_1(t),  m_k = (1/N)·Σ_t r_k(t)
and the coordinate origins of the two point clouds are moved to the point-cloud centers:
g_1(t) = r_1(t) - m_1,  g_k(t) = r_k(t) - m_k
Then, from g_k(t) = R_k·g_1(t), R_k is obtained by the singular value decomposition algorithm.
step 5.2, finally calculating the offset vector:
c_k = m_k - R_k·m_1
The specific algorithm is described in document [3]:
[3] K.S. Arun, T.S. Huang & S.D. Blostein. Least-Squares Fitting of Two 3-D Point Sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 9, no. 5, pages 698-700, Sept 1987.
Step 6, optimizing the obtained camera external parameters and performing registration conversion between the camera world coordinate system and the selected world coordinate system. The method achieves a projection error of 1.6 pixels, an attitude error of 0.6 degrees, and an offset error of 1.1 percent, an accurate calibration result.
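The projection error quoted above can be evaluated with a routine like the following sketch, which projects the head positions into a camera using its estimated extrinsics and an assumed 3x3 pinhole intrinsic matrix K, then averages the pixel distance to the detected ellipse centers; the function name and conventions are illustrative assumptions:

```python
import numpy as np

def mean_reprojection_error(points_ref, Rk, ck, K, uv_detected):
    """Mean pixel distance between projected head positions and detected
    ellipse centers. points_ref: N x 3 points in the reference (world) frame;
    K: 3x3 intrinsic matrix; uv_detected: N x 2 pixel coordinates."""
    pc = points_ref @ Rk.T + ck          # reference frame -> camera-k frame
    uvw = pc @ K.T                       # pinhole projection (homogeneous)
    uv = uvw[:, :2] / uvw[:, 2:3]        # perspective division
    return float(np.linalg.norm(uv - uv_detected, axis=1).mean())
```

A near-zero value on synthetic data with exact extrinsics is a quick sanity check on the conventions used.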
In summary, the method processes each frame of image and extracts the ellipse of the head in the image using a CNN method; calculates the three-dimensional position of the human head in each frame under the camera coordinate system according to the position and size of the ellipse; selects any one camera coordinate system as the world coordinate system and calculates the external parameters of the other cameras; and optimizes the obtained camera external parameters, aligning the camera world coordinate system with the selected world coordinate system. The invention takes the human head as a feature point and the point cloud formed by its motion track as a virtual calibration object, and provides a method for calculating the three-dimensional coordinates of the head from a monocular single-frame image, so that the external parameter calibration problem of multiple cameras is converted into a three-dimensional point cloud alignment problem. Real-time, online, accurate external parameter calibration of the multi-camera system is thus completed by calculating the relative pose between the three-dimensional point clouds.
The above description is only one embodiment of the present invention, and is not intended to limit the present invention. Any modification, improvement or the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (4)

1. A multi-camera system calibration method based on pedestrian head recognition is characterized by comprising the following steps:
(1) enabling a single pedestrian to walk in a camera monitoring area, and simultaneously recording videos by a plurality of cameras to obtain synchronized videos;
(2) intercepting at least three frames of images from the video of each camera;
(3) processing each frame of image, and extracting the head ellipse of the pedestrian in the image by using a convolutional neural network to obtain the central point position of the ellipse and the lengths of the long axis and the short axis;
(4) calculating the three-dimensional position of the ellipse in each frame of image under the coordinate system of the camera according to the position of the central point of the ellipse and the lengths of the long axis and the short axis;
(5) selecting any one camera coordinate system as a first world coordinate system, and calculating external parameters of other cameras;
(6) optimizing the obtained camera external parameters, establishing a second world coordinate system with a chosen point in the room as the origin, and aligning the second world coordinate system with the first world coordinate system to obtain the positions of all cameras in the second world coordinate system, completing the calibration of the multi-camera system.
2. The method for calibrating a multi-camera system based on pedestrian head recognition according to claim 1, wherein the step (3) is implemented by:
(301) segmenting the image and selecting rectangular frames with an aspect ratio in the interval [2/3, 3/2] as candidate frames;
(302) performing a convolution operation on all candidate frames with a convolutional neural network, and selecting the highest-scoring candidate frame as the image of the pedestrian's head;
(303) converting the highest-scoring candidate frame into an ellipse to obtain the pedestrian's head ellipse, the position of the ellipse center, and the lengths of the long and short axes.
3. The method for calibrating a multi-camera system based on pedestrian head recognition according to claim 1, wherein the step (4) is implemented by:
(401) obtaining the pixel coordinates (u_s, v_s) of the ellipse center in the image coordinate system from the position of the ellipse center and the lengths of the long and short axes, then converting the pixel coordinates into physical coordinates (x_s, y_s) according to the camera's internal parameters;
(402) calculating the ellipse area A from the position of the ellipse center and the lengths of the long and short axes, and further the Z-axis coordinate of the ellipse in the camera coordinate system:
Z_s = R_s·sqrt(π/(A·cosθ))
wherein R_s is the radius of the sphere used to model the pedestrian's head, and θ is an intermediate variable:
θ = arctan(sqrt(x_s² + y_s²))
(403) calculating the X-axis and Y-axis coordinates of the ellipse in the camera coordinate system, X_s = x_s·Z_s and Y_s = y_s·Z_s, obtaining the three-dimensional coordinates (X_s, Y_s, Z_s) of the ellipse in the camera coordinate system.
4. The method for calibrating a multi-camera system based on pedestrian head recognition according to claim 1, wherein the step (5) is implemented by:
(501) recording the discrete three-dimensional point clouds r_k(t) of the three-dimensional positions of the pedestrian's head under the different cameras, where k is the camera index;
(502) selecting as the first world coordinate system the camera coordinate system of the camera denoted 1, whose discrete three-dimensional point cloud is r_1(t);
(503) calculating the center points m_1 and m_k of the point clouds of camera 1 and camera k:
m_1 = (1/N)·Σ_t r_1(t),  m_k = (1/N)·Σ_t r_k(t)
wherein N is the total number of discrete points;
(504) moving the coordinate origins of the two point clouds to the point-cloud centers of step (503):
g_1(t) = r_1(t) - m_1,  g_k(t) = r_k(t) - m_k
wherein g_1(t), g_k(t) are the discrete three-dimensional point clouds after moving the coordinate origin;
(505) from the equation g_k(t) = R_k·g_1(t), obtaining the rotation matrix R_k of camera k relative to camera 1 by singular value decomposition;
(506) calculating the offset vector c_k of camera k relative to camera 1 from the point-cloud centers of step (503):
c_k = m_k - R_k·m_1
R_k and c_k are the external parameters of camera k, namely:
r_k(t) = R_k·r_1(t) + c_k
CN202010520089.XA 2020-06-09 2020-06-09 Multi-camera system calibration method based on pedestrian head recognition Active CN111667540B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010520089.XA CN111667540B (en) 2020-06-09 2020-06-09 Multi-camera system calibration method based on pedestrian head recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010520089.XA CN111667540B (en) 2020-06-09 2020-06-09 Multi-camera system calibration method based on pedestrian head recognition

Publications (2)

Publication Number Publication Date
CN111667540A true CN111667540A (en) 2020-09-15
CN111667540B CN111667540B (en) 2023-04-18

Family

ID=72386357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010520089.XA Active CN111667540B (en) 2020-06-09 2020-06-09 Multi-camera system calibration method based on pedestrian head recognition

Country Status (1)

Country Link
CN (1) CN111667540B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113077519A (en) * 2021-03-18 2021-07-06 中国电子科技集团公司第五十四研究所 Multi-phase external parameter automatic calibration method based on human skeleton extraction

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1804541A (en) * 2005-01-10 2006-07-19 北京航空航天大学 Spatial three-dimensional position attitude measurement method for video camera
JP2013003970A (en) * 2011-06-20 2013-01-07 Nippon Telegr & Teleph Corp <Ntt> Object coordinate system conversion device, object coordinate system conversion method and object coordinate system conversion program
CN103955920A (en) * 2014-04-14 2014-07-30 桂林电子科技大学 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
US20160170603A1 (en) * 2014-12-10 2016-06-16 Microsoft Technology Licensing, Llc Natural user interface camera calibration
CN108010085A (en) * 2017-11-30 2018-05-08 西南科技大学 Target identification method based on binocular Visible Light Camera Yu thermal infrared camera
CN108648241A (en) * 2018-05-17 2018-10-12 北京航空航天大学 A kind of Pan/Tilt/Zoom camera field calibration and fixed-focus method
CN109903341A (en) * 2019-01-25 2019-06-18 东南大学 Join dynamic self-calibration method outside a kind of vehicle-mounted vidicon
CN110390695A (en) * 2019-06-28 2019-10-29 东南大学 The fusion calibration system and scaling method of a kind of laser radar based on ROS, camera
CN111127568A (en) * 2019-12-31 2020-05-08 南京埃克里得视觉技术有限公司 Camera pose calibration method based on space point location information

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1804541A (en) * 2005-01-10 2006-07-19 北京航空航天大学 Spatial three-dimensional position attitude measurement method for video camera
JP2013003970A (en) * 2011-06-20 2013-01-07 Nippon Telegr & Teleph Corp <Ntt> Object coordinate system conversion device, object coordinate system conversion method and object coordinate system conversion program
CN103955920A (en) * 2014-04-14 2014-07-30 桂林电子科技大学 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
US20160170603A1 (en) * 2014-12-10 2016-06-16 Microsoft Technology Licensing, Llc Natural user interface camera calibration
CN108010085A (en) * 2017-11-30 2018-05-08 西南科技大学 Target identification method based on binocular Visible Light Camera Yu thermal infrared camera
CN108648241A (en) * 2018-05-17 2018-10-12 北京航空航天大学 A kind of Pan/Tilt/Zoom camera field calibration and fixed-focus method
CN109903341A (en) * 2019-01-25 2019-06-18 东南大学 Join dynamic self-calibration method outside a kind of vehicle-mounted vidicon
CN110390695A (en) * 2019-06-28 2019-10-29 东南大学 The fusion calibration system and scaling method of a kind of laser radar based on ROS, camera
CN111127568A (en) * 2019-12-31 2020-05-08 南京埃克里得视觉技术有限公司 Camera pose calibration method based on space point location information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU Fuqiang et al.: "Review of mirrored binocular vision precision measurement technology" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113077519A (en) * 2021-03-18 2021-07-06 中国电子科技集团公司第五十四研究所 Multi-phase external parameter automatic calibration method based on human skeleton extraction
CN113077519B (en) * 2021-03-18 2022-12-09 中国电子科技集团公司第五十四研究所 Multi-phase external parameter automatic calibration method based on human skeleton extraction

Also Published As

Publication number Publication date
CN111667540B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
WO2021196294A1 (en) Cross-video person location tracking method and system, and device
US10719940B2 (en) Target tracking method and device oriented to airborne-based monitoring scenarios
Xu et al. Flycap: Markerless motion capture using multiple autonomous flying cameras
CN107392964B (en) The indoor SLAM method combined based on indoor characteristic point and structure lines
Tang et al. ESTHER: Joint camera self-calibration and automatic radial distortion correction from tracking of walking humans
CN108416428B (en) Robot vision positioning method based on convolutional neural network
CN107481315A (en) A kind of monocular vision three-dimensional environment method for reconstructing based on Harris SIFT BRIEF algorithms
Lin et al. Topology aware object-level semantic mapping towards more robust loop closure
WO2021098080A1 (en) Multi-spectral camera extrinsic parameter self-calibration algorithm based on edge features
CN103607554A (en) Fully-automatic face seamless synthesis-based video synthesis method
CN111476710B (en) Video face changing method and system based on mobile platform
CN113077519B (en) Multi-phase external parameter automatic calibration method based on human skeleton extraction
CN110766024B (en) Deep learning-based visual odometer feature point extraction method and visual odometer
CN104376334B (en) A kind of pedestrian comparison method of multi-scale feature fusion
CN114036969B (en) 3D human body action recognition algorithm under multi-view condition
CN110555408A (en) Single-camera real-time three-dimensional human body posture detection method based on self-adaptive mapping relation
CN110135277B (en) Human behavior recognition method based on convolutional neural network
CN114119739A (en) Binocular vision-based hand key point space coordinate acquisition method
CN114187665A (en) Multi-person gait recognition method based on human body skeleton heat map
CN111582036A (en) Cross-view-angle person identification method based on shape and posture under wearable device
CN111860651A (en) Monocular vision-based semi-dense map construction method for mobile robot
CN114612933B (en) Monocular social distance detection tracking method
CN111667540B (en) Multi-camera system calibration method based on pedestrian head recognition
CN110378995B (en) Method for three-dimensional space modeling by using projection characteristics
Nguyen et al. Combined YOLOv5 and HRNet for high accuracy 2D keypoint and human pose estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant