CN116704047B - Pedestrian ReID-based calibration method for monitoring camera equipment position

Publication number: CN116704047B
Authority: CN (China)
Prior art keywords: camera, coordinates, matrix, representing, pixel coordinates
Legal status: Active
Application number: CN202310953744.4A
Other languages: Chinese (zh)
Other versions: CN116704047A
Inventors: 万森, 周志鹏, 李月明, 高东奇, 朱前进, 袁泽川, 齐贤龙
Current Assignee: Anhui Yunsen Internet Of Things Technology Co ltd
Original Assignee: Anhui Yunsen Internet Of Things Technology Co ltd
Application filed by Anhui Yunsen Internet Of Things Technology Co ltd
Priority to CN202310953744.4A
Publication of CN116704047A; application granted; publication of CN116704047B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to the technical field of image processing, in particular to a pedestrian-ReID-based method for calibrating the positions of monitoring camera equipment, which comprises the following steps: acquire the coordinate position of each pedestrian in the monitoring picture; select two cameras adjacent to each other and calculate the extrinsic matrix between them; take either of the two cameras as a reference camera and calculate the position of the other camera through the extrinsic matrix; take either of the two cameras whose positions are now determined as a new reference camera and calculate the positions of the cameras adjacent to it; and so on, until the positions of all cameras are obtained. The invention can accurately calibrate camera positions from pedestrian ReID results, laying a foundation for calculating pedestrian motion trajectories.

Description

Pedestrian ReID-based calibration method for monitoring camera equipment position
Technical Field
The invention relates to the technical field of image processing, in particular to a pedestrian ReID-based calibration method for the position of monitoring camera equipment.
Background
A building monitoring system provides basic video monitoring functions. With the progress of building monitoring technology and the rise of intelligent buildings, pedestrian ReID (re-identification) has emerged, making it possible to effectively distinguish and track different pedestrians.
Pedestrian ReID uses deep learning and pattern recognition algorithms to accurately identify pedestrians in complex scenes and to detect abnormal behaviors and potential threats, so that safety measures can be taken in time. Through big data mining, pedestrian ReID can also reveal the flow and usage habits of personnel, helping to optimize layout and resource allocation and to improve energy utilization and working efficiency. In important monitoring areas such as squares, lobbies of public institutions, machine rooms and corridors, multiple monitoring cameras are usually needed to avoid blind spots.
Acquiring pedestrian trajectories is one of the key steps of this type of monitoring system, and it requires calibrating the positions of the multiple cameras in the monitored area so as to establish the spatial relationship between them. Accurate calibration of camera positions is the basis for pedestrian ReID and trajectory analysis: it ensures that pedestrian images captured by different cameras are mapped into a unified spatial coordinate system, enabling continuous tracking and analysis of pedestrians. Camera calibration is difficult here, however, mainly because monitoring cameras have large fields of view with small overlapping areas, so calibrating with a calibration plate is time-consuming and labor-intensive; moreover, a small calibration plate occupies only a few pixels in the image, making it hard to accurately calibrate the intrinsic parameters and the extrinsic parameters between monitoring cameras, so the cameras cannot be accurately positioned. This problem remains to be solved.
Disclosure of Invention
In order to avoid and overcome the technical problems in the prior art, the invention provides a calibration method for the position of monitoring camera equipment based on pedestrian ReID. The invention can accurately calibrate the position of the camera according to the result of the ReID of the pedestrian, thereby laying a foundation for calculating the motion trail of the pedestrian.
In order to achieve the above purpose, the present invention provides the following technical solutions:
A calibration method for the position of monitoring camera equipment based on pedestrian ReID comprises the following steps:
S1, acquiring the coordinate positions of pedestrians in the monitoring pictures of all cameras;
S2, selecting two cameras adjacent to each other, and calculating the extrinsic matrix between the two cameras from the coordinate positions of the pedestrians;
S3, selecting either of the two cameras in step S2 as a reference camera, and calculating the position of the other camera through the extrinsic matrix;
S4, selecting either of the two cameras whose positions were determined in step S3 as a new reference camera, and calculating the positions of the cameras adjacent to it; and so on, until the positions of all cameras are calculated.
As still further aspects of the invention: the specific steps of step S2 are as follows:
s21, selecting two cameras which are installed adjacently to each other according to an adjacent criterion; the adjacency criteria are as follows: the two cameras are arranged adjacent to each other, and the same pedestrian appears in the monitoring pictures of the two cameras at the same time; calculating an external parameter matrix between the two cameras through the position coordinates of the pedestrians; the extrinsic matrix comprises a rotation matrix and an initial translation matrix;
s22, calculating a scale factor, solving the product of the scale factor and the initial translation matrix, and replacing and updating the initial translation matrix by using the product, so as to obtain an updated external parameter matrix.
As still further aspects of the invention: the specific steps of step S21 are as follows:
s211, selecting a camera A and a camera B which meet the adjacent criterion;
S212, acquiring, from the database, the pixel coordinates of the same human body key point of the same pedestrian in camera A and camera B at m different moments, forming m groups of coordinates; the m groups of coordinates are expressed as follows:
P_A1 = (u_A1, v_A1, 1), P_B1 = (u_B1, v_B1, 1);
P_A2 = (u_A2, v_A2, 1), P_B2 = (u_B2, v_B2, 1);
…
P_Ai = (u_Ai, v_Ai, 1), P_Bi = (u_Bi, v_Bi, 1);
…
P_Am = (u_Am, v_Am, 1), P_Bm = (u_Bm, v_Bm, 1);
where P_A1 denotes the 1st group of pixel coordinates of the human body key point in camera A, with abscissa u_A1 and ordinate v_A1, and P_B1 denotes the 1st group of pixel coordinates in camera B, with abscissa u_B1 and ordinate v_B1; P_A2 and P_B2 denote the 2nd group of pixel coordinates in camera A and camera B respectively, with abscissas u_A2, u_B2 and ordinates v_A2, v_B2; in general, P_Ai and P_Bi denote the i-th group of pixel coordinates in camera A and camera B, with abscissas u_Ai, u_Bi and ordinates v_Ai, v_Bi, and P_Am and P_Bm denote the m-th group, with abscissas u_Am, u_Bm and ordinates v_Am, v_Bm;
S213, according to the epipolar geometry constraint, the m groups of coordinates satisfy the following equations:

P_B1^T · E · P_A1 = 0
P_B2^T · E · P_A2 = 0
…
P_Bm^T · E · P_Am = 0

where P_B1^T denotes the transpose of P_B1, P_B2^T the transpose of P_B2, and P_Bm^T the transpose of P_Bm; E is the essential matrix between camera A and camera B;
S214, decomposing the essential matrix E by singular value decomposition into a rotation matrix and an initial translation matrix, obtaining four groups of solutions: (R1, t1), (R2, t2), (R1, t2) and (R2, t1), where R1 denotes the 1st solved rotation matrix, R2 the 2nd solved rotation matrix, t1 the 1st solved initial translation matrix, and t2 the 2nd solved initial translation matrix;
s215, calculating correct solutions in the four groups of solutions through judging criteria.
As still further aspects of the invention: the specific steps of step S215 are as follows:
S2151, calculating the determinants of the rotation matrices R1 and R2; the rotation matrix whose determinant equals 1 is the correct solution corresponding to the rotation matrix, denoted R;
S2152, selecting the pixel coordinates P_Ai of camera A and the pixel coordinates P_Bi of camera B; according to the pinhole model theory of the camera, P_Ai and P_Bi satisfy the following formulas:

z_Ai · P_Ai = K_A · P_Ai-3D
z_Bi · P_Bi = K_B · (R · P_Ai-3D + t)

where K_A is the intrinsic matrix of camera A; K_B is the intrinsic matrix of camera B; t denotes the correct solution corresponding to the initial translation matrix; P_Ai-3D denotes the camera coordinates corresponding to P_Ai in the camera coordinate system of camera A; z_Ai denotes the Z-axis camera coordinate value corresponding to P_Ai in camera A's coordinate system; z_Bi denotes the Z-axis camera coordinate value corresponding to P_Bi in camera B's coordinate system;
S2153, substituting t = t1 and t = t2 respectively into the formulas of step S2152 and solving for z_Ai and z_Bi gives the following sets of solutions:

with t1:  P_Ai-3D = (x_Ai1, y_Ai1, z_Ai1)^T,  P_Bi-3D = (x_Bi1, y_Bi1, z_Bi1)^T
with t2:  P_Ai-3D = (x_Ai2, y_Ai2, z_Ai2)^T,  P_Bi-3D = (x_Bi2, y_Bi2, z_Bi2)^T

where x_Ai1, y_Ai1 and z_Ai1 are the coordinate values on the X, Y and Z axes of the camera coordinates P_Ai-3D calculated using the initial translation matrix t1; x_Bi1, y_Bi1 and z_Bi1 are those of P_Bi-3D calculated using t1; x_Ai2, y_Ai2 and z_Ai2 are those of P_Ai-3D calculated using t2; and x_Bi2, y_Bi2 and z_Bi2 are those of P_Bi-3D calculated using t2;
S2154, if z_Ai1 and z_Bi1 are both greater than 0, the correct solution corresponding to the initial translation matrix is t1; if z_Ai2 and z_Bi2 are both greater than 0, the correct solution is t2; the correct solution corresponding to the initial translation matrix is denoted t.
As still further aspects of the invention: step S22 includes step S22A, step S22B, and step S22C; the specific steps of step S22A are as follows:
S22A1, selecting the head and the ankle of the pedestrian as human body key points, and acquiring the pixel coordinates of the two key points in camera A and camera B at the same moment: H_A = (u_HA, v_HA, 1), H_B = (u_HB, v_HB, 1), F_A = (u_FA, v_FA, 1) and F_B = (u_FB, v_FB, 1);
where H_A denotes the pixel coordinates of the pedestrian's head in camera A, with abscissa u_HA and ordinate v_HA;
H_B denotes the pixel coordinates of the pedestrian's head in camera B, with abscissa u_HB and ordinate v_HB;
F_A denotes the pixel coordinates of the pedestrian's ankle in camera A, with abscissa u_FA and ordinate v_FA;
F_B denotes the pixel coordinates of the pedestrian's ankle in camera B, with abscissa u_FB and ordinate v_FB;
S22A2, substituting the pixel coordinates H_A and H_B into the first conversion formula:

z_HA · H_A = K_A · (x_HA, y_HA, z_HA)^T
z_HB · H_B = K_B · (R · (x_HA, y_HA, z_HA)^T + s · t)

where x_HA, y_HA and z_HA are the coordinate values on the X, Y and Z axes of the camera coordinates corresponding to H_A in camera A; z_HB is the coordinate value on the Z axis of the camera coordinates corresponding to H_B in camera B; K_B denotes the intrinsic matrix of camera B; s denotes the scale factor;
S22A3, converting the pixel coordinates H_A by the first conversion formula into the camera coordinates H_A-3D corresponding to camera A; H_A-3D is expressed as:

H_A-3D = Q_H^{-1} · b_H

where Q_H denotes the head intermediate matrix and b_H denotes the head auxiliary matrix obtained by rearranging the first conversion formula into a linear system;
S22A4, introducing a function G and characterizing the camera coordinates H_A-3D by G:

H_A-3D = G(R, t, K_A, K_B, H_A, H_B) = G_HA-3D

where G_HA-3D denotes the value of the function G computed with respect to R, t, K_A, K_B, H_A and H_B.
As still further aspects of the invention: the specific steps of step S22B are as follows:
S22B1, substituting the pixel coordinates F_A and F_B into the second conversion formula:

z_FA · F_A = K_A · (x_FA, y_FA, z_FA)^T
z_FB · F_B = K_B · (R · (x_FA, y_FA, z_FA)^T + s · t)

where x_FA, y_FA and z_FA are the coordinate values on the X, Y and Z axes of the camera coordinates corresponding to F_A in camera A; z_FB is the coordinate value on the Z axis of the camera coordinates corresponding to F_B in camera B; K_B denotes the intrinsic matrix of camera B; s denotes the scale factor;
S22B2, converting the pixel coordinates F_A by the second conversion formula into the camera coordinates F_A-3D corresponding to camera A; F_A-3D is expressed as:

F_A-3D = Q_F^{-1} · b_F

where Q_F denotes the ankle intermediate matrix and b_F denotes the ankle auxiliary matrix;
S22B3, introducing the function G and characterizing the camera coordinates F_A-3D by G:

F_A-3D = G(R, t, K_A, K_B, F_A, F_B) = G_FA-3D

where G_FA-3D denotes the value of the function G computed with respect to R, t, K_A, K_B, F_A and F_B.
As still further aspects of the invention: the specific steps of step S22C are as follows:
S22C1, substituting H_A-3D and F_A-3D into the scale factor calculation formula to obtain the scale factor:

s = (1/O) · Σ_{i=1}^{O} W / |G_HA-3D-i - G_FA-3D-i|

where W denotes the average height of people in the area where the cameras are located; O denotes the number of pedestrians appearing in the camera picture; |G_HA-3D-i - G_FA-3D-i| denotes the norm of the difference between the G_HA-3D and G_FA-3D corresponding to the i-th pedestrian;
S22C2, multiplying the scale factor by the initial translation matrix and replacing the initial translation matrix with the product, giving the updated extrinsic matrix (R, T); the product formula is:

T = s · t

where T, the product of the scale factor and the initial translation matrix, is the final translation matrix.
As still further aspects of the invention: the specific steps of step S1 are as follows:
s11, acquiring all cameras to be positioned in a camera mounting place, and forming a camera group;
s12, acquiring coordinate positions of pedestrians in each frame of shooting pictures of each camera in the camera group, wherein the coordinate positions comprise pixel coordinates corresponding to human body key points of the pedestrians, and the human body key points comprise left wrists, right wrists, left elbows, right elbows, left shoulders, right shoulders, waists, left knees, right knees, left ankles, right ankles, necks and heads.
As still further aspects of the invention: the specific steps of step S3 are as follows:
S31, selecting camera A as the reference camera, and establishing a world coordinate system with camera A as the coordinate origin;
S32, converting camera B into a coordinate point in the world coordinate system of step S31 through coordinate conversion with the rotation matrix R and the final translation matrix T, thereby obtaining the position of camera B.
As still further aspects of the invention: the specific steps of step S4 are as follows:
S41, calculating the extrinsic matrix between every two cameras in the camera group that satisfy the adjacency criterion;
S42, selecting a camera whose coordinates have been determined in the world coordinate system with camera A as the coordinate origin as the reference camera, and calculating, in that world coordinate system, the position coordinates of the adjacent cameras whose positions have not yet been determined;
S43, proceeding in the same way until the coordinates of all cameras in the camera group are calculated in the world coordinate system with camera A's position as the coordinate origin.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention can automatically calibrate monitoring cameras based on pedestrian ReID. The extrinsic matrix between adjacent cameras is obtained from the positions of the same pedestrian in different cameras at the same moment, so that the camera positions are calibrated automatically. Calibrating the cameras automatically from pedestrian ReID results makes it possible to calculate personnel trajectories; no dedicated camera calibration procedure is needed, which greatly simplifies the deployment of a ReID system.
Drawings
FIG. 1 is a schematic diagram of the operation flow structure of the present invention.
Fig. 2 is a schematic diagram of the position of a pedestrian in the camera a in the present invention.
Fig. 3 is a schematic diagram of the position of a pedestrian in the camera B in the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 to 3, a method for calibrating a position of a monitoring camera based on a pedestrian ReID includes the following steps:
1. pedestrian ReID identification:
and carrying out pedestrian ReID identification among different cameras, and recording and storing the coordinate positions of the pedestrians in the monitoring pictures of different cameras at the same time.
2. Human body key point detection:
and detecting key points of human bodies appearing on monitoring pictures of different cameras at the same moment. Preferably, the key points of the human body which are obvious to hands, feet, elbows, knees, shoulders and the like can be detected. In order to improve accuracy, human body key points with shielding properties can be eliminated during detection.
3. Estimating an internal reference matrix of the camera:
first, an internal reference matrix of the camera is estimated. The focal length of the camera is obtained, and if the focal length is a fixed focus lens, the focal length of the camera can be obtained from factory information of the camerafThe unit is mm; in the case of a zoom lens, the focal length of the camera can be obtained by the camera SDKfThe unit is mm; through focal lengthfInquiring the type of a sensor used by a camera, wherein the size of a single pixel is as followshMicron, frame lateral resolution ofJ W Longitudinal resolution ofJ H . Then the internal reference matrix of the cameraKIs represented as follows:
K =
  [ 1000·f/h   0          J_W/2 ]
  [ 0          1000·f/h   J_H/2 ]
  [ 0          0          1     ]

(the focal length f in mm is converted to pixel units by dividing by the pixel size h in microns, with the principal point taken at the image centre). The intrinsic matrices of the n cameras whose positions are to be calibrated are obtained in this way: K_1, K_2, ..., K_n.
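As an illustration, the intrinsic-matrix construction above can be sketched in Python as follows (a sketch under the stated assumptions: the principal point sits at the image centre, and the function name and unit conventions are this sketch's own, not from the patent):

```python
import numpy as np

def intrinsic_matrix(f_mm, pixel_um, frame_w, frame_h):
    """Pinhole intrinsic matrix K from the focal length f (mm), the
    single-pixel size h (microns), and the frame resolution J_W x J_H.
    The focal length is converted to pixel units (1 mm = 1000 um); the
    principal point is assumed to lie at the image centre."""
    f_px = f_mm * 1000.0 / pixel_um  # focal length expressed in pixels
    return np.array([[f_px, 0.0, frame_w / 2.0],
                     [0.0, f_px, frame_h / 2.0],
                     [0.0, 0.0, 1.0]])

# e.g. a fixed-focus 4 mm lens on a sensor with 2 um pixels, 1920x1080 frame
K = intrinsic_matrix(4.0, 2.0, 1920, 1080)
```
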
4. Calibrating adjacent cameras:
and if the number of the cameras exceeds two, selecting two cameras meeting the adjacent criterion for two-by-two calibration. The adjacency criteria are as follows: the two cameras are arranged adjacent to each other, and the same pedestrian appears in the monitoring pictures of the two cameras at the same time. Assuming that the selected cameras are the A camera and the B camera, the calibration steps are as follows:
step A: and screening the same human body key points of a certain pedestrian on the selected A camera and the selected B camera at a certain moment as a group of key point coordinates. A plurality of coordinate sets may be selected that satisfy different pedestrians, or different moments of the same pedestrian, at least 8 different coordinate sets being required.
Step B: calculating the essential matrix of camera A and camera B. Suppose the coordinate groups of the same key points across the different monitoring pictures from step A number m in total; the m groups of coordinates are represented as follows:
P_A1 = (u_A1, v_A1, 1), P_B1 = (u_B1, v_B1, 1);
P_A2 = (u_A2, v_A2, 1), P_B2 = (u_B2, v_B2, 1);
…
P_Ai = (u_Ai, v_Ai, 1), P_Bi = (u_Bi, v_Bi, 1);
…
P_Am = (u_Am, v_Am, 1), P_Bm = (u_Bm, v_Bm, 1);
the above set of coordinates satisfies the following equation, based on epipolar geometry constraints:
wherein,Ethe matrix is an essential matrix between the A camera and the B camera, and is a matrix of 3 multiplied by 3. Irrespective ofEMultiple solutions to the scaling problem of (2) thenEIs 8, and can be solved by using at least 8 coordinate setsE. Preferably, the least squares method is used to solve the essential matrixE
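The linear least-squares solve can be sketched as below (a hedged sketch: the function name is this sketch's own; and since raw pixel coordinates are used, as in the text, the recovered matrix implicitly absorbs the intrinsic parameters):

```python
import numpy as np

def solve_essential(pts_a, pts_b):
    """Solve E in P_B^T E P_A = 0 from m >= 8 matched coordinate pairs.
    pts_a, pts_b: (m, 2) arrays of (u, v) coordinates in cameras A and B."""
    a, b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    u1, v1 = a[:, 0], a[:, 1]
    u2, v2 = b[:, 0], b[:, 1]
    # each pair contributes one row of the homogeneous system M e = 0
    M = np.column_stack([u2 * u1, u2 * v1, u2,
                         v2 * u1, v2 * v1, v2,
                         u1, v1, np.ones(len(a))])
    # least-squares solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(M)
    E = Vt[-1].reshape(3, 3)
    # project onto the rank-2 set that a valid essential matrix belongs to
    U, S, Vt = np.linalg.svd(E)
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt
```
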
Step C: using singular value decomposition (SVD), the essential matrix E can be decomposed into a rotation matrix and a translation matrix, which typically yields 4 solutions: (R1, t1), (R2, t2), (R1, t2) and (R2, t1).
Step D: the correct solution is selected from the 4 solutions described above.
First the correct rotation matrix is screened: the determinants of R1 and R2 are calculated separately; the rotation matrix whose determinant is 1 is kept and the one whose determinant is -1 is removed. Assuming R1 is kept, the screening proceeds as follows.
Then the correct translation matrix is screened: a point in camera A's coordinate group is selected, e.g. P_Ai, which according to the pinhole model of the camera satisfies:

z_Ai · P_Ai = K_A · P_Ai-3D
z_Bi · P_Bi = K_B · (R · P_Ai-3D + t)

where K_A is the intrinsic matrix of camera A; K_B is the intrinsic matrix of camera B; t denotes the correct solution corresponding to the initial translation matrix; P_Ai-3D denotes the camera coordinates corresponding to P_Ai in the camera coordinate system of camera A; z_Ai denotes the corresponding Z-axis camera coordinate value in camera A, and z_Bi the corresponding Z-axis camera coordinate value in camera B.
Substituting t = t1 and t = t2 respectively into the formulas of step S2152 and solving for z_Ai and z_Bi gives two sets of solutions, (z_Ai1, z_Bi1) and (z_Ai2, z_Bi2). If z_Ai1 and z_Bi1 are both greater than 0, the correct solution corresponding to the initial translation matrix is t1; if z_Ai2 and z_Bi2 are both greater than 0, it is t2. The correct solution corresponding to the initial translation matrix is denoted t.
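Steps C and D above can be sketched as follows, under the assumption that E is a metric essential matrix and the intrinsic matrices K_A, K_B are known; `triangulate` is a helper of this sketch performing the linear (DLT) depth solve that the positive-depth test relies on, not a routine named in the patent:

```python
import numpy as np

def triangulate(K_a, K_b, R, t, p_a, p_b):
    """Linear (DLT) triangulation of one matched pixel pair into
    camera-A coordinates."""
    P1 = K_a @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K_b @ np.hstack([R, t.reshape(3, 1)])
    u1, v1 = p_a
    u2, v2 = p_b
    A = np.array([u1 * P1[2] - P1[0], v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0], v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def pick_pose(E, K_a, K_b, p_a, p_b):
    """Decompose E into the four (R, t) candidates and keep the one whose
    triangulated point has positive depth z in BOTH cameras."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    for R in (U @ W @ Vt, U @ W.T @ Vt):
        for t in (U[:, 2], -U[:, 2]):
            X = triangulate(K_a, K_b, R, t, p_a, p_b)
            if X[2] > 0 and (R @ X + t)[2] > 0:
                return R, t
    raise ValueError("no candidate passed the positive-depth test")
```
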
5. Calculating a scaling scale:
calculated as aboveR,t) Due to lack of scale information, getIs of the initial translation matrix of (a)tThere is a certain difference from the true final translation matrix T. True final translation matrixT=s*tsIs the scale factor, i.e., the value that needs to be solved. The scale factor is estimated using the average height of the human body.
The head and ankle of the same pedestrian in camera A and camera B are selected as human body key points, and the pixel coordinates of the two key points in camera A and camera B at the same moment are acquired: H_A = (u_HA, v_HA, 1), H_B = (u_HB, v_HB, 1), F_A = (u_FA, v_FA, 1) and F_B = (u_FB, v_FB, 1). The three-dimensional coordinates are then solved by the least squares method; at this stage the s in the formulas is still unknown.
Taking the coordinate system of camera A, where the point H_A lies, as the reference frame, the conversion according to the camera pinhole model and the coordinate-system conversion principle is given by the first conversion formula:

z_HA · H_A = K_A · H_A-3D
z_HB · H_B = K_B · (R · H_A-3D + s · t)

The pixel coordinates H_A are converted by the first conversion formula into the camera coordinates H_A-3D of camera A, expressed as:

H_A-3D = Q_H^{-1} · b_H

where Q_H denotes the head intermediate matrix and b_H denotes the head auxiliary matrix obtained by rearranging the first conversion formula into a linear system.

Introducing a function G, the camera coordinates H_A-3D are characterized as:

H_A-3D = G(R, t, K_A, K_B, H_A, H_B) = G_HA-3D

where G_HA-3D denotes the value of the function G computed with respect to R, t, K_A, K_B, H_A and H_B.
The pixel coordinates F_A and F_B are substituted into the second conversion formula:

z_FA · F_A = K_A · F_A-3D
z_FB · F_B = K_B · (R · F_A-3D + s · t)

where x_FA, y_FA and z_FA are the coordinate values on the X, Y and Z axes of the camera coordinates F_A-3D = (x_FA, y_FA, z_FA)^T corresponding to F_A in camera A; z_FB is the coordinate value on the Z axis of the camera coordinates corresponding to F_B in camera B; K_B denotes the intrinsic matrix of camera B.
The second conversion formula converts the pixel coordinates F_A into the camera coordinates F_A-3D corresponding to camera A; the camera coordinates F_A-3D are expressed as follows:
wherein Q_F represents the ankle intermediate matrix and b_F represents the ankle auxiliary matrix;
The function G likewise characterizes the camera coordinates F_A-3D; the characterization takes the following form:
wherein G_FA-3D represents the result of the function G computed with respect to R, t, K_A, K_B, F_A and F_B.
H_A-3D and F_A-3D are substituted into the scale-factor formula to obtain the scale factor; the scale-factor formula is as follows:
wherein W represents the average body height in the area where the cameras are located; O represents the number of pedestrians appearing in the camera pictures; and |G_HA-3D-i - G_FA-3D-i| represents the absolute value of the difference between G_HA-3D and G_FA-3D for the i-th pedestrian.
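In code, the scale-factor formula reduces to dividing the known average height W by the mean head-to-ankle distance in the scale-free reconstruction. A hedged sketch (function and variable names are illustrative, not taken from the patent):

```python
import numpy as np

def scale_factor(head_pts, ankle_pts, avg_height):
    """s = W / mean(|G_H - G_F|) over all observed pedestrians.

    head_pts, ankle_pts : lists of 3-D points (in the scale-free
    reconstruction) for the same pedestrians' heads and ankles.
    avg_height : W, the average body height in the area.
    """
    dists = [np.linalg.norm(np.asarray(h) - np.asarray(f))
             for h, f in zip(head_pts, ankle_pts)]
    return avg_height / np.mean(dists)
```

Averaging over many pedestrians (the example below uses 200) suppresses individual height variation and keypoint-detection noise.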
The scale factor is multiplied by the initial translation matrix, and the resulting product replaces and updates the initial translation matrix, yielding the updated external parameter matrix (R, T). The product formula is as follows:
wherein,Tis the product of the scale factor and the initial translation matrix, i.e., the final translation matrix.
Thus, the updated external parameter matrix (R, T) between the two cameras is obtained.
6. Calibrating a camera group:
First, within the camera group, one pair of adjacent cameras is calibrated according to the steps above. One of the two calibrated cameras is then selected as the reference camera, and the remaining cameras are calibrated against it in turn: each camera still to be calibrated is paired with an already-calibrated camera, gradually expanding through the whole group until every camera has been calibrated relative to the reference camera.
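Expanding through the group amounts to composing the pairwise extrinsics along a chain. A sketch, assuming the common convention P_second = R * P_first + T (the patent does not state its convention explicitly):

```python
import numpy as np

def compose(R_ab, T_ab, R_bc, T_bc):
    """Pose of camera C relative to camera A from the pairwise
    extrinsics A->B and B->C.

    Assumed convention: P_b = R_ab @ P_a + T_ab, hence
    P_c = R_bc @ (R_ab @ P_a + T_ab) + T_bc.
    """
    return R_bc @ R_ab, R_bc @ T_ab + T_bc
```

Repeated composition expresses every camera's pose relative to the single reference camera, which is what the stepwise expansion in the text achieves.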
7. Examples:
The pixel coordinates of the key points detected for the same person in the two cameras are shown in fig. 2 and fig. 3; camera A corresponds to fig. 2 and camera B to fig. 3. The order of the key points in the two figures is: left wrist, right wrist, left elbow, right elbow, left shoulder, right shoulder, waist, left knee, right knee, left ankle, right ankle, neck, head.
The corresponding pixel coordinates of fig. 2 are as follows: (214, 290), (133, 283), (195, 259), (132, 253), (189, 235), (137, 230), (165, 277), (185, 320), (150, 317), (191, 375), (152, 373), (163, 236), (163, 207).
The corresponding pixel coordinates of fig. 3 are as follows: (390, 365), (429, 368), (399, 350), (429, 352), (402, 337), (427, 339), (414, 363), (405, 385), (422, 386), (402, 415), (422, 416), (414, 340), (414, 325).
The two cameras used are identical in specification, with a resolution of 1920 x 1080, an equivalent focal length of 3.49 mm and a pixel size of 2.9 microns. Therefore the intrinsic matrix K_A of camera A and the intrinsic matrix K_B of camera B are identical, i.e. K_A = K_B = [[1204, 0, 960], [0, 1204, 540], [0, 0, 1]].
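The intrinsic entries follow directly from the stated focal length and pixel size: 3.49 mm / 2.9 µm ≈ 1203 pixels (close to the 1204 used above), with the principal point at the image centre. A sketch of that construction:

```python
import numpy as np

def intrinsic_matrix(focal_mm, pixel_um, width, height):
    """Pinhole intrinsic matrix with the principal point assumed
    to lie at the image centre."""
    f_px = focal_mm * 1000.0 / pixel_um       # focal length in pixels
    return np.array([[f_px, 0.0, width / 2.0],
                     [0.0, f_px, height / 2.0],
                     [0.0, 0.0, 1.0]])
```

For example, `intrinsic_matrix(3.49, 2.9, 1920, 1080)` reproduces the matrix used in this embodiment up to rounding of the focal length.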
The pixel coordinates are converted into the corresponding camera coordinates; the conversion results are shown in Table 1:
Table 1. Camera coordinates of the key points in the two images
Solving for the essential matrix E from the camera coordinates in Table 1 gives the following result:
E= [ [8.23131564 -37.3013028 -7.83100586]
[-30.82140561 1.81615451 -26.21725608]
[ -3.81928167 -13.13169691 -8.62998182]]
Singular value decomposition of the solved E yields 4 solutions, respectively:
R 1 = [[0.91487685 0.12102523 0.38516652]
[0.28033413 -0.87696453 -0.39031525]
[-0.29053938 -0.46506571 0.83624204]]
R 2 = [[-0.91487685 -0.12102523 -0.38516652]
[-0.28033413 0.87696453 0.39031525]
[ 0.29053938 0.46506571 -0.83624204]]
t 1 = [[-0.33392648] [-0.20323632] [ 0.92042822]]
t 2 = [[ 0.33392648] [ 0.20323632] [-0.92042822]]
Solving the determinants of R_1 and R_2 separately gives det(R_1) = -1 and det(R_2) = 1, so the correct solution for the rotation matrix is R_2.
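The pipeline up to this point (building E from correspondences, decomposing it by SVD, and discarding reflected rotations via the determinant) can be sketched with the classical eight-point algorithm; this is a generic textbook formulation assumed here, not code from the patent:

```python
import numpy as np

def eight_point(pts_a, pts_b):
    """Linear estimate of E from >= 8 correspondences (homogeneous (u, v, 1)).

    Each pair contributes one row of A @ vec(E) = 0, derived from the
    epipolar constraint p_b^T E p_a = 0; the null vector of A gives E.
    """
    A = np.array([np.outer(pb, pa).ravel() for pa, pb in zip(pts_a, pts_b)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)

def decompose_essential(E):
    """SVD decomposition of E into the four (R, t) candidate solutions."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    # keep only proper rotations (det = +1), as the determinant test requires
    return [(R if np.linalg.det(R) > 0 else -R, s * t)
            for R in (R1, R2) for s in (1, -1)]
```

The sign ambiguity in t and the two rotation candidates produce exactly the four solution groups listed above; the determinant test then leaves two, and the depth test below picks the final one.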
Selecting the pixel coordinates P_A1 = (214, 290, 1) and P_B1 = (390, 365, 1) from Table 1 and solving by least squares: with t = t_1, z_A1 = 0.307 and z_B1 = 0.565, both positive, so the rotation matrix is R = R_2 and the translation matrix is t = t_1.
From the pixel coordinates H_A and H_B of the head in camera A and camera B, the calculation yields:
Taking the average pixel coordinates of the left and right ankles in camera A and camera B gives F_A = (171.5, 374) and F_B = (412, 415.5); the calculation yields:
Solving for H_A-3D and F_A-3D for 200 different pedestrians in the manner described above establishes constraints between the key-point distances. The three-dimensional straight-line distance from the head key point to the ankle key point is calculated as follows:
Looking up the local average height of about 1.65 m, the following equation is established:
thus, solve: s= 36.94.
The external parameter matrix between the two cameras is obtained as (R, T), wherein:
R=R 2 = [[-0.91487685 -0.12102523 -0.38516652]
[-0.28033413 0.87696453 0.39031525]
[ 0.29053938 0.46506571 -0.83624204]]
T=st 1 =[[-12.3427659] [-7.51223941] [34.01687632]]
A camera whose coordinates have been determined in the world coordinate system with camera A as the coordinate origin is selected as the reference camera, and the position coordinates in that world coordinate system of an adjacent camera whose position has not yet been determined are calculated. In the same way, the coordinates of all cameras in the camera group in the world coordinate system with camera A as the origin are calculated.
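Under the assumed convention P_B = R * P_A + T, camera B's position in the world frame anchored at camera A is C_B = -R^T * T. Applying this to the example's R_2 and T (a sketch; the patent does not print camera B's final coordinates):

```python
import numpy as np

def camera_position_in_world(R, T):
    """Position of the second camera in the first camera's (world) frame,
    assuming the convention P_second = R @ P_world + T."""
    return -R.T @ T

# R_2 and T from the worked example above
R2 = np.array([[-0.91487685, -0.12102523, -0.38516652],
               [-0.28033413,  0.87696453,  0.39031525],
               [ 0.29053938,  0.46506571, -0.83624204]])
T = np.array([-12.3427659, -7.51223941, 34.01687632])
C_B = camera_position_in_world(R2, T)   # camera B in camera A's world frame
```

By construction, mapping C_B back through (R_2, T) lands on the origin of camera B's own frame, which is a quick consistency check on the convention.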
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto. Any equivalent substitution or modification made, within the scope disclosed by the present invention, by a person skilled in the art according to the technical scheme of the present invention and its inventive concept shall be covered by the protection scope of the present invention.

Claims (4)

1. A method for calibrating the positions of monitoring camera equipment based on pedestrian ReID, characterized by comprising the following steps:
s1, acquiring coordinate positions of pedestrians in monitoring pictures of all cameras;
s2, selecting two cameras adjacent to each other, and calculating an external parameter matrix between the two cameras according to the coordinate positions of pedestrians;
s3, selecting either one of the two cameras in step S2 as a reference camera, and calculating the position of the other camera through the external parameter matrix;
s4, selecting any one of the two cameras with the determined positions in the step S3 as a new reference camera, and calculating the positions of cameras adjacent to the new reference camera; and so on, further calculating to obtain the positions of all cameras;
the specific steps of step S2 are as follows:
s21, selecting two cameras installed adjacent to each other according to the adjacency criterion; the adjacency criterion is as follows: the two cameras are installed adjacent to each other, and the same pedestrian appears in the monitoring pictures of both cameras at the same time; calculating the external parameter matrix between the two cameras through the position coordinates of the pedestrian; the external parameter matrix comprises a rotation matrix and an initial translation matrix;
s22, calculating a scale factor, solving the product of the scale factor and the initial translation matrix, and replacing and updating the initial translation matrix by using the product to further obtain an updated external parameter matrix;
the specific steps of step S21 are as follows:
s211, selecting a camera A and a camera B which meet the adjacent criterion;
s212, acquiring, from the database, the pixel coordinates of the same human-body key point of the same pedestrian in camera A and camera B at m different moments, forming m groups of coordinates; the m groups of coordinates are expressed as follows:
P_A1 = (u_A1, v_A1, 1), P_B1 = (u_B1, v_B1, 1);
P_A2 = (u_A2, v_A2, 1), P_B2 = (u_B2, v_B2, 1);
…
P_Ai = (u_Ai, v_Ai, 1), P_Bi = (u_Bi, v_Bi, 1);
…
P_Am = (u_Am, v_Am, 1), P_Bm = (u_Bm, v_Bm, 1);
wherein P_A1 represents the 1st group of pixel coordinates of the human-body key point in camera A, u_A1 the abscissa of P_A1, and v_A1 the ordinate of P_A1; P_B1 represents the 1st group of pixel coordinates of the human-body key point in camera B, u_B1 the abscissa of P_B1, and v_B1 the ordinate of P_B1;
P_A2 represents the 2nd group of pixel coordinates of the human-body key point in camera A, u_A2 the abscissa of P_A2, and v_A2 the ordinate of P_A2; P_B2 represents the 2nd group of pixel coordinates of the human-body key point in camera B, u_B2 the abscissa of P_B2, and v_B2 the ordinate of P_B2;
P_Ai represents the i-th group of pixel coordinates of the human-body key point in camera A, u_Ai the abscissa of P_Ai, and v_Ai the ordinate of P_Ai; P_Bi represents the i-th group of pixel coordinates of the human-body key point in camera B, u_Bi the abscissa of P_Bi, and v_Bi the ordinate of P_Bi;
P_Am represents the m-th group of pixel coordinates of the human-body key point in camera A, u_Am the abscissa of P_Am, and v_Am the ordinate of P_Am; P_Bm represents the m-th group of pixel coordinates of the human-body key point in camera B, u_Bm the abscissa of P_Bm, and v_Bm the ordinate of P_Bm;
s213, according to the epipolar geometry constraint, the m groups of coordinates are made to satisfy the following equations:
wherein P_B1^T represents the transpose of P_B1 in matrix form; P_B2^T represents the transpose of P_B2 in matrix form; P_Bm^T represents the transpose of P_Bm in matrix form; and E is the essential matrix between camera A and camera B;
s214, decomposing the essential matrix E by singular value decomposition into a rotation matrix and an initial translation matrix, obtaining four groups of solutions: (R_1, t_1), (R_2, t_2), (R_1, t_2) and (R_2, t_1), wherein R_1 represents the 1st rotation matrix solved and R_2 the 2nd rotation matrix solved; t_1 represents the 1st initial translation matrix solved and t_2 the 2nd initial translation matrix solved;
s215, calculating correct solutions in the four groups of solutions through a judgment criterion;
the specific steps of step S215 are as follows:
s2151, calculating the determinants of the rotation matrices R_1 and R_2; the rotation matrix whose determinant equals 1 is the correct solution R for the rotation matrix;
S2152, selecting the pixel coordinates P_Ai of camera A and the pixel coordinates P_Bi of camera B; according to the camera pinhole model, the pixel coordinates P_Ai and P_Bi satisfy the following calculation formula:
wherein K_A is the intrinsic matrix of camera A; K_B is the intrinsic matrix of camera B; t represents the correct solution corresponding to the initial translation matrix; P_Ai-3D represents the camera coordinates corresponding to the pixel coordinates P_Ai in the camera coordinate system of camera A;
z_Ai represents the camera coordinate value corresponding to the pixel coordinates P_Ai in the camera coordinate system of camera A;
z_Bi represents the camera coordinate value corresponding to the pixel coordinates P_Bi in the camera coordinate system of camera B;
s2153, substituting t = t_1 and t = t_2 respectively into the calculation formula of step S2152 to calculate z_Ai and z_Bi, obtaining the following sets of equations:
wherein x_Ai1, y_Ai1 and z_Ai1 are the X-, Y- and Z-axis coordinate values of the camera coordinates P_Ai-3D calculated using the initial translation matrix t_1;
x_Bi1, y_Bi1 and z_Bi1 are the X-, Y- and Z-axis coordinate values of the camera coordinates P_Bi-3D calculated using the initial translation matrix t_1;
x_Ai2, y_Ai2 and z_Ai2 are the X-, Y- and Z-axis coordinate values of the camera coordinates P_Ai-3D calculated using the initial translation matrix t_2;
x_Bi2, y_Bi2 and z_Bi2 are the X-, Y- and Z-axis coordinate values of the camera coordinates P_Bi-3D calculated using the initial translation matrix t_2;
S2154, if z_Ai1 and z_Bi1 are both greater than 0, the correct solution corresponding to the initial translation matrix is t_1; if z_Ai2 and z_Bi2 are both greater than 0, the correct solution corresponding to the initial translation matrix is t_2; the correct solution corresponding to the initial translation matrix is recorded as t;
Step S22 includes step S22A, step S22B, and step S22C; the specific steps of step S22A are as follows:
S22A1, selecting the head and ankle of a pedestrian as human-body key points, and acquiring the pixel coordinates of the two key points in camera A and camera B at the same moment: H_A = (u_HA, v_HA, 1), H_B = (u_HB, v_HB, 1), F_A = (u_FA, v_FA, 1) and F_B = (u_FB, v_FB, 1);
wherein H_A represents the pixel coordinates of the pedestrian's head in camera A, u_HA the abscissa of H_A, and v_HA the ordinate of H_A;
H_B represents the pixel coordinates of the pedestrian's head in camera B, u_HB the abscissa of H_B, and v_HB the ordinate of H_B;
F_A represents the pixel coordinates of the pedestrian's ankle in camera A, u_FA the abscissa of F_A, and v_FA the ordinate of F_A;
F_B represents the pixel coordinates of the pedestrian's ankle in camera B, u_FB the abscissa of F_B, and v_FB the ordinate of F_B;
S22A2, substituting the pixel coordinates H_A and H_B into the first conversion formula, which is as follows:
wherein x_HA, y_HA and z_HA are the X-, Y- and Z-axis coordinate values of the camera coordinates in camera A corresponding to the pixel coordinates H_A; z_HB is the Z-axis coordinate value of the camera coordinates in camera B corresponding to the pixel coordinates H_B; K_B represents the intrinsic matrix of camera B; s represents the scale factor;
S22A3, converting the pixel coordinates H_A into the camera coordinates H_A-3D corresponding to camera A by the first conversion formula; the camera coordinates H_A-3D are expressed as follows:
wherein Q_H represents the head intermediate matrix and b_H represents the head auxiliary matrix;
S22A4, introducing a function G to characterize the camera coordinates H_A-3D; the characterization takes the following form:
wherein G_HA-3D represents the result of the function G computed with respect to R, t, K_A, K_B, H_A and H_B;
the specific steps of step S22B are as follows:
S22B1, substituting the pixel coordinates F_A and F_B into the second conversion formula, which is as follows:
wherein x_FA, y_FA and z_FA are the X-, Y- and Z-axis coordinate values of the camera coordinates in camera A corresponding to the pixel coordinates F_A; z_FB is the Z-axis coordinate value of the camera coordinates in camera B corresponding to the pixel coordinates F_B; K_B represents the intrinsic matrix of camera B; s represents the scale factor;
S22B2, converting the pixel coordinates F_A into the camera coordinates F_A-3D corresponding to camera A by the second conversion formula; the camera coordinates F_A-3D are expressed as follows:
wherein Q_F represents the ankle intermediate matrix and b_F represents the ankle auxiliary matrix;
S22B3, introducing a function G to characterize the camera coordinates F_A-3D; the characterization takes the following form:
wherein G_FA-3D represents the result of the function G computed with respect to R, t, K_A, K_B, F_A and F_B;
the specific steps of step S22C are as follows:
S22C1, substituting H_A-3D and F_A-3D into the scale-factor formula to obtain the scale factor; the scale-factor formula is as follows:
wherein W represents the average body height in the area where the cameras are located; O represents the number of pedestrians appearing in the camera pictures; |G_HA-3D-i - G_FA-3D-i| represents the absolute value of the difference between G_HA-3D and G_FA-3D for the i-th pedestrian;
S22C2, multiplying the scale factor by the initial translation matrix, and replacing and updating the initial translation matrix with the product to obtain the updated external parameter matrix (R, T); the product formula is as follows:
wherein,Tis the product of the scale factor and the initial translation matrix, i.e., the final translation matrix.
2. The method for calibrating the position of a monitoring camera based on pedestrian ReID according to claim 1, wherein the specific steps of step S1 are as follows:
s11, acquiring all cameras to be positioned in a camera mounting place, and forming a camera group;
s12, acquiring coordinate positions of pedestrians in each frame of shooting pictures of each camera in the camera group, wherein the coordinate positions comprise pixel coordinates corresponding to human body key points of the pedestrians, and the human body key points comprise left wrists, right wrists, left elbows, right elbows, left shoulders, right shoulders, waists, left knees, right knees, left ankles, right ankles, necks and heads.
3. The method for calibrating the position of the monitoring camera based on the pedestrian ReID according to claim 2, wherein the specific steps of step S3 are as follows:
s31, selecting the camera A as a reference camera, and establishing a world coordinate system with the camera A as a coordinate origin;
s32, converting camera B into a coordinate point in the world coordinate system of step S31 through coordinate conversion with the rotation matrix R and the final translation matrix T, thereby obtaining the position of camera B.
4. A method for calibrating a position of a monitoring camera based on a pedestrian ReID according to claim 3, wherein the specific steps of step S4 are as follows:
s41, calculating the external parameter matrix between any two cameras meeting the adjacency criterion in the camera group;
s42, selecting a camera whose coordinates have been determined in the world coordinate system with camera A as the coordinate origin as the reference camera, and calculating the position coordinates in that world coordinate system of a camera adjacent to the reference camera whose position has not been determined;
s43, and so on, calculating the coordinates of all cameras in the camera group in the world coordinate system with the position of camera A as the origin of coordinates.
CN202310953744.4A 2023-08-01 2023-08-01 Pedestrian ReID-based calibration method for monitoring camera equipment position Active CN116704047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310953744.4A CN116704047B (en) 2023-08-01 2023-08-01 Pedestrian ReID-based calibration method for monitoring camera equipment position


Publications (2)

Publication Number Publication Date
CN116704047A CN116704047A (en) 2023-09-05
CN116704047B true CN116704047B (en) 2023-10-27

Family

ID=87836072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310953744.4A Active CN116704047B (en) 2023-08-01 2023-08-01 Pedestrian ReID-based calibration method for monitoring camera equipment position

Country Status (1)

Country Link
CN (1) CN116704047B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108711166A (en) * 2018-04-12 2018-10-26 浙江工业大学 A kind of monocular camera Scale Estimation Method based on quadrotor drone
CN109903341A (en) * 2019-01-25 2019-06-18 东南大学 Join dynamic self-calibration method outside a kind of vehicle-mounted vidicon
CN110858403A (en) * 2018-08-22 2020-03-03 杭州萤石软件有限公司 Method for determining scale factor in monocular vision reconstruction and mobile robot
CN110969668A (en) * 2019-11-22 2020-04-07 大连理工大学 Stereoscopic calibration algorithm of long-focus binocular camera
CN113160325A (en) * 2021-04-01 2021-07-23 长春博立电子科技有限公司 Multi-camera high-precision automatic calibration method based on evolutionary algorithm
CN116309851A (en) * 2023-05-19 2023-06-23 安徽云森物联网科技有限公司 Position and orientation calibration method for intelligent park monitoring camera

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816733B (en) * 2019-01-14 2023-08-18 京东方科技集团股份有限公司 Camera parameter initialization method and device, camera parameter calibration method and device and image acquisition system
US11756231B2 (en) * 2021-06-29 2023-09-12 Midea Group Co., Ltd. Method and apparatus for scale calibration and optimization of a monocular visual-inertial localization system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Calibration technology based on binocular stereo vision and its application; Tian Hao; Liu Chunmeng; Journal of Jilin University (Information Science Edition) (02); 120-128 *
A simple calibration method for obtaining camera installation height and tilt angle; Zhu Qiuyu; Zhu Ming; Zhao Baozhu; Video Engineering (12); 148-152 *

Also Published As

Publication number Publication date
CN116704047A (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN107392964B (en) The indoor SLAM method combined based on indoor characteristic point and structure lines
CN107818571B (en) Ship automatic tracking method and system based on deep learning network and average drifting
JP5453429B2 (en) Surveillance camera terminal
CN101996407A (en) Colour calibration method for multiple cameras
Snidaro et al. Sensor fusion for video surveillance
CN113077519B (en) Multi-phase external parameter automatic calibration method based on human skeleton extraction
CN106156714A (en) The Human bodys' response method merged based on skeletal joint feature and surface character
CN111996883B (en) Method for detecting width of road surface
CN112966628A (en) Visual angle self-adaptive multi-target tumble detection method based on graph convolution neural network
CN110348371A (en) Human body three-dimensional acts extraction method
CN114527294B (en) Target speed measuring method based on single camera
CN114612933B (en) Monocular social distance detection tracking method
CN107704851A (en) Character recognition method, Public Media exhibiting device, server and system
CN116704047B (en) Pedestrian ReID-based calibration method for monitoring camera equipment position
CN114266823A (en) Monocular SLAM method combining SuperPoint network characteristic extraction
CN110349209A (en) Vibrating spear localization method based on binocular vision
Kini et al. 3dmodt: Attention-guided affinities for joint detection & tracking in 3d point clouds
CN104156952B (en) A kind of image matching method for resisting deformation
JP2007109126A (en) Moving body distribution estimation device, moving body distribution estimation method, and moving body distribution estimation program
CN113421286B (en) Motion capturing system and method
CN105894505A (en) Quick pedestrian positioning method based on multi-camera geometrical constraint
CN115909396A (en) Dynamic target tracking method for foot type robot
CN113743380A (en) Active tracking method based on video image dynamic monitoring
Wang et al. Facilitating PTZ camera auto-calibration to be noise resilient with two images
CN114372996A (en) Pedestrian track generation method oriented to indoor scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant