CN108010089B - High-resolution image acquisition method based on binocular movable camera - Google Patents

High-resolution image acquisition method based on binocular movable camera

Info

Publication number
CN108010089B
CN108010089B CN201711402076.7A CN201711402076A
Authority
CN
China
Prior art keywords
camera
image
target area
parameter
cam
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201711402076.7A
Other languages
Chinese (zh)
Other versions
CN108010089A (en)
Inventor
崔智高
李爱华
王涛
李辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rocket Force University of Engineering of PLA
Original Assignee
Rocket Force University of Engineering of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rocket Force University of Engineering of PLA filed Critical Rocket Force University of Engineering of PLA
Priority to CN201711402076.7A priority Critical patent/CN108010089B/en
Publication of CN108010089A publication Critical patent/CN108010089A/en
Application granted granted Critical
Publication of CN108010089B publication Critical patent/CN108010089B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/593 Depth or shape recovery from multiple images from stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a method for acquiring a high-resolution image of a static target based on a binocular movable-camera vision system, building on an extension of traditional stereo vision theory. The coordinate association of a target between the two camera coordinate systems is established through the stereo rectification result and the disparity map of the images captured by the binocular movable cameras, from which the horizontal (pan) and vertical (tilt) rotation angles of the camera are determined; the correspondence between target-area size and zoom factor is established off-line, and the disparity reliability is estimated on-line to determine the zoom factor of the camera. After a target in the field of view is selected with either camera, the other camera positions the target at the center of its image at high resolution. The method can be applied to traffic, security, human-computer interaction and other fields, and has broad application prospects and practical value.

Description

High-resolution image acquisition method based on binocular movable camera
Technical Field
The invention relates to a high-resolution image acquisition method based on a binocular movable camera.
Background
With improvements in camera manufacturing, higher mechanical control precision and lower production costs, movable cameras are increasingly used in intelligent surveillance and are gradually replacing traditional static cameras. A movable camera can be regarded as a static camera with variable internal and external parameters, namely horizontal rotation (pan), vertical rotation (tilt) and focal-length variation (zoom). By adjusting these control parameters, a movable camera can change its focal length to obtain information about the same object or area at different resolutions, and can change its viewing angle to obtain monitoring information about different objects or areas in the scene. From the viewpoint of visual bionics, the image acquisition mode of a movable camera is consistent with the rotation mechanism of the human eye: humans generally observe a scene purposefully, and when the information the scene provides is insufficient, the eyes rotate by a certain angle and refocus until a clear image of the target is obtained. From a mathematical point of view, active image acquisition can simplify the derivation of visual models, making many problems that are underdetermined in conventional vision well-posed, or linearizing problems that are nonlinear in conventional vision. It follows that one trend in the development of intelligent monitoring systems is the use of movable cameras.
On the other hand, a single movable camera still has certain drawbacks in monitoring. First, its viewing angle is single: when occlusion occurs, blind areas may appear, and the information inside them cannot be acquired. Second, without prior knowledge of the scene, depth information cannot be acquired; classical stereo vision requires two cameras and estimates the depth of the target from the magnitude of the disparity. Finally, with a single movable camera there is often a conflict between the monitoring field of view and the resolution of the target, so panoramic information and high-resolution information about the target cannot be acquired simultaneously. Therefore, another trend in intelligent surveillance technology is to employ two movable cameras.
Disclosure of Invention
The invention aims to overcome the defects of the prior art. It constructs a binocular movable-camera vision system from two SONY EVI D70P cameras and, by introducing depth information, provides a method for acquiring high-resolution images of static targets based on this binocular movable-camera vision system.
In order to solve the technical problems, the invention provides the following technical scheme:
the invention provides a high-resolution image acquisition method based on a binocular movable camera, which comprises the following steps: let the two movable cameras be denoted Cam-1 and Cam-2, let their current parameters be (p1, t1, z1) and (p2, t2, z2), and let the corresponding observation images be I1 and I2, respectively; assume that a static target region has been determined in the Cam-1 image I1. The method estimates the parameters (p'2, t'2, z'2) of the movable camera Cam-2 such that the static target region appears at high resolution at the center of the Cam-2 image I2;
The specific calculation process of (p'2, t'2, z'2) is as follows:
(1) Calibrate the two movable cameras using a camera calibration method based on feature-point matching, i.e., obtain the principal point coordinates (u0, v0) of each camera and the relation f(z) describing how the focal length varies with the zoom parameter z (one way to represent f(z) is sketched following these steps);
(2) Apply a spherical stereo rectification algorithm to the two images I1 and I2, and denote the rectified images by I1r and I2r. This yields the pixel correspondence between I1 and I1r and between I2 and I2r, i.e., the mapping that takes each pixel [x1, y1, 1]T of the unrectified image I1 to its position [x1r, y1r, 1]T in the rectified image I1r, and the mapping that takes each pixel [x2, y2, 1]T of I2 to its position [x2r, y2r, 1]T in I2r;
(3) Off-line, acquire multiple groups of correspondences between target-area size and zoom parameter, and store them in the form of a lookup table;
(4) Compute the center position c1 of the static target region in image I1 and, using the correspondence between I1 and I1r obtained in step (2), obtain the position c1r in image I1r that corresponds to c1;
(5) Compute the disparity map of the rectified images I1r and I2r with a dynamic-programming stereo matching algorithm, and let d denote the disparity between I1r and I2r at the point c1r; the point in image I2r corresponding to c1r is then obtained as
c2r = c1r + [d, 0]T;
(6) Using the correspondence between I2r and I2 obtained in step (2), compute the position c2 in image I2 that corresponds to c2r;
(7) Let c2 = [u, v]T. Using the principal point coordinates (u0, v0) obtained in step (1) and the focal-length relation f(z), compute the pan parameter p'2 and the tilt parameter t'2 from the horizontal and vertical offsets of c2 with respect to the principal point (one common formulation is sketched following these steps);
(8) The zoom parameter z'2 is determined mainly by the size of the static target region. According to the current size of the target region and the table obtained in step (3), an initial value z0 of the zoom parameter is obtained by table lookup. A disparity reliability rd ∈ [0, 1] is then introduced, determined from the variance of the disparity estimates of the pixels in the target region, where λ is a control parameter of the disparity reliability rd taking a value in the range 0 to 1: the smaller the variance of the disparity estimate at coordinates (i, j) of the static target region, the more accurate the disparity estimation and the larger the reliability rd. The final zoom parameter z'2 is jointly determined by the initial value z0 and the reliability rd (see the sketches following these steps), i.e.
z'2 = z0(0.7 + 0.3rd)
That is, when the disparity estimation accuracy is low, a smaller zoom parameter is assigned to ensure that the static target region stays within the Cam-2 image I2; otherwise, a larger zoom parameter is assigned.
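As an illustration of step (1), the relation f(z) between focal length and zoom parameter can be represented by calibrating the camera at a few discrete zoom positions and interpolating between them. The sketch below is a minimal example of that idea; the zoom positions and focal-length values are hypothetical placeholders, not measurements of the SONY EVI D70P.

```python
import numpy as np

# Hypothetical calibration results: focal length (in pixels) measured at a few
# discrete zoom positions by a feature-point-based calibration routine.
zoom_positions = np.array([0, 4, 8, 12, 16])                        # zoom control values (assumed)
focal_lengths  = np.array([540.0, 980.0, 1750.0, 3100.0, 5400.0])   # focal lengths in pixels (assumed)

def f_of_z(z):
    """Return the interpolated focal length f(z) for a zoom parameter z."""
    return float(np.interp(z, zoom_positions, focal_lengths))

if __name__ == "__main__":
    print(f_of_z(6.0))  # focal length between the second and third calibrated positions
```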
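Step (7) computes the pan and tilt parameters from the offset of c2 with respect to the principal point; the exact formulas are not reproduced in the text above, so the sketch below uses one common formulation in which the pixel offsets are converted to angles through the focal length in pixels and added to the current pan/tilt. The additive update and the sign conventions are assumptions, not a restatement of the claimed formulas.

```python
import math

def pan_tilt_to_center(u, v, u0, v0, focal_px, pan_cur, tilt_cur):
    """Estimate new pan/tilt angles (degrees) that move pixel (u, v) to the image center.

    Assumes a pinhole model with focal length `focal_px` in pixels and an additive
    update of the current pan/tilt angles; both are modeling choices, not taken
    verbatim from the patent text.
    """
    d_pan  = math.degrees(math.atan2(u - u0, focal_px))   # horizontal offset -> pan angle
    d_tilt = math.degrees(math.atan2(v - v0, focal_px))   # vertical offset -> tilt angle
    return pan_cur + d_pan, tilt_cur + d_tilt

# Example: target center found 200 px right of and 80 px below the principal point.
print(pan_tilt_to_center(u=840, v=440, u0=640, v0=360, focal_px=1200.0,
                         pan_cur=10.0, tilt_cur=-5.0))
```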
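Step (8) fixes the blend z'2 = z0(0.7 + 0.3rd), but the expression for the disparity reliability rd is not reproduced in the text above. The sketch below therefore uses one plausible choice, rd = exp(-λ · mean disparity variance), which has the stated behaviour (smaller variance, larger rd); that choice and all numeric values are assumptions.

```python
import numpy as np

def disparity_reliability(disparity_var, lam=0.5):
    """Map the per-pixel disparity variance of the target region to r_d in [0, 1].

    The exponential form is an assumed example with the behaviour described in
    step (8): low variance -> r_d near 1, high variance -> r_d near 0.
    """
    return float(np.exp(-lam * np.mean(disparity_var)))

def final_zoom(z0, r_d):
    """Blend of the table-lookup initial zoom z0 and the reliability r_d (formula from step (8))."""
    return z0 * (0.7 + 0.3 * r_d)

# Hypothetical per-pixel disparity variances inside the target region.
var_map = np.full((40, 60), 0.2)
r_d = disparity_reliability(var_map, lam=0.5)
print(r_d, final_zoom(z0=8.0, r_d=r_d))
```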
The invention is based on an extension of traditional stereo vision theory. From the perspective of computer stereo vision, depth is defined as the distance from a spatial point in the scene to the baseline of the two cameras. There are two main approaches to acquiring depth information. One is based on physical sensors, such as ultrasonic or laser ranging; these have high depth-measurement accuracy but relatively high cost, and the measured depth values do not correspond to every pixel of the image acquired by the camera. The other is based on classical stereo vision: the relative positions of corresponding pixels of the same target in two images are found by combining the camera imaging model with computational geometry, and the depth of the target is then deduced. In visual surveillance and other vision-related applications, methods based on classical stereo vision are generally chosen.
Detailed Description
The following description of the preferred embodiments of the present invention is provided for the purpose of illustration and description, and is in no way intended to limit the invention.
A high-resolution image acquisition method based on a binocular movable camera comprises the following steps: let the two movable cameras be denoted Cam-1 and Cam-2, let their current parameters be (p1, t1, z1) and (p2, t2, z2), and let the corresponding observation images be I1 and I2, respectively; assume that a static target region has been determined in the Cam-1 image I1. The method estimates the parameters (p'2, t'2, z'2) of the movable camera Cam-2 such that the static target region appears at high resolution at the center of the Cam-2 image I2;
(p'2,t'2,z'2) The specific calculation process of (2) is as follows:
(1) Calibrate the two movable cameras using a camera calibration method based on feature-point matching, i.e., obtain the principal point coordinates (u0, v0) of each camera and the relation f(z) describing how the focal length varies with the zoom parameter z;
(2) Apply a spherical stereo rectification algorithm to the two images I1 and I2, and denote the rectified images by I1r and I2r. This yields the pixel correspondence between I1 and I1r and between I2 and I2r, i.e., the mapping that takes each pixel [x1, y1, 1]T of the unrectified image I1 to its position [x1r, y1r, 1]T in the rectified image I1r, and the mapping that takes each pixel [x2, y2, 1]T of I2 to its position [x2r, y2r, 1]T in I2r;
(3) Off-line, acquire multiple groups of correspondences between target-area size and zoom parameter, and store them in the form of a lookup table (a minimal realization of this table is sketched following these steps);
(4) Compute the center position c1 of the static target region in image I1 and, using the correspondence between I1 and I1r obtained in step (2), obtain the position c1r in image I1r that corresponds to c1;
(5) Compute the disparity map of the rectified images I1r and I2r with a dynamic-programming stereo matching algorithm, and let d denote the disparity between I1r and I2r at the point c1r; the point in image I2r corresponding to c1r is then obtained as
c2r = c1r + [d, 0]T
(the full transfer of the target center through steps (2) and (4)-(6) is sketched following these steps);
(6) Using the correspondence between I2r and I2 obtained in step (2), compute the position c2 in image I2 that corresponds to c2r;
(7) Let c2 = [u, v]T. Using the principal point coordinates (u0, v0) obtained in step (1) and the focal-length relation f(z), compute the pan parameter p'2 and the tilt parameter t'2 from the horizontal and vertical offsets of c2 with respect to the principal point;
(8) The zoom parameter z'2 is determined mainly by the size of the static target region. According to the current size of the target region and the table obtained in step (3), an initial value z0 of the zoom parameter is obtained by table lookup. A disparity reliability rd ∈ [0, 1] is then introduced, determined from the variance of the disparity estimates of the pixels in the target region, where λ is a control parameter of the disparity reliability rd taking a value in the range 0 to 1: the smaller the variance of the disparity estimate at coordinates (i, j) of the static target region, the more accurate the disparity estimation and the larger the reliability rd. The final zoom parameter z'2 is jointly determined by the initial value z0 and the reliability rd, i.e.
z'2 = z0(0.7 + 0.3rd)
That is, when the disparity estimation accuracy is low, a smaller zoom parameter is assigned to ensure that the static target region stays within the Cam-2 image I2; otherwise, a larger zoom parameter is assigned.
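As a concrete realization of the table of step (3) and the lookup of step (8), the sketch below stores sample pairs of target-region size and zoom parameter and interpolates between them. The sample values and the piecewise-linear interpolation are assumptions.

```python
import numpy as np

# Hypothetical off-line table: target-region size (pixels, e.g. the larger of width
# and height in the wide-angle view) versus the zoom parameter that frames it well.
region_sizes = np.array([20.0, 40.0, 80.0, 160.0, 320.0])
zoom_values  = np.array([14.0, 10.0,  6.0,   3.0,   1.0])

def initial_zoom(region_size):
    """Look up the initial zoom value z0 for the current target-region size."""
    # np.interp requires increasing x-values; the table is already sorted by size.
    return float(np.interp(region_size, region_sizes, zoom_values))

print(initial_zoom(100.0))  # z0 for a 100-pixel target region
```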
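Steps (2) and (4)-(6) transfer the target center c1 from I1 to I2: map it forward into the rectified image I1r, shift it by the disparity d at that pixel (c2r = c1r + [d, 0]T as in step (5)), and map it back from I2r into I2. The sketch below assumes that the spherical rectification of step (2) has already produced dense pixel maps in both directions, and it uses OpenCV's semi-global block matcher only as a stand-in for the dynamic-programming matcher named in step (5); the map names, the matcher parameters and the helper function are assumptions.

```python
import numpy as np
import cv2

def transfer_center(c1, fwd_map_1, back_map_2, rect_left, rect_right):
    """Transfer the target center c1 = (x, y) in I1 to its position c2 in I2.

    fwd_map_1[y, x]  -> (x_r, y_r): pixel of I1 mapped into the rectified image I1r.
    back_map_2[y, x] -> (x_o, y_o): pixel of I2r mapped back into the original I2.
    Both maps are assumed to come from the spherical rectification of step (2).
    rect_left, rect_right: 8-bit grayscale rectified images I1r and I2r.
    """
    x1, y1 = c1
    x1r, y1r = fwd_map_1[int(round(y1)), int(round(x1))]

    # Stand-in for the dynamic-programming stereo matcher of step (5).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disp = matcher.compute(rect_left, rect_right).astype(np.float32) / 16.0

    d = disp[int(round(y1r)), int(round(x1r))]   # disparity at c1r
    x2r, y2r = x1r + d, y1r                      # c2r = c1r + [d, 0]^T
    x2, y2 = back_map_2[int(round(y2r)), int(round(x2r))]
    return float(x2), float(y2)
```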
The invention is based on an extension of traditional stereo vision theory. From the perspective of computer stereo vision, depth is defined as the distance from a spatial point in the scene to the baseline of the two cameras. There are two main approaches to acquiring depth information. One is based on physical sensors, such as ultrasonic or laser ranging; these have high depth-measurement accuracy but relatively high cost, and the measured depth values do not correspond to every pixel of the image acquired by the camera. The other is based on classical stereo vision: the relative positions of corresponding pixels of the same target in two images are found by combining the camera imaging model with computational geometry, and the depth of the target is then deduced. In visual surveillance and other vision-related applications, methods based on classical stereo vision are generally chosen.
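For a rectified stereo pair with baseline B and focal length f (in pixels), the depth discussed above follows from the disparity d via the standard relation Z = f·B/d; a one-line check with illustrative numbers:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from its disparity in a rectified stereo pair (Z = f*B/d)."""
    return focal_px * baseline_m / disparity_px

# Illustrative values: 1200 px focal length, 0.3 m baseline, 24 px disparity.
print(depth_from_disparity(1200.0, 0.3, 24.0))  # -> 15.0 m
```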
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (1)

1. A high-resolution image acquisition method based on a binocular movable camera, characterized by comprising the following steps: let the two movable cameras be denoted Cam-1 and Cam-2, let their current parameters be (p1, t1, z1) and (p2, t2, z2), and let the corresponding observation images be I1 and I2, respectively; assume that a static target region has been determined in the Cam-1 image I1; estimate the parameters (p'2, t'2, z'2) of the movable camera Cam-2 such that the static target region appears at high resolution at the center of the Cam-2 image I2;
The specific calculation process of (p'2, t'2, z'2) is as follows:
(1) Calibrate the two movable cameras using a camera calibration method based on feature-point matching, i.e., obtain the principal point coordinates (u0, v0) of each camera and the relation f(z) describing how the focal length varies with the zoom parameter z;
(2) Apply a spherical stereo rectification algorithm to the two images I1 and I2, and denote the rectified images by I1r and I2r. This yields the pixel correspondence between I1 and I1r and between I2 and I2r, i.e., the mapping that takes each pixel [x1, y1, 1]T of the unrectified image I1 to its position [x1r, y1r, 1]T in the rectified image I1r, and the mapping that takes each pixel [x2, y2, 1]T of I2 to its position [x2r, y2r, 1]T in I2r;
(3) Off-line, acquire multiple groups of correspondences between target-area size and zoom parameter, and store them in the form of a lookup table;
(4) Compute the center position c1 of the static target region in image I1 and, using the correspondence between I1 and I1r obtained in step (2), obtain the position c1r in image I1r that corresponds to c1;
(5) Compute the disparity map of the rectified images I1r and I2r with a dynamic-programming stereo matching algorithm, and let d denote the disparity between I1r and I2r at the point c1r; the point in image I2r corresponding to c1r is then obtained as
c2r = c1r + [d, 0]T;
(6) Using the correspondence between I2r and I2 obtained in step (2), compute the position c2 in image I2 that corresponds to c2r;
(7) Let c2 = [u, v]T. Using the principal point coordinates (u0, v0) obtained in step (1) and the focal-length relation f(z), compute the pan parameter p'2 and the tilt parameter t'2 from the horizontal and vertical offsets of c2 with respect to the principal point;
(8) The zoom parameter z'2 is determined by the size of the static target region: according to the current size of the target region and the table obtained in step (3), an initial value z0 of the zoom parameter is obtained by table lookup; a disparity reliability rd ∈ [0, 1] is then introduced, determined from the variance of the disparity estimates of the pixels in the target region, where λ is a control parameter of the disparity reliability rd taking a value in the range 0 to 1; the smaller the variance of the disparity estimate at coordinates (i, j) of the static target region, the more accurate the disparity estimation and the larger the reliability rd; the final zoom parameter z'2 is jointly determined by the initial value z0 and the reliability rd, i.e.
z'2 = z0(0.7 + 0.3rd),
when the disparity estimation accuracy is low, a smaller zoom parameter is given to ensure that the static target region stays within the Cam-2 image I2; otherwise, a larger zoom parameter is given.
CN201711402076.7A 2017-12-22 2017-12-22 High-resolution image acquisition method based on binocular movable camera Expired - Fee Related CN108010089B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711402076.7A CN108010089B (en) 2017-12-22 2017-12-22 High-resolution image acquisition method based on binocular movable camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711402076.7A CN108010089B (en) 2017-12-22 2017-12-22 High-resolution image acquisition method based on binocular movable camera

Publications (2)

Publication Number Publication Date
CN108010089A CN108010089A (en) 2018-05-08
CN108010089B true CN108010089B (en) 2021-09-07

Family

ID=62060540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711402076.7A Expired - Fee Related CN108010089B (en) 2017-12-22 2017-12-22 High-resolution image acquisition method based on binocular movable camera

Country Status (1)

Country Link
CN (1) CN108010089B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991306B (en) * 2019-11-27 2024-03-08 北京理工大学 Self-adaptive wide-field high-resolution intelligent sensing method and system
CN114095644B (en) * 2020-08-24 2023-08-04 武汉Tcl集团工业研究院有限公司 Image correction method and computer equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103024350A (en) * 2012-11-13 2013-04-03 清华大学 Master-slave tracking method for binocular PTZ (Pan-Tilt-Zoom) visual system and system applying same
CN106600645A (en) * 2016-11-24 2017-04-26 大连理工大学 Quick extraction method for space three-dimensional calibration of camera
CN107103626A (en) * 2017-02-17 2017-08-29 杭州电子科技大学 A kind of scene reconstruction method based on smart mobile phone

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7720282B2 (en) * 2005-08-02 2010-05-18 Microsoft Corporation Stereo image segmentation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103024350A (en) * 2012-11-13 2013-04-03 清华大学 Master-slave tracking method for binocular PTZ (Pan-Tilt-Zoom) visual system and system applying same
CN106600645A (en) * 2016-11-24 2017-04-26 大连理工大学 Quick extraction method for space three-dimensional calibration of camera
CN107103626A (en) * 2017-02-17 2017-08-29 杭州电子科技大学 A kind of scene reconstruction method based on smart mobile phone

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Moving Shadow Detection Based on Multi-feature Fusion; Hua Wang et al.; 2016 8th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC); 2016-12-15; full text *
Binocular cooperative multi-resolution active tracking method (双目协同多分辨率主动跟踪方法); 崔智高 et al.; Infrared and Laser Engineering (红外与激光工程); 2013-12-31; Vol. 42, No. 12; full text *

Also Published As

Publication number Publication date
CN108010089A (en) 2018-05-08

Similar Documents

Publication Publication Date Title
CN107977997B (en) Camera self-calibration method combined with laser radar three-dimensional point cloud data
US10271036B2 (en) Systems and methods for incorporating two dimensional images captured by a moving studio camera with actively controlled optics into a virtual three dimensional coordinate system
US8593524B2 (en) Calibrating a camera system
CN109348119B (en) Panoramic monitoring system
WO2019114617A1 (en) Method, device, and system for fast capturing of still frame
US6847392B1 (en) Three-dimensional structure estimation apparatus
US20130242059A1 (en) Primary and auxiliary image capture devices for image processing and related methods
EP2728374A1 (en) Invention relating to the hand-eye calibration of cameras, in particular depth image cameras
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
KR20140090775A (en) Correction method of distortion image obtained by using fisheye lens and image display system implementing thereof
JP4865065B1 (en) Stereo imaging device control system
KR20130121290A (en) Georeferencing method of indoor omni-directional images acquired by rotating line camera
Patel et al. Distance measurement system using binocular stereo vision approach
CN108010089B (en) High-resolution image acquisition method based on binocular movable camera
CN113048888A (en) Binocular vision-based remote three-dimensional displacement measurement method and system
KR101745493B1 (en) Apparatus and method for depth map generation
CN113487683A (en) Target tracking system based on trinocular vision
CN113534737A (en) PTZ (Pan/Tilt/zoom) dome camera control parameter acquisition system based on multi-view vision
BR112021008558A2 (en) apparatus, disparity estimation method, and computer program product
KR100399047B1 (en) The Apparatus and Method for Vergence Control of Crossing Axis Stereo Camera
CN109990756B (en) Binocular ranging method and system
JP6734994B2 (en) Stereo measuring device and system
JP5925109B2 (en) Image processing apparatus, control method thereof, and control program
CN110068308B (en) Distance measurement method and distance measurement system based on multi-view camera
CN108535252A (en) A kind of binocular stereo vision food recognition methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210907