CN112880563A - Single-dimensional pixel combination mode equivalent narrow-area-array camera spatial position measuring method - Google Patents

Single-dimensional pixel combination mode equivalent narrow-area-array camera spatial position measuring method

Info

Publication number
CN112880563A
Authority
CN
China
Prior art keywords
camera
area
array
narrow
equivalent narrow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110088136.2A
Other languages
Chinese (zh)
Other versions
CN112880563B (en)
Inventor
高扬 (Gao Yang)
崔恒宇 (Cui Hengyu)
王旭 (Wang Xu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202110088136.2A priority Critical patent/CN112880563B/en
Publication of CN112880563A publication Critical patent/CN112880563A/en
Application granted granted Critical
Publication of CN112880563B publication Critical patent/CN112880563B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a single-dimensional pixel combination (binning) mode equivalent narrow-area-array camera spatial position measuring method. The method establishes a trinocular vision measuring system from three industrial area-array cameras, constructs equivalent narrow-area-array imaging by binning the cameras in different directions, collects target images, extracts the sub-pixel center positions of feature identifiers in the narrow-area-array images, and calculates the spatial position coordinates of the feature identifiers in the measuring-system coordinate system from the calibrated internal and external parameters of the trinocular cameras. The invention provides a precise dynamic spatial position measuring method between area-array and linear-array vision: an equivalent narrow-area-array camera based on single-dimensional binning realizes spatial position measurement through trinocular combination, meets the requirements of high-precision and high-speed measurement, and is of important significance for dynamic precise spatial position measurement.

Description

Single-dimensional pixel combination mode equivalent narrow-area-array camera spatial position measuring method
Technical Field
The invention belongs to the field of dynamic precise spatial position measurement, and particularly relates to a spatial position measurement method of an equivalent narrow-area-array camera based on a single-dimensional pixel merging mode (namely binning).
Background
The dynamic precise spatial position measurement is an important field of mechanical test, and is widely applied in the fields of aircraft wind tunnel test, ship towing test, large-size component assembly, motion process monitoring and the like.
Dynamic precise spatial position measurement places high requirements on a measurement system: firstly, the spatial-resolution requirement is high, since higher resolution is an important premise of precise spatial position measurement; secondly, the sampling-rate requirement is high, since the measurement system must have a sufficient sampling rate to capture detail changes during target motion; thirdly, the sensitivity requirement is high, since the sampling rate is high and the integration time of each sample is short, so high sensitivity is an important guarantee of target measurement precision and measurement range; fourthly, synchronous measurement capability for multiple targets is required, since ensuring time synchronism reduces the introduction of dynamic errors.
For the precise space position measurement of the conventional and large-scale space dynamic target, the stereoscopic vision measurement method based on the area-array camera or the line-array camera is widely applied due to the non-contact measurement, simple configuration, higher measurement precision and flexible measurement mode.
However, under highly dynamic conditions, the traditional binocular area-array camera measurement system and the three-linear-array camera measurement system both have certain defects. For a binocular area-array camera measurement system, factors such as high-resolution image acquisition and transmission and image processing time limit the measurement rate of the system, and a high-frame-rate camera is costly and bulky. For a three-linear-array camera system, a cylindrical lens is used to compress the imaging field of view along the direction normal to the pixel arrangement, so the measured target is easily interfered with, multi-target imaging cannot be realized because of target aliasing, and targets can only be lit cyclically; meanwhile, owing to the cylindrical-lens imaging principle, a linear-array camera has a small aperture stop and low incident light flux, so the measured target must have high-brightness light-emitting characteristics, which limits the application range of the linear-array camera.
Therefore, the traditional binocular area-array camera measuring system and the three-line-array camera measuring system have respective defects, and various requirements of dynamic precise space position measurement are difficult to balance.
Disclosure of Invention
In order to solve the technical problems in the prior art, the invention aims to provide a spatial position measurement method for a single-dimensional pixel combination (binning) mode equivalent narrow-area-array camera, which overcomes the inherent defects of the traditional binocular area-array camera and the traditional three-linear-array camera, combines the advantages of both, compensates their defects to a certain extent, and realizes high-performance dynamic spatial position measurement.
In order to achieve the purpose and achieve the technical effect, the invention adopts the technical scheme that:
a single-dimensional pixel merging mode equivalent narrow-area array camera space position measuring method adopts a binding mode equivalent narrow-area array imaging mode to realize the capability of high-speed measurement; the high-precision measurement is realized by adopting a three-phase machine combination measurement mode, and the method relates to a bining mode imaging method, a trinocular combination, an equivalent narrow-area array trinocular vision measurement model, an equivalent narrow-area array trinocular vision measurement system camera internal parameter and external parameter calibration, a narrow-area array image processing method and the like.
An industrial area-array camera is converted into an equivalent narrow area array based on a single-dimensional pixel merging mode (i.e., binning), a trinocular vision measurement system is constructed from three equivalent narrow-area-array cameras, and the feature identifier constructed from a light spot is measured; the three cameras are arranged left, middle and right with intersecting fields of view to form the trinocular vision measurement system, wherein the left and right cameras adopt 1 × 4 binning (i.e., 4× pixel merging longitudinally, no merging transversely) and the middle camera adopts 4 × 1 binning (i.e., 4× pixel merging transversely, no merging longitudinally); the positions of the three cameras are fixed, and it is ensured that the camera positions do not change during measurement.
The equivalent narrow-area array trinocular vision measurement system establishes an equivalent narrow-area array trinocular vision measurement model and calibrates to obtain internal parameters of each camera and external parameters of spatial relationship between the cameras; the equivalent narrow-area-array trinocular vision measurement model establishes the relation between the pixel position of the measured feature identification image and the space position under the measurement coordinate system, and the model comprises internal parameters and external parameters of each camera.
The internal parameters and the external parameters of each camera in the equivalent narrow-area array trinocular vision measuring system are calibrated, and the method comprises the following steps:
(1) calibrating internal parameters of the three cameras by using the checkerboard target in a conventional mode;
(2) converting the internal parameters defined for the three cameras in conventional mode into the corresponding narrow-area-array internal parameters, according to the different binning directions of the cameras;
(3) using the checkerboard target, shooting the feature points on the target with the three cameras simultaneously while continuously changing the position and posture of the target, extracting the target feature points from multiple groups of images, and calibrating the rotation and translation relations among the cameras of the equivalent narrow-area-array trinocular vision measurement system.
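The intrinsic-parameter conversion of step (2) amounts to scaling the normalized focal length and principal-point coordinate along the binned axis. A minimal sketch in Python (NumPy); the function name and the numeric intrinsic values are illustrative assumptions, while the divide-by-4 follows the binning directions described above:

```python
import numpy as np

def bin_intrinsics(K, binning):
    """Convert conventional-mode intrinsics to the equivalent
    narrow-area-array intrinsics: the normalized focal length and
    principal-point coordinate along a binned axis shrink by the factor."""
    bx, by = binning                 # e.g. (1, 4) = 1x4 binning (vertical merge)
    Kb = K.astype(float).copy()
    Kb[0, 0] /= bx                   # a_x
    Kb[0, 2] /= bx                   # u_0
    Kb[1, 1] /= by                   # a_y
    Kb[1, 2] /= by                   # v_0
    return Kb

# hypothetical conventional-mode intrinsic matrix
K = np.array([[2400.0, 0.0, 1024.0],
              [0.0, 2400.0,  768.0],
              [0.0,    0.0,    1.0]])
K_lr  = bin_intrinsics(K, (1, 4))    # left/right cameras: a_y, v_0 -> 1/4
K_mid = bin_intrinsics(K, (4, 1))    # middle camera:      a_x, u_0 -> 1/4
```

The unbinned axis is untouched, so, for example, the left camera keeps its full horizontal resolution and calibration.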
The narrow-area-array image processing method extracts precise sub-pixel coordinates of the feature identification points. Under the irradiation of the illumination light source, the gray value of the highlighted light spot used as the feature identifier is far greater than that of the background, so the approximate position of the cooperative target can be determined through adaptive binarization and morphological operations; sub-pixel center extraction is then performed on the preliminarily screened light-spot image using a Hessian-matrix method, so that even in the presence of other high-brightness points in the background, the sub-pixel center of each required feature-identifier light spot can be acquired robustly.
According to the method, the equivalent narrow-area array camera based on the single-dimensional binning mode realizes spatial position measurement through trinocular combination, meets the requirements of high-precision and high-speed measurement, and has important significance for dynamic precise spatial position measurement.
Compared with the prior spatial position measuring technology based on area array vision and linear array vision, the invention has the advantages that:
(1) In a traditional binocular vision measurement system based on area-array cameras, factors such as high-resolution image acquisition and transmission and image processing time limit the measurement rate of the system, and a high-frame-rate camera is costly and bulky. Compared with a traditional area-array camera, the narrow-area-array camera after single-dimensional binning keeps the imaging field of view while raising the maximum sampling rate and sensitivity and reducing the data volume, with the measurement precision remaining essentially unchanged;
(2) For a three-linear-array camera system, a cylindrical lens is used to compress the imaging field of view along the direction normal to the pixel arrangement, so the measured target is easily interfered with, multi-target imaging cannot be realized because of target aliasing, and targets can only be lit cyclically; meanwhile, owing to the cylindrical-lens imaging principle, a linear-array camera has a small aperture stop and low incident light flux, so the measured target must have high-brightness light-emitting characteristics, which limits the application range of the linear-array camera. Compared with a one-dimensional linear-array camera, the narrow-area-array camera after single-dimensional binning has richer imaging and stronger anti-interference capability, and no specially designed cylindrical lens is needed, thereby avoiding the defects caused by large-view-angle cylindrical-lens imaging.
Drawings
FIG. 1 is a schematic diagram of converting an area-array camera into binning mode according to the present invention;
fig. 2 is a measurement schematic diagram of the present invention.
Detailed Description
The embodiments of the present invention will be described in detail with reference to the accompanying drawings, so that the advantages and features of the invention can be more easily understood by those skilled in the art and the scope of the invention is clearly defined.
As shown in figs. 1-2, a spatial position measurement method based on a single-dimensional pixel combination (binning) mode equivalent narrow-area-array camera adopts a binning-mode equivalent narrow-area-array imaging mode to achieve high-speed measurement capability, and adopts a three-camera combined measurement mode to achieve high-precision measurement; the method involves a binning-mode imaging method, trinocular combination, an equivalent narrow-area-array trinocular vision measurement model, calibration of the internal and external parameters of the cameras of the equivalent narrow-area-array trinocular vision measurement system, a narrow-area-array image processing method, and the like.
The specific implementation steps are as follows:
step 1, converting an industrial area-array camera into an equivalent narrow-area array based on a single-dimensional pixel merging mode (namely binning), constructing a trinocular vision measurement system through three equivalent narrow-area-array cameras, and measuring the characteristic identification constructed by the light spot.
Specifically, the trinocular vision measurement system is constructed from three equivalent narrow-area-array cameras arranged left, middle and right with intersecting fields of view, wherein the left and right cameras adopt 1 × 4 binning (i.e., 4× pixel merging longitudinally, no merging transversely) and the middle camera adopts 4 × 1 binning (i.e., 4× pixel merging transversely, no merging longitudinally); the positions of the three cameras are fixed, and it is ensured that the camera positions do not change during measurement.
Step 2, establishing an equivalent narrow-area-array trinocular vision measurement model according to the equivalent narrow-area-array trinocular vision measurement system in the step 1, and calibrating to obtain internal parameters of each camera and external parameters of spatial relations among the cameras;
specifically, a relation between the pixel position of the detected feature identification point image and the spatial position under a measurement coordinate system is established through an equivalent narrow-area-array trinocular vision measurement model, and the model comprises internal parameters and external parameters of each camera. The calibration steps of the internal parameters and the external parameters of each camera in the equivalent narrow-area array trinocular vision measurement system are as follows:
(1) calibrating internal parameters of the three cameras by using the checkerboard target in a conventional mode;
(2) converting the internal parameters defined for the three cameras in conventional mode into the corresponding narrow-area-array internal parameters, according to the different binning directions of the cameras;
(3) using the checkerboard target, shooting the feature points on the target with the three cameras simultaneously while continuously changing the position and posture of the target, extracting the target feature points from multiple groups of images, and calibrating the rotation and translation relations among the cameras of the trinocular camera measurement system.
Step 3, aiming at the feature identifier constructed from the light spot in step 1, the equivalent narrow-area-array cameras shoot and acquire images as equivalent narrow-area-array images, and the sub-pixel coordinates of the light-spot centers are extracted by a precision image processing method.
Specifically, the gray value of a highlight light spot used by the feature identifier is far greater than that of a background, and the general position of the cooperative target can be determined through self-adaptive binarization processing and morphological operation;
Sub-pixel center extraction is then performed on the preliminarily screened light-spot image using a Hessian-matrix method; even in the presence of other high-brightness spots in the background, the sub-pixel center of each required feature-identifier light spot can be obtained robustly, so that the precise sub-pixel coordinates of the feature identification points are extracted.
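As an illustrative stand-in for this pipeline, the sketch below (NumPy) locates a bright spot by thresholding, as in the binarization step, and then takes a gray-weighted centroid for the sub-pixel center. This is a simplification: the patent itself uses a Hessian-matrix method for the sub-pixel step, and the synthetic spot and all numeric values are invented for the demonstration.

```python
import numpy as np

def spot_center_subpixel(img, thresh_ratio=0.5):
    """Coarse localization by thresholding, then a gray-weighted centroid
    as a simplified substitute for Hessian-based sub-pixel extraction."""
    mask = img > thresh_ratio * img.max()   # coarse spot region
    ys, xs = np.nonzero(mask)
    w = img[ys, xs].astype(float)           # gray values as weights
    return float((xs * w).sum() / w.sum()), float((ys * w).sum() / w.sum())

# synthetic Gaussian spot whose true center is (20.3, 14.7)
yy, xx = np.mgrid[0:32, 0:40]
img = np.exp(-((xx - 20.3) ** 2 + (yy - 14.7) ** 2) / (2 * 2.0 ** 2))
cx, cy = spot_center_subpixel(img)
print(cx, cy)                               # close to (20.3, 14.7)
```

A centroid is adequate for an isolated, roughly symmetric spot; the Hessian-based method of the text is more robust to elongated or low-contrast spots.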
Step 4, the spatial position of the feature identification point is solved according to the equivalent narrow-area-array trinocular vision measurement model of step 2, the camera internal and external parameters obtained by calibration, and the light-spot image sub-pixel center coordinates extracted by each camera in step 3.
Step 101, converting an industrial area-array camera into an equivalent narrow-area-array camera.
As shown in fig. 1, the original imaging pixels of the industrial area-array camera have the same horizontal and vertical dimensions. After unidirectional 1 × 4 binning, the vertical pixel length is 4 times the horizontal. When the data are transmitted to the computer and displayed as an image, the display still uses equal horizontal and vertical pixel dimensions, so the image shows the effect of a narrow area array; this is called equivalent narrow-area-array imaging.
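The narrow-area-array effect of 1 × 4 binning can be simulated on an ordinary image array. A minimal sketch (NumPy); summing the merged rows mimics the charge accumulation, and hence the sensitivity gain, of hardware binning, though a real sensor may scale or clip differently, and the image size here is an assumption:

```python
import numpy as np

def bin_1x4(img):
    """1x4 binning: merge every 4 rows (longitudinal direction),
    leave the columns (transverse direction) untouched."""
    h, w = img.shape
    return img.reshape(h // 4, 4, w).sum(axis=1)

img = np.ones((2048, 2048))          # hypothetical full-frame image
narrow = bin_1x4(img)
print(narrow.shape)                  # (512, 2048): a narrow area array
```

Displayed with square pixels, the 512 × 2048 result appears vertically compressed, which is exactly the equivalent narrow-area-array image described above.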
Step 201, a single camera perspective projection model.
The single-camera perspective projection model is as follows:
$$\rho \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_1 & r_2 & r_3 & t_x \\ r_4 & r_5 & r_6 & t_y \\ r_7 & r_8 & r_9 & t_z \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \tag{1}$$

where $\rho$ is a proportionality coefficient; $p = [u\ v\ 1]^T$ and $q = [x\ y\ z\ 1]^T$ are the pixel coordinates of the feature point in the camera image and its coordinates in the measurement-system coordinate system, respectively; $K$ is the camera internal parameter matrix, determined by the internal parameters $a_x, a_y, u_0, v_0$, where $a_x$ and $a_y$ are the normalized focal lengths along the x-axis and y-axis and $(u_0, v_0)$ is the pixel coordinate of the image center; among the camera external parameters, $R$ and $t = [t_x\ t_y\ t_z]^T$ are the rotation matrix and translation vector of the camera coordinate system with respect to the measurement-system coordinate system, and $r_1$ to $r_9$ denote the entries of $R$.
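As a concrete illustration of equation (1), the following sketch (NumPy, with assumed intrinsic values) projects a 3-D point and divides out the proportionality coefficient ρ:

```python
import numpy as np

def project(K, R, t, q):
    """Equation (1): rho * [u, v, 1]^T = K [R | t] [x, y, z, 1]^T."""
    pc = R @ q + t                   # point in the camera frame
    uvw = K @ pc                     # rho * (u, v, 1); rho is the depth pc[2]
    return uvw[:2] / uvw[2]          # divide out rho -> pixel coordinates

# assumed intrinsics; camera frame coincident with the measurement frame
K = np.array([[2400.0, 0.0, 1024.0],
              [0.0, 2400.0,  768.0],
              [0.0,    0.0,    1.0]])
uv = project(K, np.eye(3), np.zeros(3), np.array([0.1, 0.0, 2.0]))
print(uv)                            # [1144.  768.]
```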
Step 202, the trinocular vision measurement model.
As shown in fig. 2, three cameras are combined into a three-view stereo vision measurement system in a left, middle and right arrangement, and coordinate systems of the cameras are established, and a middle camera coordinate system is used as a coordinate system of the whole measurement system.
According to the perspective projection model of the camera, the trinocular vision measurement model is obtained as follows:
$$\rho_L\, p_L = K_L\,[R_{ML}\ \ t_{ML}]\,q_M,\qquad \rho_M\, p_M = K_M\,[I\ \ 0]\,q_M,\qquad \rho_R\, p_R = K_R\,[R_{MR}\ \ t_{MR}]\,q_M \tag{2}$$

In the formula, $\rho_L$, $\rho_M$ and $\rho_R$ are the proportionality coefficients of the left, middle and right cameras; $q_M$ is the three-dimensional homogeneous coordinate of the feature point in the measurement-system coordinate system, i.e., the middle-camera coordinate system; $p_L$, $p_M$ and $p_R$ are its pixel coordinates in the projections, i.e., the images, of the left, middle and right cameras; $K_L$, $K_M$ and $K_R$ are the internal parameter matrices of the left, middle and right cameras; $R_{ML}, t_{ML}$ and $R_{MR}, t_{MR}$ are the rotation matrices and translation vectors of the left and right camera coordinate systems relative to the measurement-system coordinate system, i.e., the middle-camera coordinate system. Equation (2) is expanded according to equation (1) as follows:
$$\rho_i \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \begin{bmatrix} a_x^i & 0 & u_0^i \\ 0 & a_y^i & v_0^i \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_1^i & r_2^i & r_3^i & t_x^i \\ r_4^i & r_5^i & r_6^i & t_y^i \\ r_7^i & r_8^i & r_9^i & t_z^i \end{bmatrix} q_M,\qquad i \in \{L, M, R\} \tag{3}$$

wherein each parameter is defined as in equation (1), the index $L$, $M$, $R$ indicates the parameter of the corresponding left, middle or right camera, and for the middle camera $R = I$, $t = 0$. Eliminating the proportionality coefficient $\rho$ of each camera from equation (3) yields the following 6 linear equations:

$$\begin{aligned} &[a_x^i r_1^i-(u_i-u_0^i)r_7^i]\,x_M+[a_x^i r_2^i-(u_i-u_0^i)r_8^i]\,y_M+[a_x^i r_3^i-(u_i-u_0^i)r_9^i]\,z_M=(u_i-u_0^i)\,t_z^i-a_x^i t_x^i \\ &[a_y^i r_4^i-(v_i-v_0^i)r_7^i]\,x_M+[a_y^i r_5^i-(v_i-v_0^i)r_8^i]\,y_M+[a_y^i r_6^i-(v_i-v_0^i)r_9^i]\,z_M=(v_i-v_0^i)\,t_z^i-a_y^i t_y^i \end{aligned}\qquad i \in \{L, M, R\} \tag{4}$$
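The two linear equations contributed by each camera can be checked numerically. The sketch below (NumPy; camera parameters and the test point are invented) builds the coefficient rows for one camera and verifies that a synthetically projected point satisfies them:

```python
import numpy as np

def camera_rows(K, R, t, uv):
    """The two linear equations in (x, y, z) contributed by one camera,
    obtained by eliminating rho from the projection model."""
    ax, ay, u0, v0 = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = uv
    # rows of R: R[0] = (r1, r2, r3), R[1] = (r4, r5, r6), R[2] = (r7, r8, r9)
    A = np.array([ax * R[0] - (u - u0) * R[2],
                  ay * R[1] - (v - v0) * R[2]])
    b = np.array([(u - u0) * t[2] - ax * t[0],
                  (v - v0) * t[2] - ay * t[1]])
    return A, b

# sanity check: a point projected through [R | t] must satisfy A @ q = b
K = np.array([[2400.0, 0.0, 1024.0], [0.0, 2400.0, 768.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.2, 0.0, 0.0])
q = np.array([0.1, -0.05, 2.0])
pc = R @ q + t
uv = (K @ pc)[:2] / pc[2]
A, b = camera_rows(K, R, t, uv)
print(np.allclose(A @ q, b))  # True
```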
Step 203, the equivalent narrow-area-array trinocular vision measurement model.
Relative to the area-array trinocular vision measurement system, the number of image pixels and the unit pixel length of the equivalent narrow-area-array system change after binning, and therefore the normalized focal lengths and the image-center pixel coordinates change: the normalized focal length $a_y$ and center pixel coordinate $v_0$ of the left and right cameras in the y direction, and the normalized focal length $a_x$ and center pixel coordinate $u_0$ of the middle camera in the x direction, all become 1/4 of their normal-mode values. The equivalent narrow-area-array trinocular measurement model is thus converted from equation (4). For the left and right cameras ($i \in \{L, R\}$):

$$[a_x^i r_1^i-(u_i-u_0^i)r_7^i]\,x_M+[a_x^i r_2^i-(u_i-u_0^i)r_8^i]\,y_M+[a_x^i r_3^i-(u_i-u_0^i)r_9^i]\,z_M=(u_i-u_0^i)\,t_z^i-a_x^i t_x^i$$

$$\Big[\tfrac{a_y^i}{4} r_4^i-\big(v_i-\tfrac{v_0^i}{4}\big)r_7^i\Big]x_M+\Big[\tfrac{a_y^i}{4} r_5^i-\big(v_i-\tfrac{v_0^i}{4}\big)r_8^i\Big]y_M+\Big[\tfrac{a_y^i}{4} r_6^i-\big(v_i-\tfrac{v_0^i}{4}\big)r_9^i\Big]z_M=\big(v_i-\tfrac{v_0^i}{4}\big)t_z^i-\tfrac{a_y^i}{4}t_y^i$$

and for the middle camera (whose coordinate system is the measurement frame, $R = I$, $t = 0$):

$$\tfrac{a_x^M}{4}\,x_M-\big(u_M-\tfrac{u_0^M}{4}\big)z_M=0,\qquad a_y^M\,y_M-(v_M-v_0^M)\,z_M=0 \tag{5}$$
With all camera internal and external parameters known, once the pixel coordinates $(u_L, v_L)$, $(u_M, v_M)$, $(u_R, v_R)$ of the measured feature point on each camera image are acquired, the spatial coordinate $q_M = [x_M\ y_M\ z_M\ 1]^T$ of the measured feature point in the measurement-system coordinate system can be solved. Collecting the measured quantities, equation (5) can be converted into:
$$A\,q_M = b \tag{6}$$
where, writing $q_M$ here for the inhomogeneous coordinate $[x_M\ y_M\ z_M]^T$, $A$ is the 6 × 3 matrix stacking the bracketed coefficients of $x_M, y_M, z_M$ in the six equations of (5), and $b$ is the 6 × 1 vector of their right-hand sides:

$$A=\begin{bmatrix}A_L\\A_M\\A_R\end{bmatrix},\qquad b=\begin{bmatrix}b_L\\b_M\\b_R\end{bmatrix}$$

with each camera contributing the 2 × 3 block $A_i$ and 2 × 1 block $b_i$ formed from its two linear equations in (5).
note that after binning, the weights are not equal for the 6 linear equations in equation (5) or (6). For 2 equations constructed for each camera imaging, the standard deviation of the binned equation is 4 times that of the normal mode, and the weight of the binned equation is 1/16 of the equation in the normal mode, so the value of the weighting matrix W is obtained as follows:
Figure BDA0002911487490000081
and then obtaining the three-dimensional space coordinates of the measured characteristic points through weighted least square solution:
$$q_M=(A^T W A)^{-1} A^T W b \tag{8}$$
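Equations (5)-(8) combine into a small weighted triangulation routine. The following self-contained sketch (NumPy) stacks the six equations and applies the weighted least-squares solution of equation (8); the camera geometry and intrinsic values are invented for the test, and the binned intrinsics are assumed to be already folded into each K:

```python
import numpy as np

def camera_rows(K, R, t, uv):
    """Two linear equations in (x, y, z) from one view, cf. eqs. (4)/(5);
    any binning is assumed already folded into K."""
    ax, ay, u0, v0 = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = uv
    A = np.array([ax * R[0] - (u - u0) * R[2],
                  ay * R[1] - (v - v0) * R[2]])
    b = np.array([(u - u0) * t[2] - ax * t[0],
                  (v - v0) * t[2] - ay * t[1]])
    return A, b

def triangulate_weighted(cams, uvs, W):
    """Stack the 6 equations and solve q_M = (A^T W A)^-1 A^T W b, eq. (8)."""
    blocks = [camera_rows(K, R, t, uv) for (K, R, t), uv in zip(cams, uvs)]
    A = np.vstack([blk[0] for blk in blocks])
    b = np.concatenate([blk[1] for blk in blocks])
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# 1/16 weight on each camera's binned equation: v for left/right (1x4),
# u for the middle camera (4x1); equation order (uL, vL, uM, vM, uR, vR)
W = np.diag([1.0, 1/16, 1/16, 1.0, 1.0, 1/16])

# invented geometry: the middle camera defines the measurement frame
K = np.array([[2400.0, 0.0, 1024.0], [0.0, 2400.0, 768.0], [0.0, 0.0, 1.0]])
def Ry(a):  # rotation about the y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
cams = [(K, Ry(0.3),  np.array([0.5, 0.0, 0.0])),    # left
        (K, np.eye(3), np.zeros(3)),                  # middle
        (K, Ry(-0.3), np.array([-0.5, 0.0, 0.0]))]    # right
q_true = np.array([0.1, -0.05, 2.0])
uvs = [(K_ @ (R_ @ q_true + t_))[:2] / (R_ @ q_true + t_)[2]
       for K_, R_, t_ in cams]
print(np.allclose(triangulate_weighted(cams, uvs, W), q_true))  # True
```

With noise-free projections any weighting recovers the point exactly; the weighting matters once the binned image coordinates carry the larger measurement noise described in the text.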
Step 204, calibrating the internal and external parameters of the cameras of the equivalent narrow-area-array trinocular vision measurement system.
The planar checkerboard targets are respectively placed in the fields of view of the three cameras, and the internal and external parameters of the three cameras in conventional mode are respectively calibrated using the camera calibration method of Z. Zhang, "A flexible new technique for camera calibration", IEEE Trans. Pattern Analysis and Machine Intelligence, 2000.
Step 301, extracting the center sub-pixel coordinates of the light spot image by the narrow-area array image processing method.
The planar targets are respectively placed in the fields of view of the three cameras, and the sub-pixel coordinates of the light-spot centers in the images are extracted using the Hessian-matrix-based image processing method.
The parts of the invention not described in detail can be realized by adopting the prior art, and are not described herein.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent structures or equivalent flow transformations that are made by using the contents of the present specification and the drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (4)

1. A single-dimensional pixel combination mode equivalent narrow-area-array camera space position measuring method is characterized by comprising the following steps:
a. converting the industrial area-array camera into an equivalent narrow area array based on single-dimensional pixel-merging-mode binning, constructing an equivalent narrow-area-array trinocular vision measurement system from three equivalent narrow-area-array cameras, and measuring the feature identifier constructed from the light spot;
b. establishing an equivalent narrow-area-array trinocular vision measurement model according to the equivalent narrow-area-array trinocular vision measurement system in step a, and calibrating to obtain the internal parameters of each camera and the external parameters of the spatial relations among the cameras;
c. aiming at the characteristic identification constructed by the light spot in the step a, an equivalent narrow-area-array camera shoots and acquires an image of the equivalent narrow-area-array camera as an equivalent narrow-area-array image, and a sub-pixel coordinate of the center of the light spot is extracted by a precision image processing method;
d. calculating the spatial position of the measured feature identifier according to the equivalent narrow-area-array trinocular vision measurement system in step b, the camera internal and external parameters obtained by calibration, and the light-spot center sub-pixel coordinates extracted by each camera in step c.
2. The spatial position measurement method of the single-dimensional pixel combination mode equivalent narrow-area-array camera according to claim 1, characterized in that: in the step a, the trinocular vision measurement system constructed from three equivalent narrow-area-array cameras is realized as follows: the three cameras are arranged left, middle and right with intersecting fields of view to form a trinocular stereoscopic vision measurement system, wherein the left and right cameras adopt 1 × 4 binning, namely 4× pixel merging longitudinally and no merging transversely; the middle camera adopts 4 × 1 binning, namely 4× pixel merging transversely and no merging longitudinally; the positions of the three cameras are fixed, and it is ensured that the camera positions do not change during measurement.
3. The spatial position measurement method of the single-dimensional pixel combination mode equivalent narrow-area-array camera according to claim 1, characterized in that: in the step a, the light spot is used as the feature identifier, and the feature identifier lies in the common field of view of the trinocular cameras to ensure image acquisition by the equivalent narrow-area-array cameras.
4. The spatial position measurement method of the single-dimensional pixel combination mode equivalent narrow-area-array camera according to claim 1, characterized in that: in the step b, the method for establishing the equivalent narrow-area array trinocular vision measurement model comprises the following steps:
(1) single-camera perspective projection model
The single-camera perspective projection model is as follows:
$$\rho \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_1 & r_2 & r_3 & t_x \\ r_4 & r_5 & r_6 & t_y \\ r_7 & r_8 & r_9 & t_z \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \tag{1}$$

where $\rho$ is a proportionality coefficient; $p = [u\ v\ 1]^T$ and $q = [x\ y\ z\ 1]^T$ are the pixel coordinates of the feature point in the camera image and its coordinates in the measurement-system coordinate system, respectively; $K$ is the camera internal parameter matrix, determined by the internal parameters $a_x, a_y, u_0, v_0$, where $a_x$ and $a_y$ are the normalized focal lengths along the x-axis and y-axis and $(u_0, v_0)$ is the pixel coordinate of the image center; among the camera external parameters, $R$ and $t = [t_x\ t_y\ t_z]^T$ are the rotation matrix and translation vector of the camera coordinate system with respect to the measurement-system coordinate system, and $r_1$ to $r_9$ denote the entries of $R$;
(2) three-eye vision measuring model
For the trinocular vision measurement system, according to the single-camera perspective projection model and with the middle-camera coordinate system as the measurement-system coordinate system, the trinocular vision measurement model is obtained as follows:
$$\rho_L\, p_L = K_L\,[R_{ML}\ \ t_{ML}]\,q_M,\qquad \rho_M\, p_M = K_M\,[I\ \ 0]\,q_M,\qquad \rho_R\, p_R = K_R\,[R_{MR}\ \ t_{MR}]\,q_M \tag{2}$$

In the formula, $\rho_L$, $\rho_M$ and $\rho_R$ are the proportionality coefficients of the left, middle and right cameras; $q_M$ is the three-dimensional homogeneous coordinate of the feature point in the measurement-system coordinate system, i.e., the middle-camera coordinate system; $p_L$, $p_M$ and $p_R$ are its pixel coordinates in the projections, i.e., the images, of the left, middle and right cameras; $K_L$, $K_M$ and $K_R$ are the internal parameter matrices of the left, middle and right cameras; $R_{ML}, t_{ML}$ and $R_{MR}, t_{MR}$ are the rotation matrices and translation vectors of the left and right camera coordinate systems relative to the measurement-system coordinate system, i.e., the middle-camera coordinate system; formula (2) is expanded according to formula (1) as follows:
$$\rho_i \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \begin{bmatrix} a_x^i & 0 & u_0^i \\ 0 & a_y^i & v_0^i \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_1^i & r_2^i & r_3^i & t_x^i \\ r_4^i & r_5^i & r_6^i & t_y^i \\ r_7^i & r_8^i & r_9^i & t_z^i \end{bmatrix} q_M,\qquad i \in \{L, M, R\} \tag{3}$$

wherein each parameter is defined as in formula (1), the index $L$, $M$, $R$ indicates the parameter of the corresponding left, middle or right camera, and for the middle camera $R = I$, $t = 0$; eliminating the proportionality coefficients from formula (3) yields the following 6 linear equations:

$$\begin{aligned} &[a_x^i r_1^i-(u_i-u_0^i)r_7^i]\,x_M+[a_x^i r_2^i-(u_i-u_0^i)r_8^i]\,y_M+[a_x^i r_3^i-(u_i-u_0^i)r_9^i]\,z_M=(u_i-u_0^i)\,t_z^i-a_x^i t_x^i \\ &[a_y^i r_4^i-(v_i-v_0^i)r_7^i]\,x_M+[a_y^i r_5^i-(v_i-v_0^i)r_8^i]\,y_M+[a_y^i r_6^i-(v_i-v_0^i)r_9^i]\,z_M=(v_i-v_0^i)\,t_z^i-a_y^i t_y^i \end{aligned}\qquad i \in \{L, M, R\} \tag{4}$$
(3) equivalent narrow-area array trinocular vision measurement model
Relative to the area-array trinocular vision measurement system, the number of image pixels and the unit pixel length of the equivalent narrow-area-array system change after binning, so the normalized focal lengths and the image-center pixel coordinates change: the normalized focal length $a_y$ and center pixel coordinate $v_0$ of the left and right cameras in the y direction, and the normalized focal length $a_x$ and center pixel coordinate $u_0$ of the middle camera in the x direction, all become 1/4 of their normal-mode values; the equivalent narrow-area-array trinocular measurement model is converted from equation (4). For the left and right cameras ($i \in \{L, R\}$):

$$[a_x^i r_1^i-(u_i-u_0^i)r_7^i]\,x_M+[a_x^i r_2^i-(u_i-u_0^i)r_8^i]\,y_M+[a_x^i r_3^i-(u_i-u_0^i)r_9^i]\,z_M=(u_i-u_0^i)\,t_z^i-a_x^i t_x^i$$

$$\Big[\tfrac{a_y^i}{4} r_4^i-\big(v_i-\tfrac{v_0^i}{4}\big)r_7^i\Big]x_M+\Big[\tfrac{a_y^i}{4} r_5^i-\big(v_i-\tfrac{v_0^i}{4}\big)r_8^i\Big]y_M+\Big[\tfrac{a_y^i}{4} r_6^i-\big(v_i-\tfrac{v_0^i}{4}\big)r_9^i\Big]z_M=\big(v_i-\tfrac{v_0^i}{4}\big)t_z^i-\tfrac{a_y^i}{4}t_y^i$$

and for the middle camera (whose coordinate system is the measurement frame, $R = I$, $t = 0$):

$$\tfrac{a_x^M}{4}\,x_M-\big(u_M-\tfrac{u_0^M}{4}\big)z_M=0,\qquad a_y^M\,y_M-(v_M-v_0^M)\,z_M=0 \tag{5}$$
When all camera intrinsic and extrinsic parameters are known, acquiring the pixel coordinates (u_L, v_L), (u_M, v_M) and (u_R, v_R) of the measured feature point in each camera image allows the spatial coordinate q_M = [x_M y_M z_M 1]^T of the feature point in the measurement system coordinate system to be solved. From formula (5), the measurement problem is converted into:
$$
A q_M = b \tag{6}
$$
where, with the unknowns reduced to the inhomogeneous coordinates [x_M y_M z_M]^T and {m'}^i_{jk} denoting the (j,k) entry of the equivalent narrow-area-array projection matrix of camera i:

$$
A=\begin{bmatrix}
u'_L {m'}^L_{31}-{m'}^L_{11} & u'_L {m'}^L_{32}-{m'}^L_{12} & u'_L {m'}^L_{33}-{m'}^L_{13}\\
v'_L {m'}^L_{31}-{m'}^L_{21} & v'_L {m'}^L_{32}-{m'}^L_{22} & v'_L {m'}^L_{33}-{m'}^L_{23}\\
u'_M {m'}^M_{31}-{m'}^M_{11} & u'_M {m'}^M_{32}-{m'}^M_{12} & u'_M {m'}^M_{33}-{m'}^M_{13}\\
v'_M {m'}^M_{31}-{m'}^M_{21} & v'_M {m'}^M_{32}-{m'}^M_{22} & v'_M {m'}^M_{33}-{m'}^M_{23}\\
u'_R {m'}^R_{31}-{m'}^R_{11} & u'_R {m'}^R_{32}-{m'}^R_{12} & u'_R {m'}^R_{33}-{m'}^R_{13}\\
v'_R {m'}^R_{31}-{m'}^R_{21} & v'_R {m'}^R_{32}-{m'}^R_{22} & v'_R {m'}^R_{33}-{m'}^R_{23}
\end{bmatrix}
$$

$$
b=\begin{bmatrix}
{m'}^L_{14}-u'_L {m'}^L_{34}\\
{m'}^L_{24}-v'_L {m'}^L_{34}\\
{m'}^M_{14}-u'_M {m'}^M_{34}\\
{m'}^M_{24}-v'_M {m'}^M_{34}\\
{m'}^R_{14}-u'_R {m'}^R_{34}\\
{m'}^R_{24}-v'_R {m'}^R_{34}
\end{bmatrix}
$$
After binning, the weights of the 6 linear equations in formula (5), i.e. formula (6), are no longer equal. Of the 2 equations constructed from each camera image, the equation in the binned direction has a standard deviation 4 times that of normal mode, so its weight is 1/16 of the normal-mode equation's; the weighting matrix W is therefore:
$$
W=\operatorname{diag}\!\left(1,\ \frac{1}{16},\ \frac{1}{16},\ 1,\ 1,\ \frac{1}{16}\right)\tag{7}
$$

where the 6 diagonal entries correspond to the equations ordered camera by camera as (u_L, v_L, u_M, v_M, u_R, v_R); the binned directions (v for the left and right cameras, u for the middle camera) receive weight 1/16.
The three-dimensional spatial coordinates of the measured feature point are then obtained by the weighted least-squares solution:

$$
q_M=\left(A^{T}WA\right)^{-1}A^{T}Wb \tag{8}
$$
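The solution procedure of formulas (6) through (8) can be sketched as follows. This is an illustrative reconstruction, not the patent's code; the function name `triangulate_weighted` and the three-camera rig in the usage lines are hypothetical assumptions:

```python
import numpy as np

def triangulate_weighted(projs, pixels, weights):
    """Weighted least-squares triangulation: build A q = b from the
    per-camera projection equations and solve q = (A^T W A)^-1 A^T W b.

    projs:   three 3x4 projection matrices M_i = K_i [R_i | t_i]
    pixels:  (u_i, v_i) image coordinates of the feature point
    weights: one weight per linear equation (two per camera)
    """
    rows, rhs = [], []
    for M, (u, v) in zip(projs, pixels):
        m1, m2, m3 = M  # rows of the projection matrix
        rows.append(u * m3[:3] - m1[:3]); rhs.append(m1[3] - u * m3[3])
        rows.append(v * m3[:3] - m2[:3]); rhs.append(m2[3] - v * m3[3])
    A, b, W = np.array(rows), np.array(rhs), np.diag(weights)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Hypothetical rig: three parallel cameras on a 0.2 m baseline.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
def proj(tx):
    return K @ np.hstack([np.eye(3), np.array([[tx], [0.0], [0.0]])])
Ms = [proj(-0.2), proj(0.0), proj(0.2)]

X = np.array([0.1, 0.2, 5.0])  # true feature point in the middle-camera frame
pix = [(M @ np.append(X, 1.0))[:2] / (M @ np.append(X, 1.0))[2] for M in Ms]
W_diag = [1.0, 1 / 16, 1 / 16, 1.0, 1.0, 1 / 16]  # weights as in formula (7)
q = triangulate_weighted(Ms, pix, W_diag)  # recovers X up to floating point
```

With noise-free pixels the weighting does not change the answer; its effect appears when the binned coordinates carry the 4x larger measurement noise the patent describes.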
CN202110088136.2A 2021-01-22 2021-01-22 Single-dimensional pixel combination mode equivalent narrow-area-array camera spatial position measuring method Active CN112880563B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110088136.2A CN112880563B (en) 2021-01-22 2021-01-22 Single-dimensional pixel combination mode equivalent narrow-area-array camera spatial position measuring method


Publications (2)

Publication Number Publication Date
CN112880563A true CN112880563A (en) 2021-06-01
CN112880563B CN112880563B (en) 2021-12-28

Family

ID=76050186



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236091A (en) * 2008-01-31 2008-08-06 北京控制工程研究所 Visual light navigation sensor
US20100214416A1 (en) * 2006-02-01 2010-08-26 Hannu Ruuska Device for Monitoring a Web
CN106289106A (en) * 2016-08-04 2017-01-04 北京航空航天大学 Stereo vision sensor that a kind of line-scan digital camera and area array cameras combine and scaling method
CN107657604A (en) * 2017-09-06 2018-02-02 西安交通大学 A kind of polishing scratch three-dimensional appearance original position acquisition methods based near field non-standard light source
CN109541791A (en) * 2019-01-30 2019-03-29 清华大学 High-resolution light field micro imaging system and method based on sub-pix translation
CN110610505A (en) * 2019-09-25 2019-12-24 中科新松有限公司 Image segmentation method fusing depth and color information


Non-Patent Citations (2)

Title
YEYEODU: "A Rapid, Inexpensive High Throughput Screen Method for Neurite Outgrowth", Current Chemical Genomics *
ZHANG HENGWEI: "Simulation study of laser saturation interference effects on a CCD camera in binning mode", Laser & Infrared *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN114422736A (en) * 2022-03-28 2022-04-29 荣耀终端有限公司 Video processing method, electronic equipment and computer storage medium
CN114422736B (en) * 2022-03-28 2022-08-16 荣耀终端有限公司 Video processing method, electronic equipment and computer storage medium
CN116797669A (en) * 2023-08-24 2023-09-22 成都飞机工业(集团)有限责任公司 Multi-camera array calibration method based on multi-face tool
CN116797669B (en) * 2023-08-24 2024-01-12 成都飞机工业(集团)有限责任公司 Multi-camera array calibration method based on multi-face tool


Similar Documents

Publication Publication Date Title
CN106595528B (en) A kind of micro- binocular stereo vision measurement method of telecentricity based on digital speckle
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN112880563B (en) Single-dimensional pixel combination mode equivalent narrow-area-array camera spatial position measuring method
CN109559355B (en) Multi-camera global calibration device and method without public view field based on camera set
CN109191509A (en) A kind of virtual binocular three-dimensional reconstruction method based on structure light
CN109544628B (en) Accurate reading identification system and method for pointer instrument
CN112525107B (en) Structured light three-dimensional measurement method based on event camera
CN114283203B (en) Calibration method and system of multi-camera system
CN112489137A (en) RGBD camera calibration method and system
Ma et al. Line-scan CCD camera calibration in 2D coordinate measurement
CN109712232A (en) A kind of profiling object surface three-D imaging method based on light field
CN112556594A (en) Strain field and temperature field coupling measurement method and system fusing infrared information
CN109596054A (en) The size detection recognition methods of strip workpiece
CN112595236A (en) Measuring device for underwater laser three-dimensional scanning and real-time distance measurement
CN114820563A (en) Industrial component size estimation method and system based on multi-view stereo vision
CN111311659A (en) Calibration method based on three-dimensional imaging of oblique plane mirror
CN108335333A (en) A kind of linear camera scaling method
CN117115272A (en) Telecentric camera calibration and three-dimensional reconstruction method for precipitation particle multi-angle imaging
CN112132957A (en) High-precision annular scanning method and device
CN105427302B (en) A kind of three-dimensional acquisition and reconstructing system based on the sparse camera collection array of movement
CN116205993A (en) Double-telecentric lens high-precision calibration method for 3D AOI
CN110702015A (en) Method and device for measuring icing thickness of power transmission line
CN107274449B (en) Space positioning system and method for object by optical photo
CN115082538A (en) System and method for three-dimensional reconstruction of surface of multi-view vision balance ring part based on line structure light projection
CN106097248B (en) High-resolution image knowledge prior-based compressed sensing method and mixed vision system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant