CN106651957A - Monocular vision target space positioning method based on template - Google Patents

Monocular vision target space positioning method based on template

Info

Publication number
CN106651957A
CN106651957A (application CN201610910758.8A)
Authority
CN
China
Prior art keywords
central point
vertex
template
calibration template
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610910758.8A
Other languages
Chinese (zh)
Other versions
CN106651957B (en)
Inventor
Mao Lin (毛琳)
Yang Dawei (杨大伟)
Wu Junwei (吴俊伟)
Liu Guanqun (刘冠群)
Ji Mengting (姬梦婷)
Guo Chao (郭超)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Minzu University
Original Assignee
Dalian Nationalities University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Nationalities University filed Critical Dalian Nationalities University
Priority to CN201610910758.8A priority Critical patent/CN106651957B/en
Publication of CN106651957A publication Critical patent/CN106651957A/en
Application granted granted Critical
Publication of CN106651957B publication Critical patent/CN106651957B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence

Abstract

The invention provides a monocular vision target space positioning method based on a template, belonging to the technical field of space positioning, and aims to solve the problems of target positioning failure and low positioning accuracy. The main technical scheme is as follows: define the calibration template; detect the two-dimensional position of the calibration template in the image; map the two-dimensional image coordinate system of the calibration template to a three-dimensional coordinate system; and, from the projection of the calibration template on the image plane, derive the distance and orientation of the template relative to the camera in three-dimensional space, thereby positioning the monocular vision target. By using a template in place of the moving target for detection, orientation and ranging, the method avoids complex computation and positions a moving target quickly and effectively.

Description

Monocular vision object space localization method based on template
Technical field
The invention belongs to the technical field of space positioning, and specifically relates to a template-based monocular vision target space localization method.
Technical background
Accurate target positioning plays a very important role in target recognition and in image understanding and analysis; positioning targets in complex backgrounds has important applications in fields such as military affairs, industrial monitoring, and traffic control and management. In research on target positioning, Jorge Lobo et al. proposed a three-dimensional reconstruction method combining inertial information with vision, using an inertial sensor together with binocular vision to recover the three-dimensional parameters of the ground plane and of line segments normal to it; Hao Yingming et al. of the Shenyang Institute of Automation, Chinese Academy of Sciences, placed artificial markers on the target in advance, obtained three-dimensional information of the environment from binocular stereo vision, and computed the position of a mobile robot relative to the markers in real time; Shi et al. studied dual-view three-dimensional spatial localization of moving targets in video surveillance, extracting feature points with the SURF algorithm and introducing GPS in outdoor environments to establish a unified coordinate system for target positioning.
Monocular vision is a space-positioning technique that uses a single camera. New methods continue to emerge, and cross-fertilization between disciplines keeps pushing monocular vision target localization forward. Existing target localization methods mainly build a target positioning model from visual features and determine the target's spatial position from its projection. Visual features such as color, texture, edges and optical flow are easy to extract, but they are easily affected by the environment and are unstable, causing target positioning failure and low positioning accuracy. Wavelet features, local features and feature bases can improve positioning accuracy, but their extraction algorithms are computationally expensive and unsuited to real-time target positioning. Localization algorithms that are fast, accurate and robust are therefore the current research focus of monocular vision target localization.
Summary of the invention
The invention aims to solve the problem that, when existing target localization methods determine the target's spatial position from its projection, visual features such as color, texture, edges and optical flow are easy to extract but easily affected by the environment and unstable, causing target positioning failure and low positioning accuracy.
The technical solution of the invention is as follows:
A monocular vision target positioning method based on a calibration template: define the calibration template; detect the two-dimensional position of the calibration template in the image; map the two-dimensional image coordinate system of the calibration template to a three-dimensional coordinate system; and, from the projection of the calibration template on the image plane, derive its distance and orientation relative to the camera in three-dimensional space, thereby positioning the monocular vision target.
Further, defining the calibration template comprises the following steps: adopt a diagonal pair of black squares as the basic template form; define the central point and the vertices; define the completeness of the central point and the vertices; and encode the vertices and the central point.
Further, the concrete steps of defining the completeness of the central point and vertices are:
According to the template form, define the four quadrant subregions of a central point or vertex, denoted I_1, I_2, I_3, I_4. For the n pixels chosen equidistantly along any direction i from a central point or vertex, let the mean be Ī_1^i; the pixel mean of that point over subregion I_1 is then

Ī_1 = E(Σ_{i∈I_1} Ī_1^i)

In the same manner, obtain the pixel means Ī_1, Ī_2, Ī_3, Ī_4 over all four subregions. Completeness of a central point or vertex is defined as follows:
Complete central point or vertex:
Half-complete central point or vertex:
Incomplete central point or vertex: does not exist;
wherein TH1 characterizes the similarity of the pixels within the diagonal black (or diagonal white) regions of the image, and TH2 characterizes the difference between the black and white region pixels.
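The completeness test can be sketched in code. The following Python fragment is a minimal sketch: the sampling spacing, the values of TH1/TH2 and the exact complete/half-complete inequalities are assumptions, since the patent's criteria are given only in its (unreproduced) figures. It samples n pixels along the four diagonal directions around a candidate corner and classifies it:

```python
import numpy as np

def subregion_means(gray, cx, cy, n=4, step=2):
    """Mean gray value of n pixels sampled equidistantly along each of the
    45/135/225/315-degree directions, one mean per quadrant subregion
    I1..I4. `step` (pixel spacing) is an assumed parameter."""
    dirs = [(1, -1), (-1, -1), (-1, 1), (1, 1)]   # image coords: y grows down
    means = []
    for dx, dy in dirs:
        samples = [int(gray[cy + k * step * dy, cx + k * step * dx])
                   for k in range(1, n + 1)]
        means.append(float(np.mean(samples)))
    return means

def completeness(means, th1=20.0, th2=60.0):
    """Classify a corner as 'complete', 'half', or None (non-existent).
    Assumed criteria: diagonal subregions agree within TH1, and black and
    white subregions differ by at least TH2."""
    i1, i2, i3, i4 = means
    diagonals_agree = abs(i1 - i3) <= th1 and abs(i2 - i4) <= th1
    contrast = abs(i1 - i2) >= th2 and abs(i3 - i4) >= th2
    if diagonals_agree and contrast:
        return 'complete'
    if contrast:
        return 'half'
    return None
```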
Further, the concrete steps of encoding the vertices and the central point are:
Record the pixel means over the 4 subregions of the calibration template central point O, and determine the code of each of the 4 subregions from its pixel mean, black coded 0 and white coded 1;
determine the pixel mean of each subregion of a vertex and encode each vertex likewise, black coded 0 and white coded 1;
the color code of the central point and of each vertex is thus determined from the pixel means of its four subregions.
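A sketch of the coding step follows; the binarization threshold and the code table entries are assumptions, since Table 1 of the patent is not reproduced in this text. It turns the four subregion means into a 4-bit black/white code:

```python
def encode_point(means, gray_threshold=128):
    """4-bit code over subregions I1..I4: black -> 0, white -> 1.
    The binarization threshold is an assumed value."""
    return tuple(1 if m >= gray_threshold else 0 for m in means)

# Hypothetical code table in the spirit of the patent's Table 1 (the real
# dictionary maps codes to the center O and vertices A-F of the template).
CODE_TABLE = {
    (0, 1, 0, 1): 'center O',
    (1, 0, 1, 0): 'center O',   # same two-square pattern, rotated 90 deg
}
```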
Further, deriving the distance and orientation of the calibration template relative to the camera in three-dimensional space comprises the following steps:
calibrate the camera with the template and determine the initial positioning parameter;
fix the calibration template to the target to be measured and acquire images;
search the acquired image for corners, extract the complete and half-complete central points and vertices, and compute the Euclidean pixel distance between a central point and a vertex in the acquired image;
determine the central point of the acquired monocular image;
from the central point-vertex Euclidean pixel distance of the calibration template in the acquired image and the initial positioning parameter, compute the Euclidean space distance between the acquired image center and the camera center;
compute the Euclidean pixel distance between the acquired image center and the calibration template central point;
from these two distances, compute the distance and azimuth between the calibration template center and the camera center.
Further, calibrating the camera with the calibration template and determining the initial positioning parameter comprises the following steps:
position the calibration template central point O on the central axis of the camera lens group, parallel to the lens plane;
translate the calibration template horizontally from near to far, photographing continuously to obtain a calibration image at each distance; detect all corners in every calibration image, extract the complete central points, half-complete central points, complete vertices and half-complete vertices, match central points and vertices by their vertex codes, and obtain the central point and vertex coordinates;
compute the Euclidean pixel distance d_BO between points B and O and the Euclidean pixel distance d_EO between points E and O in the calibration image:

d_BO = √((x_0 − x_B)² + (y_0 − y_B)²)  (1)
d_EO = √((x_0 − x_E)² + (y_0 − y_E)²)  (2)

where the central point is O(x_0, y_0) and the vertices are B(x_B, y_B) and E(x_E, y_E);
d_l denotes the Euclidean pixel distance between a black-square vertex and the central point:

d_l = E(d_BO, d_EO)  (3)

determine the initial positioning parameter β_i:

β_i = D_KO / d_l  (4)

where D_KO is the Euclidean space distance between the calibration image central point O and the camera central point K.
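As a worked sketch of equations (1) to (4), the fragment below computes β_i for one calibration image; reading E(·) in equation (3) as the mean of d_BO and d_EO is an assumption:

```python
import math

def pixel_dist(p, q):
    """Euclidean pixel distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def initial_parameter(O, B, E, D_KO_mm):
    """beta_i for one calibration image: O is the detected template center,
    B and E are black-square vertices (pixel coordinates), and D_KO_mm is
    the measured template-to-camera distance for that image."""
    d_l = 0.5 * (pixel_dist(B, O) + pixel_dist(E, O))  # eq. (3), E(.) as mean
    return D_KO_mm / d_l                               # eq. (4)
```

In the calibration procedure one β_i is obtained per distance; averaging them over the calibration images would be a natural aggregation, though the patent does not spell one out.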
Further, the concrete steps of computing, from the central point-vertex Euclidean pixel distance of the calibration template in the acquired image and the initial positioning parameter, the Euclidean space distance between the acquired image center and the camera center are:
denote the central point of the acquired image by P(x, y); from the central point-vertex Euclidean pixel distance d_l of the calibration template in the acquired image and the initial positioning parameter β_i, compute the Euclidean space distance D_KP between the acquired image central point P and the camera central point K:

D_KP = β_i × d_l  (5).
Further, the concrete steps of computing the Euclidean pixel distance between the acquired image center and the calibration template central point are: the Euclidean pixel distance d_OP between the calibration template central point O(x_0, y_0) and the acquired image central point P(x, y) is

d_OP = √((x − x_0)² + (y − y_0)²)  (6)

where (x_0, y_0) are the coordinates of the template central point O in the acquired image.
Further, the concrete steps of computing the distance and azimuth between the calibration template center and the camera center are: the Euclidean space distance D and azimuth α between the calibration template central point and the camera center are

D = √(d_OP² + D_KP²)  (7)
α = tan⁻¹(d_OP / D_KP)  (8).
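Equations (5) to (8) chain together as below. Note that equation (7) combines the pixel distance d_OP with the space distance D_KP; scaling d_OP into space units by β_i first, as this sketch does, is an assumption made for unit consistency:

```python
import math

def localize(P, O, beta_i, d_l):
    """Distance D and azimuth alpha of the template center relative to the
    camera. P: image center (pixels); O: detected template center (pixels);
    d_l: center-vertex pixel distance; beta_i: initial parameter."""
    D_KP = beta_i * d_l                              # eq. (5)
    d_OP = math.hypot(P[0] - O[0], P[1] - O[1])      # eq. (6)
    lateral = beta_i * d_OP                          # assumed unit conversion
    D = math.hypot(lateral, D_KP)                    # eq. (7)
    alpha = math.degrees(math.atan2(lateral, D_KP))  # eq. (8)
    return D, alpha
```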
Beneficial effects:
Existing target localization methods need extra auxiliary sensors or ranging equipment and require coordinate-system conversion or re-modeling; their computational complexity is high, and they cannot provide accurate target parameters to subsequent processing in real time, causing target tracking, target recognition, voice-source positioning and similar computations to fail. At the same time, testing costs are high, multi-sensor information fusion can reduce accuracy, and the demands placed on optimization algorithms are severe, making such methods unsuitable for small devices requiring high real-time performance and low power consumption.
To address the problems in existing target space localization techniques that camera calibration is complicated and that real-time, accurate target positioning cannot be achieved, the invention provides a monocular vision target space localization method based on a calibration template (hereinafter the template is also called the calibration template). From the projection of the template on the image plane, the method derives the template's distance and orientation relative to the camera in three-dimensional space; when the template is placed immediately in front of a moving target, the moving target is thereby positioned as well, assisting subsequent research on video moving-target tracking, moving-target positioning, microphone sound-source array positioning, pedestrian target positioning and the like. Using only a monocular camera at the acquisition front end, the method accomplishes the initial positioning of a moving target, dispenses with the coordinate-system conversion steps introduced by auxiliary positioning means such as laser or infrared rangefinders, reduces the cost of test and measurement equipment, and improves the efficiency of real-time positioning measurement.
The method substitutes the template for the moving target when performing auxiliary orientation and ranging: the camera is calibrated with the template and the initial parameter is computed; the position of the template center in the image, i.e. the two-dimensional image position of the template, is detected; and the two-dimensional image coordinates of the template are mapped to a three-dimensional coordinate system according to the initial calibration parameter, yielding the target's spatial position (direction angle and distance). Determining the target's spatial position in this way overcomes the shortcoming that an overlapped or partially occluded moving target cannot be accurately identified in the image plane; in practical applications such as pedestrian detection and sound-source array orientation in particular, it serves as a useful auxiliary means of acquiring initial positioning parameters, highlighting its practical engineering value.
Description of the drawings
Fig. 1: schematic of the Type-I and Type-II calibration templates;
Fig. 2: schematic of equidistant point sampling in an arbitrary direction;
Fig. 3: schematic of central point completeness (n = 1);
Fig. 4: schematic of the coding of central point O;
Fig. 5: schematic of the coding of vertex E;
Fig. 6: schematic of the camera calibration method;
Fig. 7: schematic of template positioning;
Fig. 8: the camera calibration process;
Fig. 9: schematic of the detection result at a target distance of 3 m;
Fig. 10: positioning test schematic of Example 2;
Fig. 11: positioning test schematic of Example 3;
Fig. 12: positioning test schematic of Example 4;
Fig. 13: positioning test schematic of Example 5;
Fig. 14: flow chart of the monocular vision target positioning procedure.
Specific embodiment
The invention is explained in further detail below with reference to the accompanying drawings and specific embodiments.
A monocular vision target positioning method based on a calibration template:
Step 1: define the calibration template as required. The steps are as follows:
(1) Defining the calibration template
As shown in Fig. 1, the calibration template takes one of two forms, Type I and Type II. A diagonal pair of black squares (the two-square feature) is the basic template form and is defined as the camera calibration template style. The Type-I and Type-II templates are used independently, printed in black and white on A4 paper (international standard size 210 mm × 297 mm) and centered both horizontally and vertically, with the side length of each black square denoted l (unit: millimeters).
(2) completeness of central point and summit is defined
In Fig. 1, the O points of I types and II type calibrating templates are defined as the central point of calibrating template, afterwards abbreviation central point;A、B、 C, D, E and F point is defined as the summit of calibrating template, afterwards abbreviation summit.
According to template form, the four-quadrant subregion of central point is defined, as shown in Fig. 2 four subregions are designated as respectively I1、I2、 I3、I4, the n pixel that any central point is equidistantly chosen on any i directions, its average isCorrespondence central point is in I1 The pixel average of subregionFor:
The pixel average of four subregions of central point is obtained in the same mannerCentral point completeness is defined as follows:
Complete central point:
Half complete central point:
Incomplete central point:Do not exist;
Wherein, TH1Represent the similarity degree of diagonal black bars or diagonal white portion pixel in image, TH2Represent figure The difference degree of black and white area pixel as in.Summit completeness definition is with central point in the same manner.
In the present embodiment, n is to take a numberWhereinTo round behaviour downwards Make.In Fig. 2, black takes a position with white expression, as a kind of embodiment, respectively with 45 °, 135 °, 225 ° and 315 ° directions Take a little, count n=4.The pixel average of 4 subregions is designated as, Using said method, to determine each point Completeness.
(3) Coding of the vertices and the central point
In the present embodiment, the Type-I template is taken as the example. Record the pixel means over the 4 subregions of the calibration template central point O, and determine the code of each of the 4 subregions from its pixel mean, black coded 0 and white coded 1;
determine the pixel mean of each subregion of a vertex and encode each vertex likewise, black coded 0 and white coded 1;
the color code of the central point and of each vertex is determined from the pixel means of its four subregions. The codings of the central point and the vertices are illustrated in Fig. 4 and Fig. 5, and the code dictionary is given in Table 1.
Step 2: calibrate the camera with the template and determine the initial positioning parameter β_i.
Position the calibration template central point O on the central axis of the camera lens group, parallel to the lens plane (or focal plane); the positional relationship is shown in Fig. 6.
Translate the calibration template horizontally from near to far, photographing continuously to obtain a calibration image at each distance. For every calibration image, detect all corners with the Harris corner detection algorithm, extract the complete central points, half-complete central points, complete vertices and half-complete vertices, match central points and vertices by their vertex codes, and obtain the central point and vertex coordinates. Taking the central point O(x_0, y_0) and the vertices B(x_B, y_B) and E(x_E, y_E) as the example:

d_BO = √((x_0 − x_B)² + (y_0 − y_B)²)
d_EO = √((x_0 − x_E)² + (y_0 − y_E)²)

where d_BO is the Euclidean pixel distance between B and O in the image and d_EO that between E and O. With d_l denoting the Euclidean pixel distance between a black-square vertex and the central point,

d_l = E(d_BO, d_EO)

With D_KO the Euclidean space distance between the image central point (here coinciding with the template central point O) and the camera lens central point K, the initial positioning parameter β_i is determined as

β_i = D_KO / d_l
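The corner search named above maps directly onto OpenCV's Harris detector; the following sketch (block size, aperture and k are assumed, commonly used values, not taken from the patent) returns candidate corner coordinates for the completeness test:

```python
import cv2
import numpy as np

def detect_corners(gray, rel_threshold=0.01):
    """Harris corner candidates in a grayscale image as (x, y) tuples.
    blockSize, ksize and k are assumed values."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > rel_threshold * response.max())
    return list(zip(xs.tolist(), ys.tolist()))
```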
Step 3: fix the calibration template to the target to be measured, so that the template position stands in for the position of the target to be measured, and acquire images and video, as shown in Fig. 7. Specifically, fix the template at the center of the moving target, acquire video with the camera, and convert the video frames to a grayscale image sequence.
Step 4: search each grayscale frame for corners using the Harris corner detection algorithm.
Step 5: test the completeness of each corner and extract the complete and half-complete central points and vertices.
Step 6: match against the central point and vertex code table to determine the coordinates of the central point O and of the vertices B, C, E and F; take one vertex and the central point and compute their Euclidean pixel distance, denoted d_l.
Step 7: determine the image central point P(x, y), compute the Euclidean pixel distance d_OP between the template central point O and the image central point P, and determine the Euclidean space distance D_KP between the image center and the camera center from the initial parameter β_i and d_l.
Step 8: from d_OP and D_KP, compute the distance D and azimuth α between the template center and the camera.
Thus, the concrete method of steps 4 to 8 can also be stated as follows:
Using a corner detection method, detect all corners in the input image (acquired image) or video frame and, from corner completeness, identify the central point and vertices. Match the central point and each vertex against the code table to obtain their coordinates, and compute the Euclidean pixel distance d_l between the template central point and a vertex in the image. Denote the input image central point by P(x, y) and the Euclidean pixel distance between the template central point O and the image central point P by d_OP. From the initial positioning parameter β_i and the center-vertex Euclidean pixel distance d_l, determine the Euclidean space distance between the image central point P and the camera center:

D_KP = β_i · d_l

Position the template center, i.e. orient to and range the target: from D_KP and d_OP, determine the Euclidean space distance D and azimuth α between the template central point and the camera.
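Tying steps 4 to 8 together, a minimal end-to-end sketch follows. It reuses the hypothetical helpers sketched earlier (detect_corners, subregion_means, completeness, encode_point, CODE_TABLE, pixel_dist, localize); the 'vertex B' table key is illustrative, since the patent's Table 1 is not reproduced:

```python
import cv2

def locate_target(frame_bgr, beta_i, image_center):
    """Steps 4-8: corners -> completeness filter -> code matching ->
    d_l -> (D, alpha). Returns None if the template is not found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    labeled = {}
    for x, y in detect_corners(gray):
        if not (8 <= x < w - 8 and 8 <= y < h - 8):
            continue                        # keep the sampling window inside
        means = subregion_means(gray, x, y)
        if completeness(means) is None:
            continue                        # discard incomplete corners
        role = CODE_TABLE.get(encode_point(means))
        if role is not None:
            labeled[role] = (x, y)
    O = labeled.get('center O')
    B = labeled.get('vertex B')             # hypothetical Table 1 entry
    if O is None or B is None:
        return None
    d_l = pixel_dist(B, O)                  # step 6
    return localize(image_center, O, beta_i, d_l)  # steps 7-8
```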
This embodiment addresses the problems in existing target localization techniques that the initial positioning algorithm is complex and usually requires auxiliary positioning means such as laser or infrared rangefinders, making test costs high and real-time performance poor. It provides a template-based monocular vision target localization method: using only a monocular camera, the template substitutes for the moving target in auxiliary orientation and ranging. The algorithm complexity is low, test-equipment cost is reduced, and the real-time performance and accuracy of target positioning are greatly enhanced, providing the necessary initialization parameters and real-time calibration guarantees for practical applications such as subsequent target tracking, target detection and sound-source array initial positioning.
By adopting the above technical scheme, the template-based monocular vision target positioning method provided by this embodiment has, compared with the prior art, the beneficial effects already set out above: it avoids the extra sensors, coordinate-system conversion, high computational complexity and high testing cost of existing methods.
This embodiment substitutes a template for the moving target and detects, orients to and ranges the template. The camera is calibrated with the template and the initial positioning parameter is computed; the template center is determined with corner detection and corner-completeness testing; and, according to the initial parameter, the template position in two-dimensional image space is mapped to a three-dimensional spatial position, giving the moving target's distance and direction angle relative to the camera. The method applies effectively to indoor and outdoor scenes and to positioning one or several moving targets; it resolves the difficulty that feature extraction and detection of moving targets are hampered by illumination, occlusion and other environmental effects. Its computational complexity is low and it positions moving targets quickly and effectively; the determined spatial position of the moving target can be applied in subsequent processing such as target tracking, pedestrian detection and sound-source array orientation, giving the method definite engineering value.
Embodiment 1: camera calibration and initial positioning parameter determination
This embodiment is the camera calibration process: the initial positioning parameter is determined and the proposed method is tested. In an outdoor environment, the camera is mounted and fixed on a robot or tripod and shoots horizontally; a cross-grid overlay is used and the camera captures images automatically. Within the field of view of the lens, the moving target holds the template and translates it continuously between 0.5 m and 5 m. The camera starts photographing at a target distance of 0.5 m and shoots once more each time the distance increases by 0.5 m; during shooting, the template is kept facing the camera as squarely as possible, and the template central point is kept coincident with the center of the captured image.
Example parameters: picture format PNG, picture size 1920 × 1080, number of calibration images 10.
The calibration process of this example is shown in Fig. 8, and the detection result at a target distance of 3 m is shown as an example in Fig. 9. The computed initial positioning parameters are listed in Table 2. Using the initial parameter β_i, the measured distance and direction angle are computed and compared with the actual distance and angle to obtain the distance error and angle error; because the template center coincides with the camera center throughout calibration, the actual angle is 0°. The calibration errors are listed in Table 3: the distance error is within 15 mm and the angle error within 1.2°, meeting the allowed-error specification.
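For clarity, the error figures reported in Table 3 amount to the following comparison (a trivial sketch with hypothetical record tuples; during calibration the ground-truth angle is 0°):

```python
def calibration_errors(records):
    """records: list of (measured_D_mm, measured_alpha_deg, actual_D_mm)
    tuples, one per calibration image. Returns (distance_error_mm,
    angle_error_deg) pairs; per Table 3, all distance errors fall within
    15 mm and all angle errors within 1.2 degrees."""
    return [(abs(D - D_true), abs(alpha)) for D, alpha, D_true in records]
```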
Embodiment 2: performance test of the template-based monocular vision target localization method
Building on Embodiment 1, the target is positioned with the monocular camera and the template according to the initial positioning parameter. In an indoor environment, the camera is mounted and fixed on a robot or tripod and shoots horizontally, again using the cross-grid overlay and automatic capture. Within the field of view of the lens, the moving target holds the template and moves according to set distances and directions; the actual target distance and angle are determined with a ground calibration rule. The captured images are processed with the present method, and the distance error and angle error are computed from the measured and actual values.
Example parameters: picture format PNG, picture size 1920 × 1080, number of calibration images 8. The positioning process of this example is shown in Fig. 10. Using the initial parameter β_i, the measured distance and direction angle are computed and compared with the actual distance and angle to obtain the distance error and angle error. The positioning results and method performance test results are listed in Table 4: the distance error is within 15 mm and the angle error within 1.2°, meeting the allowed-error specification.
Embodiment 3: corridor environment, single moving target
This embodiment applies the invention to positioning a single moving target in a corridor with the camera shooting statically. Under these conditions, the camera is mounted and fixed on a robot or tripod and shoots horizontally. Within the field of view of the lens, one human target holds the template, keeping it facing the camera as squarely as possible, and approaches the camera from far to near at about 0.6 m/s within the field of view. Following the camera calibration process of Embodiment 1, the moving target in the video is positioned.
Embodiment parameters: video format MP4, 160 video frames, video image size 1920 × 1080.
The positioning process of this example is shown in Fig. 11. Taking frames 10, 30, 50, 70, 90, 110, 130 and 150 as examples, the target's motion and the detection results are illustrated; the target positioning results are listed in Table 5.
Embodiment 4: outdoor environment, single moving target
This embodiment applies the invention to positioning a single moving target in an outdoor square with the camera shooting statically. Under these conditions, the camera is mounted and fixed on a robot or tripod and shoots horizontally. Within the field of view of the lens, a human target holds the template, keeping it facing the camera as squarely as possible, and approaches the camera from far to near at about 0.6 m/s within the field of view. Following the camera calibration process of Embodiment 1, the moving target in the video is positioned.
Embodiment parameters: video format MP4, 160 video frames, video image size 1920 × 1080.
The positioning process of this example is shown in Fig. 12. Taking frames 10, 30, 50, 70, 90, 110, 130 and 150 as examples, the target's motion and the detection results are illustrated. The target positioning results are listed in Table 6.
Embodiment 5: outdoor environment, two moving targets
This embodiment applies the invention to positioning two moving targets in an outdoor square with the camera shooting statically. Under these conditions, the camera is mounted and fixed on a robot or tripod and shoots horizontally. Within the field of view of the lens, target 1 holds a Type-I template and target 2 holds a Type-II template, each kept facing the camera as squarely as possible, and both approach the camera from far to near at about 0.6 m/s. Following the camera calibration process of Embodiment 1, the moving targets in the video are positioned.
Embodiment parameters: video format MP4, 160 video frames, video image size 1920 × 1080.
The positioning process of this example is shown in Fig. 13. Taking frames 10, 30, 50, 70, 90, 110, 130 and 150 as examples, the targets' motion and the detection results are illustrated; the target positioning results are listed in Table 7.
Attached tables:
Table 1: Type-I template central point and vertex code table;
Table 2: initial positioning parameter table;
Table 3: initial positioning test table;
Table 4: positioning results of Example 2;
Table 5: positioning results of Example 3;
Table 6: positioning results of Example 4;
Table 7: positioning results of Example 5.
Table 1: Type-I template central point and vertex code table
Table 2: initial positioning parameter table
Table 3: initial positioning test table
Table 4: positioning results of Example 2
Table 5: positioning results of Example 3
Table 6: positioning results of Example 4
Table 7: positioning results of Example 5
The above is only a preferred specific embodiment of the invention, but the scope of protection of the invention is not confined to it. Any equivalent substitution or change that a person familiar with the art can make within the technical scope disclosed by the invention, according to the technical scheme of the invention and its inventive concept, shall be covered within the scope of protection of the invention.

Claims (9)

1. A monocular vision target positioning method based on a calibration template, characterized in that: a calibration template is defined; the two-dimensional image position of the calibration template is detected; the two-dimensional image coordinate system of the calibration template is mapped to a three-dimensional coordinate system; and, from the projection of the calibration template on the image plane, its distance and orientation relative to the camera in three-dimensional space are derived so as to position the monocular vision target.
2. The monocular vision target positioning method based on a calibration template according to claim 1, characterized in that defining the calibration template comprises the following steps: a diagonal pair of black squares is adopted as the basic template form; the central point and vertices are defined; the completeness of the central point and vertices is defined; and the vertices and the central point are encoded.
3. The monocular vision target positioning method based on a calibration template according to claim 2, characterized in that the concrete steps of defining the completeness of the central point and vertices are:
according to the template form, define the four quadrant subregions of a central point or vertex, denoted I_1, I_2, I_3, I_4; for the n pixels chosen equidistantly along any direction i from a central point or vertex, let the mean be Ī_1^i; the pixel mean of that point over subregion I_1 is then

Ī_1 = E(Σ_{i∈I_1} Ī_1^i)

in the same manner, obtain the pixel means Ī_1, Ī_2, Ī_3, Ī_4 over all four subregions; completeness of a central point or vertex is defined as follows:
complete central point or vertex:
half-complete central point or vertex:
incomplete central point or vertex: does not exist;
wherein TH1 characterizes the similarity of the pixels within the diagonal black (or diagonal white) regions of the image, and TH2 characterizes the difference between the black and white region pixels.
4. The monocular vision target positioning method based on a calibration template according to claim 2, characterized in that the concrete steps of encoding the vertices and the central point are:
record the pixel means over the 4 subregions of the calibration template central point O, and determine the code of each of the 4 subregions from its pixel mean, black coded 0 and white coded 1;
determine the pixel mean of each subregion of a vertex and encode each vertex likewise, black coded 0 and white coded 1;
the color code of the central point and of each vertex is determined from the pixel means of its four subregions.
5. The monocular vision target positioning method based on a calibration template according to claim 1 or 2, characterized in that
deriving the distance and orientation of the calibration template relative to the camera in three-dimensional space comprises the following steps:
calibrate the camera with the template and determine the initial positioning parameter;
fix the calibration template to the target to be measured and acquire images;
search the acquired image for corners, extract the complete and half-complete central points and vertices, and compute the Euclidean pixel distance between a central point and a vertex in the acquired image;
determine the central point of the acquired monocular image;
from the central point-vertex Euclidean pixel distance of the calibration template in the acquired image and the initial positioning parameter, compute the Euclidean space distance between the acquired image center and the camera center;
compute the Euclidean pixel distance between the acquired image center and the calibration template central point;
from the foregoing distance between the acquired image center and the camera center and the Euclidean pixel distance between the acquired image center and the calibration template central point, compute the distance and azimuth between the calibration template center and the camera center.
6. The monocular vision target positioning method based on a calibration template according to claim 5, characterized in that calibrating the camera with the calibration template and determining the initial positioning parameter comprises the following steps:
position the calibration template central point O on the central axis of the camera lens group, parallel to the lens plane;
translate the calibration template horizontally from near to far, photographing continuously to obtain a calibration image at each distance; detect all corners in every calibration image, extract the complete central points, half-complete central points, complete vertices and half-complete vertices, match central points and vertices by their vertex codes, and obtain the central point and vertex coordinates;
compute the Euclidean pixel distance d_BO between points B and O and the Euclidean pixel distance d_EO between points E and O in the calibration image:

d_BO = √((x_0 − x_B)² + (y_0 − y_B)²)  (1)
d_EO = √((x_0 − x_E)² + (y_0 − y_E)²)  (2)

where the central point is O(x_0, y_0) and the vertices are B(x_B, y_B) and E(x_E, y_E);
d_l denotes the Euclidean pixel distance between a black-square vertex and the central point:

d_l = E(d_BO, d_EO)  (3)

determine the initial positioning parameter β_i:

β_i = D_KO / d_l  (4)

where D_KO is the Euclidean space distance between the calibration image central point O and the camera central point K.
7. The monocular vision target positioning method based on a calibration template according to claim 6, characterized in that the concrete steps of computing, from the central point-vertex Euclidean pixel distance of the calibration template in the acquired image and the initial positioning parameter, the Euclidean space distance between the acquired image center and the camera center are:
denote the central point of the acquired image by P(x, y); from the central point-vertex Euclidean pixel distance d_l of the calibration template in the acquired image and the initial positioning parameter β_i, compute the Euclidean space distance D_KP between the acquired image central point P and the camera central point K:

D_KP = β_i × d_l  (5).
8. The monocular vision target positioning method based on a calibration template according to claim 7, characterized in that the concrete steps of computing the Euclidean pixel distance between the acquired image center and the calibration template central point are: the Euclidean pixel distance d_OP between the calibration template central point O(x_0, y_0) and the acquired image central point P(x, y) is

d_OP = √((x − x_0)² + (y − y_0)²)  (6)

where (x_0, y_0) are the coordinates of the template central point O in the acquired image.
9. The monocular vision target positioning method based on a calibration template according to claim 8, characterized in that the concrete steps of computing the distance and azimuth between the calibration template center and the camera center are: the Euclidean space distance D and azimuth α between the calibration template central point and the camera center are

D = √(d_OP² + D_KP²)  (7)
α = tan⁻¹(d_OP / D_KP)  (8).
CN201610910758.8A 2016-10-19 2016-10-19 Monocular vision object space localization method based on template Active CN106651957B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610910758.8A CN106651957B (en) 2016-10-19 2016-10-19 Monocular vision object space localization method based on template

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610910758.8A CN106651957B (en) 2016-10-19 2016-10-19 Monocular vision object space localization method based on template

Publications (2)

Publication Number Publication Date
CN106651957A true CN106651957A (en) 2017-05-10
CN106651957B CN106651957B (en) 2019-07-30

Family

ID=58855956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610910758.8A Active CN106651957B (en) 2016-10-19 2016-10-19 Monocular vision object space localization method based on template

Country Status (1)

Country Link
CN (1) CN106651957B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106504287A (en) * 2016-10-19 2017-03-15 大连民族大学 Monocular vision object space alignment system based on template
CN109636859A (en) * 2018-12-24 2019-04-16 武汉大音科技有限责任公司 A kind of scaling method of the 3D vision detection based on one camera
CN109872366A (en) * 2019-02-25 2019-06-11 清华大学 Object dimensional method for detecting position and device based on depth fitting degree assessment network
CN111833405A (en) * 2020-07-27 2020-10-27 北京大华旺达科技有限公司 Calibration identification method and device based on machine vision
CN115655112A (en) * 2022-11-09 2023-01-31 长安大学 Underground marker based on localizability and underground auxiliary positioning method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101144716A (en) * 2007-10-15 2008-03-19 清华大学 Multiple angle movement target detection, positioning and aligning method
CN101629806A (en) * 2009-06-22 2010-01-20 哈尔滨工程大学 Nonlinear CCD 3D locating device combined with laser transmitter and locating method thereof
WO2012019370A1 (en) * 2010-08-13 2012-02-16 武汉大学 Camera calibration method, image processing device and motor vehicle
CN103177439A (en) * 2012-11-26 2013-06-26 惠州华阳通用电子有限公司 Automatically calibration method based on black and white grid corner matching

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101144716A (en) * 2007-10-15 2008-03-19 清华大学 Multiple angle movement target detection, positioning and aligning method
CN101629806A (en) * 2009-06-22 2010-01-20 哈尔滨工程大学 Nonlinear CCD 3D locating device combined with laser transmitter and locating method thereof
WO2012019370A1 (en) * 2010-08-13 2012-02-16 武汉大学 Camera calibration method, image processing device and motor vehicle
CN103177439A (en) * 2012-11-26 2013-06-26 惠州华阳通用电子有限公司 Automatically calibration method based on black and white grid corner matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhou Na (周娜): "Research on camera positioning technology based on monocular vision" (基于单目视觉的摄像机定位技术研究), China Master's Theses Full-text Database *
Xiong Jiulong (熊九龙) et al.: "Parallel camera calibration technique fusing linear methods and neural networks" (融合线性方法和神经网络的摄像机并行标定技术), Chinese Journal of Scientific Instrument (仪器仪表学报) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106504287A (en) * 2016-10-19 2017-03-15 大连民族大学 Monocular vision object space alignment system based on template
CN106504287B (en) * 2016-10-19 2019-02-15 大连民族大学 Monocular vision object space positioning system based on template
CN109636859A (en) * 2018-12-24 2019-04-16 武汉大音科技有限责任公司 A kind of scaling method of the 3D vision detection based on one camera
CN109636859B (en) * 2018-12-24 2022-05-10 武汉大音科技有限责任公司 Single-camera-based calibration method for three-dimensional visual inspection
CN109872366A (en) * 2019-02-25 2019-06-11 清华大学 Object dimensional method for detecting position and device based on depth fitting degree assessment network
CN111833405A (en) * 2020-07-27 2020-10-27 北京大华旺达科技有限公司 Calibration identification method and device based on machine vision
CN111833405B (en) * 2020-07-27 2023-12-08 北京大华旺达科技有限公司 Calibration and identification method and device based on machine vision
CN115655112A (en) * 2022-11-09 2023-01-31 长安大学 Underground marker based on localizability and underground auxiliary positioning method

Also Published As

Publication number Publication date
CN106651957B (en) 2019-07-30

Similar Documents

Publication Publication Date Title
CN106504287B (en) Monocular vision object space positioning system based on template
CN106651957A (en) Monocular vision target space positioning method based on template
CN104173054B (en) Measuring method and measuring device for height of human body based on binocular vision technique
CN104200086B (en) Wide-baseline visible light camera pose estimation method
CN104299244B (en) Obstacle detection method and device based on monocular camera
CN111340797A (en) Laser radar and binocular camera data fusion detection method and system
CN103559711B (en) Based on the method for estimating of three dimensional vision system characteristics of image and three-dimensional information
CN103424112B (en) A kind of motion carrier vision navigation method auxiliary based on laser plane
CN110142785A (en) A kind of crusing robot visual servo method based on target detection
CN102622747B (en) Camera parameter optimization method for vision measurement
Tamas et al. Targetless calibration of a lidar-perspective camera pair
CN103994765B (en) Positioning method of inertial sensor
CN104142157A (en) Calibration method, device and equipment
CN102609941A (en) Three-dimensional registering method based on ToF (Time-of-Flight) depth camera
CN103759669A (en) Monocular vision measuring method for large parts
CN102136140B (en) Rectangular pattern-based video image distance detecting method
CN103886107A (en) Robot locating and map building system based on ceiling image information
CN113592989A (en) Three-dimensional scene reconstruction system, method, equipment and storage medium
CN104034269A (en) Monocular vision measuring method and monocular vision measuring device
Momeni-k et al. Height estimation from a single camera view
CN105354819A (en) Depth data measurement system, depth data determination method and apparatus
CN107063229A (en) Mobile robot positioning system and method based on artificial landmark
CN108279677B (en) Rail robot detection method based on binocular vision sensor
CN108469254A (en) A kind of more visual measuring system overall calibration methods of big visual field being suitable for looking up and overlooking pose
CN103985121B (en) Method for optical calibration of underwater projector structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant