CN106504287A - Monocular vision object space alignment system based on template - Google Patents

Info

Publication number
CN106504287A
CN106504287A (application CN201610910942.2A)
Authority
CN
China
Prior art keywords
central point
vertex
template
calibrating
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610910942.2A
Other languages
Chinese (zh)
Other versions
CN106504287B (en)
Inventor
杨大伟
毛琳
张汝波
蔺蘭
姬梦婷
郭超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Minzu University
Original Assignee
Dalian Nationalities University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Nationalities University
Priority to CN201610910942.2A priority Critical patent/CN106504287B/en
Publication of CN106504287A publication Critical patent/CN106504287A/en
Application granted granted Critical
Publication of CN106504287B publication Critical patent/CN106504287B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

A template-based monocular vision target spatial positioning system, belonging to the technical field of spatial positioning, addressing the problems of target positioning failure and low positioning accuracy. The technical essentials include: a calibration template, used for calibrating the camera and for detecting the two-dimensional image-space position of the calibration template; a coordinate mapping module, which maps the two-dimensional image-space coordinate system of the calibration template to a three-dimensional coordinate system; and a derivation module, which derives, from the projection of the calibration template on the image plane, its distance and azimuth relative to the camera in three-dimensional space so as to position the monocular vision target. Effect: the template substitutes for the moving target and is detected, oriented, and ranged; computational complexity is low, and moving targets can be positioned quickly and effectively.

Description

Monocular vision object space alignment system based on template
Technical field
The invention belongs to the technical field of spatial positioning, and specifically relates to a template-based monocular vision target spatial positioning system.
Technical background
Accurate positioning of targets plays a very important role in target recognition and image understanding, and positioning targets in complex backgrounds has important applications in fields such as military affairs, industrial monitoring, and traffic control and management. In research on target positioning, Jorge Lobo et al. proposed a three-dimensional reconstruction method combining inertial information and vision, using an inertial sensor together with binocular vision to recover the three-dimensional parameters of the ground plane and of line segments normal to it; Hao Yingming et al. of the Shenyang Institute of Automation, Chinese Academy of Sciences, preset several artificial markers on the target, obtained three-dimensional environmental information from binocular stereo vision, and computed in real time the position of a mobile robot relative to the markers; Shi et al. studied dual-view three-dimensional spatial positioning of moving targets in video surveillance, extracting features with the SURF algorithm and establishing a coordinate system to position the target.
Monocular vision is a spatial positioning technology that uses a single camera. New methods continue to emerge, and cross-fertilization among disciplines keeps pushing monocular vision target positioning forward. Existing target positioning methods mainly build a target positioning model from visual features and determine the target's spatial position from the target's projection. Visual features such as color, texture, edges, and optical flow are easy to extract, but they are easily affected by the environment and are unstable, causing positioning failure and low positioning accuracy. Wavelet features, local features, and feature bases can strengthen positioning accuracy, but their extraction algorithms are computationally complex and unfavorable for real-time target positioning. Positioning algorithms that are fast, accurate, and robust are the research focus of current monocular vision target positioning methods.
Content of the invention
Existing target positioning methods determine the target's spatial position by target projection. The visual features they rely on, such as color, texture, edges, and optical flow, are easy to extract but easily affected by the environment; the features are unstable, causing target positioning failure and low positioning accuracy. The invention addresses these problems.
The technical solution of the invention is as follows:
A monocular vision target positioning system based on a calibration template, including:
a calibration template, used for calibrating the camera and for detecting the two-dimensional image-space position of the calibration template;
a coordinate mapping module, which maps the two-dimensional image-space coordinate system of the calibration template to a three-dimensional coordinate system; and
a derivation module, which derives, from the projection of the calibration template on the image plane, the template's distance and azimuth relative to the camera in three-dimensional space, so as to position the monocular vision target.
Further, the calibration template adopts diagonally-opposed black squares as its primitive form and has a central point and vertices; the completeness of the central point and of the vertices is defined, and the vertices and the central point are encoded.
Further, the derivation module includes:
a parameter determination module: the camera is calibrated with the calibration template and the initial positioning parameter is determined;
an image acquisition module: the calibration template is bound to the target to be measured and images are acquired;
a computing module, which:
searches the corner points of the acquired image, extracts the complete and half-complete central points and vertices, and calculates the Euclidean pixel distance between one pair consisting of the central point and a vertex in the acquired image;
determines the center of the acquired image collected by monocular vision;
calculates, from the central-point-to-vertex Euclidean pixel distance of the calibration template in the acquired image and the initial positioning parameter, the Euclidean space distance between the acquired image center and the camera center;
calculates the Euclidean pixel distance between the acquired image center and the calibration template central point; and
calculates, from the above two distances, the distance and azimuth between the calibration template center and the camera center.
Further, in the parameter determination module:
the calibration template central point O is positioned on the central axis of the camera lens group, parallel to the lens plane;
the calibration template is translated horizontally and continuously from near to far and photographed, giving one calibration image at each distance; all corner points are detected in each calibration image; the complete central points, half-complete central points, complete vertices, and half-complete vertices are extracted; the corresponding central point and vertices are matched according to the vertex encoding; and the central-point and vertex coordinates are obtained;
the Euclidean pixel distance $d_{BO}$ between points B and O and the Euclidean pixel distance $d_{EO}$ between points E and O in the calibration image are calculated:
$$d_{BO}=\sqrt{(x_0-x_B)^2+(y_0-y_B)^2}\qquad(1)$$
$$d_{EO}=\sqrt{(x_0-x_E)^2+(y_0-y_E)^2}\qquad(2)$$
where the central point is O(x_0, y_0) and the vertices are B(x_B, y_B) and E(x_E, y_E);
$d_l$ denotes the Euclidean pixel distance between a black-square vertex and the central point:
$$d_l=E(d_{BO},\,d_{EO})\qquad(3)$$
and the initial positioning parameter $\beta_i$ is determined as
$$\beta_i=\frac{D_{KO}}{d_l}\qquad(4)$$
where $D_{KO}$ is the Euclidean space distance between the calibration image central point O and the camera center point K.
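For concreteness, relations (1)-(4) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the point coordinates are assumed to come from the corner detection and encoding steps described above, $D_{KO}$ is assumed to be measured physically (e.g. in millimeters) during calibration, the expectation E(·) in (3) is read as the mean of the two distances, and all function names are illustrative.

```python
import numpy as np

def euclid(p, q):
    """Euclidean pixel distance between two image points (eqs. (1)-(2))."""
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))

def calib_parameter(center_O, vertex_B, vertex_E, D_KO):
    """Initial positioning parameter beta_i (eq. (4)) from one calibration image.

    center_O, vertex_B, vertex_E: (x, y) pixel coordinates of the template
        central point O and of two vertices B, E (illustrative names).
    D_KO: measured Euclidean space distance (e.g. mm) between the template
        central point O and the camera center K at this pose.
    """
    d_BO = euclid(vertex_B, center_O)   # eq. (1)
    d_EO = euclid(vertex_E, center_O)   # eq. (2)
    d_l = 0.5 * (d_BO + d_EO)           # eq. (3): E(.) read as the mean
    return D_KO / d_l                   # eq. (4)
```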
Further, in the computing module:
the central point of the acquired image is denoted P(x, y); from the central-point-to-vertex Euclidean pixel distance $d_l$ of the calibration template in the acquired image and the initial positioning parameter $\beta_i$, the Euclidean space distance $D_{KP}$ between the acquired image center P and the camera center point K is calculated:
$$D_{KP}=\beta_i\times d_l\qquad(5)$$
the Euclidean pixel distance between the calibration template central point O(x_0, y_0) and the acquired image center P(x, y) is $d_{OP}$:
$$d_{OP}=\sqrt{(x-x_0)^2+(y-y_0)^2}\qquad(6)$$
where (x_0, y_0) are the coordinates of the template central point O in the acquired image;
the Euclidean space distance D and azimuth $\alpha$ between the calibration template central point and the camera center are:
$$D=\sqrt{d_{OP}^2+D_{KP}^2}\qquad(7)$$
$$\alpha=\tan^{-1}\frac{d_{OP}}{D_{KP}}\qquad(8)$$
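The distance and azimuth computation of eqs. (5)-(8) can be sketched as below; a minimal illustration, with the caveat noted in the comments that eq. (7) combines a pixel distance with a space distance exactly as the source writes it.

```python
import numpy as np

def locate_template(beta_i, d_l, center_P, center_O):
    """Distance D and azimuth alpha of the template center relative to the
    camera, following eqs. (5)-(8).

    beta_i:   initial positioning parameter (eq. (4))
    d_l:      center-to-vertex Euclidean pixel distance in the current image
    center_P: (x, y) pixel coordinates of the image center P
    center_O: (x0, y0) pixel coordinates of the template central point O
    """
    D_KP = beta_i * d_l                                 # eq. (5), space distance
    d_OP = np.hypot(center_P[0] - center_O[0],
                    center_P[1] - center_O[1])          # eq. (6), pixel distance
    D = np.hypot(d_OP, D_KP)                            # eq. (7), as written
    alpha = np.degrees(np.arctan2(d_OP, D_KP))          # eq. (8), in degrees
    return D, alpha
```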
Further, the completeness of the central point and vertices of the calibration template is defined as follows:
according to the template form, the four quadrant subregions of a central point or vertex are defined and denoted $I_1$, $I_2$, $I_3$, $I_4$; for the n pixels chosen equidistantly along any direction i from a central point or vertex, the mean is $\bar I_{1i}$, and the pixel mean $\bar I_1$ of subregion $I_1$ for the corresponding central point or vertex is:
$$\bar I_1=E\Big(\sum_{i\in I_1}\bar I_{1i}\Big)$$
In the same manner, the pixel means $\bar I_1,\bar I_2,\bar I_3,\bar I_4$ of the four subregions of a central point or vertex are obtained, and completeness is defined as follows:
complete central point or vertex: defined by threshold conditions on $TH_1$ and $TH_2$;
half-complete central point or vertex: defined by weaker threshold conditions on $TH_1$ and $TH_2$;
incomplete central point or vertex: does not exist;
where $TH_1$ expresses the degree of similarity of the diagonal black (or diagonal white) subregion pixels in the image, and $TH_2$ expresses the degree of difference between the black and white region pixels in the image.
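The completeness test can be sketched as below. The subregion sampling follows the embodiment described later (four diagonal directions, n points each); the inequalities themselves are an assumption, since the source gives the exact conditions for "complete" and "half-complete" only as figures: here "complete" is modeled as both diagonal pairs agreeing within $TH_1$ with black/white contrast exceeding $TH_2$, and "half-complete" as only one pair agreeing.

```python
import numpy as np

def subregion_means(gray, pt, n=4, step=3):
    """Pixel means of the four quadrant subregions I1..I4 around a candidate
    point, sampling n pixels at equal spacing 'step' along the 45, 135,
    225, 315 degree directions; assumes pt is not at the image border."""
    x, y = pt
    means = []
    for ang in (45.0, 135.0, 225.0, 315.0):
        dx, dy = np.cos(np.radians(ang)), np.sin(np.radians(ang))
        vals = [gray[int(round(y + k * step * dy)),
                     int(round(x + k * step * dx))] for k in range(1, n + 1)]
        means.append(float(np.mean(vals)))
    return means  # [I1_bar, I2_bar, I3_bar, I4_bar]

def completeness(means, TH1=20.0, TH2=60.0):
    """Classify as 'complete', 'half', or None (assumed inequalities)."""
    i1, i2, i3, i4 = means
    same1 = abs(i1 - i3) < TH1                     # diagonal pair I1/I3 similar
    same2 = abs(i2 - i4) < TH1                     # diagonal pair I2/I4 similar
    contrast = abs((i1 + i3) - (i2 + i4)) / 2.0    # black vs white difference
    if contrast <= TH2:
        return None                                # no black/white structure
    if same1 and same2:
        return "complete"
    if same1 or same2:
        return "half"
    return None
```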
Further, the concrete steps of encoding the vertices and the central point of the calibration template are:
the pixel means on the 4 subregions of the calibration template central point O are recorded as $\bar I_1,\bar I_2,\bar I_3,\bar I_4$ respectively, and the codes of the 4 subregions are determined from the pixel means, black coding as 0 and white as 1;
the pixel mean of each subregion of a vertex is determined and any vertex is encoded likewise, black coding as 0 and white as 1;
the color code of the central point and of each vertex is thus determined from the pixel means of its four subregions.
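The black/white coding of the four subregions can then be a 4-bit string, matched against the encoding dictionary (Table 1) to identify which template point a corner is. A sketch, with the gray-level threshold as an assumed parameter (the source only states that black codes as 0 and white as 1):

```python
def encode_point(means, threshold=128.0):
    """4-bit code over subregions I1..I4: black -> '0', white -> '1'."""
    return "".join("1" if m > threshold else "0" for m in means)

# A point whose I1/I3 subregions are white and I2/I4 black encodes as "1010";
# looking that string up in the encoding dictionary identifies the point.
```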
Beneficial effects:
Existing target positioning systems need extra auxiliary sensors or ranging equipment, require coordinate system transformation or remodeling, and have high computational complexity; they cannot provide accurate target parameter information for subsequent processing in real time, causing target tracking, target recognition, voice-source positioning, and other processing to fail. Meanwhile, the testing cost is too high; multi-sensor information fusion also reduces accuracy and places high demands on the optimization algorithm, which is unfavorable for small devices with high real-time requirements and low power consumption.
In existing target spatial positioning technology, camera calibration methods are complicated and cannot position targets in real time and accurately. Addressing this, the present invention provides a monocular vision target spatial positioning system based on a calibration template (hereafter simply the template, also called the calibration template), which derives the template's distance and orientation relative to the camera in three-dimensional space from its projection on the image plane. When the template is placed close in front of a moving target, the moving target can be positioned with equal certainty, aiding subsequent research on video moving-target tracking, moving-target positioning, microphone acoustic-source array positioning, pedestrian target positioning, and the like. Using only a monocular camera as the acquisition front end, the system achieves the initial positioning of a moving target, saves the coordinate-system transformation steps brought by auxiliary positioning means such as laser rangefinders and infrared rangefinders, reduces the cost of test and measurement equipment, and improves real-time positioning efficiency.
The system of the invention substitutes a template for the moving target for auxiliary orientation and ranging: the camera is calibrated with the template and the initial parameter is derived; the template center position in the image, i.e. the template's two-dimensional image-space position, is detected; and, according to the initial calibration parameter, the two-dimensional image-space coordinate system of the template is mapped to a three-dimensional coordinate system, yielding the target's spatial position (direction angle and distance). This determination of the target's spatial position overcomes the shortcoming that a moving target cannot be accurately identified in the image plane under overlap and partial occlusion; particularly in practical applications such as pedestrian detection and acoustic-source array orientation, it serves as a useful auxiliary means of acquiring initial positioning parameters, which highlights its engineering practical value.
Description of the drawings
Fig. 1: Type-I and Type-II calibration template schematic diagram;
Fig. 2: schematic diagram of equidistant point sampling in an arbitrary direction;
Fig. 3: central point completeness schematic diagram (n=1);
Fig. 4: central point O coding schematic diagram;
Fig. 5: vertex E coding schematic diagram;
Fig. 6: camera calibration method schematic diagram;
Fig. 7: template positioning schematic diagram;
Fig. 8: camera calibration process;
Fig. 9: detection results at target distance 3 m;
Fig. 10: Example 2 positioning test schematic diagram;
Fig. 11: Example 3 positioning test schematic diagram;
Fig. 12: Example 4 positioning test schematic diagram;
Fig. 13: Example 5 positioning test schematic diagram;
Fig. 14: monocular vision target positioning procedure illustration.
Specific embodiment
The present invention is explained in further detail below with specific embodiments in conjunction with the accompanying drawings:
A monocular vision target positioning method based on a calibration template.
First step: define the calibration template as required; the definition steps are as follows:
(1) Defining the calibration template
As shown in Fig. 1, the calibration template comes in two forms, Type I and Type II, both adopting diagonally-opposed black squares (a two-square feature) as the template primitive form, defined as the camera calibration template style. The Type-I and Type-II templates are used independently and are printed in black and white on A4 paper (international standard size 210 mm × 297 mm), centered horizontally and vertically, where the side length of any one black square is l (unit: millimeters).
(2) Defining the completeness of the central point and vertices
In Fig. 1, the point O of the Type-I and Type-II calibration templates is defined as the central point of the calibration template, hereafter simply the central point; points A, B, C, D, E, and F are defined as the vertices of the calibration template, hereafter simply the vertices.
According to the template form, the four quadrant subregions of the central point are defined; as shown in Fig. 2, the four subregions are denoted $I_1$, $I_2$, $I_3$, $I_4$. For the n pixels chosen equidistantly along any direction i from the central point, the mean is $\bar I_{1i}$, and the pixel mean $\bar I_1$ of subregion $I_1$ for the corresponding central point is:
$$\bar I_1=E\Big(\sum_{i\in I_1}\bar I_{1i}\Big)$$
The pixel means $\bar I_1,\bar I_2,\bar I_3,\bar I_4$ of the four subregions of the central point are obtained in the same manner, and central point completeness is defined as follows:
complete central point: defined by threshold conditions on $TH_1$ and $TH_2$;
half-complete central point: defined by weaker threshold conditions on $TH_1$ and $TH_2$;
incomplete central point: does not exist;
where $TH_1$ expresses the degree of similarity of the diagonal black (or diagonal white) subregion pixels in the image, and $TH_2$ expresses the degree of difference between the black and white region pixels in the image. Vertex completeness is defined in the same manner as for the central point.
In the present embodiment, the number n of sampled points is determined with a round-down (floor) operation $\lfloor\cdot\rfloor$. In Fig. 2, black and white mark the sampling positions; as one embodiment, points are taken along the 45°, 135°, 225°, and 315° directions, with n = 4 points per direction. The pixel means of the 4 subregions are recorded as $\bar I_1,\bar I_2,\bar I_3,\bar I_4$, and the completeness of each point is determined by the above method.
(3) Encoding the vertices and the central point
In the present embodiment, the Type-I template is taken as the example. The pixel means on the 4 subregions of the calibration template central point O are recorded as $\bar I_1,\bar I_2,\bar I_3,\bar I_4$ respectively, and the codes of the 4 subregions are determined from the pixel means, black coding as 0 and white as 1;
the pixel mean of each subregion of a vertex is determined and each vertex is encoded likewise, black coding as 0 and white as 1;
the color code of the central point and of each vertex is determined from the pixel means of its four subregions. The central point and vertex coding is illustrated in Fig. 4 and Fig. 5, and the encoding dictionary is given in Table 1.
Second step: calibrate the camera using the template and determine the initial positioning parameter $\beta_i$.
The calibration template central point O is positioned on the central axis of the camera lens group, parallel to the lens plane (or focal plane); the positional relationship is shown in Fig. 6.
The calibration template is translated horizontally and continuously from near to far and photographed, giving one calibration image at each distance. In each calibration image, all corner points are detected with the Harris corner detection algorithm; the complete central points, half-complete central points, complete vertices, and half-complete vertices are extracted; the corresponding central point and vertices are matched according to the vertex encoding; and the central-point and vertex coordinates are obtained. Taking the central point O(x_0, y_0) and the vertices B(x_B, y_B) and E(x_E, y_E) as an example, it is known that:
$$d_{BO}=\sqrt{(x_0-x_B)^2+(y_0-y_B)^2},\qquad d_{EO}=\sqrt{(x_0-x_E)^2+(y_0-y_E)^2}$$
where $d_{BO}$ is the Euclidean pixel distance between B and O in the image and $d_{EO}$ the Euclidean pixel distance between E and O. With $d_l$ denoting the Euclidean pixel distance between a black-square vertex and the central point,
$$d_l=E(d_{BO},\,d_{EO})$$
and with $D_{KO}$ the Euclidean space distance between the image central point (here coinciding with the template central point O) and the camera lens central point K, the initial positioning parameter is thereby determined:
$$\beta_i=\frac{D_{KO}}{d_l}$$
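One way to combine the per-distance calibration shots into a single $\beta_i$ is to average, as sketched below. The averaging rule is an assumption (the source determines $\beta_i$ from the calibration images without stating how multiple distances are combined); calib_parameter is the helper sketched earlier, and detect_points stands in for the corner search, completeness test, and coding steps.

```python
import numpy as np

def calibrate(shots, detect_points):
    """Estimate the initial positioning parameter over the calibration set.

    shots: list of (gray_image, D_KO) pairs, one per horizontal
        translation step, with D_KO measured on the ground (e.g. in mm).
    detect_points: function returning the pixel coordinates (O, B, E)
        for one calibration image.
    """
    betas = [calib_parameter(*detect_points(gray), D_KO)   # eqs. (1)-(4)
             for gray, D_KO in shots]
    return float(np.mean(betas))   # assumed: average over all distances
```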
Third step: the calibration template is fixed together with the target to be measured, the calibration template position substituting for the position of the target to be measured, and image acquisition and video capture are carried out, as shown in Fig. 7. Specifically, the template is fixed at the center of the moving target, video images are acquired by the camera, and the video frames are converted to a grayscale image sequence.
Fourth step: corner points are searched for and computed in each grayscale frame using the Harris corner detection algorithm.
Fifth step: the completeness of each corner point is tested, and the complete and half-complete central points and vertices are extracted.
Sixth step: table-lookup matching is carried out against the central point and vertex encoding table to determine the coordinates of the central point O and of the vertices B, C, E, F; one of the vertices is taken and its Euclidean pixel distance to the central point is computed, denoted $d_l$.
Seventh step: the image center P(x, y) is determined, the Euclidean pixel distance $d_{OP}$ between the template central point O and the image center P is computed, and from the initial parameter $\beta_i$ and $d_l$ the Euclidean space distance $D_{KP}$ between the image center and the camera center is determined.
Eighth step: from $d_{OP}$ and $D_{KP}$, the distance D and azimuth $\alpha$ between the template center and the camera are computed.
Thus, steps 4 to 8 can also be stated as follows:
Using a corner detection method, all corner points in the input image (acquired image) or video frame are detected, and the central point and vertices are screened according to corner completeness. According to the coding table, the central point and each vertex are matched and their coordinates obtained, and the Euclidean pixel distance $d_l$ between the template central point and a vertex in the image is computed. The input image center is denoted P(x, y), and the Euclidean pixel distance between the template central point O and the image center P is $d_{OP}$. From the initial positioning parameter $\beta_i$ and the center-to-vertex Euclidean pixel distance $d_l$, the Euclidean space distance between the image center P and the camera center is determined:
$$D_{KP}=\beta_i\,d_l$$
The template center is then located, i.e. the target is oriented and ranged: from $D_{KP}$ and $d_{OP}$, the Euclidean space distance D and azimuth $\alpha$ between the template central point and the camera are determined.
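Steps 4 through 8 can be combined into a per-frame routine as sketched below. It assumes OpenCV's Harris detector for the corner search and the helpers sketched above (subregion_means, completeness, encode_point, locate_template); the response threshold and the assumption that both O and one vertex are found in the frame are illustrative, not from the source.

```python
import cv2
import numpy as np

def process_frame(frame_bgr, beta_i, code_table):
    """One positioning step: corner search, completeness test, coding,
    then distance/azimuth via eqs. (5)-(8). Returns (D, alpha)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Step 4: Harris corner search on the grayscale frame.
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(resp > 0.01 * resp.max())

    # Steps 5-6: keep complete/half-complete points and identify them
    # against the encoding table, e.g. code_table = {"1010": "O", ...}.
    labeled = {}
    for x, y in zip(xs, ys):
        m = subregion_means(gray, (int(x), int(y)))
        if completeness(m) is not None:
            name = code_table.get(encode_point(m))
            if name:
                labeled[name] = (float(x), float(y))

    O, B = labeled["O"], labeled["B"]            # assumes O and vertex B found
    d_l = np.hypot(B[0] - O[0], B[1] - O[1])     # center-to-vertex pixel distance

    # Steps 7-8: image center P, then eqs. (5)-(8).
    P = (gray.shape[1] / 2.0, gray.shape[0] / 2.0)
    return locate_template(beta_i, d_l, P, O)
```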
The present embodiment addresses the problems that initial positioning algorithms in existing target positioning technology are complex, often require auxiliary positioning means such as laser rangefinders and infrared rangefinders, are costly to test, and have poor real-time performance. A template-based monocular vision target positioning method is provided: with a monocular camera, a template substitutes for the moving target for auxiliary orientation and ranging. The algorithm complexity is low, the test equipment cost is reduced, and the real-time performance and accuracy of target positioning are greatly strengthened, providing the necessary initialization parameters and real-time calibration guarantees for practical applications such as subsequent target tracking, target detection, and acoustic-source array initial positioning.
By adopting the above technical solution, the template-based monocular vision target positioning method provided by the present embodiment has the following beneficial effects compared with the prior art:
Existing target positioning methods need extra auxiliary sensors or ranging equipment, require coordinate system transformation or remodeling, and have high computational complexity; they cannot provide accurate target parameter information for subsequent processing in real time, causing target tracking, target recognition, voice-source positioning, and other processing to fail. Meanwhile, the testing cost is too high; multi-sensor information fusion also reduces accuracy and places high demands on the optimization algorithm, which is unfavorable for small devices with high real-time requirements and low power consumption.
The present embodiment substitutes a template for the moving target; the template is detected, oriented, and ranged. The camera is calibrated with the template and the initial positioning parameter computed; the template center is determined by corner detection and corner completeness testing; according to the initial parameter, the two-dimensional image-space template position is mapped to a three-dimensional spatial position, yielding the distance and direction angle of the moving target from the camera. The method can be applied efficiently to indoor and outdoor scenes and to positioning single or multiple moving targets; it solves the problem that feature extraction and detection are difficult because moving targets are easily affected by illumination, occlusion, and other environmental factors. Its computational complexity is low, it can position moving targets quickly and effectively, and the determined spatial positions of moving targets can be applied effectively in subsequent processing such as target tracking, pedestrian detection, and acoustic-source array orientation, giving the method definite engineering use value.
As another embodiment, corresponding to the method in the above embodiment, the present embodiment proposes a monocular vision target positioning system based on a calibration template, including:
a calibration template, used for calibrating the camera and for detecting the two-dimensional image-space position of the calibration template;
a coordinate mapping module, which maps the two-dimensional image-space coordinate system of the calibration template to a three-dimensional coordinate system; and
a derivation module, which derives, from the projection of the calibration template on the image plane, the template's distance and azimuth relative to the camera in three-dimensional space, so as to position the monocular vision target.
Further, the calibration template adopts diagonally-opposed black squares as its primitive form and has a central point and vertices; the completeness of the central point and of the vertices is defined, and the vertices and the central point are encoded.
Further, the derivation module includes:
a parameter determination module: the camera is calibrated with the calibration template and the initial positioning parameter is determined;
an image acquisition module: the calibration template is bound to the target to be measured and images are acquired;
a computing module, which:
searches the corner points of the acquired image, extracts the complete and half-complete central points and vertices, and calculates the Euclidean pixel distance between one pair consisting of the central point and a vertex in the acquired image;
determines the center of the acquired image collected by monocular vision;
calculates, from the central-point-to-vertex Euclidean pixel distance of the calibration template in the acquired image and the initial positioning parameter, the Euclidean space distance between the acquired image center and the camera center;
calculates the Euclidean pixel distance between the acquired image center and the calibration template central point; and
calculates, from the above two distances, the distance and azimuth between the calibration template center and the camera center.
Further, in the parameter determination module:
the calibration template central point O is positioned on the central axis of the camera lens group, parallel to the lens plane;
the calibration template is translated horizontally and continuously from near to far and photographed, giving one calibration image at each distance; all corner points are detected in each calibration image; the complete central points, half-complete central points, complete vertices, and half-complete vertices are extracted; the corresponding central point and vertices are matched according to the vertex encoding; and the central-point and vertex coordinates are obtained;
the Euclidean pixel distance $d_{BO}$ between points B and O and the Euclidean pixel distance $d_{EO}$ between points E and O in the calibration image are calculated:
$$d_{BO}=\sqrt{(x_0-x_B)^2+(y_0-y_B)^2}\qquad(1)$$
$$d_{EO}=\sqrt{(x_0-x_E)^2+(y_0-y_E)^2}\qquad(2)$$
where the central point is O(x_0, y_0) and the vertices are B(x_B, y_B) and E(x_E, y_E);
$d_l$ denotes the Euclidean pixel distance between a black-square vertex and the central point:
$$d_l=E(d_{BO},\,d_{EO})\qquad(3)$$
and the initial positioning parameter $\beta_i$ is determined as
$$\beta_i=\frac{D_{KO}}{d_l}\qquad(4)$$
where $D_{KO}$ is the Euclidean space distance between the calibration image central point O and the camera center point K.
Further, in the computing module:
the central point of the acquired image is denoted P(x, y); from the central-point-to-vertex Euclidean pixel distance $d_l$ of the calibration template in the acquired image and the initial positioning parameter $\beta_i$, the Euclidean space distance $D_{KP}$ between the acquired image center P and the camera center point K is calculated:
$$D_{KP}=\beta_i\times d_l\qquad(5)$$
the Euclidean pixel distance between the calibration template central point O(x_0, y_0) and the acquired image center P(x, y) is $d_{OP}$:
$$d_{OP}=\sqrt{(x-x_0)^2+(y-y_0)^2}\qquad(6)$$
where (x_0, y_0) are the coordinates of the template central point O in the acquired image;
the Euclidean space distance D and azimuth $\alpha$ between the calibration template central point and the camera center are:
$$D=\sqrt{d_{OP}^2+D_{KP}^2}\qquad(7)$$
$$\alpha=\tan^{-1}\frac{d_{OP}}{D_{KP}}\qquad(8)$$
Further, the completeness of the central point and vertices of the calibration template is defined as follows:
according to the template form, the four quadrant subregions of a central point or vertex are defined and denoted $I_1$, $I_2$, $I_3$, $I_4$; for the n pixels chosen equidistantly along any direction i from a central point or vertex, the mean is $\bar I_{1i}$, and the pixel mean $\bar I_1$ of subregion $I_1$ for the corresponding central point or vertex is:
$$\bar I_1=E\Big(\sum_{i\in I_1}\bar I_{1i}\Big)$$
In the same manner, the pixel means $\bar I_1,\bar I_2,\bar I_3,\bar I_4$ of the four subregions of a central point or vertex are obtained, and completeness is defined as follows:
complete central point or vertex: defined by threshold conditions on $TH_1$ and $TH_2$;
half-complete central point or vertex: defined by weaker threshold conditions on $TH_1$ and $TH_2$;
incomplete central point or vertex: does not exist;
where $TH_1$ expresses the degree of similarity of the diagonal black (or diagonal white) subregion pixels in the image, and $TH_2$ expresses the degree of difference between the black and white region pixels in the image.
Further, the concrete steps of encoding the vertices and the central point of the calibration template are:
the pixel means on the 4 subregions of the calibration template central point O are recorded as $\bar I_1,\bar I_2,\bar I_3,\bar I_4$ respectively, and the codes of the 4 subregions are determined from the pixel means, black coding as 0 and white as 1;
the pixel mean of each subregion of a vertex is determined and any vertex is encoded likewise, black coding as 0 and white as 1;
the color code of the central point and of each vertex is thus determined from the pixel means of its four subregions.
Embodiment 1: camera calibration and initial positioning parameter determination
The present embodiment is the camera calibration process: the initial positioning parameter is determined and the proposed method is tested. In an outdoor environment, the camera is mounted and fixed on top of a robot or stand and shoots horizontally; a cross grid is used and the camera shoots automatically. Within the field of view of the lens, the moving target holds the template and translates it continuously between 0.5 m and 5 m; the camera starts shooting at a target distance of 0.5 m and shoots once for every 0.5 m increase in target distance. During shooting, the template faces the camera as squarely as possible, and the template central point is kept coincident with the center of the camera image.
Example parameter description: picture format PNG, picture size 1920 × 1080, number of calibration images 10.
The calibration process of this example is shown in Fig. 8, with the detection results at a target distance of 3 m shown in Fig. 9, and the initial positioning parameter calculation results given in Table 2. Using the initial parameter $\beta_i$, the measured distance and direction angle are computed and contrasted with the actual distance and actual angle, giving the distance error and angular error; since the template center always coincides with the camera center during calibration, the actual angle is 0°. The calibration errors are given in Table 3: the distance error is within 15 mm and the angular error within 1.2°, within the allowed error bounds.
Embodiment 2: performance test of the template-based monocular vision target positioning method
Based on Embodiment 1, the present embodiment positions a target with a monocular camera and the template according to the initial positioning parameter. In an indoor environment, the camera is mounted and fixed on top of a robot or stand and shoots horizontally; a cross grid is used and the camera shoots automatically. Within the field of view of the lens, the moving target holds the template and moves by set distances and directions; the actual target distance and actual angle are determined with a calibrated rule on the ground. The captured images are detected with the present method, and the distance error and angular error are computed from the measured and actual values.
Example parameter description: picture format PNG, picture size 1920 × 1080, number of calibration images 8. The positioning process of this example is shown in Fig. 10. Using the initial parameter $\beta_i$, the measured distance and direction angle are computed and contrasted with the actual distance and actual angle, giving the distance error and angular error; the positioning results and method performance test results are given in Table 4: the distance error is within 15 mm and the angular error within 1.2°, within the allowed error bounds.
Embodiment 3: corridor environment, single moving target
The present embodiment applies the invention to positioning a single moving target in a corridor with the camera shooting statically. Under these conditions, the camera is mounted and fixed on top of a robot or stand and shoots horizontally. Within the field of view of the lens, one human target holds the template, facing the camera as squarely as possible, and approaches the camera from far to near at 0.6 m/s within the field of view. Based on the camera calibration process of Embodiment 1, the moving target in the video is positioned.
Embodiment parameter description: video format MP4, 160 video frames, video image size 1920 × 1080.
The positioning process of this example is shown in Fig. 11. Taking frames 10, 30, 50, 70, 90, 110, 130, and 150 as examples, the motion process and detection results of the moving target are described; the target positioning results are given in Table 5.
Embodiment 4: outdoor environment, single moving target
The present embodiment applies the invention to positioning a single moving target in an outdoor square with the camera shooting statically. Under these conditions, the camera is mounted and fixed on top of a robot or stand and shoots horizontally. Within the field of view of the lens, a human target holds the template, facing the camera as squarely as possible, and approaches the camera from far to near at 0.6 m/s within the field of view. Based on the camera calibration process of Embodiment 1, the moving target in the video is positioned.
Embodiment parameter description: video format MP4, 160 video frames, video image size 1920 × 1080.
The positioning process of this example is shown in Fig. 12. Taking frames 10, 30, 50, 70, 90, 110, 130, and 150 as examples, the motion process and detection results of the moving target are described; the target positioning results are given in Table 6.
Embodiment 5: outdoor environment, two moving targets
The present embodiment applies the invention to positioning two moving targets in an outdoor square with the camera shooting statically. Under these conditions, the camera is mounted and fixed on top of a robot or stand and shoots horizontally. Within the field of view of the lens, target No. 1 holds a Type-I template and target No. 2 holds a Type-II template, each facing the camera as squarely as possible, and each approaches the camera from far to near at 0.6 m/s within the field of view. Based on the camera calibration process of Embodiment 1, the moving targets in the video are positioned.
Embodiment parameter description: video format MP4, 160 video frames, video image size 1920 × 1080.
The positioning process of this example is shown in Fig. 13. Taking frames 10, 30, 50, 70, 90, 110, 130, and 150 as examples, the motion process and detection results of the moving targets are described; the target positioning results are given in Table 7.
Attached tables:
Table 1: Type-I template central point and vertex encoding table;
Table 2: initial positioning parameter table;
Table 3: initial positioning test table;
Table 4: Example 2 positioning results;
Table 5: Example 3 positioning results;
Table 6: Example 4 positioning results;
Table 7: Example 5 positioning results.
Table 1: Type-I template central point and vertex encoding table
Table 2: initial positioning parameter table
Table 3: initial positioning test table
Table 4: Example 2 positioning results
Table 5: Example 3 positioning results
Table 6: Example 4 positioning results
Table 7: Example 5 positioning results
The above is only a preferred specific embodiment of the invention, but the protection scope of the invention is not limited thereto. Any change or equivalent substitution that a person familiar with the art can conceive within the technical scope disclosed by the invention, according to the technical solution of the invention and its inventive concept, shall be covered within the protection scope of the invention.

Claims (7)

1. A monocular vision target positioning system based on a calibration template, characterized by comprising:
a calibration template, used for calibrating the camera and for detecting the two-dimensional image-space position of the calibration template;
a coordinate mapping module, which maps the two-dimensional image-space coordinate system of the calibration template to a three-dimensional coordinate system; and
a derivation module, which derives, from the projection of the calibration template on the image plane, the template's distance and orientation relative to the camera in three-dimensional space, so as to position the monocular vision target.
2. The monocular vision target positioning system based on a calibration template as claimed in claim 1, characterized in that the calibration template adopts diagonally-opposed black squares as its primitive form and has a central point and vertices; the completeness of the central point and of the vertices is defined, and the vertices and the central point are encoded.
3. The monocular vision target positioning system based on a calibration template as claimed in claim 1, characterized in that the derivation module includes:
a parameter determination module: the camera is calibrated with the calibration template and the initial positioning parameter is determined;
an image acquisition module: the calibration template is bound to the target to be measured and images are acquired;
a computing module, which:
searches the corner points of the acquired image, extracts the complete and half-complete central points and vertices, and calculates the Euclidean pixel distance between one pair consisting of the central point and a vertex in the acquired image;
determines the center of the acquired image collected by monocular vision;
calculates, from the central-point-to-vertex Euclidean pixel distance of the calibration template in the acquired image and the initial positioning parameter, the Euclidean space distance between the acquired image center and the camera center;
calculates the Euclidean pixel distance between the acquired image center and the calibration template central point; and
calculates, from the above two distances, the distance and azimuth between the calibration template center and the camera center.
4. The monocular vision target positioning system based on a calibration template as claimed in claim 3, characterized in that, in the parameter determination module:
the calibration template central point O is positioned on the central axis of the camera lens group, parallel to the lens plane;
the calibration template is translated horizontally and continuously from near to far and photographed, giving one calibration image at each distance; all corner points are detected in each calibration image; the complete central points, half-complete central points, complete vertices, and half-complete vertices are extracted; the corresponding central point and vertices are matched according to the vertex encoding; and the central-point and vertex coordinates are obtained;
the Euclidean pixel distance $d_{BO}$ between points B and O and the Euclidean pixel distance $d_{EO}$ between points E and O in the calibration image are calculated:
$$d_{BO}=\sqrt{(x_0-x_B)^2+(y_0-y_B)^2}\qquad(1)$$
$$d_{EO}=\sqrt{(x_0-x_E)^2+(y_0-y_E)^2}\qquad(2)$$
where the central point is O(x_0, y_0) and the vertices are B(x_B, y_B) and E(x_E, y_E);
$d_l$ denotes the Euclidean pixel distance between a black-square vertex and the central point:
$$d_l=E(d_{BO},\,d_{EO})\qquad(3)$$
and the initial positioning parameter $\beta_i$ is determined as
$$\beta_i=\frac{D_{KO}}{d_l}\qquad(4)$$
where $D_{KO}$ is the Euclidean space distance between the calibration image central point O and the camera center point K.
5. The monocular vision target positioning system based on a calibration template as claimed in claim 3, characterized in that, in the computing module:
the central point of the acquired image is denoted P(x, y); from the central-point-to-vertex Euclidean pixel distance $d_l$ of the calibration template in the acquired image and the initial positioning parameter $\beta_i$, the Euclidean space distance $D_{KP}$ between the acquired image center P and the camera center point K is calculated:
$$D_{KP}=\beta_i\times d_l\qquad(5)$$
the Euclidean pixel distance between the calibration template central point O(x_0, y_0) and the acquired image center P(x, y) is $d_{OP}$:
$$d_{OP}=\sqrt{(x-x_0)^2+(y-y_0)^2}\qquad(6)$$
where (x_0, y_0) are the coordinates of the template central point O in the acquired image;
the Euclidean space distance D and azimuth $\alpha$ between the calibration template central point and the camera center are:
$$D=\sqrt{d_{OP}^2+D_{KP}^2}\qquad(7)$$
$$\alpha=\tan^{-1}\frac{d_{OP}}{D_{KP}}\qquad(8)$$
6. The monocular vision target positioning system based on a calibration template as claimed in claim 2, characterized in that the completeness of the central point and vertices of the calibration template is defined as follows:
according to the template form, the four quadrant subregions of a central point or vertex are defined and denoted $I_1$, $I_2$, $I_3$, $I_4$; for the n pixels chosen equidistantly along any direction i from a central point or vertex, the mean is $\bar I_{1i}$, and the pixel mean $\bar I_1$ of subregion $I_1$ for the corresponding central point or vertex is:
$$\bar I_1=E\Big(\sum_{i\in I_1}\bar I_{1i}\Big)$$
in the same manner, the pixel means $\bar I_1,\bar I_2,\bar I_3,\bar I_4$ of the four subregions of the central point or vertex are obtained, and completeness is defined as follows:
complete central point or vertex: defined by threshold conditions on $TH_1$ and $TH_2$;
half-complete central point or vertex: defined by weaker threshold conditions on $TH_1$ and $TH_2$;
incomplete central point or vertex: does not exist;
where $TH_1$ expresses the degree of similarity of the diagonal black (or diagonal white) subregion pixels in the image, and $TH_2$ expresses the degree of difference between the black and white region pixels in the image.
7. The monocular vision target positioning system based on a calibration template as claimed in claim 2, characterized in that the concrete steps of encoding the vertices and the central point of the calibration template are:
the pixel means on the 4 subregions of the calibration template central point O are recorded as $\bar I_1,\bar I_2,\bar I_3,\bar I_4$ respectively, and the codes of the 4 subregions are determined from the pixel means, black coding as 0 and white as 1;
the pixel mean of each subregion of a vertex is determined and any vertex is encoded likewise, black coding as 0 and white as 1;
the color code of the central point and of each vertex is thus determined from the pixel means of its four subregions.
CN201610910942.2A 2016-10-19 2016-10-19 Monocular vision object space positioning system based on template Expired - Fee Related CN106504287B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610910942.2A CN106504287B (en) 2016-10-19 2016-10-19 Monocular vision object space positioning system based on template

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610910942.2A CN106504287B (en) 2016-10-19 2016-10-19 Monocular vision object space positioning system based on template

Publications (2)

Publication Number Publication Date
CN106504287A (en) 2017-03-15
CN106504287B (en) 2019-02-15

Family

ID=58294328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610910942.2A Expired - Fee Related CN106504287B (en) 2016-10-19 2016-10-19 Monocular vision object space positioning system based on template

Country Status (1)

Country Link
CN (1) CN106504287B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108761436A (en) * 2018-08-27 2018-11-06 上海岗消网络科技有限公司 A kind of flame visual ranging device and method
CN108828965A (en) * 2018-05-24 2018-11-16 联想(北京)有限公司 Localization method, electronic equipment and smart home system, storage medium
CN109636859A (en) * 2018-12-24 2019-04-16 武汉大音科技有限责任公司 A kind of scaling method of the 3D vision detection based on one camera
CN109887025A (en) * 2019-01-31 2019-06-14 沈阳理工大学 Monocular self-adjustable fire point 3-D positioning method and device
CN113538578A (en) * 2021-06-22 2021-10-22 恒睿(重庆)人工智能技术研究院有限公司 Target positioning method and device, computer equipment and storage medium
CN114018215A (en) * 2022-01-04 2022-02-08 智道网联科技(北京)有限公司 Monocular distance measuring method, device, equipment and storage medium based on semantic segmentation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103487034A (en) * 2013-09-26 2014-01-01 北京航空航天大学 Method for measuring distance and height by vehicle-mounted monocular camera based on vertical type target
CN103578133A (en) * 2012-08-03 2014-02-12 浙江大华技术股份有限公司 Method and device for reconstructing two-dimensional image information in three-dimensional mode
CN104484883A (en) * 2014-12-24 2015-04-01 河海大学常州校区 Video-based three-dimensional virtual ship positioning and track simulation method
TWM505820U (en) * 2015-04-22 2015-08-01 Waitemata Hands Co Ltd Improved structure of socks
CN106651957A (en) * 2016-10-19 2017-05-10 大连民族大学 Monocular vision target space positioning method based on template

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003530039A * 2000-04-04 2003-10-07 Koninklijke Philips Electronics N.V. Automatic calibration of pan/tilt/zoom cameras

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103578133A (en) * 2012-08-03 2014-02-12 浙江大华技术股份有限公司 Method and device for reconstructing two-dimensional image information in three-dimensional mode
CN103487034A (en) * 2013-09-26 2014-01-01 北京航空航天大学 Method for measuring distance and height by vehicle-mounted monocular camera based on vertical type target
CN104484883A (en) * 2014-12-24 2015-04-01 河海大学常州校区 Video-based three-dimensional virtual ship positioning and track simulation method
TWM505820U (en) * 2015-04-22 2015-08-01 Waitemata Hands Co Ltd Improved structure of socks
CN106651957A (en) * 2016-10-19 2017-05-10 大连民族大学 Monocular vision target space positioning method based on template

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周娜 (Zhou Na): "基于单目视觉的摄像机定位技术研究" (Research on camera positioning technology based on monocular vision), 《中国优秀硕士学位论文全文数据库》 (China Excellent Master's Theses Full-text Database) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108828965A (en) * 2018-05-24 2018-11-16 联想(北京)有限公司 Localization method, electronic equipment and smart home system, storage medium
CN108761436A (en) * 2018-08-27 2018-11-06 上海岗消网络科技有限公司 A kind of flame visual token device and method
CN108761436B (en) * 2018-08-27 2023-07-25 上海岗消网络科技有限公司 Flame vision distance measuring device and method
CN109636859A (en) * 2018-12-24 2019-04-16 武汉大音科技有限责任公司 A kind of scaling method of the 3D vision detection based on one camera
CN109636859B (en) * 2018-12-24 2022-05-10 武汉大音科技有限责任公司 Single-camera-based calibration method for three-dimensional visual inspection
CN109887025A (en) * 2019-01-31 2019-06-14 沈阳理工大学 Monocular self-adjustable fire point 3-D positioning method and device
CN113538578A (en) * 2021-06-22 2021-10-22 恒睿(重庆)人工智能技术研究院有限公司 Target positioning method and device, computer equipment and storage medium
CN114018215A (en) * 2022-01-04 2022-02-08 智道网联科技(北京)有限公司 Monocular distance measuring method, device, equipment and storage medium based on semantic segmentation
CN114018215B (en) * 2022-01-04 2022-04-12 智道网联科技(北京)有限公司 Monocular distance measuring method, device, equipment and storage medium based on semantic segmentation

Also Published As

Publication number Publication date
CN106504287B (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN106504287A (en) Monocular vision object space alignment system based on template
CN106651957B (en) Monocular vision object space localization method based on template
CN104173054B (en) Measuring method and measuring device for height of human body based on binocular vision technique
CN104200086B (en) Wide-baseline visible light camera pose estimation method
CN102788559B (en) Optical vision measuring system with wide-field structure and measuring method thereof
CN111340797A (en) Laser radar and binocular camera data fusion detection method and system
CN103424112B (en) A kind of motion carrier vision navigation method auxiliary based on laser plane
Tamas et al. Targetless calibration of a lidar-perspective camera pair
CN102609941A (en) Three-dimensional registering method based on ToF (Time-of-Flight) depth camera
CN104142157A (en) Calibration method, device and equipment
CN103759669A (en) Monocular vision measuring method for large parts
CN113592989A (en) Three-dimensional scene reconstruction system, method, equipment and storage medium
CN104021588A (en) System and method for recovering three-dimensional true vehicle model in real time
CN102136140B (en) Rectangular pattern-based video image distance detecting method
CN104034269A (en) Monocular vision measuring method and monocular vision measuring device
CN112884841B (en) Binocular vision positioning method based on semantic target
CN102930551B (en) Camera intrinsic parameters determined by utilizing projected coordinate and epipolar line of centres of circles
CN103852060A (en) Visible light image distance measuring method based on monocular vision
CN108469254A (en) A kind of more visual measuring system overall calibration methods of big visual field being suitable for looking up and overlooking pose
CN110415299B (en) Vehicle position estimation method based on set guideboard under motion constraint
CN111998862A (en) Dense binocular SLAM method based on BNN
Yuan et al. Combining maps and street level images for building height and facade estimation
CN103985121B (en) Method for optical calibration of underwater projector structure
CN114066985B (en) Method for calculating hidden danger distance of power transmission line and terminal
CN109035343A (en) A kind of floor relative displacement measurement method based on monitoring camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190215

Termination date: 20211019

CF01 Termination of patent right due to non-payment of annual fee