CN104374386B - Linear-landmark-based target localization method - Google Patents

Linear-landmark-based target localization method

Info

Publication number
CN104374386B
CN104374386B CN201410608584.0A CN201410608584A
Authority
CN
China
Prior art keywords
image
coordinate system
landmark
pixel
linear landmark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410608584.0A
Other languages
Chinese (zh)
Other versions
CN104374386A (en)
Inventor
魏东岩 (Wei Dongyan)
李雯 (Li Wen)
来奇峰 (Lai Qifeng)
张晓光 (Zhang Xiaoguang)
陈夏兰 (Chen Xialan)
李祥红 (Li Xianghong)
徐颖 (Xu Ying)
袁洪 (Yuan Hong)
公续平 (Gong Xuping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Academy of Opto Electronics of CAS
Original Assignee
Academy of Opto Electronics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Academy of Opto Electronics of CAS filed Critical Academy of Opto Electronics of CAS
Priority to CN201410608584.0A priority Critical patent/CN104374386B/en
Publication of CN104374386A publication Critical patent/CN104374386A/en
Application granted granted Critical
Publication of CN104374386B publication Critical patent/CN104374386B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a target localization method based on linear landmarks. Because road-type linear landmarks are ubiquitous, the method effectively saves practical application cost. The invention proposes a linear-landmark region detection method based on Adaboost. By extracting the line equations of the linear landmark frame in the image pixel coordinate system and the line equations of the true linear landmark frame in the linear landmark coordinate system, and equating the corresponding frame lines of the two coordinate systems, the transformation between the pixel coordinate system and the linear landmark coordinate system is solved; from it the user's position in the earth coordinate system is obtained, and detection accuracy is improved.

Description

Linear-landmark-based target localization method
Technical field
The present invention relates to the technical field of vision navigation, and in particular to a localization method based on linear landmarks.
Background technology
With the maturing of satellite positioning systems such as BeiDou and GPS, user-side positioning accuracy has been further improved. In regions where satellite signals are blocked, however, such as indoors or in urban canyons, these systems cannot effectively meet users' demands on positioning accuracy. In recent years, localization based on vision cooperative targets has come to be regarded as an effective means of high-precision positioning in signal-occluded areas. Its main idea is as follows:
First, a purpose-designed cooperative target is prepared. The user's vision sensor then captures an image of the current scene, and image processing techniques matched to the designed target are used to extract the target's characteristic information (point information, line information, edge information, etc.). Finally, the transformation relations between the user coordinate system, the image capture device coordinate system, the pixel coordinate system and the cooperative target coordinate system are used to compute the user's position and attitude.
Researchers in control and information at Northwestern Polytechnical University designed an H-shaped cooperative target that uses image color information to assist in extracting the corresponding corner points of the target. However, the corner detection accuracy of this method is low and it is strongly affected by lighting conditions such as backlight; moreover, practical application requires preparing a large number of specific cooperative targets, so the method is difficult to apply and popularize in practice.
Kong Ruonan et al. exploited the features of airfield runway lines: by artificially painting the runway lines red and using them as cooperative targets, Hough line extraction recovers the runway line information for UAV navigation. This method has high detection accuracy, but it is tied to airfield runway lines and requires artificially set line colors, so it cannot be applied generally.
Summary of the invention
In view of this, the present invention provides a target localization method based on linear landmarks, which realizes positioning from the linear landmarks present in a scene.
The linear-landmark-based target localization method of the present invention comprises the following steps:
Step 1: in the application scenario, measure the length and width of the linear landmark in the scene; define the linear landmark coordinate system, and from the landmark's length and width obtain the equations of the landmark's frame lines in the linear landmark coordinate system.
Step 2: capture an image of the linear landmark with an image capture device and extract the region of the image containing the linear landmark; then extract the landmark's frame-line information from that region, obtaining the equation of each frame line in the pixel coordinate system of the image.
Step 3: put the frame lines obtained in step 2 into one-to-one correspondence with the linear landmark frame lines of step 1; for each pair, solve the simultaneous equations formed by the frame-line equation in the image pixel coordinate system and the corresponding frame-line equation in the linear landmark coordinate system, obtaining the transformation between the image pixel coordinate system and the linear landmark coordinate system.
Step 4: obtain the transformation between the linear landmark coordinate system and the earth coordinate system; then, combining it with the transformation between the image pixel coordinate system and the linear landmark coordinate system, obtain the transformation between the image pixel coordinate system and the earth coordinate system, and finally compute the user's coordinates in the earth coordinate system, thereby realizing localization.
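For orientation, the chain of transformations in steps 3 and 4 is a single matrix product. The following minimal Python sketch assumes 4×4 homogeneous transforms and illustrative names; it illustrates the coordinate chain and is not the patent's reference implementation:

```python
# A minimal sketch of the step 3 / step 4 coordinate chain (illustrative
# names; 4x4 homogeneous transforms are an assumed representation).
import numpy as np

def homogeneous(R, t):
    """Pack a 3x3 rotation R and a translation t into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def user_position_in_earth(T_earth_from_landmark, T_landmark_from_camera):
    # Step 3 yields T_landmark_from_camera; step 4 yields
    # T_earth_from_landmark. Their product maps the camera origin,
    # i.e. the user's position, into earth coordinates.
    T_earth_from_camera = T_earth_from_landmark @ T_landmark_from_camera
    return T_earth_from_camera[:3, 3]
```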
Preferably, step 2 specifically comprises the following steps:
Step A0: have the image capture device collect real scene images in real time, and judge whether each frame contains a linear landmark. If it does, record the frame containing the linear landmark as image A and go to step A1; otherwise repeat this step.
Step A1: for image A, extract the R, G and B channel gray values of each pixel; set pixels that satisfy the following condition to 255 and all other pixels to 0, obtaining the binarized image A*.
The condition is:
when the linear landmark takes one of the three channel colors as its background, the gray value of that color channel is greater than 1.2 times the sum of the gray values of the other two channels.
Step A2: apply a morphological opening to image A* to remove small noise; then perform edge extraction and remove fine edges, obtaining the maximal edge contour, recorded as image A**.
Step A3: apply Hough line extraction to image A** to obtain the line information in A**; from the lines of A**, find the 4 frame lines that characterize the edges of the linear landmark and the equations of these 4 frame lines in the pixel coordinate system of the image.
Preferably, the specific method of step A0 is:
Step SA00: down-sample the real scene image to 128 × 256 pixels, obtaining image A'.
Step SA01: extract the R, G and B channel gray values of each pixel of image A'.
When the linear landmark takes one of the three channel colors as its background, test whether that channel's gray value is greater than 1.2 times the sum of the gray values of the other two channels: if so, set the pixel to 255; otherwise set it to 0, obtaining image A''.
Step SA02: compute whether the percentage of pixels in image A'' with non-zero gray value exceeds a set threshold T1. If the percentage is greater than or equal to T1, image A'' is considered to contain a linear landmark; if it is below T1, it is not. The threshold T1 is chosen to be at least 10%.
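Steps SA00 to SA02 amount to a down-sample, a channel-ratio binarization and a pixel-count test. A minimal OpenCV/NumPy sketch, with the dominant (background) channel index, the BGR ordering and the function name as assumptions, might read:

```python
# A sketch of steps SA00-SA02 (assumptions: OpenCV BGR frames, the landmark
# background occupies one dominant channel, and 128x256 means width x height).
import cv2
import numpy as np

def contains_linear_landmark(frame_bgr, dominant=2, t1=0.10):
    """dominant: BGR index of the landmark background color (2 = red)."""
    a_prime = cv2.resize(frame_bgr, (128, 256)).astype(np.float32)    # SA00
    channels = list(cv2.split(a_prime))                               # SA01
    dom = channels.pop(dominant)
    a_dprime = np.where(dom > 1.2 * (channels[0] + channels[1]),
                        255, 0).astype(np.uint8)
    # SA02: fraction of non-zero pixels against threshold T1 (>= 10%)
    return np.count_nonzero(a_dprime) / a_dprime.size >= t1
```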
Preferably, the maximal edge contour in step A2 is obtained as follows:
Step SA20: initialize the label number L = 0; pixels whose gray value equals 1 are defined as edge points.
Step SA21: for any edge point P in image A*, check whether P already carries a label:
if it does not, assign label L to P and go to step SA22;
if it does, check whether P is the last edge point to be labeled; if so, go to step SA23; otherwise repeat this step.
Step SA22: check whether the 8-neighborhood around edge point P contains edge points. If it does, assign label L to all of them; P and its surrounding edge points form one continuous edge. If it does not, set L = L + 1 and return to step SA21.
Step SA23: for the L continuous edges obtained, set to 0 the edge points of every edge whose length is below a set threshold T2; the remaining non-zero continuous edges form the maximal edge contour.
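Steps SA20 to SA23 label 8-connected edge components and discard the short ones. A compact sketch that substitutes OpenCV's connected-components routine for the explicit label propagation (an assumed but equivalent replacement; the threshold value is illustrative):

```python
# A sketch of steps SA20-SA23 (cv2.connectedComponents is an assumed,
# equivalent substitute for the explicit 8-neighborhood label propagation;
# the length threshold T2 below is illustrative).
import cv2
import numpy as np

def maximal_edge_contour(edge_img, t2=50):
    """edge_img: binary edge map, non-zero pixels are edge points."""
    n_labels, labels = cv2.connectedComponents(
        (edge_img > 0).astype(np.uint8), connectivity=8)
    out = np.zeros_like(edge_img)
    for lab in range(1, n_labels):            # label 0 is the background
        component = labels == lab             # one continuous edge
        if component.sum() >= t2:             # keep edges of length >= T2
            out[component] = 255
    return out                                # the maximal edge contour
```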
Preferably, the specific method of step 2 is:
Step B0: capture J real scene images with the image capture device as training samples, where one part of the samples contains a linear landmark and the other part does not; J is an integer greater than or equal to 10.
Step B1: down-sample all training samples to 20 × 20 pixels, then normalize the gray value of each pixel of every image.
Step B2: use an Adaboost-Haar trainer to learn parameters from the training samples, obtaining multiple Adaboost-Haar sub-classifiers.
Step B3: have the user's image capture device collect real scene images in real time, and judge whether each frame contains a linear landmark. If it does, record the frame containing the linear landmark as image B and go to step B4; otherwise repeat this step.
Step B4: set the scale factor k = 1.
Step B5: using a sliding window of (20k pixels) × (20k pixels), slide the window over the region of image B to obtain no fewer than one image block, and down-sample all image blocks to 20 × 20 pixels.
Step B6: classify all image blocks with the Adaboost-Haar sub-classifiers trained in step B2, and check whether the classifier output contains an image block classified as containing a linear landmark. If it does, go to step B7; otherwise increase the scale factor by 0.1 and return to step B5.
Step B7: if exactly one image block in the classification results contains a linear landmark, output that block; if two or more blocks contain a linear landmark, output the block with the largest classification weight.
Step B8: convert the linear-landmark image block obtained in step B7 to grayscale, obtaining image I2; apply the feature constraint method to extract the maximal edge contour of I2; then apply Hough line extraction to the maximal edge contour to obtain line information, and from the extracted lines find the 4 frame lines that characterize the edges of the linear landmark and their equations in the pixel coordinate system of the image.
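Steps B4 to B7 describe a standard multi-scale sliding-window detection loop. A hedged sketch follows; the `classifier.score` interface, the stride and the scale cap are assumptions standing in for the Adaboost-Haar sub-classifiers of step B2:

```python
# A sketch of steps B4-B7 (assumed interface: classifier.score(patch)
# returns a positive classification weight for landmark blocks; stride and
# the scale cap are illustrative, not fixed by the text).
import cv2
import numpy as np

def detect_landmark_block(image_b, classifier, stride=10, k_max=10.0):
    k = 1.0                                            # step B4
    h, w = image_b.shape[:2]
    while k <= k_max:
        side = int(round(20 * k))                      # (20k x 20k) window
        best = None
        for y in range(0, h - side + 1, stride):       # step B5
            for x in range(0, w - side + 1, stride):
                patch = cv2.resize(image_b[y:y + side, x:x + side], (20, 20))
                weight = classifier.score(patch)       # step B6
                if weight > 0 and (best is None or weight > best[0]):
                    best = (weight, image_b[y:y + side, x:x + side])
        if best is not None:
            return best[1]                             # step B7: max weight
        k += 0.1                                       # grow the window
    return None
```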
Preferably, the 4 frame lines that characterize the edges of the linear landmark are found as follows: choose the 4 lines with the highest vote counts whose pairwise distances are all greater than 31 pixels; these are the 4 frame lines characterizing the edges of the linear landmark.
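A sketch of this frame-line selection is given below. OpenCV's HoughLines returns lines in decreasing vote order, so greedily keeping well-separated lines yields the 4 highest-vote frame lines; measuring separation by the rho gap of near-parallel lines is an assumption about the patent's distance measure:

```python
# A sketch of the 4-frame-line selection (cv2.HoughLines returns lines
# sorted by decreasing votes; treating the rho gap of near-parallel lines
# as the pairwise distance is an assumption).
import cv2
import numpy as np

def frame_lines(edge_img, min_sep=31.0):
    lines = cv2.HoughLines(edge_img, 1, np.pi / 180, threshold=50)
    if lines is None:
        return []
    kept = []
    for rho, theta in (l[0] for l in lines):
        too_close = any(abs(theta - t) < np.pi / 18 and abs(rho - r) <= min_sep
                        for r, t in kept)
        if not too_close:
            kept.append((rho, theta))
        if len(kept) == 4:
            break
    # (rho, theta) corresponds to a*u + b*v + c = 0 with
    # a = cos(theta), b = sin(theta), c = -rho.
    return [(np.cos(t), np.sin(t), -r) for r, t in kept]
```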
Preferably, the feature constraint method in step B8 specifically comprises the following steps:
Step SB80: for image I2, extract the R, G and B channel gray values of each pixel. When the linear landmark takes one of the three channel colors as its background, test whether that channel's gray value is greater than 1.2 times the sum of the gray values of the other two channels: if so, set the pixel to 255; otherwise set it to 0, obtaining image I2*.
Step SB81: apply a morphological opening to image I2* to remove small noise; then perform edge extraction to obtain the edge image and the maximal edge contour.
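A minimal sketch of step SB81, with the kernel size and the choice of Canny as the edge extractor being assumptions (the text fixes only "opening, then edge extraction"):

```python
# A sketch of step SB81 (3x3 kernel and Canny are assumed choices).
import cv2

def open_and_extract_edges(i2_star):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    opened = cv2.morphologyEx(i2_star, cv2.MORPH_OPEN, kernel)  # remove noise
    return cv2.Canny(opened, 50, 150)                           # edge image
```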
Preferably, step 3 specifically comprises the following steps:
Step SF0: let $a_i u + b_i v + c_i = 0$ denote the equations of the 4 frame lines in the pixel coordinate system, where $i = 1, 2, 3, 4$ indexes the frame lines and $a_i$, $b_i$, $c_i$ are the coefficients of the $i$-th frame line.
The equations of the linear landmark's frame lines in the linear landmark coordinate system are $y = W/2$, $x = L/2$, $y = -W/2$ and $x = -L/2$, where $W$ and $L$ are the width and length of the linear landmark. The linear landmark coordinate system is defined as follows: the long side of the landmark, pointing right, is the x direction; the short side, pointing down, is the y direction; and the origin is the geometric center of the landmark.
Step SF1: determine the transformation between the image coordinate system and the linear landmark coordinate system by projecting the $i$-th line equation in the pixel coordinate system into the linear landmark coordinate system, which gives the projection formula

$$\begin{bmatrix} A_i & B_i & C_i \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} = 0,$$

where $A_i$, $B_i$, $C_i$ are:

$$A_i = a_i f_x r_{11} + b_i f_y r_{21} + (a_i u_0 + b_i v_0 + c_i) r_{31}$$
$$B_i = a_i f_x r_{12} + b_i f_y r_{22} + (a_i u_0 + b_i v_0 + c_i) r_{32}$$
$$C_i = a_i f_x t_x + b_i f_y t_y + (a_i u_0 + b_i v_0 + c_i) t_z$$

Here $R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}^{T}$ is the rotation matrix of the pixel coordinate system relative to the linear landmark coordinate system; $[t_x, t_y, t_z]$ is the translation between the pixel coordinate system and the linear landmark coordinate system; $f_x$ and $f_y$ are the equivalent focal lengths of the image capture device in the x and y directions; $u_0$ and $v_0$ are the coordinates of the intersection of the principal optical axis with the image plane; $x_w$ and $y_w$ are the x and y coordinates of the image capture device in the linear landmark coordinate system.
Step SF2: equate, one by one, the 4 line equations of step SF0 (via their projections) with the corresponding frame-line equations under the linear landmark coordinate system, obtaining 4 equivalent nonlinear equation systems; solve them by Newton iteration for the rotation matrix $R$ and the translation $[t_x, t_y, t_z]$ between the pixel coordinate system and the linear landmark coordinate system, obtaining the transformation between the two.
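A minimal sketch of step SF2 follows. The Euler-angle parameterization of R, the initial guess and the use of SciPy's damped Gauss-Newton (least_squares) in place of a hand-rolled Newton iteration are assumptions; the residuals encode exactly the correspondences y = ±W/2, x = ±L/2:

```python
# A sketch of step SF2 (an assumption-laden illustration, not the patent's
# reference code): solve for the rotation R and translation t between the
# pixel and linear landmark coordinate systems from the 4 matched lines.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project_line(abc, params, fx, fy, u0, v0):
    """Project the pixel-frame line a*u + b*v + c = 0 into the landmark
    frame, returning the coefficients (A_i, B_i, C_i) of step SF1."""
    a, b, c = abc
    r = Rotation.from_euler("xyz", params[:3]).as_matrix()  # assumed param.
    tx, ty, tz = params[3:]
    k = a * u0 + b * v0 + c
    A = a * fx * r[0, 0] + b * fy * r[1, 0] + k * r[2, 0]
    B = a * fx * r[0, 1] + b * fy * r[1, 1] + k * r[2, 1]
    C = a * fx * tx + b * fy * ty + k * tz
    return A, B, C

def residuals(params, lines_px, W, L, intrinsics):
    fx, fy, u0, v0 = intrinsics
    # Matching order: line 1 <-> y = W/2, line 2 <-> x = L/2,
    #                 line 3 <-> y = -W/2, line 4 <-> x = -L/2.
    targets = [("y", W / 2), ("x", L / 2), ("y", -W / 2), ("x", -L / 2)]
    res = []
    for abc, (axis, val) in zip(lines_px, targets):
        A, B, C = project_line(abc, params, fx, fy, u0, v0)
        if axis == "y":          # y = val  <=>  A = 0 and C + val*B = 0
            res += [A, C + val * B]
        else:                    # x = val  <=>  B = 0 and C + val*A = 0
            res += [B, C + val * A]
    return np.asarray(res)

# A rough initial pose is assumed, e.g. the camera a few meters away:
# sol = least_squares(residuals, np.array([0, 0, 0, 0, 0, 5.0]),
#                     args=(lines_px, W, L, intrinsics))
```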
Preferably, the transformation between the linear landmark coordinate system and the earth coordinate system in step 4 is obtained as follows:
Step D0: at N real-scene image capture positions in the scene, use a real-time kinematic (RTK) differential measuring instrument to obtain the coordinates of the N capture positions in WGS-84, denoted $y_i^g$ ($i = 1, \dots, N$).
Step D1: from the transformation between the pixel coordinate system and the linear landmark coordinate system obtained in step 3 and the known transformation between the pixel coordinate system and the image capture device coordinate system, obtain the positions of the N capture positions relative to the linear landmark coordinate system, denoted $x_i^c$ ($i = 1, \dots, N$); N is an integer greater than or equal to 3.
Step D2: from the coordinate formula

$$\begin{bmatrix} y_1^g & y_2^g & \cdots & y_N^g \end{bmatrix} = [R \mid T] \cdot \begin{bmatrix} x_1^c & x_2^c & \cdots & x_N^c \end{bmatrix},$$

solve inversely for $[R \mid T]$, obtaining the transformation between the linear landmark coordinate system and the earth coordinate system.
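One standard way to invert the stacked coordinate formula of step D2 is a Procrustes (Kabsch) least-squares fit; the following sketch assumes 3-D point correspondences arranged as columns and at least 3 non-collinear positions:

```python
# A sketch of step D2 (an assumed Kabsch-style solution, one standard way
# to invert the stacked coordinate formula): fit the rigid transform
# [R | T] mapping landmark-frame points x_i^c to earth-frame points y_i^g.
import numpy as np

def fit_rigid_transform(x_c, y_g):
    """x_c, y_g: 3 x N arrays of corresponding coordinates."""
    mu_x = x_c.mean(axis=1, keepdims=True)
    mu_y = y_g.mean(axis=1, keepdims=True)
    H = (x_c - mu_x) @ (y_g - mu_y).T           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                          # proper rotation, det(R) = +1
    T = mu_y - R @ mu_x
    return R, T                                 # y_g ~= R @ x_c + T
```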
The present invention has the following beneficial effects:
1. The localization method of the present invention uses road-type linear landmarks for positioning; because such landmarks are ubiquitous, practical application cost is effectively saved.
2. The present invention proposes a linear-landmark region detection method based on Adaboost. By extracting the line equations of the linear landmark frame in the image pixel coordinate system and the line equations of the true linear landmark frame in the linear landmark coordinate system, and equating the corresponding frame lines of the two coordinate systems, the transformation between the pixel coordinate system and the linear landmark coordinate system is solved; from it the user's position in the earth coordinate system is obtained, and detection accuracy is improved.
Brief description of the drawings
Fig. 1 is a schematic diagram of the linear landmark in the present invention.
Detailed description of the invention
The present invention is described below with reference to the accompanying drawings and embodiments.
The linear-landmark-based target localization method of the present invention comprises the following steps:
Step 1: in the application scenario, measure the length and width of the linear landmark in the scene and the longitude, latitude and elevation of the landmark's center point; taking the linear landmark as the reference, define the linear landmark coordinate system, and from the landmark's length and width obtain the equations of the landmark's frame lines in the linear landmark coordinate system.
Step 2: capture an image of the linear landmark with an image capture device and extract the region of the image containing the linear landmark; then extract the frame-line information of that region, obtaining the equation of each frame line in the pixel coordinate system of the image.
Step 3: put the frame lines obtained in step 2 into one-to-one correspondence with the frame lines of the true linear landmark; from the frame-line equations in the image pixel coordinate system and the equations of the true landmark's frame lines in the linear landmark coordinate system, obtain the transformation between the image pixel coordinate system and the linear landmark coordinate system.
Step 4: from the landmark's longitude, latitude and height, obtain the transformation between the linear landmark coordinate system and the earth coordinate system; then, combining it with the transformation between the image pixel coordinate system and the linear landmark coordinate system, obtain the transformation between the image pixel coordinate system and the earth coordinate system, and finally the user's coordinates in the earth coordinate system.
For step 2, the present invention provides two linear-landmark detection and extraction algorithms. The first comprises the following steps:
Step A0: have the user's image capture device collect real scene images in real time, and judge whether each frame contains a linear landmark. If it does, record the frame containing the linear landmark as image A and go to step A1; otherwise repeat this step.
Step A0 specifically comprises: step SA00, down-sample the real scene image to 128 × 256 pixels, obtaining image A';
Step SA01: extract the R, G and B channel gray values of each pixel of image A'.
When the linear landmark takes one of the three channel colors as its background, test whether that channel's gray value is greater than 1.2 times the sum of the gray values of the other two channels: if so, set the pixel to 255; otherwise set it to 0, obtaining image A''.
Step SA02: compute whether the percentage of pixels in image A'' with non-zero gray value exceeds the threshold. If the percentage is greater than or equal to the threshold, the image is considered to contain a linear landmark region; otherwise it is not. The threshold is chosen to be at least 10%.
Step A1: for image A, extract the R, G and B channel gray values of each pixel; set pixels that satisfy the following condition to 255 and all other pixels to 0, obtaining the binarized image A*.
The condition is:
when the linear landmark takes one of the three channel colors as its background, the gray value of that color channel is greater than 1.2 times the sum of the gray values of the other two channels.
Step A2: apply a morphological opening to image A* to remove small noise; then perform edge extraction and remove fine edges, obtaining the maximal edge contour recorded as image A**. The specific steps are:
Step SA20: initialize the label number L = 0; pixels whose gray value equals 1 are defined as edge points.
Step SA21: for an edge point P in image A*, check whether P already carries a label: if it does not, assign label L to P and go to step SA22; if it does, check whether P is the last edge point to be labeled; if so, go to step SA23, otherwise repeat this step.
Step SA22: check whether the 8-neighborhood around edge point P contains new edge points. If it does, assign label L to all of them; P and its surrounding new edge points form one continuous edge. If it does not, set L = L + 1 and return to step SA21.
Step SA23: for the L continuous edges obtained, set to 0 the edge points of every edge whose length is below the threshold.
Step A3: apply Hough line extraction to image A** to obtain its line information; from the lines of A**, choose the 4 lines with the highest vote counts whose pairwise distances are all greater than 31 pixels; these 4 lines are the frame lines characterizing the linear landmark frame; then determine the equations of these 4 frame lines in the pixel coordinate system of the image.
The second linear-landmark extraction algorithm provided by the invention is as follows:
Step B0: capture N real scene images with the image capture device as training samples, where one part of the samples contains a linear landmark and the other part does not; N is an integer greater than or equal to 10.
Step B1: down-sample all training samples to 20 × 20 pixels, then normalize the gray value of each pixel of every image.
Step B2: use an Adaboost-Haar trainer to learn parameters from the training samples, obtaining multiple Adaboost-Haar sub-classifiers.
Step B3: have the user's image capture device collect real scene images in real time, and judge whether each frame contains a linear landmark. If it does, record the frame containing the linear landmark as image B and go to step B4; otherwise repeat this step.
Step B4: set the scale factor k = 1.
Step B5: using a sliding window of 20k × 20k pixels, slide the window over the region of image B to obtain no fewer than one image block, and down-sample all image blocks to 20 × 20 pixels.
Step B6: classify all image blocks with the Adaboost-Haar sub-classifiers trained in step B2, and check whether the classifier output contains an image block classified as containing a linear landmark. If it does, go to step B7; otherwise update the scale factor k = k + 0.1 and go to step B5.
Step B7: if exactly one image block in the classification results contains a linear landmark, output that block; if two or more blocks contain a linear landmark, output the block with the largest classification weight.
Step B8: convert the linear-landmark image block obtained in step B7 to grayscale, obtaining image I2; apply the feature constraint method to extract the maximal edge contour of I2; then apply Hough line extraction to the maximal edge contour to obtain line information, and from the extracted lines find the 4 frame lines that characterize the edges of the linear landmark and their equations in the pixel coordinate system of the image.
The feature constraint method in step B8 specifically comprises the following steps:
Step SB80: for image I2, extract the R, G and B channel gray values of each pixel. When the linear landmark takes one of the three channel colors as its background, test whether that channel's gray value is greater than 1.2 times the sum of the gray values of the other two channels: if so, set the pixel to 255; otherwise set it to 0, obtaining image I2*.
Step SB81: apply a morphological opening to image I2* to remove small noise; then perform edge extraction to obtain the edge image and the maximal edge contour.
Step 3 comprises the following steps:
Step SF0: let $a_i u + b_i v + c_i = 0$ denote the equations of the 4 frame lines in the pixel coordinate system, where $i = 1, 2, 3, 4$ indexes the frame lines.
The lines of the true linear landmark in the linear landmark coordinate system are numbered #5 to #8, with equations #5: $y = W/2$, #6: $x = L/2$, #7: $y = -W/2$ and #8: $x = -L/2$, where $W$ and $L$ are the width and length of the linear landmark. The linear landmark coordinate system is defined as follows: the long side of the landmark, pointing right, is the x direction; the short side, pointing down, is the y direction; and the origin is the geometric center of the landmark.
Step SF1: determine the transformation between the image coordinate system and the linear landmark coordinate system by projecting the $i$-th line equation in the pixel coordinate system into the linear landmark coordinate system, which gives the projection formula

$$\begin{bmatrix} A_i & B_i & C_i \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} = 0;$$

the 4 lines yield 4 projection formulas. Here $A_i$, $B_i$, $C_i$ are:

$$A_i = a_i f_x r_{11} + b_i f_y r_{21} + (a_i u_0 + b_i v_0 + c_i) r_{31}$$
$$B_i = a_i f_x r_{12} + b_i f_y r_{22} + (a_i u_0 + b_i v_0 + c_i) r_{32}$$
$$C_i = a_i f_x t_x + b_i f_y t_y + (a_i u_0 + b_i v_0 + c_i) t_z$$

where $R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}^{T}$ is the rotation matrix of the pixel coordinate system relative to the linear landmark coordinate system; $f_x$ and $f_y$ are the equivalent focal lengths of the image capture device in the x and y directions; $u_0$ and $v_0$ are the image pixel coordinates of the intersection of the principal optical axis with the image plane; $x_w$ and $y_w$ are the x and y coordinates of the image capture device in the linear landmark coordinate system.
Step SF2: equate the projection formulas of the 4 lines of step SF1 with the corresponding frame-line equations, pairing #1 with #5, #2 with #6, #3 with #7 and #4 with #8, which gives:

$$A_1 = 0, \quad -C_1/B_1 = W/2$$
$$B_2 = 0, \quad -C_2/A_2 = L/2$$
$$A_3 = 0, \quad -C_3/B_3 = -W/2$$
$$B_4 = 0, \quad -C_4/A_4 = -L/2$$

From these 4 equivalent nonlinear equation systems, solve by Newton iteration for the rotation matrix $R$ and the translation $[t_x, t_y, t_z]$ between the pixel coordinate system and the linear landmark coordinate system, obtaining the transformation between the two.
For the transformation between the linear landmark coordinate system and the earth coordinate system in step 4, the invention provides the following method:
Step D0: at N real-scene image capture positions in the scene, use a real-time kinematic (RTK) differential measuring instrument to obtain the coordinates of the N capture positions in WGS-84, denoted $y_i^g$.
Step D1: from the transformation between the pixel coordinate system and the linear landmark coordinate system obtained in step 3 and the transformation between the pixel coordinate system and the image capture device coordinate system, obtain the positions of the N capture positions relative to the linear landmark coordinate system, denoted $x_i^c$; N is an integer greater than or equal to 3.
Step D2: from the coordinate formula

$$\begin{bmatrix} y_1^g & y_2^g & \cdots & y_N^g \end{bmatrix} = [R \mid T] \cdot \begin{bmatrix} x_1^c & x_2^c & \cdots & x_N^c \end{bmatrix},$$

solve inversely for $[R \mid T]$, obtaining the transformation between the linear landmark coordinate system and the earth coordinate system and hence the user's coordinates in the earth coordinate system.
In summary, the above are only preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (9)

1. A target localization method based on linear landmarks, characterized in that it comprises the following steps:
Step 1: in the application scenario, measure the length and width of the linear landmark in the scene; define the linear landmark coordinate system, and from the landmark's length and width obtain the equations of the landmark's frame lines in the linear landmark coordinate system;
Step 2: capture an image of the linear landmark with an image capture device and extract the region of the image containing the linear landmark; then extract the landmark's frame-line information from that region, obtaining the equation of each frame line in the pixel coordinate system of the image;
Step 3: put the frame lines obtained in step 2 into one-to-one correspondence with the linear landmark frame lines of step 1; for each pair, solve the simultaneous equations formed by the frame-line equation in the image pixel coordinate system and the corresponding frame-line equation in the linear landmark coordinate system, obtaining the transformation between the image pixel coordinate system and the linear landmark coordinate system;
Step 4: obtain the transformation between the linear landmark coordinate system and the earth coordinate system; then, combining it with the transformation between the image pixel coordinate system and the linear landmark coordinate system, obtain the transformation between the image pixel coordinate system and the earth coordinate system, and finally compute the user's coordinates in the earth coordinate system, thereby realizing localization.
2. The linear-landmark-based target localization method of claim 1, characterized in that step 2 specifically comprises the following steps:
Step A0: have the image capture device collect real scene images in real time, and judge whether each frame contains a linear landmark; if it does, record the frame containing the linear landmark as image A and go to step A1; otherwise repeat this step;
Step A1: for image A, extract the R, G and B channel gray values of each pixel; set pixels that satisfy the following condition to 255 and all other pixels to 0, obtaining the binarized image A*;
the condition being:
when the linear landmark takes one of the three channel colors as its background, the gray value of that color channel is greater than 1.2 times the sum of the gray values of the other two channels;
Step A2: apply a morphological opening to image A* to remove small noise; then perform edge extraction and remove fine edges, obtaining the maximal edge contour, recorded as image A**;
Step A3: apply Hough line extraction to image A** to obtain the line information in A**; from the lines of A**, find the 4 frame lines that characterize the edges of the linear landmark and the equations of these 4 frame lines in the pixel coordinate system of the image.
3. The linear-landmark-based target localization method of claim 2, characterized in that the specific method of step A0 is:
Step SA00: down-sample the real scene image to 128 × 256 pixels, obtaining image A';
Step SA01: extract the R, G and B channel gray values of each pixel of image A';
when the linear landmark takes one of the three channel colors as its background, test whether that channel's gray value is greater than 1.2 times the sum of the gray values of the other two channels: if so, set the pixel to 255; otherwise set it to 0, obtaining image A'';
Step SA02: compute whether the percentage of pixels in image A'' with non-zero gray value exceeds a set threshold T1: if the percentage is greater than or equal to T1, image A'' is considered to contain a linear landmark; if it is below T1, it is not; the threshold T1 is chosen to be at least 10%.
4. The linear-landmark-based target localization method of claim 2, characterized in that the maximal edge contour in step A2 is obtained as follows:
Step SA20: initialize the label number L = 0; pixels whose gray value equals 1 are defined as edge points;
Step SA21: for any edge point P in image A*, check whether P already carries a label:
if it does not, assign label L to P and go to step SA22;
if it does, check whether P is the last edge point to be labeled; if so, go to step SA23; otherwise repeat this step;
Step SA22: check whether the 8-neighborhood around edge point P contains edge points: if it does, assign label L to all of them, P and its surrounding edge points forming one continuous edge; if it does not, set L = L + 1 and return to step SA21;
Step SA23: for the L continuous edges obtained, set to 0 the edge points of every edge whose length is below a set threshold T2; the remaining non-zero continuous edges form the maximal edge contour.
5. The linear-landmark-based target localization method of claim 1, characterized in that the specific method of step 2 is:
Step B0: capture J real scene images with the image capture device as training samples, where one part of the samples contains a linear landmark and the other part does not; J is an integer greater than or equal to 10;
Step B1: down-sample all training samples to 20 × 20 pixels, then normalize the gray value of each pixel of every image;
Step B2: use an Adaboost-Haar trainer to learn parameters from the training samples, obtaining multiple Adaboost-Haar sub-classifiers;
Step B3: have the user's image capture device collect real scene images in real time, and judge whether each frame contains a linear landmark; if it does, record the frame containing the linear landmark as image B and go to step B4; otherwise repeat this step;
Step B4: set the scale factor k = 1;
Step B5: using a sliding window of (20k pixels) × (20k pixels), slide the window over the region of image B to obtain no fewer than one image block, and down-sample all image blocks to 20 × 20 pixels;
Step B6: classify all image blocks with the Adaboost-Haar sub-classifiers trained in step B2, and check whether the classifier output contains an image block classified as containing a linear landmark: if it does, go to step B7; otherwise increase the scale factor by 0.1 and return to step B5;
Step B7: if exactly one image block in the classification results contains a linear landmark, output that block; if two or more blocks contain a linear landmark, output the block with the largest classification weight;
Step B8: convert the linear-landmark image block obtained in step B7 to grayscale, obtaining image I2; apply the feature constraint method to extract the maximal edge contour of I2; then apply Hough line extraction to the maximal edge contour to obtain line information, and from the extracted lines find the 4 frame lines that characterize the edges of the linear landmark and their equations in the pixel coordinate system of the image.
6. The linear-landmark-based target localization method of claim 2 or 5, characterized in that the 4 frame lines that characterize the edges of the linear landmark are found as follows: choose the 4 lines with the highest vote counts whose pairwise distances are all greater than 31 pixels; these are the 4 frame lines characterizing the edges of the linear landmark.
7. The linear-landmark-based target localization method of claim 5, characterized in that the feature constraint method in step B8 specifically comprises the following steps:
Step SB80: for image I2, extract the R, G and B channel gray values of each pixel; when the linear landmark takes one of the three channel colors as its background, test whether that channel's gray value is greater than 1.2 times the sum of the gray values of the other two channels: if so, set the pixel to 255; otherwise set it to 0, obtaining image I2*;
Step SB81: apply a morphological opening to image I2* to remove small noise; then perform edge extraction to obtain the edge image and the maximal edge contour.
8. The linear-landmark-based target localization method of claim 1, characterized in that step 3 specifically comprises the following steps:
Step SF0: let $a_i u + b_i v + c_i = 0$ denote the equations of the 4 frame lines in the pixel coordinate system, where $i = 1, 2, 3, 4$ indexes the frame lines and $a_i$, $b_i$, $c_i$ are the coefficients of the $i$-th frame line;
the equations of the linear landmark's frame lines in the linear landmark coordinate system being $y = W/2$, $x = L/2$, $y = -W/2$ and $x = -L/2$, where $W$ and $L$ are the width and length of the linear landmark; the linear landmark coordinate system being defined as follows: the long side of the landmark, pointing right, is the x direction; the short side, pointing down, is the y direction; and the origin is the geometric center of the landmark;
Step SF1: determine the transformation between the image coordinate system and the linear landmark coordinate system by projecting the $i$-th line equation in the pixel coordinate system into the linear landmark coordinate system, which gives the projection formula

$$\begin{bmatrix} A_i & B_i & C_i \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} = 0,$$

where $A_i$, $B_i$, $C_i$ are:

$$A_i = a_i f_x r_{11} + b_i f_y r_{21} + (a_i u_0 + b_i v_0 + c_i) r_{31}$$
$$B_i = a_i f_x r_{12} + b_i f_y r_{22} + (a_i u_0 + b_i v_0 + c_i) r_{32}$$
$$C_i = a_i f_x t_x + b_i f_y t_y + (a_i u_0 + b_i v_0 + c_i) t_z$$

and where $R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}^{T}$ is the rotation matrix of the pixel coordinate system relative to the linear landmark coordinate system; $[t_x, t_y, t_z]$ is the translation between the pixel coordinate system and the linear landmark coordinate system; $f_x$ and $f_y$ are the equivalent focal lengths of the image capture device in the x and y directions; $u_0$ and $v_0$ are the coordinates of the intersection of the principal optical axis with the image plane; $x_w$ and $y_w$ are the x and y coordinates of the image capture device in the linear landmark coordinate system;
Step SF2: equate, one by one, the 4 line equations of step SF0 (via their projections) with the corresponding frame-line equations under the linear landmark coordinate system, obtaining 4 equivalent nonlinear equation systems; solve them by Newton iteration for the rotation matrix $R$ and the translation $[t_x, t_y, t_z]$ between the pixel coordinate system and the linear landmark coordinate system, obtaining the transformation between the two.
9. The linear-landmark-based target localization method of claim 1, 2, 5 or 8, characterized in that the transformation between the linear landmark coordinate system and the earth coordinate system in step 4 is obtained as follows:
Step D0: at N real-scene image capture positions in the scene, use a real-time kinematic (RTK) differential measuring instrument to obtain the coordinates of the N capture positions in WGS-84, denoted $y_i^g$;
Step D1: from the transformation between the pixel coordinate system and the linear landmark coordinate system obtained in step 3 and the known transformation between the pixel coordinate system and the image capture device coordinate system, obtain the positions of the N capture positions relative to the linear landmark coordinate system, denoted $x_i^c$; N is an integer greater than or equal to 3;
Step D2: from the coordinate formula $\begin{bmatrix} y_1^g & y_2^g & \cdots & y_N^g \end{bmatrix} = [R \mid T] \cdot \begin{bmatrix} x_1^c & x_2^c & \cdots & x_N^c \end{bmatrix}$, solve inversely for $[R \mid T]$, obtaining the transformation between the linear landmark coordinate system and the earth coordinate system.
CN201410608584.0A 2014-11-03 2014-11-03 Linear-landmark-based target localization method Active CN104374386B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410608584.0A CN104374386B (en) 2014-11-03 2014-11-03 Linear-landmark-based target localization method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410608584.0A CN104374386B (en) 2014-11-03 2014-11-03 Linear-landmark-based target localization method

Publications (2)

Publication Number Publication Date
CN104374386A CN104374386A (en) 2015-02-25
CN104374386B true CN104374386B (en) 2016-05-25

Family

ID=52553441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410608584.0A Active CN104374386B (en) 2014-11-03 2014-11-03 Linear-landmark-based target localization method

Country Status (1)

Country Link
CN (1) CN104374386B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046686A (en) * 2015-06-19 2015-11-11 奇瑞汽车股份有限公司 Positioning method and apparatus

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5848374A (en) * 1995-03-06 1998-12-08 Nippon Telegraph And Telephone Corporation Map information processing method and apparatus for correlating road location on a road network map
CN100370226C (en) * 2004-07-23 2008-02-20 东北大学 Method for visual guiding by manual road sign
CN101620671B (en) * 2009-08-14 2012-05-09 华中科技大学 Method for indirectly positioning and identifying three-dimensional buildings by using riverway landmarks
CN102121831B (en) * 2010-12-01 2013-01-09 北京腾瑞万里科技有限公司 Real-time street view navigation method and device

Also Published As

Publication number Publication date
CN104374386A (en) 2015-02-25

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wei Dongyan

Inventor after: Li Wen

Inventor after: Lai Qifeng

Inventor after: Zhang Xiaoguang

Inventor after: Chen Xialan

Inventor after: Li Xianghong

Inventor after: Xu Ying

Inventor after: Yuan Hong

Inventor after: Gong Xuping

Inventor before: Gong Xuping

Inventor before: Wei Dongyan

Inventor before: Lai Qifeng

Inventor before: Zhang Xiaoguang

Inventor before: Chen Xialan

Inventor before: Li Xianghong

Inventor before: Xu Ying

Inventor before: Yuan Hong

COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant