Summary of the invention
In view of this, the present invention provides an active visual localization method capable of high-precision positioning of a user in a scene in a GPS-denied environment.
In order to solve the above technical problems, the present invention is achieved as follows:
The active visual localization method of the present invention comprises the following steps:
Step 1: fabricating a cooperative target and arranging the cooperative target in the scene.
The cooperative target comprises a backboard bearing a red pattern, on which three yellow squares are arranged as cooperation marks. The backboard has an RGB gray value of (255, 0, 0), and the three squares have an RGB gray value of (255, 255, 0). The first square and the second square are symmetric about one diagonal of the third square, which serves as the axis of symmetry. The third square is larger than the first and second squares, the ratio of the side length of the third square to that of the first and second squares being 4:3. The minimum distance between the borders of the three squares is greater than the side length of the smallest square.
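The geometric constraints on the target can be stated compactly. The following sketch (not part of the patent; the function name and tolerance are illustrative) checks the stated 4:3 side ratio and border-gap rule against the dimensions given in the description:

```python
# Sketch: verifying the stated design constraints of the cooperative target.
# All dimensions in centimetres, taken from the description above.
LARGE_SIDE = 32   # third (large) square
SMALL_SIDE = 24   # first and second squares

def check_target_geometry(large, small, min_border_gap):
    """Return True if the target satisfies the stated proportions."""
    ratio_ok = abs(large / small - 4 / 3) < 1e-9   # side ratio must be 4:3
    gap_ok = min_border_gap > small                # border gap > smallest side
    return ratio_ok and gap_ok

print(check_target_geometry(LARGE_SIDE, SMALL_SIDE, 25))  # True: 32/24 = 4/3, 25 > 24
```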
Step 2: acquiring an image of the cooperative target in the scene with the user's image acquisition device, extracting the corner points of the cooperation marks in the cooperative target, and obtaining the coordinates of the corner points in the image coordinate system of the cooperative target image.
Step 3: obtaining the transformation between the cooperative-target coordinate system and the image coordinate system from the coordinates of the cooperation-mark corner points in the cooperative-target coordinate system and the coordinates obtained in Step 2; then, using the known transformation between the image coordinate system and the coordinate system of the user's image acquisition device, obtaining the transformation between the device coordinate system and the cooperative-target coordinate system; finally, using the transformation between the device coordinate system and the vehicle coordinate system, together with the transformation between the cooperative-target coordinate system and the geographic coordinate system, obtaining the position and attitude of the vehicle in the geographic coordinate system, thereby completing the localization of the vehicle in the scene.
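The chain of coordinate transformations in Step 3 can be sketched with homogeneous 4×4 matrices. The matrices below are illustrative placeholders (pure translations with invented values), not quantities from the patent; in practice the camera-to-target transform would come from the corner correspondences and the target-to-geographic transform from surveying the installed target:

```python
# Hedged sketch of the Step 3 idea: composing known homogeneous transforms to
# carry the vehicle pose into the geographic frame.
def matmul4(a, b):
    """Multiply two 4x4 homogeneous transforms given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# Assumed example values: camera->target (recovered from the corners) and
# target->geographic (surveyed when the target was installed).
T_cam_target = translation(0.5, 0.0, 3.0)
T_target_geo = translation(100.0, 200.0, 0.0)

# vehicle->camera is a fixed mounting offset; composing the chain yields
# vehicle->geographic, whose last column holds the vehicle position.
T_vehicle_cam = translation(0.0, 0.0, 1.2)
T_vehicle_geo = matmul4(T_target_geo, matmul4(T_cam_target, T_vehicle_cam))
print([row[3] for row in T_vehicle_geo])  # vehicle position, then homogeneous 1
```

With these assumed values the composed position is (100.5, 200.0, 4.2), i.e. the translations simply accumulate along the chain.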
Preferably, the backboard measures 80 cm × 120 cm, the side length of the third square is 32 cm, the side lengths of the first and second squares are 24 cm, and the distances from the centers of the first and second squares to the center of the third square are greater than 35 cm. The axis of symmetry of the third square coincides with the centerline of the backboard in the width direction.
Preferably, Step 2 comprises the following sub-steps:
Step A0: controlling the user's image acquisition device to capture real-scene images of the scene in real time, and judging whether each frame contains the cooperative target: if so, the frame containing the cooperative target is defined as image A and Step A1 is performed; if not, Step A0 is repeated.
Step A1: extracting the R, G and B channel gray values of each pixel in image A, assigning 255 to pixels satisfying the condition below and 0 to pixels that do not, and thereby binarizing image A to obtain image A*.
The condition is: for each pixel, the R-channel gray value is greater than 50, greater than 1.5 times the G-channel gray value, and greater than 1.3 times the B-channel gray value.
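The per-pixel rule of Step A1 reduces to a single predicate. A minimal sketch (function name invented for illustration):

```python
# Sketch of the Step A1 binarization rule: a pixel is kept (255) when
# R > 50, R > 1.5*G and R > 1.3*B, otherwise it is set to 0.
def red_mask_value(r, g, b):
    return 255 if (r > 50 and r > 1.5 * g and r > 1.3 * b) else 0

print(red_mask_value(200, 60, 40))   # 255: strongly red pixel
print(red_mask_value(200, 180, 40))  # 0: red not dominant over green
```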
Step A2: applying a morphological opening to image A* to remove small regions, then performing edge extraction on image A* and edge compensation, obtaining image A**.
Step A3: detecting all edge sequences in image A** by edge tracking, finding the longest edge in image A**, and extracting, among all pixels on the longest edge, the maximum and minimum coordinates along the x and y axes: xmax, ymax, xmin and ymin.
Step A4: in image A, connecting the coordinate points (xmin, ymin), (xmin, ymax), (xmax, ymax) and (xmax, ymin) in sequence; the resulting region is the cooperative-target region, and the image within it is defined as image T1.
Step A5: converting image A to grayscale and, in the grayscale image, connecting the coordinate points (xmin, ymin), (xmin, ymax), (xmax, ymax) and (xmax, ymin) in sequence; the resulting region is the corner-detection region, and the image within it is defined as image T2.
Step A6: for each pixel in image T1, assigning 255 to pixels whose R-channel gray value is greater than 50, whose G-channel gray value is greater than 50 and whose G-channel gray value is greater than 1.5 times the B-channel gray value, and 0 otherwise; binarization yields image T1*.
Step A7: applying a morphological opening to image T1* to remove its small regions, then performing edge extraction on T1* and edge compensation, obtaining image T1**.
Step A8: performing edge tracking on image T1** and keeping the three longest edges as the square edges of the cooperative target, deleting the remaining edges.
Step A9: extracting corner points from image T2 with the Harris corner detection algorithm; a corner point lying on a square edge of the cooperative target is taken as a cooperative-target corner point.
Step A10: dividing the cooperative-target corner points obtained in Step A9 into 3 classes with the k-means algorithm; if any class contains fewer than 3 corner points, returning to Step A10 and continuing the classification; if a class contains exactly 3 corner points, performing corner compensation so that every class contains 4 corner points.
Step A11: according to the relative positions of the 12 corner points, matching the 4 corner points of each class with the vertices of the corresponding one of the 3 squares of the cooperative target, obtaining the coordinates of the cooperative-target corner points in the image coordinate system.
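The grouping in Step A10 can be illustrated with a toy k-means on invented coordinates. This is a simplified stand-in (fixed seeds, squared Euclidean distance, pure Python) rather than the full algorithm the patent assumes; the point is that the 12 corners of three well-separated squares fall into three 4-point classes:

```python
# Toy sketch of the Step A10 grouping: clustering 12 corner points into 3
# classes, one per square. Seeds are assumed well-separated; coordinates invented.
def kmeans3(points, iters=10):
    centers = [points[0], points[4], points[8]]
    for _ in range(iters):
        classes = [[], [], []]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            classes[d.index(min(d))].append(p)
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   for c in classes]
    return classes

square = lambda x, y, s: [(x, y), (x + s, y), (x, y + s), (x + s, y + s)]
pts = square(0, 100, 32) + square(200, 0, 24) + square(200, 200, 24)
groups = kmeans3(pts)
print([len(g) for g in groups])  # [4, 4, 4]: each square's corners recovered
```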
Preferably, in Step A0 the method for judging whether a real-time image contains the cooperative target is:
Step D1: extracting the R, G and B channel values of each pixel in the current image;
Step D2: assigning 255 to pixels whose R-channel gray value is greater than 50, greater than 1.5 times the G-channel gray value and greater than 1.3 times the B-channel gray value, and 0 otherwise, then binarizing the current image;
Step D3: counting the pixels with gray value 255 in the binarized image; if their number exceeds 20% of all pixels in the image, the current image is deemed to contain the cooperative target; otherwise the cooperative target is deemed absent.
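Steps D1 to D3 amount to thresholding and counting. A minimal sketch, where the tiny "image" is a nested list of (R, G, B) tuples invented for illustration:

```python
# Sketch of the D1-D3 presence test: binarize with the red rule, then declare the
# target present when more than 20% of pixels survive.
def target_present(image):
    total = hits = 0
    for row in image:
        for (r, g, b) in row:
            total += 1
            if r > 50 and r > 1.5 * g and r > 1.3 * b:
                hits += 1
    return hits > 0.2 * total

red, grey = (220, 30, 30), (120, 120, 120)
frame = [[red] * 3 + [grey] * 7 for _ in range(10)]  # 30% red pixels
print(target_present(frame))  # True
```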
Preferably, in Step A10, when a class contains exactly 3 corner points, the corner compensation method is as follows:
Step B1: for the class in question, denoting the x-coordinates of the 3 corner points by x1, x2 and x3 and taking pairwise differences, namely:
ΔX1 = x1 − x2, ΔX2 = x1 − x3, ΔX3 = x2 − x3 (3)
Denoting the y-coordinates of the 3 corner points by y1, y2 and y3 and taking pairwise differences, namely:
ΔY1 = y1 − y2, ΔY2 = y1 − y3, ΔY3 = y2 − y3 (4)
Step B2: finding the minimum among the x-direction differences and the minimum among the y-direction differences, and comparing them: if the x-direction minimum is smaller, the class is missing a corner point along the x direction, and Step B3 is performed; if the y-direction minimum is smaller, the class is missing a corner point along the y direction, and Step B4 is performed.
Step B3: supplementing the missing corner point along the x direction.
SB31: first determining which x-direction difference is smallest:
if the first difference ΔX1 is smallest, the missing position lies on the side of the line through corner points (x1, y1) and (x2, y2) opposite corner point (x3, y3), facing that corner point;
if the second difference ΔX2 is smallest, the missing position lies on the side of the line through corner points (x1, y1) and (x3, y3) opposite corner point (x2, y2), facing that corner point;
if the third difference ΔX3 is smallest, the missing position lies on the side of the line through corner points (x2, y2) and (x3, y3) opposite corner point (x1, y1), facing that corner point.
SB32: then computing the differences between the x-coordinate of the corner point opposite the missing point and the x-coordinates of the other two corner points: if both differences are less than 0, the x-coordinate of the missing point is greater than that of its opposite corner point; if both are greater than 0, it is smaller.
SB33: finding the two largest of the differences ΔX1, ΔX2 and ΔX3 and taking their average.
When the x-coordinate of the missing point is greater than that of its opposite corner point, it equals the x-coordinate of the opposite corner point plus twice the average difference;
when the x-coordinate of the missing point is less than that of its opposite corner point, it equals the x-coordinate of the opposite corner point minus twice the average difference.
The y-coordinate of the missing point equals the y-coordinate of its opposite corner point.
Step B4: supplementing the missing corner point along the y direction:
SB41: determining which y-direction difference is smallest:
if the first difference ΔY1 is smallest, the missing position lies on the side of the line through corner points (x1, y1) and (x2, y2) opposite corner point (x3, y3), facing that corner point;
if the second difference ΔY2 is smallest, the missing position lies on the side of the line through corner points (x1, y1) and (x3, y3) opposite corner point (x2, y2), facing that corner point;
if the third difference ΔY3 is smallest, the missing position lies on the side of the line through corner points (x2, y2) and (x3, y3) opposite corner point (x1, y1), facing that corner point.
SB42: then computing the differences between the y-coordinate of the corner point opposite the missing point and the y-coordinates of the other two corner points: if both differences are less than 0, the y-coordinate of the missing point is greater than that of its opposite corner point; if both are greater than 0, it is smaller.
SB43: finding the two largest of the differences ΔY1, ΔY2 and ΔY3 and taking their average.
When the missing point lies in the +y direction of its opposite corner point, its y-coordinate equals the y-coordinate of the opposite corner point plus twice the average difference;
when the missing point lies in the −y direction of its opposite corner point, its y-coordinate equals the y-coordinate of the opposite corner point minus twice the average difference.
The x-coordinate of the missing point equals the x-coordinate of its opposite corner point.
The coordinates of the missing corner point are thus obtained, completing the corner compensation.
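Method one ultimately places the missing corner diagonally opposite the lone unpaired corner. As an illustrative stand-in for the axis-by-axis arithmetic above (an assumption, not the patent's exact rule), the fourth vertex of a square can be completed by parallelogram closure, after identifying the right-angle vertex among the three known corners as the one diagonal to the missing corner:

```python
# Simplified sketch: complete the fourth vertex of a square from three known
# corners. The vertex at the right angle is diagonal to the missing one, so the
# missing vertex is u + v - diag (parallelogram closure).
def complete_square(corners):
    a, b, c = corners
    for diag, u, v in ((a, b, c), (b, a, c), (c, a, b)):
        dot = (u[0] - diag[0]) * (v[0] - diag[0]) + (u[1] - diag[1]) * (v[1] - diag[1])
        if abs(dot) < 1e-6:  # right angle at diag
            return (u[0] + v[0] - diag[0], u[1] + v[1] - diag[1])
    raise ValueError("no right-angle vertex found")

# three corners of a 24x24 square, missing corner (24, 24)
print(complete_square([(0, 0), (24, 0), (0, 24)]))  # (24, 24)
```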
Preferably, in Step A10, when a class contains exactly 3 corner points, an alternative corner compensation method is as follows:
Step C1: for the class in question, denoting the x-coordinates of the 3 corner points by x1, x2 and x3 and taking pairwise differences, namely:
ΔX1 = x1 − x2, ΔX2 = x1 − x3, ΔX3 = x2 − x3 (3)
Denoting the y-coordinates of the 3 corner points by y1, y2 and y3 and taking pairwise differences, namely:
ΔY1 = y1 − y2, ΔY2 = y1 − y3, ΔY3 = y2 − y3 (4)
Step C2: finding the minimum among the x-direction differences and the minimum among the y-direction differences, and comparing them: if the x-direction minimum is smaller, the class is missing a corner point along the x direction, and Step C3 is performed; if the y-direction minimum is smaller, the class is missing a corner point along the y direction, and Step C4 is performed.
Step C3: supplementing the missing corner point along the x direction.
SC31: first determining which x-direction difference is smallest:
if the first difference ΔX1 is smallest, the missing position lies on the side of the line through corner points (x1, y1) and (x2, y2) opposite corner point (x3, y3), facing that corner point;
if the second difference ΔX2 is smallest, the missing position lies on the side of the line through corner points (x1, y1) and (x3, y3) opposite corner point (x2, y2), facing that corner point;
if the third difference ΔX3 is smallest, the missing position lies on the side of the line through corner points (x2, y2) and (x3, y3) opposite corner point (x1, y1), facing that corner point.
SC32: then computing the differences between the x-coordinate of the corner point opposite the missing point and the x-coordinates of the other two corner points: if both differences are less than 0, the x-coordinate of the missing point is greater than that of its opposite corner point; if both are greater than 0, it is smaller.
Step C4: supplementing the missing corner point along the y direction:
SC41: determining which y-direction difference is smallest:
if the first difference ΔY1 is smallest, the missing position lies on the side of the line through corner points (x1, y1) and (x2, y2) opposite corner point (x3, y3), facing that corner point;
if the second difference ΔY2 is smallest, the missing position lies on the side of the line through corner points (x1, y1) and (x3, y3) opposite corner point (x2, y2), facing that corner point;
if the third difference ΔY3 is smallest, the missing position lies on the side of the line through corner points (x2, y2) and (x3, y3) opposite corner point (x1, y1), facing that corner point.
SC42: then computing the differences between the y-coordinate of the corner point opposite the missing point and the y-coordinates of the other two corner points: if both differences are less than 0, the y-coordinate of the missing point is greater than that of its opposite corner point; if both are greater than 0, it is smaller.
Step C5: according to the relative position between the missing point and its opposite corner point determined in Step C3 or Step C4, solving the following optimization problem to obtain the optimal value of the coordinate P:
P = argmax ||P − P0||2
s.t. ||P − P3||2 = ||P0 − P3||2
wherein P0 represents the coordinates of the corner point p0 opposite the missing point; P1 and P2 respectively represent the coordinates of the other two corner points of the class, excluding the missing point and corner point p0; P3 represents the coordinates of the midpoint of the line connecting P1 and P2; and || · ||2 represents the Euclidean norm.
The optimal value of the coordinate P is taken as the coordinates of the missing point, completing the corner compensation.
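Under the constraint above, the point at distance ||P0 − P3|| from P3 that lies farthest from P0 is the reflection of P0 through P3, i.e. P = 2·P3 − P0. A minimal sketch of that closed form (function name invented):

```python
# Sketch of the method-two solution: the missing corner P sits at the same
# distance from P3 (midpoint of P1P2) as P0 does, on the opposite side, so it is
# the reflection of P0 through P3.
def reflect_missing(p0, p1, p2):
    p3 = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)  # midpoint of P1-P2
    return (2 * p3[0] - p0[0], 2 * p3[1] - p0[1])

# square with corners (0,0), (24,0), (0,24), (24,24); (24,24) missing, p0 = (0,0)
print(reflect_missing((0, 0), (24, 0), (0, 24)))  # (24.0, 24.0)
```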
Preferably, when multiple cooperative targets are arranged at different positions in the scene, an identification mark is provided on the backboard of each cooperative target to distinguish the different cooperative targets.
Preferably, a plurality of differently oriented triangles are arranged on the backboard of each cooperative target as identification marks; the triangles are green patterns with an RGB gray value of (0, 255, 0).
The method for recognizing the identification marks of a cooperative target is as follows:
Step E1: identifying the number of triangles;
Step E2: for each triangle obtained in Step E1, denoting the x-coordinates of its 3 vertices by x1, x2 and x3 and taking pairwise differences, namely:
ΔX1 = x1 − x2, ΔX2 = x1 − x3, ΔX3 = x2 − x3
Denoting the y-coordinates of the 3 vertices by y1, y2 and y3 and taking pairwise differences, namely:
ΔY1 = y1 − y2, ΔY2 = y1 − y3, ΔY3 = y2 − y3
Step E3: finding the minimum among the x-direction differences and the minimum among the y-direction differences, and comparing them: if the x-direction minimum is smaller, the triangle points in the positive or negative x direction, and Step E4 is performed; if the y-direction minimum is smaller, the triangle points in the positive or negative y direction, and Step E5 is performed.
Step E4: the triangle points in the positive or negative x direction:
finding the two vertices whose x-coordinates differ least, then computing the differences between the x-coordinate of the third vertex and the x-coordinates of those two vertices: if both differences are less than 0, the triangle points in the negative x direction; if both are greater than 0, the triangle points in the positive x direction.
Step E5: the triangle points in the positive or negative y direction:
finding the two vertices whose y-coordinates differ least, then computing the differences between the y-coordinate of the third vertex and the y-coordinates of those two vertices: if both differences are less than 0, the triangle points in the negative y direction; if both are greater than 0, the triangle points in the positive y direction.
Step E6: performing Steps E2 to E5 for every triangle, obtaining the orientations of all the triangles.
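Steps E2 to E5 can be sketched as a small function. The vertex coordinates and the string labels for the four directions are invented for illustration; the apex is the vertex left over after finding the closest pair on one axis:

```python
# Sketch of Steps E2-E5: deciding a triangle's pointing direction from its
# vertices. The base pair is the two vertices whose coordinates nearly coincide
# on one axis; the remaining vertex is the apex.
def triangle_direction(v1, v2, v3):
    xs, ys = [v1[0], v2[0], v3[0]], [v1[1], v2[1], v3[1]]
    dx = [abs(xs[0] - xs[1]), abs(xs[0] - xs[2]), abs(xs[1] - xs[2])]
    dy = [abs(ys[0] - ys[1]), abs(ys[0] - ys[2]), abs(ys[1] - ys[2])]
    if min(dx) < min(dy):                  # two vertices share an x: points along x
        apex = {0: 2, 1: 1, 2: 0}[dx.index(min(dx))]  # vertex not in closest pair
        return "+x" if xs[apex] > xs[(apex + 1) % 3] else "-x"
    apex = {0: 2, 1: 1, 2: 0}[dy.index(min(dy))]
    return "+y" if ys[apex] > ys[(apex + 1) % 3] else "-y"

print(triangle_direction((0, 0), (0, 10), (8, 5)))  # +x: apex right of the base
```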
Preferably, different numbers of triangles are arranged on the backboards of different cooperative targets to identify the different cooperative targets.
Preferably, the method for recognizing the number of triangles is as follows:
Step F1: the triangles are green patterns with an RGB gray value of (0, 255, 0). For the image T2 obtained in Step A5, traversing every pixel and assigning 255 to pixels whose G-channel gray value is greater than 100, greater than 1.5 times the B-channel gray value and greater than 1.5 times the R-channel gray value, and 0 otherwise; binarization yields image T3*.
Step F2: applying a morphological opening to image T3* to remove its small regions, then performing edge extraction on T3* and edge compensation, obtaining image T3**.
Step F3: performing edge tracking on image T3**; the number of tracked edges is the number of triangles in the cooperative target.
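The counting in Steps F1 to F3 can be illustrated on a toy image, with a flood fill standing in for the patent's edge tracking (an assumption; a production version would use contour tracing on the binarized image):

```python
# Toy sketch of F1-F3: binarize with the green rule (G > 100, G > 1.5*B,
# G > 1.5*R), then count connected regions; each region is one triangle mark.
def count_green_regions(image):
    mask = [[1 if (g > 100 and g > 1.5 * b and g > 1.5 * r) else 0
             for (r, g, b) in row] for row in image]
    h, w, n = len(mask), len(mask[0]), 0
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                n += 1
                stack = [(i, j)]
                while stack:              # flood-fill one region
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y][x]:
                        mask[y][x] = 0
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return n

green, black = (20, 200, 20), (0, 0, 0)
img = [[green, black, green, black, green],
       [green, black, green, black, green]]  # three separate green blobs
print(count_green_regions(img))  # 3
```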
The present invention has the following beneficial effects:
(1) The cooperative target is adaptively designed for its background, so that its geometry and the relative positions of the squares favor its detection and recognition: 1) the distance between the edges of the three squares is greater than the side length of the smallest square, which effectively improves the success rate of corner classification and ordering; 2) the cooperation marks consist of a red-pattern backboard and three yellow square patterns, which makes the two easy to distinguish; 3) the square mark in the upper middle is larger, giving the cooperation marks good directivity, while the other two square patterns flank the middle square, so that in the image processing stage the three square patterns can be effectively identified by their contour sizes; 4) the 4:3 side-length ratio of the three squares improves the detection rate.
(2) The corner compensation algorithm determines the position of a missing corner point, improving the accuracy of corner extraction; it also provides more feature points for user localization, thereby improving the positioning accuracy.
(3) Identification marks on the cooperative targets distinguish different targets, so that multiple cooperative targets can be arranged in a scene, offering the vehicle more choices for localization and more accurate positioning.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings and an embodiment.
A high-precision localization method for a vehicle in an underground scene according to the present invention, as shown in Figure 13, comprises the following steps:
Step 1: fabricating a cooperative target and arranging the cooperative target in the scene.
As shown in Figure 1, the cooperative target of the present invention is designed for easy recognition and follows a layered structure: a red-pattern backboard carrying three yellow square patterns as cooperation marks. The backboard has an RGB gray value of (255, 0, 0) and the three squares have an RGB gray value of (255, 255, 0). The three square patterns also differ in design: the square mark in the upper middle is larger, giving the cooperation marks good directivity, while the other two square patterns flank the middle square, so that in the image processing stage the three square patterns can be effectively identified by their contour sizes. At the same time, the distance between the three squares is greater than the side length of the smallest square, which facilitates corner position extraction and ordering.
The three square patterns also observe a fixed proportion. Experimental analysis shows that a size ratio of 4:3 is optimal: if the ratio is too large, the small square patterns are sometimes mistaken for noise and go undetected; if it is too small, shooting at different angles and distances distorts the patterns, the square contours change, and the large and small square patterns can no longer be distinguished.
Considering that a typical scene entrance is 2 m to 2.5 m wide and scene pillars are 75 cm to 100 cm wide, the cooperative target is designed with an 80 cm × 120 cm backboard; the large square is 32 cm × 32 cm and centered on the y axis of the cooperative target; the small squares are 24 cm × 24 cm, lie on either side of the large square, and their centers are more than 35 cm from the center of the large square.
Step 2: as shown in Figure 12, acquiring an image of the cooperative target in the scene with the user's image acquisition device, extracting the corner points of the cooperation marks in the cooperative target by known methods, and obtaining the coordinates of the corner points in the image coordinate system of the cooperative target image.
Step 3: obtaining the coordinates of the vehicle in the geographic coordinate system from the coordinates of the cooperation-mark corner points in the image coordinate system obtained in Step 2, thereby completing the localization of the vehicle in the scene.
Step 2 comprises the following sub-steps:
Step A0: controlling the user's image acquisition device to capture real-scene images of the scene in real time, as shown in Figure 2, and judging whether each frame contains the cooperative target: if so, the frame containing the cooperative target is defined as image A and Step A1 is performed; if not, Step A0 is repeated.
The method for judging whether a real-time image contains the cooperative target is:
Step D1: extracting the R, G and B channel values of each pixel in the current image;
Step D2: assigning 255 to pixels whose R-channel gray value is greater than 50, greater than 1.5 times the G-channel gray value and greater than 1.3 times the B-channel gray value, and 0 otherwise, then binarizing the current image;
Step D3: counting the pixels with gray value 255 in the binarized image; if their number exceeds 20% of all pixels in the image, the current image is deemed to contain the cooperative target; otherwise the cooperative target is deemed absent.
Step A1: extracting the R, G and B channel gray values of each pixel in image A, assigning 255 to pixels satisfying the following condition and 0 to pixels that do not:
the condition is that the R-channel gray value is greater than 50, greater than 1.5 times the G-channel gray value, and greater than 1.3 times the B-channel gray value, namely:
R > 50 and R > 1.5G and R > 1.3B (1);
traversing all pixels in image A and assigning values as above, then binarizing image A to obtain image A*, as shown in Figure 3.
Step A2: applying a morphological opening to image A* to remove small regions, then performing edge extraction on image A* and edge compensation on the extraction, obtaining image A**, as shown in Figure 4;
Step A3: detecting all edge sequences in image A** by edge tracking, finding the longest edge in image A**, and extracting, among all pixels on the longest edge, the maximum and minimum coordinates along the x and y axes: xmax, ymax, xmin and ymin;
Step A4: in image A, connecting the coordinate points (xmin, ymin), (xmin, ymax), (xmax, ymax) and (xmax, ymin) in sequence; the resulting region is the cooperative-target region, and the image within it is defined as image T1, as shown in Fig. 5(a);
Step A5: converting image A to grayscale and, in the grayscale image, connecting the coordinate points (xmin, ymin), (xmin, ymax), (xmax, ymax) and (xmax, ymin) in sequence; the resulting region is the corner-detection region, and the image within it is defined as image T2, as shown in Fig. 5(b);
Step A6: for each pixel in image T1, assigning 255 to pixels whose R-channel gray value is greater than 50, whose G-channel gray value is greater than 50 and whose G-channel gray value is greater than 1.5 times the B-channel gray value, and 0 otherwise; binarization yields image T1*;
Step A7: applying a morphological opening to image T1* to remove its small regions, then performing edge extraction on T1* and edge compensation, obtaining image T1**;
Step A8: performing edge tracking on image T1** and keeping the three longest edges as the square edges of the cooperative target, deleting the remaining edges, as shown in Figure 6;
Step A9: extracting corner points from image T2 with the Harris corner detection algorithm; a corner point lying on a square edge of the cooperative target is taken as a cooperative-target corner point, as shown in Figure 7;
Step A10: dividing the cooperative-target corner points obtained in Step A9 into 3 classes with the k-means algorithm; if any class contains fewer than 3 corner points, returning to Step A10 and continuing the classification; if a class contains exactly 3 corner points, performing corner compensation so that every class contains 4 corner points, as shown in Figure 8. For ease of distinction the corner points may be ordered, as shown in Figure 9: the class containing the corner point with the minimum x-coordinate is taken as the first class, its corner points numbered 1, 2, 3, 4 clockwise; the class containing the corner point with the minimum y-coordinate is taken as the second class, its corner points numbered 5, 6, 7, 8 clockwise; the class containing the corner point with the maximum y-coordinate is taken as the third class, its corner points numbered 9, 10, 11, 12 clockwise.
Step A11: according to the relative positions of the 12 corner points, matching the 4 corner points of each class with the vertices of the corresponding one of the 3 squares of the cooperative target, obtaining the coordinates of the cooperative-target corner points in the image coordinate system. The specific method is:
as shown in Figure 15, the corner point a with the minimum x-coordinate among the cooperative-target corner points is matched to a vertex of the third square, and the other three corner points in the same class as a are matched to the other three vertices of the third square; the corner point b with the minimum y-coordinate is matched to a vertex of the first square, and the other three corner points in the same class as b to the other three vertices of the first square; the corner point c with the maximum y-coordinate is matched to a vertex of the second square, and the other three corner points in the same class as c to the other three vertices of the second square.
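The class-to-square correspondence rule above can be sketched directly; the cluster coordinates and dictionary keys below are invented for illustration:

```python
# Sketch of the Step A11 correspondence: the class containing the minimum-x corner
# maps to the third (large) square, the class with the minimum-y corner to the
# first square, and the class with the maximum-y corner to the second square.
def label_classes(classes):
    by_min_x = min(classes, key=lambda c: min(p[0] for p in c))
    by_min_y = min(classes, key=lambda c: min(p[1] for p in c))
    by_max_y = max(classes, key=lambda c: max(p[1] for p in c))
    return {"third": by_min_x, "first": by_min_y, "second": by_max_y}

square = lambda x, y, s: [(x, y), (x + s, y), (x, y + s), (x + s, y + s)]
classes = [square(0, 100, 32), square(200, 0, 24), square(200, 200, 24)]
labels = label_classes(classes)
print(labels["third"][0], labels["first"][0], labels["second"][0])
```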
Step A12: obtaining the current position and attitude of the vehicle from the coordinates of the cooperative-target corner points in the image coordinate system obtained in Step A11.
In Step A10, when a class contains exactly 3 corner points, the present invention provides two corner compensation methods, as follows:
Corner compensation method one comprises:
Step B1: for the class in question, denoting the x-coordinates of the 3 corner points by x1, x2 and x3 and taking pairwise differences, namely:
ΔX1 = x1 − x2, ΔX2 = x1 − x3, ΔX3 = x2 − x3 (3)
Denoting the y-coordinates of the 3 corner points by y1, y2 and y3 and taking pairwise differences, namely:
ΔY1 = y1 − y2, ΔY2 = y1 − y3, ΔY3 = y2 − y3 (4)
Step B2: finding the minimum among the x-direction differences and the minimum among the y-direction differences, and comparing them: if the x-direction minimum is smaller, the class is missing a corner point along the x direction, and Step B3 is performed; if the y-direction minimum is smaller, the class is missing a corner point along the y direction, and Step B4 is performed.
Step B3: supplementing the missing corner point along the x direction, as shown in Figure 10.
SB31: first determining which x-direction difference is smallest:
if the first difference ΔX1 is smallest, the missing position lies on the side of the line through corner points (x1, y1) and (x2, y2) opposite corner point (x3, y3), facing that corner point;
if the second difference ΔX2 is smallest, the missing position lies on the side of the line through corner points (x1, y1) and (x3, y3) opposite corner point (x2, y2), facing that corner point;
if the third difference ΔX3 is smallest, the missing position lies on the side of the line through corner points (x2, y2) and (x3, y3) opposite corner point (x1, y1), facing that corner point.
SB32: then computing the differences between the x-coordinate of the corner point opposite the missing point and the x-coordinates of the other two corner points: if both differences are less than 0, the x-coordinate of the missing point is greater than that of its opposite corner point, that is, the missing point lies in the +x direction of its opposite corner point; if both are greater than 0, the x-coordinate of the missing point is smaller, that is, the missing point lies in the −x direction of its opposite corner point.
SB33, find two differences maximum in difference DELTA X1, difference DELTA X2 and difference DELTA X3, and ask the average of two differences
When the x-axis coordinate of shortcoming is less than the x-axis coordinate of self relative angle point, the x-axis coordinate of shortcoming equals the difference average that its relative angle point x-axis coordinate adds twice
When shortcoming be positioned at its relative angle point-x-axis direction, the x-axis coordinate of shortcoming equals the difference average that its relative angle point x-axis coordinate subtracts twice
The y-axis coordinate of described shortcoming equals the y-axis coordinate of angle point corresponding thereto;
Step B4: supplement the missing corner point in the y-axis direction, as shown in Figure 11.
SB41: determine which y-axis difference is the smallest:
if the first difference ΔY1 is the smallest, the missing corner point lies on the side of the line through corner points (x1, y1) and (x2, y2) opposite corner point (x3, y3), which is taken as its opposite corner point;
if the second difference ΔY2 is the smallest, the missing corner point lies on the side of the line through corner points (x1, y1) and (x3, y3) opposite corner point (x2, y2), which is taken as its opposite corner point;
if the third difference ΔY3 is the smallest, the missing corner point lies on the side of the line through corner points (x2, y2) and (x3, y3) opposite corner point (x1, y1), which is taken as its opposite corner point.
SB42: then compute the differences between the y-axis coordinate of the opposite corner point and the y-axis coordinates of the other two corner points. If both differences are less than 0, the y-axis coordinate of the missing corner point is greater than that of its opposite corner point, i.e. the missing corner point lies in the +y direction from its opposite corner point; if both differences are greater than 0, it lies in the −y direction.
SB43: find the two largest among the differences ΔY1, ΔY2 and ΔY3 and take their mean. When the missing corner point lies in the +y direction from its opposite corner point, its y-axis coordinate equals the y-axis coordinate of the opposite corner point plus twice the mean difference; when it lies in the −y direction, its y-axis coordinate equals the y-axis coordinate of the opposite corner point minus twice the mean difference. The x-axis coordinate of the missing corner point equals the x-axis coordinate of its opposite corner point.
The coordinate of the missing corner point is thus obtained, and corner-point compensation is achieved.
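As a concrete illustration, compensation method one can be sketched as follows. This is a minimal sketch assuming the corner points of one class form the diamond-like layout of Figures 10 and 11; the function name is hypothetical and absolute values of the pairwise differences are assumed in the comparisons.

```python
import numpy as np

def compensate_missing_corner(p1, p2, p3):
    # Hypothetical helper sketching compensation method one: given the three
    # detected corner points of one class as (x, y) pairs, infer the missing
    # fourth corner point per steps B1-B4.
    pts = np.asarray([p1, p2, p3], dtype=float)
    # Pairwise differences of equations (3) and (4).
    pairs = [(0, 1), (0, 2), (1, 2)]
    dx = np.array([pts[i, 0] - pts[j, 0] for i, j in pairs])
    dy = np.array([pts[i, 1] - pts[j, 1] for i, j in pairs])
    # Step B2: the axis with the smaller minimum |difference| is the axis
    # along which the corner point is missing.
    axis = 0 if np.abs(dx).min() < np.abs(dy).min() else 1
    d = dx if axis == 0 else dy
    k = int(np.argmin(np.abs(d)))            # closest pair along that axis
    rel = 3 - pairs[k][0] - pairs[k][1]      # SB31/SB41: opposite corner point
    others = [i for i in range(3) if i != rel]
    # SB32/SB42: sign test against the other two corner points picks the side.
    side = 1.0 if all(pts[rel, axis] - pts[o, axis] < 0 for o in others) else -1.0
    # SB33/SB43: offset by twice the mean of the two largest |differences|;
    # the other coordinate is copied from the opposite corner point.
    mean_big = np.mean(np.sort(np.abs(d))[-2:])
    missing = pts[rel].copy()
    missing[axis] += side * 2.0 * mean_big
    return tuple(missing)
```

For example, given the corner points (10, 0), (10, 10) and (20, 5) of a diamond with its left vertex missing, the sketch returns (0.0, 5.0).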
Corner-point compensation method two is as follows:
Step C1: for a given corner-point class, denote the x-axis coordinates of the three corner points by x1, x2 and x3, and take pairwise differences:
ΔX1 = x1 − x2, ΔX2 = x1 − x3, ΔX3 = x2 − x3 (3)
Denote the y-axis coordinates of the three corner points by y1, y2 and y3, and take pairwise differences:
ΔY1 = y1 − y2, ΔY2 = y1 − y3, ΔY3 = y2 − y3 (4)
Step C2: find the minimum of the x-axis differences and the minimum of the y-axis differences, and compare them. If the overall minimum lies among the x-axis differences, the class is missing a corner point in the x-axis direction; go to step C3. If it lies among the y-axis differences, the class is missing a corner point in the y-axis direction; go to step C4.
Step C3: supplement the missing corner point in the x-axis direction.
SC31: first determine which x-axis difference is the smallest:
if the first difference ΔX1 is the smallest, the missing corner point lies on the side of the line through corner points (x1, y1) and (x2, y2) opposite corner point (x3, y3), which is taken as its opposite corner point;
if the second difference ΔX2 is the smallest, the missing corner point lies on the side of the line through corner points (x1, y1) and (x3, y3) opposite corner point (x2, y2), which is taken as its opposite corner point;
if the third difference ΔX3 is the smallest, the missing corner point lies on the side of the line through corner points (x2, y2) and (x3, y3) opposite corner point (x1, y1), which is taken as its opposite corner point.
SC32: then compute the differences between the x-axis coordinate of the opposite corner point and the x-axis coordinates of the other two corner points. If both differences are less than 0, the missing corner point lies in the +x direction from its opposite corner point; if both differences are greater than 0, it lies in the −x direction.
Step C4: supplement the missing corner point in the y-axis direction.
SC41: determine which y-axis difference is the smallest:
if the first difference ΔY1 is the smallest, the missing corner point lies on the side of the line through corner points (x1, y1) and (x2, y2) opposite corner point (x3, y3), which is taken as its opposite corner point;
if the second difference ΔY2 is the smallest, the missing corner point lies on the side of the line through corner points (x1, y1) and (x3, y3) opposite corner point (x2, y2), which is taken as its opposite corner point;
if the third difference ΔY3 is the smallest, the missing corner point lies on the side of the line through corner points (x2, y2) and (x3, y3) opposite corner point (x1, y1), which is taken as its opposite corner point.
SC42: then compute the differences between the y-axis coordinate of the opposite corner point and the y-axis coordinates of the other two corner points. If both differences are less than 0, the missing corner point lies in the +y direction from its opposite corner point; if both differences are greater than 0, it lies in the −y direction.
Step C5: according to the relative position between the missing corner point and its opposite corner point determined in step C3 or step C4, solve for the optimal value of the coordinate P of the missing corner point subject to the constraint

||P − P3|| = ||P0 − P3||

where P0 is the corner point opposite the missing corner point, P1 and P2 are the two other corner points of the class besides the missing point P and corner point P0, and P3 is the midpoint of the line segment connecting corner points P1 and P2. Together with the side determined in step C3 or step C4, this constraint places P symmetrically to P0 about P3.
Take the optimal value of the coordinate P as the coordinate of the missing corner point; corner-point compensation is thus achieved.
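Under the reflection interpretation of step C5, the constrained optimum has a closed form; a minimal sketch follows (the function name is hypothetical):

```python
import numpy as np

def compensate_by_reflection(p0, p1, p2):
    # Hypothetical helper sketching compensation method two: P0 is the corner
    # point opposite the missing one, P1 and P2 the other two corner points.
    # The constraint ||P - P3|| = ||P0 - P3||, with P3 the midpoint of P1P2
    # and P on the far side of P3, is satisfied by reflecting P0 through P3.
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    p3 = 0.5 * (p1 + p2)          # midpoint of segment P1-P2
    return tuple(2.0 * p3 - p0)   # reflection of P0 about P3
```

With P0 = (20, 5), P1 = (10, 0) and P2 = (10, 10), the midpoint is (10, 5) and the recovered corner point is (0.0, 5.0).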
In addition, when multiple cooperative targets are arranged at different locations in a scene, the vehicle cannot automatically identify which cooperative target it has captured, and therefore cannot complete automatic localization. To solve this problem, the invention arranges a different number of triangles on the backboard of each cooperative target, or arranges multiple triangles with different orientations on each backboard, as the mark distinguishing the cooperative targets. As shown in Fig. 1, the triangle in the upper-left corner of the backboard is a green pattern with RGB color gray-scale value (0, 255, 0).
When different numbers of triangles are arranged on the cooperative targets as the cooperative target mark, the marks are recognized as follows:
Step F1: for the corner detection region T2 of step A5, traverse every pixel and binarize: assign 255 to each pixel whose G-channel gray-scale value is greater than 100, greater than 1.5 times its B-channel gray-scale value and greater than 1.5 times its R-channel gray-scale value, and assign 0 otherwise, obtaining image T3*.
Step F2: apply a morphological opening to image T3* to remove tiny regions in T3*, then perform edge extraction and edge compensation on T3*, obtaining image T3**.
Step F3: perform edge tracking on image T3**; the number of tracked edges is the number of triangles in the cooperative target.
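Steps F1 and F3 can be sketched as follows, assuming an H×W×3 NumPy RGB image. In this sketch the morphological opening of step F2 is replaced by a simple area threshold, connected-component counting stands in for edge tracking, and the function name is hypothetical.

```python
import numpy as np
from collections import deque

def count_green_marks(rgb):
    # Step F1 binarization rule: G > 100, G > 1.5*B and G > 1.5*R.
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    mask = (g > 100) & (g > 1.5 * b) & (g > 1.5 * r)
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # Flood-fill one 4-connected foreground component.
                area, q = 0, deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if area >= 4:   # assumed noise threshold, in lieu of step F2
                    count += 1
    return count
```

On a black test image containing two green patches, the count is 2, and a single-pixel green speck is discarded by the area threshold.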
When multiple triangles with different orientations are arranged on the cooperative target as the cooperative target mark, the marks are recognized as follows:
Step E1: execute steps F1 to F3 to identify the number of triangles.
Step E2: for each triangle obtained in step E1, denote the x-axis coordinates of its three vertices by x1, x2 and x3, and take pairwise differences:
ΔX1 = x1 − x2, ΔX2 = x1 − x3, ΔX3 = x2 − x3
Denote the y-axis coordinates of the three vertices by y1, y2 and y3, and take pairwise differences:
ΔY1 = y1 − y2, ΔY2 = y1 − y3, ΔY3 = y2 − y3
Step E3: find the minimum of the x-axis differences and the minimum of the y-axis differences, and compare them. If the overall minimum lies among the x-axis differences, the triangle points in the positive or negative x direction; go to step E4. If it lies among the y-axis differences, the triangle points in the positive or negative y direction; go to step E5.
Step E4: the triangle points in the positive or negative x direction.
Find the two vertices whose x-axis difference is the smallest, then examine the x-coordinate differences between the third vertex and these two vertices: if both differences are less than 0, the triangle points in the negative x direction; if both are greater than 0, it points in the positive x direction.
Step E5: the triangle points in the positive or negative y direction.
Find the two vertices whose y-axis difference is the smallest, then examine the y-coordinate differences between the third vertex and these two vertices: if both differences are less than 0, the triangle points in the negative y direction; if both are greater than 0, it points in the positive y direction.
Step E6: apply steps E2 to E5 to every triangle to obtain the orientations of all the triangles.
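Steps E2 to E5 can be sketched as follows; vertices are assumed to be given as (x, y) pairs in image coordinates, and the helper name is hypothetical.

```python
import numpy as np

def triangle_orientation(v1, v2, v3):
    # Hypothetical helper: classify which axis direction a marker triangle
    # points toward from its three vertices, per steps E2-E5.
    pts = np.asarray([v1, v2, v3], dtype=float)
    pairs = [(0, 1), (0, 2), (1, 2)]
    dx = np.array([pts[i, 0] - pts[j, 0] for i, j in pairs])
    dy = np.array([pts[i, 1] - pts[j, 1] for i, j in pairs])
    # Step E3: the axis with the smaller minimum |difference| carries the apex.
    if np.abs(dx).min() < np.abs(dy).min():
        axis, d = 0, dx        # step E4: points along +x or -x
    else:
        axis, d = 1, dy        # step E5: points along +y or -y
    k = int(np.argmin(np.abs(d)))
    apex = 3 - pairs[k][0] - pairs[k][1]   # the vertex not in the closest pair
    base = [i for i in range(3) if i != apex]
    # Sign of the apex-minus-base differences gives the pointing direction.
    sign = '+' if all(pts[apex, axis] - pts[b, axis] > 0 for b in base) else '-'
    return sign + ('x' if axis == 0 else 'y')
```

For instance, a triangle with base vertices (0, 0) and (0, 10) and apex (10, 5) is classified as pointing in the positive x direction.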
In step 3, as shown in Figure 14, after the coordinates of the cooperation mark corner points in the cooperative target image coordinate system are obtained, the conversion relation between the cooperative target coordinate system and the cooperative target image coordinate system can be obtained from the known (pre-calibrated) coordinates of the mark corner points in the cooperative target; then, from the known conversion between the image coordinate system and the camera coordinate system, the conversion relation between the camera coordinate system and the cooperative target coordinate system is obtained, followed by the relation between the camera coordinate system and the vehicle coordinate system; finally, the position of the vehicle in the geographic coordinate system is determined from the position of the cooperative target coordinate system in the geographic coordinate system. The concrete coordinate conversion process is as follows:
First, each of the above coordinate systems is defined:
1. Cooperative target coordinate system Ox_g y_g z_g
A coordinate system fixed relative to the earth's surface. Its origin is taken at a vertex of the large square pattern of the cooperative target; the Ox_g axis lies in the horizontal plane, parallel to the short side of the cooperative target backboard and pointing forward; the Oy_g axis lies in the horizontal plane, parallel to the long side of the backboard and pointing right; the Oz_g axis is perpendicular to the Ox_g y_g plane, oriented so that the coordinate system obeys the right-hand rule.
2. User coordinate system Oxyz
A coordinate system fixed to the user's body, with origin O at the user's center of gravity. The Ox axis coincides with the user's longitudinal axis, pointing to the user's front, and lies in the user's plane of symmetry; the Oz axis points to the right; the Oy axis lies in the plane of symmetry, perpendicular to the Ox axis, and points downward.
3. Camera (user image acquisition device) coordinate system
The origin of the camera coordinate system is the camera's optical center; the Oz_c axis coincides with the camera's optical axis, with the shooting direction taken as positive; the Ox_c and Oy_c axes are parallel to the X and Y axes of the image physical coordinate system.
4. Image coordinate systems (u, v) and (X, Y)
The image coordinate system is divided into the image pixel coordinate system (u, v) and the image physical coordinate system (X, Y), defined as follows:
(i) Image pixel coordinate system (u, v): a rectangular coordinate system with the upper-left corner of the image as origin and the pixel as coordinate unit; u and v are respectively the column and row numbers of a pixel in the digital image.
(ii) Image physical coordinate system (X, Y): a rectangular coordinate system in millimeters whose origin is the intersection of the optical axis with the image plane; its X and Y axes are parallel to the u and v axes of the pixel coordinate system, respectively.
The conversion relations between the coordinate systems are as follows:
1) Relation between the vehicle coordinate system and the cooperative target coordinate system
Because the camera and the vehicle are rigidly connected, the angles between the vehicle coordinate system and the cooperative target coordinate system are exactly the installation attitude angles of the camera, also called the Euler angles:
(a) pitch angle β: the angle between the axis Oy and the plane Ox_g y_g;
(b) yaw angle (azimuth angle) γ: the angle between the projection of the axis Oy onto the horizontal plane Ox_g y_g and the longitudinal axis Oy_g of the cooperative target;
(c) roll angle (bank angle) α: the angle through which the plane of symmetry rotates about the axis Oy, with a roll to the right taken as positive.
2) Conversion relation between the cooperative target coordinate system and the camera coordinate system
The conversion of a point from the cooperative target coordinate system to the camera coordinate system can be expressed with an orthogonal rotation matrix R and a translation matrix T as:

[x_c, y_c, z_c]^T = R [x_g, y_g, z_g]^T + T (6)

where T = [t_x, t_y, t_z]^T is the coordinate of the cooperative target coordinate origin in the camera coordinate system.
3) Conversion relation between the image coordinate system and the camera coordinate system
The image physical coordinates of the image point P of an object point p = (x_c, y_c, z_c) in the camera coordinate system are:

X = f x_c / z_c (7)
Y = f y_c / z_c (8)

The image physical coordinates of the above formulas are further converted to image pixel coordinates:

u = −X/d_x + u_0 = −f_x X + u_0 = −f_x f x_c / z_c + u_0 (9)
v = Y/d_y + v_0 = f_y Y + v_0 = f_y f y_c / z_c + v_0 (10)

where f is the camera focal length, (u_0, v_0) is the coordinate of the image physical coordinate system origin in the image pixel coordinate system, and f_x, f_y are respectively the sampling frequencies in the X and Y directions, i.e. the number of pixels per unit length. The four parameters f_x, f_y, u_0 and v_0 depend only on the camera's internal structure.
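As an illustration of equations (7) to (10), the following sketch computes the pixel coordinates of a camera-frame point under the parameter conventions defined above (the function name is hypothetical):

```python
import numpy as np

def project_to_pixels(pc, f, fx, fy, u0, v0):
    # pc = (xc, yc, zc) in the camera frame; f is the focal length; fx, fy are
    # the sampling frequencies (pixels per unit length); (u0, v0) is the
    # principal point. Note the sign convention of eq. (9): the u axis runs
    # opposite to the image-physical X axis.
    xc, yc, zc = pc
    X = f * xc / zc               # eq. (7)
    Y = f * yc / zc               # eq. (8)
    u = -fx * X + u0              # eq. (9)
    v = fy * Y + v0               # eq. (10)
    return u, v
```

For example, with f = 2, f_x = f_y = 10 and principal point (320, 240), the point (1, 2, 4) projects to (315.0, 250.0).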
4) Conversion relation between the cooperative target coordinate system and the image coordinate system
Substituting formula (6) into formulas (9) and (10) yields the conversion relation between the cooperative target coordinate system and the image coordinate system:

u = −f_x f (r_11 x_g + r_12 y_g + r_13 z_g + t_x) / (r_31 x_g + r_32 y_g + r_33 z_g + t_z) + u_0 (11)
v = f_y f (r_21 x_g + r_22 y_g + r_23 z_g + t_y) / (r_31 x_g + r_32 y_g + r_33 z_g + t_z) + v_0 (12)

These two formulas are also called the collinearity equations, and require that the object point, the optical center and the image point lie on the same line. They express the mathematical relation between the coordinates (x_g, y_g, z_g) of a spatial target point, the optical center coordinate T, the optical axis rotation R, and the corresponding image point (u, v). According to the collinearity equations, once the camera's intrinsic parameters are determined, the spatial position and attitude of the camera in the cooperative target coordinate system can be solved from several known object points and their corresponding image point coordinates.
Suppose the coordinate of the i-th corner point of the cooperative target in the cooperative target coordinate system is q_i = [x_gi, y_gi, z_gi, 1]^T, and its coordinate in the pixel coordinate system is p_i = [u_i, v_i, 1]^T. The relation between the two is expressed as:

λ_i p_i = A [R T] q_i (13)

where the intrinsic matrix is

A = [ f_x  0    u_0 ]
    [ 0    f_y  v_0 ]
    [ 0    0    1   ]

f_x and f_y denote the equivalent focal lengths of the camera, i.e. f_x = f/d_x and f_y = f/d_y, where f is the camera focal length, and (u_0, v_0) is the coordinate of the camera coordinate system origin in the pixel coordinate system. The rotation matrix is expressed as

R = [ r_11  r_12  r_13 ]
    [ r_21  r_22  r_23 ]
    [ r_31  r_32  r_33 ]

T = [t_x, t_y, t_z]^T is the translation matrix, and λ_i is an unknown scale factor. Combining these relations over all corner points yields a system of equations in R, T and the λ_i.
From the designed cooperative target pattern, each q_i is a known quantity. In each image frame, the corner detection algorithm extracts the feature points p_i, and the feature point calibration algorithm associates each p_i with its corresponding q_i. From these data the pose parameters of the camera can be obtained, where R is the rotation from the camera to the cooperative target and T is the translation from the camera to the cooperative target.
Because the cooperative target lies in the ground plane, z_gi = 0 for every corner point, so q_i reduces to [x_gi, y_gi, 1]^T and formula (13) becomes

λ_i p_i = A [r_1 r_2 T] [x_gi, y_gi, 1]^T

where r_1 and r_2 are the first two columns of R. Eliminating the unknown scales λ_i over all corner points gives a homogeneous linear system, written briefly as F [r_1^T r_2^T T^T]^T = 0, where F is a coefficient matrix built from the known q_i and p_i and N is the number of corner points.
The singular vector corresponding to the smallest singular value of F is computed by the standard singular value decomposition (SVD) technique, which yields estimates of r_1, r_2 and T up to a common scale; the size of the translation vector T is fixed by normalizing r_1 and r_2 to unit norm. The rotation matrix R is then solved in two steps from the estimates of r_1 and r_2: first, compute the SVD of [r_1 r_2 r_1×r_2] = U S V^T; then, if det(U V^T) = 1, set R = U V^T, and if det(U V^T) = −1, set R = U Q V^T with Q = diag(1, 1, −1).
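The procedure just described amounts, in standard terms, to a planar pose (homography) estimation followed by orthonormalization. The following sketch works under assumed conventions: the conventional positive sign u = f_x X + u_0 is used rather than the negated form of equation (9), positive depth t_z > 0 is assumed, and the function name is hypothetical.

```python
import numpy as np

def pose_from_planar_points(obj_xy, pix_uv, A):
    # Solve F h = 0 for h = [r1; r2; T] from N >= 4 coplanar target points
    # (z_g = 0) and their pixel coordinates, then orthonormalize [r1 r2].
    Ainv = np.linalg.inv(A)
    rows = []
    for (x, y), (u, v) in zip(obj_xy, pix_uv):
        # Normalized image coordinates remove the intrinsics.
        xn, yn, _ = Ainv @ np.array([u, v, 1.0])
        rows.append([x, y, 1, 0, 0, 0, -xn * x, -xn * y, -xn])
        rows.append([0, 0, 0, x, y, 1, -yn * x, -yn * y, -yn])
    F = np.asarray(rows)
    # Singular vector of the smallest singular value (the SVD step above).
    h = np.linalg.svd(F)[2][-1]
    H = h.reshape(3, 3)           # columns of H are r1, r2, T up to scale
    # Fix scale with ||r1|| = 1 and the overall sign with positive depth.
    s = 1.0 / np.linalg.norm(H[:, 0])
    if H[2, 2] < 0:
        s = -s
    r1, r2, T = s * H[:, 0], s * H[:, 1], s * H[:, 2]
    # Nearest rotation matrix to [r1 r2 r1xr2] via SVD (the two-step estimate).
    U, _, Vt = np.linalg.svd(np.column_stack([r1, r2, np.cross(r1, r2)]))
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R, T
```

On noiseless synthetic correspondences generated with R = I and T = (0, 0, 5), the sketch recovers the pose to machine precision.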
The localization parameter of the vehicle relative to the cooperative target is finally obtained as:

H = [x_g, y_g, z_g]^T = −R^{−1} T (19)
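Equation (19) can be illustrated directly (hypothetical helper; R^{−1} = R^T is used since R is an orthogonal rotation matrix):

```python
import numpy as np

def vehicle_position(R, T):
    # Equation (19): the vehicle (camera) location in the cooperative target
    # frame is H = -R^{-1} T; for a rotation matrix, R^{-1} = R^T.
    R = np.asarray(R, dtype=float)
    T = np.asarray(T, dtype=float).reshape(3)
    return -R.T @ T
```

For example, with R the identity and T = (0, 0, 5), the vehicle sits at (0, 0, −5) in the cooperative target frame.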
The attitude parameters, namely the Euler angles (α, β, γ), are recovered from the rotation matrix R.
In summary, the above are only preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.