CN104504675A — Active vision positioning method

Publication number: CN104504675A (granted as CN104504675B)
Application number: CN201410608792.0A
Original language: Chinese (zh)
Inventors: 公续平, 魏东岩, 来奇峰, 张晓光, 陈夏兰, 李祥红, 徐颖, 袁洪
Assignee: Academy of Opto Electronics of CAS
Legal status: Granted; active

Classification: G06T7/13 — Image analysis; Segmentation; Edge detection (G — Physics; G06 — Computing, calculating or counting; G06T — Image data processing or generation)

Abstract

The invention discloses an active vision positioning method. The cooperative target is designed so that its shape, size, and the relative positions of its squares favor detection and identification: (1) the spacing between the three squares is greater than the side length of each square, which effectively improves the success rate of corner-point classification and ordering; (2) the target combines a red backboard with three yellow squares as cooperative markers, making the backboard and the markers easy to distinguish; (3) the square at the upper middle is larger, giving the marker pattern a clear orientation, while the other two squares sit on either side of it, so the three squares can be told apart by contour size during image processing; and (4) the side-length ratio of the large square to the small squares is 4:3, which improves the detection rate. A corner-compensation algorithm determines the position of any missed corner point, improving corner-detection accuracy; it also provides more feature points for the user's positioning, which in turn improves positioning precision.

Description

Active vision positioning method
Technical field
The present invention relates to the field of visual navigation, and in particular to an active vision positioning method.
Background technology
As users' demands on navigation and positioning grow, traditional GPS falls short in areas where satellite signals are blocked: there, positioning error exceeds 30 meters, and the system cannot effectively report the user's current position or the corresponding navigation information.
The main idea of positioning with a vision cooperative target is as follows: first, design a cooperative target; then acquire an image containing the target with the user's vision sensor and, exploiting the designed characteristics of the target, apply image-processing techniques to extract its feature information, namely the corner-point coordinates; finally, use the transformation relations among the user coordinate system, the camera coordinate system, the pixel coordinate system, and the corner coordinates of the cooperative target to compute the position and attitude the user requires for navigation.
In 2013, a group at Northwestern Polytechnical University designed a cooperative target composed of three squares, which exploits image color information to assist corner detection on the target. This target has definite advantages for detection, but during corner extraction it optimizes corners using only corner information related to the target, without considering the relations between corners and edges or between corners and color regions, so problems such as corner-detection failure, image-segmentation errors, and corner mis-ordering readily occur.
Summary of the invention
In view of this, the invention provides an active vision positioning method that achieves high-precision positioning of a user within a scene where GPS is unavailable.
To solve the above technical problem, the present invention is realized as follows:
The active vision positioning method of the present invention comprises the following steps:
Step 1: make a cooperative target and arrange it in the scene.
The cooperative target uses a red pattern as the backboard, on which three yellow squares are arranged as cooperative markers. The backboard has RGB gray value (255, 0, 0), and the three squares have RGB gray value (255, 255, 0). The first and second squares are symmetric about one diagonal of the third square, which serves as the axis of symmetry. The third square is larger than the first and second squares; the ratio of its side length to theirs is 4:3. The minimum distance between the borders of the three squares is greater than the side length of the smallest square.
Step 2: acquire an image of the cooperative target in the scene with the user's image-acquisition device, extract the corner points of the cooperative markers, and obtain the corner-point coordinates in the image coordinate system of the cooperative-target image.
Step 3: from the marker corner coordinates in the cooperative-target coordinate system and the image-coordinate-system coordinates obtained in Step 2, compute the transformation between the cooperative-target coordinate system and the image coordinate system. Then, using the known transformation between the image coordinate system and the acquisition-device coordinate system, obtain the transformation between the acquisition-device coordinate system and the cooperative-target coordinate system. Finally, using the transformation between the acquisition-device coordinate system and the vehicle coordinate system, and between the cooperative-target coordinate system and the geographic coordinate system, obtain the vehicle's coordinates and attitude in the geographic coordinate system, thereby completing the positioning of the vehicle in the scene.
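The patent gives no code for the Step 3 transformation. Since the cooperative target is planar, the mapping from target-plane coordinates to image coordinates is a homography; as an illustrative sketch only (the function names, ground-truth matrix, and point values below are my own, not the patent's), the standard direct linear transform (DLT) recovers it from four corner correspondences:

```python
import numpy as np

def estimate_homography(target_pts, image_pts):
    """Direct Linear Transform: solve A h = 0 for the 3x3 homography
    mapping target-plane coordinates to image coordinates."""
    A = []
    for (x, y), (u, v) in zip(target_pts, image_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def project(H, pt):
    """Apply a homography to a 2-D point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Corners of the large square in target coordinates (cm) and their
# synthetic pixel positions under a known ground-truth homography.
H_true = np.array([[1.2, 0.1, 30.0],
                   [-0.05, 1.1, 40.0],
                   [1e-4, 2e-4, 1.0]])
target = [(0, 0), (32, 0), (32, 32), (0, 32)]
image = [project(H_true, p) for p in target]

H_est = estimate_homography(target, image)
```

In practice all 12 corners would be stacked into the same system, and the device pose would then be factored out of the homography using the camera intrinsics; the sketch stops at the planar transformation itself.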
Preferably, the backboard measures 80 cm × 120 cm, the side of the third square is 32 cm, the sides of the first and second squares are 24 cm, and the centers of the first and second squares are each more than 35 cm from the center of the third square. The axis of symmetry of the third square coincides with the centerline of the backboard in the width direction.
Preferably, Step 2 comprises the following concrete steps:
Step A0: control the user's image-acquisition device to capture live images of the scene in real time, and judge whether each frame contains the cooperative target. If so, define the image containing the target as image A and go to Step A1; if not, repeat Step A0.
Step A1: extract the R, G, B channel gray values of each pixel in image A; assign 255 to every pixel meeting the condition below and 0 to every pixel that does not, then binarize image A to obtain image A*.
Condition: for each pixel, the R gray value is greater than 50, greater than 1.5 times the G gray value, and greater than 1.3 times the B gray value.
Step A2: apply a morphological opening to image A* to remove small regions, then perform edge extraction and edge compensation on A* to obtain image A**.
Step A3: detect all edge sequences in image A** by edge tracking, find the longest edge, and over all its pixels extract the maxima and minima of the x and y coordinates: xmax, ymax and xmin, ymin.
Step A4: in image A, connect the points (xmin, ymin), (xmin, ymax), (xmax, ymax) and (xmax, ymin) in turn; the enclosed region is the cooperative-target region, whose image is defined as image T1.
Step A5: convert image A to grayscale, then connect the same four points in the grayscale image; the enclosed region is the corner-detection region, whose image is defined as image T2.
Step A6: for each pixel of image T1, assign 255 if the R gray value is greater than 50, the G gray value is greater than 50, and the G value is greater than 1.5 times the B value, and 0 otherwise; binarize to obtain image T1*.
Step A7: apply a morphological opening to image T1* to remove its small regions, then perform edge extraction and edge compensation to obtain image T1**.
Step A8: perform edge tracking on image T1**, take the three longest edges in T1** as the square edges of the cooperative target, and delete the remaining edges.
Step A9: extract corner points from image T2 with the Harris corner-detection algorithm; any extracted corner lying on a square edge of the cooperative target is kept as a cooperative-target corner point.
Step A10: cluster the corner points obtained in A9 into 3 classes with the k-means algorithm. If any class has fewer than 3 corners, return to Step A10 and continue clustering; if a class has exactly 3 corners, apply corner compensation so that every class has 4 corners.
Step A11: using the relative positions of the 12 corner points, match the 4 corners of each class to the vertices of the corresponding one of the 3 squares of the cooperative target, obtaining the coordinates of the target's corner points in the image coordinate system.
Preferably, the concrete method in Step A0 for judging whether a live image contains the cooperative target is:
Step D1: extract the R, G, B channel values of each pixel of the current image.
Step D2: assign 255 to every pixel whose R gray value is greater than 50, greater than 1.5 times the G gray value, and greater than 1.3 times the B gray value, and 0 otherwise; then binarize the current image.
Step D3: count the pixels with gray value 255 in the binarized image; if they exceed 20% of all pixels in the image, the current image is judged to contain the cooperative target; otherwise it is judged not to.
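Steps D1–D3 reduce to a per-pixel red-dominance test plus a ratio threshold. A minimal sketch (the function name, the pixel layout as a list of rows of (R, G, B) tuples, and the test frame are my own assumptions):

```python
def contains_target(pixels):
    """Steps D1-D3: binarize by the red-dominance rule and test whether
    more than 20% of the pixels pass."""
    total = red = 0
    for row in pixels:
        for r, g, b in row:
            total += 1
            # the Step D2 condition: R > 50, R > 1.5 G, R > 1.3 B
            if r > 50 and r > 1.5 * g and r > 1.3 * b:
                red += 1
    return red > 0.2 * total

# A 4x4 frame with 6 red pixels out of 16 (37.5%) -> target judged present.
RED, GREY = (200, 30, 30), (90, 90, 90)
frame = [[RED] * 3 + [GREY], [RED] * 3 + [GREY], [GREY] * 4, [GREY] * 4]
```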
Preferably, in Step A10, when a class has exactly 3 corner points, the corner compensation method is as follows:
Step B1: for the class in question, denote the x coordinates of the 3 corner points by x1, x2 and x3, and take their pairwise differences, i.e.:
ΔX1 = x1 − x2, ΔX2 = x1 − x3, ΔX3 = x2 − x3 (3)
Denote the y coordinates by y1, y2 and y3, and take their pairwise differences, i.e.:
ΔY1 = y1 − y2, ΔY2 = y1 − y3, ΔY3 = y2 − y3 (4)
Step B2: find the minimum of the x-direction differences and of the y-direction differences, and compare them. If the x-direction difference is the smaller, the class is missing a corner point in the x direction; go to Step B3. If the y-direction difference is the smaller, the class is missing a corner point in the y direction; go to Step B4.
Step B3: supplement the corner point missing in the x direction.
SB31: first determine which x-direction difference is smallest:
If the first difference ΔX1 is smallest, the missing corner lies across the line through corner points (x1, y1) and (x2, y2), in the position opposite corner point (x3, y3);
If the second difference ΔX2 is smallest, the missing corner lies across the line through corner points (x1, y1) and (x3, y3), in the position opposite corner point (x2, y2);
If the third difference ΔX3 is smallest, the missing corner lies across the line through corner points (x2, y2) and (x3, y3), in the position opposite corner point (x1, y1).
SB32: compute the differences between the x coordinate of the corner point opposite the missing one and the x coordinates of the other two corner points. If both differences are less than 0, the x coordinate of the missing corner is greater than that of its opposite corner; if both are greater than 0, it is smaller.
SB33: find the two largest of the differences ΔX1, ΔX2 and ΔX3, and take their average.
When the x coordinate of the missing corner is greater than that of its opposite corner, it equals the opposite corner's x coordinate plus twice the average difference;
when it is smaller, it equals the opposite corner's x coordinate minus twice the average difference.
The y coordinate of the missing corner equals the y coordinate of its opposite corner.
Step B4: supplement the corner point missing in the y direction.
SB41: determine which y-direction difference is smallest:
If the first difference ΔY1 is smallest, the missing corner lies across the line through corner points (x1, y1) and (x2, y2), in the position opposite corner point (x3, y3);
If the second difference ΔY2 is smallest, the missing corner lies across the line through corner points (x1, y1) and (x3, y3), in the position opposite corner point (x2, y2);
If the third difference ΔY3 is smallest, the missing corner lies across the line through corner points (x2, y2) and (x3, y3), in the position opposite corner point (x1, y1).
SB42: compute the differences between the y coordinate of the corner point opposite the missing one and the y coordinates of the other two corner points. If both differences are less than 0, the y coordinate of the missing corner is greater than that of its opposite corner; if both are greater than 0, it is smaller.
SB43: find the two largest of the differences ΔY1, ΔY2 and ΔY3, and take their average.
When the missing corner lies in the +y direction of its opposite corner, its y coordinate equals the opposite corner's y coordinate plus twice the average difference;
when it lies in the −y direction, its y coordinate equals the opposite corner's y coordinate minus twice the average difference.
The x coordinate of the missing corner equals the x coordinate of its opposite corner.
The coordinates of the missing corner point are thus obtained, completing the corner compensation.
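The averaging formulas in SB33/SB43 reference symbols that were lost in translation, so the sketch below implements only the geometric effect the steps describe, under the assumption of a near-axis-aligned square: the pair of corners with the smallest coordinate difference shares an edge, and the fourth corner is completed as a parallelogram. All names and test values are my own:

```python
def compensate(corners):
    """Given 3 detected corners of a near-axis-aligned square, return the
    missing 4th (simplified reading of Steps B1-B4)."""
    (x1, y1), (x2, y2), (x3, y3) = corners
    dx = [abs(x1 - x2), abs(x1 - x3), abs(x2 - x3)]   # eq. (3), absolute
    dy = [abs(y1 - y2), abs(y1 - y3), abs(y2 - y3)]   # eq. (4), absolute
    third = {0: 2, 1: 1, 2: 0}  # min-difference pair index -> lone corner index
    diffs = dx if min(dx) <= min(dy) else dy          # Step B2 decision
    k = third[diffs.index(min(diffs))]
    lone = corners[k]                                 # corner adjacent to the gap
    pair = [p for i, p in enumerate(corners) if i != k]
    # the pair corner nearer the lone corner is diagonal to the missing one,
    # so complete the parallelogram: missing = far + lone - near
    near = min(pair, key=lambda p: (p[0] - lone[0])**2 + (p[1] - lone[1])**2)
    far = pair[1] if near == pair[0] else pair[0]
    return (far[0] + lone[0] - near[0], far[1] + lone[1] - near[1])
```

For example, a square with corners (0, 0), (8, 0), (0, 8) and (8, 8) where detection misses (8, 8): `compensate([(0, 0), (8, 0), (0, 8)])` returns the missing corner.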
Preferably, in Step A10, when a class has exactly 3 corner points, the corner compensation may alternatively proceed as follows:
Step C1: for the class in question, denote the x coordinates of the 3 corner points by x1, x2 and x3, and take their pairwise differences, i.e.:
ΔX1 = x1 − x2, ΔX2 = x1 − x3, ΔX3 = x2 − x3 (3)
Denote the y coordinates by y1, y2 and y3, and take their pairwise differences, i.e.:
ΔY1 = y1 − y2, ΔY2 = y1 − y3, ΔY3 = y2 − y3 (4)
Step C2: find the minimum of the x-direction differences and of the y-direction differences, and compare them. If the x-direction difference is the smaller, the class is missing a corner point in the x direction; go to Step C3. If the y-direction difference is the smaller, the class is missing a corner point in the y direction; go to Step C4.
Step C3: supplement the corner point missing in the x direction.
SC31: first determine which x-direction difference is smallest:
If the first difference ΔX1 is smallest, the missing corner lies across the line through corner points (x1, y1) and (x2, y2), in the position opposite corner point (x3, y3);
If the second difference ΔX2 is smallest, the missing corner lies across the line through corner points (x1, y1) and (x3, y3), in the position opposite corner point (x2, y2);
If the third difference ΔX3 is smallest, the missing corner lies across the line through corner points (x2, y2) and (x3, y3), in the position opposite corner point (x1, y1).
SC32: compute the differences between the x coordinate of the corner point opposite the missing one and the x coordinates of the other two corner points. If both differences are less than 0, the x coordinate of the missing corner is greater than that of its opposite corner; if both are greater than 0, it is smaller.
Step C4: supplement the corner point missing in the y direction.
SC41: determine which y-direction difference is smallest:
If the first difference ΔY1 is smallest, the missing corner lies across the line through corner points (x1, y1) and (x2, y2), in the position opposite corner point (x3, y3);
If the second difference ΔY2 is smallest, the missing corner lies across the line through corner points (x1, y1) and (x3, y3), in the position opposite corner point (x2, y2);
If the third difference ΔY3 is smallest, the missing corner lies across the line through corner points (x2, y2) and (x3, y3), in the position opposite corner point (x1, y1).
SC42: compute the differences between the y coordinate of the corner point opposite the missing one and the y coordinates of the other two corner points. If both differences are less than 0, the y coordinate of the missing corner is greater than that of its opposite corner; if both are greater than 0, it is smaller.
Step C5: according to the relative position of the missing corner and its opposite corner determined in Step C3 or Step C4, solve the following problem for the optimal coordinate P:

argmin_P Σ_{i,j ∈ {1,2}, i ≠ j} ‖(P − Pi) · (Pj − P0)‖²_F
subject to ‖P − P3‖₂ = ‖P0 − P3‖₂

where P0 is the coordinate of the corner point opposite the missing one, P1 and P2 are the coordinates of the other two corner points of the class, and P3 is the coordinate of the midpoint of the line joining P1 and P2; coordinates take positive integer (pixel) values.
Take the optimal value of P as the coordinate of the missing corner point, completing the corner compensation.
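For a perfect square, the Step C5 distance constraint has a closed-form candidate that needs no numerical solver: reflecting P0 through the midpoint P3 satisfies ‖P − P3‖ = ‖P0 − P3‖ by construction and coincides with the parallelogram completion P1 + P2 − P0 of the three known corners. A sketch under that assumption (not the patent's solver; names and values are mine):

```python
import math

def reflect_completion(p0, p1, p2):
    """Complete the 4th corner of a square given p0 (the corner diagonal
    to the missing one) and p1, p2 (the two adjacent corners): reflect
    p0 through the midpoint p3 of the segment p1-p2."""
    p3 = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    return (2 * p3[0] - p0[0], 2 * p3[1] - p0[1])

# Square (0,0), (8,0), (0,8), (8,8) with (8,8) missing:
p0, p1, p2 = (0, 0), (8, 0), (0, 8)
P = reflect_completion(p0, p1, p2)
```

With image noise the three detected corners no longer form an exact right angle, which is when minimizing the Step C5 objective over the constraint circle, rather than taking the reflection directly, earns its keep.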
Preferably, when multiple cooperative targets are arranged at different locations in the scene, a marking is set on the backboard of each target to distinguish the targets from one another.
Preferably, the marking on each backboard consists of several triangles of differing orientation; the triangles are green, with RGB gray value (0, 255, 0).
The markings of a cooperative target are recognized as follows:
Step E1: identify the number of triangles.
Step E2: for each triangle found in Step E1, denote the x coordinates of its 3 vertices by x1, x2 and x3, and take their pairwise differences, i.e.:
ΔX1 = x1 − x2, ΔX2 = x1 − x3, ΔX3 = x2 − x3
Likewise denote the y coordinates by y1, y2 and y3, and take their pairwise differences, i.e.:
ΔY1 = y1 − y2, ΔY2 = y1 − y3, ΔY3 = y2 − y3
Step E3: find the minimum of the x-direction differences and of the y-direction differences, and compare them. If the x-direction difference is the smaller, the triangle points in the positive or negative x direction; go to Step E4. If the y-direction difference is the smaller, the triangle points in the positive or negative y direction; go to Step E5.
Step E4: the triangle points in the positive or negative x direction.
Find the two vertices with the smallest x difference, then compare the x coordinate of the third vertex with theirs: if both differences are less than 0, the triangle points in the negative x direction; if both are greater than 0, it points in the positive x direction.
Step E5: the triangle points in the positive or negative y direction.
Find the two vertices with the smallest y difference, then compare the y coordinate of the third vertex with theirs: if both differences are less than 0, the triangle points in the negative y direction; if both are greater than 0, it points in the positive y direction.
Step E6: apply Steps E2 to E5 to every triangle, obtaining the orientations of all triangles.
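Steps E2–E5 amount to finding the triangle's base (the vertex pair with the smallest coordinate difference) and checking which side the apex falls on. A compact sketch (function name, direction labels, and test triangles are my own):

```python
def triangle_direction(verts):
    """Steps E2-E5: decide which axis-aligned direction a triangle points.
    The two vertices with the smallest coordinate difference form the base;
    the apex lies on the other side, giving the pointing direction."""
    (x1, y1), (x2, y2), (x3, y3) = verts
    dx = [abs(x1 - x2), abs(x1 - x3), abs(x2 - x3)]
    dy = [abs(y1 - y2), abs(y1 - y3), abs(y2 - y3)]
    apex_of = {0: 2, 1: 1, 2: 0}  # min-difference pair -> apex index
    if min(dx) <= min(dy):        # base is vertical: points along x (Step E4)
        k = apex_of[dx.index(min(dx))]
        apex, base = verts[k], [v for i, v in enumerate(verts) if i != k]
        return '+x' if all(apex[0] - b[0] > 0 for b in base) else '-x'
    else:                         # base is horizontal: points along y (Step E5)
        k = apex_of[dy.index(min(dy))]
        apex, base = verts[k], [v for i, v in enumerate(verts) if i != k]
        return '+y' if all(apex[1] - b[1] > 0 for b in base) else '-y'
```

For instance, vertices (0, 0), (0, 4), (6, 2) share a vertical base at x = 0 with the apex at x = 6, so the triangle points in the positive x direction.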
Preferably, a different number of triangles is arranged on the backboard of each cooperative target, for identifying the different targets.
Preferably, the number of triangles is recognized as follows:
Step F1: the triangles are green, with RGB gray value (0, 255, 0). For the image T2 of Step A5, traverse every pixel; assign 255 to each pixel whose G gray value is greater than 100, greater than 1.5 times the B gray value, and greater than 1.5 times the R gray value, and 0 otherwise; binarize to obtain image T3*.
Step F2: apply a morphological opening to image T3* to remove its small regions, then perform edge extraction and edge compensation to obtain image T3**.
Step F3: perform edge tracking on image T3**; the number of tracked edges is the number of triangles on the cooperative target.
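The patent leaves the edge tracking of Step F3 to prior art. As a stand-in sketch, counting the connected regions of the binarized triangle mask gives the same count as counting closed edges; here via a simple 4-connected flood fill on a list-of-lists grid (the mask and names are my own):

```python
def count_triangles(binary):
    """Step F3 sketch: count connected regions of 1s in the binarized
    triangle mask; each tracked closed edge encloses one such region."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                count += 1                      # new region found
                stack = [(sy, sx)]
                seen[sy][sx] = True
                while stack:                    # flood-fill the region
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] \
                                and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

mask = [[0, 1, 0, 0, 1],
        [0, 1, 0, 0, 1],
        [0, 0, 0, 0, 0],
        [1, 1, 0, 0, 0]]
```

The morphological opening of Step F2 matters here: without it, single-pixel noise would each count as a spurious region.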
The present invention has the following beneficial effects:
(1) The cooperative target is adaptively designed so that its shape, size, and the relative positions of its squares favor detection and identification: (a) the spacing between the three squares exceeds their side length, which effectively improves the success rate of corner classification and ordering; (b) the combination of a red backboard and three yellow squares as cooperative markers makes the two easy to distinguish; (c) the larger square at the upper middle gives the marker pattern a clear orientation, and with the other two squares on either side of it, the three squares can be effectively identified by contour size during image processing; (d) the 4:3 side-length ratio of the squares improves the detection rate.
(2) A corner-compensation algorithm determines the position of a missing corner point, improving corner-extraction accuracy; it also supplies more feature points for the user's positioning, thereby improving positioning precision.
(3) Markings on the cooperative targets distinguish different targets, so multiple targets can be arranged in a scene, giving vehicle positioning more choices and more accurate results.
Brief description of the drawings
Fig. 1 is a schematic diagram of the cooperative target of the present invention.
Fig. 2 is the image A obtained by the present invention.
Fig. 3 is the image A* obtained by binarizing image A.
Fig. 4 is the result of edge extraction on image A*.
Fig. 5(a) is the T1 region image; Fig. 5(b) is the T2 region image.
Fig. 6 is the edge-extraction result for image T1*.
Fig. 7 is the corner-detection result for T2.
Fig. 8 is the corner-classification result for T2.
Fig. 9 is the corner image after ordering.
Fig. 10 illustrates the two cases of a corner point missing in the X direction.
Fig. 11 illustrates the two cases of a corner point missing in the Y direction.
Fig. 12 is a schematic diagram of positioning with the vision cooperative target.
Fig. 13 is the flow chart of the positioning method.
Fig. 14 shows the coordinate-system transformations of the vehicle-mounted cooperative-target positioning system.
Fig. 15 is a schematic diagram of the corner ordering for the vehicle-mounted cooperative target.
Embodiments
The present invention is described below with reference to the accompanying drawings and embodiments.
The high-precision positioning method of the present invention for vehicles in underground scenes, as shown in Fig. 13, comprises the following steps:
Step 1: make a cooperative target and arrange it in the scene.
As shown in Fig. 1, the cooperative target of the present invention is designed for easy recognition, following a layered scheme: a red pattern serves as the backboard and three yellow squares serve as cooperative markers. The backboard has RGB gray value (255, 0, 0) and the three squares (255, 255, 0). The three squares also differ by design: the square at the upper middle is larger, giving the marker pattern a clear orientation, while the other two squares sit on either side of it, so the three can be effectively identified by contour size during image processing. At the same time, the spacing between the three squares exceeds the side length of the smallest square, which aids corner-point extraction and ordering.
The three squares also keep a fixed proportion. Experimental analysis shows a size ratio of 4:3 to be optimal: if the ratio is larger, the small squares are sometimes mistaken for noise and go undetected; if it is smaller, pattern distortion when shooting from different angles and distances changes the square contours, so that large and small squares cannot be distinguished.
Considering that a typical scene entrance is 2 to 2.5 meters wide and scene pillars are 75 to 100 centimeters wide, the cooperative target is designed with a backboard of 80 cm × 120 cm. The large square is 32 cm × 32 cm and centered on the target's vertical (y) axis; the small squares are 24 cm × 24 cm, placed on either side of the large square, each more than 35 cm from its center.
Step 2: as shown in Fig. 12, acquire an image of the cooperative target in the scene with the user's image-acquisition device, extract the corner points of the cooperative markers by the prior-art method, and obtain the corner coordinates in the image coordinate system of the cooperative-target image.
Step 3: from the marker corner coordinates in the image coordinate system obtained in Step 2, compute the vehicle's coordinates in the geographic coordinate system, thereby completing the positioning of the vehicle in the scene.
Wherein, step 2 comprises following concrete steps:
Steps A 0, the real scene image controlled in user images collecting device Real-time Collection scene, as shown in Figure 2, and judge whether contain cooperative target in every two field picture: if had, be image A by the image definition containing cooperative target, perform steps A 1; If no, return steps A 0;
Wherein, judge that the concrete grammar whether containing cooperative target in realtime graphic is:
Step D1: R, G, B triple channel information extracting each pixel in present image;
Step D2: be greater than 50, R passage gray-scale value be greater than 1.5 times of G passage gray-scale values by meeting R passage gray-scale value, and the pixel assignment that R passage gray-scale value is greater than 1.3 times of channel B gray-scale values is 255, otherwise assignment is 0, then carries out binaryzation to present image;
Step D3: calculating gray-scale value in the image after binaryzation is the pixel number of 255, if pixel number is greater than account for 20% of full images vegetarian refreshments number, then thinks that this present image exists cooperative target, otherwise thinks that cooperative target does not exist.
Step A1: extract the R, G and B channel values of every pixel in image A; assign 255 to the pixels satisfying the following condition and 0 to the rest:
Condition: the R value is greater than 50, greater than 1.5 times the G value and greater than 1.3 times the B value, that is:
R > 50 and R > 1.5G and R > 1.3B (1);
After all pixels of image A have been traversed and assigned in this way, image A is binarized, yielding image A*, as shown in Figure 3;
Step A2: apply a morphological opening to image A* to remove small regions; then perform edge extraction on A*, with edge compensation, obtaining image A**, as shown in Figure 4;
Step A3: detect all boundary sequences in image A**, perform edge tracking to find the longest edge in A**, and extract the extreme x and y coordinates over all pixels on that edge: maxima xmax, ymax and minima xmin, ymin;
Step A4: in image A, connect the coordinate points (xmin, ymin), (xmin, ymax), (xmax, ymax) and (xmax, ymin) in turn; the enclosed region is taken as the cooperative-target region and its image is defined as image T1, as shown in Fig. 5(a);
Step A5: convert image A to grayscale and connect the same four coordinate points in the grayscale image; the enclosed region is taken as the corner-detection region and its image is defined as image T2, as shown in Fig. 5(b);
Step A6: for every pixel in image T1, assign 255 if its R value is greater than 50, its G value is greater than 50 and its G value is greater than 1.5 times its B value, and 0 otherwise; binarization yields image T1*;
Step A7: apply a morphological opening to image T1* to remove small regions; then perform edge extraction on T1*, with edge compensation, obtaining image T1**;
Step A8: perform edge tracking on image T1**, keep the three longest edges as the square edges of the cooperative target, and delete the remaining edges, as shown in Figure 6;
Step A9: extract corner points from image T2 with the Harris corner-extraction algorithm; every extracted corner lying on a square edge of the cooperative target is taken as a cooperative-target corner point, as shown in Figure 7;
Step A10: cluster the cooperative-target corner points obtained in A9 into 3 classes with the k-means algorithm. If any class contains fewer than 3 corner points, return to step A10 and re-cluster; if any class contains exactly 3 corner points, perform corner compensation so that every class contains 4 corner points, as shown in Figure 8. For convenience the corner points may be ordered, as shown in Figure 9: the class containing the corner with the smallest x coordinate is taken as the first class, its corners numbered 1, 2, 3, 4 clockwise; the class containing the corner with the smallest y coordinate among the three classes is taken as the second class, its corners numbered 5, 6, 7, 8 clockwise; the class containing the corner with the largest y coordinate is taken as the third class, its corners numbered 9, 10, 11, 12 clockwise;
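The grouping of the twelve detected corners into three four-corner classes in step A10 can be sketched with a small Lloyd's k-means. This stands in for whatever k-means implementation is actually used; the deterministic farthest-point seeding and the function name are assumptions made so the sketch needs no random state:

```python
import numpy as np

def cluster_corners(points, k=3, iters=20):
    """Minimal Lloyd's k-means: partition detected corners into k groups,
    one per square of the cooperative target."""
    pts = np.asarray(points, float)
    # Deterministic farthest-point seeding (assumption of this sketch).
    centroids = [pts[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pts - c, axis=1) for c in centroids], axis=0)
        centroids.append(pts[np.argmax(d)])
    centroids = np.array(centroids)
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute means.
        labels = np.argmin(
            [np.linalg.norm(pts - c, axis=1) for c in centroids], axis=0)
        centroids = np.array([pts[labels == j].mean(axis=0) for j in range(k)])
    return labels
```

With the three squares well separated in the image, the three clusters converge after the first assignment.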
Step A11: according to the relative positions of the 12 corner points, match the 4 corners of each class to the vertices of the corresponding one of the 3 squares of the cooperative target, obtaining the coordinates of the cooperative-target corner points in the image coordinate system. Specifically:
As shown in Figure 15, the corner with the smallest x coordinate among the cooperative-target corners is matched to a vertex of the third square, and the other three corners of its class are matched to the other three vertices of the third square; the corner with the smallest y coordinate is matched to a vertex of the first square, and the other three corners of its class to the other three vertices of the first square; the corner with the largest y coordinate is matched to a vertex of the second square, and the other three corners of its class to the other three vertices of the second square.
Step A12: from the image coordinates of the cooperative-target corner points obtained in step A11, obtain the current position and attitude of the vehicle.
In step A10, when a class contains exactly 3 corner points, the invention provides two corner-compensation methods, as follows:
Corner compensation method one:
Step B1: for the class in question, denote the x coordinates of its 3 corner points by x1, x2 and x3 and compute their pairwise differences, that is:
ΔX1 = x1 - x2, ΔX2 = x1 - x3, ΔX3 = x2 - x3 (3)
Likewise denote the y coordinates by y1, y2 and y3 and compute:
ΔY1 = y1 - y2, ΔY2 = y1 - y3, ΔY3 = y2 - y3 (4)
Step B2: find the smallest difference in the x direction and in the y direction and compare them: if the smallest difference is in the x direction, the class is missing a corner point in the x direction; perform step B3. If the smallest difference is in the y direction, the class is missing a corner point in the y direction; perform step B4;
Step B3: supplement the missing corner point in the x direction, as shown in Figure 10:
SB31: first determine which x-direction difference is smallest:
if ΔX1 is smallest, the missing corner lies on the opposite side of the line through (x1, y1) and (x2, y2), at the position opposite corner (x3, y3);
if ΔX2 is smallest, the missing corner lies on the opposite side of the line through (x1, y1) and (x3, y3), at the position opposite corner (x2, y2);
if ΔX3 is smallest, the missing corner lies on the opposite side of the line through (x2, y2) and (x3, y3), at the position opposite corner (x1, y1);
SB32: compute the differences between the x coordinate of the corner opposite the missing one and the x coordinates of the other two corners; if both differences are less than 0, the x coordinate of the missing corner is greater than that of its opposite corner, i.e. the missing corner lies in the +x direction of its opposite corner; if both differences are greater than 0, the x coordinate of the missing corner is less than that of its opposite corner, i.e. the missing corner lies in the -x direction;
SB33: find the two largest of ΔX1, ΔX2 and ΔX3 and take their mean;
when the missing corner lies in the +x direction of its opposite corner, its x coordinate equals the x coordinate of the opposite corner plus twice this mean difference;
when the missing corner lies in the -x direction of its opposite corner, its x coordinate equals the x coordinate of the opposite corner minus twice this mean difference;
the y coordinate of the missing corner equals the y coordinate of its opposite corner;
Step B4: supplement the missing corner point in the y direction, as shown in Figure 11:
SB41: determine which y-direction difference is smallest:
if ΔY1 is smallest, the missing corner lies on the opposite side of the line through (x1, y1) and (x2, y2), at the position opposite corner (x3, y3);
if ΔY2 is smallest, the missing corner lies on the opposite side of the line through (x1, y1) and (x3, y3), at the position opposite corner (x2, y2);
if ΔY3 is smallest, the missing corner lies on the opposite side of the line through (x2, y2) and (x3, y3), at the position opposite corner (x1, y1);
SB42: compute the differences between the y coordinate of the corner opposite the missing one and the y coordinates of the other two corners; if both differences are less than 0, the y coordinate of the missing corner is greater than that of its opposite corner, i.e. the missing corner lies in the +y direction of its opposite corner; if both differences are greater than 0, it lies in the -y direction;
SB43: find the two largest of ΔY1, ΔY2 and ΔY3 and take their mean;
when the missing corner lies in the +y direction of its opposite corner, its y coordinate equals the y coordinate of the opposite corner plus twice this mean difference;
when it lies in the -y direction, its y coordinate equals the y coordinate of the opposite corner minus twice this mean difference;
the x coordinate of the missing corner equals the x coordinate of its opposite corner;
The coordinates of the missing corner point are thus obtained, completing the corner compensation.
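For intuition about what steps B1-B4 compute: when the three detected points really are corners of one square, the fourth corner also follows from the parallelogram rule, since the two most distant detected corners span a diagonal and the missing corner mirrors the remaining one across it. The sketch below implements this simplified geometric shortcut, not the patent's difference-table procedure, and all names are assumptions:

```python
import numpy as np

def complete_square(p1, p2, p3):
    """Recover the fourth corner of a square from three known corners
    via the parallelogram rule."""
    pts = [np.asarray(p, float) for p in (p1, p2, p3)]
    pairs = [(0, 1), (0, 2), (1, 2)]
    # The pair with the largest separation lies on a diagonal.
    i, j = max(pairs, key=lambda ij: np.linalg.norm(pts[ij[0]] - pts[ij[1]]))
    k = 3 - i - j
    # The missing corner mirrors the shared-neighbor corner across the diagonal.
    return pts[i] + pts[j] - pts[k]
```

For the unit-style square with corners (0, 0), (2, 0), (2, 2), the recovered fourth corner is (0, 2).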
Corner compensation method two:
Step C1: for the class in question, denote the x coordinates of its 3 corner points by x1, x2 and x3 and compute their pairwise differences, that is:
ΔX1 = x1 - x2, ΔX2 = x1 - x3, ΔX3 = x2 - x3 (3)
Likewise denote the y coordinates by y1, y2 and y3 and compute:
ΔY1 = y1 - y2, ΔY2 = y1 - y3, ΔY3 = y2 - y3 (4)
Step C2: find the smallest difference in the x direction and in the y direction and compare them: if the smallest difference is in the x direction, the class is missing a corner point in the x direction; perform step C3. If the smallest difference is in the y direction, the class is missing a corner point in the y direction; perform step C4;
Step C3: locate the missing corner point in the x direction:
SC31: first determine which x-direction difference is smallest:
if ΔX1 is smallest, the missing corner lies on the opposite side of the line through (x1, y1) and (x2, y2), at the position opposite corner (x3, y3);
if ΔX2 is smallest, the missing corner lies on the opposite side of the line through (x1, y1) and (x3, y3), at the position opposite corner (x2, y2);
if ΔX3 is smallest, the missing corner lies on the opposite side of the line through (x2, y2) and (x3, y3), at the position opposite corner (x1, y1);
SC32: compute the differences between the x coordinate of the corner opposite the missing one and the x coordinates of the other two corners; if both differences are less than 0, the missing corner lies in the +x direction of its opposite corner; if both are greater than 0, it lies in the -x direction;
Step C4: locate the missing corner point in the y direction:
SC41: determine which y-direction difference is smallest:
if ΔY1 is smallest, the missing corner lies on the opposite side of the line through (x1, y1) and (x2, y2), at the position opposite corner (x3, y3);
if ΔY2 is smallest, the missing corner lies on the opposite side of the line through (x1, y1) and (x3, y3), at the position opposite corner (x2, y2);
if ΔY3 is smallest, the missing corner lies on the opposite side of the line through (x2, y2) and (x3, y3), at the position opposite corner (x1, y1);
SC42: compute the differences between the y coordinate of the corner opposite the missing one and the y coordinates of the other two corners; if both differences are less than 0, the missing corner lies in the +y direction of its opposite corner; if both are greater than 0, it lies in the -y direction;
Step C5: according to the relative position of the missing corner and its opposite corner determined in step C3 or step C4, solve the following problem to obtain the optimal coordinate P of the missing corner:
arg min over P of the sum over i, j in {1, 2}, i ≠ j, of ||(P - P_i)·(P_j - P_0)||_F^2
subject to ||P - P_3||_2 = ||P_0 - P_3||_2
where P_0 is the corner opposite the missing one, P_1 and P_2 are the two corners of the class other than the missing corner P and corner P_0, and P_3 is the midpoint of the segment joining P_1 and P_2.
The optimal value of P is taken as the coordinate of the missing corner point, completing the corner compensation.
In addition, when multiple cooperative targets are arranged at different locations in a scene, the vehicle cannot automatically identify which target it has photographed and therefore cannot complete automatic positioning. To solve this problem, the invention places a different number of triangles on the backboard of each cooperative target, or several triangles with different orientations on each backboard, as marks distinguishing the targets. As in Fig. 1, the triangle in the upper-left corner of the backboard is a green pattern whose RGB values are (0, 255, 0).
When cooperative targets carry different numbers of triangles as their marks, the cooperation-target marks are recognized as follows:
Step F1: traverse every pixel of the corner-detection region T2 of step A5; assign 255 to pixels whose G value is greater than 100, greater than 1.5 times the B value and greater than 1.5 times the R value, and 0 otherwise; binarization yields image T3*;
Step F2: apply a morphological opening to image T3* to remove small regions; perform edge extraction on T3*, with edge compensation, obtaining image T3**;
Step F3: perform edge tracking on image T3**; the number of tracked edges is the number of triangles in the cooperative target.
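Step F3 counts one closed contour per green triangle. An equivalent count on the binarized mark image is the number of connected white regions, which the following sketch (names assumed) obtains by breadth-first labeling:

```python
from collections import deque
import numpy as np

def count_regions(binary):
    """Count 4-connected white regions in a binary mask; each triangle
    mark contributes exactly one region."""
    seen = np.zeros(binary.shape, dtype=bool)
    h, w = binary.shape
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                count += 1                      # new region found
                queue = deque([(y, x)])
                seen[y, x] = True
                while queue:                    # flood-fill the region
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
    return count
```

A mask with two separate white blobs yields a count of 2, matching two triangle marks.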
When a cooperative target carries several triangles with different orientations as its mark, the cooperation-target mark is recognized as follows:
Step E1: perform steps F1 to F3 to identify the number of triangles;
Step E2: for each triangle obtained in step E1, denote the x coordinates of its 3 vertices by x1, x2 and x3 and compute their pairwise differences, that is:
ΔX1 = x1 - x2, ΔX2 = x1 - x3, ΔX3 = x2 - x3
Likewise denote the y coordinates of the 3 vertices by y1, y2 and y3 and compute:
ΔY1 = y1 - y2, ΔY2 = y1 - y3, ΔY3 = y2 - y3
Step E3: find the smallest difference in the x direction and in the y direction and compare them: if the smallest difference is in the x direction, the triangle points in the positive or negative x direction; perform step E4. If the smallest difference is in the y direction, the triangle points in the positive or negative y direction; perform step E5;
Step E4: the triangle points in the positive or negative x direction:
find the two vertices with the smallest x-coordinate difference, then compute the differences between the x coordinate of the third vertex and those of the first two; if both differences are less than 0, the triangle points in the negative x direction; if both are greater than 0, it points in the positive x direction;
Step E5: the triangle points in the positive or negative y direction:
find the two vertices with the smallest y-coordinate difference, then compute the differences between the y coordinate of the third vertex and those of the first two; if both differences are less than 0, the triangle points in the negative y direction; if both are greater than 0, it points in the positive y direction;
Step E6: perform steps E2 to E5 for every triangle, obtaining the orientations of all triangles.
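The difference test of steps E2-E5 can be sketched as follows (function name assumed; vertices are (x, y) pairs). The axis along which two vertices nearly coincide is the base, and the sign of the apex's offset from the base gives the pointing direction:

```python
import numpy as np

def triangle_orientation(v1, v2, v3):
    """Classify a triangle's pointing direction per steps E2-E5."""
    xs = np.array([v1[0], v2[0], v3[0]], float)
    ys = np.array([v1[1], v2[1], v3[1]], float)
    pairs = [(0, 1), (0, 2), (1, 2)]
    dx = min(abs(xs[i] - xs[j]) for i, j in pairs)
    dy = min(abs(ys[i] - ys[j]) for i, j in pairs)
    if dx <= dy:
        # Two vertices share (almost) one x: the base is vertical,
        # so the triangle points along the x axis (step E4).
        i, j = min(pairs, key=lambda ij: abs(xs[ij[0]] - xs[ij[1]]))
        apex = xs[3 - i - j]
        return '+x' if apex > xs[i] else '-x'
    # Otherwise the base is horizontal and the triangle points along y (E5).
    i, j = min(pairs, key=lambda ij: abs(ys[ij[0]] - ys[ij[1]]))
    apex = ys[3 - i - j]
    return '+y' if apex > ys[i] else '-y'
```

A triangle with vertices (0, 0), (0, 2) and apex (3, 1) has a vertical base and points in the +x direction.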
In step 3, as shown in Figure 14, once the coordinates of the cooperation-mark corner points in the cooperative-target image coordinate system are obtained, the transformation between the cooperative-target coordinate system and the image coordinate system can be derived from the known (pre-calibrated) coordinates of the mark corners in the cooperative target. The transformation between the cooperative-target coordinate system and the camera coordinate system, and then the transformation between the camera coordinate system and the vehicle coordinate system, are obtained in turn; finally, from the position of the cooperative-target coordinate system in the geographic coordinate system, the position of the vehicle in the geographic coordinate system is determined. The coordinate transformations are as follows:
First the coordinate systems are defined:
1. Cooperative-target coordinate system Ox_g y_g z_g
A coordinate system fixed relative to the Earth's surface. Its origin is taken at a vertex of the large square pattern of the cooperative target; the Ox_g axis lies in the horizontal plane, parallel to the short side of the backboard, pointing forward; the Oy_g axis lies in the horizontal plane, parallel to the long side of the backboard, pointing to the right; the Oz_g axis is perpendicular to the Ox_g y_g plane, and the system obeys the right-hand rule.
2. User coordinate system Oxyz
A coordinate system fixed to the user's body, with origin O at the user's center of gravity. The Ox axis coincides with the user's longitudinal axis, lies in the user's plane of symmetry and points forward; the Oz axis is perpendicular to the plane of symmetry and points to the right; the Oy axis lies in the plane of symmetry, perpendicular to the Ox axis, pointing downward.
3. Camera (user image-acquisition device) coordinate system
The origin of the camera coordinate system is the camera's optical center; the Oz_c axis coincides with the optical axis, with the shooting direction taken as positive; the Ox_c and Oy_c axes are parallel to the X and Y axes of the image physical coordinate system.
4. Image coordinate systems (u, v) and (X, Y)
The image coordinate system comprises the image pixel coordinate system (u, v) and the image physical coordinate system (X, Y), defined as follows:
(i) Image pixel coordinate system (u, v): a rectangular coordinate system with its origin at the upper-left corner of the image and the pixel as the coordinate unit; u and v are the column and row indices of the pixel in the digital image.
(ii) Image physical coordinate system (X, Y): a rectangular coordinate system in millimeters with its origin at the intersection of the optical axis and the image plane; its X and Y axes are parallel to the u and v axes of the pixel coordinate system.
The transformations between the coordinate systems are as follows:
1) Relation between the vehicle coordinate system and the cooperative-target coordinate system:
Since the camera is rigidly attached to the vehicle, the angles between the vehicle coordinate system and the cooperative-target coordinate system are exactly the camera's installation attitude angles, also known as Euler angles:
(a) pitch angle β: the angle between the Oy axis and the plane Ox_g y_g;
(b) yaw angle (azimuth) γ: the angle between the projection of the Oy axis onto the horizontal plane Ox_g y_g and the target's longitudinal axis Oy_g;
(c) roll angle (tilt angle) α: the angle through which the plane of symmetry rotates about the Oy axis, a rightward roll being positive.
2) Transformation between the cooperative-target coordinate system and the camera coordinate system
A point transforms from the cooperative-target coordinate system to the camera coordinate system through an orthogonal rotation matrix R and a translation vector T:
[x_c, y_c, z_c]^T = R [x_g, y_g, z_g]^T + T (6)
where
R = [r11 r12 r13; r21 r22 r23; r31 r32 r33]
  = [cosα·cosγ, sinα·sinβ·cosγ - cosβ·sinγ, sinα·cosβ·cosγ + sinβ·sinγ;
     cosα·sinγ, sinα·sinβ·sinγ + cosβ·cosγ, sinα·cosβ·sinγ - sinβ·cosγ;
     -sinα, cosα·sinβ, cosα·cosβ]
and T = [t_x, t_y, t_z]^T is the coordinate of the cooperative-target coordinate-system origin in the camera coordinate system.
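The rotation matrix of formula (6) and the attitude extraction used later in formula (20) can be checked numerically against each other. The sketch below (function names assumed) writes R out term by term and recovers the Euler angles from it:

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Rotation matrix written out exactly as in formula (6)/(14)."""
    sa, ca = np.sin(alpha), np.cos(alpha)
    sb, cb = np.sin(beta), np.cos(beta)
    sg, cg = np.sin(gamma), np.cos(gamma)
    return np.array([
        [ca * cg, sa * sb * cg - cb * sg, sa * cb * cg + sb * sg],
        [ca * sg, sa * sb * sg + cb * cg, sa * cb * sg - sb * cg],
        [-sa,     ca * sb,                ca * cb],
    ])

def euler_angles(R):
    """Invert the parameterization as in formula (20),
    using arctan2 for quadrant safety."""
    alpha = -np.arcsin(R[2, 0])          # r31 = -sin(alpha)
    beta = np.arctan2(R[2, 1], R[2, 2])  # r32/r33 = tan(beta)
    gamma = np.arctan2(R[1, 0], R[0, 0])  # r21/r11 = tan(gamma)
    return alpha, beta, gamma
```

For moderate angles the round trip is exact and R is a proper rotation (orthogonal, determinant +1).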
3) Transformation between the image coordinate system and the camera coordinate system
The image physical coordinates of the image point P of an object point p with camera coordinates (x_c, y_c, z_c) are:
X = f·x_c / z_c (7)
Y = f·y_c / z_c (8)
The image physical coordinates are further converted to image pixel coordinates:
u = -X/d_x + u_0 = -f_x·X + u_0 = -f_x·f·x_c/z_c + u_0 (9)
v = Y/d_y + v_0 = f_y·Y + v_0 = f_y·f·y_c/z_c + v_0 (10)
where f is the camera focal length, (u_0, v_0) is the coordinate of the image physical coordinate-system origin in the image pixel coordinate system, and f_x, f_y are the sampling frequencies in the X and Y directions, i.e. the number of pixels per unit length. The four parameters f_x, f_y, u_0 and v_0 depend only on the camera's internal structure.
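Formulas (7)-(10) chain directly into code. In the sketch below (names assumed), fx and fy are the sampling frequencies of formulas (9)-(10), i.e. pixels per unit length, not the equivalent focal lengths used later in formula (13):

```python
def project_to_pixels(xc, yc, zc, f, fx, fy, u0, v0):
    """Project a camera-frame point to pixel coordinates per (7)-(10),
    including the sign flip on u used in formula (9)."""
    X = f * xc / zc        # (7) image physical coordinate
    Y = f * yc / zc        # (8)
    u = -fx * X + u0       # (9)
    v = fy * Y + v0        # (10)
    return u, v
```

For example, a point at (1, 2, 4) with f = 8, fx = fy = 10 and principal point (320, 240) lands at u = 300, v = 280.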
4) Transformation between the cooperative-target coordinate system and the image coordinate system
Substituting formulas (7)-(10) into formula (6) gives the transformation between the cooperative-target coordinate system and the image coordinate system:
X/f = (u_0 - u)/(f_x·f) = (r11·x_g + r12·y_g + r13·z_g + t_x)/(r31·x_g + r32·y_g + r33·z_g + t_z) (11)
Y/f = (v - v_0)/(f_y·f) = (r21·x_g + r22·y_g + r23·z_g + t_y)/(r31·x_g + r32·y_g + r33·z_g + t_z) (12)
These two formulas, known as the collinearity equations, require the object point, the optical center and the image point to lie on one line; they express the mathematical relation between the 3-D target-point coordinates (x_g, y_g, z_g), the optical-center coordinate T, the optical-axis rotation R and the corresponding image point (u, v). According to the collinearity equations, when the camera intrinsics are known, several known object points and their corresponding image coordinates suffice to solve for the camera's spatial position and attitude in the cooperative-target coordinate system.
Suppose the i-th corner point of the cooperative target has the homogeneous coordinate q_i = [x_g^i, y_g^i, z_g^i, 1]^T in the cooperative-target coordinate system and p_i = [u_i, v_i, 1]^T in the pixel coordinate system. The two are related by:
λ_i·p_i = A·[R T]·q_i (13)
where
A = [f_x 0 u_0; 0 f_y v_0; 0 0 1],
f_x and f_y are the camera's equivalent focal lengths, i.e. f_x = f/dx and f_y = f/dy with f the camera focal length, and (u_0, v_0) is the coordinate of the camera coordinate-system origin in the pixel coordinate system. The rotation matrix is expressed as:
R = [r_1 r_2 r_3] = [r_ij]_{3x3}
  = [cosα·cosγ, sinα·sinβ·cosγ - cosβ·sinγ, sinα·cosβ·cosγ + sinβ·sinγ;
     cosα·sinγ, sinα·sinβ·sinγ + cosβ·cosγ, sinα·cosβ·sinγ - sinβ·cosγ;
     -sinα, cosα·sinβ, cosα·cosβ] (14)
T = [t_x t_y t_z]^T is the translation vector, and λ_i is an unknown scale:
λ_i = e_3^T·[R T]·q_i (15)
where e_3 = [0 0 1]^T. Combining (13) and (15) gives:
(A^{-1}·p_i·e_3^T - I)·[R T]·q_i = 0 (16)
From the designed cooperative-target pattern, q_i is a known quantity. In each image frame the corner-detection algorithm extracts the feature points p_i, and the feature-point calibration algorithm associates each p_i with its corresponding q_i. From these data the camera's pose parameters can be obtained, where R is the rotation from the camera to the cooperative target and T is the translation from the camera to the cooperative target.
Since the cooperative target lies in the ground plane z_g = 0, formula (16) reduces, for each corner point, to two linear equations in the unknowns r_1 = [r11 r21 r31]^T, r_2 = [r12 r22 r32]^T and T = [t_x t_y t_z]^T. Writing the normalized image coordinates as u'_i = (u_i - u_0)/f_x and v'_i = (v_i - v_0)/f_y, the equations for all corner points stack into:
F·[r_1^T r_2^T T^T]^T = 0 (17)
where F = [F_1^T ... F_n^T]^T, n is the number of corner points, and
F_i = [x_g^i 0 -x_g^i·u'_i y_g^i 0 -y_g^i·u'_i 1 0 -u'_i;
       0 x_g^i -x_g^i·v'_i 0 y_g^i -y_g^i·v'_i 0 1 -v'_i] (18)
The singular vector [r~_1^T r~_2^T T~^T]^T corresponding to the smallest singular value of F is computed with the standard singular-value decomposition (SVD) technique; the size of the translation vector T follows by scaling r~_1 and r~_2 to unit norm. The rotation matrix R is then estimated in two steps: first compute the SVD [r~_1 r~_2 0] = UΣV^T; then set R = U·Q·V^T, with Q = diag(1, 1, 1) if det(U·V^T) = 1 and Q = diag(1, 1, -1) otherwise.
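The planar solution of formulas (16)-(18) together with the two SVD steps can be sketched end to end. The sketch assumes the conventional pinhole model without the sign flip of formula (9), and all names are assumptions; with exact correspondences it recovers the camera pose of a z_g = 0 target:

```python
import numpy as np

def planar_pose(obj_pts, img_pts, K):
    """Pose of a planar (z_g = 0) target: stack two equations per corner
    as in (17)-(18), take the null vector of F by SVD, fix the scale,
    and re-orthonormalize the rotation with a second SVD."""
    fx, fy, u0, v0 = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    rows = []
    for (xg, yg), (u, v) in zip(obj_pts, img_pts):
        un, vn = (u - u0) / fx, (v - v0) / fy   # normalized image coords
        rows.append([xg, 0, -xg * un, yg, 0, -yg * un, 1, 0, -un])
        rows.append([0, xg, -xg * vn, 0, yg, -yg * vn, 0, 1, -vn])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    h = Vt[-1]                  # null vector [r1; r2; t] up to scale
    if h[8] < 0:                # the target must lie in front of the camera
        h = -h
    c1, c2, t = h[0:3], h[3:6], h[6:9]
    s = 2.0 / (np.linalg.norm(c1) + np.linalg.norm(c2))
    c1, c2, t = s * c1, s * c2, s * t
    # Nearest rotation to [c1 c2 c1xc2] via a second SVD.
    U, _, Wt = np.linalg.svd(np.column_stack([c1, c2, np.cross(c1, c2)]))
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Wt)]) @ Wt
    return R, t
```

The vehicle position of formula (19) then follows as H = -R.T @ t, since R is orthogonal and R^{-1} = R^T.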
Finally the position parameters of the vehicle relative to the cooperative target are obtained:
H = [x_g y_g z_g]^T = -R^{-1}·T (19)
and the attitude parameters:
α = -arcsin(r31), β = arctan(r32/r33), γ = arctan(r21/r11) (20).
In summary, the above are only preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (10)

1. An active vision positioning method, characterized by comprising the steps of:
Step 1: making a cooperative target and arranging the cooperative target in the scene;
wherein the cooperative target is a backboard bearing a red pattern, on which three yellow squares are arranged as cooperation marks; the backboard's RGB values are (255, 0, 0) and the three squares' RGB values are (255, 255, 0); the first square and the second square are symmetric about one diagonal of the third square as the axis of symmetry; the third square is larger than the first and second squares, the side ratio of the third square to each of the first and second squares being 4:3; the minimum distance between the borders of the three squares is greater than the side length of the smallest square;
Step 2: acquiring an image of the cooperative target in the scene with the user's image-acquisition device, extracting the corner points of the cooperation marks in the cooperative target, and obtaining the coordinates of the corner points in the image coordinate system of the cooperative-target image;
Step 3: obtaining the transformation between the cooperative-target coordinate system and the image coordinate system from the coordinates of the cooperation-mark corner points in the cooperative-target coordinate system and the coordinates obtained in step 2; then, based on the known transformation between the image coordinate system and the image-acquisition-device coordinate system, obtaining the transformation between the image-acquisition-device coordinate system and the cooperative-target coordinate system; finally, based on the transformations between the image-acquisition-device coordinate system and the vehicle coordinate system and between the cooperative-target coordinate system and the geographic coordinate system, obtaining the coordinates and attitude of the vehicle in the geographic coordinate system, thereby completing the positioning of the vehicle in the scene.
2. The active vision positioning method of claim 1, characterized in that the backboard measures 80 cm x 120 cm, the side of the third square is 32 cm, the sides of the first and second squares are 24 cm, and the distances from the centers of the first and second squares to the center of the third square are greater than 35 cm; the axis of symmetry of the third square coincides with the centerline of the backboard in the width direction.
3. a kind of active vision localization method as claimed in claim 1, it is characterized in that, described step 2 comprises following concrete steps:
Steps A 0, the real scene image controlled in user images collecting device Real-time Collection scene, and judge whether contain cooperative target in every two field picture: if had, be image A by the image definition containing cooperative target, perform steps A 1; If no, return steps A 0;
Steps A 1, extract R, G, B triple channel gray value information of each pixel in described image A; And be 255 by the pixel assignment meeting following condition, the pixel assignment not meeting following condition is 0, after each pixel according to the method described above assignment, then carries out binaryzation to image A, obtains image A*;
The condition is: for each pixel, the R-channel gray value is greater than 50, the R-channel gray value is greater than 1.5 times the G-channel gray value, and the R-channel gray value is greater than 1.3 times the B-channel gray value;
Step A2: perform a morphological opening operation on image A* to remove tiny regions; then perform edge extraction and edge compensation on image A* to obtain image A**;
Step A3: detect all edge sequences in image A**, perform edge tracking to find the longest edge in image A**, and extract the maximum and minimum x-axis and y-axis coordinates over all pixels on the longest edge: the maxima xmax, ymax and the minima xmin, ymin;
Step A4: in image A, connect the coordinate points (xmin, ymin), (xmin, ymax), (xmax, ymax) and (xmax, ymin) in sequence; take the resulting region as the cooperative target region and define the image within it as image T1;
Step A5: convert image A to grayscale; in the grayscale image, connect the coordinate points (xmin, ymin), (xmin, ymax), (xmax, ymax) and (xmax, ymin) in sequence; take the resulting region as the corner detection region and define the image within it as image T2;
Step A6: for each pixel in image T1, assign 255 to pixels whose R-channel gray value is greater than 50, whose G-channel gray value is greater than 50, and whose G-channel gray value is greater than 1.5 times the B-channel gray value, and assign 0 otherwise; this binarization yields image T1*;
Step A7: perform a morphological opening operation on image T1* to remove tiny regions in T1*; then perform edge extraction and edge compensation on T1* to obtain image T1**;
Step A8: perform edge tracking on image T1**, take the three longest edges in T1** as the edges of the squares in the cooperative target, and delete the remaining edges;
Step A9: apply the Harris corner extraction algorithm to image T2; if an extracted corner point lies on an edge of a square of the cooperative target, take that corner point as a cooperative target corner point;
Step A10: use the K-means algorithm to divide the cooperative target corner points obtained in A9 into 3 classes; if any class contains fewer than 3 corner points, return to step A10 and continue classifying; if any class contains exactly 3 corner points, perform corner compensation so that every class has 4 corner points;
Step A11: according to the relative positions of the 12 corner points, match the 4 corner points of each class with the vertices of the corresponding one of the 3 squares of the cooperative target, obtaining the coordinates of the cooperative target corner points in the image coordinate system.
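The red-pixel binarization condition of step A1 can be sketched in NumPy as follows. This is a minimal illustration under the assumption that image A is an H×W×3 RGB array; the function name `red_target_mask` is ours, not from the patent:

```python
import numpy as np

def red_target_mask(image):
    """Step-A1 condition: R > 50, R > 1.5*G and R > 1.3*B -> 255, else 0."""
    r = image[..., 0].astype(float)
    g = image[..., 1].astype(float)
    b = image[..., 2].astype(float)
    mask = (r > 50) & (r > 1.5 * g) & (r > 1.3 * b)
    return np.where(mask, 255, 0).astype(np.uint8)

# A 2x2 toy image: strongly red, gray, green, and dark-red pixels.
img = np.array([[[200, 40, 60], [120, 120, 120]],
                [[30, 200, 30], [60, 35, 40]]], dtype=np.uint8)
print(red_target_mask(img))
```

Only the two reddish pixels (top-left and bottom-right) satisfy all three ratio tests and survive the binarization.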
4. The active vision positioning method according to claim 3, wherein the specific method for judging whether the real-time image contains a cooperative target in step A0 is:
Step D1: extract the R, G and B channel values of each pixel in the current image;
Step D2: assign 255 to pixels whose R-channel gray value is greater than 50, greater than 1.5 times the G-channel gray value, and greater than 1.3 times the B-channel gray value, and assign 0 otherwise, thereby binarizing the current image;
Step D3: count the pixels with gray value 255 in the binarized image; if this count exceeds 20% of the total number of pixels in the image, the current image is considered to contain a cooperative target; otherwise, the cooperative target is considered absent.
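Steps D1–D3 amount to the same red-pixel mask followed by a fraction test. A minimal sketch, assuming an RGB array input (the function name is ours):

```python
import numpy as np

def contains_cooperative_target(image, fraction=0.20):
    """Steps D1-D3: mask red pixels (R > 50, R > 1.5*G, R > 1.3*B) and
    report a cooperative target when more than `fraction` of pixels pass."""
    r = image[..., 0].astype(float)
    g = image[..., 1].astype(float)
    b = image[..., 2].astype(float)
    mask = (r > 50) & (r > 1.5 * g) & (r > 1.3 * b)
    return bool(mask.mean() > fraction)

# Half of this 2x2 frame is strongly red, so the target is detected.
frame = np.array([[[200, 40, 60], [200, 40, 60]],
                  [[120, 120, 120], [30, 200, 30]]], dtype=np.uint8)
print(contains_cooperative_target(frame))  # True
```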
5. The active vision positioning method according to claim 3, wherein in step A10, when a class contains exactly 3 corner points, the corner compensation method is as follows:
Step B1: for a given class of corner points, denote the x-axis coordinates of the 3 corner points by x1, x2 and x3 and compute the pairwise differences, that is:
ΔX1 = x1 − x2, ΔX2 = x1 − x3, ΔX3 = x2 − x3 (3)
Denote the y-axis coordinates of the 3 corner points by y1, y2 and y3 and compute the pairwise differences, that is:
ΔY1 = y1 − y2, ΔY2 = y1 − y3, ΔY3 = y2 − y3 (4)
Step B2: find the minimum of the x-axis differences and the minimum of the y-axis differences, and judge: if the x-axis difference is the smaller minimum, the class is missing a corner point in the x-axis direction; perform step B3. If the y-axis difference is the smaller minimum, the class is missing a corner point in the y-axis direction; perform step B4.
Step B3: supplement the missing corner point in the x-axis direction.
SB31: first judge which x-axis difference is the minimum:
If the first difference ΔX1 is the minimum, the missing-corner position lies on the opposite side of the line connecting corner points (x2, y2) and (x1, y1), at the position opposing corner point (x3, y3);
If the second difference ΔX2 is the minimum, the missing-corner position lies on the opposite side of the line connecting corner points (x3, y3) and (x1, y1), at the position opposing corner point (x2, y2);
If the third difference ΔX3 is the minimum, the missing-corner position lies on the opposite side of the line connecting corner points (x2, y2) and (x3, y3), at the position opposing corner point (x1, y1);
SB32: then compute the differences between the x-axis coordinate of the opposing corner point of the missing corner and the x-axis coordinates of the other two corner points; if both differences are less than 0, the x-axis coordinate of the missing corner is greater than that of its opposing corner point; if both differences are greater than 0, the x-axis coordinate of the missing corner is less than that of its opposing corner point;
SB33: find the two largest of the differences ΔX1, ΔX2 and ΔX3 and compute their mean:
When the x-axis coordinate of the missing corner is greater than that of its opposing corner point, the x-axis coordinate of the missing corner equals the x-axis coordinate of its opposing corner point plus twice the mean difference;
When the x-axis coordinate of the missing corner is less than that of its opposing corner point, the x-axis coordinate of the missing corner equals the x-axis coordinate of its opposing corner point minus twice the mean difference;
The y-axis coordinate of the missing corner equals the y-axis coordinate of its opposing corner point;
Step B4: supplement the missing corner point in the y-axis direction:
SB41: judge which y-axis difference is the minimum:
If the first difference ΔY1 is the minimum, the missing-corner position lies on the opposite side of the line connecting corner points (x2, y2) and (x1, y1), at the position opposing corner point (x3, y3);
If the second difference ΔY2 is the minimum, the missing-corner position lies on the opposite side of the line connecting corner points (x3, y3) and (x1, y1), at the position opposing corner point (x2, y2);
If the third difference ΔY3 is the minimum, the missing-corner position lies on the opposite side of the line connecting corner points (x2, y2) and (x3, y3), at the position opposing corner point (x1, y1);
SB42: then compute the differences between the y-axis coordinate of the opposing corner point of the missing corner and the y-axis coordinates of the other two corner points; if both differences are less than 0, the y-axis coordinate of the missing corner is greater than that of its opposing corner point; if both differences are greater than 0, the y-axis coordinate of the missing corner is less than that of its opposing corner point;
SB43: find the two largest of the differences ΔY1, ΔY2 and ΔY3 and compute their mean:
When the missing corner lies in the +y direction of its opposing corner point, the y-axis coordinate of the missing corner equals the y-axis coordinate of its opposing corner point plus twice the mean difference;
When the missing corner lies in the −y direction of its opposing corner point, the y-axis coordinate of the missing corner equals the y-axis coordinate of its opposing corner point minus twice the mean difference;
The x-axis coordinate of the missing corner equals the x-axis coordinate of its opposing corner point;
The coordinates of the missing corner point are thus obtained, and corner compensation is achieved.
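The pairwise-difference test of steps B1–B2 can be sketched as follows. We assume that "minimum" refers to the minimum absolute difference, which the translation leaves implicit; the function names are ours:

```python
def pairwise_diffs(v1, v2, v3):
    """Equations (3)/(4): the three pairwise coordinate differences."""
    return [v1 - v2, v1 - v3, v2 - v3]

def missing_axis(xs, ys):
    """Step B2 sketch: the axis whose smallest absolute pairwise difference
    is smaller is taken as the direction of the missing corner point."""
    dx = min(abs(d) for d in pairwise_diffs(*xs))
    dy = min(abs(d) for d in pairwise_diffs(*ys))
    return 'x' if dx < dy else 'y'

# Three noisy corners of a square whose fourth corner is missing:
print(missing_axis((0.0, 10.1, 0.05), (0.0, 0.2, 9.9)))  # x
```

Note that for an exactly square, noise-free corner set the two minima coincide; the branch selection implicitly relies on measurement noise breaking the tie.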
6. The active vision positioning method according to claim 3, wherein in step A10, when a class contains exactly 3 corner points, the corner compensation method is as follows:
Step C1: for a given class of corner points, denote the x-axis coordinates of the 3 corner points by x1, x2 and x3 and compute the pairwise differences, that is:
ΔX1 = x1 − x2, ΔX2 = x1 − x3, ΔX3 = x2 − x3 (3)
Denote the y-axis coordinates of the 3 corner points by y1, y2 and y3 and compute the pairwise differences, that is:
ΔY1 = y1 − y2, ΔY2 = y1 − y3, ΔY3 = y2 − y3 (4)
Step C2: find the minimum of the x-axis differences and the minimum of the y-axis differences, and judge: if the x-axis difference is the smaller minimum, the class is missing a corner point in the x-axis direction; perform step C3. If the y-axis difference is the smaller minimum, the class is missing a corner point in the y-axis direction; perform step C4.
Step C3: supplement the missing corner point in the x-axis direction.
SC31: first judge which x-axis difference is the minimum:
If the first difference ΔX1 is the minimum, the missing-corner position lies on the opposite side of the line connecting corner points (x2, y2) and (x1, y1), at the position opposing corner point (x3, y3);
If the second difference ΔX2 is the minimum, the missing-corner position lies on the opposite side of the line connecting corner points (x3, y3) and (x1, y1), at the position opposing corner point (x2, y2);
If the third difference ΔX3 is the minimum, the missing-corner position lies on the opposite side of the line connecting corner points (x2, y2) and (x3, y3), at the position opposing corner point (x1, y1);
SC32: then compute the differences between the x-axis coordinate of the opposing corner point of the missing corner and the x-axis coordinates of the other two corner points; if both differences are less than 0, the x-axis coordinate of the missing corner is greater than that of its opposing corner point; if both differences are greater than 0, the x-axis coordinate of the missing corner is less than that of its opposing corner point;
Step C4: supplement the missing corner point in the y-axis direction:
SC41: judge which y-axis difference is the minimum:
If the first difference ΔY1 is the minimum, the missing-corner position lies on the opposite side of the line connecting corner points (x2, y2) and (x1, y1), at the position opposing corner point (x3, y3);
If the second difference ΔY2 is the minimum, the missing-corner position lies on the opposite side of the line connecting corner points (x3, y3) and (x1, y1), at the position opposing corner point (x2, y2);
If the third difference ΔY3 is the minimum, the missing-corner position lies on the opposite side of the line connecting corner points (x2, y2) and (x3, y3), at the position opposing corner point (x1, y1);
SC42: then compute the differences between the y-axis coordinate of the opposing corner point of the missing corner and the y-axis coordinates of the other two corner points; if both differences are less than 0, the y-axis coordinate of the missing corner is greater than that of its opposing corner point; if both differences are greater than 0, the y-axis coordinate of the missing corner is less than that of its opposing corner point;
Step C5: according to the relative position between the missing corner and its opposing corner point determined in step C3 or step C4, solve the following problem to obtain the optimal value of the coordinate P:

$$\arg\min_{P} \sum_{i,j\in\{1,2\},\; i\neq j} \left\| (P - P_i)\cdot(P_j - P_0) \right\|_F^2$$

$$\text{s.t.}\quad \left\| P - P_3 \right\|_2 = \left\| P_0 - P_3 \right\|_2$$

wherein P_0 denotes the coordinate of the corner point opposing the missing corner; P_1 and P_2 are the coordinates of the two corner points of the class other than the missing corner and corner point P_0; P_3 denotes the coordinate of the midpoint of the line connecting P_1 and P_2; i and j are positive integers.
Take the optimal value of the coordinate P as the coordinate of the missing corner point, thereby achieving corner compensation.
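Under one plausible reading of this partly garbled formula (the residual terms vanish when the sides through P are parallel to the sides through P_0, and the constraint equalizes the two half-diagonals), the optimum coincides with completing the parallelogram, i.e. reflecting P_0 through the midpoint P_3. A closed-form sketch of that interpretation (our function name, not the patent's):

```python
def complete_square_corner(p0, p1, p2):
    """Reflect the opposing corner p0 through the midpoint p3 of p1 and p2
    (parallelogram rule): P = p1 + p2 - p0. This choice satisfies the
    constraint ||P - P3|| = ||P0 - P3|| exactly, since P - P3 = P3 - P0."""
    return [p1[0] + p2[0] - p0[0], p1[1] + p2[1] - p0[1]]

# Square corners (0,0), (10,0), (0,10): the completed fourth corner is (10,10).
print(complete_square_corner([0, 0], [10, 0], [0, 10]))  # [10, 10]
```

A general-purpose numerical solver could of course minimize the stated objective directly; the closed form above is only valid under the square-target assumption the claim already makes.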
7. The active vision positioning method according to claim 3, wherein, when multiple cooperative targets are arranged at different locations in the scene, a marker is arranged on the backboard of each cooperative target to distinguish the different cooperative targets.
8. The active vision positioning method according to claim 7, wherein multiple triangles with different orientations are arranged on the backboard of each cooperative target as markers; the triangles are green patterns with an RGB gray value of (0, 255, 0);
The method for recognizing the markers of a cooperative target is as follows:
Step E1: identify the number of triangles;
Step E2: for each triangle obtained in step E1, denote the x-axis coordinates of the 3 triangle vertices by x1, x2 and x3 and compute the pairwise differences, that is:
ΔX1 = x1 − x2, ΔX2 = x1 − x3, ΔX3 = x2 − x3
Denote the y-axis coordinates of the 3 triangle vertices by y1, y2 and y3 and compute the pairwise differences, that is:
ΔY1 = y1 − y2, ΔY2 = y1 − y3, ΔY3 = y2 − y3
Step E3: find the minimum of the x-axis differences and the minimum of the y-axis differences, and judge: if the x-axis difference is the smaller minimum, the triangle points in the positive or negative x direction; perform step E4. If the y-axis difference is the smaller minimum, the triangle points in the positive or negative y direction; perform step E5.
Step E4: the triangle points in the positive or negative x direction:
Find the two triangle vertices with the minimum x-axis difference, then judge the differences between the x coordinate of the third vertex and those of the first two vertices: if both differences are less than 0, the triangle points in the negative x direction; if both differences are greater than 0, the triangle points in the positive x direction.
Step E5: the triangle points in the positive or negative y direction:
Find the two triangle vertices with the minimum y-axis difference, then judge the differences between the y coordinate of the third vertex and those of the first two vertices: if both differences are less than 0, the triangle points in the negative y direction; if both differences are greater than 0, the triangle points in the positive y direction.
Step E6: perform steps E1 to E5 for every triangle to obtain the orientations of all the triangles.
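Steps E2–E5 can be sketched as follows. We again assume "minimum difference" means minimum absolute pairwise difference; the function name is ours, and mixed-sign apex cases, which should not occur for the isosceles marker triangles, default to the negative direction:

```python
def triangle_orientation(xs, ys):
    """Steps E2-E5 sketch: infer which axis direction a triangle points in
    from the coordinates of its three vertices."""
    xs = [float(v) for v in xs]
    ys = [float(v) for v in ys]
    pairs = [(0, 1), (0, 2), (1, 2)]
    dx = [abs(xs[i] - xs[j]) for i, j in pairs]
    dy = [abs(ys[i] - ys[j]) for i, j in pairs]
    if min(dx) < min(dy):
        # near-vertical base: the triangle points along the x axis (step E4)
        i, j = pairs[dx.index(min(dx))]
        k = 3 - i - j                      # the apex is the remaining vertex
        return '+x' if xs[k] > xs[i] and xs[k] > xs[j] else '-x'
    # near-horizontal base: the triangle points along the y axis (step E5)
    i, j = pairs[dy.index(min(dy))]
    k = 3 - i - j
    return '+y' if ys[k] > ys[i] and ys[k] > ys[j] else '-y'

# Apex to the right of a near-vertical base -> the triangle points in +x.
print(triangle_orientation([0.0, 0.1, 5.0], [0.0, 4.0, 2.0]))  # +x
```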
9. The active vision positioning method according to claim 7, wherein a different number of triangles is arranged on the backboard of each cooperative target to identify the different cooperative targets.
10. The active vision positioning method according to claim 8 or 9, wherein the method for recognizing the number of triangles is as follows:
Step F1: the triangles are arranged as green patterns with an RGB gray value of (0, 255, 0); for the image T2 in step A5, traverse each pixel and assign 255 to pixels whose G-channel gray value is greater than 100, greater than 1.5 times the B-channel gray value, and greater than 1.5 times the R-channel gray value, and assign 0 otherwise; this binarization yields image T3*;
Step F2: perform a morphological opening operation on image T3* to remove tiny regions in T3*; then perform edge extraction and edge compensation on T3* to obtain image T3**;
Step F3: perform edge tracking on image T3**; the number of tracked edges is the number of triangles in the cooperative target.
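Steps F1–F3 can be approximated by masking green pixels and counting their connected components. This sketch substitutes 4-connected component counting for the claim's morphological opening and edge tracking (the function name and the `min_area` stand-in for the opening are ours):

```python
import numpy as np
from collections import deque

def count_green_regions(image, min_area=1):
    """Steps F1-F3 sketch: mask green pixels (G > 100, G > 1.5*B,
    G > 1.5*R) and count 4-connected components; each surviving
    component is taken as one triangle marker."""
    r, g, b = (image[..., k].astype(float) for k in range(3))
    mask = (g > 100) & (g > 1.5 * b) & (g > 1.5 * r)
    seen = np.zeros(mask.shape, dtype=bool)
    count = 0
    for si in range(mask.shape[0]):
        for sj in range(mask.shape[1]):
            if mask[si, sj] and not seen[si, sj]:
                # flood-fill one component with BFS
                area, queue = 0, deque([(si, sj)])
                seen[si, sj] = True
                while queue:
                    i, j = queue.popleft()
                    area += 1
                    for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                        if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                                and mask[ni, nj] and not seen[ni, nj]):
                            seen[ni, nj] = True
                            queue.append((ni, nj))
                if area >= min_area:   # stand-in for the opening operation
                    count += 1
    return count

# Two separate pure-green blobs on a black background.
img = np.zeros((5, 5, 3), dtype=np.uint8)
img[0, 0:2, 1] = 255   # blob 1
img[3:5, 3:5, 1] = 255 # blob 2
print(count_green_regions(img))  # 2
```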
CN201410608792.0A 2014-11-03 2014-11-03 A kind of active vision localization method Active CN104504675B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410608792.0A CN104504675B (en) 2014-11-03 2014-11-03 A kind of active vision localization method

Publications (2)

Publication Number Publication Date
CN104504675A true CN104504675A (en) 2015-04-08
CN104504675B CN104504675B (en) 2016-05-04

Family

ID=52946069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410608792.0A Active CN104504675B (en) 2014-11-03 2014-11-03 A kind of active vision localization method

Country Status (1)

Country Link
CN (1) CN104504675B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090180667A1 (en) * 2008-01-14 2009-07-16 Mahan Larry G Optical position marker apparatus
CN104034305A (en) * 2014-06-10 2014-09-10 杭州电子科技大学 Real-time positioning method based on monocular vision

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107038722A (en) * 2016-02-02 2017-08-11 深圳超多维光电子有限公司 Equipment positioning method and device
CN106546233A (en) * 2016-10-31 2017-03-29 西北工业大学 A kind of monocular visual positioning method towards cooperative target
CN107682595A (en) * 2017-08-14 2018-02-09 中国科学院深圳先进技术研究院 A kind of alternative projection method, system and computer-readable recording medium
CN107682595B (en) * 2017-08-14 2019-12-13 中国科学院深圳先进技术研究院 interactive projection method, system and computer readable storage medium
CN108827326A (en) * 2018-06-20 2018-11-16 安徽迈普德康信息科技有限公司 A kind of acquisition method and its acquisition device of the navigation map based on big data
CN109376208A (en) * 2018-09-18 2019-02-22 高枫峻 A kind of localization method based on intelligent terminal, system, storage medium and equipment
CN110119698A (en) * 2019-04-29 2019-08-13 北京百度网讯科技有限公司 For determining the method, apparatus, equipment and storage medium of Obj State
CN110119698B (en) * 2019-04-29 2021-08-10 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for determining object state
CN115100293A (en) * 2022-06-24 2022-09-23 河南工业大学 ADS-B signal blindness-compensating method

Also Published As

Publication number Publication date
CN104504675B (en) 2016-05-04


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Wei Dongyan, Lai Qifeng, Zhang Xiaoguang, Li Wen, Chen Xialan, Li Xianghong, Xu Ying, Yuan Hong, Gong Xuping
Inventor before: Gong Xuping, Wei Dongyan, Lai Qifeng, Zhang Xiaoguang, Chen Xialan, Li Xianghong, Xu Ying, Yuan Hong
COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant