CN107167826A - Vehicle longitudinal positioning system and method based on variable-grid image feature detection in automatic driving - Google Patents
- Publication number
- CN107167826A (application CN201710205430.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- detection
- target
- feature
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G01S19/40—Correcting position, velocity or attitude (satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS, GLONASS or GALILEO)
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
- G01S19/47—the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
- G01C21/28—Navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/343—Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint
- G01C25/00—Manufacturing, calibrating, cleaning, or repairing instruments or devices
- G06T5/80—
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Abstract
The present invention provides a vehicle longitudinal localization method based on variable-grid image feature detection in automatic driving. From the forward-object data output by the high-precision navigation system and the along-road target distance calculated during high-accuracy navigation, an ORB feature extraction algorithm based on variable grid regions (carrying scale information) allows a specific forward target to be retrieved in the on-board binocular vision system, which then outputs the distance from the vision system to that target. Given this distance and the mounting position of the binocular system on the vehicle, the vehicle trajectory in the high-accuracy navigation can be corrected, improving vehicle longitudinal positioning accuracy in automatic driving.
Description
Technical field
The invention belongs to the technical field of automatic driving and relates to a system and method for ego-vehicle positioning of an autonomous vehicle, specifically a vehicle longitudinal positioning system and method based on variable-grid image feature detection in automatic driving.
Background technology
The automated driving system of an intelligent vehicle needs to rely on accurate map data. According to the designated destination information, it dynamically and economically completes global and local path planning to form its own navigation path, and safely and conveniently carries out the vehicle's control actions. During the execution of this system, the high-accuracy positioning information of the vehicle itself must be obtained in real time so that the current driving state can be judged during decision-making and control.
The conventional positioning method in automatic driving generally combines GNSS (Global Navigation Satellite System) with an IMU (Inertial Measurement Unit). GNSS achieves good positioning accuracy on open suburban ground, but in complex urban environments the multipath reflection of the propagated signal easily causes positioning errors of several metres. An IMU is typically composed of measurement instruments such as gyroscopes and multi-axis accelerometers, detecting the vehicle's current attitude and acceleration in real time; vehicle motion over a certain distance can be accurately dead-reckoned from the IMU, but IMU dead reckoning accumulates error, and the degradation of positioning accuracy grows more severe over time. By fusing and interpolating GNSS and IMU data, a good high-accuracy positioning result can be reached.
However, if an automated driving system relies only on GNSS+IMU for high-accuracy positioning, extra localization methods and sensors must assist it in order to guarantee safety during the execution of automatic decisions and to complete accurate control actions. In general, either matching of the point clouds obtained by LiDAR (Light Detection And Ranging) is used to localize the vehicle in its local environment, or multiple cameras are used for object detection and recognition, depth calculation, and motion estimation to complete positioning. Both the high-cost LiDAR scheme and the low-cost multi-camera scheme, mutually correcting errors with the conventional high-cost GNSS+IMU scheme, can provide high-accuracy positioning information in automatic driving.
At the present stage, camera-based positioning assistance for automatic driving typically computes camera pose changes to form a visual odometer; this method can accurately determine the position and attitude of the vehicle within a certain time range. However, visual odometry based on a binocular camera must perform rectification of the left and right images, registration, and disparity-map computation in real time, and cannot output at a high frequency: for images of 1600 x 1200 pixels, the depth-calculation frame rate of the binocular images is below 10 FPS (Frames Per Second). With a low image-processing frame rate, the output frequency of camera-derived positioning information is low, and fusing it with other positioning information such as GNSS+IMU raises further issues of time synchronization and linear/non-linear interpolation, affecting the reliability, real-time performance, and accuracy of the high-accuracy positioning information. A multi-camera image-processing scheme that achieves real-time, applicable, and robust lane-level positioning accuracy therefore occupies a core position in intelligent transportation detection systems and in the automatic driving field.
The content of the invention
In view of the shortcomings of the prior art, the technical problem to be solved by the present invention is to provide a system and method for vehicle longitudinal positioning in automatic driving based on variable-grid image feature detection. Using an ORB feature extraction algorithm based on variable grid regions (carrying scale information), a specific forward target is retrieved in the on-board binocular vision system, and the distance from the vision system to the forward target is output. From this distance and the mounting position of the binocular system on the vehicle, the vehicle trajectory in the high-accuracy navigation can be corrected, improving vehicle longitudinal positioning accuracy in automatic driving.
The technical solution adopted in the present invention is as follows:
A vehicle longitudinal positioning system based on variable-grid image feature detection in automatic driving. Divided by functional module, the system comprises a high-precision navigation system, a binocular camera, an image preprocessor, a target detector, a target tracker, and a target distance calculator.
The high-precision navigation system performs map retrieval in real time and, according to the current position of the vehicle body, sends to the target detector the name IDs of objects that appear or will appear ahead of the travelling vehicle; it also applies a longitudinal distance correction to the high-accuracy navigation according to the specific target distance.
The binocular camera, comprising a left camera and a right camera, collects video images ahead of the travelling vehicle in real time and outputs them to the image preprocessor for preprocessing.
The image preprocessor, according to the calibrated intrinsic and extrinsic parameters of the binocular camera, performs distortion correction and epipolar-constraint rectification of the images collected by the binocular camera, grayscale conversion of the images, and distribution of the images.
The target detector receives the specific target name ID sent by the high-precision navigation system, performs variable-grid-based image feature matching on the left image among the grayscale images distributed by the image preprocessor, and generates the variable-grid tag file of the target offline.
The target tracker completes a tracking operation based on image regions, according to the images input by the image preprocessor and the detection rectangle of the specific target detected by the target detector. Using the image of the specific target region and its neighbouring area as a convolution template, it determines in subsequent image frames the region of highest convolution response within the whole newly input scene image, then updates the current convolution template with that highest-response region, and continuously outputs the highest-response region of each frame as the target tracking position.
The target distance calculator computes, through the epipolar geometry constraint, the distance from the binocular camera to the target in the longitudinal direction, and thereby obtains the distance between the vehicle body and the specific target at the moment the current frame was collected.
A vehicle longitudinal localization method based on variable-grid image feature detection in automatic driving, realized with the above system, comprising the following steps:

S1. On an autonomous vehicle on which a high-precision map and a GNSS+IMU system have been deployed, mount the front-facing binocular camera and calibrate its intrinsic and extrinsic parameters;

S2. For specific targets in scenes in the high-precision map database, and with reference to the video frames collected by the camera while running on real roads, extract multiple image frames containing the specific target region to form the target extraction frame sequence, perform target feature extraction, and produce the variable-grid-based feature description file;

S3. In the start-up preparation stage of the autonomous vehicle, complete in sequence the initialization of the high-precision navigation system, binocular camera, image preprocessor, target detector, target tracker, and target distance calculator modules;

S4. In the on-line automatic driving operation stage, the whole system requires the high-accuracy navigation to trigger the detection process. The data sources input to the system are the name ID of the target in the high-precision map, the approximate distance calculated in navigation, and the image frame sequence input by the binocular camera after preprocessing; the distance to the specific target in the scene is then output following the "detection" - "tracking" - "distance output" flow;

S5. The output distance calculated by the camera is input to the high-precision navigation module to assist in performing the longitudinal correction process.
Compared with the prior art, the present invention has the following advantages:
1. For the dynamic, complex road conditions faced in automatic driving, a low-cost vision sensor can effectively correct the longitudinal error of the GNSS+IMU positioning system and improve ego-vehicle positioning accuracy;
2. The computational complexity of the traditional binocular vision system is reduced, so that the vision-sensor part of the positioning meets the system's real-time output-frequency requirement;
3. In detecting and tracking real targets, most of the computation consists of image convolution operations, which makes the method easy to port to embedded+GPU systems and to accelerate in hardware in engineering applications.
Brief description of the drawings
Fig. 1 is a structural diagram of the longitudinal positioning system and a flow schematic of the method for variable-grid-based image feature detection in automatic driving implemented according to the present invention;
Fig. 2 is a schematic diagram of the result of manually marking the specific target region, corresponding to step 2 of the present invention;
Fig. 3 is a schematic diagram of the variable-grid division result of the present invention; the grid division uses the bisection method described in step 42;
Fig. 4 is a schematic diagram of feature matching used for target detection and localization. The left side is the target to be detected and the right side is the test scene; through feature point detection and matching, the feature point correspondences between the two images are obtained, and by computing the homography of these correspondences, the detection result of the target and its possible detection region in the test scene are obtained;
Fig. 5 shows the operation result of inputting the left and right camera images to the target detector and successfully completing the detection and extraction of the target;
Fig. 6 depicts how the degree of progression is judged: the bounding rectangle ABCD (the outer edge of the image) encloses the detection rectangle MNOP of the specific target; the vertical distances from the four sides of the inner rectangle MNOP to the bounding rectangle are dis_T, dis_R, dis_B and dis_L respectively;
Fig. 7 shows the matching result of the variable-grid-based image feature matching algorithm for sequence numbers 0, 1, 2. The left side shows the redrawn detection templates of the different sequence numbers, the right side is the scene image frame actually collected by the camera, and the line segments connecting left to right are the drawn correspondences between matched feature points;
Fig. 8 is a tracking sequence over 20 consecutive frames; the upper-left image is the first input frame with the target rectangle to be tracked, and the remaining images are the tracking results of subsequent frames.
Embodiment
In order to make the above objects, features and advantages of the present invention more obvious and understandable, embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The invention provides an image feature detection algorithm based on variable grids. Using the vehicle-surroundings images collected by the cameras installed in the automated driving system, together with the information on special scene targets provided by the input high-precision navigation system and the along-road target distance calculated during high-accuracy navigation, a feature extraction algorithm based on variable grid region size (carrying scale information) retrieves the specific forward target in the on-board binocular vision system and outputs the distance from the vision system to the forward target. From this distance information and the physical mounting position of the binocular system on the vehicle, the vehicle trajectory in the high-accuracy navigation can be corrected, improving vehicle longitudinal positioning accuracy in automatic driving.
The special scene targets in the above high-precision map refer to fixed scene objects such as various traffic information signs. These specific targets generally exist in the high-precision map database, so data relations can be assigned to them in advance in the navigation system for automatic driving, i.e. a series of variable-grid feature descriptions belonging to a certain specific target, together with the name of the corresponding target in the high-precision map data. While the autonomous vehicle is in motion, the high-precision navigation system anticipates the names of objects likely to appear in the field of view of the on-board camera ahead of the vehicle; this information is sent to the target detection module, which calls the prefabricated variable-grid feature files in the database to complete the detection and matching of the specific target.
The present invention provides a variable-grid-based image feature detection and extraction method for longitudinal positioning assistance in automatic driving, with the following concrete steps:
S1. On an autonomous vehicle on which a high-precision map and a GNSS+IMU system have been deployed, mount the front-facing binocular camera and calibrate its intrinsic and extrinsic parameters.

In this step, the device attributes and parameters that must be recorded in detail are: the mounting position of the left camera relative to the vehicle-body coordinate system; the left camera intrinsics M1, D1; the right camera intrinsics M2, D2; and the extrinsics R, T between the left and right cameras. M1 and M2 contain the focal lengths fx, fy and principal point location cx, cy of the two cameras respectively, in 3 x 3 matrix form:

    M = | fx   0  cx |
        |  0  fy  cy |
        |  0   0   1 |

D1 and D2 are the image distortion coefficients of the left and right cameras respectively; the extrinsic parameters R, T describe the rotation angle and translation distance of the right camera's position relative to the left camera.
The intrinsic parameters of the binocular camera can correct camera image distortion, and the extrinsics of the left and right cameras are applied in disparity and depth calculation. On the one hand, the tangential and radial distortion introduced by the fixed lens mounting can be corrected through the intrinsic parameters, eliminating as far as possible the distortion error by which smooth straight edges in the scene degrade into curved edges in the imaging result. On the other hand, the projection relation between a 3D point (X, Y, Z) in the scene and a point (x', y') in the right camera image plane can be given by the camera's intrinsic and extrinsic parameters:

    s * [x', y', 1]^T = M2 * (R * [X, Y, Z]^T + T)

where s represents only the variation in scale.
S2. For specific targets in scenes in the high-precision map database, and with reference to the video frames collected by the camera while running on real roads, extract multiple image frames containing the specific target region to form the target extraction frame sequence, perform target feature extraction, and produce the variable-grid-based feature description file.

The images used for target feature extraction come from the video frames collected by the left camera. For the video frames from which features of a specific target are extracted, three frames collected at roughly estimated near, middle and far distances may constitute the target extraction sequence. In particular, the present invention supports feature extraction over more target image frames, but in practical applications, out of consideration for feature matching efficiency, using no more than 3 video frames for feature extraction is preferred.
In this step, the specific target in the scene can be detected and/or frame-selected in the image frames according to a specific target detection algorithm and/or manual labelling, and the feature description file must record the grid size of this frame selection. For image sequences acquired at different distances from the target, the grid size feeds back the distance between the camera (the autonomous vehicle) and the target at collection time, and the detail the target presents in the image within the grid, as a feature point series, thus carries rich scale information.
In particular, in the calculation process of variable-grid-based image feature extraction, the feature description algorithm used by the present invention is the classic ORB (Oriented FAST and Rotated BRIEF) feature description algorithm. Further, the storage content of the serialized variable-grid tag file of a target is defined as follows:

a) the name ID of the target, searchable in the high-precision map database; in general, a string of digits;
b) the sequence numbers assigned to the collected image frame sequence according to the distance to the target at collection time, assigned starting from 0; if the current target is collected in 3 frames, they correspond to image sequence numbers 0, 1, 2;
c) for the image frames of the corresponding sequence numbers, the key feature point sequences within the grid of the specific target; a key feature point sequence refers to the subset of feature points that can be matched across frames, i.e. the sequence formed by the feature points representing the same point throughout the image frame sequence;
d) the ORB feature descriptions of the feature points corresponding to the feature point sequences;
e) the grid-related information for retrieving the specific target in the stored image; in general, this includes the image coordinates of the grid's upper-left corner and the width and height of the grid in the image; for manual marking, the grid is the maximal bounding rectangle of the target in the frame; for a specific target detection algorithm, the grid corresponds to the rectangular window of the target detection output obtained when running the detection algorithm on the current frame;
f) the distance to the target at collection time obtained by the current conventional GNSS+IMU navigation system.
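Storage items a) to f) could be serialized roughly as follows. The field names, types, and JSON encoding are assumptions of this sketch, since the patent fixes only the content of the tag file, not its on-disk format:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class GridFrame:
    sequence_no: int         # b) 0, 1, 2, ... ordered by distance at capture
    keypoints: list          # c) matchable key feature points, as (x, y) pairs
    orb_descriptors: list    # d) one 32-byte ORB descriptor per keypoint
    grid: tuple              # e) (left, top, width, height) of the retrieval grid
    gnss_imu_distance: float # f) GNSS+IMU distance to the target at capture time

@dataclass
class TargetTagFile:
    target_id: str           # a) name ID searchable in the HD-map database
    frames: list = field(default_factory=list)

    def dumps(self):
        """Serialize the tag file to JSON (asdict recurses into nested frames)."""
        return json.dumps(asdict(self))

tag = TargetTagFile("000123")
tag.frames.append(GridFrame(0, [(10, 12)], [[0] * 32], (100, 80, 64, 48), 35.0))
```

A detector would load one such file per anticipated target and select the frame whose stored distance best matches the navigation estimate.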
Further, the variable-grid-based image feature extraction can be completed offline, producing a feature template library file associated with the specific targets in the high-precision map data.
S3. In the start-up preparation stage of the autonomous vehicle, complete the initialization of the modules in sequence.

According to the functional division of the system, the initialization tasks of the high-precision navigation system, binocular camera, image preprocessor, target detector, target tracker, and target distance calculator modules must be completed in sequence.

The image preprocessor must complete distortion correction and epipolar-constraint rectification of the images collected by the binocular camera, grayscale conversion of the images, and image distribution; its initialization therefore depends on the intrinsic and extrinsic camera calibration parameters from step 1. Further, grayscale conversion of a colour image is completed as 0.299*r + 0.587*g + 0.114*b, where b, g, r represent the blue, green and red channel intensities of each pixel respectively.
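For reference, the standard BT.601 luma weights place 0.299 on the red channel; a minimal numpy version of this grayscale conversion:

```python
import numpy as np

def to_gray(bgr):
    """ITU-R BT.601 luma 0.299*r + 0.587*g + 0.114*b for a BGR uint8 image."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
```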
The target detector's task is to receive the specific target name ID sent by the high-precision navigation system and perform variable-grid image feature matching on the left-camera grayscale image distributed by the image pre-processor.
Its initialization therefore requires the template files generated offline in step 2.
The target tracker performs tracking over image regions. Because this module uses FFTW (Fastest Fourier Transform in the West), a general discrete Fourier transform library, its preset FFTW parameters must be read in during the initialization phase.
The target distance calculator is the core module that performs depth computation based on the binocular camera; its initialization depends on the intrinsic and extrinsic parameters of the binocular camera.
S4: In the online operation stage of automatic driving, the whole system relies on high-precision navigation to trigger the detection process. The data sources fed into the system are the target names in the high-precision map, the rough distance computed by navigation, and the pre-processed image frame sequence from the binocular camera. The distance of the specific target in the scene is then output according to the 'detection' - 'tracking' - 'distance output' flow.
Further, step S4 includes the following sub-steps:
S41: The high-precision navigation system sends the target name ID within the map retrieval range, and the feature description file of the target initialized in step S3 is pre-read into the target detector; the pre-read must extract the corresponding retrieval grid information and acquisition-time distance information, i.e. the stored contents of definitions e) and f) in step 2.
S42: Using the estimated target distance at trigger time from the high-precision navigation system, together with the target feature description file pre-read in step S41, determine the grid size to be used by the variable grid at this moment. Using a bisection-based division method, the left-camera image output by the image pre-processor is divided by this grid size, yielding the current variable grid. The calculation proceeds as follows:
(1/2)^(n+1) * LenGlobal <= LenBlock <= (1/2)^n * LenGlobal
where n is the number of bisections obtained by iterative calculation, i.e. the division count; LenGlobal is the length or width of the pre-processed camera image; LenBlock is the dimension used by the current variable grid, i.e. the length or width of a grid cell. Starting from n = 1, the iteration continues until an n satisfying the above formula is found. Note that n_y and n_x are computed separately from the image height and width along the image y and x directions. The pre-processed left-camera input image is then evenly divided into n_y parts along its height and n_x parts along its width; the resulting image lattice is the current variable grid.
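The bisection search and division above can be sketched as follows. The text divides the image into n_y and n_x parts as literally stated; whether the intended part count is n or 2^n is not fully specified in the description, so this sketch follows the literal wording, and the target cell size passed in is assumed to be derived elsewhere from the estimated distance.

```python
def bisection_count(len_global, len_block):
    """Smallest n (starting from 1) satisfying
    (1/2)**(n+1) * len_global <= len_block <= (1/2)**n * len_global."""
    n = 1
    while (0.5 ** (n + 1)) * len_global > len_block:
        n += 1
    return n

def variable_grid(img_h, img_w, cell_h, cell_w):
    """Divide the pre-processed left image into n_y x n_x equal regions;
    returns (n_y, n_x) and the region rectangles as (x, y, w, h)."""
    n_y = bisection_count(img_h, cell_h)
    n_x = bisection_count(img_w, cell_w)
    regions = [(ix * (img_w // n_x), iy * (img_h // n_y),
                img_w // n_x, img_h // n_y)
               for iy in range(n_y) for ix in range(n_x)]
    return n_y, n_x, regions
```

For a 1024-pixel image side and a 100-pixel target cell, the search stops at n = 3, since 1024/16 = 64 <= 100 <= 128 = 1024/8.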
S43: Traverse in turn the image regions produced by the variable-grid division in the previous step. Perform ORB feature extraction on each region, and carry out detection and matching against the key feature point sequence and ORB feature descriptors in the target feature description file (the stored contents loaded in step S41, definitions c) and d)). If matching succeeds, perform the same detection and matching on the image captured by the right camera and output by the image pre-processor, and jump to step S44; otherwise the image pre-processor reads in the next left-camera frame and this step is re-executed.
Further, the matching process specifically includes the following. First, the descriptor distance between ORB feature points is used to judge whether two ORB feature points are similar. Between the feature points of each region and the template feature points loaded in step S41, a first round of feature point matching is completed, yielding a sequence of matched pairs from the "detection region" to the "template region"; the same operation is then performed in reverse, from "template region" to "detection region", yielding the cross-validated sequence of matched pairs. The detection region refers to the specific target region captured by the left camera; the template region refers to the feature target region from the high-precision map database. Second, the feature point correspondence between "detection region" and "template region" is obtained from the two matched sequences, and the invertible transform between the forward and backward mappings is defined as the remapping relation. Then, during the current matching detection, if the number of matched point pairs reaches the threshold required to solve the mapping matrix (more than 4 point pairs suffice to compute it), the mapping matrix is computed and the remapping-relation check is performed; if it passes, this matching process succeeds, otherwise it fails.
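The cross-validation step can be sketched with plain NumPy on binary ORB descriptors (uint8 rows): match forward from detection to template, match backward, and keep only pairs that agree. In practice OpenCV's `BFMatcher` with `crossCheck=True` and `cv2.findHomography` with RANSAC realize the same cross-validated matching and mapping-matrix check; the descriptor values below are toy data.

```python
import numpy as np

def mutual_matches(des_detect, des_template):
    """Cross-validated ORB matching: keep (i, j) pairs where template j is the
    nearest neighbor of detection i AND detection i is the nearest neighbor of
    template j, under Hamming distance on the binary descriptors."""
    # Hamming distance between every pair of uint8 descriptor rows
    d = np.unpackbits(des_detect[:, None, :] ^ des_template[None, :, :], axis=2).sum(axis=2)
    fwd = d.argmin(axis=1)  # forward: detection region -> template region
    bwd = d.argmin(axis=0)  # backward: template region -> detection region
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

# Toy 1-byte descriptors, for illustration only.
des_template = np.array([[0b00001111], [0b11110000], [0b10101010]], dtype=np.uint8)
des_detect = np.array([[0b11110000], [0b00001111]], dtype=np.uint8)
```

Once more than 4 cross-validated pairs survive, the mapping matrix (a homography) can be estimated from the paired coordinates and the remapping check applied.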
Further, if matching keeps failing until the high-precision navigation system retrieves from the map data that the current vehicle has left the observation area of the specific target, the current detection task is declared failed and this computation flow does not continue.
S44: Take the left and right image frames in which target detection succeeded, together with the left and right detection rectangles, as parameters, and initialize trackers for the corresponding regions of the left and right images; the parameters required by tracker initialization are the left and right detection rectangles obtained in the previous step and the corresponding left and right images. Once tracker initialization succeeds, the left and right image frames continuously output by the image pre-processor are fed into the corresponding target trackers in the subsequent process. The target tracker uses the specific target region image and part of its neighborhood as a convolution template, finds the region of highest convolution response in each newly input full scene image in the subsequent image frames, and then updates the current convolution template with the highest-response region; the specific target region image is the detection rectangle region of the specific target.
The target tracker continuously outputs the highest-response region of each frame as the tracking position of the target.
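One step of this convolution-response search can be sketched with NumPy's FFT (standing in for the FFTW library the module uses): correlate a zero-mean template with the whole scene via the correlation theorem and take the arg-max as the new tracking position. This is only a bare illustration of the response computation, not the full tracker with its neighborhood handling and template update.

```python
import numpy as np

def best_response(template, scene):
    """Return the top-left (y, x) of the highest-correlation region of the
    template in the scene, both given as 2-D float arrays."""
    th, tw = template.shape
    sh, sw = scene.shape
    t = template - template.mean()  # zero-mean template
    # cross-correlation via the correlation theorem: IFFT(FFT(s) * conj(FFT(t)))
    spectrum = np.fft.rfft2(scene) * np.conj(np.fft.rfft2(t, (sh, sw)))
    resp = np.fft.irfft2(spectrum, (sh, sw))
    valid = resp[:sh - th + 1, :sw - tw + 1]  # exclude circularly wrapped shifts
    return np.unravel_index(np.argmax(valid), valid.shape)
```

A production tracker would precompute FFT plans (FFTW's role) and update the template with the highest-response region after every frame.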
Further, the closeness of the tracking position to the edge of the captured image is judged, quantified progressively on the interval [0, 1]. The progressive degree is computed as follows: first compute dbMinDis, the minimum of the distances from the 4 vertices of the bounding rectangle of the current tracking position to the 4 edges of the image; then, with the scene image height Hscn and width Wscn, compute the ratio of the area of the progressive-degree rectangle (a virtual rectangle, so named, each side of which lies at distance dbMinDis, the shortest distance from the tracked bounding rectangle to the scene image border in the current frame, from the corresponding scene image edge) to the area of the scene image: ((Hscn - 2*dbMinDis) * (Wscn - 2*dbMinDis)) / (Hscn * Wscn). The closer the ratio is to 1, the closer the tracking position is to the image edge. In imaging terms, as the vehicle drives past a target, the target's image moves from the image center toward the image edge in the captured frames, then crosses the edge (leaving the camera's field of view) and disappears.
The progressive degree output therefore reflects well whether the camera is at a suitable distance from the target, and whether the cut-off condition for step S45 and subsequent steps is met. In general, the progressive degree threshold is set to 0.8~0.9; above this threshold the target is considered too close to the image edge, and part of its region may fall outside the image so that direct left-right image matching cannot be completed.
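The progressive degree formula above can be sketched directly; the box is taken here as (x, y, w, h) in pixel coordinates:

```python
def progressive_degree(box, img_w, img_h):
    """Ratio of the 'progressive-degree rectangle' area to the scene image
    area, in [0, 1]; values near 1 mean the tracked box is near the edge."""
    x, y, w, h = box
    # minimum distance from the 4 box vertices to the 4 image edges
    db_min_dis = max(min(x, y, img_w - (x + w), img_h - (y + h)), 0.0)
    return ((img_h - 2 * db_min_dis) * (img_w - 2 * db_min_dis)) / (img_h * img_w)
```

A box touching an image corner yields exactly 1.0, while a small centered box yields a value near 0, matching the cut-off behavior described for step S45.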
S45: Judge the closeness of the tracking position to the edge of the captured image, i.e. the progressive degree, with the progressive degree threshold set to 0.9. If the progressive degrees of the target bounding rectangles tracked in both the left and right images are no greater than 0.9, enter the target distance calculator, which computes the distance to the target in the direction perpendicular to the binocular camera via the epipolar geometry constraint, and thereby obtains the distance between the vehicle body and the specific target at the current frame's acquisition time. Otherwise, the left and right cameras are considered unable to capture the tracked target completely.
Further, from the left and right images entering the target distance calculator and the corresponding tracking rectangle regions, the depth of the target can be computed via epipolar geometry, i.e. the distance from the binocular camera to the target in the perpendicular direction at the time the current image frame was acquired. From this distance and the position of the left camera in the vehicle body coordinate system, the distance from the vehicle body to the specific target at the current frame can be obtained.
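For a rectified stereo pair, this depth computation reduces to the standard triangulation relation Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the disparity between corresponding columns of the tracked target in the left and right images. A minimal sketch, assuming the camera-to-body transform is a simple forward offset (the real conversion uses the full calibrated pose of the left camera in the vehicle coordinate system):

```python
def stereo_depth(fx, baseline, x_left, x_right):
    """Depth along the optical axis from a rectified stereo pair.
    fx: focal length in pixels; baseline in meters;
    x_left, x_right: corresponding column coordinates of the target."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: target at infinity or bad match")
    return fx * baseline / disparity

def distance_from_body(depth, cam_offset_forward):
    """Convert camera depth to a vehicle-body distance; the simple forward
    offset used here is an illustrative assumption."""
    return depth + cam_offset_forward
```

For example, with fx = 700 px, a 0.5 m baseline and a 20 px disparity, the target depth is 17.5 m.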
Step 5: The distance computed and output at the camera end is input to the high-precision navigation module, assisting the execution of the longitudinal correction process.
The present invention is further described with reference to Figure 1:
First, on an autonomous vehicle on which the software and hardware system of the present invention is deployed, the binocular camera system is started and the image pre-processor is initialized; the left and right images then undergo camera distortion correction, binocular epipolar rectification, cropping and scaling. The processed left and right images can be distributed on demand to the target detector, target tracker and target distance calculator.
Second, the target detector is initialized with the prefabricated offline variable-grid feature file, the target tracker with the FFTW configuration file, and the target distance calculator with the intrinsic and extrinsic parameters of the binocular camera. After successful initialization, the whole system can run online.
Third, while the autonomous vehicle is driving, the high-precision navigation system sends a "specific target ahead needs detection" instruction to the target detector. According to the target distance computed by the navigation system at that moment, the target detector invokes the templates of different sequence numbers (corresponding feature file storage format b)). As shown in Figure 7 of the description, from top to bottom are the matching result figures of the variable-grid image feature matching algorithm for sequence numbers 0, 1 and 2; the left side shows the redrawn detection templates of the different sequence numbers, the right side shows the scene image frames actually captured by the camera, and the line segments connecting left to right are the drawn lines between corresponding matched feature points.
Fourth, after left-camera detection succeeds, the same operation is performed on the right image. If right-camera detection also succeeds, the system enters the operating environment of the target tracker. Figure 8 illustrates this process with a tracking sequence of 20 consecutive frames: the upper-left image of Figure 8 is the input first frame with the target rectangle to be tracked, and the remaining images are the tracking results of the subsequent frames. The left and right images are input to the target trackers and tracked simultaneously.
Fifth, the tracking results of the current left and right image frames, together with the left and right images distributed by the image pre-processor, are input to the target distance calculator, which outputs the distance value. In general, if this step returns a failure indication, it must be checked whether the current left and right images still contain the specific detection target, i.e. the target detector is invoked again and its return value inspected, to judge whether the current target has left the camera's visual range and whether this specific-target detection, tracking and distance-output processing flow is finished.
Sixth, the distance calculation result output in the previous step, together with the system timestamps of the corresponding captured left and right images, is returned to the high-precision navigation module, which directly or indirectly corrects the longitudinal vehicle position for the corresponding system timestamp.
At this point the embodiment of the invention ends, completing an application example of variable-grid image feature detection in longitudinal vehicle positioning for automatic driving.
In the description of this specification, a description using the term "one embodiment" or the like means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is contained in at least one embodiment or example of the present invention. In this specification, schematic uses of the above term do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Parts not detailed in the specification are prior art or common knowledge. The present embodiment merely illustrates the invention and does not limit its scope; equivalent replacements and modifications made by those skilled in the art are considered to fall within the protection scope of the claims of the invention.
Claims (10)
1. A longitudinal vehicle positioning system based on variable-grid image feature detection in automatic driving, characterized in that: the system is divided by functional module into a high-precision navigation system, a binocular camera, an image pre-processor, a target detector, a target tracker and a target distance calculator;
the high-precision navigation system is used for performing real-time map retrieval according to the current vehicle position, sending to the target detector the name IDs of the targets currently appearing or about to appear ahead of the traveling vehicle, and performing longitudinal distance correction of the high-precision navigation according to the specific target distance;
the binocular camera, comprising a left camera and a right camera, is used for capturing video images ahead of the traveling vehicle in real time and outputting them to the image pre-processor for pre-processing;
the image pre-processor is used for performing, according to the calibrated intrinsic and extrinsic parameters of the binocular camera, distortion correction, epipolar-constraint rectification and grayscale conversion on the images captured by the binocular camera, and for distributing the images;
the target detector is used for receiving the specific target name ID sent by the high-precision navigation system and performing, on the left-camera grayscale image distributed by the image pre-processor, variable-grid image feature matching against the offline-generated variable-grid target feature files;
the target tracker is used for performing tracking over image regions according to the images input by the image pre-processor and the specific target detection rectangle detected by the target detector: the specific target region image and part of its neighborhood are used as a convolution template, the region of highest convolution response in each newly input full scene image is found in the subsequent image frames, the current convolution template is then updated with the highest-response region, and the highest-response region of each frame is continuously output as the tracking position of the target;
the target distance calculator is used for computing, via the epipolar geometry constraint, the distance to the target in the direction perpendicular to the binocular camera, and thereby obtaining the distance between the vehicle body and the specific target at the current frame's acquisition time.
2. A longitudinal vehicle positioning method based on variable-grid image feature detection in automatic driving, characterized by comprising the following steps:
S1: on an autonomous vehicle on which a high-precision map and a GNSS+IMU system are deployed, install a forward-facing binocular camera and calibrate its intrinsic and extrinsic parameters;
S2: for specific targets in scenes of the high-precision map database, and with reference to video frames captured by the camera during real road operation, extract multiple frames containing the specific target regions to constitute a target extraction image frame sequence, perform target feature extraction, and produce a variable-grid-based feature description file;
S3: in the startup preparation stage of the autonomous vehicle, complete in sequence the initialization of the high-precision navigation system, binocular camera, image pre-processor, target detector, target tracker and target distance calculator modules;
S4: in the online operation stage of automatic driving, the whole system relies on high-precision navigation to trigger the detection process; the data sources fed into the system are the target name IDs in the high-precision map, the rough distance computed by navigation, and the pre-processed image frame sequence from the binocular camera; output the distance of the specific target in the scene according to the "detection"-"tracking"-"distance output" flow;
S5: input the distance computed and output at the camera end to the high-precision navigation module, assisting the execution of the longitudinal correction process.
3. The longitudinal vehicle positioning method based on variable-grid image feature detection in automatic driving according to claim 2, characterized in that: in step S1, the intrinsic and extrinsic parameters include: the installation position of the left camera relative to the vehicle body coordinate system; the left-camera intrinsic parameters M1, D1; the right-camera intrinsic parameters M2, D2; and the left-right extrinsic parameters R, T, wherein M1 and M2 respectively represent the focal lengths fx, fy and principal point locations cx, cy of the two cameras, in the 3*3 matrix form
M = [fx 0 cx; 0 fy cy; 0 0 1];
D1 and D2 respectively represent the image distortion coefficients of the left and right cameras; the extrinsic parameters R, T describe the rotation angle and translation distance of the right camera's position relative to the left camera.
4. The longitudinal vehicle positioning method based on variable-grid image feature detection in automatic driving according to claim 3, characterized in that: in step S2, the images for target feature extraction are taken from the video frames captured by the left camera; for a specific target, 3 frames are collected at near, medium and far distances from the vehicle body to constitute the target extraction image frame sequence.
5. The longitudinal vehicle positioning method based on variable-grid image feature detection in automatic driving according to claim 4, characterized in that: step S2 includes detecting and/or frame-selecting the specific targets in the scene image frame sequence according to a specific target detection algorithm and/or a manual labeling method, and recording the grid size of the selected frame in the feature description file.
6. The longitudinal vehicle positioning method based on variable-grid image feature detection in automatic driving according to claim 5, characterized in that: in step S2, the stored content of the feature description file includes the following definitions:
A) the name ID by which the target can be retrieved in the high-precision map database;
B) the sequence numbers assigned to the collected image frame sequence according to the distance from the target at acquisition time;
C) the key feature point sequence within the specific target grid in the image frame of each corresponding sequence number, where a key feature point sequence refers to the subset of feature points that can be matched across multiple frames, i.e. the sequence formed by the feature points that represent the same physical point throughout the image frame sequence;
D) the ORB feature descriptors of the feature points in the key feature point sequence;
E) the retrieval grid information of the specific target, including the image coordinates of the grid's upper-left corner and the grid's width and height in the image; for the manual-labeling method, the grid is the maximum bounding rectangle of the target in the frame; for a specific target detection algorithm, the grid is the rectangular window output by the target detection and localization when the detection algorithm is run on the current frame;
F) the distance to the target at acquisition time, as obtained from the conventional GNSS+IMU navigation system.
7. The longitudinal vehicle positioning method based on variable-grid image feature detection in automatic driving according to claim 6, characterized in that step S4 specifically includes the following sub-steps:
S41: the high-precision navigation system sends the target name IDs within the map retrieval range, and the feature description files of the targets initialized in step S3 are pre-read into the target detector;
S42: using the estimated target distance at trigger time from the high-precision navigation system, together with the target feature description file pre-read in step S41, determine the grid size used by the variable grid at this moment, and, using a bisection-based division method, divide the left-camera image output by the image pre-processor by this grid size to obtain the current variable grid; the calculation proceeds as follows:
(1/2)^(n+1) * LenGlobal <= LenBlock <= (1/2)^n * LenGlobal
where n is the number of bisections obtained by iterative calculation, i.e. the division count; LenGlobal is the length or width of the pre-processed camera image; LenBlock is the dimension used by the current variable grid, i.e. the length or width of a grid cell; starting from n = 1, the iteration continues until an n satisfying the above formula is found; the pre-processed left-camera input image is then evenly divided into n_y parts along its height and n_x parts along its width, and the resulting image lattice is the current variable grid;
S43: traverse the image regions produced by the variable-grid division in the previous step, perform ORB feature extraction on each region, and carry out detection and matching against the key feature point sequence and ORB feature descriptors in the target feature description file; if matching succeeds, perform the same detection and matching on the right-camera image output by the image pre-processor and jump to step S44; otherwise the image pre-processor reads in the next left-camera frame and this step is re-executed;
S44: take the left and right image frames in which target detection succeeded, together with the left and right detection rectangles, as parameters, and initialize trackers for the corresponding regions of the left and right images; the parameters required by tracker initialization are the left and right detection rectangles obtained in the previous step and the corresponding left and right images; once tracker initialization succeeds, the left and right image frames continuously output by the image pre-processor are fed into the corresponding target trackers in the subsequent process; the target tracker uses the specific target region image and part of its neighborhood as a convolution template, finds the region of highest convolution response in each newly input full scene image in the subsequent image frames, and then updates the current convolution template with the highest-response region; the specific target region image is the detection rectangle region of the specific target;
the target tracker continuously outputs the highest-response region of each frame as the tracking position of the target;
S45: judge the closeness of the tracking position to the edge of the captured image, i.e. the progressive degree, and set a progressive degree threshold; if the progressive degrees of the target bounding rectangles tracked in both the left and right images are no greater than the threshold, enter the target distance calculator, compute via the epipolar geometry constraint the distance to the target in the direction perpendicular to the binocular camera, and thereby obtain the distance between the vehicle body and the specific target at the current frame's acquisition time; otherwise, the left and right cameras are considered unable to capture the tracked target completely.
8. The longitudinal vehicle positioning method based on variable-grid image feature detection in automatic driving according to claim 7, characterized in that the progressive degree in step S45 is computed as follows:
first compute dbMinDis, the minimum of the distances from the 4 vertices of the bounding rectangle of the current specific-target tracking position to the 4 edges of the image; then, with the scene image height Hscn and width Wscn, compute the ratio of the area of the progressive-degree rectangle to the area of the scene image by the formula:
((Hscn - 2*dbMinDis) * (Wscn - 2*dbMinDis)) / (Hscn * Wscn)
the closer the ratio is to 1, the closer the tracking position is to the image edge.
9. The longitudinal vehicle positioning method based on variable-grid image feature detection in automatic driving according to claim 8, characterized in that the detection and matching process in step S43 includes the following sub-steps:
S431: the descriptor distance between ORB feature points is used to judge whether two ORB feature points are similar; between the feature points of each region and the template feature points loaded in step S41, a first round of feature point matching is completed, yielding a sequence of matched pairs from the "detection region" to the "template region"; the same operation is then performed in reverse, from "template region" to "detection region", yielding the cross-validated sequence of matched pairs; the detection region refers to the specific target region captured by the left camera, and the template region refers to the feature target region from the high-precision map database;
S432: the feature point correspondence between "detection region" and "template region" is obtained from the two matched sequences, and the invertible transform between the forward and backward mappings is defined as the remapping relation;
S433: during the current matching detection, if the number of matched point pairs reaches the threshold required to solve the mapping matrix (more than 4 point pairs suffice to compute it), the mapping matrix is computed and the remapping-relation check is performed; if it passes, this matching process succeeds, otherwise it fails.
10. The longitudinal vehicle positioning method based on variable-grid image feature detection in automatic driving according to claim 9, characterized in that step S43 further includes: if matching in step S43 keeps failing until the high-precision navigation system retrieves from the map data that the current vehicle has left the observation area of the specific target, the current detection task is declared failed and this computation flow does not continue.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710205430.0A CN107167826B (en) | 2017-03-31 | 2017-03-31 | Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107167826A true CN107167826A (en) | 2017-09-15 |
CN107167826B CN107167826B (en) | 2020-02-04 |
Family
ID=59849031
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710205430.0A Active CN107167826B (en) | 2017-03-31 | 2017-03-31 | Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107167826B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108051002A (en) * | 2017-12-04 | 2018-05-18 | 上海文什数据科技有限公司 | Transport vehicle space-location method and system based on inertia measurement auxiliary vision |
CN108196285A (en) * | 2017-11-30 | 2018-06-22 | 中山大学 | A kind of Precise Position System based on Multi-sensor Fusion |
CN108592797A (en) * | 2018-03-28 | 2018-09-28 | 华南理工大学 | A kind of dynamic measurement method and system of vehicle overall dimension and wheelbase |
CN109166155A (en) * | 2018-09-26 | 2019-01-08 | 北京图森未来科技有限公司 | A kind of calculation method and device of vehicle-mounted binocular camera range error |
CN110069593A (en) * | 2019-04-24 | 2019-07-30 | 百度在线网络技术(北京)有限公司 | Image processing method and system, server, computer-readable medium |
CN111160123A (en) * | 2019-12-11 | 2020-05-15 | 桂林长海发展有限责任公司 | Airplane target identification method and device and storage medium |
WO2020168668A1 (en) * | 2019-02-22 | 2020-08-27 | 广州小鹏汽车科技有限公司 | Slam mapping method and system for vehicle |
CN111623776A (en) * | 2020-06-08 | 2020-09-04 | 昆山星际舟智能科技有限公司 | Method for measuring distance of target by using near infrared vision sensor and gyroscope |
US20210192788A1 (en) * | 2019-12-18 | 2021-06-24 | Motional Ad Llc | Camera-to-lidar calibration and validation |
CN114200926A (en) * | 2021-11-12 | 2022-03-18 | 河南工业大学 | Local path planning method and system for unmanned vehicle |
Application Events
- 2017-03-31: CN application CN201710205430.0A filed, granted as patent CN107167826B (status: Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160223822A1 (en) * | 2012-12-19 | 2016-08-04 | Lockheed Martin Corporation | System, method and computer program product for real-time alignment of an augmented reality device |
CN105588563A (en) * | 2016-01-15 | 2016-05-18 | 武汉光庭科技有限公司 | Joint calibration method of binocular camera and inertial navigation unit in automatic driving |
CN105674993A (en) * | 2016-01-15 | 2016-06-15 | 武汉光庭科技有限公司 | Binocular camera-based high-precision visual positioning map generation system and method |
CN105868574A (en) * | 2016-04-25 | 2016-08-17 | 南京大学 | Face tracking optimization method for cameras and video-based intelligent health monitoring system |
Non-Patent Citations (2)
Title |
---|
赵明 (Zhao Ming): "Research on image-based object size measurement algorithms", 《软件导刊》 (Software Guide) *
马继红 et al. (Ma Jihong et al.): "Research on passive ranging technology based on stereo vision", 《西南师范大学学报(自然科学版)》 (Journal of Southwest China Normal University, Natural Science Edition) *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108196285A (en) * | 2017-11-30 | 2018-06-22 | 中山大学 | A precise positioning system based on multi-sensor fusion |
CN108051002B (en) * | 2017-12-04 | 2021-03-16 | 上海文什数据科技有限公司 | Transport vehicle space positioning method and system based on inertial measurement auxiliary vision |
CN108051002A (en) * | 2017-12-04 | 2018-05-18 | 上海文什数据科技有限公司 | Transport vehicle space-location method and system based on inertia measurement auxiliary vision |
CN108592797A (en) * | 2018-03-28 | 2018-09-28 | 华南理工大学 | Dynamic measurement method and system for vehicle overall dimensions and wheelbase |
CN109166155A (en) * | 2018-09-26 | 2019-01-08 | 北京图森未来科技有限公司 | Calculation method and device for vehicle-mounted binocular camera ranging error |
CN109166155B (en) * | 2018-09-26 | 2021-12-17 | 北京图森智途科技有限公司 | Method and device for calculating distance measurement error of vehicle-mounted binocular camera |
WO2020168668A1 (en) * | 2019-02-22 | 2020-08-27 | 广州小鹏汽车科技有限公司 | Slam mapping method and system for vehicle |
CN110069593A (en) * | 2019-04-24 | 2019-07-30 | 百度在线网络技术(北京)有限公司 | Image processing method and system, server, computer-readable medium |
CN110069593B (en) * | 2019-04-24 | 2021-11-12 | 百度在线网络技术(北京)有限公司 | Image processing method and system, server, computer readable medium |
CN111160123A (en) * | 2019-12-11 | 2020-05-15 | 桂林长海发展有限责任公司 | Airplane target identification method, device and storage medium |
CN111160123B (en) * | 2019-12-11 | 2023-06-09 | 桂林长海发展有限责任公司 | Aircraft target identification method, device and storage medium |
US20210192788A1 (en) * | 2019-12-18 | 2021-06-24 | Motional Ad Llc | Camera-to-lidar calibration and validation |
US11940539B2 (en) * | 2019-12-18 | 2024-03-26 | Motional Ad Llc | Camera-to-LiDAR calibration and validation |
CN111623776A (en) * | 2020-06-08 | 2020-09-04 | 昆山星际舟智能科技有限公司 | Method for measuring distance of target by using near infrared vision sensor and gyroscope |
CN111623776B (en) * | 2020-06-08 | 2022-12-02 | 昆山星际舟智能科技有限公司 | Method for measuring distance of target by using near infrared vision sensor and gyroscope |
CN114200926A (en) * | 2021-11-12 | 2022-03-18 | 河南工业大学 | Local path planning method and system for unmanned vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN107167826B (en) | 2020-02-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107167826A (en) | Vehicle longitudinal positioning system and method based on variable-grid image feature detection in automatic driving | |
US11900627B2 (en) | Image annotation | |
CN105667518B (en) | Method and device for lane detection |
CN103954275B (en) | Lane line detection and GIS map information development-based vision navigation method | |
CN112197770B (en) | Robot positioning method and positioning device thereof | |
CN102222236B (en) | Image processing system and position measuring system | |
CN103411609B (en) | Aircraft return-route planning method based on online mapping |
CN108986037A (en) | Monocular visual odometry localization method and positioning system based on the semi-direct method |
CN105930819A (en) | System for real-time identification of urban traffic lights based on monocular vision and GPS integrated navigation |
CN112734852A (en) | Robot mapping method and device and computing equipment | |
CN105976402A (en) | Method for obtaining the true scale of monocular visual odometry |
Levinson | Automatic laser calibration, mapping, and localization for autonomous vehicles | |
CN114526745B (en) | Mapping method and system for tightly coupled LiDAR and inertial odometry |
Sujiwo et al. | Monocular vision-based localization using ORB-SLAM with LIDAR-aided mapping in real-world robot challenge | |
Shunsuke et al. | GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon | |
CN114758504B (en) | Online vehicle overspeed early warning method and system based on filtering correction | |
CN113516664A (en) | Visual SLAM method based on semantic segmentation dynamic points | |
Filatov et al. | Any motion detector: Learning class-agnostic scene dynamics from a sequence of lidar point clouds | |
CN109978919A (en) | Vehicle positioning method and system based on a monocular camera |
CN116147618B (en) | Real-time state sensing method and system suitable for dynamic environment | |
Lin et al. | Semi-automatic extraction of ribbon roads from high resolution remotely sensed imagery by T-shaped template matching | |
CN114119896B (en) | Driving path planning method | |
CN115546303A (en) | Method and device for positioning indoor parking lot, vehicle and storage medium | |
CN113227713A (en) | Method and system for generating environment model for positioning | |
CN115280960A (en) | Combine harvester steering control method based on field vision SLAM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||