CN109887033A - Localization method and device - Google Patents
- Publication number
- CN109887033A (application CN201910155155.5A)
- Authority
- CN
- China
- Prior art keywords
- information
- pose
- pose information
- two-dimensional code
- map
- Prior art date
- Legal status
- Granted
Abstract
The present invention provides a localization method, comprising: obtaining image information of a parking lot; obtaining attitude information measured by an IMU; processing the image information to obtain lane line information and two-dimensional code information; determining, according to the lane line information, path trajectory information of the vehicle in the ego-vehicle coordinate system; determining first pose information according to the two-dimensional code information and a preset first map; performing feature extraction on the image information and matching the extracted feature information with a second map to determine second pose information; calculating third pose information according to the path trajectory information in the ego-vehicle coordinate system and the path trajectory information in the ego-vehicle coordinate system predicted from the global pose; and fusing the first pose information, the second pose information, the third pose information and the attitude information to obtain target pose information of the vehicle. The robustness and practicability of localization are thereby greatly improved.
Description
Technical field
The present invention relates to the technical field of data processing, and more particularly to a localization method and device.
Background art
With the rise in car ownership, parking is currently a pain point of driving and consumes a great deal of travel time. Meanwhile, with the rapid development of autonomous driving technology, low-speed valet parking in special scenarios such as underground garages has begun to attract the attention of more and more companies and research institutions. It is believed that, as the technology matures, low-cost valet parking products will be increasingly accepted by users, and valet parking technology will change the way many individuals and car rental companies pick up and park vehicles.
Since the underground garage environment is relatively complex, Global Navigation Satellite Systems (GNSS) cannot provide positioning there. The localization methods currently in common use are mainly methods based on lidar simultaneous localization and mapping (SLAM) and methods based on visual SLAM.
An underground-parking localization method based on lidar SLAM requires a mapping vehicle to collect map data of the underground garage and build a map of it. In actual use, a prior initial position must be provided to accelerate the convergence of the SLAM algorithm and improve localization accuracy. During localization, lidar SLAM matches the three-dimensional point cloud scanned from the environment against a point cloud map to obtain the real-time pose of the vehicle.
However, when environmental features are not rich enough, localization may fail. Therefore, when localizing based on lidar SLAM, information from an inertial measurement unit (IMU) is fused into the pose information of the vehicle to obtain accurate pose information. The drawbacks of this approach are that it is computationally intensive and places high demands on the computing platform, and that lidar is relatively expensive, making mass production difficult.
An underground-parking localization method based on visual SLAM is similar to one based on lidar SLAM: data must be collected in the underground garage to build a point cloud map of it. In actual use, feature extraction such as the scale-invariant feature transform (SIFT) or speeded-up robust features (SURF) is performed on the captured images, and the extracted feature points are matched against the point cloud to obtain the real-time pose of the vehicle.
However, this approach is highly dependent on lighting, and localization may fail in texture-free or low-texture regions. Even when IMU information is fused into the pose estimate, error accumulation during actual use still leads to poor localization accuracy.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a localization method and device, so as to solve the problems of high cost and low accuracy of localization in the prior art.
To solve the above problems, in a first aspect, the present invention provides a localization method, the localization method comprising:
obtaining image information of a parking lot;
obtaining attitude information measured by an inertial measurement unit (IMU);
processing the image information to obtain lane line information and two-dimensional code information;
determining, according to the lane line information, path trajectory information in the ego-vehicle coordinate system;
determining first pose information according to the two-dimensional code information and a preset first map;
performing feature extraction on the image information, and matching the extracted feature information with a second map to determine second pose information;
calculating third pose information according to the path trajectory information in the ego-vehicle coordinate system and the path trajectory information in the ego-vehicle coordinate system predicted from the global pose;
fusing the first pose information, the second pose information, the third pose information and the attitude information to obtain target pose information of the vehicle.
In one possible implementation, the processing of the image information to obtain lane line information specifically includes:
cropping each frame of image in the image information to obtain an image region;
performing grayscale processing on the image region to obtain a grayscale image;
performing binarization on the grayscale image to obtain a binary image;
filtering the binary image to obtain a filtered binary image;
determining edge points of the filtered binary image by edge detection;
determining straight lines in the filtered binary image by the Hough transform;
determining the lane line information according to the straight lines and the edge points.
In one possible implementation, the processing of the image information to obtain two-dimensional code information specifically includes:
detecting the image information to judge whether a two-dimensional code is present;
when a two-dimensional code is present, judging whether the two-dimensional code is valid;
when the two-dimensional code is valid, extracting the two-dimensional code information; the two-dimensional code information includes an index and a two-dimensional code number, and the index includes the numbers of all two-dimensional codes and the position of each two-dimensional code in the first map of the underground garage.
In one possible implementation, the determining of the first pose information according to the two-dimensional code information and the preset first map specifically includes:
determining the position of the two-dimensional code in the first map according to the number and the index;
obtaining size information of the two-dimensional code;
determining the relative pose between the two-dimensional code and the vehicle by corner-assisted localization according to the size information of the two-dimensional code;
determining the first pose information of the vehicle in the second map according to the position of the two-dimensional code in the first map and the relative pose between the two-dimensional code and the vehicle.
In one possible implementation, the performing of feature extraction on the image information and matching of the extracted feature information with the second map to determine the second pose information specifically includes:
performing feature extraction by the SVO algorithm in visual SLAM and matching against the second map to determine the second pose information of the vehicle in the second map.
In one possible implementation, the calculating of the third pose information according to the path trajectory information in the ego-vehicle coordinate system and the path trajectory information in the ego-vehicle coordinate system predicted from the global pose specifically includes:
determining actual path trajectory information in the ego-vehicle coordinate system according to the lane line information obtained by image detection;
determining predicted path trajectory information in the ego-vehicle coordinate system according to global pose prediction information of the vehicle;
performing path point matching between the actual path trajectory information and the predicted path trajectory information to determine the third pose information.
In a second aspect, the present invention provides a positioning device, the positioning device comprising:
an acquiring unit, configured to obtain image information of a parking lot;
the acquiring unit being further configured to obtain attitude information measured by an inertial measurement unit (IMU);
a processing unit, configured to process the image information to obtain lane line information and two-dimensional code information;
a determination unit, configured to determine, according to the lane line information, path trajectory information in the ego-vehicle coordinate system;
the determination unit being further configured to determine first pose information according to the two-dimensional code information and a preset first map;
an extraction unit, configured to perform feature extraction on the image information and match the extracted feature information with a second map to determine second pose information;
a computing unit, configured to calculate third pose information according to the path trajectory information in the ego-vehicle coordinate system and the path trajectory information in the ego-vehicle coordinate system predicted from the global pose;
a fusion unit, configured to fuse the first pose information, the second pose information, the third pose information and the attitude information to obtain target pose information of the vehicle.
In a third aspect, the present invention provides a device comprising a memory and a processor, the memory being configured to store a program and the processor being configured to execute any of the methods of the first aspect.
In a fourth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to execute any of the methods of the first aspect.
In a fifth aspect, the present invention provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing any of the methods of the first aspect.
By applying the localization method and device provided by the present invention, three kinds of pose information and the attitude information are fused, which greatly improves the robustness and practicability of localization.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the localization method provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic structural diagram of the positioning device provided by Embodiment 2 of the present invention.
Specific embodiment
The application is described in further detail below with reference to the accompanying drawings and embodiments. It can be understood that the specific embodiments described here are used only to explain the related invention, rather than to limit the invention. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments can be combined with each other. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 is a schematic flowchart of the localization method provided by Embodiment 1 of the present invention. The localization method can be applied in scenarios with no network signal or only a weak network signal, such as an underground garage: in an underground parking lot, since the satellite positioning signal is absent or weak, the vehicle cannot localize itself by GNSS. The vehicle here is an autonomous vehicle. As shown in Fig. 1, the localization method includes:
Step 110: obtaining the image information of the parking lot.
Here, the parking lot may be an underground parking lot, which is characterized by a poor or absent satellite positioning signal.
Step 120: obtaining the attitude information measured by the IMU.
The vehicle is equipped with a surround-view camera and an IMU. The image information of the parking lot, which may include multiple frames of images, can be obtained by the surround-view camera. The surround-view camera has the advantages of mature technology, low cost and a large field of view, and much environmental texture information can be obtained through it, which facilitates subsequent visual SLAM localization. The IMU can obtain the attitude information of the vehicle in real time, so as to improve localization accuracy. The attitude information of the vehicle includes, but is not limited to, information on the vehicle pitching up, leaning forward, turning left or turning right.
Step 130: processing the image information to obtain the lane line information and the two-dimensional code information.
Lane line detection can be performed by edge detection and the Hough transform.
Specifically, before the surround-view camera is used, intrinsic and extrinsic calibration of the camera can be performed. The intrinsic parameters are the parameters of the camera itself, such as focal length and pixel size; the extrinsic parameters are the parameters of the camera in the vehicle coordinate system, such as its position and rotation.
The image information includes multiple frames of images. Each frame can be cropped so that only the image region containing the lane line information is processed. Then, grayscale processing is performed on the image region containing the lane line information to obtain grayscale image information. Next, binarization is performed to obtain a binary image, followed by denoising and filtering, including removing Gaussian noise and filtering out line segments at very small acute angles and very large obtuse angles. Then, Canny edge detection is performed to determine the edge points and detect the contour of the lane line. Next, straight lines are detected by the Hough transform. Finally, the lane line information is obtained from the edge points and the straight lines. This lane line detection method has good robustness and adapts well to ambient light.
It should be noted that a lane line detection method based on perspective transformation, a lane line detection method based on fitting, or a lane line detection method based on learning, among others, can also be used; the application does not limit the method of lane line detection.
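The voting step at the heart of the Hough transform used in Step 130 can be sketched in miniature. This is an illustrative stand-in, not the patent's implementation: `hough_strongest_line` is a hypothetical helper that votes edge points into (rho, theta) bins of the line parameterization and returns the dominant line.

```python
import math

def hough_strongest_line(points, n_theta=180):
    """Vote each edge point (x, y) into (rho, theta) bins of the line
    parameterization x*cos(theta) + y*sin(theta) = rho, and return the
    bin with the most votes, i.e. the dominant straight line."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            key = (rho, t)
            acc[key] = acc.get(key, 0) + 1
    (rho, t), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho, math.pi * t / n_theta, votes

# Synthetic "lane line": edge points along the vertical line x = 8,
# as Canny edge detection might produce from a binarized image region.
edge_points = [(8, y) for y in range(40)]
rho, theta, votes = hough_strongest_line(edge_points)
# The dominant line is x = 8: rho = 8 at theta = 0, with all 40 votes.
```

In a production system this exhaustive voting is replaced by an optimized library routine, but the accumulator logic is the same.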
The two-dimensional codes are codes set in the garage in advance. Two-dimensional codes can be painted in the garage beforehand, and their actual size is also preset. The positions of the two-dimensional codes can be on the garage floor, on walls, or in other regions the camera can capture. Once painted, the position of each two-dimensional code in the garage is known and can be reflected on the first map. The first map is the garage map. The first map also has an index, which records the number of each two-dimensional code and its position in the first map.
First, the image information can be detected to judge whether a two-dimensional code is present. When one is present, whether the two-dimensional code is valid is judged; valid can mean that it can be read normally, and when part of the two-dimensional code is occluded it can be regarded as invalid. When the two-dimensional code is valid, the two-dimensional code information, which includes the index and the number of the two-dimensional code, is extracted.
Step 140: determining, according to the lane line information, the path trajectory information in the ego-vehicle coordinate system.
Specifically, according to the intrinsic parameters of the surround-view camera, such as the focal length, together with the depth information of the multiple frames of images, the positions the vehicle travels along the lane line can be calculated; stitching these positions together determines the path trajectory information of the vehicle in the ego-vehicle coordinate system.
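The projection from a lane-line pixel with known depth to a metric point, as described in Step 140, follows the pinhole model. The function and parameter names below are illustrative assumptions; for simplicity the camera frame stands in for the ego-vehicle frame, whereas a real system would also apply the camera extrinsics obtained during calibration.

```python
def pixel_to_vehicle(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth Z into metric
    coordinates via the pinhole model: X = (u - cx) * Z / fx, etc."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def stitch_path(pixel_depths, fx, fy, cx, cy):
    """Stitch per-frame lane-line observations into a path trajectory,
    as the stitching of positions in Step 140 would."""
    return [pixel_to_vehicle(u, v, d, fx, fy, cx, cy)
            for (u, v, d) in pixel_depths]

# A pixel at the principal point maps straight ahead of the camera;
# a pixel 100 px to its right maps 1 m laterally at 5 m depth.
path = stitch_path([(320, 240, 5.0), (420, 240, 5.0)],
                   fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```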
Step 150: determining the first pose information according to the two-dimensional code information and the preset first map.
Specifically, continuing the example, the number may be one of multiple preset numbers that distinguish the two-dimensional codes in the garage. With the number, the index can be searched to locate the position of the two-dimensional code in the first map, i.e. its position in the garage map.
The size information of the two-dimensional code can also be obtained from the image information. The size information can be the dimensions of the two-dimensional code; in addition, the two-dimensional code has three corner points (which can be calculated by a two-dimensional code corner detection algorithm). In one example, a two-dimensional code is located in the middle of a parking space in the garage and measures 45 cm x 45 cm. Combining the size of the two-dimensional code with corner-assisted localization, the position of the two-dimensional code relative to the surround-view camera can be calculated, and thereby the pose relationship between the two-dimensional code and the vehicle. From the position of the two-dimensional code in the first map and the pose relationship between the two-dimensional code and the vehicle, the absolute pose information of the vehicle in the second map, i.e. the global map, is calculated; this may be called the first pose information.
Pose information may include the position and the heading of the vehicle. The position represents the position (translation) of the vehicle relative to world coordinates and is generally expressed by coordinates (x, y); the heading represents the yaw angle, i.e. the deviation angle between the actual direction of travel of the vehicle and its desired direction of travel, which can be denoted by Φ. The pose information therefore corresponds to three-dimensional spatial information that can be expressed as (x, y, Φ). The first pose information can accordingly be expressed as n1 = (x1, y1, Φ1).
Step 160: performing feature extraction on the image information, and matching the extracted feature information with the second map to determine the second pose information.
Specifically, after feature extraction is performed on the image information, the extracted feature information is matched with the second map determined by visual SLAM, thereby determining the second pose information, i.e. the absolute pose information of the vehicle in the global map. As to how the second map is determined by visual SLAM, the application does not elaborate. By way of example and not limitation, the semi-direct visual odometry (SVO) algorithm can be used: sparse feature points are extracted and block matching is performed on 4x4 patches to estimate the motion of the camera itself. This algorithm is extremely fast, can run in real time even on low-end computing platforms, and is suitable for situations where the computing platform is limited.
The detailed process of the SVO algorithm includes tracking and depth filtering, from which the second pose information is obtained; the second pose information can be expressed as n2 = (x2, y2, Φ2). Its specific calculation process is not repeated in this application.
It can be understood that the PTAM (Parallel Tracking and Mapping) algorithm, the ORB-SLAM algorithm, the LSD-SLAM (Large-Scale Direct monocular SLAM) algorithm, and so on, can also be used to obtain the second pose information, and details are not described here again.
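A toy illustration of the map-matching stage of Step 160 — not SVO itself, whose tracking and depth filtering are beyond a sketch. It assumes ORB-style binary descriptors (here small ints) and applies nearest-neighbour matching with a distance-ratio test; all names are hypothetical.

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_to_map(query_desc, map_desc, ratio=0.8):
    """Match each extracted descriptor to its nearest map descriptor,
    keeping only matches that pass the distance-ratio test, as a
    matching stage would before pose estimation against the second map."""
    matches = []
    for qi, q in enumerate(query_desc):
        ranked = sorted((hamming(q, m), mi) for mi, m in enumerate(map_desc))
        if len(ranked) > 1 and ranked[0][0] < ratio * ranked[1][0]:
            matches.append((qi, ranked[0][1]))
    return matches

# Two image descriptors matched against three map descriptors: each
# query keeps only its unambiguous nearest neighbour.
matches = match_to_map([0b1111, 0b1010], [0b1111, 0b0000, 0b1000])
# matches == [(0, 0), (1, 2)]
```

The surviving correspondences would then feed a pose solver to produce n2.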
Step 170: calculating the third pose information according to the path trajectory information in the ego-vehicle coordinate system and the path trajectory information in the ego-vehicle coordinate system predicted from the global pose.
Specifically, based on the lane line information obtained by image detection, the actual path trajectory information of the vehicle in the ego-vehicle coordinate system is available; and based on the prediction information of the global pose (which can be calculated from the current pose information, the current speed, and so on), the decision and planning module of the vehicle can provide the predicted path trajectory information in the ego-vehicle coordinate system. Path point matching is performed between the actual path trajectory information and the predicted path trajectory information by a difference algorithm, from which the absolute pose information of the vehicle in the global map, i.e. the third pose information, is obtained; it may be called the lane-line-corrected pose information. The third pose information can be expressed as n3 = (x3, y3, Φ3).
Step 180: fusing the first pose information, the second pose information, the third pose information and the attitude information to obtain the target pose information of the vehicle.
Specifically, the first, second and third pose information and the attitude information can be fused by an extended Kalman filter (EKF) to obtain the target pose information of the vehicle, so that the vehicle is localized according to the target pose information.
By applying the localization method provided by the present invention, the three kinds of pose information and the attitude information are fused, which greatly improves the robustness and practicability of localization.
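A full EKF (motion model, Jacobians, covariance propagation) is beyond a short sketch, so the following reduces the fusion of Step 180 to a single inverse-variance weighted update over the planar pose estimates — a hedged stand-in that shows why fusing n1, n2 and n3 improves robustness: an estimate with a large variance simply contributes less. The variance values are invented for illustration; angle wrap-around is ignored.

```python
def fuse_poses(poses, variances):
    """Inverse-variance weighted fusion of pose estimates (x, y, phi).
    Each pose carries per-component variances; a smaller variance means
    a larger weight, mimicking an EKF measurement update without the
    time-propagation step."""
    fused = []
    for i in range(3):  # components x, y, phi
        weights = [1.0 / v[i] for v in variances]
        fused.append(sum(p[i] * w for p, w in zip(poses, weights))
                     / sum(weights))
    return tuple(fused)

# Suppose the visual-SLAM pose is three times less certain in x and y
# than the two-dimensional-code pose: the fused estimate leans toward
# the latter, landing near (1.0, 1.0, 0.0) rather than at the midpoint.
target = fuse_poses(
    [(0.0, 0.0, 0.0), (4.0, 4.0, 0.0)],
    [(1.0, 1.0, 1.0), (3.0, 3.0, 1.0)],
)
```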
It can be understood that in extreme cases, for example when visual SLAM loses its location and no two-dimensional code marker is detected, the fusion algorithm can degrade to a local localization algorithm based on lane lines and still reliably provide location information; after visual SLAM recovers or a two-dimensional code marker is detected, the global pose information can be quickly corrected.
Fig. 2 is a schematic structural diagram of the positioning device provided by Embodiment 2 of the present invention. As shown in Fig. 2, the positioning device is applied in the localization method and includes: an acquiring unit 210, a processing unit 220, a determination unit 230, an extraction unit 240, a computing unit 250 and a fusion unit 260.
The acquiring unit 210 is used to obtain the image information of the parking lot;
the acquiring unit 210 is also used to obtain the attitude information measured by the IMU;
the processing unit 220 is used to process the image information to obtain the lane line information and the two-dimensional code information;
the determination unit 230 is used to determine, according to the lane line information, the path trajectory information in the ego-vehicle coordinate system;
the determination unit 230 is also used to determine the first pose information according to the two-dimensional code information and the preset first map;
the extraction unit 240 is used to perform feature extraction on the image information and match the extracted feature information with the second map to determine the second pose information;
the computing unit 250 is used to calculate the third pose information according to the path trajectory information in the ego-vehicle coordinate system and the path trajectory information in the ego-vehicle coordinate system predicted from the global pose;
the fusion unit 260 is used to fuse the first pose information, the second pose information, the third pose information and the attitude information to obtain the target pose information of the vehicle.
The specific functions of the units in the device are the same as in the method, and details are not described here again.
By applying the positioning device provided by the present invention, the three kinds of pose information and the attitude information are fused, which greatly improves the robustness and practicability of localization.
Embodiment 3 of the present invention provides a device comprising a memory and a processor. The memory is used to store a program and can be connected to the processor by a bus. The memory can be non-volatile storage, such as a hard disk drive or flash memory, in which a software program and a device driver are stored. The software program can perform the various functions of the above methods provided by the embodiments of the present invention; the device driver can be a network or interface driver. The processor is used to execute the software program, which, when executed, implements the methods provided by the embodiments of the present invention.
Embodiment 4 of the present invention provides a computer program product comprising instructions which, when the computer program product is run on a computer, cause the computer to execute the method provided by Embodiment 1 of the present invention.
Embodiment 5 of the present invention provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method provided by Embodiment 1 of the present invention is implemented.
Those skilled in the art should further appreciate that the units and algorithm steps described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. In order to clearly demonstrate the interchangeability of hardware and software, the composition and steps of each example have been described generally by function in the above description. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the methods or algorithms described in conjunction with the embodiments disclosed herein can be implemented in hardware, in a software module executed by a processor, or in a combination of the two. The software module can reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium well known in the technical field.
The above specific embodiments further describe the purpose, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit the protection scope of the present invention; any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
1. A localization method, characterized in that the localization method comprises:
obtaining image information of a parking lot;
obtaining attitude information measured by an inertial measurement unit (IMU);
processing the image information to obtain lane line information and two-dimensional code information;
determining, according to the lane line information, path trajectory information of the vehicle in the ego-vehicle coordinate system;
determining first pose information according to the two-dimensional code information and a preset first map;
performing feature extraction on the image information, and matching the extracted feature information with a second map to determine second pose information;
calculating third pose information according to the path trajectory information in the ego-vehicle coordinate system and the path trajectory information in the ego-vehicle coordinate system predicted from the global pose;
fusing the first pose information, the second pose information, the third pose information and the attitude information to obtain target pose information of the vehicle.
2. The method according to claim 1, characterized in that the processing of the image information to obtain lane line information specifically includes:
cropping each frame of image in the image information to obtain an image region;
performing grayscale processing on the image region to obtain a grayscale image;
performing binarization on the grayscale image to obtain a binary image;
filtering the binary image to obtain a filtered binary image;
determining edge points of the filtered binary image by edge detection;
determining straight lines in the filtered binary image by the Hough transform;
determining the lane line information according to the straight lines and the edge points.
3. The method according to claim 1, characterized in that the processing of the image information to obtain two-dimensional code information specifically includes:
detecting the image information to judge whether a two-dimensional code is present;
when a two-dimensional code is present, judging whether the two-dimensional code is valid;
when the two-dimensional code is valid, extracting the two-dimensional code information; the two-dimensional code information includes an index and a two-dimensional code number, and the index includes the numbers of all two-dimensional codes and the position of each two-dimensional code in the first map of the underground garage.
4. The method according to claim 3, wherein the determining the first posture information according to the two-dimensional code information and the preset first map specifically includes:
determining the position of the two-dimensional code in the first map according to the number and the index;
obtaining size information of the two-dimensional code;
determining the relative pose between the two-dimensional code and the vehicle by corner-point-assisted positioning according to the size information of the two-dimensional code;
determining the first posture information of the vehicle in the second map according to the position of the two-dimensional code in the first map and the relative pose between the two-dimensional code and the vehicle.
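Corner-point-assisted positioning from the known code size is not detailed in the claim. One common approach is a pinhole similar-triangles estimate of range and bearing, sketched below; a full implementation would run PnP on all four corners, and `fx`, `cx` and the code size here are hypothetical calibration values.

```python
import math

def relative_pose_from_corners(corners_px, code_size_m, fx, cx):
    """Estimate range and bearing of a two-dimensional code from its
    image corners, using the known physical code size and a pinhole
    camera model.
    corners_px: four (u, v) corner pixels; fx: focal length in pixels;
    cx: principal point x-coordinate in pixels."""
    us = [u for u, v in corners_px]
    apparent_w = max(us) - min(us)              # code width in pixels
    distance = fx * code_size_m / apparent_w    # similar triangles
    center_u = sum(us) / 4.0
    bearing = math.atan2(center_u - cx, fx)     # offset from optical axis
    return distance, bearing

# hypothetical calibration: fx = 600 px, cx = 320 px; 0.30 m wide code
d, b = relative_pose_from_corners(
    [(350, 200), (410, 200), (410, 260), (350, 260)], 0.30, 600.0, 320.0)
```

Composing this relative pose with the code's indexed map position yields a global vehicle pose, which is the role the first posture information plays in the fusion.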
5. The method according to claim 1, wherein the performing feature extraction on the image information and matching the extracted feature information with the second map to determine the second posture information specifically includes:
performing feature extraction by the SVO algorithm in visual SLAM and matching with the second map to determine the second posture information of the vehicle in the second map.
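SVO is a semi-direct visual odometry method that minimises photometric (intensity) error rather than matching hand-crafted descriptors. The toy comparison below conveys only the intensity-matching idea; real SVO optimises the error over a continuous image warp, not over a discrete candidate set.

```python
def ssd(p, q):
    """Photometric error between two equally-sized intensity patches."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def align_patch(patch, keyframe_patches):
    """Pick the map keyframe patch with the smallest photometric error,
    echoing SVO's direct, intensity-based matching."""
    return min(range(len(keyframe_patches)),
               key=lambda i: ssd(patch, keyframe_patches[i]))
```

Matching many such patches against the second map's keyframes, then solving for the camera pose that explains the matches, is what produces the second posture information.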
6. The method according to claim 1, wherein the calculating the third posture information according to the path locus information in the vehicle coordinate system and the path locus information in the vehicle coordinate system predicted based on the global pose specifically includes:
determining actual path locus information in the vehicle coordinate system according to the lane line information obtained by image detection;
determining predicted path locus information in the vehicle coordinate system according to global pose prediction information of the vehicle;
performing path point matching on the actual locus information and the predicted locus information to determine the third posture information.
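The path point matching rule is not fixed by the claim. One plausible reading is nearest-point matching between the predicted and the camera-observed traces, yielding a mean offset that corrects the global pose prediction; the trajectories below are invented for illustration.

```python
def path_point_offset(actual, predicted):
    """Match each predicted path point to the nearest actual point and
    return the mean (dx, dy) correction; applying this correction to
    the global pose prediction is one plausible way to form the third
    posture information."""
    dx = dy = 0.0
    for px, py in predicted:
        ax, ay = min(actual,
                     key=lambda p: (p[0] - px) ** 2 + (p[1] - py) ** 2)
        dx += ax - px
        dy += ay - py
    n = len(predicted)
    return dx / n, dy / n

# lane-line path seen by the camera vs. the globally predicted path,
# with a 0.3 m lateral drift in the prediction:
actual = [(0.0, 1.5), (1.0, 1.5), (2.0, 1.5)]
predicted = [(0.0, 1.2), (1.0, 1.2), (2.0, 1.2)]
offset = path_point_offset(actual, predicted)
```

The recovered lateral offset is exactly the drift between prediction and observation, which is the correction the third posture information carries into the fusion.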
7. A positioning device, wherein the positioning device includes:
an acquiring unit, configured to acquire image information of a parking lot;
the acquiring unit being further configured to acquire posture information measured by an inertial measurement unit (IMU);
a processing unit, configured to process the image information to obtain lane line information and two-dimensional code information;
a determination unit, configured to determine path locus information in a vehicle coordinate system according to the lane line information;
the determination unit being further configured to determine first posture information according to the two-dimensional code information and a preset first map;
an extraction unit, configured to perform feature extraction on the image information and match the extracted feature information with a second map to determine second posture information;
a computing unit, configured to calculate third posture information according to the path locus information in the vehicle coordinate system and the path locus information in the vehicle coordinate system predicted based on the global pose;
a fusion unit, configured to fuse the first posture information, the second posture information, the third posture information and the posture information to obtain object pose information of the vehicle.
8. An apparatus, including a memory and a processor, wherein the memory is configured to store a program, and the processor is configured to execute the method according to any one of claims 1-6.
9. A computer program product including instructions, wherein when the computer program product runs on a computer, the computer is caused to execute the method according to any one of claims 1-6.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the method according to any one of claims 1-6 is implemented when the computer program is executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910155155.5A CN109887033B (en) | 2019-03-01 | 2019-03-01 | Positioning method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109887033A true CN109887033A (en) | 2019-06-14 |
CN109887033B CN109887033B (en) | 2021-03-19 |
Family
ID=66930187
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910155155.5A Active CN109887033B (en) | 2019-03-01 | 2019-03-01 | Positioning method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109887033B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110194779A1 (en) * | 2010-02-08 | 2011-08-11 | Cheng Zhong | Apparatus and method for detecting multi-view specific object |
CN102175222A (en) * | 2011-03-04 | 2011-09-07 | 南开大学 | Crane obstacle-avoidance system based on stereoscopic vision |
CN103196440A (en) * | 2013-03-13 | 2013-07-10 | 上海交通大学 | M sequence discrete-type artificial signpost arrangement method and related mobile robot positioning method |
US8761439B1 (en) * | 2011-08-24 | 2014-06-24 | Sri International | Method and apparatus for generating three-dimensional pose using monocular visual sensor and inertial measurement unit |
US20160063303A1 (en) * | 2014-09-02 | 2016-03-03 | Hong Kong Baptist University | Method and apparatus for eye gaze tracking |
CN106708037A (en) * | 2016-12-05 | 2017-05-24 | 北京贝虎机器人技术有限公司 | Autonomous mobile equipment positioning method and device, and autonomous mobile equipment |
CN106814737A (en) * | 2017-01-20 | 2017-06-09 | 安徽工程大学 | A kind of SLAM methods based on rodent models and RTAB Map closed loop detection algorithms |
CN107563308A (en) * | 2017-08-11 | 2018-01-09 | 西安电子科技大学 | SLAM closed loop detection methods based on particle swarm optimization algorithm |
CN107564062A (en) * | 2017-08-16 | 2018-01-09 | 清华大学 | Pose method for detecting abnormality and device |
CN108829116A (en) * | 2018-10-09 | 2018-11-16 | 上海岚豹智能科技有限公司 | Barrier-avoiding method and equipment based on monocular cam |
CN109059930A (en) * | 2018-08-31 | 2018-12-21 | 西南交通大学 | A kind of method for positioning mobile robot of view-based access control model odometer |
CN109087359A (en) * | 2018-08-30 | 2018-12-25 | 网易(杭州)网络有限公司 | Pose determines method, pose determining device, medium and calculates equipment |
CN109126121A (en) * | 2018-06-01 | 2019-01-04 | 成都通甲优博科技有限责任公司 | AR terminal interconnected method, system, device and computer readable storage medium |
Non-Patent Citations (2)
Title |
---|
CHANG H et al.: "An Improved FastSLAM Using Resampling Based on Particle Swarm Optimization", IEEE Conference on Industrial Electronics and Applications * |
LIU Shuchi: "Monocular visual SLAM positioning method for underground UAVs for the industrial internet", China Master's Theses Full-text Database, Engineering Science and Technology I * |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110349211A (en) * | 2019-06-18 | 2019-10-18 | 深圳前海达闼云端智能科技有限公司 | The method and apparatus of framing, storage medium |
CN110349211B (en) * | 2019-06-18 | 2022-08-30 | 达闼机器人股份有限公司 | Image positioning method and device, and storage medium |
CN110263209A (en) * | 2019-06-27 | 2019-09-20 | 北京百度网讯科技有限公司 | Method and apparatus for generating information |
CN112284399B (en) * | 2019-07-26 | 2022-12-13 | 北京魔门塔科技有限公司 | Vehicle positioning method based on vision and IMU and vehicle-mounted terminal |
CN112284399A (en) * | 2019-07-26 | 2021-01-29 | 北京初速度科技有限公司 | Vehicle positioning method based on vision and IMU and vehicle-mounted terminal |
WO2021017211A1 (en) * | 2019-07-29 | 2021-02-04 | 魔门塔(苏州)科技有限公司 | Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal |
CN110595459A (en) * | 2019-09-18 | 2019-12-20 | 百度在线网络技术(北京)有限公司 | Vehicle positioning method, device, equipment and medium |
CN110597266A (en) * | 2019-09-26 | 2019-12-20 | 青岛蚂蚁机器人有限责任公司 | Robot path dynamic planning method based on two-dimensional code |
CN110861082A (en) * | 2019-10-14 | 2020-03-06 | 北京云迹科技有限公司 | Auxiliary mapping method and device, mapping robot and storage medium |
CN110910311A (en) * | 2019-10-30 | 2020-03-24 | 同济大学 | Automatic splicing method for multi-channel panoramic camera based on two-dimensional code |
CN110910311B (en) * | 2019-10-30 | 2023-09-26 | 同济大学 | Automatic splicing method of multi-path looking-around camera based on two-dimension code |
CN112904331A (en) * | 2019-11-19 | 2021-06-04 | 杭州海康威视数字技术股份有限公司 | Method, device and equipment for determining movement track and storage medium |
CN112991440B (en) * | 2019-12-12 | 2024-04-12 | 纳恩博(北京)科技有限公司 | Positioning method and device for vehicle, storage medium and electronic device |
CN112991440A (en) * | 2019-12-12 | 2021-06-18 | 纳恩博(北京)科技有限公司 | Vehicle positioning method and device, storage medium and electronic device |
CN111274934A (en) * | 2020-01-19 | 2020-06-12 | 上海智勘科技有限公司 | Implementation method and system for intelligently monitoring forklift operation track in warehousing management |
CN113313966A (en) * | 2020-02-27 | 2021-08-27 | 华为技术有限公司 | Pose determination method and related equipment |
WO2021170129A1 (en) * | 2020-02-27 | 2021-09-02 | 华为技术有限公司 | Pose determination method and related device |
CN113494911A (en) * | 2020-04-02 | 2021-10-12 | 宝马股份公司 | Method and system for positioning vehicle |
CN112285738A (en) * | 2020-10-23 | 2021-01-29 | 中车株洲电力机车研究所有限公司 | Positioning method and device for rail transit vehicle |
WO2022105024A1 (en) * | 2020-11-17 | 2022-05-27 | 深圳市优必选科技股份有限公司 | Method and apparatus for determining pose of robot, robot and storage medium |
CN114619441A (en) * | 2020-12-10 | 2022-06-14 | 北京极智嘉科技股份有限公司 | Robot and two-dimensional code pose detection method |
CN114619441B (en) * | 2020-12-10 | 2024-03-26 | 北京极智嘉科技股份有限公司 | Robot and two-dimensional code pose detection method |
CN113147738A (en) * | 2021-02-26 | 2021-07-23 | 重庆智行者信息科技有限公司 | Automatic parking positioning method and device |
CN112927260A (en) * | 2021-02-26 | 2021-06-08 | 商汤集团有限公司 | Pose generation method and device, computer equipment and storage medium |
CN112927260B (en) * | 2021-02-26 | 2024-04-16 | 商汤集团有限公司 | Pose generation method and device, computer equipment and storage medium |
CN113156945A (en) * | 2021-03-31 | 2021-07-23 | 深圳市优必选科技股份有限公司 | Automatic guide vehicle and parking control method and control device thereof |
CN115661299A (en) * | 2022-12-27 | 2023-01-31 | 安徽蔚来智驾科技有限公司 | Method for constructing lane line map, computer device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109887033B (en) | 2021-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109887033A (en) | Localization method and device | |
CN110763251B (en) | Method and system for optimizing visual inertial odometer | |
CN110312912B (en) | Automatic vehicle parking system and method | |
Barth et al. | Estimating the driving state of oncoming vehicles from a moving platform using stereo vision | |
WO2019175286A1 (en) | Image annotation | |
Vatavu et al. | Stereovision-based multiple object tracking in traffic scenarios using free-form obstacle delimiters and particle filters | |
Suhr et al. | Automatic free parking space detection by using motion stereo-based 3D reconstruction | |
Pfeiffer et al. | Modeling dynamic 3D environments by means of the stixel world | |
Parra et al. | Robust visual odometry for vehicle localization in urban environments | |
CN112734852A (en) | Robot mapping method and device and computing equipment | |
US11887336B2 (en) | Method for estimating a relative position of an object in the surroundings of a vehicle and electronic control unit for a vehicle and vehicle | |
Gao et al. | Ground and aerial meta-data integration for localization and reconstruction: A review | |
CN111986261B (en) | Vehicle positioning method and device, electronic equipment and storage medium | |
CN111862673A (en) | Parking lot vehicle self-positioning and map construction method based on top view | |
CN109115232B (en) | Navigation method and device | |
CN113580134B (en) | Visual positioning method, device, robot, storage medium and program product | |
CN114758063A (en) | Local obstacle grid map construction method and system based on octree structure | |
Maier et al. | Appearance-based traversability classification in monocular images using iterative ground plane estimation | |
Birk et al. | Simultaneous localization and mapping (SLAM) | |
Barth et al. | Vehicle tracking at urban intersections using dense stereo | |
Cheda et al. | Camera egomotion estimation in the ADAS context | |
CN112747757A (en) | Method and device for providing radar data, computer program and computer-readable storage medium | |
JPH04205320A (en) | Method for evaluating moving path data | |
Pagel | Robust monocular egomotion estimation based on an iekf | |
CN116612459B (en) | Target detection method, target detection device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | ||
Address after: B4-006, Maker Plaza, 338 East Street, Huilongguan Town, Changping District, Beijing 100096
Patentee after: Beijing Idriverplus Technology Co.,Ltd.
Address before: B4-006, Maker Plaza, 338 East Street, Huilongguan Town, Changping District, Beijing 100096
Patentee before: Beijing Idriverplus Technology Co.,Ltd.