CN109887033B - Positioning method and device - Google Patents

Positioning method and device


Publication number
CN109887033B
CN109887033B (application CN201910155155.5A)
Authority
CN
China
Prior art keywords
information
dimensional code
pose
vehicle
image
Prior art date
Legal status
Active
Application number
CN201910155155.5A
Other languages
Chinese (zh)
Other versions
CN109887033A (en)
Inventor
张放
李晓飞
张德兆
王肖
霍舒豪
Current Assignee
Beijing Idriverplus Technologies Co Ltd
Original Assignee
Beijing Idriverplus Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Idriverplus Technologies Co Ltd filed Critical Beijing Idriverplus Technologies Co Ltd
Priority to CN201910155155.5A
Publication of CN109887033A
Application granted
Publication of CN109887033B

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
  • Navigation (AREA)

Abstract

The invention provides a positioning method, which comprises the following steps: acquiring image information of a parking lot; acquiring attitude information measured by the IMU; processing the image information to obtain lane line information and two-dimensional code information; determining path track information of the vehicle under the own vehicle coordinate system according to the lane line information; determining first pose information according to the two-dimensional code information and a preset first map; extracting features of the image information, matching the extracted feature information with a second map, and determining second pose information; calculating third pose information according to the path track information under the own vehicle coordinate system and the path track information under the own vehicle coordinate system predicted based on the global pose; and fusing the first pose information, the second pose information, the third pose information and the attitude information to obtain the target pose information of the vehicle. The robustness and practicability of positioning are thereby greatly improved.

Description

Positioning method and device
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a positioning method and apparatus.
Background
As automobile ownership grows, parking has become a pain point of everyday driving, and considerable travel time is spent on it. Meanwhile, with the rapid development of automatic driving technology, low-speed valet parking in specific scenarios such as underground garages has begun to attract increasing attention from companies and research institutions. It is expected that, as the technology matures, low-cost valet parking products will be accepted by more and more users, and that valet parking technology will change the way many individuals and car rental companies park and retrieve their vehicles.
Because the environment of an underground garage is complex and the Global Navigation Satellite System (GNSS) cannot provide positioning there, currently adopted positioning methods mainly include positioning based on laser radar simultaneous localization and mapping (SLAM) and positioning based on visual SLAM.
In the underground parking lot positioning method based on laser radar SLAM, a map acquisition vehicle collects map data of the underground parking lot and builds a map of it. In actual use, a prior initial position needs to be provided to accelerate SLAM convergence and improve positioning accuracy. During actual laser radar SLAM positioning, the three-dimensional point cloud obtained by scanning the environment is matched against the point cloud map to obtain the real-time pose of the vehicle.
However, positioning may fail when the environmental features are not rich enough. Therefore, when positioning is performed based on laser radar SLAM, the pose information of the vehicle is fused with the output of an inertial measurement unit (IMU) to obtain more accurate pose information. Even so, this approach involves a large amount of computation, places high performance requirements on the computing platform, and the laser radar itself is expensive, making large-scale mass production difficult.
The visual SLAM method for positioning in an underground parking lot is similar to the laser radar SLAM method in that it also requires collecting data of the underground parking lot and building a point cloud map of it. In actual use, feature extraction such as the Scale-Invariant Feature Transform (SIFT) or Speeded-Up Robust Features (SURF) is performed on the captured images, and the extracted feature points are matched with the point cloud to obtain the real-time pose of the vehicle.
However, this method depends heavily on lighting, positioning may fail in texture-less or low-texture regions, and even when pose information is fused with IMU information, error accumulation leads to poor positioning accuracy in practical use.
Disclosure of Invention
Embodiments of the present invention provide a positioning method and device, aiming to solve the problems of high positioning cost and poor positioning accuracy in the prior art.
In order to solve the above problem, in a first aspect, the present invention provides a positioning method, including:
acquiring image information of a parking lot;
acquiring attitude information measured by an inertial measurement unit IMU;
processing the image information to obtain lane line information and two-dimensional code information;
determining path track information under a self-vehicle coordinate system according to the lane line information;
determining first pose information according to the two-dimensional code information and a preset first map;
extracting features of the image information, matching the extracted feature information with a second map, and determining second pose information;
calculating third pose information according to the path track information under the own vehicle coordinate system and the path track information under the own vehicle coordinate system predicted based on the global pose;
and obtaining target pose information of the vehicle by fusing the first pose information, the second pose information, the third pose information and the attitude information.
In a possible implementation manner, the processing the image information to obtain lane line information specifically includes:
intercepting a plurality of frames of images in the image information to obtain an image area;
carrying out gray level processing on the image area to obtain a gray level image;
carrying out binarization processing on the gray level image to obtain a binarized image;
filtering the binary image to obtain a filtered binary image;
determining edge points of the filtered binary image through edge detection;
determining a straight line in the filtered binary image through the Hough transform;
and determining lane line information according to the straight line and the edge points.
In a possible implementation manner, the processing the image information to obtain two-dimensional code information specifically includes:
detecting the image information and judging whether a two-dimensional code exists or not;
when the two-dimension code exists, judging whether the two-dimension code is valid;
when the two-dimensional code is valid, extracting the two-dimensional code information; the two-dimensional code information comprises an index and a two-dimensional code number, wherein the index records the numbers of all two-dimensional codes in the underground garage and the position of each of them in the first map.
In a possible implementation manner, the determining first pose information according to the two-dimensional code information and a preset first map specifically includes:
determining the position of the two-dimensional code in a first map according to the number and the index;
acquiring size information of the two-dimensional code;
determining the relative pose of the two-dimensional code and the vehicle through corner-point-assisted positioning according to the size information of the two-dimensional code;
and determining first pose information of the vehicle in a second map according to the position of the two-dimensional code in the first map and the relative pose of the two-dimensional code and the vehicle.
In a possible implementation manner, the extracting features of the image information, matching the extracted feature information with the second map, and determining the second pose information specifically includes:
and performing feature extraction through the SVO algorithm in visual SLAM, matching with the second map, and determining the second pose information of the vehicle in the second map.
In a possible implementation manner, the calculating third pose information according to the path trajectory information in the own vehicle coordinate system and the path trajectory information in the own vehicle coordinate system predicted based on the global pose specifically includes:
determining actual path track information under a self-vehicle coordinate system according to the lane line information detected by the image;
determining predicted path track information under a self-vehicle coordinate system according to the vehicle global pose prediction information;
and matching path points between the actual path track information and the predicted path track information to determine the third pose information.
In a second aspect, the present invention provides a positioning device, comprising:
an acquisition unit configured to acquire image information of a parking lot;
the acquisition unit is also used for acquiring attitude information measured by the inertial measurement unit IMU;
the processing unit is used for processing the image information to obtain lane line information and two-dimensional code information;
the determining unit is used for determining path track information under a self-vehicle coordinate system according to the lane line information;
the determining unit is further used for determining first pose information according to the two-dimensional code information and a preset first map;
the extraction unit is used for extracting features of the image information, matching the extracted feature information with a second map, and determining second pose information;
the calculating unit is used for calculating third pose information according to the path track information under the own vehicle coordinate system and the path track information under the own vehicle coordinate system predicted based on the global pose;
and the fusion unit is used for obtaining the target pose information of the vehicle by fusing the first pose information, the second pose information, the third pose information and the attitude information.
In a third aspect, the invention provides an apparatus comprising a memory for storing a program and a processor for performing the method of any of the first aspects.
In a fourth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of the first aspect.
In a fifth aspect, the invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of the first aspects.
By applying the positioning method and device provided by the invention, the three kinds of pose information and the IMU attitude information are fused, so that the robustness and practicability of positioning are greatly improved.
Drawings
Fig. 1 is a schematic flow chart of a positioning method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a positioning device according to a second embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be further noted that, for the convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 is a schematic flow chart of a positioning method according to an embodiment of the present invention. The positioning method can be applied in scenes such as an underground parking lot, where satellite positioning signals are absent or weak and the vehicle therefore cannot be positioned by GNSS. The vehicle here is an autonomous vehicle. As shown in fig. 1, the positioning method includes:
step 110, acquiring image information of the parking lot.
The parking lot may be an underground parking lot, which is characterized by weak or absent satellite positioning signals.
And step 120, obtaining attitude information measured by the IMU.
The vehicle is provided with a surround-view camera and an IMU; image information of the parking lot can be acquired through the surround-view camera, and the image information can comprise multiple frames of images. The surround-view camera has the advantages of mature technology, low cost and a wide field of view, and can acquire more environmental texture information, thereby facilitating subsequent visual SLAM positioning. The IMU can acquire the attitude information of the vehicle in real time to improve positioning accuracy. The attitude information of the vehicle includes, but is not limited to, backward-tilt, forward-tilt, left-turn or right-turn information.
And step 130, processing the image information to obtain lane line information and two-dimensional code information.
Wherein, the lane line detection can be performed by edge detection and hough transform.
Specifically, before the surround-view camera is used, its intrinsic parameters, that is, parameters of the camera itself such as the focal length and pixel size, and its extrinsic parameters, that is, parameters of the camera in the vehicle coordinate system such as its position and orientation, may be calibrated.
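To make the role of the intrinsic parameters concrete, the sketch below projects a point from the camera frame to pixel coordinates with a plain pinhole model. The focal lengths and principal point are invented illustrative values, not the patent's calibration, and a real surround-view (fisheye) lens would additionally need a distortion model, which is omitted here.

```python
def project(point_cam, fx, fy, cx, cy):
    """Pinhole projection: map a 3-D point in the camera frame to pixel
    coordinates using the intrinsic parameters (focal lengths fx, fy and
    principal point cx, cy)."""
    x, y, z = point_cam
    return (fx * x / z + cx, fy * y / z + cy)

# Illustrative intrinsics; a real camera would use its calibrated values
u, v = project((1.0, 0.5, 2.0), 800.0, 800.0, 320.0, 240.0)
# u == 720.0, v == 440.0
```

The extrinsic calibration then maps such camera-frame quantities into the vehicle coordinate system before the lane line geometry is used.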
The image information comprises multiple frames of images; each frame can be cropped so that only the image area containing lane line information is processed. Next, grayscale processing is performed on that image area to obtain a grayscale image. Binarization is then applied to obtain a binarized image, followed by denoising and filtering, which includes removing Gaussian noise and filtering out line segments with extremely small acute or obtuse angles. Subsequently, Canny edge detection is performed to identify edge points and detect the contour of the lane line. Then, straight lines are detected by the Hough transform. Finally, lane line information is obtained from the edge points and straight lines. This lane line detection method has good robustness and adapts well to ambient light.
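The Hough step in the pipeline above can be sketched as a simple vote over (ρ, θ) bins. This toy accumulator operates on already-extracted edge points rather than an image, uses the standard parameterization ρ = x·cosθ + y·sinθ, and is only an illustration of the idea, not code from the patent.

```python
import math

def hough_strongest_line(points, n_theta=180):
    """Vote in (rho, theta) space for every edge point and return the
    (rho, theta) bin with the most votes, i.e. the dominant straight line."""
    votes = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(rho, t)] = votes.get((rho, t), 0) + 1
    (rho, t), _ = max(votes.items(), key=lambda kv: kv[1])
    return rho, math.pi * t / n_theta

# Ten edge points on the vertical line x = 5 (as left after binarization + Canny)
rho, theta = hough_strongest_line([(5, y) for y in range(10)])
# rho == 5
```

In a real implementation the votes would come from all Canny edge pixels and several peaks would be kept, one per lane line.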
Alternatively, lane line detection methods based on perspective transformation, on fitting, on learning, and the like may also be used.
The two-dimensional code is a code set up in the garage in advance. The codes are inkjet-printed beforehand, their actual sizes are preset, and they may be placed on the garage floor, on walls, or in any other area the camera can capture. Once printing is completed, the position of each two-dimensional code in the garage is known and can be recorded in the first map, which is the garage map. The first map also carries an index associating each two-dimensional code number with its location in the first map.
The image information can first be examined to determine whether a two-dimensional code is present. If one is present, it is checked whether the code is valid: a code that can be read normally is valid, while a partially occluded code is regarded as invalid. When the code is valid, the two-dimensional code information, which comprises the index and the code number, is extracted.
And step 140, determining path track information under the own vehicle coordinate system according to the lane line information.
Specifically, the position information of the vehicle driving along the lane line can be calculated from the intrinsic parameters of the surround-view camera, such as the focal length, together with the depth information of the multi-frame image information, and the path track information of the vehicle under the own vehicle coordinate system can be determined by splicing the successive position estimates.
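The splicing of successive position estimates amounts to chaining per-frame motion increments in the plane. The sketch below assumes each frame pair yields an increment (dx, dy, dphi) expressed in the vehicle frame; the increment values are invented stand-ins for the camera-derived motion, not data from the patent.

```python
import math

def splice(increments):
    """Chain per-frame (dx, dy, dphi) motion increments (vehicle frame)
    into a path expressed in the own-vehicle start frame."""
    x = y = phi = 0.0
    path = [(x, y, phi)]
    for dx, dy, dphi in increments:
        # Rotate the body-frame increment into the start frame, then add it
        x += dx * math.cos(phi) - dy * math.sin(phi)
        y += dx * math.sin(phi) + dy * math.cos(phi)
        phi += dphi
        path.append((x, y, phi))
    return path

# Two 1 m forward steps, the second ending in a 90-degree left turn in place
path = splice([(1.0, 0.0, 0.0), (1.0, 0.0, math.pi / 2)])
# path[-1] == (2.0, 0.0, math.pi / 2)
```

The resulting `path` plays the role of the path track information under the own vehicle coordinate system used in step 170.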
And step 150, determining first pose information according to the two-dimensional code information and a preset first map.
Specifically, following the earlier example, the number is a serial number preset for each of the two-dimensional codes in the garage so as to distinguish them; with this number, the index can be searched to locate the position of the two-dimensional code in the first map, that is, its position in the garage map.
The size information of the two-dimensional code can also be acquired from the image information. The size information may be the physical size of the code; in addition, the two-dimensional code has 3 corner points, which can be computed by a two-dimensional code corner detection algorithm. In one example, the two-dimensional code is located in the middle of a parking space and measures 45 cm x 45 cm; combining the code size with corner-point-assisted positioning, the relative pose of the code and the surround-view camera can be calculated, and from it the pose relation between the code and the vehicle. The absolute pose information of the vehicle in the second map, that is, the global map, is then calculated from the position of the code in the first map and the pose relation between the code and the vehicle; this absolute pose information is also called the first pose information.
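The final step, combining the code's map pose with the vehicle's pose relative to the code, is a plain planar (SE(2)) pose composition. The tag pose and offsets in the sketch below are hypothetical values for illustration, not the patent's data.

```python
import math

def se2_compose(map_pose, rel_pose):
    """Compose the tag's global pose with the vehicle pose expressed in the
    tag frame; poses are (x, y, phi) with phi the yaw angle in radians."""
    xm, ym, pm = map_pose
    xr, yr, pr = rel_pose
    return (xm + xr * math.cos(pm) - yr * math.sin(pm),
            ym + xr * math.sin(pm) + yr * math.cos(pm),
            pm + pr)

# Hypothetical tag at (10, 4) facing +x; vehicle 2 m behind the tag, aligned with it
n1 = se2_compose((10.0, 4.0, 0.0), (-2.0, 0.0, 0.0))
# n1 == (8.0, 4.0, 0.0)
```

The result plays the role of the first pose information n1 described below.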
The pose information may include the position and the heading of the vehicle: the position represents the translation of the vehicle relative to world coordinates and is generally expressed as coordinates (x, y), while the heading represents the yaw angle of the vehicle, i.e., the angle between the actual forward direction and the expected forward direction, denoted φ. The pose therefore corresponds to the three-dimensional state (x, y, φ). The first pose information may accordingly be written n1 = (x1, y1, φ1).
And step 160, extracting features of the image information, matching the extracted feature information with the second map, and determining second pose information.
Specifically, after feature extraction is performed on the image information, the extracted features are matched with a second map built using visual SLAM, so that the second pose information, that is, the absolute pose of the vehicle in the global map, is determined. How the second map is built with visual SLAM is not detailed in this application. By way of example and not limitation, sparse feature points may be extracted with the Semi-direct Visual Odometry (SVO) algorithm; in implementation, 4 x 4 patches are used for block matching to estimate the motion of the camera itself. The algorithm is extremely fast, can run in real time on a low-end computing platform, and is suitable for situations where computing resources are limited.
The specific flow of the SVO algorithm includes tracking and depth filtering, which yield the second pose information, representable as n2 = (x2, y2, φ2). The detailed calculation process is not described here.
It can be understood that the second pose information can also be obtained using the Parallel Tracking and Mapping (PTAM) algorithm, the ORB-SLAM algorithm, the Large-Scale Direct (LSD)-SLAM algorithm, and the like, which are not described again here.
And 170, calculating third pose information according to the path track information in the own vehicle coordinate system and the path track information in the own vehicle coordinate system predicted based on the global pose.
Specifically, from the lane line information detected in the image, the actual path track information of the vehicle in the own vehicle coordinate system can be obtained; and based on the prediction information of the global pose (which can be calculated from the current pose information, the current speed, and so on), the decision and planning module in the vehicle can produce the predicted path track information in the own vehicle coordinate system. Matching path points between the actual path track information and the predicted path track information through a difference algorithm then yields the absolute pose information of the vehicle in the global map, namely the third pose information, which can also be called the lane-line-corrected pose information. The third pose information may be written n3 = (x3, y3, φ3).
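The patent does not spell out its path-point matching algorithm, but one standard realization is a rigid 2-D alignment of the predicted waypoints onto the detected ones (a planar Kabsch/Umeyama step), whose result (dx, dy, dφ) is the correction to apply to the global pose. The sketch below is this illustrative stand-in, with invented waypoints.

```python
import math

def align_2d(actual, predicted):
    """Estimate the rigid transform (dx, dy, dphi) mapping the predicted
    waypoints onto the actual ones -- the pose correction."""
    n = len(actual)
    ax = sum(p[0] for p in actual) / n; ay = sum(p[1] for p in actual) / n
    px = sum(p[0] for p in predicted) / n; py = sum(p[1] for p in predicted) / n
    s_sin = s_cos = 0.0
    for (x1, y1), (x2, y2) in zip(actual, predicted):
        u, v = x2 - px, y2 - py      # centred predicted point
        w, z = x1 - ax, y1 - ay      # centred actual point
        s_cos += u * w + v * z       # dot products -> cos term
        s_sin += u * z - v * w       # cross products -> sin term
    dphi = math.atan2(s_sin, s_cos)
    dx = ax - (px * math.cos(dphi) - py * math.sin(dphi))
    dy = ay - (px * math.sin(dphi) + py * math.cos(dphi))
    return dx, dy, dphi

# Predicted track shifted by (-1, 0) relative to the actual track, no rotation
actual = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
predicted = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
dx, dy, dphi = align_2d(actual, predicted)
# dx == 1.0, dy == 0.0, dphi == 0.0
```

Applying the recovered correction to the predicted global pose gives the lane-line-corrected pose n3.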
And step 180, fusing the first pose information, the second pose information, the third pose information and the attitude information to obtain the target pose information of the vehicle.
Specifically, the first pose information, the second pose information, the third pose information, and the attitude information may be fused by an Extended Kalman Filter (EKF) to obtain the target pose information of the vehicle, from which the vehicle can then be positioned.
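A full EKF is beyond a short sketch, but the net effect of its measurement updates on three independent pose estimates can be illustrated with inverse-variance weighting. The variances below are invented for illustration; a real implementation would carry full covariance matrices and an IMU-driven motion model.

```python
def fuse(estimates):
    """Fuse independent pose estimates by inverse-variance weighting, the
    steady-state effect of the EKF measurement updates described above.
    Each estimate is ((x, y, phi), variance)."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    return tuple(
        sum(w * pose[i] for (pose, _), w in zip(estimates, weights)) / total
        for i in range(3)
    )

n1 = ((8.0, 4.0, 0.0), 0.5)   # two-dimensional-code pose, most trusted
n2 = ((8.2, 4.2, 0.0), 1.0)   # visual-SLAM pose
n3 = ((8.1, 4.1, 0.0), 1.0)   # lane-line-corrected pose
x, y, phi = fuse([n1, n2, n3])
```

The lower-variance two-dimensional-code estimate pulls the fused pose toward itself, which is exactly the behaviour the EKF formalizes.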
By applying the positioning method provided by the invention, the three kinds of pose information and the attitude information are fused, so that the robustness and practicability of positioning are greatly improved.
It can be understood that, in extreme conditions, if visual SLAM loses its positioning information and no two-dimensional code tag is detected, the fusion algorithm degrades into a lane-line-based local positioning algorithm that can still reliably provide positioning information; once visual SLAM recovers or a two-dimensional code tag is detected again, the global pose information is quickly corrected.
Fig. 2 is a schematic structural diagram of a positioning device according to a second embodiment of the present invention. As shown in fig. 2, the positioning apparatus is applied in a positioning method, and the positioning apparatus 200 includes: an acquisition unit 210, a processing unit 220, a determination unit 230, an extraction unit 240, a calculation unit 250, and a fusion unit 260.
The acquisition unit 210 is used for acquiring image information of a parking lot;
the obtaining unit 210 is further configured to obtain attitude information measured by the IMU;
the processing unit 220 is configured to process the image information to obtain lane line information and two-dimensional code information;
the determining unit 230 is configured to determine path track information in a host vehicle coordinate system according to the lane line information;
the determining unit 230 is further configured to determine first pose information according to the two-dimensional code information and a preset first map;
the extracting unit 240 is configured to perform feature extraction on the image information, match the extracted feature information with the second map, and determine second pose information;
the calculating unit 250 is configured to calculate third pose information according to the path trajectory information in the own vehicle coordinate system and the path trajectory information in the own vehicle coordinate system predicted based on the global pose;
the fusion unit 260 is configured to obtain the target pose information of the vehicle by fusing the first pose information, the second pose information, the third pose information, and the attitude information.
The specific functions of each unit in the device are the same as those in the method, and are not described in detail here.
By applying the positioning device provided by the invention, the three pose information and the posture information are fused, so that the robustness and the practicability of positioning are greatly improved.
The third embodiment of the invention provides equipment, which comprises a memory and a processor, wherein the memory is used for storing programs, and the memory can be connected with the processor through a bus. The memory may be a non-volatile memory such as a hard disk drive and a flash memory, in which a software program and a device driver are stored. The software program is capable of performing various functions of the above-described methods provided by embodiments of the present invention; the device drivers may be network and interface drivers. The processor is used for executing a software program, and the software program can realize the method provided by the embodiment of the invention when being executed.
A fourth embodiment of the present invention provides a computer program product including instructions, which, when the computer program product runs on a computer, causes the computer to execute the method provided in the first embodiment of the present invention.
The fifth embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method provided in the first embodiment of the present invention is implemented.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A positioning method, characterized in that the positioning method comprises:
the automatic driving vehicle acquires image information of a parking lot;
acquiring attitude information of the vehicle measured by an inertial measurement unit IMU; the attitude information comprises backward-tilt, forward-tilt, left-turn and right-turn information;
processing the image information to obtain lane line information and two-dimensional code information of a two-dimensional code preset in a garage;
splicing the position information of the vehicle driving along the lane line, obtained from the lane line information, to determine the path track information of the vehicle in the own vehicle coordinate system;
determining the position of the two-dimensional code in a preset first map according to the index and the two-dimensional code number in the two-dimensional code information, then acquiring the size information of the two-dimensional code, determining the relative pose of the two-dimensional code and the vehicle through corner-point-assisted positioning according to the size information of the two-dimensional code, and then determining the first pose information of the vehicle in a second map according to the position of the two-dimensional code in the first map and the relative pose of the two-dimensional code and the vehicle;
extracting the features of the image information, matching the extracted feature information with a second map, and determining second posture information;
calculating third pose information according to the path track information under the own vehicle coordinate system and the path track information under the own vehicle coordinate system predicted based on the global pose;
and obtaining target pose information of the vehicle by fusing the first pose information, the second pose information, the third pose information and the pose information.
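The patent does not specify the fusion algorithm used in the final step. As one hedged illustration only, the fusion of the three pose estimates can be sketched as a confidence-weighted average of (x, y, yaw) tuples, with yaw averaged on the unit circle to handle angle wrap-around; the weighting scheme is an assumption, not the patent's method:

```python
import numpy as np

def fuse_poses(poses, weights):
    """Confidence-weighted fusion of several (x, y, yaw) pose estimates
    into one target pose. Yaw is averaged via sin/cos components so that
    angles near the +/-pi boundary fuse correctly."""
    poses = np.asarray(poses, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                             # normalize confidences
    xy = w @ poses[:, :2]                       # weighted mean of x, y
    yaw = np.arctan2(w @ np.sin(poses[:, 2]),   # circular mean of yaw
                     w @ np.cos(poses[:, 2]))
    return np.array([xy[0], xy[1], yaw])
```

In practice, a filter-based fusion (e.g. an extended Kalman filter) would also propagate uncertainties over time; the sketch above shows only the single-step weighted combination.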
2. The method according to claim 1, wherein processing the image information to obtain the lane line information specifically comprises:
cropping a region from multiple frames of the image information to obtain an image area;
converting the image area to grayscale to obtain a grayscale image;
binarizing the grayscale image to obtain a binarized image;
filtering the binarized image to obtain a filtered binarized image;
determining edge points of the filtered binarized image by edge detection;
detecting straight lines in the filtered binarized image by the Hough transform; and
determining the lane line information from the straight lines and the edge points.
3. The method according to claim 1, wherein processing the image information to obtain the two-dimensional code information specifically comprises:
detecting the image information and judging whether a two-dimensional code is present;
when a two-dimensional code is present, judging whether the two-dimensional code is valid; and
when the two-dimensional code is valid, extracting the two-dimensional code information, the two-dimensional code information comprising an index and a two-dimensional code number, wherein the index comprises the numbers of all two-dimensional codes in the underground garage and the position of each of those two-dimensional codes in the first map.
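The index described in claim 3 can be illustrated as a simple lookup table from code number to first-map position. The code numbers, positions, and the membership-based validity check below are hypothetical stand-ins for the patent's unspecified encoding; a real system would likely also verify the decoded payload's checksum and format:

```python
# Hypothetical index: maps each two-dimensional code number pre-placed
# in the garage to its position (x, y, yaw) in the first map.
CODE_INDEX = {
    "QR-001": (12.5, 3.0, 0.0),
    "QR-002": (24.0, 3.0, 1.5708),
}

def extract_code_info(decoded_text):
    """Return the code's number and first-map position, or None when
    no code was detected or the decoded code is not in the index."""
    if not decoded_text:                 # nothing detected in the image
        return None
    if decoded_text not in CODE_INDEX:   # detected but not a valid garage code
        return None
    return {"number": decoded_text, "map_position": CODE_INDEX[decoded_text]}
```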
4. The method according to claim 1, wherein extracting features from the image information, matching the extracted feature information against the second map, and determining the second pose information specifically comprises:
performing feature extraction by the SVO algorithm in visual SLAM, matching against the second map, and determining the second pose information of the vehicle in the second map.
5. The method according to claim 1, wherein calculating the third pose information from the path trajectory information in the vehicle's own coordinate system and the path trajectory information in the vehicle's own coordinate system predicted from the global pose specifically comprises:
determining actual path trajectory information in the vehicle's own coordinate system from the lane line information detected in the image;
determining predicted path trajectory information in the vehicle's own coordinate system from global pose prediction information of the vehicle; and
matching path points of the actual path trajectory information with path points of the predicted path trajectory information to determine the third pose information.
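The patent leaves the path-point matching method open. One common way to recover a pose correction from two sets of corresponding path points is rigid 2-D alignment via SVD (the Kabsch algorithm), sketched here under the assumption that point correspondences between the actual and predicted trajectories are already established:

```python
import numpy as np

def align_paths(actual, predicted):
    """Rigid 2-D alignment of corresponding path points (Kabsch/SVD).
    Returns R, t such that actual[i] ~= R @ predicted[i] + t; this
    transform is the pose correction implied by the mismatch."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ca, cp = actual.mean(axis=0), predicted.mean(axis=0)
    H = (predicted - cp).T @ (actual - ca)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ca - R @ cp
    return R, t
```

When correspondences are not known in advance, the same alignment is typically run inside an ICP loop that alternates nearest-neighbor matching with this solve.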
6. A positioning device, characterized in that the positioning device comprises:
an acquisition unit configured to acquire image information of a parking lot;
the acquisition unit being further configured to acquire attitude information of the vehicle measured by an inertial measurement unit (IMU), the attitude information comprising upward-tilt, forward-tilt, left-turn, and right-turn information;
a processing unit configured to process the image information to obtain lane line information and two-dimensional code information of two-dimensional codes pre-placed in the garage;
a determining unit configured to stitch position information of the traveled lane lines contained in the lane line information to determine path trajectory information in the vehicle's own coordinate system;
the determining unit being further configured to determine the position of a two-dimensional code in a preset first map according to an index and a two-dimensional code number in the two-dimensional code information, then acquire size information of the two-dimensional code, determine the relative pose between the two-dimensional code and the vehicle by corner-assisted positioning according to the size information, and then determine first pose information of the vehicle in a second map according to the position of the two-dimensional code in the first map and the relative pose between the two-dimensional code and the vehicle;
an extraction unit configured to extract features from the image information, match the extracted feature information against the second map, and determine second pose information;
a calculating unit configured to calculate third pose information from the path trajectory information in the vehicle's own coordinate system and path trajectory information in the vehicle's own coordinate system predicted from the global pose; and
a fusion unit configured to fuse the first pose information, the second pose information, the third pose information, and the attitude information to obtain target pose information of the vehicle.
7. A computer device comprising a memory for storing a program and a processor for executing the method of any one of claims 1-5.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium has a computer program stored thereon which, when executed by a processor, implements the method of any one of claims 1-5.
CN201910155155.5A 2019-03-01 2019-03-01 Positioning method and device Active CN109887033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910155155.5A CN109887033B (en) 2019-03-01 2019-03-01 Positioning method and device

Publications (2)

Publication Number Publication Date
CN109887033A CN109887033A (en) 2019-06-14
CN109887033B true CN109887033B (en) 2021-03-19

Family

ID=66930187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910155155.5A Active CN109887033B (en) 2019-03-01 2019-03-01 Positioning method and device

Country Status (1)

Country Link
CN (1) CN109887033B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349211B (en) * 2019-06-18 2022-08-30 达闼机器人股份有限公司 Image positioning method and device, and storage medium
CN110263209B (en) * 2019-06-27 2021-07-09 北京百度网讯科技有限公司 Method and apparatus for generating information
CN112284399B (en) * 2019-07-26 2022-12-13 北京魔门塔科技有限公司 Vehicle positioning method based on vision and IMU and vehicle-mounted terminal
CN112308913B (en) * 2019-07-29 2024-03-29 北京魔门塔科技有限公司 Vehicle positioning method and device based on vision and vehicle-mounted terminal
CN110595459B (en) * 2019-09-18 2021-08-17 百度在线网络技术(北京)有限公司 Vehicle positioning method, device, equipment and medium
CN110597266A (en) * 2019-09-26 2019-12-20 青岛蚂蚁机器人有限责任公司 Robot path dynamic planning method based on two-dimensional code
CN110861082B (en) * 2019-10-14 2021-01-22 北京云迹科技有限公司 Auxiliary mapping method and device, mapping robot and storage medium
CN110910311B (en) * 2019-10-30 2023-09-26 同济大学 Automatic splicing method of multi-path looking-around camera based on two-dimension code
CN112904331B (en) * 2019-11-19 2024-05-07 杭州海康威视数字技术股份有限公司 Method, device, equipment and storage medium for determining moving track
CN112991440B (en) * 2019-12-12 2024-04-12 纳恩博(北京)科技有限公司 Positioning method and device for vehicle, storage medium and electronic device
CN111274934A (en) * 2020-01-19 2020-06-12 上海智勘科技有限公司 Implementation method and system for intelligently monitoring forklift operation track in warehousing management
CN113313966A (en) * 2020-02-27 2021-08-27 华为技术有限公司 Pose determination method and related equipment
CN113494911B (en) * 2020-04-02 2024-06-07 宝马股份公司 Method and system for positioning vehicle
CN112285738B (en) * 2020-10-23 2023-01-31 中车株洲电力机车研究所有限公司 Positioning method and device for rail transit vehicle
CN112097768B (en) * 2020-11-17 2021-03-02 深圳市优必选科技股份有限公司 Robot posture determining method and device, robot and storage medium
CN114619441B (en) * 2020-12-10 2024-03-26 北京极智嘉科技股份有限公司 Robot and two-dimensional code pose detection method
CN112712558B (en) * 2020-12-25 2024-11-05 北京三快在线科技有限公司 Positioning method and device for unmanned equipment
CN113147738A (en) * 2021-02-26 2021-07-23 重庆智行者信息科技有限公司 Automatic parking positioning method and device
CN112927260B (en) * 2021-02-26 2024-04-16 商汤集团有限公司 Pose generation method and device, computer equipment and storage medium
CN113156945A (en) * 2021-03-31 2021-07-23 深圳市优必选科技股份有限公司 Automatic guide vehicle and parking control method and control device thereof
CN115661299B (en) * 2022-12-27 2023-03-21 安徽蔚来智驾科技有限公司 Method for constructing lane line map, computer device and storage medium
CN117830604B (en) * 2024-03-06 2024-05-10 成都睿芯行科技有限公司 Two-dimensional code anomaly detection method and medium for positioning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175222A (en) * 2011-03-04 2011-09-07 南开大学 Crane obstacle-avoidance system based on stereoscopic vision
CN106814737A (en) * 2017-01-20 2017-06-09 安徽工程大学 A kind of SLAM methods based on rodent models and RTAB Map closed loop detection algorithms
CN107563308A (en) * 2017-08-11 2018-01-09 西安电子科技大学 SLAM closed loop detection methods based on particle swarm optimization algorithm
CN107564062A (en) * 2017-08-16 2018-01-09 清华大学 Pose method for detecting abnormality and device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102147851B (en) * 2010-02-08 2014-06-04 株式会社理光 Device and method for judging specific object in multi-angles
US8761439B1 (en) * 2011-08-24 2014-06-24 Sri International Method and apparatus for generating three-dimensional pose using monocular visual sensor and inertial measurement unit
CN103196440B (en) * 2013-03-13 2015-07-08 上海交通大学 M sequence discrete-type artificial signpost arrangement method and related mobile robot positioning method
US9563805B2 (en) * 2014-09-02 2017-02-07 Hong Kong Baptist University Method and apparatus for eye gaze tracking
CN106708037A (en) * 2016-12-05 2017-05-24 北京贝虎机器人技术有限公司 Autonomous mobile equipment positioning method and device, and autonomous mobile equipment
CN109126121B (en) * 2018-06-01 2022-01-04 成都通甲优博科技有限责任公司 AR terminal interconnection method, system, device and computer readable storage medium
CN109087359B (en) * 2018-08-30 2020-12-08 杭州易现先进科技有限公司 Pose determination method, pose determination apparatus, medium, and computing device
CN109059930B (en) * 2018-08-31 2020-12-22 西南交通大学 Mobile robot positioning method based on visual odometer
CN108829116B (en) * 2018-10-09 2019-01-01 上海岚豹智能科技有限公司 Barrier-avoiding method and equipment based on monocular cam

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chang H et al. "An Improved FastSLAM Using Resampling Based on Particle Swarm Optimization." IEEE Conference on Industrial Electronics and Applications, 2016, pp. 229-234. *
Liu Shuchi. "Monocular Visual SLAM Positioning Method for Underground UAVs Oriented to the Industrial Internet." China Master's Theses Full-text Database, Engineering Science and Technology I, no. 6, 2017-06-15, pp. B021-58. *

Also Published As

Publication number Publication date
CN109887033A (en) 2019-06-14

Similar Documents

Publication Publication Date Title
CN109887033B (en) Positioning method and device
CN109143207B (en) Laser radar internal reference precision verification method, device, equipment and medium
JP6670071B2 (en) Vehicle image recognition system and corresponding method
CN112101092A (en) Automatic driving environment sensing method and system
US10872246B2 (en) Vehicle lane detection system
US20190005667A1 (en) Ground Surface Estimation
CN110674705B (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN112740225B (en) Method and device for determining road surface elements
CN111091037B (en) Method and device for determining driving information
CN112781599B (en) Method for determining the position of a vehicle
WO2004072901A1 (en) Real-time obstacle detection with a calibrated camera and known ego-motion
CN114485698B (en) Intersection guide line generation method and system
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
CN115205803A (en) Automatic driving environment sensing method, medium and vehicle
CN114549286A (en) Lane line generation method and device, computer-readable storage medium and terminal
Hu et al. Accurate global trajectory alignment using poles and road markings
CN113580134A (en) Visual positioning method, device, robot, storage medium and program product
CN116643291A (en) SLAM method for removing dynamic targets by combining vision and laser radar
CN114419573A (en) Dynamic occupancy grid estimation method and device
CN114248778A (en) Positioning method and positioning device of mobile equipment
Eraqi et al. Static free space detection with laser scanner using occupancy grid maps
KR102540624B1 (en) Method for create map using aviation lidar and computer program recorded on record-medium for executing method therefor
CN116142172A (en) Parking method and device based on voxel coordinate system
US11733373B2 (en) Method and device for supplying radar data
KR102675138B1 (en) Method for calibration of multiple lidars, and computer program recorded on record-medium for executing method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096

Patentee after: Beijing Idriverplus Technology Co.,Ltd.

Address before: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096

Patentee before: Beijing Idriverplus Technology Co.,Ltd.