CN109711363B - Vehicle positioning method, device, equipment and storage medium - Google Patents

Vehicle positioning method, device, equipment and storage medium Download PDF

Info

Publication number
CN109711363B
CN109711363B
Authority
CN
China
Prior art keywords
image frame
image
determining
sequence
reference image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811631573.9A
Other languages
Chinese (zh)
Other versions
CN109711363A (en)
Inventor
姚萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811631573.9A priority Critical patent/CN109711363B/en
Publication of CN109711363A publication Critical patent/CN109711363A/en
Application granted granted Critical
Publication of CN109711363B publication Critical patent/CN109711363B/en

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a vehicle positioning method, device, equipment, and storage medium. The method comprises: acquiring a reference image sequence and a current image sequence of the vehicle's surroundings, the reference image sequence comprising a plurality of reference image frames and the current image sequence comprising a plurality of second image frames; determining, for each second image frame in the current image sequence, at least one first image frame in the reference image sequence that matches the second image frame; and determining the vehicle position corresponding to the second image frame according to the at least one matching first image frame. Embodiments of the invention achieve higher vehicle positioning efficiency.

Description

Vehicle positioning method, device, equipment and storage medium
Technical Field
The invention relates to the field of automatic driving, and in particular to a vehicle positioning method, device, equipment, and storage medium.
Background
With advances in science and technology, especially in automobile manufacturing and information technology, automatic driving has become a research hotspot in recent years. Autonomous driving systems typically comprise modules for self-positioning, environment perception, decision planning, and motion control. Self-positioning is the foundation of the entire system, and its accuracy requirements are correspondingly high.
In the related art, vehicle self-positioning generally uses an algorithm based on sparse point clouds: point cloud matching is performed between an acquired image sequence of the vehicle's surroundings and a reference image sequence.
Disclosure of Invention
The invention provides a vehicle positioning method, device, equipment, and storage medium, aiming to improve vehicle positioning efficiency.
In a first aspect, the present invention provides a vehicle positioning method, comprising:
acquiring a reference image sequence and a current image sequence of the surrounding environment of the vehicle; the reference image sequence comprises a plurality of reference image frames; the current image sequence comprises a plurality of second image frames;
determining, for each of the second image frames in the current image sequence, at least one first image frame in the reference image sequence that matches the second image frame;
and determining the vehicle position corresponding to the second image frame according to at least one first image frame matched with the second image frame.
Optionally, determining the at least one first image frame matched with the second image frame according to the global timing characteristics of the reference image frame and the second image frame in the reference image sequence includes:
respectively calculating the similarity of the global time sequence characteristics of each reference image frame and the second image frame in the reference image sequence;
determining the at least one first image frame matched with the second image frame according to the similarity of the global time sequence characteristics of the reference image frames and the second image frame.
Optionally, the determining, according to the similarity of the global timing features of the respective reference image frames and the second image frame, the at least one first image frame matched with the second image frame includes:
and using a preset number of reference image frames near the reference image frame with the maximum similarity as the at least one first image frame matched with the second image frame.
Optionally, the determining, according to at least one first image frame matched with the second image frame, a vehicle position corresponding to the second image frame includes:
respectively carrying out feature point matching on each first image frame and the second image frame, and determining a first image frame matched with the second image frame;
and determining the vehicle position corresponding to the second image frame according to the first image frame.
Optionally, the determining the first image frame matching the second image frame includes:
and respectively carrying out Scale Invariant Feature Transform (SIFT) feature matching on the point cloud of each first image frame and the point cloud of the second image frame, and determining the first image frame matched with the second image frame.
Optionally, the determining, according to the first image frame, the vehicle position corresponding to the second image frame includes:
and determining the vehicle position corresponding to the second image frame according to the vehicle position determined by the motion recovery structure SFM model corresponding to the first image frame.
In a second aspect, the present invention provides a vehicle positioning apparatus comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a reference image sequence and a current image sequence of the surrounding environment of the vehicle; the reference image sequence comprises a plurality of reference image frames; the current image sequence comprises a plurality of second image frames;
a preprocessing module, configured to determine, for each second image frame in the current image sequence, at least one first image frame in the reference image sequence that matches the second image frame;
and the processing module is used for determining the vehicle position corresponding to the second image frame according to at least one first image frame matched with the second image frame.
Optionally, the preprocessing module is specifically configured to:
determining the at least one first image frame matched with the second image frame according to the global time sequence characteristics of a reference image frame and the second image frame in the reference image sequence.
Optionally, the preprocessing module is specifically configured to:
respectively calculating the similarity of the global time sequence characteristics of each reference image frame and the second image frame in the reference image sequence;
determining the at least one first image frame matched with the second image frame according to the similarity of the global time sequence characteristics of the reference image frames and the second image frame.
Optionally, the preprocessing module is specifically configured to:
and using a preset number of reference image frames near the reference image frame with the maximum similarity as the at least one first image frame matched with the second image frame.
Optionally, the processing module is specifically configured to:
respectively carrying out feature point matching on each first image frame and the second image frame, and determining a first image frame matched with the second image frame;
and determining the vehicle position corresponding to the second image frame according to the first image frame.
Optionally, the processing module is specifically configured to:
and respectively carrying out Scale Invariant Feature Transform (SIFT) feature matching on the point cloud of each first image frame and the point cloud of the second image frame, and determining the first image frame matched with the second image frame.
Optionally, the processing module is specifically configured to:
and determining the vehicle position corresponding to the second image frame according to the vehicle position determined by the motion recovery structure SFM model corresponding to the first image frame.
In a third aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the method described in any one of the first aspect.
In a fourth aspect, an embodiment of the present invention provides an electronic device, including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of the first aspects via execution of the executable instructions.
The vehicle positioning method, device, equipment, and storage medium provided by embodiments of the invention acquire a reference image sequence and a current image sequence of the vehicle's surroundings, the reference image sequence comprising a plurality of reference image frames and the current image sequence comprising a plurality of second image frames; determine, for each second image frame in the current image sequence, at least one first image frame in the reference image sequence that matches the second image frame; and determine the vehicle position corresponding to the second image frame according to the at least one matching first image frame. Because the reference image sequence is matched against the current image sequence in advance, the approximate range of reference image frames corresponding to the current second image frame is located first. This reduces the amount of computation in subsequent positioning, reduces interference from road sections in different places with similar scenery, and makes vehicle positioning more efficient.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is an application scenario diagram provided in an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram illustrating a vehicle locating method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of a vehicle locating method provided by the present invention;
FIG. 4 is a schematic structural diagram of an embodiment of a vehicle positioning device provided by the present invention;
fig. 5 is a schematic structural diagram of an embodiment of an electronic device provided in the present invention.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terms "comprising" and "having," and any variations thereof, in the description and claims of this invention and the drawings described herein are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Firstly, the application scene related to the invention is introduced:
the vehicle positioning method provided by the embodiment of the invention is applied to a vehicle self-positioning scene to improve the positioning efficiency, for example, the vehicle self-positioning in an automatic driving scene. The vehicle is, for example, an autonomous vehicle or a general vehicle.
A vehicle's ability to perceive its environment is the key measure of its degree of intelligence, and accurate positioning is the basis of that perception. Currently, a vehicle is usually positioned by combining GPS with an inertial navigation device. However, because the errors in the data output by GPS and inertial navigation are large, high-precision positioning is difficult. In the related art, vehicle positioning is generally performed by matching sparse point clouds, which requires a large number of feature point matches; the computational complexity is therefore high and the efficiency low.
Therefore, the method of the embodiment of the invention reduces the matching range of the sparse point cloud in advance, greatly reduces the time complexity of the positioning algorithm and improves the efficiency of vehicle positioning.
Fig. 1 is an application scenario diagram according to an embodiment of the present invention, and optionally, as shown in fig. 1, the application scenario includes a server 11 and an electronic device 12; the electronic device 12 may be an in-vehicle terminal on a vehicle, or a controller of the vehicle.
The electronic device 12 and the server 11 may be connected via a network, for example, a communication network such as 3G, 4G, or Wireless Fidelity (WIFI).
The method provided by the present invention can be implemented by the electronic device 12 such as a processor executing corresponding software codes, or can be implemented by the electronic device 12 executing corresponding software codes and performing data interaction with the server 11, for example, the server executes a part of operations to control the electronic device to execute the positioning method.
The following embodiments are described with the electronic device as the executing body, and an autonomous vehicle is used as an example.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a schematic flow chart of an embodiment of a vehicle positioning method provided by the present invention. As shown in fig. 2, the method provided by this embodiment includes:
step 201, acquiring a reference image sequence and a current image sequence of the surrounding environment of a vehicle; the reference image sequence includes a plurality of reference image frames; the current image sequence includes a plurality of second image frames.
Specifically, the reference image sequence may be an image sequence of the vehicle's surroundings acquired the first time the vehicle travels the current route, for example by the vehicle's on-board camera. The reference image sequence includes a plurality of reference image frames.
The current image sequence is a currently acquired image sequence of the vehicle surroundings, comprising a plurality of second image frames.
A structure-from-motion (SFM) model M, represented by a sparse point cloud, may be generated from the reference image sequence in advance; the SFM model records the point cloud corresponding to each reference image frame, that is:
S_b = {f_i | p_m ∈ f_i, P_m ∈ M}

where S_b is the reference image sequence, f_i is a reference image frame, P_m is a sparse point in the SFM model, and p_m is its corresponding 2D feature point in f_i; i is an integer greater than 1.
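To make this bookkeeping concrete, the following is a minimal illustrative sketch — not the patent's implementation, and all names are hypothetical — of an SFM model M that records, for each reference frame f_i, which sparse 3D points P_m it observes and at which 2D feature points p_m:

```python
from dataclasses import dataclass, field

@dataclass
class SfmModel:
    """Sketch of a sparse SFM model M: 3D points plus per-frame observations."""
    # point id -> 3D coordinates (a sparse point P_m in M)
    points3d: dict = field(default_factory=dict)
    # frame id -> list of (point id, 2D feature point p_m) observed in frame f_i
    observations: dict = field(default_factory=dict)

    def frame_point_cloud(self, frame_id):
        """Return the sparse 3D points observed in reference frame `frame_id`."""
        return [self.points3d[pid] for pid, _ in self.observations.get(frame_id, [])]

# Toy example: one 3D point seen at pixel (120.5, 88.0) of reference frame 5.
m = SfmModel()
m.points3d[0] = (1.0, 2.0, 3.0)
m.observations[5] = [(0, (120.5, 88.0))]
print(m.frame_point_cloud(5))  # prints [(1.0, 2.0, 3.0)]
```

A real system would populate such a structure from an offline reconstruction of the reference sequence; here the contents are filled in by hand purely for illustration.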
Step 202, for each second image frame in the current image sequence, at least one first image frame in the reference image sequence that matches the second image frame is determined.
Specifically, for any second image frame f_q in the current image sequence, at least one matching first image frame is found among the reference image frames f_i. The matching first image frames may be determined, for example, according to the global timing features of the image frames. A global timing feature may be obtained by down-sampling an image frame and locally normalizing it; it is a global feature of the image frame over the time domain.
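The down-sample-and-normalize step described above can be sketched as follows. This is a simplified illustration: the patent does not specify block sizes or the exact normalization scheme, so the 2×2 average pooling and the whole-frame (rather than patch-wise) normalization below are assumptions:

```python
def downsample(img, k):
    """Average-pool a 2D grayscale image (list of lists) with k x k blocks."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(0, h - h % k, k):
        row = []
        for c in range(0, w - w % k, k):
            block = [img[r + i][c + j] for i in range(k) for j in range(k)]
            row.append(sum(block) / (k * k))
        out.append(row)
    return out

def normalize(img, eps=1e-6):
    """Zero-mean, unit-variance normalization of the (small) image."""
    vals = [v for row in img for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    std = (var + eps) ** 0.5
    return [[(v - mean) / std for v in row] for row in img]

def global_feature(img, k=2):
    """Global descriptor of one frame: down-sample, normalize, flatten."""
    small = normalize(downsample(img, k))
    return [v for row in small for v in row]

feat = global_feature([[1, 2, 3, 4], [5, 6, 7, 8],
                       [9, 10, 11, 12], [13, 14, 15, 16]])
```

The resulting low-dimensional vector is what the similarity function compares across frames; down-sampling makes it cheap to compute and normalization makes it robust to global brightness changes.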
And step 203, determining the vehicle position corresponding to the second image frame according to the at least one first image frame matched with the second image frame.
Specifically, at least one first image frame is subjected to feature matching with the second image frame, and a first image frame matched with the second image frame, namely a first image frame most similar to the second image frame, is determined.
And determining the vehicle position corresponding to the second image frame according to the SFM model corresponding to the first image frame.
In this embodiment, the global timing features are used to find the rough range of reference image frames corresponding to the current second image frame in the reference image sequence, for example 3 to 5 first image frames, which reduces the amount of feature-matching computation and the interference from road sections in different places with similar scenery.
The method comprises: acquiring a reference image sequence and a current image sequence of the vehicle's surroundings, the reference image sequence comprising a plurality of reference image frames and the current image sequence comprising a plurality of second image frames; determining, for each second image frame in the current image sequence, at least one first image frame in the reference image sequence that matches the second image frame; and determining the vehicle position corresponding to the second image frame according to the at least one matching first image frame. Because the reference image sequence is matched against the current image sequence in advance, the approximate range of reference image frames corresponding to the current second image frame is located first, which reduces the computation required for subsequent positioning, reduces interference from similar-looking road sections in different places, and makes vehicle positioning more efficient.
On the basis of the foregoing embodiment, optionally, step 202 may be specifically implemented by the following method:
and determining at least one first image frame matched with a second image frame according to the global time sequence characteristics of the reference image frame and the second image frame in the reference image sequence.
As shown in fig. 3, global feature matching is performed between the reference image sequence and the current image sequence, that is, matching according to the global timing features of the image frames, to lock an initial position: the range of reference image frames, i.e. the first image frames, corresponding to the second image frame in the reference image sequence.
The method can be specifically realized by the following steps:
respectively calculating the similarity of the global time sequence characteristics of each reference image frame and the second image frame in the reference image sequence;
and determining at least one first image frame matched with the second image frame according to the similarity of the global time sequence characteristics of each reference image frame and the second image frame.
For any second image frame f_q in the current image sequence, at least one first image frame is found among the reference image frames f_i according to the similarity of the global timing features, and an initial position set L_init is established:

L_init(f_q) = {f_{j-w}, f_{j-w+1}, …, f_j, …, f_{j+w-1}, f_{j+w}},

where

j = argmax_i D(F(f_q), F(f_i)),

w is a preset value, F(·) is the function that computes the global timing feature, D(·) is the function that computes similarity, and j is the index of the reference image frame with the maximum similarity.
Further, a preset number of reference image frames near the reference image frame having the greatest similarity may be used as the at least one first image frame matched with the second image frame.
Specifically, a preset number of reference image frames adjacent to f_j, namely {f_{j-w}, f_{j-w+1}, …, f_j, …, f_{j+w-1}, f_{j+w}}, may be used as the first image frames.
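The selection of the initial position set can be sketched as follows. This is a simplified illustration: cosine similarity stands in for the unspecified similarity function D(·), and the window is clamped at the ends of the sequence, a boundary case the patent leaves open:

```python
def cosine_similarity(a, b):
    """Assumed D(.): cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def initial_position_set(query_feat, ref_feats, w):
    """Indices of the up-to-(2w+1) reference frames around the best match j."""
    sims = [cosine_similarity(query_feat, f) for f in ref_feats]
    j = max(range(len(sims)), key=sims.__getitem__)  # argmax_i D(F(f_q), F(f_i))
    lo, hi = max(0, j - w), min(len(ref_feats) - 1, j + w)  # clamp at ends
    return list(range(lo, hi + 1))

refs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
window = initial_position_set([0.0, 1.0], refs, w=1)  # best match is index 2
```

Only the frames in this small window take part in the subsequent, expensive point cloud matching, which is where the efficiency gain comes from.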
In an embodiment of the invention, consecutive reference image frames whose similarity exceeds a preset threshold may instead be used as the at least one first image frame.
In other embodiments of the present invention, the first image frame may be determined in other manners, which is not limited by the embodiments of the present invention.
In this embodiment, global features are used for pre-matching to quickly locate the relative position of the current image frame in the SFM model, and that position is taken as the initial position result. This greatly reduces the number of sparse points that subsequently participate in the computation, speeds up vehicle positioning, and reduces the interference of similar-looking road sections in different places on sparse point cloud matching.
On the basis of the foregoing embodiment, optionally, step 203 may be specifically implemented by:
respectively carrying out feature point matching on each first image frame and each second image frame, and determining the first image frame matched with the second image frame;
and determining the vehicle position corresponding to the second image frame according to the first image frame.
Specifically, as shown in fig. 3, feature point matching, such as SIFT feature matching, is performed between each first image frame and the second image frame to determine the first image frame that matches the second image frame. The vehicle position corresponding to the second image frame, i.e. the final positioning result, is then obtained from the vehicle position determined by the structure-from-motion (SFM) model corresponding to that first image frame.
Further, determining the first image frame matched with the second image frame may specifically be implemented as follows:
and respectively carrying out Scale Invariant Feature Transform (SIFT) feature matching on the point cloud of each first image frame and the point cloud of the second image frame, and determining the first image frame matched with the second image frame.
Further, according to the first image frame, the vehicle position corresponding to the second image frame is determined, which may specifically be implemented as follows:
and determining the vehicle position corresponding to the second image frame according to the vehicle position determined by the motion recovery structure SFM model corresponding to the first image frame.
Specifically, SIFT point cloud matching is performed between the first image frames in the initial position set and the current second image frame, and the vehicle position is computed as:

X(f_q) = (x, y, z) = O(f_q, L_init(f_q)),

where X(·) is the vehicle position in the world coordinate system corresponding to the current second image frame f_q, (x, y, z) are the three-dimensional coordinates of the vehicle position, and O(·) is the SIFT feature matching function.
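The local matching stage can be sketched as follows. This is a toy stand-in: real SIFT descriptors and the SFM-based pose computation are replaced by plain descriptor lists and a per-frame position lookup, so it only illustrates the "pick the best-matching first image frame, then read off its position" flow, not the patent's actual O(·):

```python
def match_count(desc_q, desc_r, ratio=0.8):
    """Count ratio-test matches between two descriptor sets (toy SIFT stand-in)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    count = 0
    for d in desc_q:
        ds = sorted(dist(d, r) for r in desc_r)
        if len(ds) >= 2 and ds[0] < ratio * ds[1]:  # Lowe-style ratio test
            count += 1
        elif len(ds) == 1:
            count += 1
    return count

def locate(desc_q, candidates, poses):
    """candidates: frame id -> descriptor set of a first image frame in L_init;
    poses: frame id -> vehicle position recorded for that frame (from the SFM model)."""
    best = max(candidates, key=lambda fid: match_count(desc_q, candidates[fid]))
    return poses[best]  # (x, y, z) of the best-matching first image frame

q = [(0.0, 0.0), (1.0, 1.0)]
cands = {1: [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)], 2: [(9.0, 9.0), (9.0, 8.0)]}
poses = {1: (1.0, 2.0, 3.0), 2: (4.0, 5.0, 6.0)}
position = locate(q, cands, poses)
```

In a real pipeline the matched 2D-3D correspondences would feed a pose solver rather than a table lookup; the point here is only that matching runs over the handful of frames in the initial position set instead of the whole reference sequence.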
Furthermore, in the local feature matching stage, a graphics processing unit (GPU) can be used to accelerate the feature matching computation, which preserves the accuracy of feature matching and positioning while reducing the time overhead of the algorithm.
In summary, the method of the embodiments of the invention uses global features for pre-matching to quickly locate the relative position of the current image frame in the SFM model, takes that as the initial position result, and performs sparse point cloud matching on that basis to compute the vehicle position. This greatly reduces the number of sparse points participating in the computation, speeds up positioning, and improves the efficiency of vehicle positioning.
Fig. 4 is a structural diagram of an embodiment of a vehicle positioning device provided in the present invention, and as shown in fig. 4, the vehicle positioning device of the present embodiment includes:
an obtaining module 401, configured to obtain a reference image sequence and a current image sequence of a surrounding environment of a vehicle; the reference image sequence comprises a plurality of reference image frames; the current image sequence comprises a plurality of second image frames;
a preprocessing module 402, configured to determine, for each of the second image frames in the current image sequence, at least one first image frame in the reference image sequence that matches the second image frame;
a processing module 403, configured to determine, according to at least one first image frame matched with the second image frame, a vehicle position corresponding to the second image frame.
Optionally, the preprocessing module 402 is specifically configured to:
determining the at least one first image frame matched with the second image frame according to the global time sequence characteristics of a reference image frame and the second image frame in the reference image sequence.
Optionally, the preprocessing module 402 is specifically configured to:
respectively calculating the similarity of the global time sequence characteristics of each reference image frame and the second image frame in the reference image sequence;
determining the at least one first image frame matched with the second image frame according to the similarity of the global time sequence characteristics of the reference image frames and the second image frame.
Optionally, the preprocessing module 402 is specifically configured to:
and using a preset number of reference image frames near the reference image frame with the maximum similarity as the at least one first image frame matched with the second image frame.
Optionally, the processing module 403 is specifically configured to:
respectively carrying out feature point matching on each first image frame and the second image frame, and determining a first image frame matched with the second image frame;
and determining the vehicle position corresponding to the second image frame according to the first image frame.
Optionally, the processing module 403 is specifically configured to:
and respectively carrying out Scale Invariant Feature Transform (SIFT) feature matching on the point cloud of each first image frame and the point cloud of the second image frame, and determining the first image frame matched with the second image frame.
Optionally, the processing module 403 is specifically configured to:
and determining the vehicle position corresponding to the second image frame according to the vehicle position determined by the motion recovery structure SFM model corresponding to the first image frame.
The apparatus of this embodiment may be configured to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
Fig. 5 is a structural diagram of an embodiment of an electronic device provided in the present invention, and as shown in fig. 5, the electronic device includes:
a processor 501, and a memory 502 for storing executable instructions for the processor 501.
Optionally, the electronic device may further include: a communication interface 503 for communicating with other devices, such as the server.
The above components may communicate over one or more buses.
The processor 501 is configured to execute the corresponding method in the foregoing method embodiment by executing the executable instruction, and the specific implementation process of the method may refer to the foregoing method embodiment, which is not described herein again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method in the foregoing method embodiment is implemented.
Embodiments of the present application further provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the vehicle positioning method executed by the electronic device in the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (9)

1. A vehicle positioning method, characterized by comprising:
acquiring a reference image sequence and a current image sequence of the surroundings of the vehicle, wherein the reference image sequence comprises a plurality of reference image frames and the current image sequence comprises a plurality of second image frames;
determining, for each of the second image frames in the current image sequence, at least one first image frame in the reference image sequence that matches the second image frame;
determining the vehicle position corresponding to the second image frame according to the at least one first image frame that matches the second image frame;
wherein the determining at least one first image frame in the reference image sequence that matches the second image frame comprises:
determining the at least one first image frame that matches the second image frame according to global temporal features of the reference image frames in the reference image sequence and of the second image frame.
2. The method according to claim 1, wherein determining the at least one first image frame that matches the second image frame according to the global temporal features of the reference image frames and of the second image frame comprises:
calculating, for each reference image frame in the reference image sequence, the similarity between the global temporal features of the reference image frame and of the second image frame;
determining the at least one first image frame that matches the second image frame according to the similarity of the global temporal features of each reference image frame and the second image frame.
3. The method according to claim 2, wherein determining the at least one first image frame that matches the second image frame according to the similarity of the global temporal features of each reference image frame and the second image frame comprises:
taking a preset number of reference image frames adjacent to the reference image frame with the greatest similarity as the at least one first image frame that matches the second image frame.
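The window-based matching of claims 1-3 can be sketched as follows. This is a minimal illustration rather than the patented implementation: the cosine similarity, the toy feature vectors, and the window size `num_neighbors` are assumptions, since the patent does not specify the similarity measure or the feature extractor.

```python
def match_reference_frames(ref_features, query_feature, num_neighbors=2):
    """Return the indices of candidate reference frames for one query frame:
    the most similar reference frame plus a window of neighbors around it."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    # Similarity of the query's global temporal feature to every reference frame.
    sims = [cosine(f, query_feature) for f in ref_features]
    best = max(range(len(sims)), key=sims.__getitem__)
    # A preset number of frames on either side of the best match, clipped
    # to the sequence bounds.
    lo = max(0, best - num_neighbors)
    hi = min(len(ref_features), best + num_neighbors + 1)
    return list(range(lo, hi))
```

For example, with four reference features and a query closest to the third one, `match_reference_frames(refs, query, num_neighbors=1)` returns the best index together with its immediate neighbors.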
4. The method according to any one of claims 1-3, wherein determining the vehicle position corresponding to the second image frame according to the at least one first image frame that matches the second image frame comprises:
performing feature point matching between each first image frame and the second image frame, and determining the first image frame that matches the second image frame;
determining the vehicle position corresponding to the second image frame according to that first image frame.
5. The method according to claim 4, wherein the determining a first image frame that matches the second image frame comprises:
performing scale-invariant feature transform (SIFT) feature matching between the point cloud of each first image frame and the point cloud of the second image frame, and determining the first image frame that matches the second image frame.
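Claim 5 relies on SIFT feature matching between frames; in practice this would typically be done with a library such as OpenCV. The pure-Python sketch below only illustrates the generic nearest-neighbor matching with Lowe's ratio test that is commonly applied to SIFT descriptors; the toy descriptors and the 0.75 ratio threshold are assumptions, not taken from the patent.

```python
def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbor descriptor matching with Lowe's ratio test.
    Returns (i, j) index pairs linking descriptors in desc_a to desc_b."""
    def dist(u, v):
        # Euclidean distance between two descriptor vectors.
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        if len(ranked) >= 2:
            d1 = dist(d, desc_b[ranked[0]])
            d2 = dist(d, desc_b[ranked[1]])
            # Accept only if the best match is clearly better than the
            # second best (Lowe's ratio test), which rejects ambiguous matches.
            if d1 < ratio * d2:
                matches.append((i, ranked[0]))
        elif ranked:
            matches.append((i, ranked[0]))
    return matches
```

The frame with the largest number of surviving matches would then be taken as the first image frame that matches the second image frame.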
6. The method according to claim 4, wherein determining the vehicle position corresponding to the second image frame according to the first image frame comprises:
determining the vehicle position corresponding to the second image frame according to the vehicle position determined by the structure-from-motion (SFM) model corresponding to the first image frame.
7. A vehicle positioning device, characterized by comprising:
an acquisition module, configured to acquire a reference image sequence and a current image sequence of the surroundings of the vehicle, wherein the reference image sequence comprises a plurality of reference image frames and the current image sequence comprises a plurality of second image frames;
a preprocessing module, configured to determine, for each second image frame in the current image sequence, at least one first image frame in the reference image sequence that matches the second image frame;
a processing module, configured to determine the vehicle position corresponding to the second image frame according to the at least one first image frame that matches the second image frame;
wherein the preprocessing module is specifically configured to determine, for each second image frame in the current image sequence, the at least one first image frame that matches the second image frame according to global temporal features of the reference image frames in the reference image sequence and of the second image frame.
8. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-6.
9. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-6 via execution of the executable instructions.
CN201811631573.9A 2018-12-29 2018-12-29 Vehicle positioning method, device, equipment and storage medium Active CN109711363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811631573.9A CN109711363B (en) 2018-12-29 2018-12-29 Vehicle positioning method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109711363A CN109711363A (en) 2019-05-03
CN109711363B true CN109711363B (en) 2021-02-19

Family

ID=66258180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811631573.9A Active CN109711363B (en) 2018-12-29 2018-12-29 Vehicle positioning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109711363B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783611B (en) * 2020-06-28 2023-12-29 阿波罗智能技术(北京)有限公司 Unmanned vehicle positioning method and device, unmanned vehicle and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7590261B1 (en) * 2003-07-31 2009-09-15 Videomining Corporation Method and system for event detection by analysis of linear feature occlusion
US9165181B2 (en) * 2012-01-13 2015-10-20 Sony Corporation Image processing device, method and program for moving gesture recognition using difference images
CN106092093A (en) * 2016-05-26 2016-11-09 浙江工业大学 A kind of indoor orientation method based on earth magnetism fingerprint matching algorithm

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6557973B2 (en) * 2015-01-07 2019-08-14 株式会社リコー MAP GENERATION DEVICE, MAP GENERATION METHOD, AND PROGRAM
CN106407315B (en) * 2016-08-30 2019-08-16 长安大学 A kind of vehicle autonomic positioning method based on street view image database
KR101851155B1 (en) * 2016-10-12 2018-06-04 현대자동차주식회사 Autonomous driving control apparatus, vehicle having the same and method for controlling the same
CN107808117A (en) * 2017-09-29 2018-03-16 上海工程技术大学 A kind of shared Vehicle positioning system and its localization method based on cloud computing
CN108172025B (en) * 2018-01-30 2021-03-30 东软睿驰汽车技术(上海)有限公司 Driving assisting method and device, vehicle-mounted terminal and vehicle
CN108388876B (en) * 2018-03-13 2022-04-22 腾讯科技(深圳)有限公司 Image identification method and device and related equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research on Abnormal Crowd Event Detection Based on a Mobile Robot" (in Chinese); Wu Qingtian; China Master's Theses Full-text Database; 2018-06-15; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant