CN117830397A - Repositioning method, repositioning device, electronic equipment, medium and vehicle - Google Patents


Info

Publication number
CN117830397A
CN117830397A
Authority
CN
China
Prior art keywords: image, frame image, pose, current frame, historical
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number
CN202311623204.6A
Other languages
Chinese (zh)
Inventor
戴必林 (Dai Bilin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Human Horizons Shanghai Autopilot Technology Co Ltd
Original Assignee
Human Horizons Shanghai Autopilot Technology Co Ltd
Application filed by Human Horizons Shanghai Autopilot Technology Co Ltd
Priority claimed from application CN202311623204.6A
Publication of CN117830397A

Landscapes: Image Analysis (AREA)

Abstract

The present disclosure provides a repositioning method, a repositioning device, an electronic apparatus, a medium and a vehicle, and relates to the technical fields of computer vision and autonomous driving. The specific implementation scheme is as follows: a plurality of candidate frame images matching the current frame image are determined in a historical image frame set, where the set stores the image pose, the global descriptor and the two-dimensional feature points of each historical image frame; the estimated positional relationship between the current frame image and each candidate frame image is calculated; and the repositioning pose of the current frame image is determined from the estimated positional relationships and the image poses of the candidate frame images. With this technical scheme, the image pose of the current frame image can be determined as the repositioning pose based only on the image poses and two-dimensional feature points of the historical image frames, realizing fast, high-precision repositioning, greatly reducing the resources consumed in processing historical image frames, and lowering the difficulty of building a memory map.

Description

Repositioning method, repositioning device, electronic equipment, medium and vehicle
Technical Field
The present disclosure relates to the technical fields of computer vision and autonomous driving, and in particular to a repositioning method, apparatus, electronic device, medium and vehicle.
Background
In recent years, machines have achieved high-precision positioning in specific map scenes based on stored memory maps, with important applications in robot positioning and navigation, unmanned aerial vehicles, augmented reality, virtual reality and other fields, such as autonomous memory parking, intelligent food-delivery robots in restaurants, and autonomous UAV cruising. Existing machine repositioning based on memory maps places high demands on the pre-built memory map: the feature points of each historical image frame in the memory map must be tracked in advance to generate a 3D point cloud, and the learning process of the memory map consumes a large amount of computing resources.
Disclosure of Invention
The present disclosure provides a repositioning method, apparatus, electronic device, medium, and vehicle.
According to a first aspect of the present disclosure, there is provided a repositioning method comprising:
determining a plurality of candidate frame images matching the current frame image in a historical image frame set; wherein the historical image frame set stores the image pose, the global descriptor and the two-dimensional feature points of each historical image frame;
calculating the estimated position relation between the current frame image and each candidate frame image;
and determining the repositioning pose of the current frame image according to the estimated position relation and the image pose of each candidate frame image.
According to a second aspect of the present disclosure, there is provided a repositioning device comprising:
a matching module, configured to determine a plurality of candidate frame images matching the current frame image in the historical image frame set; wherein the historical image frame set stores the image pose, the global descriptor and the two-dimensional feature points of each historical image frame;
the calculating module is used for calculating the estimated position relation between the current frame image and each candidate frame image;
and the pose determining module is used for determining the repositioning pose of the current frame image according to the estimated position relation and the image pose of each candidate frame image.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any of the embodiments of the present disclosure.
According to a fifth aspect of the present disclosure there is provided a vehicle comprising the apparatus of the second aspect of the present disclosure and the electronic device of the third aspect.
According to the technology of the present disclosure, the image pose of the current frame image can be determined as the repositioning pose based on the image poses and two-dimensional feature points of the historical image frames in the historical image frame set, realizing fast and high-precision repositioning. The feature points in the historical image frames need not be tracked in advance to generate three-dimensional feature points, which greatly reduces the resources consumed in processing historical image frames and lowers the difficulty of building a memory map.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow diagram of a repositioning method according to an embodiment of the present disclosure;
FIG. 2 is a schematic illustration of a plurality of recursively estimated poses on a reference driving track saved in a memory map according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a geometric relationship between a current frame image and a candidate frame image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a repositioning device according to an embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device for implementing a repositioning method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flow diagram of a repositioning method according to an embodiment of the present disclosure, comprising:
step S110, determining a plurality of candidate frame images matching the current frame image in a historical image frame set; wherein the historical image frame set stores the image pose, the global descriptor and the two-dimensional feature points of each historical image frame;
step S120, calculating the estimated position relation between the current frame image and each candidate frame image;
step S130, determining the repositioning pose of the current frame image according to the estimated position relation and the image pose of each candidate frame image.
Illustratively, the execution subject of the repositioning method in this embodiment may be a local device or a network device. The local device may be a repositioning device installed in a vision device, and the vision device may be a vision sensor mounted on a vehicle, an unmanned aerial vehicle, an intelligent robot, or the like. The network device may be a conventional server or a cloud server, which is not limited in this embodiment. For convenience of description, the embodiments are explained below taking a repositioning device installed in the vision device of a vehicle as an example.
The current frame image may be an image of the surroundings acquired by the vehicle's vision device at the current moment, and the vision device may be a monocular camera or a multi-camera rig. The historical image frame set is stored in the vehicle's memory map, and the usage scene of the memory map may be any area in which automatic driving can be performed with reference to a historical driving trajectory, including but not limited to a parking lot.
Each historical image frame in the set may be an image captured by the vehicle's vision device at a different position on the vehicle's first driving trajectory in the target scene, or an image captured at a different position on a reference driving trajectory refined by multiple rounds of training and correction. The image pose of a historical image frame is the camera pose of the vehicle's vision device at the moment the frame was captured.
In existing repositioning approaches, besides training a bag-of-words model in advance to aggregate the local features of historical image frames into global features (i.e., global descriptors), the feature points in each historical image frame must be tracked, and a wheel-speed odometer and an inertial measurement unit (IMU) are combined to obtain the three-dimensional pose of each feature point in the map coordinate system. The global features of the current frame image are then used to retrieve a repositioning image frame from the historical image frame set, and multi-view geometric computation is performed on the two-dimensional feature points of the current frame image and the three-dimensional point cloud of the repositioning image frame to obtain the current pose of the repositioning device (i.e., the repositioning pose).
In contrast, this embodiment determines the image pose of the current frame image as the repositioning pose based on the image poses and two-dimensional feature points of the historical image frames in the historical image frame set, realizing fast, high-precision repositioning. During learning of the memory map, only the recursively estimated pose of the repositioning device is stored as the image pose of the corresponding historical image frame, together with the extracted feature points and global descriptor of each frame; no feature-point tracking or three-dimensional point cloud generation is required, which greatly reduces the resources consumed in processing historical image frames and lowers the difficulty of building the memory map.
Fig. 2 shows a plurality of recursively estimated poses on a reference driving track stored in the memory map of this embodiment, taking a repositioning device on a vehicle as an example. Each triangle in Fig. 2 has two-dimensional coordinates in the map coordinate system of the reference driving track, together with a heading angle calculated from the vehicle's wheel-speed odometer and IMU, and thus represents the pose of the repositioning device at a position on the reference driving track. Each triangle corresponds to one historical image frame, and the pose it represents is taken as the image pose of that frame.
In one embodiment, in step S110, determining a plurality of candidate frame images matching the current frame image in the historical image frame set includes: acquiring a global descriptor of a current frame image; calculating the similarity between the global descriptor of the current frame image and the global descriptor of each historical image frame; determining a historical image frame corresponding to the maximum similarity as a key frame image; a plurality of candidate frame images is determined in the historical image frame set based on the key frame images.
For example, the distance between the global descriptor of each historical image frame and that of the current frame image may be calculated, and the similarity determined from the distance: the smaller the distance, the greater the similarity. In addition, since the feature points of the historical image frames only have two-dimensional positions, the image pose of the current frame image cannot be determined by matching feature points against a single key frame image. Therefore, after the key frame image is determined, a plurality of candidate frame images must also be determined from it for feature-point matching with the current frame image, so as to determine the image pose of the current frame image.
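As a concrete illustration, the retrieval step above amounts to a nearest-neighbour search over global descriptors. The following is a minimal sketch assuming the descriptors are stored as rows of a NumPy array; the function name is illustrative, not from the disclosure:

```python
import numpy as np

def find_key_frame(query_desc, history_descs):
    """Pick the historical frame whose global descriptor is closest
    to the current frame's descriptor: the smaller the L2 distance,
    the greater the similarity."""
    dists = np.linalg.norm(history_descs - query_desc, axis=1)
    return int(np.argmin(dists))
```

In practice the global descriptor could come from a bag-of-words model or a learned aggregation; the retrieval step itself remains the same nearest-neighbour search.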
Specifically, determining a plurality of candidate frame images in the historical image frame set based on the key frame images includes: determining at least one associated image of the key frame image in the historical image frame set; wherein the associated image is a historical image frame within a preset frame number range from the key frame image; the key frame image and the associated image are determined as candidate frame images.
For example, the preset frame number may be set according to the actual situation. If it is set to 5, the associated images of the key frame image are the historical image frames within 5 frames before and after it, i.e., up to 10 associated images; the key frame image and its associated images are then taken as the candidate frame images used to determine the image pose of the current frame image.
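The windowing of associated images around the key frame can be sketched as follows (a hypothetical helper, assuming historical frames are stored in capture order and indexed from 0):

```python
def candidate_window(key_idx, n_frames, radius=5):
    """Indices of the key frame plus the historical frames within
    `radius` frames before and after it, clipped to the valid range."""
    lo = max(0, key_idx - radius)
    hi = min(n_frames, key_idx + radius + 1)
    return list(range(lo, hi))
```

Near the start or end of the reference track the window is simply truncated, so fewer than 2 * radius associated images may be available there.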
In one embodiment, in step S120, calculating the estimated positional relationship between the current frame image and each candidate frame image includes: determining the corresponding relation between the two-dimensional characteristic points in the current frame image and the two-dimensional characteristic points in each candidate frame image; and estimating the rotation and translation of the current frame image relative to each candidate frame image according to the corresponding relation, and taking the rotation and translation as the estimated position relation between the current frame image and each candidate frame image.
The correspondence between feature points in the current frame image and those in a candidate frame image can be established by feature matching, and the rotation and translation of the current frame image relative to the candidate frame image can then be estimated using the epipolar constraint.
Specifically, estimating rotation and translation of the current frame image with respect to each candidate frame image according to the correspondence relation includes: constructing an essential matrix of corresponding two-dimensional feature points between the current frame image and each candidate frame image according to the corresponding relation; and performing singular value decomposition on the essential matrix to obtain rotation and translation of the current frame image relative to each candidate frame image.
As shown in FIG. 3, let I1 be the image plane of the current frame image and I2 the image plane of a candidate frame image. A spatial point P projects to feature point P1 on I1 and to feature point P2 on I2, and O1 and O2 are the origins (optical centers) of the two image coordinate systems. The three points O1, O2 and P determine the epipolar plane O1O2P. The baseline O1O2 intersects planes I1 and I2 at points e1 and e2 respectively, called the epipoles. The lines l1 = P1e1 and l2 = P2e2 are the intersections of the epipolar plane O1O2P with the image planes I1 and I2 respectively, called the epipolar lines. For any feature point falling on l1 in image plane I1, its corresponding feature point in image plane I2 must fall on l2, and the transformation from the coordinate system of image plane I1 to that of image plane I2 can be represented by a rotation R and a translation t.
Thus, an essential matrix E = t∧R relating image planes I1 and I2 can be constructed, where t∧ denotes the skew-symmetric matrix of t, and the rotation R and translation t can then be solved for by singular value decomposition (SVD).
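The SVD recovery of R and t from E can be sketched with the standard four-candidate decomposition from multiple-view geometry (function names are illustrative; in a full pipeline the physically valid candidate is selected by checking that triangulated points lie in front of both cameras):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix t^ such that (t^) v equals the cross product t x v."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Decompose E = t^ R via SVD into the four (R, t) candidates.
    t is recovered only up to sign and scale (monocular scale ambiguity)."""
    U, _, Vt = np.linalg.svd(E)
    # Force det(U) = det(V) = +1 so the recovered R are proper rotations
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]  # left null direction of E, up to sign
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

For a true pair (R, t) with unit t, one of the four candidates reproduces R exactly and t up to sign, consistent with the scale ambiguity discussed below.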
It should be noted that when the vision device of the vehicle is a monocular camera, the lack of depth information means that only a scale-free relative positional relationship can be recovered. The rotation R and translation t together have 6 degrees of freedom, but the essential matrix E only encodes the scale-free relative pose constraint and has 5 degrees of freedom; it cannot describe the absolute positional relationship between the two images. The recovered translation t therefore carries no scale information, and the positional relationship between the current frame image and a candidate frame image expressed by R and t is an estimated relative relationship rather than the actual absolute one. The image pose of the current frame image thus cannot be computed directly from the estimated positional relationship with a single candidate frame image; the positional relationships between multiple candidate frame images and the current frame image must be combined to eliminate the scale ambiguity and solve for the image pose of the current frame image.
Specifically, in step S130, determining the repositioning pose of the current frame image according to the estimated positional relationship and the image pose of each candidate frame image, including: for any candidate frame image, constructing a repositioning pose calculation formula of the current frame image corresponding to the candidate frame image according to the image pose of the candidate frame image and the estimated position relation between the candidate frame image and the current frame image; and determining the repositioning pose based on the repositioning pose calculation formula of the current frame image corresponding to each candidate frame image.
The repositioning pose of the current frame image, that is, the image pose of the current frame image in the map coordinate system, is calculated for candidate frame n as follows:
p = R_n p_n + t_n; (1)
where p represents the repositioning pose of the current frame image (i.e., its pose in the map coordinate system), p_n represents the pose of the current frame image in the coordinate system of candidate frame n, and R_n and t_n represent the pose of candidate frame n in the map coordinate system, i.e., the image pose of candidate frame n.
Assuming the scale factor between the current frame image and candidate frame n is s, the pose of the current frame image in the coordinate system of candidate frame n may be expressed as:
p_n = s p_n1; (2)
where p_n1 is the estimated positional relationship between the current frame image and candidate frame n, obtained from the previously estimated rotation and translation of the current frame image relative to candidate frame n. Combining equation (1) and equation (2) gives:
p − t_n = s R_n p_n1, i.e. (R_n p_n1) × (p − t_n) = 0, thereby yielding:
(R_n p_n1)∧ p = (R_n p_n1)∧ t_n; (3)
An equation of the form (3) can be constructed for each of the plurality of candidate frame images; the equations (3) constructed for the different candidate frame images are combined, and p is obtained by least squares.
In this way, a system of equations can be constructed from the two-dimensional feature-point correspondences between the plurality of candidate frame images and the current frame image, and the repositioning pose of the current frame image obtained by solving it jointly. The method therefore works with images captured by a monocular camera, and no feature-point tracking or three-dimensional feature-point generation is needed in the memory-map construction stage, greatly reducing the resources consumed in building the memory map, lowering its construction difficulty, and still achieving high-precision repositioning based on a monocular camera.
As an implementation of the above methods, as shown in fig. 4, an embodiment of the disclosure further provides a repositioning device, which may include:
a matching module 410 for determining a plurality of candidate frame images matching the current frame image in the historical image frame set; wherein the historical image frame set stores the image pose, the global descriptor and the two-dimensional feature points of each historical image frame;
a calculating module 420, configured to calculate a predicted positional relationship between the current frame image and each candidate frame image;
the pose determining module 430 is configured to determine a repositioning pose of the current frame image according to the estimated position relationship and the image pose of each candidate frame image.
Illustratively, the matching module 410 is configured to: acquiring a global descriptor of a current frame image;
calculating the similarity between the global descriptor of the current frame image and the global descriptor of each historical image frame; determining a historical image frame corresponding to the maximum similarity as a key frame image; a plurality of candidate frame images is determined in the historical image frame set based on the key frame images.
Illustratively, the matching module 410 is further configured to: determining at least one associated image of the key frame image in the historical image frame set; wherein the associated image is a historical image frame within a preset frame number range from the key frame image; the key frame image and the associated image are determined as candidate frame images.
Illustratively, the computing module 420 is configured to: determine the correspondence between the two-dimensional feature points in the current frame image and the two-dimensional feature points in each candidate frame image; and estimate the rotation and translation of the current frame image relative to each candidate frame image according to the correspondence, as the estimated positional relationship between the current frame image and each candidate frame image.
Illustratively, the computing module 420 is further configured to: constructing an essential matrix of corresponding two-dimensional feature points between the current frame image and each candidate frame image according to the corresponding relation; and performing singular value decomposition on the essential matrix to obtain rotation and translation of the current frame image relative to each candidate frame image.
Illustratively, the pose determination module 430 is configured to: for any candidate frame image, constructing a repositioning pose calculation formula of the current frame image corresponding to the candidate frame image according to the image pose of the candidate frame image and the estimated position relation between the candidate frame image and the current frame image;
and determining the repositioning pose based on the repositioning pose calculation formula of the current frame image corresponding to each candidate frame image.
The functions of each unit, module or sub-module in each device of the embodiments of the present disclosure may be referred to the corresponding descriptions in the above method embodiments, which have corresponding beneficial effects and are not described herein again.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
Fig. 5 shows a block diagram of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device includes: memory 510 and processor 520, and instructions executable on processor 520 are stored in memory 510. The processor 520, when executing the instructions, implements the methods of the embodiments described above. The number of memories 510 and processors 520 may be one or more. The electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
The electronic device may further include a communication interface 530 for communicating with external devices for interactive data transmission. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor 520 may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to an interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Likewise, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 510, the processor 520, and the communication interface 530 are integrated on a chip, the memory 510, the processor 520, and the communication interface 530 may communicate with each other through internal interfaces.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or any conventional processor or the like. It is noted that the processor may be a processor supporting an advanced reduced instruction set machine (Advanced RISC Machines, ARM) architecture.
The present embodiments provide a computer readable storage medium (such as the memory 510 described above) storing computer instructions that when executed by a processor implement the methods provided in the embodiments of the present application.
Alternatively, the memory 510 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store data created during use of the electronic device, and the like. In addition, memory 510 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 510 optionally includes memory located remotely from processor 520, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage media, or any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
The present embodiment further provides a vehicle including a controller. The controller may be configured to perform the method of any embodiment described herein, may include any of the apparatuses described herein, or may be any of the electronic devices described herein.
For example, the processor in the controller or electronic device may include at least one of an autopilot domain control module, a body domain control module, and an audio-visual entertainment domain control module.
The vehicle in the present embodiment may be any power-driven vehicle, such as a fuel vehicle, an electric vehicle, or a solar vehicle. For example, the vehicle in the present embodiment may be an autonomous vehicle.
Other structures of the vehicle of the present embodiment, such as the specific structures of the frame, the wheels, and the connection fasteners, may adopt various technical solutions known to those skilled in the art now and in the future, and are not described in detail herein.
In the description of this specification, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The foregoing is merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that can be readily conceived by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A repositioning method, comprising:
determining a plurality of candidate frame images matching the current frame image in a historical image frame set; wherein the historical image frame set stores the image pose, the global descriptor and the two-dimensional feature points of each historical image frame;
calculating an estimated positional relationship between the current frame image and each candidate frame image; and
determining the repositioning pose of the current frame image according to the estimated positional relationship and the image pose of each candidate frame image.
2. The method of claim 1, wherein determining a plurality of candidate frame images in the set of historical image frames that match the current frame image comprises:
acquiring a global descriptor of the current frame image;
calculating the similarity between the global descriptor of the current frame image and the global descriptor of each historical image frame;
determining the historical image frame corresponding to the maximum similarity as a key frame image; and
determining the plurality of candidate frame images in the historical image frame set based on the key frame image.
3. The method of claim 2, wherein determining the plurality of candidate frame images in the set of historical image frames based on the key frame image comprises:
determining at least one associated image of the key frame image in the set of historical image frames; wherein the associated image is a historical image frame within a preset frame number range from the key frame image;
and determining the key frame image and the associated image as the candidate frame images.
4. The method of claim 1, wherein calculating the estimated positional relationship between the current frame image and each of the candidate frame images comprises:
determining a correspondence between the two-dimensional feature points in the current frame image and the two-dimensional feature points in each candidate frame image; and
estimating rotation and translation of the current frame image relative to each candidate frame image according to the correspondence, and taking the rotation and translation as the estimated positional relationship between the current frame image and each candidate frame image.
5. The method of claim 4, wherein estimating rotation and translation of the current frame image relative to each of the candidate frame images based on the correspondence comprises:
constructing an essential matrix of corresponding two-dimensional feature points between the current frame image and each candidate frame image according to the correspondence; and
and carrying out singular value decomposition on the essential matrix to obtain rotation and translation of the current frame image relative to each candidate frame image.
6. The method of claim 1, wherein determining the repositioning pose of the current frame image based on the estimated positional relationship and the image pose of each of the candidate frame images comprises:
for any candidate frame image, constructing a repositioning pose calculation formula of the current frame image corresponding to the candidate frame image according to the image pose of the candidate frame image and the estimated positional relationship between the candidate frame image and the current frame image; and
determining the repositioning pose based on the repositioning pose calculation formula of the current frame image corresponding to each candidate frame image.
7. A repositioning device, comprising:
a matching module, configured to determine a plurality of candidate frame images matching the current frame image in a historical image frame set; wherein the historical image frame set stores the image pose, the global descriptor and the two-dimensional feature points of each historical image frame;
a calculating module, configured to calculate an estimated positional relationship between the current frame image and each candidate frame image; and
a pose determining module, configured to determine the repositioning pose of the current frame image according to the estimated positional relationship and the image pose of each candidate frame image.
8. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
9. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
10. A vehicle comprising the apparatus of claim 7 and the electronic device of claim 8.
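To make the claimed pipeline concrete, the candidate-frame selection of claims 1-3 and the pose composition of claim 6 can be sketched roughly as follows. All names (`select_candidates`, `relocalize`, the descriptor vectors) are illustrative assumptions and do not appear in the specification; likewise, the specification does not fix a similarity measure, so cosine similarity is assumed here. Recovering the relative rotation and translation from the essential matrix (claims 4-5) is not shown; in practice it could be done with library routines such as OpenCV's `findEssentialMat` and `recoverPose`.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two global descriptor vectors (claim 2).
    Cosine similarity is an assumed choice; the claims only require
    'similarity' between global descriptors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def select_candidates(query_descriptor, history, window=1):
    """Pick the key frame with maximum descriptor similarity, then take
    the historical frames within `window` frames of it as the associated
    images (claims 2-3). `history` is a list of per-frame records, each
    holding at least a 'descriptor' entry."""
    sims = [cosine_similarity(query_descriptor, h["descriptor"]) for h in history]
    key_idx = max(range(len(sims)), key=sims.__getitem__)
    lo, hi = max(0, key_idx - window), min(len(history), key_idx + window + 1)
    return list(range(lo, hi))  # indices of the candidate frames

def matmul4(a, b):
    """4x4 homogeneous-matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def relocalize(candidate_pose, relative_pose):
    """Compose the stored image pose of one candidate frame with the
    estimated relative transform to obtain a repositioning pose of the
    current frame (claim 6): T_world_cur = T_world_cand * T_cand_cur."""
    return matmul4(candidate_pose, relative_pose)
```

The per-candidate repositioning poses would then still need to be combined (for example averaged or jointly optimised) to yield the final pose; the specification leaves this to the "repositioning pose calculation formula" of claim 6.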
CN202311623204.6A 2023-11-30 2023-11-30 Repositioning method, repositioning device, electronic equipment, medium and vehicle Pending CN117830397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311623204.6A CN117830397A (en) 2023-11-30 2023-11-30 Repositioning method, repositioning device, electronic equipment, medium and vehicle


Publications (1)

Publication Number Publication Date
CN117830397A true CN117830397A (en) 2024-04-05

Family

ID=90510523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311623204.6A Pending CN117830397A (en) 2023-11-30 2023-11-30 Repositioning method, repositioning device, electronic equipment, medium and vehicle

Country Status (1)

Country Link
CN (1) CN117830397A (en)

Similar Documents

Publication Publication Date Title
US10878621B2 (en) Method and apparatus for creating map and positioning moving entity
US10659925B2 (en) Positioning method, terminal and server
JP6745328B2 (en) Method and apparatus for recovering point cloud data
EP3505869B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
JP7240367B2 (en) Methods, apparatus, electronic devices and storage media used for vehicle localization
CN109214980B (en) Three-dimensional attitude estimation method, three-dimensional attitude estimation device, three-dimensional attitude estimation equipment and computer storage medium
CN110095752A (en) Localization method, device, equipment and medium
US20200226392A1 (en) Computer vision-based thin object detection
CN112700486B (en) Method and device for estimating depth of road surface lane line in image
Caldini et al. Smartphone-based obstacle detection for the visually impaired
Laflamme et al. Driving datasets literature review
WO2022199195A1 (en) Map updating method and system, vehicle-mounted terminal, server, and storage medium
WO2022156447A1 (en) Localization method and apparatus, and computer apparatus and computer-readable storage medium
CN113295159B (en) Positioning method and device for end cloud integration and computer readable storage medium
John et al. Registration of GPS and stereo vision for point cloud localization in intelligent vehicles using particle swarm optimization
Cheng et al. Positioning and navigation of mobile robot with asynchronous fusion of binocular vision system and inertial navigation system
CN117745845A (en) Method, device, equipment and storage medium for determining external parameter information
Hoang et al. Motion estimation based on two corresponding points and angular deviation optimization
Yang et al. Simultaneous estimation of ego-motion and vehicle distance by using a monocular camera
CN110853098A (en) Robot positioning method, device, equipment and storage medium
CN117830397A (en) Repositioning method, repositioning device, electronic equipment, medium and vehicle
Yu et al. On designing computing systems for autonomous vehicles: A perceptin case study
CN114648639A (en) Target vehicle detection method, system and device
CN110389349B (en) Positioning method and device
CN113763468A (en) Positioning method, device, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination