CN111860051A - Vehicle-based loop detection method and device and vehicle-mounted terminal - Google Patents

Vehicle-based loop detection method and device and vehicle-mounted terminal

Info

Publication number
CN111860051A
CN111860051A
Authority
CN
China
Prior art keywords
image
image frame
location
place
matching
Prior art date
Legal status
Pending
Application number
CN201910346797.3A
Other languages
Chinese (zh)
Inventor
李天威
徐抗
童哲航
刘一龙
谢国富
Current Assignee
Beijing Chusudu Technology Co ltd
Original Assignee
Beijing Chusudu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Chusudu Technology Co ltd filed Critical Beijing Chusudu Technology Co ltd
Priority to CN201910346797.3A
Publication of CN111860051A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/587 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using geographical or spatial information, e.g. location
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The embodiment of the invention discloses a vehicle-based loop detection method and device and a vehicle-mounted terminal. The method comprises the following steps: acquiring first image frames in a plurality of directions around a vehicle, captured at a first location by an image acquisition device; extracting key points in each first image frame according to the pixel value distribution of pixel points in the image frame, and determining characteristic information of each key point in the first image frame; matching each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database according to the characteristic information corresponding to each image frame; and when a first matching result meeting a preset location matching condition is obtained, determining, from the second locations indicated by the first matching result, a location pointing to the same place as the first location. By applying the scheme provided by the embodiment of the invention, the accuracy of loop detection can be improved.

Description

Vehicle-based loop detection method and device and vehicle-mounted terminal
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a vehicle-based loop detection method and device and a vehicle-mounted terminal.
Background
Loop detection means that, when a movable object equipped with a camera moves to a certain location, the newly acquired image is matched against historical images; when the matching succeeds, the movable object is considered to have returned to the location corresponding to the successfully matched historical image. For example, the movable object may be a vehicle, a robot, or the like. Loop detection can thus associate the location corresponding to the current image with the location corresponding to the historical image.
However, when the vehicle moves to a previously visited location again, the viewing angle of the image captured by the on-board camera may have changed, so matching against the historical image may fail. In that case the location identical to the current location cannot be detected, and the accuracy of loop detection is not high enough.
Disclosure of Invention
The invention provides a vehicle-based loop detection method and device and a vehicle-mounted terminal, which aim to improve the accuracy of loop detection. The specific technical scheme is as follows.
In a first aspect, an embodiment of the invention discloses a loop detection method based on a vehicle, which includes:
acquiring first image frames of a plurality of directions around the vehicle, acquired by an image acquisition device at a first location;
Extracting key points in each first image frame according to pixel value distribution of pixel points in the image frames, and determining characteristic information of each key point in the first image frames;
according to the characteristic information corresponding to each image frame, matching each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database respectively; when at least a preset number of successfully matched image-frame pairs exist between the first image frames and the plurality of image frames corresponding to a second location in the image database, determining that a first matching result meeting a preset location matching condition is obtained; wherein the location indicated by the first matching result is the second location; the image database is used for storing characteristic information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames in a plurality of directions around a vehicle at that location;
determining locations pointing to the same location as the first location from the second locations.
Optionally, the image database is further configured to store feature information of image frames at a plurality of second-type locations, where the image frame at each second-type location comprises an image frame in a single direction around the vehicle at that location; after determining the feature information of each key point in the first image frames, the method further comprises:
according to the characteristic information corresponding to each image frame, matching each first image frame with the single image frame at each location in the image database respectively; when at least one of the first image frames is successfully matched with an image frame in the image database, determining that a second matching result meeting a preset location matching condition is obtained; wherein the third location indicated by the second matching result is the location in the image database corresponding to the successfully matched image frame;
determining locations pointing to the same location as the first location from the third locations.
Optionally, after acquiring each first image frame acquired by the image acquisition device at the first location, the method further includes:
splicing the first image frames to obtain a first panoramic image frame at the first place;
extracting key points in the first panoramic image frame according to the pixel value distribution of pixel points in the image frame, and determining the characteristic information of each key point in the first panoramic image frame;
matching the first panoramic image frame with the panoramic image frames corresponding to a plurality of locations in an image database according to the characteristic information corresponding to each panoramic image frame; when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, determining that a third matching result meeting a preset location matching condition is obtained, wherein the fourth location indicated by the third matching result is the location in the image database corresponding to the successfully matched panoramic image frame; the image database is used for storing characteristic information of panoramic image frames at a plurality of locations;
Determining a location pointing to the same location as the first location from the fourth locations.
Optionally, when there are a plurality of second locations, the step of determining a location pointing to the same location as the first location from the second locations includes:
acquiring a fifth place indicated by a matching result determined from the image database at the last place before the first place;
judging whether a target image frame continuous with the image frame corresponding to the fifth location exists in the image frames corresponding to the second locations in the image database;
and if so, determining a second place corresponding to the target image frame as a place pointing to the same place as the first place.
Optionally, the step of respectively matching each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database according to the feature information corresponding to each image frame includes:
for each first image frame and each image frame in the image database, comparing each piece of feature information of the first image frame with each piece of feature information of the image frame to obtain the feature-information coincidence degree between the first image frame and the image frame; and when the coincidence degree is greater than a preset coincidence-degree threshold, determining that the first image frame is successfully matched with the image frame.
Optionally, when it is determined that a matching result meeting the preset location matching condition cannot be obtained, the method further includes:
and storing the characteristic information of each first image frame at the first position into the image database.
Optionally, the step of determining feature information of each keypoint in the first image frame includes:
determining a descriptor corresponding to each key point in the first image frame according to the key point and pixel points at preset positions around the key point;
determining words corresponding to the descriptors of each key point in the first image frame from a preset descriptor dictionary, and taking the words corresponding to all key points in the first image frame as the feature information of the first image frame.
In a second aspect, an embodiment of the present invention discloses a loop detection device based on a vehicle, including:
an acquisition module configured to acquire a first image frame of a plurality of directions around a vehicle acquired at a first location by an image acquisition device;
the first extraction module is configured to extract key points in each first image frame according to pixel value distribution of pixel points in the image frames and determine characteristic information of each key point in the first image frames;
the first matching module is configured to match each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database respectively according to the characteristic information corresponding to each image frame; when at least a preset number of successfully matched image-frame pairs exist between the first image frames and the plurality of image frames corresponding to a second location in the image database, determine that a first matching result meeting a preset location matching condition is obtained; wherein the location indicated by the first matching result is the second location; the image database is used for storing characteristic information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames in a plurality of directions around a vehicle at that location;
a first determination module configured to determine locations pointing to the same location as the first location from the second locations.
Optionally, the image database is further configured to store feature information of image frames at a plurality of second-type locations, where an image frame at each second-type location includes an image frame of a direction around the vehicle at the location; the device further comprises:
the second matching module is configured to, after the feature information of each key point in the first image frames is determined, match each first image frame with the single image frame at each location in the image database respectively according to the feature information corresponding to each image frame; when at least one of the first image frames is successfully matched with an image frame in the image database, determine that a second matching result meeting a preset location matching condition is obtained; wherein the third location indicated by the second matching result is the location in the image database corresponding to the successfully matched image frame;
A second determination module configured to determine locations pointing to the same location as the first location from among the third locations.
Optionally, the apparatus further comprises:
the splicing module is configured to splice the first image frames acquired by the image acquisition device at the first location to obtain a first panoramic image frame at the first location;
the second extraction module is configured to extract key points in the first panoramic image frame according to the pixel value distribution of pixel points in the image frame and determine the characteristic information of each key point in the first panoramic image frame;
a third matching module configured to match the first panoramic image frame with the panoramic image frames corresponding to a plurality of locations in an image database according to the feature information corresponding to each panoramic image frame; when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, determine that a third matching result meeting a preset location matching condition is obtained, wherein the fourth location indicated by the third matching result is the location in the image database corresponding to the successfully matched panoramic image frame; the image database is used for storing feature information of panoramic image frames at a plurality of locations;
And the third determining module is configured to determine the places which point to the same place as the first place from the fourth places.
Optionally, when the second location is multiple, the first determining module is specifically configured to:
acquiring a fifth place indicated by a matching result determined from the image database at the last place before the first place;
judging whether a target image frame continuous with the image frame corresponding to the fifth location exists in the image frames corresponding to the second locations in the image database;
and if so, determining a second place corresponding to the target image frame as a place pointing to the same place as the first place.
Optionally, the first matching module is specifically configured to:
comparing each piece of feature information of each first image frame with each piece of feature information of the image frame respectively aiming at each first image frame and each image frame in an image database to obtain the feature information coincidence degree between the first image frame and the image frame; and when the coincidence degree of the feature information is greater than a preset coincidence degree threshold value, determining that the first image frame is successfully matched with the image frame.
Optionally, the apparatus further comprises:
a storage module configured to store feature information of each first image frame at the first location into the image database when it is determined that a matching result satisfying the preset location matching condition cannot be obtained.
Optionally, when determining the feature information of each keypoint in the first image frame, the first extraction module includes:
determining a descriptor corresponding to each key point in the first image frame according to the key point and pixel points at preset positions around the key point;
determining words corresponding to the descriptors of each key point in the first image frame from a preset descriptor dictionary, and taking the words corresponding to all key points in the first image frame as the feature information of the first image frame.
In a third aspect, an embodiment of the present invention discloses a vehicle-mounted terminal, including: a processor and an image acquisition device; the processor includes: the device comprises an acquisition module, a first extraction module, a first matching module and a first determination module;
the acquisition module is used for acquiring first image frames of multiple directions around the vehicle acquired by the image acquisition equipment at a first position;
The first extraction module is used for extracting key points in each first image frame according to the pixel value distribution of pixel points in the image frames and determining the characteristic information of each key point in the first image frames;
the first matching module is used for respectively matching each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database according to the characteristic information corresponding to each image frame; when at least a preset number of successfully matched image-frame pairs exist between the first image frames and the plurality of image frames corresponding to a second location in the image database, determining that a first matching result meeting a preset location matching condition is obtained; wherein the location indicated by the first matching result is the second location; the image database is used for storing characteristic information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames in a plurality of directions around a vehicle at that location;
the first determining module is configured to determine a location pointing to the same location as the first location from the second locations.
Optionally, the image database is further configured to store feature information of image frames at a plurality of second-type locations, where an image frame at each second-type location includes an image frame of a direction around the vehicle at the location; the processor further comprises:
The second matching module is used for, after the feature information of each key point in the first image frames is determined, matching each first image frame with the single image frame at each location in the image database respectively according to the characteristic information corresponding to each image frame; when at least one of the first image frames is successfully matched with an image frame in the image database, determining that a second matching result meeting a preset location matching condition is obtained; wherein the third location indicated by the second matching result is the location in the image database corresponding to the successfully matched image frame;
and the second determining module is used for determining the positions pointing to the same position as the first position from the third positions.
Optionally, the processor further includes:
the splicing module is used for splicing each first image frame acquired by the image acquisition equipment at the first place to obtain a first panoramic image frame at the first place;
the second extraction module is used for extracting key points in the first panoramic image frame according to the pixel value distribution of pixel points in the image frame and determining the characteristic information of each key point in the first panoramic image frame;
The third matching module is used for matching the first panoramic image frame with the panoramic image frames corresponding to a plurality of locations in an image database according to the characteristic information corresponding to each panoramic image frame; when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, determining that a third matching result meeting a preset location matching condition is obtained, wherein the fourth location indicated by the third matching result is the location in the image database corresponding to the successfully matched panoramic image frame; the image database is used for storing characteristic information of panoramic image frames at a plurality of locations;
and the third determining module is used for determining the positions pointing to the same position as the first position from the fourth positions.
Optionally, when there are a plurality of second locations, the first determining module is specifically configured to:
acquiring a fifth place indicated by a matching result determined from the image database at the last place before the first place;
judging whether a target image frame continuous with the image frame corresponding to the fifth location exists in the image frames corresponding to the second locations in the image database;
And if so, determining a second place corresponding to the target image frame as a place pointing to the same place as the first place.
Optionally, the first matching module is specifically configured to:
comparing each piece of feature information of each first image frame with each piece of feature information of the image frame respectively aiming at each first image frame and each image frame in an image database to obtain the feature information coincidence degree between the first image frame and the image frame; and when the coincidence degree of the feature information is greater than a preset coincidence degree threshold value, determining that the first image frame is successfully matched with the image frame.
Optionally, the processor further includes:
and the storage module is used for storing the characteristic information of each first image frame at the first place into the image database when the matching result meeting the preset place matching condition is determined not to be obtained.
Optionally, when determining the feature information of each keypoint in the first image frame, the first extraction module includes:
determining a descriptor corresponding to each key point in the first image frame according to the key point and pixel points at preset positions around the key point;
Determining words corresponding to the descriptors of each key point in the first image frame from a preset descriptor dictionary, and taking the words corresponding to all key points in the first image frame as the feature information of the first image frame.
As can be seen from the above, the vehicle-based loop detection method and device and the vehicle-mounted terminal provided by the embodiments of the present invention acquire first image frames in multiple directions around the vehicle at the first location, and, when matching against the image frames in the image database, match a plurality of image frames between every pair of locations. In the related art, by contrast, each location corresponds to a single image frame, so when the shooting angle changes on a revisit the matching may fail. In the embodiment of the invention, image frames in multiple directions are collected at each location and embody more comprehensive characteristic information of that location; a matching result meeting the preset location matching condition is determined only when at least a preset number of successfully matched image-frame pairs exist. This makes the matching process more accurate and therefore improves the accuracy of loop detection.
The innovation points of the embodiment of the invention comprise:
1. Each location corresponds to image frames in multiple directions. When each location in the image database contains image frames in all directions, a matching result meeting the preset location matching condition is determined once at least a preset number of successfully matched image-frame pairs exist. Matching in loop detection is therefore not limited by the shooting angle, and the same location is identified more accurately.
2. When each location in the image database contains a single image frame, a matching result meeting the preset location matching condition is determined once at least one first image frame is successfully matched with an image frame in the image database. Loop detection is thus compatible with the case where each location in the database corresponds to one image frame, which improves compatibility.
3. The plurality of image frames corresponding to each location are spliced into a panoramic image frame, which reduces the number of matching operations, shortens the matching procedure, and improves matching efficiency.
4. Making the judgment based on the image sequence further improves the accuracy of loop detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. For a person skilled in the art, without inventive effort, further figures can be obtained from these figures.
Fig. 1 is a schematic flowchart of a loop detection method based on a vehicle according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a process of generating a descriptor dictionary according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of another vehicle-based loop detection method according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating another method for vehicle-based loop detection according to an embodiment of the present invention;
FIGS. 5-7 are schematic structural diagrams of a vehicle-based loop detection apparatus according to several embodiments of the present invention;
fig. 8 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a vehicle-based loop detection method and device and a vehicle-mounted terminal, which can improve the accuracy of loop detection. The following provides a detailed description of embodiments of the invention.
Fig. 1 is a schematic flowchart of a loop detection method based on a vehicle according to an embodiment of the present invention. The method is applied to the electronic equipment. The electronic device can be a common computer, a server or an intelligent mobile terminal and the like. The electronic device may also be a device for calculation processing installed in a vehicle. The method specifically comprises the following steps.
S110: first image frames of a plurality of directions around a vehicle captured at a first location by an image capture device are acquired.
There may be a plurality of image acquisition devices, installed at different positions on the vehicle, each acquiring image frames in one direction around the vehicle. For example, one camera may be installed in each of the front, rear, left, and right directions of the vehicle, in which case 4 first image frames are acquired at each location. The image acquisition device may be a camera with an ordinary lens or with a fisheye lens.
The electronic device may receive the first image frame captured by each image acquisition device while the vehicle is at the first location; at the same moment and at the same place, each image acquisition device captures one first image frame. The electronic device may associate each first image frame with the first location, which may be represented in two-dimensional or three-dimensional coordinates.
The image acquisition device may acquire the image frames at a preset frame rate. In performing loop detection, the electronic device may acquire a key frame (I frame) in the sequence of image frames acquired by the image acquisition device as the first image frame. This can reduce the amount of calculation. In the positioning process, the movement of the vehicle is relatively slow compared with the acquisition speed of the image acquisition equipment, so that loop detection is not required to be performed on each image frame, and the processing burden of the system can be reduced by selecting key frames in the image frames for loop detection. This also preserves the validity of the information.
When acquiring the first image frames of the plurality of directions around the vehicle acquired by the image acquisition device at the first location, the method may specifically include: the method comprises the steps of acquiring initial image frames of multiple directions around a vehicle acquired by image acquisition equipment at a first place, converting each initial image frame into a gray image, and obtaining each first image frame. The above embodiments are applicable to the case where the image directly captured by the image capturing apparatus is a color image.
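The following is a minimal sketch of step S110 under stated assumptions: the camera layout, the device indices, the keyframe stride, and the helper name are all illustrative and not taken from the patent; it only shows selecting key frames and converting the captured color images to grayscale with OpenCV.

```python
import cv2

# Assumed camera layout and keyframe stride; neither value comes from the patent.
CAMERA_DEVICES = {"front": 0, "rear": 1, "left": 2, "right": 3}
KEYFRAME_STRIDE = 10

captures = {name: cv2.VideoCapture(idx) for name, idx in CAMERA_DEVICES.items()}

def grab_first_image_frames(frame_index):
    """Return grayscale first image frames for one location, or None between key frames."""
    if frame_index % KEYFRAME_STRIDE != 0:
        return None  # only key frames are used for loop detection
    frames = {}
    for direction, cap in captures.items():
        ok, bgr = cap.read()
        if ok:
            # Convert the color image from the camera into a grayscale first image frame.
            frames[direction] = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return frames
```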
S120: and extracting key points in each first image frame according to the pixel value distribution of pixel points in the image frames, and determining the characteristic information of each key point in the first image frames.
The key points may include object corner points, points with large pixel-value changes, boundary points, contour points, bright points in darker areas, dark points in lighter areas, and the like in the image frame. Each first image frame may contain a plurality of key points. Specifically, the FAST (Features from Accelerated Segment Test) algorithm may be used to extract the key points in the first image frame. When detecting a key point, the pixel values of the pixels surrounding a candidate pixel are examined; if more than a specified number of the surrounding pixels differ from the candidate pixel by more than a preset pixel-value threshold, the candidate pixel is determined to be a key point.
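As a minimal sketch of this key-point extraction, OpenCV's FAST detector can be used; the threshold value below is an assumed example, not one given in the patent.

```python
import cv2

def extract_keypoints(gray_frame, pixel_diff_threshold=20):
    # FAST marks a pixel as a key point when enough surrounding pixels differ
    # from it by more than the threshold, matching the criterion described above.
    detector = cv2.FastFeatureDetector_create(threshold=pixel_diff_threshold,
                                              nonmaxSuppression=True)
    return detector.detect(gray_frame, None)
```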
The feature information of each key point may be information including a positional relationship and a pixel value relationship between the key point and a pixel point around the key point. And each key point corresponds to one piece of feature information, and the feature information of the first image frame comprises the feature information of all the key points in the first image frame.
S130: and according to the characteristic information corresponding to each image frame, matching each first image frame with a plurality of image frames corresponding to a plurality of places in an image database.
The image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames in a plurality of directions around a vehicle at that location.
When at least a preset number of successfully matched image-frame pairs exist between the first image frames and the plurality of image frames corresponding to a second location in the image database, it is determined that a first matching result meeting the preset location matching condition is obtained. The matching process between the first location and each first-type location in the image database consists of pairwise matching between each first image frame of the first location and each image frame of that first-type location.
The location indicated by the first matching result is the second location, which is one of the first-type locations. The second location may be represented in two-dimensional or three-dimensional coordinates.
The preset number may be determined in advance according to the total number of first image frames collected at each location. For example, when the total number of first image frames is 4, the preset number may be 2 to 4.
For example, the first location includes 4 first image frames, and each first-type location in the image database also includes 4 image frames. During matching, the feature information of each first image frame may be matched against each image frame of each first-type location. That is, for one first location and one first-type location, 4 x 4 = 16 pairwise image matches are performed; when at least the preset number of image-frame pairs match successfully, the matching between the first location and that first-type location is considered to satisfy the preset location matching condition, and the first-type location is a second location. When fewer than the preset number of pairs match, the matching is considered not to satisfy the preset location matching condition, i.e., no matching result meeting the condition is obtained for that location.
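A minimal sketch of this location-level matching rule follows, assuming each location is represented as a list of per-frame feature sets and that frame_match() stands for the single-pair comparison described later in the text; the preset number below is an assumed example value.

```python
PRESET_MATCH_GROUPS = 2  # assumed preset number of successfully matched frame pairs

def location_matches(first_frames, candidate_frames, frame_match):
    matched_pairs = 0
    for f in first_frames:             # e.g. 4 first image frames at the first location
        for g in candidate_frames:     # e.g. 4 image frames at a first-type location
            if frame_match(f, g):      # pairwise comparison (4 x 4 = 16 in total)
                matched_pairs += 1
    # The preset location matching condition: at least the preset number of pairs matched.
    return matched_pairs >= PRESET_MATCH_GROUPS
```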
When it is determined that a matching result meeting the preset location matching condition cannot be obtained, feature information of each first image frame at the first location may be stored in an image database so as to be used for subsequent loop detection.
S140: a location pointing to the same location as the first location is determined from the second locations.
When the number of the second locations is one, the second locations may be directly determined as locations directed to the same location as the first location. When the number of the second locations is greater than one, one location may be selected from the second locations as a location pointing to the same location as the first location, or all of the second locations may be determined as locations pointing to the same location as the first location.
A location pointing to the same place as the first location is understood as follows: the coordinates of the two locations may differ, but both represent the same physical place.
After such a location is determined, the first location may be further corrected based on it.
In an application scenario, in some special places without Global Positioning System (GPS) signals, a vehicle cannot determine a specific position of the vehicle according to the GPS signals, and cannot complete map construction. For example, GPS signals cannot be relied upon to obtain global position information for a vehicle in an underground garage or the like. In this case, the current vehicle pose can only be determined by means of odometer data and the previous vehicle pose. The map constructed by the method has large accumulative error. To reduce the accumulated error, an image database may be constructed that stores image frames captured by the camera during vehicle travel for use in loopback detection.
When the vehicle moves to the first place, if the first matching result meeting the preset place matching condition can be obtained from the image database, the first place can be corrected according to the determined place, the vehicle pose at the first place is corrected, and the accumulated error in the positioning process is reduced.
As can be seen from the above, the present embodiment acquires first image frames in multiple directions around the vehicle at the first location and, when matching against the image frames in the image database, matches a plurality of image frames between every pair of locations. In the related art, by contrast, each location corresponds to a single image frame, so when the shooting angle changes on a revisit the matching may fail. In this embodiment, image frames in multiple directions are collected at each location and embody more comprehensive characteristic information of that location; a matching result meeting the preset location matching condition is determined only when at least a preset number of successfully matched image-frame pairs exist. This makes the matching process more accurate and therefore improves the accuracy of loop detection.
In another embodiment of the present invention, based on the embodiment shown in fig. 1, when there are a plurality of second locations, step S140 may be implemented to determine a location pointing to the same location as the first location from the second locations, and specifically, steps 1a to 3a may be included.
Step 1 a: and acquiring a fifth place indicated by the matching result determined from the image database at the last place before the first place.
In each loop back detection, a plurality of second locations matching the first location may be determined from the image database. Also, there are cases where the frame numbers of image frames matched from the image database at the consecutive locations are consecutive. When the image frame employs keyframes, consecutive frame numbers refer to consecutive keyframe frame numbers. In order to more accurately determine a location pointing to the same location as the first location from among the plurality of second locations, the plurality of second locations may be filtered based on consecutive locations, i.e., based on a sequence of consecutive images.
When the vehicle is in the continuous moving process, the image acquisition equipment can acquire key frames with continuous frame numbers. In the same place, the frame numbers of the first image frames acquired by the image acquisition devices on the same vehicle are the same.
Wherein, the last place can be one or more. I.e. matching results at one or more locations before the first location may be obtained.
Step 2 a: judging whether target image frames continuous to the image frame corresponding to the fifth place exist in the image frames corresponding to the plurality of second places in the image database, and if yes, executing the step 3 a; if not, it may not be processed.
The image frame corresponding to the fifth location refers to the image frame, in the image database, of the location that was determined to point to the same place as the fifth location. There may be one or more fifth locations.
For example, suppose the current location is location 3 and the vehicle has moved in the order location 1 → location 2 → location 3, where locations 1, 2 and 3 are three consecutive locations. At location 3, the candidate locations from the image database that may point to the same place as location 3 are: location a with image frame a, location b with image frame b, and location c with image frame c. Relative to location 3, the previous locations include location 1 and location 2. When the vehicle was at location 2, the location determined from the image database as pointing to the same place as location 2 was location m, whose image frame in the database has frame number m; when the vehicle was at location 1, the corresponding location was location n, whose image frame has frame number n. If, among the image frames corresponding to the second locations of location 3, the frame numbers of image a, image frame m and image frame n are consecutive, then image a may be determined as the target image frame.
When loop detection is performed for each location, if the frame numbers of the image frames in the image database corresponding to several consecutive locations are continuous, the location in the loop detection can be determined more accurately.
Step 3 a: and determining a second place corresponding to the target image frame as a place pointing to the same place as the first place.
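The following is a minimal sketch of the sequence-continuity check in steps 1a to 3a, assuming each candidate second location is stored as a (location_id, keyframe_number) pair, that the keyframe number matched at the previous (fifth) location is known, and that "continuous" means the frame number immediately follows it; all names are illustrative.

```python
def pick_location_by_sequence(second_locations, fifth_keyframe_number):
    """Return the second location whose image frame is consecutive with the frame
    matched at the previous (fifth) location, or None if no such frame exists."""
    for location_id, keyframe_number in second_locations:
        if keyframe_number == fifth_keyframe_number + 1:
            # Target image frame found: this second location is taken to point to
            # the same place as the first location.
            return location_id
    return None
```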
In summary, the present embodiment performs the determination based on the image sequence, and further determines a location pointing to the same location as the first location. Compared with matching based on a single frame, the embodiment can avoid wrong matching caused by the fact that single frame image content is similar in different places, and can improve accuracy of loop detection.
In another embodiment of the present invention, based on the embodiment shown in fig. 1, in step S130, the step of respectively matching each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database according to the feature information corresponding to each image frame may specifically include:
For each first image frame and each image frame in the image database, each piece of feature information of the first image frame is compared with each piece of feature information of the database image frame to obtain the feature-information coincidence degree between the two frames.
When the coincidence degree is greater than a preset coincidence-degree threshold, it is determined that the first image frame is successfully matched with the image frame; otherwise, the matching is determined to have failed.
The preset coincidence-degree threshold may be an absolute count or a proportional value, and the computed coincidence degree may likewise be a count or a proportional value.
For example, the number of identical pieces of feature information may be obtained by comparing each piece of feature information of the first image frame with each piece of feature information of the database image frame, and the ratio of this number to the total number of pieces of feature information of the first image frame may be used as the proportional value.
For example, suppose a first image frame contains 100 key points, i.e. 100 pieces of feature information, and a certain image frame in the image database contains 120 pieces of feature information. If the comparison finds 90 identical pieces, the coincidence degree between the two frames may be taken as 90 or as 90/100.
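A minimal sketch of this coincidence-degree comparison follows, assuming the feature information of a frame is a collection of word IDs; the overlap threshold of 0.8 is an assumed value, not one given in the patent.

```python
from collections import Counter

def frame_match(first_frame_words, db_frame_words, overlap_threshold=0.8):
    first_counts = Counter(first_frame_words)
    db_counts = Counter(db_frame_words)
    # Number of pieces of feature information found in both frames.
    same = sum(min(first_counts[w], db_counts[w]) for w in first_counts)
    # Express the coincidence degree relative to the first frame's feature count,
    # e.g. 90 identical words out of 100 gives 90/100 = 0.9.
    overlap = same / max(len(first_frame_words), 1)
    return overlap > overlap_threshold
```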
In summary, the present embodiment provides a specific implementation manner when each first image frame is matched with each image frame in the image database, so that the matching efficiency can be improved.
In another embodiment of the present invention, based on the embodiment shown in fig. 1, the step of determining the feature information of each keypoint in the first image frame in step S120 may specifically include steps 1b and 2 b.
Step 1 b: and determining a descriptor corresponding to each key point in the first image frame according to the key point and pixel points at preset positions around the key point.
The descriptor is a multidimensional feature vector used to describe the distribution of the pixels around a key point. When determining the descriptor of a key point P, N pixel-point pairs may be selected in a certain pattern around P, and the comparison results of the N pairs are combined to form the descriptor, where N is an integer. Specifically, the descriptor of a key point may be computed with the BRIEF (Binary Robust Independent Elementary Features) algorithm.
And step 2 b: determining words corresponding to the descriptors of each key point in the first image frame from a preset descriptor dictionary, and taking the words corresponding to all the key points in the first image frame as the feature information of the first image frame.
The descriptor dictionary is used for storing the corresponding relation between the descriptors and the words. The word is smaller than the descriptor in data size, and the descriptor is converted into a corresponding word, so that the calculation amount in matching can be reduced.
The descriptor dictionary may be pre-established in the following way: acquiring a large number of scene images in the underground garage, extracting key points in the scene images, and determining a descriptor of each key point according to each key point and pixel points at preset positions around the key point; after obtaining a plurality of descriptors of the key points, the descriptors that are close to each other in the description space may be clustered to correspond to the same word according to the position of each descriptor in the description space.
Referring to fig. 2, after a large number of scene images are collected, a large number of descriptors are extracted from them and clustered in the description space, and the descriptors belonging to the same cluster are set to correspond to the same word, so that a descriptor dictionary is obtained in which each cluster of descriptors corresponds to one word. In fig. 2, each black dot represents a descriptor.
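The sketch below illustrates building such a dictionary offline and converting a frame's descriptors into words, under assumptions not taken from the patent: it uses ORB (whose descriptor is a BRIEF variant) and a flat k-means vocabulary, whereas a production system would typically use a hierarchical bag-of-words vocabulary with Hamming-distance clustering.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(scene_images, num_words=1000):
    orb = cv2.ORB_create()
    all_descriptors = []
    for img in scene_images:                      # many garage scene images
        _, desc = orb.detectAndCompute(img, None)
        if desc is not None:
            all_descriptors.append(desc.astype(np.float32))
    # Cluster descriptors that are close in the description space; each cluster
    # centre then corresponds to one word of the dictionary.
    return KMeans(n_clusters=num_words, n_init=4).fit(np.vstack(all_descriptors))

def frame_to_words(gray_frame, dictionary):
    orb = cv2.ORB_create()
    _, desc = orb.detectAndCompute(gray_frame, None)
    if desc is None:
        return []
    # The words of all key points form the feature information of the frame.
    return dictionary.predict(desc.astype(np.float32)).tolist()
```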
In summary, in the present embodiment, descriptors of keypoints are determined, and each descriptor is converted into a word, and all words of each image frame constitute feature information of the image frame.
In another embodiment of the invention, the embodiment shown in fig. 3 can be obtained on the basis of the embodiment shown in fig. 1. In the embodiment shown in fig. 3, the image database is further configured to store feature information of image frames at a plurality of second-type locations, each image frame at a second-type location comprising an image frame of a direction around the vehicle at the location. The method comprises the following steps.
S310: first image frames of a plurality of directions around a vehicle captured at a first location by an image capture device are acquired.
S320: and extracting key points in each first image frame according to the pixel value distribution of pixel points in the image frames, and determining the characteristic information of each key point in the first image frames.
In this embodiment, the above steps S310 and S320 are the same as steps S110 and S120 in the embodiment shown in fig. 1, respectively, and the detailed description can refer to the embodiment shown in fig. 1.
S330: and according to the characteristic information corresponding to each image frame, matching each first image frame with one image frame at each place in the image database. And when at least one first image frame exists in each first image frame and is successfully matched with the image frames in the image database, determining to obtain a second matching result meeting the preset location matching condition.
The third location indicated by the second matching result is the location in the image database corresponding to the successfully matched image frame.
In this embodiment, each location in the image database corresponds to a single image frame. During matching, for the first location and each location in the image database, each first image frame is matched with the image frame corresponding to that database location; when at least one first image frame matches successfully, the first location and that database location are considered to satisfy the preset location matching condition. This constitutes one location-matching process; the first location may be matched against all locations in the image database in this way.
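A minimal sketch of this compatibility rule, assuming frame_match() is the single-pair comparison sketched earlier and that each database location stores exactly one image frame; the function name is illustrative.

```python
def location_matches_single_view(first_frames, db_frame, frame_match):
    # The first location matches this database location as soon as any one of
    # its first image frames matches the location's single stored image frame.
    return any(frame_match(f, db_frame) for f in first_frames)
```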
When it is determined that a matching result satisfying the preset location matching condition cannot be obtained, the correspondence relationship between the first location, the first image frame, and the feature information may be stored in the image database for use in subsequent matching.
S340: from the third locations, locations pointing to the same location as the first location are determined.
For a specific implementation of this step, reference may be made to the description of step S140, which is not described herein again.
In summary, in this embodiment, when each location in the image database corresponds to one image frame, each first image frame corresponding to the first location may be respectively matched with the image frame corresponding to each location in the image database, and when at least one first image frame is successfully matched with the image frame in the image database, it is determined that a second matching result meeting a preset location matching condition is obtained. Therefore, the method and the device can take account of the condition of the single-view image frame in the image database and improve the accuracy of loop detection.
In another embodiment of the invention, the embodiment shown in fig. 4 can be obtained on the basis of the embodiment shown in fig. 1. The method of the embodiment shown in fig. 4 may include the following steps.
S410: first image frames of a plurality of directions around a vehicle captured at a first location by an image capture device are acquired. For a detailed description of this step, refer to step S110, which is not described herein again.
S420: and splicing the first image frames to obtain a first panoramic image frame at a first place.
When splicing, the first image frames may be stitched in a preset positional order determined by the preset positional relationship of the image acquisition devices, so that a first panoramic image frame covering the environment around the first location is obtained.
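As a minimal sketch of step S420, assuming the four grayscale frames share the same height and are simply placed side by side in an assumed preset order; a real stitcher would also undistort the images and blend the overlapping regions.

```python
import cv2

PRESET_ORDER = ["front", "right", "rear", "left"]  # assumed positional order

def build_panorama(first_frames):
    # Concatenate the grayscale first image frames in the preset order to obtain
    # the first panoramic image frame for the first location.
    return cv2.hconcat([first_frames[d] for d in PRESET_ORDER])
```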
S430: and extracting key points in the first panoramic image frame according to the pixel value distribution of the pixel points in the image frame, and determining the characteristic information of each key point in the first panoramic image frame.
For a specific embodiment of determining the feature information of the first panoramic image frame in this step, reference may be made to the embodiment of determining the feature information of the first image frame in step S120, and details are not repeated here.
S440: and matching the first panoramic image frame with the panoramic image frames corresponding to a plurality of places in the image database according to the characteristic information corresponding to each panoramic image frame.
When the first panoramic image frame is successfully matched with a panoramic image frame in the image database, it is determined that a third matching result meeting the preset location matching condition is obtained, wherein the fourth location indicated by the third matching result is the location in the image database corresponding to the successfully matched panoramic image frame.
Wherein the image database is used for storing feature information of panoramic image frames at a plurality of locations.
In this embodiment, each location corresponds to one panoramic image frame. When the matching succeeds, it is determined that a third matching result meeting the preset location matching condition is obtained.
S450: and determining the positions pointing to the same position as the first position from the fourth positions. The detailed description of this step can refer to the description in step S140.
In summary, in this embodiment, the first image frames at the first locations are spliced to obtain the first panoramic image frame, and during matching, the first panoramic image frame is respectively matched with the panoramic image frame at each location in the image database. The first image frames at the first place are spliced, so that the matching times between the panoramic image frames and the panoramic image frames in the image database can be reduced, the matching process is shortened, and the matching efficiency is improved.
In another embodiment of the present invention, when the feature information of each keypoint in the first panoramic image frame is determined in S430, steps 1c and 2c may be specifically included.
Step 1 c: and aiming at each key point in the first panoramic image frame, determining a descriptor corresponding to the key point according to the key point and pixel points at preset positions around the key point.
And step 2 c: determining words corresponding to the descriptors of each key point in the first panoramic image frame from a preset descriptor dictionary, and taking the words corresponding to all the key points in the first panoramic image frame as the feature information of the first panoramic image frame.
For specific descriptions of steps 1c and 2c in this embodiment, refer to steps 1b and 2b, which are not described herein again.
Fig. 5 is a vehicle-based loop detection apparatus according to an embodiment of the present invention. The apparatus corresponds to the method embodiment shown in fig. 1. The device includes:
an acquisition module 510 configured to acquire a first image frame of a plurality of directions around a vehicle acquired by an image acquisition device at a first location;
a first extraction module 520, configured to extract key points in each first image frame according to the pixel value distribution of the pixel points in the image frames, and determine feature information of each key point in the first image frame;
a first matching module 530, configured to match each first image frame with the plurality of image frames corresponding to a plurality of locations in an image database according to the feature information corresponding to each image frame, and to determine, when at least a preset number of successful matches exist between the first image frames and the image frames corresponding to a second location in the image database, that a first matching result satisfying a preset location matching condition is obtained; wherein the location indicated by the first matching result is the second location; the image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames in a plurality of directions around the vehicle at that location;
a first determining module 540 configured to determine locations pointing to the same location as the first location from the second locations.
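To illustrate the location-level condition used by the first matching module 530: a database location becomes a candidate second location only when at least a preset number of the current direction frames each match some stored frame at that location. The sketch below takes the per-frame matching test as a callable so it stays self-contained; the preset number and all names are assumptions for illustration.

```python
def location_is_loop_candidate(current_frames, db_location_frames,
                               frame_pair_matches, min_matched_frames=3):
    """Return True when at least `min_matched_frames` of the current direction
    frames each match some stored frame at this database location.

    `current_frames` / `db_location_frames`: per-frame feature information
    (e.g. word sets); `frame_pair_matches`: per-frame matching predicate such
    as a coincidence-degree test."""
    matched = sum(
        1 for cur in current_frames
        if any(frame_pair_matches(cur, stored) for stored in db_location_frames)
    )
    return matched >= min_matched_frames

# Example (hypothetical names): collect all candidate second locations.
# second_locations = [loc for loc, frames in image_database.items()
#                     if location_is_loop_candidate(current_frames, frames, frame_pair_matches)]
```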
In another embodiment of the present invention, based on the embodiment shown in fig. 5, when the second location is multiple, the first determining module 540 is specifically configured to:
acquire the fifth location indicated by the matching result determined from the image database at the location immediately preceding the first location;
judge whether, among the image frames corresponding to the plurality of second locations in the image database, there is a target image frame that is continuous with the image frame corresponding to the fifth location;
and if so, determine the second location corresponding to the target image frame as the location pointing to the same place as the first location.
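One way to read the continuity check: the database records frames along the earlier trajectory with consecutive indices, and among several candidate second locations only the one whose stored frame directly follows the frame matched at the previous (fifth) location is accepted. A sketch under that assumption (the patent only requires some notion of continuous image frames):

```python
def pick_consistent_location(second_locations, frame_index_of, fifth_location):
    """Return the candidate whose stored frame index immediately follows the frame
    index of the previously matched (fifth) location, or None if no candidate is
    temporally continuous with it."""
    if fifth_location is None:
        return None
    prev_index = frame_index_of[fifth_location]
    for candidate in second_locations:
        if frame_index_of[candidate] == prev_index + 1:
            return candidate
    return None

if __name__ == "__main__":
    frame_index_of = {"loc_A": 17, "loc_B": 42, "loc_C": 18}   # hypothetical database indices
    print(pick_consistent_location(["loc_B", "loc_C"], frame_index_of, "loc_A"))  # -> loc_C
```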
In another embodiment of the present invention, based on the embodiment shown in fig. 5, the first matching module 530 is specifically configured to:
for each first image frame and each image frame in the image database, compare each piece of feature information of the first image frame with each piece of feature information of the stored image frame to obtain the feature-information coincidence degree between the two frames; and when the coincidence degree is greater than a preset coincidence-degree threshold, determine that the first image frame and the stored image frame are successfully matched.
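The coincidence degree can be read as the fraction of the current frame's feature items (for example, visual words) that also occur in the stored frame. A minimal sketch; the set-based definition and the 0.6 threshold are illustrative assumptions, not values from the patent.

```python
def coincidence_degree(current_words, stored_words):
    """Fraction of the current frame's feature items that also appear in the
    stored frame's feature items."""
    current, stored = set(current_words), set(stored_words)
    if not current:
        return 0.0
    return len(current & stored) / len(current)

def frame_pair_matches(current_words, stored_words, degree_thresh=0.6):
    """A frame pair matches when the coincidence degree exceeds a preset threshold."""
    return coincidence_degree(current_words, stored_words) > degree_thresh

if __name__ == "__main__":
    print(frame_pair_matches({3, 7, 12, 40, 55}, {3, 7, 12, 40, 90}))  # 0.8 > 0.6 -> True
```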
In another embodiment of the present invention, based on the embodiment shown in fig. 5, the apparatus may further include:
a storage module (not shown in the figure) configured to store feature information of each first image frame at the first location into the image database when it is determined that a matching result satisfying the preset location matching condition cannot be obtained.
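The storage module's behaviour is simple bookkeeping: when no stored location satisfies the matching condition, the current location is treated as new and its per-frame feature information is added to the database so that it can serve as a loop-closure candidate later. A sketch with a dict-of-lists database layout chosen purely for illustration:

```python
def update_database(image_database, location_id, frame_feature_infos, matched_location=None):
    """Store the feature information of each first image frame at `location_id`
    when no matching result was obtained; otherwise leave the database unchanged.

    `image_database` maps a location id to the list of per-frame word sets
    recorded there (an illustrative layout, not the patent's data model)."""
    if matched_location is None:
        image_database[location_id] = list(frame_feature_infos)
    return image_database

if __name__ == "__main__":
    db = {}
    update_database(db, "loc_001", [{1, 2, 3}, {4, 5}, {6}, {7, 8}])
    print(list(db.keys()))  # -> ['loc_001']
```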
In another embodiment of the present invention, based on the embodiment shown in fig. 5, the determining, by the first extraction module 520, the feature information of each keypoint in the first image frame includes:
determining, for each key point in the first image frame, the descriptor corresponding to the key point according to the key point and the pixels at preset positions around it;
determining words corresponding to the descriptors of each key point in the first image frame from a preset descriptor dictionary, and taking the words corresponding to all key points in the first image frame as the feature information of the first image frame.
In another embodiment of the invention, the embodiment shown in Fig. 6 can be obtained based on the embodiment shown in Fig. 5. In this embodiment, the image database is further configured to store feature information of image frames at a plurality of second-type locations, and the image frame at each second-type location includes an image frame in one direction around the vehicle at that location. This embodiment corresponds to the method embodiment shown in Fig. 3. The apparatus includes: an obtaining module 610, a first extracting module 620, a second matching module 630 and a second determining module 640. The obtaining module 610 and the first extracting module 620 are the same as the obtaining module 510 and the first extracting module 520 in the embodiment shown in Fig. 5, respectively, and their detailed description is omitted here.
a second matching module 630, configured to, after the feature information of each key point in the first image frames is determined, match each first image frame with the one image frame at each location in the image database according to the feature information corresponding to each image frame, and to determine, when at least one of the first image frames is successfully matched with an image frame in the image database, that a second matching result satisfying the preset location matching condition is obtained; wherein the third location indicated by the second matching result is the location corresponding to the successfully matched image frame in the image database;
A second determining module 640 configured to determine locations pointing to the same location as the first location from the third locations.
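For second-type locations, which hold a single stored frame each, the location-matching condition relaxes to "at least one of the current direction frames matches that single stored frame". A short sketch mirroring the earlier criterion; the names remain illustrative assumptions.

```python
def single_frame_location_matches(current_frames, stored_frame, frame_pair_matches):
    """Second-type location test: the location becomes a candidate third location
    when at least one current direction frame matches its single stored frame."""
    return any(frame_pair_matches(cur, stored_frame) for cur in current_frames)

# Example (hypothetical names):
# third_locations = [loc for loc, frame in single_frame_database.items()
#                    if single_frame_location_matches(current_frames, frame, frame_pair_matches)]
```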
In another embodiment of the present invention, the embodiment of the apparatus shown in fig. 7 can be obtained based on the embodiment shown in fig. 5, and the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 4. The device also includes: an obtaining module 710, a splicing module 720, a second extracting module 730, a third matching module 740, and a third determining module 750. The obtaining module 710 is the same as the obtaining module 510 in the embodiment shown in fig. 5, and detailed description thereof is omitted here.
A stitching module 720, configured to, after acquiring each first image frame acquired by the image acquisition device at the first location, stitch each first image frame to obtain a first panoramic image frame at the first location;
a second extraction module 730, configured to extract key points in the first panoramic image frame according to the pixel value distribution of pixel points in the image frame, and determine feature information of each key point in the first panoramic image frame;
a third matching module 740, configured to match the first panoramic image frame with the panoramic image frames corresponding to a plurality of locations in the image database according to the feature information corresponding to each panoramic image frame, and to determine, when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, that a third matching result satisfying the preset location matching condition is obtained; wherein the fourth location indicated by the third matching result is the location corresponding to the successfully matched panoramic image frame in the image database; the image database is used for storing feature information of panoramic image frames at a plurality of locations;
A third determining module 750 configured to determine locations pointing to the same location as the first location from the fourth locations.
The above apparatus embodiments correspond to the method embodiments and achieve the same technical effects; for a detailed description, refer to the corresponding method embodiments, which are not repeated here.
Fig. 8 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention. The vehicle-mounted terminal comprises: a processor 810 and an image acquisition device 820. The processor 810 includes: the device comprises an acquisition module 11, a first extraction module 12, a first matching module 13 and a first determination module 14.
an acquisition module 11, configured to acquire first image frames of multiple directions around the vehicle, acquired at a first location by the image acquisition device 820;
the first extraction module 12 is configured to extract key points in each first image frame according to pixel value distribution of pixels in the image frame, and determine feature information of each key point in the first image frame;
the first matching module 13 is configured to match each first image frame with the plurality of image frames corresponding to a plurality of locations in an image database according to the feature information corresponding to each image frame, and to determine, when at least a preset number of successful matches exist between the first image frames and the image frames corresponding to a second location in the image database, that a first matching result satisfying a preset location matching condition is obtained; wherein the location indicated by the first matching result is the second location; the image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames in a plurality of directions around the vehicle at that location;
A first determining module 14, configured to determine a location pointing to the same location as the first location from the second locations.
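To show how the terminal's modules fit together, the following self-contained sketch wires acquisition results, extraction, matching and determination into one pass per visited location. The `LoopDetector` class, its callables and the toy demo data are all assumptions for illustration and do not reflect the patent's actual interfaces.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Set

FrameFeatures = Set[int]   # word ids of one direction frame (illustrative representation)

@dataclass
class LoopDetector:
    """Minimal stand-in for the vehicle-mounted terminal's processor: it wires the
    extraction, matching and determination steps together and stores new places."""
    extract: Callable[[object], FrameFeatures]                                   # stands for the extraction module
    match_location: Callable[[List[FrameFeatures], List[FrameFeatures]], bool]   # location-level matching test
    database: Dict[str, List[FrameFeatures]]                                     # location id -> per-frame feature info

    def detect(self, direction_images: List[object], location_id: str) -> Optional[str]:
        features = [self.extract(img) for img in direction_images]   # extraction step
        candidates = [loc for loc, stored in self.database.items()   # matching step
                      if self.match_location(features, stored)]
        if not candidates:                                           # no loop: remember this place
            self.database[location_id] = features
            return None
        return candidates[0]                                         # determination step (single-candidate case)

if __name__ == "__main__":
    detector = LoopDetector(
        extract=lambda img: set(img),                                # toy extractor for the demo
        match_location=lambda cur, stored: any(len(c & s) >= 2 for c in cur for s in stored),
        database={},
    )
    print(detector.detect([[1, 2, 3], [4, 5]], "loc_A"))             # None: first visit, stored
    print(detector.detect([[1, 2, 9], [7, 8]], "loc_B"))             # loc_A: loop closure detected
```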
In another embodiment of the present invention, based on the embodiment shown in fig. 8, the image database is further configured to store feature information of image frames at a plurality of second-type locations, each image frame at a second-type location includes an image frame in one direction around the vehicle at the location; the processor 810 further includes:
a second matching module (not shown in the figure), configured to, after the feature information of each key point in the first image frames is determined, match each first image frame with the one image frame at each location in the image database according to the feature information corresponding to each image frame, and to determine, when at least one of the first image frames is successfully matched with an image frame in the image database, that a second matching result satisfying the preset location matching condition is obtained; wherein the third location indicated by the second matching result is the location corresponding to the successfully matched image frame in the image database;
and a second determining module (not shown in the figure) for determining a location pointing to the same location as the first location from the third locations.
In another embodiment of the present invention, based on the embodiment shown in fig. 8, the processor 810 further includes:
the image acquisition device comprises a splicing module (not shown in the figure) for splicing each first image frame acquired by the image acquisition device at a first place to obtain a first panoramic image frame at the first place;
a second extraction module (not shown in the figure), configured to extract, according to the pixel value distribution of the pixel points in the image frame, the key points in the first panoramic image frame, and determine feature information of each key point in the first panoramic image frame;
a third matching module (not shown in the figure), configured to match the first panoramic image frame with the panoramic image frames corresponding to a plurality of locations in the image database according to the feature information corresponding to each panoramic image frame, and to determine, when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, that a third matching result satisfying the preset location matching condition is obtained; wherein the fourth location indicated by the third matching result is the location corresponding to the successfully matched panoramic image frame in the image database; the image database is used for storing feature information of panoramic image frames at a plurality of locations;
And a third determining module (not shown in the figure) for determining the location pointing to the same location as the first location from the fourth locations.
In another embodiment of the present invention, based on the embodiment shown in fig. 8, when the second location is multiple, the first determining module 14 is specifically configured to:
acquire the fifth location indicated by the matching result determined from the image database at the location immediately preceding the first location;
judge whether, among the image frames corresponding to the second locations in the image database, there is a target image frame that is continuous with the image frame corresponding to the fifth location;
and if so, determine the second location corresponding to the target image frame as the location pointing to the same place as the first location.
In another embodiment of the present invention, based on the embodiment shown in fig. 8, the first matching module 13 is specifically configured to:
for each first image frame and each image frame in the image database, compare each piece of feature information of the first image frame with each piece of feature information of the stored image frame to obtain the feature-information coincidence degree between the two frames; and when the coincidence degree is greater than a preset coincidence-degree threshold, determine that the first image frame and the stored image frame are successfully matched.
In another embodiment of the present invention, based on the embodiment shown in fig. 8, the processor 810 further includes:
a storage module (not shown in the figure) for storing the feature information of each first image frame at the first location into the image database when it is determined that the matching result satisfying the preset location matching condition cannot be obtained.
In another embodiment of the present invention, based on the embodiment shown in fig. 8, the determining, by the first extraction module 12, the feature information of each keypoint in the first image frame includes:
determining a descriptor corresponding to each key point in the first image frame according to the key point and pixel points at preset positions around the key point;
determining words corresponding to the descriptors of each key point in the first image frame from a preset descriptor dictionary, and taking the words corresponding to all key points in the first image frame as the feature information of the first image frame.
The terminal embodiment and the method embodiment shown in Fig. 1 are based on the same inventive concept, and related details may be cross-referenced. The terminal embodiment corresponds to the method embodiment and achieves the same technical effects; for a detailed description, refer to the method embodiment.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that the modules in the devices of the embodiments may be distributed in the devices as described in the embodiments, or, with corresponding changes, may be located in one or more devices different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A vehicle-based loop detection method, characterized by comprising the following steps:
acquiring first image frames of a plurality of directions around the vehicle, acquired by an image acquisition device at a first location;
Extracting key points in each first image frame according to pixel value distribution of pixel points in the image frames, and determining characteristic information of each key point in the first image frames;
matching each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database according to the feature information corresponding to each image frame; when at least a preset number of successful matches exist between the first image frames and the image frames corresponding to a second location in the image database, determining that a first matching result satisfying a preset location matching condition is obtained; wherein the location indicated by the first matching result is the second location; the image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames in a plurality of directions around the vehicle at that location;
determining locations pointing to the same location as the first location from the second locations.
2. The method of claim 1, wherein the image database is further configured to store feature information of image frames at a plurality of second-type locations, the image frame at each second-type location comprising an image frame in one direction around the vehicle at that location; and after determining the feature information of each key point in the first image frames, the method further comprises:
matching each first image frame with the one image frame at each location in the image database according to the feature information corresponding to each image frame; when at least one of the first image frames is successfully matched with an image frame in the image database, determining that a second matching result satisfying a preset location matching condition is obtained; wherein the third location indicated by the second matching result is the location corresponding to the successfully matched image frame in the image database;
determining locations pointing to the same location as the first location from the third locations.
3. The method of any of claims 1-2, wherein after acquiring each first image frame acquired by the image acquisition device at the first location, the method further comprises:
stitching the first image frames to obtain a first panoramic image frame at the first location;
extracting key points in the first panoramic image frame according to the pixel value distribution of pixel points in the image frame, and determining the characteristic information of each key point in the first panoramic image frame;
matching the first panoramic image frame with the panoramic image frames corresponding to a plurality of locations in an image database according to the feature information corresponding to each panoramic image frame; when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, determining that a third matching result satisfying a preset location matching condition is obtained, wherein the fourth location indicated by the third matching result is the location corresponding to the successfully matched panoramic image frame in the image database; and the image database is used for storing feature information of panoramic image frames at a plurality of locations;
Determining a location pointing to the same location as the first location from the fourth locations.
4. The method according to any one of claims 1 to 3, wherein, when there are a plurality of second locations, the step of determining, from the second locations, a location pointing to the same location as the first location comprises:
acquiring the fifth location indicated by the matching result determined from the image database at the location immediately preceding the first location;
judging whether, among the image frames corresponding to the plurality of second locations in the image database, there is a target image frame continuous with the image frame corresponding to the fifth location;
and if so, determining the second location corresponding to the target image frame as the location pointing to the same location as the first location.
5. The method of claim 1, wherein the step of matching each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database according to the feature information corresponding to each image frame comprises:
for each first image frame and each image frame in the image database, comparing each piece of feature information of the first image frame with each piece of feature information of the stored image frame to obtain the feature-information coincidence degree between the two frames; and when the coincidence degree is greater than a preset coincidence-degree threshold, determining that the first image frame and the stored image frame are successfully matched.
6. The method of claim 1, wherein, when it is determined that a matching result satisfying the preset location matching condition cannot be obtained, the method further comprises:
and storing the characteristic information of each first image frame at the first position into the image database.
7. The method of claim 1, wherein the step of determining feature information for each keypoint in the first image frame comprises:
determining a descriptor corresponding to each key point in the first image frame according to the key point and pixel points at preset positions around the key point;
determining words corresponding to the descriptors of each key point in the first image frame from a preset descriptor dictionary, and taking the words corresponding to all key points in the first image frame as the feature information of the first image frame.
8. A vehicle-based loop detection apparatus, comprising:
an acquisition module configured to acquire a first image frame of a plurality of directions around a vehicle acquired at a first location by an image acquisition device;
the first extraction module is configured to extract key points in each first image frame according to pixel value distribution of pixel points in the image frames and determine characteristic information of each key point in the first image frames;
the first matching module is configured to match each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database according to the feature information corresponding to each image frame, and to determine, when at least a preset number of successful matches exist between the first image frames and the image frames corresponding to a second location in the image database, that a first matching result satisfying a preset location matching condition is obtained; wherein the location indicated by the first matching result is the second location; the image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames in a plurality of directions around the vehicle at that location;
a first determination module configured to determine locations pointing to the same location as the first location from the second locations.
9. The apparatus of claim 8, wherein the apparatus further comprises:
a stitching module, configured to stitch the first image frames acquired by the image acquisition device at the first location to obtain a first panoramic image frame at the first location;
The second extraction module is configured to extract key points in the first panoramic image frame according to the pixel value distribution of pixel points in the image frame and determine the characteristic information of each key point in the first panoramic image frame;
a third matching module, configured to match the first panoramic image frame with the panoramic image frames corresponding to a plurality of locations in an image database according to the feature information corresponding to each panoramic image frame, and to determine, when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, that a third matching result satisfying a preset location matching condition is obtained, wherein the fourth location indicated by the third matching result is the location corresponding to the successfully matched panoramic image frame in the image database; and the image database is used for storing feature information of panoramic image frames at a plurality of locations;
a third determination module configured to determine locations pointing to the same location as the first location from among the fourth locations.
10. A vehicle-mounted terminal characterized by comprising: a processor and an image acquisition device; the processor includes: the device comprises an acquisition module, a first extraction module, a first matching module and a first determination module;
the acquisition module is used for acquiring first image frames of multiple directions around the vehicle, acquired by the image acquisition device at a first location;
the first extraction module is used for extracting key points in each first image frame according to the pixel value distribution of pixel points in the image frames and determining the characteristic information of each key point in the first image frames;
the first matching module is used for matching each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database according to the feature information corresponding to each image frame, and for determining, when at least a preset number of successful matches exist between the first image frames and the image frames corresponding to a second location in the image database, that a first matching result satisfying a preset location matching condition is obtained; wherein the location indicated by the first matching result is the second location; the image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames in a plurality of directions around the vehicle at that location;
the first determining module is configured to determine a location pointing to the same location as the first location from the second locations.
CN201910346797.3A 2019-04-27 2019-04-27 Vehicle-based loop detection method and device and vehicle-mounted terminal Pending CN111860051A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910346797.3A CN111860051A (en) 2019-04-27 2019-04-27 Vehicle-based loop detection method and device and vehicle-mounted terminal


Publications (1)

Publication Number Publication Date
CN111860051A true CN111860051A (en) 2020-10-30

Family

ID=72952463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910346797.3A Pending CN111860051A (en) 2019-04-27 2019-04-27 Vehicle-based loop detection method and device and vehicle-mounted terminal

Country Status (1)

Country Link
CN (1) CN111860051A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110243379A1 (en) * 2010-03-31 2011-10-06 Aisin Aw Co., Ltd. Vehicle position detection system
US20140300686A1 (en) * 2013-03-15 2014-10-09 Tourwrist, Inc. Systems and methods for tracking camera orientation and mapping frames onto a panoramic canvas
CN106875442A (en) * 2016-12-26 2017-06-20 上海蔚来汽车有限公司 Vehicle positioning method based on image feature data
CN107507230A (en) * 2017-08-31 2017-12-22 成都观界创宇科技有限公司 Method for tracking target and panorama camera applied to panoramic picture
CN109034237A (en) * 2018-07-20 2018-12-18 杭州电子科技大学 Winding detection method based on convolutional Neural metanetwork road sign and sequence search
CN109307508A (en) * 2018-08-29 2019-02-05 中国科学院合肥物质科学研究院 A kind of panorama inertial navigation SLAM method based on more key frames


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Yunfan: "Implementation of loop closure detection based on CUDA" (基于CUDA的回环检测实现), China Excellent Master's Theses Full-text Database, Information Science and Technology, no. 2, pages 140-700 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591865A (en) * 2021-07-28 2021-11-02 深圳甲壳虫智能有限公司 Loop detection method and device and electronic equipment
CN113591865B (en) * 2021-07-28 2024-03-26 深圳甲壳虫智能有限公司 Loop detection method and device and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination