CN111860050A - Loop detection method and device based on image frame and vehicle-mounted terminal

Info

Publication number
CN111860050A
Authority
CN
China
Prior art keywords
image
image frame
location
pixel
place
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910346794.XA
Other languages
Chinese (zh)
Other versions
CN111860050B (en)
Inventor
李天威
徐抗
童哲航
刘一龙
谢国富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Momenta Suzhou Technology Co Ltd
Original Assignee
Beijing Chusudu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chusudu Technology Co ltd filed Critical Beijing Chusudu Technology Co ltd
Priority to CN201910346794.XA priority Critical patent/CN111860050B/en
Publication of CN111860050A publication Critical patent/CN111860050A/en
Application granted granted Critical
Publication of CN111860050B publication Critical patent/CN111860050B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/587 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a loop detection method and device based on image frames, and a vehicle-mounted terminal. The method comprises the following steps: acquiring first image frames of a plurality of directions around a vehicle, collected by an image acquisition device at a first location; for each pixel point in each first image frame, updating the pixel value of the pixel point according to its surrounding pixel points, and taking all the updated pixel points in the first image frame as the feature information of the first image frame; and, according to the feature information of each image frame, matching each first image frame with the image frames corresponding to a plurality of locations in an image database, and determining, from the second locations indicated by the first matching result, a location that points to the same place as the first location. By applying the scheme provided by the embodiment of the invention, the accuracy of loop detection can be improved.

Description

Loop detection method and device based on image frame and vehicle-mounted terminal
Technical Field
The invention relates to the technical field of automatic driving, in particular to a loop detection method and device based on image frames and a vehicle-mounted terminal.
Background
Loop detection means that, when a movable object equipped with a camera moves to a certain position, the currently acquired image is matched against historical images; if the matching succeeds, the movable object is considered to have returned to the position corresponding to the successfully matched historical image. The movable object may be, for example, a vehicle or a robot. Loop detection thus associates the location corresponding to the current image with the location corresponding to the historical image.
However, when the vehicle moves to a certain position again, the angle of the image captured by the camera on the vehicle may have changed, so matching against the historical images may fail. Matching is usually performed according to key points in the images, but when the ambient light or the shooting angle changes, the key points in the image change accordingly, and matching based on key points may also fail. Based on the above analysis, the accuracy of existing loop detection is not high enough.
Disclosure of Invention
The invention provides a loop detection method and device based on image frames and a vehicle-mounted terminal, and aims to improve the accuracy of loop detection. The specific technical scheme is as follows.
In a first aspect, an embodiment of the present invention discloses a loop detection method based on an image frame, including:
acquiring first image frames of a plurality of directions around the vehicle, acquired by an image acquisition device at a first location;
for each pixel point in each first image frame, updating the pixel value of the pixel point according to the surrounding pixel points of the pixel point, and taking all the updated pixel points in the first image frame as the characteristic information of the first image frame;
according to the feature information of each image frame, matching each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database; when at least a preset number of successfully matched image-frame pairs exist between the first image frames and the plurality of image frames corresponding to a second location in the image database, determining that a first matching result satisfying a preset location matching condition is obtained; wherein the location indicated by the first matching result is the second location; the image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames of a plurality of directions around a vehicle at that location;
determining, from the second locations, a location that points to the same place as the first location.
Optionally, the image database is further configured to store feature information of image frames at a plurality of second-type locations, where the image frame at each second-type location comprises an image frame of one direction around the vehicle at that location; after the feature information of the first image frames is obtained, the method further comprises:
according to the feature information of each image frame, matching each first image frame with the single image frame at each location in the image database; when at least one first image frame is successfully matched with an image frame in the image database, determining that a second matching result satisfying a preset location matching condition is obtained; wherein the third location indicated by the second matching result is the location in the image database corresponding to the successfully matched image frame;
determining, from the third locations, a location that points to the same place as the first location.
Optionally, after acquiring each first image frame acquired by the image acquisition device at the first location, the method further includes:
splicing the first image frames to obtain a first panoramic image frame at the first location;
for each pixel point in the first panoramic image frame, updating the pixel value of the pixel point according to its surrounding pixel points, and taking all the updated pixel points in the first panoramic image frame as the feature information of the first panoramic image frame;
according to the feature information of each panoramic image frame, matching the first panoramic image frame with the panoramic image frames corresponding to a plurality of locations in the image database; when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, determining that a third matching result satisfying a preset location matching condition is obtained, wherein the fourth location indicated by the third matching result is the location in the image database corresponding to the successfully matched panoramic image frame; and wherein the image database is used for storing feature information of panoramic image frames at a plurality of locations;
determining, from the fourth locations, a location that points to the same place as the first location.
Optionally, when there are a plurality of second locations, the step of determining a location pointing to the same location as the first location from the second locations includes:
acquiring a fifth location indicated by the matching result determined from the image database at the last location before the first location;
judging whether a target image frame continuous with the image frame corresponding to the fifth location exists in the image frames corresponding to the second locations in the image database;
and if so, determining a second place corresponding to the target image frame as a place pointing to the same place as the first place.
Optionally, the step of respectively matching each first image frame with a plurality of image frames corresponding to each location in an image database according to the feature information of each image frame includes:
for each first image frame and each image frame in the image database, calculating the absolute value of the difference between the pixel value of each pixel point in the first image frame and the pixel value of the pixel point at the same position in the image frame, and taking the sum of these absolute differences over all corresponding pixel-point pairs as the feature information similarity between the first image frame and the image frame; and when the feature information similarity is smaller than a preset similarity threshold, determining that the first image frame is successfully matched with the image frame.
Optionally, when it is determined that a matching result meeting the preset location matching condition cannot be obtained, the method further includes:
and storing the characteristic information of each first image frame at the first position into an image database.
Optionally, the step of updating, for each pixel point in each first image frame, the pixel value of the pixel point according to the surrounding pixel points of the pixel point includes:
for each pixel point in each first image frame, updating the pixel value of the pixel point according to the following formula:
P’=(P-Pμ)/Pσ
wherein P' is the updated pixel value of the pixel point, P is the pixel value of the pixel point, P μ is the average value of the pixel values of the surrounding pixel points of the pixel point, and P σ is the standard deviation of the pixel values of the surrounding pixel points of the pixel point.
Optionally, the step of acquiring the first image frames of the plurality of directions around the vehicle acquired by the image acquisition device at the first location includes:
acquiring initial image frames of a plurality of directions around the vehicle collected by the image acquisition device at the first location, and down-sampling each initial image frame to obtain each first image frame.
In a second aspect, an embodiment of the present invention discloses an image frame-based loop detection apparatus, including:
an acquisition module configured to acquire a first image frame of a plurality of directions around the vehicle acquired by an image acquisition device at a first location;
the first updating module is configured to update pixel values of pixel points according to surrounding pixel points of the pixel points for each pixel point in each first image frame, and take all updated pixel points in the first image frame as feature information of the first image frame;
the first matching module is configured to match each first image frame with a plurality of image frames corresponding to a plurality of places in an image database according to the characteristic information of each image frame; when at least a preset number of image frames successfully matched exist in a plurality of image frames corresponding to each first image frame and a second place in the image database, determining to obtain a first matching result meeting a preset place matching condition; wherein the location indicated by the first matching result is the second location; the image database is used for storing characteristic information of image frames at a plurality of first-class places, and the image frames at each first-class place comprise image frames in a plurality of directions around a vehicle at the place;
A first determination module configured to determine locations pointing to the same location as the first location from the second locations.
Optionally, the image database is further configured to store feature information of image frames at a plurality of second-type locations, where an image frame at each second-type location includes an image frame of a direction around the vehicle at the location; the device further comprises:
the second matching module is configured to: after the feature information of the first image frames is obtained, match, according to the feature information of each image frame, each first image frame with the single image frame at each location in the image database; and when at least one first image frame is successfully matched with an image frame in the image database, determine that a second matching result satisfying a preset location matching condition is obtained; wherein the third location indicated by the second matching result is the location in the image database corresponding to the successfully matched image frame;
a second determination module configured to determine locations pointing to the same location as the first location from among the third locations.
Optionally, the apparatus further comprises:
The splicing module is configured to splice the first image frames acquired by the image acquisition device at the first location to obtain a first panoramic image frame at the first location;
the second updating module is configured to update the pixel values of the pixel points according to the surrounding pixel points of the pixel points for each pixel point in each first panoramic image frame, and take all the updated pixel points in the first panoramic image frame as the characteristic information of the first panoramic image frame;
a third matching module configured to match the first panoramic image frame with the panoramic image frames corresponding to a plurality of locations in the image database according to the feature information of each panoramic image frame, and, when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, determine that a third matching result satisfying a preset location matching condition is obtained, wherein the fourth location indicated by the third matching result is the location in the image database corresponding to the successfully matched panoramic image frame; and wherein the image database is used for storing feature information of panoramic image frames at a plurality of locations;
A third determination module configured to determine locations pointing to the same location as the first location from among the fourth locations.
Optionally, when the second location is multiple, the first determining module is specifically configured to:
acquiring a fifth place indicated by a matching result determined from the image database at the last place before the first place;
judging whether a target image frame continuous with the image frame corresponding to the fifth location exists in the image frames corresponding to the second locations in the image database;
and if so, determining a second place corresponding to the target image frame as a place pointing to the same place as the first place.
Optionally, the first matching module is specifically configured to:
calculating an absolute value of a difference between a pixel value of each pixel in each first image frame and a pixel value of a pixel at the same position in the image frame for each first image frame and each image frame in an image database, and taking a sum of absolute values of differences between pixel values of two corresponding pixels between the first image frame and the image frame as a feature information similarity between the first image frame and the image frame; and when the feature information similarity is smaller than a preset similarity threshold value, determining that the first image frame is successfully matched with the image frame.
Optionally, the apparatus further comprises:
a storage module configured to store feature information of each first image frame at the first location into the image database when it is determined that a matching result satisfying the preset location matching condition cannot be obtained.
Optionally, the first updating module is specifically configured to:
for each pixel point in each first image frame, updating the pixel value of the pixel point according to the following formula:
P’=(P-Pμ)/Pσ
wherein P' is the updated pixel value of the pixel point, P is the pixel value of the pixel point, P μ is the average value of the pixel values of the surrounding pixel points of the pixel point, and P σ is the standard deviation of the pixel values of the surrounding pixel points of the pixel point.
Optionally, the obtaining module is specifically configured to:
the method comprises the steps of obtaining initial image frames of multiple directions around the vehicle, collected by image collection equipment at a first place, and conducting down-sampling processing on each initial image frame to obtain each first image frame.
In a third aspect, an embodiment of the present invention discloses a vehicle-mounted terminal, including: a processor and an image acquisition device; the processor includes: the device comprises an acquisition module, a first updating module, a first matching module and a first determining module;
The acquisition module is used for acquiring first image frames of multiple directions around the vehicle acquired by an image acquisition device at a first position;
the first updating module is configured to update, for each pixel point in each first image frame, a pixel value of the pixel point according to a pixel point around the pixel point, and use all updated pixel points in the first image frame as feature information of the first image frame;
the first matching module is used for respectively matching each first image frame with a plurality of image frames corresponding to a plurality of places in an image database according to the characteristic information of each image frame; when at least a preset number of image frames successfully matched exist in a plurality of image frames corresponding to each first image frame and a second place in the image database, determining to obtain a first matching result meeting a preset place matching condition; wherein the location indicated by the first matching result is the second location; the image database is used for storing characteristic information of image frames at a plurality of first-class places, and the image frames at each first-class place comprise image frames in a plurality of directions around a vehicle at the place;
The first determining module is configured to determine a location pointing to the same location as the first location from the second locations.
Optionally, the image database is further configured to store feature information of image frames at a plurality of second-type locations, where an image frame at each second-type location includes an image frame of a direction around the vehicle at the location; the processor further comprises:
the second matching module is used for: after the feature information of the first image frames is obtained, matching, according to the feature information of each image frame, each first image frame with the single image frame at each location in the image database; and when at least one first image frame is successfully matched with an image frame in the image database, determining that a second matching result satisfying a preset location matching condition is obtained; wherein the third location indicated by the second matching result is the location in the image database corresponding to the successfully matched image frame;
and the second determining module is used for determining the positions pointing to the same position as the first position from the third positions.
Optionally, the processor further includes:
The splicing module is used for splicing each first image frame acquired by the image acquisition equipment at the first place to obtain a first panoramic image frame at the first place;
the second updating module is used for updating the pixel values of the pixels according to the surrounding pixel points of the pixels aiming at each pixel in each first panoramic image frame, and taking all the updated pixels in the first panoramic image frame as the characteristic information of the first panoramic image frame;
the third matching module is used for matching the first panoramic image frame with the panoramic image frames corresponding to a plurality of locations in the image database according to the feature information of each panoramic image frame, and, when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, determining that a third matching result satisfying a preset location matching condition is obtained, wherein the fourth location indicated by the third matching result is the location in the image database corresponding to the successfully matched panoramic image frame; and wherein the image database is used for storing feature information of panoramic image frames at a plurality of locations;
And the third determining module is used for determining the positions pointing to the same position as the first position from the fourth positions.
Optionally, when there are a plurality of second locations, the first determining module is specifically configured to:
acquiring a fifth place indicated by a matching result determined from the image database at the last place before the first place;
judging whether a target image frame continuous with the image frame corresponding to the fifth location exists in the image frames corresponding to the second locations in the image database;
and if so, determining a second place corresponding to the target image frame as a place pointing to the same place as the first place.
Optionally, the first matching module is specifically configured to:
calculating an absolute value of a difference between a pixel value of each pixel in each first image frame and a pixel value of a pixel at the same position in the image frame for each first image frame and each image frame in an image database, and taking a sum of absolute values of differences between pixel values of two corresponding pixels between the first image frame and the image frame as a feature information similarity between the first image frame and the image frame; and when the feature information similarity is smaller than a preset similarity threshold value, determining that the first image frame is successfully matched with the image frame.
Optionally, the processor further includes:
and the storage module is used for storing the characteristic information of each first image frame at the first place into an image database when the matching result meeting the preset place matching condition is determined not to be obtained.
Optionally, the first updating module is specifically configured to:
for each pixel point in each first image frame, updating the pixel value of the pixel point according to the following formula:
P’=(P-Pμ)/Pσ
wherein P' is the updated pixel value of the pixel point, P is the pixel value of the pixel point, P μ is the average value of the pixel values of the surrounding pixel points of the pixel point, and P σ is the standard deviation of the pixel values of the surrounding pixel points of the pixel point.
Optionally, the obtaining module is specifically configured to:
the method comprises the steps of obtaining initial image frames of multiple directions around the vehicle, collected by image collection equipment at a first place, and conducting down-sampling processing on each initial image frame to obtain each first image frame.
As can be seen from the above, the image-frame-based loop detection method and device and the vehicle-mounted terminal provided in the embodiments of the present invention acquire first image frames of a plurality of directions around the vehicle collected at a first location and, when matching them against the image frames in the image database, match multiple image frames between every pair of locations, so loop detection can be achieved accurately even if the shooting angle changes when the location is revisited, which improves the robustness of loop detection to the image shooting angle. The pixel value of each pixel point in the first image frames is updated according to its surrounding pixel points, so the matching image frames can still be detected accurately even if the ambient light changes when the location is revisited, which improves the robustness of the images to light changes. In addition, using the global feature formed by all pixel points of a first image frame as its feature information can further improve the accuracy of loop detection. Therefore, the embodiments of the present invention can improve the accuracy of loop detection.
The innovation points of the embodiment of the invention comprise:
1. The pixel values of the acquired image frames are updated, and all pixel points of an image frame are used as its feature information; this global feature and the pixel-value update improve the robustness of the image frames to light changes. Each location corresponds to image frames of multiple directions, and when each location in the image database contains image frames of all directions, a matching result satisfying the preset location matching condition is determined once at least a preset number of successfully matched image-frame pairs exist. The matching in loop detection is therefore not limited by the shooting angle, and the accuracy of the matched locations is higher.
2. When each location in the image database corresponds to a single image frame, a matching result satisfying the preset location matching condition is determined once at least one first image frame is successfully matched with an image frame in the image database. Loop detection is thus also compatible with the case where each location in the database corresponds to only one image frame, which improves compatibility.
3. A plurality of image frames corresponding to each place are spliced into a panoramic image frame, so that the matching times can be reduced, the flow during matching is shortened, and the matching efficiency is improved.
4. And the accuracy of loop detection can be improved by judging based on the image sequence.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. For a person skilled in the art, without inventive effort, further figures can be obtained from these figures.
Fig. 1 is a schematic flowchart of an image frame-based loop detection method according to an embodiment of the present invention;
fig. 2 is a reference diagram before and after a pixel value update is performed on an image frame according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating another image frame-based loop detection method according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating another image frame-based loop detection method according to an embodiment of the present invention;
FIGS. 5-7 are schematic structural diagrams of an image frame-based loop detection apparatus according to several embodiments of the present invention;
fig. 8 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a loop detection method and device based on an image frame and a vehicle-mounted terminal, which can improve the accuracy of loop detection. The following provides a detailed description of embodiments of the invention.
Fig. 1 is a schematic flowchart of an image frame-based loop detection method according to an embodiment of the present invention. The method is applied to the electronic equipment. The electronic device can be a common computer, a server or an intelligent mobile terminal and the like. The electronic device may also be a device for calculation processing installed in a vehicle. The method specifically comprises the following steps.
S110: first image frames of a plurality of directions around a vehicle captured at a first location by an image capture device are acquired.
The image acquisition devices can be multiple and are respectively installed at different positions of the vehicle, and each image acquisition device acquires image frames in one direction around the vehicle. For example, one camera may be installed in each of front, rear, left, and right directions of the vehicle, in which case 4 first image frames may be acquired at each location. The image acquisition device can be a common lens camera and also can be a fisheye lens camera.
The electronic device may receive a first image frame captured by each image capture device while the vehicle is at a first location. At the same time and the same place, each image acquisition device acquires a first image frame. The electronic device may associate each first image frame with a first location. Wherein the first location may be represented in two-dimensional or three-dimensional coordinates.
The image acquisition device may acquire the image frames at a preset frame rate. In performing loop detection, the electronic device may acquire a key frame (I frame) in the sequence of image frames acquired by the image acquisition device as the first image frame. This can reduce the amount of calculation. In the positioning process, the movement of the vehicle is relatively slow compared with the acquisition speed of the image acquisition equipment, so that loop detection is not required to be performed on each image frame, and the processing burden of the system can be reduced by selecting key frames in the image frames for loop detection. This also preserves the validity of the information.
When acquiring the first image frames of the plurality of directions around the vehicle acquired by the image acquisition device at the first location, the method may specifically include: the method comprises the steps of acquiring initial image frames of multiple directions around a vehicle acquired by image acquisition equipment at a first place, converting each initial image frame into a gray image, and obtaining each first image frame. The above embodiments are applicable to the case where the image directly captured by the image capturing apparatus is a color image.
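As an illustrative sketch only (the patent does not prescribe any particular library), acquiring one frame per surround-view camera and converting it to a gray image could look as follows in Python with OpenCV; the four-camera layout and the capture indices are assumptions.

```python
import cv2

# Hypothetical capture handles for four surround-view cameras (front, rear, left, right).
CAMERA_IDS = [0, 1, 2, 3]

def grab_first_image_frames(captures):
    """Grab one frame per camera at the current (first) location and convert it to grayscale."""
    frames = []
    for cap in captures:
        ok, bgr = cap.read()
        if not ok:
            raise RuntimeError("camera read failed")
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # color image -> gray image
        frames.append(gray)
    return frames  # one first image frame per direction

# Usage (assumed setup):
# captures = [cv2.VideoCapture(i) for i in CAMERA_IDS]
# first_image_frames = grab_first_image_frames(captures)
```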
S120: and aiming at each pixel point in each first image frame, updating the pixel value of the pixel point according to the surrounding pixel points of the pixel point, and taking all the updated pixel points in the first image frame as the characteristic information of the first image frame.
And updating each pixel point in each first image frame according to the above mode to obtain the characteristic information of each first image frame.
Specifically, for each pixel point in each first image frame, the pixel value of the pixel point may be updated according to the following formula:
P’=(P-Pμ)/Pσ
wherein P' is the updated pixel value of the pixel point, P is the pixel value of the pixel point, Pμ is the average value of the pixel values of the surrounding pixel points of the pixel point, and Pσ is the standard deviation of the pixel values of the surrounding pixel points of the pixel point. The surrounding pixel points of a pixel point may be its 8 adjacent pixel points, also referred to as the eight-neighborhood pixel points; other definitions of the surrounding pixel points may also be used. Pμ is the arithmetic mean of the pixel values of the surrounding pixel points:
Pμ = (P1 + P2 + … + Pk)/k
wherein k is the total number of surrounding pixel points of the pixel point, and Pi (i = 1, …, k) is the pixel value of the i-th surrounding pixel point.
After the pixel value of a pixel point is updated according to its surrounding pixel points, the robustness to light intensity during image frame matching can be improved. Experimental results show that updating the pixel values according to P' = (P - Pμ)/Pσ effectively improves the robustness to light intensity during image frame matching, so that image matching becomes more accurate.
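A minimal sketch of this pixel-value update, assuming the eight-neighborhood definition of the surrounding pixel points; the edge padding and the small epsilon that guards against a zero standard deviation are implementation assumptions, not part of the formula.

```python
import numpy as np

def update_pixel_values(gray, eps=1e-6):
    """Replace each pixel P by P' = (P - P_mu) / P_sigma, where P_mu and P_sigma are the
    mean and standard deviation of its 8 surrounding (eight-neighborhood) pixels."""
    img = gray.astype(np.float64)
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')          # border handling is an assumption
    # Stack the 8 neighbors of every pixel (center excluded).
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    neighbors = np.stack([padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] for dy, dx in offsets])
    mu = neighbors.mean(axis=0)                    # P_mu
    sigma = neighbors.std(axis=0)                  # P_sigma
    return (img - mu) / (sigma + eps)              # updated pixels = feature information
```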
Referring to fig. 2, fig. 2 is a reference diagram of image frames before and after the pixel-value update. The two images on the top are two image frames captured in an underground garage; after their pixel values are processed, the corresponding first image frames with updated pixel values, shown at the bottom, are obtained.
In this embodiment, all updated pixel points in the first image frame are used as the feature information of the first image frame; that is, the feature information is a global feature of the image frame. Compared with using key points of an image frame as its feature information, the global feature is largely invariant to extreme appearance changes caused by factors such as day and night, weather and season, so the robustness to light intensity during image frame matching can be improved.
S130: and according to the characteristic information of each image frame, matching each first image frame with a plurality of image frames corresponding to a plurality of places in the image database.
The image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames of a plurality of directions around the vehicle at that location.
When at least a preset number of successfully matched image-frame pairs exist between the first image frames and the plurality of image frames corresponding to a second location in the image database, it is determined that a first matching result satisfying a preset location matching condition is obtained. The matching process between the first location and each first-type location in the image database comprises pairwise matching between each first image frame corresponding to the first location and each image frame corresponding to that first-type location.
The location indicated by the first matching result is a second location, and the second location is a location in the first category of locations. The second location may be represented in two-dimensional or three-dimensional coordinates.
The preset number may be a number previously determined according to the total number of the first image frames included in each of the first locations. For example, when the total number of the first image frames is 4, the preset number may be 2 to 4.
For example, the first location comprises 4 first image frames, and each first-type location in the image database also comprises 4 image frames. During matching, the feature information of each first image frame may be matched with that of each image frame of the first-type location; that is, for one first location and one first-type location, 4 × 4 = 16 image-pair matches may be performed. When at least a preset number of successfully matched pairs exist, the matching between the first location and the first-type location is considered to satisfy the preset location matching condition, and that first-type location is a second location. When the number of successfully matched pairs is smaller than the preset number, the matching between the first location and the first-type location is considered not to satisfy the preset location matching condition, i.e., no matching result satisfying the preset location matching condition is obtained from that first-type location.
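The location-level decision described above can be sketched as follows; frame_match stands for the per-pair comparison (e.g., the SAD-based similarity described later), and the function names and the default preset_count are illustrative assumptions.

```python
def location_matches(first_frames, db_frames, frame_match, preset_count=2):
    """Return True if at least `preset_count` (first frame, database frame) pairs match
    between the first location and one first-type location in the image database."""
    matched_pairs = 0
    for f in first_frames:            # e.g. 4 frames at the first location
        for g in db_frames:           # e.g. 4 frames stored for the candidate location
            if frame_match(f, g):
                matched_pairs += 1
    return matched_pairs >= preset_count

def find_second_locations(first_frames, image_database, frame_match, preset_count=2):
    """Collect every first-type location whose frames satisfy the preset matching condition."""
    return [loc for loc, db_frames in image_database.items()
            if location_matches(first_frames, db_frames, frame_match, preset_count)]
```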
When it is determined that a matching result meeting the preset location matching condition cannot be obtained, feature information of each first image frame at the first location may be stored in an image database so as to be used for subsequent loop detection.
S140: a location pointing to the same location as the first location is determined from the second locations.
When the number of the second locations is one, the second locations may be directly determined as locations directed to the same location as the first location. When the number of the second locations is greater than one, one location may be selected from the second locations as a location pointing to the same location as the first location, or all of the second locations may be determined as locations pointing to the same location as the first location.
A location that points to the same place as the first location should be understood as follows: the coordinates of that location and of the first location may differ, but both represent the same physical place.
After determining a location that is co-located with the first location, the first location may be further modified based on the determined location.
In an application scenario, in some special places without Global Positioning System (GPS) signals, a vehicle cannot determine a specific position of the vehicle according to the GPS signals, and cannot complete map construction. For example, GPS signals cannot be relied upon to obtain global position information for a vehicle in an underground garage or the like. In this case, the current vehicle pose can only be determined by means of odometer data and the previous vehicle pose. The map constructed by the method has large accumulative error. To reduce the accumulated error, an image database may be constructed that stores image frames captured by the camera during vehicle travel for use in loopback detection.
When the vehicle moves to the first place, if the first matching result meeting the preset place matching condition can be obtained from the image database, the first place can be corrected according to the determined place, the vehicle pose at the first place is corrected, and the accumulated error in the positioning process is reduced.
As can be seen from the above, in this embodiment, first image frames of a plurality of directions around the vehicle collected at the first location are acquired, and when the first image frames are matched against the image frames in the image database, multiple image frames are matched between every pair of locations, so loop detection can be achieved accurately even if the shooting angle changes when the location is revisited, which improves the robustness of loop detection to the image shooting angle. The pixel values of the pixel points in the first image frames are updated according to their surrounding pixel points, so the matching image frames can still be detected accurately even if the ambient light changes when the location is revisited, which improves the robustness of the images to light changes. Using the global feature formed by all pixel points of a first image frame as the feature information can further improve the accuracy of loop detection. Therefore, the accuracy of loop detection can be improved.
In another embodiment of the present invention, based on the embodiment shown in fig. 1, when there are a plurality of second locations, step S140 may be implemented to determine a location pointing to the same location as the first location from the second locations, and specifically, the step may include steps 1a to 3 a.
Step 1 a: and acquiring a fifth place indicated by the matching result determined from the image database at the last place before the first place.
In each loop back detection, a plurality of second locations matching the first location may be determined from the image database. Also, there are cases where the frame numbers of image frames matched from the image database at the consecutive locations are consecutive. When the image frame employs keyframes, consecutive frame numbers refer to consecutive keyframe frame numbers. In order to more accurately determine a location pointing to the same location as the first location from among the plurality of second locations, the plurality of second locations may be filtered based on consecutive locations, i.e., based on a sequence of images.
When the vehicle is in the continuous moving process, the image acquisition equipment can acquire key frames with continuous frame numbers. In the same place, the frame numbers of the first image frames acquired by the image acquisition devices on the same vehicle are the same.
Wherein, the last place can be one or more. I.e. matching results at one or more locations before the first location may be obtained.
Step 2 a: judging whether target image frames continuous to the image frame corresponding to the fifth place exist in the image frames corresponding to the plurality of second places in the image database, and if yes, executing the step 3 a; if not, it may not be processed.
The image frame corresponding to the fifth location refers to the image frame, in the image database, at the location that was determined to point to the same place as the fifth location. There may be one or more fifth locations.
For example, suppose the current location is location 3, the vehicle moves in the order location 1 → location 2 → location 3, and locations 1, 2 and 3 are three consecutive locations. At location 3, the locations determined from the image database as pointing to the same place as location 3 (i.e., the second locations) include location a with image frame a, location b with image frame b, and location c with image frame c. Relative to location 3, the last locations may include location 1 and location 2. When the vehicle was at location 2, the location determined from the image database as pointing to the same place as location 2 was location m, whose corresponding image frame in the image database has frame number m; when the vehicle was at location 1, the location determined was location n, whose corresponding image frame has frame number n. Among the image frames corresponding to the second locations of location 3, namely image frame a, image frame b and image frame c, the frame numbers of image frame a, image frame m and image frame n are consecutive; in this case, image frame a can be determined as the target image frame.
When loop detection is performed for each location, if the frame numbers of the image frames in the image database corresponding to several consecutive locations are continuous, the location in the loop detection can be determined more accurately.
Step 3 a: and determining a second place corresponding to the target image frame as a place pointing to the same place as the first place.
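One way to implement the frame-number continuity check of steps 1a to 3a is sketched below; representing each candidate as a (location, frame number) pair and treating "consecutive" as a difference of exactly one key-frame number are assumptions made for illustration. In the example above, image frame a would be selected because its frame number continues those of image frames m and n.

```python
def pick_location_by_sequence(second_locations, fifth_frame_numbers):
    """second_locations: list of (location, frame_number) candidates matched at the first location.
    fifth_frame_numbers: frame numbers matched at the previous location(s) (the fifth locations).
    Returns the second location whose frame number continues the previous sequence, if any."""
    for location, frame_no in second_locations:
        # Target image frame: its frame number directly follows a previously matched frame number.
        if any(frame_no == prev + 1 for prev in fifth_frame_numbers):
            return location   # points to the same place as the first location
    return None               # no target image frame found; the sequence check gives no decision
```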
In summary, the present embodiment performs the determination based on the image sequence, and further determines a location pointing to the same location as the first location. Compared with matching based on a single frame, the embodiment can avoid wrong matching caused by the fact that single frame image content is similar in different places, and can improve accuracy of loop detection.
In another embodiment of the present invention, based on the embodiment shown in fig. 1, in step S130, the step of respectively matching each first image frame with a plurality of image frames corresponding to each location in an image database according to the feature information of each image frame may specifically include:
for each first image frame and each image frame in the image database, calculating the absolute value of the difference between the pixel value of each pixel point in the first image frame and the pixel value of the pixel point at the same position in the image frame, and taking the sum of these absolute differences over all corresponding pixel-point pairs as the feature information similarity between the first image frame and the image frame.
And when the similarity of the feature information is smaller than a preset similarity threshold value, determining that the first image frame is successfully matched with the image frame. And when the similarity of the feature information is not less than a preset similarity threshold value, determining that the first image frame is failed to be matched with the image frame.
The preset similarity threshold may be a value set empirically in advance.
The following uses a specific example to describe how the feature information similarity between two image frames is determined. Suppose a first image frame A needs to be matched with an image frame B, and both are 60 px × 80 px images, i.e., each image frame contains 4800 pixel points. Taking the difference between the pixel points at the same position in the first image frame A and the image frame B yields 4800 difference values; summing the absolute values of these 4800 differences gives the feature information similarity between the first image frame A and the image frame B.
The more similar the first image frame and the image frame are, the smaller the resulting feature information similarity value. Therefore, when the feature information similarity is smaller than the preset similarity threshold, the first image frame and the image frame can be considered to be successfully matched.
In another embodiment, a SAD (Sum of Absolute Differences) algorithm, computed pixel by pixel, may be used to determine the feature information similarity between the first image frame and the image frame.
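A sketch of the SAD-based feature information similarity and the matching decision; the similarity threshold is passed in by the caller, since the patent states it is set empirically.

```python
import numpy as np

def feature_similarity(frame_a, frame_b):
    """Sum of absolute differences between pixel values at the same positions."""
    return np.abs(frame_a.astype(np.float64) - frame_b.astype(np.float64)).sum()

def frames_match(frame_a, frame_b, similarity_threshold):
    """Smaller similarity value means more similar frames; match if below the preset threshold."""
    return feature_similarity(frame_a, frame_b) < similarity_threshold
```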
In summary, the present embodiment provides a specific implementation manner when each first image frame is matched with each image frame in the image database, so that the matching efficiency can be improved.
In another embodiment of the present invention, based on the embodiment shown in fig. 1, in order to reduce the complexity of the calculation and improve the processing efficiency, step S110, the step of acquiring the first image frames of multiple directions around the vehicle, which are acquired by the image acquisition device at the first location, may specifically include:
the method comprises the steps of obtaining initial image frames of a plurality of directions around a vehicle, collected at a first place by image collection equipment, and conducting down-sampling processing on each initial image frame to obtain each first image frame.
When down-sampling each initial image frame, the method may specifically include: down-sampling each initial image frame according to a preset first target resolution, so that each obtained first image frame has the first target resolution. The first target resolution is smaller than the resolution of the initial image frames.
When performing down-sampling processing on each initial image frame, the method may further include: and determining a second target resolution of the image frames after the down-sampling according to the resolution of the initial image frames, and down-sampling each initial image frame to the image frame with the second target resolution to obtain each first image frame. The second target resolution is less than the resolution of the initial image frame.
For example, if the resolution of an initial image frame is M × N pixels, down-sampling it by a factor of s yields a first image frame with a resolution of (M/s) × (N/s): each s × s window in the initial image frame becomes one pixel whose value is the average of all pixels in that window.
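A sketch of the s-fold down-sampling by window averaging; the factor s = 8 and the cropping of rows and columns that do not fill a complete window are assumptions.

```python
import numpy as np

def downsample(frame, s=8):
    """Down-sample an M x N frame by a factor s: every s x s window becomes one pixel
    whose value is the average of the pixels in that window."""
    m, n = frame.shape
    m, n = (m // s) * s, (n // s) * s              # crop so the window tiling is exact
    blocks = frame[:m, :n].reshape(m // s, s, n // s, s)
    return blocks.mean(axis=(1, 3))                # resolution becomes (M/s) x (N/s)
```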
In another embodiment, before performing the down-sampling processing on each initial image frame, the distortion removal processing may be performed on each initial image frame, and the down-sampling processing may be performed on each initial image frame after the distortion removal processing. The accuracy of the image frame characteristic information can be improved through distortion removal, and the accuracy of image frame matching is further improved.
In summary, in this embodiment, the obtained initial image frames are down-sampled to obtain each first image frame, so that the resolution of the image frame can be reduced, the amount of feature information of the image frame can be reduced, the amount of calculation during image frame matching can be reduced, and the calculation efficiency can be improved.
In another embodiment of the invention, the embodiment shown in fig. 3 can be obtained on the basis of the embodiment shown in fig. 1. In the embodiment shown in fig. 3, the image database is further configured to store feature information of image frames at a plurality of second-type locations, each image frame at a second-type location comprising an image frame of a direction around the vehicle at the location.
S310: first image frames of a plurality of directions around a vehicle captured at a first location by an image capture device are acquired.
S320: and aiming at each pixel point in each first image frame, updating the pixel value of the pixel point according to the surrounding pixel points of the pixel point, and taking all the updated pixel points in the first image frame as the characteristic information of the first image frame.
In this embodiment, the above steps S310 and S320 are the same as steps S110 and S120 in the embodiment shown in fig. 1, respectively, and the detailed description can refer to the embodiment shown in fig. 1.
S330: and respectively matching each first image frame with one image frame at each place in the image database according to the characteristic information of each image frame. And when at least one first image frame exists in each first image frame and is successfully matched with the image frames in the image database, determining to obtain a second matching result meeting the preset location matching condition.
Wherein the third location indicated by the second matching result is: and the image database is provided with the corresponding position of the image frame which is successfully matched.
In this embodiment, each location in the image database corresponds to one image frame. When the image matching is carried out, aiming at the first location and each location in the image database, each first image frame is respectively matched with the image frame corresponding to the location in the image database, and when at least one first image frame is successfully matched with the image frame, the first location and the location in the image database are considered to meet the preset location matching condition. The above is a location matching process, and the first location may be matched with all locations in the image database.
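For this single-view case the location matching condition reduces to "at least one first image frame matches the stored frame", which could be sketched as follows (frame_match again stands for the per-pair comparison and is an assumed helper):

```python
def matches_single_view_location(first_frames, db_frame, frame_match):
    """A second-type location stores one image frame; the preset location matching condition
    is satisfied if at least one first image frame matches it."""
    return any(frame_match(f, db_frame) for f in first_frames)
```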
When it is determined that a matching result satisfying the preset location matching condition cannot be obtained, the correspondence relationship between the first location, the first image frame, and the feature information may be stored in the image database for use in subsequent matching.
S340: from the third locations, locations pointing to the same location as the first location are determined.
For a specific implementation of this step, reference may be made to the description of step S140, which is not described herein again.
In summary, in this embodiment, when each location in the image database corresponds to one image frame, each first image frame corresponding to the first location may be respectively matched with the image frame corresponding to each location in the image database, and when at least one first image frame is successfully matched with the image frame in the image database, it is determined that a second matching result meeting a preset location matching condition is obtained. Therefore, the method and the device can take account of the condition of the single-view image frame in the image database and improve the accuracy of loop detection.
In another embodiment of the invention, the embodiment shown in fig. 4 can be obtained on the basis of the embodiment shown in fig. 1. The method of the embodiment shown in fig. 4 may include the following steps.
S410: acquiring first image frames of a plurality of directions around the vehicle, acquired by an image acquisition device at a first location. For a detailed description of this step, refer to step S110, which is not repeated here.
S420: stitching the first image frames to obtain a first panoramic image frame at the first location.
When stitching, the first image frames can be joined in a preset positional order, determined by the preset positional relationship of the image acquisition devices, so that a first panoramic image frame covering the surrounding environment of the first location is obtained; a minimal sketch follows.
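This sketch rests on a strong simplifying assumption: the cameras are mounted in a known left-to-right order and their frames share a height, so the panorama is approximated by plain horizontal concatenation in that preset order (a real system would also warp and blend overlapping regions). The camera names are hypothetical.

```python
import numpy as np

def stitch_panorama(frames_by_camera: dict, camera_order: tuple) -> np.ndarray:
    """Join the per-camera first image frames side by side in the preset positional order."""
    ordered = [frames_by_camera[name] for name in camera_order]
    return np.concatenate(ordered, axis=1)                 # concatenate along the width axis

# Hypothetical usage with four surround-view cameras of equal resolution.
frames = {name: np.zeros((480, 640)) for name in ("front", "right", "rear", "left")}
first_panorama = stitch_panorama(frames, ("front", "right", "rear", "left"))  # 480 x 2560
```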
S430: for each pixel point in each first panoramic image frame, updating the pixel value of the pixel point according to its surrounding pixel points, and taking all updated pixel points in the first panoramic image frame as the feature information of the first panoramic image frame.
In this step, for each pixel point in each first panoramic image frame, the pixel value of the pixel point is updated according to the following formula:
P’=(P-Pμ)/Pσ
where P' is the updated pixel value of the pixel point, P is the original pixel value of the pixel point, Pμ is the mean of the pixel values of the surrounding pixel points of the pixel point, and Pσ is the standard deviation of the pixel values of the surrounding pixel points of the pixel point.
For the detailed description of this step, reference may also be made to the description in step S120, which is not described herein again.
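A sketch of the per-pixel update P' = (P - Pμ)/Pσ, interpreting "surrounding pixel points" as a square window centred on the pixel; the window radius, the inclusion of the centre pixel in the statistics, and the small epsilon guarding flat regions are assumptions made for this example only.

```python
import numpy as np

def local_normalize(frame: np.ndarray, radius: int = 2, eps: float = 1e-6) -> np.ndarray:
    """Replace every pixel value P with (P - Pμ) / Pσ computed over its local window."""
    h, w = frame.shape
    padded = np.pad(frame.astype(np.float64), radius, mode="edge")
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            mu, sigma = window.mean(), window.std()
            out[y, x] = (frame[y, x] - mu) / (sigma + eps)
    return out

feature_frame = local_normalize(np.random.rand(120, 160))  # feature information of one frame
```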
S440: matching the first panoramic image frame with the panoramic image frames corresponding to the plurality of locations in the image database according to the feature information of each panoramic image frame.
When the first panoramic image frame is successfully matched with a panoramic image frame in the image database, it is determined that a third matching result satisfying the preset location matching condition is obtained, wherein the fourth location indicated by the third matching result is the location in the image database corresponding to the successfully matched panoramic image frame.
Here, the image database is used for storing feature information of panoramic image frames at a plurality of locations, and each location corresponds to one panoramic image frame.
S450: determining, from the fourth locations, the location pointing to the same place as the first location. For a detailed description of this step, refer to the description of step S140.
In summary, in this embodiment the first image frames at the first location are stitched into a first panoramic image frame, which during matching is compared with the panoramic image frame at each location in the image database. Stitching the first image frames reduces the number of matching operations against the image database, shortens the matching process, and improves matching efficiency.
Fig. 5 is a diagram illustrating an apparatus for loop detection based on image frames according to an embodiment of the present invention. The apparatus corresponds to the method embodiment shown in fig. 1 and is applied to an electronic device. The apparatus includes:
an acquisition module 510 configured to acquire a first image frame of a plurality of directions around the vehicle acquired by an image acquisition device at a first location;
a first updating module 520, configured to update, for each pixel point in each first image frame, a pixel value of the pixel point according to a surrounding pixel point of the pixel point, and use all updated pixel points in the first image frame as feature information of the first image frame;
a first matching module 530 configured to match each first image frame with a plurality of image frames corresponding to a plurality of locations in the image database, respectively, according to the feature information of each image frame; when at least a preset number of successfully matched image frames exist between the first image frames and the plurality of image frames corresponding to a second location in the image database, determine that a first matching result satisfying the preset location matching condition is obtained; wherein the location indicated by the first matching result is the second location; the image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames of a plurality of directions around the vehicle at that location;
A first determining module 540 configured to determine locations pointing to the same location as the first location from the second locations.
In another embodiment of the present invention, based on the embodiment shown in fig. 5, when the second location is multiple, the first determining module 540 is specifically configured to:
acquiring a fifth location, which is the location indicated by the matching result determined from the image database for the location immediately preceding the first location;
judging whether, among the image frames corresponding to the plurality of second locations in the image database, there exists a target image frame continuous with the image frame corresponding to the fifth location;
and if so, determining the second location corresponding to the target image frame as the location pointing to the same location as the first location, as sketched below.
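A hedged sketch of this disambiguation step. It assumes the database assigns each location's image frame a position in an acquisition sequence and that "continuous" means adjacent positions in that sequence; both are interpretations made for illustration rather than definitions given in the disclosure.

```python
from typing import Dict, Hashable, List, Optional

def pick_loop_location(second_locations: List[Hashable],
                       frame_index_of: Dict[Hashable, int],
                       fifth_location: Optional[Hashable]) -> Optional[Hashable]:
    """Return the second location whose frame is continuous with the previous match, if any."""
    if fifth_location is None:                    # no earlier match to anchor on
        return None
    prev_index = frame_index_of[fifth_location]
    for loc in second_locations:
        if abs(frame_index_of[loc] - prev_index) == 1:
            return loc                            # this second location points to the same place
    return None
```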
In another embodiment of the present invention, based on the embodiment shown in fig. 5, the first matching module 530 is specifically configured to:
calculating, for each first image frame and each image frame in the image database, the absolute value of the difference between the pixel value of each pixel in the first image frame and the pixel value of the pixel at the same position in the database image frame, and taking the sum of the absolute differences over all corresponding pixel pairs as the feature information similarity between the first image frame and that image frame; and when the feature information similarity is smaller than a preset similarity threshold, determining that the first image frame is successfully matched with that image frame, as sketched below.
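A sketch combining the frame-level test described here with the location-level condition used by the first matching module (at least a preset number of the first image frames must find a match). The threshold and the minimum match count are illustrative values, and the similarity is averaged per pixel purely as a convenience so that the threshold does not depend on resolution.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.15   # assumed value; would be tuned for a real deployment
MIN_MATCHED_FRAMES = 2        # assumed "preset number" of successfully matched frames

def feature_similarity(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Sum of absolute differences between co-located pixels, averaged per pixel."""
    assert frame_a.shape == frame_b.shape, "frames must share a resolution"
    return float(np.abs(frame_a - frame_b).mean())

def frames_match(frame_a: np.ndarray, frame_b: np.ndarray) -> bool:
    # A smaller similarity value means the two feature frames are closer.
    return feature_similarity(frame_a, frame_b) < SIMILARITY_THRESHOLD

def location_matches_multi_view(first_frames, db_frames) -> bool:
    # The second location is accepted when at least the preset number of the
    # first image frames each find a matching frame among that location's frames.
    matched = sum(any(frames_match(f, g) for g in db_frames) for f in first_frames)
    return matched >= MIN_MATCHED_FRAMES
```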
In another embodiment of the present invention, based on the embodiment shown in fig. 5, the apparatus further includes:
a storage module (not shown in the figure) configured to store feature information of each first image frame at the first location into the image database when it is determined that a matching result satisfying the preset location matching condition cannot be obtained.
In another embodiment of the present invention, based on the embodiment shown in fig. 5, the first updating module 520 is specifically configured to:
for each pixel point in each first image frame, updating the pixel value of the pixel point according to the following formula:
P’=(P-Pμ)/Pσ
wherein P' is the updated pixel value of the pixel point, P is the pixel value of the pixel point, P μ is the average value of the pixel values of the surrounding pixel points of the pixel point, and P σ is the standard deviation of the pixel values of the surrounding pixel points of the pixel point.
In another embodiment of the present invention, based on the embodiment shown in fig. 5, the obtaining module 510 is specifically configured to:
the method comprises the steps of obtaining initial image frames of multiple directions around the vehicle, collected by image collection equipment at a first place, and conducting down-sampling processing on each initial image frame to obtain each first image frame.
In another embodiment of the invention, the embodiment shown in fig. 6 can be obtained on the basis of the embodiment shown in fig. 5. In this embodiment, the image database is further configured to store feature information of image frames at a plurality of second-type locations, and each image frame at a second-type location includes an image frame of one direction around the vehicle at that location. This embodiment corresponds to the method embodiment shown in fig. 3. The device includes: an obtaining module 610, a first updating module 620, a second matching module 630 and a second determining module 640. The obtaining module 610 and the first updating module 620 are the same as the obtaining module 510 and the first updating module 520 in the embodiment shown in fig. 5, respectively, and their detailed description is omitted here.
A second matching module 630, configured to match, after the feature information of the first image frames is obtained, each first image frame with the single image frame at each location in the image database according to the feature information of each image frame; when at least one of the first image frames is successfully matched with an image frame in the image database, determine that a second matching result satisfying the preset location matching condition is obtained; wherein the third location indicated by the second matching result is the location in the image database corresponding to the successfully matched image frame;
A second determining module 640 configured to determine locations pointing to the same location as the first location from the third locations.
In another embodiment of the present invention, the embodiment of the apparatus shown in fig. 7 can be obtained based on the embodiment shown in fig. 5, and the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 4. The device also includes: an obtaining module 710, a splicing module 720, a second updating module 730, a third matching module 740, and a third determining module 750. The obtaining module 710 is the same as the obtaining module 510 in the embodiment shown in fig. 5, and detailed description thereof is omitted here.
A stitching module 720, configured to, after acquiring each first image frame acquired by the image acquisition device at the first location, stitch each first image frame to obtain a first panoramic image frame at the first location;
a second updating module 730, configured to update, for each pixel point in each first panoramic image frame, a pixel value of the pixel point according to a surrounding pixel point of the pixel point, and use all updated pixel points in the first panoramic image frame as feature information of the first panoramic image frame;
a third matching module 740 configured to match the first panoramic image frame with the panoramic image frames corresponding to the plurality of locations in the image database according to the feature information of each panoramic image frame; when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, determine that a third matching result satisfying the preset location matching condition is obtained, wherein the fourth location indicated by the third matching result is the location in the image database corresponding to the successfully matched panoramic image frame; and wherein the image database is used for storing feature information of panoramic image frames at a plurality of locations;
A third determining module 750 configured to determine locations pointing to the same location as the first location from the fourth locations.
The above apparatus embodiments are obtained on the basis of the corresponding method embodiments and have the same technical effects; for the specific description, refer to the method embodiments, which are not repeated here.
Fig. 8 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention. The vehicle-mounted terminal includes: a processor 810 and an image acquisition device 820; the processor 810 includes: an obtaining module 11, a first updating module 12, a first matching module 13 and a first determining module 14.
An acquisition module 11, configured to acquire a first image frame of multiple directions around a vehicle acquired at a first location by an image acquisition device 820;
a first updating module 12, configured to update, for each pixel point in each first image frame, a pixel value of the pixel point according to a pixel point around the pixel point, and use all updated pixel points in the first image frame as feature information of the first image frame;
the first matching module 13 is configured to match each first image frame with a plurality of image frames corresponding to a plurality of locations in the image database according to the feature information of each image frame; when at least a preset number of successfully matched image frames exist between the first image frames and the plurality of image frames corresponding to a second location in the image database, determine that a first matching result satisfying the preset location matching condition is obtained; wherein the location indicated by the first matching result is the second location; the image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames of a plurality of directions around the vehicle at that location;
A first determining module 14, configured to determine a location pointing to the same location as the first location from the second locations.
In another embodiment of the present invention, based on the embodiment shown in fig. 8, the image database is further configured to store feature information of image frames at a plurality of second-type locations, each image frame at a second-type location including an image frame in one direction around the vehicle at the location; the processor 810 further includes:
a second matching module (not shown in the figure), configured to match, after the feature information of the first image frames is obtained, each first image frame with the single image frame at each location in the image database according to the feature information of each image frame; when at least one of the first image frames is successfully matched with an image frame in the image database, determine that a second matching result satisfying the preset location matching condition is obtained; wherein the third location indicated by the second matching result is the location in the image database corresponding to the successfully matched image frame;
and a second determining module (not shown in the figure) for determining a location pointing to the same location as the first location from the third locations.
In another embodiment of the present invention, based on the embodiment shown in fig. 8, the processor 810 further includes:
a stitching module (not shown in the figure), configured to, after acquiring each first image frame acquired by the image acquisition device at the first location, stitch each first image frame to obtain a first panoramic image frame at the first location;
a second updating module (not shown in the figure), configured to update, for each pixel point in each first panoramic image frame, a pixel value of the pixel point according to a pixel point around the pixel point, and use all updated pixel points in the first panoramic image frame as feature information of the first panoramic image frame;
a third matching module (not shown in the figure) for matching the first panoramic image frame with the panoramic image frames corresponding to the plurality of locations in the image database according to the feature information of each panoramic image frame; when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, determining to obtain a third matching result meeting the preset location matching condition, wherein the fourth location indicated by the third matching result is the location in the image database corresponding to the successfully matched panoramic image frame; the image database is used for storing feature information of panoramic image frames at a plurality of locations;
And a third determining module (not shown in the figure) for determining the location pointing to the same location as the first location from the fourth locations.
In another embodiment of the present invention, based on the embodiment shown in fig. 8, when the second location is multiple, the first determining module 14 is specifically configured to:
acquiring a fifth place indicated by the matching result determined from the image database at the last place before the first place;
judging whether a target image frame continuous to an image frame corresponding to a fifth place exists in image frames corresponding to a plurality of second places in the image database;
and if so, determining a second place corresponding to the target image frame as a place pointing to the same place as the first place.
In another embodiment of the present invention, based on the embodiment shown in fig. 8, the first matching module 13 is specifically configured to:
calculating the absolute value of the difference between the pixel value of each pixel in the first image frame and the pixel value of the pixel at the same position in the image frame aiming at each first image frame and each image frame in an image database, and taking the sum of the absolute values of the differences between the pixel values of two corresponding pixels between the first image frame and the image frame as the similarity of the characteristic information between the first image frame and the image frame; and when the similarity of the feature information is smaller than a preset similarity threshold value, determining that the first image frame is successfully matched with the image frame.
In another embodiment of the present invention, based on the embodiment shown in fig. 8, the processor 810 further includes:
a storage module (not shown in the figure) for storing the feature information of each first image frame at the first location into the image database when it is determined that the matching result satisfying the preset location matching condition cannot be obtained.
In another embodiment of the present invention, based on the embodiment shown in fig. 8, the first updating module 12 is specifically configured to:
for each pixel point in each first image frame, updating the pixel value of the pixel point according to the following formula:
P’=(P-Pμ)/Pσ
wherein, P' is the updated pixel value of the pixel point, P is the pixel value of the pixel point, P μ is the average value of the pixel values of the surrounding pixel points of the pixel point, and P σ is the standard deviation of the pixel values of the surrounding pixel points of the pixel point.
In another embodiment of the present invention, based on the embodiment shown in fig. 8, the obtaining module 11 is specifically configured to:
initial image frames of multiple directions around the vehicle, acquired by the image acquisition device 820 at a first location, are acquired, and each initial image frame is down-sampled to obtain each first image frame.
The terminal embodiment and the method embodiment shown in fig. 1 are based on the same inventive concept and correspond to each other, with the same technical effects; for the specific description, refer to the method embodiment.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: the modules in the apparatuses of the embodiments may be distributed within those apparatuses as described in the embodiments, or, with corresponding changes, may be located in one or more apparatuses different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An image frame-based loop detection method is characterized by comprising the following steps:
acquiring first image frames of a plurality of directions around the vehicle, acquired by an image acquisition device at a first location;
for each pixel point in each first image frame, updating the pixel value of the pixel point according to the surrounding pixel points of the pixel point, and taking all the updated pixel points in the first image frame as the characteristic information of the first image frame;
according to the feature information of each image frame, matching each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database respectively; when at least a preset number of successfully matched image frames exist between the first image frames and the plurality of image frames corresponding to a second location in the image database, determining to obtain a first matching result meeting a preset location matching condition; wherein the location indicated by the first matching result is the second location; the image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames of a plurality of directions around the vehicle at that location;
Determining locations pointing to the same location as the first location from the second locations.
2. The method of claim 1, wherein the image database is further configured to store feature information for image frames at a plurality of second-type locations, each image frame at a second-type location comprising an image frame of one direction around the vehicle at that location; after obtaining the feature information of the first image frames, the method further comprises:
according to the feature information of each image frame, matching each first image frame with the single image frame at each location in the image database respectively; when at least one of the first image frames is successfully matched with an image frame in the image database, determining to obtain a second matching result meeting a preset location matching condition; wherein the third location indicated by the second matching result is the location in the image database corresponding to the successfully matched image frame;
determining locations pointing to the same location as the first location from the third locations.
3. The method of any of claims 1-2, wherein after acquiring each first image frame acquired by the image acquisition device at the first location, the method further comprises:
Splicing the first image frames to obtain a first panoramic image frame at the first place;
updating the pixel values of the pixel points according to the surrounding pixel points of the pixel points aiming at each pixel point in each first panoramic image frame, and taking all the updated pixel points in the first panoramic image frame as the characteristic information of the first panoramic image frame;
matching the first panoramic image frame with panoramic image frames corresponding to a plurality of locations in an image database according to the feature information of each panoramic image frame; when the first panoramic image frame is successfully matched with a panoramic image frame in the image database, determining that a third matching result meeting a preset location matching condition is obtained, wherein a fourth location indicated by the third matching result is the location in the image database corresponding to the successfully matched panoramic image frame; wherein the image database is used for storing feature information of panoramic image frames at a plurality of locations;
determining a location pointing to the same location as the first location from the fourth locations.
4. The method of claim 1, wherein when the second location is plural, the step of determining a location from the second location that is co-located with the first location comprises:
Acquiring a fifth place indicated by a matching result determined from the image database at the last place before the first place;
judging whether a target image frame continuous with the image frame corresponding to the fifth location exists in the image frames corresponding to the second locations in the image database;
and if so, determining a second place corresponding to the target image frame as a place pointing to the same place as the first place.
5. The method as claimed in claim 1, wherein the step of matching each of the first image frames with a plurality of image frames corresponding to each location in the image database according to the feature information of each of the image frames comprises:
calculating an absolute value of a difference between a pixel value of each pixel in each first image frame and a pixel value of a pixel at the same position in the image frame for each first image frame and each image frame in an image database, and taking a sum of absolute values of differences between pixel values of two corresponding pixels between the first image frame and the image frame as a feature information similarity between the first image frame and the image frame; and when the feature information similarity is smaller than a preset similarity threshold value, determining that the first image frame is successfully matched with the image frame.
6. The method of claim 1, wherein, when it is determined that a matching result satisfying the preset location matching condition cannot be obtained, the method further comprises:
and storing the characteristic information of each first image frame at the first position into the image database.
7. The method of claim 1, wherein said step of updating, for each pixel point in each first image frame, the pixel value of the pixel point based on surrounding pixel points of the pixel point comprises:
for each pixel point in each first image frame, updating the pixel value of the pixel point according to the following formula:
P’=(P-Pμ)/Pσ
wherein P' is the updated pixel value of the pixel point, P is the pixel value of the pixel point, P μ is the average value of the pixel values of the surrounding pixel points of the pixel point, and P σ is the standard deviation of the pixel values of the surrounding pixel points of the pixel point.
8. The method of any one of claims 1 to 7, wherein the step of acquiring a first image frame of multiple directions around the vehicle acquired by an image acquisition device at a first location comprises:
Acquiring initial image frames of a plurality of directions around the vehicle, acquired by an image acquisition device at a first location;
and performing down-sampling processing on each initial image frame to obtain each first image frame.
9. An image frame-based loop detection apparatus, comprising:
an acquisition module configured to acquire a first image frame of a plurality of directions around the vehicle acquired by an image acquisition device at a first location;
the first updating module is configured to update pixel values of pixel points according to surrounding pixel points of the pixel points for each pixel point in each first image frame, and take all updated pixel points in the first image frame as feature information of the first image frame;
the first matching module is configured to match each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database according to the feature information of each image frame; when at least a preset number of successfully matched image frames exist between the first image frames and the plurality of image frames corresponding to a second location in the image database, determine to obtain a first matching result meeting a preset location matching condition; wherein the location indicated by the first matching result is the second location; the image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames of a plurality of directions around the vehicle at that location;
A first determination module configured to determine locations pointing to the same location as the first location from the second locations.
10. A vehicle-mounted terminal characterized by comprising: a processor and an image acquisition device; the processor includes: the device comprises an acquisition module, a first updating module, a first matching module and a first determining module;
the acquisition module is used for acquiring first image frames of multiple directions around the vehicle acquired by an image acquisition device at a first position;
the first updating module is configured to update, for each pixel point in each first image frame, a pixel value of the pixel point according to a pixel point around the pixel point, and use all updated pixel points in the first image frame as feature information of the first image frame;
the first matching module is used for respectively matching each first image frame with a plurality of image frames corresponding to a plurality of locations in an image database according to the feature information of each image frame; when at least a preset number of successfully matched image frames exist between the first image frames and the plurality of image frames corresponding to a second location in the image database, determining to obtain a first matching result meeting a preset location matching condition; wherein the location indicated by the first matching result is the second location; the image database is used for storing feature information of image frames at a plurality of first-type locations, and the image frames at each first-type location comprise image frames of a plurality of directions around the vehicle at that location;
The first determining module is configured to determine a location pointing to the same location as the first location from the second locations.
CN201910346794.XA 2019-04-27 2019-04-27 Loop detection method and device based on image frames and vehicle-mounted terminal Active CN111860050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910346794.XA CN111860050B (en) 2019-04-27 2019-04-27 Loop detection method and device based on image frames and vehicle-mounted terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910346794.XA CN111860050B (en) 2019-04-27 2019-04-27 Loop detection method and device based on image frames and vehicle-mounted terminal

Publications (2)

Publication Number Publication Date
CN111860050A true CN111860050A (en) 2020-10-30
CN111860050B CN111860050B (en) 2024-07-02

Family

ID=72952253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910346794.XA Active CN111860050B (en) 2019-04-27 2019-04-27 Loop detection method and device based on image frames and vehicle-mounted terminal

Country Status (1)

Country Link
CN (1) CN111860050B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5173946A (en) * 1991-05-31 1992-12-22 Texas Instruments Incorporated Corner-based image matching
US20040167861A1 (en) * 2003-02-21 2004-08-26 Hedley Jay E. Electronic toll management
US20070122058A1 (en) * 2005-11-28 2007-05-31 Fujitsu Limited Method and apparatus for analyzing image, and computer product
US9275302B1 (en) * 2012-08-24 2016-03-01 Amazon Technologies, Inc. Object detection and identification
CN106407315A (en) * 2016-08-30 2017-02-15 长安大学 Vehicle self-positioning method based on street view image database
CN106875442A (en) * 2016-12-26 2017-06-20 上海蔚来汽车有限公司 Vehicle positioning method based on image feature data
CN106926800A (en) * 2017-03-28 2017-07-07 重庆大学 The vehicle-mounted visually-perceptible system of multi-cam adaptation
CN108537844A (en) * 2018-03-16 2018-09-14 上海交通大学 A kind of vision SLAM winding detection methods of fusion geological information
CN109101981A (en) * 2018-07-19 2018-12-28 东南大学 Winding detection method based on global image bar code under a kind of streetscape scene
CN109671119A (en) * 2018-11-07 2019-04-23 中国科学院光电研究院 A kind of indoor orientation method and device based on SLAM
CN109658449A (en) * 2018-12-03 2019-04-19 华中科技大学 A kind of indoor scene three-dimensional rebuilding method based on RGB-D image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHAO Kuangjun: "Indoor three-dimensional color point cloud map construction based on an RGB-D camera", Journal of Harbin University of Commerce (Natural Science Edition), no. 01, 15 February 2018 (2018-02-15) *
HAN Xushen; ZOU Danping; JIANG Lingge; LIU Peilin: "Loop closure detection method for visual SLAM fusing geometric information", Information Technology, no. 07, 24 July 2018 (2018-07-24) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113820694A (en) * 2021-11-24 2021-12-21 腾讯科技(深圳)有限公司 Simulation ranging method, related device, equipment and storage medium
CN113820694B (en) * 2021-11-24 2022-03-01 腾讯科技(深圳)有限公司 Simulation ranging method, related device, equipment and storage medium

Also Published As

Publication number Publication date
CN111860050B (en) 2024-07-02

Similar Documents

Publication Publication Date Title
CN109272530B (en) Target tracking method and device for space-based monitoring scene
CN106791710B (en) Target detection method and device and electronic equipment
Uittenbogaard et al. Privacy protection in street-view panoramas using depth and multi-view imagery
CN111899282B (en) Pedestrian track tracking method and device based on binocular camera calibration
US6687386B1 (en) Object tracking method and object tracking apparatus
CN111860352B (en) Multi-lens vehicle track full tracking system and method
US11430199B2 (en) Feature recognition assisted super-resolution method
JP2012185540A (en) Image processing device, image processing method, and image processing program
CN112381132A (en) Target object tracking method and system based on fusion of multiple cameras
CN103366155B (en) Temporal coherence in unobstructed pathways detection
CN112396073A (en) Model training method and device based on binocular images and data processing equipment
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN112435223B (en) Target detection method, device and storage medium
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
CN112802112B (en) Visual positioning method, device, server and storage medium
CN112396634B (en) Moving object detection method, moving object detection device, vehicle and storage medium
CN110909620A (en) Vehicle detection method and device, electronic equipment and storage medium
CN110930437B (en) Target tracking method and device
CN111860050A (en) Loop detection method and device based on image frame and vehicle-mounted terminal
CN109726684B (en) Landmark element acquisition method and landmark element acquisition system
CN111860051A (en) Vehicle-based loop detection method and device and vehicle-mounted terminal
CN115565155A (en) Training method of neural network model, generation method of vehicle view and vehicle
CN113011212B (en) Image recognition method and device and vehicle
KR102249380B1 (en) System for generating spatial information of CCTV device using reference image information
CN117201708B (en) Unmanned aerial vehicle video stitching method, device, equipment and medium with position information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240910

Address after: 215100 floor 23, Tiancheng Times Business Plaza, No. 58, qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou, Jiangsu Province

Patentee after: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: Room 28, 4 / F, block a, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing 100089

Patentee before: BEIJING CHUSUDU TECHNOLOGY Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right