CN110146096B - Vehicle positioning method and device based on image perception - Google Patents

Vehicle positioning method and device based on image perception

Info

Publication number: CN110146096B
Authority: CN (China)
Prior art keywords: semantic features, electronic map, vehicle, automatic driving, confidence
Legal status: Active (the listed status is an assumption, not a legal conclusion)
Application number: CN201811242957.1A
Other languages: Chinese (zh)
Other versions: CN110146096A
Inventors: 杜志颖, 单乐
Current Assignee: BEIJING MOMENTA TECHNOLOGY Co., Ltd.
Original Assignee: Beijing Chusudu Technology Co., Ltd.
Events: application filed by Beijing Chusudu Technology Co., Ltd.; priority to CN201811242957.1A; publication of CN110146096A; application granted; publication of CN110146096B; anticipated expiration.

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04: Interpretation of pictures
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching

Abstract

The invention relates to the field of image processing and discloses a vehicle positioning method and device based on image perception. The method comprises the following steps: obtaining semantic features from a road image of the road where the vehicle is located, and determining the relative position relationships among the semantic features; determining alternative positions matching the semantic features and the relative position relationships from an automatic driving navigation electronic map, and calculating the confidence of each alternative position; and determining the alternative position with the highest confidence as the target position, and repositioning the vehicle in the automatic driving navigation electronic map according to the target position. By implementing the embodiments of the invention, a plurality of alternative positions can be determined from the automatic driving navigation electronic map according to the semantic features in the road image acquired by the camera and the relative relationships between those features, the confidence of each alternative position can be calculated, and the vehicle can be repositioned according to the alternative position with the highest confidence, so that the accuracy of vehicle positioning is improved through a deep learning algorithm and image perception technology.

Description

Vehicle positioning method and device based on image perception
Technical Field
The invention relates to the field of image processing, in particular to a vehicle positioning method and device based on image perception.
Background
In recent years, with the rise of Artificial Intelligence (AI), the development of autonomous vehicles has taken a new step. Automatic driving cannot be realized without the vehicle-mounted navigation system, which positions the vehicle using the Global Positioning System (GPS). In practice, however, GPS positioning has a large error and cannot meet the positioning-accuracy requirements of automatic driving.
Disclosure of Invention
The embodiment of the invention discloses a vehicle positioning method and device based on image perception, which can improve the accuracy of vehicle positioning.
The embodiment of the invention discloses a vehicle positioning method based on image perception in a first aspect, and the method comprises the following steps:
acquiring a road image of a road where the vehicle is located;
performing semantic feature recognition on the road image to obtain semantic features of the road image, and determining a relative position relationship between any two semantic features;
determining at least one alternative position matched with the semantic features and the relative position relation from a preset automatic driving navigation electronic map, and calculating the confidence of each alternative position;
and determining the alternative position with the highest confidence coefficient as a target position, and repositioning the vehicle in the automatic driving navigation electronic map according to the target position.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the performing semantic feature recognition on the road image, obtaining semantic features of the road image, and determining a relative position relationship between any two of the semantic features includes:
performing semantic feature recognition on the road image through a deep learning algorithm to obtain semantic features of the road image;
calculating the orientation relation and the relative distance of any two semantic features;
and determining the orientation relation and the relative distance of any two semantic features as the relative position relation between the corresponding two semantic features.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the determining at least one candidate location matching the semantic features and the relative location relationship from a preset electronic map of automatic driving navigation, and calculating a confidence of each candidate location includes:
acquiring at least one first initial selection position containing all the semantic features from a preset automatic driving navigation electronic map;
determining at least one alternative position matched with the relative position relationship from all the first initial selection positions;
calculating first pose information of the vehicle corresponding to each alternative position;
constructing a three-dimensional simulation scene corresponding to each alternative position according to the alternative position and the first pose information;
comparing the three-dimensional simulation scenes with the automatic driving navigation electronic map to obtain a first Euclidean distance of the alternative position corresponding to each three-dimensional simulation scene;
and calculating the confidence degree of each alternative position according to the first Euclidean distance.
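The patent does not fix how a Euclidean distance is converted into a confidence. One monotone mapping, sketched here purely as an assumption, is a decaying exponential: the smaller the distance between the simulated scene and the map, the higher the confidence.

```python
import math

def confidence_from_distance(euclidean_distance: float, scale: float = 1.0) -> float:
    """Map a scene-matching Euclidean distance to a confidence in (0, 1].

    The formula exp(-d / scale) is an illustrative choice, not the patent's:
    it is monotonically decreasing, so a smaller distance always yields a
    higher confidence, and a perfect match (d = 0) yields confidence 1.
    """
    return math.exp(-euclidean_distance / scale)

# Smaller distance -> higher confidence:
scores = [confidence_from_distance(d) for d in (0.2, 1.5, 4.0)]
```

Any strictly decreasing function of the distance would preserve the ranking of alternative positions, which is all the highest-confidence selection step requires.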
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the determining at least one candidate location matching the semantic features and the relative location relationship from a preset electronic map of automatic driving navigation, and calculating a confidence of each candidate location includes:
acquiring at least one second initial selection position containing all the semantic features from a preset automatic driving navigation electronic map;
determining at least one alternative position matched with the relative position relationship from all the second initial selection positions;
calculating second pose information of the vehicle corresponding to each alternative position;
projecting a three-dimensional scene corresponding to the alternative position in the automatic driving navigation electronic map according to the alternative position and the second pose information to obtain a two-dimensional scene image corresponding to each alternative position, wherein the automatic driving navigation electronic map is a three-dimensional electronic map;
acquiring a second Euclidean distance of the alternative position corresponding to each two-dimensional scene image according to the two-dimensional scene image and the road image;
and calculating the confidence degree of each alternative position according to the second Euclidean distance.
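The projection of the three-dimensional map scene into a two-dimensional scene image can be sketched with a standard pinhole camera model. The intrinsic matrix, pose, and point values below are illustrative assumptions, not parameters taken from the patent.

```python
def project_point(p_world, K, R, t):
    """Project one 3D map point into the 2D image plane (pinhole model).

    K is the 3x3 camera intrinsic matrix; R (3x3) and t (length 3) give the
    camera pose at the alternative position. Returns None for points behind
    the camera. Plain nested lists are used to keep the sketch dependency-free.
    """
    # world frame -> camera frame: x = R @ p + t
    x = [sum(R[i][j] * p_world[j] for j in range(3)) + t[i] for i in range(3)]
    if x[2] <= 0:
        return None  # behind the image plane
    # perspective divide using focal lengths and principal point from K
    u = K[0][0] * x[0] / x[2] + K[0][2]
    v = K[1][1] * x[1] / x[2] + K[1][2]
    return (u, v)

# Illustrative intrinsics (focal length 800 px, principal point (320, 240))
K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity pose
t = [0.0, 0.0, 0.0]
pixel = project_point([1.0, -0.5, 5.0], K, R, t)
```

In practice a camera projection routine from an established library would be used instead of hand-rolled matrix math; the sketch only shows the geometry the projection step relies on.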
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the determining the candidate position with the highest confidence as a target position, and repositioning the vehicle in the electronic map for automatic driving navigation according to the target position includes:
traversing all the confidence values by using a traversal algorithm to obtain the target confidence, namely the highest confidence value;
judging whether the target confidence reaches a preset confidence threshold value;
and if the target confidence reaches the preset confidence threshold, determining the alternative position corresponding to the target confidence as a target position, and repositioning the vehicle in the automatic driving navigation electronic map according to the target position.
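The traversal-and-threshold logic above can be sketched as follows. The threshold value used here is a placeholder for the preset confidence threshold, which the patent leaves as a configurable parameter.

```python
def select_target_position(candidates, threshold=0.8):
    """Return the alternative position with the highest confidence if that
    confidence reaches the preset threshold, else None (no relocation).

    candidates: list of (position, confidence) pairs. The threshold default
    of 0.8 is illustrative only.
    """
    if not candidates:
        return None
    # traverse all candidates to find the target (highest) confidence
    best_pos, best_conf = max(candidates, key=lambda c: c[1])
    return best_pos if best_conf >= threshold else None

best = select_target_position([("A", 0.6), ("B", 0.9)], threshold=0.8)
```

Returning None when the threshold is not reached matches the judgment step: relocation only proceeds when the target confidence reaches the preset value.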
The second aspect of the embodiment of the invention discloses a vehicle positioning device based on image perception, which comprises:
an acquisition unit configured to acquire a road image of a road on which the vehicle is located;
the recognition unit is used for recognizing semantic features of the road image, acquiring the semantic features of the road image and determining the relative position relationship between any two semantic features;
the calculation unit is used for determining at least one alternative position matched with the semantic features and the relative position relation from a preset automatic driving navigation electronic map and calculating the confidence of each alternative position;
and the repositioning unit is used for determining the alternative position with the highest confidence coefficient as a target position and repositioning the vehicle in the automatic driving navigation electronic map according to the target position.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the identification unit includes:
the recognition subunit is used for performing semantic feature recognition on the road image through a deep learning algorithm to obtain semantic features of the road image;
the first calculation subunit is used for calculating the orientation relation and the relative distance of any two semantic features;
and the first determining subunit is used for determining the orientation relation and the relative distance of any two semantic features as the relative position relation between the corresponding two semantic features.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the calculation unit includes:
the first acquisition subunit is used for acquiring at least one first initial selection position containing all the semantic features from a preset automatic driving navigation electronic map;
a second determining subunit, configured to determine at least one candidate position that matches the relative position relationship from all the first initial selection positions;
the second calculating subunit is used for calculating first pose information of the vehicle corresponding to each alternative position;
the construction subunit is configured to construct, according to the candidate positions and the first pose information, a three-dimensional simulation scene corresponding to each candidate position;
the comparison subunit is used for comparing the three-dimensional simulation scenes with the automatic driving navigation electronic map to obtain a first Euclidean distance of the alternative position corresponding to each three-dimensional simulation scene;
the second calculating subunit is further configured to calculate, according to the first euclidean distance, a confidence level of each candidate position.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the calculation unit includes:
the second acquisition subunit is used for acquiring at least one second initial selection position containing all the semantic features from a preset automatic driving navigation electronic map;
a third determining subunit, configured to determine at least one candidate position that matches the relative position relationship from all the second initial selection positions;
a third calculating subunit, configured to calculate second pose information of the vehicle corresponding to each of the candidate positions;
the projection subunit is configured to project, according to the candidate positions and the second pose information, three-dimensional scenes corresponding to the candidate positions in the automatic driving navigation electronic map to obtain two-dimensional scene images corresponding to each of the candidate positions, where the automatic driving navigation electronic map is a three-dimensional electronic map;
the second obtaining subunit is further configured to obtain, according to the two-dimensional scene images and the road image, a second euclidean distance of the candidate position corresponding to each of the two-dimensional scene images;
and the third computing subunit is further configured to calculate a confidence degree of each candidate position according to the second euclidean distance.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the relocation unit includes:
the traversal subunit is used for traversing all the confidence values by utilizing a traversal algorithm to obtain the target confidence, namely the highest confidence value;
the judging subunit is used for judging whether the target confidence coefficient reaches a preset confidence coefficient threshold value;
and the repositioning subunit is used for determining the alternative position corresponding to the target confidence as a target position when the judgment result of the judging subunit is yes, and repositioning the vehicle in the automatic driving navigation electronic map according to the target position.
A third aspect of an embodiment of the present invention discloses an electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform part or all of the steps of any one of the methods of the first aspect.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing program code, where the program code includes instructions for performing part or all of the steps of any one of the methods of the first aspect.
A fifth aspect of embodiments of the present invention discloses a computer program product, which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
A sixth aspect of the embodiments of the present invention discloses an application publishing platform configured to publish a computer program product which, when run on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the invention has the advantages that:
1. The vehicle positioning device based on image perception can acquire a road image of the road where the vehicle is located; perform semantic feature recognition on the road image to obtain its semantic features and determine the relative position relationship between any two semantic features; determine at least one alternative position matching the semantic features and the relative position relationships from a preset automatic driving navigation electronic map, and calculate the confidence of each alternative position; and determine the alternative position with the highest confidence as the target position and reposition the vehicle in the automatic driving navigation electronic map accordingly. The device can therefore determine a plurality of alternative positions from the map according to the semantic features in the camera's road image and the relative relationships between them, calculate the confidence of each, and reposition the vehicle at the alternative position with the highest confidence, improving the accuracy of vehicle positioning through a deep learning algorithm and image perception technology.
2. The vehicle positioning device based on image perception can determine the relative position relation between any two semantic features in a road image through a deep learning algorithm, and the accuracy of determining the relative position relation between any two semantic features is improved.
3. The vehicle positioning device based on image perception can determine at least one alternative position in an automatic driving navigation electronic map through a relative position relation, projects the three-dimensional alternative position into a two-dimensional scene image, obtains the confidence coefficient of each alternative position according to the coincidence degree between the two-dimensional scene image and the road image, and accordingly simplifies the calculation mode of the confidence coefficient on the basis of ensuring the accuracy of the confidence coefficient.
4. The vehicle positioning device based on image perception can traverse every confidence value, avoiding omissions in the confidence judgment process; it detects whether the highest confidence reaches the preset threshold and, if it does, determines the alternative position with the highest confidence as the target position, thereby ensuring the positioning accuracy of the electronic device.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a vehicle positioning method based on image sensing according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another image-perception-based vehicle positioning method disclosed in the embodiments of the present invention;
FIG. 3 is a schematic flow chart of another image-perception-based vehicle positioning method disclosed in the embodiments of the present invention;
FIG. 4 is a schematic structural diagram of a vehicle positioning device based on image perception according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another image-perception-based vehicle positioning device disclosed in the embodiment of the invention;
FIG. 6 is a schematic structural diagram of another image-perception-based vehicle positioning device disclosed in the embodiment of the invention;
FIG. 7 is a schematic structural diagram of another image-perception-based vehicle positioning device disclosed in the embodiment of the invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a vehicle positioning method and device based on image perception, which can improve the accuracy of vehicle positioning. The following are detailed below.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a vehicle positioning method based on image sensing according to an embodiment of the present invention. As shown in fig. 1, the image perception-based vehicle positioning method may include the steps of:
101. The electronic device acquires a road image of the road on which the vehicle is located.
In the embodiment of the present invention, the electronic device may be a vehicle-mounted computer or a driving computer built in a vehicle. The electronic device may control a plurality of terminal devices arranged on the vehicle, for example, the electronic device may control a camera arranged on the vehicle to acquire an image, may also control a central control large screen or an instrument panel to output information, and may also control a device such as a speed controller to acquire information such as a current driving speed, a current driving direction, and a current acceleration of the vehicle, which is not limited in the embodiment of the present invention.
In the embodiment of the invention, the electronic device can shoot and obtain the road image through a camera (such as a roof camera or a front-view camera) arranged on the vehicle, wherein the shooting direction of the camera arranged on the vehicle can be fixed or rotatable. The electronic equipment can control the shooting direction of the camera to be adjusted to be consistent with the current driving direction of the vehicle, and control the camera to shoot road images of the road where the vehicle is located.
As an optional implementation, the manner of acquiring, by the electronic device, the road image of the road on which the vehicle is located may further include the following steps:
the electronic equipment detects the position information of the obstacle in the environment where the vehicle is located through an infrared detector;
the electronic equipment divides the detection range of the infrared detector into a plurality of areas, and determines a target area with the maximum obstacle density from the plurality of areas according to the obstacle position information;
the electronic equipment adjusts the direction of the camera to the center line of the target area and shoots a road image, wherein the road image comprises an obstacle image in the target area.
Implementing this embodiment enables the road image shot by the camera to include a plurality of obstacle images around the road, so that the electronic device can subsequently determine the alternative positions in the automatic driving navigation electronic map more accurately; this is also one of the inventive points of the present invention.
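The region-division steps above can be sketched as follows. The field of view, the number of regions, and the degree-based bearings are illustrative assumptions; the patent only specifies dividing the detection range into regions and aiming the camera at the center line of the densest one.

```python
def densest_region_heading(obstacle_angles, fov=(-90.0, 90.0), n_regions=6):
    """Split the infrared detector's angular range into n_regions sectors,
    count detected obstacles per sector, and return the center-line angle
    (degrees) of the densest sector, i.e. the direction to aim the camera.

    obstacle_angles: bearing of each detected obstacle, in degrees,
    relative to the vehicle heading. All parameter values are illustrative.
    """
    lo, hi = fov
    width = (hi - lo) / n_regions
    counts = [0] * n_regions
    for a in obstacle_angles:
        if lo <= a < hi:
            counts[min(int((a - lo) / width), n_regions - 1)] += 1
        elif a == hi:
            counts[-1] += 1  # include the upper boundary in the last sector
    densest = max(range(n_regions), key=counts.__getitem__)
    return lo + (densest + 0.5) * width  # center line of the densest sector

heading = densest_region_heading([-80.0, -75.0, -70.0, 10.0, 15.0])
```

With six 30° sectors across a 180° range, three obstacles near -75° outweigh two near +12°, so the camera would be steered to the left sector's center line.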
102. The electronic equipment identifies the semantic features of the road image, obtains the semantic features of the road image and determines the relative position relationship between any two semantic features.
In the embodiment of the invention, the road image can contain a plurality of sub-image information, such as sub-image information of a lane line, a guardrail, a traffic light, an office building and the like, and one sub-image information can correspond to one semantic feature, such as the semantic feature corresponding to the sub-image of the lane line can be the lane line, and the semantic feature corresponding to the sub-image of the traffic light can be the traffic light.
In the embodiment of the invention, the electronic device can establish a planar rectangular coordinate system in the road image, the electronic device can confirm the coordinates or coordinate groups corresponding to each semantic feature in the planar rectangular coordinate system, and can determine the relative position relationship between any two semantic features according to the coordinates or coordinate groups corresponding to each semantic feature, and the relative position relationship can be information such as the angle and distance of one semantic feature relative to the other semantic feature in any two semantic features.
103. The electronic equipment determines at least one alternative position matched with the semantic features and the relative position relation from a preset automatic driving navigation electronic map, and calculates the confidence degree of each alternative position.
In the embodiment of the present invention, the electronic map for automatic driving navigation may be pre-stored in the electronic device and may be three-dimensional (3D); the electronic device can zoom the map in or out without degrading the map shown on the electronic device's display. The confidence measures the degree of belief that a given alternative position is the current vehicle position.
As an alternative embodiment, the manner in which the electronic device determines at least one alternative location matching the semantic features and the relative location relationship from the preset electronic map for automatic driving navigation may include the following steps:
the electronic equipment acquires semantic features and the number of each semantic feature contained in a road image, and determines feature positions matched with the semantic features from a preset automatic driving navigation electronic map, wherein one semantic feature can be matched with a plurality of feature positions;
the electronic equipment performs permutation and combination on the feature positions to generate a plurality of feature combinations, wherein the semantic features corresponding to the feature positions in one feature combination are the same as those contained in the road image, and the number of feature positions corresponding to any one semantic feature in a feature combination equals the number of occurrences of that semantic feature in the road image;
the electronic equipment determines at least one target feature combination with the same relative position relation between any two feature positions and the same relative position relation between any two semantic features in the road image from the plurality of feature combinations;
the electronic equipment determines an alternative position according to the feature positions in each target feature combination, wherein each target feature combination determines one alternative position.
By implementing this embodiment, all positions in the automatic driving navigation electronic map that match the semantic features in the road image and their relative position relationships can be determined as alternative positions, improving the accuracy of determining alternative positions; this is also one of the inventive points of the present invention.
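The permutation-and-matching search described in steps above can be sketched as below. The data structures and the `rel_match` predicate are illustrative assumptions, since the patent leaves their concrete form open.

```python
from itertools import product

def find_candidates(image_features, map_positions, image_relations, rel_match):
    """Sketch of the alternative-position search.

    image_features: semantic labels seen in the road image,
        e.g. ["lane_line", "traffic_light"] (illustrative).
    map_positions: dict mapping each label to the list of map feature
        positions carrying that label.
    image_relations: dict mapping an index pair (i, j) of image features to
        their measured relative relation (here a scalar, e.g. distance).
    rel_match(pos_a, pos_b, relation): caller-supplied predicate that checks
        whether two map positions exhibit the same relative relation.
    """
    pools = [map_positions.get(f, []) for f in image_features]
    candidates = []
    for combo in product(*pools):            # one map position per feature
        if len(set(combo)) != len(combo):    # positions must be distinct
            continue
        if all(rel_match(combo[i], combo[j], rel)
               for (i, j), rel in image_relations.items()):
            candidates.append(combo)
    return candidates

def rel_match(a, b, expected_dist):
    """Illustrative relation check: planar distance within a tolerance."""
    d = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return abs(d - expected_dist) < 0.1

map_positions = {"lane_line": [(0, 0)], "traffic_light": [(3, 4), (10, 10)]}
cands = find_candidates(["lane_line", "traffic_light"], map_positions,
                        {(0, 1): 5.0}, rel_match)
```

Exhaustively enumerating combinations is exponential in the worst case; a real system would prune by spatial indexing, but the sketch mirrors the enumerate-then-filter structure of the described steps.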
104. And the electronic equipment determines the candidate position with the highest confidence coefficient as a target position, and relocates the vehicle in the automatic driving navigation electronic map according to the target position.
In the embodiment of the invention, the electronic device can determine the candidate position with the highest confidence coefficient as the target position, and the electronic device can determine that the current position of the vehicle is the same as the target position, so that the electronic device can determine the target position in the electronic map for automatic driving navigation as the corresponding position of the vehicle in the electronic map for automatic driving navigation, thereby realizing the relocation of the vehicle in the electronic map for automatic driving navigation by the electronic device.
In the method described in FIG. 1, a plurality of alternative positions can be determined from the automatic driving navigation electronic map according to the semantic features in the road image acquired by the camera and the relative relationships between them, the confidence of each alternative position can be calculated, and the vehicle can be repositioned according to the alternative position with the highest confidence, so that the accuracy of vehicle positioning is improved through a deep learning algorithm and image perception technology. The road image shot by the camera can also include a plurality of obstacle images around the road, enabling the electronic device to determine the alternative positions in the map more accurately. In addition, a confidence can be calculated, and a judgment made, for each alternative position.
Example two
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another vehicle positioning method based on image sensing according to an embodiment of the present invention. As shown in fig. 2, the image perception-based vehicle positioning method may include the steps of:
201. The electronic device acquires a road image of the road on which the vehicle is located.
202. The electronic equipment identifies the semantic features of the road image through a deep learning algorithm to obtain the semantic features of the road image.
In the embodiment of the invention, deep learning is a method of representation learning on data within machine learning. The electronic equipment can use a deep learning algorithm to identify the plurality of semantic features contained in the road image; for example, it can identify semantic features such as buildings, lane lines and signal lamps.
203. The electronic device calculates the orientation relationship and relative distance of any two semantic features.
In the embodiment of the invention, the electronic equipment can determine the central point of each semantic feature in the road image, and the relative distance between any two semantic features can be calculated according to the central points corresponding to the semantic features. The electronic device can also establish a planar rectangular coordinate system on the road image, and can determine the coordinates of the central point corresponding to each semantic feature in the planar rectangular coordinate system, and the electronic device can calculate the relative distance between any two semantic features according to the coordinates of the central points corresponding to the semantic features.
204. The electronic equipment determines the orientation relation and the relative distance of any two semantic features as the relative position relation between the two corresponding semantic features.
In the embodiment of the invention, the electronic device can determine the orientation relationship of any two semantic features by calculating, in the planar rectangular coordinate system, the angle between the center points corresponding to the two features. The electronic device can then combine the orientation relationship and the relative distance of any two semantic features to generate their relative position relationship.
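As a hedged illustration only (not part of the claimed method), the calculation of steps 203 and 204 can be sketched in Python, assuming each semantic feature is given as a hypothetical bounding box in the planar rectangular coordinate system:

```python
import math

def center(box):
    # box = (x_min, y_min, x_max, y_max) in image coordinates
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def relative_position(box_a, box_b):
    """Relative distance and orientation relationship (bearing angle,
    degrees) between the center points of two semantic features."""
    (xa, ya), (xb, yb) = center(box_a), center(box_b)
    distance = math.hypot(xb - xa, yb - ya)
    angle = math.degrees(math.atan2(yb - ya, xb - xa))
    return distance, angle

# Hypothetical bounding boxes for a lane line and a signal lamp
lane_line = (0, 0, 4, 2)   # center (2, 1)
signal    = (5, 4, 7, 6)   # center (6, 5)
d, a = relative_position(lane_line, signal)
# d is sqrt(32), a is 45.0 degrees
```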
In the embodiment of the present invention, by implementing the above step 202 to step 204, the relative position relationship between any two semantic features in the road image can be determined through a deep learning algorithm, which improves the accuracy of determining that relationship; this also belongs to one of the invention points of the present invention.
205. The electronic equipment acquires at least one first initial selection position containing all semantic features from a preset automatic driving navigation electronic map.
In the embodiment of the invention, a first initial selection position contains an object corresponding to each semantic feature in the road image, and the number of objects corresponding to each semantic feature at the first initial selection position is the same as the number of that semantic feature in the road image.
206. The electronic equipment determines at least one alternative position matched with the relative position relation from all the first initial positions.
In the embodiment of the present invention, the electronic device may obtain the relative position relationship between any two objects in a first initial selection position, and may match that relationship with the corresponding relative position relationship in the road image; if the matching degree is high, the electronic device may confirm the first initial selection position as an alternative position. The electronic device can execute this step on every first initial selection position, so that all alternative positions among the first initial selection positions are determined; this avoids omitting an alternative position and the positioning errors such an omission would cause.
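The matching check described above can be sketched as follows; the tolerance values, feature names and data layout are hypothetical assumptions for illustration:

```python
def relations_match(image_rel, map_rel, dist_tol=0.5, angle_tol=5.0):
    """Compare the relative position relation (distance, angle) of every
    feature pair; a high matching degree means every pair agrees within
    the tolerances. Keys are frozensets of semantic feature names."""
    for pair, (d_img, a_img) in image_rel.items():
        d_map, a_map = map_rel[pair]
        if abs(d_img - d_map) > dist_tol or abs(a_img - a_map) > angle_tol:
            return False
    return True

# Hypothetical relations: {pair: (relative distance, orientation angle)}
image_rel = {frozenset({"lamp", "sign"}): (12.0, 30.0)}
pos_good  = {frozenset({"lamp", "sign"}): (12.2, 28.5)}  # agrees with image
pos_bad   = {frozenset({"lamp", "sign"}): (20.0, 30.0)}  # distance too far off
candidates = [rel for rel in (pos_good, pos_bad)
              if relations_match(image_rel, rel)]
# only pos_good survives as an alternative position
```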
207. The electronic device calculates first position information of the vehicle corresponding to each alternative position.
In the embodiment of the invention, the electronic device can calculate the first pose information of the vehicle from the road image and the coordinates, in the automatic driving navigation electronic map, of the objects at the alternative position. An alternative position may include a plurality of objects. The automatic driving navigation electronic map may be a three-dimensional map in which a spatial rectangular coordinate system is established, so each object at an alternative position has a three-dimensional coordinate referenced to that coordinate system. The first pose information of the vehicle may consist of the three-dimensional coordinate of the vehicle in the automatic driving navigation electronic map and the three-axis rotation angles of the vehicle's camera, both based on the spatial rectangular coordinate system. From the three-dimensional coordinates of the objects in the automatic driving navigation electronic map and the imaging of those objects in the road image, the electronic device can calculate the three-dimensional coordinate of the vehicle and the three-axis rotation angles of its camera, thereby determining the first pose information of the vehicle; each alternative position can correspond to one piece of first pose information.
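The full pose calculation recovers a three-dimensional coordinate and three-axis rotation angles from the imaging of map objects; as a greatly simplified, hypothetical 2D analogue, the sketch below recovers a vehicle position from its distances to two landmarks with known map coordinates:

```python
import math

def locate_2d(lm_a, lm_b, d_a, d_b):
    """Simplified 2D analogue of pose recovery: intersect the two
    distance circles around landmarks lm_a and lm_b (assumes lm_a is
    the origin and lm_b lies on the positive x-axis; returns the
    solution with y >= 0)."""
    base = lm_b[0] - lm_a[0]      # landmark baseline length
    x = (d_a ** 2 - d_b ** 2 + base ** 2) / (2 * base)
    y = math.sqrt(max(d_a ** 2 - x ** 2, 0.0))
    return (lm_a[0] + x, lm_a[1] + y)

# Landmarks at known map coordinates; distances derived from the image
pos = locate_2d((0.0, 0.0), (10.0, 0.0), 5.0, math.sqrt(45.0))
# pos is (4.0, 3.0)
```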
208. And the electronic equipment constructs a three-dimensional simulated scene corresponding to each alternative position according to the alternative position and the first pose information.
In the embodiment of the invention, the first pose information is calculated from the alternative position in the automatic driving navigation electronic map. Because the electronic device identifies only the semantics of the features in the road image, and not the detail information corresponding to them, the electronic device can restore all the information contained in the road image with the first pose information as the reference: it can construct a three-dimensional simulated scene corresponding to the road image and build all the information of the road image into that scene, thereby converting the two-dimensional road image into a three-dimensional simulated road scene. Because there may be several pieces of first pose information, the electronic device may construct several three-dimensional simulated scenes, one per piece of first pose information.
209. The electronic equipment compares the three-dimensional simulation scenes with the automatic driving navigation electronic map to obtain a first Euclidean distance of the alternative position corresponding to each three-dimensional simulation scene.
In the embodiment of the present invention, Euclidean distance, also referred to as the Euclidean metric, is the actual distance between two objects in two-dimensional or three-dimensional space. Because each alternative position in the automatic driving navigation electronic map corresponds to one piece of first pose information, and each piece of first pose information corresponds to one three-dimensional simulated scene, each alternative position corresponds to one three-dimensional simulated scene; the electronic device therefore compares each three-dimensional simulated scene with its corresponding alternative position to obtain the first Euclidean distance of that alternative position.
210. And the electronic equipment calculates the confidence coefficient of each alternative position according to the first Euclidean distance.
In the embodiment of the present invention, the smaller the first Euclidean distance is, the closer the three-dimensional simulated scene is to its corresponding alternative position, so the first Euclidean distance is inversely related to the confidence of the alternative position; the specific method for calculating the confidence of each alternative position is not limited in the embodiment of the present invention.
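Since the embodiment leaves the confidence formula open, the sketch below uses one hypothetical choice (confidence = 1 / (1 + distance)) together with a mean 3D point distance as the first Euclidean distance:

```python
def mean_euclidean_distance(scene_pts, map_pts):
    """First Euclidean distance: mean 3D distance between matched points
    of the three-dimensional simulated scene and of the electronic map."""
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(scene_pts, map_pts):
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2) ** 0.5
    return total / len(scene_pts)

def confidence(distance):
    # One hypothetical choice; the embodiment only requires that the
    # confidence fall as the first Euclidean distance grows.
    return 1.0 / (1.0 + distance)

scene = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # simulated scene points
map_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # perfect overlap
map_b = [(3.0, 4.0, 0.0), (4.0, 4.0, 0.0)]   # each point offset by 5 units
conf_a = confidence(mean_euclidean_distance(scene, map_a))   # 1.0
conf_b = confidence(mean_euclidean_distance(scene, map_b))   # 1/6
```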
In the embodiment of the present invention, when the above steps 205 to 210 are implemented, at least one alternative position may be determined in the automatic driving navigation electronic map according to the relative position relationships between the semantic features, and the three-dimensional simulated scene generated from the alternative position and the first pose information is then compared with the alternative position in the automatic driving navigation electronic map to obtain the confidence of that position, making the confidence result more accurate; this also belongs to one of the invention points of the present invention.
211. And the electronic equipment determines the candidate position with the highest confidence coefficient as a target position, and relocates the vehicle in the automatic driving navigation electronic map according to the target position.
In the method described in fig. 2, a plurality of alternative positions can be determined from the automatic driving navigation electronic map according to the semantic features in a road image acquired by a camera and the relative position relationships between those features; the confidence of each alternative position is calculated, and the vehicle is repositioned according to the alternative position with the highest confidence, so that the accuracy of vehicle positioning is improved through a deep learning algorithm and image perception technology. Semantic features can be obtained from the road image by a deep learning algorithm, improving the digital image processing capability of the electronic device. In addition, the electronic device can convert the two-dimensional road image into a three-dimensional scene, so that the Euclidean distance can be calculated more accurately.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating another vehicle positioning method based on image sensing according to an embodiment of the present invention. As shown in fig. 3, the image perception-based vehicle positioning method may include the steps of:
301. the electronic device acquires a road image of a road on which the vehicle is located.
302. The electronic equipment identifies the semantic features of the road image, obtains the semantic features of the road image and determines the relative position relationship between any two semantic features.
303. And the electronic equipment acquires at least one second primary selection position containing all semantic features from a preset automatic driving navigation electronic map.
In the embodiment of the invention, a second initial selection position contains an object corresponding to each semantic feature in the road image, and the number of objects corresponding to each semantic feature at the second initial selection position is the same as the number of that semantic feature in the road image.
304. The electronic equipment determines at least one alternative position matched with the relative position relationship from all the second initial positions.
In the embodiment of the present invention, the electronic device may obtain the relative position relationship between any two objects in a second initial selection position, and may match that relationship with the corresponding relative position relationship in the road image; if the matching degree is high, the electronic device may confirm the second initial selection position as an alternative position.
305. The electronic device calculates second position information of the vehicle corresponding to each alternative position.
In the embodiment of the invention, the electronic device can calculate the second pose information of the vehicle from the road image and the coordinates, in the automatic driving navigation electronic map, of the objects at the alternative position. From the three-dimensional coordinates of those objects in the automatic driving navigation electronic map and the imaging of the objects in the road image, the electronic device can calculate the three-dimensional coordinate of the vehicle in the spatial rectangular coordinate system and the three-axis rotation angles of the vehicle's camera based on that coordinate system, thereby determining the second pose information of the vehicle; each alternative position can correspond to one piece of second pose information.
306. And the electronic equipment projects the three-dimensional scene corresponding to each alternative position in the automatic driving navigation electronic map according to the alternative position and the second pose information, to obtain a two-dimensional scene image corresponding to each alternative position, wherein the automatic driving navigation electronic map is a three-dimensional electronic map.
In the embodiment of the present invention, the second pose information calculated by the electronic device may differ between alternative positions, so the projection of the three-dimensional scene corresponding to an alternative position is based on the second pose information of that position: the electronic device projects the three-dimensional image of the alternative position in the three-dimensional scene into a two-dimensional scene image according to the second pose information, thereby converting the three-dimensional scene into a two-dimensional scene image.
307. And the electronic equipment acquires a second Euclidean distance of the alternative position corresponding to each two-dimensional scene image according to the two-dimensional scene images and the road images.
In the embodiment of the invention, each alternative position can be projected as a two-dimensional scene image, and the electronic device can compare each two-dimensional scene image with the road image to generate the second Euclidean distance of the alternative position corresponding to the two-dimensional scene image.
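The projection-and-compare step can be sketched with a pinhole camera model; the focal length, principal point and feature coordinates below are hypothetical:

```python
def project(point, f=800.0, cx=640.0, cy=360.0):
    """Pinhole projection of a 3D point given in the camera frame
    (x right, y down, z forward) to pixel coordinates."""
    x, y, z = point
    return (f * x / z + cx, f * y / z + cy)

def second_euclidean_distance(scene_3d, image_pts):
    """Mean pixel distance between the projected two-dimensional scene
    image and the feature pixels detected in the road image."""
    total = 0.0
    for p3d, (u_img, v_img) in zip(scene_3d, image_pts):
        u, v = project(p3d)
        total += ((u - u_img) ** 2 + (v - v_img) ** 2) ** 0.5
    return total / len(scene_3d)

scene_3d  = [(1.0, 0.0, 10.0), (-1.0, 0.5, 20.0)]   # map objects, camera frame
image_pts = [(720.0, 360.0), (605.0, 380.0)]        # detected feature pixels
d2 = second_euclidean_distance(scene_3d, image_pts) # mean offset: 2.5 px
```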
308. And the electronic equipment calculates the confidence coefficient of each alternative position according to the second Euclidean distance.
In the embodiment of the present invention, the smaller the second Euclidean distance is, the higher the degree of coincidence between the two-dimensional scene image and the road image, so the second Euclidean distance is inversely related to the confidence of the alternative position; the specific method for calculating the confidence of each alternative position is not limited in the embodiment of the present invention.
In the embodiment of the present invention, by implementing the above steps 303 to 308, at least one alternative position may be determined in the automatic driving navigation electronic map through the relative position relationships, the three-dimensional alternative position may be projected as a two-dimensional scene image, the two-dimensional scene image may be compared with the road image, and the confidence of each alternative position may be obtained according to the degree of coincidence between the images; this simplifies the confidence calculation while preserving its accuracy, and also belongs to one of the invention points of the present invention.
309. And the electronic equipment traverses the confidence degrees by using a traversal algorithm to obtain the target confidence degree with the highest confidence degree.
In the embodiment of the invention, a traversal algorithm can traverse all the confidences acquired by the electronic device so that none is omitted. The traversal algorithm may be a RANSAC algorithm or the like; the embodiment of the present invention is not limited thereto. By traversing the confidences, the electronic device can obtain the target confidence, i.e. the highest confidence.
310. The electronic device judges whether the target confidence reaches a preset confidence threshold, if so, the step 311 is executed; if not, the flow is ended.
In the embodiment of the present invention, because the electronic device may introduce errors in the calculation process, the alternative position corresponding to the highest calculated confidence may still differ from the current position of the vehicle. The electronic device therefore needs to check the target confidence again: if the target confidence reaches the preset confidence threshold, the electronic device may consider that the alternative position corresponding to the target confidence matches the current position of the vehicle.
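Steps 309 to 311 can be sketched as a plain traversal with a threshold check; the position labels and the threshold value are hypothetical:

```python
def relocate(candidates, threshold=0.8):
    """Traverse every (position, confidence) pair, keep the target
    confidence, and only relocate when it reaches the threshold."""
    target_pos, target_conf = None, float("-inf")
    for position, conf in candidates:   # traversal avoids omissions
        if conf > target_conf:
            target_pos, target_conf = position, conf
    if target_conf >= threshold:
        return target_pos               # relocate to the target position
    return None                         # end the flow: no reliable match

result = relocate([("pos_a", 0.62), ("pos_b", 0.91), ("pos_c", 0.45)])
# result is "pos_b"
```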
311. And the electronic equipment determines the alternative position corresponding to the target confidence as the target position and relocates the vehicle in the automatic driving navigation electronic map according to the target position.
In the embodiment of the present invention, by implementing the above steps 309 to 311, every confidence can be traversed, so that no confidence is missed in the judgment process, and whether the highest confidence meets the standard can be checked; if so, the alternative position with the highest confidence can be determined as the target position, ensuring the accuracy of positioning. This also belongs to one of the invention points of the present invention.
In the method described in fig. 3, a plurality of candidate positions can be determined from the automatic driving navigation electronic map according to the semantic features and the relative relationship between the semantic features in the road image acquired by the camera, the confidence of each candidate position is calculated, and the vehicle is repositioned according to the candidate position with the highest confidence, so that the accuracy of vehicle positioning is improved through a deep learning algorithm and an image perception technology. The alternative positions can be converted into two-dimensional scene images from three-dimensional images, the calculation amount of the electronic equipment is simplified, and the calculation efficiency of the electronic equipment is improved. In addition, the target confidence coefficient with the highest confidence coefficient can be determined through a traversal algorithm, and each confidence coefficient can be identified.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of a vehicle positioning device based on image sensing according to an embodiment of the present invention. As shown in fig. 4, the image perception-based vehicle localization apparatus may include:
an acquiring unit 401 is configured to acquire a road image of a road on which a vehicle is located.
As an optional implementation manner, the manner of acquiring the road image of the road where the vehicle is located by the acquiring unit 401 may specifically be:
detecting obstacle position information in the environment where the vehicle is located through an infrared detector;
dividing the detection range of the infrared detector into a plurality of areas, and determining a target area with the maximum obstacle density from the plurality of areas according to the obstacle position information;
and adjusting the orientation of the camera to the central line of the target area, and shooting a road image, wherein the road image comprises the obstacle image in the target area.
By implementing this implementation manner, the road image shot by the camera can include a plurality of obstacle images around the road, so that the electronic device can subsequently determine the candidate positions more accurately in the automatic driving navigation electronic map.
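The region-selection logic above can be sketched as follows; the field of view, sector count and obstacle bearings are hypothetical assumptions:

```python
def target_region_centerline(obstacle_angles, fov=(-60.0, 60.0), n_regions=6):
    """Divide the detector's field of view into n_regions equal sectors,
    count obstacles per sector, and return the centerline angle of the
    densest sector (the direction to which the camera is aimed)."""
    lo, hi = fov
    width = (hi - lo) / n_regions
    counts = [0] * n_regions
    for a in obstacle_angles:
        idx = min(int((a - lo) / width), n_regions - 1)
        counts[idx] += 1
    densest = counts.index(max(counts))
    return lo + (densest + 0.5) * width

# Hypothetical obstacle bearings (degrees) from the infrared detector
angle = target_region_centerline([-55.0, -50.0, -52.0, 10.0, 35.0])
# angle is -50.0: the sector [-60, -40) holds the most obstacles
```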
An identifying unit 402, configured to perform semantic feature identification on the road image acquired by the acquiring unit 401, obtain semantic features of the road image, and determine a relative position relationship between any two semantic features.
A calculating unit 403, configured to determine at least one candidate location matching the semantic features and the relative location relationship identified by the identifying unit 402 from a preset electronic map for automatic driving navigation, and calculate a confidence of each candidate location.
As an alternative embodiment, the manner of determining at least one candidate location matching the semantic features and the relative location relationship from the preset electronic map for automatic driving navigation by the computing unit 403 may specifically be:
obtaining semantic features and the number of each semantic feature contained in a road image, and determining feature positions matched with the semantic features from a preset automatic driving navigation electronic map, wherein one semantic feature can be matched with a plurality of feature positions;
performing permutation and combination on the basis of the feature positions to generate a plurality of feature combinations, wherein semantic features corresponding to the feature positions in one feature combination are the same as semantic features contained in the road image, and the number of the feature positions corresponding to any one semantic feature in one feature combination is equal to the number of the semantic features in the road image;
determining at least one target feature combination with the same relative position relation between any two feature positions and the relative position relation between any two semantic features in the road image from the plurality of feature combinations;
and determining the alternative positions according to the feature positions in the target feature combinations, wherein one target feature combination determines one alternative position.
By implementing this implementation manner, all the alternative positions in the automatic driving navigation electronic map that match the semantic features in the road image and their relative position relationships can be determined, improving the accuracy of determining the alternative positions.
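The permutation-and-combination step can be sketched with itertools.product; the feature names and map positions are hypothetical, and the subsequent relative-position check over each combination is omitted here:

```python
import itertools

def feature_combinations(matched_positions):
    """Permute and combine the feature positions matched in the map:
    every combination picks one map position per semantic feature.
    matched_positions maps a semantic feature name to the list of map
    feature positions that match it."""
    names = sorted(matched_positions)
    for combo in itertools.product(*(matched_positions[n] for n in names)):
        yield dict(zip(names, combo))

# One semantic feature may match several feature positions in the map
matched = {"lamp": [(0, 0), (5, 5)], "sign": [(2, 1)]}
combos = list(feature_combinations(matched))
# two combinations: {lamp: (0, 0), sign: (2, 1)} and {lamp: (5, 5), sign: (2, 1)}
```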
A repositioning unit 404, configured to determine the candidate position with the highest confidence degree calculated by the calculating unit 403 as the target position, and reposition the vehicle in the electronic map for automatic driving navigation according to the target position.
In the vehicle positioning device based on image perception described in fig. 4, a plurality of candidate positions can be determined from the automatic driving navigation electronic map according to the semantic features in a road image acquired by a camera and the relative position relationships between those features; the confidence of each candidate position is calculated, and the vehicle is repositioned according to the candidate position with the highest confidence, so that the accuracy of vehicle positioning is improved through a deep learning algorithm and image perception technology. The road image shot by the camera can also include a plurality of obstacle images around the road, so that the electronic device can subsequently determine the candidate positions more accurately in the automatic driving navigation electronic map. In addition, a confidence is calculated for each candidate position, so every candidate position can be evaluated.
EXAMPLE five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another vehicle positioning device based on image sensing according to an embodiment of the disclosure. The vehicle positioning device based on image perception shown in fig. 5 is optimized by the vehicle positioning device based on image perception shown in fig. 4. Compared to the image perception based vehicle localization apparatus shown in fig. 4, the identification unit 402 of the image perception based vehicle localization apparatus shown in fig. 5 may include:
the identifying subunit 4021 is configured to perform semantic feature identification on the road image through a deep learning algorithm to obtain semantic features of the road image.
The first calculating subunit 4022 is configured to calculate an orientation relationship and a relative distance between any two semantic features identified by the identifying subunit 4021.
The first determining subunit 4023 is configured to determine the orientation relationship and the relative distance between any two semantic features calculated by the first calculating subunit 4022 as the relative position relationship between the two corresponding semantic features.
In the embodiment of the invention, the relative position relation between any two semantic features in the road image can be determined through a deep learning algorithm, so that the accuracy of determining the relative position relation between any two semantic features is improved.
As an alternative embodiment, the repositioning unit 404 of the image-aware vehicle positioning device shown in FIG. 5 may include:
the traversal subunit 4041 is configured to traverse the confidence level by using a traversal algorithm, and obtain a target confidence level with the highest confidence level;
the judging subunit 4042 is configured to judge whether the target confidence obtained by traversing the subunit 4041 reaches a preset confidence threshold;
the repositioning sub-unit 4043 is configured to, when the determination result of the determining sub-unit 4042 is yes, determine the candidate position corresponding to the target confidence obtained by traversing the sub-unit 4041 as the target position, and reposition the vehicle in the electronic map for automated driving navigation according to the target position.
By implementing this implementation manner, every confidence can be traversed, avoiding omissions in the confidence judgment process; whether the highest confidence meets the standard can be checked, and if so, the candidate position with the highest confidence can be determined as the target position, ensuring the positioning accuracy of the electronic device.
In the vehicle positioning device based on image perception described in fig. 5, a plurality of candidate positions can be determined from the automatic driving navigation electronic map according to the semantic features in a road image acquired by a camera and the relative position relationships between those features; the confidence of each candidate position is calculated, and the vehicle is repositioned according to the candidate position with the highest confidence, so that the accuracy of vehicle positioning is improved through a deep learning algorithm and image perception technology. Semantic features can be obtained from the road image by a deep learning algorithm, improving the digital image processing capability of the electronic device. In addition, the target confidence, i.e. the highest confidence, can be determined through a traversal algorithm, and every confidence can be examined.
EXAMPLE six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another vehicle positioning device based on image sensing according to an embodiment of the disclosure. The vehicle positioning device based on image perception shown in fig. 6 is optimized by the vehicle positioning device based on image perception shown in fig. 5. Compared to the image perception based vehicle localization apparatus shown in fig. 5, the calculation unit 403 of the image perception based vehicle localization apparatus shown in fig. 6 may include:
a first obtaining subunit 4031, configured to obtain at least one first initial selection location including all semantic features from a preset electronic map of automatic driving navigation.
A second determining subunit 4032, configured to determine at least one candidate location that matches the relative location relationship from all the first preliminary locations acquired by the first acquiring subunit 4031.
A second calculating subunit 4033, configured to calculate first pose information of the vehicle corresponding to each candidate location determined by the second determining subunit 4032.
A constructing subunit 4034, configured to construct a three-dimensional simulated scene corresponding to each candidate position according to the candidate positions determined by the second determining subunit 4032 and the first pose information calculated by the second calculating subunit 4033.
The comparison subunit 4035 is configured to compare the three-dimensional simulation scene constructed by the construction subunit 4034 with the automatic driving navigation electronic map, and obtain a first euclidean distance of the candidate location corresponding to each three-dimensional simulation scene.
The second calculating subunit 4033 is further configured to calculate a confidence of each candidate location according to the first Euclidean distance obtained by the comparison subunit 4035.
In the embodiment of the invention, at least one alternative position can be determined in the automatic driving navigation electronic map according to the relative position relation between the semantic features, and then the three-dimensional simulation scene generated according to the alternative position and the relative position relation is compared with the alternative position in the automatic driving navigation electronic map to obtain the confidence coefficient of the alternative position, so that the result of the confidence coefficient is more accurate.
In the vehicle positioning device based on image perception described in fig. 6, a plurality of candidate positions can be determined from the automatic driving navigation electronic map according to the semantic features in a road image acquired by a camera and the relative position relationships between those features; the confidence of each candidate position is calculated, and the vehicle is repositioned according to the candidate position with the highest confidence, so that the accuracy of vehicle positioning is improved through a deep learning algorithm and image perception technology. Moreover, the two-dimensional road image can be converted into a three-dimensional scene, so that the Euclidean distance calculated by the electronic device is more accurate.
EXAMPLE seven
Referring to fig. 7, fig. 7 is a schematic structural diagram of another vehicle positioning device based on image sensing according to an embodiment of the disclosure. The vehicle positioning device based on image perception shown in fig. 7 is optimized by the vehicle positioning device based on image perception shown in fig. 5. Compared to the image perception based vehicle localization apparatus shown in fig. 5, the calculation unit 403 of the image perception based vehicle localization apparatus shown in fig. 7 may include:
and a second obtaining subunit 4036, configured to obtain at least one second initial selection location including all semantic features from a preset electronic map of automatic driving navigation.
A third determining subunit 4037, configured to determine at least one candidate location that matches the relative location relationship from all the second preliminary locations acquired by the second acquiring subunit 4036.
A third calculating subunit 4038, configured to calculate second pose information of the vehicle corresponding to each candidate location determined by the third determining subunit 4037.
A projection subunit 4039, configured to project, according to the candidate location determined by the third determining subunit 4037 and the second pose information calculated by the third calculating subunit 4038, the three-dimensional scene corresponding to the candidate location in the automatic driving navigation electronic map, to obtain a two-dimensional scene image corresponding to each candidate location, wherein the automatic driving navigation electronic map is a three-dimensional electronic map.
The second obtaining subunit 4036 is further configured to obtain, according to the two-dimensional scene image and the road image obtained by the projection subunit 4039, a second euclidean distance of the candidate position corresponding to each two-dimensional scene image.
The third calculating subunit 4038 is further configured to calculate the confidence of each candidate position according to the second Euclidean distance obtained by the second obtaining subunit 4036.
In the embodiment of the present invention, at least one candidate position can be determined in the automatic driving navigation electronic map through the relative position relationship, the three-dimensional scene at each candidate position is projected into a two-dimensional scene image, the two-dimensional scene image is compared with the road image, and the confidence of each candidate position is obtained according to the degree of coincidence between the images, so that the calculation of the confidence is simplified while its accuracy is preserved.
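The projection step described above can be sketched in the same spirit: a camera at the candidate pose projects 3D map features into the image plane, and the second Euclidean distance is measured in pixels against the features detected in the road image. The pinhole intrinsics (focal, cx, cy), the planar pose with the optical axis along the vehicle's heading, the sign convention for image coordinates, and the 1/(1 + d) confidence mapping are all illustrative assumptions, not details taken from the patent.

```python
import math

def project_point(pt_map, pose, focal=1000.0, cx=640.0, cy=360.0):
    """Project a 3D map point into the image of a camera at the candidate
    planar pose (x, y, yaw). Intrinsics are hypothetical placeholders."""
    x, y, yaw = pose
    px, py, pz = pt_map[0] - x, pt_map[1] - y, pt_map[2]
    c, s = math.cos(yaw), math.sin(yaw)
    forward = c * px + s * py      # depth along the vehicle's heading
    right = -s * px + c * py       # lateral offset in the camera frame
    if forward <= 0:               # behind the camera: not visible
        return None
    u = cx + focal * right / forward
    v = cy - focal * pz / forward  # assumed convention: image v grows downward
    return (u, v)

def projection_confidence(map_points, detected_uv, pose):
    """Second Euclidean distance of the patent: mean pixel distance between
    projected map features and the paired features detected in the road
    image; the 1/(1 + d) confidence mapping is again illustrative."""
    dists = []
    for pt, uv in zip(map_points, detected_uv):
        proj = project_point(pt, pose)
        if proj is None:
            return 0.0
        dists.append(math.dist(proj, uv))
    d = sum(dists) / len(dists)
    return 1.0 / (1.0 + d)
```

Because the comparison happens in the 2D image plane rather than in 3D, each candidate is scored with a handful of projections and pixel distances, which is the computational saving the embodiment claims.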
In the vehicle positioning device based on image perception described in fig. 7, a plurality of candidate positions can be determined from the automatic driving navigation electronic map according to the semantic features in a road image acquired by a camera and the relative position relationships between those semantic features, the confidence of each candidate position is calculated, and the vehicle is relocated according to the candidate position with the highest confidence, so that the accuracy of vehicle positioning is improved through a deep learning algorithm and image perception. In addition, the candidate positions can be converted from three-dimensional scenes into two-dimensional scene images, which reduces the computational load of the electronic device and improves its computational efficiency.
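The relocation decision itself — traverse the confidences of all candidate positions, take the highest as the target confidence, and relocate only when it reaches the preset confidence threshold — reduces to a few lines. The threshold value 0.6 below is a hypothetical placeholder; the patent leaves the threshold as a preset parameter.

```python
def relocate(candidates, confidence_fn, threshold=0.6):
    """Traverse all candidate positions, keep the one with the highest
    confidence (the target confidence), and return it as the target
    position only if that confidence reaches the preset threshold;
    otherwise return None and keep the previous localization."""
    best_pos, best_conf = None, -1.0
    for pos in candidates:
        conf = confidence_fn(pos)
        if conf > best_conf:
            best_pos, best_conf = pos, conf
    return best_pos if best_conf >= threshold else None
```

Returning None when no candidate is trusted mirrors the claimed behavior: relocation in the automatic driving navigation electronic map happens only for a sufficiently confident match.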
Example Eight
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 8, the electronic device may include:
a memory 801 in which executable program code is stored;
a processor 802 coupled with the memory 801;
wherein the processor 802 calls the executable program code stored in the memory 801 to perform some or all of the steps of the methods in the above method embodiments.
The embodiment of the invention also discloses a computer-readable storage medium storing program code, where the program code includes instructions for executing part or all of the steps of the methods in the above method embodiments.
Embodiments of the present invention also disclose a computer program product, wherein, when the computer program product is run on a computer, the computer is caused to execute part or all of the steps of the method as in the above method embodiments.
The embodiment of the present invention also discloses an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method in the above method embodiments.
It should be understood that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required to practice the invention. It should also be understood by those skilled in the art that the sequence numbers of the above processes do not imply any necessary order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A, and that B can be determined from A. It should also be understood, however, that determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM), or another memory such as a magnetic disk or tape, or any other computer-readable medium that can be used to carry or store data.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on such understanding, the part of the technical solution of the present invention that in essence contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the above methods of the embodiments of the present invention.
The vehicle positioning method and device based on image perception disclosed by the embodiments of the invention are described in detail above. Specific examples are applied herein to explain the principle and implementation of the invention, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation on the present invention.

Claims (8)

1. A vehicle positioning method based on image perception, the method comprising:
acquiring a road image of a road where the vehicle is located;
performing semantic feature recognition on the road image to obtain semantic features of the road image, and determining a relative position relationship between any two semantic features;
determining at least one candidate position matching the semantic features and the relative position relationship from a preset automatic driving navigation electronic map, and calculating the confidence of each candidate position;
determining the candidate position with the highest confidence as a target position, and relocating the vehicle in the automatic driving navigation electronic map according to the target position;
wherein the determining at least one candidate position matching the semantic features and the relative position relationship from a preset automatic driving navigation electronic map, and calculating the confidence of each candidate position comprises:
acquiring at least one first preliminary position containing all the semantic features from the preset automatic driving navigation electronic map; determining at least one candidate position matching the relative position relationship from all the first preliminary positions; calculating first pose information of the vehicle corresponding to each candidate position; constructing a three-dimensional simulated scene corresponding to each candidate position according to the candidate positions and the first pose information; comparing the three-dimensional simulated scenes with the automatic driving navigation electronic map to obtain a first Euclidean distance for the candidate position corresponding to each three-dimensional simulated scene; and calculating the confidence of each candidate position according to the first Euclidean distance.
2. The method according to claim 1, wherein the performing semantic feature recognition on the road image, obtaining semantic features of the road image, and determining a relative position relationship between any two semantic features comprises:
performing semantic feature recognition on the road image through a deep learning algorithm to obtain semantic features of the road image;
calculating the orientation relation and the relative distance of any two semantic features;
and determining the orientation relation and the relative distance of any two semantic features as the relative position relation between the corresponding two semantic features.
3. The method according to claim 1 or 2, wherein the determining the candidate position with the highest confidence as a target position, and relocating the vehicle in the automatic driving navigation electronic map according to the target position comprises:
traversing the confidences by using a traversal algorithm to obtain the highest confidence as a target confidence;
judging whether the target confidence reaches a preset confidence threshold;
and if the target confidence reaches the preset confidence threshold, determining the candidate position corresponding to the target confidence as the target position, and relocating the vehicle in the automatic driving navigation electronic map according to the target position.
4. A vehicle positioning method based on image perception, the method comprising:
acquiring a road image of a road where the vehicle is located;
performing semantic feature recognition on the road image to obtain semantic features of the road image, and determining a relative position relationship between any two semantic features;
determining at least one candidate position matching the semantic features and the relative position relationship from a preset automatic driving navigation electronic map, and calculating the confidence of each candidate position;
determining the candidate position with the highest confidence as a target position, and relocating the vehicle in the automatic driving navigation electronic map according to the target position;
wherein the determining at least one candidate position matching the semantic features and the relative position relationship from a preset automatic driving navigation electronic map, and calculating the confidence of each candidate position comprises:
acquiring at least one second preliminary position containing all the semantic features from the preset automatic driving navigation electronic map; determining at least one candidate position matching the relative position relationship from all the second preliminary positions; calculating second pose information of the vehicle corresponding to each candidate position; projecting the three-dimensional scene corresponding to each candidate position in the automatic driving navigation electronic map according to the candidate positions and the second pose information, to obtain a two-dimensional scene image corresponding to each candidate position, wherein the automatic driving navigation electronic map is a three-dimensional electronic map; acquiring a second Euclidean distance for the candidate position corresponding to each two-dimensional scene image according to the two-dimensional scene images and the road image; and calculating the confidence of each candidate position according to the second Euclidean distance.
5. A vehicle positioning device based on image perception, comprising:
an acquisition unit configured to acquire a road image of a road on which the vehicle is located;
the recognition unit is used for performing semantic feature recognition on the road image, obtaining semantic features of the road image, and determining the relative position relationship between any two semantic features;
the calculation unit is used for determining at least one candidate position matching the semantic features and the relative position relationship from a preset automatic driving navigation electronic map, and calculating the confidence of each candidate position;
a repositioning unit, configured to determine the candidate position with the highest confidence as a target position, and relocate the vehicle in the automatic driving navigation electronic map according to the target position;
the calculation unit includes:
the first acquisition subunit is used for acquiring at least one first preliminary position containing all the semantic features from a preset automatic driving navigation electronic map;
a second determining subunit, configured to determine at least one candidate position matching the relative position relationship from all the first preliminary positions;
the second calculating subunit is used for calculating first pose information of the vehicle corresponding to each candidate position;
the construction subunit is configured to construct, according to the candidate positions and the first pose information, a three-dimensional simulated scene corresponding to each candidate position;
the comparison subunit is used for comparing the three-dimensional simulated scenes with the automatic driving navigation electronic map to obtain a first Euclidean distance for the candidate position corresponding to each three-dimensional simulated scene;
the second calculating subunit is further configured to calculate the confidence of each candidate position according to the first Euclidean distance.
6. The vehicle positioning device based on image perception according to claim 5, wherein the recognition unit comprises:
the recognition subunit is used for performing semantic feature recognition on the road image through a deep learning algorithm to obtain semantic features of the road image;
the first calculation subunit is used for calculating the orientation relation and the relative distance of any two semantic features;
and the first determining subunit is used for determining the orientation relation and the relative distance of any two semantic features as the relative position relation between the corresponding two semantic features.
7. The vehicle positioning device based on image perception according to claim 5 or 6, wherein the repositioning unit comprises:
the traversal subunit is used for traversing the confidences by using a traversal algorithm to obtain the highest confidence as a target confidence;
the judging subunit is used for judging whether the target confidence reaches a preset confidence threshold;
and the repositioning subunit is used for determining, when the judgment result of the judging subunit is yes, the candidate position corresponding to the target confidence as the target position, and relocating the vehicle in the automatic driving navigation electronic map according to the target position.
8. A vehicle positioning device based on image perception, the device comprising:
an acquisition unit configured to acquire a road image of a road on which the vehicle is located;
the recognition unit is used for performing semantic feature recognition on the road image, obtaining semantic features of the road image, and determining the relative position relationship between any two semantic features;
the calculation unit is used for determining at least one candidate position matching the semantic features and the relative position relationship from a preset automatic driving navigation electronic map, and calculating the confidence of each candidate position;
a repositioning unit, configured to determine the candidate position with the highest confidence as a target position, and relocate the vehicle in the automatic driving navigation electronic map according to the target position;
the calculation unit includes:
the second acquisition subunit is used for acquiring at least one second preliminary position containing all the semantic features from a preset automatic driving navigation electronic map;
a third determining subunit, configured to determine at least one candidate position matching the relative position relationship from all the second preliminary positions;
a third calculating subunit, configured to calculate second pose information of the vehicle corresponding to each candidate position;
the projection subunit is configured to project, according to the candidate positions and the second pose information, the three-dimensional scenes corresponding to the candidate positions in the automatic driving navigation electronic map to obtain a two-dimensional scene image corresponding to each candidate position, wherein the automatic driving navigation electronic map is a three-dimensional electronic map;
the second acquisition subunit is further configured to acquire, according to the two-dimensional scene images and the road image, a second Euclidean distance for the candidate position corresponding to each two-dimensional scene image;
and the third calculating subunit is further configured to calculate the confidence of each candidate position according to the second Euclidean distance.
CN201811242957.1A 2018-10-24 2018-10-24 Vehicle positioning method and device based on image perception Active CN110146096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811242957.1A CN110146096B (en) 2018-10-24 2018-10-24 Vehicle positioning method and device based on image perception

Publications (2)

Publication Number Publication Date
CN110146096A CN110146096A (en) 2019-08-20
CN110146096B true CN110146096B (en) 2021-07-20

Family

ID=67588409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811242957.1A Active CN110146096B (en) 2018-10-24 2018-10-24 Vehicle positioning method and device based on image perception

Country Status (1)

Country Link
CN (1) CN110146096B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446234A (en) * 2019-08-28 2021-03-05 北京初速度科技有限公司 Position determination method and device based on data association
CN110765224A (en) * 2019-10-25 2020-02-07 驭势科技(北京)有限公司 Processing method of electronic map, vehicle vision repositioning method and vehicle-mounted equipment
CN112837365B (en) * 2019-11-25 2023-09-12 北京魔门塔科技有限公司 Image-based vehicle positioning method and device
CN110979346B (en) * 2019-11-29 2021-08-31 北京百度网讯科技有限公司 Method, device and equipment for determining lane where vehicle is located
CN111508258B (en) * 2020-04-17 2021-11-05 北京三快在线科技有限公司 Positioning method and device
CN111707277B (en) * 2020-05-22 2022-01-04 上海商汤临港智能科技有限公司 Method, device and medium for acquiring road semantic information
CN112484744B (en) * 2020-12-07 2023-05-02 广州小鹏自动驾驶科技有限公司 Evaluation method and device of autonomous parking semantic map
CN113095184B (en) * 2021-03-31 2023-01-31 上海商汤临港智能科技有限公司 Positioning method, driving control method, device, computer equipment and storage medium
CN114088099A (en) * 2021-11-18 2022-02-25 北京易航远智科技有限公司 Semantic relocation method and device based on known map, electronic equipment and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4577655B2 (en) * 2005-12-27 2010-11-10 アイシン・エィ・ダブリュ株式会社 Feature recognition device
KR100815153B1 (en) * 2006-11-08 2008-03-19 한국전자통신연구원 Apparatus and method for guiding a cross road of car navigation using a camera
CN108303103B (en) * 2017-02-07 2020-02-07 腾讯科技(深圳)有限公司 Method and device for determining target lane
CN107144285B (en) * 2017-05-08 2020-06-26 深圳地平线机器人科技有限公司 Pose information determination method and device and movable equipment
CN107742311B (en) * 2017-09-29 2020-02-18 北京易达图灵科技有限公司 Visual positioning method and device

Also Published As

Publication number Publication date
CN110146096A (en) 2019-08-20

Similar Documents

Publication Publication Date Title
CN110146096B (en) Vehicle positioning method and device based on image perception
US11915099B2 (en) Information processing method, information processing apparatus, and recording medium for selecting sensing data serving as learning data
US20220392108A1 (en) Camera-only-localization in sparse 3d mapped environments
CN109540148B (en) Positioning method and system based on SLAM map
CN109353334B (en) Parking space detection method and device
JP4702569B2 (en) Image processing apparatus for vehicle
JP2020035447A (en) Object identification method, device, apparatus, vehicle and medium
KR102006291B1 (en) Method for estimating pose of moving object of electronic apparatus
KR101880185B1 (en) Electronic apparatus for estimating pose of moving object and method thereof
KR102167835B1 (en) Apparatus and method of processing image
CN111750882B (en) Method and device for correcting vehicle pose during initialization of navigation map
WO2022217988A1 (en) Sensor configuration scheme determination method and apparatus, computer device, storage medium, and program
CN111750881A (en) Vehicle pose correction method and device based on light pole
JP6552448B2 (en) Vehicle position detection device, vehicle position detection method, and computer program for vehicle position detection
CN111340877A (en) Vehicle positioning method and device
KR20200094075A (en) Method and device for merging object detection information detected by each of object detectors corresponding to each camera nearby for the purpose of collaborative driving by using v2x-enabled applications, sensor fusion via multiple vehicles
CN111105695A (en) Map making method and device, electronic equipment and computer readable storage medium
CN113137968B (en) Repositioning method and repositioning device based on multi-sensor fusion and electronic equipment
CN114969221A (en) Method for updating map and related equipment
CN112689234B (en) Indoor vehicle positioning method, device, computer equipment and storage medium
CN110298320B (en) Visual positioning method, device and storage medium
KR102371617B1 (en) Apparatus and method for recognizing object of vehicle
JP2020513551A (en) Method and apparatus for determining the exact position of a vehicle based on radar signatures around the vehicle
CN113513983A (en) Precision detection method, device, electronic equipment and medium
CN108981700A (en) A kind of positioning and orientation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220303

Address after: 100083 unit 501, block AB, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing

Patentee after: BEIJING MOMENTA TECHNOLOGY Co.,Ltd.

Address before: 100083 room 28, 4 / F, block a, Dongsheng building, 8 Zhongguancun East Road, Haidian District, Beijing

Patentee before: BEIJING CHUSUDU TECHNOLOGY Co.,Ltd.