CN110658539A - Vehicle positioning method, device, vehicle and computer readable storage medium - Google Patents


Info

Publication number
CN110658539A
CN110658539A (application CN201810714095.1A)
Authority
CN
China
Prior art keywords
feature
vehicle
features
target
image
Prior art date
Legal status: Granted
Application number
CN201810714095.1A
Other languages
Chinese (zh)
Other versions
CN110658539B (en)
Inventor
李杨
刘效飞
万超
白军明
Current Assignee
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date
Filing date
Publication date
Application filed by BYD Co Ltd
Priority to CN201810714095.1A
Publication of CN110658539A
Application granted
Publication of CN110658539B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 Determining position

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a vehicle positioning method, a vehicle positioning device, a vehicle and a storage medium. The method comprises the following steps: acquiring an image shot by a vehicle, and extracting features in the image to obtain an identification feature set; determining fuzzy position information of a vehicle; acquiring a target feature map of an area where a fuzzy position is located from a pre-established feature map according to the fuzzy position information, wherein the pre-established feature map comprises all image features on each road and longitude and latitude coordinate information of all the image features; and obtaining the target position of the vehicle according to the identification feature set and the target feature map of the area where the fuzzy position is located. The method can greatly reduce the cost of positioning hardware and is more beneficial to the popularization of the automatic driving technology of the vehicle in the road environment.

Description

Vehicle positioning method, device, vehicle and computer readable storage medium
Technical Field
The present invention relates to the field of positioning technologies, and in particular, to a vehicle positioning method and apparatus, a vehicle, and a computer-readable storage medium.
Background
At present, enterprises at home and abroad are competing to develop automatic driving technology for vehicles, and automatic driving is the inevitable trend of future driving technology. The positioning system is a key technology that directly determines whether the whole automatic driving scheme can be realized. Existing self-positioning schemes are usually realized by combined navigation with high-precision GNSS and high-precision inertial navigation.
However, such a combined navigation scheme has the problems of high cost and inability to position in tunnels or in scenes sheltered by high-rise buildings where satellite signals are unavailable, which greatly hinders the popularization of automatic driving technology.
Disclosure of Invention
The present invention aims to solve, at least to some extent, one of the above-mentioned technical problems.
To this end, a first object of the invention is to propose a vehicle positioning method. The method can greatly reduce the cost of positioning hardware and is more beneficial to the popularization of the automatic driving technology of the vehicle in the road environment.
A second object of the present invention is to provide a vehicle positioning apparatus.
A third object of the invention is to propose a vehicle.
A fourth object of the invention is to propose a computer-readable storage medium.
In order to achieve the above object, a vehicle positioning method according to an embodiment of a first aspect of the present invention includes: acquiring an image shot by a vehicle, and extracting features in the image to obtain an identification feature set; determining fuzzy position information of the vehicle; acquiring a target feature map of an area where a fuzzy position is located from a pre-established feature map according to the fuzzy position information, wherein the pre-established feature map comprises all image features on each road and longitude and latitude coordinate information of all the image features; and obtaining the target position of the vehicle according to the identification feature set and the target feature map of the area where the fuzzy position is located.
According to the vehicle positioning method of the embodiment of the invention, the vehicle can first be located by an ordinary-accuracy vehicle-mounted satellite positioning system to obtain a fuzzy position; the feature map near the fuzzy position is then retrieved according to the fuzzy position, the camera on the vehicle performs real-time visual recognition, and the features recognized in real time are matched against the feature map to realize self-positioning of the vehicle and obtain its accurate position. Thus, in the whole positioning process the vehicle to be positioned does not depend heavily on auxiliary satellite positioning, and because of the continuous positional relation among the features, even a short loss of satellite signals does not affect the positioning of the vehicle; the positioning hardware cost is greatly reduced, which facilitates the popularization of automatic driving technology for vehicles in the road environment.
In order to achieve the above object, a vehicle positioning device according to an embodiment of a second aspect of the present invention includes: the image acquisition module is used for acquiring an image shot by a vehicle; the characteristic extraction module is used for extracting the characteristics in the image to obtain an identification characteristic set; the determining module is used for determining fuzzy position information of the vehicle; the characteristic map acquisition module is used for acquiring a target characteristic map of an area where a fuzzy position is located from a pre-established characteristic map according to the fuzzy position information, wherein the pre-established characteristic map comprises all image characteristics on each road and longitude and latitude coordinate information of all the image characteristics; and the positioning module is used for obtaining the target position of the vehicle according to the identification feature set and the target feature map of the area where the fuzzy position is located.
According to the vehicle positioning device of the embodiment of the invention, the vehicle can first be located by an ordinary-accuracy vehicle-mounted satellite positioning system to obtain a fuzzy position; the feature map near the fuzzy position is then retrieved according to the fuzzy position, the camera on the vehicle performs real-time visual recognition, and the features recognized in real time are matched against the feature map to realize self-positioning of the vehicle and obtain its accurate position. Thus, in the whole positioning process the vehicle to be positioned does not depend heavily on auxiliary satellite positioning, and because of the continuous positional relation among the features, even a short loss of satellite signals does not affect the positioning of the vehicle; the positioning hardware cost is greatly reduced, which facilitates the popularization of automatic driving technology for vehicles in the road environment.
In order to achieve the above object, a vehicle according to a third aspect of the present invention includes: the system comprises a camera, a vehicle-mounted satellite positioning system, a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the camera is used for collecting images of the surrounding environment of the vehicle; the vehicle-mounted satellite positioning system is used for positioning the vehicle, wherein the positioning accuracy of the vehicle-mounted satellite positioning system is smaller than a preset threshold value; the processor, when executing the program, implements the vehicle positioning method according to the embodiment of the first aspect of the present invention.
To achieve the above object, a non-transitory computer-readable storage medium according to an embodiment of the fourth aspect of the present invention stores thereon a computer program which, when executed by a processor, implements the vehicle positioning method according to the embodiment of the first aspect of the present invention.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a vehicle locating method according to one embodiment of the present invention;
FIG. 2 is a schematic diagram of a sensor arrangement in a vehicle according to an embodiment of the invention;
FIG. 3 is a flow chart of establishing the feature map according to an embodiment of the present invention;
FIG. 4 is a flow chart of a vehicle locating method according to an embodiment of the present invention;
FIG. 5 is a flow diagram of feature matching according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of the azimuth of a feature relative to a vehicle according to an embodiment of the present invention;
FIG. 7 is a flow chart for determining a target position of a vehicle according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of vehicle self-locating identification according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a vehicle locating device in accordance with one embodiment of the present invention;
FIG. 10 is a schematic diagram of a vehicle locating device in accordance with one embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a vehicle locating device in accordance with another embodiment of the present invention;
fig. 12 is a schematic structural diagram of a vehicle according to an embodiment of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
A vehicle positioning method, a device, a vehicle, and a computer-readable storage medium according to embodiments of the present invention are described below with reference to the accompanying drawings.
FIG. 1 is a flow chart of a vehicle localization method according to one embodiment of the present invention. It should be noted that the vehicle positioning method according to the embodiment of the present invention may be applied to the vehicle positioning device according to the embodiment of the present invention. The vehicle positioning device may be configured on a vehicle.
As shown in fig. 1, the vehicle positioning method may include:
and S110, acquiring an image shot by the vehicle, and extracting features in the image to obtain an identification feature set.
Optionally, the vehicle may be equipped with an image capture device, such as a camera, with which images of the environment surrounding the vehicle can be captured. As an example, assume the vehicle is mounted with cameras; the number of cameras may be multiple, for example 4. As shown in fig. 2, the 4 cameras (i.e., the camera 1, the camera 2, the camera 3, and the camera 4) may be respectively disposed on the front, rear, left, and right sides of the vehicle: for example, the camera 1 is disposed on the front windshield of the vehicle, the camera 2 on the rear windshield, the camera 3 on the door of the passenger side, and the camera 4 on the door of the driver side, so that image information around the vehicle can be captured in real time by the 4 cameras. Optionally, in an embodiment of the present invention, the optical axes of the 4 cameras are horizontal, and the horizontal field angles are all a preset angle (e.g., 90 degrees).
After the images captured by the vehicle are obtained, features in the captured images can be identified and extracted according to specific feature extraction algorithms to obtain the various features in the images, and these features can be combined to obtain an identification feature set. In one embodiment of the present invention, the identification feature set may include a plurality of features, which may include but are not limited to point features, line segment features, specific target identification features, and the like, wherein the specific targets may include but are not limited to one or more of lane lines, directional arrows, stop lines, sidewalks, traffic lights, utility poles, road signs, and the like.
It will be appreciated that the feature extraction algorithm employed will vary as the type of features obtained varies. Optionally, the features in the image are identified and extracted by a plurality of feature extraction algorithms respectively, so as to obtain an identified feature set in the image. That is, after obtaining an image captured by a vehicle, the image may be respectively subjected to a plurality of feature extraction algorithms to respectively acquire various features included in the image, such as point features (or also referred to as point coordinate features), line segment features, specific object identification features, and the like.
As an example, the specific implementation manner of obtaining the recognition feature set in the image by respectively recognizing and extracting the features in the image through the multiple feature extraction algorithms may be as follows:
1) Extracting point features: identifying and extracting point features in the image through the Speeded-Up Robust Features (SURF) algorithm. Since the SURF algorithm is scale-invariant, this property can be fully utilized, and feature matching can still be achieved when the distance and the angle between the vehicle and the feature change. In addition, the SURF algorithm achieves acceleration by using the integral image, quickly computing the sum of all pixels within any rectangular region of the image, so extracting point features through the SURF algorithm can greatly improve the extraction efficiency of point features in the image.
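As a minimal illustration (not part of the claimed method), the integral-image acceleration mentioned above can be sketched in pure Python: once the summed-area table is built, the sum of the pixels inside any rectangle is obtained from just four table lookups.

```python
def integral_image(img):
    """Compute the summed-area table of a 2-D grid of pixel values.
    ii[y][x] holds the sum of all pixels above and left of (x, y)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the rectangle [x0, x1) x [y0, y1) in O(1)."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]
```

SURF uses exactly this O(1) rectangle sum to evaluate its box filters at every scale without rescaling the image.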
2) Extracting line segment features: identifying and extracting edge features in the image through the Canny edge detection algorithm, and then identifying the extracted edge features with a straight-line detection algorithm (such as the probabilistic Hough transform) to obtain the line segment features in the image. That is, edge detection can be performed on the image by the Canny algorithm, which, besides filtering and gradient operations, performs edge thinning and edge linking, so the edge localization accuracy can be high.
In order to reduce the influence of noise on image edge detection, optionally, before extracting line segment features, the image is smoothed by a Gaussian filter; the Canny algorithm then computes the gradient magnitude and direction of the denoised image and applies non-maximum suppression, after which high and low thresholds are set to remove false edges and connect true edges, yielding the edge features in the image.
After edge feature extraction, fitting and identification can be performed with the probabilistic Hough transform. For example, in a discretized grid of the parameter space, each pixel is mapped to the parameter space by a "many-to-one" mapping, and the mapping of collinear pixels in the parameter space is then found by accumulated "voting", giving the line segment features after the Hough transform. As an example, the extracted line segment features may include but are not limited to: position, slope, starting point, break point, ending point, line edge gradient distribution, line color distribution, and the like.
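The "many-to-one" voting described above can be sketched as follows. This is a simplified standard Hough accumulator in plain Python, not the patent's implementation; the grid resolution parameters are illustrative.

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0):
    """Vote each edge point into a (theta, rho) accumulator (the
    'many-to-one' mapping: one point votes in every theta cell), then
    return the (theta, rho) cell with the most votes as the dominant line."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    (t_best, rho_best), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t_best / n_theta, rho_best * rho_res, votes
```

The probabilistic variant used in practice votes with only a random subset of edge points and stops early once a line has enough support, which is why it is fast enough for real-time use.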
3) Extracting specific target identification features: identifying and extracting the specific target identification features in the image through a pre-trained deep learning network, wherein the deep learning network is obtained by pre-collecting an image data set of a specific target, classifying the target in the image data set according to an angle direction and then training by using the deep learning network.
That is, the training recognition can be performed using a neural network deep learning classification method. The specific target recognition features can be extracted from the images through the trained deep learning network. It will be appreciated that the specific object identifying feature may comprise a specific object and vector information for the specific object. The specific target may include, but is not limited to, one or more of a lane line, a directional arrow, a stop line, a sidewalk, a traffic light, a utility pole, a road sign, and the like.
In this way, after point features, line segment features, and specific object recognition features are extracted from the image, these features may be combined together to obtain a recognition feature set of the image.
And S120, determining fuzzy position information of the vehicle.
It should be noted that, in the embodiment of the present invention, an on-board satellite positioning system is installed on the vehicle, and the positioning accuracy of the on-board satellite positioning system is smaller than a preset threshold; for example, it may be an ordinary-accuracy positioning module installed on the vehicle as shown in fig. 2. Optionally, the vehicle is initially located by the on-board satellite positioning system to determine the fuzzy position information of the vehicle. That is, the vehicle is initially located using an ordinary-accuracy on-board satellite positioning system on the vehicle, resulting in an approximate location of the vehicle (i.e., the fuzzy position information).
And S130, acquiring a target feature map of the area where the fuzzy position is located from a pre-established feature map according to the fuzzy position information, wherein the pre-established feature map comprises all image features on each road and longitude and latitude coordinate information of all the image features.
Optionally, an area where the fuzzy position is located is formed by taking the fuzzy position information as a center and a preset distance as a radius, and the corresponding target feature map is acquired from the pre-established feature map according to this area. For example, after the approximate position of the vehicle is obtained (the fuzzy position has a certain error, e.g., less than L meters), a search may be performed in the pre-established feature map to extract the set of all map features that are less than 2L meters away from the approximate position; this set and the longitude and latitude coordinates of its features form the target feature map.
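As an illustrative sketch (the patent does not prescribe a distance formula), the retrieval of the target feature map can be modeled as a haversine-distance filter over a list of features carrying latitude/longitude fields; the `radius_m` argument plays the role of the 2L bound.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_R * math.asin(math.sqrt(a))

def features_near(feature_map, lat, lon, radius_m):
    """Select the target feature map: every stored feature within radius_m
    of the fuzzy position (e.g. radius_m = 2 * L when the GNSS error < L m)."""
    return [f for f in feature_map
            if haversine_m(lat, lon, f["lat"], f["lon"]) <= radius_m]
```

A production system would index the map spatially (grid or R-tree) instead of scanning linearly, but the selection criterion is the same.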
As an example, as shown in fig. 3, the feature map may be pre-established by:
s310, acquiring a sample image acquired by acquisition equipment;
as an example, the acquisition device may be a vehicle or a drone, and may acquire a sample image acquired when the vehicle is traveling on a road, or may acquire a sample image acquired when the drone is flying in the air.
S320, identifying and extracting the features in the sample image to obtain the sample features in the sample image;
it can be understood that, in this step, the point features in the sample image may be identified and extracted through the SURF algorithm, the edge features in the sample image may be identified and extracted through the Canny algorithm, and the extracted edge features may be identified based on the straight line detection algorithm, so as to obtain the line segment features in the sample image; and identifying and extracting specific target identification features in the sample image through a pre-trained deep learning network. Thus, all the sample features in the sample image can be obtained.
S330, acquiring longitude and latitude coordinates of the acquisition equipment, and acquiring position coordinates of the sample characteristics relative to the acquisition equipment;
as an example, a vertical angle and a horizontal angle of the sample feature with respect to the collection device may be determined, distance information between the sample feature and the collection device may be obtained, and the position coordinates of the sample feature with respect to the collection device may be calculated based on the vertical angle and the horizontal angle of the sample feature with respect to the collection device, and the distance information.
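A minimal sketch of this angle-and-range computation, assuming the device reports a horizontal angle, a vertical angle, and a distance: the relative position is a spherical-to-Cartesian conversion into a local (forward, left, up) frame. The frame convention is an assumption of this sketch, not stated in the patent.

```python
import math

def relative_position(horiz_angle_deg, vert_angle_deg, distance_m):
    """Convert a feature's horizontal/vertical angle and range (relative to
    the collection device) into local Cartesian offsets (forward, left, up)."""
    h = math.radians(horiz_angle_deg)
    v = math.radians(vert_angle_deg)
    ground = distance_m * math.cos(v)      # projection onto the ground plane
    return (ground * math.cos(h),          # forward
            ground * math.sin(h),          # left
            distance_m * math.sin(v))      # up
```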
S340, calculating longitude and latitude coordinates of the sample characteristics according to the position coordinates of the sample characteristics relative to the acquisition equipment and the longitude and latitude coordinates of the acquisition equipment;
and S350, generating a feature map according to the sample features and the longitude and latitude coordinates of the sample features.
Therefore, as the acquisition device moves continuously along the road or through the air, it can continuously acquire images, so that all features on the road and their longitude and latitude coordinates can be recorded, correspondences between the sample features and their longitude and latitude coordinates can be established, and all sample features, their longitude and latitude coordinates, and the correspondences can be stored to form the feature map.
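Step S340 (computing a feature's latitude/longitude from the device's latitude/longitude plus a local offset) can be approximated for small offsets with an equirectangular conversion; this is an illustrative simplification, not the patent's formula.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres

def feature_latlon(dev_lat, dev_lon, east_m, north_m):
    """Small-offset (equirectangular) approximation: turn a feature's local
    east/north offset from the collection device into latitude/longitude.
    Adequate for offsets of tens of metres, as in roadside feature mapping."""
    dlat = math.degrees(north_m / EARTH_R)
    dlon = math.degrees(east_m / (EARTH_R * math.cos(math.radians(dev_lat))))
    return dev_lat + dlat, dev_lon + dlon
```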
It should be noted that, in an embodiment of the present invention, there is no fixed execution order between step S110 and steps S120 and S130. That is, step S110 may be performed first, followed by steps S120 and S130; alternatively, steps S120 and S130 may be performed first, followed by step S110.
And S140, obtaining the target position of the vehicle according to the identification feature set and the target feature map of the area where the fuzzy position is located.
Optionally, matching the identification feature set with the feature set in the target feature map to obtain a successfully matched feature set, and calculating the longitude and latitude coordinates of the vehicle according to the features in the successfully matched feature set and the longitude and latitude coordinate information of the features. Therefore, the vehicle self-positioning method is realized by acquiring the feature map of the specific road in advance, performing real-time visual identification by using the vehicle-mounted camera, and then matching the features identified in real time with the feature map, so that the accuracy of vehicle positioning is improved.
As an example, as shown in fig. 4, the specific implementation process of obtaining the target position of the vehicle according to the recognition feature set and the target feature map of the area where the fuzzy position is located may be as follows:
s410, acquiring a map feature set in a target feature map of an area where the fuzzy position is located;
optionally, after obtaining the target feature map, each feature may be extracted from the target feature map, and the features may be combined together to obtain the map feature set.
S420, matching the features in the recognition feature set with the features in the map feature set to obtain a matched feature set;
for example, as shown in fig. 5, the matching feature set D3 may be obtained by sequentially extracting features from the identified feature set D1 and matching the features with the features in the map feature set D2, adding the successfully matched features to the set D3 if the matching is successful, and continuing to extract the next feature from the identified feature set D1 and matching the next feature with the features in the map feature set D2 if the matching is unsuccessful, and thus looping until all the features in the identified feature set D1 are matched with the features in the map feature set D2.
Optionally, in an embodiment of the present invention, in order to improve accuracy of the positioning result, a number of the matching features in the matching feature set may be determined, and when the number meets a requirement, the matching feature set may be used to perform subsequent positioning. As an example, after matching the features in the identified feature set with the features in the map feature set to obtain a matched feature set, the number of matched features in the matched feature set may be calculated, and it is determined whether the number of matched features is greater than a preset threshold, if yes, step S430 is performed.
For example, the total number of all the features in the matching feature set is counted, and the total number is used as the number of the matching features, and if the number is greater than a preset threshold, it indicates that the features included in the matching feature set are sufficient for positioning the vehicle.
For another example, considering that the importance degrees of the "point" features, the "line segment" features, and the "specific target" (i.e., vector target) identification features are different, different weights a, b, and c may be assigned to these three kinds of features. If the numbers of matched "point", "line segment", and "vector target" features are P1, P2, and P3 respectively, the matched feature number P is calculated by weighted summation, namely P = a*P1 + b*P2 + c*P3. When P is larger than a preset threshold PMAX, it is judged that the number of matched features meets the requirement, that is, the features contained in the matched feature set are sufficient to realize the positioning of the vehicle. In other words, after the matched feature set is obtained, the number of each kind of feature in the matched feature set may be counted, weighted summation may be performed according to the number of each kind of feature and its corresponding weight, the obtained sum is taken as the matched feature number, and whether the matched feature number is greater than the preset threshold is determined; if yes, step S430 is executed.
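The weighted count is simple to express; the weight values below are placeholders, since the patent does not fix a, b, or c.

```python
def matched_feature_number(p1, p2, p3, a=1.0, b=2.0, c=3.0):
    """Weighted matched-feature count P = a*P1 + b*P2 + c*P3, where P1/P2/P3
    are the counts of matched point / line segment / vector-target features.
    The default weights are illustrative, not values from the patent."""
    return a * p1 + b * p2 + c * p3
```

The positioning step would then proceed only when `matched_feature_number(...)` exceeds the preset threshold PMAX.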
It should be noted that, in the embodiment of the present invention, regarding the content of matching three features, namely, the "point" feature, the "line segment" feature, and the "specific target" identification feature, the point feature may directly use the descriptor of SURF, the color information of the pixel point, and the like; the line segment characteristics can be matched with line edge gradient distribution, line color distribution characteristic information and the like of the line segment; a particular object identifying feature may directly match a particular type of the object, etc.
Therefore, the method and the device perform matching positioning by comprehensively using the three characteristics of the point characteristic, the line segment characteristic and the specific target identification characteristic, can better increase the number of the characteristics which can be extracted from the road, so as to increase the coverage rate of the characteristics in the road and improve the accuracy of subsequent positioning.
S430, acquiring longitude and latitude coordinate information of each feature in the matched feature set from a target feature map of an area where the fuzzy position is located;
s440, determining the azimuth angle of each feature in the matched feature set relative to the vehicle;
optionally, when the vehicle acquires an image through an image acquisition device (such as a camera), an azimuth angle of each feature in the matched feature set relative to the vehicle can be obtained. For example, as shown in fig. 6, the feature 1 and the feature 2 are two features in the matching feature set, respectively, and an azimuth angle α of the feature 1 with respect to the vehicle and an azimuth angle β of the feature 2 with respect to the vehicle can be obtained.
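Assuming a pinhole camera with the 90-degree horizontal field of view mentioned earlier, a feature's azimuth relative to the vehicle can be derived from its pixel column and the camera's mounting direction; this model is an illustrative assumption, not the patent's method.

```python
import math

def feature_azimuth(px, image_width, cam_heading_deg, hfov_deg=90.0):
    """Map a feature's pixel column to an azimuth relative to the vehicle,
    using a pinhole model: angle = atan(offset * tan(hfov/2)), added to the
    camera's mounting heading (front=0, right=90, rear=180, left=270)."""
    offset = (px - image_width / 2) / (image_width / 2)   # -1 .. 1 across the frame
    ang = math.degrees(math.atan(offset * math.tan(math.radians(hfov_deg / 2))))
    return (cam_heading_deg + ang) % 360.0
```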
S450, determining the target position of the vehicle according to the longitude and latitude coordinate information of each feature in the matched feature set and the azimuth angle of each feature in the matched feature set relative to the vehicle.
As an example, as shown in fig. 7, the specific implementation process of determining the target position of the vehicle according to the longitude and latitude coordinate information of each feature in the matched feature set and the azimuth angle of each feature in the matched feature set relative to the vehicle may be as follows:
S710, taking every three features as a group, combining the features in the matched feature set to obtain a plurality of feature combinations;
S720, for each feature combination, calculating the deviation included angle between every two adjacent features in the feature combination according to the azimuth angle of each feature in the combination relative to the vehicle;
For example, as shown in fig. 8, where the feature combination is composed of features 1, 2, and 3, the deviation included angle γ1 between the adjacent features 1 and 2 is calculated from their azimuth angles relative to the vehicle (as shown in fig. 6, γ1 = β - α), and the deviation included angle γ2 between features 2 and 3 is calculated in the same way (that is, γ2 is obtained by subtracting the azimuth angle of feature 2 relative to the vehicle from the azimuth angle of feature 3 relative to the vehicle).
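The deviation included angle is simply the difference of two azimuth angles; a minimal sketch (the function name and the radian convention are assumptions of this illustration) that also wraps the result across the 0/2π seam:

```python
import math

def deviation_angle(azimuth_a, azimuth_b):
    """Deviation included angle between two features, as seen from the vehicle.

    Takes and returns radians. The raw difference is wrapped into (-pi, pi]
    so that two features straddling the 0/2*pi seam still yield the small
    geometric angle between them rather than an angle near 2*pi.
    """
    d = (azimuth_b - azimuth_a) % (2.0 * math.pi)
    if d > math.pi:
        d -= 2.0 * math.pi
    return d
```

With the azimuths of fig. 6, `deviation_angle(alpha, beta)` is exactly γ1 = β - α.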
S730, calculating a circular equation passing through the two adjacent features according to the longitude and latitude coordinates of the two adjacent features and a deviation included angle between the two adjacent features;
It will be appreciated that, knowing the coordinates of two features and the deviation included angle under which the vehicle sees them, the equation of the circle passing through the two features can be calculated: by the inscribed-angle theorem, all points that observe a fixed chord under a fixed angle lie on a circular arc through the chord's endpoints.
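As a hedged sketch of this step (function name and tuple layout are assumptions): for a chord AB seen under angle γ, the inscribed-angle theorem gives a circle of radius |AB| / (2 sin γ), whose center sits on the perpendicular bisector of AB; the mirror-image circle on the other side of the chord is equally valid, so both candidates are returned:

```python
import math

def circles_through_chord(ax, ay, bx, by, gamma):
    """Circles whose arcs see chord AB under the inscribed angle gamma.

    gamma must lie strictly between 0 and pi (radians). Returns the two
    mirror-image candidate circles as (cx, cy, r) tuples; the vehicle that
    observed A and B with deviation angle gamma lies on one of them.
    """
    mx, my = (ax + bx) / 2.0, (ay + by) / 2.0       # chord midpoint
    chord = math.hypot(bx - ax, by - ay)
    r = chord / (2.0 * math.sin(gamma))             # R = |AB| / (2 sin gamma)
    d = r * math.cos(gamma)                         # midpoint-to-center offset
    nx, ny = -(by - ay) / chord, (bx - ax) / chord  # unit normal of the chord
    return [(mx + d * nx, my + d * ny, r),
            (mx - d * nx, my - d * ny, r)]
```

For γ = 90° the offset vanishes and AB is a diameter, matching the familiar Thales case.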
S740, calculating intersection point coordinates among the circles according to each circle equation, and determining target intersection point coordinates from the intersection point coordinates among the circles according to relative orientation information among the features in the feature combination;
For example, as shown in fig. 8, one circle is determined by point A of feature 1, point B of feature 2, and the deviation included angle between points A and B; another circle is determined by point B of feature 2, point C of feature 3, and the deviation included angle between points B and C. The two circles have two intersection points. At this time, as shown in fig. 8, since feature 2 is on the left side of feature 1 and feature 3 is on the left side of feature 2, it can be known that the vehicle shooting the features is necessarily at the intersection point O, that is, point O is the positioning point of the vehicle. The vehicle at point O lies on the circle passing through points A and B and at the same time on the circle passing through points B and C, so the vehicle position can be obtained by finding the intersection of the two circles.
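A minimal sketch of the circle-intersection computation (standard two-circle geometry; the function name and the (cx, cy, r) tuple layout are assumptions of this illustration). The relative-orientation check that picks point O out of the two intersections is then a separate filtering step on the returned candidates:

```python
import math

def circle_intersections(c1, c2):
    """Intersection points of two circles, each given as (cx, cy, r)."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    dx, dy = x2 - x1, y2 - y1
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                                    # disjoint or coincident
    a = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d)      # c1-to-chord distance
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))         # half chord length
    px, py = x1 + a * dx / d, y1 + a * dy / d        # foot of the chord
    return [(px + h * dy / d, py - h * dx / d),
            (px - h * dy / d, py + h * dx / d)]
```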
S750, clustering the plurality of target intersection point coordinates corresponding to the plurality of feature combinations based on a clustering algorithm, and taking the intersection point coordinates obtained after clustering as the target position of the vehicle.
Therefore, positioning can be realized by using three features. For the matched feature set, the features can be combined in groups of three to obtain a plurality of feature combinations, so that a plurality of candidate positions are obtained through three-feature positioning. Wrong positionings can then be filtered out through the clustering algorithm, the remaining positionings are weighted and averaged, and a relatively accurate position is finally output.
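For illustration, the clustering and averaging step can be sketched with a simple greedy grouping. This is a stand-in for whichever clustering algorithm the embodiment actually uses; the `eps` threshold and the function name are assumptions of this sketch:

```python
import math

def cluster_position(candidates, eps=5.0):
    """Pick the consensus fix from the candidate positions of all triples.

    Greedily groups candidate points lying within `eps` of a cluster seed,
    keeps the largest cluster (the mutually consistent fixes), and averages
    it. Outlier fixes produced by mismatched features land in small clusters
    and are discarded.
    """
    clusters = []
    for p in candidates:
        for c in clusters:
            if math.hypot(p[0] - c[0][0], p[1] - c[0][1]) <= eps:
                c.append(p)
                break
        else:
            clusters.append([p])
    best = max(clusters, key=len)
    n = len(best)
    return (sum(p[0] for p in best) / n, sum(p[1] for p in best) / n)
```

A density-based method such as DBSCAN would serve the same purpose with better robustness; the plain average shown here could likewise be replaced by the weighted average the text describes.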
To facilitate the use of the features to realize vehicle positioning, optionally, in one embodiment of the present invention, after the features in the recognition feature set are matched with the features in the map feature set to obtain the matched feature set, two point features may be extracted from each line segment feature in the matched feature set, and three point features may be extracted from each specific target recognition feature. Each point feature in the matched feature set, the point features extracted from the line segment features, and the point features extracted from the specific target recognition features are then combined to obtain a new matched feature set. In this way, the various features in the matched feature set are all converted into point features, so that for the new matched feature set, a plurality of candidate positions can be obtained by combining the point features in groups of three, wrong positionings are filtered out through clustering, the remaining positionings are weighted and averaged, and a relatively accurate position is finally output.
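The conversion of mixed features into point features can be sketched as follows. The choice of which three points represent a specific target (here, the two bottom corners plus the center of its bounding box) is an illustrative assumption, since the embodiment does not fix them:

```python
def to_point_features(points, segments, targets):
    """Flatten mixed features into a single point set for triangulation.

    Point features pass through unchanged; each line segment contributes
    its two endpoints; each recognized target, given as a bounding box
    (x, y, w, h), contributes three assumed reference points.
    """
    out = list(points)
    for (x1, y1), (x2, y2) in segments:
        out += [(x1, y1), (x2, y2)]
    for x, y, w, h in targets:
        out += [(x, y + h), (x + w, y + h), (x + w / 2.0, y + h / 2.0)]
    return out
```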
In summary, the vehicle positioning method according to the embodiment of the present invention may first position the vehicle through an ordinary-accuracy vehicle-mounted satellite positioning system to obtain a fuzzy position, find the feature map near the fuzzy position, perform real-time visual recognition using the camera on the vehicle, and then match the features recognized in real time with the feature map, thereby realizing self-positioning of the vehicle and obtaining its accurate position. Therefore, in the whole positioning process, the vehicle does not depend strongly on auxiliary satellite positioning; owing to the continuous positional relationship among the features, even if the satellite signal is lost for a short time, the positioning of the vehicle is not affected. This greatly reduces the positioning hardware cost and facilitates the popularization of automatic driving technology in road environments.
Corresponding to the vehicle positioning methods provided in the foregoing embodiments, an embodiment of the present invention further provides a vehicle positioning device. Since the device corresponds to the methods provided in the foregoing embodiments, the embodiments of the vehicle positioning method described above are also applicable to the device and will not be described in detail here. Fig. 9 is a schematic structural diagram of a vehicle positioning device according to an embodiment of the present invention. As shown in fig. 9, the vehicle positioning device 900 may include: an image acquisition module 910, a feature extraction module 920, a determination module 930, a feature map acquisition module 940, and a positioning module 950.
Specifically, the image acquiring module 910 is used for acquiring an image captured by a vehicle.
The feature extraction module 920 is configured to extract features in the image to obtain a set of identification features. In one embodiment of the present invention, the recognition feature set includes a plurality of features including point features, line segment features, and specific target recognition features including one or more of lane lines, guide arrows, stop lines, sidewalks, traffic lights, utility poles, and road signs. In an embodiment of the present invention, the feature extraction module 920 is specifically configured to: and respectively identifying and extracting the features in the image through a plurality of feature extraction algorithms to obtain an identification feature set in the image.
As an example, the feature extraction module 920 identifies and extracts features in the image through multiple feature extraction algorithms, and the specific implementation of obtaining the recognition feature set in the image may be as follows: point features are identified and extracted through the Speeded-Up Robust Features (SURF) algorithm; edge features are identified and extracted through the Canny edge detection algorithm, and line segment features are obtained by applying a straight-line detection algorithm to the extracted edge features; and the specific target recognition features are identified and extracted through a pre-trained deep learning network, where the network is obtained by collecting an image data set of specific targets in advance, classifying the targets in the data set according to angle and direction, and then training with the deep learning network.
The determining module 930 is used to determine the fuzzy position information of the vehicle. In an embodiment of the present invention, an on-board satellite positioning system is installed on the vehicle, and the positioning accuracy of the on-board satellite positioning system is less than a preset threshold. In this embodiment, the determining module 930 locates the vehicle through the vehicle-mounted satellite positioning system to determine the fuzzy position information of the vehicle.
The feature map obtaining module 940 is configured to obtain a target feature map of an area where the fuzzy position is located from a pre-established feature map according to the fuzzy position information, where the pre-established feature map includes all image features on each road and longitude and latitude coordinate information where all the image features are located. As an example, the feature map obtaining module 940 uses the fuzzy position information as a center and a preset distance as a radius to form an area where the fuzzy position is located, and obtains a corresponding target feature map from the pre-established feature map according to the area where the fuzzy position is located.
The positioning module 950 is configured to obtain a target location of the vehicle according to the recognition feature set and the target feature map of the area where the fuzzy location is located. As an example, as shown in fig. 10, the positioning module 950 may include: a feature acquisition unit 951, a feature matching unit 952, a coordinate acquisition unit 953, a determination unit 954, and a positioning unit 955. The feature acquisition unit 951 is configured to acquire a map feature set in a target feature map of an area where the blurred position is located; the feature matching unit 952 is configured to match features in the identification feature set with features in the map feature set to obtain a matching feature set; the coordinate acquiring unit 953 is configured to acquire longitude and latitude coordinate information of each feature in the matched feature set from a target feature map of an area where the fuzzy position is located; the determining unit 954 is configured to determine an azimuth angle of each feature in the matched feature set with respect to the vehicle; the positioning unit 955 is configured to determine a target position of the vehicle according to longitude and latitude coordinate information of each feature in the matching feature set and an azimuth angle of each feature in the matching feature set relative to the vehicle.
In one embodiment of the present invention, the positioning unit 955 may be specifically configured to: taking every three features as a group, and carrying out permutation and combination on each feature in the matched feature set in a permutation and combination mode to obtain a plurality of feature combinations; aiming at each feature combination, calculating a deviation included angle between two adjacent features in the feature combination according to the azimuth angle of each feature in the feature combination relative to the vehicle; calculating a circular equation passing through the two adjacent features according to the longitude and latitude coordinates of the two adjacent features and a deviation included angle between the two adjacent features; calculating intersection point coordinates between circles according to each circle equation, and determining target intersection point coordinates from the intersection point coordinates between the circles according to relative orientation information between the features in the feature combination; and clustering a plurality of target intersection point coordinates corresponding to the plurality of feature combinations based on a clustering algorithm, and taking the intersection point coordinates obtained after clustering as the target position of the vehicle.
As an example, as shown in fig. 11, on the basis of fig. 10, the positioning module 950 may further include: a calculation unit 956 and a judgment unit 957. The calculating unit 956 is configured to calculate the number of matched features in the matched feature set after the feature matching unit matches the features in the identified feature set with the features in the map feature set to obtain a matched feature set; the judging unit 957 is configured to judge whether the number of the matched features is greater than a preset threshold; the coordinate obtaining unit 953 is further configured to obtain longitude and latitude coordinate information of each feature in the matching feature set from a target feature map of an area where the fuzzy position is located when the number of the matching features is greater than the preset threshold.
According to the vehicle positioning device provided by the embodiment of the present invention, the vehicle can be positioned through an ordinary-accuracy vehicle-mounted satellite positioning system to obtain a fuzzy position, the feature map near the fuzzy position is found accordingly, real-time visual recognition is performed using the camera on the vehicle, and the features recognized in real time are matched with the feature map, thereby realizing self-positioning of the vehicle and obtaining its accurate position. Therefore, in the whole positioning process, the vehicle does not depend strongly on auxiliary satellite positioning; owing to the continuous positional relationship among the features, even if the satellite signal is lost for a short time, the positioning of the vehicle is not affected. This greatly reduces the positioning hardware cost and facilitates the popularization of automatic driving technology in road environments.
In order to realize the embodiment, the invention further provides a vehicle.
Fig. 12 is a schematic structural diagram of a vehicle according to an embodiment of the invention. As shown in fig. 12, the vehicle 1200 may include: a camera 1210, an in-vehicle satellite positioning system 1220, a memory 1230, a processor 1240, and a computer program 1250 stored on the memory 1230 and executable on the processor 1240. Wherein,
the camera 1210 is used for acquiring images of the surrounding environment of the vehicle;
the vehicle-mounted satellite positioning system 1220 is used for positioning a vehicle, wherein the positioning accuracy of the vehicle-mounted satellite positioning system is smaller than a preset threshold;
the processor 1240, when executing the program 1250, implements the vehicle positioning method according to any of the above-described embodiments of the present invention.
In order to implement the above embodiments, the present invention also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the vehicle positioning method according to any of the above embodiments of the present invention.
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (16)

1. A vehicle positioning method, characterized by comprising the steps of:
acquiring an image shot by a vehicle, and extracting features in the image to obtain an identification feature set;
determining fuzzy position information of the vehicle;
acquiring a target feature map of an area where a fuzzy position is located from a pre-established feature map according to the fuzzy position information, wherein the pre-established feature map comprises all image features on each road and longitude and latitude coordinate information of all the image features; and
and obtaining the target position of the vehicle according to the identification feature set and the target feature map of the area where the fuzzy position is located.
2. The vehicle positioning method according to claim 1, wherein the set of identification features includes a plurality of features including point features, line segment features, and specific target identification features including one or more of lane lines, guide arrows, stop lines, sidewalks, traffic lights, utility poles, and road signs; wherein the extracting features in the image to obtain an identification feature set comprises:
and respectively identifying and extracting the features in the image through a plurality of feature extraction algorithms to obtain an identification feature set in the image.
3. The vehicle positioning method according to claim 2, wherein the identifying and extracting the features in the image by using a plurality of feature extraction algorithms respectively to obtain the identified feature set in the image comprises:
identifying and extracting point features in the image through the Speeded-Up Robust Features (SURF) algorithm;
identifying and extracting edge features in the image through an edge detection Canny algorithm, and identifying the extracted edge features based on a straight line detection algorithm to obtain line segment features in the image;
identifying and extracting the specific target identification features in the image through a pre-trained deep learning network, wherein the deep learning network is obtained by pre-collecting an image data set of a specific target, classifying the target in the image data set according to an angle direction and then training by using the deep learning network.
4. The vehicle positioning method according to claim 1, wherein an on-board satellite positioning system is installed on the vehicle, and positioning accuracy of the on-board satellite positioning system is less than a preset threshold; wherein the determining the fuzzy position information of the vehicle comprises:
and positioning the vehicle through the vehicle-mounted satellite positioning system to determine the fuzzy position information of the vehicle.
5. The vehicle positioning method according to claim 1, wherein the obtaining of the target feature map of the area where the fuzzy position is located from the pre-established feature map according to the fuzzy position information includes:
forming an area where the fuzzy position is located by taking the fuzzy position information as a center and taking a preset distance as a radius;
and acquiring a corresponding target feature map from the pre-established feature map according to the area of the fuzzy position.
6. The vehicle positioning method according to claim 1, wherein the obtaining of the target position of the vehicle from the recognition feature set and the target feature map of the area where the fuzzy position is located comprises:
acquiring a map feature set in a target feature map of an area where the fuzzy position is located;
matching the features in the identification feature set with the features in the map feature set to obtain a matched feature set;
acquiring longitude and latitude coordinate information of each feature in the matched feature set from a target feature map of an area where the fuzzy position is located;
determining an azimuth angle of each feature in the set of matched features relative to the vehicle;
and determining the target position of the vehicle according to the longitude and latitude coordinate information of each feature in the matched feature set and the azimuth angle of each feature in the matched feature set relative to the vehicle.
7. The vehicle localization method according to claim 6, wherein after matching the features in the identified feature set with the features in the map feature set to obtain a matched feature set, further comprising:
calculating the number of matched features in the matched feature set;
judging whether the number of the matched features is larger than a preset threshold value or not;
and if so, executing the step of acquiring longitude and latitude coordinate information of each feature in the matched feature set from a target feature map of the area where the fuzzy position is located.
8. The vehicle positioning method of claim 6, wherein the determining the target position of the vehicle according to the longitude and latitude coordinate information of each feature in the matched feature set and the azimuth angle of each feature in the matched feature set relative to the vehicle comprises:
taking every three features as a group, and carrying out permutation and combination on each feature in the matched feature set in a permutation and combination mode to obtain a plurality of feature combinations;
aiming at each feature combination, calculating a deviation included angle between two adjacent features in the feature combination according to the azimuth angle of each feature in the feature combination relative to the vehicle;
calculating a circular equation passing through the two adjacent features according to the longitude and latitude coordinates of the two adjacent features and a deviation included angle between the two adjacent features;
calculating intersection point coordinates between circles according to each circle equation, and determining target intersection point coordinates from the intersection point coordinates between the circles according to relative orientation information between the features in the feature combination;
and clustering a plurality of target intersection point coordinates corresponding to the plurality of feature combinations based on a clustering algorithm, and taking the intersection point coordinates obtained after clustering as the target position of the vehicle.
9. A vehicle positioning device, comprising:
the image acquisition module is used for acquiring an image shot by a vehicle;
the characteristic extraction module is used for extracting the characteristics in the image to obtain an identification characteristic set;
the determining module is used for determining fuzzy position information of the vehicle;
the characteristic map acquisition module is used for acquiring a target characteristic map of an area where a fuzzy position is located from a pre-established characteristic map according to the fuzzy position information, wherein the pre-established characteristic map comprises all image characteristics on each road and longitude and latitude coordinate information of all the image characteristics; and
and the positioning module is used for obtaining the target position of the vehicle according to the identification feature set and the target feature map of the area where the fuzzy position is located.
10. The vehicle locating device of claim 9, wherein the set of identifying features includes a plurality of features including point features, line segment features, and specific target identifying features including one or more of lane lines, directional arrows, stop lines, sidewalks, traffic lights, utility poles, road signs; wherein the feature extraction module is specifically configured to:
and respectively identifying and extracting the features in the image through a plurality of feature extraction algorithms to obtain an identification feature set in the image.
11. The vehicle positioning device according to claim 9, wherein an on-board satellite positioning system is installed on the vehicle, and positioning accuracy of the on-board satellite positioning system is less than a preset threshold; wherein the determining module is specifically configured to:
and positioning the vehicle through the vehicle-mounted satellite positioning system to determine the fuzzy position information of the vehicle.
12. The vehicle locating apparatus of claim 9, wherein the locating module comprises:
the characteristic acquisition unit is used for acquiring a map characteristic set in a target characteristic map of an area where the fuzzy position is located;
the characteristic matching unit is used for matching the characteristics in the identification characteristic set with the characteristics in the map characteristic set to obtain a matching characteristic set;
a coordinate obtaining unit, configured to obtain longitude and latitude coordinate information of each feature in the matched feature set from a target feature map of an area where the fuzzy position is located;
a determining unit, configured to determine an azimuth angle of each feature in the matched feature set with respect to the vehicle;
and the positioning unit is used for determining the target position of the vehicle according to the longitude and latitude coordinate information of each feature in the matched feature set and the azimuth angle of each feature in the matched feature set relative to the vehicle.
13. The vehicle locating apparatus of claim 12, wherein the locating module further comprises:
the calculating unit is used for calculating the number of the matched features in the matched feature set after the feature matching unit matches the features in the identified feature set with the features in the map feature set to obtain the matched feature set;
the judging unit is used for judging whether the number of the matched features is larger than a preset threshold value or not;
the coordinate obtaining unit is further configured to obtain longitude and latitude coordinate information of each feature in the matching feature set from a target feature map of an area where the fuzzy position is located when the number of the matching features is greater than the preset threshold.
14. The vehicle positioning apparatus of claim 12, wherein the positioning unit is specifically configured to:
group the features in the matched feature set into combinations of three features each to obtain a plurality of feature combinations;
for each feature combination, calculate the deviation angle between each pair of adjacent features in the combination from the azimuth angles of those features relative to the vehicle;
calculate the equation of the circle passing through the two adjacent features from the longitude and latitude coordinates of the two adjacent features and the deviation angle between them;
calculate the intersection coordinates of the circles from the circle equations, and determine the target intersection coordinates from among the intersection coordinates according to the relative orientation information between the features in the feature combination; and
cluster the plurality of target intersection coordinates corresponding to the plurality of feature combinations using a clustering algorithm, and take the intersection coordinates obtained after clustering as the target position of the vehicle.
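The geometry behind claim 14 is classical three-point resection: by the inscribed-angle theorem, an observer who sees two landmarks under a known deviation angle lies on a circle through both landmarks, and the two circles built from adjacent landmark pairs (which share the middle landmark) intersect again at the observer. A minimal sketch in local planar coordinates, assuming latitude/longitude have already been projected to a plane; the arc-side search and the bearing-error disambiguation stand in for the claim's "relative orientation information" and are assumptions:

```python
import math

def subtended(az_a, az_b):
    """Deviation angle between two azimuths, normalised into (0, pi]."""
    g = abs(az_b - az_a) % (2 * math.pi)
    return min(g, 2 * math.pi - g)

def circle_from_chord_and_angle(p1, p2, gamma, side):
    """Circle of points that see chord p1-p2 under inscribed angle gamma.
    side (+1 / -1) selects one of the two mirror-image arcs.
    Returns (center, radius)."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    d = math.hypot(dx, dy)
    r = d / (2.0 * math.sin(gamma))        # inscribed-angle theorem
    h = (d / 2.0) / math.tan(gamma)        # center offset from chord midpoint
    nx, ny = -dy / d, dx / d               # unit normal to the chord
    cx = (x1 + x2) / 2.0 + side * h * nx
    cy = (y1 + y2) / 2.0 + side * h * ny
    return (cx, cy), r

def intersect_circles(c1, r1, c2, r2, eps=1e-9):
    """Intersection points of two circles (0, 1 or 2 points)."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d < eps or d > r1 + r2 + eps or d < abs(r1 - r2) - eps:
        return []
    a = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    px = x1 + a * (x2 - x1) / d
    py = y1 + a * (y2 - y1) / d
    nx, ny = -(y2 - y1) / d, (x2 - x1) / d
    if h < eps:
        return [(px, py)]
    return [(px + h * nx, py + h * ny), (px - h * nx, py - h * ny)]

def resect(p1, p2, p3, az1, az2, az3):
    """Fix the observer position from three landmarks and their azimuths.
    Both circles pass through the middle landmark p2, so the other
    intersection is the observer; candidates from the possible arcs are
    disambiguated by re-predicting the measured azimuths."""
    g12 = subtended(az1, az2)
    g23 = subtended(az2, az3)
    candidates = []
    for s1 in (1, -1):
        for s2 in (1, -1):
            c1, r1 = circle_from_chord_and_angle(p1, p2, g12, s1)
            c2, r2 = circle_from_chord_and_angle(p2, p3, g23, s2)
            for q in intersect_circles(c1, r1, c2, r2):
                # discard the shared landmark p2 itself
                if math.hypot(q[0] - p2[0], q[1] - p2[1]) > 1e-6:
                    candidates.append(q)

    def bearing_error(q):
        err = 0.0
        for p, az in ((p1, az1), (p2, az2), (p3, az3)):
            b = math.atan2(p[1] - q[1], p[0] - q[0])
            err += abs(math.atan2(math.sin(b - az), math.cos(b - az)))
        return err

    return min(candidates, key=bearing_error)
```

Per the claim, every three-feature combination (e.g. via `itertools.combinations`) yields one such fix, and the fixes from all combinations are then clustered, with the cluster result taken as the target position; the clustering algorithm itself is not specified by the claim.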
15. A vehicle, characterized by comprising: a camera, a vehicle-mounted satellite positioning system, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein:
the camera is configured to capture images of the environment surrounding the vehicle;
the vehicle-mounted satellite positioning system is configured to position the vehicle, a positioning accuracy of the vehicle-mounted satellite positioning system being smaller than a preset threshold; and
the processor, when executing the program, implements the vehicle positioning method according to any one of claims 1 to 8.
16. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the vehicle positioning method according to any one of claims 1 to 8.
CN201810714095.1A 2018-06-29 2018-06-29 Vehicle positioning method, device, vehicle and computer readable storage medium Active CN110658539B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810714095.1A CN110658539B (en) 2018-06-29 2018-06-29 Vehicle positioning method, device, vehicle and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110658539A true CN110658539A (en) 2020-01-07
CN110658539B CN110658539B (en) 2022-03-18

Family

ID=69027083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810714095.1A Active CN110658539B (en) 2018-06-29 2018-06-29 Vehicle positioning method, device, vehicle and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110658539B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105674993A (en) * 2016-01-15 2016-06-15 武汉光庭科技有限公司 Binocular camera-based high-precision visual sense positioning map generation system and method
CN105783936A (en) * 2016-03-08 2016-07-20 武汉光庭信息技术股份有限公司 Road sign drawing and vehicle positioning method and system for automatic drive
CN106525057A (en) * 2016-10-26 2017-03-22 陈曦 Generation system for high-precision road map
CN107024216A (en) * 2017-03-14 2017-08-08 重庆邮电大学 Introduce the intelligent vehicle fusion alignment system and method for panoramic map
CN107238814A (en) * 2016-03-29 2017-10-10 茹景阳 A kind of apparatus and method of vehicle location
CN107339996A (en) * 2017-06-30 2017-11-10 百度在线网络技术(北京)有限公司 Vehicle method for self-locating, device, equipment and storage medium
CN107451526A (en) * 2017-06-09 2017-12-08 蔚来汽车有限公司 The structure of map and its application

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111464938A (en) * 2020-03-30 2020-07-28 滴图(北京)科技有限公司 Positioning method, positioning device, electronic equipment and computer readable storage medium
CN111930877A (en) * 2020-09-18 2020-11-13 蘑菇车联信息科技有限公司 Map guideboard generation method and electronic equipment
CN111930877B (en) * 2020-09-18 2021-01-05 蘑菇车联信息科技有限公司 Map guideboard generation method and electronic equipment
CN112200059A (en) * 2020-09-30 2021-01-08 中华人民共和国广东海事局 Method and device for counting flow of aquatic moving target and computer equipment
CN112362047A (en) * 2020-11-26 2021-02-12 浙江商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium
CN113537314A (en) * 2021-06-30 2021-10-22 上海西井信息科技有限公司 Longitudinal positioning method and device for unmanned vehicle, electronic equipment and storage medium
CN113191342A (en) * 2021-07-01 2021-07-30 中移(上海)信息通信科技有限公司 Lane positioning method and electronic equipment

Also Published As

Publication number Publication date
CN110658539B (en) 2022-03-18

Similar Documents

Publication Publication Date Title
CN110658539B (en) Vehicle positioning method, device, vehicle and computer readable storage medium
US11854272B2 (en) Hazard detection from a camera in a scene with moving shadows
CN110487562B (en) Driveway keeping capacity detection system and method for unmanned driving
CN109460709B (en) RTG visual barrier detection method based on RGB and D information fusion
CN110443225B (en) Virtual and real lane line identification method and device based on feature pixel statistics
CN106096525B (en) A kind of compound lane recognition system and method
US8611585B2 (en) Clear path detection using patch approach
US9037403B2 (en) Intensity map-based localization with adaptive thresholding
US9245188B2 (en) Lane detection system and method
EP3007099B1 (en) Image recognition system for a vehicle and corresponding method
US8634593B2 (en) Pixel-based texture-less clear path detection
US8379928B2 (en) Obstacle detection procedure for motor vehicle
US8452053B2 (en) Pixel-based texture-rich clear path detection
CN107463890B (en) A kind of Foregut fermenters and tracking based on monocular forward sight camera
JP5747549B2 (en) Signal detector and program
CN115035338A (en) Method and system for vehicle localization from camera images
US10867403B2 (en) Vehicle external recognition apparatus
CN105716567A (en) Method for determining the distance between an object and a motor vehicle by means of a monocular imaging device
CN106092123B (en) A kind of video navigation method and device
CN110657812A (en) Vehicle positioning method and device and vehicle
US10846546B2 (en) Traffic signal recognition device
CN112198899A (en) Road detection method, equipment and storage medium based on unmanned aerial vehicle
CN109635737A (en) Automobile navigation localization method is assisted based on pavement marker line visual identity
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN104902261A (en) Device and method for road surface identification in low-definition video streaming

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant