CN108416808B - Vehicle repositioning method and device

Vehicle repositioning method and device

Info

Publication number
CN108416808B
CN108416808B (application CN201810157705.2A)
Authority
CN
China
Prior art keywords: preset, environment image, feature information, vehicle, sub
Prior art date
Legal status
Active
Application number
CN201810157705.2A
Other languages
Chinese (zh)
Other versions
CN108416808A (en)
Inventor
卢彦斌
胡祝青
刘青
Current Assignee
Zebred Network Technology Co Ltd
Original Assignee
Zebred Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zebred Network Technology Co Ltd filed Critical Zebred Network Technology Co Ltd
Priority to CN201810157705.2A priority Critical patent/CN108416808B/en
Publication of CN108416808A publication Critical patent/CN108416808A/en
Application granted granted Critical
Publication of CN108416808B publication Critical patent/CN108416808B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases

Abstract

The invention provides a vehicle repositioning method and device. The method comprises: acquiring an environment image of a vehicle to be positioned; extracting preset feature information from the environment image, wherein the preset feature information comprises geometric feature information and/or semantic feature information; constructing a visual feature corresponding to the environment image according to the preset feature information; and matching the visual feature with preset visual features to determine the position of the vehicle to be positioned, wherein the preset visual features are visual features in map data. The vehicle repositioning method and device provided by the invention reduce the amount of computation in the repositioning process and improve the robustness of the computation.

Description

Vehicle repositioning method and device
Technical Field
The invention relates to the technical field of vehicle positioning, in particular to a method and a device for vehicle repositioning.
Background
The Internet of Vehicles is a network and application system that has emerged in recent years with the primary aims of improving traffic efficiency and traffic safety. Vehicle positioning is one of its key technologies, and accurate position acquisition is of great significance for improving the safety of intelligent vehicles and for realizing autonomous driving.
At present, maps for high-precision navigation and positioning of automobiles fall mainly into two types: maps based primarily on laser point clouds (lidar maps) and maps based primarily on vector information (high-precision vector maps). When a vehicle travelling on a high-precision map suddenly loses its position for some reason, its position in the high-precision map must be recovered quickly and accurately (a process called repositioning) to ensure the normal operation of the vehicle, and in particular of its navigation system. In the prior art, the main techniques are repositioning based on laser point cloud matching and repositioning based on image point features. Laser point cloud matching depends on auxiliary information such as GPS, an IMU and an odometer to provide a reasonably accurate initial search position; when such auxiliary information is unavailable (for example in tunnels or among tall buildings), the amount of computation required for repositioning is very large and cannot be completed quickly. Repositioning based on image point features, meanwhile, is not robust.
Therefore, how to reduce the amount of computation in the repositioning process and improve the robustness of the computation is an urgent problem to be solved by those skilled in the art.
Disclosure of Invention
The invention provides a vehicle repositioning method and device, which are used for reducing the amount of computation in the repositioning process and improving the robustness of the computation.
The embodiment of the invention provides a vehicle repositioning method, which comprises the following steps:
acquiring an environment image of a vehicle to be positioned;
extracting preset feature information from the environment image; the preset feature information comprises geometric feature information and/or semantic feature information;
constructing a visual feature corresponding to the environment image according to the preset feature information;
matching the visual features with preset visual features to determine the position of the vehicle to be positioned; and the preset visual features are visual features in the map data.
In an embodiment of the present invention, the constructing the visual feature corresponding to the environment image according to the preset feature information includes:
determining descriptors corresponding to the preset feature information;
determining the words in a bag-of-words model that correspond to the descriptors; wherein each of the words corresponds to one or more of the descriptors;
and constructing the visual feature corresponding to the environment image according to the number of descriptors matched with each word.
In an embodiment of the present invention, before the constructing the visual feature corresponding to the environment image according to the preset feature information, the method further includes:
dividing the environment image into a plurality of sub-regions;
the constructing of the visual feature corresponding to the environment image according to the preset feature information includes:
determining a feature vector corresponding to preset feature information in each sub-region;
and carrying out vector combination on the feature vectors corresponding to each sub-region according to distribution positions to construct visual features corresponding to the environment image.
In an embodiment of the present invention, before dividing the environment image into a plurality of sub-regions, the method further includes:
determining a vanishing point in the environment image;
the dividing the environment image into a plurality of sub-regions comprises:
dividing the environment image into the plurality of sub-regions according to the vanishing point.
In an embodiment of the present invention, the extracting the preset feature information in the environment image includes:
extracting feature information from the environment image;
selecting the preset feature information from the feature information according to a preset rule; the preset rule is a random sampling rule, a rule of uniform sampling according to the normal-vector distribution, or a combination of the two.
The embodiment of the invention also provides a vehicle repositioning device, which comprises:
the system comprises an acquisition unit, a positioning unit and a positioning unit, wherein the acquisition unit is used for acquiring an environment image of a vehicle to be positioned;
the extraction unit is used for extracting preset characteristic information in the environment image; the preset feature information comprises geometric feature information and/or semantic feature information;
the construction unit is used for constructing the visual characteristics corresponding to the environment image according to the preset characteristic information;
the determining unit is used for matching the visual features with preset visual features so as to determine the position of the vehicle to be positioned; and the preset visual features are visual features in the map data.
In an embodiment of the present invention, the construction unit is specifically configured to determine descriptors corresponding to the preset feature information; determine the words in a bag-of-words model that correspond to the descriptors, wherein each of the words corresponds to one or more of the descriptors; and construct the visual feature corresponding to the environment image according to the number of descriptors matched with each word.
In an embodiment of the present invention, the apparatus for repositioning vehicles further comprises a dividing unit;
the dividing unit is used for dividing the environment image into a plurality of sub-areas;
the construction unit is specifically configured to determine a feature vector corresponding to preset feature information in each sub-region; and carrying out vector combination on the feature vectors corresponding to each sub-region according to distribution positions to construct visual features corresponding to the environment image.
In an embodiment of the present invention, the determining unit is further configured to determine a vanishing point in the environment image;
the dividing unit is specifically configured to divide the environment image into the plurality of sub-regions according to the vanishing point.
In an embodiment of the invention, the environment image comprises laser point cloud data;
the extraction unit is specifically used for extracting feature information from the environment image and selecting the preset feature information from the feature information according to a preset rule; the preset rule is a random sampling rule, a rule of uniform sampling according to the normal-vector distribution, or a combination of the two.
According to the vehicle repositioning method and device provided by the embodiments of the invention, an environment image of the vehicle to be positioned is acquired and preset feature information is extracted from it; a visual feature corresponding to the environment image is then constructed according to the preset feature information; finally, the visual feature is matched against preset visual features to determine the position of the vehicle to be positioned. When determining the position of the vehicle, the method and device therefore match a visual feature constructed from the environment image against the preset visual features of the map data, which reduces the amount of computation in the repositioning process and improves the robustness of the computation.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic illustration of a method of vehicle repositioning provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of an environment image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an environment image labeled with point features according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an environment image marked with line and circle features according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an environment image labeled with semantic features according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a visual feature corresponding to a constructed environment image according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a visual feature corresponding to an environment image constructed by corresponding words in a bag-of-words model according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a visual feature corresponding to another constructed environment image according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a visual feature corresponding to an environment image constructed by dividing sub-regions according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of dividing an environment image by a vanishing point according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of dividing an environment image by a vanishing point according to another embodiment of the present invention;
FIG. 12 is a schematic diagram of dividing an environment image by a vanishing point according to yet another embodiment of the present invention;
FIG. 13 is a schematic structural diagram of a vehicle repositioning device according to an embodiment of the invention;
fig. 14 is a schematic structural diagram of another vehicle repositioning device according to an embodiment of the present invention.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention and how to solve the above technical problems will be described in detail with specific examples. The following specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of a vehicle repositioning method according to an embodiment of the present invention. The method may be performed by a vehicle repositioning device, which may be provided independently or disposed in a processor of the vehicle. As shown in FIG. 1, the vehicle repositioning method may include:
s101, obtaining an environment image of the vehicle to be positioned.
Wherein the environment image is used for indicating the surrounding environment condition of the vehicle to be positioned. Optionally, the environment image may further include laser point cloud data and GPS data. The laser point cloud information can reflect real three-dimensional geometric information and material information of the surrounding environment; the GPS information can reflect latitude and longitude information of the surrounding environment.
In the embodiment of the invention, the environment image of the vehicle to be positioned may be acquired through a sensor, or in other ways. For example, please refer to FIG. 2, which is a schematic diagram of an environment image according to an embodiment of the present invention; the environment image may include lane line information, street lamp information, traffic light information, and the like.
S102, extracting preset feature information from the environment image.
The preset feature information may include geometric feature information and/or semantic feature information.
The geometric feature information here may include point feature information, and may also include line feature information and circle feature information. That is, in the embodiment of the present invention, when the preset feature information in the environment image is extracted, only one of the geometric feature information and the semantic feature information in the environment image may be extracted, or the geometric feature information and the semantic feature information in the environment image may be extracted at the same time. In detail, when preset feature information in the environment image is extracted, only line feature information and circle feature information may be extracted; or only semantic feature information can be extracted; of course, the point feature information, the line feature information, and the circle feature information may be extracted; the point feature information and the semantic feature information may be extracted, the line feature information, the circle feature information, and the semantic feature information may be extracted, or the point feature information, the line feature information, the circle feature information, and the semantic feature information may be extracted at the same time.
For example, in the embodiment of the present invention, grayscale features such as point features may be image features with built-in feature descriptors, such as SIFT, SURF and ORB features, or combinations of a feature-point detector and a separate descriptor, such as FAST feature points with BRISK descriptors. Because grayscale image features such as point features reflect the texture of the surrounding environment and have a certain invariance, the similarity of two different images, and in turn the similarity of the corresponding vehicle positions, can be measured by comparing their feature points. For example, please refer to FIG. 3, which is a schematic diagram of an environment image labeled with point features according to an embodiment of the present invention.
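As a concrete illustration of the point-feature extraction just described, the following Python sketch uses OpenCV's ORB, one of the options named above; the file name and parameter values are illustrative placeholders, not details fixed by this embodiment.

```python
import cv2

# Load one environment image in grayscale ("frame.png" is a placeholder).
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# ORB bundles a FAST-based detector with a BRIEF-based binary descriptor.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)

# 'descriptors' is an N x 32 uint8 array; each row describes one keypoint
# and can later be matched across images or quantized into bag-of-words words.
print(len(keypoints), descriptors.shape)
```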
The geometric features of an image reflect the geometric projection of the surrounding environment. Taking line and circle features as an example, geometric line features can be obtained by the Hough transform, a Line Segment Detector (LSD) or the like, and line features can be described by, for example, the Line Band Descriptor (LBD). Geometric features reflect the geometry of the surrounding environment: lane lines are oblique straight lines, the lampposts of traffic lights are vertical straight lines, and buildings contain oblique, vertical and horizontal straight lines. Because geometric line segments have a certain scale (length), they are similarly distributed in images taken at similar positions, so descriptors of geometric features can also be used to measure image similarity, and the similarity of vehicle positions can therefore be measured by comparing the geometric features in two different images. For example, please refer to FIG. 4, which is a schematic diagram of an environment image labeled with line and circle features according to an embodiment of the present invention; as can be seen from FIG. 4, lane line information may be labeled as line features and traffic light information may be labeled as circle features.
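A minimal OpenCV sketch of the line-feature step named above, using Canny edges followed by the probabilistic Hough transform; all thresholds are illustrative assumptions and the LBD description step is only noted in a comment.

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
edges = cv2.Canny(img, 50, 150)                       # illustrative thresholds

# Probabilistic Hough transform: each detected segment is (x1, y1, x2, y2).
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=10)

# Where available, cv2.createLineSegmentDetector() is an alternative detector;
# an LBD-style descriptor would then be computed over each segment's band.
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    print((x1, y1), "->", (x2, y2))
```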
The semantic features of an image reflect the meaning of the surrounding environment. Semantic feature information may be common road elements such as lane lines, road signboards, speed limit signs, street lamps, traffic lights and stop lines, or driving-related local information such as parking lot entrances and exits, parking spaces and gas stations. Images captured at similar vehicle positions necessarily contain very similar semantic information, which can therefore be used to measure the similarity of vehicle positions. Referring to FIG. 5, which is a schematic diagram of an environment image labeled with semantic features according to an embodiment of the present invention, lane line information, traffic light information and street lamp information can each be labeled as semantic features.
Optionally, when the environment image includes the laser point cloud data, extracting the preset feature information in the environment image may be implemented in the following possible manners:
extracting feature information from the environment image, and selecting the preset feature information from it according to a preset rule; the preset rule is a random sampling rule, a rule of uniform sampling according to the normal-vector distribution, or a combination of the two.
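The following numpy sketch illustrates the two preset sampling rules under stated assumptions: the arrays 'points' and 'normals' and the azimuth binning of the normal-vector distribution are placeholders for whatever an implementation actually uses, and the binning is only one plausible reading of the uniform-sampling rule.

```python
import numpy as np

def sample_feature_points(points, normals, k, rule="random", n_bins=8):
    """Select k candidate feature points from a point cloud under one of the
    two preset rules named above; 'points' is (N, 3), 'normals' is (N, 3)
    unit normals. Both names are placeholders for this sketch."""
    if rule == "random":
        idx = np.random.choice(len(points), size=k, replace=False)
        return points[idx]
    # One plausible reading of "uniform sampling according to the
    # normal-vector distribution": bin points by the azimuth of their
    # normal and draw roughly evenly from every occupied bin.
    azimuth = np.arctan2(normals[:, 1], normals[:, 0])
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    bins = np.digitize(azimuth, edges[1:-1])        # bin index 0..n_bins-1
    chosen = []
    for b in np.unique(bins):
        members = np.flatnonzero(bins == b)
        take = min(len(members), max(1, k // n_bins))
        chosen.extend(np.random.choice(members, size=take, replace=False))
    return points[np.asarray(chosen)]
```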
After the preset feature information in the environment image is extracted, S103, which is described below, may be performed to construct a visual feature corresponding to the environment image according to the preset feature information.
S103, constructing a visual feature corresponding to the environment image according to the preset feature information.
Optionally, in the embodiment of the present invention, step S103 of constructing the visual feature corresponding to the environment image according to the preset feature information may be implemented in at least two possible ways: one is to construct the visual feature from the corresponding words in a bag-of-words model; the other is to construct it by dividing the image into sub-regions. These two implementations are described in detail below.
In the first implementation, the visual feature corresponding to an environment image may be constructed from the corresponding words in a bag-of-words model; please refer to FIG. 6, which is a schematic diagram of constructing the visual feature corresponding to an environment image according to an embodiment of the present invention.
S601, determining descriptors corresponding to the preset feature information.
S602, determining the words in the bag-of-words model that correspond to the descriptors.
Wherein each word corresponds to one or more descriptors.
It should be noted that the preset feature information may correspond to a plurality of descriptors, each of which corresponds to a word in the bag-of-words model; since several descriptors may correspond to the same word, the number of distinct words may be smaller than the number of descriptors.
S603, constructing the visual feature corresponding to the environment image according to the number of descriptors matched with each word.
After each descriptor has been mapped to a word in the bag-of-words model, the number of descriptors corresponding to each word can be counted, and a feature vector is generated from the number of descriptors matched with each word; this feature vector is the visual feature corresponding to the environment image. For example, if the extracted preset features correspond to 500 descriptors, of which 200 correspond to word 1 in the bag-of-words model, 200 correspond to word 2, and the remaining 100 correspond to word 3, then the numbers of descriptors matched with words 1, 2 and 3 are 200, 200 and 100 respectively, and the feature vector (200, 200, 100) generated from these counts is the visual feature corresponding to the environment image. Please also refer to FIG. 7, which is a schematic diagram of constructing the visual feature corresponding to an environment image from the corresponding words in a bag-of-words model according to an embodiment of the present invention.
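A minimal numpy sketch of steps S601-S603: each descriptor is assigned to its nearest vocabulary word and the per-word counts form the visual feature. The 'vocabulary' array is an assumed input (for instance, k-means centroids learned offline), and the Euclidean distance is an illustrative choice.

```python
import numpy as np

def bow_vector(descriptors, vocabulary):
    """Build the bag-of-words visual feature of S601-S603: count how many
    descriptors fall on each vocabulary word. 'vocabulary' is a (K, D)
    array of word centroids, e.g. from k-means over training descriptors."""
    d = descriptors.astype(np.float32)
    v = vocabulary.astype(np.float32)
    # Nearest word per descriptor by Euclidean distance (for binary
    # descriptors such as ORB, Hamming distance would be used instead).
    dists = ((d[:, None, :] - v[None, :, :]) ** 2).sum(axis=2)
    words = dists.argmin(axis=1)
    return np.bincount(words, minlength=len(v))    # e.g. (200, 200, 100)
```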
In the other implementation, the visual feature corresponding to an environment image may be constructed by dividing the image into sub-regions; please refer to FIG. 8, which is a schematic diagram of another way of constructing the visual feature corresponding to an environment image according to an embodiment of the present invention.
S801, dividing the environment image into a plurality of sub-areas.
S802, determining a feature vector corresponding to the preset feature information in each sub-area.
S803, carrying out vector combination on the feature vectors corresponding to each sub-region according to the distribution positions to construct the visual feature corresponding to the environment image.
In this implementation, the environment image is first divided into a plurality of sub-regions and the feature vector corresponding to the preset feature information in each sub-region is determined; the feature vectors corresponding to the sub-regions are then combined according to their distribution positions, and the resulting vector is the visual feature corresponding to the environment image. For example, suppose the environment image is divided into 4 sub-regions with feature vectors a, b, c and d respectively; if the distribution order of the four sub-regions is first, second, third, fourth, then combining the feature vectors according to the distribution positions yields the vector (a, b, c, d). Referring to FIG. 9, which is a schematic diagram of constructing the visual feature corresponding to an environment image by dividing sub-regions according to an embodiment of the present invention, the environment image is divided into 9 sub-regions, and the visual feature is constructed from the feature vector corresponding to each of the 9 sub-regions.
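A short sketch of S801-S803 under the assumption of a regular grid: the image is split into sub-regions and the per-region feature vectors are concatenated in distribution order. The extractor 'extract_fn' and the 3 x 3 grid are placeholders; the embodiment does not fix either.

```python
import numpy as np

def grid_visual_feature(image, extract_fn, rows=3, cols=3):
    """Divide the image into rows x cols sub-regions (S801) and concatenate
    the per-region feature vectors in distribution order (S802-S803).
    'extract_fn' stands in for the per-region feature extractor and must
    return a fixed-length 1-D vector."""
    h, w = image.shape[:2]
    parts = []
    for r in range(rows):
        for c in range(cols):
            sub = image[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            parts.append(np.asarray(extract_fn(sub)))
    return np.concatenate(parts)   # the (a, b, c, d, ...) vector of the text
```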
Optionally, in the above scheme of constructing the visual feature by dividing sub-regions, the environment image may be divided into the plurality of sub-regions by means of a vanishing point, in which case the vanishing point in the environment image must be determined first. Please refer to FIGS. 10 to 12, which are schematic diagrams of dividing an environment image by a vanishing point according to embodiments of the present invention; FIGS. 10, 11 and 12 show vanishing points determined from different viewpoints and the corresponding divisions of the environment image.
It should be noted that a vanishing point is the point in the image at which a set of straight lines that are parallel in the real world intersect. In FIGS. 10 to 12, the vanishing point is the intersection of the extensions (black dashed lines) of the two lane lines. Each vanishing point is the image of a point at infinity, and together these points make up the horizon; in an image, a vanishing point can therefore serve as a reference point for spatial division. In particular, on a road, the lane lines form a set of parallel lines that correspond to a single vanishing point in the image. The position of the vanishing point in the image depends on the focal length of the camera, its pixel parameters and the direction of the parallel lines in the real world. Because of differences in vehicle model and in camera mounting position and angle, the vanishing point in the image is not fixed. FIGS. 10 and 11 show the vanishing points in environment images captured at two different positions of the same vehicle during driving, while FIGS. 10 and 12 show the vanishing points in environment images captured by different vehicles (such as a car and an SUV); the regions divided on the basis of the vanishing point have a certain translation invariance, which increases the accuracy of the visual feature comparison.
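Since the vanishing point is defined as the intersection of imaged parallel lines, it can be computed from two detected lane-line segments with a few lines of homogeneous-coordinate algebra, as sketched below; the segment inputs are assumed to come from a line detector such as the one illustrated after the line-feature discussion.

```python
import numpy as np

def vanishing_point(seg1, seg2):
    """Estimate the vanishing point as the intersection of the extensions
    of two lane-line segments, each given as (x1, y1, x2, y2), using
    homogeneous coordinates."""
    def line_through(seg):
        p1 = np.array([seg[0], seg[1], 1.0])
        p2 = np.array([seg[2], seg[3], 1.0])
        return np.cross(p1, p2)            # homogeneous line through p1, p2
    vp = np.cross(line_through(seg1), line_through(seg2))
    if abs(vp[2]) < 1e-9:                  # lines parallel in the image
        return None
    return vp[:2] / vp[2]                  # (x, y) pixel coordinates
```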
S104, matching the visual feature with preset visual features to determine the position of the vehicle to be positioned.
The preset visual features are visual features in the map data.
After the visual feature corresponding to the environment image has been constructed through the above steps, it can be matched with the preset visual features, and the position of the vehicle to be positioned is determined according to the matching result.
For example, in the embodiment of the present invention, the preset visual features of key positions in the map data may be obtained in advance; the visual feature constructed from the environment image can then be matched against these preset visual features, and the position of the vehicle to be positioned is determined from the matching result. The matching may be performed by, for example, point cloud matching, feature matching or pose-optimization matching.
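As one possible reading of this matching step, the sketch below ranks the preset map features against the query feature by cosine similarity; the metric and the 'preset_features' matrix of per-key-position features are illustrative assumptions, since the embodiment names matching strategies without fixing a formula.

```python
import numpy as np

def best_match(query, preset_features):
    """Rank the preset (map) visual features against the query feature by
    cosine similarity and return the best key position. A sketch of one
    plausible matching criterion only."""
    q = query / np.linalg.norm(query)
    m = preset_features / np.linalg.norm(preset_features, axis=1,
                                         keepdims=True)
    scores = m @ q                     # cosine similarity per key position
    best = int(scores.argmax())
    return best, float(scores[best])
```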
In the vehicle repositioning method provided by the embodiment of the invention, an environment image of the vehicle to be positioned is acquired and preset feature information is extracted from it; a visual feature corresponding to the environment image is then constructed according to the preset feature information; finally, the visual feature is matched against preset visual features to determine the position of the vehicle to be positioned. When determining the position of the vehicle, the method therefore matches a visual feature constructed from the environment image against the preset visual features of the map data, which reduces the amount of computation in the repositioning process and improves the robustness of the computation.
Fig. 13 is a schematic structural diagram of a vehicle repositioning device 130 according to an embodiment of the present invention, please refer to fig. 13, where the vehicle repositioning device 130 may include:
an obtaining unit 1301, configured to obtain an environment image of the vehicle to be located.
An extracting unit 1302, configured to extract preset feature information in an environment image; the preset feature information includes geometric feature information and/or semantic feature information.
And the constructing unit 1303 is configured to construct the visual features corresponding to the environment image according to the preset feature information.
A determining unit 1304, configured to match the visual characteristics with preset visual characteristics to determine a position of the vehicle to be positioned; the preset visual features are visual features in the map data.
Optionally, the construction unit 1303 is specifically configured to determine descriptors corresponding to the preset feature information; determine the words in a bag-of-words model that correspond to the descriptors, wherein each word corresponds to one or more descriptors; and construct the visual feature corresponding to the environment image according to the number of descriptors matched with each word.
Optionally, the vehicle repositioning device 130 may further include a dividing unit 1305, please refer to fig. 14, and fig. 14 is a schematic structural diagram of another vehicle repositioning device 130 according to an embodiment of the present invention.
A dividing unit 1305 for dividing the environment image into a plurality of sub-regions.
The constructing unit 1303 is specifically configured to determine a feature vector corresponding to preset feature information in each sub-region; and carrying out vector combination on the feature vectors corresponding to each sub-region according to the distribution positions to construct visual features corresponding to the environment image.
Optionally, the determining unit 1304 is further configured to determine a vanishing point in the environment image.
The dividing unit 1305 is specifically configured to divide the environment image into the plurality of sub-regions according to the vanishing point.
Optionally, the environmental image comprises laser point cloud data.
The extracting unit 1302 is specifically configured to extract feature information from the environment image and select the preset feature information from it according to a preset rule; the preset rule is a random sampling rule, a rule of uniform sampling according to the normal-vector distribution, or a combination of the two.
The vehicle repositioning device 130 shown in the embodiment of the present invention may implement the technical solution of the vehicle repositioning method shown in any of the above embodiments; the implementation principle and beneficial effects are similar and are not repeated here.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (6)

1. A method of repositioning a vehicle, comprising:
acquiring an environment image of a vehicle to be positioned;
extracting preset feature information from the environment image; the preset feature information comprises geometric feature information and/or semantic feature information;
constructing a visual feature corresponding to the environment image according to the preset feature information;
matching the visual features with preset visual features to determine the position of the vehicle to be positioned; the preset visual features are visual features in the map data;
the constructing of the visual feature corresponding to the environment image according to the preset feature information includes:
determining descriptors corresponding to the preset feature information;
determining the words in a bag-of-words model that correspond to the descriptors; wherein each of the words corresponds to one or more of the descriptors;
constructing the visual feature corresponding to the environment image according to the number of descriptors matched with each word;
dividing the environment image into a plurality of sub-regions before constructing the visual features corresponding to the environment image according to the preset feature information;
determining a feature vector corresponding to preset feature information in each sub-region;
and carrying out vector combination on the feature vectors corresponding to each sub-region according to distribution positions to construct visual features corresponding to the environment image.
2. The method of claim 1, wherein prior to dividing the environment image into a plurality of sub-regions, the method further comprises:
determining a vanishing point in the environment image;
the dividing the environment image into a plurality of sub-regions comprises:
dividing the environment image into the plurality of sub-regions according to the vanishing point.
3. The method according to any one of claims 1-2, wherein the environment image comprises laser point cloud data, and the extracting preset feature information in the environment image comprises:
extracting feature information from the environment image;
selecting the preset feature information from the feature information according to a preset rule; the preset rule is a random sampling rule, a rule of uniform sampling according to the normal-vector distribution, or a combination of the two.
4. An apparatus for repositioning a vehicle, comprising:
the acquisition unit is used for acquiring an environment image of a vehicle to be positioned;
the extraction unit is used for extracting preset characteristic information in the environment image; the preset feature information comprises geometric feature information and/or semantic feature information;
the construction unit is used for constructing the visual characteristics corresponding to the environment image according to the preset characteristic information;
the determining unit is used for matching the visual features with preset visual features so as to determine the position of the vehicle to be positioned; the preset visual features are visual features in the map data;
the construction unit is specifically configured to determine descriptors corresponding to the preset feature information; determine the words in a bag-of-words model that correspond to the descriptors, wherein each of the words corresponds to one or more of the descriptors; and construct the visual feature corresponding to the environment image according to the number of descriptors matched with each word;
a dividing unit configured to divide the environment image into a plurality of sub-regions;
the construction unit is specifically configured to determine a feature vector corresponding to preset feature information in each sub-region; and carrying out vector combination on the feature vectors corresponding to each sub-region according to distribution positions to construct visual features corresponding to the environment image.
5. The apparatus of claim 4,
the determining unit is further configured to determine a vanishing point in the environment image;
the dividing unit is specifically configured to divide the environment image into the plurality of sub-regions according to the vanishing point.
6. The apparatus of any of claims 4-5, wherein the environmental image comprises laser point cloud data;
the extraction unit is specifically used for extracting feature information from the environment image and selecting the preset feature information from the feature information according to a preset rule; the preset rule is a random sampling rule, a rule of uniform sampling according to the normal-vector distribution, or a combination of the two.
CN201810157705.2A 2018-02-24 2018-02-24 Vehicle repositioning method and device Active CN108416808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810157705.2A CN108416808B (en) 2018-02-24 2018-02-24 Vehicle repositioning method and device

Publications (2)

Publication Number Publication Date
CN108416808A CN108416808A (en) 2018-08-17
CN108416808B true CN108416808B (en) 2022-03-08

Family

ID=63128916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810157705.2A Active CN108416808B (en) 2018-02-24 2018-02-24 Vehicle repositioning method and device

Country Status (1)

Country Link
CN (1) CN108416808B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109141444B (en) * 2018-08-28 2019-12-06 北京三快在线科技有限公司 positioning method, positioning device, storage medium and mobile equipment
CN110147705B (en) * 2018-08-28 2021-05-04 北京初速度科技有限公司 Vehicle positioning method based on visual perception and electronic equipment
CN109255817A (en) * 2018-09-14 2019-01-22 北京猎户星空科技有限公司 A kind of the vision method for relocating and device of smart machine
CN111143489B (en) * 2018-11-06 2024-01-09 北京嘀嘀无限科技发展有限公司 Image-based positioning method and device, computer equipment and readable storage medium
CN109461211B (en) * 2018-11-12 2021-01-26 南京人工智能高等研究院有限公司 Semantic vector map construction method and device based on visual point cloud and electronic equipment
CN111322993B (en) * 2018-12-13 2022-03-04 杭州海康机器人技术有限公司 Visual positioning method and device
CN111750881B (en) * 2019-03-29 2022-05-13 北京魔门塔科技有限公司 Vehicle pose correction method and device based on light pole
CN110415297B (en) * 2019-07-12 2021-11-05 北京三快在线科技有限公司 Positioning method and device and unmanned equipment
CN110568447B (en) * 2019-07-29 2022-03-08 广东星舆科技有限公司 Visual positioning method, device and computer readable medium
EP3809313A1 (en) * 2019-10-16 2021-04-21 Ningbo Geely Automobile Research & Development Co. Ltd. A vehicle parking finder support system, method and computer program product for determining if a vehicle is at a reference parking location
CN111508258B (en) * 2020-04-17 2021-11-05 北京三快在线科技有限公司 Positioning method and device
CN114545400B (en) * 2022-04-27 2022-08-05 陕西欧卡电子智能科技有限公司 Global repositioning method of water surface robot based on millimeter wave radar

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101364347A (en) * 2008-09-17 2009-02-11 同济大学 Detection method for vehicle delay control on crossing based on video
CN101862194A (en) * 2010-06-17 2010-10-20 天津大学 Imagination action EEG identification method based on fusion feature
CN102054178A (en) * 2011-01-20 2011-05-11 北京联合大学 Chinese painting image identifying method based on local semantic concept
CN102053249A (en) * 2009-10-30 2011-05-11 吴立新 Underground space high-precision positioning method based on laser scanning and sequence encoded graphics
CN103473739A (en) * 2013-08-15 2013-12-25 华中科技大学 White blood cell image accurate segmentation method and system based on support vector machine
CN103810505A (en) * 2014-02-19 2014-05-21 北京大学 Vehicle identification method and system based on multilayer descriptors
CN103971124A (en) * 2014-05-04 2014-08-06 杭州电子科技大学 Multi-class motor imagery brain electrical signal classification method based on phase synchronization
CN104217444A (en) * 2013-06-03 2014-12-17 支付宝(中国)网络技术有限公司 Card area positioning method and equipment
CN104268876A (en) * 2014-09-26 2015-01-07 大连理工大学 Camera calibration method based on partitioning
CN105404887A (en) * 2015-07-05 2016-03-16 中国计量学院 White blood count five-classification method based on random forest
CN106569244A (en) * 2016-11-04 2017-04-19 杭州联络互动信息科技股份有限公司 Vehicle positioning method based on intelligent equipment and apparatus thereof
CN106896353A (en) * 2017-03-21 2017-06-27 同济大学 A kind of unmanned vehicle crossing detection method based on three-dimensional laser radar
CN106908775A (en) * 2017-03-08 2017-06-30 同济大学 A kind of unmanned vehicle real-time location method based on laser reflection intensity
CN106960179A (en) * 2017-02-24 2017-07-18 北京交通大学 Rail line Environmental security intelligent monitoring method and device
CN107533630A (en) * 2015-01-20 2018-01-02 索菲斯研究股份有限公司 For the real time machine vision of remote sense and wagon control and put cloud analysis

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6879341B1 (en) * 1997-07-15 2005-04-12 Silverbrook Research Pty Ltd Digital camera system containing a VLIW vector processor
US8620026B2 (en) * 2011-04-13 2013-12-31 International Business Machines Corporation Video-based detection of multiple object types under varying poses
US20170024412A1 (en) * 2015-07-17 2017-01-26 Environmental Systems Research Institute (ESRI) Geo-event processor
CN107451574B (en) * 2017-08-09 2020-03-17 安徽大学 Motion estimation method based on Haar-like visual feature perception


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant