CN112284400B - Vehicle positioning method and device, electronic equipment and computer readable storage medium

Vehicle positioning method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN112284400B
Authority
CN
China
Prior art keywords
lane
positioning
vehicle
target
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011554483.1A
Other languages
Chinese (zh)
Other versions
CN112284400A
Inventor
温拓朴
郑东方
徐一梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011554483.1A
Publication of CN112284400A
Application granted
Publication of CN112284400B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/421 Determining position by combining or switching between position solutions or signals derived from different satellite radio beacon positioning systems; by combining or switching between position solutions or signals derived from different modes of operation in a single system
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The application provides a vehicle positioning method and device, an electronic device and a computer-readable storage medium, and relates to the technical field of positioning. The method comprises the following steps: acquiring current positioning information of the vehicle; acquiring a target image of a current driving road of the vehicle, and acquiring first road information of the current driving road based on the target image; acquiring second road information of the current driving road in a preset map based on the positioning information, and determining at least one initial pose of the vehicle based on the second road information; determining a positioning result corresponding to each of the at least one initial pose based on the first road information and the second road information; and determining a target positioning result from all positioning results based on the first road information and the preset map. With the method and device, the vehicle can be positioned in the correct lane even when the positioning information is weak, while the amount of calculation is reduced and the algorithm efficiency is improved.

Description

Vehicle positioning method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of positioning technologies, and in particular, to a vehicle positioning method and apparatus, an electronic device, and a computer-readable storage medium.
Background
GNSS (Global Navigation Satellite System) is a space-based radio navigation and positioning system that can provide users with all-weather three-dimensional coordinate, velocity and time information anywhere on the earth's surface or in near-earth space, and can be used for vehicle positioning.
Existing vehicle positioning methods have the following problems when GNSS accuracy is low:
1) because the prior art relies on an initial value when solving for the pose, a correct initial value is difficult to provide when the GNSS error is large, so the positioning result can become trapped in a local optimum and be placed on the wrong lane, resulting in poor positioning accuracy. For example, a vehicle travelling on a side road may be positioned on the main road;
2) to obtain a better initial value, a sufficient number of sampling points must be randomly generated around the GNSS position, and a map-matching positioning result computed with each sampling point as the initial value. However, the large number of sampling-point calculations reduces the efficiency of the algorithm, and the positioning result may not even be output in real time.
Disclosure of Invention
The application provides a vehicle positioning method and device, an electronic device and a computer-readable storage medium, which can solve the problems of poor positioning accuracy and low efficiency in existing vehicle positioning. The technical scheme is as follows:
in one aspect, a method for locating a vehicle is provided, the method comprising:
acquiring the current positioning information of the vehicle;
acquiring a target image of a current driving road of the vehicle, and acquiring first road information of the current driving road based on the target image;
acquiring second road information of the current running road in a preset map based on the positioning information, and determining at least one initial pose of the vehicle based on the second road information;
determining a positioning result corresponding to each of the at least one initial pose based on the first road information and the second road information;
and determining a target positioning result from all positioning results based on the first road information and the preset map.
In one or more embodiments, the acquiring a target image of a current driving road of the vehicle and acquiring first road information of the current driving road based on the target image includes:
acquiring a target image of the current driving road through image acquisition equipment carried by the vehicle;
and recognizing lane lines of the target image, and taking at least one recognized lane line obtained by recognition as the first road information.
In one or more embodiments, the preset map comprises at least one lane line and at least one lane center line; each lane line consists of at least one lane line discrete point, and each lane central line consists of at least one lane central line discrete point;
the obtaining of the second road information of the current driving road in a preset map based on the positioning information includes:
determining a corresponding position point of the positioning information in the preset map, and determining at least one target lane line and at least one target lane central line within a preset distance of the position point;
determining a target lane line discrete point with the distance from the position point not more than a preset distance from each lane line discrete point of the at least one target lane line to obtain a lane line discrete point set;
determining respectively, from the at least one lane center line discrete point of each target lane center line, the target lane center line discrete point whose distance from the position point does not exceed the preset distance and is the smallest, to obtain a lane center line discrete point set;
and taking the at least one target lane line, the at least one target lane central line, the lane line discrete point set and the lane central line discrete point set as the second road information.
In one or more embodiments, the determining at least one initial pose of the vehicle based on the second road information includes:
determining a rotation matrix of the vehicle relative to the positioning information;
and respectively combining the rotation matrix with at least one target lane central line discrete point in the lane central line discrete point set to obtain at least one first posture matrix, and taking the at least one first posture matrix as an initial posture.
In one or more embodiments, the determining a rotation matrix of the vehicle relative to the positioning information comprises:
determining a first vector based on the extending direction of the center line of any target lane;
fitting each target lane line discrete point in the lane line discrete point set to obtain a second vector;
calculating based on the first vector and the second vector to obtain a third vector;
and combining the first vector, the second vector and the third vector to obtain the rotation matrix.
In one or more embodiments, the determining, based on the first road information and the second road information, a positioning result corresponding to each of the at least one initial pose includes:
respectively combining the rotation matrix with at least one target lane line discrete point in the lane line discrete point set to obtain at least one second attitude matrix;
calculating at least one first projection point corresponding to at least one target lane line discrete point in the target image according to a preset projection model;
determining lane pixel points corresponding to the at least one first projection point in the target image; the lane pixel points are pixel points belonging to the identified lane lines in the target image;
calculating a first distance between the at least one first projection point and the corresponding lane pixel point;
and calculating actual poses corresponding to the at least one initial pose through an objective function based on the at least one second pose matrix and the at least one first distance, and taking the actual poses as positioning results.
In one or more embodiments, determining a target positioning result from the positioning results based on the first road information and the preset map includes:
calculating second projection points corresponding to the positioning results in the preset map in the target image according to the projection model;
calculating a second distance between each second projection point and each lane pixel point;
and taking the positioning result corresponding to the minimum second distance as the target positioning result.
In another aspect, there is provided a positioning apparatus of a vehicle, the apparatus including:
the positioning information acquisition module is used for acquiring the current positioning information of the vehicle;
the first road information acquisition module is used for acquiring a target image of a current driving road of the vehicle and acquiring first road information of the current driving road based on the target image;
the second road information acquisition module is used for acquiring second road information of the current driving road in a preset map based on the positioning information;
an initial pose determination module for determining at least one initial pose of the vehicle based on the second road information;
a positioning result determining module, configured to determine, based on the first road information and the second road information, a positioning result corresponding to each of the at least one initial pose;
and the screening module is used for determining a target positioning result from all positioning results based on the first road information and the preset map.
In one or more embodiments, the first road information obtaining module includes:
the acquisition submodule is used for acquiring a target image of the current driving road through image acquisition equipment carried by the vehicle;
and the recognition submodule is used for recognizing lane lines of the target image and taking at least one recognized lane line obtained by recognition as the first road information.
In one or more embodiments, the preset map comprises at least one lane line and at least one lane center line; each lane line consists of at least one lane line discrete point, and each lane central line consists of at least one lane central line discrete point;
the second road information acquisition module includes:
the first processing submodule is used for determining a corresponding position point of the positioning information in the preset map and determining at least one target lane line and at least one target lane central line within a preset distance of the position point;
the second processing sub-module is used for determining a target lane line discrete point with the distance from the position point not exceeding a preset distance from each lane line discrete point of the at least one target lane line to obtain a lane line discrete point set;
the third processing submodule is used for respectively determining a target lane central line discrete point with the minimum distance from at least one lane central line discrete point of each target lane central line, wherein the distance between the target lane central line discrete point and the position point is not more than the preset distance, and the lane central line discrete point set is obtained;
and the fourth processing submodule is used for taking the at least one target lane line, the at least one target lane central line, the lane line discrete point set and the lane central line discrete point set as the second road information.
In one or more embodiments, the initial pose determination module includes:
a rotation matrix determination submodule for determining a rotation matrix of the vehicle relative to the positioning information;
and the first generation submodule is used for respectively combining the rotation matrix and at least one target lane central line discrete point in the lane central line discrete point set to obtain at least one first pose matrix, and taking the at least one first pose matrix as an initial pose.
In one or more embodiments, the rotation matrix determination sub-module comprises:
the first vector determination unit is used for determining a first vector based on the extending direction of the center line of any target lane;
the second vector determining unit is used for fitting each target lane line discrete point in the lane line discrete point set to obtain a second vector;
a third vector determination unit, configured to perform an operation based on the first vector and the second vector to obtain a third vector;
a generating unit, configured to combine the first vector, the second vector, and the third vector to obtain the rotation matrix.
In one or more embodiments, the positioning result determining module includes:
the second generation submodule is used for respectively combining the rotation matrix and at least one target lane line discrete point in the lane line discrete point set to obtain at least one second attitude matrix;
the first calculation submodule is used for calculating at least one first projection point corresponding to at least one target lane line discrete point in the target image according to a preset projection model;
a fifth processing submodule, configured to determine lane pixel points corresponding to the at least one first projection point in the target image; the lane pixel points are pixel points belonging to the identified lane lines in the target image;
the second calculation submodule is used for calculating a first distance between the at least one first projection point and the corresponding lane pixel point;
and the sixth processing submodule is used for calculating actual poses corresponding to the at least one initial pose through an objective function based on the at least one second pose matrix and the at least one first distance, and taking the actual poses as positioning results.
In one or more embodiments, a screening module, comprising:
the third calculation submodule is used for calculating second projection points, corresponding to the positioning results in the preset map, in the target image according to the projection model;
the fourth calculation submodule is used for calculating the second distance between each second projection point and each lane pixel point;
and the fifth processing submodule is used for taking the positioning result corresponding to the minimum second distance as the target positioning result.
In another aspect, an electronic device is provided, including:
a processor, a memory, and a bus;
the bus is used for connecting the processor and the memory;
the memory is used for storing operation instructions;
the processor is configured to invoke the operation instructions to perform operations corresponding to the vehicle positioning method according to the first aspect of the present application.
In another aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor, implements the method for locating a vehicle according to the first aspect of the present application.
The beneficial effect that technical scheme that this application provided brought is:
the method comprises the steps of obtaining current positioning information of a vehicle, collecting a target image of the current driving road of the vehicle, obtaining first road information of the current driving road based on the target image, obtaining second road information of the current driving road in a preset map based on the positioning information, determining at least one initial pose of the vehicle based on the second road information, determining positioning results corresponding to the at least one initial pose based on the first road information and the second road information, and determining a target positioning result from the positioning results based on the first road information and the preset map. In this way, the first road information of the current driving road of the vehicle is obtained based on visual perception; at the same time, at least one lane where the vehicle may be located in the preset map is provided based on the positioning information, and second road information corresponding to each lane is obtained; matching positioning is then carried out on each lane based on the first road information and each piece of second road information, and finally the correct lane where the vehicle is located (the positioning result) is obtained.
Furthermore, even when the positioning information is weak, the embodiment of the invention can determine a plurality of initial poses that may contain errors, and then determine the final positioning result from the plurality of initial poses based on visual perception and the preset map, so that the vehicle can be positioned on the correct lane even with weak positioning information; and because the number of sampling points is small, the amount of calculation is reduced and the algorithm efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic application environment diagram of a vehicle positioning method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating a method for locating a vehicle according to an embodiment of the present disclosure;
FIG. 3 is a detailed flowchart of step S202 in FIG. 2 according to the present application;
FIG. 4 is a first flowchart illustrating a part of the step S203 in FIG. 2;
FIG. 5 is a second flowchart illustrating a portion of the step S203 shown in FIG. 2;
FIG. 6 is a detailed flowchart of step S501 in FIG. 5;
FIG. 7 is a detailed flowchart of step S204 in FIG. 2;
FIG. 8 is a flowchart illustrating a detailed process of step S205 in FIG. 2;
FIG. 9 is a schematic structural diagram of a positioning device of a vehicle according to another embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an electronic device for positioning a vehicle according to yet another embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The present application provides a vehicle positioning method, apparatus, electronic device and computer-readable storage medium, which aim to solve the above technical problems in the prior art.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
An embodiment of the present invention provides an application environment of a vehicle positioning method, and referring to fig. 1, the application environment includes: a first device 101 and a second device 102. The first device 101 and the second device 102 are connected through a network, the first device 101 is an access device, and the second device 102 is an accessed device. The first device 101 may be a terminal device in a vehicle and the second device 102 may be a server. In the embodiment of the present invention, a vehicle positioning method may be executed in either a first device or a second device, and may be set according to actual requirements in practical applications, which is not limited in the embodiment of the present invention.
The terminal device may have the following features:
(1) on a hardware architecture, a device has a central processing unit, a memory, an input unit and an output unit, that is, the device is often a microcomputer device having a communication function. In addition, various input modes such as a keyboard, a mouse, a touch screen, a microphone, a camera and the like can be provided, and input can be adjusted as required. Meanwhile, the equipment often has a plurality of output modes, such as a telephone receiver, a display screen and the like, and can be adjusted according to needs;
(2) on a software system, the device must have an operating system, such as Windows Mobile, Symbian, Palm, Android, iOS, and the like. Meanwhile, the operating systems are more and more open, and personalized application programs developed based on the open operating system platforms are infinite, such as a communication book, a schedule, a notebook, a calculator, various games and the like, so that the requirements of personalized users are met to a great extent;
(3) in terms of communication capability, the device has flexible access modes and high-bandwidth communication performance, and can automatically adjust the selected communication mode according to the selected service and the environment, which is convenient for users. The device may support computer network communication including, but not limited to, 3GPP (3rd Generation Partnership Project) standards such as 4G and 5G, LTE (Long Term Evolution), WiMAX (Worldwide Interoperability for Microwave Access), TCP/IP (Transmission Control Protocol/Internet Protocol) and UDP (User Datagram Protocol), as well as short-range wireless transmission based on Bluetooth and infrared transmission standards; it supports not only voice services but also various wireless data services;
(4) in the aspect of function use, the equipment focuses more on humanization, individuation and multi-functionalization. With the development of computer technology, devices enter a human-centered mode from a device-centered mode, and the embedded computing, control technology, artificial intelligence technology, biometric authentication technology and the like are integrated, so that the human-oriented purpose is fully embodied. Due to the development of software technology, the equipment can be adjusted and set according to individual requirements, and is more personalized. Meanwhile, the device integrates a plurality of software and hardware, and the function is more and more powerful.
The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services.
In the above application environment, a vehicle positioning method may be performed, as shown in fig. 2, the method including:
step S201, acquiring the current positioning information of the vehicle;
the positioning information may be GNSS information. In an embodiment of the present invention, a terminal device in a vehicle may obtain GNSS information from a GNSS as positioning information including, but not limited to, latitude and longitude and altitude of WGS84 (World Geodetic System 1984 ). WGS84 is a coordinate system whose origin is the earth's centroid.
Further, the acquisition of the positioning information may be acquired when a terminal device in the vehicle is started. For example, when the terminal device is powered on, GNSS information can be obtained from the GNSS.
It should be noted that the positioning information may be GNSS information, including but not limited to GPS (Global Positioning System), GLONASS (Global Navigation Satellite System), Galileo (Galileo satellite navigation system) and BDS (BeiDou Navigation Satellite System) information used for positioning, and may be set according to actual requirements in practical application, which is not limited in this embodiment of the present invention.
Step S202, collecting a target image of a current driving road of a vehicle, and acquiring first road information of the current driving road based on the target image;
in an embodiment of the present invention, an image capturing device, such as a vehicle recorder, may be further installed in the vehicle for capturing a target image of the surroundings of the vehicle, including but not limited to the current driving road of the vehicle.
In the driving process of the vehicle, a plurality of target images can be acquired in real time, or acquired at preset time intervals, for example, one target image is acquired at every 1 second, or the target images can be acquired in other manners, and the target images can be set according to actual requirements in actual application, which is not limited in the embodiment of the invention.
After the target image is collected, the terminal equipment in the vehicle identifies the target image to acquire first road information of the current driving road.
Step S203, second road information of the current driving road in a preset map is obtained based on the positioning information, and at least one initial pose of the vehicle is determined based on the second road information;
after the terminal device in the vehicle acquires the positioning information, the positioning information can be positioned on the current driving road in the preset map, so that second road information of the current driving road is acquired from the preset map, and at least one initial pose of the vehicle is determined based on the second road information.
Step S204, determining a positioning result corresponding to each of at least one initial pose based on the first road information and the second road information;
after the first road information, the second road information and at least one initial pose are determined, a positioning result corresponding to each initial pose can be determined based on the first road information and the second road information. The positioning result can be an accurate actual pose corresponding to each initial pose.
And S205, determining a target positioning result from all positioning results based on the first road information and a preset map.
After each positioning result is determined, a target positioning result can be determined from each positioning result based on the first road information and the preset map, namely the position and posture of the vehicle in the preset map, which are closest to the position of the current driving road.
In the embodiment of the invention, the current positioning information of a vehicle is acquired, a target image of the current driving road of the vehicle is then collected, first road information of the current driving road is acquired based on the target image, second road information of the current driving road in a preset map is acquired based on the positioning information, at least one initial pose of the vehicle is determined based on the second road information, positioning results corresponding to the at least one initial pose are determined based on the first road information and the second road information, and a target positioning result is determined from the positioning results based on the first road information and the preset map. In this way, the first road information of the current driving road of the vehicle is obtained based on visual perception; at the same time, at least one lane where the vehicle may be located in the preset map is provided based on the positioning information, and second road information corresponding to each lane is obtained; matching positioning is then carried out on each lane based on the first road information and each piece of second road information, and finally the correct lane where the vehicle is located (the positioning result) is obtained.
Furthermore, even when the positioning information is weak, the embodiment of the invention can determine a plurality of initial poses that may contain errors, and then determine the final positioning result from the plurality of initial poses based on visual perception and the preset map, so that the vehicle can be positioned on the correct lane even with weak positioning information; and because the number of sampling points is small, the amount of calculation is reduced and the algorithm efficiency is improved.
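For illustration only (this sketch is not part of the original disclosure), the overall flow of steps S201 to S205 can be summarized in Python-style pseudocode as follows; all helper names (detect_lane_lines, build_initial_poses, refine_pose, reprojection_cost) are hypothetical placeholders rather than functions defined by the embodiment.

```python
import numpy as np

def locate_vehicle(gnss, camera, hd_map, camera_model):
    """Hypothetical sketch of steps S201-S205; all helpers are placeholders."""
    positioning_info = gnss.read()                      # S201: current positioning information
    target_image = camera.capture()                     # S202: image of the current driving road
    detected_lanes = detect_lane_lines(target_image)    # first road information (recognized lane lines)

    # S203: second road information from the preset (high-precision) map
    road_info = hd_map.query_around(positioning_info, radius_m=100.0)
    initial_poses = build_initial_poses(road_info)      # one pose per candidate lane center line

    # S204: refine each initial pose by matching map lane lines to detected lane lines
    results = [refine_pose(pose, detected_lanes, road_info, camera_model) for pose in initial_poses]

    # S205: keep the result whose reprojection best agrees with the detected lane lines
    return min(results, key=lambda pose: reprojection_cost(pose, detected_lanes, hd_map, camera_model))
```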
In another embodiment, the steps of a method for locating a vehicle shown in FIG. 2 are described in detail.
Step S201, acquiring the current positioning information of the vehicle;
the positioning information may be GNSS information. In an embodiment of the present invention, a terminal device in a vehicle may obtain GNSS information from a GNSS as positioning information including, but not limited to, latitude and longitude and altitude of WGS84 (World Geodetic System 1984 ). WGS84 is a coordinate system whose origin is the earth's centroid.
Further, the acquisition of the positioning information may be acquired when a terminal device in the vehicle is started. For example, when the terminal device is powered on, GNSS information can be obtained from the GNSS.
It should be noted that the positioning information may be GNSS information, including but not limited to information used for positioning, such as GPS, GLONASS, Galileo, BDS, and the like, and may be set according to actual requirements in practical applications, which is not limited in this embodiment of the present invention.
Step S202, collecting a target image of a current driving road of a vehicle, and acquiring first road information of the current driving road based on the target image;
in an embodiment of the present invention, an image capturing device, such as a vehicle recorder, may be further installed in the vehicle for capturing a target image of the surroundings of the vehicle, including but not limited to the current driving road of the vehicle.
In the driving process of the vehicle, a plurality of target images can be acquired in real time, or acquired at preset time intervals, for example, one target image is acquired at every 1 second, or the target images can be acquired in other manners, and the target images can be set according to actual requirements in actual application, which is not limited in the embodiment of the invention.
After the target image is collected, the terminal equipment in the vehicle identifies the target image to acquire first road information of the current driving road.
In a preferred embodiment of the present invention, as shown in fig. 3, step S202 includes:
s301, acquiring a target image of a current driving road through image acquisition equipment carried by a vehicle;
step S302, lane line recognition is carried out on the target image, and at least one recognized lane line obtained through recognition is used as first road information.
Specifically, the image capturing device may capture a target image of a current traveling road (i.e., a current traveling road in front of the vehicle) in a current traveling direction of the vehicle, then perform lane line recognition on the target image, obtain at least one recognized lane line, and use the at least one recognized lane line as the first road information.
The lane line recognition may adopt image processing and computer vision techniques; of course, other lane line recognition techniques are also applicable to the embodiment of the present invention and may be chosen according to actual requirements in practical application, which is not limited in the embodiment of the present invention.
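As an illustrative sketch only, one possible classical lane-line recognizer based on edge detection and a probabilistic Hough transform (using OpenCV) is shown below. The embodiment does not prescribe this particular technique, and the thresholds are assumed values.

```python
import cv2
import numpy as np

def detect_lane_lines(target_image_bgr):
    """One possible (assumed) recognizer: Canny edges + probabilistic Hough transform."""
    gray = cv2.cvtColor(target_image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only the lower half of the image, where the road surface normally appears.
    mask = np.zeros_like(edges)
    mask[edges.shape[0] // 2:, :] = 255
    edges = cv2.bitwise_and(edges, mask)

    # Each detected segment (x1, y1, x2, y2) is one piece of a recognized lane line.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                               minLineLength=40, maxLineGap=20)
    return [] if segments is None else [tuple(s[0]) for s in segments]
```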
Step S203, second road information of the current driving road in a preset map is obtained based on the positioning information, and at least one initial pose of the vehicle is determined based on the second road information;
after the terminal device in the vehicle acquires the positioning information, the positioning information can be positioned on the current driving road in the preset map, so that second road information of the current driving road is acquired from the preset map, and at least one initial pose of the vehicle is determined based on the second road information.
The preset map can be a high-precision map, and the preset map comprises at least one marked lane line and at least one lane central line; each lane line is composed of at least one lane line discrete point, and each lane central line is composed of at least one lane central line discrete point.
In the embodiment of the present invention, the first road information differs from the second road information as follows: the first road information is obtained by the vehicle from the target image actually collected on the current driving road, whereas the second road information is obtained from the preset map based on the positioning information of the vehicle. Since the positioning information may contain errors, the vehicle may actually be travelling on the main road of the current driving road but be positioned on its side road. In this case, there may be a discrepancy between the first road information and the second road information.
In a preferred embodiment of the present invention, as shown in fig. 4, the obtaining of the second road information of the current driving road in the preset map based on the positioning information in step S203 includes:
step S401, determining a corresponding position point of the positioning information in a preset map, and determining at least one target lane line and at least one target lane central line within a preset distance of the position point;
step S402, determining a target lane line discrete point with a distance from a position point not exceeding a preset distance from each lane line discrete point of at least one target lane line to obtain a lane line discrete point set;
step S403, respectively determining a target lane center line discrete point with the distance to a position point not exceeding a preset distance and the minimum distance from at least one lane center line discrete point of each target lane center line to obtain a lane center line discrete point set;
step S404, using at least one target lane line, at least one target lane center line, a lane line discrete point set, and a lane center line discrete point set as second road information.
Specifically, after the terminal device obtains the positioning information, the terminal device may determine a corresponding location point in a preset map based on the longitude and latitude and the height in the positioning information.
It should be noted that, because the terminal device in the vehicle may obtain positioning information in real time, it may continuously obtain multiple pieces of positioning information, and these are not necessarily the same; especially when the positioning signal received by the terminal device is weak, large errors may exist between them. That is, the terminal device may acquire multiple pieces of positioning information and then correspondingly determine multiple position points in the preset map.
And then determining at least one target lane line and at least one target lane central line within a preset distance of each position point. For example, all lane lines and lane center lines within 100 meters of a certain position point are taken as the target lane line and the target lane center line, respectively. Of course, in practical application, the preset distance may be set according to actual requirements, and the embodiment of the present invention is not limited thereto.
Because the complete target lane lines and target lane center lines are usually very long, the embodiment of the invention does not need to use them in full. Therefore, for each position point, the target lane line discrete points of each target lane line whose distance from the position point does not exceed the preset distance are determined, and a lane line discrete point set corresponding to the position point is obtained.
For example, three target lane lines A, B, C within a distance of 100 meters from a certain position point are determined, and it is assumed that A comprises complete lane line discrete points A1-A5000, B comprises complete lane line discrete points B1-B5000, and C comprises complete lane line discrete points C1-C5000. Then, the target lane line discrete points in the distance of 100 meters from the position point in the A are determined to be A1000-A1200, the target lane line discrete points in the distance of 100 meters from the position point in the B are B1000-B1200, and the target lane line discrete points in the distance of 100 meters from the position point in the C are C1000-C1200, so that a lane line discrete point set [ A1000-A1200, B1000-B1200, C1000-C1200 ] corresponding to the position point is obtained. Of course, the recording mode of the lane line discrete point and the recording mode of the lane line discrete point set may also adopt other modes, and may be set according to actual requirements in practical application, which is not limited in the embodiment of the present invention.
Further, for each position point, respectively determining a target lane center line discrete point with the distance to the position point not exceeding a preset distance and the distance being the smallest from at least one lane center line discrete point of each target lane center line, and obtaining a lane center line discrete point set.
For example, three target lane center lines D, E, F within a distance of 100 meters from a certain position point are determined, and it is assumed that D includes complete lane center line discrete points D1-D5000, E includes complete lane center line discrete points E1-E5000, and F includes complete lane center line discrete points F1-F5000. And then determining that the center line discrete points of the target lane within 100 meters of the position point in the D are D1000-D1200, the center line discrete points of the target lane within 100 meters of the position point in the E are E1000-E1200, and the center line discrete points of the target lane within 100 meters of the position point in the F are F1000-F1200. And then, the center line discrete point of the target lane closest to the position point is determined to be D1080 from D1000-D1200, the center line discrete point of the target lane closest to the position point is determined to be E1080 from E1000-E1200, and the center line discrete point of the target lane closest to the position point is determined to be F1080 from F1000-F1200, so that a lane center line discrete point set [ D1080, E1080 and F1080] corresponding to the position point is obtained. That is, one target lane center line corresponds to one target lane center discrete point. Of course, the recording manner of the lane center line discrete point and the recording manner of the lane center line discrete point set may also adopt other manners, and may be set according to actual requirements in practical application, which is not limited in this embodiment of the present invention.
After each target lane line, each target lane center line, the lane line discrete point set and the lane center line discrete point set have been determined, the at least one target lane line, the at least one target lane center line, the lane line discrete point set and the lane center line discrete point set can be used as the second road information.
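The construction of the two discrete point sets described above can be sketched as follows (illustration only; the array layout and the helper name build_second_road_info are assumptions, and the 100-meter preset distance follows the example above).

```python
import numpy as np

def build_second_road_info(position_point, target_lane_lines, target_center_lines, preset_distance=100.0):
    """Hypothetical sketch: lane lines and center lines are arrays of discrete points, shape (N, 3)."""
    # Lane line discrete point set: every lane-line point within the preset distance.
    lane_line_point_set = []
    for lane_line in target_lane_lines:
        dists = np.linalg.norm(lane_line - position_point, axis=1)
        lane_line_point_set.append(lane_line[dists <= preset_distance])

    # Lane center line discrete point set: for each center line, the single nearest
    # point that also lies within the preset distance.
    center_point_set = []
    for center_line in target_center_lines:
        dists = np.linalg.norm(center_line - position_point, axis=1)
        nearest = int(np.argmin(dists))
        if dists[nearest] <= preset_distance:
            center_point_set.append(center_line[nearest])

    return lane_line_point_set, center_point_set
```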
In a preferred embodiment of the present invention, as shown in fig. 5, the determining at least one initial pose of the vehicle based on the second road information in step S203 includes:
step S501, determining a rotation matrix of the vehicle relative to the positioning information;
step S502, the rotation matrix and at least one target lane central line discrete point in the lane central line discrete point set are respectively combined to obtain at least one first pose matrix, and the at least one first pose matrix is used as an initial pose.
Specifically, each target lane center line discrete point in the set of lane center line discrete points in the second road information is taken as a position initial value, and a rotation matrix of the vehicle relative to the positioning information is acquired, which may be a rotation matrix of the vehicle relative to the WGS84 in the embodiment of the present application. And then combining the rotation matrix with each discrete point of the center line of the target lane respectively to obtain each first position and posture matrix, and taking each first position and posture matrix as each initial position and posture respectively.
The pose may be coordinates of the image capturing device in WGS84, and then a lane where the coordinates are located in the preset map is determined, and since the image capturing device is installed in the vehicle, the lane where the image capturing device is located is determined, and the lane where the vehicle is located in the preset map is also determined. When the number of the target lane lines is more than or equal to 2, the lane can be an area formed by any two adjacent target lane lines; when the number of the target lane lines is less than 2 and greater than 0, the lane may be an area to the left or an area to the right of the target lane lines.
In a preferred embodiment of the present invention, as shown in fig. 6, step S501 includes:
step S601, determining a first vector based on the extending direction of the center line of any target lane;
step S602, fitting each target lane line discrete point in the lane line discrete point set to obtain a second vector;
step S603, calculating based on the first vector and the second vector to obtain a third vector;
step S604, the first vector, the second vector, and the third vector are combined to obtain a rotation matrix.
Specifically, for any one target lane center line, a first vector is determined from the extending direction of the center line and recorded as p; then all the target lane line discrete points in the lane line discrete point set in the second road information are fitted to obtain the normal vector of the road surface, namely a second vector recorded as n; then the cross product of p and n is computed to obtain a third vector recorded as h. Recording the rotation matrix as R, then R = [p h n].
Further, any one target lane center line discrete point is recorded as P_i, that is, the lane center line discrete point set is [P_1, P_2, …, P_n]. Combining the rotation matrix with each target lane center line discrete point respectively, the initial poses can be recorded as: [P_1, R], [P_2, R], …, [P_n, R].
Wherein any initial pose is preferably a pose with six degrees of freedom. Specifically, the object has six degrees of freedom in space, i.e., a degree of freedom of movement in the directions of three orthogonal coordinate axes x, y, and z and a degree of freedom of rotation about the three coordinate axes. Therefore, to fully determine the position of the object, the six degrees of freedom need to be known. Of course, any initial pose may be a pose with six degrees of freedom or other forms, and may be set according to actual requirements in practical applications, which is not limited in the embodiments of the present invention.
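Steps S501-S502 and S601-S604 can be illustrated with the following sketch (not part of the original disclosure). The least-squares plane fit used to obtain the road-surface normal n is an assumed choice; the embodiment only states that the lane line discrete points are fitted.

```python
import numpy as np

def build_initial_poses(center_line_points, center_line_direction, lane_line_point_set):
    """Hypothetical sketch of steps S501-S502 and S601-S604."""
    # First vector p: extending direction of a target lane center line.
    p = center_line_direction / np.linalg.norm(center_line_direction)

    # Second vector n: road-surface normal obtained by fitting the lane line discrete points
    # (an SVD plane fit is assumed here; the embodiment only says the points are fitted).
    pts = np.vstack(lane_line_point_set)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1] / np.linalg.norm(vt[-1])

    # Third vector h = p x n, then R = [p h n] column-wise, following the description above.
    h = np.cross(p, n)
    rotation = np.column_stack([p, h, n])

    # One initial pose [P_i, R] per target lane center line discrete point.
    return [(p_i, rotation) for p_i in center_line_points]
```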
Step S204, determining a positioning result corresponding to each of at least one initial pose based on the first road information and the second road information;
after the first road information, the second road information and at least one initial pose are determined, a positioning result corresponding to each initial pose can be determined based on the first road information and the second road information. The positioning result can be an accurate actual pose corresponding to each initial pose.
In a preferred embodiment of the present invention, as shown in fig. 7, step S204 includes:
step S701, combining the rotation matrix and at least one target lane line discrete point in the lane line discrete point set respectively to obtain at least one second attitude matrix;
step S702, calculating at least one first projection point corresponding to at least one target lane line discrete point in a target image according to a preset projection model;
step S703, determining lane pixel points corresponding to at least one first projection point in the target image; the lane pixel points are pixel points belonging to the identified lane lines in the target image;
step S704, calculating a first distance between at least one first projection point and each corresponding lane pixel point;
step S705, based on the at least one second pose matrix and the at least one first distance, calculating by an objective function to obtain actual poses corresponding to the at least one initial pose, and taking the actual poses as positioning results.
For convenience of description, in the embodiment of the present invention, any one target lane line discrete point is denoted as Q_i, so the lane line discrete point set is [Q_1, Q_2, …, Q_n]; the set of all the recognized lane lines is recorded as S.
Specifically, the rotation matrix is combined with each target lane line discrete point respectively to obtain each second pose matrix, recorded as x, where x = [Q_i, R]. Then, according to the preset camera projection model, the first projection point of each target lane line discrete point projected into the target image under x is calculated; any first projection point is recorded as c_i, so the set of first projection points corresponding to the target lane line discrete points is [c_1, c_2, …, c_n]. Then the first lane pixel points corresponding to each first projection point in the target image are determined, where the lane pixel points are pixel points belonging to the recognized lane lines in the target image.
It should be noted that the correspondence between a first projection point and a first lane pixel point may be determined based on the row and column of the pixel point in the lane line. For example, if a certain first projection point lies at the second row and third column of a certain target lane line in the preset map, then the pixel point at the second row and third column of the corresponding recognized lane line in the target image corresponds to that first projection point. Of course, besides the above manner, the correspondence may also be determined in other ways, for example based on the coordinates of the pixel points in WGS84; in practical application it may be set according to actual requirements, which is not limited in this embodiment of the present invention.
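A minimal sketch of the projection and first-distance computation is given below (illustration only). A simple pinhole camera model is assumed, and the nearest recognized lane pixel is used in place of the row-and-column correspondence described above purely to keep the sketch short; the first distances r(c_i, s_i) computed here feed the objective function described next.

```python
import numpy as np

def first_distances(pose, lane_line_points, lane_pixels_by_line, camera_matrix):
    """Hypothetical sketch: project map lane-line points into the image under a candidate
    pose and measure the pixel distance to the corresponding recognized lane-line pixels."""
    position, rotation = pose
    distances = []
    for q_i, lane_pixels in zip(lane_line_points, lane_pixels_by_line):
        # Map point -> camera frame -> pixel (a simple pinhole model is assumed here).
        cam = rotation.T @ (q_i - position)
        u, v, w = camera_matrix @ cam
        c_i = np.array([u / w, v / w])                 # first projection point
        s_i = lane_pixels[np.argmin(np.linalg.norm(lane_pixels - c_i, axis=1))]
        distances.append(np.linalg.norm(c_i - s_i))    # first distance r(c_i, s_i)
    return distances
```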
Then, for each first projection point, a first distance between the first projection point and the corresponding first lane pixel point is calculated and recorded as r(c_i, s_i). Based on each second pose matrix and each first distance, the actual pose corresponding to each initial pose is obtained through an objective function, and each actual pose is taken as a positioning result.
The objective function is given by formula (1). (Formula (1) and the definitions of its terms appear only as equation images in the original publication and are not reproduced here.)
In formula (1), r_x represents the constraint residual of the lane center line on the vehicle positioning; LaneWidth is the current lane width. The design of this residual ensures that the optimization result obtained from an initial value in the current lane does not deviate from the current lane. With the lane center line denoted P_lane, d is the lateral error of the vehicle positioning on the x-axis of the WGS84 coordinates.
r_y represents the constraint residual of the positioning information on the y-axis of the WGS84 coordinates, i.e., the longitudinal error of the vehicle positioning on the y-axis of the WGS84 coordinates.
r_z represents the constraint residual on the vehicle positioning provided by the elevation of the preset map, i.e., the vertical error of the vehicle positioning on the z-axis of the WGS84 coordinates; h is the elevation information calculated based on the preset map.
σ_HD is the weighting coefficient of the preset map, and σ_x, σ_y and σ_z are the weighting coefficients of x, y and z, respectively. Each weighting coefficient may be set according to practical experience; therefore, in practical applications, each weighting coefficient may be set according to actual requirements, which is not limited in the embodiment of the present invention.
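Because formula (1) is only available as an image in the original publication, the following sketch uses an assumed generic weighted least-squares structure combining the reprojection residuals r(c_i, s_i) with the lane-center, longitudinal and elevation residuals named above; it is a stand-in for illustration, not the patented objective function, and the residual_terms helper is hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_pose(initial_pose, residual_terms, sigma_hd=1.0, sigma_x=1.0, sigma_y=1.0, sigma_z=1.0):
    """Assumed structure only: a generic weighted least-squares combination of the residuals
    named in the description, used here as a stand-in for the unreproduced formula (1)."""
    p0, rotation = initial_pose

    def residuals(position):
        r_proj = residual_terms.reprojection(position, rotation)   # r(c_i, s_i) terms
        r_x = residual_terms.lane_center(position)                  # lane center line constraint
        r_y = residual_terms.longitudinal(position)                 # longitudinal constraint
        r_z = residual_terms.elevation(position)                    # preset-map elevation constraint
        return np.concatenate([sigma_hd * np.asarray(r_proj),
                               [sigma_x * r_x, sigma_y * r_y, sigma_z * r_z]])

    solution = least_squares(residuals, p0)
    return solution.x, rotation     # actual pose corresponding to this initial pose
```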
And S205, determining a target positioning result from all positioning results based on the first road information and a preset map.
After each positioning result is determined, a target positioning result can be determined from each positioning result based on the first road information and the preset map, namely the position and posture of the vehicle in the preset map, which are closest to the position of the current driving road.
In a preferred embodiment of the present invention, as shown in fig. 8, step S205 includes:
step S801, calculating second projection points corresponding to the positioning results in the preset map in the target image according to the projection model;
step S802, calculating a second distance between each second projection point and each lane pixel point;
step S803, the positioning result corresponding to the minimum second distance is taken as the target positioning result.
Specifically, the second projection points in the target image corresponding to each positioning result in the preset map are calculated according to the projection model, and then the second distance between each second projection point and each lane pixel point is calculated using formula (2); formula (2) and its associated expression are presented as images in the original publication. In this way, a plurality of second distances are obtained for each second projection point, that is, one second projection point corresponds to a plurality of second distances. Then, the positioning result with the minimum cost is taken as the target positioning result.
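For illustration only, this screening step can be sketched as follows. Formula (2) is published as an image, so the aggregation of the second distances into a single cost (here the mean nearest-pixel distance) and all names in this sketch are assumptions.

import numpy as np

def project(K, R, t, p_world):
    p_cam = R @ np.asarray(p_world, dtype=float) + t
    return (K @ (p_cam / p_cam[2]))[:2]

def select_target_result(candidates, map_points, lane_pixels, K):
    """candidates: iterable of (R, t) positioning results; map_points: 3D lane line
    points of the preset map; lane_pixels: Nx2 identified lane pixels in the image."""
    lane_pixels = np.asarray(lane_pixels, dtype=float)
    best_pose, best_cost = None, float("inf")
    for R, t in candidates:
        # second projection points of this positioning result in the target image
        proj = np.array([project(K, R, t, p) for p in map_points])
        # second distances: every projection point against every lane pixel
        d = np.linalg.norm(proj[:, None, :] - lane_pixels[None, :, :], axis=2)
        cost = d.min(axis=1).mean()       # aggregate cost (assumed form)
        if cost < best_cost:
            best_pose, best_cost = (R, t), cost
    return best_pose                      # target positioning result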
In the embodiment of the invention, the current positioning information of the vehicle is acquired, and a target image of the current driving road of the vehicle is collected. First road information of the current driving road is obtained based on the target image, and second road information of the current driving road in a preset map is obtained based on the positioning information. At least one initial pose of the vehicle is then determined based on the second road information, a positioning result corresponding to each initial pose is determined based on the first road information and the second road information, and a target positioning result is determined from the positioning results based on the first road information and the preset map. In this way, the first road information of the current driving road is obtained through visual perception while, at the same time, at least one lane in the preset map where the vehicle may be located, together with the second road information corresponding to each such lane, is obtained based on the positioning information. Matching and positioning are then carried out for each lane based on the first road information and each piece of second road information, and finally the correct lane where the vehicle is located, i.e., the positioning result, is obtained.
Furthermore, even when the positioning information is weak, the embodiment of the invention can determine a plurality of possibly erroneous initial poses and then determine the final positioning result from these initial poses based on visual perception and the preset map, so that the vehicle can still be positioned in the correct lane. Moreover, because the positioning information is weak, the number of sampling points is small, which reduces the amount of calculation and thereby improves the efficiency of the algorithm.
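As a purely illustrative summary of how these steps fit together, the overall flow might be organised as follows; every callable and name here is a hypothetical stand-in for the corresponding step and is not prescribed by the disclosure.

from typing import Any, Callable, Sequence

def locate_vehicle(
    get_positioning: Callable[[], Any],                  # current positioning information
    capture_image: Callable[[], Any],                    # target image of the current road
    extract_first_road_info: Callable[[Any], Any],       # lane lines identified in the image
    extract_second_road_info: Callable[[Any], Any],      # lane data from the preset map
    propose_initial_poses: Callable[[Any], Sequence[Any]],
    solve_pose: Callable[[Any, Any, Any], Any],          # objective-function optimization
    select_target: Callable[[Sequence[Any], Any], Any],  # screening of the positioning results
) -> Any:
    positioning = get_positioning()
    image = capture_image()
    first_info = extract_first_road_info(image)
    second_info = extract_second_road_info(positioning)
    poses = propose_initial_poses(second_info)
    results = [solve_pose(p, first_info, second_info) for p in poses]
    return select_target(results, first_info)             # target positioning result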
Fig. 9 is a schematic structural diagram of a positioning device of a vehicle according to another embodiment of the present application, and as shown in fig. 9, the device of this embodiment may include:
a positioning information obtaining module 901, configured to obtain current positioning information of a vehicle;
a first road information obtaining module 902, configured to collect a target image of a current driving road of a vehicle, and obtain first road information of the current driving road based on the target image;
a second road information obtaining module 903, configured to obtain second road information of the current driving road in a preset map based on the positioning information;
an initial pose determination module 904 for determining at least one initial pose of the vehicle based on the second road information;
a positioning result determining module 905, configured to determine, based on the first road information and the second road information, a positioning result corresponding to each of the at least one initial pose;
and the screening module 906 is configured to determine a target positioning result from the positioning results based on the first road information and a preset map.
In a preferred embodiment of the present invention, the first road information acquiring module includes:
the acquisition submodule is used for acquiring a target image of the current driving road through image acquisition equipment carried by a vehicle;
and the recognition submodule is used for recognizing the lane lines of the target image and taking at least one recognized lane line obtained by recognition as first road information.
In a preferred embodiment of the present invention, the preset map includes at least one lane line and at least one lane center line; each lane line consists of at least one lane line discrete point, and each lane central line consists of at least one lane central line discrete point;
a second road information acquisition module comprising:
the first processing submodule is used for determining a corresponding position point of the positioning information in a preset map and determining at least one target lane line and at least one target lane central line within a preset distance of the position point;
the second processing submodule is used for determining a target lane line discrete point with the distance from the position point not more than the preset distance from each lane line discrete point of at least one target lane line to obtain a lane line discrete point set;
the third processing submodule is used for determining, from the at least one lane central line discrete point of each target lane central line, the target lane central line discrete point that is closest to the position point and whose distance from the position point does not exceed the preset distance, so as to obtain a lane central line discrete point set;
and the fourth processing submodule is used for taking at least one target lane line, at least one target lane central line, a lane line discrete point set and a lane central line discrete point set as second road information.
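For illustration only, the assembly of the two discrete point sets by the submodules above might be sketched as follows; the data layout (each line as an array of 3D discrete points) and the plain Euclidean distance threshold are assumptions made for this sketch.

import numpy as np

def build_discrete_point_sets(position, target_lane_lines, target_center_lines, preset_distance):
    position = np.asarray(position, dtype=float)
    lane_point_set = []
    for line in target_lane_lines:
        pts = np.asarray(line, dtype=float)
        d = np.linalg.norm(pts - position, axis=1)
        lane_point_set.extend(pts[d <= preset_distance])      # every point within range
    center_point_set = []
    for line in target_center_lines:
        pts = np.asarray(line, dtype=float)
        d = np.linalg.norm(pts - position, axis=1)
        if (d <= preset_distance).any():
            center_point_set.append(pts[d.argmin()])           # closest point per central line
    return lane_point_set, center_point_set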
In a preferred embodiment of the present invention, the initial pose determination module includes:
the rotation matrix determining submodule is used for determining a rotation matrix of the vehicle relative to the positioning information;
and the first generation submodule is used for respectively combining the rotation matrix and at least one target lane central line discrete point in the lane central line discrete point set to obtain at least one first pose matrix, and taking the at least one first pose matrix as an initial pose.
In a preferred embodiment of the present invention, the rotation matrix determination submodule includes:
the first vector determination unit is used for determining a first vector based on the extending direction of the central line of any item of the taxi track;
the second vector determining unit is used for fitting each target lane line discrete point in the lane line discrete point set to obtain a second vector;
a third vector determination unit, configured to perform an operation based on the first vector and the second vector to obtain a third vector;
and the generating unit is used for combining the first vector, the second vector and the third vector to obtain a rotation matrix.
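For illustration only, the rotation-matrix construction described by the units above might be sketched as follows; the choice of an SVD fit for the second vector, the orthogonalisation step, and the cross product for the third vector are assumptions, since the disclosure does not specify the exact operations.

import numpy as np

def rotation_from_lane(center_line_pts, lane_line_pts):
    center_line_pts = np.asarray(center_line_pts, dtype=float)
    lane_line_pts = np.asarray(lane_line_pts, dtype=float)

    # first vector: extending direction of the target lane central line
    v1 = center_line_pts[-1] - center_line_pts[0]
    v1 /= np.linalg.norm(v1)

    # second vector: direction fitted to the lane line discrete points
    centred = lane_line_pts - lane_line_pts.mean(axis=0)
    v2 = np.linalg.svd(centred, full_matrices=False)[2][0]
    v2 -= v2.dot(v1) * v1                  # keep the frame orthogonal (assumption)
    v2 /= np.linalg.norm(v2)

    # third vector: obtained from the first two (cross product assumed)
    v3 = np.cross(v1, v2)

    return np.column_stack([v1, v2, v3])   # rotation matrix of the vehicle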
In a preferred embodiment of the present invention, the positioning result determining module includes:
the second generation submodule is used for respectively combining the rotation matrix and at least one target lane line discrete point in the lane line discrete point set to obtain at least one second attitude matrix;
the first calculation submodule is used for calculating at least one first projection point corresponding to at least one target lane line discrete point in the target image according to a preset projection model;
the fifth processing submodule is used for determining lane pixel points corresponding to the at least one first projection point in the target image; the lane pixel points are pixel points belonging to the identified lane lines in the target image;
the second calculation submodule is used for calculating a first distance between at least one first projection point and each corresponding lane pixel point;
and the sixth processing submodule is used for calculating to obtain the actual poses corresponding to the at least one initial pose through an objective function based on the at least one second pose matrix and the at least one first distance, and taking the actual poses as positioning results.
In a preferred embodiment of the present invention, the screening module comprises:
the third calculation submodule is used for calculating a second projection point, corresponding to each positioning result in the preset map, in the target image according to the projection model;
the fourth calculation submodule is used for calculating the second distance between each second projection point and each lane pixel point;
and the fifth processing submodule is used for taking the positioning result corresponding to the minimum second distance as a target positioning result.
The vehicle positioning device of this embodiment can execute the vehicle positioning method shown in the first embodiment of this application, and the implementation principles thereof are similar, and are not described herein again.
In the embodiment of the invention, the current positioning information of the vehicle is acquired, and a target image of the current driving road of the vehicle is collected. First road information of the current driving road is obtained based on the target image, and second road information of the current driving road in a preset map is obtained based on the positioning information. At least one initial pose of the vehicle is then determined based on the second road information, a positioning result corresponding to each initial pose is determined based on the first road information and the second road information, and a target positioning result is determined from the positioning results based on the first road information and the preset map. In this way, the first road information of the current driving road is obtained through visual perception while, at the same time, at least one lane in the preset map where the vehicle may be located, together with the second road information corresponding to each such lane, is obtained based on the positioning information. Matching and positioning are then carried out for each lane based on the first road information and each piece of second road information, and finally the correct lane where the vehicle is located, i.e., the positioning result, is obtained.
Furthermore, even when the positioning information is weak, the embodiment of the invention can determine a plurality of possibly erroneous initial poses and then determine the final positioning result from these initial poses based on visual perception and the preset map, so that the vehicle can still be positioned in the correct lane. Moreover, because the positioning information is weak, the number of sampling points is small, which reduces the amount of calculation and thereby improves the efficiency of the algorithm.
In another embodiment of the present application, there is provided an electronic device including a memory and a processor, with at least one program stored in the memory for execution by the processor; when executed by the processor, the program implements the following. The current positioning information of the vehicle is acquired, and a target image of the current driving road of the vehicle is collected. First road information of the current driving road is obtained based on the target image, and second road information of the current driving road in a preset map is obtained based on the positioning information. At least one initial pose of the vehicle is then determined based on the second road information, a positioning result corresponding to each initial pose is determined based on the first road information and the second road information, and a target positioning result is determined from the positioning results based on the first road information and the preset map. In this way, the first road information of the current driving road is obtained through visual perception while, at the same time, at least one lane in the preset map where the vehicle may be located, together with the second road information corresponding to each such lane, is obtained based on the positioning information. Matching and positioning are then carried out for each lane based on the first road information and each piece of second road information, and finally the correct lane where the vehicle is located, i.e., the positioning result, is obtained.
Furthermore, even when the positioning information is weak, the embodiment of the invention can determine a plurality of possibly erroneous initial poses and then determine the final positioning result from these initial poses based on visual perception and the preset map, so that the vehicle can still be positioned in the correct lane. Moreover, because the positioning information is weak, the number of sampling points is small, which reduces the amount of calculation and thereby improves the efficiency of the algorithm.
In an alternative embodiment, there is provided an electronic device, as shown in fig. 10, an electronic device 10000 shown in fig. 10 includes: a processor 10001, and a memory 10003. The processor 10001 is coupled to the memory 10003, such as via a bus 10002. Optionally, the electronic device 10000 may further comprise a transceiver 10004. It should be noted that the transceiver 10004 is not limited to one in practical applications, and the structure of the electronic device 10000 is not limited to the embodiment of the present application.
The processor 10001 may be a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 10001 may also be a combination that performs a computing function, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 10002 can include a path that conveys information between the aforementioned components. The bus 10002 may be a PCI bus, an EISA bus, or the like. The bus 10002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 10, but this is not intended to represent only one bus or type of bus.
The memory 10003 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disk storage, an optical disk storage (including compact disk, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 10003 is used for storing application program codes for executing the present application, and the processor 10001 controls the execution. The processor 10001 is configured to execute the application program code stored in the memory 10003 to implement any of the embodiments of the method described above.
The electronic devices include, but are not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
Yet another embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program runs on a computer, it enables the computer to perform the corresponding content of the aforementioned method embodiments. Compared with the prior art, in the embodiment of the invention, the current positioning information of the vehicle is acquired, and a target image of the current driving road of the vehicle is collected. First road information of the current driving road is obtained based on the target image, and second road information of the current driving road in a preset map is obtained based on the positioning information. At least one initial pose of the vehicle is then determined based on the second road information, a positioning result corresponding to each initial pose is determined based on the first road information and the second road information, and a target positioning result is determined from the positioning results based on the first road information and the preset map. In this way, the first road information of the current driving road is obtained through visual perception while, at the same time, at least one lane in the preset map where the vehicle may be located, together with the second road information corresponding to each such lane, is obtained based on the positioning information. Matching and positioning are then carried out for each lane based on the first road information and each piece of second road information, and finally the correct lane where the vehicle is located, i.e., the positioning result, is obtained.
Furthermore, even when the positioning information is weak, the embodiment of the invention can determine a plurality of possibly erroneous initial poses and then determine the final positioning result from these initial poses based on visual perception and the preset map, so that the vehicle can still be positioned in the correct lane. Moreover, because the positioning information is weak, the number of sampling points is small, which reduces the amount of calculation and thereby improves the efficiency of the algorithm.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is not necessarily sequential, and they may be performed in turns or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present invention. It should be noted that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the following:
acquiring the current positioning information of the vehicle;
acquiring a target image of a current driving road of the vehicle, and acquiring first road information of the current driving road based on the target image;
acquiring second road information of the current running road in a preset map based on the positioning information, and determining at least one initial pose of the vehicle based on the second road information;
determining a positioning result corresponding to each of the at least one initial pose based on the first road information and the second road information;
and determining a target positioning result from all positioning results based on the first road information and the preset map.

Claims (10)

1. A method of locating a vehicle, comprising:
acquiring the current positioning information of the vehicle;
acquiring a target image of a current driving road of the vehicle, and acquiring first road information of the current driving road based on the target image;
acquiring second road information of the current running road in a preset map based on the positioning information, and determining at least one initial pose of the vehicle based on the second road information;
determining a positioning result corresponding to each of the at least one initial pose based on the first road information and the second road information;
and determining a target positioning result from all positioning results based on the first road information and the preset map.
2. The method for positioning a vehicle according to claim 1, wherein the acquiring a target image of a current driving road of the vehicle and acquiring first road information of the current driving road based on the target image comprises:
acquiring a target image of the current driving road through image acquisition equipment carried by the vehicle;
and recognizing lane lines of the target image, and taking at least one recognized lane line obtained by recognition as the first road information.
3. The vehicle positioning method according to claim 1, wherein the preset map includes at least one lane line and at least one lane center line; each lane line consists of at least one lane line discrete point, and each lane central line consists of at least one lane central line discrete point;
the obtaining of the second road information of the current driving road in a preset map based on the positioning information includes:
determining a corresponding position point of the positioning information in the preset map, and determining at least one target lane line and at least one target lane central line within a preset distance of the position point;
determining a target lane line discrete point with the distance from the position point not more than a preset distance from each lane line discrete point of the at least one target lane line to obtain a lane line discrete point set;
determining, from the at least one lane center line discrete point of each target lane center line, the target lane center line discrete point that is closest to the position point and whose distance from the position point does not exceed the preset distance, to obtain a lane center line discrete point set;
and taking the at least one target lane line, the at least one target lane central line, the lane line discrete point set and the lane central line discrete point set as the second road information.
4. The method according to claim 1 or 3, wherein the determining at least one initial pose of the vehicle based on the second road information comprises:
determining a rotation matrix of the vehicle relative to the positioning information;
and respectively combining the rotation matrix with at least one target lane central line discrete point in the lane central line discrete point set to obtain at least one first posture matrix, and taking the at least one first posture matrix as an initial posture.
5. The method of claim 4, wherein said determining a rotation matrix of said vehicle relative to said positioning information comprises:
determining a first vector based on the extending direction of any target lane central line;
fitting each target lane line discrete point in the lane line discrete point set to obtain a second vector;
calculating based on the first vector and the second vector to obtain a third vector;
and combining the first vector, the second vector and the third vector to obtain the rotation matrix.
6. The vehicle positioning method according to claim 1, 2, 3 or 5, wherein the determining a positioning result corresponding to each of the at least one initial pose based on the first road information and the second road information comprises:
respectively combining the determined rotation matrix of the vehicle relative to the positioning information with at least one target lane line discrete point in a lane line discrete point set to obtain at least one second attitude matrix; the lane line discrete point set is obtained by determining a corresponding position point of the positioning information in the preset map, determining at least one target lane line and at least one target lane central line within a preset distance of the position point, and determining a target lane line discrete point with a distance from the position point not exceeding the preset distance from each lane line discrete point of the at least one target lane line;
calculating at least one first projection point corresponding to at least one target lane line discrete point in the target image according to a preset projection model;
determining lane pixel points corresponding to the at least one first projection point in the target image; the lane pixel points are pixel points belonging to the identified lane lines in the target image;
calculating a first distance between the at least one first projection point and the corresponding lane pixel point;
and calculating actual poses corresponding to the at least one initial pose through an objective function based on the at least one second pose matrix and the at least one first distance, and taking the actual poses as positioning results.
7. The vehicle positioning method according to claim 6, wherein determining a target positioning result from the respective positioning results based on the first road information and the preset map comprises:
calculating second projection points corresponding to the positioning results in the preset map in the target image according to the projection model;
calculating a second distance between each second projection point and each lane pixel point;
and taking the positioning result corresponding to the minimum second distance as the target positioning result.
8. A positioning device for a vehicle, comprising:
the first acquisition module is used for acquiring the current positioning information of the vehicle;
the second acquisition module is used for acquiring a target image of the current driving road of the vehicle and acquiring first road information of the current driving road based on the target image;
the third acquisition module is used for acquiring second road information of the current driving road in a preset map based on the positioning information;
a first determination module for determining at least one initial pose of the vehicle based on the second road information;
a second determining module, configured to determine, based on the first road information and the second road information, a positioning result corresponding to each of the at least one initial pose;
and the third determining module is used for determining a target positioning result from all positioning results based on the first road information and the preset map.
9. An electronic device, comprising:
a processor, a memory, and a bus;
the bus is used for connecting the processor and the memory;
the memory is used for storing operation instructions;
the processor is used for executing the vehicle positioning method according to any one of the claims 1 to 7 by calling the operation instruction.
10. A computer-readable storage medium for storing computer instructions which, when executed on a computer, cause the computer to perform the method of locating a vehicle of any one of claims 1 to 7.
CN202011554483.1A 2020-12-24 2020-12-24 Vehicle positioning method and device, electronic equipment and computer readable storage medium Active CN112284400B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011554483.1A CN112284400B (en) 2020-12-24 2020-12-24 Vehicle positioning method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112284400A CN112284400A (en) 2021-01-29
CN112284400B true CN112284400B (en) 2021-03-19

Family

ID=74426074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011554483.1A Active CN112284400B (en) 2020-12-24 2020-12-24 Vehicle positioning method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112284400B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255619B (en) * 2021-07-09 2021-11-23 禾多科技(北京)有限公司 Lane line recognition and positioning method, electronic device, and computer-readable medium
CN114034307B (en) * 2021-11-19 2024-04-16 智道网联科技(北京)有限公司 Vehicle pose calibration method and device based on lane lines and electronic equipment
CN114563005A (en) * 2022-03-01 2022-05-31 小米汽车科技有限公司 Road positioning method, device, equipment, vehicle and storage medium
CN115931009B (en) * 2023-03-13 2023-04-28 北京航空航天大学 Inertial device centrifugal measurement method based on gyroscope and laser ranging
CN116883502B (en) * 2023-09-05 2024-01-09 深圳市智绘科技有限公司 Method, device, medium and equipment for determining camera pose and landmark point

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110567480A (en) * 2019-09-12 2019-12-13 北京百度网讯科技有限公司 Optimization method, device and equipment for vehicle positioning and storage medium
CN110595494A (en) * 2019-09-17 2019-12-20 百度在线网络技术(北京)有限公司 Map error determination method and device
CN110793544A (en) * 2019-10-29 2020-02-14 北京百度网讯科技有限公司 Sensing sensor parameter calibration method, device, equipment and storage medium
WO2020146102A1 (en) * 2019-01-08 2020-07-16 Qualcomm Incorporated Robust lane association by projecting 2-d image into 3-d world using map information
CN111854727A (en) * 2019-04-27 2020-10-30 北京初速度科技有限公司 Vehicle pose correction method and device
CN111949943A (en) * 2020-07-24 2020-11-17 北京航空航天大学 Vehicle fusion positioning method for V2X and laser point cloud registration for advanced automatic driving
CN111950434A (en) * 2020-08-07 2020-11-17 武汉中海庭数据技术有限公司 Lane line structuralization method and system based on discrete point scanning
CN111982133A (en) * 2019-05-23 2020-11-24 北京地平线机器人技术研发有限公司 Method and device for positioning vehicle based on high-precision map and electronic equipment
CN112016463A (en) * 2020-08-28 2020-12-01 佛山市南海区广工大数控装备协同创新研究院 Deep learning-based lane line detection method

Also Published As

Publication number Publication date
CN112284400A (en) 2021-01-29

Similar Documents

Publication Publication Date Title
CN112284400B (en) Vehicle positioning method and device, electronic equipment and computer readable storage medium
CN109461208B (en) Three-dimensional map processing method, device, medium and computing equipment
CN111524185A (en) Positioning method and device, electronic equipment and storage medium
CN111829532B (en) Aircraft repositioning system and method
CN105229490A (en) Use the positional accuracy of satellite visibility data for promoting
CN113807470B (en) Vehicle driving state determination method and related device
CN110160545B (en) Enhanced positioning system and method for laser radar and GPS
CN112362054B (en) Calibration method, calibration device, electronic equipment and storage medium
US20140286537A1 (en) Measurement device, measurement method, and computer program product
US11620755B2 (en) Method and system for tracking trajectory based on visual localization and odometry
CN112750203A (en) Model reconstruction method, device, equipment and storage medium
CN112950710A (en) Pose determination method and device, electronic equipment and computer readable storage medium
US20210041259A1 (en) Methods and Systems for Determining Geographic Orientation Based on Imagery
CN114120301A (en) Pose determination method, device and equipment
CN113532444B (en) Navigation path processing method and device, electronic equipment and storage medium
CN115164936A (en) Global pose correction method and device for point cloud splicing in high-precision map manufacturing
CN109241233B (en) Coordinate matching method and device
Soloviev et al. Reconfigurable Integration Filter Engine (RIFE) for Plug-and-Play Navigation
Löchtefeld et al. PINwI: pedestrian indoor navigation without infrastructure
CN114494423B (en) Unmanned platform load non-central target longitude and latitude positioning method and system
US9881028B2 (en) Photo-optic comparative geolocation system
CN111121755A (en) Multi-sensor fusion positioning method, device, equipment and storage medium
CN114674328B (en) Map generation method, map generation device, electronic device, storage medium, and vehicle
CN114897942A (en) Point cloud map generation method and device and related storage medium
CN115063480A (en) Pose determination method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40037347

Country of ref document: HK