CN117330058A - Vehicle-road joint accurate positioning method, device and storage medium - Google Patents


Info

Publication number
CN117330058A
Authority
CN
China
Prior art keywords
road
vehicle
coding
information
positioning information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311162308.1A
Other languages
Chinese (zh)
Inventor
赵治国
梁凯冲
颜丹姝
杨一飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University
Priority to CN202311162308.1A
Publication of CN117330058A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/43 Determining position using carrier phase measurements, e.g. kinematic positioning; using long or short baseline interferometry
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S19/49 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an inertial position system, e.g. loosely-coupled
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Navigation (AREA)

Abstract

The invention relates to a vehicle-road joint positioning method, device and storage medium, wherein the method comprises the following steps: S1: establishing a mapping relation between global positioning information and codes based on a hierarchical classification coding rule; S2: selecting a two-dimensional code or bar code to represent the road position information, taking into account information integrity and the difficulty of recognition and arrangement; S3: optimizing the layout parameters with an intelligent optimization algorithm and laying out the road codes; S4: detecting and segmenting the road coding information in visual-sensor images with a target detection model; S5: based on the code detection result, performing motion compensation with inertial navigation equipment and estimating the dynamic relative pose between the vehicle and the road code; S6: fusing the autonomous positioning information of the vehicle and the road code positioning information with a Kalman filter to obtain the joint positioning result. Compared with the prior art, the invention causes little visual disturbance to the driver, is low in cost and highly durable, and achieves high-precision, redundant and robust vehicle positioning.

Description

Vehicle-road joint accurate positioning method, device and storage medium
Technical Field
The invention relates to the technical field of vehicle positioning, and in particular to a vehicle-road joint accurate positioning method, device and storage medium.
Background
Traffic safety problems and low traffic efficiency caused by large-scale urban development are increasingly prominent, and relying on vehicle intelligence and traffic control alone is not sufficient to solve them. The road is an essential element of the 'human-vehicle-road' system and can be endowed with diversified service capabilities such as accurate positioning, bringing new opportunities for safe and efficient urban traffic.
Accurate positioning of a running vehicle is key to realizing traffic guidance and improving traffic safety and efficiency. At present, vehicles usually perform autonomous positioning with methods such as GPS and inertial navigation. In complex urban canyon environments such as tunnels, viaducts and high-rise dense areas, however, GPS and inertial navigation are easily disturbed and the vehicle struggles to determine its current real-time position, so road infrastructure is needed to provide effective position information that helps the vehicle achieve reliable and accurate positioning. Chinese patent application CN111274923A discloses a vehicle positioning method, system, medium and device based on road surface coding, in which a coding pattern is arranged on the road surface in a preset coding mode and the vehicle is positioned by looking up the position information corresponding to the code parsed from the matched pattern. However, the method considers neither the relation between the road code and the vehicle pose nor the fusion of vehicle positioning information with road positioning information, so its positioning accuracy is limited.
Invention patent application CN113395663A discloses a vehicle positioning method in a tunnel based on vehicle-road cooperation, in which a plurality of vehicle positioning mark points are set on the tunnel ceiling at certain intervals, each mark point code consisting of a letter identifier and Arabic numerals; the longitude, latitude and elevation information represented by the mark points is stored in a database, the picture information acquired by a camera is processed to extract the mark point code, and the positioning information is obtained by querying the coordinate database. However, the application scene of this method is limited to road sections with low shed roofs such as tunnels, so its expandability is low. Chinese patent CN109945858B discloses a multi-sensor fusion positioning method for low-speed parking scenes, which first establishes an offline map and then obtains vehicle positioning information through visual map matching fused with the vehicle odometer. However, the offline map data volume and the visual map matching computation required by this method are large.
A vehicle-road joint accurate positioning method that comprehensively considers the applicability of road codes, the pose relation between road codes and vehicles, and the effective fusion of ego-vehicle positioning information with road positioning information therefore still needs to be researched and developed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a vehicle-road joint accurate positioning method, device and storage medium that cause little visual interference to the driver, are low in cost and highly durable, and achieve high-precision, redundant and robust vehicle positioning.
The aim of the invention can be achieved by the following technical scheme:
according to a first aspect of the present invention, there is provided a vehicle-road joint positioning method, comprising the steps of:
step 1: establishing a mapping relation between global positioning information and codes based on a hierarchical classification coding rule;
step 2: comprehensively considering the information integrity, the recognition difficulty and the arrangement difficulty, and selecting a two-dimensional code or a bar code to code and characterize the road position information;
step 3: optimizing layout parameters by adopting an intelligent optimization algorithm, and carrying out road coding layout;
step 4: acquiring a road image by using a visual sensor, and detecting and dividing road coding information by using a target detection model to obtain a coding visual detection result;
step 5: based on the coded visual detection result, extracting representative geometric feature points, performing motion compensation by using inertial navigation equipment, and estimating dynamic relative pose between a vehicle and road coding;
step 6: and carrying out multi-mode fusion on the autonomous positioning information of the vehicle and the road coding positioning information by adopting a Kalman filter to obtain a final combined positioning result.
Preferably, the hierarchical classification coding rule in step 1 is specifically:
splitting information to be encoded into character units, encoding each character unit into encoding graphic units, and then combining the encoding graphic units into a complete encoding graphic according to the information to be encoded; and the coding graph is provided with check bits for verifying the accuracy of the decoding information.
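To make the rule concrete, the mapping can be sketched as follows (a hypothetical field layout and mod-10 check bit; the patent does not fix a concrete format):

```python
# Hypothetical sketch of the hierarchical coding rule: quantized position and
# road direction are split into fixed-width character units, concatenated,
# and protected by a mod-10 check bit so that decoding can be verified.

def encode_position(lat_cm: int, lon_cm: int, heading_cdeg: int) -> str:
    """Map quantized coordinates (cm) and heading (0.01 deg) to a code string."""
    payload = f"{lat_cm:010d}{lon_cm:010d}{heading_cdeg:05d}"
    check = sum(int(c) for c in payload) % 10  # check bit for decode verification
    return payload + str(check)

def decode_position(code: str):
    """Inverse mapping; returns None when the check bit rejects the read."""
    payload, check = code[:-1], int(code[-1])
    if sum(int(c) for c in payload) % 10 != check:
        return None  # corrupted or misread code
    return int(payload[:10]), int(payload[10:20]), int(payload[20:25])
```

The resulting digit string would then be rendered as a bar code or two-dimensional code in step 2 and paved on the road surface.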
Preferably, the road codes in step 3 are arranged with a road surface luminescent material, which includes, but is not limited to, fluorescent stone.
Preferably, the intelligent optimization algorithm in the step 3 includes, but is not limited to, a particle swarm algorithm and an ant colony algorithm.
Preferably, the object detection model in the step 4 is a single-stage object detector based on a lightweight characteristic enhanced convolutional neural network.
Preferably, the step 4 includes the following substeps:
step 4-1: detecting by using a target detection model, and cutting out a detection frame containing road codes from the image;
step 4-2: cutting out a road code from the detection frame obtained in the step 4-1, and converting the road code to a top view angle through perspective conversion;
step 4-3: decoding the road code obtained in the step 4-2 according to the code characterization mode determined in the step 2 to obtain the transverse coordinate P_x and longitudinal coordinate P_y of the road code in the world coordinate system and the road traffic direction angle θ at the road code position, and verifying the positioning information obtained after decoding.
Preferably, said step 5 comprises the following sub-steps:
step 5-1: establishing a road coding coordinate system by taking a road coding upper left corner point as an origin, taking the connecting line direction from the upper left corner point to the upper right corner point as an x-axis positive direction and taking the connecting line direction from the upper left corner point to the lower left corner point as a y-axis positive direction;
step 5-2: extracting 4 geometric feature points from the road code and estimating the pose of the visual sensor by an iterative method to obtain a rotation matrix R and a translation vector T of the visual sensor coordinate system relative to the road code coordinate system, where T = [t_x, t_y, t_z] and t_x, t_y, t_z respectively denote the transverse, longitudinal and vertical distances of the visual sensor relative to the origin of the code coordinate system;
step 5-3: according to the road code pose information obtained in the step 4 and the pose information of the visual sensor relative to the road code obtained in the step 5-2, calculating the transverse position coordinate P_xr and longitudinal position coordinate P_yr of the center of the rear axle of the vehicle in the world coordinate system:
P_xr = P_x + t_x cosθ + (t_y + L) sinθ
P_yr = P_y + t_x sinθ - (t_y + L) cosθ
wherein L is the distance between the installation position of the visual sensor and the center of the rear axle of the vehicle;
step 5-4: performing motion compensation on the positioning information obtained in the step 5-3 with the vehicle-mounted inertial navigation module according to the running time of the algorithm; if the running time of the algorithm is dt and the vehicle speed is v, the compensated vehicle positioning information is:
P_xr_c = P_x + t_x cosθ + (t_y + L - v dt) sinθ
P_yr_c = P_y + t_x sinθ - (t_y + L - v dt) cosθ.
preferably, the step 6 comprises the following sub-steps:
step 6-1: obtaining the transverse position coordinate P_xv and longitudinal position coordinate P_yv of the vehicle in the world coordinate system with a GNSS-IMU-RTK integrated navigation module as the autonomous positioning information of the vehicle;
step 6-2: fusing the autonomous positioning information of the vehicle and the positioning information calculated from the road code with a Kalman filter: the time stamp of each positioning input is recorded; the prediction step of the Kalman filter synchronizes the vehicle positioning state and its covariance matrix to the moment of the new positioning input; the update step then fuses the a priori estimate of the vehicle positioning state with the newly input positioning observation to obtain the a posteriori estimate of the vehicle positioning state.
According to a second aspect of the present invention there is provided an electronic device comprising a memory and a processor, the memory having stored thereon a computer program, the processor implementing the method of any one of the above when executing the program.
According to a third aspect of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the method of any one of the above.
Compared with the prior art, the invention has the following beneficial effects:
(1) High positioning information density: the bar code or two-dimensional code based method has high information density and can represent rich position information; by capturing and identifying the coding pattern with the visual sensor, the vehicle is assisted in achieving high-precision positioning.
(2) Feasibility, accuracy and efficiency are all taken into account: the vehicle-road joint positioning method of the invention provides the vehicle with highly recognizable positioning code patterns and high-precision position information through a simple, low-cost arrangement, while balancing the computation speed and solution quality of the image detection algorithm, thereby ensuring feasible, efficient and accurate vehicle positioning.
Drawings
FIG. 1 is a schematic flow chart of a vehicle-road joint positioning method in the invention;
FIG. 2 is a schematic view of a fluorescent stone paving material according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a network structure of an object detector according to an embodiment of the present invention;
FIG. 4 is a schematic diagram showing the relative pose relationship between the visual sensor and the road code in the embodiment of the invention;
fig. 5 is a schematic diagram of vehicle road positioning information fusion in an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
Examples
A vehicle-road joint positioning method and device, the flow of which is shown in figure 1, comprises:
step 1: and establishing a verifiable and unique mapping relation between the global positioning information and the codes based on the hierarchical classification coding rule.
Step 2: Taking into account information integrity and the difficulty of recognition and arrangement, the road coding information is characterized as a two-dimensional code or a bar code, specifically:
step 2-1: evaluating the information integrity of the global positioning information encoded by the rule of step 1. If the amount of information is small, it can be represented by a bar code of a certain length and the bar code characterization mode is adopted; otherwise, the two-dimensional code characterization mode is adopted to convey a larger amount of information;
step 2-2: evaluating the recognition and arrangement difficulty of the global positioning information encoded by the rule of step 1, for example the influence of transverse versus longitudinal bar code arrangement on the recognition effect, and the relation between the bar code length (or two-dimensional code area) and the arrangement cost;
step 2-3: combining the evaluations of steps 2-1 and 2-2, selecting the more feasible of the bar code and the two-dimensional code as the code characterization mode;
step 2-4: according to the characterization mode determined in step 2-3, rendering the code obtained in step 1 as a code pattern, such as a bar code or two-dimensional code, that can be paved on the road surface.
Step 3: road codes are arranged by adopting road luminescent materials, and the arrangement positions, the quantity and the like of the codes are optimized based on an intelligent optimization algorithm;
the road surface luminescent material includes, but is not limited to, a road surface paving material such as fluorescent stone, as shown in fig. 2, and aims to: the method realizes the identification coding of the corresponding sensor, provides accurate position information, reduces the visual influence of the coding on a driver as much as possible, avoids using an active device, saves energy and reduces paving and maintenance costs.
The intelligent optimization algorithm in this embodiment includes, but is not limited to, the particle swarm algorithm, the ant colony algorithm and the like, and aims to determine the minimum number of required code patterns and their optimal arrangement positions on the premise that the code patterns provide accurate positioning information, so as to realize efficient vehicle positioning.
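A minimal particle swarm sketch of the layout optimization in step 3. The cost function below is a hypothetical trade-off (paving cost of a 1000 m road section versus dead-reckoning drift that grows between consecutive codes); the patent leaves the concrete objective and constraints open:

```python
import random

def pso_spacing(cost, lo, hi, n_particles=20, iters=100, seed=0):
    """Particle swarm optimization of a scalar layout parameter (code spacing)."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pcost = xs[:], [cost(x) for x in xs]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # inertia + cognitive + social terms of standard PSO
            vs[i] = 0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i]) + 1.5 * r2 * (gbest - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            c = cost(xs[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = xs[i], c
                if c < cost(gbest):
                    gbest = xs[i]
    return gbest

# Hypothetical objective: more codes cost more to pave, while larger spacing
# lets inertial error grow between code observations.
spacing = pso_spacing(lambda s: (1000.0 / s) * 5.0 + 0.4 * s * s, 5.0, 100.0)
```

The same loop extends to vector-valued layout parameters (positions and counts) by making each particle a vector.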
Step 4: Using the visual sensor and a lightweight feature-enhanced convolutional neural network, rapidly detecting and segmenting the road coding information, and then accurately decoding and checking it, specifically:
step 4-1: detecting the road surface code in the step 3 by using a single-stage target detector based on a lightweight characteristic enhanced convolutional neural network, and cutting a detection frame containing the road code from an image, wherein the network structure of the target detector is shown in figure 3;
step 4-2: cutting out a road code from the detection frame obtained in the step 4-1, and converting the road code to a top view angle through perspective conversion;
step 4-3: decoding the road code obtained in the step 4-2 according to the coding mode determined in the step 2-3 to obtain the transverse coordinate P_x and longitudinal coordinate P_y of the road code in the world coordinate system and the road traffic direction angle θ at the road code position, and checking the positioning information obtained after decoding.
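The perspective conversion of step 4-2 maps the four detected corner points of the code to a top view. A self-contained sketch, using pure Python instead of an imaging library, of fitting the homography from the four correspondences and warping a point (the corner coordinates below are illustrative):

```python
def solve_homography(src, dst):
    """Fit the 8-parameter homography mapping 4 image points to 4 plane points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    n = 8
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):                      # Gaussian elimination, partial pivoting
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):          # back substitution
        h[r] = (M[r][n] - sum(M[r][k] * h[k] for k in range(r + 1, n))) / M[r][r]
    return h                                # h11..h32, with h33 fixed to 1

def warp_point(h, x, y):
    """Apply the homography to one pixel; resampling all pixels yields the top view."""
    w = h[6] * x + h[7] * y + 1.0
    return ((h[0] * x + h[1] * y + h[2]) / w, (h[3] * x + h[4] * y + h[5]) / w)

# Illustrative detection-frame corners mapped onto a unit square (top view).
corners = [(100.0, 200.0), (300.0, 210.0), (320.0, 400.0), (90.0, 390.0)]
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
H = solve_homography(corners, square)
```

In practice a library routine would do the fitting and warping; this sketch only shows the geometry behind the conversion.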
Step 5: based on the coded visual detection result, extracting representative geometric feature points, performing motion compensation by using inertial navigation equipment, and estimating dynamic relative pose between a vehicle and road coding, wherein the method specifically comprises the following steps:
step 5-1: selecting a road coding coordinate system as a rectangular coordinate system taking a road coding upper left corner point as an origin, taking the connecting line direction from the upper left corner point to the upper right corner point as an x-axis positive direction, and taking the connecting line direction from the upper left corner point to the lower left corner point as a y-axis positive direction;
step 5-2: extracting 4 geometric feature points from the road code and estimating the pose of the visual sensor by an iterative method to obtain a rotation matrix R and a translation vector T of the visual sensor coordinate system relative to the road code coordinate system, where T = [t_x, t_y, t_z] and t_x, t_y, t_z respectively denote the transverse, longitudinal and vertical distances of the visual sensor relative to the origin of the code coordinate system;
step 5-3: according to the road code pose information obtained in the step 4-3 and the pose of the visual sensor relative to the road code obtained in the step 5-2, calculating the transverse position coordinate P_xr and longitudinal position coordinate P_yr of the center of the rear axle of the vehicle in the world coordinate system; with L denoting the distance between the installation position of the visual sensor and the center of the rear axle, the calculation formulas are P_xr = P_x + t_x cosθ + (t_y + L) sinθ and P_yr = P_y + t_x sinθ - (t_y + L) cosθ, as shown in FIG. 4.
Step 5-4: and (3) performing motion compensation on the positioning information obtained in the step (5-3) by using an inertial navigation module carried by the vehicle according to the operation time of the algorithm, and calculating the compensated vehicle positioning information if the operation time of the algorithm is dt and the vehicle speed is v: p (P) xr_c =P x +T x cosθ+(T y +L-vdt)sinθ,P yr_c =P y +T x sinθ-(T y +L-vdt)cosθ。
Step 6: Based on a Kalman filter, effectively fusing the multi-source (vehicle autonomous positioning and road code positioning), multi-modal (different characteristics) positioning information, specifically:
step 6-1: the autonomous positioning information of the vehicle refers to the transverse position coordinate P_xv and longitudinal position coordinate P_yv of the vehicle in the world coordinate system, obtained with the GNSS-IMU-RTK integrated navigation module.
Step 6-2: the vehicle autonomous positioning information and the positioning information calculated based on road coding are fused by using a Kalman filter, the time stamp of each positioning information input is recorded, the vehicle positioning information and the covariance matrix thereof are synchronized to the input moment of new positioning information (comprising the two positioning information) by using the prediction step of the Kalman filter, and then the prior estimated value of the vehicle positioning information and the newly input positioning information observed value are fused in the updating step of the Kalman filter, so that the posterior estimated value of the vehicle positioning information is obtained, as shown in figure 5.
The electronic device of the present invention includes a Central Processing Unit (CPU) that can perform various appropriate actions and processes according to computer program instructions stored in a Read Only Memory (ROM) or computer program instructions loaded from a storage unit into a Random Access Memory (RAM). In the RAM, various programs and data required for the operation of the device can also be stored. The CPU, ROM and RAM are connected to each other by a bus. An input/output (I/O) interface is also connected to the bus.
A plurality of components in a device are connected to an I/O interface, comprising: an input unit such as a keyboard, a mouse, etc.; an output unit such as various types of displays, speakers, and the like; a storage unit such as a magnetic disk, an optical disk, or the like; and communication units such as network cards, modems, wireless communication transceivers, and the like. The communication unit allows the device to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processing unit performs the respective methods and processes described above, for example, the methods S1 to S6. For example, in some embodiments, methods S1-S6 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as a storage unit. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device via the ROM and/or the communication unit. When the computer program is loaded into RAM and executed by the CPU, one or more steps of the methods S1 to S6 described above may be performed. Alternatively, in other embodiments, the CPU may be configured to perform methods S1-S6 in any other suitable manner (e.g., by means of firmware).
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include: field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on a chip (SOC), complex programmable logic devices (CPLD), and the like.
Program code for carrying out methods of the present invention may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the invention is defined by the claims.

Claims (10)

1. A vehicle-road joint positioning method, characterized by comprising the following steps:
step 1: establishing a mapping relation between global positioning information and codes based on a hierarchical classification coding rule;
step 2: comprehensively considering information integrity, recognition difficulty and arrangement difficulty, selecting a two-dimensional code or a bar code to encode and characterize the road position information;
step 3: optimizing layout parameters by adopting an intelligent optimization algorithm, and carrying out road coding layout;
step 4: acquiring a road image by using a visual sensor, and detecting and dividing road coding information by using a target detection model to obtain a coding visual detection result;
step 5: based on the coded visual detection result, extracting representative geometric feature points, performing motion compensation by using inertial navigation equipment, and estimating dynamic relative pose between a vehicle and road coding;
step 6: and carrying out multi-mode fusion on the autonomous positioning information of the vehicle and the road coding positioning information by adopting a Kalman filter to obtain a final combined positioning result.
2. The vehicle-road joint positioning method according to claim 1, wherein the hierarchical classification coding rule in step 1 is specifically:
splitting information to be encoded into character units, encoding each character unit into encoding graphic units, and then combining the encoding graphic units into a complete encoding graphic according to the information to be encoded; and the coding graph is provided with check bits for verifying the accuracy of the decoding information.
3. A vehicle-road joint positioning method according to claim 1, wherein the road surface luminescent material in step 3 includes, but is not limited to, fluorite.
4. The vehicle-road joint positioning method according to claim 1, wherein the intelligent optimization algorithm in the step 3 includes, but is not limited to, a particle swarm algorithm and an ant colony algorithm.
5. The vehicle-road joint positioning method according to claim 1, wherein the target detection model in the step 4 is a single-stage target detector based on a lightweight characteristic enhanced convolutional neural network.
6. The vehicle-road joint positioning method according to claim 5, wherein the step 4 includes the sub-steps of:
step 4-1: detecting by using a target detection model, and cutting out a detection frame containing road codes from the image;
step 4-2: cutting out a road code from the detection frame obtained in the step 4-1, and converting the road code to a top view angle through perspective conversion;
step 4-3: decoding the road code obtained in step 4-2 according to the code characterization mode determined in step 2, to obtain the transverse coordinate P_x and the longitudinal coordinate P_y of the road code in the world coordinate system and the road traffic direction angle θ at the road code position, and verifying the positioning information obtained after decoding.
7. The vehicle-road joint positioning method according to claim 6, wherein the step 5 comprises the following sub-steps:
step 5-1: establishing a road coding coordinate system by taking a road coding upper left corner point as an origin, taking the connecting line direction from the upper left corner point to the upper right corner point as an x-axis positive direction and taking the connecting line direction from the upper left corner point to the lower left corner point as a y-axis positive direction;
step 5-2: extracting 4 geometric feature points from the road code, estimating the pose of the visual sensor by an iterative method, and obtaining the rotation matrix R and the translation vector T of the visual sensor coordinate system relative to the road coding coordinate system, where T = [t_x, t_y, t_z], and t_x, t_y, t_z respectively represent the transverse, longitudinal and vertical distances of the visual sensor relative to the origin of the coding coordinate system;
step 5-3: according to the road code pose information obtained in step 4 and the pose of the visual sensor relative to the road code obtained in step 5-2, calculating the transverse position coordinate P_xr and the longitudinal position coordinate P_yr of the vehicle rear-axle center in the world coordinate system:
P_xr = P_x + T_x·cosθ + (T_y + L)·sinθ
P_yr = P_y + T_x·sinθ − (T_y + L)·cosθ
Wherein L is the distance between the installation position of the vision sensor and the center of the rear axle of the vehicle;
step 5-4: performing motion compensation on the positioning information obtained in step 5-3 by using the vehicle-mounted inertial navigation module, according to the run time of the algorithm; if the algorithm run time is dt and the vehicle speed is v, the compensated vehicle positioning information is calculated as:
P_xr_c = P_x + T_x·cosθ + (T_y + L − vdt)·sinθ
P_yr_c = P_y + T_x·sinθ − (T_y + L − vdt)·cosθ.
8. The vehicle-road joint positioning method according to claim 1, wherein the step 6 comprises the following sub-steps:
step 6-1: lateral position coordinate P of vehicle under world coordinates obtained by using GNSS-IMU-RTK integrated navigation module xv And a longitudinal position coordinate P yv Obtaining autonomous positioning information of the vehicle;
step 6-2: the method comprises the steps of utilizing a Kalman filter to fuse autonomous positioning information of a vehicle and positioning information calculated based on road coding, recording a time stamp of each positioning information input, utilizing a prediction step of the Kalman filter to synchronize the vehicle positioning information and covariance matrix thereof to a new positioning information input moment, and then fusing an priori estimated value of the vehicle positioning information with a newly input positioning information observed value in an updating step of the Kalman filter to obtain a posterior estimated value of the vehicle positioning information.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, characterized in that the processor, when executing the program, implements the method according to any of claims 1-8.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any one of claims 1-8.
CN202311162308.1A 2023-09-08 2023-09-08 Vehicle-road joint accurate positioning method, device and storage medium Pending CN117330058A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311162308.1A CN117330058A (en) 2023-09-08 2023-09-08 Vehicle-road joint accurate positioning method, device and storage medium


Publications (1)

Publication Number Publication Date
CN117330058A 2024-01-02

Family

ID=89294220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311162308.1A Pending CN117330058A (en) 2023-09-08 2023-09-08 Vehicle-road joint accurate positioning method, device and storage medium

Country Status (1)

Country Link
CN (1) CN117330058A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117848331A (en) * 2024-03-06 2024-04-09 河北美泰电子科技有限公司 Positioning method and device based on visual tag map



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination