CN111854770B - Vehicle positioning system and method - Google Patents


Info

Publication number
CN111854770B
CN111854770B (application CN201910364956.2A)
Authority
CN
China
Prior art keywords
synchronization signal
vehicle
mcu
received
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910364956.2A
Other languages
Chinese (zh)
Other versions
CN111854770A (en)
Inventor
王唯达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Momenta Technology Co Ltd
Original Assignee
Beijing Momenta Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Momenta Technology Co Ltd filed Critical Beijing Momenta Technology Co Ltd
Priority to CN201910364956.2A priority Critical patent/CN111854770B/en
Publication of CN111854770A publication Critical patent/CN111854770A/en
Application granted granted Critical
Publication of CN111854770B publication Critical patent/CN111854770B/en


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation specially adapted for navigation in a road network
    • G01C21/28 Navigation with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching specially adapted for specific applications
    • G01C21/343 Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G01C21/3446 Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47 The supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of the invention disclose a vehicle positioning system and method, wherein the positioning system comprises an MCU, an NPU and a processor. When the processor receives the i-th hardware synchronization signal sent by the camera, it processes the received perception result sent by the NPU and the MCU's preprocessing result for the (i-1)-th hardware synchronization signal to estimate the running track of the vehicle; when the processor receives the (i+1)-th hardware synchronization signal sent by the camera, it optimizes the running track to obtain a positioning result of the vehicle. Each time the camera sends a hardware synchronization signal, the NPU carries out perception calculation on the received image data to obtain a perception result corresponding to that hardware synchronization signal, and the MCU preprocesses the received data of the other sensors to obtain a preprocessing result corresponding to that hardware synchronization signal. This scheme solves the technical problem that the positioning algorithm cannot execute normally because the positioning system has insufficient computing power.

Description

Vehicle positioning system and method
Technical Field
The invention relates to the technical field of automatic driving, in particular to a vehicle positioning system and a vehicle positioning method.
Background
Deep-learning visual methods for automatic driving, combined with Global Positioning System (GPS) positioning, use SLAM (Simultaneous Localization and Mapping) technology to determine the longitude, latitude and altitude of a specific object. At present, a dedicated deep learning chip can achieve functions such as real-time mapping and positioning by using SLAM mapping and positioning techniques together with hardware fusion of external multi-sensor data such as GPS, IMU (Inertial Measurement Unit), wheel speed and radar.
Although a dedicated deep learning chip can integrate multiple hardware modules to realize real-time positioning of the vehicle, the positioning algorithm is generally complex in order to obtain an accurate positioning result. The SoC may then be unable to execute the positioning algorithm due to insufficient computing power, affecting normal positioning of the vehicle.
Disclosure of Invention
The embodiments of the invention disclose a vehicle positioning system and method, which solve the technical problem that a positioning algorithm cannot execute normally because a dedicated deep learning chip has insufficient computing power.
In a first aspect, an embodiment of the present invention discloses a positioning system for a vehicle, including:
a microcontroller (MCU), a deep learning module (NPU), and a processor connected to both the MCU and the NPU, wherein the MCU is externally connected to a plurality of sensors, and the NPU, the processor and the MCU are all connected to an external camera;
the processor is configured to, when receiving the i-th hardware synchronization signal sent by the camera, process the received perception result sent by the NPU and the MCU's preprocessing result for the (i-1)-th hardware synchronization signal so as to estimate the running track of the vehicle;
when the processor receives the (i+1)-th hardware synchronization signal sent by the camera, it optimizes the running track to obtain a positioning result of the vehicle;
wherein i is an integer greater than 1, and each time the camera transmits a hardware synchronization signal, the NPU is configured to carry out perception calculation on the received image data to obtain a perception result corresponding to that hardware synchronization signal, and the MCU is configured to preprocess the received sensor data other than the image data to obtain a preprocessing result corresponding to that hardware synchronization signal.
Optionally, the processor is specifically configured to:
when the i-th hardware synchronization signal sent by the camera is received, determine position change information of feature points across multiple frames of perception images according to the received perception result sent by the NPU;
and carry out covariance calculation on the preprocessing results of the other sensor data sent by the MCU, and fuse the calculation result with the position change information to estimate the running track of the vehicle.
Optionally, the processor is specifically configured to:
when the (i+1)-th hardware synchronization signal sent by the camera is received, perform iterative optimization on the running track using a nonlinear optimization algorithm to obtain the positioning result of the vehicle.
Optionally, the NPU is specifically configured to:
when a hardware synchronization signal sent by the camera is received and a start-identification instruction sent by the processor is received, sequentially identify each piece of road information in the received image data, and send an interrupt signal to the processor after each piece of road information is identified;
and when a call instruction, sent by the processor, for the recognition model corresponding to the next piece of road information is received, identify the next piece of road information based on that recognition model, wherein the recognition model associates the road information with the positions of the feature points in the road information.
Optionally, the MCU is specifically configured to:
acquire the sensor data other than the image data together with time-service information, wherein the other sensor data comprises inertial measurement unit (IMU) data, position data and wheel speed data, and the time-service information is acquired through the Global Positioning System (GPS);
and carry out timestamp synchronization on the other sensor data according to the time-service information to obtain the timestamp corresponding to each item of the other sensor data.
Optionally, the MCU is specifically configured to: performing pre-integration processing on IMU data according to the following formula to obtain a pre-processing result of the IMU data:
R(t+Δt) = R(t) · Exp( (ω(t) − b_g(t) − η_gd(t)) · Δt )
v(t+Δt) = v(t) + g·Δt + R(t) · (a(t) − b_a(t) − η_ad(t)) · Δt
p(t+Δt) = p(t) + v(t)·Δt + (1/2)·g·Δt² + (1/2)·R(t) · (a(t) − b_a(t) − η_ad(t)) · Δt²
where ω(t) represents the angular velocity at time t, b_g(t) represents the gyroscope zero offset at time t, a(t) represents the acceleration at time t, b_a(t) represents the accelerometer zero offset at time t, η_gd(t) and η_ad(t) represent the IMU noise terms, g denotes gravity, R(t+Δt) denotes the rotation matrix at time (t+Δt), v(t+Δt) denotes the velocity matrix at time (t+Δt), and p(t+Δt) denotes the displacement matrix at time (t+Δt).
Optionally, the MCU is specifically configured to:
after the preprocessing results of the other sensors are obtained, if the hardware synchronization signal sent by the camera is received again, send the preprocessing results of the sensor data together with the corresponding timestamp information to the processor as a single packet.
In a second aspect, an embodiment of the present invention further provides a vehicle positioning method, which is applied to a processor in a vehicle positioning system, and the method includes:
when the i-th hardware synchronization signal sent by the camera is received, processing the received perception result sent by the NPU and the MCU's preprocessing result for the (i-1)-th hardware synchronization signal so as to estimate the running track of the vehicle;
when the (i+1)-th hardware synchronization signal sent by the camera is received, optimizing the running track to obtain a positioning result of the vehicle;
wherein i is an integer greater than 1, and each time the camera sends a hardware synchronization signal, the NPU performs perception calculation on the received image data to obtain a perception result corresponding to that hardware synchronization signal, and the MCU preprocesses the received sensor data other than the image data to obtain a preprocessing result corresponding to that hardware synchronization signal.
Optionally, when the i-th hardware synchronization signal sent by the camera is received, processing the received perception result sent by the deep learning module NPU and the microcontroller MCU's preprocessing result for the (i-1)-th hardware synchronization signal to estimate the running track of the vehicle includes:
when the i-th hardware synchronization signal sent by the camera is received, determining position change information of feature points across multiple frames of perception images according to the received perception result sent by the NPU;
and carrying out covariance calculation on the preprocessing results of the other sensor data sent by the MCU, and fusing the calculation result with the position change information to estimate the running track of the vehicle.
Optionally, the optimizing the running track to obtain a positioning result of the vehicle includes:
and carrying out iterative optimization processing on the running track by adopting a nonlinear optimization algorithm to obtain a positioning result of the vehicle.
In a third aspect, an embodiment of the present invention further provides a vehicle-mounted terminal, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute part or all of the steps of the positioning method of the vehicle provided by any embodiment of the invention.
In a fourth aspect, the embodiments of the present invention further provide a computer-readable storage medium storing a computer program including instructions for executing part or all of the steps of the positioning method of the vehicle provided in any embodiment of the present invention.
In a fifth aspect, the embodiments of the present invention further provide a computer program product, which when run on a computer, causes the computer to execute part or all of the steps of the positioning method for a vehicle provided in any embodiment of the present invention.
In the technical scheme provided by the embodiments of the invention, the positioning system comprises an MCU, an NPU and a processor. The processor is configured to, when receiving the i-th hardware synchronization signal sent by the camera, process the received perception result sent by the NPU and the MCU's preprocessing result for the (i-1)-th hardware synchronization signal so as to estimate the running track of the vehicle. When the processor receives the (i+1)-th hardware synchronization signal sent by the camera, it optimizes the running track to obtain a positioning result of the vehicle. Each time the camera transmits a hardware synchronization signal, the NPU carries out perception calculation on the received image data to obtain a perception result corresponding to that signal, and the MCU preprocesses the received sensor data other than the image data to obtain a preprocessing result corresponding to that signal. This arrangement lets the processor execute the front-end, image-feature and back-end algorithms on the same data in a time-staggered manner, and likewise staggers the CPU-MCU and CPU-NPU processing of the same data packet. It thus solves the technical problem of insufficient computing power caused by an existing dedicated deep learning chip executing the front-end, image-feature and back-end algorithms on the same data packet within the same time slot, reduces frequent processor interrupts in the positioning system, and improves the computation rate of the vehicle positioning system.
The main invention points are as follows:
1. When the i-th hardware synchronization signal sent by the camera is received, the CPU processes the received perception result sent by the NPU and the MCU's preprocessing result for the (i-1)-th hardware synchronization signal so as to estimate the running track of the vehicle. When the processor receives the (i+1)-th hardware synchronization signal sent by the camera, it optimizes the running track to obtain a positioning result of the vehicle. Each time the camera transmits a hardware synchronization signal, the NPU carries out perception calculation on the received image data to obtain a perception result corresponding to that signal, and the MCU preprocesses the received sensor data other than the image data to obtain a preprocessing result corresponding to that signal. This arrangement lets the processor run the front-end, image-feature and back-end algorithms on the same data in a time-staggered manner, and likewise staggers the CPU-MCU and CPU-NPU processing of the same data packet, thereby solving the technical problem of insufficient computing power caused by an existing dedicated deep learning chip executing the front-end, image-feature and back-end algorithms on the same data packet within the same time slot, and improving the operation rate of the vehicle positioning system; this is one of the invention points.
2. In the vehicle positioning system, the MCU sends the preprocessing results to the processor as a single packet. Compared with the prior-art approach in which multi-sensor data enters the processor directly without passing through an MCU, this reduces the frequent CPU interrupts caused by multi-sensor data entering the CPU directly and improves the operation rate of the CPU; this is one of the invention points.
3. Each time the NPU identifies an object, it selects the recognition model to use according to the recognition-model call instruction sent by the processor, and identifies the object based on that model. Compared with the prior-art approach of identifying all objects with the same recognition model, the embodiments of the invention solve the problem of inaccurate object recognition in the prior art and improve the accuracy of the recognition result; this is one of the invention points.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1a is a block diagram of a vehicle positioning system according to an embodiment of the present invention;
FIG. 1b is a timing diagram illustrating the execution of modules in a positioning system according to an embodiment of the present invention;
FIG. 2a is a timing diagram of an NPU interacting with a CPU according to an embodiment of the present invention;
FIG. 2b is a timing diagram of the multi-sensor and the MCU, and the interaction between the MCU and the CPU according to the embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating a method for locating a vehicle according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Embodiment 1
Referring to fig. 1a, fig. 1a is a block diagram of a vehicle positioning system according to an embodiment of the present invention. The system can be applied to the vehicle positioning process in automatic driving and can also be applied to the map building process of a navigation map. The positioning of the vehicle or the mapping process of the navigation map can be realized by a deep learning special chip. As shown in fig. 1a, the positioning system of the vehicle provided in this embodiment specifically includes: a Microcontroller 110 (MCU), a deep learning module 120 (NPU), and a processor 130 connected to the MCU and the NPU, respectively. The MCU is externally connected with a plurality of sensors 140, and the NPU, the MCU and the processor are all connected with an external camera 150. The processor in the embodiment of the present invention may be a Central Processing Unit (CPU) or a Graphics Processing Unit (GPU). The technical solution of this embodiment preferably adopts a CPU as the processor.
The processor 130 is configured to, when receiving the i-th hardware synchronization signal sent by the camera, process the received perception result sent by the NPU and the MCU's preprocessing result for the (i-1)-th hardware synchronization signal to estimate the running track of the vehicle;
when the processor receives the (i+1)-th hardware synchronization signal sent by the camera, it optimizes the running track of the vehicle to obtain the positioning result of the vehicle;
where i is an integer greater than 1, and each time the camera transmits a hardware synchronization signal, the NPU is configured to carry out perception calculation on the received image data to obtain a perception result corresponding to that hardware synchronization signal, and the MCU is configured to preprocess the received sensor data other than the image data to obtain a preprocessing result corresponding to that hardware synchronization signal.
For example, fig. 1b is an execution timing diagram of the modules in a positioning system according to an embodiment of the present invention. As shown in fig. 1b, the hardware synchronization signal sent by the camera is specifically a Vsync signal. When the MCU receives the hardware synchronization signal Vsync1 sent by the camera, it preprocesses the received sensor data other than the image data to obtain the preprocessing result corresponding to the hardware synchronization signal Vsync1, i.e. data packet 11. When receiving the hardware synchronization signal Vsync1 sent by the camera, the NPU performs perception calculation on the received image data to obtain the perception result corresponding to the hardware synchronization signal Vsync1, i.e. data packet 12. The sensor data other than the image are: IMU data, position data, and wheel speed data.
As shown in fig. 1b, when the next hardware synchronization signal sent by the camera is received, i.e. Vsync2, the MCU and the NPU send their corresponding data packets to the CPU for processing, and respectively process the newly received sensor data and image data to obtain data packet 21 and data packet 22. For the CPU, upon receiving Vsync2, it processes the MCU's preprocessing result for the Vsync1 hardware synchronization signal, i.e. data packet 11, together with the received perception results (including data packet 12 and the other perception results that the NPU has already identified); that is, it executes the front-end 1 and image-feature 1 extraction algorithms. Specifically, the CPU performs covariance calculation on the sensor data other than the image data according to data packet 11 sent by the MCU to estimate the movement of the vehicle. In addition, for the NPU's data packet 12 corresponding to the Vsync1 hardware synchronization signal, the CPU can extract the position of each feature point in the image data based on image-gradient feature point extraction algorithms such as ORB (Oriented FAST and Rotated BRIEF, an algorithm for fast feature point extraction and description), SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features) or optical flow, and estimate the position change of each feature point in combination with the perception results of the NPU received before the Vsync1 hardware synchronization signal. The CPU then fuses the position change information with the covariance calculation result to estimate the running track of the vehicle.
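The fusion step described above can be illustrated with a minimal covariance-weighted combination of two independent estimates of the same state (for example, a visual estimate and an estimate from the other sensors). This is a sketch under assumed toy values, not the patent's implementation; the function name and the 2-D state are illustrative:

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Covariance-weighted fusion of two independent estimates of the same state.

    x1, x2: state estimates (e.g. 2-D positions); P1, P2: their covariances.
    The estimate with the smaller covariance receives the larger weight.
    """
    K = P1 @ np.linalg.inv(P1 + P2)   # gain favoring the more certain estimate
    x = x1 + K @ (x2 - x1)            # fused state
    P = P1 - K @ P1                   # fused (reduced) covariance
    return x, P

# Two equally uncertain position estimates average out:
x, P = fuse(np.array([0.0, 0.0]), np.eye(2),
            np.array([2.0, 2.0]), np.eye(2))
```

With equal covariances the fused state is the midpoint and the fused covariance is halved, which is the expected behavior of this kind of weighting.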
As shown in fig. 1b, when the next hardware synchronization signal Vsync3 sent by the camera is received, the CPU executes the back-end 1 algorithm on the estimated vehicle trajectory, specifically: it performs iterative optimization on the estimated running track of the vehicle using a nonlinear optimization algorithm, thereby determining the positioning result of the vehicle. In addition, when the camera sends the hardware synchronization signal Vsync3, the MCU and the NPU continue to process the newly received sensor data and image data to obtain data packets 31 and 32, and the CPU still executes the front-end 2 and image-feature 2 algorithms on the MCU's preprocessing result and the NPU's perception result corresponding to Vsync2.
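The back end's iterative nonlinear optimization can be sketched with a generic Gauss-Newton loop. The range-to-landmark toy problem below is an assumption chosen for illustration; the patent does not specify its actual cost function:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=10):
    """Generic Gauss-Newton iteration: repeatedly solve the normal equations."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        x = x + np.linalg.solve(J.T @ J, -J.T @ r)  # Gauss-Newton step
    return x

# Toy problem: recover a 2-D vehicle position from exact ranges to known landmarks.
landmarks = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
true_pos = np.array([2.0, 3.0])
ranges = np.linalg.norm(landmarks - true_pos, axis=1)

def residual(x):
    # r_i = ||x - L_i|| - d_i
    return np.linalg.norm(landmarks - x, axis=1) - ranges

def jacobian(x):
    # dr_i/dx = (x - L_i) / ||x - L_i||
    diff = x - landmarks
    return diff / np.linalg.norm(diff, axis=1, keepdims=True)

estimate = gauss_newton(residual, jacobian, np.array([1.0, 1.0]))
```

Starting from a rough initial guess, a few iterations drive the residual to zero, which is the same refine-until-converged pattern the back end applies to the trajectory estimate.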
As shown in fig. 1b, each module may operate according to the pipeline parallel processing manner when receiving the next hardware synchronization signal Vsync4 sent by the camera.
In this embodiment, in order to better describe the pipelined parallel processing, labels are attached to the CPU's front-end, image-feature and back-end algorithms; the labels merely describe the correspondence between an algorithm and the data packet it processes and place no restriction on the execution order. By adopting this pipelined parallel processing mode, the interrupt frequency of the CPU in the existing dedicated deep learning chip is reduced, the problem of insufficient computing power caused by the existing chip executing the front-end, image-feature and back-end algorithms on the same data packet within the same time slot is solved, and the operation rate of the dedicated deep learning chip is improved.
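The staggered assignment of work to Vsync ticks described above can be sketched as a small schedule generator. The stage names are illustrative labels, not identifiers from the patent:

```python
def pipeline_schedule(n_vsyncs):
    """For each Vsync tick i (1-indexed), list which frame's data each unit works on.

    Mirrors the timing of fig. 1b: at Vsync i the MCU/NPU process frame i,
    the CPU front-end/image-feature stage processes frame i-1, and the CPU
    back-end optimization processes frame i-2.
    """
    schedule = []
    for i in range(1, n_vsyncs + 1):
        work = {"mcu_preprocess": i, "npu_perception": i}
        if i >= 2:
            work["cpu_front_end_and_features"] = i - 1
        if i >= 3:
            work["cpu_back_end_optimize"] = i - 2
        schedule.append(work)
    return schedule
```

At Vsync3, for example, the back end optimizes the trajectory estimated from frame 1 while the front end processes the frame-2 packets and the MCU/NPU ingest frame 3, so no unit waits on another within the same tick.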
It should be noted that, in the vehicle positioning system, modules such as the MCU and the GPU may also be used independently as external hardware communicating directly with the positioning system. In addition, the positioning system of this embodiment may provide communication interfaces such as UART (Universal Asynchronous Receiver/Transmitter), SPI (Serial Peripheral Interface), GPIO (General-Purpose Input/Output), MIPI (Mobile Industry Processor Interface), HDMI (High-Definition Multimedia Interface), WiFi (Wireless Fidelity) networking and 4G/5G networking, and can be connected to the vehicle-mounted device to transmit and receive information and to access a hard disk.
Specifically, the following describes each module involved in the positioning system of the vehicle in this embodiment in detail:
for an NPU, the NPU is specifically configured to:
when a hardware synchronization signal sent by a camera is received, if an initial identification instruction sent by a processor is obtained, sequentially identifying each piece of road information in received image data, and sending an interrupt signal to the processor after each piece of road information is identified; and when receiving a calling instruction of an identification model corresponding to the next road information sent by the processor, identifying the next road information based on the identification model, wherein the identification model enables the road information to be associated with the position of the characteristic point in the road information.
The image data received by the NPU may be data preprocessed by the camera, and the preprocessing may be distortion removal, image enhancement, white balance, auto focus, auto exposure, and the like.
Specifically, fig. 2a is a timing diagram of interaction between the NPU and the CPU according to an embodiment of the present invention. As shown in fig. 2a, when the same hardware synchronization signal arrives, the NPU can process the perceived objects obj, i.e. the road information in the image, based on algorithms such as YOLO (You Only Look Once, a real-time object detection system), Faster R-CNN (Faster Region-based Convolutional Neural Network) or GoogLeNet (a deep neural network). The road information may include lane lines, traffic signs, street lamp poles, and the like. For each piece of road information to be identified, the CPU issues a recognition command, i.e. configures a memory pointer specifying the recognition model the NPU should call for that road information. Upon receiving the command, the NPU can determine the positions of the road-information feature points based on the recognition model. Accordingly, having different objects correspond to different recognition models improves the accuracy of the NPU's recognition results.
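The per-object model selection can be sketched as a simple dispatch table keyed by road-information class, where the CPU's call instruction picks the model. The class names and dummy models below are assumptions for illustration, not the patent's models:

```python
class Recognizer:
    """Select a dedicated recognition model per road-object class, as directed by the CPU."""

    def __init__(self, models):
        self.models = models  # class name -> model callable

    def recognize(self, obj_class, image_patch):
        model = self.models[obj_class]  # the CPU's call instruction selects the model
        return model(image_patch)

# Dummy stand-in models: each returns a labeled "recognition" result.
models = {
    "lane_line":    lambda patch: ("lane_line", len(patch)),
    "traffic_sign": lambda patch: ("traffic_sign", len(patch)),
}
recognizer = Recognizer(models)
result = recognizer.recognize("lane_line", [0.1, 0.2, 0.3])
```

The point of the design is that each class gets a model trained for it, rather than one model for all objects.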
For example, the recognition model may be trained as follows:
acquiring a historical image containing road information;
extracting characteristic points of the road information of the historical image to obtain a characteristic sequence corresponding to the historical image;
generating sample data based on the feature sequences of the plurality of historical images and the positions of the road information feature points in the historical images;
training the initial neural network model with the sample data to obtain the recognition model, wherein the recognition model associates the road information in the historical images with the positions of the feature points in the road information.

For the MCU, the MCU is specifically configured to:
when the ith hardware synchronization signal sent by the camera is received, preprocess the received sensor data other than the image data to obtain a preprocessing result corresponding to the ith hardware synchronization signal.
Specifically, fig. 2b is a timing diagram of the interaction between the multiple sensors and the MCU and between the MCU and the CPU according to an embodiment of the present invention. As shown in fig. 2b, upon receiving the hardware synchronization signal sent by the camera, the IMU, the wheel speed meter (Wheel) and the GPS send data signals to the MCU. The IMU provides the angular velocity and acceleration of the vehicle, the GPS provides the longitude, latitude, altitude, speed and heading of the vehicle, and the wheel speed meter provides the wheel speed of the vehicle.
For example, the preprocessing of the respective sensor data by the MCU may include: pre-integrating the IMU data, performing unit conversion on the GPS data, and the like. The IMU data may be pre-integrated according to the following formulas to obtain the IMU data preprocessing result:
R(t + Δt) = R(t) · Exp((ω(t) − b_g(t) − η_gd(t)) Δt)

v(t + Δt) = v(t) + g Δt + R(t) (a(t) − b_a(t) − η_ad(t)) Δt

p(t + Δt) = p(t) + v(t) Δt + ½ g Δt² + ½ R(t) (a(t) − b_a(t) − η_ad(t)) Δt²

where ω(t) represents the angular velocity at time t, b_g(t) represents the gyroscope zero offset at time t, a(t) represents the acceleration at time t, b_a(t) represents the accelerometer zero offset at time t, η_gd(t) and η_ad(t) represent the noise of the IMU, g represents gravity, Exp(·) denotes the exponential map of SO(3), R(t + Δt) represents the rotation matrix at time (t + Δt), v(t + Δt) represents the velocity matrix at time (t + Δt), and p(t + Δt) represents the displacement matrix at time (t + Δt).
In addition, the MCU can also acquire the time service information provided by the GPS, and can add a time stamp to each received data signal (from the IMU, the wheel speed meter, and the like) according to the time service information. Further, the added time stamps can be corrected by using a timer in the CPU. This arrangement allows the time stamps of the multiple sensors to be aligned.
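A toy sketch of the timestamping step, assuming the MCU observes one matched pair of GPS time and local counter value and then stamps every subsequent sample with the derived offset (the function names are illustrative, not from the patent):

```python
def make_stamper(gps_time, local_time):
    """Return a stamping function aligned to GPS time.

    gps_time / local_time: a matched pair observed at the same instant,
    e.g. GPS time-service message vs. the MCU's local counter.
    """
    offset = gps_time - local_time

    def stamp(sample, local_t):
        # Convert the local counter reading into a GPS-aligned timestamp.
        return {"data": sample, "timestamp": local_t + offset}

    return stamp
```

Because all sensors are stamped against the same offset, their timestamps land on a common timeline, which is what makes the later fusion step well-posed.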
Further, after the MCU completes the preprocessing of all the sensor data it has received, when the MCU next receives the hardware synchronization signal sent by the camera, it sends the preprocessing results of the sensor data and the corresponding timestamp information to the processor as a whole packet through a DMA mechanism or a transport protocol mechanism. Compared with the prior-art approach in which the multi-sensor data enters the CPU directly without passing through the MCU, this reduces the frequent interruption of the CPU caused by the multi-sensor data entering the CPU directly, and improves the operation rate of the CPU.
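The whole-packet idea can be sketched as follows, with a hypothetical binary layout (a count header followed by fixed-size (sensor id, timestamp, value) records); the actual DMA/transport framing is not specified by the patent:

```python
import struct

RECORD_FMT = "<Bdd"  # sensor id (u8), timestamp (f64), value (f64), little-endian

def pack_results(results):
    """Bundle (sensor_id, timestamp, value) triples into one binary packet,
    so the CPU is interrupted once per frame instead of once per sample."""
    header = struct.pack("<I", len(results))  # number of records
    body = b"".join(struct.pack(RECORD_FMT, sid, ts, val)
                    for sid, ts, val in results)
    return header + body

def unpack_results(packet):
    """CPU side: recover the list of (sensor_id, timestamp, value) triples."""
    (n,) = struct.unpack_from("<I", packet, 0)
    out, off = [], 4
    size = struct.calcsize(RECORD_FMT)
    for _ in range(n):
        out.append(struct.unpack_from(RECORD_FMT, packet, off))
        off += size
    return out
```

One packet per camera frame keeps the interrupt rate bounded by the frame rate rather than by the (much higher) aggregate sensor sample rate.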
For a processor, the processor is configured to:
when the ith hardware synchronization signal sent by the camera is received, determine position change information of the feature points in multiple frames of perception images according to the received perception results sent by the NPU, and perform covariance calculation on the preprocessing results corresponding to the other sensor data sent by the MCU, i.e. take each piece of sensor data and its corresponding timestamp as covariance data; and fuse the covariance calculation result with the position change information to estimate the running track of the vehicle.
Specifically, the CPU may process the multiple frames of perception images based on a sliding window algorithm: each time one frame of perception image is received, the frame at the head of the sliding window is deleted and the newly received perception image is added to the tail of the sliding window, so that the CPU can determine the position change information of the feature points according to the multiple frames of perception images in the sliding window.
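The sliding window described above amounts to a fixed-size FIFO over perception frames; a minimal sketch (a `deque` with `maxlen` performs the head eviction automatically):

```python
from collections import deque

class SlidingWindow:
    """Keep the most recent N perception frames: adding frame N+1 drops the
    oldest frame at the head, as described above."""

    def __init__(self, size):
        self.frames = deque(maxlen=size)

    def push(self, frame):
        self.frames.append(frame)  # deque with maxlen evicts the head itself

    def current(self):
        return list(self.frames)
```

Feature-point position changes are then computed only between the frames currently held in the window, bounding both memory and per-frame compute.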
The feature points in the perception image are corner points in the image, such as the top of a light pole or the top of a traffic sign. After the position change information of the feature points in the perception image is determined, it is fused with the result obtained from the covariance calculation of the other sensor data, so that the running track of the vehicle is estimated; this constitutes the front-end algorithm. The front-end algorithm may be a Kalman Filter, an Extended Kalman Filter (EKF), or a Multi-State Constraint Kalman Filter (MSCKF), which is not specifically limited in this embodiment. In this way, the fusion of the multiple sensors is realized by means of the front-end algorithm.
Further, when the processor receives the i +1 th hardware synchronization signal transmitted by the camera, the processor is configured to:
and performing iterative optimization processing on the estimated running track of the vehicle by adopting a nonlinear optimization algorithm to obtain a positioning result of the vehicle.
The nonlinear optimization algorithm may be a Local/Global BA (Bundle Adjustment) algorithm. Specifically, in the iterative process, the perception images can be used as a reference, and the pose of the vehicle provided by the GPS is continuously corrected by the algorithm, so that more accurate positioning is obtained. In addition, during operation the CPU runs in parallel with the other modules in time sequence in a pipelined processing manner, so that frequent interruption of the CPU can be reduced, the problem of insufficient computing power of the CPU is alleviated, and the operation rate of the positioning system is improved.
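As a toy stand-in for the BA-style iterative optimization, the following Gauss-Newton sketch refines a scalar position estimate against several direct observations (a deliberately simplified 1-D problem, not the patent's actual optimizer):

```python
import numpy as np

def gauss_newton_1d(x0, observations, iters=10):
    """Toy iterative optimization: refine a scalar position x so it best
    explains direct observations z_i = x + noise (residual r_i = z_i - x)."""
    x = float(x0)
    for _ in range(iters):
        r = observations - x                 # residual vector
        J = -np.ones_like(observations)      # Jacobian d r_i / d x
        # Normal-equation step: dx = -(J^T J)^-1 J^T r
        dx = -np.sum(J * r) / np.sum(J * J)
        x += dx
    return x
```

For this linear residual the iteration converges to the observation mean in one step; in the real system the residuals (reprojection errors, GPS pose errors) are nonlinear, which is why the step is repeated until convergence.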
According to the technical scheme provided by the embodiment, the modules in the vehicle positioning system are controlled to run in a pipelined parallel mode in a time sequence, so that the frequent interruption phenomenon of a CPU in the positioning system can be reduced, the technical problem that a positioning algorithm cannot be normally executed due to insufficient computing power of the positioning system of the vehicle is avoided, and the operation rate of the positioning system is improved.
Example two
Referring to fig. 3, fig. 3 is a flowchart illustrating a vehicle positioning method according to an embodiment of the invention, where the method is executed by a processor in a vehicle positioning system. As shown in fig. 3, the method includes:
210. When the ith hardware synchronization signal sent by the camera is received, processing the received perception result sent by the NPU and the preprocessing result of the MCU for the (i-1)th hardware synchronization signal to estimate the running track of the vehicle.
Wherein i is an integer greater than 1; each time the camera sends a hardware synchronization signal, the NPU performs perception calculation on the received image data to obtain a perception result corresponding to that hardware synchronization signal, and the MCU preprocesses the received sensor data other than the image data to obtain a preprocessing result corresponding to that hardware synchronization signal.
Illustratively, step 210 may be specifically implemented by the following steps:
when the ith hardware synchronization signal sent by the camera is received, determining position change information of the feature points in multiple frames of perception images according to the received perception results sent by the NPU; performing covariance calculation on the preprocessing results of the other sensor data sent by the MCU, and fusing the calculation result with the position change information to estimate the running track of the vehicle.
Specifically, the content of the above embodiment may be referred to in the interaction process between the NPU and the processor, and between the MCU and the processor, which is not described in detail in this embodiment.
220. When the (i+1)th hardware synchronization signal sent by the camera is received, optimizing the running track to obtain a positioning result of the vehicle.
Illustratively, a nonlinear optimization algorithm can be adopted to perform iterative optimization processing on the running track to obtain the positioning result of the vehicle. For the specific optimization process, reference may be made to the content provided by the above embodiment, which is not described in detail again here.
According to the technical scheme provided by this embodiment, the modules in the vehicle positioning system are controlled to run in a pipelined parallel manner, so that frequent interruption of the CPU in the positioning system can be reduced, the technical problem that the positioning algorithm cannot be normally executed due to insufficient computing power of the vehicle positioning system is solved, and the operation rate of the positioning system is improved.
EXAMPLE III
Referring to fig. 4, fig. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention. As shown in fig. 4, the in-vehicle terminal may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
the processor 702 calls the executable program code stored in the memory 701 to execute the vehicle positioning method according to any embodiment of the present invention.
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute a positioning method of a vehicle provided by any embodiment of the invention.
The embodiment of the invention discloses a computer program product, wherein when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the positioning method of the vehicle provided by any embodiment of the invention.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A from which B can be determined. It should also be understood, however, that determining B from a does not mean determining B from a alone, but may also be determined from a and/or other information.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, can be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the above-described methods of the embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium, where the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM), or other memory such as a magnetic disk or tape memory, or any other computer-readable medium that can be used to carry or store data.
The above detailed description of the positioning system and method for a vehicle disclosed in the embodiments of the present invention is provided, and the principle and the embodiments of the present invention are explained herein by applying specific examples, and the above description of the embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A vehicle positioning system for autonomous driving, comprising: the system comprises a microcontroller MCU, a deep learning module NPU and a processor respectively connected with the MCU and the NPU, wherein the MCU is externally connected with a plurality of sensors, and the NPU, the processor and the MCU are all connected with an externally connected camera;
the processor is configured to, when receiving the ith hardware synchronization signal sent by the camera, process the received perception result sent by the NPU and the preprocessing result of the MCU for the (i-1)th hardware synchronization signal so as to estimate the running track of the vehicle;
when the processor receives the (i + 1) th hardware synchronization signal sent by the camera, optimizing the running track to obtain a positioning result of the vehicle;
wherein i is an integer greater than 1; each time the camera transmits a hardware synchronization signal, the NPU is configured to: perform perception calculation on the received image data to obtain a perception result corresponding to the hardware synchronization signal; the MCU is configured to preprocess the received sensor data other than the image data to obtain a preprocessing result corresponding to that hardware synchronization signal, wherein the sensor data other than the image data comprises inertial measurement unit (IMU) data, position data and wheel speed data.
2. The positioning system of claim 1, wherein the processor is configured to:
when the ith hardware synchronization signal sent by the camera is received, determine position change information of the feature points in multiple frames of perception images according to the received perception results sent by the NPU;
and carrying out covariance calculation on the preprocessing result of other sensor data sent by the MCU, and fusing the calculation result with the position change information to estimate the running track of the vehicle.
3. The positioning system of claim 1 or 2, wherein the processor is configured to:
and when the processor receives the (i + 1) th hardware synchronization signal sent by the camera, performing iterative optimization processing on the running track by adopting a nonlinear optimization algorithm to obtain a positioning result of the vehicle.
4. The positioning system of claim 1, wherein the NPU is specifically configured to:
when a hardware synchronization signal sent by the camera is received and a starting identification instruction sent by the processor is received, sequentially identifying each piece of road information in the received image data, and sending an interrupt signal to the processor after each piece of road information is identified;
and when receiving a calling instruction of an identification model corresponding to the next road information, which is sent by the processor, identifying the next road information based on the identification model, wherein the identification model enables the road information to be associated with the position of the characteristic point in the road information.
5. The positioning system of claim 1, wherein the MCU is specifically configured to:
acquiring other sensing data and time service information except the image data, wherein the time service information is acquired through a Global Positioning System (GPS);
and carrying out time stamp synchronization on the other sensing data according to the time service information to obtain time stamps corresponding to the other sensing data respectively.
6. The positioning system of claim 5, wherein the MCU is specifically configured to: performing pre-integration processing on IMU data according to the following formula to obtain a pre-processing result of the IMU data:
R(t + Δt) = R(t) · Exp((ω(t) − b_g(t) − η_gd(t)) Δt)

v(t + Δt) = v(t) + g Δt + R(t) (a(t) − b_a(t) − η_ad(t)) Δt

p(t + Δt) = p(t) + v(t) Δt + ½ g Δt² + ½ R(t) (a(t) − b_a(t) − η_ad(t)) Δt²

wherein ω(t) represents the angular velocity at time t, b_g(t) represents the gyroscope zero offset at time t, a(t) represents the acceleration at time t, b_a(t) represents the accelerometer zero offset at time t, η_gd(t) and η_ad(t) represent the noise of the IMU, g represents gravity, R(t + Δt) represents the rotation matrix at time (t + Δt), v(t + Δt) represents the velocity matrix at time (t + Δt), and p(t + Δt) represents the displacement matrix at time (t + Δt).
7. The positioning system of claim 1, wherein the MCU is specifically configured to:
after the preprocessing results of other sensors are obtained, if the hardware synchronization signal sent by the camera is received again, the preprocessing results of the data of the other sensors and the corresponding timestamp information are sent to the processor in the form of a whole packet.
8. A vehicle positioning method is applied to a processor in a vehicle positioning system, and is characterized by comprising the following steps:
when the ith hardware synchronization signal sent by the camera is received, processing the received perception result sent by the NPU and the preprocessing result of the MCU for the (i-1)th hardware synchronization signal to estimate the running track of the vehicle;
when an i +1 th hardware synchronization signal sent by the camera is received, optimizing the running track to obtain a positioning result of the vehicle;
wherein i is an integer greater than 1; each time the camera sends a hardware synchronization signal, the NPU performs perception calculation on the received image data to obtain a perception result corresponding to the hardware synchronization signal, and the MCU preprocesses the sensor data other than the image data to obtain a preprocessing result corresponding to the hardware synchronization signal, wherein the sensor data other than the image data comprises inertial measurement unit (IMU) data, position data and wheel speed data.
9. The method according to claim 8, wherein, when the ith hardware synchronization signal sent by the camera is received, the processing of the received perception result sent by the deep learning module NPU and the preprocessing result of the microcontroller MCU for the (i-1)th hardware synchronization signal to estimate the running track of the vehicle comprises:
when the ith hardware synchronization signal sent by the camera is received, determining position change information of the feature points in multiple frames of perception images according to the received perception results sent by the NPU;
and carrying out covariance calculation on the preprocessing result of other sensor data sent by the MCU, and fusing the calculation result with the position change information to estimate the running track of the vehicle.
10. The method according to claim 8 or 9, wherein the optimizing the running track to obtain the positioning result of the vehicle comprises:
and carrying out iterative optimization processing on the running track by adopting a nonlinear optimization algorithm to obtain a positioning result of the vehicle.
CN201910364956.2A 2019-04-30 2019-04-30 Vehicle positioning system and method Active CN111854770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910364956.2A CN111854770B (en) 2019-04-30 2019-04-30 Vehicle positioning system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910364956.2A CN111854770B (en) 2019-04-30 2019-04-30 Vehicle positioning system and method

Publications (2)

Publication Number Publication Date
CN111854770A CN111854770A (en) 2020-10-30
CN111854770B true CN111854770B (en) 2022-05-13

Family

ID=72965121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910364956.2A Active CN111854770B (en) 2019-04-30 2019-04-30 Vehicle positioning system and method

Country Status (1)

Country Link
CN (1) CN111854770B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052855B (en) * 2021-02-26 2021-11-02 苏州迈思捷智能科技有限公司 Semantic SLAM method based on visual-IMU-wheel speed meter fusion

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105222760A (en) * 2015-10-22 2016-01-06 一飞智控(天津)科技有限公司 The autonomous obstacle detection system of a kind of unmanned plane based on binocular vision and method
CN208705898U (en) * 2018-08-10 2019-04-05 北京魔门塔科技有限公司 A kind of image collecting device for neural network
CN109634263A (en) * 2018-12-29 2019-04-16 深圳市易成自动驾驶技术有限公司 Based on data synchronous automatic Pilot method, terminal and readable storage medium storing program for executing

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9489045B2 (en) * 2015-03-26 2016-11-08 Honeywell International Inc. Methods and apparatus for providing a snapshot truthing system for a tracker
CN106446815B (en) * 2016-09-14 2019-08-09 浙江大学 A kind of simultaneous localization and mapping method
EP3615955A4 (en) * 2017-04-28 2020-05-13 SZ DJI Technology Co., Ltd. Calibration of laser and vision sensors
EP3428884B1 (en) * 2017-05-12 2020-01-08 HTC Corporation Tracking system and tracking method thereof
CN108629793B (en) * 2018-03-22 2020-11-10 中国科学院自动化研究所 Visual inertial ranging method and apparatus using on-line time calibration
CN108981690A (en) * 2018-06-07 2018-12-11 北京轻威科技有限责任公司 A kind of light is used to fusion and positioning method, equipment and system
CN108804161B (en) * 2018-06-21 2022-03-04 北京字节跳动网络技术有限公司 Application initialization method, device, terminal and storage medium
CN109447170A (en) * 2018-11-05 2019-03-08 贵州大学 The dictionary optimization method of mobile robot synchronous superposition system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105222760A (en) * 2015-10-22 2016-01-06 一飞智控(天津)科技有限公司 The autonomous obstacle detection system of a kind of unmanned plane based on binocular vision and method
CN208705898U (en) * 2018-08-10 2019-04-05 北京魔门塔科技有限公司 A kind of image collecting device for neural network
CN109634263A (en) * 2018-12-29 2019-04-16 深圳市易成自动驾驶技术有限公司 Based on data synchronous automatic Pilot method, terminal and readable storage medium storing program for executing

Also Published As

Publication number Publication date
CN111854770A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN107328424B (en) Navigation method and device
CN111830953B (en) Vehicle self-positioning method, device and system
CN111626208A (en) Method and apparatus for detecting small targets
CN109931945B (en) AR navigation method, device, equipment and storage medium
CN110377025A (en) Sensor aggregation framework for automatic driving vehicle
KR20130072437A (en) Apparatus and method for recognizing vehicle location using in-vehicle network and image sensor
CN110779538B (en) Allocating processing resources across local and cloud-based systems relative to autonomous navigation
US11410429B2 (en) Image collection system, image collection method, image collection device, recording medium, and vehicle communication device
JP7310313B2 (en) POSITION CORRECTION SERVER, POSITION MANAGEMENT DEVICE, MOBILE POSITION MANAGEMENT SYSTEM AND METHOD, POSITION INFORMATION CORRECTION METHOD, COMPUTER PROGRAM, IN-VEHICLE DEVICE, AND VEHICLE
CN109515439B (en) Automatic driving control method, device, system and storage medium
CN111311902A (en) Data processing method, device, equipment and machine readable medium
US11189162B2 (en) Information processing system, program, and information processing method
CN111401255B (en) Method and device for identifying bifurcation junctions
CN111982132B (en) Data processing method, device and storage medium
CN111854770B (en) Vehicle positioning system and method
CN112902973A (en) Vehicle positioning information correction method and related equipment
CN115512336B (en) Vehicle positioning method and device based on street lamp light source and electronic equipment
CN115841660A (en) Distance prediction method, device, equipment, storage medium and vehicle
CN111339226B (en) Method and device for constructing map based on classification detection network
CN114281832A (en) High-precision map data updating method and device based on positioning result and electronic equipment
CN113566824A (en) Vehicle positioning method and device, electronic equipment and storage medium
WO2020258222A1 (en) Method and system for identifying object
CN112099481A (en) Method and system for constructing road model
CN111060114A (en) Method and device for generating feature map of high-precision map
WO2020073272A1 (en) Snapshot image to train an event detector

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220303

Address after: 100083 unit 501, block AB, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing

Applicant after: BEIJING MOMENTA TECHNOLOGY Co.,Ltd.

Address before: Room 28, 4 / F, block a, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing 100089

Applicant before: BEIJING CHUSUDU TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant