CN115235452A - Intelligent parking positioning system and method based on UWB/IMU and visual information fusion - Google Patents
Intelligent parking positioning system and method based on UWB/IMU and visual information fusion
- Publication number
- CN115235452A (application CN202210871578.9A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- information
- uwb
- target vehicle
- module
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/14—Traffic control systems for road vehicles indicating individual free spaces in parking areas
- G08G1/145—Traffic control systems for road vehicles indicating individual free spaces in parking areas where the indication depends on the parking areas
- G08G1/148—Management of a network of parking areas
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30264—Parking
Abstract
The invention relates to an intelligent parking positioning system and method based on UWB/IMU and visual information fusion, used for positioning vehicles in a parking lot. Compared with the prior art, the system realizes dual-model fusion positioning with the assistance of parking-lot-side equipment during intelligent parking; it designs an environmental error neural network learning model to eliminate errors and improve precision, and determines camera deflection angles according to the number of vehicles, so that the parking lot's camera mechanism dynamically monitors every vehicle travelling in the lot. Vehicles can thereby track their position in real time and with high precision in an unfamiliar parking environment, and the intelligent parking process is realized through cooperation between the parking lot and the vehicles.
Description
Technical Field
The invention relates to the technical field of intelligent parking positioning of vehicles, in particular to an intelligent parking positioning system and method based on UWB/IMU and visual information fusion.
Background
With the continuous development of science and technology and the steady growth of car ownership, automobile intelligence has advanced further, and car manufacturers have carried out corresponding intelligent upgrades to their vehicles' parking functions. Because intelligent parking, as the key "last kilometer" of automated driving, can be commercialized first, intelligent parking systems have become an important direction of research and development for vehicle manufacturers.
Most current market solutions rely purely on the vehicle side: for example, a three-dimensional map is constructed by sensing the surrounding environment with a vehicle-mounted lidar, or the environment is scanned with vehicle-mounted vision to acquire the information needed for parking. However, lidar has limited range and a high price and so cannot be popularized commercially; pure vision is strongly affected by environmental interference; and the user must still learn how to park in each unfamiliar parking lot. As a result, intelligent parking for intelligent vehicles has not been widely realized.
Meanwhile, as car ownership keeps increasing, the shortage of parking spaces keeps growing. Dense areas of large cities face severe parking difficulties: a driver cannot achieve "three-minute happy parking" at peak hours, and even an intelligent vehicle with a parking function still requires the driver to perform corresponding operations.
In summary, existing parking solutions need to be improved to overcome the shortcomings of purely vehicle-side intelligent parking.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an intelligent parking positioning system and method based on UWB/IMU and visual information fusion.
The purpose of the invention can be realized by the following technical scheme:
an intelligent parking positioning system based on UWB/IMU and visual information fusion is used for vehicle positioning of a parking lot and comprises a central control module, a UWB/IMU module, a camera unit, a signal transmission and processing module, a calculation and positioning display module, a vehicle track analysis module and a data fusion module, wherein the central control module comprises a vehicle information quantization unit, a camera mechanism threshold value unit, an environmental error neural network learning model, a path guiding unit and a parking space guiding unit;
the camera unit comprises a plurality of cameras and is used for acquiring current image information of a parking lot, tracking a target vehicle and a scene where the vehicle is located in real time, obtaining a motion equation, and determining vehicle position information of the target vehicle, wherein the current image information of the parking lot comprises vehicle image information, obstacle image information and surrounding parking space and lane line image information;
the vehicle information quantization unit is used for quantizing the image information acquired by the camera unit into vehicle information, determining the number of vehicles in the current scene acquired by the camera unit and calibrating the target vehicle;
the camera mechanism threshold unit is used for determining a deflection angle threshold of the camera unit according to the number of vehicles in the current scene and controlling the camera unit accordingly;
the vehicle track analysis module is used for acquiring track information of a target vehicle according to vehicle image information at continuous moments, and transmitting the track information to the central control module for learning of an environmental error neural network learning model;
the UWB/IMU module is used for acquiring the distance between a target vehicle and a UWB base station and the motion information of the vehicle so as to acquire the virtual coordinate information of the target vehicle in the whole parking lot domain and the inertial advancing direction;
an environment error perception deep learning model is established in the environment error neural network learning model by a convolutional neural network, an error factor of positioning deviation caused by environment factors is extracted, and the UWB/IMU module and the camera unit are assisted to correct positioning accuracy;
the signal transmission and processing module is used for transmitting data of the UWB/IMU module and the camera unit to the central control module;
the calculation and positioning display module is used for performing coordinate calculation and visual position tracking display according to the data of the UWB/IMU module and the camera unit;
the data fusion module is used for fusing virtual coordinate information of the UWB/IMU module and vehicle position information of a target vehicle of the camera unit to obtain real-time accurate position information, and transmitting a fusion result to the central control module;
the route guiding unit is used for screening out the lane information with the least passing vehicles according to the current image information of the parking lot;
and the parking space guiding unit is used for screening out empty parking spaces according to the current image information of the parking lot.
Preferably, an environmental error perception deep learning model is established in the environmental error neural network learning model by a convolutional neural network, error factors of positioning deviation caused by environmental factors are extracted, and high-level features are generated by layer-by-layer combination and abstraction and are used for helping a UWB/IMU module to correct positioning accuracy; the method for extracting the error factor of the positioning deviation caused by the environmental factors comprises the following steps:
The nth fixed UWB base station has known coordinates $U_n=(x_n,y_n,z_n)$; the position of the vehicle to be positioned at time $t$ is recorded as $N_t=(x_t,y_t,z_t)$; the distance from the nth UWB base station to the target vehicle at time $t$ is then:

$$d_n(t)=\sqrt{(x_n-x_t)^2+(y_n-y_t)^2+(z_n-z_t)^2}$$

The error factors at different moments are substituted into the environmental error neural network model for learning; the model produces a high-level feature quantity together with its weights and coefficients, where $v$ is the target vehicle travel speed, $T_{i+1}$ and $T_i$ are the times recorded at a given moment and at the following time frame as the target vehicle travels, $(T_{i+1}-T_i)$ is the travel time difference, and $\theta_i$ is the vehicle steering angle over that interval.
Preferably, the vehicle information quantization unit is configured to quantize the image information acquired by the camera unit into vehicle information, determine the number of vehicles in the current scene acquired by the camera unit, and calibrate the target vehicle, specifically:
the vehicle information quantization unit acquires vehicle image information, quantizes the image information into the vehicle type, color and license plate number of the vehicle corresponding to the pixel points, sequentially generates unique character string codes for storage, and generates a vehicle digital ID for each vehicle; the vehicle information quantization unit acquires a vehicle body image of a target vehicle in a current time frame, performs local image processing on the vehicle body image to obtain discrete pixel points corresponding to a vehicle body, a color and a license plate, converts the discrete pixel points into discrete quantity values, generates a unique vehicle digital ID within corresponding time, and performs identity calibration on the target vehicle.
Preferably, the camera mechanism threshold unit is configured to determine a deflection angle threshold of the camera unit according to the number of vehicles in the current scene and control the camera unit accordingly, specifically:
the azimuth angle corresponding to the space coordinates (x, y, z) of the ith high-precision camera and the parking lot is (alpha) i ,β i ,γ i ) And (x, y, z) corresponds to the spatial coordinate position of the ith high-precision camera in the parking lot, and the number of target vehicles in the view angle range is acquired to be N according to the azimuth angle k And constructing a state matrix equation:
wherein the camera has an analytic power of R χ Xi is a threshold value set for the camera mechanism, and when xi is less than or equal to N k And when the camera mechanism threshold value unit sends a deflection instruction to the camera, so that the angular deflection of the camera is realized.
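A sketch of the threshold condition alone, assuming $N_k$ has already been counted from the camera image; the Camera structure and the boolean result standing in for the deflection instruction are placeholders, not the patent's control interface:

```python
from dataclasses import dataclass

@dataclass
class Camera:
    position: tuple          # (x, y, z) in the parking-lot frame
    azimuth: tuple           # (alpha_i, beta_i, gamma_i)
    resolving_power: float   # R_chi

def should_deflect(camera: Camera, n_vehicles_in_view: int, xi: int) -> bool:
    """Threshold condition from the text: a deflection instruction is sent
    when xi <= N_k, so that no single camera tracks too many vehicles."""
    return xi <= n_vehicles_in_view

cam = Camera(position=(5.0, 2.0, 3.5), azimuth=(0.3, 0.1, 0.0),
             resolving_power=1.0)
print(should_deflect(cam, n_vehicles_in_view=7, xi=5))  # True -> deflect
```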
Preferably, the UWB/IMU module is configured to acquire the distance between the target vehicle and the UWB base stations and the motion information of the vehicle, so as to obtain the virtual coordinate information and inertial heading direction of the target vehicle over the whole parking lot, specifically:
the UWB/IMU module obtains the distance between the target vehicle and the base station: space coordinates of each UWB base station in the parking lot are obtained, time transmitted by pulse signals between each UWB base station and the target vehicle is obtained, and the distance between each base station and the target vehicle is calculated, so that virtual coordinate information of the target vehicle in the parking lot can be obtained through calculation:
where $m$ and $n$ identify different base stations, $l_{m,n}$ represents the corresponding base-station distance, $t$ is the pulse transmission time and $c$ is the speed of light (each range following from $l=c\cdot t$); $(x,y,z)$ are the virtual coordinates of the target vehicle within the parking lot;
the UWB/IMU module obtains the motion information of the vehicle: and acquiring accelerometer data E (epsilon) and gyroscope data E (sigma) through an IMU inertial module so as to obtain the inertial traveling direction of the target vehicle.
Preferably, the data fusion module is configured to fuse the virtual coordinate information of the UWB/IMU module and the vehicle position information of the target vehicle of the camera unit, and transmit a fusion result to the central control module, and specifically:
establishing a fusion target positioning optimization function:
$$G_i = f(x_{i-1}, u_i, w_i)$$

$$H_{i,j} = h(y_j, x_i, v_{i,j})$$
wherein $E(\varepsilon)$ and $E(\sigma)$ are the accelerometer and gyroscope data measured by the IMU inertial module, together with the coordinate data after UWB positioning processing; $T$ is the observation time frame of the target vehicle, $w_i$ is the vehicle response rate, $G_i$ is the motion equation obtained by the camera unit tracking the target vehicle, $H_{i,j}$ is the trajectory prediction equation determined by the vehicle trajectory analysis module, $u_i$ and $v_{i,j}$ are observation noise, $x_i$ is the target vehicle position, and $y_j$ are the parking space coordinates;
and when the target positioning optimization function is solved at its minimum point, the finally optimized real-time accurate position information of the vehicle is obtained.
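As a hedged sketch only: the patent's full objective function is not reproduced in the text, so the cost below is a plausible stand-in that penalizes the squared deviation of a candidate position from the UWB/IMU and camera estimates; the weights and all names are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def fusion_cost(p, uwb_xyz, cam_xyz, w_uwb=1.0, w_cam=1.0):
    """Hypothetical fusion objective: weighted squared residuals between a
    candidate position p and the two single-sensor estimates."""
    return (w_uwb * np.sum((p - uwb_xyz) ** 2)
            + w_cam * np.sum((p - cam_xyz) ** 2))

uwb_xyz = np.array([8.2, 9.1, 1.0])   # UWB/IMU virtual coordinates
cam_xyz = np.array([7.9, 8.8, 1.0])   # camera-tracked vehicle position
res = minimize(fusion_cost, x0=uwb_xyz, args=(uwb_xyz, cam_xyz))
print(res.x)  # fused position at the minimum point of the cost
```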
An intelligent parking positioning method based on UWB/IMU and visual information fusion is based on the intelligent parking positioning system based on UWB/IMU and visual information fusion, and comprises the following steps:
the method comprises the following steps of S1, obtaining current image information of a parking lot through a camera unit, tracking a target vehicle and a scene where the vehicle is located in real time to obtain a motion equation, and determining vehicle position information of the target vehicle, wherein the current image information of the parking lot comprises vehicle image information, obstacle image information and surrounding parking space and lane line image information;
s2, quantizing the image information acquired by the camera unit into vehicle information through a vehicle information quantization unit, determining the number of vehicles in the current scene acquired by the camera unit, and calibrating the target vehicle;
s3, determining a deflection angle threshold of the camera unit through the camera mechanism threshold unit according to the number of vehicles in the current scene, and controlling the camera unit to execute the deflection angle threshold;
s4, the UWB/IMU module acquires the distance between the target vehicle and the UWB base station and the motion information of the vehicle, so that the virtual coordinate information and the inertial advancing direction of the target vehicle in the whole area of the parking lot are acquired;
s5, the vehicle track analysis module acquires track information of the target vehicle according to the vehicle image information at the continuous time, and transmits the track information to the central control module for learning of an environmental error neural network learning model;
s6, establishing an environmental error perception deep learning model by a convolutional neural network in the environmental error neural network learning model, extracting error factors of positioning deviation caused by environmental factors, and helping the UWB/IMU module and the camera unit to correct positioning accuracy;
s7, the signal transmission and processing module transmits data of the UWB/IMU module and the camera unit to the central control module;
s8, the calculation and positioning display module performs coordinate calculation and visual position visual tracking display according to the data of the UWB/IMU module and the camera unit;
s9, the data fusion module fuses virtual coordinate information of the UWB/IMU module and vehicle position information of a target vehicle of the camera unit to obtain real-time accurate position information of the vehicle, and transmits a fusion result to the central control module;
s10, the path guiding unit screens out lane information with the least passing vehicles according to the current image information of the parking lot, the parking space guiding unit screens out empty parking spaces according to the current image information of the parking lot, and the central control module guides the target vehicle to park according to the real-time accurate position information of the vehicle, the lane information and the coordinates of the empty parking spaces.
Preferably, in step S6, the extracting an error factor of the positioning deviation caused by the environmental factor includes:
The nth fixed UWB base station has known coordinates $U_n=(x_n,y_n,z_n)$; the position of the vehicle to be positioned at time $t$ is recorded as $N_t=(x_t,y_t,z_t)$; the distance from the nth UWB base station to the target vehicle at time $t$ is then:

$$d_n(t)=\sqrt{(x_n-x_t)^2+(y_n-y_t)^2+(z_n-z_t)^2}$$

The error factors at different moments are substituted into the environmental error neural network model for learning; the model produces a high-level feature quantity together with its weights and coefficients, where $v$ is the target vehicle travel speed, $T_{i+1}$ and $T_i$ are the times recorded at a given moment and at the following time frame as the target vehicle travels, $(T_{i+1}-T_i)$ is the travel time difference, and $\theta_i$ is the vehicle steering angle over that interval.
Preferably, step S2 is specifically:
the vehicle information quantization unit acquires vehicle image information, quantizes the image information into vehicle types, colors and license plate numbers of vehicles corresponding to the pixel points, sequentially generates unique character string codes for storage, and generates a vehicle number ID for each vehicle; the vehicle information quantization unit acquires a current time frame vehicle body image of a target vehicle, performs local image processing on the vehicle body image to obtain discrete pixel points corresponding to a vehicle body, a color and a license plate, converts the discrete pixel points into discrete quantity values, generates a unique vehicle digital ID within corresponding time, and performs identity calibration on the target vehicle.
Preferably, step S3 is specifically:
The azimuth angle of the ith high-precision camera, located at spatial coordinates $(x,y,z)$ in the parking lot, is $(\alpha_i,\beta_i,\gamma_i)$. The number of target vehicles within its viewing-angle range, $N_k$, is acquired according to the azimuth angle, and a state matrix equation is constructed;

wherein the resolving power of the camera is $R_\chi$ and $\xi$ is the threshold set for the camera mechanism; when $\xi \le N_k$, the camera mechanism threshold unit sends a deflection instruction to the camera, realizing the angular deflection of the camera.
Preferably, step S4 is specifically:
the UWB/IMU module acquires the distance between the target vehicle and the base station: space coordinates of each UWB base station in the parking lot are obtained, time transmitted by pulse signals between each UWB base station and the target vehicle is obtained, and the distance between each base station and the target vehicle is calculated, so that virtual coordinate information of the target vehicle in the parking lot can be obtained through calculation:
where $m$ and $n$ identify different base stations, $l_{m,n}$ represents the corresponding base-station distance, $t$ is the pulse transmission time and $c$ is the speed of light (each range following from $l=c\cdot t$); $(x,y,z)$ are the virtual coordinates of the target vehicle within the parking lot;
the UWB/IMU module acquires the motion information of the vehicle: and acquiring accelerometer data E (epsilon) and gyroscope data E (sigma) through an IMU inertial module so as to obtain the inertial traveling direction of the target vehicle.
Preferably, step S9 is specifically:
establishing a fusion target positioning optimization function:
$$G_i = f(x_{i-1}, u_i, w_i)$$

$$H_{i,j} = h(y_j, x_i, v_{i,j})$$
wherein $E(\varepsilon)$ and $E(\sigma)$ are the accelerometer and gyroscope data measured by the IMU inertial module, together with the coordinate data after UWB positioning processing; $T$ is the observation time frame of the target vehicle, $w_i$ is the vehicle response rate, $G_i$ is the motion equation obtained by the camera unit tracking the target vehicle, $H_{i,j}$ is the trajectory prediction equation determined by the vehicle trajectory analysis module, $u_i$ and $v_{i,j}$ are observation noise, $x_i$ is the target vehicle position, and $y_j$ are the parking space coordinates;
and when the target positioning optimization function is solved at its minimum point, the finally optimized real-time accurate position information of the vehicle is obtained.
Compared with the prior art, the invention has the following beneficial effects:
(1) During intelligent parking, dual-model fusion positioning is realized with the assistance of parking-lot-side equipment, solving the problem of real-time high-precision position tracking of the vehicle in an unfamiliar parking environment; the intelligent parking process is realized through cooperation between the parking lot and the vehicle.
(2) An environmental error neural network learning model is designed, an environmental error perception deep learning model is established by a convolutional neural network, error factors of positioning deviation caused by environmental factors are extracted, high-level features are generated by layer-by-layer combination and abstraction, the error factors can be removed, and the positioning accuracy of a target vehicle is improved.
(3) The camera mechanism threshold unit determines camera deflection angles according to the number of vehicles in the current scene, so that the parking lot's camera mechanism dynamically monitors every vehicle in the lot without any single camera monitoring too many vehicles, avoiding the situation where computation is insufficient and the target vehicle can no longer be tracked.
Drawings
FIG. 1 is a schematic diagram of an intelligent parking positioning system;
FIG. 2 is a schematic diagram of a central control module;
FIG. 3 is a flow chart of an intelligent parking location method;
FIG. 4 is a schematic view of a usage scenario of the intelligent parking positioning system;
FIG. 5 is a flowchart illustrating an intelligent parking method based on UWB/IMU and visual information fusion according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for fusion positioning to remove environmental error factors according to an embodiment of the present invention;
FIG. 7 is a flowchart of a method for fusion sending of positioning information according to an embodiment of the present invention;
Reference numerals: 1. central control module; 2. UWB/IMU module; 3. camera unit; 4. signal transmission and processing module; 5. calculation and positioning display module; 6. vehicle trajectory analysis module; 7. data fusion module; 11. vehicle information quantization unit; 12. camera mechanism threshold unit; 13. environmental error neural network learning model; 14. path guiding unit; 15. parking space guiding unit.
Detailed Description
For a further understanding of the invention, reference will now be made in detail to the embodiments of the invention illustrated in the accompanying drawings. The embodiments are implemented on the premise of the technical scheme of the invention, and a detailed implementation mode and specific operation process are given. It is to be understood that these descriptions are only intended to further illustrate the features and advantages of the present invention and not to limit the claims. The description in this section covers several exemplary embodiments only, and the present invention is not limited to the scope of the embodiments described. Interchanging the same or similar prior-art means with some features of the embodiments also falls within the scope of the disclosure and protection of the present invention.
Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic may be included in at least one implementation of the invention. In the description of the present invention, it is to be understood that the terms "comprises" and "comprising," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Example 1:
an intelligent parking positioning system based on UWB/IMU and visual information fusion is used for vehicle positioning of a parking lot and comprises a central control module, a UWB/IMU module, a camera unit, a signal transmission and processing module, a calculation and positioning display module, a vehicle track analysis module and a data fusion module as shown in figure 1, wherein the central control module comprises a vehicle information quantification unit, a camera mechanism threshold value unit, an environmental error neural network learning model, a path guide unit and a parking space guide unit as shown in figure 2. In this embodiment, as shown in fig. 4, the UWB base stations are multiple and are installed in the parking lot, the UWB transmitter is installed on the target vehicle, the camera unit includes multiple cameras and is installed in the parking lot, the signal transmission and processing unit realizes data transmission between the target vehicle and the parking lot, and the IMU inertial module is installed on the target vehicle.
In this embodiment, UWB/IMU positioning and visual auxiliary positioning are fused at the parking lot side to realize parking positioning. On the one hand, the UWB/IMU module acquires the distance between the target vehicle and the base stations, yielding the virtual coordinate information and inertial heading direction of the target vehicle over the whole parking lot. On the other hand, high-precision cameras acquire image information of the target vehicle, obstacles and surrounding parking spaces, including the vehicle body and environment in the current time frame and the coordinates of empty parking spaces. The virtual coordinate information of the UWB/IMU module is then fused with the vehicle position information tracked by the high-precision cameras, the empty-space coordinates and the lanes with fewer passing vehicles are determined, and path planning to a designated empty space is carried out in coordination with the target vehicle, thereby realizing intelligent parking.
Meanwhile, an environment error neural network learning model is established by combining environment errors existing in different parking lots, and the accuracy of the positioning information acquired by the module is improved by analyzing and eliminating the errors; the vehicle track analysis module is used for acquiring track information of the target vehicle to obtain position information of the target vehicle at different moments, and an analysis result of the position information is used for learning of the environmental error neural network. In the environment error neural network learning model, an environment error perception deep learning model is established by a convolutional neural network, error factors of positioning deviation caused by environment factors are extracted, and high-level features are generated by layer-by-layer combination and abstraction, so that the UWB/IMU module and the high-precision camera can be helped to correct positioning precision.
Moreover, the images acquired by the camera unit undergo information quantization: the acquired image information is quantized into the vehicle type, color and license plate number corresponding to the pixel points, and unique character-string codes are generated and stored in sequence. This serves, on the one hand, to calibrate the target vehicle; on the other hand, the camera mechanism threshold unit uses it for analysis, judging the number of vehicles in the scene currently captured by each high-precision camera from the number of received codes and setting camera deflection-angle thresholds for different vehicle counts, so that the parking lot's camera mechanism can dynamically monitor every vehicle in the lot.
The intelligent parking positioning system based on UWB/IMU and visual information fusion is designed, corresponding intelligent upgrading is carried out in a parking lot, UWB/IMU modules and visual information are fused, the short board of pure vehicle-end intelligent parking can be overcome, and intelligent parking is promoted to be commercialized as soon as possible.
Specifically, the work of each module unit in the intelligent parking positioning system based on UWB/IMU and visual information fusion is as follows:
(1) The camera unit comprises a plurality of cameras and is used for acquiring current image information of the parking lot and tracking the target vehicle and its scene in real time to obtain the motion equation $G_i = f(x_{i-1}, u_i, w_i)$, where $x_i$ is the position of the target vehicle and $u_i$ is observation noise, thereby determining the vehicle position information of the target vehicle; the current image information of the parking lot comprises vehicle image information, obstacle image information, and image information of the surrounding parking spaces and lane lines. The spatial coordinates of each camera in the parking area are known, and its deflection angle and camera parameters such as focal length are also known; therefore, once calibration is completed and the images acquired by the cameras are analyzed, the spatial coordinates of vehicles, obstacles, lane lines, parking spaces and the like in the parking lot can be determined.
In this embodiment, lane line elements, parking space elements and obstacles are calibrated at the same time. Lane line elements are calibrated at the parking lot side according to the forms common in existing parking lots (white dotted lines, yellow solid lines, and white left/right-turn arrow markings); each form is acquired by the high-precision cameras and transmitted to the central control module of the target vehicle. Parking space elements are calibrated at the parking lot side according to the three forms common on the current market (vertical, horizontal and inclined spaces), each likewise collected by the high-precision cameras and sent to the central control module of the target vehicle. Obstacles are classified and calibrated according to those commonly found in parking lots, mainly vehicles, pets, pedestrians and traffic signs; information on each category is collected by the high-precision cameras and sent to the central control module of the target vehicle, thereby realizing blind-area collision early warning.
(2) The vehicle information quantization unit is used for quantizing the image information acquired by the camera unit into vehicle information, determining the number of vehicles in the current scene acquired by the camera unit and calibrating the target vehicle;
the vehicle information quantization unit acquires vehicle body image information of a target vehicle in a current parking lot, quantizes the image information into vehicle types, colors and license plate numbers of vehicles corresponding to pixel points, sequentially generates unique character string codes for storage, and generates a vehicle number ID for each vehicle; the vehicle information quantization unit receives a vehicle body image of a target vehicle in a current time frame acquired by a high-precision camera, performs local image processing on the vehicle body image to obtain discrete pixel points corresponding to a vehicle body, a color and a license plate, converts the discrete pixel points into discrete numerical values, generates a unique vehicle digital ID within corresponding time, and performs identity calibration on the target vehicle.
The camera unit compares the coded data of consecutive frames before and after each image to judge whether they match. If they match, the visual information data are output; otherwise, the coded information of the acquired images is searched again for matching data. The data may also be provided to the vehicle trajectory analysis unit.
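A minimal sketch of this frame-to-frame comparison, assuming each tracked vehicle carries the character-string code produced by the quantization unit; the dictionary layout is an assumption:

```python
def match_frames(prev_codes: dict, curr_codes: dict):
    """Compare coded vehicle data between consecutive frames: matched codes
    are output as visual information; unmatched ones trigger a re-search."""
    matched = {vid: c for vid, c in curr_codes.items()
               if prev_codes.get(vid) == c}
    unmatched = set(curr_codes) - set(matched)
    return matched, unmatched

prev = {"a1b2": "sedan|white|A12345"}
curr = {"a1b2": "sedan|white|A12345", "c3d4": "suv|black|B67890"}
print(match_frames(prev, curr))
```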
(3) The camera mechanism threshold unit is used for determining a deflection angle threshold of the camera unit according to the number of vehicles in the current scene and controlling the camera unit accordingly, setting high-precision camera deflection-angle thresholds for different numbers of vehicles;
the azimuth angle corresponding to the space coordinates (x, y, z) of the ith high-precision camera and the parking lot is (alpha) i ,β i ,γ i ) And (x, y, z) corresponds to the spatial coordinate position of the ith high-precision camera in the parking lot, and the number N of target vehicles in the view angle range is acquired according to the azimuth angle k And constructing a state matrix equation:
wherein the camera has an analytic power of R χ Xi is a threshold value set for the camera mechanism, and when xi is less than or equal to N k And when the camera mechanism threshold value unit sends a deflection instruction to the camera, so that the angular deflection of the camera is realized.
Therefore, the deflection angle of the cameras in this application is determined according to the number of vehicles in the current scene, so that the parking lot's camera mechanism dynamically monitors every vehicle in the lot and no single camera monitors too many vehicles, avoiding insufficient computation that would make the target vehicle untrackable.
(4) The vehicle track analysis module is used for acquiring track information of a target vehicle according to vehicle image information at continuous moments and transmitting the track information to the central control module for learning of an environmental error neural network learning model;
the vehicle track analysis module determines a track analysis target vehicle and characteristic information thereof through the high-precision camera, and locks the position information of the target vehicle according to the characteristic information and the vehicle information quantization unit in a matching mode. Adopting the contour detection and deflection angle of the vehicle tire to measure the vehicle advancing direction, updating the information of the position of the target vehicle in different time frames to obtain the vehicle advancing path and the track information of the target vehicle, wherein the track prediction equation is as follows:
H i,j =h(y j ,x i ,v i,j )
wherein v is i,j To observe noise; x is a radical of a fluorine atom i Is the target vehicle position; y is j Is a parking space coordinate point.
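Since the text measures heading from tire deflection angle, a simple kinematic sketch of propagating the position across time frames follows, using a standard bicycle model as a stand-in; the patent does not specify its motion model, and the wheelbase value is illustrative:

```python
import math

def predict_next_position(x, y, heading, v, steer, dt, wheelbase=2.7):
    """Propagate the vehicle pose across one time frame from travel speed v
    and tire deflection angle steer (simple kinematic bicycle model)."""
    heading += (v / wheelbase) * math.tan(steer) * dt
    return x + v * math.cos(heading) * dt, y + v * math.sin(heading) * dt, heading

x, y, yaw = 8.0, 9.0, 0.0
for _ in range(5):  # five consecutive time frames
    x, y, yaw = predict_next_position(x, y, yaw, v=2.0, steer=0.05, dt=0.1)
print(x, y, yaw)
```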
By analyzing the vehicle track and transmitting the analysis result to the central control module, the environment error neural network model can be learned, the whole system can be continuously optimized, and the robustness and the positioning accuracy of the whole intelligent parking system are improved.
(5) The UWB/IMU module is used for acquiring the distance between a target vehicle and a UWB base station and the motion information of the vehicle, so as to acquire the virtual coordinate information of the target vehicle in the whole area of the parking lot and the inertial advancing direction;
the UWB/IMU module acquires the distance between the target vehicle and the base station: space coordinates of each UWB base station in the parking lot are obtained, time transmitted by pulse signals between each UWB base station and the target vehicle is obtained, and the distance between each base station and the target vehicle is calculated, so that virtual coordinate information of the target vehicle in the parking lot can be obtained through calculation:
where $m$ and $n$ identify different base stations, $l_{m,n}$ represents the corresponding base-station distance, $t$ is the pulse transmission time and $c$ is the speed of light (each range following from $l=c\cdot t$); $(x,y,z)$ are the virtual coordinates of the target vehicle within the parking lot;

in this embodiment there are 4 UWB base stations, whose spatial coordinates in the parking lot are known and are $(x_1,y_1,z_1)$, $(x_2,y_2,z_2)$, $(x_3,y_3,z_3)$ and $(x_4,y_4,z_4)$ respectively; therefore, by combining the above formulas, the virtual coordinates $(x,y,z)$ of the target vehicle can be solved.
the UWB/IMU module acquires the motion information of the vehicle: and acquiring accelerometer data E (epsilon) and gyroscope data E (sigma) through an IMU inertial module so as to obtain the inertial traveling direction of the target vehicle.
(6) An environmental error perception deep learning model is established by a convolutional neural network within the environmental error neural network learning model; error factors of positioning deviation caused by environmental factors are extracted, and high-level features are generated by layer-by-layer combination and abstraction to help the UWB/IMU module and the camera unit correct positioning accuracy. During the forward pass of the network, multiple convolution kernels perform convolution operations on the input at each convolutional layer to generate multiple feature maps, each of reduced dimension relative to the input; in each subsampling layer, every feature map is pooled to obtain a corresponding map of further reduced dimension. These layers are stacked alternately in sequence and the output is produced through a fully connected layer, so that the whole intelligent parking system can learn actively, improving its robustness and positioning accuracy;
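A minimal PyTorch sketch of the stack just described: alternating convolution and subsampling (pooling) layers followed by a fully connected output. The layer sizes and channel counts are assumptions; the patent does not disclose its architecture:

```python
import torch
import torch.nn as nn

class EnvErrorNet(nn.Module):
    """Alternating convolution and subsampling (pooling) layers stacked in
    sequence, followed by a fully connected output layer."""
    def __init__(self, in_ch: int = 1, n_out: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # feature maps of further reduced dimension
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_out))

    def forward(self, x):
        return self.head(self.features(x))

net = EnvErrorNet()
print(net(torch.randn(1, 1, 16, 16)).shape)  # torch.Size([1, 1])
```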
The nth fixed UWB base station has known coordinates $U_n=(x_n,y_n,z_n)$; the position of the vehicle to be positioned at time $t$ is recorded as $N_t=(x_t,y_t,z_t)$; the distance from the nth UWB base station to the target vehicle at time $t$ is then:

$$d_n(t)=\sqrt{(x_n-x_t)^2+(y_n-y_t)^2+(z_n-z_t)^2}$$

The error factors at different moments are substituted into the environmental error neural network model for learning; the model produces a high-level feature quantity together with its weights and coefficients, where $v$ is the target vehicle travel speed, $T_{i+1}$ and $T_i$ are the times recorded at a given moment and at the following time frame as the target vehicle travels, $(T_{i+1}-T_i)$ is the travel time difference, and $\theta_i$ is the vehicle steering angle over that interval.
It should be noted that the environmental error neural network learning model is used to correct positioning accuracy. In practical application, the camera unit collects images and divides the scene into two states according to whether environmental interference is present. For scenes without environmental interference, the environmental error neural network learning model is not used: the data fusion module directly fuses the virtual coordinate information of the UWB/IMU module with the vehicle position information of the target vehicle from the camera unit to obtain real-time accurate position information and complete pose estimation. For scenes with environmental interference, the environmental error neural network learning model is needed to help the UWB/IMU module and the camera unit correct positioning accuracy before fusion and pose estimation.
(7) The signal transmission and processing module is used for transmitting data of the UWB/IMU module and the camera unit to the central control module, and mainly comprises information of quantized coding of current UWB positioning coordinates and surrounding environment images of a target vehicle;
(8) The calculation and positioning display module is used for performing coordinate calculation and visual position tracking display according to the data of the UWB/IMU module and the camera unit; specifically, it processes the coordinates and image information output by the UWB/IMU module and the high-precision cameras, performing UWB/IMU positioning coordinate calculation and visual position tracking display;
(9) The data fusion module is used for fusing the virtual coordinate information of the UWB/IMU module and the vehicle position information of the target vehicle of the camera unit to obtain real-time accurate position information and transmitting a fusion result to the central control module; the fusion process is as follows:
establishing a fusion target positioning optimization function:
$$G_i = f(x_{i-1}, u_i, w_i)$$

$$H_{i,j} = h(y_j, x_i, v_{i,j})$$

wherein $E(\varepsilon)$ and $E(\sigma)$ are the accelerometer and gyroscope data measured by the IMU inertial module, together with the coordinate data after UWB positioning processing; $T$ is the observation time frame of the target vehicle, $w_i$ is the vehicle response rate, $G_i$ is the motion equation obtained by the camera unit tracking the target vehicle, $H_{i,j}$ is the trajectory prediction equation determined by the vehicle trajectory analysis module, $u_i$ and $v_{i,j}$ are observation noise, $x_i$ is the target vehicle position, and $y_j$ are the parking space coordinates (the subscripts carry no special meaning and only index the data substituted into the function for calculation). When the target positioning optimization function is solved at its minimum point, the finally optimized real-time accurate position information of the vehicle is obtained, thereby overcoming positioning drift and visual positioning deviation and improving positioning precision.
(10) The route guiding unit is used for screening out the lane information with the least passing vehicles according to the current image information of the parking lot; the parking space guiding unit is used for screening out empty parking spaces according to the current image information of the parking lot; and the central control module guides the target vehicle to park according to the real-time accurate position information of the vehicle, the lane information and the coordinates of the empty parking space.
The high-precision camera is used for acquiring image information of the environment where the vehicle is located and image information of an empty parking space in real time; the UWB/IMU module is used for acquiring distance information between the vehicle and the parking lot UWB base station; the signal transmission and processing module is used for receiving and sending the positioning signal sent by the central control module; the calculation and positioning display module is used for processing the position information of the vehicle in the simulation coordinates in real time; the vehicle track analysis module is used for tracking the vehicle motion track and uploading the vehicle motion track to the central control system for correcting the positioning accuracy of the UWB/IMU module; the data fusion module is used for solving the problems of positioning drift and visual positioning deviation and improving the positioning precision. The intelligent parking system realizes double-model fusion positioning by the assistance of parking lot end equipment in the intelligent parking process, solves the problem of real-time high-precision position tracking of the vehicle in an unfamiliar parking environment, and realizes the intelligent parking process by the cooperative cooperation of the parking lot equipment and the vehicle.
Example 2:
an intelligent parking positioning method based on UWB/IMU and visual information fusion is based on the intelligent parking positioning system described in embodiment 1, the flow chart is shown in FIG. 3, and reference can also be made to FIGS. 5-7 for details, and the present specification provides the method operation steps as the embodiment or the flow chart, but more or less operation steps can be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of sequences, and does not represent a unique order of performance. In actual system or server product execution, the method shown in the embodiment or the figures can be executed sequentially or in parallel (for example, in the environment of parallel processors or multi-thread processing), or the execution sequence of steps without timing limitation can be adjusted. Specifically, the intelligent parking positioning method based on UWB/IMU and visual information fusion comprises the following steps:
s1, acquiring current image information of a parking lot through a camera unit, tracking a target vehicle and a scene where the vehicle is located in real time to obtain a motion equation, and determining vehicle position information of the target vehicle, wherein the current image information of the parking lot comprises vehicle image information, obstacle image information and surrounding parking space and lane line image information;
s2, quantizing the image information acquired by the camera unit into vehicle information through a vehicle information quantization unit, determining the number of vehicles in the current scene acquired by the camera unit, and calibrating the target vehicle;
s3, according to the number of vehicles in the current scene, determining a deflection angle threshold of a camera unit through a camera mechanism threshold unit and controlling the camera unit to execute the deflection angle threshold;
s4, the UWB/IMU module acquires the distance between the target vehicle and a UWB base station and the motion information of the vehicle, so that the virtual coordinate information of the target vehicle in the whole area of the parking lot and the inertial advancing direction are acquired;
s5, the vehicle track analysis module acquires track information of the target vehicle according to the vehicle image information at continuous moments, and transmits the track information to the central control module for learning of the environment error neural network learning model;
s6, establishing an environmental error perception deep learning model by a convolutional neural network in the environmental error neural network learning model, extracting an error factor of positioning deviation caused by environmental factors, and helping a UWB/IMU module and a camera unit to correct positioning accuracy;
s7, the signal transmission and processing module transmits data of the UWB/IMU module and the camera unit to the central control module;
s8, the calculation and positioning display module performs coordinate calculation and visual position visual tracking display according to the data of the UWB/IMU module and the camera unit;
s9, the data fusion module fuses virtual coordinate information of the UWB/IMU module and vehicle position information of a target vehicle of the camera unit to obtain real-time accurate position information of the vehicle, and transmits a fusion result to the central control module;
s10, the path guiding unit screens out lane information with the least passing vehicles according to the current image information of the parking lot, the parking space guiding unit screens out empty parking spaces according to the current image information of the parking lot, and the central control module guides the target vehicle to park according to the real-time accurate position information of the vehicle, the lane information and the coordinates of the empty parking spaces.
In the above steps, the specific implementation details of each step are the same as those in embodiment 1, and are not described herein again.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The Memory may include volatile Memory in a computer readable medium, random Access Memory (RAM), and/or nonvolatile Memory such as Read Only Memory (ROM) or flash Memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-Change RAM (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transient media), such as modulated data signals and carrier waves.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, using application-specific integrated circuits (ASICs), a general-purpose computer, or any other similar hardware device. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs of the present application (including associated data structures) may be stored in a computer-readable recording medium, such as RAM, a magnetic or optical drive, a diskette, or the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform the various steps or functions.
In addition, part of the present application may be implemented as a computer program product, such as computer program instructions, which, when executed by a computer, may invoke or provide methods and/or solutions according to the present application through the operation of that computer. The program instructions that invoke the methods of the present application may be stored on a fixed or removable recording medium, transmitted via a data stream on a broadcast or other signal-bearing medium, and/or stored in the working memory of a computer device that operates according to the program instructions. An embodiment of the present application comprises an apparatus that includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when executed by the processor, the computer program instructions trigger the apparatus to perform the methods and/or solutions of the foregoing embodiments.
The description and applications of the invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Effects or advantages referred to in the embodiments may not be reflected in every embodiment because of interference from various factors, and the description of these effects or advantages is not intended to limit the embodiments. Variations and modifications of the embodiments disclosed herein are possible, and alternative and equivalent components of the embodiments will be apparent to those skilled in the art. It will likewise be clear to those skilled in the art that the present invention may be embodied in other forms, structures, arrangements, and proportions, and with other components, materials, and parts, without departing from its spirit or essential characteristics. Other variations and modifications may be made to the embodiments disclosed herein without departing from the scope and spirit of the invention.
Claims (10)
1. An intelligent parking positioning system based on UWB/IMU and visual information fusion is characterized in that the system is used for vehicle positioning of a parking lot and comprises a central control module, a UWB/IMU module, a camera unit, a signal transmission and processing module, a calculation and positioning display module, a vehicle track analysis module and a data fusion module, wherein the central control module comprises a vehicle information quantification unit, a camera mechanism threshold value unit, an environmental error neural network learning model, a path guide unit and a parking space guide unit;
the camera unit comprises a plurality of cameras and is used for acquiring current image information of a parking lot, tracking a target vehicle and a scene where the vehicle is located in real time, obtaining a motion equation, and determining vehicle position information of the target vehicle, wherein the current image information of the parking lot comprises vehicle image information, obstacle image information and surrounding parking space and lane line image information;
the vehicle information quantization unit is used for quantizing the image information acquired by the camera unit into vehicle information, determining the number of vehicles in the current scene acquired by the camera unit and calibrating the target vehicle;
the camera mechanism threshold unit is used for determining a deflection angle threshold of the camera unit according to the number of vehicles in the current scene and controlling the camera unit to execute;
the vehicle track analysis module is used for acquiring track information of a target vehicle according to vehicle image information at continuous moments, and transmitting the track information to the central control module for learning of an environmental error neural network learning model;
the UWB/IMU module is used for acquiring the distance between a target vehicle and a UWB base station and the motion information of the vehicle, so as to acquire the virtual coordinate information and inertial heading of the target vehicle within the whole parking lot area;
in the environment error neural network learning model, a convolutional neural network is used for establishing an environment error perception deep learning model, extracting an error factor of positioning deviation caused by an environment factor, and helping a UWB/IMU module and a camera unit to correct positioning accuracy;
the signal transmission and processing module is used for transmitting data of the UWB/IMU module and the camera unit to the central control module;
the calculation and positioning display module is used for performing coordinate calculation and visual tracking display of the vehicle position according to the data of the UWB/IMU module and the camera unit;
the data fusion module is used for fusing virtual coordinate information of the UWB/IMU module and vehicle position information of a target vehicle of the camera unit to obtain real-time accurate position information, and transmitting a fusion result to the central control module;
the route guiding unit is used for screening out the lane information with the least passing vehicles according to the current image information of the parking lot;
and the parking space guiding unit is used for screening out empty parking spaces according to the current image information of the parking lot.
2. The intelligent parking positioning system based on UWB/IMU and visual information fusion of claim 1, wherein the environment error neural network learning model is an environment error perception deep learning model established by a convolutional neural network, and error factors of positioning deviation caused by environment factors are extracted and combined layer by layer to generate high-level features in an abstract manner, so as to help a UWB/IMU module to correct positioning accuracy; the method for extracting the error factor of the positioning deviation caused by the environmental factors comprises the following steps:
the coordinates of the nth fixed UWB base station, $U_n=(x_n,y_n,z_n)$, are known; the position of the vehicle to be positioned at time $t$ is recorded as $N_t=(x_t,y_t,z_t)$; the distance from the nth UWB base station to the target vehicle at time $t$ is then $d_n(t)=\sqrt{(x_t-x_n)^2+(y_t-y_n)^2+(z_t-z_n)^2}$;
the error factors at different times are substituted into the environmental error neural network model for learning, together with the high-level feature quantity and its weights and coefficients, wherein $v$ is the travel speed of the target vehicle; $T_{i+1}$ and $T_i$ are the times recorded at a given moment and at a later time frame during the travel of the target vehicle, so that $(T_{i+1}-T_i)$ is the travel time difference; and $\theta_i$ is the wheel angle of the vehicle at that moment.
3. The intelligent parking positioning system based on UWB/IMU and visual information fusion of claim 1, wherein the vehicle information quantization unit is configured to quantize image information acquired by the camera unit into vehicle information, determine the number of vehicles in a current scene acquired by the camera unit, and calibrate a target vehicle, specifically:
the vehicle information quantization unit acquires the vehicle image information, quantizes the image information into the vehicle type, color, and license plate number of the vehicle corresponding to the pixel points, sequentially generates unique character-string codes for storage, and generates a vehicle number ID for each vehicle; the vehicle information quantization unit acquires the current-time-frame body image of the target vehicle, performs local image processing on the body image to obtain the discrete pixel points corresponding to the vehicle body, color, and license plate, converts the discrete pixel points into discrete quantity values, generates a unique vehicle digital ID for the corresponding time, and thereby calibrates the identity of the target vehicle.
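A minimal sketch of this quantization step follows, under the assumption that a hash of the quantized fields serves as the unique character-string code; the field set, ordering, and hash-based encoding are illustrative rather than the patent's specified scheme.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class VehicleInfo:
    vehicle_type: str  # e.g. "sedan"
    color: str         # e.g. "white"
    plate: str         # license plate number
    time_frame: int    # frame index at which the body image was captured

def vehicle_digital_id(info: VehicleInfo) -> str:
    """Generate a unique character-string code (vehicle digital ID)."""
    raw = f"{info.vehicle_type}|{info.color}|{info.plate}|{info.time_frame}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

print(vehicle_digital_id(VehicleInfo("sedan", "white", "沪A12345", 42)))
```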
4. The intelligent parking positioning system based on UWB/IMU and visual information fusion of claim 1, wherein the camera mechanism threshold unit is configured to determine a deflection angle threshold of the camera unit according to a number of vehicles in a current scene and control the camera unit to execute, and specifically:
the azimuth angle of the ith high-precision camera corresponding to the parking lot space coordinate (x, y, z) is (alpha) i ,β i ,γ i ) The (x, y, z) pairsThe space coordinate position of the ith high-precision camera in the parking lot is acquired, and the number of target vehicles in the visual angle range is N according to the azimuth angle k And constructing a state matrix equation:
wherein the camera has an analytic power of R χ And xi is a threshold value set for the camera shooting mechanism, and when xi is less than or equal to N k And when the camera mechanism threshold value unit sends a deflection instruction to the camera, so that the angular deflection of the camera is realized.
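The threshold rule of this claim can be sketched as follows; the deflection step size, the data-structure fields, and the function name are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Camera:
    cam_id: int
    azimuth: Tuple[float, float, float]  # (alpha_i, beta_i, gamma_i), degrees

def maybe_deflect(cam: Camera, n_k: int, xi: int, step_deg: float = 15.0) -> str:
    """Send a deflection instruction when xi <= N_k (the claim's condition)."""
    if xi <= n_k:
        a, b, g = cam.azimuth
        cam.azimuth = (a + step_deg, b, g)  # rotate about one axis
        return f"camera {cam.cam_id}: deflect to {cam.azimuth}"
    return f"camera {cam.cam_id}: hold position"

print(maybe_deflect(Camera(3, (30.0, 0.0, 0.0)), n_k=6, xi=5))
```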
5. The intelligent parking positioning system based on the fusion of the UWB/IMU and the visual information as claimed in claim 1, wherein the UWB/IMU module is configured to obtain a distance between the target vehicle and the UWB base station and motion information of the vehicle, so as to obtain virtual coordinate information and an inertial heading direction of the target vehicle in the whole area of the parking lot, and specifically:
the UWB/IMU module acquires the distance between the target vehicle and the base station: space coordinates of each UWB base station in a parking lot are obtained, time transmitted by pulse signals between each UWB base station and a target vehicle is obtained, and the distance between each base station and the target vehicle is calculated, so that virtual coordinate information of the target vehicle in the parking lot can be obtained through calculation:
where m and n are used to identify different base stations, l m,n Represents the distance between m and n UWB base stations, t being the pulse transmission time; c is the speed of light; (x, y, z) are virtual coordinates of the target vehicle within the parking lot;
the UWB/IMU module acquires the motion information of the vehicle: and acquiring accelerometer data E (epsilon) and gyroscope data E (sigma) through an IMU inertial module so as to obtain the inertial traveling direction of the target vehicle.
6. The intelligent parking positioning system based on UWB/IMU and visual information fusion of claim 1, wherein the data fusion module is configured to fuse the virtual coordinate information of the UWB/IMU module and the vehicle position information of the target vehicle of the camera unit, and transmit a fusion result to the central control module, specifically:
establishing a fusion target positioning optimization function built on the motion and trajectory models
$G_i = f(x_{i-1}, u_i, w_i)$
$H_{i,j} = h(y_j, x_i, v_{i,j})$
wherein $E(\varepsilon)$ and $E(\sigma)$ are the accelerometer and gyroscope data measured by the IMU inertial module, together with the coordinate data after UWB positioning processing; $T$ is the time frame over which the target vehicle is observed; $w_i$ is the vehicle response rate; $G_i$ is the motion equation obtained by the camera unit tracking the target vehicle; $H_{i,j}$ is the trajectory prediction equation determined by the vehicle trajectory analysis module; $u_i$ and $v_{i,j}$ are observation noise; $x_i$ is the target vehicle position; and $y_j$ are the parking space coordinates;
the real-time accurate position information of the finally optimized vehicle is obtained when the target positioning optimization function is solved at its minimum point.
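A minimal sketch of the fusion step follows, assuming it reduces to a weighted least-squares compromise between the UWB virtual coordinates and the camera-tracked position; the patent's full objective with the motion and trajectory models $G_i$ and $H_{i,j}$ is not reproduced in the published text, so the weights and residual terms are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fuse_position(uwb_xyz, cam_xyz, w_uwb=1.0, w_cam=1.0):
    """Position minimizing the weighted squared residuals to both sensors."""
    uwb = np.asarray(uwb_xyz, dtype=float)
    cam = np.asarray(cam_xyz, dtype=float)

    def cost(p):
        return w_uwb * np.sum((p - uwb) ** 2) + w_cam * np.sum((p - cam) ** 2)

    return minimize(cost, x0=(uwb + cam) / 2.0).x

print(fuse_position((12.3, 4.1, 0.0), (12.6, 3.9, 0.0)))
```

In practice the sensor weights would be set from the respective noise levels, so the minimum point leans toward the more reliable measurement.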
7. An intelligent parking positioning method based on UWB/IMU and visual information fusion, characterized in that the method is based on the intelligent parking positioning system based on UWB/IMU and visual information fusion according to any one of claims 1-6 and comprises the following steps:
S1, acquiring current image information of the parking lot through the camera unit, tracking the target vehicle and the scene where the vehicle is located in real time to obtain a motion equation, and determining the vehicle position information of the target vehicle, wherein the current image information of the parking lot comprises vehicle image information, obstacle image information, and image information of surrounding parking spaces and lane lines;
S2, quantizing the image information acquired by the camera unit into vehicle information through the vehicle information quantization unit, determining the number of vehicles in the current scene acquired by the camera unit, and calibrating the target vehicle;
S3, according to the number of vehicles in the current scene, determining the deflection angle threshold of the camera unit through the camera mechanism threshold unit and controlling the camera unit accordingly;
S4, the UWB/IMU module acquires the distance between the target vehicle and the UWB base stations and the motion information of the vehicle, thereby obtaining the virtual coordinate information and inertial heading of the target vehicle within the whole parking lot area;
S5, the vehicle track analysis module acquires the track information of the target vehicle from the vehicle image information at consecutive moments and transmits the track information to the central control module for learning by the environmental error neural network learning model;
S6, a convolutional neural network in the environmental error neural network learning model establishes an environmental error perception deep learning model, extracts the error factors of positioning deviation caused by environmental factors, and helps the UWB/IMU module and the camera unit correct their positioning accuracy;
S7, the signal transmission and processing module transmits the data of the UWB/IMU module and the camera unit to the central control module;
S8, the calculation and positioning display module performs coordinate calculation and visual tracking display of the vehicle position according to the data of the UWB/IMU module and the camera unit;
S9, the data fusion module fuses the virtual coordinate information from the UWB/IMU module with the vehicle position information of the target vehicle from the camera unit to obtain real-time accurate position information of the vehicle, and transmits the fusion result to the central control module;
S10, the path guiding unit screens out the lane with the fewest passing vehicles according to the current image information of the parking lot, the parking space guiding unit screens out empty parking spaces according to the current image information of the parking lot, and the central control module guides the target vehicle to park according to the real-time accurate position information of the vehicle, the lane information, and the coordinates of the empty parking spaces.
8. An intelligent parking positioning method based on UWB/IMU and visual information fusion as claimed in claim 7, wherein in step S6, the manner of extracting the error factor of the positioning deviation caused by the environmental factors includes:
the coordinates of the nth fixed UWB base station, $U_n=(x_n,y_n,z_n)$, are known; the position of the vehicle to be positioned at time $t$ is recorded as $N_t=(x_t,y_t,z_t)$; the distance from the nth UWB base station to the target vehicle at time $t$ is then $d_n(t)=\sqrt{(x_t-x_n)^2+(y_t-y_n)^2+(z_t-z_n)^2}$;
the error factors at different times are substituted into the environmental error neural network model for learning, together with the high-level feature quantity and its weights and coefficients, wherein $v$ is the travel speed of the target vehicle; $T_{i+1}$ and $T_i$ are the times recorded at a given moment and at a later time frame during the travel of the target vehicle, so that $(T_{i+1}-T_i)$ is the travel time difference; and $\theta_i$ is the wheel angle of the vehicle at that moment.
9. The intelligent parking positioning method based on UWB/IMU and visual information fusion of claim 7, wherein the step S3 specifically comprises:
the azimuth angle of the ith high-precision camera located at parking lot space coordinate $(x,y,z)$ is $(\alpha_i,\beta_i,\gamma_i)$, where $(x,y,z)$ is the spatial coordinate position of the ith high-precision camera within the parking lot; the number of target vehicles within the camera's viewing angle range, $N_k$, is acquired according to the azimuth angle, and a state matrix equation is constructed,
wherein the resolving power of the camera is $R_\chi$ and $\xi$ is the threshold set for the camera mechanism; when $\xi \le N_k$, the camera mechanism threshold unit sends a deflection instruction to the camera, causing the camera to deflect in angle.
10. The intelligent parking positioning method based on UWB/IMU and visual information fusion of claim 7, wherein the step S9 is specifically:
establishing a fusion target positioning optimization function built on the motion and trajectory models
$G_i = f(x_{i-1}, u_i, w_i)$
$H_{i,j} = h(y_j, x_i, v_{i,j})$
wherein $E(\varepsilon)$ and $E(\sigma)$ are the accelerometer and gyroscope data measured by the IMU inertial module, together with the coordinate data after UWB positioning processing; $T$ is the time frame over which the target vehicle is observed; $w_i$ is the vehicle response rate; $G_i$ is the motion equation obtained by the camera unit tracking the target vehicle; $H_{i,j}$ is the trajectory prediction equation determined by the vehicle trajectory analysis module; $u_i$ and $v_{i,j}$ are observation noise; $x_i$ is the target vehicle position; and $y_j$ are the parking space coordinates;
the real-time accurate position information of the finally optimized vehicle is obtained when the target positioning optimization function is solved at its minimum point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210871578.9A CN115235452B (en) | 2022-07-22 | 2022-07-22 | Intelligent parking positioning system and method based on UWB/IMU and visual information fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210871578.9A CN115235452B (en) | 2022-07-22 | 2022-07-22 | Intelligent parking positioning system and method based on UWB/IMU and visual information fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115235452A true CN115235452A (en) | 2022-10-25 |
CN115235452B CN115235452B (en) | 2024-08-27 |
Family
ID=83674829
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210871578.9A Active CN115235452B (en) | 2022-07-22 | 2022-07-22 | Intelligent parking positioning system and method based on UWB/IMU and visual information fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115235452B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115540854A (en) * | 2022-12-01 | 2022-12-30 | 成都信息工程大学 | Active positioning method, equipment and medium based on UWB assistance |
CN116612458A (en) * | 2023-05-30 | 2023-08-18 | 易飒(广州)智能科技有限公司 | Deep learning-based parking path determination method and system |
CN116976535A (en) * | 2023-06-27 | 2023-10-31 | 上海师范大学 | Path planning algorithm based on fusion of few obstacle sides and steering cost |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104697517A (en) * | 2015-03-26 | 2015-06-10 | 江南大学 | Multi-target tracking and positioning system for indoor parking lot |
CN105946853A (en) * | 2016-04-28 | 2016-09-21 | 中山大学 | Long-distance automatic parking system and method based on multi-sensor fusion |
CN107600067A (en) * | 2017-09-08 | 2018-01-19 | 中山大学 | A kind of autonomous parking system and method based on more vision inertial navigation fusions |
WO2020056874A1 (en) * | 2018-09-17 | 2020-03-26 | 魔门塔(苏州)科技有限公司 | Automatic parking system and method based on visual recognition |
CN111239790A (en) * | 2020-01-13 | 2020-06-05 | 上海师范大学 | Vehicle navigation system based on 5G network machine vision |
WO2022100272A1 (en) * | 2020-11-11 | 2022-05-19 | Oppo广东移动通信有限公司 | Indoor positioning method and related apparatus |
CN114623823A (en) * | 2022-05-16 | 2022-06-14 | 青岛慧拓智能机器有限公司 | UWB (ultra wide band) multi-mode positioning system, method and device integrating odometer |
Non-Patent Citations (2)
Title |
---|
王志兵;: "基于超声波雷达和全景高清影像融合的全自动泊车系统", 电子世界, no. 19, 15 October 2020 (2020-10-15) * |
鲍施锡: "GNSS/UWB 与IMU 组合的室内外定位系统研究", 中国优秀硕博士论文电子期刊, 31 December 2023 (2023-12-31) * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115540854A (en) * | 2022-12-01 | 2022-12-30 | 成都信息工程大学 | Active positioning method, equipment and medium based on UWB assistance |
CN116612458A (en) * | 2023-05-30 | 2023-08-18 | 易飒(广州)智能科技有限公司 | Deep learning-based parking path determination method and system |
CN116612458B (en) * | 2023-05-30 | 2024-06-04 | 易飒(广州)智能科技有限公司 | Deep learning-based parking path determination method and system |
CN116976535A (en) * | 2023-06-27 | 2023-10-31 | 上海师范大学 | Path planning algorithm based on fusion of few obstacle sides and steering cost |
CN116976535B (en) * | 2023-06-27 | 2024-05-17 | 上海师范大学 | Path planning method based on fusion of few obstacle sides and steering cost |
Also Published As
Publication number | Publication date |
---|---|
CN115235452B (en) | 2024-08-27 |
Similar Documents
Publication | Title |
---|---|
CN111448478B (en) | System and method for correcting high-definition maps based on obstacle detection | |
CN107235044B (en) | A kind of restoring method realized based on more sensing datas to road traffic scene and driver driving behavior | |
CN115235452B (en) | Intelligent parking positioning system and method based on UWB/IMU and visual information fusion | |
CN112700470B (en) | Target detection and track extraction method based on traffic video stream | |
CN107246876B (en) | Method and system for autonomous positioning and map construction of unmanned automobile | |
CN111986506B (en) | Mechanical parking space parking method based on multi-vision system | |
CN108628324B (en) | Unmanned vehicle navigation method, device, equipment and storage medium based on vector map | |
CN112212874B (en) | Vehicle track prediction method and device, electronic equipment and computer readable medium | |
JP2022019642A (en) | Positioning method and device based upon multi-sensor combination | |
WO2019161134A1 (en) | Lane marking localization | |
CN113743469B (en) | Automatic driving decision method integrating multi-source data and comprehensive multi-dimensional indexes | |
CN112753038B (en) | Method and device for identifying lane change trend of vehicle | |
CN113405555B (en) | Automatic driving positioning sensing method, system and device | |
CN113252051A (en) | Map construction method and device | |
CN111539305B (en) | Map construction method and system, vehicle and storage medium | |
Gressenbuch et al. | Mona: The munich motion dataset of natural driving | |
CN114694111A (en) | Vehicle positioning | |
CN112967393B (en) | Correction method and device for vehicle movement track, electronic equipment and storage medium | |
CN114841188A (en) | Vehicle fusion positioning method and device based on two-dimensional code | |
CN114358038B (en) | Two-dimensional code coordinate calibration method and device based on vehicle high-precision positioning | |
CN114563007B (en) | Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium | |
CN114543842A (en) | Positioning precision evaluation system and method of multi-sensor fusion positioning system | |
CN113566834A (en) | Positioning method, positioning device, vehicle, and storage medium | |
CN115273015A (en) | Prediction method and device, intelligent driving system and vehicle | |
EP4160154A1 (en) | Methods and systems for estimating lanes for a vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||