CN115235452B - Intelligent parking positioning system and method based on UWB/IMU and visual information fusion - Google Patents

Intelligent parking positioning system and method based on UWB/IMU and visual information fusion

Info

Publication number
CN115235452B
CN115235452B
Authority
CN
China
Prior art keywords
vehicle
information
uwb
module
target vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210871578.9A
Other languages
Chinese (zh)
Other versions
CN115235452A (en)
Inventor
朱苏磊
鲍施锡
李天辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Normal University
Original Assignee
Shanghai Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Normal University filed Critical Shanghai Normal University
Priority to CN202210871578.9A priority Critical patent/CN115235452B/en
Publication of CN115235452A publication Critical patent/CN115235452A/en
Application granted granted Critical
Publication of CN115235452B publication Critical patent/CN115235452B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/14 Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G08G 1/145 Traffic control systems for road vehicles indicating individual free spaces in parking areas where the indication depends on the parking areas
    • G08G 1/148 Management of a network of parking areas
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/20 Instruments for performing navigational calculations
    • G01C 21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30264 Parking

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an intelligent parking positioning system and method based on UWB/IMU and visual information fusion, used for vehicle positioning in a parking lot. The system comprises a central control module, a UWB/IMU module, a camera unit, a signal transmission and processing module, a calculation and positioning display module, a vehicle track analysis module and a data fusion module; the central control module comprises a vehicle information quantization unit, a camera mechanism threshold unit, an environmental error neural network learning model, a path guiding unit and a parking space guiding unit. Compared with the prior art, the invention has the following advantages: parking-lot-side equipment assists the intelligent parking process; dual-model fusion positioning is realized; an environmental error neural network learning model is designed to eliminate errors and improve precision; the deflection angle of each camera is determined according to the number of vehicles, so the parking lot's camera mechanism dynamically monitors every travelling vehicle in the lot; real-time, high-precision position tracking of a vehicle in an unfamiliar parking environment is achieved; and the intelligent parking process is realized through cooperation between the parking lot and the vehicle.

Description

Intelligent parking positioning system and method based on UWB/IMU and visual information fusion
Technical Field
The invention relates to the technical field of intelligent parking positioning of vehicles, in particular to an intelligent parking positioning system and method based on UWB/IMU and visual information fusion.
Background
With the continuous development of technology and the continuous growth of car ownership, vehicle intelligence has advanced further, and every automobile manufacturer has correspondingly upgraded the parking function of its vehicles. Because intelligent parking is a key part of the "last kilometer" of automatic driving and can be commercialized first, intelligent parking systems have become an important research and development direction for vehicle enterprises.
Most current market schemes are realized purely at the vehicle end, for example by constructing a three-dimensional map of the surrounding environment with a vehicle-mounted laser radar, or by scanning the environment with vehicle-mounted vision to acquire the information needed for parking. However, laser radar has short range and high price, which prevents market popularization; pure vision is obviously disturbed by the environment; and in an unfamiliar parking lot the system still needs to learn before it can park. For these reasons, commercial intelligent parking of intelligent automobiles in parking lots cannot yet be well popularized.
Meanwhile, as car ownership keeps increasing and the shortage of parking spaces keeps growing, dense areas of all large cities face the problem of difficult parking: at peak times a driver cannot park within a few minutes, and even an intelligent vehicle with a parking function still requires corresponding operation by the driver.
In summary, existing parking schemes need improvement to overcome the shortcomings of pure vehicle-end intelligent parking.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an intelligent parking positioning system and method based on UWB/IMU and visual information fusion.
The aim of the invention can be achieved by the following technical scheme:
An intelligent parking positioning system based on UWB/IMU and visual information fusion is used for vehicle positioning of a parking lot and comprises a central control module, a UWB/IMU module, a camera unit, a signal transmission and processing module, a calculation and positioning display module, a vehicle track analysis module and a data fusion module, wherein the central control module comprises a vehicle information quantization unit, a camera mechanism threshold unit, an environment error neural network learning model, a path guiding unit and a parking space guiding unit;
The camera unit comprises a plurality of cameras and is used for acquiring current image information of a parking lot, tracking a target vehicle and a scene where the vehicle is located in real time, obtaining a motion equation and determining vehicle position information of the target vehicle, wherein the current image information of the parking lot comprises vehicle image information, obstacle image information, surrounding parking spaces and lane line image information;
The vehicle information quantization unit is used for quantizing the image information acquired by the camera unit into vehicle information, determining the number of vehicles in the current scene acquired by the camera unit and calibrating the target vehicle;
The camera mechanism threshold unit is used for determining a deflection angle threshold of the camera unit according to the number of vehicles in the current scene and controlling the camera unit to execute;
The vehicle track analysis module is used for acquiring track information of a target vehicle according to vehicle image information at continuous moments, and transmitting the track information to the central control module for learning of an environmental error neural network learning model;
The UWB/IMU module is used for acquiring the distance between the target vehicle and the UWB base station and the motion information of the vehicle, so as to acquire virtual coordinate information and the inertial advancing direction of the target vehicle in the whole parking area;
The environment error neural network learning model is used for establishing an environment error perception deep learning model by using a convolutional neural network, extracting error factors of positioning deviation caused by environment factors, and helping a UWB/IMU module and a camera unit to correct positioning accuracy;
The signal transmission and processing module is used for transmitting the data of the UWB/IMU module and the camera unit to the central control module;
The calculation and positioning display module is used for carrying out coordinate calculation and visual position visual tracking display according to the data of the UWB/IMU module and the camera unit;
the data fusion module is used for fusing virtual coordinate information of the UWB/IMU module and vehicle position information of a target vehicle of the camera unit to obtain real-time accurate position information, and transmitting a fusion result to the central control module;
the path guiding unit is used for screening out lane information with the least passing vehicles according to the current image information of the parking lot;
the parking space guiding unit is used for screening out empty parking spaces according to the current image information of the parking lot.
Preferably, in the environmental error neural network learning model, an environmental error perception deep learning model is established by a convolutional neural network; the error factors by which environmental factors cause positioning deviation are extracted, and high-level features are generated by layer-by-layer combination and abstraction, used to help the UWB/IMU module correct positioning accuracy. The method for extracting the error factors of positioning deviation caused by environmental factors is as follows:
The coordinates of the n-th fixed UWB base station, U_n = (x_n, y_n, z_n), are known; the position of the vehicle to be positioned at time t is denoted N_t = (x_t, y_t, z_t). The distance from the UWB base station to the target vehicle at time t is

d_n^t = sqrt((x_n - x_t)^2 + (y_n - y_t)^2 + (z_n - z_t)^2) + ε_t

where ε_t is the error factor at that time.
The error factors ε_t at different times are substituted into the environmental error neural network learning model for learning; the model combines and abstracts them layer by layer into high-level feature quantities. Its weights and coefficients are functions of the target vehicle travel speed v, the travel time difference (T_{i+1} - T_i) between a moment T_i and the next recorded time frame T_{i+1} in the vehicle's advance, and the vehicle wheel rotation angle θ_i at that time.
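The error factor just described is the deviation between the UWB-measured range and the true base-station-to-vehicle distance. As an illustrative sketch (not the patent's implementation; the function name and numeric values are assumptions), it can be computed as a simple residual:

```python
import math

def uwb_error_factor(base, measured_dist, vehicle_pos):
    """Residual between the UWB-measured range and the true Euclidean
    distance from base station U_n to vehicle position N_t: the error
    factor epsilon_t that the learning model is trained on."""
    true_dist = math.dist(base, vehicle_pos)
    return measured_dist - true_dist

# Base station at the origin, vehicle 5 m away, UWB reports 5.3 m:
eps = uwb_error_factor((0.0, 0.0, 0.0), 5.3, (3.0, 4.0, 0.0))
# eps ≈ 0.3, the environment-induced ranging bias
```

Sequences of such residuals, paired with v, (T_{i+1} - T_i) and θ_i, would form the training samples for the learning model.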
Preferably, the vehicle information quantization unit is configured to quantize the image information collected by the camera unit into vehicle information, determine the number of vehicles in the current scene collected by the camera unit, and calibrate the target vehicle, and specifically includes:
The vehicle information quantization unit obtains vehicle image information, quantizes the image information into the vehicle type, the color and the license plate number of the vehicle corresponding to the pixel point, sequentially generates and stores unique character string codes, and generates a vehicle digital ID for each vehicle; the vehicle information quantization unit acquires a vehicle body image of a current time frame of the target vehicle, performs partial image processing on the vehicle body image to obtain discrete pixel points corresponding to the vehicle body, the color and the license plate, converts the discrete pixel points into discrete quantity values, generates a unique vehicle digital ID in corresponding time, and performs identity calibration on the target vehicle.
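The quantization step above can be sketched as follows. The hashing scheme, field names and 12-character truncation are illustrative assumptions; the patent only requires that a unique character string code be generated per vehicle and time frame:

```python
import hashlib

def vehicle_digital_id(vehicle_type, color, plate, time_frame):
    """Fold the quantized attributes (vehicle type, color, license plate,
    time frame) into a unique character-string code. Hypothetical scheme:
    SHA-1 of the joined fields, truncated to 12 hex characters."""
    key = f"{vehicle_type}|{color}|{plate}|{time_frame}"
    return hashlib.sha1(key.encode("utf-8")).hexdigest()[:12]

id_a = vehicle_digital_id("sedan", "white", "沪A12345", 1001)
id_b = vehicle_digital_id("sedan", "black", "沪A12345", 1001)
# id_a is stable for identical attributes; id_b differs because the color differs
```

Stable regeneration of the same ID from the same attributes is what allows the target vehicle to be re-identified across time frames.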
Preferably, the camera mechanism threshold unit is configured to determine a deflection angle threshold of the camera unit according to the number of vehicles in the current scene and to control the camera unit accordingly, specifically:
The azimuth of the i-th high-precision camera relative to the parking lot is (α_i, β_i, γ_i), where (x, y, z) is the spatial coordinate position of that camera in the parking lot; the number of target vehicles within the viewing-angle range determined by this azimuth is N_k, and a state matrix equation is constructed over these quantities.
With the camera's analysis computing power denoted R_χ and ξ the threshold set for the camera mechanism, when ξ ≤ N_k the camera mechanism threshold unit sends a deflection instruction to the camera to realize angular deflection of the camera.
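A minimal sketch of the threshold check, assuming a fixed deflection step; in the patent the actual deflection angle comes from the state matrix equation, so `step_deg` and the function interface here are invented for illustration:

```python
def camera_deflection_command(num_vehicles, xi, current_azimuth, step_deg=15.0):
    """If the vehicle count N_k in view reaches the threshold xi set for
    the camera mechanism (xi <= N_k), issue a deflection instruction,
    modelled here as a fixed azimuth step."""
    if num_vehicles >= xi:
        return current_azimuth + step_deg   # deflect to shed excess load
    return current_azimuth                  # within capacity: hold position

# 8 vehicles in view against a threshold of 6 -> the camera deflects:
deflected = camera_deflection_command(8, 6, 30.0)
# 3 vehicles -> no deflection:
held = camera_deflection_command(3, 6, 30.0)
```

The point of the threshold is load shedding: no single camera tracks more vehicles than its analysis computing power R_χ allows.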
Preferably, the UWB/IMU module is configured to obtain the distance between the target vehicle and the UWB base stations and the motion information of the vehicle, so as to obtain the virtual coordinate information and inertial forward direction of the target vehicle in the whole parking area, specifically:
The UWB/IMU module obtains the distance between the target vehicle and each base station: the spatial coordinates of each UWB base station in the parking lot are acquired, the transmission time of the pulse signal between each UWB base station and the target vehicle is measured, and the distance to each base station is computed as

l_{m,n} = c · t

where m and n identify different base stations, l_{m,n} is the corresponding distance, t is the pulse transmission time and c is the speed of light; from these distances the virtual coordinates (x, y, z) of the target vehicle in the parking lot are calculated.
The UWB/IMU module obtains motion information of the vehicle: accelerometer data E(ε) and gyroscope data E(σ) are acquired through the IMU inertial module, giving the inertial travelling direction of the target vehicle.
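The range computation l = c·t plus the known base-station coordinates lets the virtual coordinates be solved by trilateration. Below is a self-contained sketch using four anchors and a linearized solve; the solver choice is an assumption, since the patent does not specify how the coordinate system is resolved:

```python
import math

def uwb_trilaterate(anchors, dists):
    """Solve for the vehicle's virtual coordinates (x, y, z) from the
    ranges to four UWB base stations with known coordinates. The sphere
    equations are linearized against the first anchor and the resulting
    3x3 linear system is solved by Gaussian elimination."""
    (x0, y0, z0), d0 = anchors[0], dists[0]
    rows = []
    for (xi, yi, zi), di in zip(anchors[1:], dists[1:]):
        rows.append([
            2 * (xi - x0), 2 * (yi - y0), 2 * (zi - z0),
            d0 ** 2 - di ** 2
            + (xi ** 2 + yi ** 2 + zi ** 2)
            - (x0 ** 2 + y0 ** 2 + z0 ** 2),
        ])
    n = 3
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(rows[r][col]))
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(col + 1, n):
            f = rows[r][col] / rows[col][col]
            for c in range(col, n + 1):
                rows[r][c] -= f * rows[col][c]
    xyz = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        s = rows[r][n] - sum(rows[r][c] * xyz[c] for c in range(r + 1, n))
        xyz[r] = s / rows[r][r]
    return tuple(xyz)

# Four base stations at known positions; ranges to a vehicle at (2, 3, 1):
anchors = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
dists = [math.dist(a, (2, 3, 1)) for a in anchors]
estimate = uwb_trilaterate(anchors, dists)
# estimate ≈ (2.0, 3.0, 1.0)
```

With noisy real ranges one would use more anchors and a least-squares fit, but the linearization step is the same.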
Preferably, the data fusion module is configured to fuse virtual coordinate information of the UWB/IMU module and vehicle position information of a target vehicle of the camera unit, and transmit a fusion result to the central control module, specifically:
establishing a fusion target positioning optimization function:
Gi=f(xi-1,ui,wi)
Hi,j=h(yj,xi,vi,j)
where E(ε) and E(σ) are the accelerometer and gyroscope data measured by the IMU inertial module, taken together with the coordinate data obtained after UWB positioning processing; T is the time frame over which the target vehicle is observed; w_i is the vehicle response speed; G_i is the motion equation obtained by the camera unit tracking the target vehicle; H_{i,j} is the track prediction equation determined by the vehicle track analysis module; u_i and v_{i,j} are observation noise; x_i is the target vehicle position; and y_j is the coordinate of a parking space.
The minimum point of the target positioning optimization function is solved to obtain the finally optimized real-time accurate position information of the vehicle.
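A minimal sketch of the fusion idea under a strong simplification: if only the UWB and camera position estimates are fused with scalar weights, the minimizer of the weighted squared residuals has a closed form. The patent's full objective additionally folds in G_i, H_{i,j} and the IMU terms, which are omitted here:

```python
def fuse_positions(p_uwb, p_cam, w_uwb, w_cam):
    """Closed-form minimizer of
    w_uwb*||x - p_uwb||^2 + w_cam*||x - p_cam||^2:
    an inverse-variance-style weighted average of the two estimates."""
    return tuple(
        (w_uwb * u + w_cam * c) / (w_uwb + w_cam)
        for u, c in zip(p_uwb, p_cam)
    )

# Camera trusted 3x more than UWB; the fused point leans toward the camera:
fused = fuse_positions((10.2, 4.9, 0.0), (10.0, 5.1, 0.0), w_uwb=1.0, w_cam=3.0)
# fused ≈ (10.05, 5.05, 0.0)
```

In the full system the weights would reflect each sensor's noise level, so the fused position tracks whichever source is currently more reliable.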
An intelligent parking positioning method based on UWB/IMU and visual information fusion, based on the intelligent parking positioning system based on UWB/IMU and visual information fusion, comprises the following steps:
S1, acquiring current image information of a parking lot through a camera unit, tracking a target vehicle and a scene where the vehicle is located in real time, obtaining a motion equation, and determining vehicle position information of the target vehicle, wherein the current image information of the parking lot comprises vehicle image information, obstacle image information, surrounding parking spaces and lane line image information;
s2, quantifying the image information acquired by the camera unit into vehicle information through a vehicle information quantification unit, determining the number of vehicles in the current scene acquired by the camera unit, and calibrating a target vehicle;
s3, determining a deflection angle threshold value of the camera unit through a camera mechanism threshold value unit according to the number of vehicles in the current scene and controlling the camera unit to execute;
S4, the UWB/IMU module obtains the distance between the target vehicle and the UWB base station and the motion information of the vehicle, so that virtual coordinate information and the inertial advancing direction of the target vehicle in the whole parking area are obtained;
S5, the vehicle track analysis module acquires track information of the target vehicle according to the vehicle image information at continuous moments, and transmits the track information to the central control module for learning of the environmental error neural network learning model;
S6, establishing an environmental error perception deep learning model by using a convolutional neural network in the environmental error neural network learning model, extracting error factors of positioning deviation caused by environmental factors, and helping the UWB/IMU module and the camera unit to correct positioning accuracy;
s7, the signal transmission and processing module transmits the data of the UWB/IMU module and the camera unit to the central control module;
S8, the calculation and positioning display module performs coordinate calculation and visual position visual tracking display according to the data of the UWB/IMU module and the camera unit;
s9, the data fusion module fuses the virtual coordinate information of the UWB/IMU module and the vehicle position information of the target vehicle of the camera unit to obtain real-time accurate position information of the vehicle, and transmits the fusion result to the central control module;
S10, the path guiding unit screens out lane information with the least passing vehicles according to the current image information of the parking lot, the parking space guiding unit screens out empty parking spaces according to the current image information of the parking lot, and the central control module guides the target vehicles to park according to real-time accurate position information of the vehicles, the lane information and coordinates of the empty parking spaces.
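The ten steps above can be sketched as one orchestration cycle; every module below is a hypothetical callable stand-in wired together for illustration, not the patent's implementation:

```python
def parking_cycle(modules, camera_frame, uwb_ranges, imu_data):
    """One positioning cycle following steps S1-S10; `modules` maps step
    names to callables standing in for the system's modules."""
    vehicles = modules["quantize"](camera_frame)             # S1-S2: image -> vehicle info, IDs
    modules["adjust_cameras"](len(vehicles))                 # S3: deflection-threshold check
    p_uwb = modules["uwb_locate"](uwb_ranges)                # S4: UWB virtual coordinates
    heading = modules["imu_heading"](imu_data)               # S4: inertial travel direction
    p_cam = modules["track"](camera_frame, vehicles)         # S5: camera tracking position
    p_uwb, p_cam = modules["correct_errors"](p_uwb, p_cam)   # S6: environmental-error removal
    fused = modules["fuse"](p_uwb, p_cam)                    # S7-S9: transmit, display, fuse
    return modules["guide"](fused, heading)                  # S10: path and space guidance

# Trivial stand-ins wiring the cycle together:
demo = {
    "quantize": lambda frame: ["car-001"],
    "adjust_cameras": lambda n: None,
    "uwb_locate": lambda ranges: (10.2, 5.0),
    "imu_heading": lambda data: 90.0,
    "track": lambda frame, vehicles: (10.0, 5.2),
    "correct_errors": lambda a, b: (a, b),
    "fuse": lambda a, b: tuple((u + v) / 2 for u, v in zip(a, b)),
    "guide": lambda pos, heading: {"position": pos, "heading": heading},
}
result = parking_cycle(demo, camera_frame=None, uwb_ranges=[], imu_data={})
```

The dict-of-callables shape is only a convenience for showing the data flow; in the real system each entry corresponds to one of the modules enumerated in S1-S10.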
Preferably, in step S6, the method for extracting an error factor of the positioning deviation caused by the environmental factor includes:
The coordinates of the n-th fixed UWB base station, U_n = (x_n, y_n, z_n), are known; the position of the vehicle to be positioned at time t is denoted N_t = (x_t, y_t, z_t). The distance from the UWB base station to the target vehicle at time t is

d_n^t = sqrt((x_n - x_t)^2 + (y_n - y_t)^2 + (z_n - z_t)^2) + ε_t

where ε_t is the error factor at that time.
The error factors ε_t at different times are substituted into the environmental error neural network learning model for learning; the model combines and abstracts them layer by layer into high-level feature quantities. Its weights and coefficients are functions of the target vehicle travel speed v, the travel time difference (T_{i+1} - T_i) between a moment T_i and the next recorded time frame T_{i+1} in the vehicle's advance, and the vehicle wheel rotation angle θ_i at that time.
Preferably, step S2 is specifically:
The vehicle information quantization unit obtains vehicle image information, quantizes the image information into the vehicle type, the color and the license plate number of the vehicle corresponding to the pixel point, sequentially generates and stores unique character string codes, and generates a vehicle digital ID for each vehicle; the vehicle information quantization unit acquires a vehicle body image of a current time frame of the target vehicle, performs partial image processing on the vehicle body image to obtain discrete pixel points corresponding to the vehicle body, the color and the license plate, converts the discrete pixel points into discrete quantity values, generates a unique vehicle digital ID in corresponding time, and performs identity calibration on the target vehicle.
Preferably, step S3 is specifically:
The azimuth of the i-th high-precision camera relative to the parking lot is (α_i, β_i, γ_i), where (x, y, z) is the spatial coordinate position of that camera in the parking lot; the number of target vehicles within the viewing-angle range determined by this azimuth is N_k, and a state matrix equation is constructed over these quantities.
With the camera's analysis computing power denoted R_χ and ξ the threshold set for the camera mechanism, when ξ ≤ N_k the camera mechanism threshold unit sends a deflection instruction to the camera to realize angular deflection of the camera.
Preferably, step S4 is specifically:
The UWB/IMU module obtains the distance between the target vehicle and each base station: the spatial coordinates of each UWB base station in the parking lot are acquired, the transmission time of the pulse signal between each UWB base station and the target vehicle is measured, and the distance to each base station is computed as

l_{m,n} = c · t

where m and n identify different base stations, l_{m,n} is the corresponding distance, t is the pulse transmission time and c is the speed of light; from these distances the virtual coordinates (x, y, z) of the target vehicle in the parking lot are calculated.
The UWB/IMU module obtains motion information of the vehicle: accelerometer data E(ε) and gyroscope data E(σ) are acquired through the IMU inertial module, giving the inertial travelling direction of the target vehicle.
Preferably, step S9 is specifically:
establishing a fusion target positioning optimization function:
Gi=f(xi-1,ui,wi)
Hi,j=h(yj,xi,vi,j)
where E(ε) and E(σ) are the accelerometer and gyroscope data measured by the IMU inertial module, taken together with the coordinate data obtained after UWB positioning processing; T is the time frame over which the target vehicle is observed; w_i is the vehicle response speed; G_i is the motion equation obtained by the camera unit tracking the target vehicle; H_{i,j} is the track prediction equation determined by the vehicle track analysis module; u_i and v_{i,j} are observation noise; x_i is the target vehicle position; and y_j is the coordinate of a parking space.
The minimum point of the target positioning optimization function is solved to obtain the finally optimized real-time accurate position information of the vehicle.
Compared with the prior art, the invention has the following beneficial effects:
(1) In the intelligent parking process, dual-model fusion positioning is realized with the assistance of parking-lot-side equipment, solving the problem of real-time, high-precision vehicle tracking in an unfamiliar parking environment; the intelligent parking process is realized through cooperation between the parking lot and the vehicle.
(2) An environmental error neural network learning model is designed: a convolutional neural network builds an environmental error perception deep learning model, extracts the error factors by which environmental factors cause positioning deviation, and combines and abstracts them layer by layer into high-level features, so the error factors can be removed and the positioning accuracy of the target vehicle improved.
(3) The camera mechanism threshold unit determines the deflection angle of the cameras according to the number of vehicles in the current scene, so the parking lot's camera mechanism dynamically monitors every travelling vehicle in the lot and no single camera monitors too many vehicles, avoiding the situation where computing power is insufficient and the target vehicle can no longer be tracked.
Drawings
FIG. 1 is a schematic diagram of a configuration of an intelligent park locating system;
FIG. 2 is a schematic diagram of a central control module;
FIG. 3 is a flow chart of a smart park locating method;
FIG. 4 is a schematic view of a usage scenario of the intelligent parking location system;
FIG. 5 is a flowchart of an intelligent parking method based on UWB/IMU and visual information fusion in an embodiment of the invention;
FIG. 6 is a flowchart of a method for removing an environmental error factor fusion location in an embodiment of the present invention;
FIG. 7 is a flowchart of a method for fusion transmission of positioning information in an embodiment of the present invention;
Reference numerals: 1. central control module; 2. UWB/IMU module; 3. camera unit; 4. signal transmission and processing module; 5. calculation and positioning display module; 6. vehicle track analysis module; 7. data fusion module; 8. vehicle information quantization unit; 9. camera mechanism threshold unit; 10. environmental error neural network learning model; 11. path guiding unit; 12. parking space guiding unit.
Detailed Description
For a further understanding of the present invention, reference will now be made in detail to the present invention, examples of which are illustrated in the accompanying drawings. The embodiment is implemented on the premise of the technical scheme of the invention, and detailed implementation modes and specific operation processes are given. It should be understood that these descriptions are merely provided to further illustrate the features and advantages of the present invention, and are not intended to limit the scope of the claims. The description of this section is intended to be illustrative of only a few exemplary embodiments and the invention is not to be limited in scope by the description of the embodiments. It is also within the scope of the description and claims of the invention to interchange some of the technical features of the embodiments with other technical features of the same or similar prior art.
Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic may be included in at least one implementation of the invention. In the description of the present invention, it should be understood that the terms "comprise" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Example 1:
An intelligent parking positioning system based on UWB/IMU and visual information fusion, used for vehicle positioning in a parking lot, comprises a central control module, a UWB/IMU module, a camera unit, a signal transmission and processing module, a calculation and positioning display module, a vehicle track analysis module and a data fusion module; the central control module comprises a vehicle information quantization unit, a camera mechanism threshold unit, an environmental error neural network learning model, a path guiding unit and a parking space guiding unit, as shown in fig. 1. In this embodiment, the application scenario is shown in fig. 4: UWB base stations are installed in the parking lot, UWB transmitters are installed on target vehicles, the camera unit comprises multiple cameras installed in the parking lot, the signal transmission and processing unit realizes data transmission between the target vehicles and the parking lot, and an IMU inertial module is installed on each target vehicle.
In this embodiment, parking positioning is realized mainly by fusing UWB/IMU positioning with vision-assisted positioning at the parking lot end. On one hand, the UWB/IMU module acquires the distance between the target vehicle and each base station, yielding the virtual coordinate information and inertial travelling direction of the target vehicle across the whole parking lot. On the other hand, high-precision cameras acquire image information of the target vehicle, obstacles and surrounding parking space lines in the parking lot, including the vehicle body and environment image information of the current time frame and the coordinate positions of empty parking spaces. The virtual coordinate information from the UWB/IMU module is fused with the vehicle position information tracked by the high-precision cameras; empty space coordinates and the lanes with fewer passing vehicles are determined; and path planning to the designated empty space is carried out in cooperation with the target vehicle, realizing intelligent parking.
Meanwhile, an environmental error neural network learning model is established to account for the environmental errors present in different parking lots, and the accuracy of the positioning information acquired by the modules is improved by analyzing and eliminating these errors. The vehicle track analysis module acquires track information of the target vehicle, obtains its position at different moments, and provides the analysis results for the environmental error neural network to learn from. The environmental error neural network learning model is built with a convolutional neural network as an environmental error perception deep learning model: it extracts the error factors by which environmental factors cause positioning deviation, and combines and abstracts them layer by layer into high-level features that help the UWB/IMU module and the high-precision cameras correct their positioning accuracy.
In addition, the application performs information quantization processing on the images collected by the camera unit: the collected image information is quantized into the vehicle type, color and license plate number of the vehicle corresponding to the pixel points, and unique character string codes are generated and stored in sequence. This serves, on the one hand, to calibrate the target vehicle; on the other hand, the camera mechanism threshold setting unit analyzes the number of received codes to judge how many vehicles are in the scene currently captured by each high-precision camera, and sets the camera's deflection angle threshold accordingly, so that the parking lot camera mechanism dynamically monitors every traveling vehicle in the parking lot.
The intelligent parking positioning system based on UWB/IMU and visual information fusion upgrades the parking lot accordingly. By fusing the UWB/IMU module with visual information, it overcomes the shortcomings of purely vehicle-side intelligent parking and helps intelligent parking reach commercial deployment sooner.
Specifically, the intelligent parking positioning system based on UWB/IMU and visual information fusion works as follows:
(1) The camera unit comprises a plurality of cameras and is used for acquiring current image information of the parking lot and tracking the target vehicle and its scene in real time, yielding a motion equation G_i = f(x_{i-1}, u_i, w_i), where x_i is the position of the target vehicle, u_i is observation noise and w_i is the vehicle response speed, and thereby determining the vehicle position information of the target vehicle; the current image information of the parking lot comprises vehicle image information, obstacle image information, and surrounding parking space and lane line image information. The spatial coordinates of each camera in the parking lot are known, as are its deflection angle, focal length and other parameters, so once calibration is complete, the spatial coordinates of vehicles, obstacles, lane lines and parking spaces in the parking lot can be determined by analyzing the images the cameras acquire.
In this embodiment, the lane line elements, parking space elements and obstacle classes are calibrated together. The lane line elements are calibrated according to the three forms common in existing parking lots: white dashed lines, solid yellow lines, and white left/right-turn arrow markings; for each form, the high-precision cameras acquire the information and send it to the central control module of the target vehicle. The parking space elements are calibrated according to the three parking space forms common on the current market: perpendicular, parallel and angled; again the high-precision cameras acquire the information for each form and send it to the central control module of the target vehicle. The obstacle classification calibration covers the obstacles common in current parking lots, mainly vehicles, pets, pedestrians and traffic signs; the high-precision cameras acquire the information for each of these four classes and send it to the central control module of the target vehicle, realizing blind-spot collision early warning.
(2) The vehicle information quantization unit is used for quantizing the image information acquired by the camera unit into vehicle information, determining the number of vehicles in the current scene acquired by the camera unit and calibrating the target vehicle;
The vehicle information quantization unit obtains vehicle body image information in a parking lot where a target vehicle is currently located, quantizes the image information into a vehicle type, a color and a license plate number of the vehicle corresponding to the pixel point, sequentially generates a unique character string code for storage, and generates a vehicle digital ID for each vehicle; the vehicle information quantization unit receives a vehicle body image of a current time frame of a target vehicle acquired by the high-precision camera, performs local image processing on the vehicle body image to obtain discrete pixel points corresponding to a vehicle body, a color and a license plate, converts the discrete pixel points into discrete quantity values, generates a unique vehicle digital ID in corresponding time, and performs identity calibration on the target vehicle.
Calibrating the target vehicle through the vehicle information quantization unit enables real-time tracking of the target vehicle: the camera unit converts each successive image frame into its coded data, compares the codes, and judges whether they match. If they match, the visual information data is output; otherwise the coded information of the acquired images is searched again for matching data. The matched data can also be supplied to the vehicle track analysis module.
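As a minimal sketch of the quantization and frame-to-frame matching described above (the attribute fields, the "|" separator and the SHA-1-based digital ID are illustrative assumptions, not the encoding actually used by the system):

```python
import hashlib

def quantize_vehicle(vehicle_type: str, color: str, plate: str) -> str:
    # Quantized attributes joined into one unique character-string code
    # (the "|" separator is an assumption for illustration).
    return f"{vehicle_type}|{color}|{plate}"

def vehicle_digital_id(code: str) -> str:
    # Derive a fixed-length vehicle digital ID from the string code
    # (a SHA-1 prefix is chosen arbitrarily here).
    return hashlib.sha1(code.encode("utf-8")).hexdigest()[:12]

def frames_match(code_prev: str, code_curr: str) -> bool:
    # Frame-to-frame comparison: visual information data is output only
    # on a match; otherwise the acquired image codes are searched again.
    return code_prev == code_curr
```

A `False` return from `frames_match` would trigger the re-search of the acquired image codes described above.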
(3) The camera mechanism threshold unit is used for determining a deflection angle threshold of the camera unit according to the number of vehicles in the current scene and controlling the camera unit to execute so as to set high-precision camera deflection angle thresholds under different numbers of vehicles;
The azimuth angle corresponding to the spatial coordinates (x, y, z) of the i-th high-precision camera in the parking lot is (α_i, β_i, γ_i), where (x, y, z) is the spatial coordinate position of the i-th high-precision camera in the parking lot; the number of target vehicles within the viewing angle range determined from the azimuth angle is N_k, and a state matrix equation is constructed:
The camera's analysis computing power is R_χ and ξ is the threshold set for the camera mechanism; when ξ ≤ N_k, the camera mechanism threshold unit sends a deflection instruction to the camera, realizing the angular deflection of the camera.
In this way the deflection angles of the cameras are determined by the number of vehicles in the current scene, so that the parking lot camera mechanism dynamically monitors every traveling vehicle, and no camera monitors so many vehicles that its computing power becomes insufficient and it loses track of the target vehicle.
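The threshold rule above can be sketched as follows, assuming a hypothetical mapping from the camera's computing power R_χ to the threshold ξ (the per-vehicle compute cost parameter is invented for illustration):

```python
def deflection_threshold(compute_power: float, per_vehicle_cost: float) -> int:
    # Hypothetical: cap the threshold xi by how many vehicles one camera's
    # analysis computing power R can track at once.
    return max(1, int(compute_power // per_vehicle_cost))

def should_deflect(n_vehicles_in_view: int, xi: int) -> bool:
    # Per the rule above: when xi <= N_k, a deflection instruction is sent.
    return xi <= n_vehicles_in_view
```

With `deflection_threshold(8.0, 2.0)` giving ξ = 4, a view containing 5 vehicles would trigger a deflection instruction while a view with 3 would not.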
(4) The vehicle track analysis module is used for acquiring track information of a target vehicle according to the vehicle image information at continuous moments, and transmitting the track information to the central control module for learning of the environmental error neural network learning model;
The vehicle track analysis module determines the track analysis target vehicle and its characteristic information through the high-precision cameras, and locks the target vehicle position information by matching the characteristic information against the vehicle information quantization unit. The vehicle traveling direction is measured from vehicle tire contour detection and deflection angle, and the traveling path is obtained by updating the target vehicle's position information across different time frames, yielding the track information of the target vehicle. The track prediction equation is:
H_{i,j} = h(y_j, x_i, v_{i,j})
where v_{i,j} is observation noise, x_i is the target vehicle position, and y_j is a parking space coordinate point.
By performing this vehicle track analysis and transmitting the results to the central control module, the environmental error neural network model can learn from them, so the whole system is continuously optimized and the robustness and positioning accuracy of the intelligent parking system improve.
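The position-update part of this track analysis can be sketched as follows (tire-contour deflection measurement is abstracted into a heading computed from successive position fixes, an assumption for illustration):

```python
import math

def travel_direction_deg(p_prev, p_curr):
    # Heading in degrees from two successive (x, y) position fixes.
    return math.degrees(math.atan2(p_curr[1] - p_prev[1],
                                   p_curr[0] - p_prev[0]))

def build_track(frames):
    # frames: list of (timestamp, (x, y)) across time frames; returns
    # the updated track as (timestamp, position, heading) tuples.
    track = []
    for (t0, p0), (t1, p1) in zip(frames, frames[1:]):
        track.append((t1, p1, travel_direction_deg(p0, p1)))
    return track
```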
(5) The UWB/IMU module is used for acquiring the distance between the target vehicle and the UWB base station and the motion information of the vehicle, so as to acquire virtual coordinate information and the inertial advancing direction of the target vehicle in the whole domain of the parking lot;
The UWB/IMU module obtains the distance between the target vehicle and each base station: the spatial coordinates of each UWB base station in the parking lot are obtained, the propagation time of the pulse signal between each UWB base station and the target vehicle is measured, and the distance between each base station and the target vehicle is calculated, from which the virtual coordinate information of the target vehicle in the parking lot can be computed:
where m and n identify different base stations, l_{m,n} denotes the distance between UWB base stations m and n, t is the pulse transmission time, c is the speed of light, and (x, y, z) are the virtual coordinates of the target vehicle in the parking lot;
In this embodiment there are 4 UWB base stations in total, whose spatial coordinates in the parking lot are known: (x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3) and (x_4, y_4, z_4). Combined with the above equations, these yield the virtual coordinates (x, y, z) of the target vehicle in the parking lot.
The UWB/IMU module obtains the motion information of the vehicle: accelerometer data E(ε) and gyroscope data E(σ) are acquired through the IMU inertial module, giving the inertial travelling direction of the target vehicle.
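The ranging and coordinate computation described above can be sketched as follows; the linearized least-squares trilateration shown here (subtracting the first sphere equation and solving the resulting 3×3 system) is a standard technique, not necessarily the patent's exact computation:

```python
C = 299_792_458.0  # speed of light, m/s

def pulse_distance(t_flight: float) -> float:
    # Distance from the pulse propagation time: l = c * t.
    return C * t_flight

def trilaterate(stations, dists):
    # stations: four known base-station coordinates (x, y, z);
    # dists: measured distances to the target vehicle.
    # Linearize by subtracting the first sphere equation, then solve
    # the 3x3 system by Gaussian elimination with partial pivoting.
    (x1, y1, z1), d1 = stations[0], dists[0]
    A, b = [], []
    for (xi, yi, zi), di in zip(stations[1:], dists[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1), 2 * (zi - z1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2
                 + yi**2 - y1**2 + zi**2 - z1**2)
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return tuple(x)
```

With the four base stations at (0,0,0), (10,0,0), (0,10,0), (0,0,10) and exact range measurements, `trilaterate` recovers the vehicle's virtual coordinates.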
(6) The environmental error neural network learning model establishes an environmental error perception deep learning model with a convolutional neural network, extracts the error factors by which environmental factors cause positioning deviation, and combines and abstracts them layer by layer into high-level features, helping the UWB/IMU module and the camera unit correct their positioning accuracy. In the forward computation of the network, several convolution kernels are convolved with the input at each convolutional layer simultaneously, producing multiple feature maps whose dimensions are reduced relative to the input. In the sub-sampling layer, each feature map is pooled to obtain a corresponding map of further reduced dimension. After these layers are stacked alternately in sequence, the result reaches the network output through a fully connected layer, so the whole intelligent parking system can learn actively, improving its robustness and positioning accuracy;
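A toy forward pass illustrating the convolution → pooling → fully connected structure described above (single channel, one hand-written kernel; a real model uses many learned kernels and weights):

```python
def conv2d_valid(img, kernel):
    # Valid convolution: the output feature map is smaller than the input.
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def max_pool2x2(fm):
    # Sub-sampling layer: 2x2 max pooling further reduces the dimension.
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, len(fm[0]) - 1, 2)]
            for i in range(0, len(fm) - 1, 2)]

def dense(flat, weights, bias):
    # Fully connected layer producing the network output.
    return sum(f * w for f, w in zip(flat, weights)) + bias
```

For a 6×6 input and a 3×3 kernel, the feature map is 4×4, the pooled map 2×2, and the dense layer maps its 4 values to a single output, mirroring the layer-by-layer dimension reduction described.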
The coordinates of the n-th fixed UWB base station, U_n = (x_n, y_n, z_n), are known; the position of the vehicle to be positioned at time t is denoted N_t = (x_t, y_t, z_t); the distance from the UWB base station to the target vehicle at time t is:
d_n^t = sqrt((x_n − x_t)^2 + (y_n − y_t)^2 + (z_n − z_t)^2) + ε_t
where ε_t is the error factor at this time;
The error factors ε_t at different times are substituted into the environmental error neural network learning model for learning, producing the high-level feature quantities; the weight coefficients are set from v(T_{i+1} − T_i) and θ_i, where v is the target vehicle's travel speed, T_{i+1} and T_i are the times recorded at a given moment and the following time frame during the target vehicle's travel, (T_{i+1} − T_i) is the travel time difference, and θ_i is the vehicle wheel rotation angle at that time.
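A minimal sketch of how an error factor ε_t could be computed from a range measurement, taking it as the residual between the measured distance and the geometric distance from base station U_n to vehicle position N_t (an interpretation consistent with the distance equation above):

```python
import math

def error_factor(measured_dist, base_station, vehicle_pos):
    # epsilon_t: measured UWB distance minus the true geometric distance
    # sqrt((x_n - x_t)^2 + (y_n - y_t)^2 + (z_n - z_t)^2).
    return measured_dist - math.dist(base_station, vehicle_pos)
```

These residuals, gathered over many time frames, would form the training signal fed to the environmental error learning model.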
It should be noted that the environmental error neural network learning model is used to correct positioning accuracy. In practical application, the scenes captured by the camera unit are divided into those with and those without environmental interference. For scenes without environmental interference, the environmental error neural network learning model is not used: the real-time accurate position information is obtained directly by fusing the virtual coordinate information of the UWB/IMU module with the camera unit's target vehicle position information, completing the pose estimation. For scenes with environmental interference, the environmental error neural network learning model is first needed to help the UWB/IMU module and the camera unit correct their positioning accuracy, after which the pose estimation is completed through fusion.
(7) The signal transmission and processing module is used for transmitting the data of the UWB/IMU module and the camera unit to the central control module, and mainly comprises the current UWB positioning coordinates of the target vehicle and information after quantization coding of surrounding environment images;
(8) The calculation and positioning display module is used for performing coordinate calculation and visual position tracking display according to the data of the UWB/IMU module and the camera unit; specifically, it processes the coordinate and image information output by the UWB/IMU module and the high-precision cameras, performing UWB/IMU positioning coordinate calculation and visual position tracking display;
(9) The data fusion module is used for fusing the virtual coordinate information of the UWB/IMU module and the vehicle position information of the target vehicle of the camera unit to obtain real-time accurate position information, and transmitting the fusion result to the central control module; the fusion process is as follows:
establishing a fusion target positioning optimization function:
G_i = f(x_{i-1}, u_i, w_i)
H_{i,j} = h(y_j, x_i, v_{i,j})
where E(ε) and E(σ) are the accelerometer and gyroscope data measured by the IMU inertial module, together with the coordinate data after UWB positioning processing; T is the time frame over which the target vehicle is observed, w_i is the vehicle response speed, G_i is the motion equation obtained by the camera unit tracking the target vehicle, H_{i,j} is the track prediction equation determined by the vehicle track analysis module, u_i and v_{i,j} are observation noise, x_i is the target vehicle position, and y_j is a parking space coordinate; the subscripts carry no special meaning beyond indicating which data are substituted into the functions for calculation. Solving the target positioning optimization function for its minimum point yields the finally optimized real-time accurate position information of the vehicle, which resolves positioning drift and visual positioning deviation and improves the positioning accuracy.
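A hedged sketch of the fusion step: if the optimization is reduced to a weighted quadratic cost over the two position estimates, its minimum has the closed form below (the patent's full function also involves the motion and track-prediction equations, which are omitted in this simplification):

```python
def fuse_positions(uwb_xyz, cam_xyz, w_uwb=1.0, w_cam=1.0):
    # Closed-form minimizer of the quadratic cost
    #   w_uwb * |p - uwb|^2 + w_cam * |p - cam|^2,
    # i.e. the weighted mean of the UWB/IMU virtual coordinates and the
    # camera-tracked vehicle position.
    s = w_uwb + w_cam
    return tuple((w_uwb * u + w_cam * c) / s for u, c in zip(uwb_xyz, cam_xyz))
```

Raising `w_cam` relative to `w_uwb` would express greater trust in the visual fix, e.g. in areas of strong UWB multipath.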
(10) The path guiding unit is used for screening out lane information with the least passing vehicles according to the current image information of the parking lot; the parking space guiding unit is used for screening out empty parking spaces according to the current image information of the parking lot; and the central control module guides the target vehicle to park according to the real-time accurate position information of the vehicle, the lane information and the coordinates of the empty parking space.
The high-precision cameras acquire vehicle environment image information and empty parking space image information in real time; the UWB/IMU module acquires distance information between the vehicle and the parking lot's UWB base stations; the signal transmission and processing module receives and transmits the positioning signals relayed by the central control module; the calculation and positioning display module processes the vehicle's position information in the simulated coordinates in real time; the vehicle track analysis module tracks the vehicle's motion track and uploads it to the central control system to correct the positioning accuracy of the UWB/IMU module; and the data fusion module resolves positioning drift and visual positioning deviation, improving positioning accuracy. In the intelligent parking process, the dual-model fusion positioning realized with the assistance of the parking-lot-end equipment solves real-time high-precision position tracking of the vehicle in an unfamiliar parking environment, and intelligent parking is realized through coordinated cooperation between the parking lot equipment and the vehicle.
Example 2:
The intelligent parking positioning method based on UWB/IMU and visual information fusion is based on the intelligent parking positioning system described in embodiment 1; its flow chart is shown in FIG. 3, and further details can be understood with reference to FIGS. 5-7. The present specification presents the method's operation steps as an example flow, but more or fewer operation steps may be included without inventive labor. The order of steps recited in the embodiments is merely one way of executing them and does not represent the only order of execution. When an actual system or server product executes, the steps may, following the methods shown in the embodiments or figures, be performed sequentially or in parallel (e.g., in a parallel processor or multi-threaded processing environment) or in an order not constrained by timing. Specifically, the intelligent parking positioning method based on UWB/IMU and visual information fusion comprises the following steps:
S1, acquiring current image information of a parking lot through a camera unit, tracking a target vehicle and a scene where the vehicle is located in real time, obtaining a motion equation, and determining vehicle position information of the target vehicle, wherein the current image information of the parking lot comprises vehicle image information, obstacle image information, surrounding parking spaces and lane line image information;
s2, quantifying the image information acquired by the camera unit into vehicle information through a vehicle information quantification unit, determining the number of vehicles in the current scene acquired by the camera unit, and calibrating a target vehicle;
s3, determining a deflection angle threshold value of the camera unit through a camera mechanism threshold value unit according to the number of vehicles in the current scene and controlling the camera unit to execute;
S4, the UWB/IMU module obtains the distance between the target vehicle and the UWB base station and the motion information of the vehicle, so that virtual coordinate information and the inertial advancing direction of the target vehicle in the whole parking area are obtained;
S5, the vehicle track analysis module acquires track information of the target vehicle according to the vehicle image information at continuous moments, and transmits the track information to the central control module for learning of the environmental error neural network learning model;
S6, establishing an environmental error perception deep learning model by using a convolutional neural network in the environmental error neural network learning model, extracting error factors of positioning deviation caused by environmental factors, and helping the UWB/IMU module and the camera unit to correct positioning accuracy;
s7, the signal transmission and processing module transmits the data of the UWB/IMU module and the camera unit to the central control module;
S8, the calculation and positioning display module performs coordinate calculation and visual position tracking display according to the data of the UWB/IMU module and the camera unit;
s9, the data fusion module fuses the virtual coordinate information of the UWB/IMU module and the vehicle position information of the target vehicle of the camera unit to obtain real-time accurate position information of the vehicle, and transmits the fusion result to the central control module;
S10, the path guiding unit screens out lane information with the least passing vehicles according to the current image information of the parking lot, the parking space guiding unit screens out empty parking spaces according to the current image information of the parking lot, and the central control module guides the target vehicles to park according to real-time accurate position information of the vehicles, the lane information and coordinates of the empty parking spaces.
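Steps S9-S10 can be sketched as a small decision function (equal-weight fusion, lane selection by vehicle count, and squared-distance slot selection are simplifying assumptions for illustration):

```python
def guide_to_slot(vision_fix, uwb_fix, lane_counts, free_slots):
    # S9: equal-weight fusion of the camera and UWB/IMU position fixes
    # (a placeholder for the full target positioning optimization).
    fused = tuple((v + u) / 2.0 for v, u in zip(vision_fix, uwb_fix))
    # S10: pick the lane with the fewest passing vehicles...
    lane = min(lane_counts, key=lane_counts.get)
    # ...and the empty parking space nearest the fused position.
    slot = min(free_slots,
               key=lambda s: sum((a - b) ** 2 for a, b in zip(s, fused)))
    return fused, lane, slot
```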
In the above steps, the specific implementation details of each step are the same as those of embodiment 1, and are not repeated here.
In one exemplary configuration of the application, the terminal, the device of the service network, and the trusted party each include one or more processors (e.g., central processing units (Central Processing Unit, CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory in computer readable media, random access memory (Random Access Memory, RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (Phase-Change RAM, PRAM), static random access memory (Static Random Access Memory, SRAM), dynamic random access memory (Dynamic Random Access Memory, DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), flash memory or other memory technology, compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), digital versatile disc (Digital Versatile Disc, DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media does not include transitory computer readable media (transmission media), such as modulated data signals and carrier waves.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, e.g., using Application Specific Integrated Circuits (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to perform the steps or functions described above. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Furthermore, portions of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application by way of operation of the computer. Program instructions for invoking the inventive methods may be stored in fixed or removable recording media and/or transmitted via a data stream in a broadcast or other signal bearing medium and/or stored within a working memory of a computer device operating according to the program instructions. An embodiment according to the application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to operate a method and/or a solution according to the embodiments of the application as described above.
The description and applications of the present invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Effects or advantages referred to in the embodiments may not be embodied in the embodiments due to interference of various factors, and description of the effects or advantages is not intended to limit the embodiments. Variations and modifications of the embodiments disclosed herein are possible, and alternatives and equivalents of the various components of the embodiments are known to those of ordinary skill in the art. It will be clear to those skilled in the art that the present invention may be embodied in other forms, structures, arrangements, proportions, and with other assemblies, materials, and components, without departing from the spirit or essential characteristics thereof. Other variations and modifications of the embodiments disclosed herein may be made without departing from the scope and spirit of the invention.

Claims (9)

1. The intelligent parking positioning system based on UWB-IMU and visual information fusion is characterized by comprising a central control module, a UWB-IMU module, a camera unit, a signal transmission and processing module, a calculation and positioning display module, a vehicle track analysis module and a data fusion module, wherein the central control module comprises a vehicle information quantization unit, a camera mechanism threshold unit, an environmental error neural network learning model, a path guiding unit and a parking space guiding unit;
The camera unit comprises a plurality of cameras and is used for acquiring current image information of a parking lot, tracking a target vehicle and a scene where the vehicle is located in real time, obtaining a motion equation and determining vehicle position information of the target vehicle, wherein the current image information of the parking lot comprises vehicle image information, obstacle image information, surrounding parking spaces and lane line image information;
The vehicle information quantization unit is used for quantizing the image information acquired by the camera unit into vehicle information, determining the number of vehicles in the current scene acquired by the camera unit and calibrating the target vehicle;
The camera mechanism threshold unit is used for determining a deflection angle threshold of the camera unit according to the number of vehicles in the current scene and controlling the camera unit to execute;
The vehicle track analysis module is used for acquiring track information of a target vehicle according to vehicle image information at continuous moments, and transmitting the track information to the central control module for learning of an environmental error neural network learning model;
The UWB-IMU module is used for acquiring the distance between the target vehicle and the UWB base station and the motion information of the vehicle, so as to acquire virtual coordinate information and the inertial advancing direction of the target vehicle in the whole parking area;
the environment error neural network learning model is used for establishing an environment error perception deep learning model by using a convolutional neural network, extracting error factors of positioning deviation caused by environment factors, and helping a UWB-IMU module and a camera unit to correct positioning accuracy;
the signal transmission and processing module is used for transmitting data of the UWB-IMU module and the camera unit to the central control module;
the calculation and positioning display module is used for performing coordinate calculation and visual position tracking display according to the data of the UWB-IMU module and the camera unit;
The data fusion module is used for fusing virtual coordinate information of the UWB-IMU module and vehicle position information of a target vehicle of the camera unit to obtain real-time accurate position information, and transmitting a fusion result to the central control module;
the path guiding unit is used for screening out lane information with the least passing vehicles according to the current image information of the parking lot;
the parking space guiding unit is used for screening out empty parking spaces according to the current image information of the parking lot;
the environmental error neural network learning model establishes an environmental error perception deep learning model with a convolutional neural network, extracts the error factors by which environmental factors cause positioning deviation, and combines and abstracts them layer by layer into high-level features used to help the UWB-IMU module correct positioning accuracy; the manner of extracting the error factors by which environmental factors cause positioning deviation comprises the following steps:
The coordinates of the n-th fixed UWB base station, U_n = (x_n, y_n, z_n), are known; the position of the vehicle to be positioned at time t is denoted N_t = (x_t, y_t, z_t); the distance from the UWB base station to the target vehicle at time t is:
d_n^t = sqrt((x_n − x_t)^2 + (y_n − y_t)^2 + (z_n − z_t)^2) + ε_t
where ε_t is the error factor at this time;
The error factors ε_t at different times are substituted into the environmental error neural network learning model for learning, producing the high-level feature quantities; the weight coefficients are set from v(T_{i+1} − T_i) and θ_i, where v is the target vehicle's travel speed, T_{i+1} and T_i are the times recorded at a given moment and the following time frame during the target vehicle's travel, (T_{i+1} − T_i) is the travel time difference, and θ_i is the vehicle wheel rotation angle at that time.
2. The intelligent parking location system based on UWB-IMU and visual information fusion according to claim 1, wherein the vehicle information quantization unit is configured to quantize image information collected by the camera unit into vehicle information, determine the number of vehicles in the current scene collected by the camera unit, and calibrate the target vehicle, specifically:
The vehicle information quantization unit obtains vehicle image information, quantizes the image information into the vehicle type, the color and the license plate number of the vehicle corresponding to the pixel point, sequentially generates and stores unique character string codes, and generates a vehicle digital ID for each vehicle; the vehicle information quantization unit acquires a vehicle body image of a current time frame of the target vehicle, performs partial image processing on the vehicle body image to obtain discrete pixel points corresponding to the vehicle body, the color and the license plate, converts the discrete pixel points into discrete quantity values, generates a unique vehicle digital ID in corresponding time, and performs identity calibration on the target vehicle.
3. The intelligent parking location system based on UWB-IMU and visual information fusion according to claim 1, wherein the camera mechanism threshold unit is configured to determine a yaw angle threshold of the camera unit according to the number of vehicles in the current scene and control the camera unit to execute:
The azimuth of the i-th high-precision camera relative to the parking lot at its spatial coordinates (x, y, z) is (α_i, β_i, γ_i), where (x, y, z) is the spatial coordinate position of the i-th high-precision camera in the parking lot; the number of target vehicles within the viewing-angle range determined by this azimuth is N_k, and a state matrix equation is constructed;
the analysis computing power of the camera is R_χ and ξ is the threshold set for the camera mechanism; when ξ ≤ N_k, the camera-mechanism threshold unit sends a deflection instruction to the camera to deflect its angle.
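The deflection decision in claim 3 reduces to comparing the vehicle count in view against the configured threshold. A sketch with hypothetical names:

```python
def should_deflect(n_vehicles_in_view, xi_threshold):
    """Claim 3's trigger: the threshold unit issues a deflection
    instruction when xi <= N_k, i.e. the vehicle count in the
    camera's view has reached the configured threshold."""
    return xi_threshold <= n_vehicles_in_view

# With a threshold of 3 vehicles:
commands = [should_deflect(n, 3) for n in (1, 3, 7)]  # [False, True, True]
```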
4. The intelligent parking location system based on the fusion of UWB-IMU and visual information according to claim 1, wherein the UWB-IMU module is configured to obtain a distance between a target vehicle and a UWB base station and motion information of the vehicle, so as to obtain virtual coordinate information and an inertial forward direction of the target vehicle in a parking area, where the target vehicle is located, specifically:
The UWB-IMU module obtains the distance between the target vehicle and the base stations: the spatial coordinates of each UWB base station in the parking lot are obtained, the propagation time of the pulse signal between each UWB base station and the target vehicle is measured, and the distance of each base station to the target vehicle is computed from the time of flight as l = c·t, from which the virtual coordinate information of the target vehicle in the parking lot is calculated:
wherein m and n identify different base stations, l_{m,n} represents the distance between UWB base stations m and n, t is the pulse transmission time, c is the speed of light, and (x, y, z) are the virtual coordinates of the target vehicle in the parking lot;
The UWB-IMU module obtains motion information of the vehicle: accelerometer data E (epsilon) and gyroscope data E (sigma) are acquired through an IMU inertial module, so that the inertial travelling direction of the target vehicle is obtained.
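Recovering the virtual coordinates from per-station ranges, as claim 4 describes, is a multilateration problem. The patent's exact formula is not reproduced in the translated text, so the following is one standard linearized least-squares solution, not the patented computation:

```python
import numpy as np

def trilaterate(stations, ranges):
    """Subtract the first range equation from the rest, turning the
    intersecting-sphere equations into a linear system, and solve it
    in the least-squares sense for the vehicle position (x, y, z)."""
    p = np.asarray(stations, dtype=float)
    d = np.asarray(ranges, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Four hypothetical base stations and noise-free ranges to (3, 4, 1):
stations = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
truth = np.array([3.0, 4.0, 1.0])
ranges = [float(np.linalg.norm(truth - np.array(s))) for s in stations]
estimate = trilaterate(stations, ranges)  # ≈ (3, 4, 1)
```

With four or more stations and noisy ranges the same code returns the least-squares compromise rather than an exact intersection, which is why the claims layer an error-correction model on top.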
5. The intelligent parking location system based on the UWB-IMU and visual information fusion according to claim 1, wherein the data fusion module is configured to fuse virtual coordinate information of the UWB-IMU module and vehicle position information of a target vehicle of the camera unit, and transmit the fusion result to the central control module, specifically:
establishing a fusion target positioning optimization function:
G_i = f(x_{i−1}, u_i, w_i)
H_{i,j} = h(y_j, x_i, v_{i,j})
wherein E(ε) and E(σ) are the accelerometer data and gyroscope data measured by the IMU inertial module, together with the coordinate data after UWB positioning processing; T is the time frame over which the target vehicle is observed; w_i is the vehicle response speed; G_i is the motion equation obtained by the camera unit tracking the target vehicle; H_{i,j} is the trajectory prediction equation determined by the vehicle trajectory analysis module; u_i and v_{i,j} are observation noise; x_i is the target vehicle position; and y_j are the coordinates of the parking space;
and the minimum point of the target positioning optimization function is solved to obtain the finally optimized real-time accurate position information of the vehicle.
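Claim 5's final step, solving the optimization function for its minimum point, can be illustrated with a toy quadratic cost over the two sensor estimates. The patent's true objective combines the motion and trajectory-prediction equations above, so this is only a stand-in:

```python
import numpy as np

def fused_position(uwb_xy, cam_xy, w_uwb=1.0, w_cam=1.0,
                   lr=0.1, iters=200):
    """Gradient descent on the illustrative cost
    w_uwb*|x - uwb|^2 + w_cam*|x - cam|^2, returning its minimizer."""
    a = np.asarray(uwb_xy, dtype=float)
    b = np.asarray(cam_xy, dtype=float)
    x = a.copy()  # start from the UWB estimate
    for _ in range(iters):
        grad = 2.0 * w_uwb * (x - a) + 2.0 * w_cam * (x - b)
        x -= lr * grad
    return x

# Equal weights: the minimum is the midpoint of the two estimates.
pos = fused_position([2.0, 3.0], [2.4, 3.2])  # ≈ (2.2, 3.1)
```

Raising w_uwb or w_cam pulls the minimizer toward the corresponding sensor, which is the basic lever any such fusion objective exposes.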
6. An intelligent parking positioning method based on UWB-IMU and visual information fusion, characterized in that it is applied to the intelligent parking positioning system based on UWB-IMU and visual information fusion according to any one of claims 1-5, and comprises the following steps:
S1, acquiring current image information of a parking lot through a camera unit, tracking a target vehicle and a scene where the vehicle is located in real time, obtaining a motion equation, and determining vehicle position information of the target vehicle, wherein the current image information of the parking lot comprises vehicle image information, obstacle image information, surrounding parking spaces and lane line image information;
S2, quantifying the image information acquired by the camera unit into vehicle information through a vehicle information quantification unit, determining the number of vehicles in the current scene acquired by the camera unit, and calibrating a target vehicle;
S3, determining a deflection angle threshold value of the camera unit through a camera mechanism threshold value unit according to the number of vehicles in the current scene and controlling the camera unit to execute;
S4, the UWB-IMU module obtains the distance between the target vehicle and the UWB base station and the motion information of the vehicle, so that virtual coordinate information and the inertial advancing direction of the target vehicle in the whole parking area are obtained;
S5, the vehicle track analysis module acquires track information of the target vehicle according to the vehicle image information at continuous moments, and transmits the track information to the central control module for learning of the environmental error neural network learning model;
S6, establishing an environmental error perception deep learning model by using a convolutional neural network in the environmental error neural network learning model, extracting error factors of positioning deviation caused by environmental factors, and helping the UWB-IMU module and the camera unit to correct positioning accuracy;
S7, the signal transmission and processing module transmits the data of the UWB-IMU module and the camera unit to the central control module;
S8, the calculation and positioning display module performs coordinate calculation and visual position tracking display according to the data of the UWB-IMU module and the camera unit;
S9, the data fusion module fuses the virtual coordinate information of the UWB-IMU module and the vehicle position information of the target vehicle of the camera unit to obtain real-time accurate position information of the vehicle, and transmits the fusion result to the central control module;
S10, the path guiding unit screens out lane information with the least passing vehicles according to the current image information of the parking lot, the parking space guiding unit screens out empty parking spaces according to the current image information of the parking lot, and the central control module guides the target vehicles to park according to real-time accurate position information of the vehicles, the lane information and coordinates of the empty parking spaces.
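The ten steps above run as one acquisition-fusion-guidance cycle. The skeleton below only fixes that ordering; every step body is a stub, since the modules themselves are defined in the system claims:

```python
def run_parking_cycle():
    """Execute steps S1-S10 in order and return the step tags;
    a real system would invoke the corresponding module at each stub."""
    log = []
    steps = [
        ("S1", "capture parking-lot images, track the target vehicle"),
        ("S2", "quantize images into vehicle info and digital IDs"),
        ("S3", "check the camera yaw-angle threshold"),
        ("S4", "read UWB ranges and IMU motion data"),
        ("S5", "extract the vehicle trajectory"),
        ("S6", "apply the environmental-error learning model"),
        ("S7", "transmit sensor data to the central control module"),
        ("S8", "compute coordinates and display the position"),
        ("S9", "fuse UWB-IMU and visual position estimates"),
        ("S10", "guide the vehicle to a lane and empty space"),
    ]
    for tag, _description in steps:
        log.append(tag)  # stub: call the corresponding module here
    return log

order = run_parking_cycle()  # ['S1', 'S2', ..., 'S10']
```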
7. The intelligent parking positioning method based on UWB-IMU and visual information fusion according to claim 6, wherein in step S6 the manner of extracting the error factors by which environmental factors cause positioning deviation comprises:
the coordinates U_n = (x_n, y_n, z_n) of the n-th fixed UWB base station are known; the position of the vehicle to be positioned at time t is denoted N_t = (x_t, y_t, z_t); the distance from the UWB base station to the target vehicle at time t is:
d_t^n = √((x_n − x_t)² + (y_n − y_t)² + (z_n − z_t)²) + ε_t
wherein ε_t is the error factor at that moment;
the error factors ε_t at different moments are substituted into the environmental-error neural-network learning model for training, together with a high-level feature quantity and a set of weight coefficients, wherein v is the travel speed of the target vehicle; T_i and T_{i+1} are the times recorded at a given moment and at the following time frame as the target vehicle advances, and (T_{i+1} − T_i) is the travel time difference; θ_i is the wheel rotation angle of the vehicle at that moment.
8. The intelligent parking positioning method based on UWB-IMU and visual information fusion according to claim 6, wherein the step S3 is specifically:
The azimuth of the i-th high-precision camera relative to the parking lot at its spatial coordinates (x, y, z) is (α_i, β_i, γ_i), where (x, y, z) is the spatial coordinate position of the i-th high-precision camera in the parking lot; the number of target vehicles within the viewing-angle range determined by this azimuth is N_k, and a state matrix equation is constructed;
the analysis computing power of the camera is R_χ and ξ is the threshold set for the camera mechanism; when ξ ≤ N_k, the camera-mechanism threshold unit sends a deflection instruction to the camera to deflect its angle.
9. The intelligent parking location method based on UWB-IMU and visual information fusion according to claim 6, wherein step S9 is specifically:
establishing a fusion target positioning optimization function:
G_i = f(x_{i−1}, u_i, w_i)
H_{i,j} = h(y_j, x_i, v_{i,j})
wherein E(ε) and E(σ) are the accelerometer data and gyroscope data measured by the IMU inertial module, together with the coordinate data after UWB positioning processing; T is the time frame over which the target vehicle is observed; w_i is the vehicle response speed; G_i is the motion equation obtained by the camera unit tracking the target vehicle; H_{i,j} is the trajectory prediction equation determined by the vehicle trajectory analysis module; u_i and v_{i,j} are observation noise; x_i is the target vehicle position; and y_j are the coordinates of the parking space; and the minimum point of the target positioning optimization function is solved to obtain the finally optimized real-time accurate position information of the vehicle.
CN202210871578.9A 2022-07-22 2022-07-22 Intelligent parking positioning system and method based on UWB/IMU and visual information fusion Active CN115235452B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210871578.9A CN115235452B (en) 2022-07-22 2022-07-22 Intelligent parking positioning system and method based on UWB/IMU and visual information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210871578.9A CN115235452B (en) 2022-07-22 2022-07-22 Intelligent parking positioning system and method based on UWB/IMU and visual information fusion

Publications (2)

Publication Number Publication Date
CN115235452A CN115235452A (en) 2022-10-25
CN115235452B true CN115235452B (en) 2024-08-27

Family

ID=83674829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210871578.9A Active CN115235452B (en) 2022-07-22 2022-07-22 Intelligent parking positioning system and method based on UWB/IMU and visual information fusion

Country Status (1)

Country Link
CN (1) CN115235452B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115540854A (en) * 2022-12-01 2022-12-30 成都信息工程大学 Active positioning method, equipment and medium based on UWB assistance
CN116612458B (en) * 2023-05-30 2024-06-04 易飒(广州)智能科技有限公司 Deep learning-based parking path determination method and system
CN116976535B (en) * 2023-06-27 2024-05-17 上海师范大学 Path planning method based on fusion of few obstacle sides and steering cost

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN104697517B (en) * 2015-03-26 2017-11-17 江南大学 A kind of parking garage Multi-Targets Tracking and Positioning System
CN105946853B (en) * 2016-04-28 2018-05-29 中山大学 The system and method for long range automatic parking based on Multi-sensor Fusion
CN107600067B (en) * 2017-09-08 2019-09-20 中山大学 A kind of autonomous parking system and method based on more vision inertial navigation fusions
CN109720340B (en) * 2018-09-17 2021-05-04 魔门塔(苏州)科技有限公司 Automatic parking system and method based on visual identification
CN111239790B (en) * 2020-01-13 2024-02-06 上海师范大学 Vehicle navigation system based on 5G network machine vision
CN114485656B (en) * 2020-11-11 2024-07-16 Oppo广东移动通信有限公司 Indoor positioning method and related device
CN114623823B (en) * 2022-05-16 2022-09-13 青岛慧拓智能机器有限公司 UWB (ultra wide band) multi-mode positioning system, method and device integrating odometer

Non-Patent Citations (1)

Title
Research on an Indoor and Outdoor Positioning System Combining GNSS/UWB and IMU; Bao Shixi; China Excellent Master's and Doctoral Dissertations Electronic Journal; 2023-12-31; full text *

Also Published As

Publication number Publication date
CN115235452A (en) 2022-10-25

Similar Documents

Publication Publication Date Title
CN115235452B (en) Intelligent parking positioning system and method based on UWB/IMU and visual information fusion
CN111448478B (en) System and method for correcting high-definition maps based on obstacle detection
CN112700470B (en) Target detection and track extraction method based on traffic video stream
CN109084786B (en) Map data processing method
US10922817B2 (en) Perception device for obstacle detection and tracking and a perception method for obstacle detection and tracking
WO2019161134A1 (en) Lane marking localization
EP3722908A1 (en) Learning a scenario-based distribution of human driving behavior for realistic simulation model
US20170083794A1 (en) Virtual, road-surface-perception test bed
CN112753038B (en) Method and device for identifying lane change trend of vehicle
EP3722907B1 (en) Learning a scenario-based distribution of human driving behavior for realistic simulation model and deriving an error model of stationary and mobile sensors
CN111353453B (en) Obstacle detection method and device for vehicle
CN113112524B (en) Track prediction method and device for moving object in automatic driving and computing equipment
CN114419098A (en) Moving target trajectory prediction method and device based on visual transformation
CN113405555B (en) Automatic driving positioning sensing method, system and device
CN116958763B (en) Feature-result-level-fused vehicle-road collaborative sensing method, medium and electronic equipment
Gressenbuch et al. Mona: The munich motion dataset of natural driving
CN115705693A (en) Method, system and storage medium for annotation of sensor data
CN114358038B (en) Two-dimensional code coordinate calibration method and device based on vehicle high-precision positioning
CN114563007B (en) Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium
CN117859041A (en) Method and auxiliary device for supporting vehicle functions in a parking space and motor vehicle
CN113566834A (en) Positioning method, positioning device, vehicle, and storage medium
CN115752476B (en) Vehicle ground library repositioning method, device, equipment and medium based on semantic information
US20230237679A1 (en) Aligning geodata graph over electronic maps
CN117109599B (en) Vehicle auxiliary positioning method, device and medium based on road side two-dimension code
CN113822932B (en) Device positioning method, device, nonvolatile storage medium and processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant