CN115235452A - Intelligent parking positioning system and method based on UWB/IMU and visual information fusion - Google Patents



Publication number
CN115235452A
Authority
CN
China
Prior art keywords
vehicle
information
uwb
module
target vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210871578.9A
Other languages
Chinese (zh)
Other versions
CN115235452B (en)
Inventor
朱苏磊
鲍施锡
李天辰
Current Assignee
Shanghai Normal University
Original Assignee
Shanghai Normal University
Priority date
Filing date
Publication date
Application filed by Shanghai Normal University
Priority to CN202210871578.9A
Publication of CN115235452A
Application granted
Publication of CN115235452B
Legal status: Active


Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/14 - Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G08G 1/145 - Traffic control systems for road vehicles indicating individual free spaces in parking areas where the indication depends on the parking areas
    • G08G 1/148 - Management of a network of parking areas
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/005 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/10 - Navigation by using measurements of speed or acceleration
    • G01C 21/12 - Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 - Inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20 - Instruments for performing navigational calculations
    • G01C 21/206 - Instruments for performing navigational calculations specially adapted for indoor navigation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30264 - Parking

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract


Figure 202210871578

The invention relates to an intelligent parking positioning system and method based on the fusion of UWB/IMU and visual information, used for vehicle positioning in a parking lot. The system comprises a central control module, a UWB/IMU module, a camera unit, a signal transmission and processing module, a calculation and positioning display module, a vehicle trajectory analysis module, and a data fusion module; the central control module comprises a vehicle information quantification unit, a camera mechanism threshold unit, an environmental error neural network learning model, a path guidance unit, and a parking space guidance unit. Compared with the prior art, the invention uses parking-lot-side equipment to assist the intelligent parking process and realize dual-model fusion positioning: an environmental error neural network learning model is designed to eliminate errors and improve accuracy, and the camera deflection angle is determined from the number of vehicles, so that the parking lot's camera mechanism dynamically monitors every moving vehicle in the lot. A vehicle can thus be tracked with high precision in real time in an unfamiliar parking environment, and the intelligent parking process is realized through cooperation between the parking lot and the vehicle.


Description

Intelligent parking positioning system and method based on UWB/IMU and visual information fusion

Technical Field

The invention relates to the technical field of intelligent vehicle parking positioning, and in particular to an intelligent parking positioning system and method based on the fusion of UWB/IMU and visual information.

Background Art

With the continuous development of technology and the steady growth of car ownership, vehicle intelligence has advanced further, and car manufacturers have correspondingly upgraded the intelligence of their vehicles' parking functions. As intelligent parking is a key part of the "last mile" of autonomous driving and will be the first to reach commercial deployment, intelligent parking systems have become an important research and development direction for car companies.

Most current market solutions rely purely on the vehicle side, for example building a three-dimensional map of the surrounding environment with on-board lidar, or scanning the environment with on-board vision to gather the information needed for parking. However, lidar has limited range and a high price, which prevents mass-market adoption, while pure vision is clearly disturbed by the environment and requires a learning phase before the vehicle can park in an unfamiliar lot. For these reasons, commercial deployment of intelligent parking for smart vehicles has not been well promoted or realized.

Meanwhile, as car ownership keeps rising, the shortage of parking spaces keeps growing, and dense areas of major cities all face parking difficulties. During peak hours drivers cannot achieve "three-minute happy parking", and even smart vehicles with a parking function still require corresponding operations from the driver.

In summary, existing parking solutions need to be improved to overcome the shortcomings of purely vehicle-side intelligent parking.

Summary of the Invention

The purpose of the present invention is to overcome the above-mentioned defects of the prior art by providing an intelligent parking positioning system and method based on the fusion of UWB/IMU and visual information.

The object of the present invention can be achieved through the following technical solutions:

An intelligent parking positioning system based on the fusion of UWB/IMU and visual information, used for vehicle positioning in a parking lot, comprising a central control module, a UWB/IMU module, a camera unit, a signal transmission and processing module, a calculation and positioning display module, a vehicle trajectory analysis module, and a data fusion module, wherein the central control module comprises a vehicle information quantification unit, a camera mechanism threshold unit, an environmental error neural network learning model, a path guidance unit, and a parking space guidance unit;

The camera unit comprises a plurality of cameras used to acquire the current image information of the parking lot, track the target vehicle and its scene in real time, obtain the motion equation, and determine the position of the target vehicle; the current image information of the parking lot includes vehicle images, obstacle images, and images of the surrounding parking spaces and lane lines;

The vehicle information quantification unit is used to quantify the image information collected by the camera unit into vehicle information, determine the number of vehicles in the current scene captured by the camera unit, and identify the target vehicle;

The camera mechanism threshold unit is used to determine the deflection angle threshold of the camera unit according to the number of vehicles in the current scene and to command the camera unit accordingly;

The vehicle trajectory analysis module is used to obtain the trajectory of the target vehicle from vehicle images at successive moments and transmit it to the central control module for the environmental error neural network learning model to learn from;

The UWB/IMU module is used to obtain the distance between the target vehicle and the UWB base stations as well as the vehicle's motion information, thereby obtaining the virtual coordinates of the target vehicle across the whole parking lot and its inertial heading;

In the environmental error neural network learning model, a convolutional neural network is used to build a deep learning model of environmental error perception that extracts the error factors by which environmental conditions bias the positioning, helping the UWB/IMU module and the camera unit correct their positioning accuracy;

The signal transmission and processing module is used to transmit the data of the UWB/IMU module and the camera unit to the central control module;

The calculation and positioning display module is used to perform coordinate calculation and visual position tracking display from the data of the UWB/IMU module and the camera unit;

The data fusion module is used to fuse the virtual coordinates from the UWB/IMU module with the vehicle position of the target vehicle from the camera unit to obtain real-time precise position information, and to transmit the fusion result to the central control module;

The path guidance unit is used to select, from the current image information of the parking lot, the lane with the fewest passing vehicles;

The parking space guidance unit is used to select empty parking spaces from the current image information of the parking lot.
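As a simple illustration of the two guidance units just described, the sketch below (not from the patent; data structures and values are assumptions) selects the lane with the fewest passing vehicles and an unoccupied space from counts that the real system would derive from the parking lot's current images:

```python
# Hedged sketch of the path- and parking-space-guidance filtering.
# Lane counts and occupancy flags are illustrative assumptions; the
# real system derives them from the parking lot's camera images.

def least_busy_lane(lane_vehicle_counts):
    """Return the lane id with the fewest passing vehicles."""
    return min(lane_vehicle_counts, key=lane_vehicle_counts.get)

def first_empty_space(space_occupancy):
    """Return the id of an unoccupied parking space, or None."""
    for space_id, occupied in sorted(space_occupancy.items()):
        if not occupied:
            return space_id
    return None

lane_choice = least_busy_lane({"A": 3, "B": 1, "C": 2})    # lane "B"
space_choice = first_empty_space({101: True, 102: False})  # space 102
```

Both selections are pure filters over the current observation, so they can be re-run every frame as occupancy changes.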

Preferably, in the environmental error neural network learning model, a convolutional neural network is used to build a deep learning model of environmental error perception that extracts the error factors by which environmental conditions bias the positioning and combines them layer by layer into abstract high-level features used to help the UWB/IMU module correct its positioning accuracy; the error factors are extracted as follows:

The coordinates of the n-th fixed UWB base station, U_n = (x_n, y_n, z_n), are known; the position of the vehicle to be located at time t is denoted N_t = (x_t, y_t, z_t). The distance from the UWB base station to the target vehicle at time t is:

d_t^n = sqrt((x_t - x_n)^2 + (y_t - y_n)^2 + (z_t - z_n)^2) + ε_t

where ε_t is the error factor at that time.

The error factors at different times are substituted into

F = Σ_i ω_i ε_i

for the environmental error neural network model to learn from, where F is the high-level feature quantity and ω_i are the weighting coefficients, with ω_i = v (T_{i+1} - T_i) θ_i, where v is the travel speed of the target vehicle, T_{i+1} and T_i are the times recorded at a given moment and at the following time frame during the target vehicle's travel, (T_{i+1} - T_i) is the travel time difference, and θ_i is the wheel rotation angle of the vehicle at that moment.
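The error-factor extraction above can be sketched in code. This is a hedged illustration: the measured UWB range minus the geometric distance is taken as the error factor, and the weight v * (T_{i+1} - T_i) * θ_i is reconstructed from the variables named in the text; function names and all numbers are assumptions:

```python
import math

# Hedged sketch of the error-factor extraction: the measured UWB range
# minus the true Euclidean distance gives the error factor eps_t, and
# weights of the form v * (T_{i+1} - T_i) * theta_i (reconstructed from
# the variables named in the text) combine the factors into a
# high-level feature. All numbers are illustrative assumptions.

def error_factor(measured_range, station, vehicle):
    """eps_t: measured range minus the true Euclidean distance."""
    return measured_range - math.dist(station, vehicle)

def high_level_feature(errors, speeds, times, wheel_angles):
    """F = sum_i w_i * eps_i with w_i = v_i * (T_{i+1} - T_i) * theta_i."""
    feature = 0.0
    for i, eps in enumerate(errors):
        w_i = speeds[i] * (times[i + 1] - times[i]) * wheel_angles[i]
        feature += w_i * eps
    return feature

# One ranging sample: station at (0, 0, 3), vehicle at (6, 8, 0).
eps = error_factor(10.5, (0.0, 0.0, 3.0), (6.0, 8.0, 0.0))
```

In the patent's pipeline these per-sample factors would feed the convolutional network; here they are simply aggregated to show the bookkeeping.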

Preferably, the vehicle information quantification unit is used to quantify the image information collected by the camera unit into vehicle information, determine the number of vehicles in the current scene captured by the camera unit, and identify the target vehicle, specifically:

The vehicle information quantification unit acquires the vehicle image information, quantifies it into the model, color, and license plate number of the vehicle corresponding to the pixels, generates and stores a unique character string code in a fixed order, and generates a vehicle digital ID for each vehicle. The unit acquires the body image of the target vehicle in the current time frame, performs local image processing on it to obtain the discrete pixels corresponding to the body, color, and license plate, converts them into discrete numeric values, and generates a unique vehicle digital ID for the corresponding time, thereby establishing the identity of the target vehicle.
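The identity-quantification step can be sketched roughly as follows, with model, color, and plate strings standing in for the discrete pixel values the unit derives from the body image; the hashing scheme and field order are assumptions, not the patent's actual encoding:

```python
import hashlib

# Hedged sketch of the vehicle digital-ID generation: attributes are
# concatenated in a fixed order and hashed to a short unique ID per
# time frame. The encoding is an assumption for illustration only.

def vehicle_digital_id(model, color, plate, time_frame):
    """Concatenate attributes in a fixed order and hash to a short ID."""
    code = f"{model}|{color}|{plate}|{time_frame}"
    return hashlib.sha1(code.encode("utf-8")).hexdigest()[:12]

vid = vehicle_digital_id("sedan", "white", "沪A12345", 42)
```

Because the time frame is part of the input, the same vehicle yields a distinct ID per frame, matching the "unique vehicle digital ID within the corresponding time" described above.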

Preferably, the camera mechanism threshold unit is used to determine the deflection angle threshold of the camera unit according to the number of vehicles in the current scene and to command the camera unit accordingly, specifically:

The azimuth of the i-th high-precision camera relative to the parking lot space coordinates (x, y, z) is (α_i, β_i, γ_i), where (x, y, z) is the spatial coordinate position of the i-th high-precision camera in the parking lot; the number of target vehicles captured within the field of view at this azimuth is N_k, and a state matrix equation is constructed:

(state matrix equation, rendered only as an image in the original publication)

where R_χ is the analytical computing power of the camera and ξ is the threshold set for the camera mechanism; when ξ ≤ N_k, the camera mechanism threshold unit sends a deflection command to the camera to deflect its angle.
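The threshold check ξ ≤ N_k can be illustrated with a minimal sketch; the command format and the 15-degree deflection step are assumptions, not values from the patent:

```python
# Hedged sketch of the camera threshold check: when the vehicle count
# N_k in a camera's field of view reaches the configured threshold xi,
# a deflection command is issued so no single camera exceeds its
# analytical computing power. The command dict and default step are
# illustrative assumptions.

def deflection_command(n_vehicles, threshold, camera_id, step_deg=15.0):
    """Return a deflection command when threshold <= N_k, else None."""
    if threshold <= n_vehicles:
        return {"camera": camera_id, "deflect_deg": step_deg}
    return None

cmd = deflection_command(n_vehicles=6, threshold=5, camera_id=3)
```

A central scheduler would run this per camera per frame, so that deflections rebalance vehicles across overlapping fields of view.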

Preferably, the UWB/IMU module is used to obtain the distance between the target vehicle and the UWB base stations as well as the vehicle's motion information, thereby obtaining the virtual coordinates of the target vehicle across the whole parking lot and its inertial heading, specifically:

The UWB/IMU module obtains the distance between the target vehicle and the base stations: the spatial coordinates of each UWB base station in the parking lot are obtained, the transit time of the pulse signal between each UWB base station and the target vehicle is measured, and the distance between each base station and the target vehicle is calculated, from which the virtual coordinates of the target vehicle in the parking lot can be computed:

(ranging and coordinate-solving equations, rendered only as an image in the original publication)

where m and n identify different base stations, l_{m,n} is the distance between UWB base stations m and n, t is the pulse transmission time, c is the speed of light, and (x, y, z) are the virtual coordinates of the target vehicle in the parking lot;

The UWB/IMU module obtains the vehicle's motion information: accelerometer data E(ε) and gyroscope data E(σ) are acquired through the IMU inertial module, from which the inertial heading of the target vehicle is obtained.
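Since the patent's ranging equations appear only as an image, the sketch below shows a standard time-of-flight formulation consistent with the variables listed above: each pulse transit time t_n gives a range c * t_n, and the tag position is recovered by linearizing the sphere equations against the first base station. The base-station layout and the test position are illustrative assumptions:

```python
import math

# Hedged sketch of UWB time-of-flight ranging and trilateration.
# Subtracting the first sphere equation from the others yields a
# linear 3x3 system, solved here by Gauss-Jordan elimination.
# Anchor layout and the known test position are assumptions.

C = 299_792_458.0  # speed of light in m/s

def ranges_from_times(times):
    """Convert pulse transit times (seconds) to ranges (meters)."""
    return [C * t for t in times]

def trilaterate(stations, dists):
    """Solve for (x, y, z) from four base stations and their ranges."""
    (x0, y0, z0), d0 = stations[0], dists[0]
    rows = []
    for (xi, yi, zi), di in zip(stations[1:], dists[1:]):
        a = [2 * (xi - x0), 2 * (yi - y0), 2 * (zi - z0)]
        b = d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2 + zi**2 - z0**2
        rows.append(a + [b])
    for col in range(3):  # Gauss-Jordan with partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(rows[r][col]))
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(3):
            if r != col:
                f = rows[r][col] / rows[col][col]
                rows[r] = [v - f * w for v, w in zip(rows[r], rows[col])]
    return tuple(rows[i][3] / rows[i][i] for i in range(3))

# Anchors at slightly different heights so the 3x3 system is solvable.
stations = [(0.0, 0.0, 3.0), (30.0, 0.0, 2.5),
            (0.0, 30.0, 3.5), (30.0, 30.0, 2.8)]
truth = (12.0, 7.0, 0.5)
dists = [math.dist(s, truth) for s in stations]
pos = trilaterate(stations, dists)
```

With exact ranges the linearized system recovers the true position; in practice the residual of this solve is exactly the kind of environment-induced error the learning model above is meant to absorb.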

Preferably, the data fusion module is used to fuse the virtual coordinates from the UWB/IMU module with the vehicle position of the target vehicle from the camera unit and to transmit the fusion result to the central control module, specifically:

A fused target positioning optimization function is established:

(fused target positioning optimization function, rendered only as an image in the original publication)

G_i = f(x_{i-1}, u_i, w_i)

H_{i,j} = h(y_j, x_i, v_{i,j})

where E(ε) and E(σ) are the accelerometer and gyroscope data measured by the IMU inertial module, the coordinate data after UWB positioning processing enters the function through a symbol rendered only as an image in the original, T is the time frame over which the target vehicle is observed, w_i is the vehicle response rate, G_i is the motion equation obtained by the camera unit tracking the target vehicle, H_{i,j} is the trajectory prediction equation determined by the vehicle trajectory analysis module, u_i and v_{i,j} are observation noise, x_i is the position of the target vehicle, and y_j are the coordinates of the parking space;

The minimum point of the target positioning optimization function gives the final optimized real-time precise position of the vehicle.
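The patent's full objective (with motion equation G_i, trajectory prediction H_{i,j}, and noise terms) is shown only as an image, so the sketch below reduces the idea to a per-axis weighted least-squares fusion of the UWB/IMU coordinate and the camera coordinate; the variances and positions are illustrative assumptions:

```python
# Hedged sketch of the data-fusion minimization, reduced to a per-axis
# weighted least-squares fusion of two position estimates. Variances
# and positions are illustrative assumptions, not patent values.

def fuse(uwb_pos, cam_pos, uwb_var, cam_var):
    """Minimize |x - uwb|^2 / uwb_var + |x - cam|^2 / cam_var per axis.

    The closed-form minimizer is the inverse-variance weighted mean.
    """
    w_u, w_c = 1.0 / uwb_var, 1.0 / cam_var
    return tuple((w_u * u + w_c * c) / (w_u + w_c)
                 for u, c in zip(uwb_pos, cam_pos))

# Camera assumed more precise than UWB here, so it dominates the result.
fused = fuse((12.1, 7.3, 0.5), (11.9, 6.9, 0.5), uwb_var=0.04, cam_var=0.01)
```

Any richer objective of the kind the patent describes would simply add motion-model and trajectory-prediction residuals to the same sum before minimizing.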

An intelligent parking positioning method based on the fusion of UWB/IMU and visual information, based on the above intelligent parking positioning system, comprising the following steps:

S1. Acquire the current image information of the parking lot through the camera unit, track the target vehicle and its scene in real time, obtain the motion equation, and determine the position of the target vehicle; the current image information of the parking lot includes vehicle images, obstacle images, and images of the surrounding parking spaces and lane lines.

S2. Quantify the image information collected by the camera unit into vehicle information through the vehicle information quantification unit, determine the number of vehicles in the current scene captured by the camera unit, and identify the target vehicle.

S3. According to the number of vehicles in the current scene, determine the deflection angle threshold of the camera unit through the camera mechanism threshold unit and command the camera unit accordingly.

S4. The UWB/IMU module obtains the distance between the target vehicle and the UWB base stations as well as the vehicle's motion information, thereby obtaining the virtual coordinates of the target vehicle across the whole parking lot and its inertial heading.

S5. The vehicle trajectory analysis module obtains the trajectory of the target vehicle from vehicle images at successive moments and transmits it to the central control module for the environmental error neural network learning model to learn from.

S6. In the environmental error neural network learning model, a convolutional neural network is used to build a deep learning model of environmental error perception that extracts the error factors by which environmental conditions bias the positioning, helping the UWB/IMU module and the camera unit correct their positioning accuracy.

S7. The signal transmission and processing module transmits the data of the UWB/IMU module and the camera unit to the central control module.

S8. The calculation and positioning display module performs coordinate calculation and visual position tracking display from the data of the UWB/IMU module and the camera unit.

S9. The data fusion module fuses the virtual coordinates from the UWB/IMU module with the vehicle position of the target vehicle from the camera unit to obtain the vehicle's real-time precise position, and transmits the fusion result to the central control module.

S10. The path guidance unit selects the lane with the fewest passing vehicles from the current image information of the parking lot, the parking space guidance unit selects empty parking spaces from the current image information, and the central control module guides the target vehicle to park according to the vehicle's real-time precise position, the lane information, and the coordinates of the empty spaces.

Preferably, in step S6, the error factors of the positioning deviation caused by environmental factors are extracted as follows:

The coordinates of the n-th fixed UWB base station, U_n = (x_n, y_n, z_n), are known; the position of the vehicle to be located at time t is denoted N_t = (x_t, y_t, z_t). The distance from the UWB base station to the target vehicle at time t is:

d_t^n = sqrt((x_t - x_n)^2 + (y_t - y_n)^2 + (z_t - z_n)^2) + ε_t

where ε_t is the error factor at that time.

The error factors at different times are substituted into

F = Σ_i ω_i ε_i

for the environmental error neural network model to learn from, where F is the high-level feature quantity and ω_i are the weighting coefficients, with ω_i = v (T_{i+1} - T_i) θ_i, where v is the travel speed of the target vehicle, T_{i+1} and T_i are the times recorded at a given moment and at the following time frame during the target vehicle's travel, (T_{i+1} - T_i) is the travel time difference, and θ_i is the wheel rotation angle of the vehicle at that moment.

Preferably, step S2 is specifically:

The vehicle information quantification unit acquires the vehicle image information, quantifies it into the model, color, and license plate number of the vehicle corresponding to the pixels, generates and stores a unique character string code in a fixed order, and generates a vehicle digital ID for each vehicle. The unit acquires the body image of the target vehicle in the current time frame, performs local image processing on it to obtain the discrete pixels corresponding to the body, color, and license plate, converts them into discrete numeric values, and generates a unique vehicle digital ID for the corresponding time, thereby establishing the identity of the target vehicle.

Preferably, step S3 is specifically:

The azimuth of the i-th high-precision camera relative to the parking lot space coordinates (x, y, z) is (α_i, β_i, γ_i), where (x, y, z) is the spatial coordinate position of the i-th high-precision camera in the parking lot; the number of target vehicles captured within the field of view at this azimuth is N_k, and a state matrix equation is constructed:

(state matrix equation, rendered only as an image in the original publication)

where R_χ is the analytical computing power of the camera and ξ is the threshold set for the camera mechanism; when ξ ≤ N_k, the camera mechanism threshold unit sends a deflection command to the camera to deflect its angle.

Preferably, step S4 is specifically:

The UWB/IMU module obtains the distance between the target vehicle and the base stations: the spatial coordinates of each UWB base station in the parking lot are obtained, the transit time of the pulse signal between each UWB base station and the target vehicle is measured, and the distance between each base station and the target vehicle is calculated, from which the virtual coordinates of the target vehicle in the parking lot can be computed:

(ranging and coordinate-solving equations, rendered only as an image in the original publication)

where m and n identify different base stations, l_{m,n} is the distance between UWB base stations m and n, t is the pulse transmission time, c is the speed of light, and (x, y, z) are the virtual coordinates of the target vehicle in the parking lot;

The UWB/IMU module obtains the vehicle's motion information: accelerometer data E(ε) and gyroscope data E(σ) are acquired through the IMU inertial module, from which the inertial heading of the target vehicle is obtained.

Preferably, step S9 is specifically:

A fused target positioning optimization function is established:

(fused target positioning optimization function, rendered only as an image in the original publication)

G_i = f(x_{i-1}, u_i, w_i)

H_{i,j} = h(y_j, x_i, v_{i,j})

where E(ε) and E(σ) are the accelerometer and gyroscope data measured by the IMU inertial module, the coordinate data after UWB positioning processing enters the function through a symbol rendered only as an image in the original, T is the time frame over which the target vehicle is observed, w_i is the vehicle response rate, G_i is the motion equation obtained by the camera unit tracking the target vehicle, H_{i,j} is the trajectory prediction equation determined by the vehicle trajectory analysis module, u_i and v_{i,j} are observation noise, x_i is the position of the target vehicle, and y_j are the coordinates of the parking space;

The minimum point of the target positioning optimization function gives the final optimized real-time precise position of the vehicle.

与现有技术相比,本发明具有以下有益效果:Compared with the prior art, the present invention has the following beneficial effects:

(1)在智能泊车过程中通过停车场端设备辅助，实现双模型融合定位，解决车辆在陌生停车环境中能够实时高精度位置跟踪，通过停车场与车辆协同配合实现智能泊车过程。(1) During intelligent parking, parking-lot-side equipment assists in realizing dual-model fusion positioning, enabling real-time high-precision position tracking of the vehicle in an unfamiliar parking environment; the intelligent parking process is accomplished through cooperation between the parking lot and the vehicle.

(2)设计了环境误差神经网络学习模型，以卷积神经网络建立环境误差感知深度学习模型，提取环境因素导致定位偏差的误差因子，并逐层组合抽象生成高层特征，能够去除误差因子，提高目标车辆定位精度。(2) An environmental error neural network learning model is designed: a convolutional neural network builds a deep learning model of environmental error perception, extracts the error factors by which environmental factors cause positioning deviation, and combines and abstracts them layer by layer into high-level features, so that the error factors can be removed and the positioning accuracy of the target vehicle improved.

(3)摄像机机构阈值单元根据当前场景下的车辆数量确定摄像机偏转角度，目的是使得停车场摄像机构动态监控停车场内每一辆行进车辆，且每台摄像机不会监控过多车辆，以免算力不足，无法跟踪目标车辆。(3) The camera mechanism threshold unit determines the camera deflection angle according to the number of vehicles in the current scene, so that the parking-lot camera mechanism dynamically monitors every moving vehicle in the parking lot while no single camera monitors too many vehicles, which would leave insufficient computing power to track the target vehicle.

附图说明Description of drawings

图1为智能泊车定位系统的结构示意图；Figure 1 is a schematic structural diagram of the intelligent parking positioning system;

图2为中央控制模块的结构示意图；Figure 2 is a schematic structural diagram of the central control module;

图3为智能泊车定位方法的流程图；Figure 3 is a flowchart of the intelligent parking positioning method;

图4为智能泊车定位系统的使用场景示意图；Figure 4 is a schematic diagram of a usage scenario of the intelligent parking positioning system;

图5为本发明实施例中一种基于UWB/IMU和视觉信息融合的智能泊车方法流程图；Figure 5 is a flowchart of an intelligent parking method based on UWB/IMU and visual information fusion in an embodiment of the present invention;

图6为本发明实施例中去除环境误差因子融合定位的方法流程图；Figure 6 is a flowchart of a method for fusion positioning with environmental error factors removed in an embodiment of the present invention;

图7为本发明实施例中定位信息融合发送的方法流程图；Figure 7 is a flowchart of a method for fused transmission of positioning information in an embodiment of the present invention;

附图标记：1、中央控制模块，2、UWB/IMU模块，3、摄像机单元，4、信号传输与处理模块，5、计算与定位显示模块，6、车辆轨迹分析模块，7、数据融合模块，11、车辆信息量化单元，12、摄像机机构阈值单元，13、环境误差神经网络学习模型，14、路径引导单元，15、车位引导单元。Reference numerals: 1. Central control module; 2. UWB/IMU module; 3. Camera unit; 4. Signal transmission and processing module; 5. Calculation and positioning display module; 6. Vehicle trajectory analysis module; 7. Data fusion module; 11. Vehicle information quantification unit; 12. Camera mechanism threshold unit; 13. Environmental error neural network learning model; 14. Path guidance unit; 15. Parking space guidance unit.

具体实施方式Detailed ways

为了进一步理解本发明,下面结合附图和具体实施例对本发明进行详细说明。实施例以本发明技术方案为前提进行实施,给出了详细的实施方式和具体的操作过程。应当理解,这些描述只是为进一步说明本发明的特征和优点,而不是对本发明权利要求的限制。该部分的描述只针对几个典型的实施例,本发明并不仅局限于实施例描述的范围。相同或相近的现有技术手段与实施例中的一些技术特征进行相互替换也在本发明描述和保护的范围内。In order to further understand the present invention, the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments. The embodiment is implemented on the premise of the technical solution of the present invention, and provides a detailed implementation manner and a specific operation process. It should be understood that these descriptions are intended to further illustrate the features and advantages of the present invention, rather than to limit the claims of the present invention. The description in this section is only for a few typical embodiments, and the present invention is not limited to the scope of the description of the embodiments. It is also within the scope of the description and protection of the present invention to replace some technical features in the embodiments with the same or similar prior art means.

此处所称的“一个实施例”或“实施例”是指可包含于本发明至少一个实现方式中的特定特征、结构或特性。在本发明的描述中，需要理解的是，术语“包括”和“具有”以及它们任何变形，意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元，而是可选地还包括没有列出的步骤或单元，或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。Reference herein to "one embodiment" or "an embodiment" means a particular feature, structure, or characteristic that may be included in at least one implementation of the present invention. In the description of the present invention, it is to be understood that the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes unlisted steps or units, or optionally also includes other steps or units inherent to these processes, methods, products, or devices.

实施例1:Example 1:

一种基于UWB/IMU和视觉信息融合的智能泊车定位系统，用于停车场的车辆定位，如图1所示，包括中央控制模块、UWB/IMU模块、摄像机单元、信号传输与处理模块、计算与定位显示模块、车辆轨迹分析模块和数据融合模块，如图2所示，中央控制模块包括车辆信息量化单元、摄像机机构阈值单元、环境误差神经网络学习模型、路径引导单元和车位引导单元。本实施例中，应用场景如图4所示，UWB基站为多个，安装在停车场，目标车辆上安装UWB发射器，摄像机单元包括多个摄像机，安装在停车场，信号传输与处理单元实现目标车辆与停车场的数据传输，IMU惯性模块安装在目标车辆上。An intelligent parking positioning system based on UWB/IMU and visual information fusion, used for vehicle positioning in a parking lot, as shown in Figure 1, includes a central control module, a UWB/IMU module, a camera unit, a signal transmission and processing module, a calculation and positioning display module, a vehicle trajectory analysis module, and a data fusion module. As shown in Figure 2, the central control module includes a vehicle information quantification unit, a camera mechanism threshold unit, an environmental error neural network learning model, a path guidance unit, and a parking space guidance unit. In this embodiment, the application scenario is shown in Figure 4: multiple UWB base stations are installed in the parking lot, a UWB transmitter is installed on the target vehicle, the camera unit includes multiple cameras installed in the parking lot, the signal transmission and processing unit handles data transmission between the target vehicle and the parking lot, and the IMU inertial module is installed on the target vehicle.

在本实施例中，本发明主要通过将UWB/IMU定位和视觉辅助定位两部分在停车场端融合，从而实现泊车定位。一方面，本发明利用UWB/IMU模块获取目标车辆与基站之间的距离，从而获取目标车辆在停车场全域所处虚拟坐标信息及惯性行进方向，另一方面，本发明高精度摄像机获取停车场中目标车辆、障碍物及周围车位线图像信息，利用高精度摄像机采集目标车辆当前时间帧所处停车场环境中车身及环境图像信息、空车位坐标位置信息等，再融合UWB/IMU模块的虚拟坐标信息及高精度摄像机跟踪的车辆位置信息，并确定空车位坐标以及通行车辆较少的车道，配合目标车辆实现指定空车位路径规划，从而实现智能泊车。In this embodiment, the present invention realizes parking positioning mainly by fusing UWB/IMU positioning and vision-assisted positioning at the parking-lot end. On the one hand, the invention uses the UWB/IMU module to obtain the distance between the target vehicle and the base stations, and thereby the virtual coordinates and inertial travel direction of the target vehicle across the whole parking lot. On the other hand, high-precision cameras acquire image information of the target vehicle, obstacles, and surrounding parking-space lines, collecting the vehicle-body and environment images and the coordinates of empty parking spaces in the current time frame. The virtual coordinates from the UWB/IMU module are then fused with the vehicle position tracked by the high-precision cameras, the coordinates of empty parking spaces and the lanes with fewer passing vehicles are determined, and path planning to a designated empty space is carried out in cooperation with the target vehicle, thereby realizing intelligent parking.

同时，结合不同的停车场存在的环境误差建立环境误差神经网络学习模型，分析并消除误差提高该模块采集的定位信息精度；通过车辆轨迹分析模块用以获取目标车辆的轨迹信息，得到目标车辆在不同时刻的位置信息，将其分析结果供环境误差神经网络学习。环境误差神经网络学习模型中以卷积神经网络建立环境误差感知深度学习模型，提取环境因素导致定位偏差的误差因子，并逐层组合抽象生成高层特征，用来帮助UWB/IMU模块及高精度摄像机纠正定位精度。At the same time, an environmental error neural network learning model is established for the environmental errors present in different parking lots; the errors are analyzed and eliminated to improve the accuracy of the positioning information collected by the module. The vehicle trajectory analysis module obtains the trajectory information of the target vehicle, i.e. its position at different times, and its analysis results are fed to the environmental error neural network for learning. In the environmental error neural network learning model, a convolutional neural network builds a deep learning model of environmental error perception, extracts the error factors by which environmental factors cause positioning deviation, and combines and abstracts them layer by layer into high-level features that help the UWB/IMU module and the high-precision cameras correct their positioning accuracy.

而且，对于摄像机单元采集的图像，本申请进行了信息量化处理，将采集图像信息量化成像素点所对应车辆的车型、颜色及车牌号，按顺序生成唯一字符串码存储；一方面可以标定目标车辆，另一方面可以供摄像机机构阈值设定单元进行分析，根据所接受的编码数字数目判断当前高精度摄像机采集场景下车辆数，用以设定不同车辆数下高精度摄像机偏转角度阈值，从而实现停车场摄像机构动态监控停车场内每一辆行进车辆。Moreover, the images collected by the camera unit undergo information quantization in this application: the collected image information is quantized into the model, color, and license plate number of the vehicle corresponding to the pixels, and a unique string code is generated and stored in order. On the one hand this identifies the target vehicle; on the other hand it can be analyzed by the camera mechanism threshold setting unit, which judges the number of vehicles in the current camera scene from the number of received codes and sets the high-precision camera deflection angle threshold for different vehicle counts, so that the parking-lot camera mechanism dynamically monitors every moving vehicle in the parking lot.

本申请设计一种基于UWB/IMU和视觉信息融合的智能泊车定位系统，在停车场进行相应智能化升级，融合UWB/IMU模块和视觉信息，能够克服纯车端智能泊车的短板，推动智能泊车尽早商业化落地。This application designs an intelligent parking positioning system based on UWB/IMU and visual information fusion: the parking lot is given a corresponding intelligent upgrade, and the UWB/IMU module and visual information are fused, which overcomes the shortcomings of purely vehicle-side intelligent parking and promotes the early commercialization of intelligent parking.

具体的,一种基于UWB/IMU和视觉信息融合的智能泊车定位系统中各个模块单元的工作如下:Specifically, the work of each module unit in an intelligent parking positioning system based on UWB/IMU and visual information fusion is as follows:

(1)摄像机单元包括多个摄像机，用于获取停车场当前图像信息并实时跟踪目标车辆及车辆所处场景，得到运动方程Gi=f(xi-1,ui,wi)，其中ui、xi为目标车辆位置，确定目标车辆的车辆位置信息，停车场当前图像信息包括车辆图像信息、障碍物图像信息及周围车位和车道线图像信息；每个摄像机在停车场内空间坐标是已知的，其偏转角度和拍摄的焦距等参数也是已知的，因此，只需实现完成标定，对摄像头采集的图像进行分析，就可以确定图像中车辆、障碍物、车道线、车位等在停车场内的空间坐标。(1) The camera unit includes multiple cameras for obtaining the current image information of the parking lot and tracking the target vehicle and its scene in real time, yielding the motion equation Gi=f(xi-1, ui, wi), where ui and xi denote the target vehicle position, and determining the vehicle position information of the target vehicle. The current image information of the parking lot includes vehicle images, obstacle images, and images of surrounding parking spaces and lane lines. The spatial coordinates of each camera in the parking lot are known, as are parameters such as its deflection angle and focal length; therefore, once calibration is completed and the images collected by the cameras are analyzed, the spatial coordinates of vehicles, obstacles, lane lines, and parking spaces in the images can be determined.

在本实施例中，同时对车道线要素、停车位要素及障碍物分类进行标定，车道线要素标定根据目前存在停车场常见白虚线、黄实线及白左右转弯箭头路线形式进行停车场端标定，三种形式分别由高精度摄像头采集信息发送目标车辆的中央控制模块。停车位要素根据目前市场存在垂直、水平及倾斜常见三种车位形式进行停车场端标定，这三种形式分别由高精度摄像头采集信息发送目标车辆的中央控制模块，障碍物分类标定根据目前停车场常见障碍物主要包括车辆、宠物、行人及交通指示牌，这四种形式分别由高精度摄像头采集信息发送目标车辆的中央控制模块，实现盲区碰撞预警。In this embodiment, the lane line elements, parking space elements, and obstacle classes are calibrated at the same time. Lane line elements are calibrated at the parking-lot end according to the common forms found in existing parking lots: white dashed lines, yellow solid lines, and white left/right-turn arrows; for each of the three forms, high-precision cameras collect the information and send it to the central control module of the target vehicle. Parking space elements are calibrated according to the three common forms on the market, vertical, horizontal, and inclined, each likewise collected by high-precision cameras and sent to the central control module of the target vehicle. Obstacle classification is calibrated according to the common obstacles in parking lots, mainly vehicles, pets, pedestrians, and traffic signs; for these four classes, high-precision cameras collect the information and send it to the central control module of the target vehicle, realizing blind-spot collision warning.

(2)车辆信息量化单元用于将摄像机单元采集的图像信息量化为车辆信息,确定摄像机单元采集的当前场景下的车辆数量,并标定目标车辆;(2) The vehicle information quantification unit is used to quantify the image information collected by the camera unit into vehicle information, determine the number of vehicles in the current scene collected by the camera unit, and calibrate the target vehicle;

车辆信息量化单元获取目标车辆当前所处停车场内的车身图像信息，将图像信息量化成像素点所对应车辆的车型、颜色及车牌号，按顺序生成唯一字符串码存储，并为每台车辆生成车辆数字ID；车辆信息量化单元接收通过高精度摄像机采集目标车辆当前时间帧车身图像，对车身图像进行局部图像处理，得到车身、颜色及车牌所对应的离散像素点，再转换成离散的数量值，生成对应时间内的唯一车辆数字ID，为目标车辆进行身份标定。The vehicle information quantification unit obtains the body image of the target vehicle in the parking lot, quantizes the image information into the model, color, and license plate number of the vehicle corresponding to the pixels, generates and stores a unique string code in order, and produces a digital vehicle ID for each vehicle. The unit receives the body image of the target vehicle in the current time frame collected by the high-precision camera, performs local image processing to obtain the discrete pixels corresponding to the body, color, and license plate, converts them into discrete numeric values, and generates a unique digital vehicle ID for the corresponding time, thereby establishing the identity of the target vehicle.
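As an illustration only (the patent specifies an ordered string code but no concrete encoding), the quantized attributes could be concatenated in a fixed order and condensed into a unique, reproducible ID; the use of a hash digest and the attribute set below are assumptions.

```python
import hashlib

def vehicle_digital_id(model: str, color: str, plate: str, time_frame: int) -> str:
    """Concatenate the quantized vehicle attributes in a fixed order and derive a
    deterministic string code for the given time frame.
    Hashing is an illustrative choice, not the patent's stated encoding."""
    payload = f"{model}|{color}|{plate}|{time_frame}"
    return hashlib.sha1(payload.encode("utf-8")).hexdigest()[:16]
```

Because the code is deterministic, two frames that encode the same vehicle attributes produce identical IDs, which is exactly the property the frame-to-frame matching step relies on.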

通过车辆信息量化单元标定目标车辆，可以实现目标车辆的实时追踪，摄像机单元通过图像前后每一帧的编码数据变换进行数据对比，判断数据是否吻合，如果吻合则输出视觉信息数据，否则重新搜索采集图像编码后信息，寻找匹配数据。还可以为车辆轨迹分析单元提供数据。By identifying the target vehicle through the vehicle information quantification unit, real-time tracking of the target vehicle can be realized: the camera unit compares the encoded data of successive image frames to judge whether they match; if they match, the visual information data is output, otherwise the encoded image information is searched again for matching data. The unit can also provide data to the vehicle trajectory analysis unit.

(3)摄像机机构阈值单元用于根据当前场景下的车辆数量确定摄像机单元的偏转角度阈值并控制摄像机单元执行,用以设定不同车辆数下高精度摄像机偏转角度阈值;(3) The camera mechanism threshold unit is used to determine the deflection angle threshold of the camera unit according to the number of vehicles in the current scene and control the camera unit to execute, in order to set the high-precision camera deflection angle threshold under different vehicle numbers;

第i台高精度摄像机与停车场空间坐标(x,y,z)对应的方位角为(αi,βi,γi)，(x,y,z)对应的是第i台高精度摄像机在停车场中所处的空间坐标位置，并根据此方位角采集到视角范围内目标车辆数为Nk，构建状态矩阵方程：The azimuth of the i-th high-precision camera with respect to the parking-lot spatial coordinates (x, y, z) is (αi, βi, γi), where (x, y, z) is the spatial coordinate position of the i-th high-precision camera in the parking lot; at this azimuth, the number of target vehicles collected within the field of view is Nk, and the state matrix equation is constructed:

Figure BDA0003761016330000101

其中,摄像机解析算力为Rχ,ξ为摄像机构设定阈值,当ξ≤Nk时,摄像机机构阈值单元发送偏转指令到摄像机,实现摄像机角度偏转。Among them, the analytical computing power of the camera is R χ , and ξ is the threshold value set by the camera mechanism. When ξ≤N k , the camera mechanism threshold unit sends a deflection command to the camera to realize the camera angle deflection.
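A minimal sketch of the threshold unit's decision rule follows. `should_deflect` implements the stated condition ξ≤Nk; `assign_threshold` is a purely hypothetical sizing rule based on the camera's analytic compute Rχ, added only to illustrate why the threshold would depend on per-vehicle tracking cost.

```python
def should_deflect(num_vehicles_in_view: int, threshold_xi: int) -> bool:
    """Per the condition xi <= N_k: when the vehicles in view reach the camera's
    configured threshold, the threshold unit issues a deflection command."""
    return threshold_xi <= num_vehicles_in_view

def assign_threshold(resolve_power_r: float, cost_per_vehicle: float) -> int:
    """Hypothetical rule (not in the patent): cap the vehicles a camera tracks
    by its available analytic compute R_chi divided by the per-vehicle cost."""
    return max(1, int(resolve_power_r // cost_per_vehicle))
```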

可见，本申请中摄像机的偏转角度是根据当前场景下的车辆数量确定的，目的是使得停车场摄像机构动态监控停车场内每一辆行进车辆，且每台摄像机不会监控过多车辆，以免算力不足，无法跟踪目标车辆。It can be seen that the deflection angle of the cameras in this application is determined by the number of vehicles in the current scene, so that the parking-lot camera mechanism dynamically monitors every moving vehicle in the parking lot while no single camera monitors too many vehicles, which would leave insufficient computing power to track the target vehicle.

(4)车辆轨迹分析模块用于根据连续时刻的车辆图像信息获取目标车辆的轨迹信息,并传输到中央控制模块,供环境误差神经网络学习模型学习;(4) The vehicle trajectory analysis module is used to obtain the trajectory information of the target vehicle according to the vehicle image information at successive times, and transmit it to the central control module for learning by the environmental error neural network learning model;

车辆轨迹分析模块通过高精度摄像机确定轨迹分析目标车辆及其特征信息，根据特征信息与车辆信息量化单元匹配锁定目标车辆位置信息。采用车辆轮胎轮廓检测及偏转角度测量得到车辆行进方向，在不同时间帧内目标车辆位置更新信息获取车辆行进路径，获取目标车辆的轨迹信息，轨迹预测方程为：The vehicle trajectory analysis module determines the trajectory-analysis target vehicle and its feature information through the high-precision cameras, and locks the target vehicle position information by matching the feature information against the vehicle information quantification unit. The travel direction of the vehicle is obtained from vehicle tire contour detection and deflection angle measurement, the travel path is obtained from the position updates of the target vehicle in different time frames, and the trajectory information of the target vehicle is thereby acquired. The trajectory prediction equation is:

Hi,j=h(yj, xi, vi,j)

其中，vi,j为观测噪声；xi为目标车辆位置；yj为车位坐标点。Here vi,j is the observation noise, xi is the target vehicle position, and yj is the parking-space coordinate point.
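A minimal illustration of deriving the travel direction from position updates in consecutive time frames (the angle convention, names, and planar simplification are assumptions, not the patent's method):

```python
import math

def travel_heading_deg(p_prev, p_curr):
    """Travel direction of the target vehicle from its position update between two
    time frames, measured counter-clockwise from the +x axis in degrees."""
    return math.degrees(math.atan2(p_curr[1] - p_prev[1],
                                   p_curr[0] - p_prev[0])) % 360.0
```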

通过进行车辆轨迹分析并将其分析结果传输至中央控制模块,可以供环境误差神经网络模型学习,能够不断优化整个系统,提高整个智能泊车系统的鲁棒性和定位精确率。By analyzing the vehicle trajectory and transmitting the analysis results to the central control module, it can be used for the learning of the environmental error neural network model, which can continuously optimize the entire system and improve the robustness and positioning accuracy of the entire intelligent parking system.

(5)UWB/IMU模块用于获取目标车辆与UWB基站之间的距离以及车辆的运动信息,从而获取目标车辆在停车场全域所处虚拟坐标信息及惯性前进方向;(5) The UWB/IMU module is used to obtain the distance between the target vehicle and the UWB base station and the motion information of the vehicle, so as to obtain the virtual coordinate information and inertial direction of the target vehicle in the whole area of the parking lot;

UWB/IMU模块获取目标车辆与基站之间的距离：获取各个UWB基站在停车场内空间坐标，获取各个UWB基站与目标车辆之间通过脉冲信号传递的时间，计算各个基站与目标车辆之间的距离，从而可以计算得到目标车辆在停车场内虚拟坐标信息：The UWB/IMU module obtains the distance between the target vehicle and the base stations: it obtains the spatial coordinates of each UWB base station in the parking lot and the pulse-signal propagation time between each UWB base station and the target vehicle, and computes the distance between each base station and the target vehicle, from which the virtual coordinates of the target vehicle in the parking lot can be calculated:

Figure BDA0003761016330000113

其中，m和n用于标识不同的基站，lm,n表示UWB基站m和n之间的距离，t为脉冲传输时间；c为光速；(x,y,z)为目标车辆在停车场内虚拟坐标；Here m and n identify different base stations, lm,n denotes the distance between UWB base stations m and n, t is the pulse transmission time, c is the speed of light, and (x, y, z) are the virtual coordinates of the target vehicle in the parking lot;

本实施例中，共有4个UWB基站，它们在停车场内空间坐标是已知的，分别为(x1,y1,z1)、(x2,y2,z2)、(x3,y3,z3)、(x4,y4,z4)，因此联立上述公式，就可以求得目标车辆在停车场内虚拟坐标(x,y,z)。In this embodiment there are four UWB base stations whose spatial coordinates in the parking lot are known, namely (x1,y1,z1), (x2,y2,z2), (x3,y3,z3), and (x4,y4,z4); solving the above formulas simultaneously therefore yields the virtual coordinates (x,y,z) of the target vehicle in the parking lot.
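The simultaneous solve from four known anchors can be sketched as follows: subtracting the first sphere equation from the other three linearizes the system, leaving a 3×3 linear solve for (x, y, z). This is an illustrative noise-free solver under assumed perfect ranges, not the patent's implementation.

```python
def trilaterate(anchors, dists):
    """Solve the vehicle coordinates (x, y, z) from four known anchor positions and
    measured ranges by linearizing the sphere equations against the first anchor."""
    (x1, y1, z1), d1 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi, zi), di in zip(anchors[1:], dists[1:]):
        # (xi^2+yi^2+zi^2) - (x1^2+y1^2+z1^2) - (di^2 - d1^2) = 2(xi-x1)x + ...
        A.append([2 * (xi - x1), 2 * (yi - y1), 2 * (zi - z1)])
        b.append((xi**2 + yi**2 + zi**2) - (x1**2 + y1**2 + z1**2) - (di**2 - d1**2))
    # Gaussian elimination with partial pivoting on the 3x3 system
    n = 3
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return tuple(x)
```

In practice the ranges are noisy, which is why the patent layers IMU data, visual tracking, and the environmental error model on top of this raw geometric solve.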

UWB/IMU模块获取车辆的运动信息:通过IMU惯性模块获取加速度计数据E(ε)与陀螺仪数据E(σ),从而获得目标车辆的惯性行进方向。The UWB/IMU module obtains the motion information of the vehicle: the accelerometer data E(ε) and the gyroscope data E(σ) are obtained through the IMU inertial module, so as to obtain the inertial travel direction of the target vehicle.

(6)环境误差神经网络学习模型中以卷积神经网络建立环境误差感知深度学习模型，提取环境因素导致定位偏差的误差因子，并逐层组合抽象生成高层特征，帮助UWB/IMU模块及摄像机单元纠正定位精度；在网络前向计算时，在卷积层，同时有多个卷积核对输入进行卷积运算，生成多个特征图，每个特征图的维度相对于输入的维度有所降低；在次采样层，每个特征图经过池化得到维度进一步降低的对应图，依次交叉堆叠后，经过全连接层到达网络输出，供整个智能泊车系统主动学习，提高整个智能泊车系统的鲁棒性和定位精确率；(6) In the environmental error neural network learning model, a convolutional neural network builds a deep learning model of environmental error perception, extracting the error factors by which environmental factors cause positioning deviation and combining and abstracting them layer by layer into high-level features that help the UWB/IMU module and the camera unit correct their positioning accuracy. In the forward pass, multiple convolution kernels in a convolution layer convolve the input simultaneously, producing multiple feature maps whose dimensions are reduced relative to the input; in the sub-sampling layer, each feature map is pooled into a corresponding map of further reduced dimension. After being stacked alternately, the features pass through a fully connected layer to the network output, enabling active learning by the whole intelligent parking system and improving its robustness and positioning accuracy;
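The dimensionality reductions described for the forward pass (a "valid" convolution shrinking each feature map, then pooling halving it again) can be sketched in plain Python. This toy code illustrates only those shape changes, not the patent's actual network or its learned kernels.

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution: each output feature map is smaller than its input,
    matching the dimensionality reduction described for the convolution layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def max_pool2(fm):
    """2x2 max pooling for the sub-sampling layer, halving each dimension."""
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, len(fm[0]) - 1, 2)]
            for i in range(0, len(fm) - 1, 2)]
```

Running several convolution kernels over the same input yields the multiple feature maps mentioned above; stacking conv and pooling layers alternately is what progressively abstracts the error factors into high-level features.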

第n个固定UWB基站坐标Un=(xn,yn,zn)为已知坐标；待定位的车辆在t时刻的位置记为Nt=(xt,yt,zt)；t时刻UWB基站到目标车辆的距离为：The coordinates of the n-th fixed UWB base station, Un=(xn, yn, zn), are known; the position of the vehicle to be located at time t is denoted Nt=(xt, yt, zt); the distance from a UWB base station to the target vehicle at time t is:

Figure BDA0003761016330000111

其中
Figure BDA0003761016330000112
为此时误差因子；where the quantity above is the error factor at this time;

将不同时刻误差因子代入
Figure BDA0003761016330000121
用以环境误差神经网络模型学习；其中
Figure BDA0003761016330000122
是高层特征量；
Figure BDA0003761016330000123
为权和系数，
Figure BDA0003761016330000124
其中v为目标车辆行驶速度；Ti+1、Ti对应目标车辆行进过程中某一时刻及后一时间帧记录的时间，(Ti+1-Ti)为行进的时间差；θi为此时刻车辆轮转角度。The error factors at different times are substituted into the first expression above for the environmental error neural network model to learn; of the remaining quantities, the first is the high-level feature, the second is the weighting coefficient, and the third is its expression, where v is the travel speed of the target vehicle, Ti+1 and Ti are the times recorded at a given moment and the following time frame during the target vehicle's travel, (Ti+1-Ti) is the travel time difference, and θi is the wheel deflection angle of the vehicle at that moment.

需要注意的是，环境误差神经网络学习模型是用于纠正定位精度的。在实际应用时，可以由摄像机单元采集图像并根据环境是否存在干扰将场景分为无环境干扰和存在环境干扰，对于不存在环境干扰的场景，可以不使用环境误差神经网络学习模型，直接由数据融合模块UWB/IMU模块的虚拟坐标信息和摄像机单元的目标车辆的车辆位置信息得到实时精确位置信息，完成位姿估计，对于存在环境干扰的场景，需要使用环境误差神经网络学习模型帮助UWB/IMU模块及摄像机单元纠正定位精度，再进行融合完成位姿估计。It should be noted that the environmental error neural network learning model is used to correct positioning accuracy. In practical application, the camera unit collects images and the scene is classified as free of, or subject to, environmental interference. For scenes without environmental interference, the environmental error neural network learning model may be skipped: the data fusion module directly fuses the virtual coordinates from the UWB/IMU module with the vehicle position of the target vehicle from the camera unit to obtain real-time precise position information and complete the pose estimation. For scenes with environmental interference, the environmental error neural network learning model must first help the UWB/IMU module and the camera unit correct their positioning accuracy, after which fusion completes the pose estimation.

(7)信号传输与处理模块用于传输UWB/IMU模块和摄像机单元的数据至中央控制模块,主要是目标车辆当前UWB定位坐标及周围环境图像量化编码后信息;(7) The signal transmission and processing module is used to transmit the data of the UWB/IMU module and the camera unit to the central control module, mainly the current UWB positioning coordinates of the target vehicle and the quantized and encoded information of the surrounding environment image;

(8)计算与定位显示模块用于根据UWB/IMU模块和摄像机单元的数据进行坐标计算及视觉位置可视化跟踪显示,具体的,处理UWB/IMU模块及高精度摄像机输出的坐标及图像信息,进行UWB/IMU定位坐标计算及视觉位置跟踪显示;(8) The calculation and positioning display module is used for coordinate calculation and visual position visual tracking display according to the data of the UWB/IMU module and the camera unit. Specifically, it processes the coordinates and image information output by the UWB/IMU module and the high-precision camera, and performs UWB/IMU positioning coordinate calculation and visual position tracking display;

(9)数据融合模块用于融合UWB/IMU模块的虚拟坐标信息和摄像机单元的目标车辆的车辆位置信息得到实时精确位置信息,并将融合结果传输到中央控制模块;融合过程为:(9) The data fusion module is used to fuse the virtual coordinate information of the UWB/IMU module and the vehicle position information of the target vehicle of the camera unit to obtain real-time accurate position information, and transmit the fusion result to the central control module; the fusion process is:

建立融合目标定位优化函数:Establish the fusion target positioning optimization function:

Figure BDA0003761016330000125

Gi=f(xi-1, ui, wi)

Hi,j=h(yj, xi, vi,j)

其中，E(ε)与E(σ)为IMU惯性模块测得的加速度计数据和陀螺仪数据，

Figure BDA0003761016330000126

为UWB定位处理后坐标数据，T为观测目标车辆时间帧，wi为车辆响应速率，Gi为摄像机单元跟踪目标车辆得到的运动方程，Hi,j为车辆轨迹分析模块确定的轨迹预测方程，ui、vi,j为观测噪声，xi为目标车辆位置，yj为车位的坐标，下标没有含义仅表示函数中代入数据进行计算；当目标定位优化函数求解最小点时为最终优化后车辆的实时精确位置信息，从而解决定位漂移及视觉定位偏差，提高定位精度。Here E(ε) and E(σ) are the accelerometer and gyroscope data measured by the IMU inertial module,

Figure BDA0003761016330000126

denotes the coordinate data after UWB positioning processing, T is the observation time frame of the target vehicle, wi is the vehicle response rate, Gi is the motion equation obtained by the camera unit tracking the target vehicle, Hi,j is the trajectory prediction equation determined by the vehicle trajectory analysis module, ui and vi,j are observation noise, xi is the target vehicle position, and yj are the coordinates of the parking space; the subscripts carry no special meaning and merely indicate the data substituted into the functions. The minimum point of the target positioning optimization function gives the final optimized real-time precise position of the vehicle, which resolves positioning drift and visual positioning deviation and improves positioning accuracy.

(10)路径引导单元用于根据停车场当前图像信息筛选出通行车辆最少的车道信息；车位引导单元用于根据停车场当前图像信息筛选出空车位；中央控制模块根据车辆的实时精确位置信息以及车道信息和空车位的坐标引导目标车辆泊车。(10) The path guidance unit filters out the lane with the fewest passing vehicles from the current image information of the parking lot; the parking space guidance unit filters out empty parking spaces from the current image information of the parking lot; the central control module guides the target vehicle to park according to the real-time precise position of the vehicle, the lane information, and the coordinates of the empty parking space.
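The two guidance selections above reduce to simple minimizations; the following sketch (with hypothetical names and a squared-Euclidean distance assumption) illustrates them, not the patent's implementation.

```python
def pick_lane(lane_vehicle_counts):
    """Path guidance: select the lane with the fewest passing vehicles
    (tie-breaking is arbitrary here)."""
    return min(lane_vehicle_counts, key=lane_vehicle_counts.get)

def nearest_empty_space(vehicle_xy, empty_spaces):
    """Parking-space guidance: select the empty space closest to the vehicle's
    fused real-time position, by squared Euclidean distance."""
    return min(empty_spaces,
               key=lambda s: (s[0] - vehicle_xy[0]) ** 2 + (s[1] - vehicle_xy[1]) ** 2)
```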

高精度摄像机用以实时获取车辆的所处环境图像信息以及空车位图像信息；UWB/IMU模块用以获取车辆与停车场UWB基站的距离信息；信号传输与处理模块用以接受和发送中央控制模块发送的定位信号；计算与定位显示模块用以实时处理车辆在模拟坐标中位置信息；车辆轨迹分析模块用以追踪车辆运动轨迹并上传中央控制系统用来修正UWB/IMU模块定位精度；数据融合模块用以解决定位漂移及视觉定位偏差，提高定位精度。本发明在智能泊车过程中通过停车场端设备辅助，实现双模型融合定位，解决车辆在陌生停车环境中实时高精度位置跟踪，通过停车场设备与车辆协同配合实现智能泊车过程。The high-precision cameras obtain in real time the image information of the vehicle's environment and of empty parking spaces; the UWB/IMU module obtains the distance information between the vehicle and the parking lot's UWB base stations; the signal transmission and processing module receives and sends the positioning signals from the central control module; the calculation and positioning display module processes the vehicle's position in the simulated coordinates in real time; the vehicle trajectory analysis module tracks the vehicle's motion trajectory and uploads it to the central control system to correct the positioning accuracy of the UWB/IMU module; and the data fusion module resolves positioning drift and visual positioning deviation to improve positioning accuracy. During intelligent parking, the invention realizes dual-model fusion positioning with the assistance of parking-lot-side equipment, achieves real-time high-precision position tracking of the vehicle in an unfamiliar parking environment, and accomplishes the intelligent parking process through cooperation between the parking-lot equipment and the vehicle.

实施例2:Example 2:

一种基于UWB/IMU和视觉信息融合的智能泊车定位方法，基于实施例1中所描述的智能泊车定位系统，流程图如图3所示，还可以参考图5-图7，了解其细节，本说明书提供了如实施例或流程示意图的方法操作步骤，但基于常规或者无创造性的劳动可以包括更多或者更少的操作步骤。实施例中列举的步骤顺序仅仅为众多步骤执行顺序中的一种方式，不代表唯一的执行顺序。在实际中的系统或服务器产品执行时，可以按照实施例或者附图所示的方法顺序执行或者并行执行（例如并行处理器或者多线程处理的环境）或者调整没有时序限制的步骤的执行顺序。具体的，一种基于UWB/IMU和视觉信息融合的智能泊车定位方法包括以下步骤：An intelligent parking positioning method based on UWB/IMU and visual information fusion, based on the intelligent parking positioning system described in Embodiment 1; the flowchart is shown in Figure 3, and Figures 5-7 may also be consulted for details. This specification provides method operation steps as in the embodiments or flowcharts, but routine or non-inventive work may involve more or fewer steps. The order of steps listed in the embodiments is only one of many possible execution orders and does not represent the only one. When an actual system or server product executes the method, the steps may be executed sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) as shown in the embodiments or drawings, or the execution order of steps without timing constraints may be adjusted. Specifically, the intelligent parking positioning method based on UWB/IMU and visual information fusion includes the following steps:

S1、通过摄像机单元获取停车场当前图像信息并实时跟踪目标车辆及车辆所处场景，得到运动方程，确定目标车辆的车辆位置信息，停车场当前图像信息包括车辆图像信息、障碍物图像信息及周围车位和车道线图像信息；S1. Obtain the current image information of the parking lot through the camera unit, track the target vehicle and its scene in real time, obtain the motion equation, and determine the vehicle position information of the target vehicle; the current image information of the parking lot includes vehicle images, obstacle images, and images of surrounding parking spaces and lane lines;

S2、通过车辆信息量化单元将摄像机单元采集的图像信息量化为车辆信息,确定摄像机单元采集的当前场景下的车辆数量,并标定目标车辆;S2, quantifying the image information collected by the camera unit into vehicle information through the vehicle information quantification unit, determining the number of vehicles in the current scene collected by the camera unit, and calibrating the target vehicle;

S3、根据当前场景下的车辆数量,通过摄像机机构阈值单元确定摄像机单元的偏转角度阈值并控制摄像机单元执行;S3, according to the number of vehicles in the current scene, determine the deflection angle threshold of the camera unit through the camera mechanism threshold unit and control the camera unit to execute;

S4. The UWB/IMU module obtains the distance between the target vehicle and the UWB base stations together with the motion information of the vehicle, thereby obtaining the virtual coordinate information and inertial heading of the target vehicle across the whole parking lot;

S5. The vehicle trajectory analysis module obtains the trajectory information of the target vehicle from the vehicle image information at successive moments and transmits it to the central control module for learning by the environmental error neural network learning model;

S6. Within the environmental error neural network learning model, a convolutional neural network is used to establish an environmental-error-perception deep learning model that extracts the error factors by which environmental conditions bias positioning, helping the UWB/IMU module and the camera unit correct positioning accuracy;

S7. The signal transmission and processing module transmits the data of the UWB/IMU module and the camera unit to the central control module;

S8. The calculation and positioning display module performs coordinate calculation and visual position tracking display based on the data of the UWB/IMU module and the camera unit;

S9. The data fusion module fuses the virtual coordinate information from the UWB/IMU module with the vehicle position information of the target vehicle from the camera unit to obtain the real-time precise position of the vehicle, and transmits the fusion result to the central control module;

S10. The path guidance unit filters out the lane with the fewest passing vehicles from the current image information of the parking lot, the parking space guidance unit filters out empty parking spaces from the same information, and the central control module guides the target vehicle to park according to the real-time precise position of the vehicle, the lane information, and the coordinates of the empty parking spaces.
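As an illustration only, the S1-S10 flow above can be sketched as a single positioning-and-guidance cycle. Every name below (the sensor stubs, the weighted-average fusion, the nearest-slot rule) is an assumption made for the sketch, not the patent's implementation:

```python
import math

# Illustrative stand-ins for the modules of steps S1-S10; the real system
# (cameras, UWB/IMU hardware, neural error model) is far richer than this.

def visual_position(camera_frame):
    # S1-S2: the camera unit yields a vehicle position estimate
    return camera_frame["vehicle_xy"]

def uwb_imu_position(uwb_fix):
    # S4: the UWB/IMU module yields a virtual-coordinate estimate
    return uwb_fix["xy"]

def fuse(cam_xy, uwb_xy, w_cam=0.5, w_uwb=0.5):
    # S9: data fusion, here reduced to a weighted average of the two estimates
    s = w_cam + w_uwb
    return tuple((w_cam * c + w_uwb * u) / s for c, u in zip(cam_xy, uwb_xy))

def guide(position, free_slots):
    # S10: guide the vehicle to the nearest empty parking space
    return min(free_slots, key=lambda slot: math.dist(position, slot))

def positioning_cycle(camera_frame, uwb_fix, free_slots):
    cam_xy = visual_position(camera_frame)
    uwb_xy = uwb_imu_position(uwb_fix)
    pos = fuse(cam_xy, uwb_xy)  # S7-S9: transmit, compute, fuse
    return pos, guide(pos, free_slots)
```

In this toy form, a camera fix of (10.0, 4.0) and a UWB fix of (10.4, 4.4) fuse to roughly (10.2, 4.2), and the nearest empty slot is chosen from there.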

The specific implementation details of each of the above steps are the same as in Embodiment 1 and are not repeated here.

In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., a central processing unit (CPU)), input/output interfaces, network interfaces, and memory.

The memory may include non-persistent storage in computer-readable media, in forms such as random access memory (RAM) and/or non-volatile memory, e.g., read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media exclude transitory media such as modulated data signals and carrier waves.

It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example using an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs of the present application (including associated data structures) may be stored in a computer-readable recording medium, e.g., RAM, magnetic or optical drives, floppy disks, and similar devices. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with a processor to perform the individual steps or functions.

In addition, a part of the present application may be embodied as a computer program product, such as computer program instructions which, when executed by a computer, can invoke or provide the methods and/or technical solutions according to the present application through the operation of that computer. The program instructions that invoke the methods of the present application may be stored in fixed or removable recording media, transmitted via a data stream in broadcast or other signal-bearing media, and/or stored in the working memory of a computer device that runs according to the program instructions. Here, an embodiment according to the present application includes an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to run the methods and/or technical solutions based on the foregoing embodiments of the present application.

The description and application of the invention herein are illustrative and are not intended to limit the scope of the invention to the above embodiments. The effects or advantages involved in the embodiments may fail to materialize owing to interference from various factors, and their description is not intended to limit the embodiments. Variations and modifications of the embodiments disclosed herein are possible, and replacement and equivalent components of the embodiments are known to those of ordinary skill in the art. It should be clear to those skilled in the art that the invention may be implemented in other forms, structures, arrangements, and proportions, and with other components and materials, without departing from its spirit or essential characteristics. Other variations and modifications may be made to the embodiments disclosed herein without departing from the scope and spirit of the invention.

Claims (10)

1. An intelligent parking positioning system based on UWB/IMU and visual information fusion is characterized in that the system is used for vehicle positioning of a parking lot and comprises a central control module, a UWB/IMU module, a camera unit, a signal transmission and processing module, a calculation and positioning display module, a vehicle track analysis module and a data fusion module, wherein the central control module comprises a vehicle information quantification unit, a camera mechanism threshold value unit, an environmental error neural network learning model, a path guide unit and a parking space guide unit;
the camera unit comprises a plurality of cameras and is used for acquiring current image information of a parking lot, tracking a target vehicle and a scene where the vehicle is located in real time, obtaining a motion equation, and determining vehicle position information of the target vehicle, wherein the current image information of the parking lot comprises vehicle image information, obstacle image information and surrounding parking space and lane line image information;
the vehicle information quantization unit is used for quantizing the image information acquired by the camera unit into vehicle information, determining the number of vehicles in the current scene acquired by the camera unit and calibrating the target vehicle;
the camera mechanism threshold unit is used for determining a deflection angle threshold of the camera unit according to the number of vehicles in the current scene and controlling the camera unit to execute;
the vehicle track analysis module is used for acquiring track information of a target vehicle according to vehicle image information at continuous moments, and transmitting the track information to the central control module for learning of an environmental error neural network learning model;
the UWB/IMU module is used for acquiring the distance between a target vehicle and a UWB base station and the motion information of the vehicle so as to acquire the virtual coordinate information of the target vehicle in the whole parking lot domain and the inertial advancing direction;
in the environment error neural network learning model, a convolutional neural network is used for establishing an environment error perception deep learning model, extracting an error factor of positioning deviation caused by an environment factor, and helping a UWB/IMU module and a camera unit to correct positioning accuracy;
the signal transmission and processing module is used for transmitting data of the UWB/IMU module and the camera unit to the central control module;
the calculation and positioning display module is used for performing coordinate calculation and visual position visual tracking display according to the data of the UWB/IMU module and the camera unit;
the data fusion module is used for fusing virtual coordinate information of the UWB/IMU module and vehicle position information of a target vehicle of the camera unit to obtain real-time accurate position information, and transmitting a fusion result to the central control module;
the route guiding unit is used for screening out the lane information with the least passing vehicles according to the current image information of the parking lot;
and the parking space guiding unit is used for screening out empty parking spaces according to the current image information of the parking lot.
2. The intelligent parking positioning system based on UWB/IMU and visual information fusion of claim 1, wherein the environment error neural network learning model is an environment error perception deep learning model established by a convolutional neural network, and error factors of positioning deviation caused by environment factors are extracted and combined layer by layer to generate high-level features in an abstract manner, so as to help a UWB/IMU module to correct positioning accuracy; the method for extracting the error factor of the positioning deviation caused by the environmental factors comprises the following steps:
the coordinates of the nth fixed UWB base station, U_n = (x_n, y_n, z_n), are known; the position of the vehicle to be positioned at time t is recorded as N_t = (x_t, y_t, z_t); the distance from the UWB base station to the target vehicle at time t is:

d_n(t) = sqrt((x_t - x_n)^2 + (y_t - y_n)^2 + (z_t - z_n)^2) + e_n(t),

where e_n(t) is the error factor at that time;

the error factors e_n(t) at different times are substituted into the environmental error neural network model for learning, together with the high-level feature quantities and the weight and coefficient terms, whose exact expressions appear only as equation images in the original document, as does the auxiliary expression in v, (T_(i+1) - T_i), and theta_i,

wherein v is the travel speed of the target vehicle; T_(i+1) and T_i are the times recorded at a given moment and at the subsequent time frame during the travel of the target vehicle, and (T_(i+1) - T_i) is the travel time difference; theta_i is the vehicle wheel angle at that moment.
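As a minimal sketch of the error-factor idea in this claim, assuming the factor is simply the gap between the measured UWB range and the true Euclidean distance from U_n to N_t (the claim's exact expressions exist only as equation images):

```python
import math

def error_factor(base_station, vehicle, measured_range):
    """Gap between the measured UWB range and the Euclidean distance
    from base station U_n to vehicle position N_t; this residual is
    what the environmental error neural network model would learn."""
    true_distance = math.dist(base_station, vehicle)
    return measured_range - true_distance
```

For example, a base station at the origin and a vehicle at (3, 4, 0) are 5 m apart; a measured range of 5.25 m implies an error factor of 0.25 m (e.g., multipath bias).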
3. The intelligent parking positioning system based on UWB/IMU and visual information fusion of claim 1, wherein the vehicle information quantization unit is configured to quantize image information acquired by the camera unit into vehicle information, determine the number of vehicles in a current scene acquired by the camera unit, and calibrate a target vehicle, specifically:
the vehicle information quantization unit acquires vehicle image information, quantizes the image information into vehicle types, colors and license plate numbers of vehicles corresponding to the pixel points, sequentially generates unique character string codes for storage, and generates a vehicle number ID for each vehicle; the vehicle information quantization unit acquires a current time frame vehicle body image of a target vehicle, performs local image processing on the vehicle body image to obtain discrete pixel points corresponding to a vehicle body, a color and a license plate, converts the discrete pixel points into discrete quantity values, generates a unique vehicle digital ID within corresponding time, and performs identity calibration on the target vehicle.
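A hedged sketch of the quantization-to-ID step: the hashing scheme, field names, and ID length below are assumptions for illustration; the claim only requires a unique character-string code per vehicle and a unique digital ID within the corresponding time frame:

```python
import hashlib

def vehicle_string_code(vehicle_type, color, plate):
    # Quantized attributes joined into the unique character-string code
    return f"{vehicle_type}|{color}|{plate}"

def vehicle_digital_id(vehicle_type, color, plate, time_frame):
    # Unique digital ID for the target vehicle within the current time frame,
    # derived here (as an assumption) by hashing code + time frame
    code = vehicle_string_code(vehicle_type, color, plate)
    return hashlib.sha256(f"{code}@{time_frame}".encode()).hexdigest()[:16]
```

Two vehicles differing in any quantized attribute, or the same vehicle in a different time frame, receive different IDs; the same inputs always reproduce the same ID.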
4. The intelligent parking positioning system based on UWB/IMU and visual information fusion of claim 1, wherein the camera mechanism threshold unit is configured to determine a deflection angle threshold of the camera unit according to a number of vehicles in a current scene and control the camera unit to execute, and specifically:
the azimuth angle of the ith high-precision camera corresponding to the parking lot space coordinate (x, y, z) is (alpha_i, beta_i, gamma_i), where (x, y, z) is the spatial coordinate position of the ith high-precision camera in the parking lot; the number of target vehicles within its viewing-angle range, N_k, is acquired according to the azimuth angle, and a state matrix equation is constructed (the equation appears only as an image in the original),

wherein the resolving power of the camera is R_chi and xi is the threshold set for the camera mechanism; when xi <= N_k, the camera mechanism threshold unit sends a deflection instruction to the camera, thereby realizing angular deflection of the camera.
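The deflection rule of this claim reduces to comparing the in-view vehicle count N_k against the threshold xi. A sketch follows; the fixed yaw increment is an added assumption, since the claim only specifies that a deflection instruction is issued:

```python
def deflection_command(n_targets_in_view, xi, current_yaw_deg, step_deg=15.0):
    """Claim-4 rule: when xi <= N_k the camera mechanism threshold unit
    issues a deflection instruction; the command is modelled here as a
    fixed yaw increment, wrapped to [0, 360)."""
    if xi <= n_targets_in_view:
        return (current_yaw_deg + step_deg) % 360.0  # deflect the camera
    return current_yaw_deg                           # hold current azimuth
```

With xi = 3, a camera seeing 5 vehicles deflects; one seeing 2 holds its azimuth.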
5. The intelligent parking positioning system based on the fusion of the UWB/IMU and the visual information as claimed in claim 1, wherein the UWB/IMU module is configured to obtain a distance between the target vehicle and the UWB base station and motion information of the vehicle, so as to obtain virtual coordinate information and an inertial heading direction of the target vehicle in the whole area of the parking lot, and specifically:
the UWB/IMU module acquires the distance between the target vehicle and the base stations: the spatial coordinates of each UWB base station in the parking lot are obtained, the propagation time of the pulse signal between each UWB base station and the target vehicle is measured, and the distance between each base station and the target vehicle is calculated, from which the virtual coordinate information of the target vehicle within the parking lot is computed (the ranging equation appears only as an image in the original),

where m and n identify different base stations, l_(m,n) represents the distance between UWB base stations m and n, t is the pulse transmission time, c is the speed of light, and (x, y, z) are the virtual coordinates of the target vehicle within the parking lot;

the UWB/IMU module acquires the motion information of the vehicle: accelerometer data E(ε) and gyroscope data E(σ) are obtained through the IMU inertial module, from which the inertial heading of the target vehicle is derived.
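Since the claim's ranging equation exists only as an image, the sketch below assumes the standard construction: time-of-flight ranging (distance = c · t) followed by linearized trilateration against four base stations with known coordinates. All names are illustrative assumptions:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(pulse_time_s):
    # Base-station-to-vehicle distance from one-way pulse propagation time
    return C * pulse_time_s

def trilaterate(anchors, dists):
    """Recover (x, y, z) from four base stations: subtracting the first
    range equation from the others linearizes the problem into a 3x3 system,
    solved here by Gaussian elimination with partial pivoting."""
    p0, d0 = anchors[0], dists[0]
    a, b = [], []
    for p, d in zip(anchors[1:], dists[1:]):
        a.append([2 * (p[k] - p0[k]) for k in range(3)])
        b.append(sum(p[k] ** 2 - p0[k] ** 2 for k in range(3)) - d ** 2 + d0 ** 2)
    m = [row + [rhs] for row, rhs in zip(a, b)]
    for i in range(3):
        pivot = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[pivot] = m[pivot], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            m[r] = [mr - f * mi for mr, mi in zip(m[r], m[i])]
    x = [0.0] * 3
    for i in reversed(range(3)):
        x[i] = (m[i][3] - sum(m[i][j] * x[j] for j in range(i + 1, 3))) / m[i][i]
    return tuple(x)
```

With noise-free ranges the solver recovers the vehicle position exactly; in practice the measured ranges carry the error factors that the environmental error model is meant to correct.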
6. The intelligent parking positioning system based on UWB/IMU and visual information fusion of claim 1, wherein the data fusion module is configured to fuse the virtual coordinate information of the UWB/IMU module and the vehicle position information of the target vehicle of the camera unit, and transmit a fusion result to the central control module, specifically:
establishing a fused target-positioning optimization function (its full expression appears only as an equation image in the original), with

G_i = f(x_(i-1), u_i, w_i)

H_(i,j) = h(y_j, x_i, v_(i,j))

wherein E(ε) and E(σ) are the accelerometer and gyroscope data measured by the IMU inertial module; the coordinate data after UWB positioning processing (also shown only as an image) enters the function; T is the observation time frame of the target vehicle; w_i is the vehicle response rate; G_i is the motion equation obtained by the camera unit tracking the target vehicle; H_(i,j) is the trajectory prediction equation determined by the vehicle trajectory analysis module; u_i and v_(i,j) are observation noise; x_i is the target vehicle position; and y_j are the parking space coordinates;

the real-time accurate position information of the finally optimized vehicle is obtained when the target-positioning optimization function attains its minimum point.
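Because the optimization function itself is an image in the original, the following is only an assumed illustration of "position = minimizer of a fusion objective": a quadratic two-term objective (UWB residual plus camera residual) whose minimizer has the closed form below:

```python
def fuse_position(uwb_xyz, cam_xyz, w_uwb=1.0, w_cam=1.0):
    """Minimize w_uwb*||p - uwb||^2 + w_cam*||p - cam||^2 over p.
    Setting the gradient to zero yields the weighted average below,
    i.e., the minimum point of this assumed quadratic objective."""
    s = w_uwb + w_cam
    return tuple((w_uwb * u + w_cam * c) / s for u, c in zip(uwb_xyz, cam_xyz))
```

When the UWB fix is trusted twice as much as the camera fix (w_uwb = 2, w_cam = 1), the fused point lies two-thirds of the way from the camera estimate toward the UWB estimate.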
7. An intelligent parking positioning method based on UWB/IMU and visual information fusion is characterized in that, based on the intelligent parking positioning system based on UWB/IMU and visual information fusion as claimed in any one of claims 1-6, comprising the following steps:
the method comprises the following steps of S1, obtaining current image information of a parking lot through a camera unit, tracking a target vehicle and a scene where the vehicle is located in real time to obtain a motion equation, and determining vehicle position information of the target vehicle, wherein the current image information of the parking lot comprises vehicle image information, obstacle image information and image information of surrounding parking places and lane lines;
s2, quantizing the image information acquired by the camera unit into vehicle information through a vehicle information quantization unit, determining the number of vehicles in the current scene acquired by the camera unit, and calibrating the target vehicle;
s3, according to the number of vehicles in the current scene, determining a deflection angle threshold of a camera unit through a camera mechanism threshold unit and controlling the camera unit to execute the deflection angle threshold;
s4, the UWB/IMU module acquires the distance between the target vehicle and the UWB base station and the motion information of the vehicle, so that the virtual coordinate information and the inertial advancing direction of the target vehicle in the whole area of the parking lot are acquired;
s5, the vehicle track analysis module acquires track information of the target vehicle according to the vehicle image information at continuous moments, and transmits the track information to the central control module for learning of the environment error neural network learning model;
s6, establishing an environmental error perception deep learning model by a convolutional neural network in the environmental error neural network learning model, extracting an error factor of positioning deviation caused by environmental factors, and helping a UWB/IMU module and a camera unit to correct positioning accuracy;
s7, the signal transmission and processing module transmits the data of the UWB/IMU module and the camera unit to the central control module;
s8, the calculation and positioning display module performs coordinate calculation and visual position visual tracking display according to the data of the UWB/IMU module and the camera unit;
s9, the data fusion module fuses virtual coordinate information of the UWB/IMU module and vehicle position information of a target vehicle of the camera unit to obtain real-time accurate position information of the vehicle, and transmits a fusion result to the central control module;
s10, the path guiding unit screens out lane information with the least passing vehicles according to the current image information of the parking lot, the parking space guiding unit screens out empty parking spaces according to the current image information of the parking lot, and the central control module guides the target vehicle to park according to the real-time accurate position information of the vehicle, the lane information and the coordinates of the empty parking spaces.
8. An intelligent parking positioning method based on UWB/IMU and visual information fusion as claimed in claim 7, wherein in step S6, the manner of extracting the error factor of the positioning deviation caused by the environmental factors includes:
the coordinates of the nth fixed UWB base station, U_n = (x_n, y_n, z_n), are known; the position of the vehicle to be positioned at time t is recorded as N_t = (x_t, y_t, z_t); the distance from the UWB base station to the target vehicle at time t is:

d_n(t) = sqrt((x_t - x_n)^2 + (y_t - y_n)^2 + (z_t - z_n)^2) + e_n(t),

where e_n(t) is the error factor at that time;

the error factors e_n(t) at different times are substituted into the environmental error neural network model for learning, together with the high-level feature quantities and the weight and coefficient terms, whose exact expressions appear only as equation images in the original document, as does the auxiliary expression in v, (T_(i+1) - T_i), and theta_i,

wherein v is the travel speed of the target vehicle; T_(i+1) and T_i are the times recorded at a given moment and at the subsequent time frame during the travel of the target vehicle, and (T_(i+1) - T_i) is the travel time difference; theta_i is the vehicle wheel angle at that moment.
9. The intelligent parking positioning method based on UWB/IMU and visual information fusion of claim 7, wherein the step S3 specifically comprises:
the azimuth angle of the ith high-precision camera corresponding to the parking lot space coordinate (x, y, z) is (alpha_i, beta_i, gamma_i), where (x, y, z) is the spatial coordinate position of the ith high-precision camera in the parking lot; the number of target vehicles within its viewing-angle range, N_k, is acquired according to the azimuth angle, and a state matrix equation is constructed (the equation appears only as an image in the original),

wherein the resolving power of the camera is R_chi and xi is the threshold set for the camera mechanism; when xi <= N_k, the camera mechanism threshold unit sends a deflection instruction to the camera, thereby realizing angular deflection of the camera.
10. The intelligent parking positioning method based on UWB/IMU and visual information fusion of claim 7, wherein the step S9 is specifically:
establishing a fused target-positioning optimization function (its full expression appears only as an equation image in the original), with

G_i = f(x_(i-1), u_i, w_i)

H_(i,j) = h(y_j, x_i, v_(i,j))

wherein E(ε) and E(σ) are the accelerometer and gyroscope data measured by the IMU inertial module; the coordinate data after UWB positioning processing (also shown only as an image) enters the function; T is the observation time frame of the target vehicle; w_i is the vehicle response rate; G_i is the motion equation obtained by the camera unit tracking the target vehicle; H_(i,j) is the trajectory prediction equation determined by the vehicle trajectory analysis module; u_i and v_(i,j) are observation noise; x_i is the target vehicle position; and y_j are the parking space coordinates;

the real-time accurate position information of the finally optimized vehicle is obtained when the target-positioning optimization function attains its minimum point.
CN202210871578.9A 2022-07-22 2022-07-22 Intelligent parking positioning system and method based on UWB/IMU and visual information fusion Active CN115235452B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210871578.9A CN115235452B (en) 2022-07-22 2022-07-22 Intelligent parking positioning system and method based on UWB/IMU and visual information fusion


Publications (2)

Publication Number Publication Date
CN115235452A true CN115235452A (en) 2022-10-25
CN115235452B CN115235452B (en) 2024-08-27

Family

ID=83674829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210871578.9A Active CN115235452B (en) 2022-07-22 2022-07-22 Intelligent parking positioning system and method based on UWB/IMU and visual information fusion

Country Status (1)

Country Link
CN (1) CN115235452B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115540854A (en) * 2022-12-01 2022-12-30 成都信息工程大学 Active positioning method, equipment and medium based on UWB assistance
CN115880888A (en) * 2022-11-28 2023-03-31 复旦大学 Intersection safety guiding method, equipment and medium based on digital twin
CN116612458A (en) * 2023-05-30 2023-08-18 易飒(广州)智能科技有限公司 Deep learning-based parking path determination method and system
CN116976535A (en) * 2023-06-27 2023-10-31 上海师范大学 Path planning algorithm based on fusion of few obstacle sides and steering cost

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104697517A (en) * 2015-03-26 2015-06-10 江南大学 Multi-target tracking and positioning system for indoor parking lot
CN105946853A (en) * 2016-04-28 2016-09-21 中山大学 Long-distance automatic parking system and method based on multi-sensor fusion
CN107600067A (en) * 2017-09-08 2018-01-19 中山大学 A kind of autonomous parking system and method based on more vision inertial navigation fusions
WO2020056874A1 (en) * 2018-09-17 2020-03-26 魔门塔(苏州)科技有限公司 Automatic parking system and method based on visual recognition
CN111239790A (en) * 2020-01-13 2020-06-05 上海师范大学 Vehicle navigation system based on 5G network machine vision
WO2022100272A1 (en) * 2020-11-11 2022-05-19 Oppo广东移动通信有限公司 Indoor positioning method and related apparatus
CN114623823A (en) * 2022-05-16 2022-06-14 青岛慧拓智能机器有限公司 UWB (ultra wide band) multi-mode positioning system, method and device integrating odometer


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Zhibing: "Fully automatic parking system based on fusion of ultrasonic radar and panoramic high-definition imagery", Electronics World, no. 19, 15 October 2020 (2020-10-15) *
Bao Shixi: "Research on an indoor and outdoor positioning system combining GNSS/UWB with IMU", China Master's and Doctoral Dissertations Electronic Journal, 31 December 2023 (2023-12-31) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880888A (en) * 2022-11-28 2023-03-31 复旦大学 Intersection safety guiding method, equipment and medium based on digital twin
CN115540854A (en) * 2022-12-01 2022-12-30 成都信息工程大学 Active positioning method, equipment and medium based on UWB assistance
CN116612458A (en) * 2023-05-30 2023-08-18 易飒(广州)智能科技有限公司 Deep learning-based parking path determination method and system
CN116612458B (en) * 2023-05-30 2024-06-04 易飒(广州)智能科技有限公司 Deep learning-based parking path determination method and system
CN116976535A (en) * 2023-06-27 2023-10-31 上海师范大学 Path planning algorithm based on fusion of few obstacle sides and steering cost
CN116976535B (en) * 2023-06-27 2024-05-17 上海师范大学 Path planning method based on fusion of few obstacle sides and steering cost

Also Published As

Publication number Publication date
CN115235452B (en) 2024-08-27

Similar Documents

Publication Publication Date Title
CN115235452B (en) Intelligent parking positioning system and method based on UWB/IMU and visual information fusion
CN112700470B (en) Target detection and track extraction method based on traffic video stream
KR102525227B1 (en) Method and apparatus for determining road information data, electronic device, storage medium and program
US10919543B2 (en) Learning method and learning device for determining whether to switch mode of vehicle from manual driving mode to autonomous driving mode by performing trajectory-based behavior analysis on recent driving route
US20240125610A1 (en) Lane marking localization
US10810754B2 (en) Simultaneous localization and mapping constraints in generative adversarial networks for monocular depth estimation
CN109084786B (en) Map data processing method
EP3647734A1 (en) Automatic generation of dimensionally reduced maps and spatiotemporal localization for navigation of a vehicle
US20190050653A1 (en) Perception device for obstacle detection and tracking and a perception method for obstacle detection and tracking
US20190188862A1 (en) A perception device for obstacle detection and tracking and a perception method for obstacle detection and tracking
US11460851B2 (en) Eccentricity image fusion
CN112753038B (en) Method and apparatus for recognizing vehicle lane changing tendency
CN102792316A (en) Traffic signal mapping and detection
US12205319B2 (en) Framework for 3D object detection and depth prediction from 2D images
CN113177976B (en) A depth estimation method, device, electronic device and storage medium
CN116958763B (en) Feature-result-level-fused vehicle-road collaborative sensing method, medium and electronic equipment
CN114663852B (en) Lane diagram construction method and device, electronic equipment and readable storage medium
CN113643431B (en) A system and method for iterative optimization of visual algorithms
US11634156B1 (en) Aerial view generation for vehicle control
CN116776151A (en) Automatic driving model capable of performing autonomous interaction with outside personnel and training method
CN114379544A (en) Automatic parking system, method and device based on multi-sensor pre-fusion
US11628859B1 (en) Vehicle placement on aerial views for vehicle control
Ngo et al. Beamforming and scalable image processing in vehicle-to-vehicle networks
CN115752476B (en) Vehicle ground library repositioning method, device, equipment and medium based on semantic information
CN114563007B (en) Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant