WO2020228393A1 - Deep-learning intelligent driving environment perception system based on the Internet of Things (基于物联网的深度学习型智能驾驶环境感知系统) - Google Patents

Deep-learning intelligent driving environment perception system based on the Internet of Things (基于物联网的深度学习型智能驾驶环境感知系统)

Info

Publication number
WO2020228393A1
Authority
WO
WIPO (PCT)
Prior art keywords
intelligent driving
layer
perception system
unit
internet
Prior art date
Application number
PCT/CN2020/077066
Other languages
English (en)
French (fr)
Inventor
王进
邹勇松
陈华
张建明
卢佳顺
Original Assignee
长沙理工大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 长沙理工大学 (Changsha University of Science and Technology)
Publication of WO2020228393A1 publication Critical patent/WO2020228393A1/zh

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information generates an automatic action on the vehicle control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0062Adapting control system settings
    • B60W2050/0075Automatic parameter input, automatic initialising or calibrating means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00Input parameters relating to data
    • B60W2556/45External transmission of data to or from the vehicle
    • B60W2556/55External transmission of data to or from the vehicle using telemetry

Definitions

  • the invention relates to the field of intelligent perception systems, and more specifically, to a deep learning type intelligent driving environment perception system based on the Internet of Things.
  • driverless driving has a profound impact on the automobile industry and even the transportation industry.
  • the advent of driverless cars will free human hands, reduce the frequency of traffic accidents, and ensure people's safety.
  • With breakthroughs in and the continuous advancement of core technologies such as artificial intelligence and sensing and detection, unmanned driving will become more intelligent, and the industrialization of unmanned vehicles will become achievable.
  • Unmanned driving technology, especially with Internet companies and other non-traditional automobile companies entering this field, has fallen into a "red sea" of competition before it has even been widely deployed.
  • The question is whether this kind of intelligent, robot-like unmanned car will raise and thoroughly improve people's ability to travel by car, just as the invention of the automobile did when it replaced the horse-drawn carriage.
  • The main reason for the poor road traffic conditions in China's large and medium-sized cities is that the number of vehicles is too large; moreover, under China's conditions, human factors account for a considerable proportion of accidents and congestion.
  • A large number of rear-end collisions, scrapes, and similar traffic accidents can be attributed to reliance on the driver's "independent behavior" model, i.e. discover-judge-act within the visual range, which leads to accidents caused by late discovery, slow reaction, or insufficient reaction time.
  • The purpose of the present invention is to provide a deep-learning intelligent driving environment perception system based on the Internet of Things. It improves the way unmanned driving is realized: the perception system collects all moving and stationary obstacles on the road and sends the data on these obstacles to all intelligent driving vehicles traveling on the road, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
  • the present invention adopts the following technical solutions.
  • a deep learning intelligent driving environment perception system based on the Internet of Things includes a perception system and an intelligent driving vehicle.
  • the perception system includes a perception layer and a cognitive layer.
  • the intelligent driving vehicle is provided with a decision-making layer and a control layer.
  • The perception layer is used for data collection and includes a radar unit, an inertial navigation unit, a positioning unit, and a camera unit.
  • The cognitive layer is used for data analysis; the decision-making layer processes the information transmitted from the cognitive layer together with route planning using algorithms and outputs instructions for adjusting vehicle speed and direction to the control layer.
  • The control layer receives the decision-making layer's instructions and controls the vehicle's brakes, accelerator, and gears.
  • This solution improves the way unmanned driving is implemented: the perception system collects all moving and stationary obstacles on the road and sends the data on these obstacles to all intelligent driving vehicles traveling on the road, which improves the safety and stability of unmanned driving while reducing technical difficulty and production cost.
  • The radar unit, inertial navigation unit, positioning unit, and camera unit can be installed on the street lamps on both sides of the road; using the existing street-lamp equipment for the installation and design of the perception system requires no additional large-scale hardware facilities, saves a large amount of hardware, reduces cost, and occupies no extra space.
  • the cognitive layer includes analysis of pedestrians, vehicles, traffic objects, traffic signs and lane lines.
  • the radar unit includes a laser radar and a millimeter-wave radar.
  • The millimeter-wave radar measures the distance, angle, and relative speed of vehicles around the car by transmitting and receiving radio waves; it is widely used as a vehicle-mounted radar, is not easily affected by severe weather such as heavy fog, rain, and snow or by dust and dirt, and can detect vehicles stably. In this system, the millimeter-wave radar uses a multi-target detection algorithm to measure the distance and speed of moving obstacles on the lanes and sidewalks within a fixed area.
  • The lidar is stationary, and the environment it scans is fixed.
  • The lidar first acquires environmental data and stores it in the computer in the form of an array; the acquired data is preprocessed to remove trees, the ground, and other irrelevant information, the lidar's distance and reflection-intensity information is processed simultaneously with a non-planar segmentation and clustering algorithm, and the rectangular bounding outline features of obstacles are extracted.
  • The lidar uses a multi-hypothesis tracking model algorithm to associate the obstacle information of two consecutive frames, and a Kalman filter algorithm is used to continuously predict and track dynamic obstacles.
  • Wireless communication is used between the perception system and the intelligent driving vehicles: the perception system sends the recognized data to all nearby intelligent driving vehicles, so that each vehicle has a clear picture of all surrounding obstacles and other vehicles, as well as those vehicles' driving directions and speeds.
  • The sensor data fusion of the radar unit, inertial navigation unit, positioning unit, and camera unit comprises spatial fusion, time fusion, and a sensor data fusion algorithm. Establishing precise lidar, three-dimensional world, and millimeter-wave coordinate systems is the key to achieving spatial fusion of multi-sensor data.
  • Spatial fusion of the lidar and the millimeter-wave radar converts measurements from the different sensor coordinate systems into one common coordinate system. Besides being fused in space, the lidar and millimeter-wave information must also be collected synchronously in time to achieve time fusion. Because the two sensors have different sampling frequencies, the lower-rate sensor is taken as the reference: each time the lower-rate sensor captures a frame, the most recently buffered frame of the higher-rate sensor is selected, completing one jointly sampled radar-vision frame and thereby keeping the millimeter-wave radar data and the camera data synchronized in time.
  • The core of sensor data fusion is choosing a suitable fusion algorithm; this system's sensor data fusion uses the extended Kalman filter algorithm.
  • The radar unit, inertial navigation unit, positioning unit, and camera unit are installed on adjacent street lamps spaced at a fixed distance, making the layout design rational and standardized.
  • The radar unit, inertial navigation unit, positioning unit, and camera unit on adjacent street lamps are connected in parallel, so that damage to a single electrical component does not affect the normal operation of the other components, ensuring the stability of the entire system's operation.
  • The radar unit, inertial navigation unit, positioning unit, and camera unit can be installed on the street lamps on both sides of the road; using the existing street-lamp equipment for the installation and design of the perception system requires no additional large-scale hardware facilities, saves a large amount of hardware, reduces cost, and occupies no extra space.
  • the cognitive layer includes the analysis of pedestrians, vehicles, traffic objects, traffic signs and lane lines.
  • the radar unit includes lidar and millimeter-wave radar.
  • Millimeter-wave radar measures the distance, angle, and relative speed of vehicles around the car by sending and receiving radio waves; it is widely used as a vehicle-mounted radar, is not easily affected by severe weather such as heavy fog, rain, and snow or by dust and dirt, and can detect vehicles stably. In this system, the millimeter-wave radar uses a multi-target detection algorithm to measure the distance and speed of moving obstacles on the lanes and sidewalks within a fixed area.
  • The lidar is stationary, and the environment it scans is fixed.
  • The lidar first obtains environmental data and stores it in the computer in the form of an array; the data is preprocessed to eliminate trees, the ground, and other irrelevant information, the lidar's distance and reflection-intensity information is processed simultaneously with a non-planar segmentation and clustering algorithm, and the rectangular bounding outline features of obstacles are extracted.
  • The lidar uses a multi-hypothesis tracking model algorithm to associate the obstacle information of two consecutive frames, and a Kalman filter algorithm continuously predicts and tracks dynamic obstacles.
  • Wireless communication is adopted between the perception system and the intelligent driving vehicles.
  • The perception system sends the recognized data to all nearby intelligent driving vehicles, so that each vehicle has a clear picture of all surrounding obstacles and other vehicles, as well as those vehicles' driving directions and speeds.
  • The sensor data fusion of the radar unit, inertial navigation unit, positioning unit, and camera unit comprises spatial fusion, time fusion, and a sensor data fusion algorithm, establishing precise lidar, three-dimensional world, and millimeter-wave coordinate systems.
  • Establishing these coordinate systems is the key to achieving spatial fusion of multi-sensor data.
  • Spatial fusion of the lidar and the millimeter-wave radar converts measurements from the different sensor coordinate systems into one common coordinate system; besides being fused in space, the lidar and millimeter-wave information must also be collected synchronously in time to achieve time fusion. Because the two sensors have different sampling frequencies, the lower-rate sensor is taken as the reference: each time it captures a frame, the most recently buffered frame of the higher-rate sensor is selected, completing one jointly sampled radar-vision frame and keeping the millimeter-wave radar data and camera data synchronized in time.
  • The core of sensor data fusion is choosing a suitable fusion algorithm; this system's sensor data fusion uses the extended Kalman filter algorithm.
  • The radar unit, inertial navigation unit, positioning unit, and camera unit are installed on adjacent street lamps spaced at a fixed distance, making the layout design rational and standardized.
  • Figure 1 is a schematic diagram of the environment sensing system of the present invention
  • Figure 2 is a diagram of the unmanned driving scheme of the present invention.
  • FIG. 3 is a flow chart of the lidar recognition algorithm of the present invention.
  • Figure 4 is a high-precision electronic construction diagram of the adjacent area of the present invention.
  • "Connection" should be understood broadly: it can be a fixed connection, a detachable connection, or an integral connection; a mechanical or an electrical connection; a direct connection or an indirect connection through an intermediate medium; or an internal communication between two components.
  • the deep learning intelligent driving environment perception system based on the Internet of Things includes a perception system and an intelligent driving vehicle.
  • the perception system includes a perception layer and a cognitive layer.
  • The intelligent driving vehicle is equipped with a decision-making layer and a control layer; the perception layer is used for data collection and includes a radar unit, an inertial navigation unit, a positioning unit, and a camera unit.
  • An external computer serves as the cognitive layer and performs data analysis; the decision-making layer on the intelligent driving vehicle processes the information transmitted from the cognitive layer together with route planning using algorithms and outputs instructions for adjusting vehicle speed and direction to the control layer.
  • The control layer on the intelligent driving vehicle receives the decision-making layer's instructions and controls the vehicle's brakes, accelerator, and gears.
  • Radar unit, inertial navigation unit, positioning unit and camera unit can be installed on the street lamps on both sides of the road.
  • the original street lamp equipment is used to install and design the sensing system.
  • the cognitive layer includes the analysis of pedestrians, vehicles, traffic objects, traffic signs and lane lines.
  • the radar unit includes lidar and millimeter-wave radar.
  • Millimeter-wave radar measures the distance, angle, and relative speed of vehicles around the car by sending and receiving radio waves; it is widely used as a vehicle-mounted radar, is not easily affected by severe weather such as heavy fog, rain, and snow or by dust and dirt, and can detect vehicles stably. In this system, the millimeter-wave radar uses a multi-target detection algorithm to measure the distance and speed of moving obstacles on the lanes and sidewalks within a fixed area.
  • The lidar is stationary, and the environment it scans is fixed.
  • The lidar first obtains environmental data and stores it in the computer in the form of an array.
  • The obtained environmental data is preprocessed to remove trees, the ground, and other irrelevant information, and the lidar's distance and reflection-intensity information is processed simultaneously with a non-planar segmentation and clustering algorithm to extract the rectangular bounding outline features of obstacles.
  • The lidar uses a multi-hypothesis tracking model algorithm to associate the obstacle information of two consecutive frames and uses a Kalman filter algorithm to continuously predict and track dynamic obstacles.
  • Wireless communication is used between the sensing system and the intelligent driving vehicle.
  • The perception system sends the recognized data to all nearby intelligent driving vehicles, so that each vehicle has a clear picture of all surrounding obstacles and other vehicles, as well as those vehicles' driving directions and speeds.
  • The sensor data fusion of the radar unit, inertial navigation unit, positioning unit, and camera unit comprises spatial fusion, time fusion, and a sensor data fusion algorithm; establishing precise lidar, three-dimensional world, and millimeter-wave coordinate systems is the key to achieving spatial fusion of multi-sensor data.
  • Spatial fusion converts measurements from the different sensor coordinate systems into one common coordinate system; besides being fused in space, the lidar and millimeter-wave information must also be collected synchronously in time to achieve time fusion. Because the two sensors have different sampling frequencies, the lower-rate sensor is taken as the reference: each time it captures a frame, the most recently buffered frame of the higher-rate sensor is selected, completing one jointly sampled radar-vision frame and keeping the millimeter-wave radar data and camera data synchronized in time.
  • The core of sensor data fusion is choosing a suitable fusion algorithm; this system uses the extended Kalman filter algorithm (a minimal sketch of one such update step is given after this list).
  • The radar unit, inertial navigation unit, positioning unit, and camera unit are installed on adjacent street lamps spaced at a fixed distance, making the layout design rational and standardized.
  • the radar unit, inertial navigation unit, positioning unit and camera unit on adjacent street lights are connected in parallel.
  • The positioning unit makes it convenient for maintenance personnel to quickly and accurately locate a fault, speeding up repairs and ensuring the stability of the entire system's operation.
  • Compared with conventional approaches, this solution improves the way unmanned driving is realized by improving both the perception system and the intelligent driving vehicle.
  • The perception system collects all moving and stationary obstacles on the road and sends the data on these obstacles to all intelligent driving vehicles traveling on the road, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
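
The bullets above name an extended Kalman filter as the fusion algorithm but give no equations. The following is a minimal, illustrative update step assuming a constant-velocity target state and a range/bearing measurement (for example from the millimeter-wave radar); the noise values and function names are assumptions, not taken from the patent.

```python
import numpy as np

# Minimal extended Kalman filter sketch: constant-velocity state [x, y, vx, vy]
# in a common road frame, updated with a range/bearing measurement.

def predict(x, P, dt, q=0.5):
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    Q = q * np.diag([dt**3 / 3, dt**3 / 3, dt, dt])   # simple process noise model
    return F @ x, F @ P @ F.T + Q

def update_range_bearing(x, P, z, r_std=0.5, b_std=0.02):
    px, py = x[0], x[1]
    rng = np.hypot(px, py)
    h = np.array([rng, np.arctan2(py, px)])            # predicted measurement
    H = np.array([[px / rng, py / rng, 0, 0],
                  [-py / rng**2, px / rng**2, 0, 0]])  # Jacobian of h(x)
    R = np.diag([r_std**2, b_std**2])
    y = z - h
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi        # wrap the bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(4) - K @ H) @ P

x, P = np.array([10.0, 5.0, 0.0, 0.0]), np.eye(4)
x, P = predict(x, P, dt=0.1)
x, P = update_range_bearing(x, P, z=np.array([11.3, 0.46]))
```

In practice the same predict/update pair would be run for each tracked obstacle, with the lidar and camera contributing their own measurement models.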

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A deep-learning intelligent driving environment perception system based on the Internet of Things comprises a perception system and intelligent driving vehicles. The perception system includes a perception layer and a cognitive layer, while the intelligent driving vehicle is provided with a decision-making layer and a control layer. The perception layer performs data collection; the decision-making layer processes the information transmitted from the cognitive layer together with route planning using algorithms and outputs instructions for adjusting vehicle speed and direction to the control layer; the control layer receives the decision-making layer's instructions and controls the vehicle's brakes, accelerator, and gears. The application uses the perception system to collect all moving and stationary obstacles on the road and sends the data on these obstacles to all intelligent driving vehicles traveling on the road, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.

Description

Deep-learning intelligent driving environment perception system based on the Internet of Things
Technical Field
The present invention relates to the field of intelligent perception systems, and more specifically to a deep-learning intelligent driving environment perception system based on the Internet of Things.
Background Art
As the future research direction of the automobile, unmanned driving has a profound impact on the automobile industry and even on the transportation industry. The arrival of driverless cars will free people's hands, reduce the frequency of traffic accidents, and safeguard people's safety. At the same time, with breakthroughs in and the continuous advancement of core technologies such as artificial intelligence and sensing and detection, unmanned driving is bound to become more intelligent, and the industrialization of unmanned vehicles will become achievable.
Unmanned driving technology, especially with Internet companies and other non-traditional automobile companies entering this field, has fallen into a "red sea" of competition before it has even been widely deployed. The question, however, is whether this kind of intelligent, robot-like unmanned car will raise and thoroughly improve people's ability to travel by car, just as the invention of the automobile did when it replaced the horse-drawn carriage. The main reason for the poor road traffic conditions in China's large and medium-sized cities today is that the number of vehicles is too large; moreover, under China's conditions, human factors account for a considerable proportion of accidents and congestion. A large number of rear-end collisions, scrapes, and similar traffic accidents can be attributed to reliance on the driver's "independent behavior" model, i.e. discover-judge-act within the visual range, which leads to accidents caused by late discovery, slow reaction, or insufficient reaction time. An intelligent unmanned-driving model that still follows this artificial "individual independent behavior" pattern naturally retains these problems; since they are not solved at the root, the possibility of such accidents still exists in theory. Meanwhile, the two main problems currently limiting the mass production of unmanned vehicles are technical difficulty and cost. The way unmanned driving is realized therefore needs to be improved so as to increase its safety and stability while reducing technical difficulty and production cost.
Summary of the Invention
1. Technical problem to be solved
In view of the problems in the prior art, the purpose of the present invention is to provide a deep-learning intelligent driving environment perception system based on the Internet of Things. It improves the way unmanned driving is realized: the perception system collects all moving and stationary obstacles on the road and sends the data on these obstacles to all intelligent driving vehicles traveling on the road, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
2. Technical solution
To solve the above problems, the present invention adopts the following technical solution.
The deep-learning intelligent driving environment perception system based on the Internet of Things comprises a perception system and intelligent driving vehicles. The perception system includes a perception layer and a cognitive layer, and the intelligent driving vehicle is provided with a decision-making layer and a control layer. The perception layer is used for data collection and includes a radar unit, an inertial navigation unit, a positioning unit, and a camera unit; the cognitive layer is used for data analysis; the decision-making layer processes the information transmitted from the cognitive layer together with route planning using algorithms and outputs instructions for adjusting vehicle speed and direction to the control layer; the control layer receives the decision-making layer's instructions and controls the vehicle's brakes, accelerator, and gears. This solution improves the way unmanned driving is realized: the perception system collects all moving and stationary obstacles on the road and sends the data on these obstacles to all intelligent driving vehicles traveling on the road, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
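The four-layer division described above can be pictured as a simple data pipeline. The sketch below is only illustrative: the class names, fields, and thresholds are assumptions and do not come from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:            # one object reported by the roadside perception layer
    x: float               # position in a shared road coordinate frame (m)
    y: float
    vx: float              # velocity (m/s)
    vy: float
    kind: str              # "pedestrian", "vehicle", "sign", ...

class PerceptionLayer:
    """Roadside sensors (radar, camera, inertial navigation, positioning) produce raw detections."""
    def collect(self) -> List[Obstacle]:
        return [Obstacle(12.0, 3.5, -1.2, 0.0, "pedestrian")]

class CognitionLayer:
    """Roadside computer classifies detections (pedestrians, vehicles, signs, lane lines, ...)."""
    def analyse(self, raw: List[Obstacle]) -> List[Obstacle]:
        return [o for o in raw if o.kind in {"pedestrian", "vehicle"}]

class DecisionLayer:
    """On-board: combines broadcast obstacle data with route planning into speed/steering targets."""
    def plan(self, obstacles: List[Obstacle]) -> dict:
        slow_down = any(abs(o.y) < 2.0 and o.x < 20.0 for o in obstacles)
        return {"target_speed": 5.0 if slow_down else 15.0, "steering": 0.0}

class ControlLayer:
    """On-board: maps the targets to brake / throttle / gear commands."""
    def actuate(self, cmd: dict) -> str:
        return "brake" if cmd["target_speed"] < 10.0 else "throttle"

if __name__ == "__main__":
    obstacles = CognitionLayer().analyse(PerceptionLayer().collect())
    print(ControlLayer().actuate(DecisionLayer().plan(obstacles)))
```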
Further, the radar unit, inertial navigation unit, positioning unit, and camera unit can be installed on the street lamps on both sides of the road. Using the existing street-lamp equipment for the installation and design of the perception system requires no additional large-scale hardware facilities, saves a large amount of hardware, reduces cost, and occupies no extra space. The cognitive layer includes the analysis of pedestrians, vehicles, traffic objects, traffic signs, and lane lines.
Further, the radar unit includes a lidar and a millimeter-wave radar. The millimeter-wave radar measures the distance, angle, and relative speed of vehicles around the car by transmitting and receiving radio waves; it is widely used as a vehicle-mounted radar, is not easily affected by severe weather such as heavy fog, rain, and snow or by dust and dirt, and can detect vehicles stably. In this system, the millimeter-wave radar uses a multi-target detection algorithm to measure the distance and speed of moving obstacles on the lanes and sidewalks within a fixed area.
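A minimal sketch of the roadside millimeter-wave processing just described, assuming the radar already delivers point detections with position and velocity in a road-aligned frame; the region boundaries and speed threshold are illustrative, and the patent's actual multi-target detection algorithm is not reproduced here.

```python
import math

# Keep only detections that fall inside the monitored lane/sidewalk region
# and are actually moving; report each one's distance and speed.
REGION = {"x_min": 0.0, "x_max": 60.0, "y_min": -8.0, "y_max": 8.0}  # assumed, metres
MIN_SPEED = 0.5                                                      # m/s; slower is treated as static

def moving_targets(detections):
    """detections: iterable of (x, y, vx, vy) tuples in the radar's road-aligned frame."""
    out = []
    for x, y, vx, vy in detections:
        inside = REGION["x_min"] <= x <= REGION["x_max"] and REGION["y_min"] <= y <= REGION["y_max"]
        speed = math.hypot(vx, vy)
        if inside and speed >= MIN_SPEED:
            out.append({"distance": math.hypot(x, y), "speed": speed})
    return out

print(moving_targets([(12.0, 1.5, 0.0, 1.2), (30.0, -2.0, 0.0, 0.1)]))
```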
Further, the lidar is stationary and the environment it scans is fixed. The lidar first acquires environmental data and stores it in the computer in the form of an array; the acquired data is preprocessed to remove trees, the ground, and other irrelevant information; the lidar's distance information and reflection-intensity information are processed simultaneously with a non-planar segmentation and clustering algorithm, and the rectangular bounding outline features of obstacles are extracted. The lidar uses a multi-hypothesis tracking model algorithm to associate the obstacle information of two consecutive frames, and a Kalman filter algorithm is used to continuously predict and track dynamic obstacles.
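The lidar pipeline above (array storage, removal of ground and vegetation, segmentation and clustering, rectangular outline extraction) can be illustrated with a deliberately simplified sketch. The height-threshold ground removal and naive clustering below only stand in for the patent's non-planar segmentation and clustering; all thresholds are assumptions.

```python
import numpy as np

def remove_ground(points, z_ground=0.2, z_max=2.5):
    """Crude pre-processing: drop returns near the road surface (ground)
    and far above it (tree canopy)."""
    z = points[:, 2]
    return points[(z > z_ground) & (z < z_max)]

def euclidean_clusters(points, eps=0.7, min_pts=5):
    """Naive single-pass clustering by xy distance; returns a list of index arrays."""
    labels = -np.ones(len(points), dtype=int)
    cluster = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        d = np.linalg.norm(points[:, :2] - points[i, :2], axis=1)
        members = np.where((d < eps) & (labels == -1))[0]
        if len(members) >= min_pts:
            labels[members] = cluster
            cluster += 1
    return [np.where(labels == c)[0] for c in range(cluster)]

def bounding_rectangle(cluster_xy):
    """Axis-aligned rectangle (x_min, y_min, x_max, y_max) enclosing one cluster."""
    return (*cluster_xy.min(axis=0), *cluster_xy.max(axis=0))

cloud = np.random.rand(500, 3) * [40, 10, 3]       # stand-in for one lidar frame
filtered = remove_ground(cloud)
obstacles = [bounding_rectangle(filtered[idx, :2]) for idx in euclidean_clusters(filtered)]
```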
Further, wireless communication is used between the perception system and the intelligent driving vehicles. The perception system sends the recognized data to all nearby intelligent driving vehicles, so that each vehicle has a clear picture of all surrounding obstacles and other vehicles, as well as those vehicles' driving directions and speeds.
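The patent specifies only that the roadside perception system and the vehicles communicate wirelessly. One possible message layout is sketched below; the transport (UDP broadcast), port number, and field names are assumptions for illustration only.

```python
import json, socket, time

def broadcast_obstacles(obstacles, port=37020):
    """Send one obstacle-list message to all listeners on the local network segment."""
    msg = json.dumps({
        "timestamp": time.time(),
        "roadside_unit": "lamp-042",   # illustrative roadside-unit ID
        "obstacles": obstacles,        # [{"x":.., "y":.., "vx":.., "vy":.., "kind":..}, ...]
    }).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(msg, ("255.255.255.255", port))

broadcast_obstacles([{"x": 12.0, "y": 1.5, "vx": 0.0, "vy": 1.2, "kind": "pedestrian"}])
```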
Further, the sensor data fusion of the radar unit, inertial navigation unit, positioning unit, and camera unit comprises spatial fusion, time fusion, and a sensor data fusion algorithm. Establishing precise lidar, three-dimensional world, and millimeter-wave coordinate systems is the key to achieving spatial fusion of multi-sensor data. Spatial fusion of the lidar and the millimeter-wave radar converts measurements from the different sensor coordinate systems into one common coordinate system. Besides being fused in space, the lidar and millimeter-wave information must also be collected synchronously in time to achieve time fusion. Because the two sensors have different sampling frequencies, the lower-rate sensor is taken as the reference to guarantee data reliability: each time the lower-rate sensor captures a frame, the most recently buffered frame of the higher-rate sensor is selected, completing one jointly sampled radar-vision frame and thereby keeping the millimeter-wave radar data and the camera data synchronized in time. The core of sensor data fusion is choosing a suitable fusion algorithm; the sensor data fusion algorithm of this system is the extended Kalman filter algorithm.
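The time-fusion rule just described (each frame of the lower-rate sensor is paired with the most recently buffered frame of the higher-rate sensor) can be captured in a few lines; the buffer length below is an assumption.

```python
from collections import deque

class TimeAligner:
    """Pair each frame of the lower-rate sensor (e.g. the camera) with the most
    recently buffered frame of the higher-rate sensor (e.g. the millimeter-wave
    radar). Frames are (timestamp, data) tuples."""
    def __init__(self, maxlen=10):
        self.high_rate_buffer = deque(maxlen=maxlen)

    def push_high_rate(self, frame):
        self.high_rate_buffer.append(frame)

    def on_low_rate_frame(self, frame):
        if not self.high_rate_buffer:
            return None                              # nothing to fuse with yet
        return frame, self.high_rate_buffer[-1]      # one jointly sampled pair

aligner = TimeAligner()
aligner.push_high_rate((0.00, "radar frame 1"))
aligner.push_high_rate((0.05, "radar frame 2"))
print(aligner.on_low_rate_frame((0.06, "camera frame 1")))
```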
Further, the radar unit, inertial navigation unit, positioning unit, and camera unit are installed on adjacent street lamps spaced at a fixed distance, making the layout design rational and standardized.
Further, the radar unit, inertial navigation unit, positioning unit, and camera unit on adjacent street lamps are connected in parallel, so that damage to a single electrical component does not affect the normal operation of the other components, ensuring the stability of the entire system's operation.
3. Beneficial effects
Compared with the prior art, the advantages of the present invention are as follows:
(1) This solution improves the way unmanned driving is realized: the perception system collects all moving and stationary obstacles on the road and sends the data on these obstacles to all intelligent driving vehicles traveling on the road, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
(2) The radar unit, inertial navigation unit, positioning unit, and camera unit can be installed on the street lamps on both sides of the road. Using the existing street-lamp equipment for the installation and design of the perception system requires no additional large-scale hardware facilities, saves a large amount of hardware, reduces cost, and occupies no extra space. The cognitive layer includes the analysis of pedestrians, vehicles, traffic objects, traffic signs, and lane lines.
(3) The radar unit includes a lidar and a millimeter-wave radar. The millimeter-wave radar measures the distance, angle, and relative speed of vehicles around the car by transmitting and receiving radio waves; it is widely used as a vehicle-mounted radar, is not easily affected by severe weather such as heavy fog, rain, and snow or by dust and dirt, and can detect vehicles stably. In this system, the millimeter-wave radar uses a multi-target detection algorithm to measure the distance and speed of moving obstacles on the lanes and sidewalks within a fixed area.
(4) The lidar is stationary and the environment it scans is fixed. The lidar first acquires environmental data and stores it in the computer in the form of an array; the data is preprocessed to remove trees, the ground, and other irrelevant information, the lidar's distance and reflection-intensity information is processed simultaneously with a non-planar segmentation and clustering algorithm, and the rectangular bounding outline features of obstacles are extracted. The lidar uses a multi-hypothesis tracking model algorithm to associate the obstacle information of two consecutive frames, and a Kalman filter algorithm continuously predicts and tracks dynamic obstacles.
(5) Wireless communication is used between the perception system and the intelligent driving vehicles; the perception system sends the recognized data to all nearby intelligent driving vehicles, so that each vehicle has a clear picture of all surrounding obstacles and other vehicles, as well as those vehicles' driving directions and speeds.
(6) The sensor data fusion of the radar unit, inertial navigation unit, positioning unit, and camera unit comprises spatial fusion, time fusion, and a sensor data fusion algorithm. Establishing precise lidar, three-dimensional world, and millimeter-wave coordinate systems is the key to achieving spatial fusion of multi-sensor data. Spatial fusion of the lidar and the millimeter-wave radar converts measurements from the different sensor coordinate systems into one common coordinate system; besides being fused in space, the lidar and millimeter-wave information must also be collected synchronously in time to achieve time fusion. Because the two sensors have different sampling frequencies, the lower-rate sensor is taken as the reference: each time it captures a frame, the most recently buffered frame of the higher-rate sensor is selected, completing one jointly sampled radar-vision frame and keeping the millimeter-wave radar data and camera data synchronized in time. The core of sensor data fusion is choosing a suitable fusion algorithm; this system uses the extended Kalman filter algorithm.
(7) The radar unit, inertial navigation unit, positioning unit, and camera unit are installed on adjacent street lamps spaced at a fixed distance, making the layout design rational and standardized.
(8) The radar unit, inertial navigation unit, positioning unit, and camera unit on adjacent street lamps are connected in parallel, so that damage to a single electrical component does not affect the normal operation of the other components, ensuring the stability of the entire system's operation.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the environment perception system of the present invention;
Figure 2 is a diagram of the unmanned-driving scheme of the present invention;
Figure 3 is a flow chart of the lidar recognition algorithm of the present invention;
Figure 4 is a high-precision electronic construction diagram of the neighboring area of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
In the description of the present invention, it should be noted that orientations or positional relationships indicated by terms such as "upper", "lower", "inner", "outer", and "top/bottom" are based on the orientations or positional relationships shown in the drawings; they are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore cannot be understood as limiting the present invention. In addition, the terms "first" and "second" are used only for descriptive purposes and cannot be understood as indicating or implying relative importance.
In the description of the present invention, it should also be noted that, unless otherwise expressly specified and limited, terms such as "installed", "provided with", "sleeved/connected", and "connected" should be understood broadly: for example, "connected" can be a fixed connection, a detachable connection, or an integral connection; a mechanical or an electrical connection; a direct connection or an indirect connection through an intermediate medium; or an internal communication between two components. For a person of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
Embodiment 1:
Referring to Figure 1, the deep-learning intelligent driving environment perception system based on the Internet of Things comprises a perception system and intelligent driving vehicles. The perception system includes a perception layer and a cognitive layer, and the intelligent driving vehicle is provided with a decision-making layer and a control layer. The perception layer is used for data collection and includes a radar unit, an inertial navigation unit, a positioning unit, and a camera unit. An external computer serves as the cognitive layer and performs data analysis. The decision-making layer on the intelligent driving vehicle processes the information transmitted from the cognitive layer together with route planning using algorithms and outputs instructions for adjusting vehicle speed and direction to the control layer; the control layer on the intelligent driving vehicle receives the decision-making layer's instructions and controls the vehicle's brakes, accelerator, and gears. This solution improves the way unmanned driving is realized and improves both the perception system and the intelligent driving vehicle so that the two cooperate closely, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
The radar unit, inertial navigation unit, positioning unit, and camera unit can be installed on the street lamps on both sides of the road. Using the existing street-lamp equipment for the installation and design of the perception system requires no additional large-scale hardware facilities, saves a large amount of hardware, reduces cost, and occupies no extra space. The cognitive layer includes the analysis of pedestrians, vehicles, traffic objects, traffic signs, and lane lines.
The radar unit includes a lidar and a millimeter-wave radar. Referring to Figure 3, the millimeter-wave radar measures the distance, angle, and relative speed of vehicles around the car by transmitting and receiving radio waves; it is widely used as a vehicle-mounted radar, is not easily affected by severe weather such as heavy fog, rain, and snow or by dust and dirt, and can detect vehicles stably. In this system, the millimeter-wave radar uses a multi-target detection algorithm to measure the distance and speed of moving obstacles on the lanes and sidewalks within a fixed area.
The lidar is stationary and the environment it scans is fixed. The lidar first acquires environmental data and stores it in the computer in the form of an array; the acquired data is preprocessed to remove trees, the ground, and other irrelevant information; the lidar's distance and reflection-intensity information is processed simultaneously with a non-planar segmentation and clustering algorithm, and the rectangular bounding outline features of obstacles are extracted. The lidar uses a multi-hypothesis tracking model algorithm to associate the obstacle information of two consecutive frames, and a Kalman filter algorithm is used to continuously predict and track dynamic obstacles.
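A minimal sketch of the prediction-and-tracking step just described, using a linear constant-velocity Kalman filter on position measurements; the multi-hypothesis association of detections to tracks is not shown, and the noise values are only illustrative, not taken from the patent.

```python
import numpy as np

# Constant-velocity Kalman filter for one obstacle track.
dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)   # position is observed
Q = 0.05 * np.eye(4)                                       # process noise (assumed)
R = 0.25 * np.eye(2)                                       # measurement noise (assumed)

x = np.array([10.0, 2.0, 1.0, 0.0])                        # state [x, y, vx, vy]
P = np.eye(4)

for z in [np.array([10.1, 2.0]), np.array([10.2, 2.1]), np.array([10.35, 2.1])]:
    x, P = F @ x, F @ P @ F.T + Q                          # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                         # Kalman gain
    x, P = x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P    # update
print(x)                                                   # estimated position and velocity
```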
Referring to Figure 1, wireless communication is used between the perception system and the intelligent driving vehicles. The perception system sends the recognized data to all nearby intelligent driving vehicles, so that each vehicle has a clear picture of all surrounding obstacles and other vehicles, as well as those vehicles' driving directions and speeds.
The sensor data fusion of the radar unit, inertial navigation unit, positioning unit, and camera unit comprises spatial fusion, time fusion, and a sensor data fusion algorithm. Establishing precise lidar, three-dimensional world, and millimeter-wave coordinate systems is the key to achieving spatial fusion of multi-sensor data. Spatial fusion of the lidar and the millimeter-wave radar converts measurements from the different sensor coordinate systems into one common coordinate system. Besides being fused in space, the lidar and millimeter-wave information must also be collected synchronously in time to achieve time fusion. Because the two sensors have different sampling frequencies, the lower-rate sensor is taken as the reference to guarantee data reliability: each time the lower-rate sensor captures a frame, the most recently buffered frame of the higher-rate sensor is selected, completing one jointly sampled radar-vision frame and thereby keeping the millimeter-wave radar data and the camera data synchronized in time. The core of sensor data fusion is choosing a suitable fusion algorithm; the sensor data fusion algorithm of this system is the extended Kalman filter algorithm.
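Spatial fusion is described above as converting each sensor's measurements into one common coordinate system. The sketch below applies homogeneous transforms for lamp-mounted lidar and millimeter-wave sensors; the extrinsic calibration values are purely illustrative assumptions.

```python
import numpy as np

def make_transform(yaw, tx, ty, tz):
    """Homogeneous transform from a sensor frame to the common road frame
    (rotation about z only, which is often enough for lamp-mounted sensors)."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [tx, ty, tz]
    return T

# Illustrative extrinsics: where each sensor sits on the lamp post.
T_lidar_to_world = make_transform(yaw=0.02, tx=0.0, ty=0.0, tz=6.0)
T_mmw_to_world   = make_transform(yaw=-0.01, tx=0.3, ty=0.0, tz=5.5)

def to_world(T, points_xyz):
    """points_xyz: (N, 3) array in the sensor frame -> (N, 3) array in the road frame."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (homo @ T.T)[:, :3]

lidar_pts = np.array([[12.0, 1.5, -5.8]])
mmw_pts   = np.array([[11.8, 1.4, -5.3]])
fused_frame = np.vstack([to_world(T_lidar_to_world, lidar_pts),
                         to_world(T_mmw_to_world, mmw_pts)])
print(fused_frame)   # both detections expressed in the same road coordinates
```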
Referring to Figure 4, the radar unit, inertial navigation unit, positioning unit, and camera unit are installed on adjacent street lamps spaced at a fixed distance, making the layout design rational and standardized.
The radar unit, inertial navigation unit, positioning unit, and camera unit on adjacent street lamps are connected in parallel, so that damage to a single electrical component does not affect the normal operation of the other components, which facilitates maintenance. The positioning unit makes it convenient for maintenance personnel to quickly and accurately locate a fault, speeding up repairs and ensuring the stability of the entire system's operation.
Compared with conventional technology, this solution improves the way unmanned driving is realized and improves both the perception system and the intelligent driving vehicle: the perception system collects all moving and stationary obstacles on the road and sends the data on these obstacles to all intelligent driving vehicles traveling on the road, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
The above is only a preferred specific embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or change made, within the technical scope disclosed by the present invention, by a person skilled in the art according to the technical solution of the present invention and its inventive concept shall fall within the scope of protection of the present invention.

Claims (10)

  1. A deep-learning intelligent driving environment perception system based on the Internet of Things, characterized by comprising a perception system and intelligent driving vehicles, wherein the perception system includes a perception layer and a cognitive layer, and the intelligent driving vehicle is provided with a decision-making layer and a control layer; the perception layer is used for data collection and includes a radar unit, an inertial navigation unit, a positioning unit, and a camera unit; the cognitive layer is used for data analysis; the decision-making layer processes the information transmitted from the cognitive layer together with route planning using algorithms and outputs instructions for adjusting vehicle speed and direction to the control layer; and the control layer receives the decision-making layer's instructions and controls the vehicle's brakes, accelerator, and gears.
  2. The deep-learning intelligent driving environment perception system based on the Internet of Things according to claim 1, characterized in that the radar unit, inertial navigation unit, positioning unit, and camera unit can be installed on the street lamps on both sides of the road, and the cognitive layer includes the analysis of pedestrians, vehicles, traffic objects, traffic signs, and lane lines.
  3. The deep-learning intelligent driving environment perception system based on the Internet of Things according to claim 1, characterized in that the radar unit includes a lidar and a millimeter-wave radar.
  4. The deep-learning intelligent driving environment perception system based on the Internet of Things according to claim 3, characterized in that the lidar is stationary and the environment it scans is fixed.
  5. The deep-learning intelligent driving environment perception system based on the Internet of Things according to claim 4, characterized in that the lidar first acquires environmental data and stores it in the computer in the form of an array; the acquired environmental data is preprocessed to remove trees, the ground, and other irrelevant information; and the lidar's distance information and reflection-intensity information are processed simultaneously with a non-planar segmentation and clustering algorithm to extract the rectangular bounding outline features of obstacles.
  6. The deep-learning intelligent driving environment perception system based on the Internet of Things according to claim 5, characterized in that the lidar uses a multi-hypothesis tracking model algorithm to associate the obstacle information of two consecutive frames and uses a Kalman filter algorithm to continuously predict and track dynamic obstacles.
  7. The deep-learning intelligent driving environment perception system based on the Internet of Things according to claim 1, characterized in that wireless communication is used between the perception system and the intelligent driving vehicles.
  8. The deep-learning intelligent driving environment perception system based on the Internet of Things according to claim 1, characterized in that the sensor data fusion of the radar unit, inertial navigation unit, positioning unit, and camera unit includes spatial fusion, time fusion, and a sensor data fusion algorithm, and the sensor data fusion algorithm is an extended Kalman filter algorithm.
  9. The deep-learning intelligent driving environment perception system based on the Internet of Things according to claim 2, characterized in that the radar unit, inertial navigation unit, positioning unit, and camera unit are installed on adjacent street lamps spaced at a fixed distance.
  10. The deep-learning intelligent driving environment perception system based on the Internet of Things according to claim 9, characterized in that the radar unit, inertial navigation unit, positioning unit, and camera unit on adjacent street lamps are connected in parallel.
PCT/CN2020/077066 2019-05-14 2020-02-28 基于物联网的深度学习型智能驾驶环境感知系统 WO2020228393A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910396591.1 2019-05-14
CN201910396591.1A CN110091875A (zh) 2019-05-14 2019-05-14 基于物联网的深度学习型智能驾驶环境感知系统

Publications (1)

Publication Number Publication Date
WO2020228393A1 true WO2020228393A1 (zh) 2020-11-19

Family

ID=67447890

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/077066 WO2020228393A1 (zh) 2019-05-14 2020-02-28 基于物联网的深度学习型智能驾驶环境感知系统

Country Status (2)

Country Link
CN (1) CN110091875A (zh)
WO (1) WO2020228393A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110091875A (zh) * 2019-05-14 2019-08-06 长沙理工大学 基于物联网的深度学习型智能驾驶环境感知系统
CN111326002A (zh) * 2020-02-26 2020-06-23 公安部交通管理科学研究所 一种自动驾驶汽车环境感知的预测方法、装置及系统
CN111833631B (zh) * 2020-06-24 2021-10-26 武汉理工大学 基于车路协同的目标数据处理方法、系统和存储介质
CN112435466B (zh) * 2020-10-23 2022-03-22 江苏大学 混合交通流环境下cacc车辆退变为传统车的接管时间预测方法及系统
CN112249035B (zh) * 2020-12-16 2021-03-16 国汽智控(北京)科技有限公司 基于通用数据流架构的自动驾驶方法、装置及设备
CN113490178B (zh) * 2021-06-18 2022-07-19 天津大学 一种智能网联车辆多级协作感知系统
CN113734197A (zh) * 2021-09-03 2021-12-03 合肥学院 一种基于数据融合的无人驾驶的智能控制方案
CN113911139B (zh) * 2021-11-12 2023-02-28 湖北芯擎科技有限公司 车辆控制方法、装置和电子设备
CN114056351B (zh) * 2021-11-26 2024-02-02 文远苏行(江苏)科技有限公司 自动驾驶方法及装置
CN114291114A (zh) * 2022-01-05 2022-04-08 天地科技股份有限公司 车辆控制系统及方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007248257A (ja) * 2006-03-16 2007-09-27 Nissan Motor Co Ltd 車両用路上障害物検出装置、路上障害物検出方法および路上障害物検出装置付き車両
WO2016126315A1 (en) * 2015-02-06 2016-08-11 Delphi Technologies, Inc. Autonomous guidance system
CN108010360A (zh) * 2017-12-27 2018-05-08 中电海康集团有限公司 一种基于车路协同的自动驾驶环境感知系统
CN108417087A (zh) * 2018-02-27 2018-08-17 浙江吉利汽车研究院有限公司 一种车辆安全通行系统及方法
CN108646739A (zh) * 2018-05-14 2018-10-12 北京智行者科技有限公司 一种传感信息融合方法
CN108845579A (zh) * 2018-08-14 2018-11-20 苏州畅风加行智能科技有限公司 一种港口车辆的自动驾驶系统及其方法
CN110091875A (zh) * 2019-05-14 2019-08-06 长沙理工大学 基于物联网的深度学习型智能驾驶环境感知系统

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102135777B (zh) * 2010-12-14 2013-01-02 天津理工大学 车载红外跟踪系统
CN105866790B (zh) * 2016-04-07 2018-08-10 重庆大学 一种考虑激光发射强度的激光雷达障碍物识别方法及系统
CN106908783B (zh) * 2017-02-23 2019-10-01 苏州大学 基于多传感器信息融合的障碍物检测方法
CN107193012A (zh) * 2017-05-05 2017-09-22 江苏大学 基于imm‑mht算法的智能车激光雷达机动多目标跟踪方法
CN107609522B (zh) * 2017-09-19 2021-04-13 东华大学 一种基于激光雷达和机器视觉的信息融合车辆检测系统
CN108196535B (zh) * 2017-12-12 2021-09-07 清华大学苏州汽车研究院(吴江) 基于增强学习和多传感器融合的自动驾驶系统
CN108458745A (zh) * 2017-12-23 2018-08-28 天津国科嘉业医疗科技发展有限公司 一种基于智能检测设备的环境感知方法
CN109171684A (zh) * 2018-08-30 2019-01-11 上海师范大学 一种基于可穿戴传感器和智能家居的自动健康监护系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007248257A (ja) * 2006-03-16 2007-09-27 Nissan Motor Co Ltd 車両用路上障害物検出装置、路上障害物検出方法および路上障害物検出装置付き車両
WO2016126315A1 (en) * 2015-02-06 2016-08-11 Delphi Technologies, Inc. Autonomous guidance system
CN108010360A (zh) * 2017-12-27 2018-05-08 中电海康集团有限公司 一种基于车路协同的自动驾驶环境感知系统
CN108417087A (zh) * 2018-02-27 2018-08-17 浙江吉利汽车研究院有限公司 一种车辆安全通行系统及方法
CN108646739A (zh) * 2018-05-14 2018-10-12 北京智行者科技有限公司 一种传感信息融合方法
CN108845579A (zh) * 2018-08-14 2018-11-20 苏州畅风加行智能科技有限公司 一种港口车辆的自动驾驶系统及其方法
CN110091875A (zh) * 2019-05-14 2019-08-06 长沙理工大学 基于物联网的深度学习型智能驾驶环境感知系统

Also Published As

Publication number Publication date
CN110091875A (zh) 2019-08-06

Similar Documents

Publication Publication Date Title
WO2020228393A1 (zh) 基于物联网的深度学习型智能驾驶环境感知系统
WO2022063331A1 (zh) 一种基于v2x的编队行驶智能网联客车
US11854212B2 (en) Traffic light detection system for vehicle
CN113002396B (zh) 一种用于自动驾驶矿用车辆的环境感知系统及矿用车辆
CN205943100U (zh) 一种应用于v2x场景的hmi显示系统
CN111880174A (zh) 一种用于支持自动驾驶控制决策的路侧服务系统及其控制方法
CN109102696B (zh) 基于主动安全的交叉频密路段冲突预警方法
CN208238805U (zh) 一种自动驾驶客车环境感知系统及自动驾驶客车
CN105291984A (zh) 一种基于多车协作的行人及车辆检测的方法及系统
CN110942653A (zh) 一种基于车联网的智能驾驶汽车辅助泊车系统
CN108986510A (zh) 一种面向路口的智能化本地动态地图实现系统及实现方法
CN104115198A (zh) 车辆合流辅助系统和方法
WO2021036907A1 (zh) 列车控制系统、列车控制方法
CN111781933A (zh) 一种基于边缘计算和空间智能的高速自动驾驶车辆实现系统及方法
CN104408972A (zh) 一种基于dgps的矿用车辆防碰撞装置及其控制方法
CN109544936B (zh) 基于雷达摄像机的机动车斑马线不礼让行人预警抓拍系统
CN210882093U (zh) 一种自动驾驶车辆环境感知系统及自动驾驶车辆
CN109001743A (zh) 有轨电车防撞系统
CN212322114U (zh) 一种用于自动驾驶车辆的环境感知及道路环境裂纹检测系统
WO2023109501A1 (zh) 一种基于定位技术的列车主动障碍物检测方法及装置
CN114783184A (zh) 一种基于车、路、无人机信息融合的超视距感知系统
CN208847836U (zh) 有轨电车防撞系统
CN211742265U (zh) 用于智能驾驶公交的智慧路侧系统
CN109591826B (zh) 基于能见度的障碍物避让驾驶引导系统及其引导方法
CN115424474B (zh) 一种基于v2x和rsu的左转辅助预警方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20806188

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20806188

Country of ref document: EP

Kind code of ref document: A1