WO2018032385A1 - Vision-based intelligent driverless system - Google Patents

Vision-based intelligent driverless system

Info

Publication number
WO2018032385A1
WO2018032385A1 PCT/CN2016/095604 CN2016095604W WO2018032385A1 WO 2018032385 A1 WO2018032385 A1 WO 2018032385A1 CN 2016095604 W CN2016095604 W CN 2016095604W WO 2018032385 A1 WO2018032385 A1 WO 2018032385A1
Authority
WO
WIPO (PCT)
Prior art keywords
vision
main processor
based intelligent
wireless communication
communication device
Prior art date
Application number
PCT/CN2016/095604
Other languages
English (en)
French (fr)
Inventor
邹霞
钟玲珑
Original Assignee
邹霞
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 邹霞 filed Critical 邹霞
Priority to PCT/CN2016/095604 priority Critical patent/WO2018032385A1/zh
Publication of WO2018032385A1 publication Critical patent/WO2018032385A1/zh

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions

Definitions

  • the present invention relates to a vision-based intelligent driverless system, and more particularly to an unmanned vehicle platform based on monocular vision, belonging to the field of unmanned vehicles and driverless driving.
  • by integrating modern sensing, computer, communication, information fusion, artificial intelligence, automatic control and other technologies, a smart car can independently identify the environment and state of the vehicle, analyze and judge based on the information obtained from each sensor, and provide a basis for the next decision; its applications in the civil, military and aerospace fields are very broad.
  • smart car lane line identification methods can generally be divided into two categories: one is feature-based recognition methods, and the other is model-based recognition methods.
  • the complexity of the scene and the real-time performance of the algorithm remain the main difficulties in vision-based lane line detection.
  • a vision-based intelligent driverless system mainly includes a main processor, an underlying network device, a wireless communication device, an execution device, a stereo vision system, a panoramic vision system, and a camera control system, wherein the camera control system is composed of a pan/tilt driver and a camera pan/tilt.
  • the main processor is respectively connected with the underlying network device, the wireless communication device, the stereo vision system, the panoramic vision system and the camera control system; the intelligent driverless system further includes a monitoring computer, the wireless communication device is connected to the monitoring computer, the main processor is also connected to the GPS and the IMU, and the main processor is connected to the wireless communication device through the network interface.
  • the main processor is connected to the camera control system via an RS232 interface.
  • the main processor is connected to the GPS and the IMU through an RS232 interface.
  • the main processor is connected to the stereo vision system and the panoramic vision system through a PCI interface.
  • the foregoing underlying network device is connected to an execution device.
  • the intelligent driverless system further includes a collision signal processor and a collision detection sensor.
  • the collision detection sensor is connected to the collision signal processor, and the collision signal processor is connected to the underlying network device.
  • the vision-based intelligent driverless system provided by the present invention implements lane line detection and tracking; experimental verification shows that it achieves good detection and tracking performance, with high real-time capability, a low missed-detection rate, and strong robustness.
  • FIG. 1 is a schematic structural view of a vision-based intelligent driverless system according to the present invention.
  • Reference numerals: 1-main processor; 2-network interface; 3-underlying network device; 4-RS232 interface; 5-PCI interface; 6-wireless communication device; 7-monitoring computer; 8-execution device; 9-collision signal processor; 10-collision detection sensor; 11-stereo vision system; 12-panoramic vision system; 13-GPS; 14-IMU; 15-pan/tilt driver; 16-camera pan/tilt.
  • the present invention provides a vision-based intelligent driverless system.
  • the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
  • the vision-based intelligent driverless system provided by the present invention is mainly divided into a visual-information acquisition and processing module and a motion control module of the unmanned vehicle. After the two modules are designed independently, they need to exchange information to complete the three tasks of "knowledge-decision-control".
  • the visual-information acquisition and processing module obtains real-time information from the real scene and, through analysis and processing, sends the next task instruction to the motion control module.
  • the motion control module also includes a processor and an actuator; after receiving the instruction, it drives the smart car to complete the task.
  • as shown in FIG. 1, the present invention specifically includes a main processor 1, an underlying network device 3, a wireless communication device 6, an execution device 8, a stereo vision system 11, a panoramic vision system 12, and a camera control system, wherein the camera control system is composed of the pan/tilt driver 15 and the camera pan/tilt 16.
  • the main processor 1 is respectively connected with the underlying network device 3, the wireless communication device 6, the stereo vision system 11, the panoramic vision system 12, and the camera control system; the intelligent driverless system further includes a monitoring computer 7, the wireless communication device 6 is connected to the monitoring computer 7, the main processor 1 is also connected to the GPS 13 and the IMU 14, and the main processor 1 is connected to the wireless communication device 6 via the network interface 2.
  • the main processor 1 is connected to the camera control system via the RS232 interface 4.
  • the main processor 1 is connected to the GPS 13 and the IMU 14 via the RS232 interface 4.
  • the main processor 1 is connected to the stereo vision system 11 and the panoramic vision system 12 through the PCI interface 5.
  • the underlying network device 3 is connected to an execution device 8.
  • the intelligent driverless system further includes a collision signal processor 9 and a collision detecting sensor 10, the collision detecting sensor 10 is connected to the collision signal processor 9, and the collision signal processor 9 is connected to the underlying network device 3.
  • the selection of the camera mainly considers parameters such as the lens focal length and the angle of view, while its installation mainly considers parameters such as mounting height and pitch angle.
  • the focal length and angle of view of the lens are fixed parameters of the camera, which directly affect the field of view of the camera and the quality of the image.
  • the pan/tilt is the connecting element between the unmanned vehicle and the camera; through the pan/tilt, the direction and attitude of the camera can be changed to meet the needs of different scenes.
  • the vision-based intelligent driverless system provided by the present invention implements lane line detection and tracking; experimental verification shows that it achieves good detection and tracking performance, with high real-time capability, a low missed-detection rate, and strong robustness.
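The connection topology spelled out in the bullets above can be captured as a small adjacency list and sanity-checked: every device should be reachable from the main processor, either directly or through the underlying network device. The node names below paraphrase the reference numerals of FIG. 1; the graph edges follow the text, but the traversal code itself is purely illustrative and not part of the patent.

```python
from collections import deque

# Adjacency list of the connections described in the patent text.
links = {
    "main_processor": ["network_interface", "rs232", "pci", "underlying_network"],
    "network_interface": ["wireless_comm"],
    "wireless_comm": ["monitoring_computer"],
    "rs232": ["camera_control", "gps", "imu"],
    "camera_control": ["pan_tilt_driver"],
    "pan_tilt_driver": ["camera_pan_tilt"],
    "pci": ["stereo_vision", "panoramic_vision"],
    "underlying_network": ["execution_device", "collision_signal_processor"],
    "collision_signal_processor": ["collision_sensor"],
}

def reachable(graph, start):
    """Breadth-first search: set of all nodes reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

nodes = set(links) | {n for vs in links.values() for n in vs}
# Every device in FIG. 1 should be reachable from the main processor.
assert reachable(links, "main_processor") == nodes
print(len(nodes))
```

A check like this makes the hub-and-spoke character of the described architecture explicit: the main processor is the single root, and the underlying network device acts as a secondary hub for the actuation and collision path.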

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

A vision-based intelligent driverless system comprises a main processor (1), an underlying network device (3), a wireless communication device (6), an execution device (8), a stereo vision system (11), a panoramic vision system (12), and a camera control system, wherein the camera control system is composed of a pan/tilt driver (15) and a camera pan/tilt (16), and the main processor (1) is respectively connected with the underlying network device (3), the wireless communication device (6), the stereo vision system (11), the panoramic vision system (12), and the camera control system. The system achieves good detection and tracking performance, with high real-time capability, a low missed-detection rate, and strong robustness.

Description

Description
Title of Invention: Vision-based intelligent driverless system
Technical Field
[0001] The present invention relates to a vision-based intelligent driverless system, and in particular to an unmanned vehicle platform based on monocular vision, belonging to the field of unmanned vehicles and driverless driving.
Background Art
[0002] With further socio-economic development, accelerating urbanization, the popularization of motor vehicles, and a large increase in the number of travelers, road network capacity can hardly meet the ever-growing traffic demand, resulting in frequent traffic accidents and worsening congestion. Road traffic safety and transport efficiency have become increasingly prominent problems, and many countries have shifted their approach from expanding the road network toward using high and new technology to transform existing road traffic systems and management systems. The smart car is an important component of intelligent transportation systems. It integrates environment perception, planning and decision-making, multi-level driver assistance and other functions, and applies modern sensing, computer, communication, information fusion, artificial intelligence and automatic control technologies. It can independently identify the environment and state of the vehicle, analyze and judge based on the information obtained from each sensor, and provide a basis for the next decision; its applications in the civil, military and aerospace fields are very broad. Smart car lane line recognition methods can generally be divided into two categories: one is feature-based recognition methods, and the other is model-based recognition methods. At present, vision-based lane line detection still faces the following difficulties: the complexity of the scene and the real-time performance of the algorithm. These two problems are the main obstacles to driverless smart car applications, and the recognition algorithms require further research.
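The background above divides lane line recognition into feature-based and model-based methods. As a hedged illustration of the feature-based family only (this is not the algorithm claimed in the patent), the sketch below runs a minimal Hough transform over a synthetic edge map: each edge pixel votes for all (theta, rho) line parameterizations passing through it, and the accumulator peak identifies the dominant line. The synthetic image, resolution, and function name are all assumptions for demonstration.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Vote edge pixels into a (theta, rho) accumulator and return the
    parameters of the strongest line: rho = x*cos(theta) + y*sin(theta)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))            # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))        # 0..179 degrees
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=np.int32)
    ys, xs = np.nonzero(edges)                     # edge pixel coordinates
    for theta_idx, t in enumerate(thetas):
        # shift rho so accumulator indices are non-negative
        rhos = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc[theta_idx], rhos, 1)         # one vote per edge pixel
    theta_idx, rho_idx = np.unravel_index(acc.argmax(), acc.shape)
    return np.rad2deg(thetas[theta_idx]), rho_idx - diag

# Synthetic "edge map": a vertical lane marking at x = 20
img = np.zeros((50, 50), dtype=np.uint8)
img[:, 20] = 1
theta_deg, rho = hough_lines(img)
print(theta_deg, rho)  # vertical line: theta near 0 degrees, rho near 20
```

In a real pipeline the edge map would come from a gradient or color-threshold step on camera frames, and the scene complexity and real-time constraints mentioned above are exactly where such a naive dense voting scheme becomes insufficient.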
Technical Problem
[0003] In view of the above shortcomings of the prior art, the object of the present invention is to provide a vision-based intelligent driverless system.
Solution to Problem
Technical Solution
[0004] To achieve the above object, the present invention adopts the following technical solution:
[0005] A vision-based intelligent driverless system mainly comprises a main processor, an underlying network device, a wireless communication device, an execution device, a stereo vision system, a panoramic vision system, and a camera control system, wherein the camera control system is composed of a pan/tilt driver and a camera pan/tilt. The main processor is respectively connected with the underlying network device, the wireless communication device, the stereo vision system, the panoramic vision system, and the camera control system. The intelligent driverless system further comprises a monitoring computer; the wireless communication device is connected to the monitoring computer, the main processor is also connected to a GPS and an IMU, and the main processor is connected to the wireless communication device through a network interface.
[0006] Preferably, the main processor is connected to the camera control system through an RS232 interface.
[0007] Preferably, the main processor is connected to the GPS and the IMU through an RS232 interface.
[0008] Preferably, the main processor is connected to the stereo vision system and the panoramic vision system through a PCI interface.
[0009] Preferably, the underlying network device is connected to an execution device.
[0010] Preferably, the intelligent driverless system further comprises a collision signal processor and a collision detection sensor; the collision detection sensor is connected to the collision signal processor, and the collision signal processor is connected to the underlying network device.
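Paragraph [0010] specifies a collision detection sensor feeding a collision signal processor but gives no processing method. One minimal, purely illustrative way such a processor might condition a raw shock reading before reporting over the underlying network is a threshold with a short debounce window; the class name, threshold, and window size below are assumptions, not taken from the patent.

```python
from collections import deque

class CollisionSignalProcessor:
    """Illustrative only: flag a collision when the shock reading stays
    above `threshold` for `persist` consecutive samples (simple debounce,
    which filters out single-sample spikes from road vibration)."""
    def __init__(self, threshold=2.5, persist=3):
        self.threshold = threshold
        self.window = deque(maxlen=persist)

    def feed(self, reading):
        self.window.append(reading)
        return (len(self.window) == self.window.maxlen
                and all(r > self.threshold for r in self.window))

proc = CollisionSignalProcessor()
readings = [0.1, 0.2, 3.0, 3.1, 3.2, 0.1]
flags = [proc.feed(r) for r in readings]
print(flags)  # [False, False, False, False, True, False]
```

The debounce length trades detection latency against false positives; a real collision processor would also forward the event to the underlying network device as described in the claim.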
Advantageous Effects of Invention
Advantageous Effects
[0011] Compared with the prior art, the vision-based intelligent driverless system provided by the present invention implements lane line detection and tracking; experiments verify that it achieves good detection and tracking performance, with high real-time capability, a low missed-detection rate, and strong robustness.
Brief Description of Drawings
Description of Drawings
[0012] FIG. 1 is a schematic structural diagram of the vision-based intelligent driverless system of the present invention.
[0013] Reference numerals: 1-main processor; 2-network interface; 3-underlying network device; 4-RS232 interface; 5-PCI interface; 6-wireless communication device; 7-monitoring computer; 8-execution device; 9-collision signal processor; 10-collision detection sensor; 11-stereo vision system; 12-panoramic vision system; 13-GPS; 14-IMU; 15-pan/tilt driver; 16-camera pan/tilt.
Embodiments of the Invention
[0014] The present invention provides a vision-based intelligent driverless system. To make the objects, technical solutions and effects of the present invention clearer and more explicit, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
[0015] The vision-based intelligent driverless system provided by the present invention is mainly divided into a visual-information acquisition and processing module and a motion control module of the unmanned vehicle. After the two modules are designed independently, they need to exchange information to complete the three tasks of "knowledge-decision-control". The visual-information acquisition and processing module obtains real-time information from the real scene and, through analysis and processing, issues the next task instruction to the motion control module. The motion control module, which comprises a processor and an actuator, receives the instruction and drives the smart car to complete the task. As shown in FIG. 1, the present invention specifically comprises a main processor 1, an underlying network device 3, a wireless communication device 6, an execution device 8, a stereo vision system 11, a panoramic vision system 12, and a camera control system, wherein the camera control system is composed of a pan/tilt driver 15 and a camera pan/tilt 16. The main processor 1 is respectively connected with the underlying network device 3, the wireless communication device 6, the stereo vision system 11, the panoramic vision system 12, and the camera control system. The intelligent driverless system further comprises a monitoring computer 7; the wireless communication device 6 is connected to the monitoring computer 7, the main processor 1 is also connected to a GPS 13 and an IMU 14, and the main processor 1 is connected to the wireless communication device 6 through a network interface 2.
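The "knowledge-decision-control" split described above can be sketched as a processing loop in which the vision module produces an observation, a decision step maps it to a command, and the control step applies the command to the vehicle. All of the stubs below (the frame format, the lane-offset observation, the proportional steering gain) are assumptions for illustration; the patent does not specify these internals.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    lane_offset: float  # lateral offset from lane center, metres (+ = right)

def perceive(frame):
    """Stub vision module: a real system would process camera frames;
    here we just unpack a pre-computed offset (an assumption)."""
    return Observation(lane_offset=frame["offset"])

def decide(obs, gain=0.8):
    """Illustrative policy: steer proportionally back toward lane centre."""
    return {"steer": -gain * obs.lane_offset}

def control(cmd, state):
    """Stub actuator: apply the steering command to the vehicle state."""
    state["heading"] += cmd["steer"]
    return state

state = {"heading": 0.0}
for frame in [{"offset": 0.5}, {"offset": 0.1}]:
    state = control(decide(perceive(frame)), state)
print(round(state["heading"], 2))  # -0.48
```

The point of the sketch is the interface boundary: perception and control are developed independently and exchange only the observation/command messages, matching the two-module design the paragraph describes.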
[0016] The main processor 1 is connected to the camera control system through the RS232 interface 4, and to the GPS 13 and the IMU 14 through the RS232 interface 4. The main processor 1 is connected to the stereo vision system 11 and the panoramic vision system 12 through the PCI interface 5. The underlying network device 3 is connected to an execution device 8. The intelligent driverless system further comprises a collision signal processor 9 and a collision detection sensor 10; the collision detection sensor 10 is connected to the collision signal processor 9, and the collision signal processor 9 is connected to the underlying network device 3.
[0017] Camera selection mainly considers parameters such as the lens focal length and the angle of view, while installation mainly considers parameters such as mounting height and pitch angle. The lens focal length and angle of view are fixed parameters of the camera that directly determine its field of view and imaging quality. The pan/tilt is the connecting element between the unmanned vehicle and the camera; through the pan/tilt, the direction and attitude of the camera can be changed to meet the needs of different scenes.
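The relation between lens focal length and angle of view mentioned in [0017] is the standard pinhole-camera relation FOV = 2 * arctan(d / 2f), for sensor width d and focal length f. The example sensor and lens values below are assumptions chosen for illustration, not parameters from the patent.

```python
import math

def field_of_view_deg(sensor_mm, focal_mm):
    """Horizontal angle of view from sensor width and lens focal length:
    FOV = 2 * arctan(sensor / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# Assumed example: a 4.8 mm wide sensor (typical 1/3" format) with a 4 mm lens
fov = field_of_view_deg(4.8, 4.0)
print(round(fov, 1))  # about 61.9 degrees
```

The trade-off the paragraph alludes to falls directly out of this formula: a shorter focal length widens the field of view (more of the lane visible) at the cost of lower angular resolution per pixel.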
[0018] The vision-based intelligent driverless system provided by the present invention implements lane line detection and tracking; experiments verify that it achieves good detection and tracking performance, with high real-time capability, a low missed-detection rate, and strong robustness.
[0019]
[0020] It will be understood that those of ordinary skill in the art may make equivalent substitutions or changes according to the technical solution and inventive concept of the present invention, and all such changes or substitutions shall fall within the scope of protection of the claims appended hereto.

Claims

Claims
1. A vision-based intelligent driverless system, characterized in that: the intelligent driverless system mainly comprises a main processor (1), an underlying network device (3), a wireless communication device (6), an execution device (8), a stereo vision system (11), a panoramic vision system (12), and a camera control system, wherein the camera control system is composed of a pan/tilt driver (15) and a camera pan/tilt (16); the main processor (1) is respectively connected with the underlying network device (3), the wireless communication device (6), the stereo vision system (11), the panoramic vision system (12), and the camera control system; the intelligent driverless system further comprises a monitoring computer (7); the wireless communication device (6) is connected to the monitoring computer (7); the main processor (1) is also connected to a GPS (13) and an IMU (14); and the main processor (1) is connected to the wireless communication device (6) through a network interface (2).
2. The vision-based intelligent driverless system according to claim 1, characterized in that: the main processor (1) is connected to the camera control system through an RS232 interface (4).
3. The vision-based intelligent driverless system according to claim 1, characterized in that: the main processor (1) is connected to the GPS (13) and the IMU (14) through an RS232 interface (4).
4. The vision-based intelligent driverless system according to claim 1, characterized in that: the main processor (1) is connected to the stereo vision system (11) and the panoramic vision system (12) through a PCI interface (5).
5. The vision-based intelligent driverless system according to claim 1, characterized in that: the underlying network device (3) is connected to an execution device (8).
6. The vision-based intelligent driverless system according to claim 1, characterized in that: the intelligent driverless system further comprises a collision signal processor (9) and a collision detection sensor (10); the collision detection sensor (10) is connected to the collision signal processor (9), and the collision signal processor (9) is connected to the underlying network device (3).
PCT/CN2016/095604 2016-08-16 2016-08-16 Vision-based intelligent driverless system WO2018032385A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/095604 WO2018032385A1 (zh) 2016-08-16 2016-08-16 Vision-based intelligent driverless system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/095604 WO2018032385A1 (zh) 2016-08-16 2016-08-16 Vision-based intelligent driverless system

Publications (1)

Publication Number Publication Date
WO2018032385A1 true WO2018032385A1 (zh) 2018-02-22

Family

ID=61196159

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/095604 WO2018032385A1 (zh) 2016-08-16 2016-08-16 Vision-based intelligent driverless system

Country Status (1)

Country Link
WO (1) WO2018032385A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070291130A1 (en) * 2006-06-19 2007-12-20 Oshkosh Truck Corporation Vision system for an autonomous vehicle
CN203658842U (zh) * 2014-01-21 2014-06-18 北京博创尚和科技有限公司 两轮差动式自主移动机器人
US9052721B1 (en) * 2012-08-28 2015-06-09 Google Inc. Method for correcting alignment of vehicle mounted laser scans with an elevation map for obstacle detection
US20150206433A1 (en) * 2014-01-21 2015-07-23 Hitachi Construction Machinery Co., Ltd. Vehicle control system
US20150331422A1 (en) * 2013-12-31 2015-11-19 Harbrick LLC Autonomous Vehicle Interface System
US9355562B1 (en) * 2012-08-14 2016-05-31 Google Inc. Using other vehicle trajectories to aid autonomous vehicles driving through partially known areas
CN106155059A (zh) * 2016-08-16 2016-11-23 邹霞 基于视觉的智能无人驾驶系统



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16913145

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16913145

Country of ref document: EP

Kind code of ref document: A1