WO2020061794A1 - Vehicle assisted driving device, vehicle, and information processing method - Google Patents

Vehicle assisted driving device, vehicle, and information processing method

Info

Publication number
WO2020061794A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
target
camera
distance
range
Prior art date
Application number
PCT/CN2018/107517
Other languages
English (en)
French (fr)
Inventor
王铭钰
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201880011498.8A priority Critical patent/CN110312639A/zh
Priority to PCT/CN2018/107517 priority patent/WO2020061794A1/zh
Publication of WO2020061794A1 publication Critical patent/WO2020061794A1/zh

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00Registering or indicating the working of vehicles
    • G07C5/08Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841Registering performance data
    • G07C5/085Registering performance data using electronic data carriers
    • G07C5/0866Registering performance data using electronic data carriers the electronic data carrier being a digital video recorder in combination with video camera
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/105Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/8093Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for obstacle warning

Definitions

  • The present application relates to the field of vehicles, and more particularly, to a vehicle assisted driving device, a vehicle, and an information processing method.
  • Vehicle assisted driving technology can provide the driver with information about the vehicle's surroundings (such as video and/or audio information) to assist driving or to provide evidence when a malfunction occurs.
  • However, the information that traditional vehicle assisted driving devices can provide is limited, which restricts their use scenarios.
  • The present application provides a vehicle assisted driving device, a vehicle, and an information processing method, which can enrich the functions of the vehicle assisted driving device and broaden its use scenarios.
  • In a first aspect, a vehicle assisted driving device is provided, including: a multi-eye camera for capturing images of a scene within a target viewing-angle range to the left of, to the right of, or behind a vehicle; and an information processing system for acquiring the multi-eye images captured by the multi-eye camera and calculating, based on the multi-eye images, the current distance between objects within the target viewing-angle range and the vehicle.
  • In a second aspect, a vehicle is provided, including the vehicle assisted driving device according to the first aspect.
  • In a third aspect, an information processing method is provided.
  • The method is applied to a vehicle assisted driving device of a vehicle.
  • The vehicle assisted driving device includes a multi-eye camera used to capture images of a scene within a target viewing-angle range to the left of, to the right of, or behind the vehicle; the method includes: acquiring the multi-eye images captured by the multi-eye camera; and calculating, based on the multi-eye images, the current distance between objects within the target viewing-angle range and the vehicle.
  • Using a multi-eye camera at the rear, left, or right of the vehicle enables the vehicle assisted driving device to provide distance information for the rear, left, or right of the vehicle, which enriches its functions and broadens its use scenarios.
  • FIG. 2 is a schematic flowchart of an information processing method according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a possible implementation of step S24 in FIG. 2.
  • FIG. 4 is a schematic flowchart of another possible implementation of step S24 in FIG. 2.
  • FIG. 5 is an example diagram of collision warning information provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of an information processing method according to another embodiment of the present application.
  • There are many types of vehicle assisted driving devices installed on vehicles. Many of them, such as reversing radar systems and driving recorders, can provide the driver with image information of the vehicle's surroundings. However, the image information these devices can provide is limited, usually only a single kind of image information, which restricts their use scenarios. The following takes the driving recorder as an example.
  • The driving recorder is usually used to record images of the vehicle during a journey, so as to provide evidence in the event of a traffic accident.
  • The panoramic driving recorder has gradually become the first choice of more owners because it can record images within a 360° range.
  • The conventional driving recorder uses a monocular camera to collect images around the vehicle and can only obtain a single kind of image information.
  • As shown in FIG. 1, a vehicle assisted driving device 10 is mounted on a vehicle 1.
  • The vehicle assisted driving device 10 may include a multi-eye camera 12 (such as the cameras 12a and 12b in FIG. 1) and an information processing system 14.
  • The multi-eye camera 12 can be used to capture images of scenes within a target viewing-angle range to the left of, to the right of, or behind the vehicle 1.
  • The multi-eye camera 12 may include two or more cameras.
  • For example, the multi-eye camera 12 may include two cameras for capturing color images (i.e., a binocular camera).
  • Alternatively, the multi-eye camera 12 may include one camera for capturing color images and two cameras for capturing grayscale images (i.e., a trinocular camera).
  • As shown in FIG. 1, the multi-eye camera 12 may include binocular cameras 12a and 12b.
  • The binocular cameras 12a and 12b can be used to capture binocular images (a left-eye image and a right-eye image), respectively.
  • The multi-eye camera 12 may be located at the rear, left, or right of the vehicle 1. Taking the rear as an example, as shown in FIG. 1, the multi-eye camera 12 may be installed at the rear window of the vehicle 1, such as at the top of the rear window. Alternatively, the multi-eye camera 12 may be mounted on the license plate of the vehicle, such as near the license plate or at the middle of its top edge. Since the binocular cameras 12a and 12b in FIG. 1 are installed at the rear of the vehicle, they may also be referred to as rear-view binocular cameras.
  • The multi-eye camera 12 can be used to capture images of scenes within the target viewing-angle range. The value of the target viewing-angle range is not specifically limited in the embodiments of the present application and may be determined by factors such as the installation position and the field of view of the multi-eye camera 12.
  • The target viewing-angle range may be, for example, a 90-degree range or a 135-degree range.
  • The information processing system 14 may be integrated with the multi-eye camera 12 or separate from it (as shown in FIG. 1), as long as the information processing system 14 and the multi-eye camera 12 are communicatively connected.
  • FIG. 2 is a schematic flowchart of an information processing method according to an embodiment of the present application.
  • The method in FIG. 2 may be executed by the information processing system 14 in FIG. 1.
  • The method of FIG. 2 may include steps S22 and S24.
  • In step S22, the multi-eye images captured by the multi-eye camera are acquired.
  • In step S24, the current distance between objects within the target viewing-angle range and the vehicle is calculated based on the multi-eye images.
  • Because the cameras are installed at different positions, there is disparity between the multi-eye images; a disparity map can therefore be generated by matching the multi-eye images, from which the current distance between objects within the target viewing-angle range and the vehicle is obtained.
  • The embodiments of the present application adopt a multi-eye camera at the rear, left, or right of the vehicle, so that the vehicle assisted driving device can provide distance information for the rear, left, or right of the vehicle, which broadens its use scenarios.
  • For example, the vehicle assisted driving device can provide distance information for the rear, left, or right of the vehicle for collision warning, making driving safer.
  • The multi-eye images captured by the multi-eye camera can be matched to obtain distance information for objects in the scene within the target viewing-angle range.
  • Multi-eye image matching may also be referred to as multi-eye image registration.
  • Taking binocular images (a left-eye image and a right-eye image) as an example, a disparity map of the scene can be calculated from the left-eye and right-eye images, from which a depth map of the scene can be computed.
  • When the multi-eye images include three or more images, depth maps can be computed from pairs of images and combined into a depth map of the entire scene.
  • The matching quality of the multi-eye images directly affects the accuracy of the distance information calculated by the information processing system 14.
  • The matching quality is usually related to the environment the vehicle is in. For example, in low-light scenes or scenes with relatively monotonous texture, the matching result may be inaccurate, causing the distance information calculated by the information processing system 14 to be inaccurate.
  • A possible implementation of step S24 that improves the accuracy of the distance information calculated by the information processing system 14 is given below in conjunction with FIG. 3.
  • In step S32, the first distance information output by a ranging module is received.
  • The first distance information may be used to indicate the current distance between objects in a target angular direction within the target viewing-angle range and the vehicle.
  • The ranging module may be, for example, a radar sensor installed on the vehicle (such as a reversing radar).
  • The ranging module may be an ultrasonic ranging module, a frequency-modulated continuous wave (FMCW) ranging module, or a laser ranging module such as a light detection and ranging (Lidar) system.
  • The target angular direction may be one or more angular directions within the target viewing-angle range.
  • Taking a reversing radar as the ranging module, the target angular directions it measures may include the rear-left, directly-rear, and rear-right directions of the vehicle.
  • In step S34, the first distance information is used as a reference for matching the multi-eye images to obtain a depth map, such that the difference between the second distance information in the depth map and the first distance information is less than a preset threshold. Like the first distance information, the second distance information indicates the current distance between objects in the target angular direction and the vehicle; the difference is that the first distance information is measured by the ranging module, while the second distance information is obtained by matching the multi-eye images.
  • The ranging module and the multi-eye camera are usually installed at different positions on the vehicle. Therefore, to facilitate comparison of the distance information, the first distance information output by the ranging module may be coordinate-transformed and corrected (for example, converted into the camera coordinate system of the multi-eye camera) so that the first and second distance information share the same reference.
  • If the ranging module and the multi-eye camera are installed very close to each other, the first and second distance information may also be regarded as distance information collected under the same reference.
  • When the environment the vehicle is in satisfies a preset condition, the multi-eye images may be matched directly, without using the first distance information as a reference; when the environment does not satisfy the preset condition, the first distance information is used as a reference for matching.
  • The environment failing to satisfy the preset condition may mean that the vehicle is in a low-light environment or an environment with weak texture.
  • Since the ranging module is less affected by the environment the vehicle is in, the distance information it measures is usually more accurate.
  • The embodiments of the present application use the distance information measured by the ranging module as a reference to correct the matching result of the multi-eye images, which can improve the accuracy of the distance information.
  • The embodiment shown in FIG. 3 gives one possible implementation of step S24.
  • Another possible implementation of step S24 is described below with reference to FIG. 4.
  • FIG. 4 includes steps S42 to S48. These steps are described in detail below.
  • In step S42, the first distance information output by the ranging module is received.
  • The first distance information may be used to indicate the current distance between objects within the target viewing-angle range and the vehicle.
  • The ranging module may be, for example, a radar sensor installed on the vehicle (such as a reversing radar).
  • The ranging module may be an ultrasonic ranging module, an FMCW ranging module, or a laser ranging module such as a Lidar system.
  • In step S44, objects within the target viewing-angle range are identified using the multi-eye images.
  • In step S46, the first distance information is used as the distance information of the objects within the target viewing-angle range to form a depth map.
  • In step S48, the current distance between the objects within the target viewing-angle range and the vehicle is calculated from the depth map.
  • A reversing radar can only provide collision warning information in a limited number of directions, which depends on the number of sensors (such as ultrasonic sensors) in the reversing radar and their installation positions.
  • A traditional reversing radar can usually only provide collision warning information for the rear-left, directly-rear, and rear-right directions of a vehicle.
  • The embodiments of the present application instead use a multi-eye camera to provide distance information. Since the distance information provided by the multi-eye camera can include the current distances in all angular directions within the target viewing-angle range, the collision warning information provided in the embodiments of the present application can also include collision warning information corresponding to every angle within that range, which improves the warning effect.
  • The warning map may include at least one arc.
  • The arc may include points corresponding to the various angular directions within the target viewing-angle range.
  • The color of the arc can be used to represent the current distance between objects within the target viewing-angle range and the vehicle. (The colors are not shown in FIG. 5; in practice, a single arc may carry multiple colors. For example, the two ends of an arc may be red while the middle is green, with a gradual transition through other colors in between. Red can indicate that the vehicle is very close to an object, to alert the driver; green can indicate that the object is far away.)
  • Objects within the target viewing-angle range may also be identified based on one or more of the multi-eye images and combined with the distance information within the target viewing-angle range (that distance information may be provided by the multi-eye camera, by the reversing radar, or by fusing the distance information provided by both), so that objects that are too close to the vehicle are marked in the images shown on the display screen. For example, when a pedestrian passes behind the vehicle and is too close to it, the pedestrian may be marked in the image in some way, such as by coloring the pedestrian or applying another warning mark.
  • The vehicle assisted driving device described above may be a driving recorder.
  • Adding a function for collecting distance information to the driving recorder allows the driving recorder to be applied in a wider range of scenarios.
  • The driving recorder may be an ordinary driving recorder or a panoramic driving recorder, which is not limited in the embodiments of the present application.
  • The vehicle assisted driving device provided in the embodiments of the present application has an active collision recording function: it predicts whether the vehicle may collide and, if a collision may occur, starts video recording. The execution flow of the active collision recording function provided by the embodiments of the present application is described in detail below with reference to FIG. 6.
  • The method of FIG. 6 may be executed by the information processing system 14 in the vehicle assisted driving device 10.
  • The method of FIG. 6 includes steps S62 to S64.
  • In step S62, the possibility of a collision between the vehicle and an object within the target viewing-angle range is determined according to how the current distance between the object and the vehicle has changed relative to the historical distance.
  • Step S62 can be implemented in several ways.
  • As one example, when the current distance is less than the historical distance and the current distance is less than a preset threshold, it is determined that the vehicle may collide with the object within the target viewing-angle range.
  • The sampling interval of the distance between the vehicle and an object is usually fixed.
  • If the difference between the historical distance and the current distance is greater than a certain threshold, this indicates that the object is approaching the vehicle quickly; in that case it can be determined that a collision between the object and the vehicle is possible.
  • The current distance between an object within the target viewing-angle range and the vehicle may be the distance information collected by the information processing system 14 at the current sampling time, and the historical distance may be the distance information collected at one or more previous sampling times.
  • Step S62 may, for example, be implemented as follows. First, the current distance is compared with the historical distance. If an object within the target viewing-angle range is getting closer to the vehicle and the distance between the object and the vehicle is less than a preset threshold, it is determined that a collision between the vehicle and the object is possible, the collision recording function of the vehicle assisted driving device is turned on, and the multi-eye camera is used for recording.
  • In some cases the recording function of the multi-eye camera may be turned off; for example, the driver may have turned off the collision recording function of the vehicle assisted driving device. Therefore, when it is determined that the vehicle may collide with an object within the target viewing-angle range, if the recording function of the multi-eye camera is off, it may be forcibly turned on and the multi-eye camera used for recording.
  • The vehicle assisted driving device can also be used to record special events while the vehicle is parked.
  • For example, the information processing system 14 can also be used to determine, based on the multi-eye images, whether a person or an object is approaching the vehicle while it is parked; when a person or an object is determined to be approaching, the recording function of the multi-eye camera is turned on and the multi-eye camera is used for video recording, improving the safety of the vehicle while parked.
  • The embodiments of the present application further provide a vehicle.
  • The vehicle may be the vehicle 1 shown in FIG. 1.
  • The vehicle 1 includes the vehicle assisted driving device 10.
  • The multi-eye camera 12 of the vehicle assisted driving device 10 may be installed on the rear window of the vehicle 1, on the license plate, or at a position around the license plate.
  • An embodiment of the present application further provides an information processing method.
  • The information processing method can be applied to a vehicle assisted driving device of a vehicle.
  • The vehicle assisted driving device includes a multi-eye camera used to capture images of a scene within a target viewing-angle range to the left of, to the right of, or behind the vehicle.
  • The method may include steps S22 and S24 shown in FIG. 2.
  • The information processing method may further include the steps shown in FIG. 3.
  • The information processing method may further include the steps shown in FIG. 4.
  • The information processing method may further include: generating collision warning information according to the current distance between objects within the target viewing-angle range and the vehicle.
  • The collision warning information may include collision warning information corresponding to the various angles within the target viewing-angle range.
  • The information processing method may further include: controlling a display screen to display a warning map representing the collision warning information, where the warning map includes at least one arc, the arc includes points corresponding to the various angles within the target viewing-angle range, and the color of the arc represents the current distance between objects within the target viewing-angle range and the vehicle.
  • The computer program product includes one or more computer instructions.
  • The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (such as by infrared, radio, or microwave).
  • The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media.
  • The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), among others.
  • The disclosed systems, devices, and methods may be implemented in other ways.
  • The device embodiments described above are only illustrative.
  • The division of the units is only a logical functional division; there may be other divisions in actual implementation.
  • For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • The functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

A vehicle assisted driving device (10) includes a multi-eye camera (12) and an information processing system (14). The multi-eye camera (12) is used to capture images of a scene within a target viewing-angle range to the left of, to the right of, or behind a vehicle (1). The information processing system (14) is used to calculate, from the multi-eye images captured by the multi-eye camera (12), the current distance between objects within the target viewing-angle range and the vehicle. The device broadens the use scenarios of vehicle assisted driving devices. A vehicle and an information processing method are also provided.

Description

Vehicle assisted driving device, vehicle, and information processing method
Copyright notice
The disclosure of this patent document contains material that is subject to copyright protection. The copyright belongs to the copyright owner. The copyright owner has no objection to the reproduction by anyone of the patent document or the patent disclosure as it appears in the official records and files of the Patent and Trademark Office.
Technical field
The present application relates to the field of vehicles, and more particularly, to a vehicle assisted driving device, a vehicle, and an information processing method.
Background
With the widespread use of vehicles, vehicle assisted driving technology has attracted more and more attention.
Vehicle assisted driving technology can provide the driver with information about the vehicle's surroundings (such as video and/or audio information) to assist driving or to provide evidence when a malfunction occurs. The information that traditional vehicle assisted driving devices can provide is limited, which restricts their use scenarios.
Summary
The present application provides a vehicle assisted driving device, a vehicle, and an information processing method, which can enrich the functions of the vehicle assisted driving device and broaden its use scenarios.
In a first aspect, a vehicle assisted driving device is provided, including: a multi-eye camera for capturing images of a scene within a target viewing-angle range to the left of, to the right of, or behind a vehicle; and an information processing system for acquiring the multi-eye images captured by the multi-eye camera and calculating, based on the multi-eye images, the current distance between objects within the target viewing-angle range and the vehicle.
In a second aspect, a vehicle is provided, including the vehicle assisted driving device according to the first aspect.
In a third aspect, an information processing method is provided. The method is applied to a vehicle assisted driving device of a vehicle, the vehicle assisted driving device including a multi-eye camera used to capture images of a scene within a target viewing-angle range to the left of, to the right of, or behind the vehicle. The method includes: acquiring the multi-eye images captured by the multi-eye camera; and calculating, based on the multi-eye images, the current distance between objects within the target viewing-angle range and the vehicle.
By adopting a multi-eye camera at the rear, left, or right of the vehicle, the vehicle assisted driving device can provide distance information for the rear, left, or right of the vehicle, which enriches the functions of the vehicle assisted driving device and broadens its use scenarios.
Brief description of the drawings
FIG. 1 is an example diagram of the installation position of a vehicle assisted driving device on a vehicle according to an embodiment of the present application.
FIG. 2 is a schematic flowchart of an information processing method according to an embodiment of the present application.
FIG. 3 is a schematic flowchart of a possible implementation of step S24 in FIG. 2.
FIG. 4 is a schematic flowchart of another possible implementation of step S24 in FIG. 2.
FIG. 5 is an example diagram of collision warning information according to an embodiment of the present application.
FIG. 6 is a schematic flowchart of an information processing method according to another embodiment of the present application.
Detailed description
There are many types of vehicle assisted driving devices installed on vehicles. Many of them, such as reversing radar systems and driving recorders, can provide the driver with image information of the vehicle's surroundings. However, the image information these devices can provide is limited, usually only a single kind of image information, which restricts their use scenarios. The following takes the driving recorder as an example.
The driving recorder is usually used to record images of the vehicle during a journey, so as to provide evidence in the event of a traffic accident. The panoramic driving recorder has gradually become the first choice of more owners because it can record images within a 360° range.
The traditional driving recorder usually uses monocular cameras to record the images around the vehicle. Taking the panoramic driving recorder as an example, it usually requires one camera installed in each of the four directions (front, rear, left, and right) of the vehicle. The images captured by the four cameras can be stitched together and displayed on the center console or on the owner's mobile phone, presenting a 360° view around the vehicle.
However, the traditional driving recorder uses monocular cameras around the vehicle for image capture and can only obtain a single kind of image information.
The vehicle assisted driving device provided by the embodiments of the present application is described in detail below with reference to FIG. 1. As shown in FIG. 1, a vehicle assisted driving device 10 is mounted on a vehicle 1. The vehicle assisted driving device 10 may include a multi-eye camera 12 (such as the cameras 12a and 12b in FIG. 1) and an information processing system 14.
The multi-eye camera 12 can be used to capture images of a scene within a target viewing-angle range to the left of, to the right of, or behind the vehicle 1.
The multi-eye camera 12 may include two or more cameras. As one example, the multi-eye camera 12 may include two cameras for capturing color images (i.e., a binocular camera). As another example, the multi-eye camera 12 may include one camera for capturing color images and two cameras for capturing grayscale images (i.e., a trinocular camera).
As shown in FIG. 1, the multi-eye camera 12 may include binocular cameras 12a and 12b, which can be used to capture binocular images (a left-eye image and a right-eye image), respectively.
The multi-eye camera 12 may be located at the rear, left, or right of the vehicle 1. Taking the rear as an example, as shown in FIG. 1, the multi-eye camera 12 may be installed at the rear window of the vehicle 1, such as at the top of the rear window. Alternatively, the multi-eye camera 12 may be mounted on the license plate of the vehicle, such as near the license plate or at the middle of its top edge. Since the binocular cameras 12a and 12b in FIG. 1 are installed at the rear of the vehicle, they may also be referred to as rear-view binocular cameras.
The multi-eye camera 12 can be used to capture images of the scene within the target viewing-angle range. The embodiments of the present application do not specifically limit the value of the target viewing-angle range; it may be determined by factors such as the installation position and the field of view of the multi-eye camera 12. The target viewing-angle range may be, for example, a 90-degree range or a 135-degree range.
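As a purely illustrative aside (not part of the original disclosure), the link between the camera optics and the achievable viewing-angle range can be sketched with the standard pinhole formula; the sensor width and focal length below are invented example values:

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal field of view of an ideal pinhole camera, in degrees."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# A 6.17 mm wide sensor behind a 3.0 mm lens yields roughly 91 degrees,
# on the order of the 90-degree target viewing-angle range mentioned above.
print(round(horizontal_fov_deg(6.17, 3.0), 1))
```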
The information processing system 14 may be integrated with the multi-eye camera 12 or separate from it (as shown in FIG. 1), as long as the information processing system 14 and the multi-eye camera 12 are communicatively connected.
FIG. 2 is a schematic flowchart of an information processing method according to an embodiment of the present application. The method of FIG. 2 may be executed by the information processing system 14 in FIG. 1 and may include steps S22 and S24.
In step S22, the multi-eye images captured by the multi-eye camera are acquired.
In step S24, the current distance between objects within the target viewing-angle range and the vehicle is calculated from the multi-eye images.
Because the cameras are installed at different positions, there is disparity between the multi-eye images. A disparity map can therefore be generated by matching the multi-eye images, from which the current distance between objects within the target viewing-angle range and the vehicle is obtained.
The embodiments of the present application adopt a multi-eye camera at the rear, left, or right of the vehicle, so that the vehicle assisted driving device can provide distance information for the rear, left, or right of the vehicle, which broadens its use scenarios. For example, the distance information provided by the vehicle assisted driving device for the rear, left, or right of the vehicle can be used for collision warning, making driving safer.
The multi-eye images captured by the multi-eye camera can be matched to obtain distance information for objects in the scene within the target viewing-angle range; multi-eye image matching may also be referred to as multi-eye image registration. Taking binocular images (a left-eye image and a right-eye image) as an example, a disparity map of the scene can be calculated from the left-eye and right-eye images, from which a depth map of the scene is computed. When the multi-eye images include three or more images, depth maps can be computed from pairs of images and combined into a depth map of the entire scene.
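By way of illustration only (the application does not specify a matching algorithm), the binocular case can be sketched with OpenCV's semi-global matcher. Depth follows from the standard relation Z = f·B/d, where f is the focal length in pixels, B is the baseline between the two cameras, and d is the disparity; the file names and calibration constants below are assumptions:

```python
import cv2
import numpy as np

# Hypothetical rectified left/right images from the rear-view binocular camera.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Assumed calibration values: focal length in pixels and baseline in meters.
FOCAL_PX = 700.0
BASELINE_M = 0.12

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,         # smoothness penalties for small/large disparity jumps
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)

# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth in meters; non-positive (invalid) disparities are masked out.
with np.errstate(divide="ignore"):
    depth_m = np.where(disparity > 0, FOCAL_PX * BASELINE_M / disparity, np.inf)

# The current distance to the nearest object behind the vehicle would then
# be a statistic such as depth_m.min() over the region of interest.
```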
The matching quality of the multi-eye images directly affects the accuracy of the distance information calculated by the information processing system 14. The matching quality is usually related to the environment the vehicle is in. For example, in low-light scenes or scenes with relatively monotonous texture, the matching result may be inaccurate, causing the calculated distance information to be inaccurate.
A possible implementation of step S24 that improves the accuracy of the distance information calculated by the information processing system 14 is given below with reference to FIG. 3.
In step S32, the first distance information output by a ranging module is received.
The first distance information can be used to indicate the current distance between objects in a target angular direction within the target viewing-angle range and the vehicle.
The ranging module may be, for example, a radar sensor installed on the vehicle (such as a reversing radar). The ranging module may be an ultrasonic ranging module, a frequency-modulated continuous wave (FMCW) ranging module, or a laser ranging module such as a light detection and ranging (Lidar) system.
The target angular direction may be one or more angular directions within the target viewing-angle range. Taking a reversing radar as the ranging module, the target angular directions it measures may include the rear-left, directly-rear, and rear-right directions of the vehicle.
In step S34, the first distance information is used as a reference for matching the multi-eye images to obtain a depth map, such that the difference between the second distance information in the depth map and the first distance information is less than a preset threshold.
Like the first distance information, the second distance information can be used to indicate the current distance between objects in the target angular direction and the vehicle. The difference is that the first distance information is measured by the ranging module, while the second distance information is obtained by matching the multi-eye images.
It should be noted that the ranging module and the multi-eye camera are usually installed at different positions on the vehicle. Therefore, to facilitate comparison of the distance information, the first distance information output by the ranging module may be coordinate-transformed and corrected (for example, converted into the camera coordinate system of the multi-eye camera) so that the first and second distance information share the same reference. Of course, if the ranging module and the multi-eye camera are installed very close to each other, the first and second distance information may also be regarded as being collected under the same reference.
Step S34 can be understood as a fusion of the distance information measured by the ranging module with the distance information in the depth map, the purpose being to correct the matching result of the multi-eye images to some extent. Step S34 can be implemented in several ways. As one example, the multi-eye images may first be matched to obtain an initial depth map; when the difference between the second distance information in this initial depth map and the first distance information is greater than the preset threshold, the pixel correspondences of the multi-eye images are re-adjusted until the difference between the second distance information in the computed depth map and the first distance information is less than the preset threshold. Of course, other fusion approaches may also be used; for example, the first distance information may directly replace the second distance information in the depth map.
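The following sketch illustrates one conceivable form of this fusion, under invented interfaces: the stereo depth sampled in the region corresponding to the radar's target angular direction is compared with the radar reading and, if the difference exceeds the threshold, corrected. A real implementation would re-run the matching as described above; simple rescaling is used here only to keep the example short:

```python
import numpy as np

def correct_depth_with_radar(depth_m: np.ndarray,
                             radar_distance_m: float,
                             region: tuple,
                             threshold_m: float = 0.3) -> np.ndarray:
    """Illustrative correction of a stereo depth map against one radar reading.

    `region` is a (row_slice, col_slice) pair selecting the part of the depth
    map that corresponds to the radar's target angular direction, assumed to be
    already expressed in the camera coordinate system (i.e., after the
    coordinate transform described in the text).
    """
    rows, cols = region
    stereo_distance = float(np.nanmedian(depth_m[rows, cols]))
    if not np.isfinite(stereo_distance) or stereo_distance <= 0:
        return depth_m  # nothing usable to compare against
    if abs(stereo_distance - radar_distance_m) <= threshold_m:
        return depth_m  # already consistent with the reference
    # Simplest possible fusion: rescale so the sampled region agrees with
    # the radar reading (a stand-in for re-running the matching).
    return depth_m * (radar_distance_m / stereo_distance)

# Example: corrected = correct_depth_with_radar(
#     depth_m, 2.4, (slice(200, 260), slice(300, 380)))
```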
In some embodiments, when the environment the vehicle is in satisfies a preset condition, the multi-eye images may be matched directly, without using the first distance information as a reference; when the environment does not satisfy the preset condition, the first distance information is used as a reference for matching. The environment failing to satisfy the preset condition may mean that the vehicle is in a low-light environment or an environment with weak texture. This implementation improves the flexibility of the algorithm and reduces its computational load.
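A plausible (but assumed) reading of the preset condition is a brightness-and-texture check on the captured images; the thresholds below are invented for illustration:

```python
import cv2
import numpy as np

def environment_ok(gray: np.ndarray,
                   min_mean_brightness: float = 60.0,
                   min_texture_var: float = 50.0) -> bool:
    """Heuristic check: True if the scene is bright and textured enough for
    direct multi-eye matching, False if the first distance information should
    be used as a reference (low light or weak texture)."""
    bright_enough = float(gray.mean()) >= min_mean_brightness
    textured_enough = cv2.Laplacian(gray, cv2.CV_64F).var() >= min_texture_var
    return bright_enough and textured_enough
```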
In step S36, the current distance between objects within the target viewing-angle range and the vehicle is calculated from the depth map.
Since the ranging module is less affected by the environment the vehicle is in, the distance information it measures is usually relatively accurate. The embodiments of the present application use the distance information measured by the ranging module as a reference to correct the matching result of the multi-eye images, which can improve the accuracy of the distance information.
The embodiment shown in FIG. 3 gives one possible implementation of step S24. Another possible implementation of step S24 is given below with reference to FIG. 4.
FIG. 4 includes steps S42 to S48, which are described in detail below.
In step S42, the first distance information output by the ranging module is received.
The first distance information can be used to indicate the current distance between objects within the target viewing-angle range and the vehicle.
The ranging module may be, for example, a radar sensor installed on the vehicle (such as a reversing radar). It may be an ultrasonic ranging module, an FMCW ranging module, or a laser ranging module such as a Lidar system.
In step S44, objects within the target viewing-angle range are identified using the multi-eye images.
There are many ways to recognize objects in an image: a traditional image recognition algorithm based on support vector machines may be used, or a pre-trained neural network model; the embodiments of the present application do not limit this.
In step S46, the first distance information is used as the distance information of the objects within the target viewing-angle range to form a depth map. In other words, step S46 labels the objects in the multi-eye images with the first distance information to generate the depth map.
In step S48, the current distance between the objects within the target viewing-angle range and the vehicle is calculated from the depth map.
Since the ranging module is less affected by the environment the vehicle is in, the distance information it measures is usually relatively accurate. The embodiments of the present application use the distance information measured by the ranging module directly as the distance information of the objects in the multi-eye images, which can improve the ranging accuracy.
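Purely as an illustration of steps S44 to S48 (the detection model and the association of radar readings with detections are not specified by the application), a sparse depth map can be formed by writing each reported distance into the image region of the corresponding detected object:

```python
import numpy as np

def depth_map_from_detections(image_shape: tuple,
                              detections: list,
                              radar_distances_m: list) -> np.ndarray:
    """Form a sparse depth map by labeling detected objects with the
    distances reported by the ranging module.

    `detections` is a list of (x, y, w, h) bounding boxes (e.g., from an SVM
    classifier or a pre-trained neural network); `radar_distances_m` holds one
    distance per box, assumed already associated by angular direction.
    """
    depth = np.full(image_shape[:2], np.inf, dtype=np.float32)
    for (x, y, w, h), dist in zip(detections, radar_distances_m):
        depth[y:y + h, x:x + w] = dist
    return depth

# The current distance to the nearest labeled object would then be, e.g.:
# nearest = depth_map_from_detections((480, 640), boxes, dists).min()
```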
As pointed out above, adopting a multi-eye camera enables the vehicle assisted driving device to provide distance information around the vehicle, which broadens its use scenarios. The functions and use scenarios of the vehicle assisted driving device provided by the embodiments of the present application are illustrated in detail below.
As one example, the vehicle assisted driving device provided by the embodiments of the present application can be applied to rear collision warning. For example, the information processing system 14 may generate collision warning information according to the current distance between objects within the target viewing-angle range and the vehicle.
Traditional vehicles usually rely on a reversing radar for collision warning information, but a reversing radar can only provide collision warning information in a limited number of directions, depending on the number of sensors (such as ultrasonic sensors) in the reversing radar and their installation positions. For example, a traditional reversing radar can usually only provide collision warning information for the rear-left, directly-rear, and rear-right directions of a vehicle. Unlike a reversing radar, the embodiments of the present application use a multi-eye camera to provide distance information. Since the distance information provided by the multi-eye camera can include the current distances in all angular directions within the target viewing-angle range, the collision warning information provided by the embodiments of the present application can also include collision warning information corresponding to every angle within that range, which improves the warning effect.
Optionally, in some embodiments, the collision warning information provided by the embodiments of the present application may be obtained by fusing the distance information provided by the multi-eye camera with the distance information provided by the reversing radar, to further improve the warning effect.
Optionally, in some embodiments, the information processing system 14 may also be used to control a display screen to display a warning map representing the collision warning information.
The display screen may be, for example, the screen of the owner's mobile phone or a screen on the center console of the vehicle 1.
As shown in FIG. 5, the warning map may include at least one arc. The arc may include points corresponding to the various angular directions within the target viewing-angle range, and the color of the arc can be used to represent the current distance between objects within the target viewing-angle range and the vehicle. (The colors are not shown in FIG. 5; in practice, a single arc may carry multiple colors. For example, the two ends of an arc may be red while the middle is green, with a gradual transition through other colors in between; red can indicate that the vehicle is very close to an object, to alert the driver, while green can indicate that the object is far away.)
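For illustration only, such a color-coded arc could be rendered as a sequence of short ellipse segments whose color interpolates from red (near) to green (far); the layout and thresholds below are assumptions, not the actual warning map of FIG. 5:

```python
import cv2
import numpy as np

def draw_warning_arc(canvas: np.ndarray, center: tuple, radius: int,
                     distances_m: np.ndarray, near_m: float = 0.5,
                     far_m: float = 3.0) -> None:
    """Draw a rear warning arc whose color encodes per-angle distance:
    red when an object is close, green when far, blending in between.
    `distances_m[i]` is the distance measured at the i-th angle of an assumed
    180-degree rear viewing-angle range.
    """
    n = len(distances_m)
    for i, d in enumerate(distances_m):
        t = float(np.clip((d - near_m) / (far_m - near_m), 0.0, 1.0))
        color = (0, int(255 * t), int(255 * (1 - t)))  # BGR: red -> green
        start = 180.0 * i / n
        end = 180.0 * (i + 1) / n
        cv2.ellipse(canvas, center, (radius, radius), 0.0, start, end, color, 4)
```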
Unlike a traditional radar warning map, the warning map provided by the embodiments of the present application can provide omnidirectional warning for the vehicle.
Optionally, in some embodiments, objects within the target viewing-angle range may also be identified based on one or more of the multi-eye images and combined with the distance information within the target viewing-angle range (that distance information may be provided by the multi-eye camera, by the reversing radar, or by fusing the distance information provided by both), so that objects that are too close to the vehicle are marked in the images shown on the display screen. For example, when a pedestrian passes behind the vehicle and is too close to it, the pedestrian may be marked in the image in some way, such as by coloring the pedestrian or applying another warning mark.
The vehicle assisted driving device described above may be a driving recorder; adding the distance-collection function to a driving recorder allows it to be applied in a wider range of scenarios. The driving recorder may be an ordinary driving recorder or a panoramic driving recorder, which the embodiments of the present application do not limit.
The above describes the application of the vehicle assisted driving device provided by the embodiments of the present application to collision warning. The following describes its application to collision recording.
Traditional vehicle assisted driving devices (such as driving recorders) have a collision recording function, but they only start video recording after vehicle vibration is detected and a collision is judged to have occurred; this can therefore be understood as a passive collision recording function. The vehicle assisted driving device provided by the embodiments of the present application has an active collision recording function: it predicts whether the vehicle may collide and, if a collision may occur, starts video recording. The execution flow of the active collision recording function is described in detail below with reference to FIG. 6.
The method of FIG. 6 may be executed by the information processing system 14 of the vehicle assisted driving device 10 and includes steps S62 to S64.
In step S62, the possibility of a collision between the vehicle and an object within the target viewing-angle range is determined according to how the current distance between the object and the vehicle has changed relative to the historical distance.
Step S62 can be implemented in several ways.
As one example, when the current distance is less than the historical distance and the current distance is less than a preset threshold, it is determined that the vehicle may collide with the object within the target viewing-angle range. The current distance being less than the historical distance can indicate that the object is approaching the vehicle, and the current distance being less than the preset threshold can indicate that the object is already very close; at that point it can be determined that a collision between the object and the vehicle is possible.
As another example, it may be determined whether the difference between the historical distance and the current distance is greater than a certain threshold; if so, it can be determined that a collision between the object and the vehicle is possible. Since the sampling interval of the distance between the vehicle and the object is usually fixed, a difference greater than the threshold can indicate that the object is approaching the vehicle quickly, and a collision can then be judged possible.
In step S64, when it is determined that the vehicle may collide with an object within the target viewing-angle range, the multi-eye camera is used for video recording.
The current distance between an object within the target viewing-angle range and the vehicle may be the distance information collected by the information processing system 14 at the current sampling time, and the historical distance may be the distance information collected at one or more previous sampling times. Step S62 may, for example, be implemented as follows: first, the current distance is compared with the historical distance; if an object within the target viewing-angle range is getting closer to the vehicle and the distance between the object and the vehicle is less than a preset threshold, it is determined that a collision between the vehicle and the object is possible, the collision recording function of the vehicle assisted driving device is turned on, and the multi-eye camera is used for recording.
In addition, in some cases the recording function of the multi-eye camera may be turned off; for example, the driver may have turned off the collision recording function of the vehicle assisted driving device. Therefore, when it is determined that the vehicle may collide with an object within the target viewing-angle range, if the recording function of the multi-eye camera is off, it may be forcibly turned on and the multi-eye camera used for recording.
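The active collision recording logic of steps S62 and S64 can be sketched as follows; the camera interface (is_recording/start_recording) is hypothetical and the threshold is an invented example value:

```python
from collections import deque

class CollisionRecorder:
    """Sketch of the active collision recording logic: compare the current
    distance with recent history and start recording when an object is
    approaching and already closer than a threshold."""

    def __init__(self, camera, near_threshold_m: float = 1.5, history: int = 5):
        self.camera = camera
        self.near_threshold_m = near_threshold_m
        self.history = deque(maxlen=history)

    def on_distance_sample(self, current_m: float) -> None:
        # Approaching: current distance is smaller than the last historical one.
        approaching = bool(self.history) and current_m < self.history[-1]
        if approaching and current_m < self.near_threshold_m:
            # Force recording on even if the driver disabled it.
            if not self.camera.is_recording():
                self.camera.start_recording()
        self.history.append(current_m)
```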
The vehicle assisted driving device provided by the present application can also be used to record special events while the vehicle is parked. For example, the information processing system 14 may also be used to determine, based on the multi-eye images, whether a person or an object is approaching the vehicle while it is parked; when it determines that a person or an object is approaching, it turns on the recording function of the multi-eye camera and uses the multi-eye camera to record, improving the safety of the vehicle while parked.
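The application does not specify how approach is detected from the multi-eye images while parked; as a stand-in for illustration, simple frame differencing between consecutive frames can flag an approaching person or object (the computed distances could equally be used):

```python
import cv2

def someone_approaching(prev_gray, curr_gray, motion_thresh: float = 0.02) -> bool:
    """Crude stand-in for parked-vehicle monitoring: flag motion when the
    fraction of changed pixels between consecutive frames exceeds a threshold."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    moving = cv2.threshold(diff, 25, 1, cv2.THRESH_BINARY)[1]
    return moving.mean() > motion_thresh
```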
An embodiment of the present application further provides a vehicle. The vehicle may be the vehicle 1 shown in FIG. 1 and includes the vehicle assisted driving device 10. Optionally, in some embodiments, the multi-eye camera 12 of the vehicle assisted driving device 10 may be installed on the rear window of the vehicle 1, on the license plate, or at a position around the license plate.
An embodiment of the present application further provides an information processing method. The information processing method can be applied to a vehicle assisted driving device of a vehicle. The vehicle assisted driving device includes a multi-eye camera used to capture images of a scene within a target viewing-angle range to the left of, to the right of, or behind the vehicle. The method may include steps S22 and S24 shown in FIG. 2.
Optionally, the information processing method may further include the steps shown in FIG. 3.
Optionally, the information processing method may further include the steps shown in FIG. 4.
Optionally, the information processing method may further include: generating collision warning information according to the current distance between objects within the target viewing-angle range and the vehicle.
Optionally, the collision warning information may include collision warning information corresponding to the various angles within the target viewing-angle range.
Optionally, the information processing method may further include: controlling a display screen to display a warning map representing the collision warning information, where the warning map includes at least one arc, the arc includes points corresponding to the various angles within the target viewing-angle range, and the color of the arc represents the current distance between objects within the target viewing-angle range and the vehicle.
Optionally, the information processing method further includes the steps shown in FIG. 6.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (such as by infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are only illustrative; the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
The above is only the specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed in the present application, and they should all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (31)

  1. A vehicle assisted driving device, characterized by comprising:
    a multi-eye camera for capturing images of a scene within a target viewing-angle range to the left of, to the right of, or behind a vehicle; and
    an information processing system for acquiring the multi-eye images captured by the multi-eye camera, and calculating, based on the multi-eye images, the current distance between objects within the target viewing-angle range and the vehicle.
  2. The vehicle assisted driving device according to claim 1, wherein the information processing system is further configured to receive first distance information output by a ranging module, the first distance information being used to indicate the current distance between objects in a target angular direction within the target viewing-angle range and the vehicle;
    and the calculating, based on the multi-eye images, of the current distance between objects within the target viewing-angle range and the vehicle comprises:
    matching the multi-eye images using the first distance information as a reference to obtain a depth map, such that the difference between second distance information in the depth map and the first distance information is less than a preset threshold, wherein the second distance information is used to indicate the current distance between objects in the target angular direction and the vehicle; and
    calculating, from the depth map, the current distance between objects within the target viewing-angle range and the vehicle.
  3. The vehicle assisted driving device according to claim 2, wherein the matching of the multi-eye images using the first distance information as a reference comprises:
    matching the multi-eye images using the first distance information as a reference when the environment the vehicle is in does not satisfy a preset condition;
    and the information processing system is further configured to:
    match the multi-eye images directly when the environment the vehicle is in satisfies the preset condition.
  4. The vehicle assisted driving device according to claim 1, wherein the information processing system is further configured to receive first distance information output by a ranging module, the first distance information being used to indicate the current distance between objects within the target viewing-angle range and the vehicle;
    and the calculating, based on the multi-eye images, of the current distance between objects within the target viewing-angle range and the vehicle comprises:
    identifying objects within the target viewing-angle range using the multi-eye images;
    using the first distance information as the distance information of the objects within the target viewing-angle range to form a depth map; and
    calculating, from the depth map, the current distance between the objects within the target viewing-angle range and the vehicle.
  5. The vehicle assisted driving device according to any one of claims 2 to 4, wherein the ranging module is a radar sensor installed on the vehicle.
  6. The vehicle assisted driving device according to any one of claims 1 to 5, wherein the information processing system is further configured to generate collision warning information according to the current distance between objects within the target viewing-angle range and the vehicle.
  7. The vehicle assisted driving device according to claim 6, wherein the collision warning information comprises collision warning information corresponding to the various angles within the target viewing-angle range.
  8. The vehicle assisted driving device according to claim 6 or 7, wherein the information processing system is further configured to control a display screen to display a warning map representing the collision warning information, the warning map comprising at least one arc, the arc comprising points corresponding to the various angles within the target viewing-angle range, and the color of the arc being used to represent the current distance between objects within the target viewing-angle range and the vehicle.
  9. The vehicle assisted driving device according to any one of claims 1 to 8, wherein the information processing system is further configured to determine the possibility of a collision between the vehicle and an object within the target viewing-angle range according to the change between the current distance of the object from the vehicle and the historical distance of the object from the vehicle, and to use the multi-eye camera for video recording when it is determined that the vehicle may collide with the object within the target viewing-angle range.
  10. The vehicle assisted driving device according to claim 9, wherein the using of the multi-eye camera for video recording when it is determined that the vehicle may collide with an object within the target viewing-angle range comprises:
    when it is determined that the vehicle may collide with an object within the target viewing-angle range, if the recording function of the multi-eye camera is turned off, forcibly turning on the recording function of the multi-eye camera and using the multi-eye camera for video recording.
  11. The vehicle assisted driving device according to claim 9 or 10, wherein the determining of the possibility of a collision between the vehicle and an object within the target viewing-angle range according to the change between the current distance and the historical distance comprises:
    determining that the vehicle may collide with the object within the target viewing-angle range when the current distance is less than the historical distance and the current distance is less than a preset threshold.
  12. The vehicle assisted driving device according to any one of claims 1 to 11, wherein the information processing system is further configured to determine, based on the multi-eye images, whether a person or an object is approaching the vehicle when the vehicle is parked, and, when it is determined that a person or an object is approaching the vehicle, to turn on the recording function of the multi-eye camera and use the multi-eye camera for video recording.
  13. The vehicle assisted driving device according to any one of claims 1 to 12, wherein the multi-eye camera comprises two cameras for capturing color images, or comprises one camera for capturing color images and two cameras for capturing grayscale images.
  14. The vehicle assisted driving device according to any one of claims 1 to 13, wherein the vehicle assisted driving device is a driving recorder.
  15. The vehicle assisted driving device according to claim 14, wherein the driving recorder is a panoramic driving recorder.
  16. A vehicle, characterized by comprising the vehicle assisted driving device according to any one of claims 1 to 15.
  17. The vehicle according to claim 16, wherein the multi-eye camera is installed on a rear window of the vehicle, on a license plate, or at a position around the license plate.
  18. An information processing method, wherein the method is applied to a vehicle assisted driving device of a vehicle, the vehicle assisted driving device comprising a multi-eye camera used to capture images of a scene within a target viewing-angle range to the left of, to the right of, or behind the vehicle;
    the method comprising:
    acquiring the multi-eye images captured by the multi-eye camera; and
    calculating, based on the multi-eye images, the current distance between objects within the target viewing-angle range and the vehicle.
  19. The method according to claim 18, further comprising:
    receiving first distance information output by a ranging module, the first distance information being used to indicate the current distance between objects in a target angular direction within the target viewing-angle range and the vehicle;
    wherein the calculating, based on the multi-eye images, of the current distance between objects within the target viewing-angle range and the vehicle comprises:
    matching the multi-eye images using the first distance information as a reference to obtain a depth map, such that the difference between second distance information in the depth map and the first distance information is less than a preset threshold, wherein the second distance information is used to indicate the current distance between objects in the target angular direction and the vehicle; and
    calculating, from the depth map, the current distance between objects within the target viewing-angle range and the vehicle.
  20. The method according to claim 19, wherein the matching of the multi-eye images using the first distance information as a reference comprises:
    matching the multi-eye images using the first distance information as a reference when the environment the vehicle is in does not satisfy a preset condition;
    and the method further comprises:
    matching the multi-eye images directly when the environment the vehicle is in satisfies the preset condition.
  21. The method according to claim 18, further comprising:
    receiving first distance information output by a ranging module, the first distance information being used to indicate the current distance between objects within the target viewing-angle range and the vehicle;
    wherein the calculating, based on the multi-eye images, of the current distance between objects within the target viewing-angle range and the vehicle comprises:
    identifying objects within the target viewing-angle range using the multi-eye images;
    using the first distance information as the distance information of the objects within the target viewing-angle range to form a depth map; and
    calculating, from the depth map, the current distance between the objects within the target viewing-angle range and the vehicle.
  22. The method according to any one of claims 19 to 21, wherein the ranging module is a radar sensor installed on the vehicle.
  23. The method according to any one of claims 18 to 22, further comprising:
    generating collision warning information according to the current distance between objects within the target viewing-angle range and the vehicle.
  24. The method according to claim 23, wherein the collision warning information comprises collision warning information corresponding to the various angles within the target viewing-angle range.
  25. The method according to claim 23 or 24, further comprising:
    controlling a display screen to display a warning map representing the collision warning information, the warning map comprising at least one arc, the arc comprising points corresponding to the various angles within the target viewing-angle range, and the color of the arc being used to represent the current distance between objects within the target viewing-angle range and the vehicle.
  26. The method according to any one of claims 18 to 25, further comprising:
    determining the possibility of a collision between the vehicle and an object within the target viewing-angle range according to the change between the current distance of the object from the vehicle and the historical distance of the object from the vehicle; and
    using the multi-eye camera for video recording when it is determined that the vehicle may collide with the object within the target viewing-angle range.
  27. The method according to claim 26, wherein the using of the multi-eye camera for video recording when it is determined that the vehicle may collide with an object within the target viewing-angle range comprises:
    when it is determined that the vehicle may collide with an object within the target viewing-angle range, if the recording function of the multi-eye camera is turned off, forcibly turning on the recording function of the multi-eye camera and using the multi-eye camera for video recording.
  28. The method according to claim 26 or 27, wherein the determining of the possibility of a collision between the vehicle and an object within the target viewing-angle range according to the change between the current distance and the historical distance comprises:
    determining that the vehicle may collide with the object within the target viewing-angle range when the current distance is less than the historical distance and the current distance is less than a preset threshold.
  29. The method according to any one of claims 18 to 28, further comprising:
    determining, based on the multi-eye images, whether a person or an object is approaching the vehicle when the vehicle is parked; and
    when it is determined that a person or an object is approaching the vehicle, turning on the recording function of the multi-eye camera and using the multi-eye camera for video recording.
  30. The method according to any one of claims 18 to 29, wherein the multi-eye camera comprises two cameras for capturing color images, or comprises one camera for capturing color images and two cameras for capturing grayscale images.
  31. The method according to any one of claims 18 to 30, wherein the vehicle assisted driving device is a driving recorder.
PCT/CN2018/107517 2018-09-26 2018-09-26 Vehicle assisted driving device, vehicle, and information processing method WO2020061794A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880011498.8A 2018-09-26 2018-09-26 Vehicle assisted driving device, vehicle, and information processing method
PCT/CN2018/107517 2018-09-26 2018-09-26 Vehicle assisted driving device, vehicle, and information processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/107517 WO2020061794A1 (zh) 2018-09-26 2018-09-26 车辆辅助驾驶装置、车辆以及信息处理方法

Publications (1)

Publication Number Publication Date
WO2020061794A1 true WO2020061794A1 (zh) 2020-04-02

Family

ID=68074282

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/107517 WO2020061794A1 (zh) 2018-09-26 2018-09-26 车辆辅助驾驶装置、车辆以及信息处理方法

Country Status (2)

Country Link
CN (1) CN110312639A (zh)
WO (1) WO2020061794A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205686A (zh) * 2021-06-04 2021-08-03 华中科技大学 Aftermarket 360-degree panoramic wireless safety assistance system for motor vehicles
CN113609945A (zh) * 2021-07-27 2021-11-05 深圳市圆周率软件科技有限责任公司 Image detection method and vehicle
CN113805566A (zh) * 2021-09-17 2021-12-17 南斗六星系统集成有限公司 Detection method and system for a controller integrating an assisted driving system
CN114407928A (zh) * 2022-01-24 2022-04-29 中国第一汽车股份有限公司 Vehicle avoidance control method and vehicle avoidance control apparatus
CN114612762A (zh) * 2022-03-15 2022-06-10 首约科技(北京)有限公司 Smart device supervision method

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727064B (zh) * 2020-05-26 2024-03-22 北京罗克维尔斯科技有限公司 Method and apparatus for determining the field of view of a camera
CN111986248B (zh) * 2020-08-18 2024-02-09 东软睿驰汽车技术(沈阳)有限公司 Multi-view visual perception method and apparatus, and autonomous driving vehicle
CN112184949A (zh) * 2020-09-29 2021-01-05 广州星凯跃实业有限公司 Automobile image monitoring method, apparatus, device, and storage medium
JP7388338B2 (ja) * 2020-10-30 2023-11-29 トヨタ自動車株式会社 Driving support system
CN112937486B (zh) * 2021-03-16 2022-09-02 吉林大学 Vehicle-mounted online monitoring and driving assistance system and method for road water accumulation
CN115331483A (zh) * 2021-05-11 2022-11-11 宗盈国际科技股份有限公司 Intelligent motorcycle warning device and system
CN114913626A (zh) * 2022-05-07 2022-08-16 中汽创智科技有限公司 Data processing method, apparatus, device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080024607A1 (en) * 2006-07-26 2008-01-31 Toyota Jidosha Kabushiki Kaisha Image display apparatus and method
US20080043113A1 (en) * 2006-08-21 2008-02-21 Sanyo Electric Co., Ltd. Image processor and visual field support device
CN101763640A (zh) * 2009-12-31 2010-06-30 无锡易斯科电子技术有限公司 Online calibration processing method for a vehicle-mounted multi-camera surround-view system
CN106225764A (zh) * 2016-07-01 2016-12-14 北京小米移动软件有限公司 Ranging method based on a binocular camera in a terminal, and terminal
CN107146247A (zh) * 2017-05-31 2017-09-08 西安科技大学 Binocular-camera-based automobile assisted driving system and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102881058B (zh) * 2012-06-19 2015-04-08 浙江吉利汽车研究院有限公司杭州分公司 Automobile scrape warning and evidence recording system
US9704403B2 (en) * 2015-12-03 2017-07-11 Institute For Information Industry System and method for collision avoidance for vehicle
CN106355675A (zh) * 2016-08-31 2017-01-25 重庆市朗信智能科技开发有限公司 OBD hidden automobile driving recorder
CN108108680A (zh) * 2017-12-13 2018-06-01 长安大学 Binocular-vision-based rear vehicle recognition and ranging method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080024607A1 (en) * 2006-07-26 2008-01-31 Toyota Jidosha Kabushiki Kaisha Image display apparatus and method
US20080043113A1 (en) * 2006-08-21 2008-02-21 Sanyo Electric Co., Ltd. Image processor and visual field support device
CN101763640A (zh) * 2009-12-31 2010-06-30 无锡易斯科电子技术有限公司 车载多目摄像机环视系统的在线标定处理方法
CN106225764A (zh) * 2016-07-01 2016-12-14 北京小米移动软件有限公司 基于终端中双目摄像头的测距方法及终端
CN107146247A (zh) * 2017-05-31 2017-09-08 西安科技大学 基于双目摄像头的汽车辅助驾驶系统及方法

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205686A (zh) * 2021-06-04 2021-08-03 华中科技大学 Aftermarket 360-degree panoramic wireless safety assistance system for motor vehicles
CN113205686B (zh) 2021-06-04 2024-05-17 华中科技大学 Aftermarket 360-degree panoramic wireless safety assistance system for motor vehicles
CN113609945A (zh) 2021-07-27 2021-11-05 深圳市圆周率软件科技有限责任公司 Image detection method and vehicle
CN113609945B (zh) 2021-07-27 2023-06-13 圆周率科技(常州)有限公司 Image detection method and vehicle
CN113805566A (zh) 2021-09-17 2021-12-17 南斗六星系统集成有限公司 Detection method and system for a controller integrating an assisted driving system
CN114407928A (zh) 2022-01-24 2022-04-29 中国第一汽车股份有限公司 Vehicle avoidance control method and vehicle avoidance control apparatus
CN114612762A (zh) 2022-03-15 2022-06-10 首约科技(北京)有限公司 Smart device supervision method

Also Published As

Publication number Publication date
CN110312639A (zh) 2019-10-08

Similar Documents

Publication Publication Date Title
WO2020061794A1 (zh) Vehicle assisted driving device, vehicle, and information processing method
CN110316182B (zh) Automatic parking system and method
US20210365696A1 (en) Vehicle Intelligent Driving Control Method and Device and Storage Medium
CN107264402B (zh) Around view providing apparatus and vehicle including the same
EP2955915B1 (en) Around view provision apparatus and vehicle including the same
EP2163428B1 (en) Intelligent driving assistant systems
EP1961613B1 (en) Driving support method and driving support device
CN107122770B (zh) Multi-camera system, intelligent driving system, automobile, method, and storage medium
EP1892149A1 (en) Method for imaging the surrounding of a vehicle and system therefor
TW201144115A (en) Dual vision front vehicle safety alarm device and method thereof
US20100054580A1 (en) Image generation device, image generation method, and image generation program
CN104802710B (zh) Intelligent automobile reversing assistance system and assistance method
US11180082B2 (en) Warning output device, warning output method, and warning output system
KR101986734B1 (ko) Vehicle driving assistance device and safe driving guidance method thereof
US11999370B2 (en) Automated vehicle system
CN111835998B (zh) Beyond-line-of-sight panoramic image acquisition method, apparatus, medium, device, and system
JP6816769B2 (ja) Image processing device and image processing method
CN107826092A (zh) Advanced driving assistance system and method, device, program, and medium
US20190318178A1 (en) Method, system and device of obtaining 3d-information of objects
WO2019193928A1 (ja) Vehicle system, spatial region estimation method, and spatial region estimation device
KR20200047257A (ko) Vehicle surroundings image display system and vehicle surroundings image display method
WO2022160232A1 (zh) Detection method and apparatus, and vehicle
CN113459951A (zh) Method and apparatus for displaying the environment outside a vehicle, vehicle, device, and storage medium
CN103377372A (zh) Method for dividing overlapping regions of surround-view composite images and method for representing surround-view composite images
WO2023284748A1 (zh) Assisted driving system and vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18935183

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18935183

Country of ref document: EP

Kind code of ref document: A1