WO2022061999A1 - A-pillar imaging method - Google Patents

A-pillar imaging method

Info

Publication number
WO2022061999A1
Authority
WO
WIPO (PCT)
Prior art keywords
pillar
driver
image
monitoring camera
brow center
Prior art date
Application number
PCT/CN2020/121744
Other languages
English (en)
French (fr)
Inventor
盛大宁
张祺
戴大力
李晨轩
Original Assignee
浙江合众新能源汽车有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江合众新能源汽车有限公司
Publication of WO2022061999A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/202: characterised by the type of display used, displaying a blind spot scene on the vehicle part responsible for the blind spot
    • B60R 2300/301: characterised by the type of image processing, combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision
    • B60R 2300/303: characterised by the type of image processing, using joined images, e.g. multiple camera images
    • B60R 2300/802: characterised by the intended use of the viewing arrangement, for monitoring and displaying vehicle exterior blind spot views

Definitions

  • The invention belongs to the technical field of driving images and in particular relates to an A-pillar imaging method.
  • The A-pillar is the pillar between the front windshield and the front door on the vehicle body.
  • The A-pillar gives the vehicle body higher stability and rigidity and plays an important role in protecting the safety of the driver and passengers.
  • At the same time, the A-pillar creates a visual blind spot. Currently, the blind spot can be eliminated by mounting cameras on the left and right rear-view mirrors and displaying the A-pillar blind-zone scene in real time on a screen on the inner side of the A-pillar.
  • The general practice is to crop the camera image to some extent and display it on the screen.
  • However, because of factors such as the driver's height, sitting posture, and the distance of obstacles, the on-screen picture may deviate considerably from what the human eye sees in shape, size, and other respects.
  • To achieve a "transparent" effect, it is necessary, on the basis of the current software and hardware, to monitor the driver's line-of-sight trajectory and the distance of obstacles in the A-pillar blind zone, and to adjust the displayed picture dynamically according to the line of sight, reducing the influence of factors such as the driver's sitting posture and the distance of obstacles.
  • Invention patent application CN201910440232.1 discloses a method for an auxiliary vision system for the A-pillar blind zone based on eye-tracking technology, comprising the following steps: S1, an eye-tracking unit locates the driver's eyes and sends the positioning information to an ECU; S2, the ECU receives the positioning information and controls the movement of an external camera unit accordingly to capture road-condition information in the A-pillar blind zone; S3, under the ECU's control, the external camera unit follows the driver's gaze and captures the road-condition information of the blind zone; S4, an in-cabin display unit displays the road-condition information captured by the external camera unit. Although that method uses eye tracking to obtain the blind-zone image, the color, distortion, and brightness of the obtained image cannot closely match what the human eye sees.
  • In view of the problems of the prior art, the present invention proposes an A-pillar imaging method that makes the color, distortion, and brightness of the picture displayed on the A-pillar flexible display match what the human eye sees, thereby achieving a "transparent" A-pillar effect.
  • An A-pillar imaging method, characterized in that it is implemented on the basis of an A-pillar imaging system comprising two A-pillar cameras, a brow-center monitoring camera, an A-pillar flexible display, and a control device, the brow-center monitoring camera being mounted on the steering column of the steering wheel;
  • The method is applied to the control device and includes:
  • Step S01: based on the in-vehicle image captured by the brow-center monitoring camera, calculate the coordinates of the driver's head position in the vehicle coordinate system;
  • Step S02: track the driver's eyes with the brow-center monitoring camera, calculate the three-dimensional coordinates of the driver's brow center in the world coordinate system, and obtain the driver's line-of-sight trajectory;
  • Step S03: based on the front-of-vehicle images captured by the two A-pillar cameras, construct three-dimensional image information containing the obstacles in the A-pillar blind zone using a three-dimensional reconstruction algorithm;
  • Step S04: project the three-dimensional image information from step S03 onto the normal plane of the driver's line of sight from step S02, obtaining an image identical to what the human eye would see, and display it on the A-pillar flexible display.
  • The A-pillar cameras are arranged on the two A-pillar sides to capture the scene in front of the vehicle, and the brow-center monitoring camera is arranged inside the vehicle to capture the in-vehicle image and track the driver's eyes.
  • The color, distortion, and brightness of the picture finally displayed match what the human eye sees, achieving the "transparent" A-pillar effect.
  • Step S01 specifically includes:
  • Step S11: based on the in-vehicle image captured by the brow-center monitoring camera, calculate the position, on the camera's picture, of a reference point located on the driver-side B-pillar;
  • Step S12: based on the reference point's position in the vehicle coordinate system and its position on the camera's picture, calculate the coordinates of the brow-center monitoring camera in the vehicle coordinate system;
  • Step S13: based on the coordinates of the brow-center monitoring camera in the vehicle coordinate system, calculate the coordinates of the driver's head position in the vehicle coordinate system.
  • The reference point is a point on the driver-side B-pillar that can be captured by the brow-center monitoring camera without being blocked by the driver.
  • The brow-center monitoring camera is a binocular camera, and step S02 specifically includes:
  • Step S21: track the driver's eyes with the binocular camera and calculate the three-dimensional coordinates of the driver's brow center in the world coordinate system;
  • Step S22: from the three-dimensional coordinates of the driver's brow center, derive the driver's line of sight and obtain the line-of-sight trajectory.
  • Step S03 specifically includes: constructing, with a three-dimensional reconstruction algorithm, three-dimensional image information containing the obstacles in the A-pillar blind zone, based on the front-of-vehicle images and depth data captured by the two A-pillar cameras; the depth data include the imaging size on the A-pillar flexible display, calculated from the focal length of the human eye, the focal length of the A-pillar cameras, and the distance of the obstacle.
  • The obstacle distance is obtained as follows: based on the front-of-vehicle images captured by the two A-pillar cameras, the obstacle distance is calculated with a single-view depth estimation algorithm.
  • The A-pillar imaging system further includes a ranging sensor for obtaining the distance of obstacles beyond the A-pillar.
  • Step S04 specifically includes:
  • Step S41: obtain, in real time, the normal plane of the driver's line of sight from step S02 and use it as the projection plane;
  • Step S42: project the three-dimensional image information from step S03 onto the projection plane, obtaining an image identical to what the human eye would see, and display it on the A-pillar flexible display.
  • Step S04 further includes: before displaying the image identical to human-eye imaging on the A-pillar flexible display, cropping a portion of it.
  • The cropping step includes: when the detected real-time vehicle speed does not exceed a speed threshold, cropping from the obtained image the maximum image lying within the range visible to the human eye; when the detected real-time vehicle speed exceeds the speed threshold, cropping a partial image lying within that range, the partial image being smaller than the maximum image.
  • FIG. 1 is a flowchart of the A-pillar imaging method of the present invention;
  • FIG. 2 is a schematic diagram of the brow-center monitoring camera arranged in the vehicle, where point X is the reference point;
  • FIG. 3 is a schematic diagram of imaging by optical devices with different focal lengths;
  • FIG. 4 is a simulated view of an obstacle from the viewpoint of the A-pillar camera;
  • FIG. 5 is a simulated view of the obstacle from the viewpoint of the human eye;
  • FIG. 6 is a schematic diagram of the three-dimensional reconstruction;
  • FIG. 7 is a schematic diagram of the fields of view of the A-pillar camera and of the human eye occluded by the A-pillar.
  • The A-pillar imaging method of the present invention is implemented on the basis of an A-pillar imaging system.
  • The A-pillar imaging system includes two A-pillar cameras, a brow-center monitoring camera, an A-pillar flexible display, and a control device.
  • The two A-pillar cameras are arranged on the two A-pillar sides, for example at the top of the A-pillars or on the left and right rear-view mirrors.
  • The brow-center monitoring camera is mounted on the steering column of the steering wheel.
  • The images captured by the two A-pillar cameras and the brow-center monitoring camera are sent to the control device, which derives an image matching human vision and displays it on the A-pillar flexible display.
  • The A-pillar flexible display is an OLED display.
  • The A-pillar imaging method of the present invention is applied to the control device and includes:
  • Step S01: based on the in-vehicle image captured by the brow-center monitoring camera, calculate the coordinates of the driver's head position in the vehicle coordinate system;
  • Step S02: track the driver's eyes with the brow-center monitoring camera, calculate the three-dimensional coordinates of the driver's brow center in the world coordinate system, and obtain the driver's line-of-sight trajectory;
  • Step S03: based on the front-of-vehicle images captured by the two A-pillar cameras, construct three-dimensional image information containing the obstacles in the A-pillar blind zone using a three-dimensional reconstruction algorithm;
  • Step S04: project the three-dimensional image information from step S03 onto the normal plane of the driver's line of sight from step S02, obtain an image identical to what the human eye would see, and display it on the A-pillar flexible display.
  • The order of steps S02 and S03 is not limited to the above; they may be performed simultaneously, or step S03 may be performed before step S02.
  • Step S01 specifically includes:
  • Step S11: based on the in-vehicle image captured by the brow-center monitoring camera, calculate the position, on the camera's picture, of a reference point located on the driver-side B-pillar;
  • Step S12: based on the reference point's position in the vehicle coordinate system and its position on the camera's picture, calculate the coordinates of the brow-center monitoring camera in the vehicle coordinate system;
  • Step S13: based on the coordinates of the brow-center monitoring camera in the vehicle coordinate system, calculate the coordinates of the driver's head position in the vehicle coordinate system.
  • Because the steering column typically offers four mechanical adjustments (up, down, fore, aft), the coordinates of the column-mounted camera in the vehicle coordinate system cannot be fixed in advance. Step S01 therefore uses the position of the reference point to back-calculate the position of the brow-center monitoring camera and, from that, obtains the driver's head position.
  • The camera on the steering column can see a certain reference point X on the driver-side B-pillar without it being blocked by the driver.
  • An image algorithm detects the position of this fixed mark on the camera picture; from the known position of reference point X in the vehicle coordinate system, the camera's coordinates in the vehicle coordinate system are back-calculated, and finally the driver's head position in the vehicle coordinate system is obtained.
  • Because of the driver's height, sitting posture, and similar factors, the driver's viewing angle is not fixed, so the scene seen past the A-pillar differs from driver to driver; the picture on the inner A-pillar screen must be transformed accordingly, or it will be noticeably distorted or offset relative to what the eye sees. Step S02 is used to solve this problem.
  • The brow-center monitoring camera is a binocular camera, and step S02 specifically includes:
  • Step S21: track the driver's eyes with the binocular camera and calculate the three-dimensional coordinates of the driver's brow center in the world coordinate system;
  • Step S22: from the three-dimensional coordinates of the driver's brow center, derive the driver's line of sight and obtain the line-of-sight trajectory.
  • The binocular camera tracks the driver's eyes, the three-dimensional coordinates of the driver's brow center in the world coordinate system are computed, and the line-of-sight trajectory is obtained; the imaging on the A-pillar screen is adjusted in real time according to that trajectory, ensuring that the angle and depth of the camera image match the real world as the driver sees it, with no distortion or offset, so the driver feels as if looking out through a wide "window".
  • Step S03 includes: constructing, with a three-dimensional reconstruction algorithm, three-dimensional image information containing the obstacles in the A-pillar blind zone, based on the front-of-vehicle images and depth data captured by the two A-pillar cameras.
  • The depth data include the imaging size on the A-pillar flexible display, calculated from the focal length of the human eye, the focal length of the A-pillar camera, and the obstacle distance.
  • Through preprocessing of the two A-pillar camera images and depth data, point-cloud computation, feature extraction, point-cloud registration, data fusion, and surface generation, the three-dimensional information of the obstacles in the blind zone is restored.
  • According to the cameras' focal lengths, mounting positions and angles, and the direction of the driver's line of sight, the three-dimensional world information is perspective-transformed onto the plane of human vision, yielding an image identical to what the human eye would see on the A-pillar screen (see FIG. 6).
  • FIG. 3 is a schematic diagram of imaging by two optical devices with different focal lengths.
  • The larger the focal length of an optical device, the larger the change in image size.
  • The human eye can be treated as an optical device with a certain focal length.
  • When the camera's focal length differs from the eye's, objects at different distances are imaged by the two at different size ratios.
  • In FIG. 3, the same object is imaged by the upper and lower devices at a ratio of 27:59 at object distance 1 and of 55:195 at object distance 2. The camera's image of objects at different distances must therefore be scaled by different factors on the A-pillar display to match the human visual system, so that shape and size remain consistent.
  • A ranging method estimates the obstacle distance, and the imaging size on the A-pillar screen is then converted from the focal length of the human eye, the focal length of the camera, and the obstacle distance.
  • The significance of this module is to provide the depth-data support the "three-dimensional reconstruction" algorithm needs, giving good dynamic behavior at different distances.
  • The obstacle distance is obtained as follows: based on the front-of-vehicle images captured by the two A-pillar cameras, the obstacle distance is calculated with a single-view depth estimation algorithm.
  • Alternatively, the A-pillar imaging system further includes a ranging sensor for obtaining the distance of obstacles beyond the A-pillar; the obstacle distance is then obtained by the ranging sensor, which is not limited to millimeter-wave radar, lidar, ultrasonic radar, or a depth camera.
  • Step S04 specifically includes:
  • Step S41: obtain, in real time, the normal plane of the driver's line of sight from step S02 and use it as the projection plane;
  • Step S42: project the three-dimensional image information from step S03 onto the projection plane, obtaining an image identical to what the human eye would see, and display it on the A-pillar flexible display.
  • During the perspective transformation, the algorithm must project the three-dimensional world information onto the plane on which the driver's eyes form their image; otherwise the "transparent" effect cannot be achieved. Drivers often turn their heads and adjust their posture while driving, so the eyes' imaging plane may change from moment to moment; the plane used in the perspective transformation is therefore adjusted in real time.
  • Step S41 obtains the driver's line of sight detected in real time in step S02 and adjusts the projection plane of the perspective transformation accordingly.
  • The three-dimensional image information from step S03 is then projected onto the projection plane, yielding the scene as seen by the human eye and achieving a true "transparent" effect.
  • Step S04 further includes: before displaying the image identical to human-eye imaging on the A-pillar flexible display, cropping a portion of it.
  • The cropping step includes: when the detected real-time vehicle speed does not exceed a speed threshold, cropping from the obtained image the maximum image lying within the range visible to the human eye; when the detected real-time vehicle speed exceeds the speed threshold, cropping a partial image lying within that range, the partial image being smaller than the maximum image.
  • The above steps improve the clarity of the cropped portion of the camera image displayed on the inner A-pillar screen, improving the user experience. (A schematic sketch of the overall S01-S04 loop follows this list.)
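To make the data flow between the four steps concrete, here is a minimal runnable Python skeleton of the per-frame loop. Every helper below is a stand-in assumption (stubbed with fixed illustrative values), not the patent's implementation; only the flow S01 to S04 mirrors the method described above.

```python
import numpy as np

# Stand-in stubs for the four steps; all numeric values are illustrative.
def locate_head_in_vehicle_frame(cabin_frame):            # step S01
    return np.array([0.30, -0.40, 1.10])                  # head position (m)

def track_gaze(cabin_frame, head_pos):                    # step S02
    brow_center = head_pos + np.array([0.0, 0.0, 0.10])   # brow center (m)
    gaze_dir = np.array([1.0, -0.2, 0.0])                 # line-of-sight direction
    return brow_center, gaze_dir

def reconstruct_blind_zone(img_left, img_right):          # step S03
    return np.array([[3.0, -1.0, 0.5],                    # toy obstacle point cloud
                     [5.0, -1.5, 0.8]])

def project_to_view_plane(cloud, eye, gaze_dir):          # step S04
    g = gaze_dir / np.linalg.norm(gaze_dir)
    rel = cloud - eye
    depth = rel @ g                                       # distance along the gaze
    return rel[:, :2] / depth[:, None]                    # crude perspective divide

cabin = img_l = img_r = np.zeros((720, 1280), np.uint8)   # placeholder captures
head = locate_head_in_vehicle_frame(cabin)
brow, gaze = track_gaze(cabin, head)
cloud = reconstruct_blind_zone(img_l, img_r)
print(project_to_view_plane(cloud, brow, gaze))           # sent to the A-pillar screen
```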

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

An A-pillar imaging method, belonging to the technical field of driving images. The method is implemented on the basis of an A-pillar imaging system and comprises: step S01, calculating, based on the in-vehicle image captured by the brow-center monitoring camera, the coordinates of the driver's head position in the vehicle coordinate system, the brow-center monitoring camera being mounted on the steering column of the steering wheel; step S02, tracking the driver's eyes with the brow-center monitoring camera, calculating the three-dimensional coordinates of the driver's brow center in the world coordinate system, and obtaining the driver's line-of-sight trajectory; step S03, constructing, based on the front-of-vehicle images captured by the two A-pillar cameras, three-dimensional image information containing the obstacles in the A-pillar blind zone using a three-dimensional reconstruction algorithm; step S04, projecting the three-dimensional image information from step S03 onto the normal plane of the driver's line of sight from step S02, obtaining an image identical to what the human eye would see, and displaying it on the A-pillar flexible display. The method achieves a transparent A-pillar effect, dynamically adjusting the displayed picture according to the driver's gaze and ensuring that the display matches human vision.

Description

A-pillar imaging method
Technical Field
The present invention belongs to the technical field of driving images and in particular relates to an A-pillar imaging method.
Background Art
The A-pillar is the pillar between the front windshield and the front door on the vehicle body. It gives the body higher stability and rigidity and plays an important role in protecting the safety of the driver and passengers. At the same time, its presence creates the A-pillar visual blind zone. Currently, the blind zone can be eliminated by mounting cameras on the left and right rear-view mirrors and displaying the blind-zone scene in real time on a screen on the inner side of the A-pillar.
The general practice is to crop the camera image to some extent and display it on the screen. However, because of factors such as the driver's height, sitting posture, and obstacle distance, the on-screen picture may deviate considerably in shape, size, and other respects from what the human eye sees. To achieve a "transparent" effect it is necessary, on the basis of the current software and hardware, to monitor the driver's line-of-sight trajectory and the distance of obstacles in the A-pillar blind zone, and to adjust the displayed picture dynamically according to the line of sight, reducing the influence of the driver's sitting posture, obstacle distance, and similar factors.
Invention patent application CN201910440232.1 discloses a method for an auxiliary vision system for the A-pillar blind zone based on eye-tracking technology, comprising the following steps: S1, an eye-tracking unit locates the driver's eyes and sends the positioning information to an ECU; S2, the ECU receives the positioning information and controls the movement of an external camera unit accordingly to capture road-condition information in the A-pillar blind zone; S3, under the ECU's control, the external camera unit follows the driver's gaze and captures the road-condition information of the blind zone; S4, an in-cabin display unit displays the road-condition information captured by the external camera unit. Although that invention uses eye tracking to obtain the blind-zone image, the color, distortion, and brightness of the obtained image cannot closely match what the human eye sees.
Technical Problem
In view of the problems of the prior art, the present invention proposes an A-pillar imaging method that makes the color, distortion, and brightness of the picture displayed on the A-pillar flexible display match what the human eye sees, achieving a "transparent" A-pillar effect.
Technical Solution
The present invention is achieved through the following technical solution:
An A-pillar imaging method, characterized in that it is implemented on the basis of an A-pillar imaging system comprising two A-pillar cameras, a brow-center monitoring camera, an A-pillar flexible display, and a control device, the brow-center monitoring camera being mounted on the steering column of the steering wheel; the method is applied to the control device and includes:
Step S01: based on the in-vehicle image captured by the brow-center monitoring camera, calculate the coordinates of the driver's head position in the vehicle coordinate system;
Step S02: track the driver's eyes with the brow-center monitoring camera, calculate the three-dimensional coordinates of the driver's brow center in the world coordinate system, and obtain the driver's line-of-sight trajectory;
Step S03: based on the front-of-vehicle images captured by the two A-pillar cameras, construct three-dimensional image information containing the obstacles in the A-pillar blind zone using a three-dimensional reconstruction algorithm;
Step S04: project the three-dimensional image information from step S03 onto the normal plane of the driver's line of sight from step S02, obtain an image identical to what the human eye would see, and display it on the A-pillar flexible display.
In the present invention, A-pillar cameras arranged on the two A-pillar sides capture the scene in front of the vehicle, and a brow-center monitoring camera inside the vehicle captures the in-vehicle image and tracks the driver's eyes. In this way the driver's line-of-sight trajectory and the distance of obstacles in the A-pillar blind zone are monitored, and the displayed picture is adjusted dynamically according to the line of sight, reducing the influence of the driver's sitting posture, obstacle distance, and similar factors. The picture finally displayed matches what the human eye sees in color, distortion, and brightness, achieving the "transparent" A-pillar effect.
Preferably, step S01 specifically includes:
Step S11: based on the in-vehicle image captured by the brow-center monitoring camera, calculate the position on the camera's picture of a reference point located on the driver-side B-pillar;
Step S12: based on the reference point's position in the vehicle coordinate system and its position on the camera's picture, calculate the coordinates of the brow-center monitoring camera in the vehicle coordinate system;
Step S13: based on the coordinates of the brow-center monitoring camera in the vehicle coordinate system, calculate the coordinates of the driver's head position in the vehicle coordinate system.
Preferably, the reference point is a point on the driver-side B-pillar that can be captured by the brow-center monitoring camera without being blocked by the driver.
Preferably, the brow-center monitoring camera is a binocular camera, and step S02 specifically includes:
Step S21: track the driver's eyes with the binocular camera and calculate the three-dimensional coordinates of the driver's brow center in the world coordinate system;
Step S22: from the three-dimensional coordinates of the driver's brow center, derive the driver's line of sight and obtain the line-of-sight trajectory.
Preferably, step S03 specifically includes: constructing, with a three-dimensional reconstruction algorithm, three-dimensional image information containing the obstacles in the A-pillar blind zone, based on the front-of-vehicle images and depth data captured by the two A-pillar cameras; the depth data include the imaging size on the A-pillar flexible display, calculated from the focal length of the human eye, the focal length of the A-pillar cameras, and the obstacle distance.
Preferably, the obstacle distance is obtained as follows: based on the front-of-vehicle images captured by the two A-pillar cameras, the obstacle distance is calculated with a single-view depth estimation algorithm.
Preferably, the A-pillar imaging system further includes a ranging sensor for obtaining the distance of obstacles beyond the A-pillar.
Preferably, step S04 specifically includes:
Step S41: obtain, in real time, the normal plane of the driver's line of sight from step S02 and use it as the projection plane;
Step S42: project the three-dimensional image information from step S03 onto the projection plane, obtain an image identical to what the human eye would see, and display it on the A-pillar flexible display.
Preferably, step S04 further includes: before displaying the image identical to human-eye imaging on the A-pillar flexible display, cropping a portion of it.
Preferably, the cropping step includes:
when the detected real-time vehicle speed does not exceed a speed threshold, cropping from the obtained image the maximum image lying within the range visible to the human eye;
when the detected real-time vehicle speed exceeds the speed threshold, cropping from the obtained image a partial image lying within that range, the partial image being smaller than the maximum image.
Advantageous Effects
The A-pillar imaging method:
(1) matches the color, distortion, and brightness of the displayed picture to what the human eye sees, achieving the "transparent" A-pillar effect;
(2) obtains the driver's line-of-sight trajectory by binocular brow-center tracking and adjusts the A-pillar screen imaging in real time accordingly, ensuring that the angle and depth of the camera image match the real world as the driver sees it, with no distortion or offset, so the driver feels as if looking out through a wide "window";
(3) obtains the scene as seen by the human eye through the three-dimensional reconstruction algorithm and a viewpoint-transformation algorithm based on the brow-center position, achieving a true "transparent" effect.
Brief Description of the Drawings
FIG. 1 is a flowchart of the A-pillar imaging method of the present invention;
FIG. 2 is a schematic diagram of the brow-center monitoring camera arranged in the vehicle, where point X is the reference point;
FIG. 3 is a schematic diagram of imaging by optical devices with different focal lengths;
FIG. 4 is a simulated view of an obstacle from the viewpoint of the A-pillar camera;
FIG. 5 is a simulated view of the obstacle from the viewpoint of the human eye;
FIG. 6 is a schematic diagram of the three-dimensional reconstruction;
FIG. 7 is a schematic diagram of the fields of view of the A-pillar camera and of the human eye occluded by the A-pillar;
X: reference point; 5: camera position; 6: human-eye position; 7: camera axis position.
Best Mode for Carrying Out the Invention
The technical solution of the present invention is further described below through specific embodiments with reference to the drawings, but the invention is not limited to these embodiments.
The A-pillar imaging method of the present invention is implemented on the basis of an A-pillar imaging system. The system comprises two A-pillar cameras, a brow-center monitoring camera, an A-pillar flexible display, and a control device. The two A-pillar cameras are arranged on the two A-pillar sides, for example at the top of the A-pillars or on the left and right rear-view mirrors. The brow-center monitoring camera is mounted on the steering column of the steering wheel. The images captured by the two A-pillar cameras and the brow-center monitoring camera are sent to the control device, which derives an image matching human vision and displays it on the A-pillar flexible display. The A-pillar flexible display is an OLED display.
As shown in FIG. 1, the A-pillar imaging method of the present invention is applied to the control device and includes:
Step S01: based on the in-vehicle image captured by the brow-center monitoring camera, calculate the coordinates of the driver's head position in the vehicle coordinate system;
Step S02: track the driver's eyes with the brow-center monitoring camera, calculate the three-dimensional coordinates of the driver's brow center in the world coordinate system, and obtain the driver's line-of-sight trajectory;
Step S03: based on the front-of-vehicle images captured by the two A-pillar cameras, construct three-dimensional image information containing the obstacles in the A-pillar blind zone using a three-dimensional reconstruction algorithm;
Step S04: project the three-dimensional image information from step S03 onto the normal plane of the driver's line of sight from step S02, obtain an image identical to what the human eye would see, and display it on the A-pillar flexible display.
The order of steps S02 and S03 is not limited to the above; they may be performed simultaneously, or step S03 may be performed before step S02.
Step S01 specifically includes:
Step S11: based on the in-vehicle image captured by the brow-center monitoring camera, calculate the position on the camera's picture of a reference point located on the driver-side B-pillar;
Step S12: based on the reference point's position in the vehicle coordinate system and its position on the camera's picture, calculate the coordinates of the brow-center monitoring camera in the vehicle coordinate system;
Step S13: based on the coordinates of the brow-center monitoring camera in the vehicle coordinate system, calculate the coordinates of the driver's head position in the vehicle coordinate system.
The driver's brow-center coordinates detected by the brow-center monitoring camera are expressed in that camera's own coordinate system; to convert them into the vehicle coordinate system for other applications, the camera's coordinates in the vehicle coordinate system must also be known. Because the steering column typically offers four mechanical adjustments (up, down, fore, aft), the coordinates of the column-mounted camera in the vehicle coordinate system cannot be fixed in advance. Step S01 therefore uses the position of the reference point to back-calculate the position of the brow-center monitoring camera and, from that, obtains the driver's head position.
The camera on the steering column can see a certain reference point X on the driver-side B-pillar without it being blocked by the driver (see FIG. 2). An image algorithm detects the position of this fixed mark on the camera picture; from the known position of reference point X in the vehicle coordinate system, the camera's coordinates in the vehicle coordinate system are back-calculated, and finally the driver's head position in the vehicle coordinate system is obtained.
During driving, because of the driver's height, sitting posture, and similar factors, the driver's viewing angle is not fixed, and the scene seen past the A-pillar therefore differs; the picture shown on the inner A-pillar screen must be transformed accordingly, or the display and what the eye sees will diverge with noticeable distortion or offset. Step S02 is used to solve this problem. The brow-center monitoring camera is a binocular camera, and step S02 specifically includes:
Step S21: track the driver's eyes with the binocular camera and calculate the three-dimensional coordinates of the driver's brow center in the world coordinate system;
Step S22: from the three-dimensional coordinates of the driver's brow center, derive the driver's line of sight and obtain the line-of-sight trajectory.
In this way the binocular camera tracks the driver's eyes, the three-dimensional coordinates of the driver's brow center in the world coordinate system are computed, and the line-of-sight trajectory is obtained; the A-pillar screen imaging is adjusted in real time according to that trajectory, ensuring that the angle and depth of the camera image match the real world as the driver sees it, with no distortion or offset, so the driver feels as if looking out through a wide "window".
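Step S21 amounts to stereo triangulation of one facial landmark. A minimal OpenCV sketch follows; the projection matrices, baseline, pixel detections, and the fixed point used to derive a gaze direction are all assumptions for illustration, not the patent's implementation.

```python
import cv2
import numpy as np

# Calibrated binocular camera: left camera at the origin, right camera offset
# by an assumed 60 mm baseline along x (P = [R | t] form, here R = I).
P_left = np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = np.hstack([np.eye(3), np.array([[-0.06], [0.0], [0.0]])])
K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])

# Brow-center pixel detected in each view (e.g. by a facial-landmark model).
uv_left = np.array([[652.0], [338.0]])
uv_right = np.array([[571.0], [338.0]])

pt4d = cv2.triangulatePoints(K @ P_left, K @ P_right, uv_left, uv_right)
brow_center = (pt4d[:3] / pt4d[3]).ravel()   # 3D point in the left-camera frame

# Step S22: the patent converts the brow-center position into a line of sight;
# here it is crudely taken as the ray from the brow center toward an assumed
# fixed point in the A-pillar region (camera-frame coordinates, illustrative).
pillar_point = np.array([-0.45, 0.25, 1.50])
gaze_dir = pillar_point - brow_center
gaze_dir /= np.linalg.norm(gaze_dir)
print(brow_center, gaze_dir)
```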
As FIG. 4 and FIG. 5 show, the camera's line of sight and the human eye's line of sight generally do not coincide, so in the world coordinate system the normal vector of the human visual plane differs from that of the camera's imaging plane, and the "transparent" A-pillar effect cannot be achieved directly.
The usual remedy is to adjust the camera's mounting position and angle, but since drivers differ in height and sitting posture, a fixed mounting position and angle cannot satisfy every driver, and a powered adjustment mechanism akin to an electric seat would inevitably compromise driving safety. The three-dimensional reconstruction of step S03 is proposed instead. Step S03 includes: constructing, with a three-dimensional reconstruction algorithm, three-dimensional image information containing the obstacles in the A-pillar blind zone, based on the front-of-vehicle images and depth data captured by the two A-pillar cameras. The depth data include the imaging size on the A-pillar flexible display, calculated from the focal length of the human eye, the focal length of the A-pillar camera, and the obstacle distance. Specifically, through preprocessing of the two camera images and depth data, point-cloud computation, feature extraction, point-cloud registration, data fusion, and surface generation, the three-dimensional information of the blind-zone obstacles is restored; then, according to the cameras' focal lengths, mounting positions and angles, and the direction of the driver's line of sight, the three-dimensional world information is perspective-transformed onto the plane of human vision, yielding an image identical to what the human eye would see on the A-pillar screen (see FIG. 6).
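Of the reconstruction stages listed, the point-cloud computation is the easiest to sketch. The block below derives a metric point cloud from a rectified stereo pair with OpenCV; the matcher settings and the reprojection matrix Q are assumptions, and registration, fusion, and surface generation are omitted.

```python
import cv2
import numpy as np

# Hypothetical sketch of the point-cloud stage of step S03: semi-global block
# matching on a rectified stereo pair, then reprojection to 3D. Q is the 4x4
# disparity-to-depth matrix produced by stereo rectification (cv2.stereoRectify)
# and is assumed given here.

def blind_zone_point_cloud(img_left: np.ndarray, img_right: np.ndarray,
                           Q: np.ndarray) -> np.ndarray:
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=9)
    disp = matcher.compute(img_left, img_right).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disp, Q)     # HxWx3, metric coordinates
    valid = disp > 1.0                           # keep pixels with real matches
    return points[valid].reshape(-1, 3)          # Nx3 obstacle point cloud
```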
Why does depth data matter? FIG. 3 shows the imaging of two optical devices with different focal lengths. As the figure shows, the larger the device's focal length, the larger the change in image size. The human eye can be treated as an optical device with a certain focal length; when the camera's focal length differs from the eye's, objects at different distances are imaged by the two at different size ratios. As shown in FIG. 3, the same object is imaged by the upper and lower devices at a ratio of 27:59 at object distance 1 and of 55:195 at object distance 2. The camera's image of objects at different distances must therefore be scaled by different factors on the A-pillar display to match the human visual system, so that shape and size remain consistent.
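This distance-dependent ratio follows directly from the thin-lens magnification m = f/(d - f). A small worked sketch of the resulting display scale factor; the focal lengths are illustrative assumptions, not values from the patent:

```python
# Per-distance scale factor between the camera image and an eye-equivalent
# image, under the thin-lens approximation m = f / (d - f).

def magnification(f_m: float, d_m: float) -> float:
    """Thin-lens lateral magnification for object distance d and focal length f."""
    return f_m / (d_m - f_m)

def display_scale(f_eye_m: float, f_cam_m: float, d_m: float) -> float:
    """Factor by which the camera image must be scaled so its size matches the
    eye-equivalent image of an object at distance d (the step S03 depth data)."""
    return magnification(f_eye_m, d_m) / magnification(f_cam_m, d_m)

f_eye, f_cam = 0.017, 0.006   # ~17 mm eye-equivalent, 6 mm camera lens (assumed)
for d in (2.0, 5.0, 20.0):
    print(f"obstacle at {d:4.1f} m -> scale camera image by "
          f"{display_scale(f_eye, f_cam, d):.3f}")
# At large d the factor tends toward f_eye / f_cam; the distance dependence is
# strongest when d is comparable to the focal lengths, as in FIG. 3's schematic
# (27:59 versus 55:195 at the two object distances).
```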
To this end, a ranging method estimates the obstacle distance, and the imaging size on the A-pillar screen is then converted from the focal length of the human eye, the focal length of the camera, and the obstacle distance; the significance of this module is to provide the depth-data support the "three-dimensional reconstruction" algorithm needs, giving good dynamic behavior at all distances. The obstacle distance is obtained as follows: based on the front-of-vehicle images captured by the two A-pillar cameras, the obstacle distance is calculated with a single-view depth estimation algorithm. Alternatively, the A-pillar imaging system further includes a ranging sensor for obtaining the distance of obstacles beyond the A-pillar, and the obstacle distance is obtained by the ranging sensor, which is not limited to millimeter-wave radar, lidar, ultrasonic radar, or a depth camera.
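The patent leaves the single-view estimator unspecified (learned monocular depth models are typical). Since the two A-pillar cameras are present anyway, a classical two-view alternative is sketched below, reading the obstacle distance off a disparity map such as the SGBM output above; the focal length and baseline are assumed calibration values.

```python
import numpy as np

# Hypothetical ranging helper: Z = f * B / disparity. A low percentile is taken
# so that the nearest obstacle, which dominates the display scaling, sets the
# reported distance.

def obstacle_distance_m(disparity_px: np.ndarray,
                        f_px: float = 900.0,         # focal length in pixels (assumed)
                        baseline_m: float = 0.10     # camera spacing (assumed)
                        ) -> float:
    valid = disparity_px > 1.0
    if not valid.any():
        return float("inf")                          # nothing matched: treat as far away
    z = f_px * baseline_m / disparity_px[valid]
    return float(np.percentile(z, 10))
```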
Step S04 specifically includes:
Step S41: obtain, in real time, the normal plane of the driver's line of sight from step S02 and use it as the projection plane;
Step S42: project the three-dimensional image information from step S03 onto the projection plane, obtain an image identical to what the human eye would see, and display it on the A-pillar flexible display.
Since drivers differ in height and sitting posture, they view the same obstacle from different angles. During the perspective transformation, the algorithm must project the three-dimensional world information onto the plane on which the driver's eyes form their image; otherwise the "transparent" effect cannot be achieved. Drivers often turn their heads and adjust their posture while driving, so the eyes' imaging plane may change from moment to moment, and the plane of the perspective transformation is adjusted in real time. Step S41 therefore obtains the driver's line of sight detected in real time in step S02 and adjusts the projection plane of the perspective transformation accordingly. The three-dimensional image information from step S03 is then projected onto the projection plane, yielding the scene as seen by the human eye and achieving a true "transparent" effect.
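Concretely, taking the brow center as the center of projection and the plane normal to the gaze as the image plane gives a standard pinhole projection. A minimal numpy sketch under those assumptions (names and the eye-equivalent focal length are illustrative):

```python
import numpy as np

# Hypothetical sketch of steps S41/S42: image 3D points onto the plane normal
# to the gaze direction, with the eye (brow center) as the projection center.

def project_to_gaze_plane(points_w: np.ndarray, eye_w: np.ndarray,
                          gaze_dir: np.ndarray, f_eye: float = 0.017) -> np.ndarray:
    """Perspective-project Nx3 world points onto the plane perpendicular to
    gaze_dir at focal distance f_eye from the eye; returns Nx2 plane coords."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    # Orthonormal basis (u, v) spanning the projection plane.
    up = np.array([0.0, 0.0, 1.0])
    u = np.cross(up, g); u /= np.linalg.norm(u)
    v = np.cross(g, u)
    rel = points_w - eye_w                 # rays from the eye to each point
    depth = rel @ g                        # distance along the gaze direction
    scale = f_eye / depth                  # pinhole perspective division
    return np.stack([rel @ u * scale, rel @ v * scale], axis=1)

pts = np.array([[3.0, -1.0, 0.5], [5.0, -1.5, 0.8]])   # blind-zone obstacle points
xy = project_to_gaze_plane(pts, eye_w=np.array([0.0, 0.0, 1.2]),
                           gaze_dir=np.array([1.0, -0.2, -0.1]))
print(xy)   # these plane coordinates are then mapped to display pixels
```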
Step S04 further includes: before displaying the image identical to human-eye imaging on the A-pillar flexible display, cropping a portion of it.
The cropping step includes:
when the detected real-time vehicle speed does not exceed a speed threshold, cropping from the obtained image the maximum image lying within the range visible to the human eye;
when the detected real-time vehicle speed exceeds the speed threshold, cropping from the obtained image a partial image lying within that range, the partial image being smaller than the maximum image.
Referring to FIG. 7, at low vehicle speed the camera's field of view (1-4) is wider than the range of the human eye occluded by the A-pillar, so section 2-3 is cropped as the maximum image and shown on the display. At high speed, above the speed threshold, the latency introduced by algorithm processing and software execution must be compensated to preserve the "transparent" effect, so a portion A-B of the camera picture lying correspondingly further ahead is cropped as the partial image and shown on the display.
The above steps improve the clarity of the portion of the camera image that is cropped out and displayed on the inner A-pillar screen, improving the user experience.
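A minimal sketch of this speed-dependent crop; the threshold, window bounds, latency, and pixel scale are illustrative assumptions, not values from the patent:

```python
from dataclasses import dataclass

@dataclass
class CropWindow:
    left: int    # pixel columns in the eye-matched image
    right: int

SPEED_THRESHOLD_KMH = 60.0
MAX_WINDOW = CropWindow(400, 880)    # section "2-3": full occluded range (assumed)

def choose_crop(speed_kmh: float, latency_s: float = 0.08,
                px_per_m: float = 40.0) -> CropWindow:
    """Below the threshold, show the maximum image; above it, shift the window
    forward (toward the direction of travel) to compensate processing latency."""
    if speed_kmh <= SPEED_THRESHOLD_KMH:
        return MAX_WINDOW
    # Distance travelled during the pipeline latency, mapped to a pixel shift.
    shift_px = int(speed_kmh / 3.6 * latency_s * px_per_m)
    width = (MAX_WINDOW.right - MAX_WINDOW.left) // 2      # smaller partial image
    return CropWindow(MAX_WINDOW.left + shift_px,
                      MAX_WINDOW.left + shift_px + width)

print(choose_crop(40.0))   # full window at low speed
print(choose_crop(90.0))   # shifted, narrower window at high speed
```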
Those skilled in the art should understand that the embodiments of the present invention described above and shown in the drawings are given by way of example only and do not limit the invention. The purpose of the invention has been fully and effectively achieved. The functional and structural principles of the invention have been shown and explained in the embodiments, and the embodiments may be varied or modified in any way without departing from those principles.

Claims (10)

  1. An A-pillar imaging method, characterized in that it is implemented on the basis of an A-pillar imaging system comprising two A-pillar cameras, a brow-center monitoring camera, an A-pillar flexible display, and a control device, the brow-center monitoring camera being mounted on the steering column of the steering wheel; the method is applied to the control device and comprises:
    step S01, calculating, based on the in-vehicle image captured by the brow-center monitoring camera, the coordinates of the driver's head position in the vehicle coordinate system;
    step S02, tracking the driver's eyes with the brow-center monitoring camera, calculating the three-dimensional coordinates of the driver's brow center in the world coordinate system, and obtaining the driver's line-of-sight trajectory;
    step S03, constructing, based on the front-of-vehicle images captured by the two A-pillar cameras, three-dimensional image information containing the obstacles in the A-pillar blind zone using a three-dimensional reconstruction algorithm;
    step S04, projecting the three-dimensional image information from step S03 onto the normal plane of the driver's line of sight from step S02, obtaining an image identical to what the human eye would see, and displaying it on the A-pillar flexible display.
  2. The A-pillar imaging method according to claim 1, characterized in that step S01 specifically includes:
    step S11, calculating, based on the in-vehicle image captured by the brow-center monitoring camera, the position on the camera's picture of a reference point located on the driver-side B-pillar;
    step S12, calculating, based on the reference point's position in the vehicle coordinate system and its position on the camera's picture, the coordinates of the brow-center monitoring camera in the vehicle coordinate system;
    step S13, calculating, based on the coordinates of the brow-center monitoring camera in the vehicle coordinate system, the coordinates of the driver's head position in the vehicle coordinate system.
  3. The A-pillar imaging method according to claim 2, characterized in that the reference point is a point on the driver-side B-pillar that can be captured by the brow-center monitoring camera without being blocked by the driver.
  4. The A-pillar imaging method according to claim 1, characterized in that the brow-center monitoring camera is a binocular camera, and step S02 specifically includes:
    step S21, tracking the driver's eyes with the binocular camera and calculating the three-dimensional coordinates of the driver's brow center in the world coordinate system;
    step S22, deriving the driver's line of sight from the three-dimensional coordinates of the driver's brow center and obtaining the line-of-sight trajectory.
  5. The A-pillar imaging method according to claim 1, characterized in that step S03 specifically includes: constructing, with a three-dimensional reconstruction algorithm, three-dimensional image information containing the obstacles in the A-pillar blind zone, based on the front-of-vehicle images and depth data captured by the two A-pillar cameras; the depth data include the imaging size on the A-pillar flexible display, calculated from the focal length of the human eye, the focal length of the A-pillar cameras, and the obstacle distance.
  6. The A-pillar imaging method according to claim 5, characterized in that the obstacle distance is obtained as follows: based on the front-of-vehicle images captured by the two A-pillar cameras, the obstacle distance is calculated with a single-view depth estimation algorithm.
  7. The A-pillar imaging method according to claim 5, characterized in that the A-pillar imaging system further includes a ranging sensor for obtaining the distance of obstacles beyond the A-pillar.
  8. The A-pillar imaging method according to claim 1, characterized in that step S04 specifically includes:
    step S41, obtaining, in real time, the normal plane of the driver's line of sight from step S02 and using it as the projection plane;
    step S42, projecting the three-dimensional image information from step S03 onto the projection plane, obtaining an image identical to what the human eye would see, and displaying it on the A-pillar flexible display.
  9. The A-pillar imaging method according to claim 1, characterized in that step S04 further includes: before displaying the image identical to human-eye imaging on the A-pillar flexible display, cropping a portion of it.
  10. The A-pillar imaging method according to claim 9, characterized in that the cropping step includes:
    when the detected real-time vehicle speed does not exceed a speed threshold, cropping from the obtained image the maximum image lying within the range visible to the human eye;
    when the detected real-time vehicle speed exceeds the speed threshold, cropping from the obtained image a partial image lying within that range, the partial image being smaller than the maximum image.
PCT/CN2020/121744 2020-09-27 2020-10-19 A-pillar imaging method WO2022061999A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011031799.2 2020-09-27
CN202011031799.2A CN112298039A (zh) 2020-09-27 A-pillar imaging method

Publications (1)

Publication Number Publication Date
WO2022061999A1 true WO2022061999A1 (zh) 2022-03-31

Family

ID=74489851

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/121744 WO2022061999A1 (zh) 2020-09-27 2020-10-19 A-pillar imaging method

Country Status (2)

Country Link
CN (1) CN112298039A (zh)
WO (1) WO2022061999A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111402A (zh) * 2021-03-24 2021-07-13 浙江合众新能源汽车有限公司 Parametric design method for the A-pillar obstruction angle based on CATIA knowledge
CN113064279B (zh) * 2021-03-26 2022-09-16 芜湖汽车前瞻技术研究院有限公司 Virtual-image position adjustment method and device for an AR-HUD system, and storage medium
CN113239735B (zh) * 2021-04-15 2024-04-12 重庆利龙中宝智能技术有限公司 Transparent automobile A-pillar system based on a binocular camera, and implementation method
CN113335184A (zh) * 2021-07-08 2021-09-03 合众新能源汽车有限公司 Image generation method and device for the automobile A-pillar blind zone
CN113306492A (zh) * 2021-07-14 2021-08-27 合众新能源汽车有限公司 Method and device for generating an image of the automobile A-pillar blind zone
CN113343935A (zh) * 2021-07-14 2021-09-03 合众新能源汽车有限公司 Method and device for generating an image of the automobile A-pillar blind zone
CN113676618A (zh) * 2021-08-20 2021-11-19 东北大学 Intelligent display system and method for a transparent A-pillar
CN113610053A (zh) * 2021-08-27 2021-11-05 合众新能源汽车有限公司 Brow-center locating method for a transparent A-pillar
CN113665485B (zh) * 2021-08-30 2023-12-26 东风汽车集团股份有限公司 Anti-glare system for an automobile front windshield, and control method
CN113815534B (zh) * 2021-11-05 2023-05-16 吉林大学重庆研究院 Method for dynamically processing graphics in response to changes in the position of the human eye

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206465860U (zh) * 2017-02-13 2017-09-05 北京惠泽智业科技有限公司 Device for eliminating the automobile A-pillar blind zone
US20190217780A1 (en) * 2018-01-17 2019-07-18 Japan Display Inc. Monitor display system and display method of the same
CN110614952A (zh) * 2019-10-28 2019-12-27 崔成哲 Automobile blind-zone elimination system
CN210852234U (zh) * 2019-06-27 2020-06-26 中国第一汽车股份有限公司 In-vehicle display device and automobile
CN111572452A (zh) * 2020-06-12 2020-08-25 胡海峰 Anti-occlusion monitoring device and method for the automobile A-pillar blind zone
JP2020145687A (ja) * 2017-05-19 2020-09-10 株式会社ユピテル Drive recorder, display device for drive recorder, and program
CN211468310U (zh) * 2019-12-17 2020-09-11 上汽通用汽车有限公司 Vehicle display system and vehicle

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103358996B (zh) * 2013-08-13 2015-04-29 吉林大学 In-vehicle display device making the automobile A-pillar see-through
CN107776488A (zh) * 2016-08-24 2018-03-09 京东方科技集团股份有限公司 Auxiliary display system for automobiles, display method, and automobile
JP7163732B2 (ja) * 2018-11-13 2022-11-01 トヨタ自動車株式会社 Driving assistance device, driving assistance system, driving assistance method, and program
CN109859270A (zh) * 2018-11-28 2019-06-07 浙江合众新能源汽车有限公司 Three-dimensional human-eye coordinate locating method and separated binocular camera device
CN109941277A (zh) * 2019-04-08 2019-06-28 宝能汽车有限公司 Method and device for displaying an image of the automobile A-pillar blind zone, and vehicle
CN110509924A (zh) * 2019-08-13 2019-11-29 浙江合众新能源汽车有限公司 Method and structure for locating the face position with an in-vehicle camera
CN110901534A (zh) * 2019-11-14 2020-03-24 浙江合众新能源汽车有限公司 A-pillar see-through implementation method and system
CN111016785A (zh) * 2019-11-26 2020-04-17 惠州市德赛西威智能交通技术研究院有限公司 Head-up display system adjustment method based on human-eye position


Also Published As

Publication number Publication date
CN112298039A (zh) 2021-02-02

Similar Documents

Publication Publication Date Title
WO2022061999A1 (zh) A-pillar imaging method
CN107444263B (zh) Display device for vehicle
CN107021015B (zh) System and method for image processing
US6369701B1 (en) Rendering device for generating a drive assistant image for drive assistance
JP5874920B2 (ja) Monitor device for checking vehicle surroundings
WO2011118125A1 (ja) Device for assisting the driving of a vehicle
US9787946B2 (en) Picture processing device and method
JP3228086B2 (ja) Driving operation assistance device
WO2018036250A1 (zh) Auxiliary display device for vehicle, display method, and vehicle
US20190100145A1 (en) Three-dimensional image driving assistance device
JP3663801B2 (ja) Vehicle rear-view support device
CN111267616A (zh) Vehicle-mounted head-up display module, method, and vehicle
US11601621B2 (en) Vehicular display system
CN111277796A (zh) Image processing method, in-vehicle visual assistance system, and storage device
CN111739101A (zh) Device and method for eliminating the vehicle A-pillar blind zone
WO2021093391A1 (zh) A-pillar see-through implementation method and system
US20210039554A1 (en) Image processing apparatus, image processing method, and image processing program
WO2019034916A1 (en) System and method for presenting and controlling virtual camera image for a vehicle
CN211468310U (zh) Vehicle display system and vehicle
TW201605247A (zh) Image processing system and method
JP2017056909A (ja) Image display device for vehicle
JP5861871B2 (ja) Bird's-eye-view image presentation device
US10896017B2 (en) Multi-panel display system and method for jointly displaying a scene
WO2021240872A1 (ja) Display control device, vehicle, and display control method
CN111016786B (zh) Display method for the automobile A-pillar occluded region based on 3D gaze estimation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20954845

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20954845

Country of ref document: EP

Kind code of ref document: A1