WO2021218602A1 - Vision-based adaptive AR-HUD brightness adjustment method - Google Patents

Vision-based adaptive AR-HUD brightness adjustment method

Info

Publication number
WO2021218602A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
brightness
hud
light source
hud display
Prior art date
Application number
PCT/CN2021/086415
Other languages
French (fr)
Chinese (zh)
Inventor
余新
邓岳慈
康瑞
Original Assignee
深圳光峰科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳光峰科技股份有限公司 filed Critical 深圳光峰科技股份有限公司
Publication of WO2021218602A1 publication Critical patent/WO2021218602A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/014 Head-up displays characterised by optical features comprising information/image processing systems
    • G02B2027/0192 Supplementary details
    • G02B2027/0196 Supplementary details having transparent supporting structure for display mounting, e.g. to a window or a windshield

Definitions

  • the present invention relates to the technical field of HUD brightness adjustment, in particular to a vision-based AR-HUD brightness adaptive adjustment method.
  • AR-HUD is a technology that integrates Augmented Reality into a head-up display, addressing the traditional HUD's lack of content and absence of an immersive experience.
  • the HUD uses a projector to project the displayed image onto the windshield; after optical imaging, a virtual image can be displayed a few meters in front of the car window.
  • the on-board computing system renders the important information the driver needs, such as navigation and road-condition information, into the virtual screen, and in the driver's field of vision the rendered information merges with the real scene to achieve the AR effect.
  • however, in real application scenarios, ambient light and scene objects degrade a HUD display of fixed brightness.
  • under strong background light the display brightness must be increased, but if the virtual-image range also contains an area of weak background light, the overly strong display brightness brings the driver discomfort.
  • Adaptively adjusting the brightness of the light source displayed by the HUD can not only improve the overall experience of the driver, but also reduce the overall power of the HUD, alleviating energy consumption and heat dissipation pressure.
  • the technical problem mainly solved by the present invention is to provide a vision-based AR-HUD brightness adaptive adjustment method, which solves the current technical problems of inconvenient adjustment of the brightness of the HUD and the inability to adjust the brightness locally.
  • a technical solution adopted by the present invention is to provide a vision-based AR-HUD brightness adaptive adjustment method, the adjustment method includes:
  • the power of the light source corresponding to the image area is adjusted according to the target brightness corresponding to the image area.
  • the generating a HUD display image according to the scene image includes:
  • the scene image in front of the car window is captured, an RGB image and a depth image are obtained according to the scene image, and a HUD display image is generated from the RGB image and the depth image.
  • the acquiring the brightness distribution of the HUD display image includes:
  • the depth image is converted into a 3D point cloud, and the 3D point cloud is mapped to the HUD display image to obtain a brightness distribution.
  • the dividing the HUD display image into one or more image regions includes:
  • the entire HUD display image is used as one image area; or, according to the brightness values in the brightness distribution, the HUD display image is divided into more than one area.
  • the dividing the HUD display image into more than one area according to the brightness value in the brightness distribution includes:
  • the edge detection algorithm is used to extract the edge information of the brightness distribution, and the edge information is fitted with a rectangular frame to obtain several image regions.
  • the calculating the target brightness corresponding to each of the image regions according to the brightness distribution of the scene image includes:
  • the average brightness value of each image area is calculated, and the target brightness is calculated according to the average brightness value.
  • the adjusting the light source power corresponding to the image area according to the target brightness corresponding to the image area includes:
  • Obtain preset brightness-light source power mapping data, obtain the light source power reference value of each image area from that area's target brightness value, and drive the light-emitting elements of the corresponding image area of the HUD light source with the light source power reference value.
  • the conversion formula for converting the depth image into a 3D point cloud is:

    $$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{d_x} & 0 & u_o \\ 0 & \tfrac{1}{d_y} & v_o \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ \mathbf{0}^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} $$

  • where z_c is the depth value of a depth-image pixel, d_x and d_y are the physical pixel size on the photosensitive chip of the camera that captures the depth image, (u_o, v_o) is the center of the camera image plane, and f is the focal length of the camera; d_x, d_y, u_o, v_o, and f are the intrinsic parameters of the camera, and the 3×4 RT matrix contains its extrinsic parameters.
  • the conversion formula for mapping the 3D point cloud onto the image displayed by the HUD is:

    $$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{d_x} & 0 & u_o \\ 0 & \tfrac{1}{d_y} & v_o \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ \mathbf{0}^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} $$

  • where d_x, d_y, u_o, v_o, and f are the intrinsic parameters of the camera, the 3×4 RT matrix contains its extrinsic parameters, and x_w, y_w, and z_w are the coordinates of the 3D point cloud.
  • the present invention also provides a computer-readable storage medium, the storage medium stores a computer program, and when the program is executed, it is used to implement the steps of the above-mentioned method.
  • the beneficial effect of the vision-based AR-HUD brightness adaptive adjustment method of the present invention is that the intensity of the ambient background light is quantified by visual perception, a brightness-light source power mapping relationship is established from the comfortable brightness range perceived by the human eye, and the brightness of the image finally displayed by the HUD is adjusted accordingly.
  • this dynamic adjustment of the HUD image brightness not only improves the driver's overall experience, but also reduces the overall power of the HUD, relieving energy consumption and heat-dissipation pressure.
  • Fig. 1 is a flowchart of the vision-based AR-HUD brightness adaptive adjustment method of the present invention;
  • Fig. 2 is a detailed flowchart of step S200;
  • Fig. 3 is a detailed flowchart of step S300;
  • Fig. 4 is a schematic diagram of the HUD brightness distribution;
  • Fig. 5 is a schematic diagram of the vision-based AR-HUD brightness adaptive adjustment system;
  • Fig. 6 is a layout diagram of the vision-based AR-HUD brightness adaptive adjustment system in a car.
  • the present invention provides a vision-based AR-HUD brightness adaptive adjustment method.
  • the adjustment method includes: obtaining a scene image in front of the car window; generating a HUD display image according to the scene image and acquiring its brightness distribution; dividing the HUD display image into one or more image areas; calculating the target brightness corresponding to each image area according to the brightness distribution of the scene image; and adjusting the light source power corresponding to each image area according to its target brightness.
  • the intensity of the ambient background light is quantified and calculated, and the brightness-light source power mapping relationship is established according to the comfortable brightness range experienced by the human eye, and the final image brightness displayed by the HUD is adjusted to realize the dynamic adjustment of the HUD image brightness.
  • FIG. 1 is a flowchart of a vision-based AR-HUD brightness adaptive adjustment method of the present invention.
  • the adjustment method in FIG. 1 is implemented by step S100, step S200, step S300, step S400, and step S500, where,
  • Step S100 is to obtain a scene image in front of the car window.
  • the scene image in front of the car window includes the external scene and the lighting conditions in front of the window, and forms the reference for adjusting the HUD brightness.
  • the brightness adjustment of the HUD uses the scene-image information in front of the car window as the object of comparison.
  • Step S200 is to generate a HUD display image according to the scene image, and obtain the brightness distribution of the HUD display image.
  • in this embodiment the scene image is acquired by setting a camera in front of the car window to photograph the outside scene, obtaining an RGB image (an image in the red, green, and blue primary colors), or an RGB image together with a depth image; these images are then converted to obtain the brightness in front of the car window.
  • the first embodiment is to set up a monocular camera in front of the car window.
  • the monocular camera is based on a monocular hand-eye camera and a laser rangefinder; using a monocular visual measurement method with a line laser, a single CCD camera, pinhole imaging, and a laser-plane constraint model, it photographs the external environment and generates RGB images.
  • after the RGB image of the scene outside the window is obtained while the car is traveling, the RGB image is converted into a YUV image and the value of the Y channel is extracted.
  • the conversion formula is:
  • R, G, and B are the values of red, green, and blue in the RGB image, respectively.
  • the global average value of the Y channel under the YUV image is calculated by the above conversion formula, so as to obtain the brightness of the scene outside the car window.
  • the second embodiment is to set up an RGBD camera in front of the car window to obtain the RGB image and the depth image of the scene outside the car window when the car is traveling.
  • the RGBD camera is a binocular camera: two cameras fixed at different positions are used for positioning.
  • for example, two RGBD cameras are installed on the two sides of the car window so that the captured images cover the whole scene in front of the window; the images of the external environment on the two camera image planes are obtained, an RGB image and a depth image are generated, and the RGB image and depth image are then converted and mapped to obtain the HUD display image.
  • Step S200 further includes step S201.
  • Step S201 generates the HUD display image from the scene image: an RGB image and a depth image are obtained from the captured scene in front of the car window, and the HUD display image is generated from the RGB image and the depth image.
  • Step S201 uses the second embodiment described above to generate the HUD display image: an RGBD camera is set near the front windshield of the car.
  • the RGBD camera captures an RGB image and a depth image, which are subsequently converted and mapped to generate the HUD display image.
  • the RGBD camera in this embodiment is a binocular camera, the depth image is calculated from the RGB image obtained by the RGBD camera, and the pixels of the RGB image and the depth image are in one-to-one correspondence.
  • the RGB image selected when calculating the depth image is any one of the RGB images obtained by the two left and right RGBD cameras.
  • Step S200 further includes step S202.
  • step S202 obtaining the brightness distribution of the HUD display image includes: converting the depth image into a 3D point cloud, and mapping the 3D point cloud to the HUD display image to obtain the brightness distribution.
  • the camera acquires a color image, and the color information of the pixel at each position is assigned to the corresponding point in the point cloud, recorded in the form of points.
  • each point contains three-dimensional coordinates and color information; in this way the depth image is converted into a 3D point cloud. The 3D point cloud is then mapped onto the HUD display image, which thus carries the color information, and the brightness distribution can be obtained.
  • the conversion formula for converting the depth image into a 3D point cloud in this embodiment is:

    $$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{d_x} & 0 & u_o \\ 0 & \tfrac{1}{d_y} & v_o \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ \mathbf{0}^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} $$

  • where z_c is the depth value of a depth-image pixel, d_x and d_y are the physical pixel size on the photosensitive chip of the camera that captures the depth image, (u_o, v_o) is the center of the camera image plane, and f is the focal length of the camera; d_x, d_y, u_o, v_o, and f are the intrinsic parameters of the camera, and the 3×4 RT matrix contains its extrinsic parameters.
  • in projective geometry, spatial coordinates are usually expressed in homogeneous coordinates, so the camera's intrinsic and extrinsic parameter matrices are extended to homogeneous form, as in the formula above.
  • the internal and external parameters of the camera are obtained in advance through calibration, and the obtained point cloud information includes spatial coordinates x w , y w , z w and a brightness intensity value.
  • the conversion formula for mapping the 3D point cloud onto the image displayed by the HUD in this embodiment is:

    $$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{d_x} & 0 & u_o \\ 0 & \tfrac{1}{d_y} & v_o \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ \mathbf{0}^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} $$

  • where d_x, d_y, u_o, v_o, and f are the intrinsic parameters of the camera, the 3×4 RT matrix contains its extrinsic parameters, and x_w, y_w, and z_w are the coordinates of the 3D point cloud.
  • each 3D point carries a brightness value, so the HUD image obtained after the 3D point-cloud mapping is the brightness distribution map, containing the brightness information of the external scene image.
  • Step S300 is to divide the HUD display image into one or more image regions, and determine the brightness adjustment of different regions by dividing different image regions, so as to make the user's visual experience more comfortable.
  • Step S300 further includes steps S301 and S302.
  • in step S301 the HUD display image as a whole is treated as one image area.
  • in step S302 the HUD display image is divided into more than one area according to the brightness values in the brightness distribution.
  • when step S301 is executed, the entire HUD display image is treated as one image area, so the brightness of the HUD display image is adjusted globally.
  • the adjustment in step S301 uses the first embodiment described above: a monocular camera photographs the external environment and generates an RGB image, the RGB image is converted into a YUV image, and the global average of the Y channel of the YUV image is extracted.
  • the calculation formula is

    $$ \bar{y} = \frac{1}{N} \sum_{i=1}^{N} y_i $$

    where y_i is the value of pixel i in the image's Y channel and N is the number of pixels in a single channel of the image.
  • the brightness-light source power mapping data is preset, the light source power reference value is obtained according to the calculated global average value of the Y channel, and the light-emitting element of the HUD light source is driven by the light source power reference value.
  • the brightness-light source power mapping data is established through experimental data in advance to form a corresponding functional relationship.
  • step S302 the HUD display image is divided into a plurality of image areas, so that the brightness of different HUD display areas is adjusted respectively, and the division method is based on the brightness value in the brightness distribution.
  • step S202 the depth image is converted into a 3D point cloud, and the 3D point cloud is mapped to the HUD display image to obtain the brightness distribution, and the brightness value is obtained from the brightness distribution.
  • one acquisition method clusters adjacent pixels according to the brightness value of each point in the brightness distribution, for example with the K-means algorithm: k points in the space serve as cluster centers, each object is assigned to the nearest center, all data points belonging to a class are averaged, the average becomes the new class center, and the process repeats until convergence, yielding the brightness value of each point in the brightness distribution.
  • Another acquisition method is to divide the HUD display image into multiple regions in advance, merge the adjacent regions whose average brightness difference is less than the threshold in the multiple regions, and then calculate the brightness value of each region.
  • the division of the HUD display image into more than one region according to the brightness value in the brightness distribution in this application includes: using an edge detection algorithm to extract edge information of the brightness distribution, and fitting the edge information with a rectangular frame to obtain several image regions.
  • Figure 4 shows the brightness distribution of the HUD at a certain moment, and the edge information is calculated by the edge detection algorithm (such as the Canny edge detection algorithm).
  • the dashed boxes are the sub-regions fitted with rectangles; if two regions intersect, the average brightness of the intersection region is the mean of the two intersecting regions' average brightness values. The brightness value corresponding to each region and the region's pixel-coordinate set are stored.
  • Step S400 in the present application is to calculate the target brightness corresponding to each image area according to the brightness distribution of the scene image, specifically, calculating the average brightness value of each image area, and calculating the target brightness according to the average brightness value. If the entire HUD display image is regarded as an image area, the average brightness value of the entire image is calculated. If the HUD display image is divided into multiple regions, the average brightness value of each region is calculated separately. According to the brightness of each area of the scene image, the target brightness that best meets the user's visual experience is calculated.
  • Step S500 in the present application is to adjust the light source power corresponding to the image area according to the target brightness corresponding to the image area, and adjust the light source power to change the brightness of the image area so that the brightness value is consistent with the target brightness.
  • the preset brightness-light source power mapping data is obtained, the light source power reference value of different image areas is obtained according to the target brightness value of each image area, and the light-emitting elements of the corresponding image area of the HUD light source are driven by the light source power reference value.
  • This application establishes the brightness-light source power mapping data from experimental data in advance: multiple sets of brightness and light source power data are collected over repeated experiments, the experimental points are connected to form the correspondence between brightness and light source power, and the experimental data are then fitted to form a functional relationship.
  • the corresponding light source power is determined according to the above functional relationship, and the corresponding light source power is used to drive the light-emitting elements of the corresponding image area of the HUD light source.
  • the present invention also provides a computer-readable storage medium, the storage medium stores a computer program, and when the program is executed, it is used to implement the steps of the above adjustment method.
  • the aforementioned computer-readable storage medium may be any electronic, magnetic, optical, or other physical storage device, and may contain or store information, such as executable instructions, data, and so on.
  • the computer-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard drive), any type of storage disk (such as an optical disk or DVD), a similar storage medium, or a combination of them.
  • the present invention also provides a vision-based AR-HUD brightness adaptive adjustment system. Please refer to Figures 5 and 6.
  • the system includes a visual perception module 100, a display control module 200, a HUD display module 300, and a storage module 400:
  • the visual perception module 100 captures external environment information, that is, the scene image in front of the car window, and sends the scene image to the display control module 200.
  • the display control module 200 receives the external environment information sent by the visual perception module 100, evaluates the brightness of the external environment, and outputs the light source power signal displayed by the HUD according to the brightness.
  • the HUD display module 300 receives the output signal of the display control module 200, and projects the HUD display image with the light source power of the output signal.
  • the storage module 400 is used to store brightness and light source power mapping data; the display control module obtains the corresponding light source power according to the external environment information sent by the visual perception module and the brightness and light source power mapping data in the storage module, and sends it to the HUD display module.
  • the brightness-light source power mapping is a functional relationship established in advance from experimental data.
  • the experiments measure the correspondence between the car HUD's light source power and the resulting brightness.
  • the correspondence between brightness and light source power is formed in a chosen way, for example by connecting the experimentally selected points, and the functional relationship formed from the experimental data is imported into the storage module for subsequent adjustment of the light source power.
  • the present invention also provides a vision-based AR-HUD brightness adaptive adjustment device.
  • the adjustment device includes a HUD display device, a controller, and a monocular camera or an RGBD camera; the HUD display device is arranged in the interior of the car.
  • the HUD display image is projected onto the windshield, and the monocular camera or RGBD camera is installed behind the windshield to capture the scene in front of the car window and obtain an RGB image.
  • after the RGB image is converted and mapped, the brightness distribution of the scene image is obtained.
  • the controller calculates the target brightness corresponding to each image area from the computed brightness distribution of the scene image, obtains the corresponding light source power from the stored brightness-light source power mapping relationship, and drives the light-emitting elements of the corresponding image area of the HUD light source at that power value.
  • with the vision-based AR-HUD brightness adaptive adjustment method of the present invention, the intensity of the ambient background light is quantified by visual perception, a brightness-light source power mapping relationship is established from the comfortable brightness range perceived by the human eye, and the brightness of the image finally displayed by the HUD is adjusted.
  • this dynamic adjustment of the HUD image brightness not only improves the driver's overall experience but also reduces the overall power of the HUD, relieving energy consumption and heat-dissipation pressure.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Instrument Panels (AREA)

Abstract

Disclosed is a vision-based adaptive AR-HUD brightness adjustment method. The adjustment method comprises: acquiring a scene image in front of a vehicle window; generating a HUD display image according to the scene image, and acquiring the brightness distribution of the HUD display image; dividing the HUD display image into one or more image regions; calculating, according to the brightness distribution of the scene image, a target brightness corresponding to each of the image regions; and adjusting, according to the target brightness corresponding to each image region, the light source power corresponding to that image region. In this method, the intensity of ambient background light is quantitatively calculated by means of visual perception, a brightness-light source power mapping relationship is established according to the comfortable brightness range perceived by human eyes, and the brightness of the image finally displayed by the HUD is adjusted, realizing dynamic adjustment of the HUD image brightness. This not only improves the driver's overall perception and experience, but also reduces the overall power of the HUD, thereby alleviating energy consumption and heat-dissipation pressure.

Description

A Vision-Based AR-HUD Brightness Adaptive Adjustment Method

Technical Field

The present invention relates to the technical field of HUD brightness adjustment, and in particular to a vision-based AR-HUD brightness adaptive adjustment method.

Background

AR-HUD is a technology that integrates Augmented Reality into a head-up display (HUD), addressing the traditional HUD's lack of content and absence of an immersive experience. The HUD uses a projector to project the displayed image onto the windshield; after optical imaging, a virtual image appears a few meters in front of the car window. The on-board computing system renders the important information the driver needs, such as navigation and road-condition information, into the virtual picture, and in the driver's field of vision the rendered information merges with the real scene to achieve the AR effect. However, in real application scenarios, ambient light and scene objects degrade a HUD display of fixed brightness: under strong background light the display brightness must be increased, but if the virtual-image range also contains an area of weak background light, the overly strong display brightness there brings the driver discomfort. Adaptively adjusting the brightness of the HUD's display light source not only improves the driver's overall experience but also reduces the overall power of the HUD, relieving energy consumption and heat-dissipation pressure.

There are currently two schemes for adjusting HUD brightness: manual adjustment, and adjustment based on photoelectric sensors. Manual adjustment requires the human eye to judge whether the displayed brightness is comfortable, which is inconvenient while driving. The photoelectric-sensor approach depends heavily on sensor accuracy and effective range and usually requires installing multiple sensors on the vehicle, which is inconvenient to deploy. Moreover, both methods can only adjust the global light source power of the HUD projection and cannot locally adjust the brightness of the displayed picture; if the ambient background light behind the displayed picture is spatially non-uniform, the adjusted image will appear bright in some patches and dim in others, degrading the driver's experience.
Summary of the Invention

The technical problem mainly solved by the present invention is to provide a vision-based AR-HUD brightness adaptive adjustment method that overcomes the existing problems of inconvenient HUD brightness adjustment and the inability to adjust brightness locally.

To solve the above technical problems, one technical solution adopted by the present invention is to provide a vision-based AR-HUD brightness adaptive adjustment method. The adjustment method includes:

obtaining a scene image in front of the car window;

generating a HUD display image according to the scene image, and acquiring the brightness distribution of the HUD display image;

dividing the HUD display image into one or more image regions;

calculating the target brightness corresponding to each image region according to the brightness distribution of the scene image; and

adjusting the light source power corresponding to each image region according to its target brightness.
Generating a HUD display image according to the scene image includes:

capturing the scene image in front of the car window, obtaining an RGB image and a depth image from the scene image, and generating the HUD display image from the RGB image and the depth image.

Acquiring the brightness distribution of the HUD display image includes:

converting the depth image into a 3D point cloud, and mapping the 3D point cloud onto the HUD display image to obtain the brightness distribution.

Dividing the HUD display image into one or more image regions includes:

using the entire HUD display image as one image region; or

dividing the HUD display image into more than one region according to the brightness values in the brightness distribution.

Dividing the HUD display image into more than one region according to the brightness values in the brightness distribution includes:

using an edge detection algorithm to extract the edge information of the brightness distribution, and fitting the edge information with rectangular frames to obtain several image regions.

Calculating the target brightness corresponding to each image region according to the brightness distribution of the scene image includes:

calculating the average brightness value of each image region, and calculating the target brightness from the average brightness value.

Adjusting the light source power corresponding to an image region according to its target brightness includes:

obtaining preset brightness-light source power mapping data, obtaining the light source power reference value of each image region from that region's target brightness value, and driving the light-emitting elements of the corresponding image region of the HUD light source with the light source power reference value.
The conversion formula for converting the depth image into a 3D point cloud is:

$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{d_x} & 0 & u_o \\ 0 & \tfrac{1}{d_y} & v_o \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ \mathbf{0}^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} $$

where z_c is the depth value of a depth-image pixel, d_x and d_y are the physical pixel size on the photosensitive chip of the camera that captures the depth image, (u_o, v_o) is the center of the camera image plane, and f is the focal length of the camera. d_x, d_y, u_o, v_o, and f are the intrinsic parameters of the camera, and the 3×4 RT matrix contains its extrinsic parameters.
The conversion formula for mapping the 3D point cloud onto the image displayed by the HUD is:

$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{d_x} & 0 & u_o \\ 0 & \tfrac{1}{d_y} & v_o \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ \mathbf{0}^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} $$

where d_x, d_y, u_o, v_o, and f are the intrinsic parameters of the camera, the 3×4 RT matrix contains its extrinsic parameters, and x_w, y_w, and z_w are the coordinates of the 3D point cloud.
To solve the technical problem, the present invention also provides a computer-readable storage medium storing a computer program which, when executed, implements the steps of the above method.

Compared with the prior art, the vision-based AR-HUD brightness adaptive adjustment method of the present invention has the beneficial effect that the intensity of the ambient background light is quantified by visual perception, a brightness-light source power mapping relationship is established from the comfortable brightness range perceived by the human eye, and the brightness of the image finally displayed by the HUD is adjusted, realizing dynamic adjustment of the HUD image brightness; this not only improves the driver's overall experience but also reduces the overall power of the HUD, relieving energy consumption and heat-dissipation pressure.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative work.

Fig. 1 is a flowchart of the vision-based AR-HUD brightness adaptive adjustment method of the present invention;

Fig. 2 is a detailed flowchart of step S200;

Fig. 3 is a detailed flowchart of step S300;

Fig. 4 is a schematic diagram of the HUD brightness distribution;

Fig. 5 is a schematic diagram of the vision-based AR-HUD brightness adaptive adjustment system;

Fig. 6 is a layout diagram of the vision-based AR-HUD brightness adaptive adjustment system in a car.
Detailed Description

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.

The terms "first", "second", and the like in this application are used to distinguish different objects, not to describe a specific order. In addition, the terms "including" and "having" and any variations of them are intended to cover non-exclusive inclusion: a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but may also include steps or units that are not listed, or other steps or units inherent to the process, method, product, or device.

Reference to an "embodiment" herein means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearance of the phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The present invention provides a vision-based AR-HUD brightness adaptive adjustment method. The adjustment method includes: obtaining a scene image in front of the car window; generating a HUD display image according to the scene image and acquiring its brightness distribution; dividing the HUD display image into one or more image areas; calculating the target brightness corresponding to each image area according to the brightness distribution of the scene image; and adjusting the light source power corresponding to each image area according to its target brightness. Through visual perception, the intensity of the ambient background light is quantified, a brightness-light source power mapping relationship is established according to the comfortable brightness range perceived by the human eye, and the brightness of the image finally displayed by the HUD is adjusted, realizing dynamic adjustment of the HUD image brightness. Detailed descriptions follow through specific embodiments.

Please refer to Fig. 1, a flowchart of the vision-based AR-HUD brightness adaptive adjustment method of the present invention. The adjustment method in Fig. 1 is implemented through steps S100, S200, S300, S400, and S500.

Step S100 obtains the scene image in front of the car window. The scene image includes the external scene and the lighting conditions in front of the window; it forms the reference against which the HUD brightness is adjusted.

The scene image in front of the car window can be obtained in many ways, for example by arranging a photoelectric sensor, a video camera, or a camera with a capture program near the window and sensing or capturing the scene in front of the window to obtain its brightness.

Step S200 generates a HUD display image according to the scene image and acquires the brightness distribution of the HUD display image. In this embodiment the scene image is acquired by setting a camera in front of the car window to photograph the outside scene, obtaining an RGB image (an image in the red, green, and blue primary colors), or an RGB image together with a depth image; these images are then converted to obtain the brightness in front of the car window.
Embodiment One:

In Embodiment One a monocular camera is set in front of the car window. The monocular camera is based on a monocular hand-eye camera and a laser rangefinder; using a monocular visual measurement method with a line laser, a single CCD camera, pinhole imaging, and a laser-plane constraint model, it photographs the external environment and generates RGB images. After the RGB image of the scene outside the window is obtained while the car is traveling, the RGB image is converted into a YUV image and the value of the Y channel is extracted. The conversion formulas are:

Y = 0.299R + 0.587G + 0.114B,

U = -0.147R - 0.289G - 0.436B,

V = 0.615R - 0.515G - 0.100B,

where R, G, and B are the red, green, and blue values of the RGB image, respectively. The global average of the Y channel of the YUV image is computed from the above conversion, giving the brightness of the scene outside the car window.
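As an illustration of this step, the following is a minimal sketch in Python/NumPy (not part of the patent) that applies the Y-channel formula above and takes its global average; the input is assumed to be an 8-bit H×W×3 array in RGB channel order, and the function name is ours.

```python
import numpy as np

def global_y_average(rgb: np.ndarray) -> float:
    """Scene brightness as the global mean of the YUV Y channel.

    rgb: H x W x 3 uint8 array in RGB channel order (assumed input format).
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    y = 0.299 * r + 0.587 * g + 0.114 * b  # Y channel of the RGB-to-YUV conversion
    return float(y.mean())                 # average over all N single-channel pixels
```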
Embodiment Two:

In Embodiment Two an RGBD camera is set in front of the car window to obtain RGB images and depth images of the scene outside the window while the car is traveling. In this embodiment the RGBD camera is a binocular camera: two cameras fixed at different positions are used for positioning. For example, two RGBD cameras are installed on the two sides of the car window so that the captured images cover the whole scene in front of the window; the images of the external environment on the two camera image planes are obtained, an RGB image and a depth image are generated, and the RGB image and depth image are then converted and mapped to obtain the HUD display image.

Please refer to Fig. 2, a detailed flowchart of step S200. Step S200 further includes step S201, which generates the HUD display image from the scene image: an RGB image and a depth image are obtained from the captured scene in front of the car window, and the HUD display image is generated from them.

Step S201 uses Embodiment Two described above to generate the HUD display image: an RGBD camera set near the front windshield captures an RGB image and a depth image, which are subsequently converted and mapped into the HUD display image. The RGBD camera in this embodiment is a binocular camera; the depth image is computed from the RGB images acquired by the RGBD cameras, and the pixels of the RGB image and the depth image correspond one to one. The RGB image used when computing the depth image may be either of the RGB images obtained by the left and right RGBD cameras. Step S200 further includes step S202, which acquires the brightness distribution of the HUD display image: the depth image is converted into a 3D point cloud, and the 3D point cloud is mapped onto the HUD display image to obtain the brightness distribution. The camera acquires a color image, and the color information of each pixel is assigned to the corresponding point of the point cloud, recorded point by point; each point contains three-dimensional coordinates and color information, so the depth image becomes a 3D point cloud. The 3D point cloud is then mapped onto the HUD display image, which thus carries the color information, and the brightness distribution can be obtained.
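The patent does not fix the stereo-matching algorithm used to compute the depth image from the two RGB views. As one hedged sketch, OpenCV's block matcher can produce a disparity map from rectified left/right images, from which depth follows as depth = f × baseline / disparity; the file names, focal length, and baseline below are placeholder assumptions.

```python
import cv2
import numpy as np

# Rectified grayscale views from the two cameras (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # output is fixed-point x16

f_px = 700.0        # focal length in pixels (assumed calibration value)
baseline_m = 0.12   # distance between the two cameras in meters (assumed)

valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_px * baseline_m / disparity[valid]  # per-pixel depth in meters
```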
The conversion formula for converting the depth image into a 3D point cloud in this embodiment is:

$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{d_x} & 0 & u_o \\ 0 & \tfrac{1}{d_y} & v_o \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ \mathbf{0}^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} $$

where z_c is the depth value of a depth-image pixel, d_x and d_y are the physical pixel size on the photosensitive chip of the camera that captures the depth image, (u_o, v_o) is the center of the camera image plane, and f is the focal length of the camera. d_x, d_y, u_o, v_o, and f are the intrinsic parameters of the camera, and the 3×4 RT matrix contains its extrinsic parameters.

In projective geometry, spatial coordinates are usually expressed in homogeneous coordinates, so the camera's intrinsic and extrinsic parameter matrices are extended to homogeneous form as in the formula above. The intrinsic and extrinsic parameters are obtained in advance by calibration, and each resulting point of the point cloud contains the spatial coordinates x_w, y_w, z_w and a brightness intensity value. The conversion formula for mapping the 3D point cloud onto the image displayed by the HUD in this embodiment is:

$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{d_x} & 0 & u_o \\ 0 & \tfrac{1}{d_y} & v_o \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ \mathbf{0}^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} $$

where d_x, d_y, u_o, v_o, and f are the intrinsic parameters of the camera, the 3×4 RT matrix contains its extrinsic parameters, and x_w, y_w, and z_w are the coordinates of the 3D point cloud.

Each 3D point carries a brightness value, so the HUD image obtained after the 3D point-cloud mapping is exactly the brightness distribution map, containing the brightness information of the external scene image.
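A minimal sketch of the two conversions, assuming the standard pinhole model reconstructed above; all calibration values (f, d_x, d_y, u_o, v_o) and the identity extrinsics are placeholder assumptions, and the function names are ours.

```python
import numpy as np

# Assumed calibration values; in practice these come from offline calibration,
# and R, T relate the camera frame to the world/HUD frame.
f, dx, dy = 4e-3, 2e-6, 2e-6            # focal length and pixel pitch, meters
uo, vo = 640.0, 360.0                   # image-plane center, pixels
K = np.array([[f / dx, 0.0, uo],
              [0.0, f / dy, vo],
              [0.0, 0.0, 1.0]])
RT = np.hstack([np.eye(3), np.zeros((3, 1))])  # 3x4 [R|T], identity for the sketch

def depth_to_points(depth: np.ndarray) -> np.ndarray:
    """Back-project every pixel (u, v) with positive depth z_c into 3D coordinates."""
    v, u = np.indices(depth.shape)
    zc = depth
    xw = (u - uo) * zc * dx / f
    yw = (v - vo) * zc * dy / f
    return np.stack([xw, yw, zc], axis=-1).reshape(-1, 3)

def points_to_hud(points: np.ndarray, brightness: np.ndarray,
                  shape: tuple) -> np.ndarray:
    """Project 3D points into the HUD image plane; each point keeps its brightness."""
    ph = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates
    uvw = (K @ RT @ ph.T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)          # divide by z_c
    hud = np.zeros(shape)
    ok = ((0 <= uv[:, 0]) & (uv[:, 0] < shape[1]) &
          (0 <= uv[:, 1]) & (uv[:, 1] < shape[0]))
    hud[uv[ok, 1], uv[ok, 0]] = brightness.reshape(-1)[ok]  # brightness distribution map
    return hud
```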
Step S300 divides the HUD display image into one or more image areas; by dividing different image areas, the brightness adjustment of the different areas is determined, making the user's visual experience more comfortable.

Please refer to Fig. 3, a detailed flowchart of step S300. Step S300 further includes steps S301 and S302: in step S301 the entire HUD display image is treated as one image area; in step S302 the HUD display image is divided into more than one area according to the brightness values in the brightness distribution.

If step S301 is executed, the entire HUD display image is treated as one image area, and the brightness of the HUD display image is adjusted globally. The adjustment of step S301 uses Embodiment One described above to generate the HUD display image: a monocular camera photographs the external environment and generates an RGB image, the RGB image is converted into a YUV image, and the global average of the Y channel is extracted. The calculation formula is

$$ \bar{y} = \frac{1}{N} \sum_{i=1}^{N} y_i $$

where y_i is the value of pixel i in the image's Y channel and N is the number of pixels in a single channel of the image. Before the brightness adjustment, brightness-light source power mapping data are preset; the light source power reference value is obtained from the computed global average of the Y channel, and the light-emitting elements of the HUD light source are driven with this reference value. The brightness-light source power mapping data are established in advance from experimental data, forming a corresponding functional relationship.
If step S302 is executed, the HUD display image is divided into several image areas so that the brightness of the different HUD display areas can be adjusted separately; the division is based on the brightness values in the brightness distribution. In step S202 the depth image was converted into a 3D point cloud and mapped onto the HUD display image to obtain the brightness distribution, from which the brightness values are taken. There are at least two ways to obtain the brightness values. One is to cluster adjacent pixels according to the brightness value of each point in the brightness distribution, for example with the K-means algorithm: k points in the space serve as cluster centers, each object is assigned to the nearest center, all data points belonging to a class are averaged, the average becomes the new class center, and the process repeats until convergence, yielding the brightness value of each point in the brightness distribution. The other is to divide the HUD display image into several areas in advance, merge adjacent areas whose average brightness difference is below a threshold, and then compute the brightness value of each area.
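The sketch below implements the first acquisition method, plain K-means on the per-pixel brightness values, in Python/NumPy; k, the iteration cap, and the random seed are placeholder choices.

```python
import numpy as np

def kmeans_brightness(values: np.ndarray, k: int = 3, iters: int = 50) -> np.ndarray:
    """Cluster flat brightness samples into k classes; returns a label per sample."""
    rng = np.random.default_rng(0)
    centers = rng.choice(values, size=k, replace=False).astype(np.float64)
    for _ in range(iters):
        # Assign each sample to the nearest class center.
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        # Average the members of each class; the mean becomes the new center.
        new_centers = np.array([values[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):  # repeated until convergence
            break
        centers = new_centers
    return labels
```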
In this application, dividing the HUD display image into more than one area according to the brightness values in the brightness distribution includes: using an edge detection algorithm to extract the edge information of the brightness distribution and fitting the edge information with rectangular frames to obtain several image areas. Please refer to Fig. 4, which shows the HUD brightness distribution at a certain moment; the edge information is computed by an edge detection algorithm (such as the Canny edge detector). The dashed boxes are the sub-areas fitted with rectangles; if two areas intersect, the average brightness of the intersection is the mean of the two intersecting areas' average brightness values. The brightness value corresponding to each area and the area's pixel-coordinate set are stored.
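A hedged sketch of this division with OpenCV: Canny extracts the edge information, and bounding rectangles are fitted to the resulting contours. The thresholds and the minimum-area filter are assumed values, and the input file name is a placeholder.

```python
import cv2

# Brightness distribution map, scaled to 8-bit for OpenCV (placeholder file name).
brightness_map = cv2.imread("brightness_map.png", cv2.IMREAD_GRAYSCALE)

edges = cv2.Canny(brightness_map, 50, 150)  # edge information of the distribution
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

regions = []
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)      # rectangular frame fitted to the edge
    if w * h > 100:                         # drop tiny fragments (assumed threshold)
        mean = float(brightness_map[y:y + h, x:x + w].mean())
        regions.append(((x, y, w, h), mean))  # store rectangle and its average brightness
```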
Step S400 of this application calculates the target brightness corresponding to each image area according to the brightness distribution of the scene image; specifically, the average brightness value of each image area is calculated, and the target brightness is calculated from the average brightness value. If the entire HUD display image is treated as one image area, the average brightness of the whole image is calculated; if the HUD display image is divided into several areas, the average brightness of each area is calculated separately. From the brightness of each area of the scene image, the target brightness that best matches the user's visual experience is calculated.
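The patent leaves the exact average-to-target mapping open. As a loudly-labeled placeholder, the sketch below uses a linear comfort curve that keeps the HUD content slightly brighter than its local background; the gain and offset are invented values, not from the patent.

```python
region_means = [35.0, 120.0, 210.0]  # example average brightness per image area

def target_brightness(region_mean: float,
                      gain: float = 1.2, offset: float = 10.0) -> float:
    """Placeholder comfort curve: target = gain * background mean + offset."""
    return gain * region_mean + offset

targets = [target_brightness(m) for m in region_means]
```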
Step S500 of this application adjusts the light source power corresponding to each image area according to that area's target brightness; adjusting the light source power changes the brightness of the image area so that its brightness value matches the target brightness.

Specifically, the preset brightness-light source power mapping data are obtained, the light source power reference value of each image area is obtained from that area's target brightness value, and the light-emitting elements of the corresponding image area of the HUD light source are driven with the light source power reference value. This application establishes the brightness-light source power mapping data from experimental data in advance: multiple sets of brightness and light source power data are collected over repeated experiments, the experimental points are connected to form the correspondence between brightness and light source power, and the data are then fitted to form a functional relationship. When the brightness needs to be adjusted, the corresponding light source power is determined from this functional relationship, and the light-emitting elements of the corresponding image area of the HUD light source are driven at that power.
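A sketch of the mapping step, assuming invented experimental pairs: the points are fitted to a smooth function (here a quadratic via numpy.polyfit) and looked up at the target brightness. set_led_power is a hypothetical stand-in for the hardware driver of the per-area light-emitting elements.

```python
import numpy as np

# Experimental (brightness, light source power) pairs -- invented placeholder data;
# in practice these come from the calibration experiments described above.
brightness_samples = np.array([20.0, 60.0, 120.0, 180.0, 240.0])
power_samples = np.array([0.5, 1.5, 3.2, 5.0, 7.1])  # watts

coeffs = np.polyfit(brightness_samples, power_samples, deg=2)  # fit to a function
brightness_to_power = np.poly1d(coeffs)

def set_led_power(area_id: int, watts: float) -> None:
    """Hypothetical driver call for the HUD light source's per-area elements."""
    print(f"area {area_id}: drive at {watts:.2f} W")

def drive_area(area_id: int, target: float) -> None:
    set_led_power(area_id, float(brightness_to_power(target)))  # power reference value
```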
To solve the technical problem, the present invention also provides a computer-readable storage medium storing a computer program which, when executed, implements the steps of the above adjustment method. The computer-readable storage medium may be any electronic, magnetic, optical, or other physical storage device and may contain or store information such as executable instructions and data. For example, it may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard drive), any type of storage disk (such as an optical disk or DVD), a similar storage medium, or a combination of them.
The present invention further provides a vision-based AR-HUD brightness adaptive adjustment system. Referring to Figures 5 and 6, the system includes a visual perception module 100, a display control module 200, a HUD display module 300, and a storage module 400:
The visual perception module 100 captures external environment information, that is, the scene image in front of the car window, and sends the scene image to the display control module 200.
The display control module 200 receives the external environment information sent by the visual perception module 100, evaluates the brightness of the external environment, and outputs the light source power signal for the HUD display according to that brightness.
The HUD display module 300 receives the output signal of the display control module 200 and projects the HUD display image at the light source power given by the output signal.
The storage module 400 stores the brightness-to-light-source-power mapping data. The display control module uses the external environment information sent by the visual perception module, together with the mapping data in the storage module, to obtain the corresponding light source power, which it sends to the HUD display module. The brightness-to-power mapping is a functional relationship established in advance from experiments characterizing the correspondence between the car's light source power and brightness; the measured points are connected to form the brightness-power correspondence, and the resulting function is loaded into the storage module for subsequent adjustment of the light source power.
The present invention further provides a vision-based AR-HUD brightness adaptive adjustment device. The device includes a HUD display unit, a controller, and a monocular or RGBD camera. The HUD display unit is mounted in the interior of the car and projects the HUD display image onto the windshield; the monocular or RGBD camera is mounted behind the windshield and captures the scene in front of the car window to obtain an RGB image. After the RGB image is converted and mapped, the brightness distribution of the scene image is obtained; the controller computes the target brightness for each image region from that brightness distribution, obtains the corresponding light source power from the stored brightness-to-power mapping, and drives the light-emitting elements of the corresponding image region of the HUD light source at that power.
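Putting the device together, a control loop along these lines would tie the camera to the HUD light source. Here capture_rgb, rgb_to_luminance, split_regions, and set_led_power are hypothetical stand-ins for the camera driver, colorimetric conversion, region segmentation, and backlight interface, none of which the patent names; region_target_brightness and power_for_target are the sketches given earlier.

def adjust_once():
    rgb = capture_rgb()             # frame from the windshield camera (assumed API)
    lum = rgb_to_luminance(rgb)     # per-pixel luminance estimate (assumed API)
    regions = split_regions(lum)    # one or more HUD image regions (assumed API)
    targets = region_target_brightness(lum, regions)
    for region, target in zip(regions, targets):
        # Drive that region's light-emitting elements at the looked-up power.
        set_led_power(region, power_for_target(target))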
With the vision-based AR-HUD brightness adaptive adjustment method of the present invention, the intensity of the ambient background light is quantified by visual perception, a brightness-to-light-source-power mapping is established around the brightness range the human eye finds comfortable, and the brightness of the final HUD image is adjusted accordingly. This dynamic adjustment of HUD image brightness not only improves the driver's overall experience but also lowers the overall power of the HUD, easing energy consumption and heat dissipation pressure.
The above is only an embodiment of the present invention and does not thereby limit the patent scope of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

  1. A vision-based AR-HUD brightness adaptive adjustment method, characterized in that it comprises:
    obtaining a scene image in front of a car window;
    generating a HUD display image according to the scene image, and obtaining the brightness distribution of the HUD display image;
    dividing the HUD display image into one or more image regions;
    calculating the target brightness corresponding to each of the image regions according to the brightness distribution of the scene image; and
    adjusting the light source power corresponding to each image region according to the target brightness corresponding to that image region.
  2. The method according to claim 1, characterized in that generating a HUD display image according to the scene image comprises:
    capturing the scene image in front of the car window, obtaining an RGB image and a depth image from the scene image, and generating the HUD display image from the RGB image and the depth image.
  3. The method according to claim 2, characterized in that obtaining the brightness distribution of the HUD display image comprises:
    converting the depth image into a 3D point cloud, and mapping the 3D point cloud onto the HUD display image to obtain the brightness distribution.
  4. The method according to claim 3, characterized in that dividing the HUD display image into one or more image regions comprises:
    treating the entire HUD display image as a single image region; or
    dividing the HUD display image into more than one region according to the brightness values in the brightness distribution.
  5. The method according to claim 4, characterized in that dividing the HUD display image into more than one region according to the brightness values in the brightness distribution comprises:
    extracting the edge information of the brightness distribution with an edge detection algorithm, and fitting the edge information with rectangular boxes to obtain a number of image regions.
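As a sketch of this claim, under the assumption that the edge detector is Canny (the claim leaves the algorithm open), OpenCV can extract the edges of the luminance map and fit a bounding rectangle to each resulting contour:

import cv2
import numpy as np

def regions_from_luminance(lum_map, low=50, high=150):
    # Normalize the luminance map to 8-bit, extract edges, and fit a
    # rectangular box around each external contour.
    img = cv2.normalize(lum_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(img, low, high)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]   # list of (x, y, w, h)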
  6. The method according to claim 4 or 5, characterized in that calculating the target brightness corresponding to each of the image regions according to the brightness distribution of the scene image comprises:
    calculating the average brightness value of each image region, and deriving the target brightness from the average brightness value.
  7. The method according to claim 6, characterized in that adjusting the light source power corresponding to the image region according to the target brightness corresponding to the image region comprises:
    obtaining preset brightness-to-light-source-power mapping data, obtaining a light source power reference value for each image region from its target brightness value, and driving the light-emitting elements of the corresponding image region of the HUD light source at the light source power reference value.
  8. The method according to claim 3, characterized in that the conversion formula for converting the depth image into the 3D point cloud is

    $$z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f/d_x & 0 & u_o\\ 0 & f/d_y & v_o\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}R & T\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}$$

    solved for (X_w, Y_w, Z_w) at each pixel (u, v), where z_c is the depth value of the depth image pixel, d_x and d_y are the physical sizes of a pixel on the sensor chip of the camera that captured the depth image, u_o and v_o are the center of the camera image plane, and f is the focal length of the camera; d_x, d_y, u_o, v_o, and f are the intrinsic parameters of the camera, and the 3×4 RT matrix [R T] is the extrinsic parameter matrix of the camera.
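A numerical sketch of this back-projection, assuming fx = f/d_x and fy = f/d_y and a 3×4 [R|T] extrinsic matrix as defined in the claim; the function and argument names are illustrative:

import numpy as np

def depth_to_points(depth, fx, fy, u0, v0, RT):
    # Lift every pixel (u, v) with depth z_c into the camera frame, then
    # solve p_w = R^T (p_c - T) to express the points in the world frame.
    v, u = np.indices(depth.shape)
    zc = depth.astype(np.float64)
    xc = (u - u0) * zc / fx
    yc = (v - v0) * zc / fy
    cam = np.stack([xc, yc, zc], axis=-1).reshape(-1, 3)
    R, T = RT[:, :3], RT[:, 3]
    return (cam - T) @ R        # row-vector form of R^T (p_c - T)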
  9. The method according to claim 8, characterized in that the conversion formula for mapping the 3D point cloud onto the image displayed by the HUD is

    $$\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\frac{1}{z_c}\begin{bmatrix}f/d_x & 0 & u_o\\ 0 & f/d_y & v_o\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}R & T\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}$$

    where d_x, d_y, u_o, v_o, and f are the intrinsic parameters of the camera, the 3×4 RT matrix is the extrinsic parameter matrix of the camera, and X_w, Y_w, and Z_w are the coordinates of the 3D point cloud.
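Continuing the same sketch, the forward mapping of claim 9 transforms world points through [R|T] and then applies the intrinsics; as before, the names are illustrative:

import numpy as np

def project_points(points_w, fx, fy, u0, v0, RT):
    # Transform world points into the camera frame, then apply the pinhole
    # intrinsics; points behind the camera (z <= 0) should be filtered first.
    homog = np.hstack([points_w, np.ones((len(points_w), 1))])
    cam = homog @ RT.T                          # 3D points in the camera frame
    u = fx * cam[:, 0] / cam[:, 2] + u0
    v = fy * cam[:, 1] / cam[:, 2] + v0
    return np.stack([u, v], axis=-1)            # pixel coordinates (u, v)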
  10. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed, implements the steps of the method according to any one of claims 1-9.
PCT/CN2021/086415 2020-04-29 2021-04-12 Vision-based adaptive ar-hud brightness adjustment method WO2021218602A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010356822.9 2020-04-29
CN202010356822.9A CN113573035A (en) 2020-04-29 2020-04-29 AR-HUD brightness self-adaptive adjusting method based on vision

Publications (1)

Publication Number Publication Date
WO2021218602A1 true WO2021218602A1 (en) 2021-11-04

Family

ID=78157755

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/086415 WO2021218602A1 (en) 2020-04-29 2021-04-12 Vision-based adaptive ar-hud brightness adjustment method

Country Status (2)

Country Link
CN (1) CN113573035A (en)
WO (1) WO2021218602A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108196366A (en) * 2018-01-03 2018-06-22 京东方科技集团股份有限公司 A kind of method and apparatus of adjusting display brightness
CN108847200A (en) * 2018-07-02 2018-11-20 京东方科技集团股份有限公司 Backlight adjusting method and device, head up display, system and storage medium
CN110120010A (en) * 2019-04-12 2019-08-13 嘉兴恒创电力集团有限公司博创物资分公司 A kind of stereo storage rack vision checking method and system based on camera image splicing
CN110264927A (en) * 2019-06-18 2019-09-20 上海蔚来汽车有限公司 Control method, device, controller and the storage medium of HUD display brightness

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100983A (en) * 2022-05-27 2022-09-23 中国第一汽车股份有限公司 Method, device and equipment for adjusting brightness of AR picture and storage medium
CN115866218A (en) * 2022-11-03 2023-03-28 重庆化工职业学院 Scene image fused vehicle-mounted AR-HUD brightness self-adaptive adjusting method
CN115866218B (en) * 2022-11-03 2024-04-16 重庆化工职业学院 Scene image fusion vehicle-mounted AR-HUD brightness self-adaptive adjustment method
CN116539285A (en) * 2023-07-06 2023-08-04 深圳市海塞姆科技有限公司 Light source detection method, device, equipment and storage medium based on artificial intelligence
CN116539285B (en) * 2023-07-06 2023-09-01 深圳市海塞姆科技有限公司 Light source detection method, device, equipment and storage medium based on artificial intelligence

Also Published As

Publication number Publication date
CN113573035A (en) 2021-10-29

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21795774; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 21795774; Country of ref document: EP; Kind code of ref document: A1)