WO2023241214A1 - Display method and apparatus, and electronic rearview mirror system - Google Patents

Display method and apparatus, and electronic rearview mirror system

Info

Publication number
WO2023241214A1
WO2023241214A1 (PCT/CN2023/089173; CN2023089173W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
fog
foggy
display method
response
Prior art date
Application number
PCT/CN2023/089173
Other languages
English (en)
French (fr)
Inventor
王淑琴
Original Assignee
中国第一汽车股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国第一汽车股份有限公司
Publication of WO2023241214A1

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/26: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the rear of the vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/12: Mirror assemblies combined with other articles, e.g. clocks
    • B60R2001/1253: Mirror assemblies combined with other articles, e.g. clocks with cameras, video cameras or video screens

Definitions

  • the present disclosure relates to the field of automobile safety technology, and in particular to a display method, device and electronic rearview mirror system.
  • the purpose of the present disclosure is to provide a display method, device and electronic rearview mirror system to increase the image accuracy of the electronic rearview mirror system in foggy conditions and improve driving safety.
  • the present disclosure provides a display method applied to an electronic rearview mirror system, wherein the display method includes:
  • a camera defogging signal is sent;
  • defogging is performed on the foggy image and a display signal is sent.
  • the display method also includes:
  • in response to the vehicle speed information being greater than zero, a camera defogging signal is sent while the foggy image is defogged and a display signal is sent.
  • the fog image judgment model includes:
  • the image to be determined is a foggy image
  • the image to be determined is a normal image.
  • the display method also includes:
  • the preset fog threshold is adjusted according to the vehicle speed information; wherein the preset fog threshold is negatively correlated with the vehicle speed information.
  • the display method also includes:
  • An initial machine learning model is trained through the training data set, and the fog image judgment model is obtained after the training.
  • the step of dehazing the hazy image and sending a display signal includes:
  • the gain image is processed through a second logarithmic function to obtain a dehazed image; wherein the second logarithmic function is used to compensate for the brightness reduced by the first logarithmic function.
  • the step of performing automatic gain adjustment and fusing the enhanced dark area image and the filtered image to obtain a gain image includes:
  • the filtered image and the enhanced dark area image are fused and then a second automatic gain adjustment is performed to obtain a second gain sub-image
  • the gain image is obtained by fusing the first gain sub-image and the second gain sub-image.
  • the present disclosure also provides a display device applied to an electronic rearview mirror system, including:
  • an acquisition module configured to acquire rearview images collected by a camera of the electronic rearview mirror system
  • a judgment module configured to use a predetermined fog image judgment model to judge whether the rear view image is a foggy image
  • a fog source analysis module configured to, in response to the rear view image being a foggy image, obtain and determine the fog source causing the foggy image based on the fog condition information of the corresponding area of the rear view image;
  • a processing module configured to: in response to the fog source being lens fog, send a camera defogging signal;
  • defogging is performed on the foggy image and a display signal is sent.
  • the display device further includes:
  • the training module is configured to: obtain several historical back-view images under foggy conditions and several historical back-view images under normal conditions;
  • An initial machine learning model is trained through the training data set, and the fog image judgment model is obtained after the training.
  • the present disclosure also provides an electronic rearview mirror system, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements any of the methods described above.
  • the display method and device and the electronic rearview mirror system provided by the present disclosure acquire the rearview image collected by the camera of the electronic rearview mirror system; use the predetermined fog image judgment model to determine whether the rear view image is a foggy image; in response to the rear view image being a foggy image, obtain fog condition information of the area corresponding to the rear view image and determine, on that basis, the fog source that caused the foggy image; in response to the fog source being lens fog, send a camera defogging signal; and in response to the fog source being environmental fog, perform defogging on the foggy image and send a display signal. In this way, foggy images are recognized and a matching processing method is determined for each fog source.
  • This method improves the recognition of objects in the image, ensures that the electronic rearview mirror system can achieve accurate display even in foggy conditions, and improves vehicle driving safety.
  • the entire process runs fully automatically without driver operation, avoiding the driver's distraction due to fog and further ensuring driving safety.
  • Figure 1 is a schematic flowchart of a display method provided by an embodiment of the present disclosure
  • Figure 2 is a schematic flowchart of constructing a fog image judgment model provided by an embodiment of the present disclosure
  • Figure 3 is a schematic flow chart of a defogging method provided by an embodiment of the present disclosure
  • Figure 4 is a partial structural schematic diagram of a display device provided by an embodiment of the present disclosure.
  • FIG. 5 is a partial structural schematic diagram of an electronic rearview mirror system provided by an embodiment of the present disclosure.
  • the present disclosure proposes a display method, which is applied to the electronic rearview mirror system to improve the recognition of objects in the rearview mirror image, and ultimately achieves the purpose of improving driving safety.
  • the display method includes:
  • the electronic rearview mirror system includes a camera and a display screen.
  • the camera is located on the lower front side of the front window of the vehicle, which is basically the same position as the physical rearview mirror.
  • the display screen is usually located inside the cab for easy observation by the driver, and is not specifically limited here.
  • S102 Use a predetermined fog image judgment model to judge whether the rear view image is a foggy image
  • the predetermined fog image judgment model may be a fog image judgment rule or a pre-trained fog image judgment model.
  • fog can be graded by horizontal visibility distance. For example, if the horizontal visibility distance is between 1 and 10 kilometers, it is light fog; if the horizontal visibility distance is less than 1 kilometer, it is fog; if the horizontal visibility distance is between 200 and 500 meters, it is heavy fog.
  • the foggy image here should be understood as the rearview mirror image that affects the driver's judgment of real-world objects, and should not be understood as the rearview mirror image formed in any foggy situation.
  • the normal image should be understood as a rearview mirror image that does not affect the driver's judgment of objects in the real world, and should not be understood as a rearview mirror image that does not include any fog.
  • the display signal can be sent directly.
  • the display signal may be used by the display screen to display the rearview mirror image.
  • S1032 In response to the rear view image being a foggy image, obtain and determine the fog source causing the foggy image based on the fog condition information of the corresponding area of the rear view image;
  • the regional fog condition information can be obtained by retrieving weather information over the Internet or from a vehicle-mounted fog sensor; there is no specific limitation here.
  • the fog sources of foggy images are environmental fog and lens fog respectively.
  • environmental fog refers to high air humidity and fog in the environmental space
  • lens fog refers to fog on the camera. If the regional fog information shows that the environmental space is foggy, the fog source is environmental fog; otherwise, the fog source is lens fog.
  • the fog source may contain both environmental fog and lens fog. Since environmental fog cannot be eliminated, it can be processed according to environmental fog.
  • Because fog condition information is relatively complex and tends to be scattered and regional, this is especially pronounced in mountainous areas.
  • Taking the rear view image as the starting point, first judging whether the rear view image is foggy and then obtaining the regional fog condition information not only allows lens fog to be monitored effectively, but also allows the regional fog condition information to be obtained in a targeted manner, avoiding interference with the display method from blindly acquired fog condition information and effectively ensuring the stability and reliability of the display method.
  • the electronic rearview mirror system includes camera defogging components such as heating pipes.
  • when the controller of the heating tube receives the camera defogging signal, it can defog the camera by activating the heating tube.
  • heating tube here is only an example and does not limit the defogging components of the camera. Those skilled in the art can also use other defogging methods and components for defogging, which will not be described in detail here.
  • S1042 In response to the fog source being environmental fog, perform defogging processing on the foggy image and send a display signal.
  • the defogging method may be a defogging algorithm based on image enhancement, a defogging algorithm based on image restoration, or a defogging algorithm based on deep learning, etc., which will not be described in detail here.
  • a display signal is emitted based on the defogged image and displayed on the display screen, which helps the driver identify real-world objects and improves driving safety.
  • the display method further includes:
  • the method of obtaining vehicle speed information is an existing technology and will not be described again here.
  • in response to the vehicle speed information being greater than zero, a camera defogging signal is sent while the foggy image is defogged and a display signal is sent.
  • the fog image judgment model includes:
  • the image to be determined is a foggy image
  • the image to be determined is a normal image.
  • the concentration of fog is directly proportional to the difference between brightness and saturation.
  • in an image taken in clear weather, the difference between brightness and saturation is close to zero, such as 0.18%, while in an image taken on a foggy day the difference between brightness and saturation is large, such as 6.36%.
  • the preset fog threshold may be 3%, 5%, etc., which will not be given here. Those skilled in the art can adjust the preset fog threshold according to the visibility requirements during driving.
  • the display method also includes:
  • the preset fog threshold is adjusted according to the vehicle speed information; wherein the preset fog threshold is negatively correlated with the vehicle speed information.
  • when the vehicle speed increases, the preset fog threshold is lowered, which makes the foggy-image judgment more sensitive.
  • Some rearview images that are suitable for low-speed driving but not for high-speed driving can then be judged as foggy images and displayed after defogging, which helps ensure safety during high-speed driving.
  • Conversely, when the vehicle speed decreases, the preset fog threshold is raised, which makes the foggy-image judgment stricter.
  • Rearview images that would be judged as foggy under high-speed driving conditions can then be processed as normal images, which saves computing resources while still ensuring driving safety.
  • the display method further includes:
  • the foggy condition refers to a state that interferes with the driver's recognition of real-world objects, rather than any foggy state.
  • normal conditions refer to a state that does not interfere with the driver's recognition of real-world objects. It does not specifically refer to a clear state, but can also be a light fog state.
  • S203 Train an initial machine learning model through the training data set, and obtain the fog image judgment model after the training.
  • the initial machine learning model can be a neural network model, such as LeNet model, AlexNet model, GoogLeNet model, etc.
  • As an alternative embodiment, S201, obtaining several historical rear-view images under foggy conditions and several historical rear-view images under normal conditions, can be replaced by: obtaining several historical rear-view images at different vehicle speeds under foggy conditions and several historical rear-view images at different vehicle speeds under normal conditions. In this case, the resulting fog image judgment model also takes the vehicle speed factor into account.
  • the present disclosure also provides a specific defogging method.
  • the steps of dehazing the hazy image and sending a display signal include:
  • S301 Process the foggy image through the first logarithmic function to obtain an enhanced dark area image; such processing can remove interfering signals, which is beneficial to the processing of step S302, and the overall effect of the image will become darker.
  • S302 Use a guided filtering algorithm to filter the enhanced dark area image to obtain a filtered image; in this way, a clear outline of the image can be obtained.
  • for an input image p, the output image q is obtained after filtering guided by the guidance image I, where p and I are both inputs to the algorithm.
  • the guidance image here can be the same as the enhanced dark area image, in which case the algorithm becomes an edge-preserving filter.
  • S303 Perform automatic gain adjustment and fusion on the enhanced dark area image and the filtered image to obtain a gain image; here, automatic gain control enhances the brightness of the dark areas and can prevent the image from flickering between bright and dark due to overshoot in the image-processing adjustment.
  • S304 Process the gain image through a second logarithmic function to obtain a dehazed image; wherein the second logarithmic function is used to compensate for the brightness reduced by the first logarithmic function. Finally, the image exposure is increased to compensate for the overall darkening introduced in step S301, achieving an overall brightness enhancement.
  • the step of performing automatic gain adjustment and fusing the enhanced dark area image and the filtered image to obtain a gain image includes:
  • the filtered image is subjected to the first automatic gain adjustment to obtain the first gain sub-image; here, the first automatic gain adjustment can be Boost gain/Tone curve, which is not limited here.
  • the filtered image and the enhanced dark area image are fused and then a second automatic gain adjustment is performed to obtain a second gain sub-image;
  • the second automatic gain adjustment can be Gain/Coring, which is not limited here.
  • the gain image is obtained by fusing the first gain sub-image and the second gain sub-image.
  • the methods in the embodiments of the present disclosure can be executed by a single device, such as a computer or server.
  • the method of this embodiment can also be applied in a distributed scenario, and is completed by multiple devices cooperating with each other.
  • one device among the multiple devices may perform only one or more steps of the method of the embodiments of the present disclosure, and the multiple devices interact with each other to complete the described method.
  • the present disclosure also provides a display device.
  • the display device is applied to an electronic rearview mirror system and includes:
  • the acquisition module 401 is configured to acquire the rear view image collected by the camera of the electronic rearview mirror system
  • the judgment module 402 is configured to use a predetermined fog image judgment model to judge whether the rear view image is a foggy image;
  • the fog source analysis module 403 is configured to, in response to the rear view image being a foggy image, obtain and determine the fog source causing the foggy image based on the fog condition information of the corresponding area of the rear view image;
  • the processing module 404 is configured to: in response to the fog source being lens fog, send a camera defogging signal;
  • defogging is performed on the foggy image and a display signal is sent.
  • the acquisition module 401 is also configured to acquire vehicle speed information
  • the processing module is further configured to: in response to the vehicle speed information being greater than zero, send a camera defogging signal, perform defogging processing on the foggy image and send out a display signal.
  • the fog image judgment model includes:
  • the image to be determined is a foggy image
  • the image to be determined is a normal image.
  • it also includes:
  • the acquisition module 401 is also configured to acquire vehicle speed information
  • the judgment module 402 is further configured to adjust the preset fog threshold according to the vehicle speed information; wherein the preset fog threshold is negatively correlated with the vehicle speed information.
  • the display device further includes:
  • the training module is configured to: obtain several historical back-view images under foggy conditions and several historical back-view images under normal conditions;
  • processing module 404 is configured to:
  • the gain image is processed through a second logarithmic function to obtain a dehazed image; wherein the second logarithmic function is used to compensate for the brightness reduced by the first logarithmic function.
  • processing module 404 is configured to:
  • the filtered image and the enhanced dark area image are fused and then a second automatic gain adjustment is performed to obtain a second gain sub-image
  • the present disclosure also provides an electronic rearview mirror system, including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • the processor executes the program, the display method described in any of the above embodiments is implemented.
  • the processor 1010 can be implemented using a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute related programs to implement the technical solutions provided by the embodiments of this specification.
  • Bus 1050 includes a path that carries information between various components of the device (eg, processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
  • although the above device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in specific implementations the device may also include other components necessary for normal operation.
  • the above-mentioned device may only include components necessary to implement the embodiments of this specification, and does not necessarily include all components shown in the drawings.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present disclosure provides a display method and apparatus, and an electronic rearview mirror system. Specifically, the display method includes: acquiring a rear view image collected by a camera of the electronic rearview mirror system; using a predetermined fog image judgment model to judge whether the rear view image is a foggy image; in response to the rear view image being a foggy image, obtaining fog condition information of the area corresponding to the rear view image and determining, on that basis, the fog source causing the foggy image; in response to the fog source being lens fog, sending a camera defogging signal; and in response to the fog source being environmental fog, performing defogging on the foggy image and sending a display signal. In this way, foggy images are recognized and a matching processing method is determined for each fog source, which improves the recognizability of objects in the image, ensures that the electronic rearview mirror system can display accurately even in foggy conditions, and improves vehicle driving safety.

Description

Display method and apparatus, and electronic rearview mirror system
Technical Field
The present disclosure relates to the field of automobile safety technology, and in particular to a display method and apparatus, and an electronic rearview mirror system.
Background
With the continuous development of automotive electrical and electronic technology and camera technology, the application fields of cameras are gradually increasing, and there is a trend for physical rearview mirrors to be gradually replaced by electronic rearview mirror systems. Compared with a traditional physical rearview mirror, an electronic rearview mirror has many advantages: in the rain, the view is not affected by rainwater hitting the window glass and the mirror; when external light is poor, the low-illumination capability of the camera makes it easier to see objects clearly; the display can follow the rotation of the camera; and so on.
Even so, when the electronic rearview mirror system fogs up, objects are difficult to recognize because of the fog, which also affects driving safety.
Summary
In view of this, the purpose of the present disclosure is to provide a display method and apparatus, and an electronic rearview mirror system, so as to increase the image accuracy of the electronic rearview mirror system in foggy conditions and improve driving safety.
Based on the above purpose, in a first aspect, the present disclosure provides a display method applied to an electronic rearview mirror system, wherein the display method includes:
acquiring a rear view image collected by a camera of the electronic rearview mirror system;
using a predetermined fog image judgment model to judge whether the rear view image is a foggy image;
in response to the rear view image being a foggy image, obtaining fog condition information of the area corresponding to the rear view image and determining, on that basis, the fog source causing the foggy image;
in response to the fog source being lens fog, sending a camera defogging signal;
in response to the fog source being environmental fog, performing defogging on the foggy image and sending a display signal.
Further, the display method also includes:
acquiring vehicle speed information;
in response to the vehicle speed information being greater than zero, sending a camera defogging signal while performing defogging on the foggy image and sending a display signal.
Further, the fog image judgment model includes:
obtaining the brightness and saturation of an image to be judged;
calculating the difference between the brightness and the saturation, and comparing the difference with a preset fog threshold;
in response to the difference exceeding the preset fog threshold, the image to be judged is a foggy image;
in response to the difference not exceeding the preset fog threshold, the image to be judged is a normal image.
Further, the display method also includes:
acquiring vehicle speed information;
adjusting the preset fog threshold according to the vehicle speed information, wherein the preset fog threshold is negatively correlated with the vehicle speed information.
Further, the display method also includes:
obtaining several historical rear view images under foggy conditions and several historical rear view images under normal conditions;
extracting vehicle-body sub-images from the historical rear view images to form a training data set;
training an initial machine learning model with the training data set, and obtaining the fog image judgment model after the training.
Further, the step of performing defogging on the foggy image and sending a display signal includes:
processing the foggy image through a first logarithmic function to obtain an enhanced dark area image;
filtering the enhanced dark area image using a guided filtering algorithm to obtain a filtered image;
performing automatic gain adjustment on the enhanced dark area image and the filtered image and fusing them to obtain a gain image;
processing the gain image through a second logarithmic function to obtain a defogged image, wherein the second logarithmic function is used to compensate for the brightness reduced by the first logarithmic function.
Further, the step of performing automatic gain adjustment on the enhanced dark area image and the filtered image and fusing them to obtain a gain image includes:
performing a first automatic gain adjustment on the filtered image to obtain a first gain sub-image;
fusing the filtered image and the enhanced dark area image and then performing a second automatic gain adjustment to obtain a second gain sub-image;
fusing the first gain sub-image and the second gain sub-image to obtain the gain image.
In a second aspect, the present disclosure also provides a display device applied to an electronic rearview mirror system, including:
an acquisition module configured to acquire a rear view image collected by a camera of the electronic rearview mirror system;
a judgment module configured to use a predetermined fog image judgment model to judge whether the rear view image is a foggy image;
a fog source analysis module configured to, in response to the rear view image being a foggy image, obtain fog condition information of the area corresponding to the rear view image and determine, on that basis, the fog source causing the foggy image;
a processing module configured to: in response to the fog source being lens fog, send a camera defogging signal;
and in response to the fog source being environmental fog, perform defogging on the foggy image and send a display signal.
Further, the display device also includes:
a training module configured to: obtain several historical rear view images under foggy conditions and several historical rear view images under normal conditions;
extract vehicle-body sub-images from the historical rear view images to form a training data set;
train an initial machine learning model with the training data set, and obtain the fog image judgment model after the training.
In a third aspect, the present disclosure also provides an electronic rearview mirror system, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements any one of the methods described above.
As can be seen from the above, the display method and apparatus and the electronic rearview mirror system provided by the present disclosure acquire a rear view image collected by a camera of the electronic rearview mirror system; use a predetermined fog image judgment model to judge whether the rear view image is a foggy image; in response to the rear view image being a foggy image, obtain fog condition information of the area corresponding to the rear view image and determine, on that basis, the fog source causing the foggy image; in response to the fog source being lens fog, send a camera defogging signal; and in response to the fog source being environmental fog, perform defogging on the foggy image and send a display signal. In this way, foggy images are recognized and a matching processing method is determined for each fog source, which improves the recognizability of objects in the image, ensures that the electronic rearview mirror system can display accurately even in foggy conditions, and improves vehicle driving safety. In addition, the entire process runs fully automatically without driver operation, avoiding driver distraction due to fogging and further ensuring driving safety.
Brief Description of the Drawings
In order to explain the technical solutions of the present disclosure or the related art more clearly, the drawings needed in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present disclosure; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1 is a schematic flowchart of a display method provided by an embodiment of the present disclosure;
Figure 2 is a schematic flowchart of constructing a fog image judgment model provided by an embodiment of the present disclosure;
Figure 3 is a schematic flowchart of a defogging method provided by an embodiment of the present disclosure;
Figure 4 is a partial structural schematic diagram of a display device provided by an embodiment of the present disclosure;
Figure 5 is a partial structural schematic diagram of an electronic rearview mirror system provided by an embodiment of the present disclosure.
Detailed Description
To make the purpose, technical solutions, and advantages of the present disclosure clearer, the present disclosure is further described in detail below with reference to specific embodiments and the accompanying drawings.
It should be noted that, unless otherwise defined, the technical or scientific terms used in the embodiments of the present disclosure shall have the ordinary meanings understood by persons of ordinary skill in the art to which the present disclosure belongs. "First", "second", and similar words used in the embodiments of the present disclosure do not denote any order, quantity, or importance, but are only used to distinguish different components. Words such as "include" or "comprise" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connect" or "connected" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
When the electronic rearview mirror system fogs up, objects are difficult to recognize because of the fog, which also affects driving safety. With environmental fogging, for example after rain or on autumn and winter mornings, the fog pervades the atmosphere, and the recognizability of objects cannot be improved by wiping the camera or the display screen. With camera fogging, for example when driving out of an underground garage onto a surface road in summer, the fog on the camera needs to be removed. However, an electronic rearview mirror system is devoted to reproducing images of real-world objects as faithfully as possible. When fogging occurs, a foggy image is likewise seen on the display screen; the system does not distinguish whether the image is foggy, let alone distinguish the source of the fog, so it cannot adopt a targeted solution, which creates a traffic hazard.
Therefore, in view of this situation, the present disclosure proposes a display method applied to an electronic rearview mirror system, which improves the recognizability of objects in the rearview mirror image and ultimately achieves the purpose of improving driving safety.
Referring to Figure 1, the display method includes:
S101: Acquire a rear view image collected by a camera of the electronic rearview mirror system.
Here, the electronic rearview mirror system includes a camera and a display screen. The camera is located at the lower front of the front-row window of the vehicle, essentially the same position as a physical rearview mirror. The display screen is usually located inside the cab for easy observation by the driver, and is not specifically limited here.
S102: Use a predetermined fog image judgment model to judge whether the rear view image is a foggy image.
It should be noted that the predetermined fog image judgment model may be a fog image judgment rule or a pre-trained fog image judgment model.
In real-world scenarios, fog can be graded by horizontal visibility distance. For example, if the horizontal visibility distance is between 1 and 10 kilometers, it is light fog; if the horizontal visibility distance is below 1 kilometer, it is fog; if the horizontal visibility distance is between 200 and 500 meters, it is heavy fog.
Those skilled in the art can understand that different grades of fog affect the driver's judgment of real-world objects differently. Therefore, the foggy image here should be understood as a rearview mirror image that affects the driver's judgment of real-world objects, and should not be understood as a rearview mirror image formed in any foggy situation.
S1031: In response to the rear view image being a normal image, send a display signal.
Here, corresponding to the aforementioned foggy image, a normal image should be understood as a rearview mirror image that does not affect the driver's judgment of real-world objects, and should not be understood as a rearview mirror image that contains no fog at all.
For a normal image, the display signal can be sent directly. Here, the display signal may be used by the display screen to display the rearview mirror image.
S1032: In response to the rear view image being a foggy image, obtain fog condition information of the area corresponding to the rear view image and determine, on that basis, the fog source causing the foggy image.
Here, the regional fog condition information can be obtained by retrieving weather information over the Internet or from a vehicle-mounted fog sensor, which is not specifically limited here.
It should be noted that the fog sources of foggy images are environmental fog and lens fog. Environmental fog means that the air humidity is high and there is fog in the environmental space; lens fog means that there is fog on the camera. If the regional fog condition information indicates that the environmental space is foggy, the fog source is environmental fog; otherwise, the fog source is lens fog.
It should be understood that the fog source may include both environmental fog and lens fog at the same time; since environmental fog cannot be eliminated, the case can then be handled as environmental fog.
Here, fog condition information is relatively complex and tends to be scattered and regional, which is especially pronounced in mountainous areas. Taking the rear view image as the starting point, first judging whether the rear view image is foggy and then obtaining the regional fog condition information not only allows lens fog to be monitored effectively, but also allows the regional fog condition information to be obtained in a targeted manner, avoiding interference with the display method from blindly acquired fog condition information and effectively guaranteeing the stability and reliability of the display method.
S1041: In response to the fog source being lens fog, send a camera defogging signal.
Here, the electronic rearview mirror system includes a camera defogging component, for example a heating tube. When the controller of the heating tube receives the camera defogging signal, it can defog the camera by activating the heating tube.
It should be understood that the heating tube here is only an example and does not limit the camera defogging component; those skilled in the art may also use other defogging methods and components, which will not be described in detail here.
S1042: In response to the fog source being environmental fog, perform defogging on the foggy image and send a display signal.
It should be noted that the defogging method may be a defogging algorithm based on image enhancement, a defogging algorithm based on image restoration, or a defogging algorithm based on deep learning, etc., which will not be described in detail here.
A display signal is sent based on the defogged image so that it is displayed on the display screen, which helps the driver identify real-world objects and improves driving safety.
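The control flow of steps S101 to S1042 can be summarized as a short dispatch routine. The following Python sketch only illustrates the logic described above and is not part of the disclosure; all of the callables it receives (the fog judgment, the regional fog query, the defogging signal, the image defogging and the display output) are hypothetical hooks standing in for the components described in this embodiment.

```python
from typing import Callable
import numpy as np

def handle_rear_view_frame(
    image: np.ndarray,
    is_foggy_image: Callable[[np.ndarray], bool],       # fog image judgment model (S102)
    environment_is_foggy: Callable[[], bool],            # regional fog info: weather API or fog sensor
    send_camera_defog_signal: Callable[[], None],        # triggers e.g. the heating tube (S1041)
    defog_image: Callable[[np.ndarray], np.ndarray],     # image defogging algorithm (S1042)
    send_display_signal: Callable[[np.ndarray], None],   # pushes the frame to the display screen
) -> None:
    """Illustrative dispatch for steps S101-S1042; every callable is a placeholder hook."""
    if not is_foggy_image(image):
        send_display_signal(image)                 # S1031: normal image, display directly
        return
    if environment_is_foggy():
        send_display_signal(defog_image(image))    # S1042: environmental fog -> defog the image
    else:
        send_camera_defog_signal()                 # S1041: lens fog -> physically defog the camera
```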
It can thus be seen that, in this way, foggy images are recognized and a matching processing method is determined for each fog source, which improves the recognizability of objects in the image, ensures that the electronic rearview mirror system can display accurately even in foggy conditions, and improves vehicle driving safety. In addition, the entire process runs fully automatically without driver operation, avoiding driver distraction due to fogging and further ensuring driving safety.
In some embodiments, the display method also includes:
acquiring vehicle speed information;
Here, the way of acquiring vehicle speed information is existing technology and will not be repeated here.
in response to the vehicle speed information being greater than zero, sending a camera defogging signal while performing defogging on the foggy image and sending a display signal.
Camera defogging takes time, possibly a few seconds or even a few minutes. When the vehicle is moving, simultaneously defogging the foggy image and sending a display signal helps ensure safe driving during the defogging period.
In some embodiments, the fog image judgment model includes:
obtaining the brightness and saturation of an image to be judged;
calculating the difference between the brightness and the saturation, and comparing the difference with a preset fog threshold;
in response to the difference exceeding the preset fog threshold, the image to be judged is a foggy image;
in response to the difference not exceeding the preset fog threshold, the image to be judged is a normal image.
It should be noted that the concentration of the fog is directly proportional to the difference between brightness and saturation: the larger the difference between brightness and saturation, the denser the fog and the lower the horizontal visibility; conversely, the smaller the difference between brightness and saturation, the thinner the fog and the higher the horizontal visibility.
For example, in an image taken in clear weather, the difference between brightness and saturation is close to zero, such as 0.18%, while in a foggy-day image the difference between brightness and saturation is large, such as 6.36%.
Optionally, the preset fog threshold may be 3%, 5%, etc.; no further examples are given here. Those skilled in the art can adjust the preset fog threshold according to the visibility requirements during driving.
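As a minimal sketch of the judgment rule above, assuming the rear view image is an 8-bit BGR frame, the mean brightness and mean saturation can be taken from an HSV conversion and their difference (as a percentage) compared with the preset fog threshold. The 5% default below is merely one of the example values mentioned above.

```python
import cv2
import numpy as np

def is_foggy_image(image_bgr: np.ndarray, fog_threshold_pct: float = 5.0) -> bool:
    """Judge a frame as foggy when mean(brightness) - mean(saturation) exceeds the threshold."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1].astype(np.float32) / 255.0
    brightness = hsv[:, :, 2].astype(np.float32) / 255.0
    # In a clear image the two means are close (difference near zero); in a foggy
    # image brightness stays high while saturation collapses, so the gap widens.
    difference_pct = float(brightness.mean() - saturation.mean()) * 100.0
    return difference_pct > fog_threshold_pct
```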
Further, the display method also includes:
acquiring vehicle speed information;
adjusting the preset fog threshold according to the vehicle speed information, wherein the preset fog threshold is negatively correlated with the vehicle speed information.
It should be understood that as the vehicle speed increases, the requirement for horizontal visibility increases; as the vehicle speed decreases, the requirement for horizontal visibility decreases.
When the vehicle speed increases, the preset fog threshold is lowered, which makes the foggy-image judgment more sensitive: some rearview images that are suitable for low-speed driving but not for high-speed driving can then be judged as foggy images and displayed after defogging, which helps ensure safety during high-speed driving.
Conversely, when the vehicle speed decreases, the preset fog threshold is raised, which makes the foggy-image judgment stricter: some rearview images that would be judged as foggy under high-speed driving conditions can then be processed as normal images, which saves computing resources while still ensuring driving safety.
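One possible way to realize this negative correlation is a simple linear ramp between a low-speed threshold and a high-speed threshold, as sketched below; the specific speed breakpoints (30 km/h and 120 km/h) and threshold values are illustrative assumptions, not values given by the disclosure.

```python
def adjust_fog_threshold(speed_kmh: float,
                         low_speed_threshold_pct: float = 5.0,
                         high_speed_threshold_pct: float = 3.0,
                         low_speed_kmh: float = 30.0,
                         high_speed_kmh: float = 120.0) -> float:
    """Return a preset fog threshold that decreases as the vehicle speed increases."""
    if speed_kmh <= low_speed_kmh:
        return low_speed_threshold_pct
    if speed_kmh >= high_speed_kmh:
        return high_speed_threshold_pct
    ratio = (speed_kmh - low_speed_kmh) / (high_speed_kmh - low_speed_kmh)
    return low_speed_threshold_pct - ratio * (low_speed_threshold_pct - high_speed_threshold_pct)
```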
In some embodiments, as shown in Figure 2, the display method also includes:
S201: Obtain several historical rear view images under foggy conditions and several historical rear view images under normal conditions.
Here, the foggy condition refers to a state that interferes with the driver's recognition of real-world objects, not any foggy state. Correspondingly, the normal condition refers to a state that does not interfere with the driver's recognition of real-world objects; it does not refer exclusively to a clear state and may also be a light-fog state.
S202: Extract vehicle-body sub-images from the historical rear view images to form a training data set.
It should be noted that the objects in historical rear view images are diverse; extracting vehicle-body sub-images can reduce the difficulty of training and improve training efficiency and accuracy.
S203: Train an initial machine learning model with the training data set, and obtain the fog image judgment model after the training.
Here, the initial machine learning model may be a neural network model, for example a LeNet model, an AlexNet model, a GoogLeNet model, etc.
Further, a vehicle-body sub-image is extracted from a rear view image and input into the fog image judgment model, whereby it can be judged whether the rear view image is a foggy image.
As an alternative embodiment, S201, obtaining several historical rear view images under foggy conditions and several historical rear view images under normal conditions, can be replaced by: obtaining several historical rear view images at different vehicle speeds under foggy conditions and several historical rear view images at different vehicle speeds under normal conditions. In this case, the resulting fog image judgment model also takes the vehicle speed factor into account.
When judging a rear view image, vehicle speed information also needs to be acquired, and the fog image judgment model is used to judge the rear view image under that vehicle speed condition.
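A compact sketch of steps S201 to S203 follows, assuming the vehicle-body sub-images have already been cropped into foggy/ and normal/ sub-folders and using a small LeNet-style network in PyTorch as the initial machine learning model; the folder layout, image size and training schedule are assumptions made only for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# S201/S202: body_crops/foggy and body_crops/normal hold cropped vehicle-body sub-images (assumed layout).
transform = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
train_set = datasets.ImageFolder("body_crops/", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# S203: a LeNet-style binary classifier standing in for the initial machine learning model.
model = nn.Sequential(
    nn.Conv2d(3, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 13 * 13, 120), nn.ReLU(),
    nn.Linear(120, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:   # labels: 0 = foggy, 1 = normal (ImageFolder alphabetical order)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```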
In some embodiments, the present disclosure also provides a specific defogging method.
Referring to Figure 3, the step of performing defogging on the foggy image and sending a display signal includes:
S301: Process the foggy image through a first logarithmic function to obtain an enhanced dark area image. This processing removes interfering signals, which benefits the processing in step S302, and the overall image becomes darker.
S302: Filter the enhanced dark area image using a guided filtering algorithm to obtain a filtered image. In this way, a clear outline of the image can be obtained.
It should be noted that, for an input image p, an output image q is obtained after filtering guided by a guidance image I, where p and I are both inputs to the algorithm.
For example, the guidance image here can be the same as the enhanced dark area image, in which case the algorithm becomes an edge-preserving filter.
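The guided filter mentioned above can be written directly from box filters (the classical local-linear-model formulation). The sketch below assumes a single-channel float image in [0, 1]; the window radius and the regularization term eps are illustrative choices, and using the enhanced dark area image as its own guidance image gives the edge-preserving case described in the text.

```python
import cv2
import numpy as np

def guided_filter(p: np.ndarray, I: np.ndarray, radius: int = 8, eps: float = 1e-3) -> np.ndarray:
    """Guided filtering of input p with guidance I (both float32, single channel, values in [0, 1])."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, ddepth=-1, ksize=ksize)   # normalized box filter = local mean

    mean_I, mean_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mean_I * mean_p
    var_I = mean(I * I) - mean_I * mean_I

    a = cov_Ip / (var_I + eps)        # local linear model q = a * I + b
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)      # output image q

# Edge-preserving case: guidance image equals the input (the enhanced dark area image).
# filtered = guided_filter(dark_enhanced, dark_enhanced)
```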
S303: Perform automatic gain adjustment on the enhanced dark area image and the filtered image and fuse them to obtain a gain image. Here, automatic gain control enhances the brightness of the dark areas and can prevent the image from flickering between bright and dark due to overshoot in the image-processing adjustment.
S304: Process the gain image through a second logarithmic function to obtain a defogged image, wherein the second logarithmic function is used to compensate for the brightness reduced by the first logarithmic function. Finally, the image exposure is increased to compensate for the overall darkening introduced in S301, achieving an overall brightness enhancement.
In some embodiments, the step of performing automatic gain adjustment on the enhanced dark area image and the filtered image and fusing them to obtain a gain image includes:
performing a first automatic gain adjustment on the filtered image to obtain a first gain sub-image; here, the first automatic gain adjustment may be Boost gain/Tone curve, which is not limited here;
fusing the filtered image and the enhanced dark area image and then performing a second automatic gain adjustment to obtain a second gain sub-image; here, the second automatic gain adjustment may be Gain/Coring, which is not limited here;
fusing the first gain sub-image and the second gain sub-image to obtain the gain image.
Using multiple automatic gain adjustments and image fusion can effectively guarantee the effect of the automatic gain.
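Putting steps S301 to S304 together, a minimal sketch for a single-channel float image in [0, 1] might look as follows; it reuses the guided_filter sketch above. The particular logarithmic curves, gain factors and 50/50 fusion weights are illustrative assumptions, since the disclosure names the operations (Boost gain/Tone curve, Gain/Coring) without fixing their parameters.

```python
import numpy as np

def log_darken(x: np.ndarray) -> np.ndarray:
    """S301: first logarithmic function; log1p(x) < x on (0, 1], so the frame becomes darker overall."""
    return np.log1p(x)

def log_brighten(x: np.ndarray, a: float = 6.0) -> np.ndarray:
    """S304: second logarithmic tone curve that lifts the brightness back up (illustrative)."""
    return np.log1p(a * x) / np.log1p(a)

def auto_gain(x: np.ndarray, gain: float) -> np.ndarray:
    """Very small stand-in for the automatic gain stages (Boost gain / Gain)."""
    return np.clip(x * gain, 0.0, 1.0)

def defog_image(foggy: np.ndarray) -> np.ndarray:
    """Illustrative S301-S304 pipeline for a single-channel float image in [0, 1]."""
    dark_enhanced = log_darken(foggy)                                  # S301
    filtered = guided_filter(dark_enhanced, dark_enhanced)             # S302 (sketch above)
    gain_sub_1 = auto_gain(filtered, gain=1.4)                         # first AGC on the filtered image
    gain_sub_2 = auto_gain(0.5 * filtered + 0.5 * dark_enhanced, 1.2)  # fuse, then second AGC
    gain_image = 0.5 * gain_sub_1 + 0.5 * gain_sub_2                   # S303: fuse the two sub-images
    return log_brighten(gain_image)                                    # S304: compensate the darkening
```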
It should be noted that the methods of the embodiments of the present disclosure can be executed by a single device, for example a computer or a server. The method of this embodiment can also be applied in a distributed scenario and completed by multiple devices cooperating with each other. In such a distributed scenario, one of the multiple devices may perform only one or more steps of the method of the embodiments of the present disclosure, and the multiple devices interact with each other to complete the described method.
It should be noted that some embodiments of the present disclosure have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the above embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the specific order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
Based on the same inventive concept, corresponding to the method of any of the above embodiments, the present disclosure also provides a display device.
Referring to Figure 4, the display device is applied to an electronic rearview mirror system and includes:
an acquisition module 401 configured to acquire a rear view image collected by a camera of the electronic rearview mirror system;
a judgment module 402 configured to use a predetermined fog image judgment model to judge whether the rear view image is a foggy image;
a fog source analysis module 403 configured to, in response to the rear view image being a foggy image, obtain fog condition information of the area corresponding to the rear view image and determine, on that basis, the fog source causing the foggy image;
a processing module 404 configured to: in response to the fog source being lens fog, send a camera defogging signal;
and in response to the fog source being environmental fog, perform defogging on the foggy image and send a display signal.
In some embodiments, the acquisition module 401 is also configured to acquire vehicle speed information;
the processing module is also configured to: in response to the vehicle speed information being greater than zero, send a camera defogging signal while performing defogging on the foggy image and sending a display signal.
In some embodiments, the fog image judgment model includes:
obtaining the brightness and saturation of an image to be judged;
calculating the difference between the brightness and the saturation, and comparing the difference with a preset fog threshold;
in response to the difference exceeding the preset fog threshold, the image to be judged is a foggy image;
in response to the difference not exceeding the preset fog threshold, the image to be judged is a normal image.
In some embodiments, the following is also included:
the acquisition module 401 is also configured to acquire vehicle speed information;
the judgment module 402 is also configured to adjust the preset fog threshold according to the vehicle speed information, wherein the preset fog threshold is negatively correlated with the vehicle speed information.
In some embodiments, the display device also includes:
a training module configured to: obtain several historical rear view images under foggy conditions and several historical rear view images under normal conditions;
extract vehicle-body sub-images from the historical rear view images to form a training data set;
train an initial machine learning model with the training data set, and obtain the fog image judgment model after the training.
In some embodiments, the processing module 404 is configured to:
process the foggy image through a first logarithmic function to obtain an enhanced dark area image;
filter the enhanced dark area image using a guided filtering algorithm to obtain a filtered image;
perform automatic gain adjustment on the enhanced dark area image and the filtered image and fuse them to obtain a gain image;
process the gain image through a second logarithmic function to obtain a defogged image, wherein the second logarithmic function is used to compensate for the brightness reduced by the first logarithmic function.
In some embodiments, the processing module 404 is configured to:
perform a first automatic gain adjustment on the filtered image to obtain a first gain sub-image;
fuse the filtered image and the enhanced dark area image and then perform a second automatic gain adjustment to obtain a second gain sub-image;
fuse the first gain sub-image and the second gain sub-image to obtain the gain image.
For convenience of description, the above device is described with its functions divided into various modules. Of course, when implementing the present disclosure, the functions of the modules may be implemented in one or more pieces of software and/or hardware.
The device of the above embodiment is used to implement the corresponding display method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which will not be repeated here.
Based on the same inventive concept, corresponding to the method of any of the above embodiments, the present disclosure also provides an electronic rearview mirror system, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the display method of any of the above embodiments.
Figure 5 shows a more specific schematic diagram of part of the hardware structure of the electronic rearview mirror system provided by this embodiment. The device may include a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. The processor 1010, the memory 1020, the input/output interface 1030, and the communication interface 1040 are communicatively connected to one another within the device through the bus 1050.
The processor 1010 can be implemented using a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute related programs to implement the technical solutions provided by the embodiments of this specification.
The memory 1020 can be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, etc. The memory 1020 can store an operating system and other application programs. When the technical solutions provided by the embodiments of this specification are implemented through software or firmware, the relevant program code is stored in the memory 1020 and called and executed by the processor 1010.
The input/output interface 1030 is used to connect input/output modules to realize information input and output. The input/output modules can be configured in the device as components (not shown in the figure) or can be externally connected to the device to provide corresponding functions. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, indicator lights, etc.
The communication interface 1040 is used to connect a communication module (not shown in the figure) to realize communication interaction between this device and other devices. The communication module can communicate in a wired manner (for example USB, network cable, etc.) or in a wireless manner (for example mobile network, WIFI, Bluetooth, etc.).
The bus 1050 includes a path that transmits information between the various components of the device (for example the processor 1010, the memory 1020, the input/output interface 1030, and the communication interface 1040).
It should be noted that although the above device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040, and the bus 1050, in specific implementations the device may also include other components necessary for normal operation. In addition, those skilled in the art can understand that the above device may also include only the components necessary to implement the solutions of the embodiments of this specification, and does not necessarily include all the components shown in the figure.
The electronic rearview mirror system of the above embodiment is used to implement the corresponding display method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which will not be repeated here.
Based on the same inventive concept, corresponding to the method of any of the above embodiments, the present disclosure also provides a non-transitory computer-readable storage medium, the non-transitory computer-readable storage medium storing computer instructions, the computer instructions being used to cause the computer to execute the display method of any of the above embodiments.
The computer-readable media of this embodiment include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
The computer instructions stored in the storage medium of the above embodiment are used to cause the computer to execute the display method of any of the above embodiments, and have the beneficial effects of the corresponding method embodiment, which will not be repeated here.
Those of ordinary skill in the art should understand that the discussion of any of the above embodiments is merely exemplary and is not intended to imply that the scope of the present disclosure (including the claims) is limited to these examples. Under the concept of the present disclosure, the technical features of the above embodiments or of different embodiments may also be combined, the steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the present disclosure as described above, which are not provided in detail for the sake of brevity.
In addition, to simplify the description and discussion, and so as not to obscure the embodiments of the present disclosure, well-known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown in the provided drawings. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present disclosure, and this also takes into account the fact that details regarding the implementation of these block diagram devices are highly dependent on the platform on which the embodiments of the present disclosure are to be implemented (i.e., these details should be well within the understanding of those skilled in the art). Where specific details (for example, circuits) are set forth to describe exemplary embodiments of the present disclosure, it will be apparent to those skilled in the art that the embodiments of the present disclosure can be implemented without these specific details or with variations of these specific details. Therefore, these descriptions should be regarded as illustrative rather than restrictive.
Although the present disclosure has been described in conjunction with specific embodiments thereof, many substitutions, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art from the foregoing description. For example, other memory architectures (for example, dynamic RAM (DRAM)) may use the embodiments discussed.
The embodiments of the present disclosure are intended to cover all such substitutions, modifications, and variations that fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalent substitutions, improvements, etc. made within the spirit and principles of the embodiments of the present disclosure shall be included within the protection scope of the present disclosure.

Claims (10)

  1. A display method applied to an electronic rearview mirror system, wherein the display method comprises:
    acquiring a rear view image collected by a camera of the electronic rearview mirror system;
    using a predetermined fog image judgment model to judge whether the rear view image is a foggy image;
    in response to the rear view image being a foggy image, obtaining fog condition information of the area corresponding to the rear view image and determining, on that basis, the fog source causing the foggy image;
    in response to the fog source being lens fog, sending a camera defogging signal;
    in response to the fog source being environmental fog, performing defogging on the foggy image and sending a display signal.
  2. The display method according to claim 1, wherein the display method further comprises:
    acquiring vehicle speed information;
    in response to the vehicle speed information being greater than zero, sending a camera defogging signal while performing defogging on the foggy image and sending a display signal.
  3. The display method according to claim 1, wherein the fog image judgment model comprises:
    obtaining the brightness and saturation of an image to be judged;
    calculating the difference between the brightness and the saturation, and comparing the difference with a preset fog threshold;
    in response to the difference exceeding the preset fog threshold, the image to be judged is a foggy image;
    in response to the difference not exceeding the preset fog threshold, the image to be judged is a normal image.
  4. The display method according to claim 3, wherein the display method further comprises:
    acquiring vehicle speed information;
    adjusting the preset fog threshold according to the vehicle speed information, wherein the preset fog threshold is negatively correlated with the vehicle speed information.
  5. The display method according to claim 1, wherein the display method further comprises:
    obtaining several historical rear view images under foggy conditions and several historical rear view images under normal conditions;
    extracting vehicle-body sub-images from the historical rear view images to form a training data set;
    training an initial machine learning model with the training data set, and obtaining the fog image judgment model after the training.
  6. The display method according to claim 1, wherein the step of performing defogging on the foggy image and sending a display signal comprises:
    processing the foggy image through a first logarithmic function to obtain an enhanced dark area image;
    filtering the enhanced dark area image using a guided filtering algorithm to obtain a filtered image;
    performing automatic gain adjustment on the enhanced dark area image and the filtered image and fusing them to obtain a gain image;
    processing the gain image through a second logarithmic function to obtain a defogged image, wherein the second logarithmic function is used to compensate for the brightness reduced by the first logarithmic function.
  7. The display method according to claim 6, wherein the step of performing automatic gain adjustment on the enhanced dark area image and the filtered image and fusing them to obtain a gain image comprises:
    performing a first automatic gain adjustment on the filtered image to obtain a first gain sub-image;
    fusing the filtered image and the enhanced dark area image and then performing a second automatic gain adjustment to obtain a second gain sub-image;
    fusing the first gain sub-image and the second gain sub-image to obtain the gain image.
  8. A display device applied to an electronic rearview mirror system, comprising:
    an acquisition module configured to acquire a rear view image collected by a camera of the electronic rearview mirror system;
    a judgment module configured to use a predetermined fog image judgment model to judge whether the rear view image is a foggy image;
    a fog source analysis module configured to, in response to the rear view image being a foggy image, obtain fog condition information of the area corresponding to the rear view image and determine, on that basis, the fog source causing the foggy image;
    a processing module configured to: in response to the fog source being lens fog, send a camera defogging signal;
    and in response to the fog source being environmental fog, perform defogging on the foggy image and send a display signal.
  9. The display device according to claim 8, wherein the display device further comprises:
    a training module configured to: obtain several historical rear view images under foggy conditions and several historical rear view images under normal conditions;
    extract vehicle-body sub-images from the historical rear view images to form a training data set;
    train an initial machine learning model with the training data set, and obtain the fog image judgment model after the training.
  10. An electronic rearview mirror system, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the method according to any one of claims 1 to 7.
PCT/CN2023/089173 2022-06-16 2023-04-19 Display method and apparatus, and electronic rearview mirror system WO2023241214A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210689210.0 2022-06-16
CN202210689210.0A CN115147675A (zh) 2022-06-16 2022-06-16 Display method and apparatus, and electronic rearview mirror system

Publications (1)

Publication Number Publication Date
WO2023241214A1 true WO2023241214A1 (zh) 2023-12-21

Family

ID=83408556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/089173 WO2023241214A1 (zh) 2022-06-16 2023-04-19 Display method and apparatus, and electronic rearview mirror system

Country Status (2)

Country Link
CN (1) CN115147675A (zh)
WO (1) WO2023241214A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115147675A (zh) * 2022-06-16 2022-10-04 中国第一汽车股份有限公司 Display method and apparatus, and electronic rearview mirror system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109905678A * 2019-03-21 2019-06-18 重庆工程职业技术学院 Coal mine monitoring image defogging processing system
CN111791834A * 2019-04-08 2020-10-20 上海擎感智能科技有限公司 Automobile and defogging method and device therefor
US20220138912A1 (en) * 2020-01-20 2022-05-05 Tencent Technology (Shenzhen) Company Limited Image dehazing method, apparatus, and device, and computer storage medium
WO2021164463A1 * 2020-02-17 2021-08-26 华为技术有限公司 Detection method and apparatus, and storage medium
CN115147675A (zh) * 2022-06-16 2022-10-04 中国第一汽车股份有限公司 Display method and apparatus, and electronic rearview mirror system

Also Published As

Publication number Publication date
CN115147675A (zh) 2022-10-04

Similar Documents

Publication Publication Date Title
JP7481790B2 (ja) Enhanced high dynamic range imaging and tone mapping
CN107392103B (zh) Method and device for detecting lane lines on a road surface, and electronic device
CN105306648B (zh) Vehicle-state-based adaptive noise reduction for hands-free telephony with learning capability
WO2023241214A1 (zh) Display method and apparatus, and electronic rearview mirror system
KR20180025591A (ko) Method and apparatus for controlling a vision sensor for an autonomous vehicle
CN110135235B (zh) Glare processing method and device, and vehicle
CN113276774B (zh) Method, device and equipment for processing video images during remote driving of an unmanned vehicle
CN111506057A (zh) Automatic driving assistance glasses for assisting automated driving
CN113165651B (zh) Electronic device and control method therefor
US20240029444A1 (en) Correction of images from a panoramic-view camera system in the case of rain, incident light and contamination
CN111160237A (zh) Head pose estimation method and device, electronic device and storage medium
CN111027506B (zh) Method and device for determining gaze direction, electronic device and storage medium
KR20190095567A (ko) Method and apparatus for recognizing an object
CN113895357A (zh) Rearview mirror adjustment method, device, equipment and storage medium
CN116634095A (zh) Road surface perception method, device, equipment and storage medium for vehicle blind zones
CN113052047B (zh) Traffic event detection method, roadside device, cloud control platform and system
CN117445794A (zh) Vehicle lamp control method and device for tunnel scenarios, and storage medium
JP2013083926A (ja) Media volume control system
CN116923372A (zh) Driving control method, device, equipment and medium
KR20170108564A (ko) System and method for detecting vehicle intrusion using images
CN111071037A (zh) Device control method and device, vehicle-mounted head-up display device and storage medium
CN115743093A (zh) Vehicle control method and device, automatic parking assist controller, terminal and system
US10893388B2 (en) Map generation device, map generation system, map generation method, and non-transitory storage medium including instructions for map generation
CN115240170A (zh) Road pedestrian detection and tracking method and system based on an event camera
CN111756987B (zh) Control method and device for a vehicle-mounted camera, and vehicle-mounted image capture system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23822785

Country of ref document: EP

Kind code of ref document: A1