WO2022165705A1 - Low-light environment detection method and automatic driving method - Google Patents

Low-light environment detection method and automatic driving method

Info

Publication number
WO2022165705A1
WO2022165705A1 (PCT/CN2021/075262)
Authority
WO
WIPO (PCT)
Prior art keywords
low-light environment
detection
sample set
performance
Prior art date
Application number
PCT/CN2021/075262
Other languages
English (en)
French (fr)
Inventor
任卫红
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/075262 priority Critical patent/WO2022165705A1/zh
Publication of WO2022165705A1 publication Critical patent/WO2022165705A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • the invention relates to the technical field of visual detection, in particular to a low-light environment detection method, an automatic driving method, a method for training a low-light environment detection performance discrimination module, and a low-light environment detection system.
  • Embodiments of the present invention provide a low-light environment detection method, an automatic driving method, a method for training a low-light environment detection performance discrimination module, and a low-light environment detection system, which are used to solve at least one of the above technical problems.
  • an embodiment of the present invention provides a low-light environment detection method, including: detecting an image to be detected with a low-light environment detection module to obtain a detection result; inputting the image to be detected and the detection result into a low-light environment detection performance discrimination module to obtain a performance discrimination result; outputting the detection result when the performance discrimination result is qualified; and giving an alarm prompt when the performance discrimination result is unqualified.
  • an embodiment of the present invention provides an automatic driving method, which is applied to an automatic driving terminal.
  • the method includes: performing object detection using the low-light environment detection method described in any embodiment of the present invention; and performing automatic driving control according to the detection result.
  • an embodiment of the present invention provides a method for training a low-light environment detection performance discrimination module, where the low-light environment detection performance discrimination module is used to judge the performance of the detection result output by the low-light environment detection module; the method includes: pre-acquiring a low-light environment sample set; and
  • the low-light environment detection performance discrimination module is trained based on the low-light environment sample set.
  • an embodiment of the present invention provides a low-light environment detection system, wherein the system includes:
  • the low-light environment detection module is configured to detect the image to be detected to obtain the detection result
  • a low-light environment detection performance discrimination module configured to determine a performance discrimination result according to the to-be-detected image and the detection result; when the performance discrimination result is qualified, output the detection result; when the performance discrimination result is unqualified, give an alarm prompt.
  • an embodiment of the present invention provides an electronic device, comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the steps of the method of any embodiment of the present invention.
  • an embodiment of the present invention provides an automatic driving terminal, which is characterized in that it is configured with the electronic device described in any embodiment of the present invention.
  • an embodiment of the present invention provides a storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the method described in any embodiment of the present invention are implemented.
  • an embodiment of the present invention further provides a computer program product, where the computer program product includes a computer program stored on a storage medium, the computer program includes program instructions, and the program instructions, when executed by a computer, cause the computer to execute any one of the above-mentioned low-light environment detection methods.
  • the low-light environment detection method provided by the embodiments of the present invention can use the low-light environment detection performance discrimination module to judge the performance of the visual detection module in a low-light environment, thereby giving the system an advance warning.
  • FIG. 1 is a flowchart of an embodiment of a low-light environment detection method of the present invention
  • FIG. 2 is a flowchart of another embodiment of the low-light environment detection method of the present invention.
  • FIG. 3 is a schematic diagram of the principle of the low-light environment detection method of the present invention.
  • FIG. 4 is a flowchart of another embodiment of the low-light environment detection method of the present invention.
  • FIG. 5 is a schematic diagram of detecting a target vehicle in a low-light environment image according to the present invention.
  • FIG. 6 is a schematic diagram of an embodiment of training a low-light environment detection performance discrimination module in the present invention.
  • FIG. 7 is a schematic diagram of the low-light environment image after processing with different enhancement algorithms in the present invention.
  • FIG. 8 is a schematic diagram of an embodiment of a low-light environment detection system of the present invention.
  • FIG. 9 is a schematic structural diagram of an embodiment of an electronic device of the present invention.
  • the present invention aims to solve the problem of visual detection in low light environment.
  • in low-light environments, visual detection algorithms face huge challenges: they suffer from serious false detections or missed detections, which bring many risks to self-driving cars, drones, and the like.
  • the present invention first designs a low-light environment detection performance judgment module, which is used to judge the performance of the low-light environment detection module in a low-light environment.
  • for scenes in which the low-light environment detection module performs poorly, the present invention designs a series of image enhancement methods to improve the performance of the detection module in low-light environments.
  • as shown in FIG. 1, an embodiment of the present invention provides a low-light environment detection method; in this embodiment, the method includes the detection, performance discrimination, and output/alarm steps outlined above.
  • the performance discrimination results include: normal detection results and abnormal detection results, where abnormal detection results include: false detections and/or missed detections that affect system operation, and false detections and/or missed detections that do not affect system operation.
  • the low-light environment detection performance discrimination module provided by the embodiments of the present invention is used to judge the performance of the visual detection module in a low-light environment, thereby giving the system an advance warning.
  • FIG. 2 is a flowchart of another embodiment of the low-light environment detection method of the present invention.
  • when the performance discrimination result is unqualified, an alarm prompt is given, which includes: enhancing the image to be detected and inputting it again into the low-light environment detection module for detection; outputting the new detection result when the new performance discrimination result corresponding to the re-detected result is qualified; and giving an alarm prompt when it is unqualified.
  • performing enhancement processing on the image to be detected includes: performing brightness/contrast enhancement processing on the image to be detected; and/or performing image denoising processing on the image to be detected; and/or performing image super-resolution processing on the image to be detected.
  • through the low-light environment detection performance discrimination module, the system can know whether there is a problem with the current detection performance; if there is a problem, the image quality is improved so as to raise detection performance to a level at which the system is usable.
  • the detection scheme designed by the present invention does not need to retrain the detection module, and only needs to adjust the image quality to achieve the purpose of improving the detection performance.
  • the solution of the present invention can be easily embedded into the visual detection module without additional configuration.
  • the embodiments of the present invention utilize deep learning and machine learning technologies, and mainly design two models to solve the problem of visual detection in low-light environments, namely, a low-light environment detection performance discrimination module and a low-light environment image enhancement module.
  • the low-light environment detection performance discrimination module uses deep learning to judge the performance of visual detection; based on the discrimination result, the low-light image enhancement module applies a series of improvements to the image quality in order to improve detection performance.
  • the present invention handles the low-light visual detection problem in four main steps: 1) label existing low-light data and visual detection results (0: detection is normal; 1: false/missed detections exist but do not affect system operation; 2: false/missed detections exist and affect system operation); 2) train the low-light environment detection performance discrimination module with the labeled low-light data and the existing visual detection results; 3) if visual detection has a problem, send the image to the low-light image enhancement module for enhancement and run visual detection again; 4) pass the re-detected result through the discrimination module again; if the result is acceptable, the system operates normally, otherwise issue an early warning to the system.
  • the present invention is dedicated to solving the problem of visual detection in low light environment, and mainly designs two modules to solve the problem of visual detection in low light environment, namely, a low light environment detection performance discrimination module and an image enhancement module in low light environment.
  • FIG. 3 is a schematic diagram of the principle of the low-light environment detection method of the present invention, and FIG. 3 shows the overall solution flow of the present invention, which mainly includes three parts.
  • first, the image is input into the visual detection module (i.e., the low-light environment detection module, which may perform 2D box detection, 3D box detection, lane line detection, etc.) to obtain a detection result; the low-light environment detection performance discrimination module then judges the quality of the detection result. If the detection result meets the system requirements, the result is output directly; otherwise, the image is sent to the image enhancement module for image enhancement.
  • the image enhancement module in the present invention mainly includes three functions: brightness/contrast enhancement, image denoising and image super-resolution.
  • the low-light environment detection module is obtained by training with a first training sample set, and the first training sample set includes non-low-light environment samples and low-light environment samples; the low-light environment detection performance discrimination module is obtained by training with a second training sample set, and the second training sample set is a low-light environment sample set.
  • FIG. 4 is a flowchart of another embodiment of the low-light environment detection method of the present invention, which further includes:
  • the performance evaluation of visual detection is treated as a classification problem, with the categories defined as: 0, detection is normal; 1, false/missed detections exist but do not affect system operation; 2, false/missed detections exist and affect system operation.
  • the data of the present invention mainly comes from two sources: existing system detection results and bug tickets, and labeled data synthesized from the known annotations used to train the visual detection model.
  • the low-light environment detection performance judgment module is obtained by training a neural network model.
  • to reduce the complexity of the whole system, the present invention uses a relatively shallow classification network to judge detection performance. The model has two main inputs: the image and the existing detection results (detection boxes + scores), and the output is one of 3 categories.
  • FIG. 6 is a schematic diagram of an embodiment of training a low-light environment detection performance discrimination module in the present invention.
  • the image first passes through an RPN module to obtain a series of candidate boxes; the known detection boxes are then removed from the candidate boxes, and the remaining boxes go through ROI Pooling to obtain the final classification features.
  • FIG. 7 is a schematic diagram of the low-light environment image after processing with different enhancement algorithms in the present invention.
  • low-light image enhancement is performed in three main ways.
  • for low-light images with poor detection performance, traditional methods such as Gamma correction are first used to improve the brightness and contrast of the image, and the improved image is then sent to the visual detection module again for detection.
  • according to experience and existing experimental results, more than 90% of the missed/false detections can be alleviated after the brightness and contrast of the image are improved.
  • if the visual detection performance is still poor after this first brightness-improvement step, the image contains severe noise, and the image denoising module needs to be called to remove it.
  • the present invention uses the existing deep learning denoising model to perform noise removal.
  • after noise removal, the detection results of most low-light images can be greatly improved. If the detection result is still not good at this point, the last step, the super-resolution algorithm, is called to improve the clarity and contrast of the image. If the detection results are still poor after the final enhancement, an early warning is issued to the system; this situation indicates a serious problem with the sensor or the image detection module.
  • the low-light environment detection performance discrimination module may use other deep learning models, and the image enhancement module may use a combination of different algorithms, which is not limited in the present invention.
  • with traditional detection schemes, when the system disengages or a fault occurs, the data is usually labeled, and the model is retrained and deployed to the detection module again; this is usually time-consuming, and the gain may not be obvious in practical applications.
  • the method of the present invention directly enhances the input image, and then re-detects it, without additional labeling and workload, and can actually solve most of the visual detection problems caused by image quality.
  • the solution of the present invention can be combined with the existing solution, which can effectively reduce the workload of data labeling.
  • the present invention further provides an automatic driving method, which is applied to an automatic driving terminal.
  • the method includes: performing object detection using the low-light environment detection method described in any embodiment of the present invention; and performing automatic driving control according to the detection result.
  • the autopilot terminal may be any one of a multi-rotor unmanned aerial vehicle, an unmanned ship, and an unmanned vehicle.
  • the present invention also provides a method for training a low-light environment detection performance discrimination module, in which the low-light environment detection performance discrimination module is used to judge the performance of the detection results output by the low-light environment detection module; the method includes: pre-acquiring a low-light environment sample set; and
  • the low-light environment detection performance discrimination module is trained based on the low-light environment sample set.
  • pre-acquiring a low-light environment sample set includes: generating a first low-light environment sample set in advance according to historical detection results and historical bug tickets of the low-light environment detection module; and/or generating a second low-light environment sample set in advance according to the known labeled data used for training the low-light environment detection module; and generating the low-light environment sample set according to the first low-light environment sample set and/or the second low-light environment sample set.
  • FIG. 8 is a schematic diagram of an embodiment of the low-light environment detection system of the present invention.
  • the system 800 includes:
  • the low-light environment detection module 810 is configured to detect the to-be-detected image to obtain a detection result
  • the low-light environment detection performance discrimination module 820 is configured to determine a performance discrimination result according to the to-be-detected image and the detection result; when the performance discrimination result is qualified, output the detection result; when the performance discrimination result is unqualified, give an alarm prompt.
  • the low-light environment detection system further includes: an enhancement processing module configured to perform enhancement processing on the image to be detected;
  • when the performance discrimination result is unqualified, an alarm prompt is given, which includes: when the performance discrimination result is unqualified, enhancing the to-be-detected image and inputting it again into the low-light environment detection module for detection; outputting the new detection result when the new performance discrimination result corresponding to the re-detected result is qualified; and giving an alarm prompt when it is unqualified;
  • the performance discrimination results include: normal detection results and abnormal detection results, wherein the abnormal detection results include: false detections and/or missed detections that affect system operation, and false detections and/or missed detections that do not affect system operation.
  • performing enhancement processing on the image to be detected includes: performing brightness/contrast enhancement processing on the image to be detected; and/or performing image denoising processing on the image to be detected; and/or performing image super-resolution processing on the image to be detected.
  • the low-light environment detection module is obtained by training with a first training sample set, and the first training sample set includes non-low-light environment samples and low-light environment samples; the low-light environment detection performance discrimination module is obtained by training with a second training sample set, and the second training sample set is a low-light environment sample set.
  • the low-light environment detection system of the above embodiments of the present invention can be used to execute the low-light environment detection method of the embodiments of the present invention, and correspondingly achieves the technical effects achieved by the low-light environment detection method of the above embodiments, which are not repeated here.
  • the relevant functional modules may be implemented by a hardware processor.
  • in some embodiments, a first low-light environment sample set is generated in advance according to historical detection results and historical bug tickets of the low-light environment detection module; and/or a second low-light environment sample set is generated in advance according to the known labeled data used for training the low-light environment detection module; and
  • the low-light environment sample set is generated according to the first low-light environment sample set and/or the second low-light environment sample set.
  • the present invention also provides an electronic device comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the steps of the method according to any embodiment of the present invention.
  • the present invention further provides an automatic driving terminal, which is configured with the electronic device described in any embodiment of the present invention.
  • the autopilot terminal may be any one of a multi-rotor drone, an unmanned ship, and an unmanned vehicle.
  • the present invention also provides a storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the method described in any embodiment of the present invention are implemented.
  • FIG. 9 is a schematic diagram of a hardware structure of an electronic device for performing a low-light environment detection method provided by another embodiment of the present application. As shown in FIG. 9 , the device includes:
  • one or more processors 910 and a memory 920; one processor 910 is taken as an example in FIG. 9.
  • the apparatus for performing the low-light environment detection method may further include: an input device 930 and an output device 940 .
  • the processor 910, the memory 920, the input device 930, and the output device 940 may be connected by a bus or in other ways, and the connection by a bus is taken as an example in FIG. 9 .
  • the memory 920 can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as those corresponding to the low-light environment detection method in the embodiments of the present application.
  • the processor 910 executes various functional applications and data processing of the server by running the non-volatile software programs, instructions and modules stored in the memory 920, that is, to implement the low-light environment detection method of the above method embodiment.
  • the memory 920 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the low-light environment detection device, and the like. Additionally, memory 920 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 920 may optionally include memory located remotely from processor 910, and these remote memories may be connected to the low-light environment detection device via a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the input device 930 may receive input numerical or character information, and generate signals related to user settings and function control of the low-light environment detection device.
  • the output device 940 may include a display device such as a display screen.
  • the one or more modules are stored in the memory 920, and when executed by the one or more processors 910, execute the low-light environment detection method in any of the above method embodiments.
  • the above product can execute the method provided by the embodiments of the present application, and has functional modules and beneficial effects corresponding to the execution method.
  • the electronic devices of the embodiments of the present application exist in various forms, including but not limited to:
  • (1) Mobile communication devices: characterized by mobile communication functions, with providing voice and data communication as the main goal. Such terminals include: smart phones (e.g., iPhone), multimedia phones, feature phones, and low-end phones.
  • (2) Ultra-mobile personal computer devices: these belong to the category of personal computers, have computing and processing functions, and generally also support mobile Internet access. Such terminals include: PDA, MID, and UMPC devices, e.g., iPad.
  • (3) Portable entertainment devices: these can display and play multimedia content. Such devices include: audio and video players (e.g., iPod), handheld game consoles, e-book readers, as well as smart toys and portable in-vehicle navigation devices.
  • (4) Other electronic apparatuses with data interaction functions.
  • the device embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each embodiment can be implemented by means of software plus a general hardware platform, and certainly can also be implemented by hardware.
  • the above technical solutions, in essence or in the part contributing to the related art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk or an optical disc, and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the various embodiments or some parts of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Traffic Control Systems (AREA)

Abstract

A low-light environment detection method, comprising: detecting an image to be detected with a low-light environment detection module to obtain a detection result (S10); inputting the image to be detected and the detection result into a low-light environment detection performance discrimination module to obtain a performance discrimination result (S20); outputting the detection result when the performance discrimination result is qualified (S30); and giving an alarm prompt when the performance discrimination result is unqualified (S40). The low-light environment detection method can use the low-light environment detection performance discrimination module to judge the performance of the visual detection module in a low-light environment, thereby giving the system an advance warning.

Description

Low-light environment detection method and automatic driving method
Technical Field
The present invention relates to the technical field of visual detection, and in particular to a low-light environment detection method, an automatic driving method, a method for training a low-light environment detection performance discrimination module, and a low-light environment detection system.
Background Art
For visual detection in low-light environments, the current common practice in industry is to collect and label as much data from various low-light environments as possible, and then retrain the deep learning models so that they can adapt to low-light environments. This approach has two main drawbacks:
1. Collecting and labeling low-light data is very resource-intensive. First, collecting low-light data requires considering the diversity of scenes while also taking image quality into account. For example, if an image is so dark that even the human eye cannot see the objects in it, the data is of almost no use; conversely, if the image is fairly bright, it cannot be called low-light data. In addition, labeling low-light data is labor- and effort-intensive: annotators need to repeatedly confirm whether an object exists and where its edges are, making the labeling process tedious and complicated.
2. Training models with labeled data takes a long time, and the gain is uncertain. As the amount of labeled data keeps growing, the training cycle of the visual detection module becomes very long. Moreover, a large proportion of low-light training data can affect the overall performance of the detection module; for example, its daytime performance may degrade. In real application scenarios it is difficult to determine the composition of low-light data in the training set, and, limited by the network capacity of deep learning models, current visual detection modules can hardly cover all low-light scene data.
Summary of the Invention
Embodiments of the present invention provide a low-light environment detection method, an automatic driving method, a method for training a low-light environment detection performance discrimination module, and a low-light environment detection system, which are used to solve at least one of the above technical problems.
In a first aspect, an embodiment of the present invention provides a low-light environment detection method, comprising:
detecting an image to be detected with a low-light environment detection module to obtain a detection result;
inputting the image to be detected and the detection result into a low-light environment detection performance discrimination module to obtain a performance discrimination result;
outputting the detection result when the performance discrimination result is qualified;
giving an alarm prompt when the performance discrimination result is unqualified.
In a second aspect, an embodiment of the present invention provides an automatic driving method applied to an automatic driving terminal, the method comprising: performing object detection using the low-light environment detection method according to any embodiment of the present invention; and performing automatic driving control according to the detection result.
In a third aspect, an embodiment of the present invention provides a method for training a low-light environment detection performance discrimination module, the low-light environment detection performance discrimination module being used to judge the performance of the detection result output by the low-light environment detection module; the method comprising:
acquiring a low-light environment sample set in advance;
training the low-light environment detection performance discrimination module based on the low-light environment sample set.
In a fourth aspect, an embodiment of the present invention provides a low-light environment detection system, the system comprising:
a low-light environment detection module configured to detect an image to be detected to obtain a detection result;
a low-light environment detection performance discrimination module configured to determine a performance discrimination result according to the image to be detected and the detection result, to output the detection result when the performance discrimination result is qualified, and to give an alarm prompt when the performance discrimination result is unqualified.
In a fifth aspect, an embodiment of the present invention provides an electronic device, comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the steps of the method according to any embodiment of the present invention.
In a sixth aspect, an embodiment of the present invention provides an automatic driving terminal configured with the electronic device according to any embodiment of the present invention.
In a seventh aspect, an embodiment of the present invention provides a storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any embodiment of the present invention.
In an eighth aspect, an embodiment of the present invention further provides a computer program product, the computer program product comprising a computer program stored on a storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform any one of the above low-light environment detection methods.
The low-light environment detection method provided by the embodiments of the present invention can use the low-light environment detection performance discrimination module to judge the performance of the visual detection module in a low-light environment, thereby giving the system an advance warning.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings used in the description of the embodiments. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of an embodiment of the low-light environment detection method of the present invention;
FIG. 2 is a flowchart of another embodiment of the low-light environment detection method of the present invention;
FIG. 3 is a schematic diagram of the principle of the low-light environment detection method of the present invention;
FIG. 4 is a flowchart of another embodiment of the low-light environment detection method of the present invention;
FIG. 5 is a schematic diagram of detecting a target vehicle in a low-light environment image in the present invention;
FIG. 6 is a schematic diagram of an embodiment of training the low-light environment detection performance discrimination module in the present invention;
FIG. 7 is a schematic diagram of a low-light environment image after processing with different enhancement algorithms in the present invention;
FIG. 8 is a schematic diagram of an embodiment of the low-light environment detection system of the present invention;
FIG. 9 is a schematic structural diagram of an embodiment of an electronic device of the present invention.
Detailed Description
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention. It should be noted that the embodiments of the present application and the features in the embodiments may be combined with each other as long as they do not conflict.
Finally, it should also be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise" and "include" cover not only the listed elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising ..." does not exclude the presence of other identical elements in the process, method, article or device that includes the element.
The present invention aims to solve the problem of visual detection in low-light environments. In low-light environments, visual detection algorithms face huge challenges: they suffer from serious false detections or missed detections, which bring many risks to self-driving cars, drones and the like.
For visual detection in low-light environments, the current common practice in industry is to collect and label as much data from various low-light environments as possible, and then retrain the deep learning models so that they can adapt to low-light environments. Current solutions for visual detection in low-light environments have many problems. First, collecting and labeling data is very resource-intensive. Second, as the amount of data keeps increasing, the training cycle of the model becomes longer. In addition, a large proportion of low-light training data can affect the overall performance of the detection module; for example, its daytime performance may degrade.
To solve the problem of visual detection in low-light environments, the present invention first designs a low-light environment detection performance discrimination module for judging the performance of the low-light environment detection module in a low-light environment. For scenes in which the low-light environment detection module performs poorly, the present invention designs a series of image enhancement methods to improve the performance of the detection module in low-light environments.
As shown in FIG. 1, an embodiment of the present invention provides a low-light environment detection method; in this embodiment the method includes:
S10. Detecting an image to be detected with a low-light environment detection module to obtain a detection result.
S20. Inputting the image to be detected and the detection result into a low-light environment detection performance discrimination module to obtain a performance discrimination result.
Exemplarily, the performance discrimination result includes: normal detection result and abnormal detection result, where an abnormal detection result includes: false detections and/or missed detections that affect system operation, and false detections and/or missed detections that do not affect system operation.
S30. Outputting the detection result when the performance discrimination result is qualified.
S40. Giving an alarm prompt when the performance discrimination result is unqualified.
At present, academia and industry distinguish and classify low-light environments, but they rarely evaluate the effect of visual detection in low-light environments, because effective evaluation metrics are lacking. Existing detection metrics all require labeled ground truth as input, which is of no help in real application scenarios. When an unmanned vehicle or drone system is running, if the visual detection module misses or falsely detects an object, this can only be learned from system feedback; that is, the visual detection problem becomes known only after the fault has occurred. The low-light environment detection performance discrimination module provided by the embodiments of the present invention is used to judge the performance of the visual detection module in a low-light environment, thereby giving the system an advance warning.
FIG. 2 is a flowchart of another embodiment of the low-light environment detection method of the present invention. In this embodiment, giving an alarm prompt when the performance discrimination result is unqualified includes:
S41. When the performance discrimination result is unqualified, performing enhancement processing on the image to be detected and inputting it again into the low-light environment detection module for detection.
Exemplarily, performing enhancement processing on the image to be detected includes: performing brightness/contrast enhancement on the image to be detected; and/or performing image denoising on the image to be detected; and/or performing image super-resolution on the image to be detected.
S42. Outputting the new detection result when the new performance discrimination result corresponding to the new detection result after re-detection is qualified.
S43. Giving an alarm prompt when the new performance discrimination result corresponding to the new detection result after re-detection is unqualified.
With the low-light environment detection performance discrimination module of the embodiments of the present invention, the system can know whether there is a problem with the current detection performance. If there is a problem, the image quality is improved so as to raise the detection performance to a level at which the system is usable. The detection scheme designed by the present invention does not require retraining the detection module; detection performance can be improved simply by adjusting image quality. In addition, the solution of the present invention can be easily embedded into a visual detection module without additional configuration.
The embodiments of the present invention use deep learning and machine learning techniques and mainly design two models to solve the problem of visual detection in low-light environments, namely a low-light environment detection performance discrimination module and a low-light environment image enhancement module. The low-light environment detection performance discrimination module uses deep learning to judge the performance of visual detection; based on the discrimination result, the low-light environment image enhancement module applies a series of improvements to the image quality in order to improve detection performance.
The present invention handles the low-light visual detection problem in four main steps:
1) Label existing low-light data and visual detection results (0: detection is normal; 1: false/missed detections exist but do not affect system operation; 2: false/missed detections exist and affect system operation);
2) Train the low-light environment detection performance discrimination module with the labeled low-light data and the existing visual detection results;
3) If there is a problem with visual detection, send the image to the low-light image enhancement module for enhancement, and then run visual detection again;
4) Pass the detection result after re-detection through the low-light environment detection performance discrimination module again; if the result is acceptable, the system operates normally, otherwise an early warning is issued to the system.
The present invention is dedicated to solving the problem of visual detection in low-light environments and mainly designs two modules for this purpose, namely a low-light environment detection performance discrimination module and a low-light environment image enhancement module.
FIG. 3 is a schematic diagram of the principle of the low-light environment detection method of the present invention and shows the overall flow of the solution, which mainly includes three parts. First, the image is input into the visual detection module (i.e., the low-light environment detection module, which may perform 2D box detection, 3D box detection, lane line detection, etc.) to obtain a detection result, and the detection result is then sent to the low-light environment detection performance discrimination module to judge its quality. If the detection result meets the system requirements, the result is output directly; otherwise, the image is sent to the image enhancement module for image enhancement. The image enhancement module of the present invention mainly includes three functions: brightness/contrast enhancement, image denoising and image super-resolution. During a given enhancement, if the detection result on the enhanced image already meets the detection requirements, further enhancement stops. If the detection result is still poor after the final enhancement, an early warning is issued to the system; this situation indicates a serious problem with the sensor or the image detection module.
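The Python sketch below is a minimal illustration of this detect-discriminate-enhance loop, not the patent's implementation; `detect`, `discriminate` and the enhancer functions are placeholder callables assumed for illustration and stand in for the modules named in the text.

```python
# Minimal sketch of the detect -> discriminate -> enhance loop described above.
# `detect`, `discriminate` and the enhancer functions are placeholders assumed
# for illustration; they stand in for the modules named in the text.

def run_low_light_detection(image, detect, discriminate, enhancers):
    """enhancers: ordered stages, e.g. [boost_brightness, denoise, super_resolve]."""
    detections = detect(image)
    if discriminate(image, detections):          # True = result meets system requirements
        return detections, "ok"

    # Escalate through the enhancement stages; stop as soon as the result qualifies.
    for enhance in enhancers:
        image = enhance(image)
        detections = detect(image)
        if discriminate(image, detections):
            return detections, "ok"

    # All enhancements exhausted: warn the system (likely sensor or detector fault).
    return detections, "alarm"
```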
In some embodiments, the low-light environment detection module is obtained by training with a first training sample set, the first training sample set including non-low-light environment samples and low-light environment samples; the low-light environment detection performance discrimination module is obtained by training with a second training sample set, the second training sample set being a low-light environment sample set.
FIG. 4 is a flowchart of another embodiment of the low-light environment detection method of the present invention, which further includes:
S01. Generating a first low-light environment sample set in advance according to the historical detection results and historical bug tickets of the low-light environment detection module; and/or
S02. Generating a second low-light environment sample set in advance according to the known labeled data used for training the low-light environment detection module;
S03. Generating the low-light environment sample set according to the first low-light environment sample set and/or the second low-light environment sample set.
Exemplarily, in the embodiments of the present invention the performance evaluation of visual detection is treated as a classification problem, with the categories defined as: 0, detection is normal; 1, false/missed detections exist but do not affect system operation; 2, false/missed detections exist and affect system operation. The data of the present invention mainly comes from two sources:
a) Existing system detection results and bug tickets. Existing bug tickets that seriously affect normal system operation are labeled as category 2; bug tickets where false/missed detections exist but do not affect system functions are labeled as category 1; other normal detection results are labeled as category 0.
b) Synthesized labeled data. A large amount of labeled data already exists for training the visual detection model, and this data can be used to synthesize the training data required by the present invention. According to the impact of the detection result on the system, we can artificially simulate false/missed detections. As shown in FIG. 5, if the vehicle in box 2 in the figure is missed, a serious system disengagement or a vehicle accident may result, so missed/false detections of this kind are labeled as category 2. For the distant box 1 in the figure, a temporary missed/false detection has no or little impact on the system (it will not cause the system to disengage), so it is labeled as category 1. Other normal detection cases are labeled as category 0.
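As a rough illustration of how such labels might be synthesized from existing annotations, the sketch below randomly drops ground-truth boxes to simulate missed detections and assigns a class according to whether the dropped object is critical. The `is_critical` rule, the drop probability and the class constants are assumptions for illustration only, not the patent's procedure.

```python
import random

# Illustrative sketch: synthesize discriminator training labels from existing
# ground-truth boxes by simulating missed detections. The is_critical() rule,
# the drop probability and the class constants are assumptions for illustration.
NORMAL, MINOR, CRITICAL = 0, 1, 2

def synthesize_sample(gt_boxes, is_critical, drop_prob=0.3):
    """gt_boxes: ground-truth boxes; is_critical(box) -> bool (e.g. a large, nearby box)."""
    kept_boxes, label = [], NORMAL
    for box in gt_boxes:
        if random.random() < drop_prob:                      # simulate a missed detection
            label = max(label, CRITICAL if is_critical(box) else MINOR)
        else:
            kept_boxes.append(box)
    return kept_boxes, label                                 # (simulated detections, class)
```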
Through the above data processing, enough samples can be obtained to train the low-light environment detection performance discrimination module.
In some embodiments, the low-light environment detection performance discrimination module is obtained by training a neural network model. To reduce the complexity of the whole system, the present invention uses a relatively shallow classification network to judge detection performance. The model has two main inputs: the image and the existing detection result (detection boxes + scores), and the output is one of 3 categories.
FIG. 6 is a schematic diagram of an embodiment of training the low-light environment detection performance discrimination module in the present invention. In this embodiment, the image first passes through an RPN module to obtain a series of candidate boxes, the known detection boxes are then removed from the candidate boxes, and the remaining boxes go through ROI Pooling to obtain the final classification features.
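A hedged sketch of that feature-construction step is shown below using PyTorch/torchvision operators as assumed dependencies; the 0.5 IoU threshold, the 7x7 pooled size and the tensor shapes are illustrative assumptions, and neither the RPN itself nor the final 3-way classification head is shown.

```python
import torch
from torchvision.ops import box_iou, roi_pool

def residual_roi_features(feature_map, proposals, known_boxes, spatial_scale=1.0):
    """feature_map: (1, C, H, W); proposals: (N, 4); known_boxes: (M, 4), image coordinates.
    Keeps only proposals not already covered by known detections, then ROI-pools them."""
    if known_boxes.numel() > 0:
        iou = box_iou(proposals, known_boxes)                 # pairwise IoU, shape (N, M)
        proposals = proposals[iou.max(dim=1).values < 0.5]    # 0.5 threshold is illustrative
    batch_idx = torch.zeros(len(proposals), 1, dtype=proposals.dtype, device=proposals.device)
    rois = torch.cat([batch_idx, proposals], dim=1)           # (K, 5): [batch_index, x1, y1, x2, y2]
    pooled = roi_pool(feature_map, rois, output_size=(7, 7), spatial_scale=spatial_scale)
    return pooled.flatten(1)                                  # fed to a shallow 3-way classifier head
```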
FIG. 7 is a schematic diagram of a low-light environment image after processing with different enhancement algorithms in the present invention. In some embodiments, low-light image enhancement is performed mainly in three ways. For low-light images with poor detection performance, traditional methods such as Gamma correction are first used to improve the brightness and contrast of the image, and the improved image is then sent to the visual detection module again for detection. According to experience and existing experimental results, more than 90% of missed/false detections can be alleviated after the brightness and contrast of the image are improved. If the visual detection performance is still poor after this first brightness-improvement step, the image contains severe noise, and the image denoising module needs to be called to remove it; the present invention uses an existing deep learning denoising model for noise removal. After noise removal, the detection results of the vast majority of low-light images can be greatly improved. If the detection result is still poor at this point, the final super-resolution step is called to improve the sharpness and contrast of the image. If the detection result is still poor after the final enhancement, an early warning is issued to the system; this situation indicates a serious problem with the sensor or the image detection module.
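To make the first two enhancement stages concrete, here is a minimal sketch: a gamma-correction lookup table for the brightness/contrast step, and a classical OpenCV non-local-means denoiser standing in for the deep-learning denoising model mentioned above, which the text does not name. The gamma value and denoising strength are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def gamma_correct(image_u8, gamma=0.5):
    """Brighten a dark uint8 image; gamma < 1 lifts shadows (the value 0.5 is illustrative)."""
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return lut[image_u8]                       # LUT indexing applies the curve per pixel

def denoise_bgr(image_bgr, strength=10):
    """Classical non-local-means denoiser as a stand-in for the deep denoising model
    referenced in the text; the strength parameter is illustrative."""
    return cv2.fastNlMeansDenoisingColored(image_bgr, None, strength, strength, 7, 21)
```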
In some embodiments, the low-light environment detection performance discrimination module may use other deep learning models, and the image enhancement module may use a combination of different algorithms; the present invention does not limit the above.
Among different visual detection solutions for low-light environments, with a traditional detection solution, when the system disengages or a fault occurs, the data is usually labeled, and the model is retrained and deployed to the detection module again. This is usually time-consuming, and the gain may not be obvious in practical applications. The method of the present invention directly enhances the input image and then re-detects it, without additional labeling or workload, and can in practice solve most visual detection problems caused by image quality. In addition, the solution of the present invention can be combined with existing solutions, which can effectively reduce the workload of data labeling.
In some embodiments, the present invention further provides an automatic driving method applied to an automatic driving terminal, the method comprising: performing object detection using the low-light environment detection method according to any embodiment of the present invention; and performing automatic driving control according to the detection result.
In some embodiments, the automatic driving terminal may be any one of a multi-rotor drone, an unmanned ship and an unmanned vehicle.
In some embodiments, the present invention further provides a method for training a low-light environment detection performance discrimination module, in which the low-light environment detection performance discrimination module is used to judge the performance of the detection result output by the low-light environment detection module; the method includes:
acquiring a low-light environment sample set in advance;
training the low-light environment detection performance discrimination module based on the low-light environment sample set.
In some embodiments, acquiring a low-light environment sample set in advance includes:
generating a first low-light environment sample set in advance according to the historical detection results and historical bug tickets of the low-light environment detection module; and/or
generating a second low-light environment sample set in advance according to the known labeled data used for training the low-light environment detection module;
generating the low-light environment sample set according to the first low-light environment sample set and/or the second low-light environment sample set.
FIG. 8 is a schematic diagram of an embodiment of the low-light environment detection system of the present invention. In this embodiment, the system 800 includes:
a low-light environment detection module 810 configured to detect an image to be detected to obtain a detection result;
a low-light environment detection performance discrimination module 820 configured to determine a performance discrimination result according to the image to be detected and the detection result, to output the detection result when the performance discrimination result is qualified, and to give an alarm prompt when the performance discrimination result is unqualified.
In some embodiments, the low-light environment detection system further includes an enhancement processing module configured to perform enhancement processing on the image to be detected;
giving an alarm prompt when the performance discrimination result is unqualified includes:
when the performance discrimination result is unqualified, performing enhancement processing on the image to be detected and inputting it again into the low-light environment detection module for detection;
outputting the new detection result when the new performance discrimination result corresponding to the new detection result after re-detection is qualified;
giving an alarm prompt when the new performance discrimination result corresponding to the new detection result after re-detection is unqualified.
Exemplarily, the performance discrimination result includes: normal detection result and abnormal detection result, the abnormal detection result including: false detections and/or missed detections that affect system operation, and false detections and/or missed detections that do not affect system operation.
Exemplarily, performing enhancement processing on the image to be detected includes: performing brightness/contrast enhancement on the image to be detected; and/or performing image denoising on the image to be detected; and/or performing image super-resolution on the image to be detected.
Exemplarily, the low-light environment detection module is obtained by training with a first training sample set, the first training sample set including non-low-light environment samples and low-light environment samples; the low-light environment detection performance discrimination module is obtained by training with a second training sample set, the second training sample set being a low-light environment sample set.
The low-light environment detection system of the above embodiments of the present invention can be used to execute the low-light environment detection method of the embodiments of the present invention, and correspondingly achieves the technical effects achieved by the low-light environment detection method of the above embodiments, which are not repeated here. In the embodiments of the present invention, the relevant functional modules may be implemented by a hardware processor.
In some embodiments, in the low-light environment detection system, a first low-light environment sample set is generated in advance according to the historical detection results and historical bug tickets of the low-light environment detection module; and/or
a second low-light environment sample set is generated in advance according to the known labeled data used for training the low-light environment detection module;
the low-light environment sample set is generated according to the first low-light environment sample set and/or the second low-light environment sample set.
In some embodiments, the present invention further provides an electronic device, comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the steps of the method according to any embodiment of the present invention.
In some embodiments, the present invention further provides an automatic driving terminal configured with the electronic device according to any embodiment of the present invention. Exemplarily, the automatic driving terminal may be any one of a multi-rotor drone, an unmanned ship and an unmanned vehicle.
In some embodiments, the present invention further provides a storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any embodiment of the present invention.
It should be noted that, for the sake of brevity, the foregoing method embodiments are all described as a series of combined actions, but those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention. In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
FIG. 9 is a schematic diagram of the hardware structure of an electronic device for performing the low-light environment detection method provided by another embodiment of the present application. As shown in FIG. 9, the device includes:
one or more processors 910 and a memory 920; one processor 910 is taken as an example in FIG. 9.
The device for performing the low-light environment detection method may further include: an input apparatus 930 and an output apparatus 940.
The processor 910, the memory 920, the input apparatus 930 and the output apparatus 940 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 9.
As a non-volatile computer-readable storage medium, the memory 920 may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the low-light environment detection method in the embodiments of the present application. By running the non-volatile software programs, instructions and modules stored in the memory 920, the processor 910 executes various functional applications and data processing of the server, that is, implements the low-light environment detection method of the above method embodiments.
The memory 920 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the low-light environment detection device, and the like. In addition, the memory 920 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices. In some embodiments, the memory 920 may optionally include memory located remotely from the processor 910, and these remote memories may be connected to the low-light environment detection device via a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The input apparatus 930 may receive input numeric or character information and generate signals related to user settings and function control of the low-light environment detection device. The output apparatus 940 may include a display device such as a display screen.
The one or more modules are stored in the memory 920 and, when executed by the one or more processors 910, perform the low-light environment detection method of any of the above method embodiments.
The above product can execute the method provided by the embodiments of the present application, and has functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present application.
The electronic devices of the embodiments of the present application exist in various forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication functions, with providing voice and data communication as their main goal. Such terminals include: smart phones (e.g., iPhone), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally also support mobile Internet access. Such terminals include: PDA, MID and UMPC devices, e.g., iPad.
(3) Portable entertainment devices: such devices can display and play multimedia content. They include: audio and video players (e.g., iPod), handheld game consoles, e-book readers, as well as smart toys and portable in-vehicle navigation devices.
(4) Other electronic apparatuses with data interaction functions.
The device embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the description of the above implementations, those skilled in the art can clearly understand that the implementations may be realized by means of software plus a general hardware platform, and certainly may also be realized by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the related art, may be embodied in the form of a software product; the computer software product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the various embodiments or certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features, and that these modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (20)

  1. A low-light environment detection method, comprising:
    detecting an image to be detected with a low-light environment detection module to obtain a detection result;
    inputting the image to be detected and the detection result into a low-light environment detection performance discrimination module to obtain a performance discrimination result;
    outputting the detection result when the performance discrimination result is qualified;
    giving an alarm prompt when the performance discrimination result is unqualified.
  2. The method according to claim 1, wherein giving an alarm prompt when the performance discrimination result is unqualified comprises:
    when the performance discrimination result is unqualified, performing enhancement processing on the image to be detected and inputting it again into the low-light environment detection module for detection;
    outputting the new detection result when the new performance discrimination result corresponding to the new detection result after re-detection is qualified;
    giving an alarm prompt when the new performance discrimination result corresponding to the new detection result after re-detection is unqualified.
  3. The method according to claim 1, wherein the performance discrimination result comprises: normal detection result and abnormal detection result, the abnormal detection result comprising: false detections and/or missed detections that affect system operation, and false detections and/or missed detections that do not affect system operation.
  4. The method according to claim 1, wherein performing enhancement processing on the image to be detected comprises:
    performing brightness/contrast enhancement on the image to be detected; and/or
    performing image denoising on the image to be detected; and/or
    performing image super-resolution on the image to be detected.
  5. The method according to claim 1, wherein
    the low-light environment detection module is obtained by training with a first training sample set, the first training sample set comprising non-low-light environment samples and low-light environment samples; the low-light environment detection performance discrimination module is obtained by training with a second training sample set, the second training sample set being a low-light environment sample set.
  6. The method according to claim 1, further comprising:
    generating a first low-light environment sample set in advance according to the historical detection results and historical bug tickets of the low-light environment detection module; and/or
    generating a second low-light environment sample set in advance according to the known labeled data used for training the low-light environment detection module;
    generating the low-light environment sample set according to the first low-light environment sample set and/or the second low-light environment sample set.
  7. An automatic driving method applied to an automatic driving terminal, wherein the method comprises:
    performing object detection using the low-light environment detection method according to any one of claims 1-6;
    performing automatic driving control according to the detection result.
  8. The method according to claim 7, wherein the automatic driving terminal may be any one of a multi-rotor drone, an unmanned ship and an unmanned vehicle.
  9. A method for training a low-light environment detection performance discrimination module, wherein the low-light environment detection performance discrimination module is used to judge the performance of the detection result output by a low-light environment detection module; the method comprising:
    acquiring a low-light environment sample set in advance;
    training the low-light environment detection performance discrimination module based on the low-light environment sample set.
  10. The method according to claim 9, wherein acquiring a low-light environment sample set in advance comprises:
    generating a first low-light environment sample set in advance according to the historical detection results and historical bug tickets of the low-light environment detection module; and/or
    generating a second low-light environment sample set in advance according to the known labeled data used for training the low-light environment detection module;
    generating the low-light environment sample set according to the first low-light environment sample set and/or the second low-light environment sample set.
  11. A low-light environment detection system, wherein the system comprises:
    a low-light environment detection module configured to detect an image to be detected to obtain a detection result;
    a low-light environment detection performance discrimination module configured to determine a performance discrimination result according to the image to be detected and the detection result, to output the detection result when the performance discrimination result is qualified, and to give an alarm prompt when the performance discrimination result is unqualified.
  12. The system according to claim 11, further comprising: an enhancement processing module configured to perform enhancement processing on the image to be detected;
    wherein giving an alarm prompt when the performance discrimination result is unqualified comprises:
    when the performance discrimination result is unqualified, performing enhancement processing on the image to be detected and inputting it again into the low-light environment detection module for detection;
    outputting the new detection result when the new performance discrimination result corresponding to the new detection result after re-detection is qualified;
    giving an alarm prompt when the new performance discrimination result corresponding to the new detection result after re-detection is unqualified.
  13. The system according to claim 11, wherein the performance discrimination result comprises: normal detection result and abnormal detection result, the abnormal detection result comprising: false detections and/or missed detections that affect system operation, and false detections and/or missed detections that do not affect system operation.
  14. The system according to claim 11, wherein performing enhancement processing on the image to be detected comprises:
    performing brightness/contrast enhancement on the image to be detected; and/or
    performing image denoising on the image to be detected; and/or
    performing image super-resolution on the image to be detected.
  15. The system according to claim 11, wherein
    the low-light environment detection module is obtained by training with a first training sample set, the first training sample set comprising non-low-light environment samples and low-light environment samples; the low-light environment detection performance discrimination module is obtained by training with a second training sample set, the second training sample set being a low-light environment sample set.
  16. The system according to claim 11, further comprising:
    generating a first low-light environment sample set in advance according to the historical detection results and historical bug tickets of the low-light environment detection module; and/or
    generating a second low-light environment sample set in advance according to the known labeled data used for training the low-light environment detection module;
    generating the low-light environment sample set according to the first low-light environment sample set and/or the second low-light environment sample set.
  17. An electronic device, comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the steps of the method according to any one of claims 1-8.
  18. An automatic driving terminal, configured with the electronic device according to claim 17.
  19. The automatic driving terminal according to claim 18, wherein the automatic driving terminal may be any one of a multi-rotor drone, an unmanned ship and an unmanned vehicle.
  20. A storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1-8.
PCT/CN2021/075262 2021-02-04 2021-02-04 Low-light environment detection method and automatic driving method WO2022165705A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/075262 WO2022165705A1 (zh) 2021-02-04 2021-02-04 低光环境检测方法及自动驾驶方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/075262 WO2022165705A1 (zh) 2021-02-04 2021-02-04 低光环境检测方法及自动驾驶方法

Publications (1)

Publication Number Publication Date
WO2022165705A1 true WO2022165705A1 (zh) 2022-08-11

Family

ID=82740765

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/075262 WO2022165705A1 (zh) 2021-02-04 2021-02-04 低光环境检测方法及自动驾驶方法

Country Status (1)

Country Link
WO (1) WO2022165705A1 (zh)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831586A (zh) * 2012-08-08 2012-12-19 无锡锦囊科技发展有限公司 Real-time enhancement method for images/videos under poor lighting conditions
US20170064278A1 (en) * 2014-04-18 2017-03-02 Autonomous Solutions, Inc. Stereo vision for sensing vehicles operating environment
CN110610463A (zh) * 2019-08-07 2019-12-24 深圳大学 Image enhancement method and apparatus
CN110675336A (zh) * 2019-08-29 2020-01-10 苏州千视通视觉科技股份有限公司 Low-illumination image enhancement method and apparatus
CN111539975A (zh) * 2020-04-09 2020-08-14 普联技术有限公司 Moving target detection method, apparatus, device and storage medium
CN112257759A (zh) * 2020-09-27 2021-01-22 华为技术有限公司 Image processing method and apparatus

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116033142A (zh) * 2023-03-30 2023-04-28 北京城建智控科技股份有限公司 Ambient light measurement method and system based on a camera device
CN116033142B (zh) * 2023-03-30 2023-06-23 北京城建智控科技股份有限公司 Ambient light measurement method and system based on a camera device

Similar Documents

Publication Publication Date Title
US20230041233A1 (en) Image recognition method and apparatus, computing device, and computer-readable storage medium
US20200192389A1 (en) Building an artificial-intelligence system for an autonomous vehicle
EP3796112B1 (en) Virtual vehicle control method, model training method, control device and storage medium
CN111489403A (zh) 利用gan来生成虚拟特征图的方法及装置
US11403560B2 (en) Training apparatus, image recognition apparatus, training method, and program
CN111652087B (zh) 验车方法、装置、电子设备和存储介质
CN110348278B (zh) 用于自主驾驶的基于视觉的样本高效的强化学习框架
CN113033537A (zh) 用于训练模型的方法、装置、设备、介质和程序产品
CN111686450B (zh) 游戏的剧本生成及运行方法、装置、电子设备和存储介质
CN113792791B (zh) 针对视觉模型的处理方法及装置
US12067705B2 (en) Method for detecting data defects and computing device utilizing method
US20210390667A1 (en) Model generation
WO2022165705A1 (zh) Low-light environment detection method and automatic driving method
CN109214616B (zh) 一种信息处理装置、系统和方法
GB2576660A (en) Computationally derived assessment in childhood education systems
CN110298302A (zh) 一种人体目标检测方法及相关设备
CN115471439A (zh) 显示面板缺陷的识别方法、装置、电子设备及存储介质
CN116977256A (zh) 缺陷检测模型的训练方法、装置、设备及存储介质
CN113326829B (zh) 视频中手势的识别方法、装置、可读存储介质及电子设备
CN116123040A (zh) 一种基于多模态数据融合的风机叶片状态检测方法及系统
US20220122341A1 (en) Target detection method and apparatus, electronic device, and computer storage medium
WO2023035263A1 (zh) 确定图像信号处理参数的方法、装置和感知系统
CN112528790B (zh) 基于行为识别的教学管理方法、装置及服务器
US20220391762A1 (en) Data generation device, data generation method, and program recording medium
CN111507421A (zh) 一种基于视频的情感识别方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21923730

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21923730

Country of ref document: EP

Kind code of ref document: A1