CN114065838B - Low-light obstacle detection method, system, terminal and storage medium - Google Patents

Low-light obstacle detection method, system, terminal and storage medium

Info

Publication number
CN114065838B
CN114065838B (application number CN202111235613.XA)
Authority
CN
China
Prior art keywords
low
illumination
image
light
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111235613.XA
Other languages
Chinese (zh)
Other versions
CN114065838A (en)
Inventor
张旺
秦文健
陈昊
陈震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202111235613.XA priority Critical patent/CN114065838B/en
Priority to PCT/CN2021/137601 priority patent/WO2023065497A1/en
Publication of CN114065838A publication Critical patent/CN114065838A/en
Application granted granted Critical
Publication of CN114065838B publication Critical patent/CN114065838B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a low-light obstacle detection method, system, terminal and storage medium. The method comprises the following steps: converting normal-illumination sample images into low-illumination sample images using a cycle-consistent adversarial network; enhancing the low-illumination sample images through an illumination enhancement network to obtain low-illumination enhanced images; inputting the low-illumination enhanced images into a target detection model for training to obtain a trained target detection model; and identifying obstacles in a low-light image to be detected through the trained target detection model. By introducing a self-supervised low-illumination enhancement algorithm to enhance low-illumination images and effectively combining it with the target detection algorithm, the embodiments of the application improve the robustness and accuracy of the target detection algorithm for low-illumination target detection and address the poor performance of ordinary target detection algorithms when detecting obstacles in low-illumination scenes.

Description

Low-light obstacle detection method, system, terminal and storage medium
Technical Field
The application belongs to the technical field of natural image processing, and in particular relates to a low-light obstacle detection method, system, terminal and storage medium.
Background
According to World Health Organization statistics, nearly 253 million people worldwide live with visual impairment, of whom 36 million are completely blind. Population growth and aging continue to enlarge this group, so guide devices for people with low vision face a demand that cannot be ignored.
Traditional blind-guiding aids include the white cane, the guide dog, Braille and tactile paving, but their safety is not fully reliable, and they are expensive and narrow in scope of application. In recent years, with deep learning and its breakthrough progress in computer vision, the development of blind-guiding devices has gained new momentum. Mainstream deep-learning object detection frameworks can be roughly divided into two technical directions: two-stage detection algorithms represented by Fast R-CNN, and single-stage detection algorithms represented by the YOLO series, SSD and the like. Compared with traditional algorithms, deep learning brings a breakthrough improvement in detection performance and has triggered a new wave of research on target detection. For example, Lin et al. developed an indoor vision assistance system based on Fast R-CNN that detects everyday objects such as fans, chairs, tables and dehumidifiers in an indoor environment, informs the user of the current location and plans a moving route through a smart device. Tapu et al. developed a wearable device that detects and warns of obstacles by means of YOLOv3, and designed a smartphone-based object detection and classification scheme to detect objects (cars, bicycles, people, etc.) in outdoor environments to help people with visual impairment. Sonay Duman et al. also proposed a YOLO-based portable blind-guiding system that helps visually impaired people perceive surrounding objects and people and accurately estimate their distance.
As described above, although existing detection-based blind-guiding devices can recognize obstacles, recognize faces or guide the user along a route to a destination, most existing devices and techniques are only suitable for normal illumination and struggle to meet the guidance requirements of low-illumination scenes.
Disclosure of Invention
The application provides a low-light obstacle detection method, system, terminal and storage medium, which aim to solve, at least to some extent, one of the technical problems in the prior art.
In order to solve the above problems, the present application provides the following technical solutions:
a low-light obstacle detection method, comprising:
converting a normal-illumination sample image into a low-illumination sample image using a cycle-consistent adversarial network;
performing enhancement processing on the low-illumination sample image through an illumination enhancement network to obtain a low-illumination enhanced image;
inputting the low-illumination enhanced image into a target detection model for training to obtain a trained target detection model;
and identifying obstacles in a low-light image to be detected through the trained target detection model.
The technical scheme adopted by the embodiment of the application further comprises: converting the normal-illumination sample image into a low-illumination sample image using the cycle-consistent adversarial network comprises:
the cycle-consistent adversarial network comprises a generator and a discriminator; the generator converts the normal-illumination image, and the discriminator judges whether the converted image meets the requirements of a low-illumination image; if not, the converted image is discarded; if it does, the converted image is placed into a low-illumination image dataset.
The technical scheme adopted by the embodiment of the application further comprises: enhancing the low-illumination sample image through the illumination enhancement network comprises:
the illumination enhancement network comprises a CNN network whose input is the low-illumination image dataset; during network training, the CNN network uses the relationship between the low-illumination image histogram curve and the normal-illumination image histogram curve, together with the data distribution, to design a loss function, thereby forming a self-supervised learning network, and outputs a mapping matrix characterizing the illumination enhancement features.
The technical scheme adopted by the embodiment of the application further comprises: enhancing the low-illumination sample image through the illumination enhancement network further comprises:
the illumination enhancement network further comprises a residual connection adjustment module containing four convolution layers with residual connections; the input of the residual connection adjustment module is the low-illumination image dataset, each low-illumination image in the dataset is processed by the four convolution layers to obtain three gray parameter matrices, and the mapping matrix output by the CNN network is iteratively adjusted by the three gray parameter matrices to obtain an adjustment image corresponding to each low-illumination image.
The technical scheme adopted by the embodiment of the application further comprises: enhancing the low-illumination sample image through the illumination enhancement network further comprises:
adding each low-illumination image to its corresponding adjustment image to generate a low-illumination enhanced image.
The technical scheme adopted by the embodiment of the application further comprises: inputting the low-illumination enhanced image into a target detection model for training comprises:
the target detection algorithm of the target detection model comprises the YOLO algorithm or the Fast R-CNN detection algorithm.
The embodiment of the application adopts another technical scheme: a low-light obstacle detection system, comprising:
an image conversion module: for converting a normal-illumination sample image into a low-illumination sample image using a cycle-consistent adversarial network;
an image enhancement module: for performing enhancement processing on the low-illumination sample image through an illumination enhancement network to obtain a low-illumination enhanced image;
a model training module: for inputting the low-illumination enhanced image into a target detection model for training to obtain a trained target detection model;
an image recognition module: for performing obstacle recognition on a low-light image to be detected through the trained target detection model.
The embodiment of the application adopts the following technical scheme: a terminal comprising a processor and a memory coupled to the processor, wherein:
the memory stores program instructions for implementing the low-light obstacle detection method;
the processor is configured to execute the program instructions stored by the memory to control low light obstacle detection.
The embodiment of the application adopts the following technical scheme: a storage medium storing program instructions executable by a processor for performing the low-light obstacle detection method.
Compared with the prior art, the beneficial effects produced by the embodiments of the application are as follows: the low-illumination obstacle detection method, system, terminal and storage medium enhance low-illumination images by introducing a self-supervised low-illumination enhancement algorithm and effectively combine the low-illumination enhancement algorithm with the target detection algorithm, improving the robustness and accuracy of the target detection algorithm for low-illumination target detection and solving the poor detection performance of ordinary target detection algorithms on obstacles in low-illumination scenes. In addition, a residual connection module is added, which makes better use of the original low-illumination image information, prevents the illumination enhancement network from structurally damaging the low-illumination image, and further improves the detection accuracy of the target detection algorithm under low illumination.
Drawings
FIG. 1 is a flow chart of a low light level obstacle detection method of an embodiment of the present application;
FIG. 2 is a schematic structural diagram of the cycle-consistent adversarial network according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a low-light obstacle detection system according to an embodiment of the disclosure;
fig. 4 is a schematic diagram of a terminal structure according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a storage medium according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Referring to fig. 1, a flowchart of a low-light obstacle detection method according to an embodiment of the application is shown. The low-light obstacle detection method comprises the following steps:
s10: acquiring a certain number of normal illumination sample images;
in this step, a normal illumination sample image may be acquired from a common dataset of normal illumination images.
S20: converting the normal illumination sample image into a low illumination sample image by using the cyclic countermeasure network, and generating a low illumination data set for training the target detection network;
in this step, the existing public data set of the illumination image is mostly a normal illumination image, and the data amount provided by the low illumination image data set is very limited (for example, an Exdark data set). To ensure adequate training of the target detection network, embodiments of the present application utilize a cyclic countermeasure network (CycleGAN) to convert normal illumination sample images to low illumination sample images. Specifically, fig. 2 is a schematic diagram of a cyclic countermeasure network structure according to an embodiment of the present application. Wherein the cyclic countermeasure network comprises a generator and a discriminator, the A-style data set represents a normal illumination image, the B-style data set represents a low illumination image, and the cyclic countermeasure network passes through the generator G A Converting the normal illumination image in the A-style data set into a low illumination image, and inputting the converted image into a discriminator D A Judging whether the B style is met, discarding the image if the B style is not met, and if the B style is met, putting the image into a B style data set so as to convert the normal illumination sample image into a low illumination sample image. Similarly, through generator G B Converting the image in the B-style data set into a normal illumination image, and inputting the converted image into a discriminator D B Judging whether the A style is met, discarding the image if the A style is not met, and if the A style is met, putting the image into an A style data set so as to convert the low-illumination sample image into a normal illumination sample image. And after the network training is finished, generating a low-light data set for training the target detection network, and providing sufficient training data support for the target detection network.
S30: inputting the low-illumination image dataset into an illumination enhancement network for enhancement processing, and outputting a low-illumination enhanced image dataset through the illumination enhancement network;
in this step, the illumination enhancement network includes a CNN (convolutional neural network) network and a residual connection adjustment module, where the input of the CNN network is a low-illumination image dataset, and during network training, the CNN network uses the relationship between the low-illumination image histogram and the normal-illumination image histogram curve and the data distribution design loss function to form a self-supervision learning network, and outputs a mapping matrix for characterizing the illumination enhancement feature. The input of the residual connection adjustment module is a low-illumination image data set, the low-illumination image data set comprises four convolution layers with residual connection, three gray parameter matrixes are obtained after the low-illumination image is processed by the four convolution layers, three iterative adjustment is carried out on the mapping matrixes output by the CNN network through the three gray parameter matrixes, an adjustment image corresponding to each low-illumination image is obtained, and finally, each low-illumination image is added with the corresponding adjustment image to generate a low-illumination enhanced image.
As described above, the residual connection adjustment module of the embodiment of the application effectively fine-tunes the weights of the illumination enhancement network during training and makes better use of the original image information, preventing the illumination enhancement network from structurally damaging the low-illumination image.
S40: inputting the low-illumination enhanced image data set into a target detection model for training, and outputting target category and position information in the low-illumination enhanced image through the target detection model;
in this step, the target detection model may use YOLO algorithm or Fast R-CNN detection algorithm, etc.
S50: inputting the low-light image to be detected into the trained target detection model, and performing obstacle recognition on the low-light image through the target detection model.
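At inference time the two stages are chained: the low-illumination frame is first enhanced and then passed to the trained detector. A sketch under the same assumptions as above:

```python
import torch

@torch.no_grad()
def detect_obstacles(low_light_img, enhancer, detector, score_thresh=0.5, device="cuda"):
    # low_light_img: (3, H, W) tensor in [0, 1]; enhancer and detector are the trained
    # illumination enhancement network and target detection model from the steps above.
    enhancer.eval()
    detector.eval()
    enhanced = enhancer(low_light_img.unsqueeze(0).to(device))
    pred = detector(list(enhanced))[0]                 # dict with "boxes", "labels", "scores"
    keep = pred["scores"] >= score_thresh
    return pred["boxes"][keep], pred["labels"][keep], pred["scores"][keep]
```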
Based on the above, the low-illumination obstacle detection method of the embodiment of the application enhances low-illumination images by introducing a self-supervised low-illumination enhancement algorithm, effectively combines low-illumination enhancement with the target detection algorithm, improves the robustness and accuracy of the target detection algorithm for low-illumination target detection, and solves the poor detection performance of ordinary target detection algorithms on obstacles in low-illumination scenes. In addition, the residual connection module makes better use of the original low-illumination image information, prevents the illumination enhancement network from structurally damaging the low-illumination image, and further improves the detection accuracy of the target detection algorithm under low illumination.
Fig. 3 is a schematic structural diagram of a low-light obstacle detection system according to an embodiment of the disclosure. The low-light obstacle detection system 40 of the embodiment of the present application includes:
the image conversion module 41: for converting a normal-illumination sample image into a low-illumination sample image using a cycle-consistent adversarial network;
the image enhancement module 42: for performing enhancement processing on the low-illumination sample image through an illumination enhancement network to obtain a low-illumination enhanced image;
the model training module 43: for inputting the low-illumination enhanced image into a target detection model for training to obtain a trained target detection model;
the image recognition module 44: for performing obstacle recognition on the low-light image to be detected through the trained target detection model.
Fig. 4 is a schematic diagram of a terminal structure according to an embodiment of the present application. The terminal 50 includes a processor 51, a memory 52 coupled to the processor 51.
The memory 52 stores program instructions for implementing the low-light obstacle detection method described above.
The processor 51 is configured to execute program instructions stored in the memory 52 to control low light obstacle detection.
The processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip with signal processing capabilities. The processor 51 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Fig. 5 is a schematic structural diagram of a storage medium according to an embodiment of the present application. The storage medium of the embodiment of the present application stores a program file 61 capable of implementing all of the methods described above. The program file 61 may be stored in the storage medium in the form of a software product and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code, or a terminal device such as a computer, a server, a mobile phone, or a tablet.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A method of detecting a low-light obstacle, comprising:
converting a normal-illumination sample image into a low-illumination sample image using a cycle-consistent adversarial network;
performing enhancement processing on the low-illumination sample image through an illumination enhancement network to obtain a low-illumination enhanced image;
inputting the low-illumination enhanced image into a target detection model for training to obtain a trained target detection model;
performing obstacle recognition on the low-light image to be detected through the trained target detection model;
the converting the normal light sample image to a low light sample image using the recurring antagonism network includes:
the cycle-consistent adversarial network comprises a generator and a discriminator; the generator converts the normal-illumination image, and the discriminator judges whether the converted image meets the requirements of a low-illumination image; if not, the converted image is discarded; if it does, the converted image is placed into a low-illumination image dataset;
wherein enhancing the low-illumination sample image through the illumination enhancement network comprises:
the illumination enhancement network comprises a CNN network whose input is the low-illumination image dataset; during network training, the CNN network uses the relationship between the low-illumination image histogram curve and the normal-illumination image histogram curve, together with the data distribution, to design a loss function, forming a self-supervised learning network, and outputs a mapping matrix characterizing the illumination enhancement features;
the illumination enhancement network further comprises a residual connection adjustment module containing four convolution layers with residual connections; the input of the residual connection adjustment module is the low-illumination image dataset, each low-illumination image in the dataset is processed by the four convolution layers to obtain three gray parameter matrices, and the mapping matrix output by the CNN network is iteratively adjusted by the three gray parameter matrices to obtain an adjustment image corresponding to each low-illumination image.
2. The low-light obstacle detection method of claim 1, wherein enhancing the low-illumination sample image through the illumination enhancement network further comprises:
adding each low-illumination image to its corresponding adjustment image to generate a low-illumination enhanced image.
3. The low-light obstacle detection method of any one of claims 1 to 2, wherein inputting the low-illumination enhanced image into a target detection model for training comprises:
the target detection algorithm of the target detection model comprises a YOLO algorithm or a Fast R-CNN detection algorithm.
4. A low-light obstacle detection system employing the low-light obstacle detection method as claimed in any one of claims 1 to 3, comprising:
an image conversion module: for converting a normal-illumination sample image into a low-illumination sample image using a cycle-consistent adversarial network;
an image enhancement module: for performing enhancement processing on the low-illumination sample image through an illumination enhancement network to obtain a low-illumination enhanced image;
a model training module: for inputting the low-illumination enhanced image into a target detection model for training to obtain a trained target detection model;
an image recognition module: for performing obstacle recognition on the low-light image to be detected through the trained target detection model.
5. A terminal comprising a processor and a memory coupled to the processor, wherein:
the memory stores program instructions for implementing the low-light obstacle detection method of any one of claims 1 to 3;
the processor is configured to execute the program instructions stored by the memory to control low light obstacle detection.
6. A storage medium storing program instructions executable by a processor for performing the low-light obstacle detection method as claimed in any one of claims 1 to 3.
CN202111235613.XA 2021-10-22 2021-10-22 Low-light obstacle detection method, system, terminal and storage medium Active CN114065838B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111235613.XA CN114065838B (en) 2021-10-22 2021-10-22 Low-light obstacle detection method, system, terminal and storage medium
PCT/CN2021/137601 WO2023065497A1 (en) 2021-10-22 2021-12-13 Low-illumination obstacle detection method and system, and terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111235613.XA CN114065838B (en) 2021-10-22 2021-10-22 Low-light obstacle detection method, system, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN114065838A CN114065838A (en) 2022-02-18
CN114065838B (en) 2023-07-14

Family

ID=80235298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111235613.XA Active CN114065838B (en) 2021-10-22 2021-10-22 Low-light obstacle detection method, system, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN114065838B (en)
WO (1) WO2023065497A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115620090A (en) * 2022-11-07 2023-01-17 中电科新型智慧城市研究院有限公司 Model training method, low-illumination target re-recognition method and device and terminal equipment
CN117893880A (en) * 2024-01-25 2024-04-16 西南科技大学 Target detection method for self-adaptive feature learning of low-light image


Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102641116B1 (en) * 2018-08-23 2024-02-27 삼성전자주식회사 Method and device to recognize image and method and device to train recognition model based on data augmentation
CN110210401B (en) * 2019-06-03 2022-10-21 多维协同人工智能技术研究院(重庆)有限公司 Intelligent target detection method under weak light
CN111291885B (en) * 2020-01-20 2023-06-09 北京百度网讯科技有限公司 Near infrared image generation method, training method and device for generation network
CN111402145B (en) * 2020-02-17 2022-06-07 哈尔滨工业大学 Self-supervision low-illumination image enhancement method based on deep learning
CN111798400B (en) * 2020-07-20 2022-10-11 福州大学 Non-reference low-illumination image enhancement method and system based on generation countermeasure network
CN112560579A (en) * 2020-11-20 2021-03-26 中国科学院深圳先进技术研究院 Obstacle detection method based on artificial intelligence
CN112614077B (en) * 2020-12-30 2022-08-19 北京航空航天大学杭州创新研究院 Unsupervised low-illumination image enhancement method based on generation countermeasure network
CN112766056B (en) * 2020-12-30 2023-10-27 厦门大学 Method and device for detecting lane lines in low-light environment based on deep neural network
CN112950561B (en) * 2021-02-22 2022-07-26 中国地质大学(武汉) Optical fiber end face defect detection method, device and storage medium
CN113052210B (en) * 2021-03-11 2024-04-26 北京工业大学 Rapid low-light target detection method based on convolutional neural network
CN113392702B (en) * 2021-05-10 2024-06-11 南京师范大学 Target identification method based on self-adaptive image enhancement under weak illumination environment
CN113313657B (en) * 2021-07-29 2021-12-21 北京航空航天大学杭州创新研究院 Unsupervised learning method and system for low-illumination image enhancement

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886161A (en) * 2019-01-30 2019-06-14 江南大学 A kind of road traffic index identification method based on possibility cluster and convolutional neural networks
CN111161178A (en) * 2019-12-25 2020-05-15 湖南大学 Single low-light image enhancement method based on generation type countermeasure network
CN111968044A (en) * 2020-07-16 2020-11-20 中国科学院沈阳自动化研究所 Low-illumination image enhancement method based on Retinex and deep learning

Also Published As

Publication number Publication date
WO2023065497A1 (en) 2023-04-27
CN114065838A (en) 2022-02-18

Similar Documents

Publication Publication Date Title
US11657230B2 (en) Referring image segmentation
US10354362B2 (en) Methods and software for detecting objects in images using a multiscale fast region-based convolutional neural network
CN111723786B (en) Method and device for detecting wearing of safety helmet based on single model prediction
CN107273458B (en) Depth model training method and device, and image retrieval method and device
CN114065838B (en) Low-light obstacle detection method, system, terminal and storage medium
WO2024045444A1 (en) Processing method and apparatus for visual question answering task, and device and non-volatile readable storage medium
CN108021908B (en) Face age group identification method and device, computer device and readable storage medium
CN110414550B (en) Training method, device and system of face recognition model and computer readable medium
WO2023173552A1 (en) Establishment method for target detection model, application method for target detection model, and device, apparatus and medium
CN110111365B (en) Training method and device based on deep learning and target tracking method and device
JP2011248879A (en) Method for classifying object in test image
CN111813997A (en) Intrusion analysis method, device, equipment and storage medium
EP3726435A1 (en) Deep neural network training method and apparatus, and computer device
CN112836625A (en) Face living body detection method and device and electronic equipment
CN112215188B (en) Traffic police gesture recognition method, device, equipment and storage medium
CN111178178B (en) Multi-scale pedestrian re-identification method, system, medium and terminal combined with region distribution
CN110633689B (en) Face recognition model based on semi-supervised attention network
US12033075B2 (en) Training transformer neural networks to generate parameters of convolutional neural networks
Wang et al. Text detection algorithm based on improved YOLOv3
Li et al. Detection of partially occluded pedestrians by an enhanced cascade detector
CN114630238B (en) Stage sound box volume control method and device, electronic equipment and medium
CN116468043A (en) Nested entity identification method, device, equipment and storage medium
CN114511877A (en) Behavior recognition method and device, storage medium and terminal
KR20230042192A (en) Method, device, device and storage medium for detecting the degree of relevance between face and hand
CN114387603A (en) Method, system and computing device for detecting and correcting Chinese characters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant