CN116434164A - Lane line detection method, system, medium and computer integrating multiple illumination information - Google Patents

Lane line detection method, system, medium and computer integrating multiple illumination information

Info

Publication number: CN116434164A
Application number: CN202310091893.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: fusion, exposure, image, lane line, line detection
Inventors: 田炜, 赵文博, 余先旺, 冯挽强
Current Assignee: Nanchang Intelligent New Energy Vehicle Research Institute (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Nanchang Intelligent New Energy Vehicle Research Institute
Application filed by Nanchang Intelligent New Energy Vehicle Research Institute
Priority to CN202310091893.4A
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)

Classifications

    • G06V20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06N3/08: Neural networks; learning methods
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V10/764: Image or video recognition or understanding using pattern recognition or machine learning, e.g. classification of video objects
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/82: Image or video recognition or understanding using neural networks
    • G06T2207/20081: Training; learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/20221: Image fusion; image merging
    • G06T2207/30256: Lane; road marking
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a lane line detection method, system, medium and computer that fuse multi-illumination information. The method comprises: acquiring an input image captured by a vision sensor, and performing multi-exposure generation on the input image to obtain a corresponding multi-exposure image; inputting the multi-exposure image and the input image into a fusion network model for multi-exposure fusion to obtain a corresponding fusion image; and extracting features of the fusion image, and inputting the fusion image after feature extraction into a lane line detection network to obtain lane line parameters or segmented images. Without affecting real-time detection, the method effectively improves the accuracy of lane line detection under low-illumination conditions and improves the robustness and safety of an automatic driving system.

Description

Lane line detection method, system, medium and computer integrating multiple illumination information
Technical Field
The invention relates to the technical field of automatic driving and intelligent vehicles, and in particular to a lane line detection method, system, medium and computer that fuse multi-illumination information.
Background
As artificial intelligence technology continues to advance, intelligent vehicles have become a focus of attention. Vehicle-mounted sensors perceive the surrounding environment so that automatic driving can be realized without manual operation. The onboard camera is responsible for acquiring visual information about the surroundings; it is expected to accurately recognize nearby pedestrians, traffic lights, vehicles, lane lines and other information and transmit it to the onboard computer for vehicle control and planning.
Lane lines are a standard element of roads worldwide, and each country has corresponding laws requiring vehicles to travel along and change lanes according to the lane lines. The relative position of the vehicle with respect to the lane lines is critical information for the motion planning and control of an unmanned vehicle or a vehicle equipped with an intelligent driving assistance system. Real-time and accurate lane line detection is therefore an important topic in the field of intelligent vehicles.
With the development of deep learning technology, many lane line detection solutions are now based on deep learning: a deep neural network replaces manually designed feature detectors and learns to identify lane lines through training. Typically, pixel-level semantic segmentation is applied to separate lane line classes from the background class, or lane line shapes are fitted by predicting cubic curve parameters. These methods have achieved high detection accuracy on standard data sets such as TuSimple and CULane while allowing real-time detection.
However, the data currently used for developing lane line detection algorithms tend to focus on driving environments with good visibility and illumination, and mainstream algorithms are mostly developed under normal illumination conditions. Under low-illumination conditions such as night and tunnel scenes, detection accuracy drops noticeably because feature extraction becomes difficult, which greatly affects subsequent decision and control.
In practical applications, the driving scene of a vehicle changes dramatically, and reliable all-weather environment perception is the foundation of intelligent vehicles and automatic driving. Developing and researching environment perception algorithms, in particular lane line detection algorithms, for low-light scenes therefore has important practical value.
Disclosure of Invention
Based on this, an object of the present invention is to provide a lane line detection method, system, medium and computer that fuses multiple illumination information, so as to at least solve the above-mentioned drawbacks in the related art.
The invention provides a lane line detection method integrating multiple illumination information, which comprises the following steps:
acquiring an input image acquired by a vision sensor, and performing multi-exposure generation on the input image to obtain a corresponding multi-exposure image;
inputting the multi-exposure image and the input image into a fusion network model for multi-exposure fusion to obtain a corresponding fusion image;
and extracting the characteristics of the fusion image, and inputting the fusion image after the characteristic extraction into a lane line detection network to obtain lane line parameters or segmented images.
Further, the step of performing multi-exposure generation on the input image to obtain a corresponding multi-exposure image includes:
performing multi-exposure generation on the image information by utilizing gamma conversion according to the following formula to output a high-exposure image aiming at a low-illumination scene, so as to obtain a multi-exposure image:
I_exp = λ · (I_input)^γ
wherein I_exp and I_input are the multi-exposure image and the input image respectively, λ is a scaling factor, and γ is the gamma adjustment factor.
Further, before the step of inputting the multi-exposure image and the input image into a fusion network model for multi-exposure fusion, the method further includes:
acquiring a network training set, and pre-training a combined model through the network training set;
acquiring self-supervision loss of a multi-exposure fusion network and matching loss of a lane line detection network, and carrying out weighted addition on the self-supervision loss and the matching loss according to a preset proportion to obtain a corresponding loss function;
training the pre-trained combined model according to the loss function to obtain a fusion network model.
Further, the fusion network model includes a plurality of dense modules, and the step of inputting the multi-exposure image and the input image into the fusion network model to perform multi-exposure fusion so as to obtain a corresponding fusion image includes:
constructing a fusion algorithm of the fusion network model according to a unified unsupervised image fusion network;
and using the fusion algorithm to sequentially pass the multi-exposure image and the input image through each dense module in the fusion network model for convolution and pooling processing, so as to obtain the fusion image.
The invention also provides a lane line detection system integrating multiple illumination information, which comprises:
the exposure generation module is used for acquiring an input image acquired by the vision sensor and performing multi-exposure generation on the input image so as to obtain a corresponding multi-exposure image;
the exposure fusion module is used for inputting the multi-exposure image and the input image into a fusion network model to perform multi-exposure fusion so as to obtain a corresponding fusion image;
the lane line detection module is used for extracting the characteristics of the fusion image and inputting the fusion image after the characteristics are extracted into a lane line detection network so as to obtain lane line parameters or segmented images.
Further, the exposure generating module includes:
the exposure generating unit is used for performing multi-exposure generation on the image information by utilizing gamma conversion according to the following formula so as to output a high-exposure image aiming at a low-illumination scene, so as to obtain a multi-exposure image:
I_exp = λ · (I_input)^γ
wherein I_exp and I_input are the multi-exposure image and the input image respectively, λ is a scaling factor, and γ is the gamma adjustment factor.
Further, the system further comprises:
the training set acquisition module is used for acquiring a network training set and pre-training the combined model through the network training set;
the function calculation module is used for acquiring self-supervision loss of the multi-exposure fusion network and matching loss of the lane line detection network, and carrying out weighted addition on the self-supervision loss and the matching loss according to a preset proportion so as to obtain a corresponding loss function;
and the model training module is used for training the pre-trained combined model according to the loss function so as to obtain a fusion network model.
Further, the fusion network model includes a plurality of dense modules, and the exposure fusion module includes:
the algorithm construction unit is used for constructing a fusion algorithm of the fusion network model according to a unified unsupervised image fusion network;
and the exposure fusion unit is used for sequentially passing the multi-exposure image and the input image through each dense module in the fusion network model for convolution and pooling processing using the fusion algorithm, so as to obtain the fusion image.
The invention also provides a medium, on which a computer program is stored, which when being executed by a processor, realizes the lane line detection method integrating multiple illumination information.
The invention also provides a computer, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the lane line detection method integrating multiple illumination information is realized when the processor executes the computer program.
Compared with the prior art, the invention has the following beneficial effects. A multi-exposure fusion algorithm is introduced into the lane line detection task, which fully expands the information obtained from a single vision sensor and facilitates subsequent feature extraction from the image; a semantic segmentation model based on a deep neural network, combined with recurrent feature-shift aggregation, yields high-precision and fast lane line detection results. Without affecting real-time detection, the method effectively improves the accuracy of the lane line detection algorithm under low-illumination conditions and improves the robustness and safety of an automatic driving system. Detection accuracy under low-illumination visual conditions is greatly improved, while the lightweight multi-exposure generation and fusion modules are easy to train and fast in detection. The lane line detection task is thus well optimized, providing a guarantee for decision making after automatic driving perception.
Drawings
FIG. 1 is a flowchart of a lane line detection method with multi-illumination information fusion according to a first embodiment of the present invention;
FIG. 2 is a block diagram showing a lane line detection method with multi-illumination information integrated in a first embodiment of the present invention;
FIG. 3 is a detailed flowchart of step S102 in FIG. 1;
FIG. 4 is a diagram showing an algorithm structure of a multi-exposure fusion module in a lane line detection method for fusing multi-illumination information according to a first embodiment of the present invention;
FIG. 5 is a flowchart of another embodiment of a lane line detection method with multiple illumination information fusion according to a first embodiment of the present invention;
FIG. 6 is a diagram showing an algorithm of a lane line detection module in a lane line detection method with multi-illumination information fusion according to a first embodiment of the present invention;
FIG. 7 is a block diagram showing a lane line detection system with multiple illumination information fusion according to a second embodiment of the present invention;
fig. 8 is a block diagram showing the structure of a computer according to a third embodiment of the present invention.
The invention will be further described in the following detailed description in conjunction with the above-described figures.
Detailed Description
In order that the invention may be readily understood, a more complete description of the invention will be rendered by reference to the appended drawings. Several embodiments of the invention are presented in the figures. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "mounted" on another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like are used herein for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Example 1
Referring to fig. 1, a lane line detection method for fusing multiple illumination information in a first embodiment of the present invention is shown, and the method specifically includes steps S101 to S103:
s101, acquiring an input image acquired by a vision sensor, and performing multi-exposure generation on the input image to obtain a corresponding multi-exposure image;
in a specific implementation, the embodiment provides a lane line detection method for fusing multiple illumination information, which is composed of a multiple exposure generation module, a multiple exposure fusion module and a lane line detection module (as shown in fig. 2).
The algorithm takes the image information collected by the vision sensor as input; the image first enters the multi-exposure generation module to generate a multi-exposure image. In this embodiment, the multi-exposure generation module performs multi-exposure generation by gamma conversion, as shown in formula (1):

I_exp = λ · (I_input)^γ    (1)

where λ is a scaling factor (1 in this embodiment), I_exp and I_input are the multi-exposure image and the input image respectively, and γ is the gamma adjustment factor. In this embodiment γ = 1.5 is used to generate a high-exposure image, which is input together with the original image into the subsequent multi-exposure fusion module.
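As a concrete illustration of this step, the following is a minimal sketch of the gamma-based multi-exposure generation, assuming the input image is a floating-point array normalized to [0, 1]; the function name and the λ = 1, γ = 1.5 defaults follow this embodiment, but the code itself is only an illustrative assumption, not the patented implementation.

```python
import numpy as np

def generate_exposure(i_input: np.ndarray, lam: float = 1.0, gamma: float = 1.5) -> np.ndarray:
    """Multi-exposure generation via gamma conversion: I_exp = lam * I_input ** gamma."""
    i_input = np.clip(i_input, 0.0, 1.0)        # keep pixel intensities in [0, 1]
    i_exp = lam * np.power(i_input, gamma)      # per-pixel gamma transform (formula (1))
    return np.clip(i_exp, 0.0, 1.0)             # clamp back to the valid range
```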
S102, inputting the multi-exposure image and the input image into a fusion network model for multi-exposure fusion to obtain a corresponding fusion image;
further, referring to fig. 3, the step 102 specifically includes steps S1021 to S1022:
S1021, constructing a fusion algorithm of the fusion network model according to a unified unsupervised image fusion network;
S1022, sequentially passing the multi-exposure image and the input image through each dense module in the fusion network model for convolution and pooling processing using the fusion algorithm, so as to obtain the fusion image.
In a specific implementation, the multi-exposure fusion module is required to have a simple structure and to be easy to splice with other modules, so as to avoid problems such as difficult training and slow detection caused by an excessively large model. In addition, to ensure that the fusion network can be optimized for the lane line detection task, the multi-exposure fusion module needs to be an end-to-end input/output network. Moreover, the multi-exposure fusion network needs to be self-supervised so that it can be trained with existing lane line data sets. In this embodiment, the unified unsupervised image fusion network U2Fusion is used as the multi-exposure fusion module algorithm; its structure is shown in fig. 4.
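For readers unfamiliar with dense-block fusion networks, the following PyTorch sketch illustrates the general idea of this step: the generated exposure image and the original input are concatenated and passed through densely connected convolution layers to produce the fusion image. The class names and layer sizes are hypothetical simplifications and do not reproduce the published U2Fusion architecture; pooling between modules is omitted here for brevity.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """A dense module: each convolution sees the concatenation of all previous feature maps."""
    def __init__(self, in_ch: int, growth: int = 16, layers: int = 3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1)
            for i in range(layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)

class FusionNet(nn.Module):
    """Fuse the original image and its generated exposure into a single RGB fusion image."""
    def __init__(self):
        super().__init__()
        self.block = DenseBlock(in_ch=6)                     # 6 = 3 (input) + 3 (exposure) channels
        self.out = nn.Conv2d(6 + 3 * 16, 3, kernel_size=1)   # project concatenated features to RGB

    def forward(self, i_input: torch.Tensor, i_exp: torch.Tensor) -> torch.Tensor:
        x = torch.cat([i_input, i_exp], dim=1)               # end-to-end: both images enter together
        return torch.sigmoid(self.out(self.block(x)))        # fusion image in [0, 1]
```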
In some alternative embodiments, referring to fig. 5, before step S102, the method further includes steps S201 to S203:
s201, acquiring a network training set, and pre-training a combined model through the network training set;
s202, acquiring self-supervision loss of a multi-exposure fusion network and matching loss of a lane line detection network, and carrying out weighted addition on the self-supervision loss and the matching loss according to a preset proportion to obtain a corresponding loss function;
and S203, training the pre-trained combined model according to the loss function to obtain a fusion network model.
In a specific implementation, after the multi-exposure fusion network is obtained, the combination of the multi-exposure generation module and the multi-exposure fusion module is first pre-trained. The self-supervision loss of the multi-exposure fusion network and the matching loss of the lane line detection network are then weighted and added at a ratio of 1:1 as the loss function of the whole network for training, so that the requirements of the lane line detection task are met and the fusion network is optimized for it.
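The joint objective described above can be sketched as follows; `fusion_self_supervised_loss` and `lane_matching_loss` are placeholder callables standing in for the fusion network's self-supervision loss and the detection network's matching loss (assumptions for illustration, not library functions), and the 1:1 weighting matches this embodiment.

```python
import torch

def joint_loss(fused, i_input, i_exp, lane_pred, lane_gt,
               fusion_self_supervised_loss, lane_matching_loss,
               w_fusion: float = 1.0, w_lane: float = 1.0) -> torch.Tensor:
    """Weighted sum of the fusion self-supervision loss and the lane matching loss."""
    l_fusion = fusion_self_supervised_loss(fused, i_input, i_exp)  # needs no lane labels
    l_lane = lane_matching_loss(lane_pred, lane_gt)                # supervised by lane annotations
    return w_fusion * l_fusion + w_lane * l_lane                   # 1:1 weighting by default
```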
The training data set (i.e., the network training set) needs to contain a proportion of low-illumination data so that the network supports low-light scenes. In this embodiment, the CULane data set is used to train the network (i.e., the combined model formed by the multi-exposure generation module and the multi-exposure fusion module); it covers multiple visual conditions such as normal illumination, night, shadow and high exposure, which helps improve the robustness of the network.
And S103, carrying out feature extraction on the fusion image, and inputting the fusion image after feature extraction into a lane line detection network to obtain lane line parameters or segmented images.
In a specific implementation, the lane line detection module takes the image fusion result as input: features of the fusion image serve as the input of the subsequent lane line detection network, further feature extraction is performed, and the final lane line parameters or segmented images are output by the lane line detection network. In this embodiment, the lane line detection algorithm RESA, based on recurrent feature-shift aggregation, is used as the lane line detection module; its structure is shown in fig. 6.
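Putting the three modules together, a hedged end-to-end inference sketch might look like the following; `fusion_net` is the FusionNet sketch above and `lane_net` is a placeholder for the RESA-style detection network (the actual RESA implementation is not reproduced here), with the input assumed to be a normalized (1, 3, H, W) tensor.

```python
import torch

@torch.no_grad()
def detect_lanes(i_input: torch.Tensor, fusion_net, lane_net, gamma: float = 1.5):
    """i_input: (1, 3, H, W) tensor in [0, 1] taken from the vision sensor."""
    i_exp = torch.clamp(i_input, 0.0, 1.0) ** gamma   # multi-exposure generation (formula (1))
    fused = fusion_net(i_input, i_exp)                # multi-exposure fusion
    return lane_net(fused)                            # lane line parameters or segmented image
```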
In summary, the lane line detection method fusing multi-illumination information in the above embodiment introduces a multi-exposure fusion algorithm into the lane line detection task, fully expanding the information obtained from a single vision sensor and facilitating subsequent feature extraction from the image, and uses a semantic segmentation model based on a deep neural network combined with recurrent feature-shift aggregation to obtain high-precision and fast lane line detection results. Without affecting real-time detection, the method effectively improves the accuracy of the lane line detection algorithm under low-illumination conditions and improves the robustness and safety of an automatic driving system. Detection accuracy under low-illumination visual conditions is greatly improved, while the lightweight multi-exposure generation and fusion modules are easy to train and fast in detection. The lane line detection task is thus well optimized, providing a guarantee for decision making after automatic driving perception.
Example two
In another aspect, referring to fig. 7, a lane line detection system for fusing multiple illumination information in a second embodiment of the present invention is shown, including:
the exposure generation module 11 is configured to acquire an input image acquired by the vision sensor, and perform multi-exposure generation on the input image to obtain a corresponding multi-exposure image;
further, the exposure generating module 11 includes:
the exposure generating unit is used for performing multi-exposure generation on the image information by utilizing gamma conversion according to the following formula so as to output a high-exposure image aiming at a low-illumination scene, so as to obtain a multi-exposure image:
I_exp = λ · (I_input)^γ
wherein I_exp and I_input are the multi-exposure image and the input image respectively, λ is a scaling factor, and γ is the gamma adjustment factor.
The exposure fusion module 12 is configured to input the multi-exposure image and the input image into a fusion network model for multi-exposure fusion, so as to obtain a corresponding fusion image;
further, the converged network model includes a plurality of dense modules, and the exposure convergence module 12 includes:
the algorithm construction unit is used for constructing a fusion algorithm of the fusion network model according to the integrated unsupervised image fusion network;
and the exposure fusion unit is used for utilizing the fusion algorithm to sequentially roll and pool the multi-exposure image and the input image through each dense module in the fusion network model so as to obtain the fusion image.
The lane line detection module 13 is configured to perform feature extraction on the fused image, and input the fused image after feature extraction into a lane line detection network to obtain lane line parameters or segmented images.
In some alternative embodiments, the system further comprises:
the training set acquisition module is used for acquiring a network training set and pre-training the combined model through the network training set;
the function calculation module is used for acquiring self-supervision loss of the multi-exposure fusion network and matching loss of the lane line detection network, and carrying out weighted addition on the self-supervision loss and the matching loss according to a preset proportion so as to obtain a corresponding loss function;
and the model training module is used for training the pre-trained combined model according to the loss function so as to obtain a fusion network model.
The functions or operation steps implemented when the above modules and units are executed are substantially the same as those in the above method embodiments, and are not described herein again.
The lane line detection system with multiple illumination information fusion provided by the embodiment of the invention has the same implementation principle and the same technical effects as those of the embodiment of the method, and for the sake of brief description, the corresponding contents in the embodiment of the method can be referred to for the parts of the embodiment of the system which are not mentioned.
Example III
The present invention also proposes a computer, please refer to fig. 8, which shows a computer according to a third embodiment of the present invention, including a memory 10, a processor 20, and a computer program 30 stored in the memory 10 and capable of running on the processor 20, wherein the processor 20 implements the lane line detection method of fusing multiple illumination information when executing the computer program 30.
The memory 10 includes at least one type of medium including flash memory, a hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. Memory 10 may in some embodiments be an internal storage unit of a computer, such as a hard disk of the computer. The memory 10 may also be an external storage device in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), etc. Further, the memory 10 may also include both internal storage units and external storage devices of the computer. The memory 10 may be used not only for storing application software installed in a computer and various types of data, but also for temporarily storing data that has been output or is to be output.
The processor 20 may be, in some embodiments, an electronic control unit (Electronic Control Unit, ECU), a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chip, for executing program codes or processing data stored in the memory 10, such as executing an access restriction program, or the like.
It should be noted that the structure shown in fig. 8 does not constitute a limitation of the computer, and in other embodiments, the computer may include fewer or more components than shown, or may combine certain components, or may have a different arrangement of components.
The embodiment of the invention also provides a medium, and a computer program is stored on the medium, and the computer program realizes the lane line detection method integrating multiple illumination information when being executed by a processor.
Those skilled in the art will appreciate that the logic and/or steps illustrated in the flowcharts or otherwise described herein, for example an ordered list of executable instructions for implementing the logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device, such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above examples merely represent several embodiments of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the spirit of the present application, all of which fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (10)

1. The lane line detection method integrating multiple illumination information is characterized by comprising the following steps of:
acquiring an input image acquired by a vision sensor, and performing multi-exposure generation on the input image to obtain a corresponding multi-exposure image;
inputting the multi-exposure image and the input image into a fusion network model for multi-exposure fusion to obtain a corresponding fusion image;
and extracting the characteristics of the fusion image, and inputting the fusion image after the characteristic extraction into a lane line detection network to obtain lane line parameters or segmented images.
2. The lane line detection method according to claim 1, wherein the step of performing multi-exposure generation on the input image to obtain a corresponding multi-exposure image comprises:
performing multi-exposure generation on the image information by utilizing gamma conversion according to the following formula to output a high-exposure image aiming at a low-illumination scene, so as to obtain a multi-exposure image:
I_exp = λ · (I_input)^γ
wherein I_exp and I_input are the multi-exposure image and the input image respectively, λ is a scaling factor, and γ is the gamma adjustment factor.
3. The lane line detection method of fusing multi-illumination information according to claim 1, wherein before the step of inputting the multi-exposure image and the input image into a fused network model for multi-exposure fusion, the method further comprises:
acquiring a network training set, and pre-training a combined model through the network training set;
acquiring self-supervision loss of a multi-exposure fusion network and matching loss of a lane line detection network, and carrying out weighted addition on the self-supervision loss and the matching loss according to a preset proportion to obtain a corresponding loss function;
training the pre-trained combined model according to the loss function to obtain a fusion network model.
4. The lane line detection method according to claim 1, wherein the fusion network model includes a plurality of dense modules, and the step of inputting the multi-exposure image and the input image into the fusion network model to perform multi-exposure fusion to obtain a corresponding fusion image includes:
constructing a fusion algorithm of the fusion network model according to a unified unsupervised image fusion network;
and using the fusion algorithm to sequentially pass the multi-exposure image and the input image through each dense module in the fusion network model for convolution and pooling processing, so as to obtain the fusion image.
5. The lane line detection system integrating multiple illumination information is characterized by comprising:
the exposure generation module is used for acquiring an input image acquired by the vision sensor and performing multi-exposure generation on the input image so as to obtain a corresponding multi-exposure image;
the exposure fusion module is used for inputting the multi-exposure image and the input image into a fusion network model to perform multi-exposure fusion so as to obtain a corresponding fusion image;
the lane line detection module is used for extracting the characteristics of the fusion image and inputting the fusion image after the characteristics are extracted into a lane line detection network so as to obtain lane line parameters or segmented images.
6. The lane line detection system of claim 5 wherein the exposure generation module comprises:
the exposure generating unit is used for performing multi-exposure generation on the image information by utilizing gamma conversion according to the following formula so as to output a high-exposure image aiming at a low-illumination scene, so as to obtain a multi-exposure image:
I_exp = λ · (I_input)^γ
wherein I_exp and I_input are the multi-exposure image and the input image respectively, λ is a scaling factor, and γ is the gamma adjustment factor.
7. The lane line detection system of claim 5 wherein the system further comprises:
the training set acquisition module is used for acquiring a network training set and pre-training the combined model through the network training set;
the function calculation module is used for acquiring self-supervision loss of the multi-exposure fusion network and matching loss of the lane line detection network, and carrying out weighted addition on the self-supervision loss and the matching loss according to a preset proportion so as to obtain a corresponding loss function;
and the model training module is used for training the pre-trained combined model according to the loss function so as to obtain a fusion network model.
8. The lane-line detection system of claim 5 wherein the fusion network model comprises a plurality of dense modules, the exposure fusion module comprising:
the algorithm construction unit is used for constructing a fusion algorithm of the fusion network model according to a unified unsupervised image fusion network;
and the exposure fusion unit is used for sequentially passing the multi-exposure image and the input image through each dense module in the fusion network model for convolution and pooling processing using the fusion algorithm, so as to obtain the fusion image.
9. A medium having stored thereon a computer program which, when executed by a processor, implements the lane line detection method of fusing multiple illumination information according to any one of claims 1 to 4.
10. A computer comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the lane detection method of fusing multiple illumination information as claimed in any one of claims 1 to 4.
CN202310091893.4A 2023-02-09 2023-02-09 Lane line detection method, system, medium and computer integrating multiple illumination information Pending CN116434164A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310091893.4A CN116434164A (en) 2023-02-09 2023-02-09 Lane line detection method, system, medium and computer integrating multiple illumination information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310091893.4A CN116434164A (en) 2023-02-09 2023-02-09 Lane line detection method, system, medium and computer integrating multiple illumination information

Publications (1)

Publication Number Publication Date
CN116434164A true CN116434164A (en) 2023-07-14

Family

ID=87082106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310091893.4A Pending CN116434164A (en) 2023-02-09 2023-02-09 Lane line detection method, system, medium and computer integrating multiple illumination information

Country Status (1)

Country Link
CN (1) CN116434164A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination