CN116894789A - Method, device, equipment and medium for enhancing image under mine based on LED control - Google Patents

Method, device, equipment and medium for enhancing image under mine based on LED control

Info

Publication number
CN116894789A
Authority
CN
China
Prior art keywords
image
illumination
model
evaluation result
enhancement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310910118.7A
Other languages
Chinese (zh)
Inventor
张立斌
胡金成
吴航海
姚超修
蒋泽
蒋志龙
王琪
王鹏
郝东波
谢浩
胡亚磊
曹宁宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tiandi Changzhou Automation Co Ltd
Changzhou Research Institute of China Coal Technology and Engineering Group Corp
Original Assignee
Tiandi Changzhou Automation Co Ltd
Changzhou Research Institute of China Coal Technology and Engineering Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tiandi Changzhou Automation Co Ltd, Changzhou Research Institute of China Coal Technology and Engineering Group Corp filed Critical Tiandi Changzhou Automation Co Ltd
Priority to CN202310910118.7A priority Critical patent/CN116894789A/en
Publication of CN116894789A publication Critical patent/CN116894789A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0475Generative networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/74Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10152Varying illumination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The application relates to the technical field of image processing, and in particular to an under-mine image enhancement method based on LED control. The method comprises: setting an initial illuminance for an LED lamp and acquiring under-mine low-illuminance images within the illuminated area; acquiring an uphole natural-illumination image and inputting it, together with all the low-illuminance images, into an image enhancement model for training; inputting the uphole natural-illumination image into an image quality evaluation model for training; inputting a low-illuminance image into the trained image enhancement model for enhancement and then into the trained image quality evaluation model to obtain a current evaluation result; and adjusting the illuminance of the LED lamp according to the current evaluation result, acquiring images in real time, enhancing and evaluating them with the image quality evaluation model, and comparing the current evaluation result with the previous one until the current result is greater than or equal to the previous one, at which point the illuminance adjustment of the LED lamp stops. The video monitoring system achieves good imaging quality, and the generalization of intelligent recognition and detection by AI models is improved.

Description

Method, device, equipment and medium for enhancing image under mine based on LED control
Technical Field
The application relates to the technical field of image processing, and in particular to an under-mine image enhancement method, device, equipment, and medium based on LED control.
Background
With the development of deep learning for image recognition, technologies such as personnel safety-behavior detection, equipment state detection, and underground environmental safety detection, built on algorithms such as object detection and image segmentation, are increasingly applied in industrial intelligent monitoring systems. However, underground coal mines are affected by illumination, dust, water mist, and other factors, so the imaging quality of video monitoring is poor and in many cases targets cannot be distinguished visually, which seriously degrades the monitoring effect and hinders the use of AI models for intelligent recognition and detection in coal-mine video. This impairs the display effect of the intelligent video platform and the accuracy of intelligent video monitoring, and cannot meet the needs of a coal-mine site.
Disclosure of Invention
The technical problem to be solved by the application is as follows: to address the poor imaging quality of video monitoring systems in the prior art, the application provides an under-mine image enhancement method based on LED control, with which the imaging quality of the video monitoring system is good and the generalization of intelligent recognition and detection by AI models is improved.
The technical solution adopted to solve this problem is as follows: an under-mine image enhancement method based on LED control, comprising the following steps:
S1, setting an initial illuminance for an LED lamp so that the LED lamp illuminates the area to be illuminated at the initial illuminance, forming an illuminated area; acquiring, by an image acquisition module, a video stream of the underground coal mine within the illuminated area, and performing frame extraction on the video stream to obtain multiple frames of low-illuminance images;
S2, acquiring uphole natural-illumination image information, forming an asymmetric data set together with all the low-illuminance images, constructing an image enhancement model, inputting the asymmetric data set into the image enhancement model, and training the image enhancement model in an unsupervised manner;
S3, constructing an image quality evaluation model, inputting the uphole natural-illumination image information into the image quality evaluation model, and training the image quality evaluation model;
S4, inputting the low-illuminance image into the trained image enhancement model to complete its enhancement, and inputting the enhanced low-illuminance image into the trained image quality evaluation model to obtain a current evaluation result;
S5, adjusting the illuminance of the LED lamp according to the current evaluation result, acquiring an image in real time through the image acquisition module, feeding the image acquired in real time back into step S4, and comparing the current evaluation result with the previous evaluation result until the current evaluation result is greater than or equal to the previous one, at which point illuminance adjustment of the LED lamp stops.
More specifically, in step S1 the illuminance range of the LED lamp is [0, 1] and the initial illuminance is 0.5.
More specifically, in step S2 the image enhancement model consists of a generator and a discriminator; the asymmetric data set is input into the image enhancement model, and training iterates over the loss functions of the generator and the discriminator until the image enhancement model reaches a Nash equilibrium state.
More specifically, in step S4, inputting the low-illuminance image into the trained image enhancement model to complete its enhancement comprises the following steps:
S411, inputting a low-illuminance image into the generator to generate a reflection map and an illumination map;
S412, taking the reflection map and the illumination map as inputs of the discriminator;
S413, calculating a loss function from the output of the discriminator and, based on the resulting loss value, enhancing the low-illuminance image through the detail enhancement module and the color enhancement module of the fusion network.
More specifically, in step S4, inputting the enhanced low-illuminance image into the trained image quality evaluation model to obtain the current evaluation result comprises the following steps:
S421, after screening out a first image block whose sharpness value is greater than or equal to a preset sharpness threshold in the uphole natural-illumination image and a second image block whose sharpness value is greater than or equal to the preset sharpness threshold in the low-illuminance image, determining a first spatial-domain feature value for each pixel in the first image block and a second spatial-domain feature value for each pixel in the second image block;
S422, obtaining a first multivariate Gaussian model of the uphole natural-illumination image from the first spatial-domain feature values of the pixels in the first image block, and obtaining a second multivariate Gaussian model of the low-illuminance image from the second spatial-domain feature values of the pixels in the second image block;
S423, obtaining the distance between the first multivariate Gaussian model and the second multivariate Gaussian model and taking this distance as the evaluation result of the low-illuminance image.
Preferably, the preset sharpness threshold is P, with P ∈ [0.6, 0.9].
More specifically, in step S5 the current evaluation result is compared with the previous evaluation result; if the current evaluation result is smaller than the previous one, the minimum evaluation value is taken and the illuminance corresponding to that minimum is used to adjust the illuminance of the LED lamp.
An under-mine image enhancement device based on LED illumination control comprises:
The LED lamp is used for working in an illumination mode to provide illumination for an area to be illuminated so as to form an illumination area;
the image acquisition module is used for acquiring the image of the illumination area;
the control module is used for executing the above under-mine image enhancement method based on LED control.
A computer device, comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute them to implement the under-mine image enhancement method based on LED control as described above.
A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement an under-mine image enhancement method based on LED control as described above.
The beneficial effects of the application are as follows:
(1) The application performs unsupervised image-enhancement model training using underground low-illuminance images and uphole natural-illumination images, sets an initial illuminance for the LED lamp, feeds the acquired images into the trained models to obtain the current enhanced image and its evaluation result, and brightens or dims the LED lamp according to the evaluation result, iterating until the requirement is met and the optimal enhancement effect is obtained. This benefits the display of the underground monitoring picture, helps operators judge the underground situation in real time, and reduces the workload of research personnel.
(2) The application improves the display effect of the uphole system and improves the accuracy of AI-model detection of underground behavior. The video system is no longer limited to after-the-fact analysis and can raise alarms and make judgments beforehand or while events are occurring.
(3) The application can break through the limits of the extreme environment: in the underground deployment stage, the combination of the image enhancement model and the image quality evaluation model solves the problem of poor generalization when image enhancement models are deployed underground, and helps improve the recognition accuracy of high-level vision tasks.
Drawings
The application will be further described with reference to the drawings and examples.
Fig. 1 is a schematic flow chart of a first embodiment of the present application.
Fig. 2 is a schematic diagram of image enhancement model processing according to a first embodiment of the present application.
Fig. 3 is a schematic diagram of an evaluation result of an image quality evaluation model according to a first embodiment of the present application.
Fig. 4 is a comparison of images before and after processing according to an embodiment of the present application.
Fig. 5 is a schematic flow chart of a second embodiment of the present application.
FIG. 6 is a schematic diagram of a third embodiment of the present application.
In the figures: 1. LED lamp; 2. image acquisition module; 3. control module; 10. computer device; 1002. processor; 1004. memory; 1006. transmission device.
Detailed Description
The application will now be described in further detail with reference to the accompanying drawings. The drawings are simplified schematic representations which merely illustrate the basic structure of the application and therefore show only the structures which are relevant to the application.
Example 1
As shown in Fig. 1, an embodiment of the present application provides an under-mine image enhancement method based on LED control, comprising the following steps:
S1, setting an initial illuminance for an LED lamp so that the LED lamp illuminates the area to be illuminated at the initial illuminance, forming an illuminated area; acquiring, by an image acquisition module, a video stream of the underground coal mine within the illuminated area, and performing frame extraction on the video stream to obtain multiple frames of low-illuminance images. The illuminance range of the LED lamp is [0, 1] and the initial illuminance is 0.5; the range is divided into 20 different illuminance values. Experiments show that when the initial illuminance is 0.5 the value computed by the image quality evaluation model is lower, and only fine adjustment is needed to reach the optimal enhancement effect.
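As an illustration of the frame-extraction part of S1, the following is a minimal sketch (not taken from the patent) using OpenCV; the stream source, sampling stride, and frame cap are assumed placeholders:

```python
import cv2

def extract_low_light_frames(stream_source, stride=25, max_frames=500):
    """Grab every `stride`-th frame from the underground video stream (step S1)."""
    cap = cv2.VideoCapture(stream_source)   # e.g. an RTSP URL of the mine camera (placeholder)
    frames, idx = [], 0
    while cap.isOpened() and len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:                # keep one low-illuminance frame per stride
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```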
S2, acquiring uphole natural-illumination image information, forming an asymmetric data set together with all the low-illuminance images, constructing an image enhancement model, inputting the asymmetric data set into the image enhancement model, and training the image enhancement model in an unsupervised manner. Further, in step S2 the image enhancement model consists of a generator and a discriminator; the asymmetric data set is input into the image enhancement model, and training iterates over the loss functions of the generator and the discriminator until the image enhancement model reaches a Nash equilibrium state.
The discriminator adopts, but is not limited to, a Markov discriminator, which offers advantages in resolution and detail retention for ultra-high-resolution imagery and picture sharpness in style transfer.
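The patent does not give code for the enhancement model; the sketch below shows one generic adversarial training step of the kind described (a generator trained against a discriminator on the unpaired data set, alternated until neither loss keeps improving). `gen`, `disc`, the optimizers, and the image batches are assumed PyTorch placeholders, not the patent's actual networks:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def adversarial_step(gen, disc, low_light, natural, g_opt, d_opt):
    """One training iteration over an asymmetric (unpaired) batch, step S2."""
    # Discriminator: real = uphole natural-illumination images, fake = enhanced output.
    fake = gen(low_light)
    d_real, d_fake = disc(natural), disc(fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the enhanced image indistinguishable from natural illumination.
    g_out = disc(fake)
    g_loss = bce(g_out, torch.ones_like(g_out))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```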
S3, constructing an image quality evaluation model, inputting the uphole natural-illumination image information into the image quality evaluation model, and training the image quality evaluation model.
S4, inputting the low-illuminance image into the trained image enhancement model to complete its enhancement, and inputting the enhanced low-illuminance image into the trained image quality evaluation model to obtain a current evaluation result.
further, as shown in fig. 2, the low-illumination image is input into the trained image enhancement model, and the enhancement of the low-illumination image is completed, which comprises the following steps:
s411, inputting a low-illumination image into a generator to generate a reflection map and an illumination map; in particular, a Retinex-based multi-scale feature extraction generator is used to generate a clear illumination map and a reflection map.
S412, taking the reflection diagram and the illumination diagram as input of a discriminator;
s413, calculating a loss function according to the output of the discriminator, and enhancing the low-illumination image through the fusion network detail enhancement module and the fusion network color enhancement module according to the calculation result of the loss function.
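For illustration only (the patent does not specify the fusion detail and color modules in code), a Retinex-style recomposition of the two maps produced in S411 could look like the following, where the adjustment exponent `gamma` is an assumed parameter:

```python
import numpy as np

def retinex_recompose(reflectance, illumination, gamma=0.6, eps=1e-6):
    """Recompose an enhanced image from the reflection map and illumination map (S411-S413)."""
    illumination = np.clip(illumination, eps, 1.0)
    brightened = illumination ** gamma            # gamma < 1 lifts dark regions
    return np.clip(reflectance * brightened, 0.0, 1.0)
```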
Further, inputting the enhanced low-illuminance image into the trained image quality evaluation model to obtain the current evaluation result comprises the following steps:
S421, after screening out a first image block whose sharpness value is greater than or equal to a preset sharpness threshold in the uphole natural-illumination image and a second image block whose sharpness value is greater than or equal to the preset sharpness threshold in the low-illuminance image, determining a first spatial-domain feature value for each pixel in the first image block and a second spatial-domain feature value for each pixel in the second image block;
Specifically, the local mean of each pixel of each image block, the local standard deviation of each pixel of each image block, and the first and second sharpness values may be calculated as follows:

$$\mu(i,j)=\sum_{k=-K}^{K}\sum_{l=-L}^{L} w_{k,l}\, I(i+k,\,j+l) \qquad (1)$$

$$\sigma(i,j)=\sqrt{\sum_{k=-K}^{K}\sum_{l=-L}^{L} w_{k,l}\,\bigl[I(i+k,\,j+l)-\mu(i,j)\bigr]^{2}} \qquad (2)$$

$$\delta(b)=\sum_{(i,j)\in \mathrm{patch}_{b}} \sigma(i,j) \qquad (3)$$

Formula (1) applies Gaussian blurring to each pixel of each image block to obtain the local mean of that pixel, and formula (2) applies Gaussian blurring to the squared difference between each pixel value and the local mean to obtain the local standard deviation of that pixel. In formulas (1) and (2), $\{w_{k,l}\mid k=-K,\ldots,K;\ l=-L,\ldots,L\}$ is a two-dimensional circularly symmetric Gaussian weighting function sampled out to 3 standard deviations ($K=L=3$) and rescaled to unit volume. Formula (3) takes, for each image block of the natural-illumination image, the sum of the local standard deviations of the pixels in the block as the first sharpness value of that block, and, for each image block of the low-illuminance image, the sum of the local standard deviations of the pixels in the block as the second sharpness value of that block. Here $\mathrm{patch}_{b}$ denotes one of the P×P image blocks, indexed $b=1,2,\ldots,B$, $\delta(b)$ is the sharpness value of block $b$, and the local standard deviation quantifies the sharpness of the local image, so the average local deviation of each image block can be computed directly.
In this embodiment, the sharpness threshold P is determined from the maximum sharpness value over all image blocks, with P ∈ [0.6, 0.9] and P preferably 0.75. Image blocks whose sharpness is greater than or equal to the threshold are retained, and image blocks below the threshold are discarded.
S422, obtaining a first multivariate Gaussian model of the uphole natural-illumination image from the first spatial-domain feature values of the pixels in the first image block, and obtaining a second multivariate Gaussian model of the low-illuminance image from the second spatial-domain feature values of the pixels in the second image block;
Specifically, the first spatial-domain feature value of each pixel in the first image block and the second spatial-domain feature value of each pixel in the second image block may be calculated as:

$$\hat{I}(i,j)=\frac{I(i,j)-\mu(i,j)}{\sigma(i,j)+1}$$

where $i\in\{1,2,\ldots,M\}$ and $j\in\{1,2,\ldots,N\}$ are the spatial coordinates of the image and M and N are its spatial dimensions; $\hat{I}(i,j)$ is the first spatial-domain feature value of each pixel in the first image block or the second spatial-domain feature value of each pixel in the second image block; $I(i,j)$ is the pixel value of each pixel in the first or second image block; $\mu(i,j)$ is the local mean of that pixel; and $\sigma(i,j)$ is its local standard deviation.
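The sharpness screening and the spatial-domain feature above can be sketched with NumPy/SciPy as follows (a minimal illustration, not the patent's code): images are assumed normalised to [0, 1], the block size is a placeholder, the threshold is read as P times the peak block sharpness, and the +1 stabiliser follows the reconstructed formula above, all of which are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_stats(img, sigma=7/6):
    """Gaussian-weighted local mean and standard deviation, formulas (1) and (2)."""
    mu = gaussian_filter(img, sigma, truncate=3.0)            # 3 standard deviations, K = L = 3
    var = gaussian_filter(img * img, sigma, truncate=3.0) - mu * mu
    return mu, np.sqrt(np.maximum(var, 0.0))

def sharp_blocks(img, block=96, p=0.75):
    """Positions of blocks whose sharpness delta(b), formula (3), reaches p times the peak."""
    _, sigma = local_stats(img)
    scores = [((y, x), sigma[y:y + block, x:x + block].sum())
              for y in range(0, img.shape[0] - block + 1, block)
              for x in range(0, img.shape[1] - block + 1, block)]
    peak = max(s for _, s in scores)
    return [pos for pos, s in scores if s >= p * peak]

def spatial_features(img):
    """Per-pixel spatial-domain feature (I - mu) / (sigma + 1)."""
    mu, sigma = local_stats(img)
    return (img - mu) / (sigma + 1.0)
```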
The zero-mean generalized Gaussian distribution (GGD) is:

$$f(x;\alpha,\beta)=\frac{\alpha}{2\beta\,\Gamma(1/\alpha)}\exp\!\left(-\left(\frac{|x|}{\beta}\right)^{\alpha}\right) \qquad (4)$$

In formula (4), the parameters $\alpha$ and $\beta$ are estimated with a fast moment-matching algorithm, and the gamma function $\Gamma(\cdot)$ is:

$$\Gamma(a)=\int_{0}^{\infty} t^{a-1}e^{-t}\,dt,\quad a>0 \qquad (5)$$
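One common moment-matching estimator for the GGD parameters (assumed here; the patent only names the approach) inverts the ratio $r(\alpha)=\Gamma(1/\alpha)\Gamma(3/\alpha)/\Gamma(2/\alpha)^{2}$ over a precomputed grid of candidate $\alpha$ values:

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def fit_ggd(x):
    """Moment-matching estimate of the zero-mean GGD shape alpha and scale beta, formula (4)."""
    x = np.asarray(x, dtype=np.float64).ravel()
    alphas = np.arange(0.2, 10.0, 0.001)
    r_table = gamma_fn(1.0 / alphas) * gamma_fn(3.0 / alphas) / gamma_fn(2.0 / alphas) ** 2
    sigma_sq = np.mean(x ** 2)
    r_hat = sigma_sq / (np.mean(np.abs(x)) ** 2 + 1e-12)
    alpha = alphas[np.argmin((r_table - r_hat) ** 2)]
    beta = np.sqrt(sigma_sq * gamma_fn(1.0 / alpha) / gamma_fn(3.0 / alpha))
    return alpha, beta
```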
the method for obtaining the first multi-element Gaussian model of the natural illumination image on the well, namely fitting the first multi-element Gaussian model of the natural illumination image according to the first airspace characteristic value of each pixel point in the first image block, obtaining the second multi-element Gaussian model of the natural illumination image on the well, namely fitting the second multi-element Gaussian model of the natural illumination image according to the second airspace characteristic value of each pixel point in the second image block, simply speaking, inputting characteristic parameters of the natural illumination image, and obtaining the Gaussian model by carrying out maximum likelihood estimation comprises the following steps:
in the formula (6), (x) 1 ,...,x k ) Is a first spatial eigenvalue or a second spatial eigenvalue, v represents a first mean vector of a first multiple gaussian Model (MVG) or a second mean vector of a second multiple gaussian Model (MVG), and Σ represents a first covariance matrix of the first mean vector of the first multiple gaussian Model (MVG) or a second covariance matrix of the second multiple gaussian Model (MVG).
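Fitting the multivariate Gaussian of formula (6) amounts to taking the sample mean vector and covariance of the feature vectors drawn from the retained image blocks; a sketch, assuming `features` is an (n_samples, k) array:

```python
import numpy as np

def fit_mvg(features):
    """Mean vector nu and covariance matrix Sigma of the MVG model, formula (6)."""
    nu = features.mean(axis=0)
    sigma = np.cov(features, rowvar=False)
    return nu, sigma
```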
S423, obtaining the distance between the first multivariate Gaussian model and the second multivariate Gaussian model and taking this distance as the evaluation result of the low-illuminance image.
The distance between the first mean vector and covariance matrix and the second mean vector and covariance matrix may be calculated as:

$$D(\nu_{1},\nu_{2},\Sigma_{1},\Sigma_{2})=\sqrt{(\nu_{1}-\nu_{2})^{T}\left(\frac{\Sigma_{1}+\Sigma_{2}}{2}\right)^{-1}(\nu_{1}-\nu_{2})} \qquad (7)$$

In formula (7), $\nu_{1}$, $\nu_{2}$, $\Sigma_{1}$ and $\Sigma_{2}$ denote the first mean vector, the second mean vector, the first covariance matrix and the second covariance matrix respectively, $D(\nu_{1},\nu_{2},\Sigma_{1},\Sigma_{2})$ denotes the distance, and $(\nu_{1}-\nu_{2})^{T}$ denotes the transpose of the difference between the first and second mean vectors.
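The distance of formula (7) between the natural-illumination model $(\nu_{1},\Sigma_{1})$ and the enhanced-image model $(\nu_{2},\Sigma_{2})$ can then be evaluated directly; the pseudo-inverse is an assumed safeguard against a singular pooled covariance:

```python
import numpy as np

def mvg_distance(nu1, sigma1, nu2, sigma2):
    """Distance D of formula (7); a smaller value means closer to natural illumination."""
    diff = nu1 - nu2
    pooled = (sigma1 + sigma2) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```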
S5, the illuminance of the LED lamp is adjusted according to the current evaluation result; an image is acquired in real time through the image acquisition module and fed back into step S4; and the current evaluation result is compared with the previous evaluation result until the current evaluation result is greater than or equal to the previous one, at which point illuminance adjustment of the LED lamp stops. As shown in Fig. 4, (a) is the image acquired with the LED lamp at the preset illuminance, and (b) is the image acquired after the LED lamp has been adjusted to satisfy the evaluation.
In this embodiment, the current evaluation result is compared with the previous evaluation result. If the current evaluation result is smaller than the previous one (as shown in Fig. 3, a smaller evaluation value indicates better image quality), the minimum evaluation value is recorded and the illuminance corresponding to that minimum is used to adjust the illuminance of the LED lamp.
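Putting steps S4 and S5 together, the illuminance search can be written as a simple loop. `set_led`, `grab_frame`, `enhance`, and `evaluate` stand in for the LED driver, the camera, the trained enhancement model, and the quality-evaluation model, none of which the patent specifies as code; the step size and upward search direction are assumptions:

```python
def adjust_illuminance(set_led, grab_frame, enhance, evaluate, start=0.5, step=0.05):
    """Adjust LED illuminance until the evaluation result stops improving (steps S4-S5).

    The evaluation result is the MVG distance, so smaller is better; the loop stops once the
    current result is no longer smaller than the previous one and restores the best setting.
    """
    level, prev, history = start, float("inf"), []
    while 0.0 <= level <= 1.0:
        set_led(level)
        score = evaluate(enhance(grab_frame()))
        history.append((score, level))
        if score >= prev:                     # no further improvement: stop adjusting
            break
        prev, level = score, level + step
    best_score, best_level = min(history)     # illuminance with the minimum evaluation value
    set_led(best_level)
    return best_level, best_score
```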
In summary, the under-mine image enhancement method based on LED control provided by the application performs unsupervised image-enhancement model training using underground low-illuminance images and uphole natural-illumination images, sets an initial illuminance for the LED lamp, feeds the acquired images into the trained models to obtain the current enhanced image and its evaluation result, and brightens or dims the LED lamp according to the evaluation result until the requirement is met, thereby obtaining the optimal enhancement effect. This benefits the display of the underground monitoring picture, helps operators judge the underground situation in real time, and reduces the workload of research personnel. The application improves the display effect of the uphole system and improves the accuracy of AI-model detection of underground behavior; the video system is no longer limited to after-the-fact analysis and can raise alarms and make judgments beforehand or while events are occurring. The application can also break through the limits of the extreme environment: in the underground deployment stage, the combination of the image enhancement model and the image quality evaluation model solves the problem of poor generalization when image enhancement models are deployed underground, and helps improve the recognition accuracy of high-level vision tasks.
Example 2
As shown in Fig. 5, an embodiment of the present application provides an under-mine image enhancement device based on LED illumination control, comprising: an LED lamp 1 for operating in an illumination mode to illuminate the area to be illuminated, forming an illuminated area; an image acquisition module 2 for acquiring images of the illuminated area; and a control module 3 for executing the above under-mine image enhancement method based on LED control.
The various modifications and embodiments of the LED-control-based under-mine image enhancement method of Example 1 (Fig. 1) apply equally to the under-mine image enhancement device of this embodiment. Given the foregoing detailed description of the method, those skilled in the art will clearly understand how the device of this embodiment is implemented, so the details are not repeated here for brevity.
Example 3: An embodiment of the present application provides a computer device comprising a processor and a memory, wherein at least one instruction or at least one program is stored in the memory and is loaded and executed by the processor to implement the above under-mine image enhancement method based on LED control.
Fig. 6 shows a schematic hardware structure of a device for implementing the LED-control-based under-mine image enhancement method according to an embodiment of the present application; the device may form part of, or be incorporated into, an apparatus or system according to an embodiment of the present application. As shown in Fig. 6, the computer device 10 may include one or more processors 1002 (which may include, but are not limited to, processing means such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 1004 for storing data, and a transmission device 1006 for communication functions. In addition, it may further include: a display, an input/output interface (I/O interface), a universal serial bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. Those of ordinary skill in the art will appreciate that the configuration shown in Fig. 6 is merely illustrative and does not limit the configuration of the electronic device described above. For example, the computer device 10 may include more or fewer components than shown in Fig. 6, or have a different configuration from that shown in Fig. 6.
It should be noted that the one or more processors and/or other data processing circuits described above may be referred to herein generally as "data processing circuits". The data processing circuit may be embodied, in whole or in part, in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuit may be a single stand-alone processing module, or may be incorporated, in whole or in part, into any of the other elements of the computer device 10 (or mobile device). As referred to in embodiments of the application, the data processing circuit acts as a kind of processor control (for example, selection of the path of a variable resistor terminal connected to an interface).
The memory 1004 may be used to store software programs and modules of application software, such as the program instructions/data storage device corresponding to the under-mine image enhancement method based on LED control in the embodiments of the present application; the processor executes the software programs and modules stored in the memory 1004 to perform various functional applications and data processing, that is, to implement one of the methods described above. The memory 1004 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1004 may further include memory located remotely from the processor, which may be connected to the computer device 10 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 1006 is used to receive or transmit data via a network. Specific examples of such a network may include a wireless network provided by the communications provider of the computer device 10. In one example, the transmission device 1006 includes a network interface controller (NIC) that can be connected to other network devices via a base station so as to communicate with the Internet. In another example, the transmission device 1006 may be a radio frequency (RF) module used to communicate wirelessly with the Internet.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer device 10 (or mobile device).
Example 4: The present application further provides a computer-readable storage medium, which may be arranged in a server to store at least one instruction or at least one program for implementing the LED-control-based under-mine image enhancement method of the method embodiments, the at least one instruction or at least one program being loaded and executed by the processor to implement the under-mine image enhancement method based on LED control provided by the method embodiments.
Alternatively, in this embodiment, the storage medium may be located in at least one of a plurality of network servers of a computer network. Alternatively, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
Example 5: embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs an under-mine image enhancement method based on LED control provided in the above-mentioned various alternative embodiments.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The embodiments of the present application are described in a progressive manner, and the same and similar parts of the embodiments are all referred to each other, and each embodiment is mainly described in the differences from the other embodiments. In particular, for apparatus, devices and storage medium embodiments, the description is relatively simple as it is substantially similar to method embodiments, with reference to the description of method embodiments in part.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above preferred embodiments are illustrative; persons skilled in the relevant art may make various changes and modifications on their basis without departing from the scope of the technical idea of the present application. The technical scope of the present application is not limited to the content of the description and must be determined according to the scope of the claims.

Claims (10)

1. An under-mine image enhancement method based on LED control, characterized by comprising the following steps:
S1, setting an initial illuminance for an LED lamp so that the LED lamp illuminates the area to be illuminated at the initial illuminance, forming an illuminated area; acquiring, by an image acquisition module, a video stream of the underground coal mine within the illuminated area, and performing frame extraction on the video stream to obtain multiple frames of low-illuminance images;
S2, acquiring uphole natural-illumination image information, forming an asymmetric data set together with all the low-illuminance images, constructing an image enhancement model, inputting the asymmetric data set into the image enhancement model, and training the image enhancement model in an unsupervised manner;
S3, constructing an image quality evaluation model, inputting the uphole natural-illumination image information into the image quality evaluation model, and training the image quality evaluation model;
S4, inputting the low-illuminance image into the trained image enhancement model to complete its enhancement, and inputting the enhanced low-illuminance image into the trained image quality evaluation model to obtain a current evaluation result;
S5, adjusting the illuminance of the LED lamp according to the current evaluation result, acquiring an image in real time through the image acquisition module, feeding the image acquired in real time back into step S4, and comparing the current evaluation result with the previous evaluation result until the current evaluation result is greater than or equal to the previous one, at which point illuminance adjustment of the LED lamp stops.
2. The method for enhancing an image under a mine based on LED control as claimed in claim 1, wherein in step S1, the illuminance interval of the LED lamp is [0,1], and the initial illuminance is 0.5.
3. The LED-control-based under-mine image enhancement method of claim 1, wherein in step S2 the image enhancement model consists of a generator and a discriminator; the asymmetric data set is input into the image enhancement model, and training iterates over the loss functions of the generator and the discriminator until the image enhancement model reaches a Nash equilibrium state.
4. The LED-control-based under-mine image enhancement method of claim 3, wherein in step S4, inputting the low-illuminance image into the trained image enhancement model to complete its enhancement comprises the following steps:
S411, inputting a low-illuminance image into the generator to generate a reflection map and an illumination map;
S412, taking the reflection map and the illumination map as inputs of the discriminator;
S413, calculating a loss function from the output of the discriminator and, based on the resulting loss value, enhancing the low-illuminance image through the detail enhancement module and the color enhancement module of the fusion network.
5. The LED-control-based under-mine image enhancement method of claim 1, wherein in step S4, inputting the enhanced low-illuminance image into the trained image quality evaluation model to obtain the current evaluation result comprises the following steps:
S421, after screening out a first image block whose sharpness value is greater than or equal to a preset sharpness threshold in the uphole natural-illumination image and a second image block whose sharpness value is greater than or equal to the preset sharpness threshold in the low-illuminance image, determining a first spatial-domain feature value for each pixel in the first image block and a second spatial-domain feature value for each pixel in the second image block;
S422, obtaining a first multivariate Gaussian model of the uphole natural-illumination image from the first spatial-domain feature values of the pixels in the first image block, and obtaining a second multivariate Gaussian model of the low-illuminance image from the second spatial-domain feature values of the pixels in the second image block;
S423, obtaining the distance between the first multivariate Gaussian model and the second multivariate Gaussian model and taking this distance as the evaluation result of the low-illuminance image.
6. The LED-control-based under-mine image enhancement method of claim 5, wherein the preset sharpness threshold is P, with P ∈ [0.6, 0.9].
7. The LED-control-based under-mine image enhancement method of claim 1, wherein in step S5 the current evaluation result is compared with the previous evaluation result; if the current evaluation result is smaller than the previous one, the minimum evaluation value is taken and the illuminance corresponding to that minimum is used to adjust the illuminance of the LED lamp.
8. An under-mine image enhancement device based on LED illumination control, characterized by comprising:
An LED lamp (1) for operating in an illumination mode to provide illumination to an area to be illuminated, forming an illumination area;
an image acquisition module (2) for acquiring an image of the illumination area;
a control module (3) for processing the LED control-based under-mine image enhancement method as claimed in any one of claims 1 to 7.
9. A computer device, comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the LED control-based method of downhole image enhancement as claimed in any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which when executed by a processor causes the processor to implement the LED control-based under-mine image enhancement method of any one of claims 1 to 7.
CN202310910118.7A 2023-07-21 2023-07-21 Method, device, equipment and medium for enhancing image under mine based on LED control Pending CN116894789A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310910118.7A CN116894789A (en) 2023-07-21 2023-07-21 Method, device, equipment and medium for enhancing image under mine based on LED control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310910118.7A CN116894789A (en) 2023-07-21 2023-07-21 Method, device, equipment and medium for enhancing image under mine based on LED control

Publications (1)

Publication Number Publication Date
CN116894789A true CN116894789A (en) 2023-10-17

Family

ID=88314724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310910118.7A Pending CN116894789A (en) 2023-07-21 2023-07-21 Method, device, equipment and medium for enhancing image under mine based on LED control

Country Status (1)

Country Link
CN (1) CN116894789A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117097988A (en) * 2023-10-18 2023-11-21 煤炭科学研究总院有限公司 Complex environment image acquisition system and method for fully mechanized coal mining face
CN117097988B (en) * 2023-10-18 2024-01-19 煤炭科学研究总院有限公司 Complex environment image acquisition system and method for fully mechanized coal mining face


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination