Disclosure of Invention
The invention provides a lightning detection method, a lightning detection device, computer equipment and a readable medium, aiming to solve the problems that existing lightning detection approaches are prone to false alarms and have low detection accuracy.
To achieve the above object, according to a first aspect of the present invention, there is provided a lightning detection method including:
acquiring an image shot by a lightning stroke image device, and determining the ambient illumination intensity according to the average brightness of the image;
when the ambient illumination intensity is greater than a preset first brightness threshold, dividing the image into a plurality of image blocks according to a preset division rule and respectively detecting the brightness value of each image block;
when the brightness value of any image block is larger than a preset second brightness threshold, starting exposure of the lightning stroke image device and setting exposure time according to the difference between the brightness value and the second brightness threshold;
acquiring a video stream shot by the lightning strike image device after exposure is started; detecting the chrominance information of each frame of image in the video stream; comparing the chrominance information of the current frame image with the chrominance mean value of a plurality of previous frame images; and, when the resulting difference is greater than a preconfigured third brightness threshold, inputting the current frame image into a pre-trained lightning target detection model to obtain at least one target detection frame and a corresponding confidence coefficient;
and when the confidence coefficient of any one target detection frame is greater than a preset target detection threshold value, judging that the current frame image is a thunder and lightning picture.
Preferably, in the above lightning detection method, the training process of the lightning target detection model includes:
acquiring a lightning sample image, wherein the lightning sample image is annotated with a directional global rectangular marking frame together with the size, the position and the target rotation angle of the global rectangular marking frame; the target rotation angle is the angle by which the global rectangular marking frame is offset relative to the horizontal or vertical direction;
and training a lightning target detection model with a space rotation alignment network according to the lightning sample image to obtain the lightning target detection model capable of adapting to target detection in different directions.
Preferably, in the lightning detection method, training the lightning target detection model with the spatial rotation alignment network according to the lightning sample image specifically includes:
generating a directional detection frame and the size, position and predicted rotation angle of the directional detection frame according to a lightning sample image through a lightning target detection model to be trained;
calculating loss functions between the size, the position and the predicted rotation angle of the directional detection frame and the size, the position and the target rotation angle of the global rectangular marking frame, and reversely adjusting model parameters of the lightning target detection model according to the loss functions;
and returning to the step of generating the directional detection frame according to the lightning sample image and continuing to iterate until an iteration stop condition is met, at which point iteration stops and the trained lightning target detection model is obtained.
Preferably, in the lightning detection method, the loss function is:
L_det = L_k + λ_size·L_size + λ_off·L_off + λ_ang·L_ang,  (3)

wherein L_det represents the loss function; L_k represents the center point loss of the directional detection frame; L_size represents the size loss between the directional detection frame and the global rectangular marking frame; L_off represents the position offset loss between the directional detection frame and the global rectangular marking frame; L_ang represents the rotation angle loss between the directional detection frame and the global rectangular marking frame; λ_size, λ_off and λ_ang respectively represent the weights of the size loss, the position offset loss and the rotation angle loss; θ represents the target rotation angle of the global rectangular marking frame and θ̂ represents the predicted rotation angle of the directional detection frame; and N represents the number of positive samples.
Preferably, in the lightning detection method, the four corner points of the directional detection frame are obtained by rotating the offsets about the center point:

P_i = (c_x, c_y) + M_r·(δ_x, δ_y)^T,  i ∈ {lt, rt, lb, rb},

wherein P_lt, P_rt, P_lb and P_rb respectively represent the four corner points of the directional detection frame; M_r represents the rotation matrix; (w, h) represents the width and height of the directional detection frame; (c_x, c_y) represents the coordinates of the center point of the directional detection frame; and (δ_x, δ_y) represents the offset between the center point of the directional detection frame and the corresponding corner point.
Preferably, in the lightning detection method, before detecting the chrominance information of each frame of image in the video stream, the method further includes: performing smoothing, denoising, image enhancement, background segmentation and morphological processing on each frame of image to obtain a lightning channel image represented by continuous single pixel points.
According to a second aspect of the present invention, there is also provided a lightning detection apparatus, comprising:
a brightness detection module configured to acquire an image shot by the lightning stroke image device and determine the ambient illumination intensity according to the average brightness of the image, and, when the ambient illumination intensity is greater than a preset first brightness threshold, divide the image into a plurality of image blocks according to a preset division rule and respectively detect the brightness value of each image block;
the triggering module is configured to start exposure of the lightning strike image device and set exposure time according to a difference value between a brightness value and a second brightness threshold value when the brightness value of any image block is larger than the preset second brightness threshold value;
a prediction module configured to acquire a video stream shot by the lightning strike image device after exposure is started, detect the chrominance information of each frame of image in the video stream, compare the chrominance information of the current frame image with the chrominance mean value of a plurality of previous frame images, and, when the resulting difference is greater than a preconfigured third brightness threshold, input the current frame image into a pre-trained lightning target detection model to obtain at least one target detection frame and a corresponding confidence coefficient;
and an output module configured to determine that the current frame image is a lightning picture when the confidence coefficient of any target detection frame is greater than a preset target detection threshold.
According to a third aspect of the present invention, there is also provided a computer device comprising at least one processing unit, and at least one memory unit, wherein the memory unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of any of the lightning detection methods described above.
According to a fourth aspect of the present invention, there is also provided a computer-readable medium, characterized in that it stores a computer program executable by a computer device, which when run on the computer device causes the computer device to perform the steps of any of the lightning detection methods described above.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) According to the lightning detection scheme provided by the invention, the parameters of the lightning stroke image device are set periodically according to the ambient illumination intensity, so that the pictures acquired by the camera have higher quality and clarity. A suspected lightning picture is first identified by brightness detection, and the lightning target area in the suspected picture is then verified by the lightning target detection model, so that the lightning target is detected and identified, misjudgment caused by other factors is eliminated, and the accuracy of lightning detection is improved. By combining brightness detection with current deep-learning artificial intelligence, the invention improves the accuracy of lightning snapshot and reduces the false alarm rate.
(2) According to the lightning detection scheme provided by the invention, when the lightning target detection model is trained, a directional global rectangular marking frame is adopted to annotate the lightning sample image, and a target rotation angle is annotated in addition to the size and the position of the frame. The annotated lightning sample images are fed into a lightning target detection model with a spatial rotation alignment network for training, yielding a model that adapts to target detection in different directions. Detecting suspected lightning pictures with this model significantly improves detection accuracy.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
For convenience of understanding, a system scenario to which the lightning detection scheme provided in the present application is applied is described first, and referring to fig. 1, a schematic diagram of a component architecture of a lightning detection system according to the present application is shown.
The system may comprise a camera and a device host, which are in communication connection through a network. In one specific example, the camera is mounted on a tripod to monitor lightning and take images of the lightning; the device host acquires the image information collected by the camera, detects and processes the images, and determines the real lightning pictures.
In order to implement the corresponding functions on the device host, a computer program implementing those functions needs to be stored in the memory of the device host. To facilitate understanding, the hardware structure of the device host is described below as an example. As shown in fig. 2, which is a schematic diagram of the component structure of the device host of the present application, the device host in this embodiment may include: a processor 201, a memory 202, a communication interface 203, an input unit 204, a display 205 and a communication bus 206.
The processor 201, the memory 202, the communication interface 203, the input unit 204, and the display 205 all communicate with each other through the communication bus 206.
In this embodiment, the processor 201 may be a Central Processing Unit (CPU), an application specific integrated circuit, a digital signal processor, an off-the-shelf programmable gate array, or other programmable logic device.
The processor 201 may call a program stored in the memory 202. Specifically, the processor 201 may perform the operations in the following embodiments of the lightning detection method.
The memory 202 is used for storing one or more programs, which may include program codes including computer operation instructions, and in the embodiment of the present application, the memory stores at least the programs for implementing the following functions:
determining the environmental illumination intensity, and determining the exposure time of the lightning stroke image device according to the environmental illumination intensity;
acquiring a video stream shot by the lightning strike image device after exposure is started; detecting the chrominance information of each frame of image in the video stream; comparing the chrominance information of the current frame image with the chrominance mean value of a plurality of previous frame images; and, when the resulting difference is greater than a preconfigured third brightness threshold, inputting the current frame image into a pre-trained lightning target detection model to obtain at least one target detection frame and a corresponding confidence coefficient;
and when the confidence coefficient of any one target detection frame is greater than a preset target detection threshold value, judging that the current frame image is a thunder and lightning picture.
In one possible implementation, the memory 202 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as luminance and chrominance detection), and the like; the storage data area may store data created during use of the computer, such as a prediction model and lightning image samples, etc.
Further, the memory 202 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or other non-volatile solid-state storage device.
The communication interface 203 may be an interface of a communication module, such as an interface of a GSM module.
Of course, the structure of the device host shown in fig. 2 does not constitute a limitation of the device host in the embodiment of the present application, and in practical applications, the device host may include more or less components than those shown in fig. 2, or some components may be combined.
With reference to fig. 3, this embodiment shows a schematic flow chart of a lightning detection method, and the method in this embodiment includes the following steps:
step 301, determining the ambient illumination intensity, and setting the exposure time of the lightning stroke image device according to the ambient illumination intensity;
The ambient light intensity has a large influence on the clarity of the image acquired by the camera. Therefore, the exposure time, brightness, contrast and other parameters of the camera are adjusted according to the ambient light intensity acquired in real time, so that high-quality image information in the current environment can be obtained.
In a specific example, determining the ambient light intensity, and setting the exposure time of the lightning strike imaging device according to the ambient light intensity specifically includes:
(1) acquiring an image shot by a camera, and determining the ambient illumination intensity according to the average brightness of the image;
in a specific example, a camera photographs sky images at preset time intervals and transmits the photographed images to a device host; and the device host acquires the image and then performs brightness detection on the image to obtain an average brightness value of the image and uses the average brightness value as the illumination intensity of the environment.
In a specific example, the camera is controlled by the device host to shoot images at preset time intervals; the preset time interval may be customized, such as 1 hour.
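As an illustrative, non-limiting sketch of step (1), the ambient illumination intensity can be taken as the mean brightness of a grayscale frame. The function name and the 8-bit pixel range are assumptions of this description, not part of the claims:

```python
def ambient_intensity(gray_frame):
    """Mean pixel brightness of a grayscale image (rows of 0-255 values),
    used as a proxy for the ambient illumination intensity."""
    total = count = 0
    for row in gray_frame:
        total += sum(row)
        count += len(row)
    return total / count
```

For example, a frame with half black and half white pixels yields an intensity midway through the 8-bit range.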
(2) When the ambient illumination intensity is greater than a preset first brightness threshold, dividing the image into a plurality of image blocks according to a preset division rule and respectively detecting the brightness value of each image block; when the brightness value of any image block is larger than a preset second brightness threshold, starting the exposure of the camera and setting the exposure time according to the difference between the brightness value and the second brightness threshold;
In one embodiment, when the device host detects that the ambient illumination intensity is greater than a preset first brightness threshold, it divides the image into a plurality of image blocks according to a preset division rule and respectively detects the brightness value of each image block; the division rule and the number of image blocks are not particularly limited here. When the brightness value of any image block is greater than a preset second brightness threshold, the device host starts the exposure of the camera and sets the exposure time according to the difference between the brightness value and the second brightness threshold; the exposure time is positively correlated with the difference: the larger the difference between the brightness value and the second brightness threshold, the longer the exposure time. The values of the first and second brightness thresholds may be customized and are not particularly limited in this embodiment.
In one specific example, the device host acquires images shot by the camera at preset time intervals to acquire the illumination intensity of the environment, and automatically adjusts the parameters of the camera according to the illumination intensity of the environment.
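A hedged sketch of step (2): the division into equal rectangular blocks and the linear relation between exposure time and the brightness difference are assumptions of this description, since the embodiment leaves both the division rule and the exact positive correlation unspecified:

```python
def split_blocks(gray_frame, rows, cols):
    """Divide an image into rows x cols equal blocks (an assumed division rule)."""
    h, w = len(gray_frame), len(gray_frame[0])
    bh, bw = h // rows, w // cols
    blocks = []
    for r in range(rows):
        for c in range(cols):
            block = [line[c * bw:(c + 1) * bw]
                     for line in gray_frame[r * bh:(r + 1) * bh]]
            blocks.append(block)
    return blocks

def block_brightness(block):
    """Mean brightness of one image block."""
    return sum(sum(row) for row in block) / sum(len(row) for row in block)

def exposure_time_ms(brightness, second_threshold, base_ms=1.0, gain_ms=0.05):
    """Exposure time positively correlated with (brightness - threshold).
    Returns None when exposure is not triggered; base/gain are placeholders."""
    diff = brightness - second_threshold
    return None if diff <= 0 else base_ms + gain_ms * diff
```

Here a brightness of 150 against a threshold of 100 yields a longer exposure than a brightness of 110, matching the stated positive correlation.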
Step 302, acquiring a video stream shot by a camera after exposure is started, detecting the chrominance information of each frame of image in the video stream, comparing the chrominance information of the current frame of image with the chrominance mean value of a plurality of previous frames of images, and inputting the current frame of image into a pre-trained thunder and lightning target detection model when the compared difference value is greater than a pre-configured third luminance threshold value to obtain at least one target detection frame and a corresponding confidence coefficient;
In a specific example, referring to fig. 4, the device host acquires the video stream captured by the camera after exposure is started and first processes each frame of image in the video stream in three stages: image preprocessing, image segmentation and morphological processing. The image preprocessing comprises converting the color RGB image into a grayscale image and then eliminating the background by a background difference method: the average of several frames taken before lightning occurs and unaffected by the discharge process is used as a calibration image, and the difference between the image to be processed and the calibration image removes the background brightness. This avoids interference from background highlights and reduces, as far as possible, the interference with lightning channel identification caused by clouds or other bright objects and prominent edges in the background. Part of the image is also enhanced to expand the dynamic range of grayscale values and improve the contrast of the image, and filtering is applied. The preprocessed image is then segmented to separate the target lightning channel (i.e., the foreground) from the background and binarize the image. Finally, morphological processing is performed on the binary image in which foreground and background have been preliminarily separated, and a lightning channel image represented by continuous single pixel points is extracted.
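A minimal sketch of the background-difference and binarization stages described above, operating on plain nested lists; pixel ranges and thresholds are illustrative, and the real implementation would also include the enhancement, filtering and morphological steps:

```python
def background_difference(frame, calibration):
    """Pixel-wise absolute difference between the frame to be processed
    and the calibration (background) image, removing background brightness."""
    return [[abs(f - c) for f, c in zip(fr, cr)]
            for fr, cr in zip(frame, calibration)]

def binarize(gray_frame, threshold):
    """Segment the candidate lightning channel (foreground, 1) from the
    background (0) by simple thresholding."""
    return [[1 if v > threshold else 0 for v in row] for row in gray_frame]
```

A pixel matching the calibration image differences to zero and is classified as background; only pixels that brighten noticeably survive binarization.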
After each frame of image in the video stream is processed, the chrominance information of each frame is detected, and the chrominance information of the current frame image is compared with the chrominance mean value of the previous N frame images. The value of N is related to the acquisition speed of the camera and is not particularly limited; in a specific example, N is 20-200. When the resulting difference is greater than a preset third brightness threshold, the current frame image may be a lightning image and is placed in a queue; otherwise, the next frame image is detected, or the video stream captured by the camera continues to be read. The value of the third brightness threshold may be customized and is not particularly limited in this embodiment.
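The triggering rule above — compare the current frame's chrominance with the mean of the previous N frames and queue the frame when the difference exceeds the third brightness threshold — can be sketched as follows (the scalar per-frame chrominance value is an assumption of this sketch):

```python
from collections import deque

def suspect_frames(chroma_stream, n, third_threshold):
    """Return the indices of frames whose chrominance exceeds the mean of
    the previous n frames by more than the threshold; these frames would
    be queued for the lightning target detection model."""
    history = deque(maxlen=n)   # sliding window of the previous n values
    queued = []
    for i, chroma in enumerate(chroma_stream):
        if len(history) == n and chroma - sum(history) / n > third_threshold:
            queued.append(i)
        history.append(chroma)
    return queued
```

A sudden jump against a stable history is queued; a steady stream triggers nothing.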
Step 303, sequentially reading each frame of image stored in the queue, inputting the current frame of image into a pre-trained thunder and lightning target detection model, and obtaining at least one target detection frame and a corresponding confidence coefficient; when the confidence coefficient of any target detection frame is greater than a preset target detection threshold value, judging that the current frame image is a thunder and lightning picture;
detecting each frame of image stored in the queue by adopting a pre-trained lightning target detection model, and predicting a target detection frame corresponding to each frame of image and a corresponding confidence coefficient; comparing the confidence with a preset target detection threshold, and judging the corresponding current frame image as a thunder and lightning picture and storing when the confidence is greater than the target detection threshold; otherwise, detecting the next frame of image, or continuously reading the video stream shot by the camera. The value of the target detection threshold may be self-defined, and this embodiment is not particularly limited.
When thunder and lightning occur, the sky can generate obvious brightness change, the frame of image is stored in a queue according to the brightness change of the image, each frame of image in the queue is detected through a thunder and lightning target detection model, a lightning target area is judged, a thunder and lightning target is detected and identified, and a brightness image caused by other factors is removed.
In a specific example, the training process of the lightning target detection model specifically includes:
acquiring a thunder and lightning sample image, wherein the thunder and lightning sample image is provided with a global rectangular marking frame with a direction, and the size, the position and the target rotation angle of the global rectangular marking frame; the target rotation angle is an angle of the global rectangular marking frame offset relative to the horizontal or vertical direction; the size of the global rectangular marking frame is represented by the width and the height, and the position of the global rectangular marking frame is represented by four corner point coordinates.
In a specific example, a large number of real thunder and lightning pictures are collected to form a thunder and lightning picture library (including thunder and lightning pictures of various forms and backgrounds), and the thunder and lightning pictures in the thunder and lightning picture library are respectively labeled to form a thunder and lightning sample image.
Training a lightning target detection model with a spatial rotation alignment network according to the lightning sample image to obtain a lightning target detection model capable of adapting to target detection in different directions; the method specifically comprises the following steps:
generating a directional detection frame and the size, position and predicted rotation angle of the directional detection frame according to the lightning sample image through a lightning target detection model to be trained;
calculating loss functions between the size, the position and the predicted rotation angle of the directional detection frame and the size, the position and the target rotation angle of the global rectangular marking frame, and reversely adjusting model parameters of the lightning target detection model according to the loss functions;
and returning to the lightning target detection model to be trained, and continuing to execute the step of generating the directional detection frame according to the lightning sample image until the iteration stop condition is met, and stopping iteration to obtain the trained lightning target detection model. The iteration stop condition, such as the number of iterations is greater than or equal to the iteration threshold, and further such as the loss function corresponding to a single iteration has been minimized, etc., is not specifically limited herein.
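The generate/compute-loss/adjust/iterate-until-stop cycle described above can be illustrated with a deliberately simplified sketch: a single scalar parameter (the predicted rotation angle) is fitted to the target rotation angle by gradient descent on a toy squared-error angle loss. This stands in for the full network training and is not the patent's actual model:

```python
def fit_rotation_angle(theta_target, lr=0.1, max_iters=1000, tol=1e-8):
    """Minimise a toy angle loss L_ang = (theta_pred - theta_target)^2 by
    gradient descent, stopping when the loss falls below tol (the
    iteration stop condition) or the iteration budget is exhausted."""
    theta_pred = 0.0
    for _ in range(max_iters):
        loss = (theta_pred - theta_target) ** 2
        if loss < tol:                       # iteration stop condition met
            break
        grad = 2.0 * (theta_pred - theta_target)
        theta_pred -= lr * grad              # reverse adjustment of the parameter
    return theta_pred
```

The same skeleton — forward pass, loss against the annotations, backward adjustment, stop condition — applies to the real model with many parameters.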
In a specific example, the directional detection frame output by the lightning target detection model is as follows:
wherein, Plt,Prt,PlbAnd PrbRespectively representing the positions of four corner points of the directional detection frame; mrRepresenting a rotation matrix (the rotation matrix includes a target rotation angle); (w, h) represents the width and height of the orientation detection bezel; (c)x,cy) Representing the coordinates of the center point of the directional detection frame; (deltax,δy) Representing the offset of the center point of the orientation detection frame from the corner point.
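Under the common rotated-bounding-box convention (an assumption of this sketch, consistent with the symbols defined above), each corner point is the half-size offset rotated by the rotation matrix M_r and translated to the center point (c_x, c_y):

```python
import math

def corner_points(cx, cy, w, h, angle):
    """Corner points P_lt, P_rt, P_lb, P_rb of a directional detection
    frame with center (cx, cy), width w, height h and rotation angle
    (radians). Offsets are assumed to be the half-sizes of the frame."""
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    offsets = {"lt": (-w / 2, -h / 2), "rt": (w / 2, -h / 2),
               "lb": (-w / 2,  h / 2), "rb": (w / 2,  h / 2)}
    # Apply the 2x2 rotation matrix M_r to each offset, then translate.
    return {name: (cx + dx * cos_a - dy * sin_a,
                   cy + dx * sin_a + dy * cos_a)
            for name, (dx, dy) in offsets.items()}
```

With angle 0 the frame is axis-aligned and the corners reduce to center ± half-size.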
In one particular example, the loss function of the lightning target detection model is defined as:
L_det = L_k + λ_size·L_size + λ_off·L_off + λ_ang·L_ang,  (3)

wherein L_det represents the loss function; L_k represents the center point loss of the directional detection frame; L_size represents the size loss between the directional detection frame and the global rectangular marking frame; L_off represents the position offset loss between the directional detection frame and the global rectangular marking frame; L_ang represents the rotation angle loss between the directional detection frame and the global rectangular marking frame; λ_size, λ_off and λ_ang respectively represent the weights of the size loss, the position offset loss and the rotation angle loss; θ represents the target rotation angle of the global rectangular marking frame and θ̂ represents the predicted rotation angle of the directional detection frame; and N represents the number of positive samples (i.e., the total number of samples containing a lightning image).
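Equation (3) is a weighted sum of the individual loss terms and can be expressed directly; the weight values below are placeholders, since the patent does not fix λ_size, λ_off or λ_ang:

```python
def detection_loss(l_k, l_size, l_off, l_ang,
                   lam_size=0.1, lam_off=1.0, lam_ang=0.5):
    """Equation (3): L_det = L_k + lam_size*L_size + lam_off*L_off
    + lam_ang*L_ang. The default weights are illustrative placeholders."""
    return l_k + lam_size * l_size + lam_off * l_off + lam_ang * l_ang
```

In practice the individual terms would be computed from the predicted directional detection frame and the global rectangular marking frame before being combined here.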
FIG. 5 is a schematic diagram of a network structure of a lightning target detection model provided in this embodiment, and referring to FIG. 5, the lightning target detection model includes a first feature extraction network, a second feature extraction network, a first detection network, a spatial rotation alignment network, and a second detection network;
the first feature extraction network is used for carrying out feature extraction processing on an input image to obtain a main network feature map;
the second feature extraction network is used for processing the backbone network feature map output by the first feature extraction network to obtain a multi-scale feature map;
the first detection network is used for carrying out regression processing on the multi-scale feature map output by the second feature extraction network to obtain lightning rough positioning information;
The spatial rotation alignment network comprises an input layer, an RPN layer, an alignment network layer and an output layer. The input layer acquires the coarse lightning positioning information output by the first detection network and performs feature extraction. The RPN layer acquires the multi-scale feature map output by the second feature extraction network and performs regional feature extraction. The features output by the RPN layer and the input layer are fused to obtain a regional feature map; the alignment network layer performs regression on the regional feature map to obtain features with spatial directions and passes them to the output layer, which processes them to produce spatially direction-aware features.
The second detection network acquires the backbone network feature map output by the first feature extraction network and performs regression to obtain a detection frame, aligns the detection frame with the spatially direction-aware features output by the spatial rotation alignment network, and performs regression again to obtain a detection frame with directional characteristics, thereby achieving accurate detection of the lightning target.
Because the position and direction of lightning appearing in a picture are random, this embodiment adopts a directional global rectangular marking frame when annotating the lightning sample image, and annotates a target rotation angle in addition to the size and position of the frame, the target rotation angle being the offset angle of the global rectangular marking frame relative to the horizontal or vertical direction. The annotated lightning sample images are fed into a lightning target detection model with a spatial rotation alignment network for training, yielding a model that adapts to target detection in different directions. Detecting suspected lightning pictures with this model significantly improves detection accuracy.
Fig. 6 is a flowchart of the lightning detection method provided in this embodiment, and referring to fig. 6, the method specifically includes the following steps:
(1) firstly, initializing equipment;
(2) reading the configuration file, and acquiring a first brightness threshold, a third brightness threshold, an image block division rule, a target detection threshold and the like;
(3) reading an image shot by the camera, and determining the ambient illumination intensity according to the average brightness of the image; parameters such as the exposure time, brightness and contrast of the camera are adjusted according to the ambient illumination intensity, so that high-quality image information under the current illumination conditions can be obtained, providing a reliable basis for subsequent data processing and viewing;
(4) reading a camera video stream, acquiring real-time image data, and performing image preprocessing;
(5) acquiring current frame chrominance information;
(6) comparing the chrominance information of the current frame with the mean value of the previous N frames; when the resulting difference is greater than the third brightness threshold in the configuration file, the current frame is judged to be a suspected lightning image and placed in the queue; otherwise, returning to step (4) and continuing to read the camera video stream;
(7) reading a queue picture;
(8) detecting the picture after image preprocessing by using a lightning target detection model with a space rotation alignment network;
(9) acquiring a detection frame and confidence;
(10) comparing the confidence with the target detection threshold set in the configuration file; when the confidence is greater than the threshold, the image is judged to be a lightning picture and stored; otherwise, returning to step (4) and continuing to read the camera video stream;
(11) if a program exit instruction is received, the program ends; otherwise, returning to step (4) and continuing to read the camera video stream.
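The runtime loop of steps (4)-(11) can be sketched end to end as follows; the scalar chrominance measure and the `detect` callable are stand-ins for the real preprocessing and the trained lightning target detection model:

```python
from collections import deque

def run_detection(frames, cfg, detect):
    """Steps (4)-(11) in one pass: chrominance triggering queues suspect
    frames, then `detect` (standing in for the trained model) returns a
    confidence compared with the target detection threshold."""
    history = deque(maxlen=cfg["n"])        # previous N chrominance values
    lightning = []
    for frame in frames:                     # step (4): read the stream
        chroma = sum(frame) / len(frame)     # step (5): stand-in chrominance
        if (len(history) == cfg["n"]
                and chroma - sum(history) / cfg["n"] > cfg["third_threshold"]):
            confidence = detect(frame)       # steps (8)-(9): model inference
            if confidence > cfg["detect_threshold"]:  # step (10)
                lightning.append(frame)      # confirmed lightning picture
        history.append(chroma)
    return lightning
```

A bright frame passes the chrominance trigger but is kept only if the model's confidence also clears the target detection threshold, which is how brightness changes from other causes are rejected.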
In one embodiment, as shown in FIG. 7, there is provided a lightning detection apparatus 700 comprising: a brightness detection module 701, a trigger module 702, a prediction module 703 and an output module 704, wherein:
a brightness detection module 701 configured to acquire an image captured by a camera, determine an ambient light intensity according to an average brightness of the image; when the ambient illumination intensity is larger than a preset first brightness threshold, dividing the image into a plurality of image blocks according to a preset division rule and respectively detecting the brightness value of each image block;
a triggering module 702 configured to start exposure of the camera and set an exposure time according to a difference between the brightness value and a second brightness threshold when the brightness value of any image block is greater than the preset second brightness threshold;
a prediction module 703 configured to acquire the video stream shot by the camera after exposure is started, detect the chrominance information of each frame of image in the video stream, compare the chrominance information of the current frame image with the chrominance mean value of a plurality of previous frame images, and, when the resulting difference is greater than a preconfigured third brightness threshold, input the current frame image into a pre-trained lightning target detection model to obtain at least one target detection frame and a corresponding confidence coefficient;
and an output module 704 configured to determine that the current frame image is a lightning picture when the confidence coefficient of any target detection frame is greater than a preset target detection threshold.
For the specific definition of the lightning detection apparatus, reference may be made to the definition of the lightning detection method above, which is not repeated here. Each module in the lightning detection apparatus may be implemented wholly or partly by software, hardware or a combination thereof. The modules may be embedded in or independent of a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.