CN112396116B - Thunder and lightning detection method and device, computer equipment and readable medium - Google Patents


Info

Publication number
CN112396116B
CN112396116B (application CN202011329977.XA)
Authority
CN
China
Prior art keywords
lightning
image
frame
target detection
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011329977.XA
Other languages
Chinese (zh)
Other versions
CN112396116A (en)
Inventor
黄凯
韩俊龙
李恒
张菲
舒宽
雷丞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Zhonggu Maituo Technology Co ltd
Original Assignee
Wuhan Sanjiang Clp Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Sanjiang Clp Technology Co ltd
Priority to CN202011329977.XA
Publication of CN112396116A
Application granted
Publication of CN112396116B
Status: Active
Anticipated expiration

Classifications

    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/045 Combinations of networks
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10024 Color image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

The invention discloses a lightning detection method and device, computer equipment, and a readable medium. The method comprises: determining the ambient illumination intensity and setting the exposure time of the lightning-strike imaging device accordingly; acquiring the video stream shot by the lightning-strike imaging device, detecting the chrominance information of each frame of image in the video stream, and comparing the chrominance information of the current frame with the chrominance mean of several preceding frames; when the difference exceeds a third brightness threshold, inputting the current frame into a pre-trained lightning target detection model to obtain target detection frames and corresponding confidences; and saving as a lightning picture any current frame whose confidence exceeds a preset target detection threshold. The invention enables automatic detection and storage of lightning in monitored images with high recognition accuracy.

Description

Thunder and lightning detection method and device, computer equipment and readable medium
Technical Field
The invention belongs to the technical field of lightning monitoring, and particularly relates to a lightning detection method and device, computer equipment and a readable medium.
Background
Lightning disasters are listed by the United Nations as one of the ten most serious natural disasters, ranking as the third meteorological disaster after rainstorm floods and landslides, and seriously threaten human life and property. Lightning can destroy buildings, power supply and distribution systems, and communication equipment; cause forest fires and the combustion or even explosion of computer information systems, warehouses, oil refineries, and oil fields; endanger people's property and personal safety; pose a great threat to carriers such as aerospace vehicles; and even strike humans directly, causing disability or death. Research on the optical-path form of the lightning-strike process enables objective identification of the cause and nature of lightning-strike faults and improves the accuracy of fault analysis. It provides first-hand data for research on the lightning-strike mechanism and a reliable basis for judging the nature of lightning-strike faults.
Chinese patent "An acquisition device for static photos of a natural lightning-strike discharge channel" (publication number CN202488569U) provides a device for fully automatically recording the lightning-strike discharge channel: a trigger level generated by detecting the lightning optical signal drives the camera's electronic shutter switch, and the high response speed of the electronic devices makes it possible to record lightning. However, that device uses only light change as the trigger source, while shooting can also be triggered by light changes from non-lightning factors in the environment, causing many false triggers and many false alarms among the captured lightning pictures.
In existing lightning detection technology, lightning snapshots are triggered mainly by brightness change; this detection mode produces a large number of false alarms, and its accuracy is not high. With the development of deep learning, snapshot accuracy can be improved by combining brightness detection with a deep-learning algorithm.
Disclosure of Invention
The invention provides a lightning detection method and device, computer equipment, and a readable medium, aiming to solve the problems that existing lightning detection modes easily generate false alarms and have low detection accuracy.
To achieve the above object, according to a first aspect of the present invention, there is provided a lightning detection method including:

acquiring an image shot by the lightning-strike imaging device, and determining the ambient illumination intensity according to the average brightness of the image;

when the ambient illumination intensity is greater than a preset first brightness threshold, dividing the image into a plurality of image blocks according to a preset division rule and detecting the brightness value of each image block separately;

when the brightness value of any image block is greater than a preset second brightness threshold, starting exposure of the lightning-strike imaging device and setting the exposure time according to the difference between that brightness value and the second brightness threshold;

acquiring the video stream shot by the lightning-strike imaging device after exposure is started, detecting the chrominance information of each frame of image in the video stream, comparing the chrominance information of the current frame with the chrominance mean of several preceding frames, and, when the difference exceeds a preconfigured third brightness threshold, inputting the current frame into a pre-trained lightning target detection model to obtain at least one target detection frame and a corresponding confidence;

and when the confidence of any target detection frame is greater than a preset target detection threshold, judging that the current frame image is a lightning picture.
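The chrominance-gating step above can be sketched as follows. This is an illustrative outline only, not the claimed implementation; the helper names and the list-of-rows image representation are assumptions:

```python
def average_brightness(frame):
    """Mean pixel value of a grayscale frame given as a list of rows
    (used to estimate the ambient illumination intensity)."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def should_run_detector(current_chroma, previous_chromas, third_threshold):
    """Compare the current frame's chrominance against the mean of the
    preceding frames; only frames exceeding the third brightness
    threshold are passed to the lightning target detection model."""
    mean_prev = sum(previous_chromas) / len(previous_chromas)
    return (current_chroma - mean_prev) > third_threshold
```

In this reading, the detection model runs only on gated frames, which is what keeps the deep-learning stage cheap relative to per-frame inference.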
Preferably, in the above lightning detection method, the training process of the lightning target detection model includes:
acquiring a lightning sample image, wherein the lightning sample image is annotated with a directional global rectangular marking frame together with the size, position, and target rotation angle of the frame; the target rotation angle is the angle by which the global rectangular marking frame is offset relative to the horizontal or vertical direction;
and training a lightning target detection model with a space rotation alignment network according to the lightning sample image to obtain the lightning target detection model capable of adapting to target detection in different directions.
Preferably, in the lightning detection method, training the lightning target detection model with the spatial rotation alignment network according to the lightning sample image specifically includes:
generating a directional detection frame and the size, position and predicted rotation angle of the directional detection frame according to a lightning sample image through a lightning target detection model to be trained;
calculating loss functions between the size, the position and the predicted rotation angle of the directional detection frame and the size, the position and the target rotation angle of the global rectangular marking frame, and reversely adjusting model parameters of the lightning target detection model according to the loss functions;
and returning to the lightning target detection model to be trained, and continuing to execute the step of generating the directional detection frame according to the lightning sample image until the iteration stop condition is met, and stopping iteration to obtain the trained lightning target detection model.
Preferably, in the lightning detection method, the loss function is:

$L_{det} = L_k + \lambda_{size} L_{size} + \lambda_{off} L_{off} + \lambda_{ang} L_{ang}$, (3)

$L_{ang} = \frac{1}{N}\sum_{i=1}^{N}\left|\theta_i - \hat{\theta}_i\right|$

wherein $L_{det}$ represents the loss function; $L_k$ represents the center-point position loss of the directional detection frame; $L_{size}$ represents the size loss between the directional detection frame and the global rectangular marking frame; $L_{off}$ represents the position-offset loss between the directional detection frame and the global rectangular marking frame; $L_{ang}$ represents the rotation-angle loss between the directional detection frame and the global rectangular marking frame; $\lambda_{size}$, $\lambda_{off}$, $\lambda_{ang}$ respectively represent the weights of the size loss, the position-offset loss, and the rotation-angle loss; $\theta$ represents the target rotation angle of the global rectangular marking frame; $\hat{\theta}$ represents the predicted rotation angle of the directional detection frame; and $N$ represents the number of correct samples.
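A minimal sketch of equation (3) follows. The individual center, size, and offset terms are passed in as precomputed scalars, the angle loss is shown as a mean absolute difference (one plausible reading of the patent's figure-only formula), and the weight defaults are assumptions, not values from the patent:

```python
def angle_loss(target_angles, predicted_angles):
    """L_ang: mean absolute difference between target rotation angles
    theta and predicted rotation angles theta-hat over N samples."""
    n = len(target_angles)
    return sum(abs(t - p) for t, p in zip(target_angles, predicted_angles)) / n

def detection_loss(l_k, l_size, l_off, l_ang,
                   lam_size=0.1, lam_off=1.0, lam_ang=0.5):
    """L_det = L_k + lam_size*L_size + lam_off*L_off + lam_ang*L_ang."""
    return l_k + lam_size * l_size + lam_off * l_off + lam_ang * l_ang
```

The relative weights trade angle accuracy against localization accuracy; the patent leaves their values open.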
Preferably, in the lightning detection method, the directional detection frame is:

$\left[P_{lt}, P_{rt}, P_{lb}, P_{rb}\right] = M_r \begin{pmatrix} -w/2 & w/2 & -w/2 & w/2 \\ -h/2 & -h/2 & h/2 & h/2 \end{pmatrix} + \begin{pmatrix} c_x + \delta_x \\ c_y + \delta_y \end{pmatrix}$

wherein $P_{lt}$, $P_{rt}$, $P_{lb}$ and $P_{rb}$ respectively represent the four corner points of the directional detection frame; $M_r$ represents the rotation matrix; $(w, h)$ represent the width and height of the directional detection frame; $(c_x, c_y)$ represents the coordinates of the center point of the directional detection frame; and $(\delta_x, \delta_y)$ represents the offset of the center point of the directional detection frame from the corner point.
Preferably, in the lightning detection method, before detecting the chrominance information of each frame of image in the video stream, the method further includes: performing smoothing, denoising, image enhancement, background segmentation, and morphological processing on each frame of image to obtain a lightning-channel image represented by continuous single pixel points.
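The background-segmentation portion of that preprocessing can be illustrated as a simple background-difference binarization, sketched here in pure Python under the assumption that frames are 2-D lists of grayscale values (the patent does not fix a representation):

```python
def background_difference(frame, calibration, threshold):
    """Subtract the calibration (background) image and binarize.

    frame, calibration: equally sized 2-D lists of grayscale values.
    Returns a 2-D list with 1 where the brightness increase over the
    background exceeds the threshold (candidate lightning-channel
    pixels) and 0 elsewhere.
    """
    return [[1 if (f - c) > threshold else 0
             for f, c in zip(frow, crow)]
            for frow, crow in zip(frame, calibration)]
```

The calibration image would be the average of several pre-discharge frames, as the embodiment below describes; morphological thinning to single-pixel-wide channels is omitted here.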
According to a second aspect of the present invention, there is also provided a lightning detection apparatus, comprising:
the system comprises a brightness detection module, a light source module and a light source module, wherein the brightness detection module is configured to acquire an image shot by a lightning stroke image device and determine the ambient illumination intensity according to the average brightness of the image; when the ambient illumination intensity is larger than a preset first brightness threshold, dividing the image into a plurality of image blocks according to a preset division rule and respectively detecting the brightness value of each image block;
the triggering module is configured to start exposure of the lightning strike image device and set exposure time according to a difference value between a brightness value and a second brightness threshold value when the brightness value of any image block is larger than the preset second brightness threshold value;
the prediction module is configured to acquire the video stream shot by the lightning-strike imaging device after exposure is started, detect the chrominance information of each frame of image in the video stream, compare the chrominance information of the current frame with the chrominance mean of several preceding frames, and, when the difference exceeds a preconfigured third brightness threshold, input the current frame into a pre-trained lightning target detection model to obtain at least one target detection frame and a corresponding confidence;

and the output module is configured to judge that the current frame image is a lightning picture when the confidence of any target detection frame is greater than a preset target detection threshold.
According to a third aspect of the present invention, there is also provided a computer device comprising at least one processing unit, and at least one memory unit, wherein the memory unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of any of the lightning detection methods described above.
According to a fourth aspect of the present invention, there is also provided a computer-readable medium, characterized in that it stores a computer program executable by a computer device, which when run on the computer device causes the computer device to perform the steps of any of the lightning detection methods described above.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) According to the lightning detection scheme provided by the invention, the parameters of the lightning-strike imaging device are set periodically according to the ambient illumination intensity, so that the pictures acquired by the camera have higher quality and definition. A suspected lightning picture is first identified by brightness detection, and the lightning target region of the suspected picture is then re-examined by the lightning target detection model, which detects and identifies the lightning target and eliminates misjudgments caused by other factors, improving the accuracy of lightning detection. By combining brightness detection with current deep-learning artificial intelligence, the invention improves lightning-snapshot accuracy and reduces the false-alarm rate.
(2) According to the lightning detection scheme provided by the invention, when the lightning target detection model is trained, a directional global rectangular marking frame is used to label the lightning sample images, and the target rotation angle is labeled in addition to the size and position of the marking frame. The labeled lightning sample images are fed into a lightning target detection model with a spatial rotation alignment network for training, yielding a model that can adapt to target detection in different directions. Detecting suspected lightning pictures with this model significantly improves detection accuracy.
Drawings
FIG. 1 is a schematic diagram of an architecture of a lightning detection system according to an embodiment of the invention;
fig. 2 is a schematic structural diagram of a device host according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a lightning detection method provided by an embodiment of the invention;
FIG. 4 is a flow chart of image pre-processing provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a network structure of a lightning target detection model according to an embodiment of the invention;
FIG. 6 is a flow chart of the operation of a lightning detection method provided by an embodiment of the invention;
FIG. 7 is a logic block diagram of a lightning detection device provided by an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
For convenience of understanding, a system scenario to which the lightning detection scheme provided in the present application is applied is described first, and referring to fig. 1, a schematic diagram of a component architecture of a lightning detection system according to the present application is shown.
The system may comprise a camera and a device host in communication connection through a network. In one specific example, the camera is mounted on a tripod to monitor lightning and take images of it; the device host acquires the image information collected by the camera, detects and processes the images, and determines the real lightning pictures.
In order to implement the corresponding functions on the device host, a computer program for implementing the corresponding functions needs to be stored in the memory of the device host. To facilitate understanding of the hardware configuration of the device host, the device host is described as an example. As shown in fig. 2, which is a schematic diagram of a component structure of the device host of the present application, the device host in this embodiment may include: a processor 201, a memory 202, a communication interface 203, an input unit 204, a display 205 and a communication bus 206.
The processor 201, the memory 202, the communication interface 203, the input unit 204, and the display 205 all communicate with each other through the communication bus 206.
In this embodiment, the processor 201 may be a Central Processing Unit (CPU), an application specific integrated circuit, a digital signal processor, an off-the-shelf programmable gate array, or other programmable logic device.
The processor 201 may call a program stored in the memory 202. Specifically, the processor 201 may perform the operations performed on the device-host side in the following embodiments of the lightning detection method.
The memory 202 is used for storing one or more programs, which may include program codes including computer operation instructions, and in the embodiment of the present application, the memory stores at least the programs for implementing the following functions:
determining the environmental illumination intensity, and determining the exposure time of the lightning stroke image device according to the environmental illumination intensity;
acquiring the video stream shot by the lightning-strike imaging device after exposure is started, detecting the chrominance information of each frame of image in the video stream, comparing the chrominance information of the current frame with the chrominance mean of several preceding frames, and, when the difference exceeds a preconfigured third brightness threshold, inputting the current frame into a pre-trained lightning target detection model to obtain at least one target detection frame and a corresponding confidence;

and when the confidence of any target detection frame is greater than a preset target detection threshold, judging that the current frame image is a lightning picture.
In one possible implementation, the memory 202 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as luminance and chrominance detection), and the like; the storage data area may store data created during use of the computer, such as a prediction model and lightning image samples, etc.
Further, the memory 202 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or other volatile solid state storage device.
The communication interface 203 may be an interface of a communication module, such as an interface of a GSM module.
Of course, the structure of the device host shown in fig. 2 does not constitute a limitation of the device host in the embodiment of the present application, and in practical applications, the device host may include more or less components than those shown in fig. 2, or some components may be combined.
With reference to fig. 3, this embodiment shows a schematic flow chart of a lightning detection method, and the method in this embodiment includes the following steps:
step 301, determining the ambient illumination intensity, and setting the exposure time of the lightning stroke image device according to the ambient illumination intensity;
the ambient light intensity has a large influence on the definition of the image acquired by the camera, and therefore, the exposure time, brightness, contrast and other parameters of the camera are adjusted according to the ambient light intensity acquired in real time, so that high-quality image information in the current environment can be obtained.
In a specific example, determining the ambient light intensity, and setting the exposure time of the lightning strike imaging device according to the ambient light intensity specifically includes:
(1) acquiring an image shot by a camera, and determining the ambient illumination intensity according to the average brightness of the image;
in a specific example, a camera photographs sky images at preset time intervals and transmits the photographed images to a device host; and the device host acquires the image and then performs brightness detection on the image to obtain an average brightness value of the image and uses the average brightness value as the illumination intensity of the environment.
In a specific example, the camera is controlled by the device host to shoot images at preset time intervals; the preset time interval may be customized, such as 1 hour.
(2) When the ambient illumination intensity is greater than a preset first brightness threshold, dividing the image into a plurality of image blocks according to a preset division rule and respectively detecting the brightness value of each image block; when the brightness value of any image block is larger than a preset second brightness threshold, starting the exposure of the camera and setting the exposure time according to the difference between the brightness value and the second brightness threshold;
In one embodiment, when the device host detects that the ambient illumination intensity is greater than the preset first brightness threshold, it divides the image into a plurality of image blocks according to a preset division rule and detects the brightness value of each block separately; the division rule and the number of image blocks are not specifically limited. When the brightness value of any image block is greater than the preset second brightness threshold, the device host starts the camera's exposure and sets the exposure time according to the difference between that brightness value and the second brightness threshold; the exposure time is positively correlated with the difference: the larger the difference, the longer the exposure time. The values of the first and second brightness thresholds may be customized and are not specifically limited in this embodiment.
In one specific example, the device host acquires images shot by the camera at preset time intervals to determine the ambient illumination intensity, and automatically adjusts the camera parameters accordingly.
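The positive correlation between exposure time and brightness difference can be sketched as a linear mapping. The linear form, the unit, and the two constants are illustrative assumptions; the patent only requires monotonicity:

```python
def exposure_time_us(block_brightness, second_threshold,
                     base_us=100, gain_us_per_level=20):
    """Exposure time (microseconds, assumed unit) grows with how far a
    block's brightness exceeds the second brightness threshold."""
    diff = block_brightness - second_threshold
    if diff <= 0:
        return 0  # below threshold: exposure is not triggered
    return base_us + gain_us_per_level * diff
```

Any monotonically increasing mapping from the difference to exposure time would satisfy the description; a linear one is simply the easiest to tune.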
Step 302, acquiring the video stream shot by the camera after exposure is started, detecting the chrominance information of each frame of image in the video stream, comparing the chrominance information of the current frame with the chrominance mean of several preceding frames, and, when the difference exceeds the preconfigured third brightness threshold, inputting the current frame into a pre-trained lightning target detection model to obtain at least one target detection frame and a corresponding confidence;
In a specific example, referring to fig. 4, the device host acquires the video stream shot by the camera after exposure is started and first processes each frame of image in the video stream in three parts: image preprocessing, image segmentation, and morphological processing. Image preprocessing converts the color RGB image into a grayscale image and then eliminates the background by a background-difference method: the average of several frames taken before the lightning occurs, unaffected by the discharge process, serves as a calibration image, and the difference between the image to be processed and the calibration image removes the background brightness. This avoids interference from background highlights and reduces, as far as possible, the interference with lightning-channel identification caused by clouds, other brighter objects, and prominent edges in the background. Part of the image is also enhanced to expand the dynamic range of gray values and improve contrast, and filtering is applied. Image segmentation is then performed on the preprocessed image to separate the target lightning channel (the foreground) from the background and binarize the image. Finally, morphological processing is applied to the binary image with preliminary foreground-background separation, and a lightning-channel image represented by continuous single pixel points is extracted.
After each frame of image in the video stream is processed, the chrominance information of each frame is detected and the chrominance information of the current frame is compared with the chrominance mean of the previous N frames, where the value of N is related to the acquisition speed of the camera and is not specifically limited; in a specific example, N is 20 to 200. When the difference exceeds the preconfigured third brightness threshold, the current frame is probably a lightning image and is placed in a queue; otherwise, the next frame is detected, or the video stream shot by the camera continues to be read. The value of the third brightness threshold may be customized and is not specifically limited in this embodiment.
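The sliding window of N previous chrominance values described above maps naturally onto a bounded queue; a minimal sketch (helper name and generator interface are assumptions):

```python
from collections import deque

def chroma_trigger_stream(chroma_values, n, third_threshold):
    """Yield indices of frames whose chrominance exceeds the mean of
    the previous n frames by more than the third brightness threshold
    (these frames would be placed in the detection queue)."""
    window = deque(maxlen=n)  # holds the chrominance of the last n frames
    for i, c in enumerate(chroma_values):
        if len(window) == n:  # only compare once the window is full
            if c - (sum(window) / n) > third_threshold:
                yield i
        window.append(c)
```

`deque(maxlen=n)` discards the oldest value automatically, which matches the rolling mean over the previous N frames.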
Step 303, sequentially reading each frame of image stored in the queue and inputting the current frame into the pre-trained lightning target detection model to obtain at least one target detection frame and a corresponding confidence; when the confidence of any target detection frame is greater than the preset target detection threshold, judging that the current frame image is a lightning picture;

Each frame of image stored in the queue is detected by the pre-trained lightning target detection model, which predicts the target detection frames of each frame and the corresponding confidences. Each confidence is compared with the preset target detection threshold: when it is greater than the threshold, the corresponding current frame is judged to be a lightning picture and saved; otherwise, the next frame is detected, or the video stream shot by the camera continues to be read. The value of the target detection threshold may be customized and is not specifically limited in this embodiment.
When lightning occurs, the sky brightens markedly. Frames are stored in the queue according to this brightness change; each frame in the queue is then detected by the lightning target detection model, which determines the lightning target region, detects and identifies the lightning target, and discards bright images caused by other factors.
In a specific example, the training process of the lightning target detection model specifically includes:
acquiring lightning sample images, wherein each lightning sample image is annotated with a directional global rectangular marking frame together with the size, position, and target rotation angle of the frame; the target rotation angle is the angle by which the global rectangular marking frame is offset relative to the horizontal or vertical direction; the size of the global rectangular marking frame is expressed by its width and height, and its position by the coordinates of its four corner points.
In a specific example, a large number of real thunder and lightning pictures are collected to form a thunder and lightning picture library (including thunder and lightning pictures of various forms and backgrounds), and the thunder and lightning pictures in the thunder and lightning picture library are respectively labeled to form a thunder and lightning sample image.
Training a lightning target detection model with a spatial rotation alignment network according to the lightning sample image to obtain a lightning target detection model capable of adapting to target detection in different directions; the method specifically comprises the following steps:
generating a directional detection frame and the size, position and predicted rotation angle of the directional detection frame according to the lightning sample image through a lightning target detection model to be trained;
calculating loss functions between the size, the position and the predicted rotation angle of the directional detection frame and the size, the position and the target rotation angle of the global rectangular marking frame, and reversely adjusting model parameters of the lightning target detection model according to the loss functions;
and returning to the step of generating a directional detection frame from the lightning sample image through the lightning target detection model to be trained, iterating until an iteration stop condition is met; iteration then stops and the trained lightning target detection model is obtained. The iteration stop condition is, for example, that the number of iterations reaches an iteration threshold, or that the loss function of a single iteration has been minimized; it is not specifically limited here.
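The training loop in the steps above can be illustrated with a deliberately tiny sketch: predict a rotation angle, measure the loss against the labeled target angle, adjust the parameter in the reverse direction of the gradient, and stop when the iteration threshold is reached or the loss is minimized. The real model regresses box size and position as well; this single-parameter toy (all names illustrative) only shows the loop structure and stop condition.

```python
def train_rotation(target_angle, lr=0.1, max_iters=500, loss_eps=1e-6):
    pred = 0.0                              # predicted rotation angle (the "model parameter")
    for _ in range(max_iters):
        loss = (pred - target_angle) ** 2   # stand-in for the angle term of the loss
        if loss < loss_eps:                 # iteration stop condition: loss minimized
            break
        grad = 2.0 * (pred - target_angle)  # d(loss)/d(pred)
        pred -= lr * grad                   # "reversely adjusting model parameters"
    return pred
```

The fitted angle converges geometrically toward the labeled target, mirroring how the full model's parameters are driven toward agreement with the global rectangular marking frame.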
In a specific example, the directional detection frame output by the lightning target detection model is as follows:

P_lt = M_r [-w/2, -h/2]^T + [c_x + δ_x, c_y + δ_y]^T
P_rt = M_r [+w/2, -h/2]^T + [c_x + δ_x, c_y + δ_y]^T
P_lb = M_r [-w/2, +h/2]^T + [c_x + δ_x, c_y + δ_y]^T
P_rb = M_r [+w/2, +h/2]^T + [c_x + δ_x, c_y + δ_y]^T

where P_lt, P_rt, P_lb and P_rb respectively denote the positions of the four corner points of the directional detection frame; M_r denotes the rotation matrix (which encodes the target rotation angle); (w, h) denote the width and height of the directional detection frame; (c_x, c_y) denotes the coordinates of its center point; and (δ_x, δ_y) denotes the offset of the center point of the directional detection frame from the corner points.
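The corner formula above is easy to verify numerically: rotate each half-extent corner of an axis-aligned box by M_r, then translate by the offset center. A sketch with illustrative names:

```python
import numpy as np

def oriented_corners(cx, cy, w, h, angle, dx=0.0, dy=0.0):
    """Corners of a directional detection frame from centre, size, angle and offset."""
    Mr = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])   # rotation matrix M_r
    centre = np.array([cx + dx, cy + dy])              # [c_x + δ_x, c_y + δ_y]
    local = {"lt": [-w / 2, -h / 2], "rt": [w / 2, -h / 2],
             "lb": [-w / 2,  h / 2], "rb": [w / 2,  h / 2]}
    return {k: Mr @ np.array(v) + centre for k, v in local.items()}
```

At angle 0 this reduces to the familiar axis-aligned box corners; a non-zero angle rotates the box about its (offset) center, which is exactly what lets the model follow a slanted lightning channel.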
In one particular example, the loss function of the lightning target detection model is defined as:

L_det = L_k + λ_size·L_size + λ_off·L_off + λ_ang·L_ang,    (3)

L_ang = (1/N) Σ_{i=1}^{N} |θ_i − θ̂_i|,

where L_det denotes the overall loss function; L_k denotes the center-point loss of the directional detection frame; L_size denotes the size loss between the directional detection frame and the global rectangular marking frame; L_off denotes the position-offset loss between the directional detection frame and the global rectangular marking frame; L_ang denotes the rotation-angle loss between the directional detection frame and the global rectangular marking frame; λ_size, λ_off and λ_ang respectively denote the weights of the size loss, the position-offset loss and the rotation-angle loss; θ denotes the target rotation angle of the global rectangular marking frame and θ̂ denotes the predicted rotation angle of the directional detection frame; N denotes the number of positive samples (i.e., the total number of samples containing a lightning image).
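A numeric sketch of the combined loss in equation (3). The center-point, size and offset terms are passed in precomputed, and the angle term is taken here as a mean absolute difference over the N positive samples; the patent's exact form of L_ang is not recoverable from the text, so that choice, like all names, is an assumption.

```python
import numpy as np

def detection_loss(l_k, l_size, l_off, theta, theta_hat,
                   lam_size=1.0, lam_off=1.0, lam_ang=1.0):
    """L_det = L_k + λ_size·L_size + λ_off·L_off + λ_ang·L_ang."""
    # angle term: mean |θ − θ̂| over the positive samples (assumed form)
    l_ang = float(np.mean(np.abs(np.asarray(theta) - np.asarray(theta_hat))))
    return l_k + lam_size * l_size + lam_off * l_off + lam_ang * l_ang
```

Raising λ_ang makes training penalize orientation errors more heavily, which matters for thin, slanted targets such as lightning channels.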
FIG. 5 is a schematic diagram of a network structure of a lightning target detection model provided in this embodiment, and referring to FIG. 5, the lightning target detection model includes a first feature extraction network, a second feature extraction network, a first detection network, a spatial rotation alignment network, and a second detection network;
the first feature extraction network is used for carrying out feature extraction processing on an input image to obtain a main network feature map;
the second feature extraction network is used for processing the backbone network feature map output by the first feature extraction network to obtain a multi-scale feature map;
the first detection network is used for carrying out regression processing on the multi-scale feature map output by the second feature extraction network to obtain lightning rough positioning information;
the spatial rotation alignment network comprises an input layer, an RPN layer, an alignment network layer and an output layer. The input layer acquires the lightning rough-positioning information output by the first detection network and performs feature extraction on it; the RPN layer acquires the multi-scale feature map output by the second feature extraction network and performs regional feature extraction. The features output by the RPN layer and the input layer are fused into a regional feature map, which the alignment network layer regresses into features with a spatial direction and passes to the output layer; after processing by the output layer, the spatially directional features are obtained.
The second detection network acquires the backbone network feature map output by the first feature extraction network and performs regression on it to obtain a detection frame; this frame is aligned with the spatially directional features output by the spatial rotation alignment network, and a further regression yields the detection frame with directional features, thereby achieving accurate detection of the lightning target.
Because the position and direction in which lightning appears in a picture are random, this embodiment adopts a directional global rectangular marking frame when labeling the lightning sample images: besides the size and position of the frame, a target rotation angle is labeled, namely the angle by which the frame is offset relative to the horizontal or vertical direction. The labeled lightning sample images are fed into a lightning target detection model with a spatial rotation alignment network for training, yielding a model that can adapt to target detection in different directions. Detecting suspected lightning pictures with this model significantly improves detection accuracy.
Fig. 6 is a flowchart of the lightning detection method provided in this embodiment, and referring to fig. 6, the method specifically includes the following steps:
(1) firstly, initializing equipment;
(2) reading the configuration file, and acquiring a first brightness threshold, a third brightness threshold, an image block division rule, a target detection threshold and the like;
(3) reading an image shot by the camera and determining the ambient illumination intensity from the average brightness of the image; the exposure time, brightness, contrast and other parameters of the camera are adjusted according to the ambient illumination intensity, so that high-quality image information under the current lighting conditions is obtained, providing a reliable basis for subsequent data processing and viewing;
(4) reading a camera video stream, acquiring real-time image data, and performing image preprocessing;
(5) acquiring current frame chrominance information;
(6) comparing the average chrominance of the current frame with the average of the previous N frames; when the difference is greater than the brightness threshold in the configuration file, the current frame is judged to be a suspected lightning image and placed in the queue; otherwise, returning to step (4) and continuing to read the camera video stream;
(7) reading a queue picture;
(8) detecting the picture after image preprocessing by using a lightning target detection model with a space rotation alignment network;
(9) acquiring a detection frame and confidence;
(10) comparing the confidence with the target detection threshold set in the configuration file; when the confidence is greater than the threshold, the image is judged to be a lightning image and saved; otherwise, returning to step (4) and continuing to read the camera video stream;
(11) if a program exit instruction is received, ending the program; otherwise, returning to step (4) and continuing to read the camera video stream.
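The eleven steps above compress into a single control loop: compare each frame's chrominance against the running history, run the detector only on frames that trigger, and keep those whose confidence clears the threshold. The camera and detector here are duck-typed stand-ins (an iterable of frames and a plain callable), not a real device API; every name is illustrative.

```python
from collections import deque

def lightning_pipeline(frames, detect, luma_threshold=40.0, det_threshold=0.5, n_prev=5):
    """frames: iterable of (mean_chroma, image); detect: image -> confidence."""
    history, saved = deque(maxlen=n_prev), []
    for mean_chroma, image in frames:
        # steps (5)-(6): compare current chrominance with the mean of the last N frames
        if len(history) == n_prev and mean_chroma - sum(history) / n_prev > luma_threshold:
            conf = detect(image)           # steps (8)-(9): model inference
            if conf > det_threshold:       # step (10): keep confident detections
                saved.append(image)
        history.append(mean_chroma)
    return saved
```

In a real deployment `frames` would be a camera video stream and `detect` the trained lightning target detection model; the two-stage gate (cheap brightness test, then the expensive model) is what keeps the pipeline responsive.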
In one embodiment, as shown in FIG. 7, there is provided a lightning detection apparatus 700 comprising: a brightness detection module 701, a trigger module 702, a prediction module 703 and an output module 704, wherein:
a brightness detection module 701 configured to acquire an image captured by a camera, determine an ambient light intensity according to an average brightness of the image; when the ambient illumination intensity is larger than a preset first brightness threshold, dividing the image into a plurality of image blocks according to a preset division rule and respectively detecting the brightness value of each image block;
a triggering module 702 configured to start exposure of the camera and set an exposure time according to a difference between the brightness value and a second brightness threshold when the brightness value of any image block is greater than the preset second brightness threshold;
the prediction module 703 is configured to obtain a video stream shot by the camera after the exposure is started, detect chrominance information of each frame of image in the video stream, compare the chrominance information of the current frame of image with chrominance mean values of a plurality of previous frames of images, and input the current frame of image into a pre-trained lightning target detection model when a compared difference value is greater than a pre-configured third luminance threshold value, so as to obtain at least one target detection frame and a corresponding confidence coefficient;
and an output module 704 configured to judge that the current frame image is a lightning picture when the confidence of any target detection frame is greater than the preset target detection threshold.
For the specific definition of the lightning detection apparatus, reference may be made to the definition of the lightning detection method above, which is not repeated here. The modules in the lightning detection apparatus may be wholly or partially realized in software, in hardware, or in a combination of the two. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
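The brightness detection and trigger modules can be sketched as two small functions: split the image into a grid of blocks (the division rule here, an r×c grid, is one illustrative choice), measure each block's mean brightness, and derive an exposure time from how far the brightest block exceeds the second brightness threshold. The linear exposure rule and all names are assumptions, not the patent's formula.

```python
import numpy as np

def block_brightness(image, rows=2, cols=2):
    """Mean brightness of each block in an r x c grid over the image."""
    h, w = image.shape[:2]
    return [float(image[i * h // rows:(i + 1) * h // rows,
                        j * w // cols:(j + 1) * w // cols].mean())
            for i in range(rows) for j in range(cols)]

def exposure_time(block_values, second_threshold=150.0, base_us=100.0, k_us=2.0):
    """Exposure time from the brightest block's excess over the second threshold."""
    peak = max(block_values)
    if peak <= second_threshold:
        return None                 # no trigger: no block exceeds the threshold
    return base_us + k_us * (peak - second_threshold)
```

Block-wise measurement is what lets a localized flash in one corner of the frame trigger exposure even when the image's overall average brightness barely moves.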
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A lightning detection method, comprising:
determining the ambient light intensity, and setting the exposure time of the lightning stroke image device according to the ambient light intensity, wherein the specific steps are as follows:
acquiring an image shot by a lightning stroke image device, and determining the ambient illumination intensity according to the average brightness of the image;
when the ambient illumination intensity is greater than a preset first brightness threshold, dividing the image into a plurality of image blocks according to a preset division rule and respectively detecting the brightness value of each image block;
when the brightness value of any image block is larger than a preset second brightness threshold, starting exposure of the lightning stroke image device and setting exposure time according to the difference between the brightness value and the second brightness threshold;
the method comprises the steps of obtaining a video stream shot by a lightning strike image device, detecting the chrominance information of each frame of image in the video stream, comparing the chrominance information of the current frame of image with the chrominance mean value of a plurality of previous frames of images, and inputting the current frame of image into a pre-trained lightning target detection model when the compared difference value is larger than a pre-configured third luminance threshold value to obtain at least one target detection frame and corresponding confidence; the training process of the lightning target detection model comprises the following steps:
acquiring a lightning sample image, wherein the lightning sample image is provided with a global rectangular marking frame with a direction, and the size, the position and the target rotation angle of the global rectangular marking frame; the target rotation angle is an angle of the global rectangular marking frame offset relative to the horizontal or vertical direction;
training a lightning target detection model with a space rotation alignment network according to the lightning sample image to obtain a lightning target detection model capable of adapting to target detection in different directions;
and when the confidence coefficient of any one target detection frame is greater than a preset target detection threshold value, judging that the current frame image is a thunder and lightning picture.
2. The lightning detection method of claim 1, wherein training a lightning target detection model with a spatially rotated alignment network from the lightning sample images specifically comprises:
generating a directional detection frame and the size, position and predicted rotation angle of the directional detection frame according to a lightning sample image through a lightning target detection model to be trained;
calculating loss functions between the size, the position and the predicted rotation angle of the directional detection frame and the size, the position and the target rotation angle of the global rectangular marking frame, and reversely adjusting model parameters of the lightning target detection model according to the loss functions;
and returning to the lightning target detection model to be trained, and continuing to execute the step of generating the directional detection frame according to the lightning sample image until the iteration stop condition is met, and stopping iteration to obtain the trained lightning target detection model.
3. The lightning detection method of claim 2, wherein the loss function is:
L_det = L_k + λ_size·L_size + λ_off·L_off + λ_ang·L_ang,

L_ang = (1/N) Σ_{i=1}^{N} |θ_i − θ̂_i|,

wherein L_det denotes the loss function; L_k denotes the center-point loss of the directional detection frame; L_size denotes the size loss between the directional detection frame and the global rectangular marking frame; L_off denotes the position-offset loss between the directional detection frame and the global rectangular marking frame; L_ang denotes the rotation-angle loss between the directional detection frame and the global rectangular marking frame; λ_size, λ_off and λ_ang respectively denote the weights of the size loss, the position-offset loss and the rotation-angle loss; θ denotes the target rotation angle of the global rectangular marking frame and θ̂ denotes the predicted rotation angle of the directional detection frame; N denotes the number of positive samples.
4. The lightning detection method of claim 2, wherein the directional detection frame is:
P_lt = M_r [-w/2, -h/2]^T + [c_x + δ_x, c_y + δ_y]^T
P_rt = M_r [+w/2, -h/2]^T + [c_x + δ_x, c_y + δ_y]^T
P_lb = M_r [-w/2, +h/2]^T + [c_x + δ_x, c_y + δ_y]^T
P_rb = M_r [+w/2, +h/2]^T + [c_x + δ_x, c_y + δ_y]^T

wherein P_lt, P_rt, P_lb and P_rb respectively represent the four corner points of the directional detection frame; M_r represents the rotation matrix; (w, h) represent the width and height of the directional detection frame; (c_x, c_y) represents the coordinates of its center point; and (δ_x, δ_y) represents the offset of the center point of the directional detection frame from the corner points.
5. The lightning detection method of claim 1, wherein detecting chrominance information for each frame of image in the video stream further comprises: performing smoothing, denoising, image enhancement, background segmentation and morphological processing on each frame of image to obtain a lightning channel image represented by continuous single pixel points.
6. A lightning detection device, characterized in that the device comprises:
the system comprises a brightness detection module, a light source module and a light source module, wherein the brightness detection module is configured to acquire an image shot by a lightning stroke image device and determine the ambient illumination intensity according to the average brightness of the image; when the ambient illumination intensity is larger than a preset first brightness threshold, dividing the image into a plurality of image blocks according to a preset division rule and respectively detecting the brightness value of each image block;
the triggering module is configured to start exposure of the lightning strike image device and set exposure time according to a difference value between a brightness value and a second brightness threshold value when the brightness value of any image block is larger than the preset second brightness threshold value;
the device comprises a prediction module, a target detection module and a control module, wherein the prediction module is configured to obtain a video stream shot by a lightning strike image device after exposure is started, detect the chrominance information of each frame of image in the video stream, compare the chrominance information of the current frame of image with the chrominance mean value of a plurality of previous frames of images, and input the current frame of image into a pre-trained lightning target detection model when the compared difference value is greater than a pre-configured third luminance threshold value to obtain at least one target detection frame and a corresponding confidence coefficient; the training process of the lightning target detection model comprises the following steps:
acquiring a lightning sample image, wherein the lightning sample image is provided with a global rectangular marking frame with a direction, and the size, the position and the target rotation angle of the global rectangular marking frame; the target rotation angle is an angle of the global rectangular marking frame offset relative to the horizontal or vertical direction;
training a lightning target detection model with a space rotation alignment network according to the lightning sample image to obtain a lightning target detection model capable of adapting to target detection in different directions;
and the output module is used for judging that the current frame image is a thunder and lightning picture when the confidence coefficient of any one target detection frame is greater than a preset target detection threshold value.
7. A computer device, comprising at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to carry out the steps of the method according to any one of claims 1 to 5.
8. A computer-readable medium, in which a computer program is stored which is executable by a computer device, and which, when run on the computer device, causes the computer device to carry out the steps of the method according to any one of claims 1 to 5.
CN202011329977.XA 2020-11-24 2020-11-24 Thunder and lightning detection method and device, computer equipment and readable medium Active CN112396116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011329977.XA CN112396116B (en) 2020-11-24 2020-11-24 Thunder and lightning detection method and device, computer equipment and readable medium


Publications (2)

Publication Number Publication Date
CN112396116A CN112396116A (en) 2021-02-23
CN112396116B true CN112396116B (en) 2021-12-07

Family

ID=74607701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011329977.XA Active CN112396116B (en) 2020-11-24 2020-11-24 Thunder and lightning detection method and device, computer equipment and readable medium

Country Status (1)

Country Link
CN (1) CN112396116B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033464B (en) * 2021-04-10 2023-11-21 阿波罗智联(北京)科技有限公司 Signal lamp detection method, device, equipment and storage medium
CN113191234A (en) * 2021-04-22 2021-07-30 武汉菲舍控制技术有限公司 Belt conveyor conveyer belt anti-tearing method and device based on machine vision
CN113204903B (en) * 2021-04-29 2022-04-29 国网电力科学研究院武汉南瑞有限责任公司 Method for predicting thunder and lightning
CN113848594A (en) * 2021-09-23 2021-12-28 云南电网有限责任公司电力科学研究院 Method for identifying spectral information of lightning channel
CN114827466B (en) * 2022-04-20 2023-07-04 武汉三江中电科技有限责任公司 Human eye-like equipment image acquisition device and image acquisition method
CN114792392B (en) * 2022-05-18 2023-04-18 西南交通大学 Method for simulating lightning precursor development path based on Markov chain
CN116910491B (en) * 2023-09-11 2024-01-23 四川弘和数智集团有限公司 Lightning monitoring and early warning system and method, electronic equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN111050069A (en) * 2019-12-12 2020-04-21 维沃移动通信有限公司 Shooting method and electronic equipment

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN101975894B (en) * 2010-09-08 2012-07-18 北京航空航天大学 4D (Four Dimensional) thunder collecting method of sensor network
CN103149458B (en) * 2012-11-21 2016-03-09 华中科技大学 A kind of Lightning monitoring equipment
CN105635565A (en) * 2015-12-21 2016-06-01 华为技术有限公司 Shooting method and equipment
CN106296698B (en) * 2016-08-15 2019-03-29 成都通甲优博科技有限责任公司 A kind of lightning 3-D positioning method based on stereoscopic vision
CN109342828A (en) * 2018-09-05 2019-02-15 国网湖北省电力有限公司电力科学研究院 A kind of lightening pulse signal detecting method based on frequency domain constant false alarm
CN109979468B (en) * 2019-03-05 2020-11-06 武汉三江中电科技有限责任公司 Lightning stroke optical path monitoring system and method
CN110609178A (en) * 2019-10-22 2019-12-24 中国气象科学研究院 Automatic observation system and method for double shooting of lightning channel
CN111723860B (en) * 2020-06-17 2022-11-18 苏宁云计算有限公司 Target detection method and device




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A lightning detection method, device, computer equipment and readable medium

Effective date of registration: 20220331

Granted publication date: 20211207

Pledgee: Wuhan area branch of Hubei pilot free trade zone of Bank of China Ltd.

Pledgor: WUHAN SANJIANG CLP TECHNOLOGY Co.,Ltd.

Registration number: Y2022420000094

CB03 Change of inventor or designer information

Inventor after: Huang Kai

Inventor after: Han Junlong

Inventor after: Zhang Fei

Inventor after: Shu Kuan

Inventor after: Lei Cheng

Inventor before: Huang Kai

Inventor before: Han Junlong

Inventor before: Li Heng

Inventor before: Zhang Fei

Inventor before: Shu Kuan

Inventor before: Lei Cheng

TR01 Transfer of patent right

Effective date of registration: 20220510

Address after: 430073 room 05, 14 / F, unit 07, skirt building, phase II R & D building, laser engineering design headquarters, No. 3, Guanggu Avenue, Donghu New Technology Development Zone, Wuhan, Hubei (Wuhan area of free trade zone)

Patentee after: Wuhan Zhonggu Maituo Technology Co.,Ltd.

Address before: Floor 19, building F, Optics Valley World Trade Center, 41 Optics Valley Avenue, Donghu high tech, Wuhan, Hubei 430074

Patentee before: WUHAN SANJIANG CLP TECHNOLOGY Co.,Ltd.

CI03 Correction of invention patent

Correction item: Patentee|Address

Correct: WUHAN SANJIANG CLP TECHNOLOGY Co.,Ltd.|Floor 19, building F, Optics Valley World Trade Center, 41 Optics Valley Avenue, Donghu high tech, Wuhan, Hubei 430074

False: Wuhan Zhonggu Maituo Technology Co.,Ltd.|430073 room 05, 14/F, unit 07, skirt building, phase II R & D building, laser engineering design headquarters, No. 3, Guanggu Avenue, Donghu New Technology Development Zone, Wuhan, Hubei (Wuhan area of free trade zone)

Number: 21-01

Volume: 38

Correction item: Inventor

Correct: Huang Kai|Han Junlong|Li Heng|Zhang Fei|Shu Kuan|Lei Cheng

False: Huang Kai|Han Junlong|Zhang Fei|Shu Kuan|Lei Cheng

Number: 21-01

Volume: 38