CN112149583A - Smoke detection method, terminal device and storage medium - Google Patents

Smoke detection method, terminal device and storage medium

Info

Publication number
CN112149583A
Authority
CN
China
Prior art keywords
image
smoke
frame
region
interest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011030654.0A
Other languages
Chinese (zh)
Inventor
朱焱 (Zhu Yan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Industry Research Kunyun Artificial Intelligence Research Institute Co ltd
Original Assignee
Shandong Industry Research Kunyun Artificial Intelligence Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Industry Research Kunyun Artificial Intelligence Research Institute Co ltd filed Critical Shandong Industry Research Kunyun Artificial Intelligence Research Institute Co ltd
Priority to CN202011030654.0A
Publication of CN112149583A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The application is applicable to the technical field of image processing, and provides a smoke detection method, a terminal device and a storage medium, wherein the smoke detection method comprises the following steps: acquiring a video image to be detected; converting each frame of image in the video image into a binary image based on a Gaussian mixture model; extracting a smoke region image from the video image based on the region of interest of each frame of binary image; and detecting the image of the smoke area based on the trained YOLOv3 model, and determining the relevant information of the smoke. By carrying out screening detection twice on the image of the smoke area, the accuracy of smoke detection can be improved.

Description

Smoke detection method, terminal device and storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a smoke detection method, a terminal device, and a storage medium.
Background
Among the various natural disasters, fire is one of the most common and most destructive hazards to public safety and social development. In the early stage of a fire, smoke is generated continuously; if the smoke can be detected in time and the fire effectively suppressed at this stage, the harm caused by the fire can be minimized. Early smoke detection in a fire is therefore critical.
Traditional smoke detection technologies based on physical sensors are mostly unsuitable for wide-area and complex environments. In recent years, video smoke detection technology has been widely applied and studied; it mainly detects smoke using features such as the average gradient of the image and optical-flow motion. However, traditional video smoke detection technology suffers from low detection precision.
Disclosure of Invention
The embodiment of the application provides a smoke detection method, terminal equipment and a storage medium, and can solve the problem of low detection precision of the existing video smoke detection technology.
In a first aspect, an embodiment of the present application provides a smoke detection method, including:
acquiring a video image to be detected;
converting each frame of image in the video image into a binary image based on a Gaussian mixture model;
extracting a smoke region image from the video image based on the region of interest of each frame of binary image;
and detecting the smoke area image based on the trained YOLOv3 model, and determining the relevant information of the smoke.
In a possible implementation manner of the first aspect, the converting each frame of image in the video image into a binary image based on a gaussian mixture model includes:
matching all pixel points in each frame of image with the Gaussian mixture model respectively;
converting the pixel points successfully matched with the Gaussian mixture model into black foreground pixel points, and converting the pixel points failed in matching with the Gaussian mixture model into white background pixel points;
and generating the binary image according to all the black foreground pixel points and all the white background pixel points.
In a possible implementation manner of the first aspect, the extracting a smoke region image in the video image based on the region of interest of each frame binary image includes:
determining the region of interest of each frame of binary image;
determining a smoke region image in a first frame image in the video image according to the region of interest of each frame of binary image; the first frame image is a frame image corresponding to each frame of binary image in the video image.
In a possible implementation manner of the first aspect, the determining a region of interest of each frame of the binary image includes:
establishing a coordinate system with one corner of the binary image as the origin, and calculating the extreme values of the x-axis and y-axis coordinates of the black pixel points in the coordinate system; the extreme values of the x-axis coordinate comprise the maximum and minimum x-axis coordinates, and the extreme values of the y-axis coordinate comprise the maximum and minimum y-axis coordinates;
and determining the region of interest of each frame of binary image according to the extreme values of the x-axis and y-axis coordinates.
In a possible implementation manner of the first aspect, the determining, according to the region of interest of each frame of the binary image, a smoke region image in a first frame of image in the video image includes:
determining second coordinate information of a smoke region in the first frame image according to the first coordinate information of the region of interest;
and extracting smoke features in the first frame image according to the second coordinate information to obtain the smoke area image.
In a possible implementation manner of the first aspect, before the detecting the smoke region image based on the trained YOLOv3 model, the method further includes:
acquiring an image sample containing smoke data, and establishing a smoke data set;
labeling the smoke data set to obtain a labeled data set;
based on the annotated dataset, the YOLOv3 model is trained.
In one possible implementation manner of the first aspect, before the labeling the smoke dataset, the method further includes:
preprocessing the image sample, and performing data expansion on the preprocessed image sample; wherein the preprocessing comprises at least one of: filtering and denoising; the data expansion comprises at least one of: random cropping, rotation, brightness adjustment and saturation adjustment.
In a possible implementation manner of the first aspect, the detecting the smoke region image based on the trained YOLOv3 model to determine the position information of the smoke includes:
if smoke is detected in the smoke region image, the position and range of the smoke are located.
In a second aspect, an embodiment of the present application provides a smoke detection device, including:
the video image acquisition module is used for acquiring a video image to be detected;
the binary image determining module is used for converting each frame of image in the video image into a binary image based on a Gaussian mixture model;
the smoke region image determining module is used for extracting a smoke region image from the video image based on the region of interest of each frame of binary image;
and the detection module is used for detecting the smoke area image based on the trained YOLOv3 model and determining the relevant information of the smoke.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method of any one of the above first aspects when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method of any one of the first aspect.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the method of any one of the above first aspects.
Compared with the prior art, the embodiment of the application has the advantages that:
firstly, each frame of image in a video image is converted into a binary image based on a Gaussian mixture model, and a smoke region image is extracted from the video image based on an interested region of each frame of binary image, so that the preliminary screening detection of the smoke region image in the video image is realized. And then, detecting the smoke area image based on the trained YOLOv3 model, further screening and detecting the smoke area image, and finally determining the relevant information of the smoke. By carrying out screening detection twice on the image of the smoke area, the accuracy of smoke detection can be improved.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a smoke detection method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a smoke detection method according to another embodiment of the present application;
fig. 3 is a schematic flow chart of a smoke detection method according to another embodiment of the present application;
fig. 4 is a schematic flow chart of a smoke detection method according to another embodiment of the present application;
fig. 5 is a schematic structural diagram of a smoke detection device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in the specification of this application and the appended claims, the term "if" may be interpreted contextually as "when" or "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if [ the described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ the described condition or event ]" or "in response to detecting [ the described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
To address the low detection precision of the existing video smoke detection technology, the present application provides a smoke detection method. First, each frame of image in a video image is converted into a binary image based on a Gaussian mixture model, and a smoke region image is extracted from the video image based on the region of interest of each frame of binary image, realizing a preliminary screening detection of the smoke region image. Then, the smoke region image is detected based on the trained YOLOv3 model for further screening detection, and the relevant information of the smoke is finally determined. By carrying out screening detection twice on the smoke region image, the accuracy of smoke detection can be improved.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Referring to fig. 1, fig. 1 shows a schematic flow chart of a smoke detection method provided in an embodiment of the present application, and by way of example and not limitation, the method may include the following steps:
s101, acquiring a video image to be detected.
In particular, the video images to be detected may be obtained by means of image acquisition devices arranged in the scene. For example, the image acquisition device may be a high definition camera arranged in a scene, and the video image acquired by the high definition camera in real time is the video image to be detected.
And S102, converting each frame of image in the video image into a binary image based on the Gaussian mixture model.
Specifically, each frame of image is converted into a binary image using the Gaussian mixture model: the smoke in each frame is treated as the dynamic foreground, and everything except the smoke as the static background. Smoke is thereby distinguished from the other parts of the scene in the binary image.
And S103, extracting a smoke region image from the video image based on the region of interest of each frame of binary image.
Specifically, the smoke portion in the binary image can be distinguished from other portions, and the region of the smoke in the binary image is used as the region of interest. And then feeding back the region of interest to the corresponding video image, finding a region corresponding to the region of interest in the video image, and recording the region as a smoke region image. Therefore, the preliminary screening detection of the smoke area image in the video image is realized.
S104, detecting the image of the smoke area based on the trained YOLOv3 model, and determining the relevant information of the smoke.
Specifically, after determining the smoke region image, the smoke region image is detected by using a trained YOLOv3 model to determine the relevant information of smoke, wherein the relevant information of smoke includes whether smoke exists and the position and the range of smoke.
For example, if smoke is detected in the image of the smoke area, the position and range of the smoke are located, so that monitoring personnel can quickly reach the area where the smoke is generated, and timely treatment can be performed to prevent fire.
Step S103 realizes preliminary screening detection of the smoke region image in the video image, and step S104 performs further screening detection on the basis of step S103. By screening and detecting the smoke region image twice, the accuracy of smoke detection is improved.
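The second screening step can be sketched as a simple confidence filter over the model's raw detections. The detection tuple layout (centre x, centre y, width, height, confidence) and the 0.5 threshold are illustrative assumptions; the patent does not specify an output format:

```python
def locate_smoke(detections, conf_threshold=0.5):
    """Keep only detections whose confidence reaches the threshold and
    report each one's position and range.

    Each detection is assumed to be (x, y, w, h, confidence) with the
    box centre at (x, y), an illustrative layout, not one specified in
    the patent.
    """
    smoke = []
    for x, y, w, h, conf in detections:
        if conf >= conf_threshold:
            smoke.append({"center": (x, y), "size": (w, h), "confidence": conf})
    return smoke
```

A surviving entry gives monitoring personnel both the position (centre) and the range (size) of the suspected smoke, matching the relevant information described above.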
For example, as shown in fig. 2, step S102 may specifically include:
and S1021, respectively matching all pixel points in each frame of image with the Gaussian mixture model.
Specifically, each frame of image comprises a plurality of pixel points, and each pixel point in each frame of image is matched with the Gaussian mixture model respectively, so that the pixel points are distinguished.
And S1022, converting the pixel points successfully matched with the Gaussian mixture model into black foreground pixel points, and converting the pixel points unsuccessfully matched with the Gaussian mixture model into white background pixel points.
Specifically, the Gaussian mixture model can be trained in advance so that pixel points representing smoke in the video image match it successfully; the successfully matched pixel points are converted into black foreground pixel points, and all other pixel points in the video image are converted into white background pixel points.
And S1023, generating a binary image according to all the black foreground pixel points and all the white background pixel points.
Specifically, after all the pixels in the video image are converted into black foreground pixels and white background pixels, all the black foreground pixels and all the white background pixels form a binary image.
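The pixel-wise matching of steps S1021 to S1023 can be sketched as follows. This is a minimal single-Gaussian simplification of the mixture model (the patent gives no model parameters, and the 2.5-sigma matching threshold is a common background-subtraction convention rather than a value from the patent). Matched pixels become black foreground and unmatched pixels white background, following the colour convention above:

```python
import numpy as np

def binarize_frame(frame, mean, var, k=2.5):
    """Convert one grayscale frame to a binary image (steps S1021-S1023).

    A pixel "matches" the model if it lies within k standard deviations
    of the component mean. Matched pixels become black foreground (0);
    unmatched pixels become white background (255).
    """
    matched = np.abs(frame.astype(np.float64) - mean) <= k * np.sqrt(var)
    return np.where(matched, 0, 255).astype(np.uint8)
```

A full mixture model would keep several (mean, variance, weight) components per pixel and update them online; libraries such as OpenCV provide this as `BackgroundSubtractorMOG2`, though its default convention is white foreground rather than the black foreground used here.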
For example, as shown in fig. 3, step S103 may specifically include:
and S1031, determining the region of interest of each frame of binary image.
Specifically, in step S1022, the pixel points representing smoke in the video image are converted into black foreground pixel points, and the pixel points other than the pixel points representing smoke in the video image are converted into white background pixel points. And determining a region containing all black foreground pixel points in the binary image as an interested region, and realizing the feature extraction of the smoke region in the binary image.
S1032, according to the interesting region of each frame of binary image, determining a smoke region image in the first frame of image in the video image.
Specifically, since the first frame image is a frame image corresponding to each frame of binary image in the video image, the region of interest of each frame of binary image is fed back to the corresponding first frame image, and the region corresponding to the region of interest is determined in the first frame image as the smoke region image. And extracting the image of the smoke region in the first frame image.
Exemplarily, step S1031 may specifically include:
and A1, establishing a coordinate system by taking one corner of the binary image as an origin, and calculating the maximum value of the x-axis coordinate and the maximum value of the y-axis coordinate of the black pixel point in the coordinate system.
Specifically, the maximum value of the x-axis coordinate includes the maximum value (x) of the x-axis coordinatemax) And the minimum value (x) of the x-axis coordinatemin) The maximum value of the y-axis coordinate includes the maximum value of the y-axis coordinate (y)max) And the minimum value of the y-axis coordinate (y)min)。
And B1, determining the interested region of each frame of binary image according to the most value of the x-axis coordinate and the most value of the y-axis coordinate.
Specifically, will (x)max,ymax)、(xmax,ymin)、(xmin,ymin) And (x)min,ymax) The region formed by sequentially connecting the four points is the region of interest, and the region of interest contains all black foreground pixel points, so that the region of interest in the binary image is extracted.
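Steps A1 and B1 amount to computing a bounding box over the black foreground pixels. A minimal sketch with NumPy, assuming black pixels have value 0 and the origin is the top-left corner of the image:

```python
import numpy as np

def region_of_interest(binary):
    """Bounding box of all black (0) foreground pixels in a binary image,
    with the image's top-left corner as the coordinate origin.

    Returns (x_min, y_min, x_max, y_max), or None when the frame holds
    no foreground pixel at all.
    """
    ys, xs = np.nonzero(binary == 0)  # row index = y, column index = x
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

The four corner points of the patent are recovered from this tuple as (x_max, y_max), (x_max, y_min), (x_min, y_min) and (x_min, y_max).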
For example, step S1032 may specifically include:
and A2, determining second coordinate information of the smoke region in the first frame image according to the first coordinate information of the region of interest.
Specifically, a coordinate system corresponding to that of the binary image is first established in the first frame image, ensuring that corresponding pixel points in the binary image and the first frame image have the same coordinates. Since the first coordinate information of the region of interest in the binary image is (x_max, y_max), (x_max, y_min), (x_min, y_min) and (x_min, y_max), the second coordinate information of the smoke region in the first frame image is also (x_max, y_max), (x_max, y_min), (x_min, y_min) and (x_min, y_max).
And B2, extracting smoke features in the first frame image according to the second coordinate information to obtain a smoke area image.
Specifically, the image in the region formed by connecting the four points (x_max, y_max), (x_max, y_min), (x_min, y_min) and (x_min, y_max) in sequence in the first frame image is the smoke region image, which realizes the extraction of the smoke features in the first frame image.
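Steps A2 and B2 then reduce to cropping the original frame with the same four corner coordinates, since both images share one coordinate system. A minimal sketch, assuming the frame is a NumPy array indexed as frame[y, x]:

```python
import numpy as np

def crop_smoke_region(frame, roi):
    """Cut the smoke region out of the original frame using the corner
    coordinates computed on the corresponding binary image."""
    x_min, y_min, x_max, y_max = roi
    # +1 because the extreme coordinates are inclusive corner points
    return frame[y_min:y_max + 1, x_min:x_max + 1]
```

The returned patch is what is fed to the YOLOv3 model in step S104.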
In some embodiments, as shown in fig. 4, the following steps may be performed before step S104:
s1041, obtaining an image sample containing smoke data, and establishing a smoke data set.
Specifically, a certain amount of image samples containing smoke data can be obtained through self-shooting or web crawlers and the like to form a smoke data set.
And S1042, labeling the smoke data set to obtain a labeled data set.
Specifically, the smoke area in each image sample is labeled, and the sample images of all labeled smoke areas form an labeled data set.
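One common way to realize this labeling is to store each marked smoke box as a YOLO-style text line (class id plus normalised centre and size). The patent does not fix an annotation format; this follows the convention the Darknet YOLOv3 tooling expects:

```python
def yolo_label_line(cls_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Format one annotated box as '<class> <xc> <yc> <w> <h>' with all
    geometry normalised to [0, 1] by the image size."""
    xc = (x_min + x_max) / 2 / img_w
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{cls_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```

One such line per smoke box, written to a text file alongside each image sample, yields the labeled dataset used for training.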
S1043, training a YOLOv3 model based on the labeled data set.
Specifically, the YOLOv3 model is trained using image samples in the annotation dataset, enabling the trained YOLOv3 model to detect smoke.
In some embodiments, before labeling the smoke dataset, the method may further include:
and preprocessing the image sample, and performing data expansion on the preprocessed image sample.
Specifically, the preprocessing comprises at least one of: filtering and denoising; the data expansion comprises at least one of: random cropping, rotation, brightness adjustment and saturation adjustment.
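The listed expansion modes can be sketched as a random-augmentation routine. Everything here is illustrative: the crop ratio, rotation granularity, brightness range, and per-channel scaling (a rough stand-in for saturation adjustment, which strictly requires an HSV conversion) are assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Apply one randomly chosen data-expansion mode to a colour image
    (H x W x 3, uint8): random crop, rotation, brightness adjustment,
    or per-channel scaling as a rough stand-in for saturation change."""
    choice = rng.integers(4)
    if choice == 0:  # random crop to 80% of each side
        h, w = image.shape[:2]
        ch, cw = int(0.8 * h), int(0.8 * w)
        y0 = rng.integers(0, h - ch + 1)
        x0 = rng.integers(0, w - cw + 1)
        return image[y0:y0 + ch, x0:x0 + cw]
    if choice == 1:  # rotation in 90-degree steps (arbitrary angles need interpolation)
        return np.rot90(image)
    if choice == 2:  # additive brightness shift, clipped to the uint8 range
        delta = rng.integers(-40, 41)
        return np.clip(image.astype(np.int16) + delta, 0, 255).astype(np.uint8)
    # per-channel scaling, loosely mimicking a saturation adjustment
    scale = rng.uniform(0.7, 1.3, size=(1, 1, 3))
    return np.clip(image.astype(np.float64) * scale, 0, 255).astype(np.uint8)
```

Applying such transforms to each preprocessed sample multiplies the effective size of the smoke dataset before labeling and training.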
According to the smoke detection method, each frame of image in a video image is converted into a binary image based on a Gaussian mixture model, and a smoke area image is extracted from the video image based on the region of interest of each frame of binary image, so that preliminary screening detection of the smoke area image in the video image is realized. And then, detecting the smoke area image based on the trained YOLOv3 model, further screening and detecting the smoke area image, and finally determining the relevant information of the smoke. By carrying out screening detection twice on the image of the smoke area, the accuracy of smoke detection can be improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 5 is a schematic structural diagram of a smoke detection apparatus provided in an embodiment of the present application, where the smoke detection apparatus may include a video image acquisition module 51, a binary image determination module 52, a smoke region image determination module 53, and a detection module 54;
a video image obtaining module 51, configured to obtain a video image to be detected;
a binary image determining module 52, configured to convert each frame of image in the video image into a binary image based on a gaussian mixture model;
a smoke region image determining module 53, configured to extract a smoke region image from the video image based on a region of interest of each frame of binary image;
and the detection module 54 is configured to detect the smoke region image based on the trained YOLOv3 model, and determine relevant information of smoke.
In one embodiment of the present application, the binary image determining module 52 may include a matching unit, a converting unit, and a binary image determining unit;
the matching unit is used for respectively matching all the pixel points in each frame of image with the Gaussian mixture model;
the conversion unit is used for converting pixel points which are successfully matched with the Gaussian mixture model into black foreground pixel points and converting pixel points which are unsuccessfully matched with the Gaussian mixture model into white background pixel points;
and the binary image determining unit is used for generating the binary image according to all the black foreground pixel points and all the white background pixel points.
In one embodiment of the present application, the smoke region image determination module 53 may include a region-of-interest determination unit and a smoke region image determination unit;
the region-of-interest determining unit is used for determining a region of interest of each frame of binary image;
the smoke region image determining unit is used for determining a smoke region image in a first frame image in the video image according to the region of interest of each frame of binary image; the first frame image is a frame image corresponding to each frame of binary image in the video image.
In one embodiment of the present application, the region of interest determining unit may include a coordinate determining unit and a region of interest determining subunit;
the coordinate determination unit is used for establishing a coordinate system with one corner of the binary image as the origin, and calculating the extreme values of the x-axis and y-axis coordinates of the black pixel points in the coordinate system; the extreme values of the x-axis coordinate comprise the maximum and minimum x-axis coordinates, and the extreme values of the y-axis coordinate comprise the maximum and minimum y-axis coordinates;
and the region-of-interest determining subunit is used for determining the region of interest of each frame of binary image according to the extreme values of the x-axis and y-axis coordinates.
In one embodiment of the present application, the smoke region image determining unit may include a second coordinate information determining module and a smoke region image determining subunit;
the second coordinate information determining module is used for determining second coordinate information of the smoke area in the first frame of image according to the first coordinate information of the area of interest;
and the smoke area image determining subunit is configured to perform smoke feature extraction in the first frame image according to the second coordinate information to obtain the smoke area image.
In one embodiment of the present application, the smoke detection device may further include a smoke dataset determination module, a labeling module, and a training module;
the smoke data set determining module is used for acquiring an image sample containing smoke data and establishing a smoke data set;
the labeling module is used for labeling the smoke data set to obtain a labeled data set;
a training module for training a Yolov3 model based on the labeled dataset.
In one embodiment of the present application, the smoke detection device may further comprise a pre-processing module;
the preprocessing module is used for preprocessing the image sample and performing data expansion on the preprocessed image sample; wherein the preprocessing comprises at least one of: filtering and denoising; the data expansion comprises at least one of: random cropping, rotation, brightness adjustment and saturation adjustment.
In one embodiment of the present application, the detection module 54 may include a positioning unit;
and the positioning unit is used for positioning the position and the range of the smoke if the smoke is detected in the smoke area image.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
In addition, the smoke detection device shown in fig. 5 may be a software unit, a hardware unit, or a combination of software and hardware unit that is built in the existing terminal device, may be integrated into the terminal device as an independent pendant, or may exist as an independent terminal device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 6, the terminal device 6 of this embodiment may include: at least one processor 60 (only one processor 60 is shown in fig. 6), a memory 61, and a computer program 62 stored in the memory 61 and operable on the at least one processor 60, wherein the processor 60 executes the computer program 62 to implement the steps in any of the various method embodiments described above, such as the steps S101 to S104 in the embodiment shown in fig. 1. The processor 60, when executing the computer program 62, implements the functions of the various modules/units in the above-described apparatus embodiments, such as the functions of the modules 51 to 54 shown in fig. 5.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 62 in the terminal device 6.
The terminal device 6 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The terminal device 6 may include, but is not limited to, a processor 60 and a memory 61. Those skilled in the art will appreciate that fig. 6 is only an example of the terminal device 6 and does not constitute a limitation on it; it may include more or fewer components than those shown, combine some components, or use different components, such as an input/output device, a network access device, and the like.
The processor 60 may be a Central Processing Unit (CPU); it may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may, in some embodiments, be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. In other embodiments, the memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) equipped on the terminal device 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing an operating system, application programs, a Boot Loader, data, and other programs, such as the program code of the computer program 62. The memory 61 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application further provide a computer program product which, when executed on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to a terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random-Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals, in accordance with legislation and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A smoke detection method, comprising:
acquiring a video image to be detected;
converting each frame of image in the video image into a binary image based on a Gaussian mixture model;
extracting a smoke region image from the video image based on the region of interest of each frame of binary image;
and detecting the smoke area image based on the trained YOLOv3 model, and determining the relevant information of the smoke.
2. The smoke detection method of claim 1, wherein said converting each frame of image in said video image into a binary image based on a Gaussian mixture model comprises:
matching all pixel points in each frame of image with the Gaussian mixture model respectively;
converting the pixel points that are successfully matched with the Gaussian mixture model into black foreground pixel points, and converting the pixel points that fail to match the Gaussian mixture model into white background pixel points;
and generating the binary image according to all the black foreground pixel points and all the white background pixel points.
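The matching step of claim 2 can be sketched with a single Gaussian per pixel standing in for the full mixture (a simplification; real mixture models such as OpenCV's `BackgroundSubtractorMOG2` maintain several components per pixel, and the threshold `k` here is an assumed parameter). Note the claim's colour convention: matched pixels become black (0) foreground, unmatched pixels become white (255) background:

```python
import numpy as np

def binarize_frame(gray_frame, mean, var, k=2.5):
    """Match each pixel against its per-pixel background Gaussian: a pixel
    matches when it lies within k standard deviations of the mean. Per the
    claim, matched pixels are set to black (0), unmatched to white (255)."""
    diff2 = (gray_frame.astype(np.float64) - mean) ** 2
    matched = diff2 < (k * k) * var
    return np.where(matched, 0, 255).astype(np.uint8)
```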
3. The smoke detection method according to claim 2, wherein the extracting of the smoke region image in the video image based on the region of interest of each frame of the binary image comprises:
determining the region of interest of each frame of binary image;
determining a smoke region image in a first frame image in the video image according to the region of interest of each frame of binary image; the first frame image is a frame image corresponding to each frame of binary image in the video image.
4. The smoke detection method of claim 3, wherein said determining a region of interest for each frame of the binary image comprises:
establishing a coordinate system by taking one corner of the binary image as an origin, and calculating the extreme values of the x-axis coordinates and the extreme values of the y-axis coordinates of the black pixel points in the coordinate system; wherein the extreme values of the x-axis coordinates comprise a maximum x-axis coordinate and a minimum x-axis coordinate, and the extreme values of the y-axis coordinates comprise a maximum y-axis coordinate and a minimum y-axis coordinate;
and determining the region of interest of each frame of binary image according to the extreme values of the x-axis coordinates and the y-axis coordinates.
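The steps of claim 4 — a corner-origin coordinate system and the coordinate extrema of the black pixels — amount to the bounding box of the foreground, which might be sketched as:

```python
import numpy as np

def region_of_interest(binary):
    """Return (x_min, y_min, x_max, y_max) over the black (0) pixels of a
    binary image whose origin is its top-left corner, or None when the
    frame contains no foreground pixels."""
    ys, xs = np.nonzero(binary == 0)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```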
5. The method according to claim 3, wherein the determining the smoke region image in the first frame image of the video images according to the region of interest of each frame binary image comprises:
determining second coordinate information of a smoke region in the first frame image according to the first coordinate information of the region of interest;
and extracting smoke features in the first frame image according to the second coordinate information to obtain the smoke area image.
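In the simplest reading of claim 5, mapping the region of interest's first coordinate information onto the first frame image is a crop at the same coordinates (assuming the binary image and the colour frame share dimensions, which the method implies since one is derived from the other):

```python
def extract_smoke_region(first_frame, roi):
    """Crop the colour frame to the ROI box (x_min, y_min, x_max, y_max),
    inclusive of the extrema pixels themselves."""
    x0, y0, x1, y1 = roi
    return first_frame[y0:y1 + 1, x0:x1 + 1]
```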
6. The smoke detection method according to any one of claims 1 to 5, wherein prior to said detecting said smoke region image based on a trained YOLOv3 model, said method further comprises:
acquiring an image sample containing smoke data, and establishing a smoke data set;
labeling the smoke data set to obtain a labeled data set;
based on the annotated dataset, the YOLOv3 model is trained.
7. The smoke detection method of claim 6, wherein prior to said labeling the smoke data set, the method further comprises:
preprocessing the image sample, and performing data expansion on the preprocessed image sample; wherein the preprocessing comprises at least one of: filtering and denoising; the data expansion modes comprise at least one of the following: random cropping, rotation, brightness adjustment and saturation adjustment.
8. The smoke detection method according to any one of claims 1 to 5, wherein the detecting the smoke region image based on the trained YOLOv3 model to determine the position information of the smoke comprises:
if smoke is detected in the smoke region image, the position and range of the smoke are located.
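The localization step of claim 8 can be sketched as post-processing of YOLOv3-style outputs. The detection tuple layout (corner x, corner y, width, height, confidence) and the 0.5 confidence threshold are assumptions for illustration, not taken from the patent:

```python
def locate_smoke(detections, conf_thresh=0.5):
    """Filter detections by confidence and report each smoke instance as a
    position (box centre) and a range (box width/height) in pixel units."""
    located = []
    for x, y, w, h, conf in detections:
        if conf >= conf_thresh:
            located.append({"position": (x + w / 2.0, y + h / 2.0),
                            "range": (w, h)})
    return located
```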
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN202011030654.0A 2020-09-27 2020-09-27 Smoke detection method, terminal device and storage medium Pending CN112149583A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011030654.0A CN112149583A (en) 2020-09-27 2020-09-27 Smoke detection method, terminal device and storage medium


Publications (1)

Publication Number Publication Date
CN112149583A true CN112149583A (en) 2020-12-29

Family

ID=73895462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011030654.0A Pending CN112149583A (en) 2020-09-27 2020-09-27 Smoke detection method, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN112149583A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084166A (en) * 2019-04-19 2019-08-02 山东大学 Substation's smoke and fire intelligent based on deep learning identifies monitoring method
CN110807429A (en) * 2019-10-23 2020-02-18 西安科技大学 Construction safety detection method and system based on tiny-YOLOv3
CN110969205A (en) * 2019-11-29 2020-04-07 南京恩博科技有限公司 Forest smoke and fire detection method based on target detection, storage medium and equipment
CN111091072A (en) * 2019-11-29 2020-05-01 河海大学 YOLOv 3-based flame and dense smoke detection method
CN111680632A (en) * 2020-06-10 2020-09-18 深延科技(北京)有限公司 Smoke and fire detection method and system based on deep learning convolutional neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ren Youqun, "Artificial Intelligence" (《人工智能》), 31 January 2020 *
Cheng Shuhong et al., "Improved Gaussian Mixture and YOLOv2 Fusion Smoke Detection Algorithm" (改进的混合高斯与YOLOv2融合烟雾检测算法), Acta Metrologica Sinica (《计量学报》) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861737A (en) * 2021-02-11 2021-05-28 西北工业大学 Forest fire smoke detection method based on image dark channel and YoLov3
CN113139500A (en) * 2021-05-10 2021-07-20 重庆中科云从科技有限公司 Smoke detection method, system, medium and device
CN113139500B (en) * 2021-05-10 2023-10-20 重庆中科云从科技有限公司 Smoke detection method, system, medium and equipment
CN113706815A (en) * 2021-08-31 2021-11-26 沈阳二一三电子科技有限公司 Vehicle fire identification method combining YOLOv3 and optical flow method
CN113883565A (en) * 2021-10-29 2022-01-04 杭州老板电器股份有限公司 Range hood control method and device and range hood
WO2024051297A1 (en) * 2022-09-09 2024-03-14 南京邮电大学 Lightweight fire smoke detection method, terminal device and storage medium

Similar Documents

Publication Publication Date Title
CN112149583A (en) Smoke detection method, terminal device and storage medium
CN111815755B (en) Method and device for determining blocked area of virtual object and terminal equipment
CN106650662B (en) Target object shielding detection method and device
CN109215037B (en) Target image segmentation method and device and terminal equipment
US20180253852A1 (en) Method and device for locating image edge in natural background
WO2019128504A1 (en) Method and apparatus for image processing in billiards game, and terminal device
CN113255516A (en) Living body detection method and device and electronic equipment
CN112559341A (en) Picture testing method, device, equipment and storage medium
CN114187333A (en) Image alignment method, image alignment device and terminal equipment
CN110390295B (en) Image information identification method and device and storage medium
CN108682021B (en) Rapid hand tracking method, device, terminal and storage medium
CN114359048A (en) Image data enhancement method and device, terminal equipment and storage medium
CN111552829B (en) Method and apparatus for analyzing image material
CN108776959B (en) Image processing method and device and terminal equipment
CN108629219B (en) Method and device for identifying one-dimensional code
CN110610178A (en) Image recognition method, device, terminal and computer readable storage medium
CN113158773B (en) Training method and training device for living body detection model
CN110619597A (en) Semitransparent watermark removing method and device, electronic equipment and storage medium
CN111931794B (en) Sketch-based image matching method
CN114359160A (en) Screen detection method and device, electronic equipment and storage medium
CN114937188A (en) Information identification method, device, equipment and medium for sharing screenshot by user
US20170109596A1 (en) Cross-Asset Media Analysis and Processing
CN114140427A (en) Object detection method and device
CN112989924A (en) Target detection method, target detection device and terminal equipment
CN112559340A (en) Picture testing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201229