CN116824514B - Target identification method and device, electronic equipment and storage medium - Google Patents

Target identification method and device, electronic equipment and storage medium

Info

Publication number
CN116824514B
CN116824514B
Authority
CN
China
Prior art keywords
detection
detected
target
value
detection frame
Prior art date
Legal status
Active
Application number
CN202311099393.1A
Other languages
Chinese (zh)
Other versions
CN116824514A (en)
Inventor
陈友明
陈思竹
代辉
Current Assignee
Sichuan Honghe Digital Intelligence Group Co ltd
Original Assignee
Sichuan Honghe Digital Intelligence Group Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Honghe Digital Intelligence Group Co ltd
Priority to CN202311099393.1A
Publication of CN116824514A
Application granted
Publication of CN116824514B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects (scenes; context or environment of the image)
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity (image analysis)
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/08: Learning methods (neural networks)
    • G06T 2207/30232: Surveillance (indexing scheme for image analysis; subject of image)


Abstract

The invention provides a target identification method, a target identification device, electronic equipment and a storage medium, relating to the technical field of image processing, and comprising the following steps: acquiring an image to be detected; performing target detection on the image to be detected with a detection model to obtain a target to be detected and a detection frame; performing a first characteristic judgment on the target to be detected based on the size change of the detection frame, and acquiring a first change value of the detection frame; performing a second characteristic judgment on the target to be detected based on the position change of the detection frame, and acquiring a second change value of the detection frame; and outputting a first identification result when the first change value and the second change value meet a preset condition, the first identification result being that the target to be detected is smoke. In the target identification method, a neural network detects the target and serves as a finder, and a detection module then distinguishes cloud from smoke by their morphological and physical characteristics; this removes the complex computation steps of traditional methods, and the method is highly practicable.

Description

Target identification method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a target recognition method, apparatus, electronic device, and storage medium.
Background
Cloud and smoke are two common phenomena. In scenes such as forest fire detection and fire rescue, existing methods mainly distinguish cloud from smoke by detecting smoke directly with a neural network, so that smoke can be warned of in time. However, because smoke and cloud are difficult to tell apart, such direct detection produces many false alarms and cannot be applied directly to scenes such as forest monitoring. Moreover, because forest monitoring must cover a large area, the volume of detection data is large and the computation cost of the recognition model is high.
Therefore, an efficient and accurate identification method is needed to distinguish cloud from smoke.
Disclosure of Invention
The invention provides a target identification method, a target identification device, electronic equipment and a storage medium, and aims to solve the prior-art problem that smoke and cloud cannot be accurately distinguished, which leads to false smoke detections.
In a first aspect, an embodiment of the present invention provides a method for identifying a target, where the method includes:
Acquiring images to be detected, wherein the images to be detected are a target number of continuous frames;
performing target detection on the image to be detected by adopting a detection model to obtain a target to be detected and a detection frame;
based on the size change of the detection frame, performing first characteristic judgment on the target to be detected, and acquiring a first change value of the detection frame;
based on the position change of the detection frame, performing second characteristic judgment on the target to be detected, and acquiring a second change value of the detection frame;
and under the condition that the first variation value and the second variation value meet preset conditions, outputting a first identification result, wherein the first identification result is that the target to be detected is smoke.
Optionally, performing target detection with the detection model to obtain the target to be detected includes:
slicing the image to be detected to obtain a plurality of slice images;
inputting a plurality of slice images into the detection model, labeling targets of the slice images, and outputting bounding boxes corresponding to the slice images and position coordinates of the bounding boxes;
combining the plurality of bounding boxes with the image to be detected based on the position coordinates of the plurality of bounding boxes to obtain the image to be detected containing the plurality of bounding boxes as a combined image;
And obtaining a target to be detected on the image to be detected based on the combined image.
Optionally, performing target detection on the image to be detected with the detection model to obtain a detection frame further includes:
performing morphological processing on the target to be detected in the image to be detected at least once to obtain a first target to be detected;
labeling a plurality of detection points on the first target to be detected to obtain a plurality of detection points and external rectangles corresponding to each detection point;
and taking the minimum circumscribed rectangle in the circumscribed rectangles corresponding to each detection point as a detection frame of the first target to be detected.
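The morphological processing and circumscribed-rectangle steps described above can be sketched in plain Python. This is a minimal illustration, not the patented implementation: the 3 × 3 dilation neighbourhood, the point format and the use of area as the "minimum" criterion are assumptions made for the example.

```python
def dilate(mask, iterations=1):
    """One pass of 3x3 binary dilation (a basic morphological operation)."""
    h, w = len(mask), len(mask[0])
    for _ in range(iterations):
        out = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                # a pixel turns on if any pixel in its 3x3 neighbourhood is on
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                            out[y][x] = 1
        mask = out
    return mask

def bounding_rect(points):
    """Axis-aligned circumscribed rectangle (x, y, w, h) of a set of points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

def min_rect(rects):
    """Smallest circumscribed rectangle by area, used as the detection frame."""
    return min(rects, key=lambda r: r[2] * r[3])
```

In this sketch, the on-pixels of the dilated mask play the role of detection points, `bounding_rect` produces a circumscribed rectangle per group of points, and `min_rect` selects the final detection frame among the candidates.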
Optionally, acquiring the first change value of the detection frame includes:
acquiring the size values of the detection frames of two adjacent frames;
acquiring a size change value based on the size values of the detection frames of the two adjacent frames, wherein the size change value is a size difference value or a size ratio of the detection frames of the two adjacent frames;
accumulating absolute values of all size change values of the target number of detection frames to serve as the first change value;
obtaining a second variation value of the detection frame comprises the following steps:
acquiring the coordinates of the central points of the detection frames of two adjacent frames;
Acquiring a position change value based on the center point coordinates of the detection frames of the two adjacent frames, wherein the position change value is a center point coordinate difference value or a center point coordinate ratio of the detection frames of the two adjacent frames;
and accumulating absolute values of all position change values of the target number of detection frames to serve as the second change value.
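The two accumulation procedures above can be sketched as follows; the per-frame input formats, `(w, h)` sizes and `(x, y)` centre points, are illustrative assumptions rather than the patent's data structures.

```python
def first_change_value(sizes):
    """Accumulate the absolute size ratios of adjacent detection frames.

    sizes: list of (w, h) per frame. Returns (W_rate, H_rate)."""
    w_rate = sum(abs(w2 / w1) for (w1, _), (w2, _) in zip(sizes, sizes[1:]))
    h_rate = sum(abs(h2 / h1) for (_, h1), (_, h2) in zip(sizes, sizes[1:]))
    return w_rate, h_rate

def second_change_value(centers):
    """Accumulate the absolute centre-point differences of adjacent frames.

    centers: list of (x, y) per frame. Returns (X_move, Y_move)."""
    x_move = sum(abs(x2 - x1) for (x1, _), (x2, _) in zip(centers, centers[1:]))
    y_move = sum(abs(y2 - y1) for (_, y1), (_, y2) in zip(centers, centers[1:]))
    return x_move, y_move
```

Each function pairs frame i with frame i+1 and sums absolute change values over all adjacent pairs, matching the accumulation described above.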
Optionally, the first variation value includes a width variation value and a height variation value of the detection frame, and when the size change value is the size ratio of the detection frames of the two adjacent frames, the width variation value of the detection frame is calculated as:

$$W_{rate}=\sum_{i=1}^{n-1}\left|\frac{W_{i+1}}{W_{i}}\right|$$

and the height variation value of the detection frame is calculated as:

$$H_{rate}=\sum_{i=1}^{n-1}\left|\frac{H_{i+1}}{H_{i}}\right|$$

where $W_{rate}$ represents the width variation value of the detection frame, $H_{rate}$ represents the height variation value of the detection frame, $W_{i}$ and $H_{i}$ represent the width and height of the detection frame in frame $i$, $i$ represents the frame number of the detection frame, and $n$ represents the target number of frames.
Optionally, the second change value includes a change value of the detection frame in a first direction and a change value in a second direction, and when the position change value is the difference of the centre-point coordinates of the detection frames of the two adjacent frames, the change of the detection frame in the first direction is calculated as:

$$X_{move}=\sum_{i=1}^{n-1}\left|X_{i+1}-X_{i}\right|$$

and the change in the second direction is calculated as:

$$Y_{move}=\sum_{i=1}^{n-1}\left|Y_{i+1}-Y_{i}\right|$$

where $X_{move}$ represents the change value of the detection frame in the first direction, $Y_{move}$ represents the change value in the second direction, $X_{i}$ and $Y_{i}$ represent the centre-point coordinates of the detection frame in frame $i$ in the first and second directions, $i$ represents the frame number of the detection frame, and $n$ represents the target number of frames.
Optionally, the target identification method further comprises:
outputting a second identification result when at least one of the first variation value and the second variation value does not meet the preset condition, wherein the second identification result is that the target to be detected is cloud; the preset condition is that the first variation value is greater than a first threshold value and the second variation value is greater than a second threshold value.
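The preset condition described above reduces to a simple two-threshold test; the function name and the threshold values in the test are illustrative, not values given in the patent.

```python
def identify(first_value, second_value, t1, t2):
    """Smoke iff both accumulated change values exceed their thresholds."""
    if first_value > t1 and second_value > t2:
        return "smoke"   # first identification result
    return "cloud"       # second identification result
```

If either accumulated change value stays at or below its threshold, the target behaves like a shape-stable, non-moving object, and the second identification result (cloud) is output.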
In a second aspect, embodiments of the present invention provide an object recognition apparatus, the apparatus comprising:
the acquisition module is used for acquiring images to be detected, wherein the images to be detected are continuous multi-frame images with target quantity;
the detection module is used for carrying out target detection on the image to be detected by adopting a detection model to obtain a target to be detected and a detection frame;
the first change value acquisition module is used for carrying out first characteristic judgment on the target to be detected based on the size change of the detection frame to acquire a first change value of the detection frame;
The second change value acquisition module is used for carrying out second characteristic judgment on the target to be detected based on the position change of the detection frame to acquire a second change value of the detection frame;
the identification module is used for outputting a first identification result when the first variation value and the second variation value meet preset conditions, wherein the first identification result is that the target to be detected is smoke.
Wherein, the detection module includes:
the slicing unit is used for slicing the image to be detected to obtain a plurality of slice images;
the boundary box acquisition unit is used for inputting a plurality of slice images into the detection model, labeling targets of the slice images, and outputting boundary boxes corresponding to the slice images and position coordinates of the boundary boxes;
a combining unit, configured to combine the plurality of bounding boxes with the image to be detected based on position coordinates of the plurality of bounding boxes, to obtain an image to be detected including the plurality of bounding boxes as a combined image;
and the target to be detected acquisition unit is used for acquiring the target to be detected on the image to be detected based on the combined image.
Wherein, the detection module further includes:
the morphological processing unit is used for performing morphological processing on the target to be detected in the image to be detected at least once to obtain a first target to be detected;
the external rectangle obtaining unit is used for marking a plurality of detection points on the first target to be detected to obtain a plurality of detection points and external rectangles corresponding to the detection points;
and the detection frame acquisition unit is used for taking the minimum circumscribed rectangle in the circumscribed rectangles corresponding to each detection point as the detection frame of the first target to be detected.
Wherein, the first change value obtaining module includes:
a size value obtaining unit, configured to obtain size values of the detection frames of two adjacent frames;
a size change value obtaining unit, configured to obtain a size change value based on a size value of the detection frames of the two adjacent frames, where the size change value is a size difference value or a size ratio of the detection frames of the two adjacent frames;
a first variation value obtaining unit, configured to accumulate absolute values of all the size variation values of the target number of detection frames as the first variation value.
Wherein, the second change value obtaining module includes:
A center point coordinate acquiring unit, configured to acquire center point coordinates of the detection frames of two adjacent frames;
the position change value acquisition unit is used for acquiring a position change value based on the center point coordinates of the detection frames of the two adjacent frames, wherein the position change value is a center point coordinate difference value or a center point coordinate ratio of the detection frames of the two adjacent frames;
and a second variation value obtaining unit, configured to accumulate absolute values of all position variation values of the target number of detection frames as the second variation value.
Wherein, the identification module includes:
the second identifying unit is configured to output a second identification result when at least one of the first change value and the second change value does not meet the preset condition, wherein the second identification result is that the target to be detected is cloud; the preset condition is that the first change value is greater than a first threshold value and the second change value is greater than a second threshold value.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
a memory for storing one or more programs;
a processor;
when the one or more programs are executed by the processor, the target identification method according to any one of the first aspects above is implemented.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the target identification method according to any one of the first aspects described above.
The invention has the following advantages. The embodiment of the invention provides a target identification method, a target identification device, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring images to be detected, the images to be detected being a target number of continuous frames; performing target detection on the image to be detected with a detection model to obtain a target to be detected and a detection frame; performing a first characteristic judgment on the target to be detected based on the size change of the detection frame, and acquiring a first change value of the detection frame; performing a second characteristic judgment on the target to be detected based on the position change of the detection frame, and acquiring a second change value of the detection frame; and outputting a first identification result when the first change value and the second change value meet preset conditions, the first identification result being that the target to be detected is smoke. According to the target identification method provided by the embodiment of the invention, the target is detected through the neural network, which serves as a finder, and cloud and smoke are then distinguished by a detection module based on their morphological and physical characteristics. The target identification method removes the complex computation steps of traditional methods and is highly practicable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart illustrating steps of a target recognition method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for acquiring an object to be detected according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a target recognition method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an object recognition device according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the related art, smoke detection mostly takes one of the following forms: directly performing target detection on smoke in an image with a neural network; obtaining a smoke region with a Gaussian mixture model as a background estimation algorithm, obtaining Haar features of the smoke region by integration (a Haar feature is a descriptor used to decide whether a given region contains a target), applying fuzzy and morphological techniques, and finally classifying with AdaBoost (Adaptive Boosting); block-based spatio-temporal analysis combining temporal differencing, a Gaussian mixture model and background subtraction, feeding an SVM (support vector machine) with a Gaussian kernel for classification, with an average reaction time of 1.34 seconds; and classification based on multispectral video and the spectral, temporal and spatial attributes of smoke plumes, using methods such as principal component analysis, spectral variance and grey-level co-occurrence matrices.
The smoke detection methods in current use described above have the following problems. Smoke presents various colours and shapes depending on illumination, viewing angle and the completeness of combustion, and under certain conditions it is difficult to distinguish from cloud; a neural network that directly detects such irregular objects produces many false alarms and therefore cannot be applied directly to a safety-monitoring scene such as forest fire prevention. Surveillance cameras used for forest fire prevention generally need wide coverage and high resolution, with cameras of 2K resolution and above being the common choice. Because a Gaussian mixture model must compute the distance between every data point and every Gaussian distribution, it faces a heavy computational load on large-scale data sets, and obtaining features by integration, spectral analysis and similar methods and then attaching a large machine-learning model further increases resource consumption.
Cloud and smoke are at very different distances from the camera and are in fact different objects, with the following morphological characteristics. A cloud forms by continuous accumulation; its water content reaches 0.2 g to 1 g per cubic metre, so a single cloud weighs on the order of 500,000 kilograms. From the monitoring viewpoint its shape is essentially fixed and it shows no tendency to diffuse. Smoke consists of carbon particles produced by incomplete combustion and the sudden high-temperature pyrolysis of organic matter; it appears rapidly in the picture, has low density, moves quickly with the air, exhibits irregular internal motion, and tends to diffuse. Detecting these physical properties of cloud and smoke makes it possible to distinguish them.
Based on these characteristics, the invention aims to distinguish cloud from smoke by means of image detection technology and morphological processing, making full use of the underlying properties of cloud and smoke under the monitoring view angle and of their physical and motion characteristics.
In a first aspect, an embodiment of the present invention provides a target recognition method, referring to fig. 1, and fig. 1 is a flowchart of steps of the target recognition method provided in the embodiment of the present invention, where the target recognition method includes the following steps:
Step S110: acquiring images to be detected, wherein the images to be detected are continuous multi-frame images with target quantity;
step S120: performing target detection on the image to be detected by adopting a detection model to obtain a target to be detected and a detection frame;
step S130: based on the size change of the detection frame, performing first characteristic judgment on the target to be detected, and acquiring a first change value of the detection frame;
step S140: based on the position change of the detection frame, performing second characteristic judgment on the target to be detected, and acquiring a second change value of the detection frame;
step S150: and under the condition that the first variation value and the second variation value meet preset conditions, outputting a first identification result, wherein the first identification result is that the target to be detected is smoke.
The image to be detected in the embodiment of the invention refers to a target number of continuous video frames acquired by an image acquisition device such as a preset camera, the frames containing the image of the target to be detected. Because cloud and smoke differ in physical and motion characteristics, when the images to be detected are acquired, the physical and motion characteristics of the target across the target number of continuous frames must be obtained; that is, whether the morphological change or the position change of the target to be detected meets the preset condition is judged, the identification result is obtained, and identification of the target to be detected is thereby realised.
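The judgment just described, spanning steps S110 to S150, can be sketched end to end in Python; the `detect` callback standing in for the detection model, the `(w, h, cx, cy)` box format and the threshold values are all assumptions made for illustration.

```python
def recognise(frames, detect, t1, t2):
    """Run the detector per frame, accumulate size and position changes,
    and output the identification result (steps S110-S150 in outline)."""
    boxes = [detect(f) for f in frames]      # S120: one (w, h, cx, cy) per frame
    first = sum(abs(b2[0] / b1[0]) + abs(b2[1] / b1[1])
                for b1, b2 in zip(boxes, boxes[1:]))   # S130: size-ratio change
    second = sum(abs(b2[2] - b1[2]) + abs(b2[3] - b1[3])
                 for b1, b2 in zip(boxes, boxes[1:]))  # S140: centre-point change
    return "smoke" if first > t1 and second > t2 else "cloud"  # S150
```

A growing, drifting detection frame drives both accumulated values above their thresholds and yields "smoke", while a stable frame yields "cloud".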
When step S110 is implemented, since forest coverage is large, a certain number of cameras are first installed at preset positions to acquire the images to be detected. A forest fire scene requires a large detection area and high precision, so cameras with a resolution of at least 2K are generally used for image acquisition; lower-resolution cameras lead to inaccurate target detection. A target number of continuous frames is acquired as the images to be detected so that the target can be identified by monitoring those frames as a whole. To realise the recognition of the target to be detected, the target number is generally set between 5 and 10 inclusive: with too few frames the output is more likely to be cloud, while with too many frames the output is more likely to be smoke, making the recognition result inaccurate in either case and causing false alarms. Setting the target number within the above range therefore allows the target to be detected to be recognised accurately.
In the embodiment of the invention, the high-definition explosion-proof camera has at least 2.2 megapixels (2048 × 1080); when the cameras are deployed, they are spaced 5 to 10 kilometres apart and carry an IPX waterproof rating.
In the embodiment of the invention, the computing platform is a computer with at least one GPU of GTX 1080 Ti class or above, no less than 8 GB of memory, and a processor base frequency of no less than 2.3 GHz.
When step S120 is implemented, the detection model is adopted as the finder of the target to be detected: it performs target detection on the image to be detected to obtain the target to be detected, and the target is then labelled to obtain its detection frame, which serves as the basis for the subsequent judgment conditions.
In an alternative embodiment of the present invention, a detection model is used to detect a target in the image to be detected to obtain a target to be detected, referring to fig. 2, fig. 2 is a flowchart of a method for obtaining the target to be detected according to an embodiment of the present invention, where the method for obtaining the target to be detected includes the following steps:
step 120-1: slicing the image to be detected to obtain a plurality of slice images;
step 120-2: inputting a plurality of slice images into the detection model, labeling targets of the slice images, and outputting bounding boxes corresponding to the slice images and position coordinates of the bounding boxes;
step 120-3: combining the plurality of bounding boxes with the image to be detected based on the position coordinates of the plurality of bounding boxes to obtain the image to be detected containing the plurality of bounding boxes as a combined image;
Step 120-4: and obtaining a target to be detected on the image to be detected based on the combined image.
Specifically, any one of the acquired frames is first sliced to obtain a plurality of slice images. Processing the image to be detected directly with the detection model compresses it, and for an oversized image with a resolution above 2K, compression blurs some targets: for example, newly formed smoke may occupy only a small part of the whole image and, after compression, may be missed, so that no alarm is raised. Slicing the image to be detected ensures that each slice retains the resolution of the original image, enlarges the targets, avoids missed detection of small targets, allows the detection model to detect smoke as soon as it forms, and thus improves small-target detection. The slicing may use an overlap mode, or the image to be detected may be sliced randomly or in an ordered fashion; for example, the image may be sliced in order using an overlap ratio of 0.2 to obtain the corresponding slice images.
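The ordered overlap slicing can be sketched as a tiling of the image plane; the 0.2 overlap ratio follows the example in the text, while the function names and the square-tile assumption are illustrative.

```python
def tile_origins(length, tile, overlap=0.2):
    """Origins of tiles of size `tile` along one axis, adjacent tiles
    overlapping by `overlap * tile`; the last tile is clamped to the edge."""
    stride = max(1, int(tile * (1 - overlap)))
    origins = list(range(0, max(length - tile, 0) + 1, stride))
    if origins[-1] + tile < length:       # ensure the image edge is covered
        origins.append(length - tile)
    return origins

def slice_boxes(width, height, tile, overlap=0.2):
    """(x, y, w, h) slice windows covering a width x height image."""
    return [(x, y, tile, tile)
            for y in tile_origins(height, tile, overlap)
            for x in tile_origins(width, tile, overlap)]
```

Each window keeps the native pixel resolution of the source image, which is the property the slicing step relies on for small-target detection.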
The detection model is selected as a lightweight convolutional neural network; in the embodiment of the invention it may be a YOLO-series algorithm model such as YOLOv5 or YOLOv7. The plurality of slice images obtained by the slicing process are input into the detection model, targets on the slice images are labeled, and a plurality of bounding boxes are output. The targets may be, for example, cloud or smoke; the detection model automatically labels the position coordinates of the targets on the slice images and generates bounding boxes corresponding to the targets, and information such as the position coordinates of the targets to be detected can be obtained by identifying the bounding boxes. The plurality of bounding boxes are then combined with the image to be detected based on their position coordinates, giving the image to be detected containing the plurality of bounding boxes as a combined image. For example, assume the image to be detected is A. Ordered overlap slicing of A yields slice images A1, …, A10; target labeling of A1, …, A10 yields bounding boxes a1, …, a10; and combining the bounding boxes a1, …, a10 with the image to be detected A according to their position coordinates yields the image to be detected A containing the bounding boxes a1, …, a10 as a combined image E. The target to be detected on the image to be detected is then obtained based on the combined image: for example, unified NMS processing is performed on the combined image E, and the regions on the image to be detected A corresponding to the retained bounding boxes a1-a10 are taken as the target to be detected.
NMS (Non-Maximum Suppression) processing is typically used in the post-processing stage of object detection. NMS processing makes the selected bounding box more likely to be the target to be detected and avoids selecting bounding boxes that appear repeatedly, improving the detection accuracy of the target to be detected.
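A minimal greedy NMS sketch — an assumed illustration, not the patent's specific implementation — using axis-aligned (x1, y1, x2, y2) boxes with confidence scores. Duplicate boxes of one target, which arise naturally when two overlapping slices both detect it, are suppressed in favour of the highest-scoring box.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)            # highest-scoring remaining box survives
        keep.append(best)
        # drop every remaining box that overlaps the survivor too strongly
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

The 0.5 IoU threshold is a common default, not a value taken from the patent.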
In an optional embodiment of the present invention, a detection model is used to perform target detection on the image to be detected to obtain a detection frame. Specifically, at least one morphological processing operation is performed on the target to be detected in the image to be detected to obtain a first target to be detected. Morphological processing is a shape-based image processing technique that changes the morphology and characteristics of an image by applying specific operations to the image with structuring elements. Because the target to be detected is an irregularly shaped pattern, morphological processing can extract the characteristics of the target to be detected, eliminate noise, change the shape of the target to be detected, and so on. The morphological processing may be dilation, erosion, opening, closing, or the like. Dilation enlarges the target to be detected, making it more connected; erosion shrinks the target to be detected, making it finer; opening removes noise and smooths the edges of the target to be detected; closing fills holes inside the target to be detected.
Taking erosion and dilation as an illustration: first, the target to be detected is eroded once with a 3x3 convolution kernel to obtain a first eroded image, eliminating small objects or details around the target to be detected; next, the first eroded image is dilated with a 3x3 convolution kernel to obtain a first dilated image, expanding or connecting the eroded target and filling holes or cracks within it; then the first dilated image is eroded again with a 3x3 convolution kernel to obtain a second eroded image, once more eliminating redundant objects or details around the target; and the second eroded image is taken as the target to be detected.
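The erosion-dilation-erosion sequence above can be illustrated on a binary mask. The following pure-NumPy sketch (function names are assumptions, not from the patent) implements 3x3 square-kernel erosion and dilation and chains them in the order described: isolated noise pixels are removed while the main target region survives.

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=False)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            # a pixel survives only if its whole k x k neighbourhood is set
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=False)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            # a pixel is set if anything in its k x k neighbourhood is set
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def refine_target(mask):
    """Erode -> dilate -> erode with 3x3 kernels, as described above."""
    return erode(dilate(erode(mask)))
```

In practice OpenCV's `cv2.erode`/`cv2.dilate` would be the usual choice; the loops here only make the neighbourhood logic explicit.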
Then, target labeling is performed on the first target to be detected to obtain a detection frame of the target to be detected. Optionally, a plurality of detection points are labeled on the first target to be detected; with each detection point as a reference, a circumscribed rectangle parallel to the X axis and the Y axis is generated, giving a circumscribed rectangle of the target to be detected for each detection point; and among the circumscribed rectangles corresponding to the detection points, the minimum circumscribed rectangle is selected as the detection frame of the first target to be detected. Only one method of generating the detection frame is listed here; any method that can generate the minimum circumscribed rectangle of the target to be detected may serve as the acquisition method of the detection frame in the present invention, which is not limited here. In the present invention, the image to be detected, the target to be detected, and the detection frame correspond one-to-one.
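One simple way to realise the minimum circumscribed rectangle of an irregular target — an assumed sketch, since the patent explicitly does not limit the method — is to take the coordinate extrema of the foreground pixels:

```python
import numpy as np

def min_bounding_rect(mask):
    """Axis-aligned minimum circumscribed rectangle (x1, y1, x2, y2)
    enclosing every foreground pixel of a binary mask."""
    ys, xs = np.nonzero(mask)  # coordinates of all foreground pixels
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Because every foreground pixel lies inside this rectangle and each of its four edges touches at least one foreground pixel, no smaller axis-aligned rectangle can enclose the target.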
In the specific implementation of step S130, the target identification method provided by the embodiment of the present invention may judge the target to be detected based on the morphological change of the target, that is, perform a first characteristic judgment on the target to be detected based on the size change of the detection frame, and acquire a first change value of the detection frame. The method for acquiring the first change value of the detection frame includes: first, obtaining the size values of the detection frames of two adjacent frames; performing a difference operation or a ratio operation on the size values of the detection frames of the two adjacent frames, and taking the resulting size difference or size ratio as a size change value; then accumulating the absolute values of all size change values computed over all detection frames of the preset target number, and taking the accumulated value as the first change value. For example, if the preset target number is 5 and the corresponding detection frames are B1, B2, B3, B4 and B5, the size values of the adjacent detection frames B1 and B2 are first obtained and the corresponding operation yields a size change value b1; the size values of the adjacent detection frames B2 and B3, B3 and B4, and B4 and B5 are obtained in turn, yielding size change values b2, b3 and b4; the absolute values of all size change values are then accumulated to obtain an accumulated value b5, and the accumulated value b5 is taken as the first change value.
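The accumulation just described can be written directly. This sketch (function names assumed) supports both the difference form and the ratio form, and sums the absolute per-step changes over the preset number of frames:

```python
def step_changes(values, use_ratio=False):
    """Per-step change between adjacent frames: ratio b/a or difference b - a."""
    if use_ratio:
        return [b / a for a, b in zip(values, values[1:])]
    return [b - a for a, b in zip(values, values[1:])]

def accumulated_change(values, use_ratio=False):
    """First change value: sum of |per-step change| over all adjacent pairs."""
    return sum(abs(c) for c in step_changes(values, use_ratio))
```

Applied to the widths of the five detection frames B1-B5 it yields the accumulated value b5 of the example; the same function applied to the heights gives the height component of the first change value.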
Specifically, the first variation value includes a width variation value and a height variation value of the detection frame. When the first variation value is the size ratio of the detection frames of the two adjacent frames, the width variation value of the detection frame is calculated as follows:

W_rate = W_{i+1} / W_i

The height variation value of the detection frame is calculated as follows:

H_rate = H_{i+1} / H_i

wherein W_rate represents the width variation value of the detection frame, H_rate represents the height variation value of the detection frame, W represents the width of the detection frame, H represents the height of the detection frame, and i represents the frame number of the detection frame.
In the specific implementation of step S140, the target identification method provided by the embodiment of the present invention may judge the target to be detected based on the physical characteristics of the target, that is, perform a second characteristic judgment on the target to be detected based on the position change of the detection frame, and acquire a second change value of the detection frame. The method for acquiring the second change value of the detection frame includes: first, obtaining the center-point coordinates of the detection frames of two adjacent frames; performing a difference operation or a ratio operation on the center-point coordinates of the detection frames of the two adjacent frames, and taking the resulting position difference or position ratio as a position change value; then accumulating the absolute values of all position change values computed over all detection frames of the preset target number, and taking the accumulated value as the second change value. For example, if the preset target number is 5 and the corresponding detection frames are C1, C2, C3, C4 and C5, the center-point coordinates of the adjacent detection frames C1 and C2 are first obtained and the corresponding operation yields a position change value c1; the center-point coordinates of the adjacent detection frames C2 and C3, C3 and C4, and C4 and C5 are obtained in turn, yielding position change values c2, c3 and c4; the absolute values of all position change values are then accumulated to obtain an accumulated value c5, and the accumulated value c5 is taken as the second change value.
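The centre-point accumulation mirrors the size accumulation; the sketch below (name assumed, difference form only — the ratio form follows the same pattern) keeps the two directions separate, as the patent defines X_move and Y_move independently:

```python
def second_change_value(centers):
    """Accumulate |per-step centre-point difference| separately per axis.

    centers: list of (x, y) centre points of detection frames C1..Cn.
    Returns (dx_total, dy_total): accumulated absolute movement in the
    first (horizontal) and second (vertical) directions.
    """
    dx = sum(abs(x2 - x1) for (x1, _), (x2, _) in zip(centers, centers[1:]))
    dy = sum(abs(y2 - y1) for (_, y1), (_, y2) in zip(centers, centers[1:]))
    return dx, dy
```

Taking absolute values before summing means that drifting smoke which wanders back and forth still accumulates a large second change value, whereas a stationary cloud accumulates almost none.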
Specifically, the second change value includes a change value of the detection frame in a first direction and a change value of the detection frame in a second direction. When the second change value is the difference of the center-point coordinates of the detection frames of the two adjacent frames, the change of the detection frame in the first direction is calculated as follows:

X_move = X_{i+1} - X_i

The change of the detection frame in the second direction is calculated as follows:

Y_move = Y_{i+1} - Y_i

wherein X_move represents the change value of the detection frame in the first direction, Y_move represents the change value of the detection frame in the second direction, X represents the center-point coordinate of the detection frame in the first direction, Y represents the center-point coordinate of the detection frame in the second direction, and i represents the frame number of the detection frame.
In the specific implementation of step S150, the first change value calculated in step S130 and the second change value calculated in step S140 are judged against a preset condition. If and only if both the first change value and the second change value meet the preset condition, a first identification result is output, the first identification result being that the target to be detected is smoke. When at least one of the first change value and the second change value does not meet the preset condition, a second identification result is output, the second identification result being that the target to be detected is cloud. The preset condition is that the first change value is greater than a first threshold and the second change value is greater than a second threshold. When the size ratio is taken as the first change value and the position ratio as the second change value, both the first threshold and the second threshold take 1 as the reference value; when the size difference and the position difference are taken as the change values, both thresholds take 0 as the reference value. That is, as long as both the size and the position of the target to be detected change, the target is judged to be smoke, because the physical characteristics and morphological changes of cloud and smoke differ: cloud is a large, continuously accumulated object whose size and position do not change within a short time.
Illustratively, the above judgment includes the following cases: when the first change value is greater than the first threshold and the second change value is greater than the second threshold, the first identification result is output, namely that the target to be detected is smoke; when the first change value is greater than the first threshold but the second change value is less than or equal to the second threshold, the second identification result is output, namely that the target to be detected is cloud; when the first change value is less than or equal to the first threshold and the second change value is greater than the second threshold, the second identification result is output, namely that the target to be detected is cloud; and when the first change value is less than or equal to the first threshold and the second change value is less than or equal to the second threshold, the second identification result is output, namely that the target to be detected is cloud. With this judgment method, cloud and smoke can be accurately distinguished, and the identification result is sent to the manager, realizing smoke alarm so that forest fire prevention can be handled in time.
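The four cases above collapse to a single conjunction. A sketch of the decision (function name is an assumption; the default thresholds of 0 correspond to the difference form, and 1 would be used for the ratio form, per the text above):

```python
def classify_target(first_change, second_change,
                    first_threshold=0.0, second_threshold=0.0):
    """Smoke iff both change values exceed their thresholds; otherwise cloud."""
    if first_change > first_threshold and second_change > second_threshold:
        return "smoke"   # size AND position both changed across the frames
    return "cloud"       # at least one characteristic stayed stable
```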
Referring to fig. 3, fig. 3 is a flow chart of a target identification method according to an embodiment of the invention. First, a target number of images to be detected are acquired through a preset camera; target detection is performed on the images to be detected through a detection model; the detection frame and position coordinates of the target to be detected are obtained through morphological processing of the target to be detected; and the preset condition is judged based on the physical characteristics and morphological changes of cloud and smoke to obtain the identification result of the target to be detected, the identification result being cloud or smoke.
The embodiment of the invention provides a target identification method, which includes: acquiring images to be detected, the images to be detected being a target number of continuous multi-frame images; performing target detection on the images to be detected with a detection model to obtain a target to be detected and a detection frame; performing a first characteristic judgment on the target to be detected based on the size change of the detection frame and acquiring a first change value of the detection frame; performing a second characteristic judgment on the target to be detected based on the position change of the detection frame and acquiring a second change value of the detection frame; and, when both the first change value and the second change value meet preset conditions, outputting a first identification result, the first identification result being that the target to be detected is smoke. The target identification method provided by the embodiment of the invention not only performs target detection through a neural network, but uses the neural network as a finder and then uses the morphology and physical characteristics of cloud and smoke as a discrimination module to distinguish cloud from smoke. The method removes the complex calculation links of traditional methods and is highly realizable. Small targets are discovered through slicing, so a target can be found in time from the moment smoke is generated, and the subsequent identification module detects smoke accurately, achieving efficient and accurate forest fire monitoring.
Based on the same inventive concept, an embodiment of the present invention provides an object recognition device, referring to fig. 4, fig. 4 is a schematic diagram of the object recognition device provided by the embodiment of the present invention, where the object recognition device includes:
an obtaining module 410, configured to obtain an image to be detected, where the image to be detected is a target number of continuous multi-frame images;
the detection module 420 is configured to perform target detection on the image to be detected by using a detection model, so as to obtain a target to be detected and a detection frame;
a first change value obtaining module 430, configured to perform a first feature judgment on the target to be detected based on a dimensional change of the detection frame, to obtain a first change value of the detection frame;
a second change value obtaining module 440, configured to perform a second feature judgment on the target to be detected based on the position change of the detection frame, to obtain a second change value of the detection frame;
and the identification module 450 is configured to output a first identification result when the first variation value and the second variation value both meet a preset condition, where the first identification result is that the target to be detected is smoke.
Wherein, the detection module includes:
the slicing unit is used for slicing the image to be detected to obtain a plurality of slice images;
the bounding box acquisition unit is used for inputting a plurality of slice images into the detection model, labeling targets of the slice images, and outputting bounding boxes corresponding to the slice images and position coordinates of the bounding boxes;
a combining unit, configured to combine the plurality of bounding boxes with the image to be detected based on position coordinates of the plurality of bounding boxes, to obtain an image to be detected including the plurality of bounding boxes as a combined image;
and the target to be detected acquisition unit is used for acquiring the target to be detected on the image to be detected based on the combined image.
Wherein, the detection module further includes:
the morphological processing unit is used for performing morphological processing on the target to be detected in the image to be detected at least once to obtain a first target to be detected;
the circumscribed rectangle acquisition unit is used for labeling a plurality of detection points on the first target to be detected to obtain circumscribed rectangles corresponding to the detection points;
and the detection frame acquisition unit is used for taking the minimum circumscribed rectangle in the circumscribed rectangles corresponding to each detection point as the detection frame of the first target to be detected.
Wherein, the first change value obtaining module includes:
a size value obtaining unit, configured to obtain size values of the detection frames of two adjacent frames;
a size change value obtaining unit, configured to obtain a size change value based on a size value of the detection frames of the two adjacent frames, where the size change value is a size difference value or a size ratio of the detection frames of the two adjacent frames;
a first variation value obtaining unit, configured to accumulate absolute values of all the size variation values of the target number of detection frames as the first variation value.
Wherein, the second change value obtaining module includes:
a center point coordinate acquiring unit, configured to acquire center point coordinates of the detection frames of two adjacent frames;
the position change value acquisition unit is used for acquiring a position change value based on the center point coordinates of the detection frames of the two adjacent frames, wherein the position change value is a center point coordinate difference value or a center point coordinate ratio of the detection frames of the two adjacent frames;
and a second variation value obtaining unit, configured to accumulate absolute values of all position variation values of the target number of detection frames as the second variation value.
Wherein, the identification module includes:
the second identification unit is configured to output a second identification result when at least one of the first change value and the second change value does not meet the preset condition, where the second identification result is that the target to be detected is cloud, and the preset condition is that the first change value is greater than a first threshold and the second change value is greater than a second threshold.
Based on the same inventive concept, an embodiment of the present invention discloses an electronic device. Fig. 5 shows a schematic diagram of the electronic device disclosed in the embodiment of the present invention. As shown in fig. 5, the electronic device 100 includes a memory 110 and a processor 120 connected through a bus; the memory 110 stores a computer program that can be run on the processor 120 to implement the steps of the target identification method disclosed by the embodiment of the present invention.
Based on the same inventive concept, embodiments of the present invention disclose a computer readable storage medium having stored thereon a computer program/instructions which, when executed by a processor, implement steps in a target recognition method disclosed by embodiments of the present invention.
In this specification, each embodiment is described in a progressive manner, each embodiment focusing on its differences from the other embodiments; for identical and similar parts, the embodiments may be referred to one another.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, electronic devices, and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or terminal device comprising the element.
The target identification method, device, electronic equipment and storage medium provided by the present invention have been described in detail above, with specific examples used to illustrate the principles and embodiments of the present invention; the above examples serve only to help understand the method and core ideas of the present invention. Meanwhile, those skilled in the art may make changes to the specific embodiments and application scope in accordance with the ideas of the present invention; in view of the above, the content of this description should not be construed as limiting the present invention.

Claims (9)

1. A method of target identification, the method comprising:
acquiring images to be detected, wherein the images to be detected are continuous multi-frame images with target quantity;
performing target detection on the image to be detected by adopting a detection model to obtain a target to be detected and a detection frame;
based on the size change of the detection frame, performing first characteristic judgment on the target to be detected, and acquiring a first change value of the detection frame; the obtaining the first variation value of the detection frame includes: acquiring the size values of the detection frames of two adjacent frames; acquiring a size change value based on the size values of the detection frames of the two adjacent frames, wherein the size change value is a size difference value or a size ratio of the detection frames of the two adjacent frames; accumulating absolute values of all size change values of the target number of detection frames to serve as the first change value;
Based on the position change of the detection frame, performing second characteristic judgment on the target to be detected, and acquiring a second change value of the detection frame; the obtaining the second variation value of the detection frame includes: acquiring the coordinates of the central points of the detection frames of two adjacent frames; acquiring a position change value based on the center point coordinates of the detection frames of the two adjacent frames, wherein the position change value is a center point coordinate difference value or a center point coordinate ratio of the detection frames of the two adjacent frames; accumulating absolute values of all position change values of the target number of detection frames to serve as the second change value;
and under the condition that the first variation value and the second variation value meet preset conditions, outputting a first identification result, wherein the first identification result is that the target to be detected is smoke.
2. The method for identifying an object according to claim 1, wherein the detecting the object by using the detection model to obtain the object to be detected comprises:
slicing the image to be detected to obtain a plurality of slice images;
inputting a plurality of slice images into the detection model, labeling targets of the slice images, and outputting bounding boxes corresponding to the slice images and position coordinates of the bounding boxes;
Combining the plurality of bounding boxes with the image to be detected based on the position coordinates of the plurality of bounding boxes to obtain the image to be detected containing the plurality of bounding boxes as a combined image;
and obtaining a target to be detected on the image to be detected based on the combined image.
3. The method for identifying an object according to claim 1, wherein the detecting the object of the image to be detected using the detection model to obtain a detection frame, further comprises:
performing morphological processing on the target to be detected in the image to be detected at least once to obtain a first target to be detected;
labeling a plurality of detection points on the first target to be detected to obtain a plurality of detection points and external rectangles corresponding to each detection point;
and taking the minimum circumscribed rectangle in the circumscribed rectangles corresponding to each detection point as a detection frame of the first target to be detected.
4. The method according to claim 1, wherein the first variation value includes a width variation value and a height variation value of the detection frame, and when the first variation value is a size ratio of the detection frames of the two adjacent frames, the width variation value of the detection frame is calculated as follows:

W_rate = W_{i+1} / W_i

The height variation value of the detection frame is calculated as follows:

H_rate = H_{i+1} / H_i

wherein W_rate represents the width variation value of the detection frame, H_rate represents the height variation value of the detection frame, W represents the width of the detection frame, H represents the height of the detection frame, and i represents the frame number of the detection frame.
5. The method of claim 1, wherein the second change value includes a change value of the detection frame in a first direction and a change value of the detection frame in a second direction, and when the second change value is a difference of the center-point coordinates of the detection frames of the two adjacent frames, the change of the detection frame in the first direction is calculated as follows:

X_move = X_{i+1} - X_i

The change of the detection frame in the second direction is calculated as follows:

Y_move = Y_{i+1} - Y_i

wherein X_move represents the change value of the detection frame in the first direction, Y_move represents the change value of the detection frame in the second direction, X represents the center-point coordinate of the detection frame in the first direction, Y represents the center-point coordinate of the detection frame in the second direction, and i represents the frame number of the detection frame.
6. The target recognition method according to claim 1, comprising:
outputting a second identification result when at least one of the first variation value and the second variation value does not meet the preset condition, wherein the second identification result is that the target to be detected is cloud, the preset condition is that the first variation value is larger than a first threshold value, and the second variation value is larger than a second threshold value.
7. An object recognition apparatus, comprising:
the acquisition module is used for acquiring images to be detected, wherein the images to be detected are continuous multi-frame images with target quantity;
the detection module is used for carrying out target detection on the image to be detected by adopting a detection model to obtain a target to be detected and a detection frame;
the first change value acquisition module is used for carrying out first characteristic judgment on the target to be detected based on the size change of the detection frame to acquire a first change value of the detection frame;
the second change value acquisition module is used for carrying out second characteristic judgment on the target to be detected based on the position change of the detection frame to acquire a second change value of the detection frame;
the identification module is used for outputting a first identification result when the first variation value and the second variation value meet preset conditions, wherein the first identification result is that the target to be detected is smoke;
wherein, the first change value obtaining module includes:
a size value obtaining unit, configured to obtain size values of the detection frames of two adjacent frames;
a size change value obtaining unit, configured to obtain a size change value based on a size value of the detection frames of the two adjacent frames, where the size change value is a size difference value or a size ratio of the detection frames of the two adjacent frames;
a first variation value obtaining unit, configured to accumulate the absolute values of all size variation values of the target number of detection frames as the first variation value;
the second variation value obtaining module includes:
a center point coordinate acquiring unit, configured to acquire center point coordinates of the detection frames of two adjacent frames;
the position change value acquisition unit is used for acquiring a position change value based on the center point coordinates of the detection frames of the two adjacent frames, wherein the position change value is a center point coordinate difference value or a center point coordinate ratio of the detection frames of the two adjacent frames;
and a second variation value obtaining unit, configured to accumulate absolute values of all position variation values of the target number of detection frames as the second variation value.
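The accumulation performed by the first and second variation value obtaining units above can be sketched as follows. Boxes are assumed to be (x1, y1, x2, y2) corners, and "size" is taken here to mean box area; the claim leaves the exact size measure and shift metric open, so both are assumptions for this example.

```python
def box_area(box):
    """Area of a detection box given as (x1, y1, x2, y2) corners."""
    x1, y1, x2, y2 = box
    return (x2 - x1) * (y2 - y1)

def box_center(box):
    """Center (X, Y) of a detection box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def variation_values(boxes):
    """Accumulate |size difference| and |center shift| over adjacent frames,
    yielding the first and second variation values described in claim 7."""
    first = second = 0.0
    for prev, cur in zip(boxes, boxes[1:]):
        first += abs(box_area(cur) - box_area(prev))
        px, py = box_center(prev)
        cx, cy = box_center(cur)
        second += abs(cx - px) + abs(cy - py)
    return first, second
```

Accumulating absolute values over the whole frame sequence smooths out single-frame detector jitter, so the thresholds compare against sustained change rather than one noisy frame.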
8. An electronic device, comprising:
a memory for storing one or more programs;
a processor;
wherein, when the one or more programs are executed by the processor, the processor implements the object recognition method according to any one of claims 1-6.
9. A computer readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the object recognition method according to any one of claims 1-6.
CN202311099393.1A 2023-08-30 2023-08-30 Target identification method and device, electronic equipment and storage medium Active CN116824514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311099393.1A CN116824514B (en) 2023-08-30 2023-08-30 Target identification method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311099393.1A CN116824514B (en) 2023-08-30 2023-08-30 Target identification method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116824514A CN116824514A (en) 2023-09-29
CN116824514B true CN116824514B (en) 2023-12-08

Family

ID=88126118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311099393.1A Active CN116824514B (en) 2023-08-30 2023-08-30 Target identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116824514B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060008268A (en) * 2005-12-31 2006-01-26 주식회사 센텍 Smoke detecting method and system using ccd image
CN102663350A (en) * 2012-03-23 2012-09-12 长安大学 Road tunnel fire detection method based on video
CN104766347A (en) * 2015-04-29 2015-07-08 上海电气集团股份有限公司 Cloud cluster movement prediction method based on foundation cloud chart
CN104978733A (en) * 2014-04-11 2015-10-14 富士通株式会社 Smoke detection method and smoke detection device
CN106408846A (en) * 2016-11-29 2017-02-15 周川 Image fire hazard detection method based on video monitoring platform
JP2017102719A (en) * 2015-12-02 2017-06-08 能美防災株式会社 Flame detection device and flame detection method
CN109961042A (en) * 2019-03-22 2019-07-02 中国人民解放军国防科技大学 Smoke detection method combining deep convolutional neural network and visual change diagram
CN111223129A (en) * 2020-01-10 2020-06-02 深圳中兴网信科技有限公司 Detection method, detection device, monitoring equipment and computer readable storage medium
CN112232107A (en) * 2020-08-18 2021-01-15 中国商用飞机有限责任公司 Image type smoke detection system and method
CN112507865A (en) * 2020-12-04 2021-03-16 国网山东省电力公司电力科学研究院 Smoke identification method and device
CN114612844A (en) * 2022-03-21 2022-06-10 北京明略昭辉科技有限公司 Smoking detection method and device, electronic equipment and storage medium
CN114821414A (en) * 2022-04-22 2022-07-29 深圳市瑞驰信息技术有限公司 Smoke and fire detection method and system based on improved YOLOV5 and electronic equipment
CN115546682A (en) * 2022-09-20 2022-12-30 华南理工大学 Dynamic smoke detection method based on video
CN116245915A (en) * 2023-03-07 2023-06-09 上海锡鼎智能科技有限公司 Target tracking method based on video
CN116311000A (en) * 2023-05-16 2023-06-23 合肥中科类脑智能技术有限公司 Firework detection method, device, equipment and storage medium
CN116403141A (en) * 2023-04-03 2023-07-07 深圳市巨龙创视科技有限公司 Firework detection method, system and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210192175A1 (en) * 2019-12-20 2021-06-24 Volant Aerial, Inc. System and method for the early visual detection of forest fires using a deep convolutional neural network


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A real-time video smoke detection algorithm based on Kalman filter and CNN; Alessio Gagliardi et al.; Journal of Real-Time Image Processing; Vol. 18; pp. 2085-2095 *
Video fire flame detection method based on BP neural network; Duan Suolin et al.; Journal of Changzhou University (Natural Science Edition); Vol. 29, No. 2; pp. 65-70 *
Fire monitoring method based on Kalman-filter moving-target tracking; Yang Bing et al.; Information Technology; No. 7; pp. 101-105 *
Forest smoke detection algorithm based on improved YOLOv5; Xiong Xiaohao et al.; Changjiang Information & Communications; No. 05; pp. 70-72 *
Research on video smoke detection algorithms; Zhao Yihan; China Master's Theses Full-text Database, Engineering Science and Technology I; No. 07 (2020); B026-15 *

Also Published As

Publication number Publication date
CN116824514A (en) 2023-09-29

Similar Documents

Publication Publication Date Title
US11468660B2 (en) Pixel-level based micro-feature extraction
CN109961049B (en) Cigarette brand identification method under complex scene
CN110717489B (en) Method, device and storage medium for identifying text region of OSD (on Screen display)
CN107133973B (en) Ship detection method in bridge collision avoidance system
CN110992329A (en) Product surface defect detection method, electronic device and readable storage medium
CN112052797A (en) MaskRCNN-based video fire identification method and system
CN108898610A (en) A kind of object contour extraction method based on mask-RCNN
CN112861635B (en) Fire disaster and smoke real-time detection method based on deep learning
CN111723654A (en) High-altitude parabolic detection method and device based on background modeling, YOLOv3 and self-optimization
CN111539938B (en) Method, system, medium and electronic terminal for detecting curvature of rolled strip steel strip head
CN109544522A (en) A kind of Surface Defects in Steel Plate detection method and system
CN106910204B (en) A kind of method and system to the automatic Tracking Recognition of sea ship
CN107909081A (en) The quick obtaining and quick calibrating method of image data set in a kind of deep learning
CN111079518A (en) Fall-down abnormal behavior identification method based on scene of law enforcement and case handling area
CN114202646A (en) Infrared image smoking detection method and system based on deep learning
CN113837086A (en) Reservoir phishing person detection method based on deep convolutional neural network
CN114359733A (en) Vision-based smoke fire detection method and system
CN113688820A (en) Stroboscopic stripe information identification method and device and electronic equipment
CN116777877A (en) Circuit board defect detection method, device, computer equipment and storage medium
CN115908356A (en) PCB defect detection method based on LDLFModel
CN114445410A (en) Circuit board detection method based on image recognition, computer and readable storage medium
CN116824514B (en) Target identification method and device, electronic equipment and storage medium
CN110135239A (en) A kind of recognition methods of optical remote sensing image harbour Ship Target
CN108510517B (en) Self-adaptive visual background extraction method and device
CN114973033B (en) Unmanned aerial vehicle automatic detection target and tracking method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant