CN112464714A - Harmful animal detection method and device based on video monitoring and electronic device - Google Patents


Info

Publication number
CN112464714A
CN112464714A
Authority
CN
China
Prior art keywords
target
targets
preselected
pest
video
Prior art date
Legal status: Pending
Application number
CN202011140296.9A
Other languages
Chinese (zh)
Inventor
吕辰
潘华东
殷俊
张兴明
孙鹤
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202011140296.9A
Publication of CN112464714A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes
    • G06F 18/24137 Distances to cluster centroids
    • G06F 18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a video-surveillance-based method, device, and electronic device for detecting harmful animals. The method comprises: acquiring a video stream, the video stream comprising multiple frames of video images; detecting the video images with a pre-constructed moving-object detection model to obtain a first preselected target set comprising large motion-detection targets, small motion-detection targets, and other motion-detection targets; detecting the video images with a pre-constructed target detection model to obtain a second preselected target set comprising large object targets and small object targets; and determining the pest targets in the video stream from the first preselected target set and the second preselected target set. The application thereby solves the problem in the related art that pest targets in indoor environments cannot be effectively monitored.

Description

Harmful animal detection method and device based on video monitoring and electronic device
Technical Field
The application relates to the field of intelligent monitoring, in particular to a harmful animal detection method and device based on video monitoring and an electronic device.
Background
With rapid economic development and rising consumption levels, more and more people eat their daily meals at restaurants or order take-out. Behind the bright and attractive front of many restaurants, however, the kitchen can be a hiding place for filth. Harmful animal targets hidden in the kitchen, such as rats and cockroaches, pose a serious threat to kitchen hygiene and safety: the bacteria and viruses these pest targets carry can harm the health of diners.
In the related art, kitchen hygiene problems are addressed by manually monitoring pest targets in the kitchen. However, manual monitoring is very difficult, because pest targets such as rats and cockroaches are small, move quickly, and are mostly active at night.
At present, no effective solution has been proposed for the problem that pest targets in indoor environments cannot be effectively monitored.
Disclosure of Invention
The embodiments of the present application provide a video-surveillance-based pest detection method and device and an electronic device, so as to at least solve the problem in the related art that pest targets in indoor environments cannot be effectively monitored.
In a first aspect, the present application provides a video-surveillance-based pest detection method, including:
acquiring a video stream, the video stream comprising multiple frames of video images;
detecting the multiple frames of video images with a pre-constructed moving-object detection model to obtain a first preselected target set, the first preselected target set comprising large motion-detection targets, small motion-detection targets, and other motion-detection targets;
detecting the multiple frames of video images with a pre-constructed target detection model to obtain a second preselected target set, the second preselected target set comprising large object targets and small object targets;
determining pest targets in the video stream from the first preselected target set and the second preselected target set.
In some embodiments, detecting the multiple frames of video images with the pre-constructed moving-object detection model to obtain the first preselected target set includes:
detecting the multiple frames of video images with the moving-object detection model to obtain contour information of a plurality of moving objects in the video images and coordinate information of the image sub-region corresponding to each moving object;
calculating the pixel area of the image sub-region corresponding to each moving object from the coordinate information of that image sub-region;
obtaining the first preselected target set from a preset pixel-area threshold range and the pixel area of the image sub-region corresponding to each moving object, the pixel-area threshold range comprising a maximum pixel-area threshold and a minimum pixel-area threshold, wherein a large motion-detection target denotes a moving object whose image sub-region has a pixel area larger than the maximum pixel-area threshold, a small motion-detection target denotes a moving object whose image sub-region has a pixel area smaller than the minimum pixel-area threshold, and the other motion-detection targets denote moving objects whose image sub-regions have pixel areas within the pixel-area threshold range.
In some of these embodiments, determining the pest targets in the video stream from the first preselected target set and the second preselected target set comprises:
combining the first preselected target set and the second preselected target set to obtain a first summarized target set;
filtering, according to the large object targets and an intersection-ratio algorithm, the other motion-detection targets that meet a preset filtering condition out of the first summarized target set to obtain a second summarized target set;
removing the large motion-detection targets, the small motion-detection targets, and the large object targets from the second summarized target set to obtain a final summarized target set, and taking all targets in the final summarized target set as the pest targets in the video stream.
In some embodiments, filtering the other motion-detection targets that meet the preset filtering condition out of the first summarized target set according to the large object targets and the intersection-ratio algorithm includes:
for each frame of video image, calculating a first intersection ratio between the image sub-region of each other motion-detection target and the image sub-region of each large object target;
comparing the first intersection ratio with a first preset intersection-ratio threshold;
if the first intersection ratio is greater than or equal to the first preset intersection-ratio threshold, filtering the corresponding other motion-detection target out of the first summarized target set.
In some of these embodiments, after said determining pest targets in said video stream from said first set of preselected targets and said second set of preselected targets, said method further comprises:
classifying the pest targets in the video stream with a trained deep-learning classification model to obtain a classified pest target set, the pest target set including a plurality of pest targets, wherein the classes of the deep-learning classification model include at least one of mouse and cockroach.
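As an illustrative stand-in for the trained deep-learning classification model (which the patent does not detail), the sketch below assigns a target's feature vector to the nearest class centroid; the feature definitions and centroid values are invented for illustration only:

```python
# Hedged sketch: a nearest-centroid classifier standing in for the patent's
# deep-learning classification model. The feature space (aspect ratio,
# relative size) and the centroid values are assumptions, not patent data.
import math

CENTROIDS = {
    "mouse":     (2.0, 0.8),   # elongated, relatively large -- assumed values
    "cockroach": (1.5, 0.2),   # flatter, much smaller -- assumed values
}

def classify(features):
    """Return the class whose centroid is nearest to the feature vector."""
    return min(CENTROIDS, key=lambda name: math.dist(features, CENTROIDS[name]))

print(classify((1.9, 0.7)))   # mouse
print(classify((1.4, 0.25)))  # cockroach
```

In a real system the features would come from a trained network's embedding rather than hand-picked measurements.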
In some embodiments, after the classifying the pest objects in the video stream according to the trained deep learning classification model to obtain a set of classified pest objects, the method further includes:
performing target tracking on the pest targets to obtain trajectory information for the ID (identity) corresponding to each pest target;
counting the number of frames in which each pest target appears;
if the number of frames in which a pest target appears is greater than a preset frame-count threshold, determining the dense activity area of that pest target from the trajectory information of its corresponding ID, and outputting alarm information, the alarm information comprising the dense activity area of the pest target and the trajectory information of the corresponding ID.
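A minimal sketch of this tracking-and-alarm logic, under assumed data shapes (observations as `(frame, id, x, y)` tuples) and an assumed definition of the dense activity area as the mean of the track points:

```python
# Sketch only: counts how many frames each track ID appears in; when the
# count exceeds a frame threshold, reports the target's dense activity area
# (here defined as the average of its track points -- an assumption) along
# with its trajectory.
from collections import defaultdict

def alarms(track_points, frame_thresh=3):
    """track_points: list of (frame_idx, track_id, x, y) observations."""
    tracks = defaultdict(list)
    for frame_idx, track_id, x, y in track_points:
        tracks[track_id].append((frame_idx, x, y))
    out = []
    for track_id, pts in tracks.items():
        if len(pts) > frame_thresh:                    # appeared often enough
            cx = sum(p[1] for p in pts) / len(pts)     # dense-activity centre x
            cy = sum(p[2] for p in pts) / len(pts)     # dense-activity centre y
            out.append({"id": track_id,
                        "area": (round(cx, 1), round(cy, 1)),
                        "trajectory": pts})
    return out

# Track 1 appears in 5 frames (alarm); track 2 appears once (no alarm).
obs = [(f, 1, 10 + f, 20) for f in range(5)] + [(0, 2, 50, 50)]
print([a["id"] for a in alarms(obs)])  # [1]
```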
In some of these embodiments, the method further comprises:
calculating the height and the average trajectory change rate of the pest target from the trajectory information of its corresponding ID;
acquiring the preset height threshold range and the preset average trajectory-change-rate threshold range corresponding to each pest class;
determining the type of the pest target from the preset height threshold range, the preset average trajectory-change-rate threshold range, and the height and average trajectory change rate of the pest target.
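The threshold rule above can be sketched as follows; the per-class height and trajectory-change-rate ranges are invented values, not figures from the patent:

```python
# Hedged sketch: assign a pest class whose preset (height, average trajectory
# change rate) ranges contain the target's measured values. All range values
# below are illustrative assumptions.
RANGES = {
    # class: ((min_height, max_height), (min_rate, max_rate))
    "mouse":     ((0.0, 0.1), (5.0, 50.0)),   # low to the ground, fast, erratic
    "cockroach": ((0.0, 0.05), (0.5, 5.0)),   # even lower, slower track changes
}

def pest_type(height, avg_rate):
    for name, ((h_lo, h_hi), (r_lo, r_hi)) in RANGES.items():
        if h_lo <= height <= h_hi and r_lo <= avg_rate <= r_hi:
            return name
    return "unknown"

print(pest_type(0.08, 20.0))  # mouse
print(pest_type(0.03, 2.0))   # cockroach
```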
In a second aspect, the present application provides a video-surveillance-based pest detection device, including:
a data acquisition module for acquiring a video stream, the video stream comprising multiple frames of video images;
a motion detection module for detecting the multiple frames of video images with a pre-constructed moving-object detection model to obtain a first preselected target set, the first preselected target set comprising large motion-detection targets, small motion-detection targets, and other motion-detection targets;
a target detection module for detecting the multiple frames of video images with a pre-constructed target detection model to obtain a second preselected target set, the second preselected target set comprising large object targets and small object targets;
a target determination module for determining pest targets in the video stream from the first preselected target set and the second preselected target set.
In a third aspect, the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the method for detecting harmful animals based on video surveillance according to the first aspect.
In a fourth aspect, the present application provides a storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method for detecting harmful animals based on video surveillance as described in the first aspect.
Compared with the related art, the video-surveillance-based pest detection method, device, and electronic device provided by the embodiments of the present application acquire a video stream comprising multiple frames of video images; detect the video images with a pre-constructed moving-object detection model to obtain a first preselected target set comprising large motion-detection targets, small motion-detection targets, and other motion-detection targets; detect the video images with a pre-constructed target detection model to obtain a second preselected target set comprising large object targets and small object targets; and determine the pest targets in the video stream from the two preselected target sets, thereby solving the problem in the related art that pest targets in indoor environments cannot be effectively monitored.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a method for detecting vermin based on video surveillance according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating the detection of multiple frames of video images according to a moving object detection model in the embodiment of the present application;
FIG. 3 is a flowchart illustrating a process of detecting multiple frames of video images according to a target detection model in an embodiment of the present application;
FIG. 4 is a flow chart of an embodiment of the present application for identifying pest targets in a video stream based on a first set of preselected targets and a second set of preselected targets;
fig. 5 is a flowchart illustrating how other motion-detection targets meeting the preset filtering condition are filtered out of the first summarized target set according to an embodiment of the present application;
fig. 6 is a flowchart of the method for tracking a pest target in a video stream according to an embodiment of the present application;
FIG. 7 is a flow chart of the determination of the type of pest target in a video stream in an embodiment of the present application;
fig. 8 is a flowchart of a method for detecting vermin based on video surveillance according to a preferred embodiment of the present application;
fig. 9 is a block diagram of a hardware configuration of a terminal of the method for detecting harmful animals based on video surveillance according to the embodiment of the present application;
fig. 10 is a block diagram showing a configuration of a harmful animal detection apparatus based on video surveillance according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words in this application do not denote a limitation of quantity and may refer to the singular or the plural. The terms "including," "comprising," "having," and any variations thereof in this application are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. References to "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means two or more. "And/or" describes an association relationship between associated objects and covers three cases: for example, "A and/or B" may mean that A exists alone, A and B exist simultaneously, or B exists alone. The terms "first," "second," "third," and the like herein merely distinguish similar objects and do not denote a particular ordering.
The various techniques described herein may be applied to, but are not limited to, a variety of indoor and outdoor pest monitoring systems, platforms, and devices.
The embodiment of the present application is described by taking a kitchen scene as an example, and fig. 1 is a flowchart of a method for detecting harmful animals based on video monitoring according to the embodiment of the present application, as shown in fig. 1, the flowchart includes the following steps:
step S110, acquiring a video stream; the video stream includes a plurality of frames of video images.
A video stream of a kitchen scene may be obtained from a monitoring device, the video stream including a plurality of frames of video images of a kitchen.
In consideration of the nocturnal habits of harmful animals such as cockroaches and mice, the acquired video stream should include multiple frames of video images of the kitchen at night.
Step S120, detecting the multiple frames of video images with a pre-constructed moving-object detection model to obtain a first preselected target set; the first preselected target set includes large motion-detection targets, small motion-detection targets, and other motion-detection targets.
Specifically, for each frame of video image, the moving-object detection model divides all pixels of the image into background pixels and moving foreground pixels and discards the background pixels, yielding the set of moving foreground pixels of the image, i.e., the first preselected target set.
A large motion-detection target is one that is oversized, for example a kitchen worker walking around the kitchen in a daytime scene. A small motion-detection target is one that is undersized, for example a flickering light spot. The other motion-detection targets are those that meet a preset pest-size condition set from empirical knowledge of pest sizes.
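The background/foreground split of step S120 can be sketched with simple frame differencing; the patent does not specify which moving-object detection model is used, so the detector and its threshold here are illustrative assumptions:

```python
# Hedged sketch: a minimal frame-differencing detector that splits pixels
# into background and moving foreground. The function name and the
# difference threshold are assumptions for illustration.

def foreground_mask(prev_frame, curr_frame, diff_thresh=25):
    """Return a binary mask: 1 where the pixel moved, 0 for background."""
    h, w = len(curr_frame), len(curr_frame[0])
    return [[1 if abs(curr_frame[y][x] - prev_frame[y][x]) > diff_thresh else 0
             for x in range(w)] for y in range(h)]

# Example: a bright 2x2 blob appears in an otherwise static 4x4 frame.
prev = [[10] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
for y in (1, 2):
    for x in (1, 2):
        curr[y][x] = 200

mask = foreground_mask(prev, curr)
moving_pixels = sum(sum(row) for row in mask)
print(moving_pixels)  # 4
```

Production systems would typically use a learned background model (e.g. a Gaussian-mixture background subtractor) rather than a single-frame difference.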
Step S130, detecting the multi-frame video images according to a pre-constructed target detection model to obtain a second preselected target set, wherein the second preselected target set comprises a large object target and a small object target.
The target detection model may be constructed by using a multi-scale target detection algorithm based on deep learning, or may be constructed by using other target detection algorithms, which is not limited in this embodiment.
Step S140 determines pest targets in the video stream based on the first set of preselected targets and the second set of preselected targets.
Through steps S110 to S140, a video stream comprising multiple frames of video images is acquired; the video images are detected with a pre-constructed moving-object detection model to obtain a first preselected target set comprising large, small, and other motion-detection targets; the video images are detected with a pre-constructed target detection model to obtain a second preselected target set comprising large object targets and small object targets; and the pest targets in the video stream are determined from the two preselected target sets. By combining moving-object detection with target detection to detect pest targets in the video stream, missed detections when a pest target is momentarily still are avoided, the detection rate of pest targets is improved, and the problem in the related art that pest targets in indoor environments cannot be effectively monitored is solved.
Further, the types of large object target include indoor staff, and the types of small object target include at least one of cockroach and mouse.
In some embodiments, fig. 2 is a flowchart illustrating a process of detecting multiple frames of video images according to a moving object detection model in the embodiment of the present application, as shown in fig. 2, the process includes the following steps:
step S210, detecting a plurality of frames of video images according to the moving object detection model to obtain the outline information of a plurality of moving objects in the video images and the coordinate information of the image subarea corresponding to each moving object.
The image sub-region corresponding to the moving object may be a minimum rectangular frame circumscribed by the outline of the moving object, or may be an image sub-region of other shapes and sizes, as long as all pixel points of the moving object in the video image can be included.
Step S220, calculating the pixel area of the image sub-region corresponding to each moving object according to the coordinate information of the image sub-region corresponding to each moving object.
Step S230, obtaining a first preselected target set according to a preset pixel area threshold range and the pixel area of the image sub-region corresponding to each moving object; the pixel area threshold range comprises a maximum pixel area threshold and a minimum pixel area threshold; the dynamic detection large target represents a moving object of which the pixel area of the corresponding image sub-region is larger than the maximum pixel area threshold value; the motion detection small target represents a motion object of which the pixel area of the corresponding image sub-region is smaller than the minimum pixel area threshold value; and other moving detection targets represent moving objects of which the pixel areas of the corresponding image sub-regions are within the pixel area threshold value range.
The pixel area threshold interval is set according to empirical data of the size of the volume of the harmful animal and is used for classifying the detected dynamic examination target.
Specifically, the moving objects in step S210 are classified according to the pixel area threshold range and the pixel area of the sub-region of the image corresponding to each moving object, so as to obtain a first preselected target set composed of a plurality of moving detection targets with different sizes.
Through the steps S210 to S230, detecting a plurality of frames of video images according to the moving object detection model to obtain the contour information of a plurality of moving objects in the video images and the coordinate information of the image subarea corresponding to each moving object; calculating the pixel area of the image subregion corresponding to each moving object according to the coordinate information of the image subregion corresponding to each moving object; according to the preset pixel area threshold range and the pixel area of the image subarea corresponding to each moving object, the moving objects detected from the video image are classified to obtain a first preselected target set, so that the harmful animal targets can be screened from the first preselected target set subsequently, the condition of false detection of the harmful animal targets can be avoided, and the accuracy of detecting the harmful animal targets in the video image is improved.
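Steps S210 to S230 can be sketched as follows, with bounding boxes standing in for image sub-regions and assumed pixel-area thresholds:

```python
# Hedged sketch of the pixel-area classification: each moving object is
# represented by its bounding box (x1, y1, x2, y2); the area thresholds are
# invented values, not figures from the patent.

def pixel_area(box):
    x1, y1, x2, y2 = box
    return (x2 - x1) * (y2 - y1)

def classify_motion_targets(boxes, min_area=64, max_area=10000):
    first_preselected = {"large": [], "small": [], "other": []}
    for box in boxes:
        a = pixel_area(box)
        if a > max_area:
            first_preselected["large"].append(box)   # e.g. a walking person
        elif a < min_area:
            first_preselected["small"].append(box)   # e.g. a flickering light spot
        else:
            first_preselected["other"].append(box)   # candidate pest-sized target
    return first_preselected

boxes = [(0, 0, 200, 100),   # area 20000: large motion-detection target
         (5, 5, 9, 9),       # area 16: small motion-detection target
         (50, 50, 80, 70)]   # area 600: other (pest-sized) target
result = classify_motion_targets(boxes)
print(len(result["large"]), len(result["small"]), len(result["other"]))  # 1 1 1
```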
In some embodiments, fig. 3 is a flowchart illustrating a process of detecting multiple frames of video images according to a target detection model in this embodiment, as shown in fig. 3, the process includes the following steps:
step S310, an initial target detection model is constructed.
Step S320, acquiring training sample data; the training sample data comprises a plurality of training sample images.
Step S330, determining the basic proportion setting of the initial target detection model; the basic scale setting includes data on the length, width and height ratios of vermin and humans.
Step S340, setting a first network depth feature map for a small target and a second network depth feature map for a large target in the initial target detection model.
The first network-depth feature map is a feature map from a shallow network layer, and the second network-depth feature map is a feature map from a deeper layer.
It should be noted that using a shallow-layer feature map for small targets preserves fine detail and thus improves the accuracy of small object detection in the video stream, while using a deeper-layer feature map for large targets strengthens semantic fitting and thus improves the accuracy of large object detection in the video stream.
And step S350, training according to the training sample data to obtain a trained target detection model.
Through steps S310 to S350, an initial target detection model is constructed; training sample data comprising multiple training sample images is acquired; the basic scale settings of the model, including length, width, and height ratio data for pests and humans, are determined; a first network-depth feature map for small targets and a second network-depth feature map for large targets are set; and the model is trained on the sample data to obtain the trained target detection model. Setting separate feature maps for small and large targets reduces the false detection rate of target detection, improves the accuracy of both large and small object detection, and so improves the reliability of pest detection results in the video stream.
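One common way to realize such basic scale settings is as anchor boxes whose sizes derive from a feature-map level's stride; the sketch below is an assumption about how such anchors might be laid out (all strides, base sizes, and aspect ratios are invented), with a fine-stride level for small pest-sized targets and a coarse-stride level for large person-sized targets:

```python
# Hedged sketch: generate anchor (w, h) pairs per feature-map level. The
# patent does not give concrete numbers; every value here is illustrative.

def anchors_for_level(stride, base_sizes, aspect_ratios):
    """Anchor (w, h) pairs for one feature-map level with the given stride."""
    anchors = []
    for size in base_sizes:
        for ar in aspect_ratios:            # ar = width / height
            w = size * stride * ar ** 0.5
            h = size * stride / ar ** 0.5
            anchors.append((round(w, 1), round(h, 1)))
    return anchors

# Shallow map (stride 4): small, pest-sized anchors, elongated like mice/roaches.
small_anchors = anchors_for_level(stride=4, base_sizes=(2, 4), aspect_ratios=(1.0, 2.0))
# Deep map (stride 32): large, person-sized anchors, taller than wide.
large_anchors = anchors_for_level(stride=32, base_sizes=(4, 8), aspect_ratios=(0.5,))

print(len(small_anchors), len(large_anchors))  # 4 2
```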
In some embodiments, in each training round, the initial target detection model is trained on the training samples to obtain the round's training result; a confidence for the round is calculated from that result and compared with a preset confidence threshold; and if the confidence exceeds the preset confidence threshold, the model from that round is taken as the trained target detection model.
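The confidence-gated training loop can be sketched as follows; the training and evaluation functions are placeholders, since the patent does not specify them:

```python
# Hedged sketch of the control flow only: keep training until the per-round
# confidence exceeds a preset threshold. `train_one_round` and
# `evaluate_confidence` are assumed callables, not patent APIs.

def train_until_confident(train_one_round, evaluate_confidence,
                          conf_thresh=0.9, max_rounds=100):
    model = None
    for round_idx in range(1, max_rounds + 1):
        model = train_one_round(model)             # one training pass
        confidence = evaluate_confidence(model)    # confidence for this round
        if confidence > conf_thresh:               # model is good enough
            return model, round_idx, confidence
    return model, max_rounds, confidence

# Toy stand-ins: "training" increments a counter; confidence grows with it.
result = train_until_confident(
    train_one_round=lambda m: (m or 0) + 1,
    evaluate_confidence=lambda m: m / 10.0,
)
print(result)  # (10, 10, 1.0)
```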
In some of these embodiments, fig. 4 is a flow chart of an embodiment of the present application for determining pest targets in a video stream based on a first set of preselected targets and a second set of preselected targets, as shown in fig. 4, the flow chart comprising the steps of:
Step S410: combine the first preselected target set and the second preselected target set to obtain a first summarized target set.
The first summarized target set comprises the large dynamic inspection target, the small dynamic inspection target, the other dynamic inspection targets, the large object target and the small object target.
Step S420: according to the large object targets and an intersection-over-union (IoU) algorithm, filter the other dynamic inspection targets meeting a preset filtering condition out of the first summarized target set to obtain a second summarized target set.
The preset filtering condition is as follows: within the same frame of video image, if an other dynamic inspection target and a large object target lie in the same image sub-region, that other dynamic inspection target is filtered out of the first summarized target set.
It should be noted that, within the same frame of video image, if an other dynamic inspection target and a large object target lie in the same image sub-region, they can be judged to belong to the same detected object, so the other dynamic inspection target needs to be filtered out.
For example, during moving object detection, a kitchen worker's finger is detected to be in motion, and the pixel area of the image sub-region containing the finger falls within the pixel area threshold range, so the finger is judged to be an other dynamic inspection target. Meanwhile, during target detection, the kitchen worker is detected and taken as a large object target. The other dynamic inspection target corresponding to the finger therefore needs to be filtered out of the first summarized target set, eliminating the interference of human motion detection results with harmful animal detection in daytime scenes.
Step S430: remove the large dynamic inspection targets, the small dynamic inspection targets and the large object targets from the second summarized target set to obtain a final summarized target set, and take all targets in the final summarized target set as the harmful animal targets in the video stream.
The final summarized target set comprises the other dynamic inspection targets and the small object targets that remain after the filtering processing.
Through steps S410 to S430, the other dynamic inspection targets meeting the preset filtering condition are filtered out of the first summarized target set to eliminate the interference of human motion detection results with harmful animal detection in daytime scenes, and the large dynamic inspection targets, small dynamic inspection targets and large object targets in the second summarized target set are removed to eliminate the interference of kitchen staff, thereby further improving the accuracy of harmful animal detection.
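Steps S410 to S430 amount to a merge, an overlap-based filter, and a category-based removal over labelled detections. A minimal Python sketch under assumed data shapes (category-tagged targets and a caller-supplied overlap predicate; none of these names come from the disclosure):

```python
def determine_pest_targets(motion_targets, object_targets, overlaps_large_object):
    """Sketch of steps S410-S430 with illustrative data shapes.

    motion_targets: list of (category, box), category in
        {"large_motion", "small_motion", "other_motion"}   (first preselected set)
    object_targets: list of (category, box), category in
        {"large_object", "small_object"}                   (second preselected set)
    overlaps_large_object: predicate standing in for the IoU test of step S420.
    """
    first_summary = list(motion_targets) + list(object_targets)   # step S410: merge
    large_objects = [box for cat, box in first_summary if cat == "large_object"]
    # Step S420: drop "other motion" targets that coincide with a large object.
    second_summary = [
        (cat, box) for cat, box in first_summary
        if not (cat == "other_motion"
                and any(overlaps_large_object(box, lo) for lo in large_objects))
    ]
    # Step S430: keep only the remaining other-motion and small-object targets.
    return [(cat, box) for cat, box in second_summary
            if cat in ("other_motion", "small_object")]
```

The surviving targets are exactly the "final summarized target set" of step S430: filtered other-motion detections plus small-object detections.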
In some embodiments, fig. 5 is a flowchart of filtering the other dynamic inspection targets meeting the preset filtering condition out of the first summarized target set in an embodiment of the present application; as shown in fig. 5, the flow comprises the following steps:
Step S510: for each frame of video image, calculate a first intersection-over-union ratio between the image sub-region where each other dynamic inspection target is located and the image sub-region where each large object target is located.
Step S520: compare the first intersection-over-union ratio with a first preset intersection-over-union threshold.
Step S530: if the first intersection-over-union ratio is greater than or equal to the first preset threshold, filter the other dynamic inspection target corresponding to that ratio out of the first summarized target set.
Through steps S510 to S530, by calculating the first intersection-over-union ratio between the image sub-region of each other dynamic inspection target and the image sub-region of each large object target, and comparing it with the first preset threshold, the other dynamic inspection targets that overlap the image sub-regions of the large object targets can be quickly and effectively identified and filtered out.
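The first intersection-over-union computation and threshold filter of steps S510 to S530 might look as follows for axis-aligned boxes in `(x1, y1, x2, y2)` form (an assumed box representation, since the disclosure does not fix one):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def filter_other_motion_targets(other_motion, large_objects, iou_threshold):
    """Steps S510-S530: keep an other-motion box only if its IoU with every
    large-object box stays below the first preset threshold."""
    return [box for box in other_motion
            if all(iou(box, lo) < iou_threshold for lo in large_objects)]
```

A box whose IoU with any large-object box reaches the threshold is dropped, matching the "greater than or equal" condition of step S530.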
In some embodiments, the harmful animal targets in the video stream are classified according to a trained deep learning classification model to obtain a classified pest target set; the pest target set comprises a plurality of pest targets.
Further, first image data of a plurality of harmful animals collected in advance are taken as positive samples, and second image data of the application scene collected in advance are taken as negative samples; an initial deep learning classification model is constructed; and the initial deep learning classification model is trained on the positive samples and the negative samples to obtain the trained deep learning classification model.
In this embodiment, the initial deep learning classification model is trained on positive samples of various harmful animals and negative samples of the application scene to obtain the trained deep learning classification model; classifying the detected harmful animal targets more finely according to this model effectively reduces the false detection rate for small targets such as mice and cockroaches, and improves the accuracy of subsequent alarms.
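The positive/negative sampling scheme above reduces to a simple labelling step before training; the list-of-images data shape is an illustrative assumption:

```python
def build_training_set(pest_images, scene_images):
    """Label pre-collected pest images as positive samples (1) and
    application-scene images as negative samples (0). The image lists are
    placeholders for the first and second image data described above."""
    return ([(image, 1) for image in pest_images] +
            [(image, 0) for image in scene_images])
```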
In some embodiments, fig. 6 is a flowchart of a method for tracking a pest target in a video stream according to an embodiment of the present application; as shown in fig. 6, the flow comprises the following steps:
Step S610: perform target tracking on the pest targets to obtain trajectory information of the ID corresponding to each pest target.
Step S620: count the number of frames in which each pest target appears.
Step S630: if the number of frames in which a pest target appears is greater than a preset frame-number threshold, determine the dense activity area of the pest target according to the trajectory information of its corresponding ID and output alarm information; the alarm information comprises the dense activity area of the pest target and the trajectory information of the corresponding ID.
Through steps S610 to S630, target tracking is performed on the pest targets to obtain the trajectory information of the ID corresponding to each pest target; the number of frames in which each pest target appears is counted; and if that number exceeds the preset frame-number threshold, the dense activity area of the pest target is determined from the trajectory information of its corresponding ID and alarm information is output, so that kitchen staff can focus prevention and control on the dense activity areas of pest targets, improving kitchen hygiene and safety.
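A minimal sketch of the frame-count check and alarm of steps S610 to S630, assuming tracks are stored as ID mapped to a list of (frame index, box) observations; taking the bounding box of all positions as the dense activity area is an assumption, since the disclosure does not fix how that region is computed:

```python
def collect_alarms(tracks, frame_threshold):
    """Steps S610-S630 sketch: emit an alarm for every track whose number of
    observed frames exceeds the preset threshold. The alarm carries the
    trajectory and a rough dense-activity region (here, the bounding box of
    all observed boxes, which is only one plausible choice)."""
    alarms = []
    for track_id, observations in tracks.items():
        if len(observations) > frame_threshold:    # step S620/S630 frame count
            xs1, ys1, xs2, ys2 = zip(*(box for _, box in observations))
            dense_region = (min(xs1), min(ys1), max(xs2), max(ys2))
            alarms.append({"id": track_id,
                           "dense_region": dense_region,
                           "trajectory": [box for _, box in observations]})
    return alarms
```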
In some embodiments, for each pest target, depth feature information of the image sub-region containing the pest target is extracted from each frame of video image; a second intersection-over-union ratio is calculated between the image sub-regions containing the pest target in two adjacent frames of video images; and if the second ratio is greater than or equal to a second threshold and the depth feature information of the pest target matches across the two adjacent frames, the pest target is marked with the same ID in both frames, so that the pest target can be tracked and the trajectory information of its corresponding ID obtained.
The depth feature information comprises a feature matrix formed from feature information such as edges, colors and contours.
It should be noted that if the second intersection-over-union ratio is greater than or equal to the second threshold and the depth feature information of the pest target matches across the two adjacent frames, the pest targets in the two frames are judged to be the same pest target and the pest target is judged to be trackable; the pest target is therefore marked with the same ID in both frames for tracking.
In this embodiment, for each pest target, the depth feature information of the image sub-region containing the pest target is extracted from each frame of video image, and the second intersection-over-union ratio between the image sub-regions containing the pest target in two adjacent frames is calculated; by matching the depth feature information across the two adjacent frames and comparing the second ratio with the second threshold, whether the pest targets in the two adjacent frames are the same can be judged quickly and accurately, achieving effective tracking of the pest target.
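The adjacent-frame association rule (second intersection-over-union test plus depth-feature match) can be sketched as a single predicate; the feature-matching rule is left as a caller-supplied function because the disclosure does not fix it, and the box form `(x1, y1, x2, y2)` is an assumption:

```python
def same_pest(box_prev, box_cur, feat_prev, feat_cur,
              iou_threshold, feature_match):
    """Two adjacent-frame detections get the same ID when their boxes overlap
    enough AND their depth features match. `feature_match` stands in for the
    unspecified matching rule, e.g. a cosine-similarity test."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0
    return (iou(box_prev, box_cur) >= iou_threshold
            and feature_match(feat_prev, feat_cur))
```

Requiring both conditions is what distinguishes this rule from pure box-overlap tracking: two different pests crossing paths overlap in position but should fail the feature match.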
In some embodiments, fig. 7 is a flowchart of determining the type of a pest target in a video stream according to an embodiment of the present application; as shown in fig. 7, the flow comprises the following steps:
Step S710: calculate the height and the average trajectory change rate of the pest target according to the trajectory information of its corresponding ID.
Specifically, the height of the pest target can be determined by combining the trajectory information of different IDs with the height information output by a binocular camera.
Step S720: acquire the preset height threshold range and the preset average trajectory change rate threshold range corresponding to each kind of pest.
Step S730: determine the type of the pest target according to the preset height threshold range, the preset average trajectory change rate threshold range, and the height and average trajectory change rate of the pest target.
It should be noted that, first, in a kitchen scene, pests mostly move on the ground or climb up and down, so a pest's height either stays within a very low range or rises and falls steadily before settling at a certain height. Second, in a kitchen scene, pests such as mice and cockroaches move quickly and over a wide range, so their trajectory coordinates change continuously and rapidly within a given period. The type of a pest target can therefore be further determined from its height and average trajectory change rate, excluding abnormal targets whose spatial position changes excessively between adjacent frames, such as flashes of light and shadow, as well as false detections that remain stationary at all times.
In addition, the specific type of the pest target can be determined from its height and average trajectory change rate. For example, a mouse is larger than a cockroach and moves faster, so within a given period of motion its average trajectory change rate is higher. The height and average trajectory change rate of a pest target can thus serve as auxiliary information for refining classification among pests.
Through steps S710 to S730, the height and average trajectory change rate of the pest target are calculated from the trajectory information of its corresponding ID; according to the preset height threshold range, the preset average trajectory change rate threshold range, and the measured height and average trajectory change rate, false detections such as light flashes and long-stationary targets are excluded, further improving the accuracy of pest detection; and using the height and average trajectory change rate as auxiliary information for refined classification determines the pest type more accurately, making it easier to take corresponding prevention and control measures for different types of pest targets and further improving kitchen hygiene and safety.
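A sketch of the height/rate classification of steps S710 to S730; the trajectory of centre points, the per-pest profile table, and all function names are illustrative assumptions rather than the disclosed implementation:

```python
def average_trajectory_change_rate(trajectory):
    """Mean per-frame displacement of track centre points (cx, cy)."""
    if len(trajectory) < 2:
        return 0.0
    steps = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:])]
    return sum(steps) / len(steps)

def classify_pest(height, change_rate, profiles):
    """Steps S710-S730 sketch: `profiles` maps a pest type to its preset
    (height_range, rate_range); the first profile whose two ranges contain
    the measured values wins. Returning None rejects anomalies such as light
    flashes (huge jumps) or permanently static false detections."""
    for pest_type, ((h_lo, h_hi), (r_lo, r_hi)) in profiles.items():
        if h_lo <= height <= h_hi and r_lo <= change_rate <= r_hi:
            return pest_type
    return None
```

With profiles ordered from faster/larger to slower/smaller pests, this mirrors the mouse-versus-cockroach distinction described above.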
The embodiments of the present application are described and illustrated below by means of preferred embodiments.
Fig. 8 is a flowchart of a video surveillance-based pest detection method according to a preferred embodiment of the present application, including the steps of:
step S810, acquiring a video stream; the video stream includes a plurality of frames of video images.
Step S820: detect the multiple frames of video images according to a pre-constructed moving object detection model to obtain a first preselected target set; the first preselected target set comprises a large dynamic inspection target, a small dynamic inspection target, and other dynamic inspection targets.
Step S830, detecting multiple frames of video images according to a pre-constructed target detection model to obtain a second preselected target set, wherein the second preselected target set comprises a large object target and a small object target; the target detection model includes a first network depth feature map for small targets and a second network depth feature map for large targets.
Step S840: combine the first preselected target set and the second preselected target set to obtain a first summarized target set; and according to the large object targets and the intersection-over-union algorithm, filter the other dynamic inspection targets meeting the preset filtering condition out of the first summarized target set to obtain a second summarized target set.
Step S850: remove the large dynamic inspection targets, the small dynamic inspection targets and the large object targets from the second summarized target set to obtain a final summarized target set, and take all targets in the final summarized target set as the harmful animal targets in the video stream.
It should be noted that the steps illustrated in the above flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as by a set of computer-executable instructions, and that, although a logical order is shown in the flow diagrams, in some cases the steps illustrated or described may be performed in an order different from the order shown here. For example, referring to fig. 1, the execution order of step S120 and step S130 may be interchanged: step S120 may be executed first and then step S130, or step S130 may be executed first and then step S120.
The method provided by the embodiment can be executed in a terminal, a computer or a similar operation device. Taking the example of the method running on the terminal, fig. 9 is a block diagram of a hardware structure of the terminal of the method for detecting harmful animals based on video monitoring according to the embodiment of the present application. As shown in fig. 9, the terminal may include one or more (only one shown in fig. 9) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 9 is only an illustration and is not intended to limit the structure of the terminal. For example, the terminal may also include more or fewer components than shown in fig. 9, or have a different configuration than shown in fig. 9.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to the method for detecting harmful animals based on video surveillance in the embodiment of the present application, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, i.e., to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
This embodiment also provides a video-surveillance-based pest detection device, which is used to implement the above embodiments and preferred embodiments; what has already been described will not be repeated. As used below, the terms "module," "unit," "subunit," and the like may be a combination of software and/or hardware implementing a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
Fig. 10 is a block diagram showing a configuration of a harmful animal detection apparatus based on video surveillance according to an embodiment of the present application, and as shown in fig. 10, the harmful animal detection apparatus based on video surveillance includes:
a data obtaining module 1010, configured to obtain a video stream; the video stream includes a plurality of frames of video images.
A dynamic detection module 1020, configured to detect the multiple frames of video images according to a pre-constructed moving object detection model to obtain a first preselected target set; the first preselected target set comprises a large dynamic inspection target, a small dynamic inspection target, and other dynamic inspection targets.
And the target detection module 1030 is configured to detect multiple frames of video images according to a pre-constructed target detection model to obtain a second preselected target set, where the second preselected target set includes a large object target and a small object target.
A goal determination module 1040 for determining a pest goal in the video stream based on the first preselected set of goals and the second preselected set of goals.
In some of these embodiments, the dynamic detection module 1020 includes a dynamic detection unit, an area calculation unit, a threshold acquisition unit, and a target determination unit, wherein:
and the dynamic detection unit is used for detecting the multi-frame video image according to the moving object detection model to obtain the contour information of a plurality of moving objects in the video image and the coordinate information of the image subarea corresponding to each moving object.
And the area calculation unit is used for calculating the pixel area of the image subregion corresponding to each moving object according to the coordinate information of the image subregion corresponding to each moving object.
The target determination unit is configured to obtain the first preselected target set according to a preset pixel area threshold range and the pixel area of the image sub-region corresponding to each moving object; the pixel area threshold range comprises a maximum pixel area threshold and a minimum pixel area threshold; the large dynamic inspection target represents a moving object whose corresponding image sub-region has a pixel area larger than the maximum pixel area threshold; the small dynamic inspection target represents a moving object whose corresponding image sub-region has a pixel area smaller than the minimum pixel area threshold; and the other dynamic inspection targets represent moving objects whose corresponding image sub-regions have pixel areas within the pixel area threshold range.
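The three-way split performed by the target determination unit can be sketched as follows; the `(object_id, pixel_area)` pair shape is an assumption for illustration:

```python
def partition_motion_targets(moving_objects, min_area, max_area):
    """Split motion-detected objects into the three preselected categories by
    the pixel area of their image sub-region. Each object is a hypothetical
    (object_id, pixel_area) pair."""
    large, small, other = [], [], []
    for obj_id, area in moving_objects:
        if area > max_area:
            large.append(obj_id)      # large dynamic inspection target
        elif area < min_area:
            small.append(obj_id)      # small dynamic inspection target
        else:
            other.append(obj_id)      # other dynamic inspection target
    return large, small, other
```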
In some of these embodiments, the goal determination module 1040 includes a goal combining unit, a filtering processing unit, and a goal determination unit, where:
and the target combining unit is used for combining the first preselected target set and the second preselected target set to obtain a first summarized target set.
A filtering processing unit, configured to filter, according to the large object target and the intersection-over-union algorithm, other dynamic inspection targets meeting the preset filtering condition out of the first summarized target set to obtain a second summarized target set.
And the target determining unit is used for removing the large dynamic inspection target, the small dynamic inspection target and the large object target in the second summarized target set to obtain a final summarized target set, and all targets in the final summarized target set are used as harmful animal targets in the video stream.
In some of these embodiments, the filtering processing unit includes a calculating subunit, a comparing subunit, and a filtering subunit, wherein:
and the calculation subunit is used for calculating a first intersection ratio of the image sub-region where each other dynamic detection target is located and the image sub-region where each large object target is located aiming at each frame of video image.
And the comparison subunit is used for comparing the first cross-over ratio with a first preset cross-over ratio threshold value.
And the filtering subunit is configured to filter, if the first intersection ratio is greater than or equal to a first preset intersection ratio threshold, other dynamic inspection targets corresponding to the first intersection ratio from the first summary target set.
In some embodiments, the video-surveillance-based pest detection device further includes a target classification module, configured to classify the pest targets in the video stream according to the trained deep learning classification model to obtain a classified pest target set; the pest target set comprises a plurality of pest targets; wherein the classification categories of the deep learning classification model comprise at least one of mice and cockroaches.
In some embodiments, the video surveillance-based pest detection apparatus further includes an alarm output module including a target tracking unit, a number of frames present counting unit, and an alarm output unit, wherein:
and the target tracking unit is used for carrying out target tracking processing on the harmful animal targets to obtain the track information of the ID corresponding to each harmful animal target.
A frame-number counting unit, configured to count the number of frames in which each pest target appears.
An alarm output unit, configured to: if the number of frames in which a pest target appears is greater than a preset frame-number threshold, determine the dense activity area of the pest target according to the trajectory information of the ID corresponding to the pest target, and output alarm information, wherein the alarm information comprises the dense activity area of the pest target and the trajectory information of the corresponding ID.
In some embodiments, the object classification module further comprises a data calculation unit, a threshold acquisition unit, and a category determination unit, wherein:
and the data calculation unit is used for calculating the height and the average track change rate of the harmful animal target according to the track information of the ID corresponding to the harmful animal target.
And the threshold value obtaining unit is used for obtaining a preset height threshold value range and a preset average track change rate threshold value range corresponding to each pest.
And the type determining unit is used for determining the type of the harmful animal target according to the preset height threshold range, the preset average track change rate threshold range, the height of the harmful animal target and the average track change rate.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring a video stream; the video stream includes a plurality of frames of video images.
S2, detecting the multiple frames of video images according to a pre-constructed moving object detection model to obtain a first preselected target set; the first preselected target set comprises a large dynamic inspection target, a small dynamic inspection target, and other dynamic inspection targets.
And S3, detecting the multi-frame video images according to the pre-constructed target detection model to obtain a second preselected target set, wherein the second preselected target set comprises a large object target and a small object target.
S4, determining pest targets in the video stream based on the first set of preselected targets and the second set of preselected targets.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the method for detecting harmful animals based on video monitoring in the above embodiments, the embodiments of the present application may provide a storage medium to implement. The storage medium having stored thereon a computer program; the computer program when executed by a processor implements any of the above-described embodiments of the video surveillance-based pest detection method.
It should be understood by those skilled in the art that various features of the above embodiments can be combined arbitrarily, and for the sake of brevity, all possible combinations of the features in the above embodiments are not described, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the features.
The above examples express only several embodiments of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for detecting pests based on video surveillance, comprising:
acquiring a video stream; the video stream comprises a plurality of frames of video images;
detecting multiple frames of the video images according to a pre-constructed moving object detection model to obtain a first preselected target set; the first pre-selection target set comprises a large dynamic inspection target, a small dynamic inspection target and other dynamic inspection targets;
detecting multiple frames of video images according to a pre-constructed target detection model to obtain a second preselected target set, wherein the second preselected target set comprises a large object target and a small object target;
determining pest targets in the video stream based on the first and second sets of preselected targets.
2. The method of claim 1, wherein said detecting a plurality of frames of said video images according to a pre-constructed moving object detection model to obtain a first set of preselected objects comprises:
detecting a plurality of frames of the video images according to the moving object detection model to obtain contour information of a plurality of moving objects in the video images and coordinate information of the image sub-region corresponding to each moving object;
calculating the pixel area of the image sub-region corresponding to each moving object according to the coordinate information of the image sub-region corresponding to each moving object;
obtaining the first preselected target set according to a preset pixel area threshold range and the pixel area of the image sub-region corresponding to each moving object; the pixel area threshold range comprises a maximum pixel area threshold and a minimum pixel area threshold; the large dynamic inspection target represents a moving object whose corresponding image sub-region has a pixel area larger than the maximum pixel area threshold; the small dynamic inspection target represents a moving object whose corresponding image sub-region has a pixel area smaller than the minimum pixel area threshold; and the other dynamic inspection targets represent moving objects whose corresponding image sub-regions have pixel areas within the pixel area threshold range.
3. The method of claim 1 wherein said determining pest targets in said video stream from said first set of preselected targets and said second set of preselected targets comprises:
combining the first preselected target set and the second preselected target set to obtain a first summarized target set;
according to the large object target and an intersection-over-union algorithm, filtering other dynamic inspection targets meeting preset filtering conditions out of the first summarized target set to obtain a second summarized target set;
and removing the large dynamic inspection target, the small dynamic inspection target and the large object target from the second summarized target set to obtain a final summarized target set, and taking all targets in the final summarized target set as harmful animal targets in the video stream.
4. The method according to claim 3, wherein the filtering, according to the large object target and the intersection-over-union algorithm, of other dynamic inspection targets meeting preset filtering conditions out of the first summarized target set comprises:
calculating, for each frame of video image, a first intersection-over-union ratio between the image sub-region where each other dynamic inspection target is located and the image sub-region where each large object target is located;
comparing the first intersection-over-union ratio with a first preset intersection-over-union threshold;
and if the first intersection-over-union ratio is greater than or equal to the first preset threshold, filtering the other dynamic inspection target corresponding to the first intersection-over-union ratio out of the first summarized target set.
5. The method of claim 1, wherein after determining the pest targets in the video stream from the first preselected target set and the second preselected target set, the method further comprises:
classifying the pest targets in the video stream with a trained deep learning classification model to obtain a classified pest target set; the pest target set comprises a plurality of pest targets, and the classes of the deep learning classification model include at least one of mouse and cockroach.
6. The method of claim 5, wherein after classifying the pest targets in the video stream with the trained deep learning classification model to obtain the classified pest target set, the method further comprises:
performing target tracking on the pest targets to obtain track information for the ID corresponding to each pest target;
counting the number of frames in which each pest target appears;
and if the number of frames in which a pest target appears is greater than a preset frame-number threshold, determining a dense activity area of the pest target according to the track information of the ID corresponding to the pest target, and outputting alarm information, wherein the alarm information comprises the dense activity area of the pest target and the track information of the corresponding ID.
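The frame-count check and alarm of claim 6 can be sketched as follows. Reading the "dense activity area" as the bounding box of a target's track points, and the `(frame_index, x, y)` point format, are assumptions for this sketch; the claim does not define either precisely:

```python
def dense_activity_alarms(tracks, frame_thresh):
    """Claim 6 sketch. `tracks` maps a pest-target ID to its list of
    (frame_index, x, y) track points. A target appearing in more frames
    than the threshold triggers an alarm containing its dense activity
    area (here: the bounding box of its track) and its track.
    """
    alarms = {}
    for tid, points in tracks.items():
        if len(points) > frame_thresh:      # appeared often enough
            xs = [p[1] for p in points]
            ys = [p[2] for p in points]
            alarms[tid] = {
                'area': (min(xs), min(ys), max(xs), max(ys)),
                'track': points,
            }
    return alarms
```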
7. The method of claim 6, further comprising:
calculating the height and the average track change rate of a pest target according to the track information of the ID corresponding to the pest target;
acquiring a preset height threshold range and a preset average track change rate threshold range corresponding to each pest type;
and determining the type of the pest target according to the preset height threshold range, the preset average track change rate threshold range, and the height and average track change rate of the pest target.
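Claim 7's range check can be sketched as below. Reading "height" as the vertical extent of the track and "average track change rate" as the mean per-step displacement are assumptions, since the claim leaves both definitions open:

```python
import math

def classify_by_motion(points, ranges):
    """Claim 7 sketch. `points` is a track as (x, y) pixel positions;
    `ranges` maps a pest type to ((h_min, h_max), (r_min, r_max)),
    i.e. its preset height and average-track-change-rate ranges.
    Returns the first pest type whose two ranges both match, else None.
    """
    height = max(y for _, y in points) - min(y for _, y in points)
    steps = [math.dist(points[i], points[i + 1])
             for i in range(len(points) - 1)]
    rate = sum(steps) / len(steps) if steps else 0.0
    for pest, ((h_lo, h_hi), (r_lo, r_hi)) in ranges.items():
        if h_lo <= height <= h_hi and r_lo <= rate <= r_hi:
            return pest
    return None
```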
8. A pest detection device based on video surveillance, comprising:
a data acquisition module, configured to acquire a video stream comprising a plurality of frames of video images;
a motion-detection module, configured to detect the plurality of frames of video images according to a pre-constructed moving object detection model to obtain a first preselected target set comprising large motion-detection targets, small motion-detection targets, and other motion-detection targets;
a target detection module, configured to detect the plurality of frames of video images according to a pre-constructed target detection model to obtain a second preselected target set comprising large object targets and small object targets;
and a target determination module, configured to determine the pest targets in the video stream based on the first preselected target set and the second preselected target set.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the video surveillance-based pest detection method of any one of claims 1 to 7.
10. A storage medium having a computer program stored therein, wherein the computer program is configured to perform the video surveillance-based pest detection method of any one of claims 1 to 7 when run.
CN202011140296.9A 2020-10-22 2020-10-22 Harmful animal detection method and device based on video monitoring and electronic device Pending CN112464714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011140296.9A CN112464714A (en) 2020-10-22 2020-10-22 Harmful animal detection method and device based on video monitoring and electronic device

Publications (1)

Publication Number Publication Date
CN112464714A true CN112464714A (en) 2021-03-09

Family

ID=74834105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011140296.9A Pending CN112464714A (en) 2020-10-22 2020-10-22 Harmful animal detection method and device based on video monitoring and electronic device

Country Status (1)

Country Link
CN (1) CN112464714A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886086A * 2017-12-01 2018-04-06 China Agricultural University Target animal detection method and device based on image/video
CN107886120A (en) * 2017-11-03 2018-04-06 北京清瑞维航技术发展有限公司 Method and apparatus for target detection tracking
CN111340843A (en) * 2020-02-19 2020-06-26 山东大学 Power scene video detection method based on environment self-adaption and small sample learning
CN111507278A (en) * 2020-04-21 2020-08-07 浙江大华技术股份有限公司 Method and device for detecting roadblock and computer equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886120A (en) * 2017-11-03 2018-04-06 北京清瑞维航技术发展有限公司 Method and apparatus for target detection tracking
CN107886086A (en) * 2017-12-01 2018-04-06 中国农业大学 A kind of target animal detection method and device based on image/video
CN111340843A (en) * 2020-02-19 2020-06-26 山东大学 Power scene video detection method based on environment self-adaption and small sample learning
CN111507278A (en) * 2020-04-21 2020-08-07 浙江大华技术股份有限公司 Method and device for detecting roadblock and computer equipment

Similar Documents

Publication Publication Date Title
CN109922310B (en) Target object monitoring method, device and system
CN109886130B (en) Target object determination method and device, storage medium and processor
WO2020151084A1 (en) Target object monitoring method, apparatus, and system
JP6949988B2 (en) Domain identification method, device, storage medium and processor
KR101825045B1 (en) Alarm method and device
Boult et al. Into the woods: Visual surveillance of noncooperative and camouflaged targets in complex outdoor settings
CN108805900B (en) Method and device for determining tracking target
Ko et al. Background subtraction on distributions
Lee et al. Hierarchical abnormal event detection by real time and semi-real time multi-tasking video surveillance system
CN109886999A (en) Location determining method, device, storage medium and processor
US20200394384A1 (en) Real-time Aerial Suspicious Analysis (ASANA) System and Method for Identification of Suspicious individuals in public areas
CN112733690B (en) High-altitude parabolic detection method and device and electronic equipment
CN109886555A (en) The monitoring method and device of food safety
CN110659391A (en) Video detection method and device
CN109886129B (en) Prompt message generation method and device, storage medium and electronic device
WO2019089441A1 (en) Exclusion zone in video analytics
WO2021063046A1 (en) Distributed target monitoring system and method
CN114885119A (en) Intelligent monitoring alarm system and method based on computer vision
CN111612815A (en) Infrared thermal imaging behavior intention analysis method and system
CN108874910A (en) The Small object identifying system of view-based access control model
CN112464714A (en) Harmful animal detection method and device based on video monitoring and electronic device
CN112561957A (en) State tracking method and device for target object
CN109934099A (en) Reminding method and device, storage medium, the electronic device of placement location
CN113837138B (en) Dressing monitoring method, dressing monitoring system, dressing monitoring medium and electronic terminal
CN111062295B (en) Region positioning method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination