CN112307916A - Alarm monitoring method based on visible light camera

Alarm monitoring method based on visible light camera

Info

Publication number
CN112307916A
CN112307916A (application CN202011131393.1A)
Authority
CN
China
Prior art keywords
network
target
cnn
fast
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011131393.1A
Other languages
Chinese (zh)
Inventor
黄楠
祝清雷
吕俊杰
苏明辰
刘建梁
姜河
赵莹
赵寰
董辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Sheenrun Optics Electronics Co Ltd
Original Assignee
Shandong Sheenrun Optics Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Sheenrun Optics Electronics Co Ltd filed Critical Shandong Sheenrun Optics Electronics Co Ltd
Priority to CN202011131393.1A
Publication of CN112307916A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The invention discloses an alarm monitoring method based on a visible light camera. Detection is performed with a target detection network based on Faster R-CNN, in which the ResNet-101 network replaces VGG as the feature extraction network. Because the ResNet-101 network is composed of residual modules, it mitigates the gradient vanishing or exploding and network degradation problems caused by increasing network depth. The features extracted by the ResNet-101 network are more detailed, which improves the accuracy of target detection.

Description

Alarm monitoring method based on visible light camera
Technical Field
The invention relates to an alarm monitoring method, in particular to an alarm monitoring method based on a visible light camera, and belongs to the technical field of alarm monitoring.
Background
With the development of science and technology, monitoring systems are widely used across many industries. However, most existing monitoring systems rely on manual inspection to determine whether an abnormal situation has occurred in the monitored area, and the accuracy of manual inspection drops sharply when an operator watches many monitoring videos for a long time. Some monitoring systems check whether a suspicious person is present in the monitored area; this problem can be addressed with target detection, and current target detection methods include the background subtraction method, the frame difference method and the like. However, because a camera in a monitoring system may not be stationary, the background and color texture of the captured image can be very complex. If a generic method such as background subtraction or frame differencing is used, the poor background modeling of these algorithms leads to false detections and missed detections.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide an alarm monitoring method based on a visible light camera that improves the accuracy of alarm monitoring.
In order to solve the above technical problem, the technical solution adopted by the invention is as follows. An alarm monitoring method based on a visible light camera comprises the following steps:
S01), after a detection instruction from the user is obtained, or within the preset detection period, the system starts detection;
S02), taking a screenshot of the video from the visible light camera to obtain an original image;
S03), preprocessing and resizing the image acquired in step S02;
S04), performing target detection on the image obtained in step S03 with a target detection network based on Faster R-CNN, to detect whether a suspicious target matching the specified characteristics exists in the image;
S05), judging the detection result of step S04: if no suspicious target appears, returning to step S01; if a suspicious target appears, reminding the client, raising an alarm, displaying the detection image generated in step S04 to the client, and waiting for the client's feedback;
S06), handling the feedback obtained in step S05: if the client confirms that the target is suspicious, storing the detection picture generated in step S04; if the user feeds back that the target is not suspicious, deleting the images from steps S02, S03 and S04 to reduce the occupation of storage space;
S07), judging whether the detection end time has been reached or a stop-detection command from the user has been received; if so, stopping detection, otherwise returning to step S02.
Further, the target detection network based on Faster R-CNN in step S04 comprises a Fast R-CNN network and an RPN network. The RPN samples region information from the image as proposed regions and is trained to identify regions that may contain targets; the Fast R-CNN network further processes the region information collected by the RPN, determines the target category in each region, adjusts the size of the region, and locates the specific position of the target in the image.
Further, the process of target detection by the target detection network based on the Faster R-CNN is as follows:
S41), inputting a picture of fixed size into the Faster R-CNN network, where it passes through the shared convolutional layers to extract a feature map;
S42), feeding the feature map into the RPN network, which predicts the positions of windows containing targets through a convolution layer based on a sliding operation. The convolution layer performs two convolution operations: one judges whether the area covered by the sliding window belongs to a target, in which case the proposed region is kept and input to the Fast R-CNN network, while the proposed region is discarded if the information in the window is identified as background; the other convolution operation calculates the offset of the proposed region and obtains its position in the actual picture;
S43), in the Fast R-CNN network, inputting the feature map and the proposed regions output by the RPN network into a pooling layer to generate regions of interest, pooling each region of interest into a fixed-length vector, and using it as the input of the fully connected layers for target classification prediction and bounding-box regression prediction.
Further, when target detection is performed with the target detection network based on Faster R-CNN, a picture database containing various human body postures is established; in the picture database, only human targets that face the camera and are clearly visible are annotated with bounding boxes. The image set in the picture database is divided into a sample set and a training set at a ratio of 4:1, and the established picture database is used to train the target detection network.
Further, the process of training the target detection network is as follows:
A1), training the ResNet-101 network model with data from the picture database to obtain an ImageNet model;
A2), initializing the RPN with the ImageNet model generated in step A1, using the ImageNet model for the convolution operations in the RPN network, training the RPN, and collecting proposed regions;
A3), initializing Fast R-CNN with the ImageNet model, using the ImageNet model for the convolution operations in the Fast R-CNN network, and training the Fast R-CNN network with the proposed regions generated in step A2;
A4), fixing the convolutional layers after the Fast R-CNN network training and training the RPN network a second time;
A5), fixing the convolutional layers shared by the RPN and Faster R-CNN, and fine-tuning Fast R-CNN with the proposed regions generated in step A4.
Further, the ImageNet model is composed of residual modules.
Further, when training the target detection network, ResNet is trained and the first four convolutional stages of the ResNet-101 network are used as the initialization parameters of the convolutional layers shared by Fast R-CNN and the RPN, to extract image features; in Fast R-CNN, the last stage of the ResNet-101 network is used as the initialization parameters of the detection network.
Further, in step S02, a capture time interval is set, and screenshots of the visible light camera video are taken at that interval.
The beneficial effects of the invention are as follows: the alarm monitoring method based on a visible light camera performs detection with a target detection network based on Faster R-CNN and selects the ResNet-101 network instead of VGG as the feature extraction network. Because the ResNet-101 network is composed of residual modules, it mitigates the gradient vanishing or exploding and network degradation problems caused by increasing network depth. The features extracted by the ResNet-101 network are more detailed, which improves the accuracy of target detection.
Drawings
FIG. 1 is a flow chart of the present method;
FIG. 2 is a flow chart of target detection.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments.
Example 1
The embodiment discloses an alarm monitoring method based on a visible light camera, as shown in fig. 1, comprising the following steps:
S01), after a detection instruction from the user is obtained, or within the preset detection period, the system starts detection;
According to step S01, a detection period may be set in advance: when the detection start time is reached, the system detects whether there is a suspicious person in the monitored area, and when the detection end time is reached, the system stops detecting. In addition, the user can start detection manually, in which case the system starts detecting immediately.
S02), taking a screenshot of the video from the visible light camera to obtain an original image;
In step S02, a capture time interval is set, and screenshots of the visible light camera video are taken at that interval. In this embodiment, the Faster R-CNN processing time is denoted T, and the capture interval is set to 3/2 T.
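The capture schedule can be pictured with a short sketch. It is only an illustration under assumptions not stated in the patent: the use of OpenCV, the stream address, and the concrete value of the processing time T are all hypothetical.

```python
import time

import cv2  # OpenCV is assumed to be available for reading the camera stream

T_ALG = 0.2                      # assumed Faster R-CNN processing time T, in seconds
CAPTURE_INTERVAL = 1.5 * T_ALG   # capture interval of 3/2 T, as in this embodiment


def capture_frames(stream_url="rtsp://camera/stream"):  # hypothetical stream address
    """Yield one screenshot (original image) per capture interval (step S02)."""
    cap = cv2.VideoCapture(stream_url)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:            # stream ended or could not be read
                break
            yield frame           # hand the original image on to step S03
            time.sleep(CAPTURE_INTERVAL)
    finally:
        cap.release()
```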
S03), preprocessing and resizing the image acquired in step S02;
In this embodiment, the preprocessing removes obvious noise from the picture, and the size transformation converts the preprocessed pictures to a uniform size.
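A possible reading of this preprocessing step is sketched below; the Gaussian denoising filter and the target resolution are illustrative assumptions, since the patent does not name a specific filter or size.

```python
import cv2
import numpy as np

TARGET_SIZE = (1000, 600)  # assumed uniform (width, height); the patent does not fix a size


def preprocess(image: np.ndarray) -> np.ndarray:
    """Step S03: remove obvious noise, then convert the picture to a uniform size."""
    denoised = cv2.GaussianBlur(image, (3, 3), 0)  # illustrative denoising choice
    return cv2.resize(denoised, TARGET_SIZE, interpolation=cv2.INTER_LINEAR)
```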
S04), performing target detection on the image obtained in step S03 with a target detection network based on Faster R-CNN, to detect whether a suspicious target matching the specified characteristics exists in the image;
S05), judging the detection result of step S04: if no suspicious target appears, returning to step S01; if a suspicious target appears, reminding the client, raising an alarm, displaying the detection image generated in step S04 to the client, and waiting for the client's feedback;
S06), handling the feedback obtained in step S05: if the client confirms that the target is suspicious, storing the detection picture generated in step S04; if the user feeds back that the target is not suspicious, deleting the images from steps S02, S03 and S04 to reduce the occupation of storage space;
S07), judging whether the detection end time has been reached or a stop-detection command from the user has been received; if so, stopping detection, otherwise returning to step S02.
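Steps S02 to S07 can be tied together in a schematic loop such as the following sketch; `preprocess`, `detect_targets`, `alert_client`, and `wait_for_feedback` are hypothetical hooks standing in for the components described above, not an implementation given in the patent.

```python
import os
import time

import cv2  # used only to write the confirmed detection image to disk


def monitoring_loop(frames, preprocess, detect_targets, alert_client,
                    wait_for_feedback, end_time, save_dir="detections"):
    """Schematic S02-S07 loop; every callable argument is a hypothetical hook."""
    os.makedirs(save_dir, exist_ok=True)
    for i, frame in enumerate(frames):                 # S02: screenshots at the capture interval
        image = preprocess(frame)                      # S03: denoise and resize
        suspicious, annotated = detect_targets(image)  # S04: Faster R-CNN based detection
        if suspicious:                                 # S05: suspicious target found
            alert_client(annotated)                    # remind the client and raise an alarm
            if wait_for_feedback():                    # S06: client confirms the target
                cv2.imwrite(os.path.join(save_dir, f"det_{i}.jpg"), annotated)
            # otherwise the intermediate images are simply dropped to free storage
        if time.time() >= end_time:                    # S07: detection end time reached
            break
```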
In this embodiment, the target detection network based on Faster R-CNN in step S04 comprises a Fast R-CNN network and an RPN network. The RPN samples region information from the image as proposed regions and is trained to identify regions that may contain targets; the Fast R-CNN network further processes the region information collected by the RPN, determines the target category in each region, adjusts the size of the region, and locates the specific position of the target in the image.
As shown in FIG. 2, the process of target detection by the target detection network based on the Faster R-CNN is as follows:
S41), inputting a picture of fixed size into the Faster R-CNN network, where it passes through the shared convolutional layers to extract a feature map;
S42), feeding the feature map into the RPN network, which predicts the positions of windows containing targets through a convolution layer based on a sliding operation. The convolution layer performs two convolution operations: one judges whether the area covered by the sliding window belongs to a target, in which case the proposed region is kept and input to the Fast R-CNN network, while the proposed region is discarded if the information in the window is identified as background; the other convolution operation calculates the offset of the proposed region and obtains its position in the actual picture;
In this embodiment, the sliding operation means sliding a 3×3 window over the feature map and feeding each 3×3 region into the convolution layer in turn, so that features are extracted from the 3×3 region centered on every pixel of the feature map.
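This sliding 3×3 convolution and the two subsequent convolution operations of step S42 can be sketched as a small PyTorch module; the 1024 input channels (the ResNet-101 conv4 output) and the anchor count of 9 are common Faster R-CNN choices assumed here, not values stated in the patent.

```python
import torch
import torch.nn as nn


class RPNHead(nn.Module):
    """Sketch of the RPN convolution of step S42: one 3x3 sliding convolution over the
    shared feature map, followed by two sibling 1x1 convolutions - one judging target
    vs. background, one regressing the offsets of the proposed regions."""

    def __init__(self, in_channels: int = 1024, num_anchors: int = 9):  # assumed values
        super().__init__()
        self.sliding_conv = nn.Conv2d(in_channels, 512, kernel_size=3, padding=1)
        self.cls_logits = nn.Conv2d(512, num_anchors * 2, kernel_size=1)   # target / background
        self.bbox_deltas = nn.Conv2d(512, num_anchors * 4, kernel_size=1)  # proposal offsets

    def forward(self, feature_map: torch.Tensor):
        t = torch.relu(self.sliding_conv(feature_map))  # 3x3 window at every position
        return self.cls_logits(t), self.bbox_deltas(t)


# e.g. a 1024-channel feature map of spatial size 38x63
scores, deltas = RPNHead()(torch.randn(1, 1024, 38, 63))
```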
S43), in the Fast R-CNN network, inputting the feature map and the proposed regions output by the RPN network into a pooling layer to generate regions of interest, pooling each region of interest into a fixed-length vector, and using it as the input of the fully connected layers for target classification prediction and bounding-box regression prediction.
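A minimal sketch of step S43 using torchvision's `roi_pool` operator is given below; the 7×7 pooled size, the 1/16 feature stride, the fully connected layer widths, and the two-class (person/background) head are assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_pool

feature_map = torch.randn(1, 1024, 38, 63)                 # shared feature map from the backbone
proposals = torch.tensor([[0, 10.0, 20.0, 120.0, 200.0],   # (batch_idx, x1, y1, x2, y2)
                          [0, 50.0, 30.0, 300.0, 400.0]])  # in input-image coordinates

# Pool every proposed region into a fixed 7x7 grid, i.e. a fixed-length vector once flattened.
pooled = roi_pool(feature_map, proposals, output_size=(7, 7), spatial_scale=1.0 / 16)
flat = pooled.flatten(start_dim=1)                         # one fixed-length vector per region

fc = nn.Sequential(nn.Linear(1024 * 7 * 7, 4096), nn.ReLU(),
                   nn.Linear(4096, 4096), nn.ReLU())
cls_score = nn.Linear(4096, 2)      # target classification prediction (person / background)
bbox_pred = nn.Linear(4096, 2 * 4)  # bounding-box regression prediction, 4 values per class

h = fc(flat)
print(cls_score(h).shape, bbox_pred(h).shape)  # torch.Size([2, 2]) torch.Size([2, 8])
```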
In this embodiment, target detection is performed with the target detection network based on Faster R-CNN. A picture database containing various human body postures is established; in the picture database, only human targets that face the camera and are clearly visible are annotated with bounding boxes. The image set in the picture database is divided into a sample set and a training set at a ratio of 4:1, and the established picture database is used to train the target detection network.
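The 4:1 division of the picture database could be done with a simple shuffled split, as in the sketch below; the file names are hypothetical and the roles of the two subsets follow the text above.

```python
import random


def split_database(image_paths, ratio=4, seed=0):
    """Split the annotated picture database into two subsets at a 4:1 ratio."""
    rng = random.Random(seed)
    paths = list(image_paths)
    rng.shuffle(paths)
    cut = len(paths) * ratio // (ratio + 1)
    return paths[:cut], paths[cut:]  # (sample set, training set), sized 4:1


sample_set, training_set = split_database([f"db/img_{i:04d}.jpg" for i in range(1000)])
print(len(sample_set), len(training_set))  # 800 200
```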
The process of training the target detection network comprises the following steps:
A1), training the ResNet-101 network model with data from the picture database to obtain an ImageNet model;
A2), initializing the RPN with the ImageNet model generated in step A1, using the ImageNet model for the convolution operations in the RPN network, training the RPN, and collecting proposed regions;
A3), initializing Fast R-CNN with the ImageNet model, using the ImageNet model for the convolution operations in the Fast R-CNN network, and training the Fast R-CNN network with the proposed regions generated in step A2;
A4), fixing the convolutional layers after the Fast R-CNN network training and training the RPN network a second time;
A5), fixing the convolutional layers shared by the RPN and Faster R-CNN, and fine-tuning Fast R-CNN with the proposed regions generated in step A4.
In this embodiment, the ImageNet model is composed of residual modules. Because network depth is an important factor in achieving good performance, ResNet-101, with its 101-layer network, is selected as the feature extraction model; as the depth of the network increases, the number of feature layers increases correspondingly. However, too great a depth causes gradient vanishing or gradient explosion: the performance of the network worsens and, as the number of layers grows, the error rate rises significantly. A ResNet model composed of residual modules solves these problems well.
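The residual module referred to here has the usual skip-connection form, sketched below as a bottleneck block of the kind ResNet-101 is built from; the channel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class Bottleneck(nn.Module):
    """Residual (bottleneck) module: the block learns F(x) and outputs F(x) + x; the skip
    connection is what lets very deep networks such as ResNet-101 avoid degradation."""

    def __init__(self, channels: int = 256, mid: int = 64):  # illustrative channel sizes
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1, bias=False), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1, bias=False), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.body(x) + x)  # add the input back over the skip connection


out = Bottleneck()(torch.randn(1, 256, 56, 56))
```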
In training the target detection network, ResNet is trained and the first four convolutional stages of the ResNet-101 network are used as the initialization parameters of the convolutional layers shared by Fast R-CNN and the RPN, to extract image features; in Fast R-CNN, the last stage of the ResNet-101 network is used as the initialization parameters of the detection network.
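One way to realise this split with torchvision's ResNet-101 is sketched below: the stem and layer1–layer3 (the first four convolutional stages) act as the shared feature extractor, layer4 (the last stage) initialises the detection head, and the final loop shows how the shared layers can be frozen as in training steps A4 and A5. This is an illustrative sketch, not the patent's implementation.

```python
import torch
import torch.nn as nn
import torchvision

resnet = torchvision.models.resnet101()  # in practice initialised with ImageNet weights

# First four convolutional stages (stem + layer1-layer3): the convolutional layers
# shared by the RPN and Fast R-CNN, used to extract image features.
shared_conv = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
                            resnet.layer1, resnet.layer2, resnet.layer3)

# Last stage (layer4, i.e. conv5): used to initialise the Fast R-CNN detection head.
detection_head = resnet.layer4

features = shared_conv(torch.randn(1, 3, 600, 1000))
print(features.shape)  # roughly torch.Size([1, 1024, 38, 63]), a stride-16 feature map

# Training steps A4-A5: fix the shared convolutional layers while the RPN and the
# Fast R-CNN head are retrained / fine-tuned.
for p in shared_conv.parameters():
    p.requires_grad_(False)
```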
The foregoing description is only for the basic principle and the preferred embodiments of the present invention, and modifications and substitutions by those skilled in the art according to the present invention belong to the protection scope of the present invention.

Claims (8)

1. An alarm monitoring method based on a visible light camera is characterized in that: the method comprises the following steps:
S01), after a detection instruction from the user is obtained, or within the preset detection period, the system starts detection;
S02), taking a screenshot of the video from the visible light camera to obtain an original image;
S03), preprocessing and resizing the image acquired in step S02;
S04), performing target detection on the image obtained in step S03 with a target detection network based on Faster R-CNN, to detect whether a suspicious target matching the specified characteristics exists in the image;
S05), judging the detection result of step S04: if no suspicious target appears, returning to step S01; if a suspicious target appears, reminding the client, raising an alarm, displaying the detection image generated in step S04 to the client, and waiting for the client's feedback;
S06), handling the feedback obtained in step S05: if the client confirms that the target is suspicious, storing the detection picture generated in step S04; if the user feeds back that the target is not suspicious, deleting the images from steps S02, S03 and S04 to reduce the occupation of storage space;
S07), judging whether the detection end time has been reached or a stop-detection command from the user has been received; if so, stopping detection, otherwise returning to step S02.
2. The alarm monitoring method based on the visible light camera according to claim 1, characterized in that: the target detection network based on Faster R-CNN in step S04 comprises a Fast R-CNN network and an RPN network, wherein the RPN samples region information from the image as proposed regions and is trained to identify regions that may contain targets, and the Fast R-CNN network further processes the region information collected by the RPN, determines the target category in each region, adjusts the size of the region, and locates the specific position of the target in the image.
3. The alarm monitoring method based on the visible light camera according to claim 2, characterized in that: the process of target detection by the target detection network based on the Faster R-CNN is as follows:
S41), inputting a picture of fixed size into the Faster R-CNN network, where it passes through the shared convolutional layers to extract a feature map;
S42), feeding the feature map into the RPN network, which predicts the positions of windows containing targets through a convolution layer based on a sliding operation. The convolution layer performs two convolution operations: one judges whether the area covered by the sliding window belongs to a target, in which case the proposed region is kept and input to the Fast R-CNN network, while the proposed region is discarded if the information in the window is identified as background; the other convolution operation calculates the offset of the proposed region and obtains its position in the actual picture;
S43), in the Fast R-CNN network, inputting the feature map and the proposed regions output by the RPN network into a pooling layer to generate regions of interest, pooling each region of interest into a fixed-length vector, and using it as the input of the fully connected layers for target classification prediction and bounding-box regression prediction.
4. The alarm monitoring method based on the visible light camera according to claim 2, characterized in that: target detection is performed with the target detection network based on Faster R-CNN; a picture database containing various human body postures is established, in which only human targets that face the camera and are clearly visible are annotated with bounding boxes; the image set in the picture database is divided into a sample set and a training set at a ratio of 4:1; and the established picture database is used to train the target detection network.
5. The visible-light-camera-based alarm monitoring method according to claim 4, wherein: the process of training the target detection network comprises the following steps:
A1), training the ResNet-101 network model with data from the picture database to obtain an ImageNet model;
A2), initializing the RPN with the ImageNet model generated in step A1, using the ImageNet model for the convolution operations in the RPN network, training the RPN, and collecting proposed regions;
A3), initializing Fast R-CNN with the ImageNet model, using the ImageNet model for the convolution operations in the Fast R-CNN network, and training the Fast R-CNN network with the proposed regions generated in step A2;
A4), fixing the convolutional layers after the Fast R-CNN network training and training the RPN network a second time;
A5), fixing the convolutional layers shared by the RPN and Faster R-CNN, and fine-tuning Fast R-CNN with the proposed regions generated in step A4.
6. The alarm monitoring method based on the visible light camera according to claim 5, wherein: the ImageNet model is composed of residual modules.
7. The alarm monitoring method based on the visible light camera according to claim 5, wherein: in training the target detection network, ResNet is trained and the first four convolutional stages of the ResNet-101 network are used as the initialization parameters of the convolutional layers shared by Fast R-CNN and the RPN, to extract image features; in Fast R-CNN, the last stage of the ResNet-101 network is used as the initialization parameters of the detection network.
8. The alarm monitoring method based on the visible light camera according to claim 1, characterized in that: in step S02, a capture time interval is set, and screenshots of the visible light camera video are taken at that interval.
CN202011131393.1A 2020-10-21 2020-10-21 Alarm monitoring method based on visible light camera Pending CN112307916A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011131393.1A CN112307916A (en) 2020-10-21 2020-10-21 Alarm monitoring method based on visible light camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011131393.1A CN112307916A (en) 2020-10-21 2020-10-21 Alarm monitoring method based on visible light camera

Publications (1)

Publication Number Publication Date
CN112307916A true CN112307916A (en) 2021-02-02

Family

ID=74328682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011131393.1A Pending CN112307916A (en) 2020-10-21 2020-10-21 Alarm monitoring method based on visible light camera

Country Status (1)

Country Link
CN (1) CN112307916A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102164270A (en) * 2011-01-24 2011-08-24 浙江工业大学 Intelligent video monitoring method and system capable of exploring abnormal events
CN106691389A (en) * 2017-01-10 2017-05-24 胡佳 Medical information monitoring method and system
CN108275114A (en) * 2018-02-27 2018-07-13 苏州清研微视电子科技有限公司 A kind of Security for fuel tank monitoring system
CN109285139A (en) * 2018-07-23 2019-01-29 同济大学 A kind of x-ray imaging weld inspection method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李明洁 (Li Mingjie): "Research on anti-external-force damage technology for transmission lines based on image analysis", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516040A (en) * 2021-05-12 2021-10-19 山东浪潮科学研究院有限公司 Method for improving two-stage target detection
CN113516040B (en) * 2021-05-12 2023-06-20 山东浪潮科学研究院有限公司 Method for improving two-stage target detection

Similar Documents

Publication Publication Date Title
JP3123587B2 (en) Moving object region extraction method using background subtraction
CN110044486B (en) Method, device and equipment for avoiding repeated alarm of human body inspection and quarantine system
CN105930822A (en) Human face snapshot method and system
CN112734731B (en) Livestock temperature detection method, device, equipment and storage medium
US11468683B2 (en) Population density determination from multi-camera sourced imagery
CN111723773B (en) Method and device for detecting carryover, electronic equipment and readable storage medium
CN111753794B (en) Fruit quality classification method, device, electronic equipment and readable storage medium
CN110569770A (en) Human body intrusion behavior recognition method and device, storage medium and electronic equipment
CN111415339A (en) Image defect detection method for complex texture industrial product
CN114241370A (en) Intrusion identification method and device based on digital twin transformer substation and computer equipment
CN116402852A (en) Dynamic high-speed target tracking method and device based on event camera
CN111263955A (en) Method and device for determining movement track of target object
CN113065454B (en) High-altitude parabolic target identification and comparison method and device
CN111178405A (en) Similar object identification method fusing multiple neural networks
CN112307916A (en) Alarm monitoring method based on visible light camera
CN113869110A (en) Article detection method, device, terminal and computer readable storage medium
CN115083008A (en) Moving object detection method, device, equipment and storage medium
CN116485779B (en) Adaptive wafer defect detection method and device, electronic equipment and storage medium
CN109871456B (en) Method and device for analyzing relationship between watchmen and electronic equipment
CN113689585B (en) Non-inductive attendance card punching method, system and related equipment
CN116229336A (en) Video moving target identification method, system, storage medium and computer
US20190102888A1 (en) Image processing apparatus and method and monitoring system
CN107403192B (en) Multi-classifier-based rapid target detection method and system
CN109859200B (en) Low-altitude slow-speed unmanned aerial vehicle rapid detection method based on background analysis
KR102589150B1 (en) Long-distance object detection system using cumulative difference image

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210202)