CN113807224A - Factory violation detection and tracking method - Google Patents

Factory violation detection and tracking method

Info

Publication number
CN113807224A
CN113807224A (application CN202111044971.2A; granted as CN113807224B)
Authority
CN
China
Prior art keywords
image
target
tracking
matching
violation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111044971.2A
Other languages
Chinese (zh)
Other versions
CN113807224B (en)
Inventor
张成祥 (Zhang Chengxiang)
夏启剑 (Xia Qijian)
张文安 (Zhang Wen'an)
吴晓峰 (Wu Xiaofeng)
吴祥 (Wu Xiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinhua Zhejiang University Of Technology Innovation Joint Research Institute
Original Assignee
Jinhua Zhejiang University Of Technology Innovation Joint Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinhua Zhejiang University Of Technology Innovation Joint Research Institute filed Critical Jinhua Zhejiang University Of Technology Innovation Joint Research Institute
Priority to CN202111044971.2A priority Critical patent/CN113807224B/en
Publication of CN113807224A publication Critical patent/CN113807224A/en
Application granted granted Critical
Publication of CN113807224B publication Critical patent/CN113807224B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Abstract

The invention discloses a factory violation detection and tracking method, which comprises the following steps: collecting an image and inputting it into a model to obtain the position and category of a target frame; for images of a violation category, tracking the target with an improved kcf tracking algorithm and controlling a mobile robot to remind the target by voice, the tracking comprising: saving the target frame of the target as a target image and updating it, stopping the update when the response peak value falls below a first preset threshold value, and defining the current frame image as a search image when the response peak value falls below a second preset threshold value; matching the target image with the search image; defining the rectangular area formed by the matched feature points of the search image as a tracking area; and template-matching the tracking area, before and after enlargement, with the target image, recording the response peak values, and continuing tracking with the area giving the larger response peak value as the latest tracking area. The method resists occlusion, has low computational cost, high detection speed, high accuracy and strong real-time performance, automatically identifies violations for tracking and reminding, saves time and labor, improves operation safety, and has a wide application range.

Description

Factory violation detection and tracking method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a factory violation detection and tracking method.
Background
In recent years, factory accidents have been frequent, usually because workers fail to comply with factory safety regulations or become lax for a period of time, with disastrous consequences. Common causes include impact injuries from not wearing a safety helmet where helmets are required; contamination and poisoning from not wearing a mask where masks are required; and explosions of combustibles from smoking where fire is strictly prohibited. To supervise workers' compliance with factory safety regulations at all times, most factories rely on manual video monitoring. Such manual supervision consumes a large amount of manpower and material resources, and monitoring fatigue can itself lead to accidents.
With the wide application of image processing in the civil and commercial fields, target tracking plays an increasingly important role in intelligent video monitoring, automatic driving, unmanned supermarkets and other fields. Currently, target tracking can be achieved either through a local association algorithm between adjacent frames or through a global association algorithm across all frames. With local association between adjacent frames, tracking accuracy is low because little information is available, and tracking performs poorly in complex scenes with high pedestrian density or severe occlusion between targets. With global association across all frames, tracking accuracy is relatively high, but real-time tracking is not possible: the time complexity of the algorithm is high, and when a video contains many targets the tracking becomes very time-consuming. Moreover, the computational load is large and GPU support is required, which limits its application. Therefore, a target tracking method that is both real-time and highly accurate is needed.
Disclosure of Invention
The invention aims to solve the above problems and provides a factory violation detection and tracking method that resists occlusion, re-acquires a reappearing target for tracking as soon as possible, has low overall computational cost, high detection speed and high accuracy, and is suitable for detecting target violations in real time. It can automatically identify a target's violation and perform tracking and reminding without consuming a large amount of manpower and material resources, greatly improves operation safety, and has a wide application range.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the invention provides a factory violation detection and tracking method, which is applied to a mobile robot and comprises the following steps:
s1, acquiring a behavior image of a worker, inputting the behavior image into a yolov3-tiny network model to detect violation behaviors, and acquiring the position and the type of a target frame;
s2, for worker behavior images belonging to a violation category, tracking the target with the minimum depth information using an improved kcf tracking algorithm and controlling the mobile robot to give the target a voice reminder, the tracking comprising the following steps:
s21, saving a target frame of the target as a target image and updating the target image with each frame of the worker behavior image, and judging whether the response peak value is lower than a first preset threshold value; if not, judging that the target is not occluded and turning to S3; if so, stopping updating the target image, and extracting and describing feature points of the current target image;
s22, judging whether the response peak value is lower than a second preset threshold value; if not, judging that the target is half-occluded and turning to S3; if so, judging that the target is occluded and lost, defining the current frame worker behavior image as a search image, and extracting and describing feature points of the search image;
s23, performing descriptor matching on the target image and the search image, setting an experience threshold, and judging the matching of the feature points of the target image and the search image when the matching value is greater than the experience threshold;
s24, judging whether the number of the matching pairs of the feature points is lower than a third preset threshold value, if so, updating the next frame of worker behavior image as a search image, extracting and describing the feature points of the search image, returning to execute the step S23, otherwise, calculating the maximum position coordinate and the minimum position coordinate in the matching feature points of the search image, and defining a rectangular area formed by the maximum position coordinate and the minimum position coordinate as a tracking area;
s25, performing template matching on the tracking area and the target image, recording a first response peak value V1, performing template matching on the tracking area amplified by a preset multiple and the target image, and recording a second response peak value V2;
s26, judging whether V2 is more than or equal to V1, if so, taking the amplified tracking area as the latest tracking area, otherwise, keeping the tracking area before amplification as the latest tracking area, and taking the latest tracking area as a target frame;
and S3, continuing tracking until the violation is relieved.
Preferably, in step S1, the categories of the target frame include five categories: wearing a safety helmet, not wearing a safety helmet, wearing a mask, not wearing a mask, and smoking, wherein not wearing a safety helmet, not wearing a mask and smoking are violation categories, and wearing a safety helmet and wearing a mask are non-violation categories.
Preferably, the first preset threshold is 0.7 and the second preset threshold is 0.2.
Preferably, feature point extraction and description are performed as follows:
1) extraction of
Selecting any pixel point Q on the corresponding image and setting a fourth preset threshold value T. With pixel point Q as the circle center and R as the radius, there are M surrounding pixel points on the circle. When the pixel values of N consecutive surrounding pixel points (N ≤ M) are all greater than Q + T or all less than Q − T (Q here denoting the pixel value at Q), pixel point Q is taken as a feature point. All pixel points of the image are traversed to obtain all feature points.
2) Description of the invention
For each feature point, 128 pairs of pixel points are taken in its surrounding area, each pair comprising a pixel point P and a pixel point L. The gray values of P and L are compared: if the gray value of P is greater than that of L, the bit is 1, otherwise 0, yielding a 128-bit binary number.
Preferably, the positions of each pair of pixel points of the corresponding feature points on the target image and the search image are the same.
Preferably, in step S23, the matching value is obtained by Hamming distance calculation.
Compared with the prior art, the invention has the beneficial effects that:
1) the method combines the yolov3-tiny network with the kcf tracking algorithm and improves the latter by adding an anti-occlusion function: when the tracked target reappears after occlusion, it is re-acquired through a re-search procedure whose computational cost is small, so the target is found and tracked again as quickly as possible; the overall computational cost is low, detection speed is high and accuracy is high, making the method suitable for detecting target violations in real time;
2) violations can be identified automatically and tracked with reminders, without consuming large amounts of manpower and material resources, greatly improving safety during operation; the method is lightweight, requires no GPU support, and has a wide application range.
Drawings
FIG. 1 is a general flow chart of the detection and tracking method of the present invention;
FIG. 2 is a flow chart of the kcf tracking algorithm of the present invention;
fig. 3 is a graph of the response peak of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is to be noted that, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
As shown in fig. 1-3, a factory violation detection and tracking method applied to a mobile robot includes the following steps:
and S1, acquiring a worker behavior image, inputting the worker behavior image into a yolov3-tiny network model to detect the violation behavior, and acquiring the position and the type of the target frame.
In one embodiment, in step S1, the categories of the target box include five categories: wearing a safety helmet, not wearing a safety helmet, wearing a mask, not wearing a mask, and smoking, wherein not wearing a safety helmet, not wearing a mask and smoking are violation categories, and wearing a safety helmet and wearing a mask are non-violation categories. It should be noted that the categories of the target frame may also be divided according to actual requirements.
The method can be applied to mobile robots (such as patrol robots). The robot body comprises a motion mechanism and a control mainboard; an existing mobile robot, such as a biped mobile robot, can be adopted. The body is further provided with an image acquisition module and a voice module, and the control mainboard is connected to both modules and drives the body to move. The image acquisition module is a camera, and the voice module issues voice reminders, for example reminding workers to cease violation behaviors. The worker behavior image acquired by the camera is input into the yolov3-tiny network model to detect violation behaviors and obtain the position and category of the target frame.
The yolov3-tiny network model can be a model trained in advance, and the training steps are as follows:
1) marking target frames in the collected worker behavior images with labelImg software, the target frames covering the five categories of wearing a safety helmet, not wearing a safety helmet, wearing a mask, not wearing a mask, and smoking, with the corresponding labels hat, person, mask, no_mask and smoke;
2) dividing the labeled image data set into a test set and a training set;
3) training a yolov3-tiny network model by using a training set to obtain a plurality of weight files;
4) testing the average accuracy of each yolov3-tiny network model with the test set, and selecting the weight file with the highest average accuracy for the final yolov3-tiny network model used for violation detection.
S2, for worker behavior images belonging to a violation category, tracking the target with the minimum depth information using the improved kcf tracking algorithm and controlling the mobile robot to give the target a voice reminder. For example, the control mainboard of the mobile robot sends a speed instruction according to the depth information of the image to drive the wheels of the motion mechanism toward the nearest target (that is, the target with the minimum depth information), and the voice module reminds the worker of the violation. Behavior categories are screened automatically and violations are tracked and reminded, greatly improving safety during operation.
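The target selection of step S2 can be sketched as follows (illustrative only: the function name, the detection fields and the depth units are assumptions, not taken from the patent; the class labels follow training step 1 above):

```python
# Sketch (assumption, not the patent's code): filter YOLO detections to the
# violation classes and pick the nearest target by depth, as in step S2.
# "person" is the no-helmet label, per the labelling convention above.
VIOLATION_CLASSES = {"person", "no_mask", "smoke"}

def select_target(detections):
    """detections: list of dicts with 'cls', 'box', 'depth'.
    Returns the violation detection with minimum depth, or None."""
    violations = [d for d in detections if d["cls"] in VIOLATION_CLASSES]
    if not violations:
        return None
    return min(violations, key=lambda d: d["depth"])
```

The robot would then steer toward `select_target(...)["box"]` and trigger the voice reminder; both of those hooks are outside this sketch.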
The improved kcf tracking algorithm is adopted for target tracking, and comprises the following steps:
S21, saving a target frame of the target as a target image and updating the target image with each frame of the worker behavior image, and judging whether the response peak value is lower than a first preset threshold value; if not, judging that the target is not occluded and turning to S3; if so, stopping updating the target image, and extracting and describing feature points of the current target image.
In an embodiment, feature point extraction and description are performed, that is, Fast feature point extraction and BRIEF description are performed, specifically as follows:
1) extraction of
Selecting any pixel point Q on the corresponding image and setting a fourth preset threshold value T. With pixel point Q as the circle center and R as the radius, there are M surrounding pixel points on the circle. When the pixel values of N consecutive surrounding pixel points (N ≤ M) are all greater than Q + T or all less than Q − T (Q here denoting the pixel value at Q), pixel point Q is taken as a feature point. All pixel points of the image are traversed to obtain all feature points.
Specifically, a circle is drawn with pixel point Q as the center and R as the radius. If there are M = 16 surrounding pixel points on the circumference, and the pixel values of N = 10 consecutive surrounding pixel points are all greater than Q + T or all less than Q − T, then pixel point Q is a Fast feature point; similarly, all pixel points of the image are traversed to obtain all Fast feature points.
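The extraction rule above is the FAST corner test. A minimal numpy sketch, assuming the standard radius-3 Bresenham circle (M = 16, N = 10 as in this embodiment; the threshold value T = 20 is illustrative):

```python
import numpy as np

# Bresenham circle of radius 3: the M = 16 surrounding pixels used by FAST.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, y, x, T=20, N=10):
    """Simplified FAST test (a sketch of the patent's extraction step):
    (y, x) is a feature point if at least N consecutive circle pixels are
    all brighter than Q + T or all darker than Q - T."""
    q = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dy, dx in CIRCLE]
    ring2 = ring + ring  # duplicate the ring so consecutive runs can wrap around
    for brighter_or_darker in (lambda p: p > q + T, lambda p: p < q - T):
        run = 0
        for p in ring2:
            run = run + 1 if brighter_or_darker(p) else 0
            if run >= N:
                return True
    return False
```

A full implementation would also traverse every pixel (skipping a 3-pixel border) to collect all feature points; OpenCV's `cv2.FastFeatureDetector_create` provides an optimized version of the same test.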
2) Description of the invention
For each feature point, 128 pairs of pixel points are taken in its surrounding area, each pair comprising a pixel point P and a pixel point L. The gray values of P and L are compared: if the gray value of P is greater than that of L, the bit is 1, otherwise 0, yielding a 128-bit binary number.
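This description rule can be sketched as a BRIEF-style binary test. The patch size and the fixed random pair layout below are assumptions; the patent only fixes that the same 128 pair positions are reused on both the target and search images:

```python
import numpy as np

rng = np.random.default_rng(0)
# 128 fixed (P, L) offset pairs inside a 15x15 patch around the feature point.
# Fixing them once is what makes descriptors comparable across images.
PAIRS = rng.integers(-7, 8, size=(128, 4))

def brief_descriptor(img, y, x):
    """128-bit BRIEF-style descriptor: bit i is 1 iff gray(P_i) > gray(L_i)."""
    bits = np.empty(128, dtype=np.uint8)
    for i, (py, px, ly, lx) in enumerate(PAIRS):
        bits[i] = 1 if img[y + py, x + px] > img[y + ly, x + lx] else 0
    return bits
```

Because `PAIRS` is shared, the same feature point on the target and search images yields directly comparable bit strings, which is what step S23's Hamming matching relies on.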
S22, judging whether the response peak value is lower than a second preset threshold value; if not, judging that the target is half-occluded and turning to S3; if so, judging that the target is occluded and lost, defining the current frame worker behavior image as a search image, and extracting and describing feature points of the search image.
In one embodiment, the first preset threshold is 0.7 and the second preset threshold is 0.2. When the response peak value V satisfies V ≥ 0.7, the target is not occluded and tracking continues; when 0.2 ≤ V < 0.7, the target is half-occluded and tracking continues; when V < 0.2, the target is occluded and lost, and the re-search stage is entered. As shown in fig. 3, the abscissa is the frame number and the ordinate is the response peak value. At frame 125 the response peak value drops to 0.7: the target begins to be occluded, and the response peak value continues to fall. At frame 155 the response peak value drops to 0.2: the target is completely occluded, tracking is lost, and once the response peak value falls below 0.2 the target re-search stage begins. It should be noted that the specific values of the first and second preset thresholds may also be determined according to actual requirements.
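The three-way threshold test of steps S21/S22 can be written as a small helper (a sketch; the state names are illustrative, the thresholds follow this embodiment):

```python
def occlusion_state(v, t1=0.7, t2=0.2):
    """Map a kcf response peak value to the three states of steps S21/S22.
    Thresholds t1, t2 follow the embodiment (0.7 and 0.2)."""
    if v >= t1:
        return "unoccluded"     # keep tracking, keep updating the template
    if v >= t2:
        return "half-occluded"  # keep tracking, stop template updates
    return "lost"               # enter the re-search stage
```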
In one embodiment, the positions of each pair of pixel points of the corresponding feature points on the target image and the search image are the same.
And S23, performing descriptor matching on the target image and the search image, setting an experience threshold, and judging the matching of the feature points of the target image and the search image when the matching value is greater than the experience threshold.
In one embodiment, in step S23, the matching value is obtained by Hamming distance calculation. The most similar match among all candidate pairs is selected via the Hamming distance to improve detection accuracy. For example, the number of differing bits between the 128-bit binary number of the search image and that of the target image is taken as the distance: the greater the distance, the more dissimilar the descriptors; the smaller the distance, the more similar. An empirical threshold is set, and when the matching value exceeds the empirical threshold, the feature points of the target image and the search image are judged to match.
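A sketch of the Hamming-distance matching of step S23. Here the empirical threshold is applied to the distance (smaller = more similar, equivalent to the similarity form in the text); the threshold value 40 and the greedy nearest-neighbour strategy are assumptions:

```python
import numpy as np

def hamming(d1, d2):
    """Hamming distance between two 128-bit descriptors (arrays of 0/1 bits):
    the number of differing bit positions."""
    return int(np.count_nonzero(d1 != d2))

def match_descriptors(target_descs, search_descs, max_dist=40):
    """For each target descriptor, keep the most similar search descriptor
    if it passes the (assumed) empirical distance threshold.
    Returns a list of (target_index, search_index) matching pairs."""
    pairs = []
    for i, d1 in enumerate(target_descs):
        dists = [hamming(d1, d2) for d2 in search_descs]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            pairs.append((i, j))
    return pairs
```

OpenCV's `cv2.BFMatcher(cv2.NORM_HAMMING)` implements the same brute-force matching on packed binary descriptors.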
And S24, judging whether the number of the matching pairs of the feature points is lower than a third preset threshold value, if so, updating the next frame of worker behavior image as a search image, extracting and describing the feature points of the search image, returning to execute the step S23, otherwise, calculating the maximum position coordinate and the minimum position coordinate in the matching feature points of the search image, and defining a rectangular area formed by the maximum position coordinate and the minimum position coordinate as a tracking area.
When the number of feature-point matching pairs is lower than a third preset threshold (for example, 2), the target has not been found, and step S23 must be executed again on the updated search image; when the number of matching pairs is not lower than the third preset threshold, the rectangular area formed by the maximum and minimum position coordinates among the matched feature points of the search image is taken as the tracking area. It should be noted that the specific value of the third preset threshold may also be determined according to actual requirements.
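The tracking area of step S24 is simply the axis-aligned bounding rectangle of the matched feature points; a minimal sketch:

```python
import numpy as np

def tracking_area(points):
    """Step S24: bound the matched feature points of the search image with
    the rectangle formed by their minimum and maximum coordinates.
    points: iterable of (x, y); returns (x_min, y_min, x_max, y_max)."""
    pts = np.asarray(points)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return int(x_min), int(y_min), int(x_max), int(y_max)
```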
And S25, performing template matching on the tracking area and the target image, recording a first response peak value V1, performing template matching on the tracking area amplified by a preset multiple and the target image, and recording a second response peak value V2.
S26, judging whether V2 ≥ V1; if so, taking the enlarged tracking area as the latest tracking area, otherwise keeping the tracking area before enlargement as the latest tracking area, and taking the latest tracking area as the target frame. By selecting between the two candidate areas, the tracking frame best fitting the target size is obtained, which helps improve detection precision.
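Steps S25/S26 can be sketched with a hand-rolled normalised cross-correlation standing in for the template match (the patent does not specify the matching score; `cv2.matchTemplate` with `TM_CCOEFF_NORMED` would play the same role):

```python
import numpy as np

def ncc_peak(region, template):
    """Response peak: maximum normalised cross-correlation of the template
    over all positions in the region (stand-in for step S25's template match)."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum()) or 1.0  # guard against a flat template
    best = -1.0
    for y in range(region.shape[0] - th + 1):
        for x in range(region.shape[1] - tw + 1):
            w = region[y:y + th, x:x + tw]
            wz = w - w.mean()
            wn = np.sqrt((wz * wz).sum()) or 1.0
            best = max(best, float((t * wz).sum()) / (tn * wn))
    return best

def pick_area(v1, v2, area, enlarged_area):
    """Step S26: keep the enlarged tracking area iff V2 >= V1."""
    return enlarged_area if v2 >= v1 else area
```

In use, `v1 = ncc_peak(area_pixels, target_image)` and `v2 = ncc_peak(enlarged_pixels, target_image)` would be computed, then `pick_area(v1, v2, ...)` gives the latest tracking area.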
S3, continuing tracking until the violation is removed. While tracking continues, the mobile robot keeps following the target and reminds the worker by voice to remove the violation, returning to step S21 until the violation is removed.
The method combines the yolov3-tiny network with the kcf tracking algorithm and improves the latter by adding an anti-occlusion function: when the tracked target reappears after occlusion, it is re-acquired through a re-search procedure whose computational cost is small, so the target is found and tracked again as quickly as possible. The overall computational cost is low, detection is fast and accurate, and the method is suitable for detecting target violations in real time. Violations can be identified automatically and tracked with reminders, without consuming large amounts of manpower and material resources, greatly improving safety during operation; the method is lightweight, requires no GPU support, and has a wide application range.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only some of the more specific and detailed implementations described in the present application and should not be construed as limiting the claims. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within its protection scope. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (6)

1. A factory violation detection and tracking method is applied to a mobile robot and is characterized in that: the factory violation detection and tracking method comprises the following steps:
s1, acquiring a behavior image of a worker, inputting the behavior image into a yolov3-tiny network model to detect violation behaviors, and acquiring the position and the type of a target frame;
s2, for the worker behavior images belonging to the violation behavior category, tracking the target with the minimum depth information by adopting an improved kcf tracking algorithm and controlling the mobile robot to carry out voice reminding on the target, wherein the tracking of the target by adopting an improved kcf tracking algorithm comprises the following steps:
s21, saving a target frame of the target as a target image and updating the target image with each frame of the worker behavior image, and judging whether the response peak value is lower than a first preset threshold value; if not, judging that the target is not occluded and turning to S3; if so, stopping updating the target image, and extracting and describing feature points of the current target image;
s22, judging whether the response peak value is lower than a second preset threshold value; if not, judging that the target is half-occluded and turning to S3; if so, judging that the target is occluded and lost, defining the current frame worker behavior image as a search image, and extracting and describing feature points of the search image;
s23, performing descriptor matching on the target image and the search image, setting an experience threshold, and judging the feature point matching of the target image and the search image when the matching value is greater than the experience threshold;
s24, judging whether the number of the matching pairs of the feature points is lower than a third preset threshold value, if so, updating the next frame of worker behavior image as a search image, extracting and describing the feature points of the search image, returning to execute the step S23, otherwise, calculating the maximum position coordinate and the minimum position coordinate in the matching feature points of the search image, and defining a rectangular area formed by the maximum position coordinate and the minimum position coordinate as a tracking area;
s25, performing template matching on the tracking area and the target image, recording a first response peak value V1, performing template matching on the tracking area amplified by a preset multiple and the target image, and recording a second response peak value V2;
s26, judging whether V2 is more than or equal to V1, if so, taking the amplified tracking area as a latest tracking area, otherwise, keeping the tracking area before amplification as the latest tracking area, and taking the latest tracking area as a target frame;
and S3, continuing tracking until the violation is relieved.
2. The factory violation detection and tracking method of claim 1, wherein: in step S1, the categories of the target frame include five categories: wearing a safety helmet, not wearing a safety helmet, wearing a mask, not wearing a mask, and smoking, wherein not wearing a safety helmet, not wearing a mask and smoking are violation categories, and wearing a safety helmet and wearing a mask are non-violation categories.
3. The factory violation detection and tracking method of claim 1, wherein: the first preset threshold is 0.7 and the second preset threshold is 0.2.
4. The factory violation detection and tracking method of claim 1, wherein: feature point extraction and description are performed as follows:
1) extraction of
Selecting any pixel point Q on the corresponding image and setting a fourth preset threshold value T; with pixel point Q as the circle center and R as the radius, there are M surrounding pixel points on the circle; when the pixel values of N consecutive surrounding pixel points (N ≤ M) are all greater than Q + T or all less than Q − T, pixel point Q is taken as a feature point; all pixel points of the image are traversed to obtain all feature points.
2) Description of the invention
For each feature point, 128 pairs of pixel points are taken in its surrounding area, each pair comprising a pixel point P and a pixel point L. The gray values of P and L are compared: if the gray value of P is greater than that of L, the bit is 1, otherwise 0, yielding a 128-bit binary number.
5. The factory violation detection and tracking method of claim 4, wherein: the positions of each pair of pixel points of the corresponding feature points are the same on the target image and the search image.
6. The factory violation detection and tracking method of claim 1, wherein: in step S23, the matching value is obtained by Hamming distance calculation.
CN202111044971.2A 2021-09-07 2021-09-07 Method for detecting and tracking illegal behaviors of factory Active CN113807224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111044971.2A CN113807224B (en) 2021-09-07 2021-09-07 Method for detecting and tracking illegal behaviors of factory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111044971.2A CN113807224B (en) 2021-09-07 2021-09-07 Method for detecting and tracking illegal behaviors of factory

Publications (2)

Publication Number Publication Date
CN113807224A true CN113807224A (en) 2021-12-17
CN113807224B CN113807224B (en) 2023-11-21

Family

ID=78940775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111044971.2A Active CN113807224B (en) 2021-09-07 2021-09-07 Method for detecting and tracking illegal behaviors of factory

Country Status (1)

Country Link
CN (1) CN113807224B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524573A (en) * 2023-05-19 2023-08-01 北京弘治锐龙教育科技有限公司 Abnormal article and mask detection system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190281224A1 (en) * 2017-06-23 2019-09-12 Goertek Inc. Method for tracking and shooting moving target and tracking device
CN111862155A (en) * 2020-07-14 2020-10-30 中国电子科技集团公司第五十四研究所 Unmanned aerial vehicle single vision target tracking method aiming at target shielding
CN111932588A (en) * 2020-08-07 2020-11-13 浙江大学 Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning
CN112052802A (en) * 2020-09-09 2020-12-08 上海工程技术大学 Front vehicle behavior identification method based on machine vision
CN112712546A (en) * 2020-12-21 2021-04-27 吉林大学 Target tracking method based on twin neural network
CN112802059A (en) * 2021-01-22 2021-05-14 浙江工业大学 Helmet detection tracking method based on YOLOV3 network and kcf tracking algorithm

Non-Patent Citations (1)

Title
Liang Xining; Yang Gang; Yu Xuecai; Wang Shiyang; Zhu Liangxiao; Su Ke; Chen Tao: "A Kalman filter tracking method based on dynamic template matching", Opto-Electronic Engineering, no. 10 *

Also Published As

Publication number Publication date
CN113807224B (en) 2023-11-21

Similar Documents

Publication Publication Date Title
CN109948582B (en) Intelligent vehicle reverse running detection method based on tracking trajectory analysis
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN108040221B (en) Intelligent video analysis and monitoring system
CN109657575B (en) Intelligent video tracking algorithm for outdoor constructors
CN111144263A (en) Construction worker high-fall accident early warning method and device
CN101266710A (en) An all-weather intelligent video analysis monitoring method based on a rule
CN101587622A (en) Forest rocket detection and recognition methods and equipment based on video image intelligent analysis
CN102665071A (en) Intelligent processing and search method for social security video monitoring images
CN110619276B (en) Anomaly and violence detection system and method based on unmanned aerial vehicle mobile monitoring
CN108830204B (en) Method for detecting abnormality in target-oriented surveillance video
CN112766091B (en) Video unsafe behavior recognition system and method based on human skeleton key points
CN113807224B (en) Method for detecting and tracking illegal behaviors of factory
CN112800975A (en) Behavior identification method in security check channel based on image processing
CN113869275A (en) Vehicle object detection system that throws based on remove edge calculation
CN104463909A (en) Visual target tracking method based on credibility combination map model
CN114648748A (en) Motor vehicle illegal parking intelligent identification method and system based on deep learning
CN109887303B (en) Lane-changing behavior early warning system and method
CN113392754B (en) Method for reducing false pedestrian detection rate based on yolov5 pedestrian detection algorithm
CN114359712A (en) Safety violation analysis system based on unmanned aerial vehicle inspection
CN108960181B (en) Black smoke vehicle detection method based on multi-scale block LBP and hidden Markov model
CN111291728A (en) Detection system, detection equipment and detection method for illegal crossing of transmission belt behavior
CN115100249B (en) Intelligent factory monitoring system based on target tracking algorithm
Chen et al. Intrusion detection of specific area based on video
CN110782485A (en) Vehicle lane change detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant