CN113807224B - Method for detecting and tracking illegal behaviors of factory - Google Patents


Info

Publication number
CN113807224B
CN113807224B (grant of application CN202111044971.2A)
Authority
CN
China
Prior art keywords: target, image, tracking, matching, detecting
Prior art date
Legal status: Active (assumed; not a legal conclusion)
Application number
CN202111044971.2A
Other languages
Chinese (zh)
Other versions
CN113807224A (en)
Inventor
张成祥
夏启剑
张文安
吴晓峰
吴祥
Current Assignee (the listed assignee may be inaccurate)
Jinhua Zhejiang University Of Technology Innovation Joint Research Institute
Original Assignee
Jinhua Zhejiang University Of Technology Innovation Joint Research Institute
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Jinhua Zhejiang University Of Technology Innovation Joint Research Institute filed Critical Jinhua Zhejiang University Of Technology Innovation Joint Research Institute
Priority to CN202111044971.2A
Publication of CN113807224A
Application granted
Publication of CN113807224B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Abstract

The application discloses a method for detecting and tracking illegal behaviors in a factory, comprising the following steps: collect images and input them into a model to obtain the position and category of a target frame; for images in an illegal behavior category, track the target with an improved kcf tracking algorithm and control a mobile robot to remind the target by voice. The tracking comprises: storing the target frame of the target as a target image and updating it each frame; stopping the update when the response peak falls below a first preset threshold, and defining the current frame image as a search image when the peak falls below a second preset threshold; matching the target image against the search image; defining the rectangular area formed by the matched feature points of the search image as a tracking area; performing template matching between the target image and both the tracking area and an enlarged copy of it, recording the response peaks, and continuing tracking with whichever area yields the larger peak as the latest tracking area. The method is robust to occlusion, has a low computational load, high detection speed, high accuracy and strong real-time performance; it automatically identifies illegal behaviors to track and remind, saves time and labor, improves operational safety, and has a wide range of applications.

Description

Method for detecting and tracking illegal behaviors of factory
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a method for detecting and tracking illegal behaviors of a factory.
Background
In recent years, factory accidents have occurred frequently, usually because workers fail to comply with factory safety regulations or grow careless over time. Common causes include injuries from not wearing a safety helmet where helmets are required; contamination and poisoning from not wearing a mask where masks are required; and explosions of combustible materials from smoking where smoking is strictly forbidden. To ensure that workers observe the safety regulations at all times, most factories rely on manual monitoring for supervision. Manual supervision consumes a great deal of manpower and material resources, and monitoring fatigue can itself lead to accidents.
With the wide application of image processing in civil and commercial fields, object tracking plays an increasingly important role in intelligent video surveillance, autonomous driving, unmanned supermarkets and similar fields. Current target tracking is realized either through local association between adjacent frames or through global association over all frames. Local association between adjacent frames suffers from low tracking accuracy because little information is available, and it performs poorly in complex scenes with high pedestrian density or severe occlusion between targets. Global association over all frames achieves relatively high accuracy but cannot run in real time: its time complexity is high, and when a video contains many targets the tracking becomes very time-consuming. It also has a large computational load and requires GPU support, which limits its applicability. A target tracking method that combines real-time operation with high accuracy is therefore needed.
Disclosure of Invention
To address these problems, the application provides a method for detecting and tracking illegal behaviors in a factory. The method is robust to occlusion and can find and re-track a target quickly after it reappears; its overall computational load is low, its detection speed is high and its accuracy is high, making it suitable for detecting illegal behaviors of targets in real time. It can automatically identify illegal behaviors and track and remind the offender, does not consume large amounts of manpower and material resources, greatly improves operational safety, and has a wide range of applications.
In order to achieve the above purpose, the technical scheme adopted by the application is as follows:
the application provides a method for detecting and tracking factory illegal behaviors, which is applied to a mobile robot and comprises the following steps:
s1, acquiring a worker behavior image, inputting a yolov3-tiny network model for detecting illegal behaviors, and obtaining the position and the category of a target frame;
s2, tracking a target with minimum depth information by adopting an improved kcf tracking algorithm for a worker behavior image belonging to the illegal behavior category, controlling a mobile robot to carry out voice reminding on the target, and carrying out target tracking by adopting an improved kcf tracking algorithm, wherein the method comprises the following steps:
s21, storing a target frame of a target as a target image, updating the target image by each frame of worker behavior image, judging whether a response peak value is lower than a first preset threshold value, if not, judging that the target is not shielded, turning to step S3, and if so, stopping updating the target image, extracting and describing characteristic points of the current target image;
s22, judging whether the response peak value is lower than a second preset threshold value, if not, judging that the target is blocked halfway, turning to step S3, if so, judging that the target is blocked and lost, defining a worker behavior image of the current frame as a search image, and extracting and describing characteristic points of the search image;
s23, carrying out descriptor matching on the target image and the search image, setting an experience threshold, and judging that the characteristic points of the target image and the search image are matched when the matching value is larger than the experience threshold;
s24, judging whether the matching number of the feature points is lower than a third preset threshold value, if so, updating the next frame of worker behavior image to be a search image, extracting and describing the feature points of the search image, and returning to the execution step S23, otherwise, calculating the maximum position coordinate and the minimum position coordinate in the matching feature points of the search image, and defining a rectangular area formed by the maximum position coordinate and the minimum position coordinate as a tracking area;
s25, performing template matching on the tracking area and the target image, recording a first response peak value V1, performing template matching on the tracking area amplified by a preset multiple and the target image, and recording a second response peak value V2;
s26, judging whether V2 is more than or equal to V1, if yes, taking the amplified tracking area as the latest tracking area, otherwise, reserving the tracking area before amplification as the latest tracking area, and taking the latest tracking area as a target frame;
s3, tracking is continued until the illegal action is relieved.
Preferably, in step S1, the categories of the target frame include five categories: wearing a safety helmet, not wearing a safety helmet, wearing a mask, not wearing a mask, and smoking, wherein not wearing a safety helmet, not wearing a mask and smoking are illegal categories, and wearing a safety helmet and wearing a mask are non-illegal categories.
Preferably, the first preset threshold is 0.7 and the second preset threshold is 0.2.
Preferably, feature point extraction and description are performed, specifically as follows:
1) Extraction of
Select any pixel point Q on the corresponding image and set a fourth preset threshold T. Draw a circle with pixel point Q as the center and R as the radius; M surrounding pixel points lie on this circle. When the values of N of these surrounding pixel points (N ≤ M) are greater than Q+T or less than Q-T, the pixel point Q is a feature point. Traverse all pixel points of the image to obtain all feature points.
2) Description of the application
128 pairs of pixel points are taken from the area surrounding each feature point; each pair consists of a pixel point P and a pixel point L. Their gray values are compared: if the gray value of P is greater than that of L, 1 is taken, otherwise 0 is taken, yielding a 128-bit binary number.
Preferably, the positions taken by each pair of pixels of the corresponding feature points on the target image and the search image are the same.
Preferably, in step S23, the matching value is obtained by hamming distance calculation.
Compared with the prior art, the application has the beneficial effects that:
1) The method combines a yolov3-tiny network with a kcf tracking algorithm and adds an anti-occlusion capability by improving the kcf algorithm: when a tracked target reappears after being occluded, it is re-acquired through a re-search procedure. The computational load of the search is small, so the target is found and tracking resumes as quickly as possible after it reappears. The overall computational load is low, the detection speed is high and the accuracy is high, making the method suitable for real-time detection of illegal behaviors of targets;
2) The method automatically recognizes illegal behaviors and tracks and reminds the offender, without consuming large amounts of manpower and material resources, greatly improving safety during operation. The model is lightweight and requires no GPU support, so the method has a wide range of applications.
Drawings
FIG. 1 is a general flow chart of a detection tracking method of the present application;
FIG. 2 is a flowchart of a kcf tracking algorithm of the present application;
FIG. 3 is a graph of the response peak of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It is to be noted that all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
As shown in fig. 1-3, a method for detecting and tracking factory violations is applied to a mobile robot, and comprises the following steps:
s1, acquiring a worker behavior image, inputting a yolov3-tiny network model to detect illegal behaviors, and obtaining the position and the category of the target frame.
In an embodiment, in step S1, the categories of the target frame include five categories: wearing a safety helmet, not wearing a safety helmet, wearing a mask, not wearing a mask, and smoking, wherein not wearing a safety helmet, not wearing a mask and smoking are illegal categories, and wearing a safety helmet and wearing a mask are non-illegal categories. It should be noted that the categories of the target frames may be further subdivided according to actual requirements.
The method can be applied to a mobile robot (such as a patrol robot and the like), the body of the mobile robot comprises a motion mechanism and a control main board, the existing mobile robot such as a bipedal mobile robot and the like can be adopted, an image acquisition module and a voice module are further arranged on the body of the mobile robot, and the control main board is connected with the image acquisition module and the voice module and drives the mobile body to move. The image acquisition module is a camera, and the voice module is used for carrying out voice reminding, such as reminding workers to remove illegal behaviors. And inputting the worker behavior image acquired by the camera into a yolov3-tiny network model to detect the illegal behaviors, and obtaining the position and the category of the target frame.
The yolov3-tiny network model can be a model trained in advance, and the training steps are as follows:
1) Label the target frames in the acquired worker behavior images with labelImg software. The labels cover five categories: wearing a safety helmet, not wearing a safety helmet, wearing a mask, not wearing a mask, and smoking, annotated as hat, person, mask, no_mask and smoke respectively;
2) Dividing the marked image data set into a test set and a training set;
3) Training a yolov3-tiny network model by using a training set to obtain a plurality of weight files;
4) And testing the average accuracy of each yolov3-tiny network model by using a test set, and selecting a weight file with the highest average accuracy as a final yolov3-tiny network model to detect illegal behaviors.
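Steps 2) through 4) above can be sketched in a few lines of Python. This is a minimal illustration only: the file names, mAP scores and the 80/20 split ratio are hypothetical, and the actual training of the yolov3-tiny network would be done with a dedicated toolchain (e.g. Darknet), not shown here.

```python
import random

def split_dataset(image_paths, train_ratio=0.8, seed=42):
    """Shuffle the annotated images and split them into training and test sets."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]

def select_best_weights(map_by_weight_file):
    """Pick the weight file with the highest mean average precision (mAP)."""
    return max(map_by_weight_file, key=map_by_weight_file.get)

# Hypothetical file names and mAP scores, for illustration only.
images = [f"worker_{i:04d}.jpg" for i in range(100)]
train_set, test_set = split_dataset(images)
best = select_best_weights({"epoch_10.weights": 0.71,
                            "epoch_20.weights": 0.83,
                            "epoch_30.weights": 0.79})
```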
S2, for worker behavior images belonging to an illegal behavior category, the target with the minimum depth information is tracked with the improved kcf tracking algorithm, and the mobile robot is controlled to give the target a voice reminder. For example, the control mainboard of the mobile robot issues speed commands based on the depth information of the image, driving the wheels of the motion mechanism toward the nearest target (i.e., the target with the minimum depth information), and the voice module prompts the worker to stop the illegal behavior. Behavior categories are screened automatically, and illegal behaviors are tracked and reminded, greatly improving safety during operation.
Target tracking using a modified kcf tracking algorithm, comprising:
s21, storing a target frame of a target as a target image, updating the target image by each frame of worker behavior image, judging whether a response peak value is lower than a first preset threshold value, if not, judging that the target is not shielded, turning to step S3, and if so, stopping updating the target image, extracting characteristic points of the current target image and describing the characteristic points.
In one embodiment, feature point extraction and description are performed, that is, fast feature point extraction and BRIEF description are performed, specifically as follows:
1) Extraction of
Select any pixel point Q on the corresponding image and set a fourth preset threshold T. Draw a circle with pixel point Q as the center and R as the radius; M surrounding pixel points lie on this circle. When the values of N of these surrounding pixel points (N ≤ M) are greater than Q+T or less than Q-T, the pixel point Q is a feature point. Traverse all pixel points of the image to obtain all feature points.
Specifically, a circle of radius R is drawn with pixel point Q as its center; suppose M = 16 surrounding pixel points lie on the circle. If the values of N = 10 of these pixel points are greater than Q+T or less than Q-T, then Q is a Fast feature point. Traversing all pixel points of the image in the same way yields all Fast feature points.
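The extraction rule above can be sketched in pure Python under two stated simplifications: the image is a plain nested list of gray values, and the test counts qualifying circle pixels as the patent describes (the full FAST detector additionally requires the N pixels to be contiguous on the circle).

```python
# Offsets of the 16 pixels on a Bresenham circle of radius 3 around Q (M = 16).
CIRCLE16 = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
            (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_feature_point(img, x, y, T, N=10):
    """Patent-style corner test: Q is a feature point when at least N of the
    16 circle pixels are brighter than Q+T or darker than Q-T."""
    q = img[y][x]
    count = sum(1 for dx, dy in CIRCLE16
                if img[y + dy][x + dx] > q + T or img[y + dy][x + dx] < q - T)
    return count >= N

# Tiny synthetic example: a uniform 7x7 patch with 12 bright circle pixels.
img = [[100] * 7 for _ in range(7)]
for dx, dy in CIRCLE16[:12]:
    img[3 + dy][3 + dx] = 200
```

With T = 20, the center pixel of this patch qualifies (12 of 16 circle pixels exceed Q+T), but would not if N were raised above 12.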
2) Description of the application
128 pairs of pixel points are taken from the area surrounding each feature point; each pair consists of a pixel point P and a pixel point L. Their gray values are compared: if the gray value of P is greater than that of L, 1 is taken, otherwise 0 is taken, yielding a 128-bit binary number.
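A minimal sketch of this BRIEF-style descriptor. As the patent requires, the same fixed sampling pattern must be used on the target image and the search image; the pattern seed, the 128-pair count and the patch radius used here are illustrative choices.

```python
import random

def make_pattern(n_pairs=128, patch=8, seed=7):
    """Fixed random sampling pattern of (P, L) offset pairs around a feature
    point. The SAME pattern must be applied to both images so that the
    resulting descriptors are comparable."""
    rng = random.Random(seed)
    return [((rng.randint(-patch, patch), rng.randint(-patch, patch)),
             (rng.randint(-patch, patch), rng.randint(-patch, patch)))
            for _ in range(n_pairs)]

def brief_descriptor(img, x, y, pattern):
    """128-bit descriptor: bit i is 1 when gray(P_i) > gray(L_i)."""
    bits = 0
    for (px, py), (lx, ly) in pattern:
        bits <<= 1
        if img[y + py][x + px] > img[y + ly][x + lx]:
            bits |= 1
    return bits

# Synthetic 32x32 gradient image; descriptors at the same point are identical.
img = [[(x + 2 * y) % 256 for x in range(32)] for y in range(32)]
pattern = make_pattern(patch=4)
d1 = brief_descriptor(img, 16, 16, pattern)
```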
S22, judging whether the response peak value is lower than a second preset threshold value, if not, judging that the target is blocked halfway, turning to step S3, if so, judging that the target is blocked and lost, defining the worker behavior image of the current frame as a search image, and extracting and describing feature points of the search image.
In one embodiment, the first preset threshold is 0.7 and the second preset threshold is 0.2. When the response peak V satisfies V ≥ 0.7, the target is not occluded and tracking continues; when 0.2 ≤ V < 0.7, the target is partially occluded and tracking continues; when V < 0.2, the target is occluded and lost, and the re-search stage begins. As shown in fig. 3, the abscissa is the frame number and the ordinate is the response peak. At frame 125 the response peak falls to 0.7: the target begins to be occluded, and the peak continues to fall. At frame 155 the target is completely occluded and the response peak falls to 0.2; once the peak is below 0.2, tracking of the fully occluded target is lost and the re-search stage begins. It should be noted that the specific values of the first and second preset thresholds may also be determined according to actual requirements.
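The three-way threshold logic of steps S21 and S22 can be summarized in a small helper; the state names used here are illustrative labels, not terms from the patent.

```python
def occlusion_state(peak, t1=0.7, t2=0.2):
    """Map the kcf response peak to the tracker state used in steps S21-S22."""
    if peak >= t1:
        return "tracking"            # not occluded: keep updating the target image
    if peak >= t2:
        return "partially_occluded"  # keep tracking, but stop template updates
    return "lost"                    # fully occluded: enter the re-search stage
```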
In one embodiment, each pair of pixels of the corresponding feature points on the target image and the search image take the same position.
And S23, carrying out descriptor matching on the target image and the search image, setting an experience threshold, and judging that the characteristic points of the target image and the search image are matched when the matching value is larger than the experience threshold.
In one embodiment, in step S23, the matching value is obtained by Hamming distance calculation. The most similar matches among all candidate pairs are selected by Hamming distance to improve detection accuracy: the number of differing bits between the 128-bit binary number of the search image and the 128-bit binary number of the target image is taken as the distance, and the smaller the distance, the more similar the descriptors. An empirical threshold is set, and the feature points of the target image and the search image are judged to match when the matching value exceeds the empirical threshold.
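A minimal sketch of matching the 128-bit binary descriptors by Hamming distance. The distance cut-off of 30 bits is purely illustrative (the patent only says an empirical threshold is set); the patent's "matching value greater than threshold" can equivalently be read as requiring the similarity, i.e. the number of agreeing bits, to exceed a threshold.

```python
def hamming(d1, d2):
    """Number of differing bits between two 128-bit descriptors (as ints)."""
    return bin(d1 ^ d2).count("1")

def is_match(d1, d2, max_distance=30):
    """Smaller Hamming distance means more similar descriptors; accept the
    pair when the distance is within an empirical cut-off."""
    return hamming(d1, d2) <= max_distance
```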
S24, judging whether the matching number of the feature points is lower than a third preset threshold value, if so, updating the next frame of worker behavior image to be a search image, extracting and describing the feature points of the search image, and returning to the execution step S23, otherwise, calculating the maximum position coordinate and the minimum position coordinate in the matching feature points of the search image, and defining a rectangular area formed by the maximum position coordinate and the minimum position coordinate as a tracking area.
When the number of matched feature-point pairs is lower than a third preset threshold (for example, 2), the target has not been found, and step S23 is executed again on the updated search image; when the number is not lower than the third preset threshold, the rectangular area formed by the maximum and minimum position coordinates among the matched feature points of the search image is taken as the tracking area. It should be noted that the specific value of the third preset threshold may also be determined according to actual requirements.
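The rectangle construction in step S24 can be sketched directly; matched points are assumed to be (x, y) tuples in search-image coordinates.

```python
def tracking_area(matched_points):
    """Rectangle spanned by the extreme coordinates of the matched feature
    points in the search image (step S24): (x_min, y_min, x_max, y_max)."""
    xs = [x for x, _ in matched_points]
    ys = [y for _, y in matched_points]
    return (min(xs), min(ys), max(xs), max(ys))
```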
S25, performing template matching on the tracking area and the target image, recording a first response peak value V1, performing template matching on the tracking area amplified by a preset multiple and the target image, and recording a second response peak value V2.
S26, judging whether V2 is more than or equal to V1, if yes, taking the amplified tracking area as the latest tracking area, otherwise, reserving the tracking area before amplification as the latest tracking area, and taking the latest tracking area as a target frame. By selecting different tracking areas as the latest tracking areas, the tracking frame with the most suitable target size can be obtained, and the detection precision is improved.
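Steps S25 and S26 can be sketched as follows. This is a minimal pure-Python stand-in for template matching on tiny gray patches (a practical implementation would typically use an optimized routine such as OpenCV's matchTemplate); the similarity score in [0, 1] is an illustrative response definition, not the patent's exact one.

```python
def match_peak(region, template):
    """Exhaustive template matching of a small gray template (list of lists)
    over a region. Returns the best similarity in [0, 1]; 1.0 is exact."""
    th, tw = len(template), len(template[0])
    rh, rw = len(region), len(region[0])
    best = 0.0
    for oy in range(rh - th + 1):
        for ox in range(rw - tw + 1):
            diff = sum(abs(region[oy + y][ox + x] - template[y][x])
                       for y in range(th) for x in range(tw))
            best = max(best, 1.0 - diff / (255.0 * th * tw))
    return best

def pick_tracking_area(area, enlarged_area, template):
    """Steps S25/S26: match both candidate areas against the target image
    (template), then keep whichever gives the higher response peak."""
    v1 = match_peak(area, template)
    v2 = match_peak(enlarged_area, template)
    return enlarged_area if v2 >= v1 else area

# Illustrative patches: the enlarged area contains the template exactly.
template = [[10, 20], [30, 40]]
enlarged = [[0, 0, 0], [0, 10, 20], [0, 30, 40]]
small = [[0, 0], [0, 0]]
```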
S3, tracking continues until the illegal behavior is removed. During continued tracking, the mobile robot keeps following the target, reminds the worker by voice to stop the illegal behavior, and returns to step S21 until the illegal behavior is removed.
The method combines a yolov3-tiny network with a kcf tracking algorithm and adds an anti-occlusion capability by improving the kcf algorithm: when a tracked target reappears after being occluded, it is re-acquired through a re-search procedure whose computational load is small, so the target is found and tracking resumes as quickly as possible. The overall computational load is low, the detection speed is high and the accuracy is high, making the method suitable for real-time detection of illegal behaviors of targets. It automatically recognizes illegal behaviors and tracks and reminds the offender without consuming large amounts of manpower and material resources, greatly improving safety during operation; the model is lightweight, requires no GPU support, and has a wide range of applications.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above-described embodiments represent only the more specific and detailed embodiments of the present application, but are not to be construed as limiting the claims. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (6)

1. The method for detecting and tracking the illegal behaviors of the factory is applied to a mobile robot and is characterized in that: the method for detecting and tracking the illegal behaviors of the factory comprises the following steps:
s1, acquiring a worker behavior image, inputting a yolov3-tiny network model for detecting illegal behaviors, and obtaining the position and the category of a target frame;
s2, tracking a target with minimum depth information by adopting an improved kcf tracking algorithm for a worker behavior image belonging to the illegal behavior category and controlling a mobile robot to carry out voice reminding on the target, wherein the target tracking by adopting the improved kcf tracking algorithm comprises the following steps:
s21, storing a target frame of the target as a target image, updating the target image by each frame of worker behavior image, judging whether a response peak value is lower than a first preset threshold value, if not, judging that the target is not shielded, turning to step S3, and if so, stopping updating the target image, extracting and describing characteristic points of the current target image;
s22, judging whether a response peak value is lower than a second preset threshold value, if not, judging that the target is blocked halfway, turning to step S3, if so, judging that the target is blocked and lost, defining a worker behavior image of a current frame as a search image, and extracting and describing feature points of the search image;
s23, carrying out descriptor matching on the target image and the search image, setting an experience threshold, and judging that the characteristic points of the target image and the search image are matched when the matching value is larger than the experience threshold;
s24, judging whether the matching number of the feature points is lower than a third preset threshold value, if so, updating a next frame of worker behavior image to be a search image, extracting and describing the feature points of the search image, and returning to the execution step S23, otherwise, calculating the maximum position coordinate and the minimum position coordinate in the matching feature points of the search image, and defining a rectangular area formed by the maximum position coordinate and the minimum position coordinate as a tracking area;
s25, performing template matching on the tracking area and the target image and recording a first response peak value V1, then performing template matching on the tracking area enlarged by a preset multiple and the target image and recording a second response peak value V2;
s26, judging whether V2 is more than or equal to V1, if yes, taking the amplified tracking area as the latest tracking area, otherwise, reserving the tracking area before amplification as the latest tracking area, and taking the latest tracking area as a target frame;
s3, tracking is continued until the illegal action is relieved.
2. The method for detecting and tracking factory violations as claimed in claim 1, wherein: in step S1, the categories of the target frame include five categories: wearing a safety helmet, not wearing a safety helmet, wearing a mask, not wearing a mask and smoking, wherein not wearing a safety helmet, not wearing a mask and smoking are illegal categories, and wearing a safety helmet and wearing a mask are non-illegal categories.
3. The method for detecting and tracking factory violations as claimed in claim 1, wherein: the first preset threshold is 0.7, and the second preset threshold is 0.2.
4. The method for detecting and tracking factory violations as claimed in claim 1, wherein: the feature point extraction and description are specifically as follows:
1) Extraction of
Selecting any pixel point Q on the corresponding image and setting a fourth preset threshold value T; drawing a circle with the pixel point Q as the center and R as the radius, wherein M surrounding pixel points exist on the circle; if the pixel values of N (N ≤ M) surrounding pixel points are greater than Q+T or less than Q-T, the pixel point Q is a feature point; traversing all the pixel points of the image to obtain all the feature points;
2) Description of the application
128 pairs of pixel points are taken from the area surrounding each feature point, each pair comprising a pixel point P and a pixel point L; the gray values of P and L are compared, and if the gray value of P is greater than that of L, 1 is taken, otherwise 0 is taken, yielding a 128-bit binary number.
5. The method of detecting and tracking factory violations of claim 4, wherein: the positions taken by each pair of pixel points for corresponding feature points on the target image and the search image are the same.
6. The method for detecting and tracking factory violations as claimed in claim 1, wherein: in step S23, the matching value is obtained by hamming distance calculation.
CN202111044971.2A 2021-09-07 2021-09-07 Method for detecting and tracking illegal behaviors of factory Active CN113807224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111044971.2A CN113807224B (en) 2021-09-07 2021-09-07 Method for detecting and tracking illegal behaviors of factory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111044971.2A CN113807224B (en) 2021-09-07 2021-09-07 Method for detecting and tracking illegal behaviors of factory

Publications (2)

Publication Number Publication Date
CN113807224A CN113807224A (en) 2021-12-17
CN113807224B true CN113807224B (en) 2023-11-21

Family

ID=78940775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111044971.2A Active CN113807224B (en) 2021-09-07 2021-09-07 Method for detecting and tracking illegal behaviors of factory

Country Status (1)

Country Link
CN (1) CN113807224B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524573A (en) * 2023-05-19 2023-08-01 北京弘治锐龙教育科技有限公司 Abnormal article and mask detection system

Citations (5)

Publication number Priority date Publication date Assignee Title
CN111862155A (en) * 2020-07-14 2020-10-30 中国电子科技集团公司第五十四研究所 Unmanned aerial vehicle single vision target tracking method aiming at target shielding
CN111932588A (en) * 2020-08-07 2020-11-13 浙江大学 Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning
CN112052802A (en) * 2020-09-09 2020-12-08 上海工程技术大学 Front vehicle behavior identification method based on machine vision
CN112712546A (en) * 2020-12-21 2021-04-27 吉林大学 Target tracking method based on twin neural network
CN112802059A (en) * 2021-01-22 2021-05-14 浙江工业大学 Helmet detection tracking method based on YOLOV3 network and kcf tracking algorithm

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN107330917B (en) * 2017-06-23 2019-06-25 歌尔股份有限公司 The track up method and tracking equipment of mobile target


Non-Patent Citations (1)

Title
A Kalman filter tracking method with dynamic template matching; Liang Xining; Yang Gang; Yu Xuecai; Wang Shiyang; Zhu Liangxiao; Su Ke; Chen Tao; Opto-Electronic Engineering (Issue 10); full text *

Also Published As

Publication number Publication date
CN113807224A (en) 2021-12-17

Similar Documents

Publication Publication Date Title
CN109948582B (en) Intelligent vehicle reverse running detection method based on tracking trajectory analysis
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN108009473B (en) Video structuralization processing method, system and storage device based on target behavior attribute
CN108052859B (en) Abnormal behavior detection method, system and device based on clustering optical flow characteristics
CN101587622B (en) Forest rocket detecting and identifying method and apparatus based on video image intelligent analysis
CN111598066A (en) Helmet wearing identification method based on cascade prediction
CN104200466B (en) A kind of method for early warning and video camera
CN106341661B (en) Patrol robot
CN109325429B (en) Method, device, storage medium and terminal for associating feature data
CN107644519A (en) A kind of intelligent alarm method and system based on video human Activity recognition
CN111986228A (en) Pedestrian tracking method, device and medium based on LSTM model escalator scene
CN112149761A (en) Electric power intelligent construction site violation detection method based on YOLOv4 improved algorithm
CN110619276B (en) Anomaly and violence detection system and method based on unmanned aerial vehicle mobile monitoring
CN113807224B (en) Method for detecting and tracking illegal behaviors of factory
CN111127507A (en) Method and system for determining throwing object
CN108830204B (en) Method for detecting abnormality in target-oriented surveillance video
CN109887303B (en) Lane-changing behavior early warning system and method
CN113392754B (en) Method for reducing false pedestrian detection rate based on yolov5 pedestrian detection algorithm
Wang et al. Vision-based highway traffic accident detection
CN117423157A (en) Mine abnormal video action understanding method combining migration learning and regional invasion
CN107146415A (en) A kind of traffic incidents detection and localization method
CN108960181B (en) Black smoke vehicle detection method based on multi-scale block LBP and hidden Markov model
CN116543023A (en) Multi-sensor target crowd intelligent tracking method based on correction deep SORT
CN114187666B (en) Identification method and system for watching mobile phone while walking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant