CN115731516A - Behavior recognition method and device based on target tracking and storage medium - Google Patents

Behavior recognition method and device based on target tracking and storage medium

Info

Publication number
CN115731516A
Authority
CN
China
Prior art keywords
image
behavior recognition
target tracking
frame data
blurred
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211456771.2A
Other languages
Chinese (zh)
Inventor
陈�光
乔梁
曾学文
何赵亮
黄晓明
马成城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guoneng Jiujiang Power Generation Co ltd
Original Assignee
Guoneng Jiujiang Power Generation Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guoneng Jiujiang Power Generation Co ltd filed Critical Guoneng Jiujiang Power Generation Co ltd
Priority to CN202211456771.2A
Publication of CN115731516A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the field of behavior recognition, and in particular to a behavior recognition method, device and storage medium based on target tracking. The method comprises the following steps: collecting continuous images and decoding the collected continuous images into image frame data; setting the maximum frame length of the behavior recognition algorithm to X and inputting the image frame data; sequentially inputting each single frame of image frame data, together with the inference result of the previous frame, into a target tracking algorithm to obtain the inference result of that frame, and finally obtaining an image tracking result R; when the length of R is larger than X, inputting the first X results of R into the behavior recognition algorithm to obtain a behavior recognition result P, and recording and storing the result P; removing the first inference result from R and repeating the preceding steps.

Description

Behavior recognition method and device based on target tracking and storage medium
Technical Field
The invention relates to the field of behavior recognition, and in particular to a behavior recognition method, device and storage medium based on target tracking.
Background
In recent years, high-definition cameras have been installed in many public places to monitor abnormal events, but security personnel still have to observe and analyze the monitored content manually, which is time-consuming and labor-intensive. With the maturation of artificial intelligence technology, artificial intelligence has begun to be applied to intelligent security monitoring. Intelligent security monitoring relies on several actively researched computer vision technologies, namely target detection, target tracking and behavior recognition.
In the current mainstream behavior recognition workflow, a video camera acquires video stream data and transmits it to an industrial control host through a switch and a router. The industrial control host decodes the video stream, extracts all of its image frames, and then applies a behavior recognition algorithm to each single image frame. As a result, the correlation information between consecutive frames cannot be exploited, and both the recognition accuracy and the recognition speed are low.
Disclosure of Invention
The invention aims to provide a behavior recognition method, device and storage medium based on target tracking that improve recognition speed and recognition accuracy by utilizing the correlation information between consecutive frames.
To achieve this aim, the invention provides the following technical solution:
in a first aspect, the present invention provides a behavior recognition method based on target tracking, which is characterized by comprising the following steps:
S1, collecting continuous images, and decoding the collected continuous images into image frame data {f1, f2, f3, ..., fN};
S2, setting the maximum frame length of the behavior recognition algorithm to X, and inputting the image frame data {f1, f2, f3, ..., fN} from step S1;
S3, based on step S2, inputting the image frame data f1 into a target tracking algorithm to obtain an inference result r1 of the image frame data f1;
S4, based on step S3, inputting the image frame data f2 and the inference result r1 of the image frame data f1 into the target tracking algorithm to obtain an inference result r2 of the image frame data f2;
S5, repeating step S4, sequentially inputting each single frame of image frame data together with the inference result of the previous frame into the target tracking algorithm to obtain the inference result of that frame, and finally obtaining an image tracking result R = {r1, r2, r3, ..., rN};
S6, based on step S5, when the length of R is larger than X, inputting the first X results {r1, r2, r3, ..., rX} of R into the behavior recognition algorithm to obtain a behavior recognition result P, and recording and storing the result P;
S7, removing the first inference result r1 from R, and repeating step S6.
Preferably, in step S1, after the image frame data are acquired, image blur degree evaluation is performed on each image frame.
Further, the image blur degree evaluation method comprises the following steps:
Graying and Laplacian filtering: converting the RGB color image into a grayscale image and filtering it with a Laplacian operator to preprocess the image;
Variance calculation: the more severely blurred an image is, the lower its variance, while a sharp image has a higher variance; when the variance is less than the threshold 200, the image is judged to be a blurred image.
Furthermore, when an image is judged to be a blurred image, it needs to be checked against the previous frame to prevent misjudgment, which improves processing efficiency and also improves the fidelity of the output images. The specific method is as follows:
when the previous frame is blurred, the current frame is judged to be non-blurred if the variance ratio of the current frame to the previous frame is greater than the threshold 5;
when the previous frame is not blurred, the current frame is judged to be non-blurred if the variance ratio of the current frame to the previous frame is greater than the threshold 0.3;
otherwise, the current frame is judged to be blurred.
Further, image deblurring is performed on the blurred image frames using a GAN-based image deblurring deep learning network, and the data set for training the network comprises one or more of the Kohler standard data set, the GOPRO data set and an infrared blurred-sharp image pair data set.
Preferably, in step S2, the prior knowledge of f1 defaults to r0, and r0 is null.
Preferably, the images collected in step S1 are infrared or non-infrared video resources.
In a second aspect, the present invention provides a behavior recognition device based on target tracking, characterized by comprising a processor, a memory, and a control program stored on the memory and executable on the processor, the control program, when executed by the processor, implementing the steps of the method described above.
In a third aspect, the invention provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the method described above.
The invention has the beneficial effects that:
1) In the present application, the recognition result of the previous frame image can be used as prior knowledge for recognizing the next frame image, which greatly reduces the processing time of the next frame and thus improves the speed of behavior recognition over the whole video.
2) In the present application, the temporal correlation information between consecutive image frames can be used to confirm whether the persons in different frame images are the same person; since a complex action can last more than ten frames, judging personnel behaviors from the image information of consecutive frames greatly improves accuracy.
3) By evaluating the blur degree of image frames and deblurring the blurred images, the method and device can improve the visualization effect.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a diagram of the steps of the present invention;
fig. 2 is a flow chart of the present invention.
Detailed Description
Example 1
As shown in fig. 1 and fig. 2, the present invention provides a behavior recognition method based on target tracking, which includes the following steps:
s1, collecting continuous images, wherein the images are infrared or non-infrared video screen resources, and decoding the collected continuous images into image frame data { f1, f2, f3..
After the image frame data are obtained, image blur degree evaluation is carried out on each image frame.
The image blur degree evaluation method comprises the following steps:
Graying and Laplacian filtering: converting the RGB color image into a grayscale image and filtering it with a Laplacian operator to preprocess the image;
Variance calculation: the more severely blurred an image is, the lower its variance, while a sharp image has a higher variance; when the variance is less than the threshold 200, the image is judged to be a blurred image.
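A minimal sketch of this evaluation, assuming OpenCV and BGR input frames; only the graying, the Laplacian filtering and the variance threshold of 200 come from the description, while the function names are illustrative.

import cv2

BLUR_VARIANCE_THRESHOLD = 200  # threshold given in the description

def laplacian_variance(frame_bgr):
    """Grayscale conversion + Laplacian filtering, then variance of the response."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def is_blurred(frame_bgr):
    """A frame whose Laplacian variance is below the threshold is treated as blurred."""
    return laplacian_variance(frame_bgr) < BLUR_VARIANCE_THRESHOLD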
When an image is judged to be a blurred image, it needs to be checked against the previous frame to prevent misjudgment, which improves processing efficiency and also improves the fidelity of the output images. The specific method is as follows:
when the previous frame is blurred, the current frame is judged to be non-blurred if the variance ratio of the current frame to the previous frame is greater than the threshold 5;
when the previous frame is not blurred, the current frame is judged to be non-blurred if the variance ratio of the current frame to the previous frame is greater than the threshold 0.3;
otherwise, the current frame is judged to be blurred.
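The previous-frame check might be wired up as below. This is only one reading of the description: it assumes the ratio is the current frame's Laplacian variance divided by the previous frame's, and that the check is applied to frames already flagged by the variance threshold; the function name is hypothetical.

def refine_blur_decision(curr_var, prev_var, prev_was_blurred):
    """Re-check a frame flagged as blurred using the variance ratio to the previous frame.

    Returns True if the frame is still considered blurred after the check.
    The thresholds 5 and 0.3 are the values given in the description.
    """
    if prev_var == 0:
        return True  # degenerate case: keep the original "blurred" verdict
    ratio = curr_var / prev_var
    if prev_was_blurred:
        return not (ratio > 5)    # previous frame blurred: non-blurred only if ratio > 5
    return not (ratio > 0.3)      # previous frame sharp: non-blurred only if ratio > 0.3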
Image deblurring is then performed on the blurred image frames using a GAN-based image deblurring deep learning network, and the data set for training the network comprises one or more of the Kohler standard data set, the GOPRO data set and an infrared blurred-sharp image pair data set.
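The description does not specify the deblurring network itself, so the sketch below only illustrates how a pretrained GAN-based deblurring generator might be applied to a flagged frame; the TorchScript file name, the [0, 1] normalization and the tensor layout are assumptions, not details of the invention.

import cv2
import numpy as np
import torch

# Hypothetical: a GAN-based deblurring generator exported with torch.jit.save().
generator = torch.jit.load("deblur_generator.pt").eval()

def deblur_frame(frame_bgr):
    """Run one blurred BGR frame through the generator and return a sharpened BGR frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    x = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)  # shape 1 x 3 x H x W
    with torch.no_grad():
        y = generator(x).squeeze(0).clamp(0.0, 1.0)
    out = (y.permute(1, 2, 0).contiguous().numpy() * 255.0).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_RGB2BGR)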
S2, setting the maximum frame length of the behavior recognition algorithm to X, inputting the image frame data {f1, f2, f3, ..., fN} from step S1, and defaulting the prior knowledge of f1 to r0, where r0 is null.
S3, based on step S2, inputting the image frame data f1 and r0 into the target tracking algorithm to obtain an inference result r1 of the image frame data f1.
S4, based on step S3, inputting the image frame data f2 and the inference result r1 of the image frame data f1 into the target tracking algorithm to obtain an inference result r2 of the image frame data f2; the recognition result of the previous frame image is used as prior knowledge for recognizing the next frame image, which greatly reduces the processing time of the next frame and improves the speed of behavior recognition over the whole video.
S5, repeating step S4, sequentially inputting each single frame of image frame data together with the inference result of the previous frame into the target tracking algorithm to obtain the inference result of that frame, and finally obtaining an image tracking result R = {r1, r2, r3, ..., rN}.
S6, based on step S5, when the length of R is larger than X, inputting the first X results {r1, r2, r3, ..., rX} of R into the behavior recognition algorithm to obtain a behavior recognition result P, and recording and storing the result P; judging personnel behaviors from the image information of consecutive frames greatly improves accuracy.
S7, removing the first inference result r1 from R, and repeating step S6 (the overall loop of steps S2-S7 is sketched below).
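Steps S2-S7 can be read as a sliding-window loop over the tracking results. The sketch below is a minimal illustration under that reading; track and recognize are placeholders for the target tracking and behavior recognition algorithms, which the description leaves unspecified.

def run_pipeline(frames, track, recognize, max_len_x):
    """Sliding-window reading of steps S2-S7.

    Assumes:
      - track(frame, prior) returns the tracking inference result for one frame,
        where `prior` is the previous frame's result (r0 = None for f1);
      - recognize(window) returns a behavior recognition result P for a list of
        X consecutive tracking results.
    """
    results_p = []
    window = []      # R, the rolling list of image tracking results
    prior = None     # r0 defaults to null (step S2)
    for frame in frames:
        prior = track(frame, prior)   # S3-S5: previous result serves as prior knowledge
        window.append(prior)
        if len(window) > max_len_x:   # S6: length of R exceeds X
            p = recognize(window[:max_len_x])
            results_p.append(p)       # record and store P
            window.pop(0)             # S7: remove the first inference result from R
    return results_p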
Example 2
The invention provides a behavior recognition device based on target tracking, characterized by comprising a processor, a memory, and a control program stored on said memory and executable on said processor, said control program, when executed by said processor, implementing the steps of embodiment 1.
Example 3
The present invention provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, carrying out the steps of embodiment 1.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A behavior recognition method based on target tracking, characterized by comprising the following steps:
S1, collecting continuous images, and decoding the collected continuous images into image frame data {f1, f2, f3, ..., fN};
S2, setting the maximum frame length of the behavior recognition algorithm to X, and inputting the image frame data {f1, f2, f3, ..., fN} from step S1;
S3, based on step S2, inputting the image frame data f1 into a target tracking algorithm to obtain an inference result r1 of the image frame data f1;
S4, based on step S3, inputting the image frame data f2 and the inference result r1 of the image frame data f1 into the target tracking algorithm to obtain an inference result r2 of the image frame data f2;
S5, repeating step S4, sequentially inputting each single frame of image frame data together with the inference result of the previous frame into the target tracking algorithm to obtain the inference result of that frame, and finally obtaining an image tracking result R = {r1, r2, r3, ..., rN};
S6, based on step S5, when the length of R is larger than X, inputting the first X results {r1, r2, r3, ..., rX} of R into the behavior recognition algorithm to obtain a behavior recognition result P, and recording and storing the result P;
S7, removing the first inference result r1 from R, and repeating step S6.
2. The behavior recognition method based on target tracking as claimed in claim 1, wherein: in step S1, after image frame data is acquired, image blur degree determination is performed on the image frame.
3. The behavior recognition method based on target tracking according to claim 2, characterized in that: the image blur degree judging method comprises the following steps:
Graying and Laplacian filtering: converting the RGB color image into a grayscale image and filtering it with a Laplacian operator to preprocess the image;
Variance calculation: the more severely blurred an image is, the lower its variance, while a sharp image has a higher variance; when the variance is less than the threshold 200, the image is judged to be a blurred image.
4. The behavior recognition method based on target tracking according to claim 3, characterized in that: when an image is judged to be a blurred image, it needs to be checked against the previous frame to prevent misjudgment, the specific method being as follows:
when the previous frame is blurred, the current frame is judged to be non-blurred if the variance ratio of the current frame to the previous frame is greater than the threshold 5;
when the previous frame is not blurred, the current frame is judged to be non-blurred if the variance ratio of the current frame to the previous frame is greater than the threshold 0.3;
otherwise, the current frame is judged to be blurred.
5. The behavior recognition method based on target tracking according to claim 2, characterized in that: image deblurring is performed on the blurred image frames using a GAN-based image deblurring deep learning network, and the data set for training the network comprises one or more of the Kohler standard data set, the GOPRO data set and an infrared blurred-sharp image pair data set.
6. The behavior recognition method based on target tracking according to claim 1, characterized in that: in step S2, the a priori knowledge of f1 is defaulted to r0, and r0 is null.
7. The behavior recognition method based on target tracking according to claim 1, characterized in that: the images collected in step S1 are infrared or non-infrared video resources.
8. A behavior recognition device based on target tracking, characterized by: comprising a processor, a memory and a control program stored on said memory and executable on said processor, said control program realizing the steps of any of claims 1-6 when executed by said processor.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of any one of claims 1-6.
CN202211456771.2A 2022-11-21 2022-11-21 Behavior recognition method and device based on target tracking and storage medium Pending CN115731516A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211456771.2A CN115731516A (en) 2022-11-21 2022-11-21 Behavior recognition method and device based on target tracking and storage medium


Publications (1)

Publication Number Publication Date
CN115731516A true CN115731516A (en) 2023-03-03

Family

ID=85296862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211456771.2A Pending CN115731516A (en) 2022-11-21 2022-11-21 Behavior recognition method and device based on target tracking and storage medium

Country Status (1)

Country Link
CN (1) CN115731516A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109478333A (en) * 2016-09-30 2019-03-15 富士通株式会社 Object detection method, device and image processing equipment
CN112241969A (en) * 2020-04-28 2021-01-19 北京新能源汽车技术创新中心有限公司 Target detection tracking method and device based on traffic monitoring video and storage medium
CN111985375A (en) * 2020-08-12 2020-11-24 华中科技大学 Visual target tracking self-adaptive template fusion method
CN112446436A (en) * 2020-12-11 2021-03-05 浙江大学 Anti-fuzzy unmanned vehicle multi-target tracking method based on generation countermeasure network
CN112767446A (en) * 2021-01-22 2021-05-07 西安电子科技大学 Image tracking system for improving target tracking accuracy of infrared image tracking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
梁玮 et al.: "Computer Vision" (计算机视觉) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination