CN117291951A - Multi-human-body posture tracking method based on human body key points - Google Patents

Multi-human-body posture tracking method based on human body key points

Info

Publication number
CN117291951A
Authority
CN
China
Prior art keywords
human body
tracking
target
key point
human
Prior art date
Legal status
Pending
Application number
CN202311327579.8A
Other languages
Chinese (zh)
Inventor
吴文平
李佳乾
刘淑敏
Current Assignee
Homwee Technology Co ltd
Original Assignee
Homwee Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Homwee Technology Co ltd filed Critical Homwee Technology Co ltd
Priority to CN202311327579.8A priority Critical patent/CN117291951A/en
Publication of CN117291951A publication Critical patent/CN117291951A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30196: Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-human-body posture tracking method based on human body key points. Human bodies are detected in the video frame; target association matching is performed between the detected human bodies and the tracked targets; initial new targets that fail to match are screened by human body key point tracking and pseudo targets are deleted, while real human bodies and successfully matched human bodies are tracked by their key points. The human body tracking region is expanded, the coordinates and confidences of the human body key points are obtained, cross-occlusion between human bodies is judged to eliminate repeated tracking, lost tracking targets are deleted, and the position information and key point information of each human body tracking target are output; this is executed in a loop until tracking of the human body targets is completed. The invention is based on a lightweight human body key point MOT tracking framework that can effectively track multiple human body targets; the framework has low dependence on human body detection and relies mainly on lightweight human body key point tracking. The model offers fast inference, small fluctuation of the human body tracking box, strong tracking stability, the ability to output human body posture estimation information, and other advantages.

Description

Multi-human-body posture tracking method based on human body key points
Technical Field
The invention relates to the technical field of computer vision, and in particular to a multi-human-body posture tracking method based on human body key points.
Background
Multi-human-body target tracking detects multiple targets in a video frame and performs tracking and ID assignment for each target. It has wide application value in television scenarios such as AI care monitoring and motion assessment, motion-sensing (somatosensory) game interaction, and DPTZ stabilization, as well as in virtual reality and augmented reality. Existing multi-human-target-tracking (MOT) frameworks mainly comprise: (1) detection followed by tracking (tracking-by-detection), such as SORT/DeepSORT; (2) joint detection and tracking, such as JDE and CenterTrack; (3) attention-based methods, such as TransTrack and TrackFormer. The processing pipeline mainly comprises four steps: (1) parse the input video frame by frame; (2) obtain target detection boxes for the original video frame through a target detection network; (3) extract features (motion or semantic features) of the detected target boxes and compute the similarity between consecutive video frames; (4) perform data association to match the target boxes with the corresponding tracks and IDs. Although these methods can realize multi-human-body target tracking, they have the following defects: 1. They rely heavily on the human body detection algorithm, human body tracking is slow, the human body tracking box fluctuates strongly, tracking stability is poor, and targets are easily lost. 2. The multi-human-body target tracking framework is complex to implement, can only output the human body target position, and cannot obtain human body posture estimation information.
Disclosure of Invention
The invention aims to provide a multi-human-body posture tracking method based on human body key points, so as to solve the problems that multi-human-body posture tracking methods in the prior art are slow, have poor tracking stability, lose targets easily, can only output human body target positions, and cannot obtain human body posture estimation information.
The invention solves these problems through the following technical solution:
a multi-human body posture tracking method based on human body key points comprises the following steps:
step S1, detecting human bodies in the video frame;
step S2, performing target association matching between the detected human bodies and the tracked targets; if the matching is successful, jumping to step S4; if the matching is unsuccessful, an initial new target appears and the method jumps to step S3;
step S3, performing human body key point tracking screening on the initial new target; deleting the initial new target if it is judged to be a pseudo target, and jumping to step S4 if it is judged to be a real human body;
S4, expanding the human body tracking region;
S5, obtaining the image data of each human body tracking region, scaling it and inputting it into a lightweight human body key point model for inference; obtaining the coordinate position and confidence of each human body key point from the inference result of the lightweight human body key point model; and calculating, from the coordinate positions of the human body key points, the minimum circumscribed rectangular box of the human body key points, the human body tracking expansion region and the human body confidence, where the human body confidence is the average of the human body key point confidences;
according to the minimum circumscribed rectangular box of each human body's key points, calculating the human body sizes to judge whether the cross-occlusion between human bodies exceeds a first threshold; if it does, retaining the target with the larger human body area to avoid repeatedly tracking the same human body;
S6, according to the human body tracking expansion region, judging whether the human body target exceeds the boundary of the video frame, whether the human body target area is smaller than a preset value, and whether the human body confidence of consecutive frames is lower than a second threshold; if any one of these conditions is met, the tracking target is judged to be lost and deleted; otherwise, the position information and key point information of the tracked human body target are output; if the video frame interval is greater than a third threshold, steps S1 to S6 are repeated, otherwise steps S4 to S6 are repeated, until tracking of the human body targets is completed.
Further, in step S2, IoU calculation and Hungarian algorithm matching are performed between the M detected human body targets and the N tracked human body targets.
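For illustration, a minimal Python sketch of this association step is given below. It assumes boxes in [x, y, w, h] form and uses scipy's linear_sum_assignment for the Hungarian matching; the iou_gate value of 0.3 is an assumed gating threshold, not a value taken from the disclosure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_matrix(dets, tracks):
    """IoU between M detected boxes and N tracked boxes, both given as [x, y, w, h]."""
    dets, tracks = np.asarray(dets, float), np.asarray(tracks, float)
    ious = np.zeros((len(dets), len(tracks)))
    for i, (xd, yd, wd, hd) in enumerate(dets):
        for j, (xt, yt, wt, ht) in enumerate(tracks):
            ix = max(0.0, min(xd + wd, xt + wt) - max(xd, xt))   # intersection width
            iy = max(0.0, min(yd + hd, yt + ht) - max(yd, yt))   # intersection height
            inter = ix * iy
            union = wd * hd + wt * ht - inter
            ious[i, j] = inter / union if union > 0 else 0.0
    return ious

def associate(dets, tracks, iou_gate=0.3):
    """Hungarian matching on -IoU; returns matched (det, track) index pairs and unmatched detections."""
    if len(dets) == 0 or len(tracks) == 0:
        return [], list(range(len(dets)))
    ious = iou_matrix(dets, tracks)
    rows, cols = linear_sum_assignment(-ious)        # maximize total IoU
    matches = [(i, j) for i, j in zip(rows, cols) if ious[i, j] >= iou_gate]
    matched_dets = {i for i, _ in matches}
    unmatched = [i for i in range(len(dets)) if i not in matched_dets]
    return matches, unmatched
```

Detections left in `unmatched` correspond to the initial new targets that are then handled by step S3.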
Further, step S3 specifically includes:
step S31, expanding the human body tracking region;
S32, acquiring the image data of each human body tracking region, scaling it and inputting it into the lightweight human body key point model for inference, and obtaining the coordinate position of each human body key point and the human body confidence from the inference result of the lightweight human body key point model; if the human body confidence of K consecutive frames is greater than a fourth threshold, the initial new target is a real human body, otherwise the initial new target is judged to be a pseudo target.
Further, expanding the human body tracking region specifically includes:
converting the human body detection rectangular box [x_d, y_d, w_d, h_d] into the initial center and radius of human body tracking, and at the same time expanding the tracking radius by a multiple so that the human body tracking expansion region ROI[cx_t, cy_t, scale_t*cr_t] can cover the whole human body; the expansion is calculated as follows:
cx_t = x_d + w_d/2, cy_t = y_d + h_d/2
cr_t = sqrt((w_d/2)^2 + (h_d/2)^2)
scale_t = 1.25
where x_d is the x-axis starting coordinate of the upper-left corner of the human body detection rectangular box; y_d is the y-axis starting coordinate of the upper-left corner of the human body detection rectangular box; w_d is the width of the human body detection rectangular box; h_d is the height of the human body detection rectangular box; cx_t is the x coordinate of the human body tracking expansion center; cy_t is the y coordinate of the human body tracking expansion center; cr_t is the human body tracking expansion radius; and scale_t is the human body tracking expansion radius multiple.
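The expansion formula above maps directly to code; the sketch below is a straightforward transcription under the assumption that the detection box is given in pixel units.

```python
import math

def expand_tracking_region(x_d, y_d, w_d, h_d, scale_t=1.25):
    """Convert a detection box [x_d, y_d, w_d, h_d] into the expanded tracking ROI
    (cx_t, cy_t, scale_t * cr_t) so that the circle covers the whole human body."""
    cx_t = x_d + w_d / 2.0                                   # box center x
    cy_t = y_d + h_d / 2.0                                   # box center y
    cr_t = math.sqrt((w_d / 2.0) ** 2 + (h_d / 2.0) ** 2)    # half-diagonal as base radius
    return cx_t, cy_t, scale_t * cr_t

# Example: a 100x200 detection box at (50, 40)
# expand_tracking_region(50, 40, 100, 200) -> (100.0, 140.0, ~139.75)
```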
Further, according to the model inference result, the coordinate positions [kx, ky, ks]_14 of the human body key points and the human body confidence score_t are obtained, and the minimum circumscribed rectangular box [x_t, y_t, w_t, h_t]_m of the human body tracking target is calculated from the human body key point coordinates; to ensure stable output of the minimum circumscribed rectangular box, the key point confidences ks_i participating in the calculation are required to be greater than 0.5. At the same time, the human body tracking expansion region ROI[cx_t, cy_t, scale_t*cr_t] and the human body confidence score_t are calculated from the human body key point coordinates as follows:
sx_t = kx_tophead, sy_t = ky_tophead
cx_t = (kx_leftshoulder + kx_rightshoulder + 4*kx_lefthip + 4*kx_righthip)/10
cy_t = (ky_leftshoulder + ky_rightshoulder + 4*ky_lefthip + 4*ky_righthip)/10
cr_t = sqrt((cx_t - sx_t)^2 + (cy_t - sy_t)^2)
scale_t = 1.25
score_t = ∑ ks_i / 14
where kx_tophead and ky_tophead are the x and y coordinates of the head-top key point; sx_t and sy_t are the x and y coordinates of the head-top key point used as the tracking start point; kx_leftshoulder and ky_leftshoulder are the x and y coordinates of the left shoulder key point; kx_rightshoulder and ky_rightshoulder are the x and y coordinates of the right shoulder key point; kx_lefthip and ky_lefthip are the x and y coordinates of the left hip key point; and kx_righthip and ky_righthip are the x and y coordinates of the right hip key point.
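A sketch of these computations is shown below; it assumes the model returns the 14 key points as (kx, ky, ks) triples and that the head-top, shoulder and hip key points sit at the illustrative indices named in the code, since the disclosure names the key points but not their index layout.

```python
import math
import numpy as np

# Assumed index layout for the 14 keypoints (illustrative only).
TOP_HEAD, L_SHOULDER, R_SHOULDER, L_HIP, R_HIP = 0, 1, 2, 7, 8

def keypoint_outputs(kpts, ks_thresh=0.5, scale_t=1.25):
    """kpts: (14, 3) array of (kx, ky, ks). Returns the minimum circumscribed box,
    the expanded tracking ROI and the human body confidence score_t."""
    kpts = np.asarray(kpts, float)
    score_t = kpts[:, 2].mean()                          # score_t = sum(ks_i) / 14

    good = kpts[kpts[:, 2] > ks_thresh]                  # only confident keypoints enter the box
    if len(good):
        x_min, y_min = good[:, 0].min(), good[:, 1].min()
        x_max, y_max = good[:, 0].max(), good[:, 1].max()
        bbox = (x_min, y_min, x_max - x_min, y_max - y_min)   # [x_t, y_t, w_t, h_t]
    else:
        bbox = None

    sx_t, sy_t = kpts[TOP_HEAD, 0], kpts[TOP_HEAD, 1]    # head-top keypoint
    cx_t = (kpts[L_SHOULDER, 0] + kpts[R_SHOULDER, 0]
            + 4 * kpts[L_HIP, 0] + 4 * kpts[R_HIP, 0]) / 10
    cy_t = (kpts[L_SHOULDER, 1] + kpts[R_SHOULDER, 1]
            + 4 * kpts[L_HIP, 1] + 4 * kpts[R_HIP, 1]) / 10
    cr_t = math.hypot(cx_t - sx_t, cy_t - sy_t)          # distance head-top to weighted center
    roi = (cx_t, cy_t, scale_t * cr_t)
    return bbox, roi, score_t
```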
Further, step S6 specifically includes:
according to the human body tracking expansion region ROI[cx_t, cy_t, scale_t*cr_t], judging whether the center of the human body exceeds the boundary of the video frame, i.e. cx_t < 0, cy_t < 0, cx_t > fw or cy_t > fh; or whether the human body area satisfies pi*cr_t*cr_t < 300; or whether the human body confidence score_t of K consecutive frames is less than 0.5; if any one of these conditions is met, the tracking target is judged to be lost and the corresponding tracking target is deleted; otherwise, the minimum circumscribed rectangular box [x_t, y_t, w_t, h_t]_m of the human body tracking target and the human body key point coordinates [kx, ky, ks]_14 are output, where fw is the video frame image width and fh is the video frame image height.
If the video frame interval is greater than the threshold Th = 20, steps S1 to S6 are repeated to re-run human body detection; otherwise, steps S4 to S6 are repeated.
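The deletion and re-detection conditions can be sketched as follows; fw and fh are the frame width and height, and the 300-pixel area bound, 0.5 confidence bound, K consecutive frames and Th = 20 follow the values stated above.

```python
import math

def should_delete(cx_t, cy_t, cr_t, low_conf_streak, fw, fh, min_area=300.0, k=5):
    """True if the tracked human body should be deleted (judged lost)."""
    out_of_frame = cx_t < 0 or cy_t < 0 or cx_t > fw or cy_t > fh
    too_small = math.pi * cr_t * cr_t < min_area     # circular tracking area below the bound
    low_conf = low_conf_streak >= k                  # score_t < 0.5 for K consecutive frames
    return out_of_frame or too_small or low_conf

def need_redetection(frames_since_detection, th=20):
    """True if the detector (steps S1 to S3) should be re-run; otherwise loop S4 to S6 only."""
    return frames_since_detection > th
```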
Compared with the prior art, the invention has the following advantages:
the invention is based on a lightweight human body key point MOT tracking frame, can effectively track multiple human body targets, has lower degree of human body detection dependence on an algorithm frame, and mainly tracks lightweight human body key points; the model has the advantages of high reasoning speed, small fluctuation of the human body tracking frame, strong tracking stability, capability of outputting human body posture estimation information and the like.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples, but embodiments of the present invention are not limited thereto.
Examples:
Referring to FIG. 1, a multi-human-body posture tracking method based on human body key points comprises the following steps:
step 1: a human body detection algorithm is used to detect a human body in a video picture.
Step 2: performing object association matching on the detected human body in the video picture to detect M human body objects [ x, y, w, h ]] i And tracked N human targets [ x ] t ,y t ,w t ,h t ] j Proceeding with IOU i,j And (3) calculating and matching by using a Hungary algorithm, wherein the matching result of the human body detection target is divided into two cases, namely successful matching or unsuccessful matching, and judging an initial new target for an unsuccessful person. And if the matching is successful, the human body key point tracking is performed.
Step 3: if the initial new target is judged to exist, human body key point tracking screening is carried out on the initial new target so as to eliminate false targets possibly generated by human body false detection. The filtering condition is that the human confidence score of the continuous multi-frame is larger than a threshold, for example, the human confidence score of the continuous K > =5 frames is set t If the value is more than 0.5, judging the real human body target, and continuously executing the human body key to the real human body targetAnd (5) tracking points, otherwise, judging that the false target is deleted, wherein the human confidence coefficient is a human key point confidence coefficient average value.
The lightweight human body key point tracking method comprises the following steps:
step a: firstly, human body detection rectangular frame [ x ] d ,y d ,w d ,h d ]Translating into the initial center and radius of human body tracking, and simultaneously performing multiple expansion of tracking radius so that the expanded region can cover the whole human body ROI region ROI [ cx ] t ,cy t ,scale t *cr t ]The extended calculation formula is as follows:
cx t =x d +w d /2,cy t =y d +h d /2,cr t =sqrt((w d /2) 2 +(h d /2) 2 ),scale t =1.25
step b: next, each human ROI [ cx ] is obtained t ,cy t ,scale t *cr t ]The regional image data is scaled to the input size of the model, the acquired image data is input into a lightweight human body key point model for reasoning operation, and 14-3 dimension human body key point information is output.
Step c: obtain the coordinate position and confidence [kx, ky, ks]_14 of each human body key point from the model inference result, and calculate the minimum circumscribed rectangular box [x_t, y_t, w_t, h_t]_m of the human body from the key points. To ensure stable output of the human body tracking box, the key point confidences used in computing the minimum circumscribed rectangle are required to satisfy ks_i > 0.5. At the same time, the human body tracking expansion region ROI[cx_t, cy_t, scale_t*cr_t] and the human body confidence score_t are calculated from the human body key point information as follows:
sx_t = kx_tophead, sy_t = ky_tophead
cx_t = (kx_leftshoulder + kx_rightshoulder + 4*kx_lefthip + 4*kx_righthip)/10
cy_t = (ky_leftshoulder + ky_rightshoulder + 4*ky_lefthip + 4*ky_righthip)/10
cr_t = sqrt((cx_t - sx_t)^2 + (cy_t - sy_t)^2)
scale_t = 1.25
score_t = ∑ ks_i / 14
step d: according to minimum external rectangular frame [ x ] of key point of each human body t ,y t ,w t ,h t ] m Calculate IOU between human bodies i,j The size of the IOU is used for judging whether serious cross shielding exists between human bodies, if so, the IOU is used for detecting whether the cross shielding exists between the human bodies i,j And judging that serious cross shielding exists between the human bodies if the ratio is more than 0.75.
Step e: if severe cross-occlusion exists between human bodies, the target with the larger human body area is retained, which avoids repeated tracking of the same human body.
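Steps d and e together can be sketched as follows, with each track represented by its keypoint-derived box [x_t, y_t, w_t, h_t]; the dictionary representation is illustrative only.

```python
def suppress_occluded(tracks, iou_thresh=0.75):
    """tracks: list of dicts with 'bbox' = [x_t, y_t, w_t, h_t] from the keypoint box.
    When two boxes overlap with IoU > iou_thresh, only the larger-area body is kept."""
    def iou(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    keep = []
    # Largest bodies first, so an occluded smaller body is dropped rather than the larger one.
    for t in sorted(tracks, key=lambda t: t["bbox"][2] * t["bbox"][3], reverse=True):
        if all(iou(t["bbox"], k["bbox"]) <= iou_thresh for k in keep):
            keep.append(t)
    return keep
```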
Step f: according to the human body tracking expansion region ROI[cx_t, cy_t, scale_t*cr_t], judge whether the center of the human body exceeds the boundary of the video frame, i.e. cx_t < 0, cy_t < 0, cx_t > fw or cy_t > fh; or whether the human body area satisfies pi*cr_t*cr_t < 300; or whether the human body confidence score_t of K consecutive frames is less than 0.5. If any one of these conditions is met, the tracking target is judged to be lost and the corresponding tracking target is deleted; otherwise, the minimum circumscribed rectangular box [x_t, y_t, w_t, h_t]_m of the human body tracking target and the human body key point coordinates [kx, ky, ks]_14 are output, where fw is the video frame image width and fh is the video frame image height.
Step g: if the video frame interval is greater than the threshold Th = 20, return to step 1 to re-run human body detection; otherwise, repeat steps a to g.
Although the invention has been described herein with reference to the above illustrative embodiments, these embodiments are merely preferred embodiments of the present invention and the invention is not limited to them; it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that fall within the scope and spirit of the principles of this disclosure.

Claims (6)

1. A multi-human-body posture tracking method based on human body key points, characterized by comprising the following steps:
step S1, detecting human bodies in the video frame;
step S2, performing target association matching between the detected human bodies and the tracked targets; if the matching is successful, jumping to step S4; if the matching is unsuccessful, an initial new target appears and the method jumps to step S3;
step S3, performing human body key point tracking screening on the initial new target; deleting the initial new target if it is judged to be a pseudo target, and jumping to step S4 if it is judged to be a real human body;
S4, expanding the human body tracking region;
S5, obtaining the image data of each human body tracking region, scaling it and inputting it into a lightweight human body key point model for inference; obtaining the coordinate position and confidence of each human body key point from the inference result of the lightweight human body key point model; and calculating, from the coordinate positions of the human body key points, the minimum circumscribed rectangular box of the human body key points, the human body tracking expansion region and the human body confidence, where the human body confidence is the average of the human body key point confidences;
according to the minimum circumscribed rectangular box of each human body's key points, calculating the human body sizes to judge whether the cross-occlusion between human bodies exceeds a first threshold; if it does, retaining the target with the larger human body area to avoid repeatedly tracking the same human body;
S6, according to the human body tracking expansion region, judging whether the human body target exceeds the boundary of the video frame, whether the human body target area is smaller than a preset value, and whether the human body confidence of consecutive frames is lower than a second threshold; if any one of these conditions is met, the tracking target is judged to be lost and deleted; otherwise, the position information and key point information of the tracked human body target are output; if the video frame interval is greater than a third threshold, steps S1 to S6 are repeated, otherwise steps S4 to S6 are repeated, until tracking of the human body targets is completed.
2. The multi-human-body posture tracking method based on human body key points according to claim 1, wherein in step S2, IoU calculation and Hungarian algorithm matching are performed between the M detected human body targets and the N tracked human body targets.
3. The multi-human-body posture tracking method based on human body key points according to claim 1 or 2, wherein step S3 specifically comprises the following steps:
step S31, expanding the human body tracking region;
S32, acquiring the image data of each human body tracking region, scaling it and inputting it into the lightweight human body key point model for inference, and obtaining the coordinate position of each human body key point and the human body confidence from the inference result of the lightweight human body key point model; if the human body confidence of K consecutive frames is greater than a fourth threshold, the initial new target is a real human body, otherwise the initial new target is judged to be a pseudo target.
4. The multi-human-body posture tracking method based on human body key points according to claim 3, wherein expanding the human body tracking region specifically comprises:
converting the human body detection rectangular box [x_d, y_d, w_d, h_d] into the initial center and radius of the human body tracking target, and at the same time expanding the tracking radius by a multiple so that the human body tracking expansion region ROI[cx_t, cy_t, scale_t*cr_t] covers the whole human body; the expansion is calculated as follows:
cx_t = x_d + w_d/2, cy_t = y_d + h_d/2
cr_t = sqrt((w_d/2)^2 + (h_d/2)^2)
scale_t = 1.25
where x_d is the x-axis starting coordinate of the upper-left corner of the human body detection rectangular box; y_d is the y-axis starting coordinate of the upper-left corner of the human body detection rectangular box; w_d is the width of the human body detection rectangular box; h_d is the height of the human body detection rectangular box; cx_t is the x coordinate of the human body tracking expansion center; cy_t is the y coordinate of the human body tracking expansion center; cr_t is the human body tracking expansion radius; and scale_t is the human body tracking expansion radius multiple.
5. The multi-human-body posture tracking method based on human body key points according to claim 4, wherein in step S5, the coordinate positions [kx, ky, ks]_14 of the human body key points and the human body confidence score_t are obtained according to the model inference result, and the minimum circumscribed rectangular box [x_t, y_t, w_t, h_t]_m of the human body tracking target is calculated from the human body key point coordinates; to ensure stable output of the minimum circumscribed rectangular box, the key point confidences ks_i participating in the calculation are required to be greater than 0.5; at the same time, the human body tracking expansion region ROI[cx_t, cy_t, scale_t*cr_t] and the human body confidence score_t are calculated from the human body key point coordinates as follows:
sx_t = kx_tophead, sy_t = ky_tophead
cx_t = (kx_leftshoulder + kx_rightshoulder + 4*kx_lefthip + 4*kx_righthip)/10
cy_t = (ky_leftshoulder + ky_rightshoulder + 4*ky_lefthip + 4*ky_righthip)/10
cr_t = sqrt((cx_t - sx_t)^2 + (cy_t - sy_t)^2)
scale_t = 1.25
score_t = ∑ ks_i / 14
where kx_tophead and ky_tophead are the x and y coordinates of the head-top key point; sx_t and sy_t are the x and y coordinates of the head-top key point used as the tracking start point; kx_leftshoulder and ky_leftshoulder are the x and y coordinates of the left shoulder key point; kx_rightshoulder and ky_rightshoulder are the x and y coordinates of the right shoulder key point; kx_lefthip and ky_lefthip are the x and y coordinates of the left hip key point; and kx_righthip and ky_righthip are the x and y coordinates of the right hip key point.
6. The multi-human-body posture tracking method based on human body key points according to claim 5, wherein step S6 specifically comprises:
according to the human body tracking expansion region ROI[cx_t, cy_t, scale_t*cr_t], judging whether the center of the human body exceeds the boundary of the video frame, i.e. cx_t < 0, cy_t < 0, cx_t > fw or cy_t > fh; or whether the human body area satisfies pi*cr_t*cr_t < 300; or whether the human body confidence score_t of K consecutive frames is less than 0.5; if any one of these conditions is met, the tracking target is judged to be lost and the corresponding tracking target is deleted; otherwise, the minimum circumscribed rectangular box [x_t, y_t, w_t, h_t]_m of the human body tracking target and the human body key point coordinates [kx, ky, ks]_14 are output, where fw is the video frame image width and fh is the video frame image height.
If the video frame interval is greater than the threshold Th = 20, steps S1 to S6 are repeated to re-run human body detection; otherwise, steps S4 to S6 are repeated.
CN202311327579.8A 2023-10-13 2023-10-13 Multi-human-body posture tracking method based on human body key points Pending CN117291951A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311327579.8A CN117291951A (en) 2023-10-13 2023-10-13 Multi-human-body posture tracking method based on human body key points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311327579.8A CN117291951A (en) 2023-10-13 2023-10-13 Multi-human-body posture tracking method based on human body key points

Publications (1)

Publication Number Publication Date
CN117291951A true CN117291951A (en) 2023-12-26

Family

ID=89244358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311327579.8A Pending CN117291951A (en) 2023-10-13 2023-10-13 Multi-human-body posture tracking method based on human body key points

Country Status (1)

Country Link
CN (1) CN117291951A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117789040A (en) * 2024-02-28 2024-03-29 华南农业大学 Tea bud leaf posture detection method under disturbance state
CN117789040B (en) * 2024-02-28 2024-05-10 华南农业大学 Tea bud leaf posture detection method under disturbance state


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination