CN109918997B - Pedestrian target tracking method based on multi-instance learning


Info

Publication number
CN109918997B
Authority
CN
China
Prior art keywords
feature
target
graph
block
color
Prior art date
Legal status
Active
Application number
CN201910056927.XA
Other languages
Chinese (zh)
Other versions
CN109918997A (en)
Inventor
连国云
孙宏伟
Current Assignee
Shenzhen Polytechnic
Original Assignee
Shenzhen Polytechnic
Priority date
Filing date
Publication date
Application filed by Shenzhen Polytechnic
Priority to CN201910056927.XA
Publication of CN109918997A
Application granted
Publication of CN109918997B


Abstract

The invention discloses a pedestrian target tracking method based on multi-instance learning, relates to the field of target tracking methods, and aims to solve the prior-art problem that a target model built from the first frame alone is prone to tracking failure when the target's appearance changes greatly. The method comprises the following steps: S1, intercepting image data; S2, decomposing the image data into a plurality of block graphs; S3, extracting the graph features of the block graphs, including facial features, action features, color features and shape features; S4, comparing the graph features of the block graphs with the features of the pedestrian target; S5, constructing a classifier; S6, calculating the coincidence score of the examples in each classification pool of the classifier; S7, carrying out weight calculation on the coincidence scores; S8, calculating and tracking the pedestrian target according to the weights. Step S5, constructing the classifier, comprises: S51, stripping the color features of the block graphs; S52, establishing a color coincidence-degree classification pool; S53, stripping the shape features of the block graphs; and S54, establishing a shape coincidence-degree classification pool.

Description

Pedestrian target tracking method based on multi-instance learning
Technical Field
The invention relates to the field of target tracking methods, in particular to a pedestrian target tracking method based on multi-instance learning.
Background
In machine learning, multi-instance learning evolved from supervised learning. Instead of receiving a series of individually labeled examples, the learner receives a series of labeled "bags", each containing a number of examples: a bag is labeled negative when all of its examples are negative, and positive when it contains at least one positive example. Given a series of labeled bags, the machine tries to (1) induce a concept that labels individual examples correctly, and (2) learn how to label bags beyond those seen during training.
The common target model is static: it is built from the first frame's information at the start of tracking, and the target is then searched for in subsequent frames against that fixed model. When the target's appearance changes greatly, or background information interferes with the model, tracking easily fails. The market therefore urgently needs a pedestrian target tracking method based on multi-example learning to solve this problem.
Disclosure of Invention
The invention aims to provide a pedestrian target tracking method based on multi-instance learning, so as to solve the problem raised in the background art: a target model built from the first frame alone is prone to tracking failure when the target's appearance changes greatly.
In order to achieve the purpose, the invention provides the following technical scheme: a pedestrian target tracking method based on multi-example learning comprises the following steps:
s1, intercepting image data;
s2, decomposing the image data into a plurality of block graphs;
s3, extracting the graph characteristics of the block graph;
s4, comparing the graph characteristics of the block graphs with the characteristics of the pedestrian target;
s5, constructing a classifier;
s6, calculating the contact ratio score of the examples in each classification pool of the classifier;
s7, carrying out weight calculation on the contact ratio fraction;
and S8, calculating and tracking the pedestrian target according to the weight.
Preferably, the step S5 of constructing the classifier comprises the following steps:
S51, stripping the color features of the block graphs;
S52, establishing a color coincidence-degree classification pool;
S53, stripping the shape features of the block graphs;
S54, establishing a shape coincidence-degree classification pool;
S55, identifying and removing the environmental features of the block graphs;
S56, stripping the facial features of the persons in the block graphs;
S57, establishing a facial-feature coincidence-degree classification pool;
S58, stripping the action features of the persons in the block graphs;
S59, making an action-feature dynamic graph;
and S510, establishing an action-feature coincidence-degree classification pool.
Preferably, in step S2, the number of block graphs the image data is decomposed into depends on the resolution of the image and the level of the associated pixels; the higher the resolution of the image data, the more block graphs are produced.
Preferably, in step S4, the pedestrian target is manually selected from the first frame of the video and serves as the initial data for target tracking; an initial standard for coincidence-degree calculation is established from this target. Target tracking in the second frame computes scores against the first frame's initial standard and uses the second frame's feature quantities to establish a secondary standard; target tracking in the third frame computes scores against the standard established from the second frame; and so on until the last frame is reached and the comparison ends.
Preferably, in step S5, the classifier comprises a color coincidence-degree classification pool, a shape coincidence-degree classification pool, a facial-feature coincidence-degree classification pool and an action-feature coincidence-degree classification pool.
Preferably, in step S6, the color feature is denoted y: with X_iy the pixel quantity of the target graph's feature standard and X_jy the pixel quantity of the compared block graph's feature value, a color coincidence score W_y is computed (the score formula itself appears only as an image in the original document). Likewise, the shape feature is denoted q, with pixel quantities X_iq and X_jq and coincidence score W_q; the facial feature is denoted s, with pixel quantities X_is and X_js and coincidence score W_s; and the action feature is denoted z, with pixel quantities X_iz and X_jz and coincidence score W_z.
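A minimal sketch of the per-feature coincidence score of step S6 follows. Since the actual formula is rendered only as an image in the original patent, the ratio form used here (min/max of the two pixel counts, giving 1.0 for a perfect match) is an assumption, as are the sample pixel counts.

```python
def coincidence_score(x_target: int, x_block: int) -> float:
    """x_target: pixel quantity of the target graph's feature standard (X_i*).
    x_block: pixel quantity of the compared block graph's feature value (X_j*).
    Assumed form: ratio of the smaller count to the larger one."""
    if x_target == 0 or x_block == 0:
        return 0.0
    return min(x_target, x_block) / max(x_target, x_block)

# One score per feature channel: color (y), shape (q), face (s), action (z).
w_y = coincidence_score(1200, 1100)  # color,  ~0.917
w_q = coincidence_score(800, 790)    # shape,  ~0.988
w_s = coincidence_score(300, 240)    # face,    0.800
w_z = coincidence_score(500, 450)    # action,  0.900
```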
Preferably, in step S7, the weight coefficient of the color feature is 20%, of the shape feature 15%, of the facial feature 35%, and of the action feature 30%, giving the weight

W = 0.20·W_y + 0.15·W_q + 0.35·W_s + 0.30·W_z.
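A sketch of the step-S7 weight calculation using the coefficients the patent states (color 20%, shape 15%, face 35%, action 30%). The exclusion of graphs whose facial or action score is zero follows the text; the function name and dictionary layout are illustrative, not from the source.

```python
WEIGHTS = {"color": 0.20, "shape": 0.15, "face": 0.35, "action": 0.30}

def block_weight(scores: dict[str, float]) -> float | None:
    # Per the patent, a block graph lacking facial or action features
    # scores 0 on that channel and is excluded from weight calculation.
    if scores["face"] == 0.0 or scores["action"] == 0.0:
        return None
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)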
Preferably, in step S8, the block graph with the largest weight value contains the pedestrian target; if weight values are equal, the pedestrian target is tracked by comparing the facial feature W_s, action feature W_z, color feature W_y and shape feature W_q in that order.
Preferably, in step S5, the environmental characteristic refers to an external environment other than a person.
Preferably, in step S6, if the block graph contains no facial features of a person, the facial-feature coincidence score is W_s = 0 and no weight calculation is performed for that graph; in step S8, if the block graph contains no action features of a person, the action-feature coincidence score is W_z = 0 and no weight calculation is performed for that graph.
Compared with the prior art, the invention has the beneficial effects that:
1. In the method, a target model is built from the first frame's image information at the start of tracking; during tracking the target is searched for in the second frame using that model, a new model is then built from the second frame's information and used to search the third frame, and so on until the last frame. Even if the target's appearance changes greatly or background information interferes with the model, the target graph from the frame immediately before the search frame is available, reducing the chance of tracking failure;
2. In the method, classification pools are established separately for facial features, action features, color features and shape features, coincidence scores are computed separately, and the coinciding parts of the scene are superposed to determine the pedestrian target, improving tracking accuracy;
3. In the invention, when the facial feature W_s, action feature W_z, color feature W_y and shape feature W_q yield equal weight values, the pedestrian target can still be tracked by the weight ordering, i.e. by comparing the facial feature W_s, action feature W_z, color feature W_y and shape feature W_q in that order.
Drawings
FIG. 1 is a flow chart of the present invention for a method of pedestrian target tracking based on multi-instance learning;
FIG. 2 is a flow chart of the present invention for comparing the graphical features of the tile pattern with the features of the pedestrian object.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Referring to fig. 1-2, an embodiment of the present invention is shown: a pedestrian target tracking method based on multi-example learning comprises the following steps:
s1, intercepting image data;
s2, decomposing the image data into a plurality of block graphs, wherein the decomposition depends on the resolution and the related pixel level of the image, and the target can be accurately positioned and tracked;
s3, extracting graphic features of the block graphics, including facial features, action features, color features and shape features;
s4, comparing the graph characteristics of the block graphs with the characteristics of the pedestrian target;
s5, constructing a classifier, and placing a color contact degree classification pool, a shape contact degree classification pool, a face feature contact degree classification pool and an action feature contact degree classification pool in the classifier, so as to facilitate subsequent weight calculation;
s6, calculating the contact ratio score of the examples in each classification pool of the classifier, respectively calculating each characteristic, and integrating to improve the tracking accuracy;
s7, carrying out weight calculation on the contact ratio score, and weighting
Figure GDA0003928150160000051
And S8, calculating and tracking the pedestrian target according to the weight.
Further, the step of constructing the classifier in the step S5 includes the following steps:
s51, stripping color characteristics of a block graph, wherein in a color image, in three color channels of R, G and B, each color channel occupies 8 bits, namely 256 colors, the three channels contain 256 colors of the power of 3, namely 1677 thousands of colors, a common color image needs 24 bits of color to express and becomes a true color, each image has one or more color channels which store information about the color in the image, and the color characteristics are that the color at the same position is extracted and compared with a target position;
s52, establishing a color coincidence degree classification pool, wherein the image blocks with high color coincidence rate are preferentially compared when the shape characteristics are compared;
s53, stripping the shape characteristics of the block graph, wherein the shape characteristics are the shape characteristics of a specific object and the shape characteristics of an overlapped combined shape, and the comparison analysis is carried out through the shape characteristics of an object and an abstract shape, so that the target position can be locked more quickly by two pairs of images because the difference between the transition and the environment of adjacent frames is not great;
s54, establishing a shape coincidence degree classification pool, and comparing two kinds of graphic blocks with high coincidence rate in shape preferentially when the facial features of the figures are compared;
s55, identifying and removing the environmental characteristics of the block graph, namely removing all environments except human bodies;
s56, stripping the facial features of the characters of the block graphs, and respectively comparing the facial features according to the positions of the five sense organs;
s57, establishing a facial feature contact ratio classification pool, respectively calculating and overlapping contact ratios of the five sense organs, and if the block graph is not the five sense organs, marking the comparison contact ratio of the current organ as 0, and overlapping;
s58, stripping character action characteristics of the block graph, namely carrying out target tracking on the pedestrian through estimation of an action value;
s59, making an action characteristic dynamic graph, estimating the action state of the action of the current frame, comparing the action contact ratio of the next frame with all estimated actions when calculating the contact ratio, and taking the value with high contact ratio as the action characteristic contact ratio score of the current frame;
and S510, establishing an action characteristic contact degree classification pool.
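A sketch of the step-S51 color-feature comparison: read the 8-bit R, G and B channels at the same positions in the block graph and the target, and count the pixels whose colors agree. NumPy, the tolerance value and the function name are assumptions; the patent only says colors at the same position are extracted and compared with the target position.

```python
import numpy as np

def matching_color_pixels(block: np.ndarray, target: np.ndarray,
                          tol: int = 8) -> int:
    """block, target: H x W x 3 uint8 arrays (24-bit true color) of equal
    shape. Returns the number of positions where all three channels agree
    within `tol` levels."""
    diff = np.abs(block.astype(np.int16) - target.astype(np.int16))
    return int(np.count_nonzero(np.all(diff <= tol, axis=-1)))
```

And a sketch of the step-S59 action-feature dynamic graph: several candidate action states are estimated for the current frame, the next frame's action is scored against every estimate, and the highest coincidence becomes the action score. The candidate list and the score callable are placeholders.

```python
def action_feature_score(next_action, estimated_actions, coincidence) -> float:
    """coincidence(a, b) -> float is the step-S6 score function; the best
    match among the estimated action states gives the frame's action score."""
    return max(coincidence(next_action, est) for est in estimated_actions)
```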
Further, in step S2, the number of block graphs the image data is decomposed into depends on the resolution of the image and the level of the associated pixels; the higher the resolution of the image data, the more block graphs are produced.
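A sketch of the step-S2 decomposition, with higher-resolution frames split into more block graphs as the patent describes. The grid heuristic (scaling the block count with the pixel count relative to 640×480) is an assumption; the patent gives no concrete rule.

```python
import numpy as np

def decompose(frame: np.ndarray, base_blocks: int = 8) -> list[np.ndarray]:
    """Split an H x W (x C) frame into a grid of block graphs; the grid grows
    finer as the pixel count grows."""
    h, w = frame.shape[:2]
    n = base_blocks * max(1, (h * w) // (640 * 480))  # finer grid when larger
    bh, bw = max(1, h // n), max(1, w // n)
    return [frame[r:r + bh, c:c + bw]
            for r in range(0, h, bh) for c in range(0, w, bw)]
```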
Further, in step S4, the pedestrian target is manually selected from the first frame of the video and serves as the initial data for target tracking; an initial standard for coincidence-degree calculation is established from this target. Target tracking in the second frame computes scores against the first frame's initial standard and uses the second frame's feature quantities to establish a secondary standard; target tracking in the third frame computes scores against the standard established from the second frame; and so on until the last frame is reached and the comparison ends.
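A sketch of this rolling standard: the manually selected target in frame 1 seeds the initial standard, and each later frame is scored against the standard rebuilt from the frame before it, so the model follows gradual appearance changes. All three callables are illustrative placeholders, not names from the source.

```python
def track(frames, initial_target, extract_features, decompose, best_block):
    """frames: sequence of images; initial_target: manually selected region
    of frames[0]. Yields the best-matching block graph per subsequent frame."""
    standard = extract_features(initial_target)   # initial standard, frame 1
    for frame in frames[1:]:
        blocks = decompose(frame)
        hit = best_block(blocks, standard)        # highest weighted score
        standard = extract_features(hit)          # secondary standard
        yield hit
```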
Further, in step S5, the classifier comprises a color coincidence-degree classification pool, a shape coincidence-degree classification pool, a facial-feature coincidence-degree classification pool and an action-feature coincidence-degree classification pool; coincidence scores are computed separately in each pool and the coinciding parts of the scene are superposed to determine the pedestrian target, improving tracking accuracy.
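A sketch of the step-S5 classifier as a holder of the four coincidence-degree classification pools. Each pool stores (block_id, score) examples so the step-S7 weighting can read them back per block; the class layout is an assumption.

```python
class PoolClassifier:
    POOLS = ("color", "shape", "face", "action")

    def __init__(self) -> None:
        self.pools: dict[str, list[tuple[int, float]]] = {
            p: [] for p in self.POOLS}

    def add(self, pool: str, block_id: int, score: float) -> None:
        self.pools[pool].append((block_id, score))

    def scores_for(self, block_id: int) -> dict[str, float]:
        # A feature absent from a pool defaults to 0.0, matching the
        # patent's rule for missing facial/action features.
        return {p: next((s for b, s in self.pools[p] if b == block_id), 0.0)
                for p in self.POOLS}
```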
Further, in step S6, the color feature is denoted y: with X_iy the pixel quantity of the target graph's feature standard and X_jy the pixel quantity of the compared block graph's feature value, a color coincidence score W_y is computed (the score formula itself appears only as an image in the original document). Likewise, the shape feature is denoted q, with pixel quantities X_iq and X_jq and coincidence score W_q; the facial feature is denoted s, with pixel quantities X_is and X_js and coincidence score W_s; and the action feature is denoted z, with pixel quantities X_iz and X_jz and coincidence score W_z.
Further, in step S7, the weight coefficient of the color feature is 20%, of the shape feature 15%, of the facial feature 35%, and of the action feature 30%, giving the weight

W = 0.20·W_y + 0.15·W_q + 0.35·W_s + 0.30·W_z.
Further, in step S8, the block graph with the largest weight value contains the pedestrian target; if weight values are equal, the pedestrian target is tracked by comparing the facial feature W_s, action feature W_z, color feature W_y and shape feature W_q in that order.
Further, in step S5, the environmental characteristics refer to the external environment other than the person.
Further, in step S6, if the block graph contains no facial features of a person, the facial-feature coincidence score is W_s = 0 and no weight calculation is performed for that graph; in step S8, if the block graph contains no action features of a person, the action-feature coincidence score is W_z = 0 and no weight calculation is performed for that graph.
The working principle is as follows: in use, a pedestrian target is manually selected from the first frame of the video as the initial data for target tracking, and an initial standard for coincidence-degree calculation is established from it. Image data is intercepted and decomposed into a plurality of block graphs, the graph features of the block graphs are extracted, and these features are compared with the features of the pedestrian target. Target tracking in the second frame computes scores against the first frame's initial standard and uses the second frame's feature quantities to establish a secondary standard; target tracking in the third frame computes scores against the standard established from the second frame; and so on until the last frame is reached and the comparison ends. Even if the target's appearance changes greatly or background information interferes with the target model, the target graph is available in the frame immediately before the search frame, reducing the probability of tracking failure.
During comparison, the color features of the block graphs are stripped and a color coincidence-degree classification pool is established: each image has one or more color channels storing its color information, the color features are the colors extracted at the same position and compared with the target position, and block graphs with a high color coincidence rate are compared first when the shape features are compared. The shape features of the block graphs are then stripped; these cover not only the shapes of concrete objects but also the shapes of overlapped combined forms, and comparison analysis is carried out on both the concrete and abstract shapes; since adjacent frames differ little in transition and environment, the two comparisons lock the target position more quickly. A shape coincidence-degree classification pool is established, and the block graphs with a high shape coincidence rate under the two comparisons are compared first when the facial features of persons are compared. The environmental features of the block graphs are identified and removed, i.e. everything outside the human body; the facial features of the persons are stripped and compared separately by the positions of the five sense organs, and a facial-feature coincidence-degree classification pool is established; if a block graph contains no facial features of a person, its facial-feature coincidence score is W_s = 0 and no weight calculation is performed for that graph. The action features of the persons are stripped from the block graphs, an action-feature dynamic graph is made, and the action state of the current frame's action is estimated; when calculating the coincidence degree, the next frame's action is compared against all estimated actions and the highest coincidence value is taken as the current frame's action-feature coincidence score; an action-feature coincidence-degree classification pool is established, and if a block graph contains no action features of a person, its action-feature coincidence score is W_z = 0 and no weight calculation is performed for that graph.
A classifier is constructed from the classification pools established respectively for the facial feature W_s, action feature W_z, color feature W_y and shape feature W_q; the coincidence score is calculated for the examples in each classification pool, weight calculation is carried out on the coincidence scores as W = 0.20·W_y + 0.15·W_q + 0.35·W_s + 0.30·W_z, and the coinciding parts of the scene are superposed to determine the pedestrian target, improving tracking accuracy. The block graph with the maximum weight is used to track the pedestrian target; if weight values are equal, the pedestrian target is tracked by comparing the facial feature W_s, action feature W_z, color feature W_y and shape feature W_q in that order.
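A quick numeric check of the weighting under hypothetical per-feature scores (the score values are made up for illustration):

```python
scores = {"color": 0.8, "shape": 0.7, "face": 0.9, "action": 0.85}
w = 0.20 * 0.8 + 0.15 * 0.7 + 0.35 * 0.9 + 0.30 * 0.85
assert abs(w - 0.835) < 1e-9  # 0.160 + 0.105 + 0.315 + 0.255
```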
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (2)

1. A pedestrian target tracking method based on multi-example learning is characterized by comprising the following steps:
s1, intercepting image data;
s2, decomposing the image data into a plurality of block graphs;
s3, extracting the graph characteristics of the block graph, including extracting facial characteristics, action characteristics, color characteristics and shape characteristics;
s4, comparing the graph characteristics of the block graph with the characteristics of the pedestrian target;
s5, constructing a classifier;
s6, calculating the contact ratio score of the examples in each classification pool of the classifier;
s7, carrying out weight calculation on the contact ratio fraction;
s8, calculating and tracking a pedestrian target according to the weight;
wherein the step of constructing the classifier in step S5 comprises the following steps:
S51, stripping the color features of the block graph;
S52, establishing a color coincidence-degree classification pool;
S53, stripping the shape features of the block graph;
S54, establishing a shape coincidence-degree classification pool;
S55, identifying and removing the environmental features of the block graph;
S56, stripping the facial features of the persons in the block graph;
S57, establishing a facial-feature coincidence-degree classification pool;
S58, stripping the action features of the persons in the block graph;
S59, making an action-feature dynamic graph;
S510, establishing an action-feature coincidence-degree classification pool;
in the step S4, the pedestrian target is manually selected from the first frame of the video and serves as the initial data for target tracking; an initial standard for coincidence-degree calculation is established from this target; target tracking in the second frame computes scores against the first frame's initial standard and uses the second frame's feature quantities to establish a secondary standard; target tracking in the third frame computes scores against the standard established from the second frame, and so on until the last frame is reached and the comparison ends;
in the step S5, the classifier comprises a color coincidence-degree classification pool, a shape coincidence-degree classification pool, a facial-feature coincidence-degree classification pool and an action-feature coincidence-degree classification pool;
in the step S6, the color feature is denoted y: with X_iy the pixel quantity of the target graph's feature standard and X_jy the pixel quantity of the compared block graph's feature value, a color coincidence score W_y is computed (the score formula appears only as an image in the original document); likewise, the shape feature is denoted q, with pixel quantities X_iq and X_jq and coincidence score W_q; the facial feature is denoted s, with pixel quantities X_is and X_js and coincidence score W_s; and the action feature is denoted z, with pixel quantities X_iz and X_jz and coincidence score W_z;
in step S7, the weight coefficient of the color feature is 20%, of the shape feature 15%, of the facial feature 35%, and of the action feature 30%, giving the weight W = 0.20·W_y + 0.15·W_q + 0.35·W_s + 0.30·W_z;
in step S8, the weights of the block graphs are compared and the block graph with the largest weight value contains the pedestrian target; if weight values are equal, the pedestrian target is tracked by comparing the facial feature W_s, action feature W_z, color feature W_y and shape feature W_q in that order;
in the step S5, the environmental characteristics refer to external environments except for people;
in step S6, if the block graph contains no facial features of a person, the facial-feature coincidence score is W_s = 0 and no weight calculation is performed for that graph; in step S8, if the block graph contains no action features of a person, the action-feature coincidence score is W_z = 0 and no weight calculation is performed for that graph.
2. The pedestrian target tracking method based on multi-example learning according to claim 1, characterized in that: in step S2, the number of block graphs the image data is decomposed into depends on the resolution and the relevant pixel level of the image; the higher the resolution of the image data, the more block graphs are produced.
CN201910056927.XA 2019-01-22 2019-01-22 Pedestrian target tracking method based on multi-instance learning Active CN109918997B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910056927.XA CN109918997B (en) 2019-01-22 2019-01-22 Pedestrian target tracking method based on multi-instance learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910056927.XA CN109918997B (en) 2019-01-22 2019-01-22 Pedestrian target tracking method based on multi-instance learning

Publications (2)

Publication Number Publication Date
CN109918997A CN109918997A (en) 2019-06-21
CN109918997B (en) 2023-04-07

Family

ID=66960643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910056927.XA Active CN109918997B (en) 2019-01-22 2019-01-22 Pedestrian target tracking method based on multi-instance learning

Country Status (1)

Country Link
CN (1) CN109918997B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852285B (en) * 2019-11-14 2023-04-18 腾讯科技(深圳)有限公司 Object detection method and device, computer equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008007471A1 (en) * 2006-07-10 2008-01-17 Kyoto University Walker tracking method and walker tracking device
CN103093212B (en) * 2013-01-28 2015-11-18 北京信息科技大学 The method and apparatus of facial image is intercepted based on Face detection and tracking
CN104240266A (en) * 2014-09-04 2014-12-24 成都理想境界科技有限公司 Target object tracking method based on color-structure features
CN107270889B (en) * 2017-06-08 2020-08-25 东南大学 Indoor positioning method and positioning system based on geomagnetic map

Also Published As

Publication number Publication date
CN109918997A (en) 2019-06-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant