CN112686153B - Three-dimensional skeleton key frame selection method for human behavior recognition - Google Patents


Info

Publication number: CN112686153B (application CN202011608049.7A)
Authority: CN (China)
Prior art keywords: frame, key, frames, sequence, key frame
Legal status: Active (granted)
Inventors: 陈皓, 潘跃凯, 张凯伦
Original and current assignee: Xi'an University of Posts and Telecommunications
Other versions: CN112686153A (Chinese)
Application filed 2020-12-30 by Xi'an University of Posts and Telecommunications; priority to CN202011608049.7A

Classifications

    • Y02T 10/40 Engine management systems (under Y02T: climate change mitigation technologies related to transportation; Y02T 10/10: internal combustion engine [ICE] based vehicles)

Landscapes

    • Image Analysis (AREA)

Abstract

The invention discloses a key-frame selection method for behavior recognition on human three-dimensional skeletons, belonging to the fields of computer vision and pattern recognition. The method comprises the following steps: first, a human three-dimensional skeleton joint data stream is acquired from video images through a depth sensor or a pose estimation algorithm; second, pose features are extracted and the inflection-point frames in the sequence are determined according to the momentum change of each body part's motion; then, the pose feature vectors are input into a key-frame selection model that fuses domain information with the number of key frames to obtain the key-frame sequence. The model adopts binary coding, takes the inflection-point frames as population-initialization markers, and optimizes the key-frame coding with a multi-objective binary differential evolution algorithm. The extracted key-frame sequence has a stronger motion-summarization capability, and the number of key frames adapts to the complexity of the behavior, so the optimized key-frame sequence achieves higher accuracy in human behavior recognition.

Description

Three-dimensional skeleton key frame selection method for human behavior recognition
Technical Field
The invention belongs to the field of computer vision and pattern recognition, and particularly relates to a key frame selection method for human behavior recognition based on three-dimensional skeleton characteristics.
Background
The main task of human behavior analysis is to collect the behavior signals of a target human body, analyze the behaviors appearing in a video, and classify and identify the behavior categories. In recent years, human behavior recognition has become an active research topic in computer vision, with broad application prospects and potential economic value in public security, human-computer interaction, sports, medical care and other fields.
Current research on human behavior recognition can be divided, according to the data used, into behavior recognition based on skeletal joint-point features and behavior recognition based on non-skeletal features, the latter relying mainly on traditional image data. With the development and popularization of depth sensors, acquiring high-precision three-dimensional skeleton joint-point information has become simple and convenient. At the same time, the skeletal pose has inherent advantages in describing behavior: it characterizes the human posture and motion state accurately and is unaffected by factors such as background and illumination. Most behavior-recognition methods based on three-dimensional skeleton joint-point features process the entire behavior sequence, yet not every frame in the sequence is meaningful for recognition; effective selection of key frames therefore reduces data redundancy and computational complexity while expressing the behavior features more effectively. Clustering is a common key-frame selection method with strong generalization in motion description, but the clustering process ignores the specific motion semantics of the data: frames far apart in the clustering space may be grouped into the same class, and the temporal order of the motion is lost, which easily distorts the motion analysis; moreover, the number of clusters must be specified manually, which hinders automatic processing.
Disclosure of Invention
Aiming at the problems in existing key-frame extraction for behavior recognition, the invention provides a key-frame selection method that performs behavior recognition on three-dimensional skeleton joint-point features. The key technical problems to be solved include: extracting the pose features; establishing and solving the optimization problem; and decoding the generated key-frame sequence and performing behavior classification and recognition. To this end, the specific technical scheme of the invention is as follows:
a keyframe selection optimization method for human body three-dimensional skeleton behavior recognition comprises the following steps:
step 1: and reading the three-dimensional skeleton joint point data. The method specifically comprises the following steps:
acquiring three-dimensional skeleton joint data from a video image through a depth sensor or a posture estimation algorithm, reading a behavior sequence according to the following structure, wherein one behavior sequence comprises T frames, position coordinates of N joint points are provided in each frame, and then a joint position matrix of the behavior sequence can be represented as follows:
    P = [ p_11  p_12  ...  p_1N
          p_21  p_22  ...  p_2N
          ...
          p_T1  p_T2  ...  p_TN ]

where p_tn denotes the nth joint point in the tth frame, p_tn = (x_tn, y_tn, z_tn), t ∈ {1, 2, ..., T}, n ∈ {1, 2, ..., N}, and x_tn, y_tn, z_tn are the x-axis, y-axis and z-axis coordinates of the joint point respectively.
Step 2: and extracting the posture features. Calculating the normalized feature vector of each frame, specifically as follows:
Figure BDA0002870588970000022
wherein p is t0 For the center of the abdomen as a central reference point, p t3 For neck joint points, | | represents the Euclidean distance, and the vector d relative to the central reference point is calculated for each skeleton joint point tn And obtaining the attitude characteristic by normalizing the distance from the neck to the center of the abdomen, wherein the attitude characteristic vector of the t-th frame is expressed as f t =[d t1 ,d t2 ,...,d tn ]。
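Step 2 can be sketched in code. The following Python fragment is an illustrative implementation under stated assumptions: joint indices 0 (abdomen center) and 3 (neck) follow the text, while the function name and data layout are hypothetical.

```python
import math

def pose_features(frame, center_idx=0, neck_idx=3):
    """Compute the normalized pose feature vector of one frame.

    frame: list of (x, y, z) joint coordinates. Joint center_idx is the
    abdomen-center reference point and neck_idx the neck joint; the index
    choices and function name are illustrative assumptions.
    """
    cx, cy, cz = frame[center_idx]
    # Neck-to-abdomen-center distance is the normalization scale.
    scale = math.dist(frame[neck_idx], frame[center_idx])
    features = []
    for i, (x, y, z) in enumerate(frame):
        if i == center_idx:
            continue  # the reference point itself carries no information
        # Vector relative to the central reference point, scale-normalized.
        features.extend(((x - cx) / scale, (y - cy) / scale, (z - cz) / scale))
    return features

# Toy 4-joint frame: abdomen at the origin, neck one unit above it.
frame = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 1.0, 0.0)]
print(pose_features(frame))  # [1.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 1.0, 0.0]
```

Because every offset is divided by the body-scale distance, the features are invariant to the subject's size, which is the point of the normalization.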
Step 3: determine the inflection-point frames. Frames at which the motion trajectories of important joints reach extreme values are defined as inflection-point frames. Because the joint points in the human-body structure have certain motion-linkage relations, and terminal joints such as the left and right hands and feet carry more motion information, the inflection-point frames are extracted from the motion-trajectory curves of these important joints. The concrete solution is as follows: let the motion trajectory of one such joint be S = {p_1n, p_2n, ..., p_Tn}, and map S to a two-dimensional space: S → f(t, m), where f(t, m) is the momentum of the mth joint in frame t, i.e. its displacement distance relative to the initial position. The frames at which f(t, m) has a local extreme point are taken as inflection-point frames, and their sequence numbers are recorded.
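The inflection-point rule above amounts to scanning the momentum curve for local extrema. A minimal sketch, assuming strict extrema and simple endpoint handling (both assumptions, not the patent's exact procedure):

```python
def inflection_frames(momentum):
    """Indices of local extrema of a joint's momentum curve.

    momentum[t] is the displacement distance of a terminal joint (e.g. a
    hand) in frame t relative to its position in the initial frame.  Frames
    where the curve reaches a strict local maximum or minimum are treated
    as inflection-point frames; endpoint handling is an assumption here.
    """
    idx = []
    for t in range(1, len(momentum) - 1):
        prev, cur, nxt = momentum[t - 1], momentum[t], momentum[t + 1]
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
            idx.append(t)
    return idx

curve = [0.0, 0.2, 0.9, 0.4, 0.1, 0.5, 0.8, 0.8]
print(inflection_frames(curve))  # [2, 4]: a peak at frame 2, a valley at frame 4
```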
Step 4: construct the problem space for key-frame selection. The adjacent frames before and after each key frame are defined as belonging to the same domain, and the intra-domain information of the key frames is defined as follows:
    D_intra = Σ_r Σ_i dis(f_i^r, f_0^r)

where f_i^r denotes the pose feature vector of the ith frame in the rth domain and f_0^r denotes the key-frame vector of the rth domain. dis(·,·) represents the information between two frames, computed as the dot product of w with the vector of per-feature Euclidean distances between the two frames; w is a column vector of weights for the joint features of each body part, obtained from the ratios of the movement amounts of the important joint points of each body part in step 3. The inter-domain information of the key frames is defined as:
    D_inter = Σ_{r<j} u_rj · dis(f_0^r, f_0^j)

where f_0^r and f_0^j denote the key frames in the rth and jth domains, and u_rj is a weight coefficient between the two domains related to the size of the domain interval: the key-frame information difference between adjacent domains is relatively small, while the information difference between key frames with a large domain interval is relatively large. Integrating the intra-domain and inter-domain information fully retains the temporal variation characteristics of the pose, and the domain-information objective function for evaluating key-frame quality is defined as:

    DI = D_intra − D_inter
Another objective function, measuring the number of key frames, is defined as the frame compression ratio:

    FC = frames_key / frames_total

where frames_key is the number of selected key frames and frames_total is the total number of frames in the behavior sequence. The final objective function is:

    min {DI, FC}
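The building blocks of the two objectives can be sketched as follows. The weighted-distance form follows the dis(·) description in the text, while the names and the per-joint (x, y, z) data layout are illustrative assumptions; the exact DI composition is given by the patent's equations.

```python
import math

def frame_distance(fa, fb, w):
    """dis(fa, fb): dot product of the per-joint weights w with the
    per-joint Euclidean distances between two pose feature vectors, each
    stored here as a list of (x, y, z) offsets (layout is an assumption)."""
    return sum(wi * math.dist(a, b) for wi, a, b in zip(w, fa, fb))

def frame_compression_ratio(n_key, n_total):
    """FC objective: fraction of frames retained as key frames."""
    return n_key / n_total

fa = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
fb = [(0.0, 3.0, 4.0), (1.0, 0.0, 0.0)]
w = [0.5, 0.5]
print(frame_distance(fa, fb, w))        # 0.5*5 + 0.5*0 = 2.5
print(frame_compression_ratio(10, 42))  # 10/42, about 0.238
```

Minimizing FC favors fewer key frames, so it pulls against the domain-information objective and yields a Pareto trade-off rather than a single optimum.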
Step 5: key-frame selection seeks the optimal subset of the three-dimensional skeleton feature frames; the process is essentially a search for an optimum. Key-frame selection is converted into a multi-objective optimization problem in a binary coding space, and an improved multi-objective differential evolution algorithm is adopted to solve the model. Specifically:
Step 5.1: initialize parameters: set the current iteration count G = 0, the maximum iteration count Gmax, the population size NP, the crossover probability CR and the generation probability PG;
Step 5.2: for the key-frame extraction problem, binary coding is adopted: a 0-1 variable represents the state of each frame in the sequence, where 0 means the frame is not a key frame and 1 means it is a key frame. The initial frame, the end frame and the inflection-point frames are taken as pre-selected key frames; the remaining chromosome positions are generated randomly to form the initial population, and the fitness of each individual is computed from the objective functions;
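The binary encoding and seeded initialization of step 5.2 can be sketched as below; the population size, the PG value and the seed are illustrative assumptions.

```python
import random

def init_population(n_frames, inflection, pop_size, gen_prob=0.1, seed=0):
    """Binary-coded population for key-frame selection (step 5.2 sketch).

    Each individual is a 0/1 list over frames; the first frame, the last
    frame and every inflection-point frame are fixed to 1 (pre-selected
    key frames), while the remaining positions are set randomly with
    probability gen_prob (the generation probability PG; its value and
    the seed are illustrative assumptions).
    """
    rng = random.Random(seed)
    fixed = set(inflection) | {0, n_frames - 1}
    population = []
    for _ in range(pop_size):
        chrom = [1 if i in fixed else int(rng.random() < gen_prob)
                 for i in range(n_frames)]
        population.append(chrom)
    return population

pop = init_population(n_frames=42, inflection=[1, 8, 15, 27, 32, 37], pop_size=4)
```

Fixing the inflection-point positions to 1 injects the prior knowledge from step 3 into every individual, so the search only has to decide about the remaining frames.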
step 5.3: evolution algebra G = G +1;
Step 5.4: perform non-dominated sorting of the individuals according to their fitness values and divide the population into three sub-populations Pop1, Pop2 and Pop3;
Step 5.5: set the population individual index i = 0;
Step 5.6: select individuals r1, r2 and r3 from the three sub-populations respectively;
Step 5.7: generate the mutation vector V_i,G from the parents X_r1,G, X_r2,G and X_r3,G, taken from Pop1, Pop2 and Pop3 respectively, by combining them through the XOR (⊕), AND (∧) and OR (∨) operations with the random vector F. To fully retain the prior knowledge of the inflection-point frames, the chromosome positions of the inflection-point frames are fixed to 1 when generating the random vector F, while the values at the other positions are determined by a random number and the generation probability;
Step 5.8: perform the crossover operation according to the crossover operator: at each position, the trial individual takes the gene of the mutation vector when rand_j ≤ CR or at the random index j, and the gene of the parent otherwise, where j is a random index and rand_j ∈ [0, 1] is a randomly generated number; crossover is thus performed with probability CR;
Step 5.9: perform the selection operation according to the selection operator: the trial individual and its parent are compared under the objective function f(·), and the better (non-dominated) one is retained in the next generation;
Step 5.10: set i = i + 1; if i < NP, go to step 5.6, otherwise go to step 5.11;
Step 5.11: if G = Gmax, the algorithm ends and the key-frame set is output; otherwise go to step 5.3.
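The per-individual operations of steps 5.6 to 5.9 act on 0/1 chromosomes. The sketch below shows one plausible realization of the binary mutation (combining XOR, AND and OR with the random vector F, as the text describes; the patent's exact operator is in an equation image and may differ) together with a standard binomial crossover.

```python
import random

def mutate(x1, x2, x3, f_vec):
    """One plausible binary mutation combining OR, AND and XOR, as the
    text describes; the exact operator in the patent's equation image may
    differ.  f_vec is the 0/1 random vector F (fixed to 1 at
    inflection-point positions)."""
    return [a | (f & (b ^ c)) for a, b, c, f in zip(x1, x2, x3, f_vec)]

def crossover(parent, mutant, cr, rng):
    """Standard binomial crossover: take the mutant gene with probability
    CR, and always at one random index j."""
    j = rng.randrange(len(parent))
    return [mutant[i] if (rng.random() < cr or i == j) else parent[i]
            for i in range(len(parent))]

rng = random.Random(1)
x1, x2, x3 = [1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]
f_vec = [1, 1, 1, 1]
v = mutate(x1, x2, x3, f_vec)
print(v)  # [1, 1, 1, 1]
u = crossover(x1, v, cr=0.5, rng=rng)
```

Because mutation only ever turns bits on via OR where the parents disagree, positions already fixed to 1 (start, end and inflection-point frames) stay selected throughout the search.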
Step 6: behavior classification based on the key frames. Decode the binary code obtained in step 5 to obtain the key-frame sequence, input the three-dimensional skeleton features corresponding to the sequence into a human-behavior classifier, and output the behavior classification result.
Drawings
FIG. 1 is a flowchart of the key-frame extraction method for three-dimensional skeleton behavior recognition according to an embodiment of the present invention;
FIG. 2 is the human skeletal joint-point model used in an embodiment of the present invention;
FIG. 3 is the momentum curve used to determine the inflection-point frames in an embodiment of the present invention;
FIG. 4 shows the key frames extracted in an embodiment of the present invention;
FIG. 5 is the confusion matrix of the recognition results in an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The technical solution and implementation process are illustrated by taking the action 'high throw' as an example, which should not be construed as limiting the protection scope of the invention. As shown in fig. 1, the key-frame extraction method for three-dimensional skeleton behavior recognition is implemented as follows:
Step 1: read the three-dimensional skeleton joint-point data. As shown in fig. 2, the behavior sequence contains 42 frames and each frame contains 20 joint points, so the joint-position matrix of the whole behavior sequence can be represented as:

    P = [ p_1,1   p_1,2   ...  p_1,20
          ...
          p_42,1  p_42,2  ...  p_42,20 ]
Step 2: extract the pose features. For each frame the normalized feature vector is computed; for example, the feature of the 1st joint in frame 1 is

    d_1,1 = (−0.3924, 0.8008, −0.1231)

and the 60-dimensional pose feature vector of frame 1 is

    f_1 = [−0.3924, 0.8008, −0.1231, ..., −0.0242, 1.4841, −0.4189]
Step 3: determine the inflection-point frames. For this behavior sequence, the momentum of the joint's displacement distance relative to its initial position is shown in fig. 3; local extrema are obtained at points 1, 8, 15, 27, 32 and 37, so the inflection-point frame numbers are {1, 8, 15, 27, 32, 37}.
Step 4: construct the multi-objective key-frame extraction model fusing the domain information and the number of key frames.
Step 5: solve the key-frame extraction model with the binary-coded multi-objective differential evolution algorithm. The optimal binary code obtained for this behavior sequence after optimization is
[1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 1 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0].
Step 6: behavior classification based on the key frames. Decoding the optimal binary code obtained in step 5 yields the key-frame sequence {0, 8, 12, 15, 19, 22, 23, 27, 32, 35}, visualized in fig. 4. The features of the extracted key-frame sequence are used as input to a support vector machine classifier, and the output behavior category is 'high throw'. The confusion matrix of the recognition results on the MSR-Action3D dataset is shown in fig. 5; the average recognition accuracy is about 92.88%.
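Decoding in step 6 is a direct index scan over the chromosome. A toy sketch (the 8-bit code is illustrative, not the embodiment's 42-bit code):

```python
def decode_keyframes(code):
    """Decode a 0/1 chromosome into the ordered list of key-frame indices."""
    return [i for i, bit in enumerate(code) if bit == 1]

# Toy 8-bit example (illustrative; the embodiment uses a 42-bit code).
code = [1, 0, 0, 1, 0, 1, 0, 1]
print(decode_keyframes(code))  # [0, 3, 5, 7]
```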

Claims (1)

1. A keyframe selection optimization method for human body three-dimensional skeleton behavior recognition comprises the following steps:
step 1: reading three-dimensional skeleton joint point data, specifically:
acquiring three-dimensional skeleton joint data from a video image through a depth sensor or a posture estimation algorithm, reading a behavior sequence according to the following structure, wherein one behavior sequence comprises T frames, position coordinates of N joint points are provided in each frame, and then a joint position matrix of the behavior sequence can be represented as follows:
    P = [ p_11  p_12  ...  p_1N
          p_21  p_22  ...  p_2N
          ...
          p_T1  p_T2  ...  p_TN ]

where p_tn denotes the nth joint point in the tth frame, p_tn = (x_tn, y_tn, z_tn), t ∈ {1, 2, ..., T}, n ∈ {1, 2, ..., N}, and x_tn, y_tn, z_tn are the x-axis, y-axis and z-axis coordinates of the joint point respectively;
Step 2: extract the pose features and compute the normalized feature vector of each frame, specifically:

    d_tn = (p_tn − p_t0) / ||p_t3 − p_t0||

where p_t0 is the abdomen-center joint taken as the central reference point, p_t3 is the neck joint, and ||·|| denotes the Euclidean distance; for each skeleton joint point, the vector d_tn relative to the central reference point is computed and normalized by the neck-to-abdomen-center distance to obtain the pose feature, and the pose feature vector of the tth frame is expressed as f_t = [d_t1, d_t2, ..., d_tN];
Step 3: determine the inflection-point frames; frames at which the motion trajectory of an important joint reaches an extreme value are defined as inflection-point frames; the concrete solution is: let the motion trajectory of an important joint be S = {p_1n, p_2n, ..., p_Tn}, and map S to a two-dimensional space: S → f(t, m), where f(t, m) is the displacement momentum of the mth joint point in the tth frame relative to its position in the initial frame; the frames at which f(t, m) has a local extreme point in the processed frame sequence are taken as inflection-point frames, and their sequence numbers in the processed video frame sequence are identified and recorded;
Step 4: construct the problem space for key-frame selection; the adjacent frames before and after each key frame are defined as belonging to the same domain, and the intra-domain information of the key frames is defined as:
    D_intra = Σ_r Σ_i dis(f_i^r, f_0^r)

where f_i^r denotes the pose feature vector of the ith frame in the rth domain and f_0^r denotes the key-frame vector of the rth domain; dis(·,·) represents the information between two frames, computed as the dot product of w with the vector of per-feature Euclidean distances between the two frames, where w is a column vector of weights for the joint features of each body part, obtained from the ratios of the movement amounts of the important joint points of each body part in step 3; the inter-domain information of the key frames is defined as:
    D_inter = Σ_{r<j} u_rj · dis(f_0^r, f_0^j)

where f_0^r and f_0^j denote the key frames in the rth and jth domains, and u_rj is a weight coefficient between the two domains related to the size of the domain interval: the key-frame information difference between adjacent domains is relatively small, while the information difference between key frames with a large domain interval is relatively large; integrating the intra-domain and inter-domain information fully retains the temporal variation characteristics of the pose, and the domain-information objective function for evaluating key-frame quality is defined as:

    DI = D_intra − D_inter
Another objective function, measuring the number of key frames, is defined as the frame compression ratio:

    FC = frames_key / frames_total

where frames_key is the number of selected key frames and frames_total is the total number of frames in the behavior sequence; the final objective function is:

    min {DI, FC}
Step 5: key-frame selection seeks the optimal subset of the three-dimensional skeleton feature frames; it is converted into a multi-objective optimization problem in a binary coding space, and an improved multi-objective differential evolution algorithm is adopted to solve the model, specifically:
Step 5.1: initialize parameters: set the current iteration count G = 0, the maximum iteration count Gmax, the population size NP, the crossover probability CR and the generation probability PG;
Step 5.2: for the key-frame extraction problem, binary coding is adopted: a 0-1 variable represents the state of each frame in the sequence, where 0 means the frame is not a key frame and 1 means it is a key frame; the initial frame, the end frame and the inflection-point frames are taken as pre-selected key frames, the remaining chromosome positions are generated randomly to form the initial population, and the fitness of each individual is computed from the objective functions;
step 5.3: evolution algebra G = G +1;
Step 5.4: perform non-dominated sorting of the individuals according to their fitness values and divide the population into three sub-populations Pop1, Pop2 and Pop3;
Step 5.5: set the population individual index i = 0;
Step 5.6: select individuals r1, r2 and r3 from the three sub-populations respectively;
Step 5.7: generate the mutation vector V_i,G from the parents X_r1,G, X_r2,G and X_r3,G, taken from Pop1, Pop2 and Pop3 respectively, by combining them through the XOR (⊕), AND (∧) and OR (∨) operations with the random vector F; to fully retain the prior knowledge of the inflection-point frames, the chromosome positions of the inflection-point frames are fixed to 1 when generating the random vector F, while the values at the other positions are determined by a random number and the generation probability;
Step 5.8: perform the crossover operation according to the crossover operator: at each position, the trial individual takes the gene of the mutation vector when rand_j ≤ CR or at the random index j, and the gene of the parent otherwise, where j is a random index and rand_j ∈ [0, 1] is a randomly generated number; crossover is thus performed with probability CR;
Step 5.9: perform the selection operation according to the selection operator: the trial individual and its parent are compared under the objective function f(·), and the better (non-dominated) one is retained in the next generation;
Step 5.10: set i = i + 1; if i < NP, go to step 5.6, otherwise go to step 5.11;
Step 5.11: if G = Gmax, the algorithm ends and the key-frame set is output; otherwise go to step 5.3;
Step 6: behavior classification based on the key frames: decode the binary code obtained in step 5 to obtain the key-frame sequence, input the three-dimensional skeleton features corresponding to the sequence into a human-behavior classifier, and output the behavior classification result.
CN202011608049.7A 2020-12-30 2020-12-30 Three-dimensional skeleton key frame selection method for human behavior recognition Active CN112686153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011608049.7A CN112686153B (en) 2020-12-30 2020-12-30 Three-dimensional skeleton key frame selection method for human behavior recognition


Publications (2)

Publication Number Publication Date
CN112686153A CN112686153A (en) 2021-04-20
CN112686153B true CN112686153B (en) 2023-04-18

Family

ID=75454957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011608049.7A Active CN112686153B (en) 2020-12-30 2020-12-30 Three-dimensional skeleton key frame selection method for human behavior recognition

Country Status (1)

Country Link
CN (1) CN112686153B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627365A (en) * 2021-08-16 2021-11-09 南通大学 Group movement identification and time sequence analysis method
CN114926910A (en) * 2022-07-18 2022-08-19 科大讯飞(苏州)科技有限公司 Action matching method and related equipment thereof
CN115665359B (en) * 2022-10-09 2023-04-25 西华县环境监察大队 Intelligent compression method for environment monitoring data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017000465A1 (en) * 2015-07-01 2017-01-05 中国矿业大学 Method for real-time selection of key frames when mining wireless distributed video coding
WO2018049581A1 (en) * 2016-09-14 2018-03-22 浙江大学 Method for simultaneous localization and mapping
CN109858406A (en) * 2019-01-17 2019-06-07 西北大学 A kind of extraction method of key frame based on artis information
CN111310659A (en) * 2020-02-14 2020-06-19 福州大学 Human body action recognition method based on enhanced graph convolution neural network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Keys for Action: An Efficient Keyframe-Based Approach for 3D Action Recognition Using a Deep Neural Network";Hashim Yasin等;《Sensors》;20200415;第1-24页 *
"基于人体关节点数据的攻击性行为识别";陈皓等;《计算机应用》;20190810;第39卷(第8期);第2235-2241页 *

Also Published As

Publication number Publication date
CN112686153A (en) 2021-04-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant