CN101515371B - Human body movement data fragment extracting method - Google Patents
- Publication number
- CN101515371B (application CN2009100969762A)
- Authority
- CN
- China
- Prior art keywords
- center
- range
- com
- velocity
- speed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The invention discloses a method for extracting human body movement data fragments. Using computer motion-data editing and human-body modeling techniques, the method automatically and intelligently identifies and segments the action fragments that form basic semantic units within motion data. First, the motion capture data from which action fragments are to be extracted is input and the kinematic model of the human body is updated; the center-of-mass (COM) velocity is then computed from the physical properties and instantaneous velocity of each joint; based on the COM velocity, the velocity of each corresponding frame in the data sequence undergoes spectrum analysis and segmentation, and the action fragments constituting basic semantic units are extracted. Because the method computes the COM velocity and segments it by clustering its extrema, it improves the accuracy and efficiency of action identification, avoids the inefficiency and subjectivity of manual extraction, and increases the reusability of motion capture data in human character animation production.
Description
Technical field
The present invention relates to a method for extracting human action fragments, and specifically to a semantic segmentation method, based on human kinematics and motion-data editing techniques, for the raw motion data obtained by motion capture equipment. It belongs to the general field of computer motion capture data editing and human character animation.
Background technology
With the development of motion-data reuse technology, motion segmentation has gradually become an important research focus in the field of human character animation. Although research on this technology is still at an early stage, a number of relatively effective methods have been proposed.
"Motion Abstraction and Mapping with Spatial Constraints", published at the CAPTECH conference in 1998, discloses a method that detects zero crossings of joint momentum acceleration to judge whether a human character is interacting with the outside world. "Automated Derivation of Primitives for Movement Classification", published in the journal Autonomous Robots in 2002, uses this method to segment motion sequence data, then classifies the segments with principal component analysis (PCA) and clusters them with the K-Means algorithm. However, this acceleration-zero-crossing approach is effective only for rhythmic motion patterns and cannot be applied broadly to all types of motion.
In recent years, motion segmentation techniques have begun to analyze individual human joints. "Motion Modeling for On-Line Locomotion Synthesis", published at SCA in 2005, discloses a human motion segmentation method based on the center-of-mass (COM) trajectory. "Motion Synthesis from Annotations", published at SIGGRAPH in 2003, discloses a method that uses a support vector machine (SVM) classifier to annotate motion data interactively. Although this semi-automatic classifier works well for motion annotation, its frame-based annotation framework cannot accurately capture the transitions between different kinds of actions: a single frame represents a human pose at too coarse a granularity to properly annotate the whole flow of a motion. "Efficient Content-Based Retrieval of Motion Capture Data", published in the journal TOG in 2005, discloses a motion retrieval method that searches by discriminating motion features represented as Boolean functions; when the time axis is warped, the method captures the motion fragments whose function values change. The same idea is also used in the 2005 SCA paper "Motion Modeling for On-Line Locomotion Synthesis", which designs a symbol-based representation of motion fragments and realizes a rule-based classification method. However, this method cannot express the empirical part of motion segmentation, and its segmentation results are to a large extent rigid.
Summary of the invention
The object of the invention is to provide an extraction method based on center-of-mass velocity extrema, in order to overcome the complexity and subjectivity of manually extracting human motion fragments in current motion-data editing.
The human body movement data fragment extracting method comprises the following steps:
1) using the human kinematics model and motion-data features, design a method for computing the center-of-mass (COM) velocity from the physical properties and instantaneous velocity of each joint;
2) based on the COM velocity computation of step 1), perform spectrum analysis on the COM velocity sequence corresponding to the frames of the input motion data sequence, and segment and extract action fragments from the analysis result.
The method of step 1), computing the COM velocity from the human kinematics model, the motion-data features, and the physical properties and instantaneous velocity of each joint, proceeds as follows:
(a) Input motion capture data in a standard format and select the m most representative joints of the human model as candidate joints. For an input motion capture data sequence S = {f_0, f_1, ..., f_n} of length n+1, where f_i is the human movement data of the frame with frame index i, the candidate joint information vector of the body in each frame is f_i = {P_i0, V_i0, P_i1, V_i1, ..., P_i(m-1), V_i(m-1)}, where P_ij and V_ij respectively denote the spatial position and the velocity of candidate joint j at the timestamp i of frame f_i. The velocity V_ij is computed as

V_ij = ‖P_(i+1)j − P_ij‖ · fps

where ‖·‖ denotes the Euclidean distance and fps is the frame rate of the motion capture data. Having obtained the candidate joint information P_i and V_i of every frame f_i in the sequence S, compute the COM velocity V_i-COM of each frame f_i as

V_i-COM = Σ_j w_j · V_ij

where w_j is the weight of V_ij, which differs according to each joint's contribution to the COM velocity. This yields the continuous COM velocity set {V_0-COM, V_1-COM, ..., V_n-COM} corresponding to the motion capture data sequence {f_0, f_1, ..., f_n}.
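As a minimal sketch of step (a), assuming the joint positions are stored as an (n+1) × m × 3 array and that V_ij is the Euclidean frame-to-frame displacement scaled by the frame rate; the function name and array layout are illustrative, not part of the patent:

```python
import numpy as np

def com_velocity(positions: np.ndarray, weights: np.ndarray, fps: float) -> np.ndarray:
    """positions: (n+1, m, 3) candidate joint positions per frame;
    weights: (m,) per-joint contribution weights w_j;
    returns the COM speeds V_i-COM for the n frame transitions."""
    # V_ij = ||P_(i+1)j - P_ij|| * fps : per-joint instantaneous speed
    joint_speed = np.linalg.norm(np.diff(positions, axis=0), axis=2) * fps
    # V_i-COM = sum_j w_j * V_ij : weighted combination over the m joints
    return joint_speed @ weights

# toy check: 3 frames, 2 joints, each moving 1 unit per frame along x
pos = np.zeros((3, 2, 3))
pos[1, :, 0], pos[2, :, 0] = 1.0, 2.0
print(com_velocity(pos, np.array([0.5, 0.5]), fps=24.0))  # -> [24. 24.]
```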
The method of step 2), performing spectrum analysis on the COM velocity sequence corresponding to the input motion data frames and segmenting and extracting the analysis result, proceeds as follows:
(b) Having obtained the continuous COM velocity set {V_0-COM, V_1-COM, ..., V_n-COM}, traverse the velocity set once to obtain its maximum velocity V_max and minimum velocity V_min, and set a filtering threshold θ_thre computed as

θ_thre = (V_max − V_min) / slack

where slack is an adjustable division parameter that splits the difference between the maximum and the minimum into several intervals. Put every COM velocity V_i that satisfies V_i ≤ V_min + θ_thre into the division set {V_thre}.
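Step (b) can be sketched as follows, assuming the filtering threshold is θ_thre = (V_max − V_min)/slack; the function name is illustrative:

```python
def valley_candidates(v_com, slack=6.0):
    """Return the (frame_index, speed) pairs of the division set {V_thre}:
    COM speeds within theta_thre = (V_max - V_min)/slack of the minimum."""
    v_max, v_min = max(v_com), min(v_com)
    theta_thre = (v_max - v_min) / slack
    return [(i, v) for i, v in enumerate(v_com) if v <= v_min + theta_thre]

speeds = [5.0, 1.0, 4.0, 0.5, 6.0, 0.8]
print(valley_candidates(speeds, slack=3.0))  # -> [(1, 1.0), (3, 0.5), (5, 0.8)]
```

A larger slack keeps only speeds very close to the global minimum, so tuning slack controls how many valley frames survive as candidate cut points.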
(c) For each COM velocity V_i-COM in the division set {V_thre}, put all velocities whose index values fall within the index interval (i − θ_range, i + θ_range) into one division-cluster subset {V_range}, where θ_range is an adjustable radius parameter. Choosing this concentration range threshold over the indices yields, for the division set {V_thre}, the collection of division-cluster subsets {{V_1-range}, {V_2-range}, ..., {V_k-range}}, where k is the number of clusters obtained. Then compute the central velocity V_i-center of each division-cluster subset {V_i-range} as

V_i-center = (1/λ) Σ w_i · V_i-COM

where V_i-center is the weighted mean of all velocities V_i-COM in {V_i-range}, w_i is the weight of the corresponding velocity, the index i-center is the median of the subset's index interval, and λ is the number of elements in {V_i-range}. The complete division set {V_thre} thus yields the corresponding V_center set {V_1-center, V_2-center, ..., V_k-center}. Using this set, segment the motion data with the frame indices {1-center, 2-center, ..., k-center} as cut points, and extract the motion data between two adjacent cut points as an action fragment.
Description of drawings
Fig. 1 is the human skeleton model of the present invention;
Fig. 2 is a center-of-mass velocity spectrum of the present invention.
Embodiment
(1) Input human motion capture data in a standard format. In this embodiment, a file in the standard human-motion .trc format obtained from a near-infrared optical capture system is imported, at 24 frames per second. The 18 most representative joints of the human model are chosen as candidate joints; they belong respectively to the limbs, the cervical vertebrae, and the head, the parts where motion is concentrated. The human skeleton information is shown in Fig. 1.
For an input motion capture data sequence S = {f_0, f_1, ..., f_n} of length n+1, where f_i is the movement data of the frame with frame index i, the candidate joint information vector of the body in each frame is f_i = {P_i0, V_i0, P_i1, V_i1, ..., P_i(m-1), V_i(m-1)}, where P_ij and V_ij respectively denote the spatial position and the velocity of candidate joint j at the timestamp of frame f_i, and m is the number of candidate joints (m = 18 in this embodiment). The velocity V_ij is computed as

V_ij = ‖P_(i+1)j − P_ij‖ · fps

where ‖·‖ denotes the Euclidean distance and fps is the frame rate of the motion capture data (24 frames per second in this embodiment). Having obtained the candidate joint information P_i and V_i of every frame f_i in the sequence S, compute the COM velocity V_i-COM of each frame f_i as

V_i-COM = Σ_j w_j · V_ij

where w_j is the weight of V_ij, which differs according to each joint's contribution to the COM velocity; in this embodiment, the candidate joints belonging to the limbs receive larger weights, while those belonging to the cervical vertebrae and the head receive smaller ones. This yields the continuous COM velocity set {V_0-COM, V_1-COM, ..., V_n-COM} corresponding to the motion capture data sequence {f_0, f_1, ..., f_n}, as shown in Fig. 2.
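The embodiment's weighting scheme (limb joints contributing more to the COM velocity than cervical and head joints) might be configured as below. The joint names, the 16/2 split into limb and axial joints, and the numeric weights are all assumptions for illustration — the patent states only that limb weights are larger:

```python
# hypothetical names for the 18 candidate joints; the patent does not list them
LIMB_JOINTS = ["l_shoulder", "l_elbow", "l_wrist", "l_hand",
               "r_shoulder", "r_elbow", "r_wrist", "r_hand",
               "l_hip", "l_knee", "l_ankle", "l_foot",
               "r_hip", "r_knee", "r_ankle", "r_foot"]
AXIAL_JOINTS = ["neck", "head"]  # smaller contribution to COM velocity

def make_weights(limb_w=1.0, axial_w=0.25):
    """Assign a raw weight per joint and normalize so the weights sum to 1."""
    raw = {j: limb_w for j in LIMB_JOINTS}
    raw.update({j: axial_w for j in AXIAL_JOINTS})
    total = sum(raw.values())
    return {j: w / total for j, w in raw.items()}

w = make_weights()
print(len(w), round(sum(w.values()), 6))  # -> 18 1.0
```

Normalizing keeps V_i-COM on the same scale as the individual joint speeds regardless of how the raw limb/axial ratio is tuned.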
(2) Having obtained the continuous COM velocity set {V_0-COM, V_1-COM, ..., V_n-COM}, traverse the velocity set once (in this example with a bubble-sort pass) to obtain its maximum velocity V_max and minimum velocity V_min, and set a filtering threshold θ_thre computed as

θ_thre = (V_max − V_min) / slack

where slack is an adjustable division parameter that splits the difference between the maximum and the minimum into several intervals. Put every COM velocity V_i that satisfies V_i ≤ V_min + θ_thre into the division set {V_thre}, obtaining the velocities close to the valley values of the COM velocity set. In this embodiment the default value of slack is 6; slack can also be adjusted in real time to obtain the best effect.
For each COM velocity V_i-COM in the division set {V_thre}, put all velocities whose index values fall within the index interval (i − θ_range, i + θ_range) into one division-cluster subset {V_range}, where θ_range is an adjustable radius parameter. Choosing this concentration range threshold over the indices clusters the index values and eliminates their redundancy; in this embodiment the default value of θ_range is 5, and θ_range can also be adjusted in real time to obtain the best effect. This yields, for the division set {V_thre}, the collection of division-cluster subsets {{V_1-range}, {V_2-range}, ..., {V_k-range}}, where k is the number of clusters obtained. Then compute the central velocity V_i-center of each division-cluster subset {V_i-range} as

V_i-center = (1/λ) Σ w_i · V_i-COM

where V_i-center is the weighted mean of all velocities V_i-COM in {V_i-range}, w_i is the weight of the corresponding velocity, the index i-center is the median of the subset's index interval, and λ is the number of elements in {V_i-range}. The complete division set {V_thre} thus yields the corresponding V_center set {V_1-center, V_2-center, ..., V_k-center}. Using this set, segment the motion data with the frame indices {1-center, 2-center, ..., k-center} as cut points, and extract the motion data between two adjacent cut points as an action fragment. In this embodiment, the header and the relevant data of the .trc file are modified to obtain a .trc file corresponding to each human action fragment.
Claims (1)
1. A human body movement data fragment extracting method, characterized by comprising the steps of:
1) using the human kinematics model and motion-data features, designing a method for computing the center-of-mass (COM) velocity from the physical properties and instantaneous velocity of each joint;
2) based on the COM velocity computation of step 1), performing spectrum analysis on the COM velocity sequence corresponding to the frames of the input motion data sequence, and segmenting and extracting action fragments from the analysis result;
wherein the method of step 1), computing the COM velocity from the human kinematics model, the motion-data features, and the physical properties and instantaneous velocity of each joint, is:
(a) inputting motion capture data in a standard format and selecting the m most representative joints of the human model as candidate joints; for an input motion capture data sequence S = {f_0, f_1, ..., f_n} of length n+1, where f_i is the human movement data of the frame with frame index i, the candidate joint information vector of the body in each frame is f_i = {P_i0, V_i0, P_i1, V_i1, ..., P_i(m-1), V_i(m-1)}, where P_ij and V_ij respectively denote the spatial position and the velocity of candidate joint j at the timestamp i of frame f_i; the velocity V_ij is computed as

V_ij = ‖P_(i+1)j − P_ij‖ · fps

where ‖·‖ denotes the Euclidean distance and fps is the frame rate of the motion capture data; having obtained the candidate joint information P_i and V_i of every frame f_i in the sequence S, the COM velocity V_i-COM of each frame f_i is computed as

V_i-COM = Σ_j w_j · V_ij

where w_j is the weight of V_ij, which differs according to each joint's contribution to the COM velocity, yielding the continuous COM velocity set {V_0-COM, V_1-COM, ..., V_n-COM} corresponding to the motion capture data sequence {f_0, f_1, ..., f_n};
and the method of step 2), performing spectrum analysis on the COM velocity sequence corresponding to the input motion data frames and segmenting and extracting the analysis result, is:
(b) having obtained the continuous COM velocity set {V_0-COM, V_1-COM, ..., V_n-COM}, traversing the velocity set once to obtain its maximum velocity V_max and minimum velocity V_min, and setting a filtering threshold θ_thre computed as

θ_thre = (V_max − V_min) / slack

where slack is an adjustable division parameter that splits the difference between the maximum and the minimum into several intervals; putting every COM velocity V_i that satisfies V_i ≤ V_min + θ_thre into the division set {V_thre};
(c) for each COM velocity V_i-COM in the division set {V_thre}, putting all velocities whose index values fall within the index interval (i − θ_range, i + θ_range) into one division-cluster subset {V_range}, where θ_range is an adjustable radius parameter; choosing this concentration range threshold over the indices to obtain, for the division set {V_thre}, the collection of division-cluster subsets {{V_1-range}, {V_2-range}, ..., {V_k-range}}, where k is the number of clusters obtained; then computing the central velocity V_i-center of each division-cluster subset {V_i-range} as

V_i-center = (1/λ) Σ w_i · V_i-COM

where V_i-center is the weighted mean of all velocities V_i-COM in {V_i-range}, w_i is the weight of the corresponding velocity, the index i-center is the median of the subset's index interval, and λ is the number of elements in {V_i-range}; obtaining, for the complete division set {V_thre}, the corresponding V_center set {V_1-center, V_2-center, ..., V_k-center}; and using this set, segmenting the motion data with the frame indices {1-center, 2-center, ..., k-center} as cut points, and extracting the motion data between two adjacent cut points as an action fragment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009100969762A CN101515371B (en) | 2009-03-26 | 2009-03-26 | Human body movement data fragment extracting method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101515371A (en) | 2009-08-26
CN101515371B (en) | 2011-01-19
Family
ID=41039816
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009100969762A Expired - Fee Related CN101515371B (en) | 2009-03-26 | 2009-03-26 | Human body movement data fragment extracting method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101515371B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102819549B (en) * | 2012-04-16 | 2016-03-30 | 大连大学 | Based on the human motion sequences segmentation method of Least-squares estimator characteristic curve |
CN103886588B (en) * | 2014-02-26 | 2016-08-17 | 浙江大学 | A kind of feature extracting method of 3 D human body attitude projection |
CN104408396B (en) * | 2014-08-28 | 2017-06-30 | 浙江工业大学 | A kind of action identification method based on time pyramid local matching window |
US10581558B2 (en) | 2015-02-11 | 2020-03-03 | Huawei Technologies Co., Ltd. | Data transmission method and apparatus, and first device |
CN113673494B (en) * | 2021-10-25 | 2022-03-08 | 青岛根尖智能科技有限公司 | Human body posture standard motion behavior matching method and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1332430A (en) * | 2001-07-27 | 2002-01-23 | 南开大学 | 3D tracking and measurement method of moving objects by 2D code |
CN1619598A (en) * | 2003-11-21 | 2005-05-25 | 长春市蒲公英动画影视制作中心 | Human body carton system based on video frequency |
CN1949274A (en) * | 2006-10-27 | 2007-04-18 | 中国科学院计算技术研究所 | 3-D visualising method for virtual crowd motion |
CN101140663A (en) * | 2007-10-16 | 2008-03-12 | 中国科学院计算技术研究所 | Clothing cartoon computation method |
CN101246601A (en) * | 2008-03-07 | 2008-08-20 | 清华大学 | Three-dimensional virtual human body movement generation method based on key frame and space-time restriction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2011-01-19. Termination date: 2018-03-26.
CF01 | Termination of patent right due to non-payment of annual fee |