CN101883209A - Method by integrating background model and three-frame difference to detect video background - Google Patents

Method by integrating background model and three-frame difference to detect video background

Info

Publication number
CN101883209A
CN101883209A CN201010191865A
Authority
CN
China
Prior art keywords
background
image block
frame difference
background model
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010191865
Other languages
Chinese (zh)
Other versions
CN101883209B (en)
Inventor
Luo Xiaonan (罗笑南)
Lu Qing (陆晴)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Huakai Cultural Creative Co., Ltd.
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN2010101918652A priority Critical patent/CN101883209B/en
Publication of CN101883209A publication Critical patent/CN101883209A/en
Application granted granted Critical
Publication of CN101883209B publication Critical patent/CN101883209B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting video background by combining a background model with three-frame differencing, characterized in that: each frame is divided into image blocks; an image-block-based mixture-of-Gaussians background model is established; image blocks that do not match the background model are further examined by three-frame differencing; and each image block is judged to be a foreground block or a background block according to the results of the background model and the three-frame difference. The method overcomes the low execution efficiency of traditional approaches that build the background model pixel by pixel, and adopts a simple and effective rule for deciding whether a macroblock is a foreground or background block, giving low computational complexity and accurate detection.

Description

Method for detecting video background by combining a background model with three-frame differencing
Technical field
The present invention relates to the field of information technology, and in particular to a method for detecting video background by combining a background model with three-frame differencing.
Background art
Detecting moving objects from a video sequence is the primary and fundamental task of video surveillance. At present, many tracking systems rely on background-extraction techniques for moving-target detection: the current input frame is compared with a reference background model, and each pixel is judged to be a target pixel or a background pixel according to how far its value departs from the background model. The pixels considered to belong to targets are then processed further to recognize the target and determine its position, so that tracking can be carried out.
Background extraction is widely used in tracking systems such as video surveillance, and many methods for constructing background models have been proposed. A simple background model may be a single image containing no moving objects, while a complex background model is a continuously updated statistical model. The real world, however, is complex and changeable: swaying trees, ripples on water, flickering displays, varying illumination, and so on. To handle these situations background models have become increasingly complex, which challenges the real-time processing required of the system.
In practice, video surveillance requires the background model not only to handle complex environments well but also to satisfy real-time computation. Most background models proposed so far are built pixel by pixel, treating the pixels as mutually independent random variables and deciding for each pixel individually whether it is background or foreground (target). A single pixel, however, often says little by itself, as with noise points. In fact, in regions containing no target the structure of the image itself is relatively stable, so analysing individual pixels during target extraction produces a large amount of redundant information and inevitably degrades the actual execution efficiency of the algorithm. Processing several adjacent pixels as a whole is one way to reduce this redundancy and improve computational efficiency. The present invention therefore proposes a method for constructing a background model based on image blocks.
Among the common methods for modelling dynamic video backgrounds, such as the W4 method, Kalman filtering, the single Gaussian model and the mixture-of-Gaussians model, the mixture-of-Gaussians model gives the most complete detection information and the best results. On the basis of the conventional mixture-of-Gaussians model, this invention builds the Gaussian background model on image blocks, and addresses the Gaussian model's shortcomings of weak noise suppression and poor adaptability to sudden changes in background content.
Summary of the invention
The object of the present invention is to provide a method for detecting video background by combining a background model with three-frame differencing, which not only overcomes the redundancy and low efficiency of traditional Gaussian background modelling performed pixel by pixel, but also, through the combination with three-frame differencing, overcomes the Gaussian model's weak noise suppression and poor adaptability to sudden changes in background content.
The image-block-based Gaussian background modelling method of the present invention comprises:
dividing each frame into image blocks;
establishing an image-block-based mixture-of-Gaussians background model;
further judging, by three-frame differencing, the image blocks that do not match the background model;
judging each image block to be a foreground block or a background block according to the results of the background model and the three-frame difference.
Dividing each frame into image blocks is specifically: each frame is divided into 16*16 image blocks, and each 16*16 image block is further divided into sixteen 4*4 sub-blocks.
Establishing the image-block-based mixture-of-Gaussians background model is specifically: the 16 pixels of each 4*4 sub-block are averaged, and the mean is used as the characteristic value to be matched against the existing Gaussian models.
Further judging, by three-frame differencing, the image blocks that do not match the background model is specifically: when the mean of a 4*4 sub-block matches none of the Gaussian distributions, three-frame differencing is used to judge it further.
Judging each image block to be a foreground block or a background block according to the results of the background model and the three-frame difference is specifically: within each 16*16 image block, the number of 4*4 sub-blocks judged to be background is compared with a preset threshold; if the number exceeds the threshold, the 16*16 image block is judged to be a background block, otherwise a foreground block.
Implementing the present invention has the following beneficial effects:
(1) In traditional Gaussian modelling the background model is built pixel by pixel. Although a per-pixel model is accurate and flexible, it is also redundant and inefficient; using the image block as the basic detection unit overcomes this shortcoming.
(2) To address the Gaussian model's weak noise suppression and poor adaptability to sudden changes in background content, the method combines the established model with three-frame differencing for the further judgement of foreground and background.
(3) In operation, each frame is divided into 16*16 image blocks and each 16*16 block is further divided into sixteen 4*4 sub-blocks. For modelling, each 4*4 sub-block is averaged and the mean is used as the characteristic value matched against the Gaussian model and used to maintain and update the model. Finally, the number of 4*4 sub-blocks judged to be background decides whether the 16*16 image block is foreground or background. This choice of computation unit avoids the errors that would arise from treating a whole 16*16 block as one unit, while also overcoming the low computational efficiency of per-pixel processing; the computation is simple and the judgement accuracy is high.
In summary, the method not only overcomes the low system efficiency caused by building the background model pixel by pixel in conventional methods, but also adopts a simple and effective decision rule for judging whether a macroblock is a foreground or background block, with low computational complexity and accurate detection.
Description of drawings
To explain the embodiments of the invention or the technical solutions of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the method for detecting video background by combining a background model with three-frame differencing in an embodiment of the invention.
Embodiment
The method of the present invention for detecting video background by combining a background model with three-frame differencing is described in detail below with reference to the accompanying drawing.
As shown in Fig. 1, the control flow of the invention mainly comprises the following steps: dividing each frame into image blocks; establishing an image-block-based mixture-of-Gaussians background model; further judging, by three-frame differencing, the image blocks that do not match the background model; and judging each image block to be a foreground block or a background block according to the results of the background model and the three-frame difference. Each step is described in detail below.
Dividing each frame into image blocks
In the algorithm, each input frame is divided in real time into 16*16 image blocks, from left to right and from top to bottom; each 16*16 image block is further divided into sixteen 4*4 sub-blocks, and the 4*4 sub-block is taken as the unit of computation.
Establishing the image-block-based mixture-of-Gaussians background model
1. First compute the mean of each 4*4 sub-block in the 16*16 image block
If the 16*16 macroblock were processed directly, the blocks would be large: fewer blocks would need processing and efficiency would be higher, but sensitivity to local targets would drop and target accuracy would suffer, because the number of blocks in which foreground occupies only a small proportion would increase. Each 16*16 macroblock is therefore further divided into sixteen 4*4 sub-blocks, a computation unit of suitable size; the mean of each sub-block is computed and used as the object of the subsequent computation.
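As an illustrative sketch only (added here; not part of the original disclosure), the block division and the sub-block means could be computed in Python with NumPy roughly as follows, assuming a 2-D grayscale (for example Y-channel) frame whose height and width are multiples of 16; the function name is an assumption of the sketch:

import numpy as np

def subblock_means(frame):
    """Mean of every 4*4 sub-block, grouped by 16*16 macroblock.

    Returns an array of shape (H/16, W/16, 16) holding the sixteen 4*4
    sub-block means of each macroblock.
    """
    h, w = frame.shape
    blocks = frame.astype(np.float64).reshape(h // 16, 4, 4, w // 16, 4, 4)
    means = blocks.mean(axis=(2, 5))                  # average the 4*4 pixels of each sub-block
    return means.transpose(0, 2, 1, 3).reshape(h // 16, w // 16, 16)

Each of the sixteen returned values per macroblock then plays the role of the characteristic value λ discussed below.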
Denote the pixels of a 4*4 sub-block by X_ij; the sub-block is characterized by a feature λ, here the block mean:
λ = (1/16) Σ_{i=1}^{4} Σ_{j=1}^{4} X_ij
Since every such feature is a linear combination of the pixels in the image block, when each pixel X_ij follows a normal distribution, the feature λ, being a linear function of the pixels, also follows a normal distribution.
Let λ = v^T X, where X is the random vector formed by the pixels of the block and v is a vector of the same dimension as X; with μ and U the mean vector and covariance matrix of X, we have:
λ ~ N(v^T μ, v^T U v)    (3.1)
The variation of the pixel values within an image block is thus reflected in the variation of its characteristic value, so the background model can be built on the feature λ of the image block rather than on individual pixel values, and the judgement of background versus foreground is made per image block rather than per pixel.
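As a brief worked instance (added here for illustration; not stated in the original text), if the 16 pixels of a sub-block are assumed independent with common variance σ², the block-mean feature corresponds to v = (1/16)·1, and (3.1) gives
λ = (1/16) 1^T X ~ N( (1/16) Σ_{i,j} μ_ij , σ²/16 ),
so the block mean fluctuates far less than any single pixel, which is consistent with the noise-robustness argument for modelling at the block level.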
2. Match each mean, as the characteristic value, against the Gaussian model
The mean of each 4*4 sub-block is taken as its eigenvalue, forming the characteristic vector Λ = [λ].
The method adopts the form of the mixture-of-Gaussians distribution and the parameter-updating scheme given in the reference (C. Stauffer, W. E. L. Grimson, "Adaptive background mixture models for real-time tracking") to establish and update the model:
P(Λ_t) = Σ_{i=1}^{K} w_{i,t} · η(Λ_t, μ_{i,t}, U_{i,t})    (1)
η(Λ_t, μ_{i,t}, U_{i,t}) = (2π)^{-n/2} |U_{i,t}|^{-1/2} exp(-Q/2),  where Q = (Λ_t - μ_{i,t})^T U_{i,t}^{-1} (Λ_t - μ_{i,t})    (2)
P(ω_i), also written w_{i,t}, is the weight of the i-th component in the overall distribution.
Assuming that the pixels are mutually independent, the characteristic values are also mutually independent. To simplify the computation it is further assumed that they share the same variance, so the covariance matrix can be reduced to U_{i,t} = σ_{i,t}² E, where E is the identity matrix; this assumption avoids the error amplification that complicated computation may cause. Relation (1) states that the probability distribution of the current observed value of the characteristic vector Λ of each image block can be described by a Gaussian mixture; that is, a given state of the characteristic vector of an image block can be described by one component of the mixture model.
On these assumptions, the characteristic value of a new image block is matched against the K existing Gaussian distributions in turn. If the characteristic value falls within a certain multiple of the standard deviation of some distribution, i.e. |Λ_t - μ_{i,t}| ≤ τσ_{i,t}, the 4*4 sub-block is considered to match that distribution. Experiments show that τ = 4 is appropriate.
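A minimal sketch of this matching test (the function name and the per-sub-block storage of the K component parameters are assumptions of the sketch, not details from the patent):

def match_gaussian(value, means, stds, tau=4.0):
    """Return the index of the first Gaussian component whose mean lies
    within tau standard deviations of the sub-block mean `value`, or -1
    if no component matches. tau = 4 follows the value reported above.
    """
    for k, (m, s) in enumerate(zip(means, stds)):
        if abs(value - m) <= tau * s:
            return k
    return -1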
The K existing Gaussian components are sorted in descending order of the ratio w/σ, so that components with larger weight and smaller variance come first; a component earlier in this order is more likely to represent background than a later one. Accumulating the weights from the front, the first B distributions whose weights together occupy the portion T are defined as the background.
Wherein,
B = argmin_b ( Σ_{k=1}^{b} w_k > T )
T denotes the minimum portion of the whole distribution that should be accounted for by the background: the best-ranked distributions are taken until their cumulative weight reaches the portion T of the data. If the feature of the image block matches one of the first B components, it is judged to be background; otherwise the judgement proceeds to the next step.
After the above matching is completed, the background model is updated using the method of the cited reference.
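Because the patent defers the update rule to the cited Stauffer-Grimson reference, the following is only a sketch of that standard style of update together with the w/σ background selection described above; the learning rate alpha, the portion T and the re-initialisation values used when no component matches are assumed here, not taken from the patent:

import numpy as np

def update_and_classify(value, w, mu, sigma, matched, alpha=0.01, T=0.7):
    """One Stauffer-Grimson style update of a sub-block's K-component model,
    followed by the background decision over the first B components.

    `matched` is the index returned by the matching test (-1 if no match);
    the arrays w, mu, sigma (length K) are updated in place and returned.
    """
    if matched >= 0:
        rho = alpha                         # common simplification of rho = alpha * eta(value)
        w *= (1.0 - alpha)
        w[matched] += alpha
        mu[matched] = (1.0 - rho) * mu[matched] + rho * value
        sigma[matched] = np.sqrt((1.0 - rho) * sigma[matched] ** 2
                                 + rho * (value - mu[matched]) ** 2)
    else:
        k = int(np.argmin(w / sigma))       # replace the least probable component
        w[k], mu[k], sigma[k] = 0.05, value, 20.0
    w /= w.sum()

    # Rank components by w/sigma; the first B whose cumulative weight exceeds T form the background.
    order = np.argsort(-(w / sigma))
    B = int(np.searchsorted(np.cumsum(w[order]), T)) + 1
    is_background = matched >= 0 and matched in order[:B]
    return w, mu, sigma, is_background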
(3) Further judgement combined with three-frame differencing
When the mean of the 4*4 sub-block currently under detection matches none of the Gaussian components, three-frame differencing is used to increase detection accuracy, because the Gaussian model updates the background relatively slowly. The mean of this sub-block in the current frame and the means of the corresponding sub-block in the two preceding frames are differenced pairwise; since three-frame differencing only performs subtraction, the operating efficiency is not greatly affected.
Let the current frame be X and the two preceding frames be X-1 and X-2. For the i-th sub-block of macroblock A in the current frame, denote its mean X.A.i_mean; the means of the corresponding sub-blocks in the two preceding frames are X-1.A.i_mean and X-2.A.i_mean. A difference threshold TH_SUB is set (TH_SUB is taken as 5 in the experiments). If:
|SUB(X-2.A.i_mean, X-1.A.i_mean)| <= TH_SUB
and |SUB(X-1.A.i_mean, X.A.i_mean)| <= TH_SUB
and |SUB(X-2.A.i_mean, X.A.i_mean)| <= TH_SUB
that is, the absolute difference between every pair of the three frames is below the threshold, the sub-block can be considered essentially unchanged. Although noise or other disturbances may cause sudden changes in its Y, U or V values so that it fails to match the background model, it is in fact an unchanged background block, and the 4*4 sub-block is therefore judged to be background. If the sub-block satisfies neither the background model nor the three-frame-difference condition, it is judged to be a foreground block.
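A sketch of this fallback test (function and argument names chosen here for illustration; th_sub = 5 follows the experimental value above):

def three_frame_background(mean_t, mean_t1, mean_t2, th_sub=5):
    """Three-frame difference test on one 4*4 sub-block mean.

    mean_t, mean_t1 and mean_t2 are the sub-block means of the current frame
    and the two preceding frames. Returns True when all pairwise absolute
    differences stay within th_sub, i.e. the sub-block is treated as an
    unchanged background block despite failing the Gaussian match.
    """
    return (abs(mean_t2 - mean_t1) <= th_sub and
            abs(mean_t1 - mean_t) <= th_sub and
            abs(mean_t2 - mean_t) <= th_sub)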
(4) Judging each image block to be a foreground block or a background block according to the results of the background model and the three-frame difference
After all sixteen 4*4 sub-blocks of a 16*16 image block have been matched, let the number of matched (background) sub-blocks be MatchNum and set a threshold TH_MTH (TH_MTH is taken as 12 in the experiments). If MatchNum >= TH_MTH, the 16*16 image block is judged to belong to the background; otherwise it is a foreground block.
This completes the judgement of one 16*16 macroblock.
By detecting all macroblocks of each frame in turn, the foreground and background parts of each frame are obtained.
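The per-macroblock decision that closes the loop could be sketched as follows (the helper name and the list of booleans are illustrative assumptions; th_mth = 12 follows the experimental value above):

def classify_macroblock(subblock_is_background, th_mth=12):
    """Classify one 16*16 macroblock from its sixteen 4*4 sub-block labels.

    `subblock_is_background` is an iterable of 16 booleans, True when the
    sub-block matched the background model or passed the three-frame test.
    """
    return "background" if sum(subblock_is_background) >= th_mth else "foreground"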
The above discloses only a preferred embodiment of the present invention, which of course cannot be used to limit the scope of the invention; equivalent variations made according to the claims of the invention therefore still fall within the scope covered by the invention.

Claims (5)

1. A method for detecting video background by combining a background model with three-frame differencing, characterized by comprising:
dividing each frame into image blocks;
establishing an image-block-based mixture-of-Gaussians background model;
further judging, by three-frame differencing, the image blocks that do not match the background model;
judging each image block to be a foreground block or a background block according to the results of the background model and the three-frame difference.
2. The method for detecting video background by combining a background model with three-frame differencing according to claim 1, characterized in that dividing each frame into image blocks is specifically: each frame is divided into 16*16 image blocks, and each 16*16 image block is further divided into sixteen 4*4 sub-blocks.
3. The method for detecting video background by combining a background model with three-frame differencing according to claim 2, characterized in that establishing the image-block-based mixture-of-Gaussians background model is specifically: the 16 pixels of each 4*4 sub-block are averaged, and the mean is used as the characteristic value to be matched against the existing Gaussian models.
4. The method for detecting video background by combining a background model with three-frame differencing according to claim 3, characterized in that further judging, by three-frame differencing, the image blocks that do not match the background model is specifically: when the mean of a 4*4 sub-block matches none of the Gaussian distributions, three-frame differencing is used to judge it further.
5. The method for detecting video background by combining a background model with three-frame differencing according to claim 4, characterized in that judging each image block to be a foreground block or a background block according to the results of the background model and the three-frame difference is specifically: within each 16*16 image block, the number of 4*4 sub-blocks judged to be background is compared with a preset threshold; if the number exceeds the threshold, the 16*16 image block is judged to be a background block, otherwise a foreground block.
CN2010101918652A 2010-05-31 2010-05-31 Method for integrating background model and three-frame difference to detect video background Expired - Fee Related CN101883209B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101918652A CN101883209B (en) 2010-05-31 2010-05-31 Method for integrating background model and three-frame difference to detect video background

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101918652A CN101883209B (en) 2010-05-31 2010-05-31 Method for integrating background model and three-frame difference to detect video background

Publications (2)

Publication Number Publication Date
CN101883209A true CN101883209A (en) 2010-11-10
CN101883209B CN101883209B (en) 2012-09-12

Family

ID=43055086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101918652A Expired - Fee Related CN101883209B (en) 2010-05-31 2010-05-31 Method for integrating background model and three-frame difference to detect video background

Country Status (1)

Country Link
CN (1) CN101883209B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1275859A (en) * 1999-06-01 2000-12-06 索尼公司 Image treatment device, method and medium thereof
CN1984236A (en) * 2005-12-14 2007-06-20 浙江工业大学 Method for collecting characteristics in telecommunication flow information video detection
CN101017573A (en) * 2007-02-09 2007-08-15 南京大学 Method for detecting and identifying moving target based on video monitoring
CN101527773A (en) * 2008-03-05 2009-09-09 株式会社半导体能源研究所 Image processing method, image processing system and computer program
CN101394479A (en) * 2008-09-25 2009-03-25 上海交通大学 Teacher movement tracing method based on movement detection combining multi-channel fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Jing, Wang Ling, "An improved algorithm of the Gaussian mixture model background method", Computer Engineering and Applications (《计算机工程与应用》), 2010-05-01, relevant to claims 1-5 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102340620A (en) * 2011-10-25 2012-02-01 重庆大学 Mahalanobis-distance-based video image background detection method
CN102801964A (en) * 2012-08-28 2012-11-28 杭州尚思科技有限公司 Intelligent home monitoring method based on motion detection and system thereof
CN103473753A (en) * 2013-09-02 2013-12-25 昆明理工大学 Target detection method based on multi-scale wavelet threshold denoising
CN104301669A (en) * 2014-09-12 2015-01-21 重庆大学 Suspicious target detection tracking and recognition method based on dual-camera cooperation
CN105657317A (en) * 2014-11-14 2016-06-08 澜起科技(上海)有限公司 Interlaced video motion detection method and system in video de-interlacing
CN105657317B (en) * 2014-11-14 2018-10-16 澜至电子科技(成都)有限公司 A kind of interlaced video method for testing motion in video release of an interleave and its system
CN104702956A (en) * 2015-03-24 2015-06-10 武汉大学 Background modeling method for video coding
CN104702956B (en) * 2015-03-24 2017-07-11 武汉大学 A kind of background modeling method towards Video coding
CN104820995A (en) * 2015-04-21 2015-08-05 重庆大学 Large public place-oriented people stream density monitoring and early warning method
CN106096586A (en) * 2016-06-29 2016-11-09 深圳大学 The extra large background modeling of high resolution remote sensing ocean imagery and the method and system of suppression
CN108230362A (en) * 2017-12-29 2018-06-29 北京视觉世界科技有限公司 Environment control method, device, electronic equipment and storage medium
CN109002801A (en) * 2018-07-20 2018-12-14 燕山大学 A kind of face occlusion detection method and system based on video monitoring
CN112908035A (en) * 2021-01-20 2021-06-04 温州大学 Automobile auxiliary driving system based on visible light communication and implementation method

Also Published As

Publication number Publication date
CN101883209B (en) 2012-09-12

Similar Documents

Publication Publication Date Title
CN101883209B (en) Method for integrating background model and three-frame difference to detect video background
Wang et al. A region based stereo matching algorithm using cooperative optimization
CN106846359A (en) Moving target method for quick based on video sequence
CN103164693B (en) A kind of monitor video pedestrian detection matching process
CN110827320B (en) Target tracking method and device based on time sequence prediction
CN101470809A (en) Moving object detection method based on expansion mixed gauss model
CN101324958A (en) Method and apparatus for tracking object
CN102142085A (en) Robust tracking method for moving flame target in forest region monitoring video
CN101833760A (en) Background modeling method and device based on image blocks
CN107871315B (en) Video image motion detection method and device
CN111414938B (en) Target detection method for bubbles in plate heat exchanger
Wei et al. A robust approach for multiple vehicles tracking using layered particle filter
CN111696133A (en) Real-time target tracking method and system
CN102930559A (en) Image processing method and device
Li et al. Robust detection of headland boundary in paddy fields from continuous RGB-D images using hybrid deep neural networks
CN112801021B (en) Method and system for detecting lane line based on multi-level semantic information
CN101877135B (en) Moving target detecting method based on background reconstruction
CN103077533A (en) Method for positioning moving target based on frogeye visual characteristics
CN102592125A (en) Moving object detection method based on standard deviation characteristic
Zhang et al. Target tracking for mobile robot platforms via object matching and background anti-matching
CN113112479A (en) Progressive target detection method and device based on key block extraction
CN104680194A (en) On-line target tracking method based on random fern cluster and random projection
CN110580712B (en) Improved CFNet video target tracking method using motion information and time sequence information
CN111862147A (en) Method for tracking multiple vehicles and multiple human targets in video
CN107564029B (en) Moving target detection method based on Gaussian extreme value filtering and group sparse RPCA

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent for invention or patent application
CB03 Change of inventor or designer information

Inventor after: Luo Xiaonan

Inventor after: Meng Siming

Inventor after: Lu Qing

Inventor before: Luo Xiaonan

Inventor before: Lu Qing

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: LUO XIAONAN LU QING TO: LUO XIAONAN MENG SIMING LU QING

C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: HUNAN HUAKAI CULTURAL CREATIVE CO., LTD.

Free format text: FORMER OWNER: ZHONGSHAN UNIVERSITY

Effective date: 20150105

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 510006 GUANGZHOU, GUANGDONG PROVINCE TO: 410000 CHANGSHA, HUNAN PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20150105

Address after: 410000, No. 229, west slope, Tongzi, Hunan, Yuelu District, 101, Changsha

Patentee after: Hunan Huakai Cultural Creative Co., Ltd.

Address before: 510006 teaching experiment center, east campus, Zhongshan University, Panyu District, Guangdong, C401, China

Patentee before: Sun Yat-sen University

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120912

Termination date: 20200531

CF01 Termination of patent right due to non-payment of annual fee