CN102568006A - Visual saliency algorithm based on motion characteristic of object in video - Google Patents
- Publication number: CN102568006A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a visual saliency algorithm based on the motion characteristics of objects in video. The algorithm comprises the following steps: (1) calculating the motion vector of the luminance component of the current frame using a block-matching motion estimation method; (2) removing the mean from the motion vector; (3) performing Gaussian filtering on the mean-removed motion vector; (4) squaring the horizontal and vertical components of the motion vector to obtain horizontal and vertical visual saliency maps; and (5) acquiring the final visual saliency map. By taking the motion of objects in the video into account and reducing the influence of global motion on visual saliency, the method extracts motion saliency from the local motion-vector characteristics of the salient objects, which effectively improves the accuracy of the visual saliency algorithm; moreover, the algorithm has low complexity.
Description
Technical Field
The invention relates to a visual saliency algorithm based on object motion characteristics in a video, and belongs to the technical field of computer vision and image processing.
Background
With the development of information technology, the amount of video information people encounter in daily life keeps growing, and how to efficiently extract the salient objects in a video has drawn more and more researchers' attention. Visual saliency has wide application in video signal processing, in fields such as video retrieval, video compression, video surveillance, and video tracking.
In video retrieval, because the volume of video data is very large, extracting the salient objects in a video and using them as features of the video can effectively improve retrieval accuracy.
In video compression, as video resolution grows ever higher, efficient compression algorithms remain a research hot spot. At the same time, video compression combined with a model of human vision is one of the key technologies of next-generation video coding, which makes visual saliency, as an important aspect of the human vision model, particularly important.
In video surveillance, visual saliency can effectively raise the degree of intelligence of monitoring systems, so research on visual saliency is of great significance.
In video tracking, what is tracked is usually the motion of a salient object, and a visual saliency algorithm can effectively improve tracking accuracy.
Because visual saliency has such wide application in video signal processing, research on it is of great importance. Visual saliency extracts the salient regions of an image or video according to visual characteristics. At present, research has mainly targeted image saliency: image saliency algorithms compute saliency from features such as color and brightness, but they do not exploit the motion characteristics of video, so applying them directly to video saliency detection gives poor results. Video saliency algorithms, by contrast, have received less study and suffer from high computational complexity. In general, the motion of objects in a video consists of two parts: global motion caused by camera motion, and local motion caused by the relative motion of objects in the scene. The local motion of an object is more salient than global motion; if the influence of global motion on visual saliency is not reduced beforehand, the accuracy of a saliency algorithm suffers.
The relative motion and the global motion of objects in a video can be obtained with a block-matching motion estimation algorithm. For example, the paper "A New Diamond Search Algorithm for Fast Block-Matching Motion Estimation" (Shan Zhu and Kai-Kuang Ma, IEEE Transactions on Image Processing, 2000, 9(2): 287-290) describes a block-matching motion estimation algorithm: the current block is compared with candidate blocks in the previous frame, the error between the blocks is computed, the block with the smallest error is taken as the best match, and the displacement between the two blocks is taken as the motion vector. A bilinear interpolation algorithm (K. R. Castleman, Digital Image Processing, Publishing House of Electronics Industry, 2006: 96-97) can then be used to obtain a motion-vector field of the same size as the current frame; bilinear interpolation computes each unknown pixel value as a weighted average of the pixel values of the four nearest surrounding points.
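To make the block-matching principle concrete, the following sketch implements an exhaustive (full-search) matcher with the sum of absolute differences (SAD) as the block error. It is an illustrative stand-in for the diamond search of the cited paper, and the block size and search range are assumed values:

```python
import numpy as np

def match_block(prev, cur, by, bx, bs=16, sr=8):
    """Find the displacement (dy, dx) that minimizes the SAD between the
    block of `cur` at (by, bx) and candidate blocks in `prev`."""
    block = cur[by:by + bs, bx:bx + bs].astype(np.int64)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > prev.shape[0] or x + bs > prev.shape[1]:
                continue  # candidate block falls outside the previous frame
            cand = prev[y:y + bs, x:x + bs].astype(np.int64)
            sad = np.abs(block - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```

A diamond search reaches the same minimum on well-behaved error surfaces while testing far fewer candidates, which is why the cited paper's method is preferred in practice.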
Disclosure of Invention
In view of the problems in the prior art, the invention aims to provide a visual saliency algorithm based on the motion characteristics of objects in video. By detecting the motion of objects in the video, the algorithm reduces the influence of global motion on visual saliency and thereby effectively improves the accuracy of the visual saliency computation.
In order to achieve the purpose, the invention adopts the following scheme:
a visual saliency algorithm based on object motion characteristics in a video comprises the following specific steps:
(1) calculating a motion vector of the brightness component of the current frame by adopting a motion estimation algorithm based on block matching;
(2) obtaining the motion vector with the mean value removed;
(3) performing Gaussian filtering on the motion vector with the mean value removed;
(4) respectively calculating the squares of the horizontal direction component and the vertical direction component of the motion vector to obtain visual saliency maps in the horizontal direction and the vertical direction;
(5) and acquiring a final visual saliency map.
The step (1) of calculating the motion vector of the luminance component of the current frame by using the motion estimation algorithm based on block matching specifically comprises the following steps:
(11) extracting a current frame original image and a previous frame original image of the video stream needing visual saliency detection;
(12) respectively extracting the brightness components of the two adjacent frames;
(13) partitioning the luminance component of the current frame of the video stream into 16×16 pixel blocks;
(14) calculating the motion vector of each pixel block in the brightness component of the current frame by adopting a motion estimation algorithm based on block matching;
(15) enlarging the motion vector field with a bilinear interpolation algorithm to obtain a motion vector V(t) with the same size as the current frame.
The obtaining of the motion vector with the mean value removed in the step (2) specifically includes the following steps:
(21) averaging the motion vectors of the pixel blocks in the current frame's luminance component to obtain the motion-vector mean Vm(t);
(22) subtracting the motion-vector mean Vm(t) from the motion vector V(t) calculated in step (15) to obtain the mean-removed motion vector;
the gaussian filtering of the motion vector with the mean value removed in the step (3) includes the following specific steps:
(31) filtering the mean-removed motion vector with a Gaussian filter to obtain a filtered motion vector;
(32) setting the boundary of the filtered motion vector to 0 to obtain the final filtered motion vector Vf(t).
The step (4) of calculating the squares of the horizontal direction component and the vertical direction component of the motion vector respectively to obtain the visual saliency maps in the horizontal direction and the vertical direction includes the following specific steps:
(41) squaring the horizontal component Vfx(t) of the filtered motion vector to obtain the horizontal visual saliency map Sx(t);
(42) squaring the vertical component Vfy(t) of the filtered motion vector to obtain the vertical visual saliency map Sy(t).
Obtaining the final visual saliency map in step (5) above specifically includes:
adding the horizontal and vertical visual saliency maps and mapping the result to 0-255 to obtain the final visual saliency map S(t), calculated as:

S(t) = ⌊255 · (Sx(t) + Sy(t)) / max(Sx(t) + Sy(t))⌋

where Sx(t) is the horizontal visual saliency map, Sy(t) is the vertical visual saliency map, S(t) is the final visual saliency map, and ⌊·⌋ denotes rounding down. The larger the gray value in the saliency map, the higher the saliency of the corresponding region of the current frame; the smaller the gray value, the lower the saliency.
Compared with the prior art, the visual saliency algorithm based on the motion characteristics of objects in video has the following substantive features and advantages: while taking the motion of objects in the video into account, it reduces the influence of global motion on visual saliency and extracts motion saliency from the local motion-vector characteristics of the salient objects. This effectively improves the accuracy of the visual saliency algorithm, and the algorithm has low complexity.
Drawings
FIG. 1 is a block flow diagram of a visual saliency algorithm based on object motion characteristics in video in accordance with the present invention;
FIG. 2 is a flow chart of step (1) described in FIG. 1;
FIG. 3 is a flow chart of step (2) depicted in FIG. 1;
FIG. 4 is a flowchart of step (3) described in FIG. 1;
FIG. 5 is a flowchart of step (4) described in FIG. 1;
FIG. 6 is the 16th frame original image in the coastguard video sequence;
FIG. 7 is the 17th frame original image in the coastguard video sequence;
FIG. 8 is a visual saliency map corresponding to FIG. 6;
fig. 9 is a visual saliency map corresponding to fig. 7.
Detailed Description
The invention is described in further detail below with reference to the figures and the detailed description of the invention.
As shown in fig. 1-9, the visual saliency algorithm based on the motion characteristics of the object in the video according to the present invention has the following steps:
(1) The motion vector of the luminance component of the current frame is calculated with a block-matching motion estimation algorithm; the specific implementation steps are as follows:
(11) extracting a current frame original image and a previous frame original image of the video stream needing visual saliency detection;
(12) respectively extracting the brightness components of the two adjacent frames;
(13) partitioning the luminance component of the current frame of the video stream into 16×16 pixel blocks;
(14) calculating the motion vector of each pixel block in the brightness component of the current frame by adopting a motion estimation algorithm based on block matching;
(15) enlarging the motion vector field with a bilinear interpolation algorithm to obtain a motion vector V(t) with the same size as the current frame;
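A compact sketch of steps (13)-(15) above, under stated assumptions: an exhaustive SAD search stands in for the block matcher, a hand-rolled four-neighbor bilinear interpolation performs the enlargement, and the search range is an assumed parameter:

```python
import numpy as np

def bilinear_resize(a, out_h, out_w):
    """Weighted average of the four nearest input samples (bilinear)."""
    h, w = a.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    return (a[np.ix_(y0, x0)] * (1 - wy) * (1 - wx)
            + a[np.ix_(y0, x1)] * (1 - wy) * wx
            + a[np.ix_(y1, x0)] * wy * (1 - wx)
            + a[np.ix_(y1, x1)] * wy * wx)

def block_motion_field(prev_y, cur_y, bs=16, sr=4):
    """One SAD-minimizing motion vector per 16x16 block of the current
    luminance frame, then enlarged to the full frame size."""
    h, w = cur_y.shape
    mv = np.zeros((h // bs, w // bs, 2))
    for i in range(h // bs):
        for j in range(w // bs):
            blk = cur_y[i*bs:(i+1)*bs, j*bs:(j+1)*bs].astype(np.int64)
            best = None
            for dy in range(-sr, sr + 1):
                for dx in range(-sr, sr + 1):
                    y, x = i*bs + dy, j*bs + dx
                    if y < 0 or x < 0 or y + bs > h or x + bs > w:
                        continue
                    sad = np.abs(blk - prev_y[y:y+bs, x:x+bs].astype(np.int64)).sum()
                    if best is None or sad < best:
                        best, mv[i, j] = sad, (dy, dx)
    vy = bilinear_resize(mv[..., 0], h, w)   # vertical component of V(t)
    vx = bilinear_resize(mv[..., 1], h, w)   # horizontal component of V(t)
    return vy, vx
```

The per-block field is coarse (one vector per 16×16 block); the bilinear enlargement gives the dense per-pixel field V(t) that the later steps operate on.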
(2) The mean-removed motion vector is obtained; the specific implementation steps are as follows:
(21) averaging the motion vectors to obtain the motion-vector mean Vm(t);
(22) subtracting the motion-vector mean Vm(t) from the motion vector V(t) to obtain the mean-removed motion vector;
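The intent of step (2) shows up clearly in a short sketch: for a purely global (camera-induced) motion field the mean-removed vectors vanish, so global motion contributes no spurious saliency. The field values here are made up for illustration:

```python
import numpy as np

# A hypothetical dense motion field: uniform (1, 3) everywhere, i.e. pure
# global motion with no locally moving object.
vy = np.full((64, 64), 1.0)
vx = np.full((64, 64), 3.0)

# V(t) - Vm(t): subtract the per-component mean over the whole frame.
vy_local = vy - vy.mean()
vx_local = vx - vx.mean()
# For this uniform field the residual is identically zero; a locally moving
# object would survive as a nonzero residual against the zeroed background.
```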
(3) Gaussian filtering is performed on the mean-removed motion vector; the specific implementation steps are as follows:
(31) filtering the mean-removed motion vector with a Gaussian filter to obtain a filtered motion vector;
(32) setting the boundary of the filtered motion vector to 0 to obtain the final filtered motion vector Vf(t);
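Steps (31)-(32) can be sketched with SciPy's Gaussian filter; the standard deviation and border width below are assumed values, since the patent does not specify them:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filter_motion_component(v, sigma=2.0, border=8):
    """Smooth one mean-removed motion-vector component, then zero the
    frame boundary, where block matching is least reliable."""
    vf = gaussian_filter(v.astype(float), sigma=sigma)
    vf[:border, :] = 0.0
    vf[-border:, :] = 0.0
    vf[:, :border] = 0.0
    vf[:, -border:] = 0.0
    return vf
```

The smoothing suppresses isolated mismatched vectors while preserving the coherent motion of a salient object.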
(4) The squares of the horizontal and vertical components of the motion vector are calculated to obtain the horizontal and vertical visual saliency maps; the specific implementation steps are as follows:
(41) squaring the horizontal component Vfx(t) of the filtered motion vector to obtain the horizontal visual saliency map Sx(t);
(42) squaring the vertical component Vfy(t) of the filtered motion vector to obtain the vertical visual saliency map Sy(t);
(5) The final visual saliency map is acquired, as follows:
adding the horizontal and vertical visual saliency maps and mapping the result to 0-255 to obtain the final visual saliency map S(t), calculated as:

S(t) = ⌊255 · (Sx(t) + Sy(t)) / max(Sx(t) + Sy(t))⌋

where Sx(t) is the horizontal visual saliency map, Sy(t) is the vertical visual saliency map, S(t) is the final visual saliency map, and ⌊·⌋ denotes rounding down. The larger the gray value in the saliency map, the higher the saliency of the corresponding region of the current frame; the smaller the gray value, the lower the saliency.
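Putting steps (4)-(5) together: each filtered component is squared, the squares are summed, and the sum is mapped onto 0-255 with a floor. The normalization by the maximum of the summed map is an assumption made so the sketch is self-contained, since the patent gives its exact expression as a figure:

```python
import numpy as np

def saliency_map(vfy, vfx):
    """S(t): squared filtered components summed, scaled into 0..255,
    and rounded down to integer gray levels."""
    s = vfy ** 2 + vfx ** 2            # Sx(t) + Sy(t)
    peak = s.max()
    if peak == 0:                      # static frame: nothing is salient
        return np.zeros_like(s, dtype=np.uint8)
    return np.floor(255.0 * s / peak).astype(np.uint8)
```

Bright regions of the returned map mark salient (locally moving) areas and dark regions non-salient ones, matching the interpretation given above.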
Claims (6)
1. A visual saliency algorithm based on object motion characteristics in a video comprises the following specific steps:
(1) calculating a motion vector of the brightness component of the current frame by adopting a motion estimation algorithm based on block matching;
(2) obtaining the motion vector with the mean value removed;
(3) performing Gaussian filtering on the motion vector with the mean value removed;
(4) respectively calculating the squares of the horizontal direction component and the vertical direction component of the motion vector to obtain visual saliency maps in the horizontal direction and the vertical direction;
(5) and acquiring a final visual saliency map.
2. The visual saliency algorithm based on object motion features in video according to claim 1, characterized in that said step (1) of calculating the motion vector of the luminance component of the current frame by using a motion estimation algorithm based on block matching comprises the following specific steps:
(11) extracting a current frame original image and a previous frame original image of the video stream needing visual saliency detection;
(12) respectively extracting the brightness components of the two adjacent frames;
(13) partitioning the luminance component of the current frame of the video stream into 16×16 pixel blocks;
(14) calculating the motion vector of each pixel block in the brightness component of the current frame by adopting a motion estimation algorithm based on block matching;
(15) enlarging the motion vector field with a bilinear interpolation algorithm to obtain a motion vector V(t) with the same size as the current frame.
3. The visual saliency algorithm based on object motion features in video according to claim 1, characterized in that the obtaining of the motion vector with the mean value removed in step (2) specifically comprises the following steps:
(21) averaging the motion vectors of the pixel blocks in the luminance component of the current frame to obtain the motion-vector mean Vm(t);
(22) subtracting the motion-vector mean Vm(t) from the motion vector V(t) calculated in step (15) to obtain the mean-removed motion vector.
4. The visual saliency algorithm based on object motion features in video according to claim 1, characterized in that said step (3) of gaussian filtering the motion vector after mean removal comprises the following specific steps:
(31) filtering the motion vector after the mean value is removed by adopting a Gaussian filter to obtain a filtered motion vector;
(32) setting the boundary of the filtered motion vector to 0 to obtain the final filtered motion vector Vf(t).
5. The visual saliency algorithm based on motion features of objects in video according to claim 1, characterized in that said step (4) of calculating the squares of the horizontal direction component and the vertical direction component of the motion vector respectively to obtain the visual saliency maps in the horizontal direction and the vertical direction comprises the following specific steps:
(41) squaring the horizontal component Vfx(t) of the filtered motion vector to obtain the horizontal visual saliency map Sx(t);
(42) squaring the vertical component Vfy(t) of the filtered motion vector to obtain the vertical visual saliency map Sy(t).
6. The algorithm for visual saliency based on object motion features in videos as claimed in claim 1, wherein said step (5) of obtaining a final visual saliency map specifically comprises:
adding the horizontal and vertical visual saliency maps and mapping the result to 0-255 to obtain the final visual saliency map S(t), calculated as:

S(t) = ⌊255 · (Sx(t) + Sy(t)) / max(Sx(t) + Sy(t))⌋
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210006930.9A CN102568006B (en) | 2011-03-02 | 2012-01-11 | Visual saliency algorithm based on motion characteristic of object in video |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110049052.4 | 2011-03-02 | ||
CN201110049052 | 2011-03-02 | ||
CN201210006930.9A CN102568006B (en) | 2011-03-02 | 2012-01-11 | Visual saliency algorithm based on motion characteristic of object in video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102568006A (en) | 2012-07-11
CN102568006B CN102568006B (en) | 2014-06-11 |
Family
ID=46413352
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210006930.9A Expired - Fee Related CN102568006B (en) | 2011-03-02 | 2012-01-11 | Visual saliency algorithm based on motion characteristic of object in video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102568006B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102867301A (en) * | 2012-08-29 | 2013-01-09 | 西北工业大学 | Mehtod for getting image salient features according to information entropy |
CN103455795A (en) * | 2013-08-27 | 2013-12-18 | 西北工业大学 | Method for determining area where traffic target is located based on traffic video data image |
CN104637038A (en) * | 2015-03-11 | 2015-05-20 | 天津工业大学 | Improved CamShift tracing method based on weighted histogram model |
CN104718562A (en) * | 2012-10-17 | 2015-06-17 | 富士通株式会社 | Image processing device, image processing program and image processing method |
CN104869421A (en) * | 2015-06-04 | 2015-08-26 | 北京牡丹电子集团有限责任公司数字电视技术中心 | Global motion estimation based video saliency detection method |
CN110084160A (en) * | 2019-04-16 | 2019-08-02 | 东南大学 | A kind of video forest rocket detection method based on movement and brightness significant characteristics |
CN110415273A (en) * | 2019-07-29 | 2019-11-05 | 肇庆学院 | A kind of efficient motion tracking method of robot and system of view-based access control model conspicuousness |
CN114640850A (en) * | 2022-02-28 | 2022-06-17 | 上海顺久电子科技有限公司 | Motion estimation method of video image, display device and chip |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005236723A (en) * | 2004-02-20 | 2005-09-02 | Victor Co Of Japan Ltd | Device and method for encoding moving image, and device and method for decoding the moving image |
CN101263719A (en) * | 2005-09-09 | 2008-09-10 | 索尼株式会社 | Image processing device and method, program, and recording medium |
CN101436301A (en) * | 2008-12-04 | 2009-05-20 | 上海大学 | Method for detecting characteristic movement region of video encode |
US20090141808A1 (en) * | 2007-11-30 | 2009-06-04 | Yiufai Wong | System and methods for improved video decoding |
Non-Patent Citations (2)
Title |
---|
Ding Xuxing, et al.: "A Fast Motion Estimation Algorithm Based on Visual Attention", Chinese Journal of Scientific Instrument * |
Zhang Yan, et al.: "Particle Filter Based Multi-Target Tracking Algorithm Using Dynamic Salient Features", Acta Electronica Sinica * |
Also Published As
Publication number | Publication date |
---|---|
CN102568006B (en) | 2014-06-11 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | C14 | Grant of patent or utility model | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20140611; Termination date: 20170111 |