CN103607589A - Level selection visual attention mechanism-based image JND threshold calculating method in pixel domain - Google Patents

Level selection visual attention mechanism-based image JND threshold calculating method in pixel domain

Info

Publication number
CN103607589A
CN103607589A (application CN201310563526.6A)
Authority
CN
China
Prior art keywords
image
threshold value
threshold
jnd
level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310563526.6A
Other languages
Chinese (zh)
Other versions
CN103607589B (en)
Inventor
张冬冬
高利晶
臧笛
孙杳如
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University
Priority to CN201310563526.6A
Publication of CN103607589A
Application granted
Publication of CN103607589B
Current legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention, which belongs to the technical field of image/video coding, relates to a pixel-domain image just-noticeable-distortion (JND) threshold calculation method based on a level-selection visual attention mechanism. The method comprises the following steps: S1, calculating a background luminance adaptation threshold for the original input image; S2, calculating an edge-based texture masking threshold for the image; S3, adding the luminance adaptation threshold obtained in step S1 and the texture masking threshold obtained in step S2 and subtracting their overlapping portion to obtain the basic JND threshold; S4, setting the level value of hierarchical selection according to the size of the input image; S5, downsampling the original input image to different resolutions and performing saliency map detection on the image at each resolution with the phase spectrum of quaternion Fourier transform (PQFT) saliency detection method; and S6, upsampling the saliency maps at the different resolutions back to the resolution of the original image. The method can accommodate more noise while achieving good visual quality.

Description

Pixel-domain image JND threshold calculation method based on a hierarchical-selection visual attention mechanism
Technical field
The present invention relates to the technical field of image/video coding.
Technical background
Traditional image/video coding techniques compress mainly spatial redundancy, temporal redundancy and statistical redundancy, but seldom consider the characteristics and psychological effects of the human visual system, so a large amount of visually redundant data is encoded and transmitted. To further improve coding efficiency, researchers have begun to study the removal of visual redundancy. An effective way to characterize visual redundancy is the just-noticeable-distortion model, abbreviated JND model, which is based on psychology and physiology. It describes the smallest change that the human eye cannot perceive: because of various masking effects, the eye can only perceive noise that exceeds a certain threshold, and this threshold is the just noticeable distortion, which represents the degree of visual redundancy in an image. JND models are commonly used to guide the perceptual coding and processing of images or video, for example in preprocessing, adaptive quantization, rate control and motion estimation.
Existing just-noticeable-distortion (JND) models can roughly be divided into pixel-domain JND models and transform-domain JND models. Pixel-domain JND models are widely used because of their low computational cost; most of them are built by modelling the luminance adaptation effect and the texture masking effect. For example, document 1 (X. Yang, W. Lin, Z. Lu, E. P. Ong, and S. Yao, "Just-noticeable-distortion profile with nonlinear additivity model for perceptual masking in color images," IEEE Trans. Circuits Syst. Video Technol., vol. 15, no. 6, pp. 742-752, Jun. 2005) proposes a spatial-domain JND model for color images. With the development of visual attention models, scholars have in recent years proposed various visual attention detection methods, for example document 2 (L. Itti, C. Koch, E. Niebur, et al., "A model of saliency-based visual attention for rapid scene analysis," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254-1259, 1998). Based on such saliency detection models, some researchers have begun to apply visual attention to JND modelling of images, for example document 3 (Z. Chen and C. Guillemot, "Perceptually-friendly H.264/AVC video coding based on foveated just-noticeable-distortion model," IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 6, June 2010). The JND model in document 3 first computes the salient points of the image with the saliency model of document 2, then computes the distance between a given pixel and the salient points and the eccentricity of the pixel relative to them, and constructs a modulation function from the relation between eccentricity and viewing distance to modulate the JND model of document 1, obtaining a foveated JND model. However, on the one hand the saliency detection method of document 2 does not consider the hierarchical selection characteristic of the human eye when observing an image; on the other hand, for high-resolution images, modulating the JND threshold of document 1 with a modulation index derived from the retinal eccentricity and viewing distance may, when a pixel is far from the salient points, introduce more noise than can actually be tolerated. This model therefore cannot accurately compute the visual redundancy threshold of the human eye for an image. Compared with document 2, the PQFT (phase spectrum of quaternion Fourier transform) saliency detection method in document 4 (C. L. Guo and L. M. Zhang, "A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression," IEEE Trans. Image Processing, vol. 19, no. 1, Jan. 2010) computes the salient regions of an image at different resolutions and can well simulate the hierarchical selection characteristic of the human eye in observing an image.
Summary of the invention
On the basis of the prior-art Yang model (document 1), the present invention combines the hierarchical-selection visual attention mechanism and proposes a new image JND modelling method in the pixel domain. By carefully considering the hierarchical-selection visual attention mechanism of the human eye when observing an image, and by combining this attention mechanism with texture-based masking to build a multi-level modulation function, the traditional JND threshold is modulated and a more accurate JND model is established.
To this end, the technical scheme of the present invention is implemented with the following steps:
A pixel-domain just-noticeable-distortion (JND) threshold calculation method for images, comprising the following steps:
Step S1: calculate the background luminance adaptation threshold for the original input image.
Step S2: calculate the edge-based texture masking threshold for the image.
Step S3: add the luminance adaptation threshold from step S1 and the texture masking threshold from step S2, and subtract their overlapping part to obtain the basic JND threshold.
Step S4: set the level value L of hierarchical selection according to the size of the input image; in the present invention, L=2 for images of approximately 512*512, L=3 for 720*1280, and L=4 for larger images.
Step S5: downsample the original input image to different resolutions, which are respectively (1/2)^0 to (1/2)^(L-1) times the size of the original image, and perform saliency map detection on the image at each resolution with the PQFT saliency detection method.
Step S6: upsample the saliency maps at the different resolutions to the original image resolution.
Step S7: determine the threshold T_i of each saliency map with an adaptive threshold determination method (Otsu's method), and use this threshold to divide the saliency map into a salient region and a non-salient region.
Step S8: nest all the segmented saliency maps from large to small to obtain a multi-level saliency masking map.
Step S9: apply the Canny edge detector to the original image and divide the image into texture, edge and smooth regions.
Step S10: based on the multi-level salient regions obtained in steps S6, S8 and S9 and the texture characteristics, establish a comprehensive multi-level masking modulation function, modulate the threshold obtained in S3, and obtain the final JND threshold. (A compact end-to-end sketch of steps S1-S10 is given after this list.)
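For orientation, the following minimal sketch (Python with NumPy, SciPy and scikit-image) strings steps S1-S10 together on a luminance image. It is an illustration under simplifying assumptions, not the patented implementation: a plain phase-spectrum transform stands in for PQFT, generic gradient and edge operators stand in for the exact formulas of steps S1-S2, and the final modulation factors of steps S9-S10 are placeholder values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from skimage.feature import canny
from skimage.filters import threshold_otsu
from skimage.transform import resize

def jnd_pipeline_sketch(y, L=2):
    """Compact sketch of steps S1-S10 on a luminance image y (values 0-255).
    Every stage is simplified; more detailed sketches follow the corresponding
    steps of the embodiment below."""
    y = y.astype(np.float64)
    # S1: background luminance adaptation threshold (Chou-Li style curve on a smoothed local mean)
    bg = gaussian_filter(y, sigma=2.0)
    t_lum = np.where(bg <= 127, 17 * (1 - np.sqrt(bg / 127.0)) + 3,
                     3.0 / 128 * (bg - 127) + 3)
    # S2: edge-based texture masking (gradient magnitude, down-weighted near edges)
    grad = np.hypot(sobel(y, axis=0), sobel(y, axis=1))
    edge_weight = 1.0 - np.clip(gaussian_filter(canny(y / 255.0).astype(float), 1.5), 0.0, 1.0)
    t_tex = 0.117 * grad * edge_weight
    # S3: nonlinear additivity with overlap deduction (C_lt = 0.3)
    jnd_basic = t_lum + t_tex - 0.3 * np.minimum(t_lum, t_tex)
    # S4-S8: multi-level salient regions from phase-spectrum saliency at L scales
    level_map = np.zeros(y.shape, dtype=int)
    for i in range(L):
        small = resize(y, (y.shape[0] >> i, y.shape[1] >> i), anti_aliasing=True)  # S5: (1/2)^i size
        f = np.fft.fft2(small)
        sal = gaussian_filter(np.abs(np.fft.ifft2(np.exp(1j * np.angle(f)))) ** 2, 3.0)
        sal = resize(sal, y.shape)                                    # S6: upsample
        level_map += (sal >= 0.7 * threshold_otsu(sal)).astype(int)   # S7-S8: segment and nest
    # S9-S10: placeholder modulation -- more salient layers get a smaller threshold
    modulation = 1.2 - 0.3 * level_map / max(L, 1)
    return jnd_basic * modulation
```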
It is known that the human visual system exhibits hierarchical selection during visual attention: attention proceeds from coarse to fine, from object groups, to objects within a group, to object features, to spatial points. When observing an image, the eye therefore first captures the globally salient regions of the picture and then analyses those regions layer by layer from coarse to fine. Combining this biological property, the present invention proposes a JND model based on multi-level visual attention detection. The key technical points of the method, which embody the contribution of the invention, are:
1. To address the fact that traditional just-noticeable-distortion models do not consider the hierarchical-selection visual attention mechanism, the present invention performs saliency detection at different resolutions and nests the resulting saliency maps to obtain multi-level salient regions, thereby simulating the coarse-to-fine observation process of the human eye.
2. Based on the hierarchical-selection attention characteristic of the human eye when observing an image, a masking modulation function based on multi-level salient regions is established.
3. The masking modulation function considers not only the hierarchical-selection attention mechanism but also the different noise-masking abilities of edge, texture and smooth regions.
4. Different hierarchy level values are set for images of different resolutions, so that the model can be applied to images of different sizes.
The beneficial effect of the method is that the traditional JND threshold is modulated with values computed by a multi-level modulation function that takes the hierarchical-selection visual attention mechanism into account, yielding a more accurate JND threshold. Compared with Yang's model, the model obtained with the proposed image JND threshold calculation method can accommodate more noise under the same subjective visual quality; compared with Chen's model, the proposed method not only accommodates more noise but also achieves better visual quality.
Brief description of the drawings
Fig. 1 is the framework of the pixel-domain image just-noticeable-distortion model based on the hierarchical-selection visual attention mechanism of the present invention.
Fig. 2 is the example test image of the present invention.
Fig. 3 is the saliency map of the example test image at the original image resolution.
Fig. 4 is the saliency map of the example test image at 1/2 of the original image resolution.
Fig. 5 is the salient-region segmentation of the example test image at the original resolution.
Fig. 6 is the salient-region segmentation result of the example test image at 1/2 of the original image resolution.
Fig. 7 is the multi-level salient-region segmentation result obtained by nesting Fig. 5 and Fig. 6.
Fig. 8 is the block classification into texture, smooth and edge regions obtained with the Canny operator.
Fig. 9 is the flow chart of the JND threshold calculation method of the present invention.
Embodiment
The invention is further described below with a specific example, with reference to the accompanying drawings:
The example uses MATLAB 7 as the simulation platform and a 768*512 bmp color image (shown in Fig. 2) as the test image. The example is described in detail step by step below:
Steps (1)-(3) compute the basic JND threshold; the computation is identical to the JND threshold computation of the model proposed by Yang et al. in document 1.
Step (1): take the selected 768*512 bmp color image as the input test image and compute the background-luminance-based adaptive threshold, which is the maximum of the background luminance model and the spatial masking model:
T_l(x,y) = max{ f_1(bg(x,y), mg(x,y)), f_2(bg(x,y)) }    (1)
where bg(x,y) is the average background luminance and mg(x,y) is the maximum weighted average of luminance changes around the pixel. f_1(bg(x,y), mg(x,y)) is the spatial masking model, determined by the average background luminance of the pixel and the surrounding luminance differences; it expresses that texture regions can tolerate larger distortion than relatively smooth regions. f_1(bg(x,y), mg(x,y)) is related to the background luminance and the luminance change as follows (the formulas follow the Yang model of document 1):
f_1(bg(x,y), mg(x,y)) = mg(x,y) · α(bg(x,y)) + β(bg(x,y))    (2)
α(bg(x,y)) = 0.0001 · bg(x,y) + 0.115    (3)
β(bg(x,y)) = λ - 0.01 · bg(x,y),  with λ = 1/2    (4)
mg(x,y) is calculated with four weighted directional operators g_k (k = 1, ..., 4), as follows:
mg(x,y) = max_{k=1,...,4} | grad_k(x,y) |    (5)
grad_k(x,y) = (1/16) · Σ_{i=1}^{5} Σ_{j=1}^{5} p(x-3+i, y-3+j) · g_k(i,j)    (6)
where p(x,y) denotes the pixel value and the g_k are 5*5 weighted masks along the four principal directions.
f_2(bg(x,y)) is the background luminance model, given by:
f_2(bg(x,y)) = 17 · (1 - sqrt(bg(x,y)/127)) + 3,   if bg(x,y) ≤ 127
f_2(bg(x,y)) = (3/128) · (bg(x,y) - 127) + 3,      otherwise    (7)
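As a concrete illustration of equations (1)-(7), the Python sketch below computes the background luminance adaptive threshold. It follows the Yang/Chou-Li formulation that the patent states it reuses; the 5*5 operators g_k are not reproduced in the text, so simple 3*3 directional differences are used as an approximation, and the constants (0.0001, 0.115, λ = 1/2, 17, 3/128) are the commonly cited values of that model rather than values taken from this document.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def luminance_adaptation_threshold(y):
    """Sketch of eqs. (1)-(7): background-luminance adaptive threshold,
    Yang/Chou-Li style. Directional operators are simplified."""
    y = y.astype(np.float64)
    # bg(x,y): average background luminance over a 5x5 neighbourhood
    bg = uniform_filter(y, size=5)
    # mg(x,y): maximum directional luminance change (approximation of eqs. (5)-(6))
    kernels = [np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], float),
               np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], float),
               np.array([[0, 1, 1], [-1, 0, 1], [-1, -1, 0]], float),
               np.array([[1, 1, 0], [1, 0, -1], [0, -1, -1]], float)]
    mg = np.max([np.abs(convolve(y, k)) for k in kernels], axis=0)
    # f1: spatial masking model (eqs. (2)-(4))
    alpha = 0.0001 * bg + 0.115
    beta = 0.5 - 0.01 * bg
    f1 = mg * alpha + beta
    # f2: background luminance model (eq. (7))
    f2 = np.where(bg <= 127,
                  17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                  3.0 / 128.0 * (bg - 127.0) + 3.0)
    # eq. (1): the adaptive threshold is the maximum of the two models
    return np.maximum(f1, f2)
```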
Step (2): compute the texture masking threshold for the original image with the following formula:
T_t(x,y) = η · g_Y(x,y) · w_Y(x,y)    (8)
where g_Y(x,y) is the maximum weighted average of luminance changes for the Y component, η = 0.117, and w_Y is an edge-related weight obtained with a low-pass-filtered edge map:
w_Y = e_Y ∗ h   (convolution of the edge map with h)    (9)
where e_Y is the edge map of the corresponding component (the edge maps of the three components are obtained in the same way), detected here with the Canny operator, and h is a Gaussian low-pass filter used to prevent abrupt transitions at the edges.
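A sketch of the edge-based texture masking threshold of equations (8)-(9) might look as follows. The gradient operator used for g_Y and the exact form of the edge weight w_Y are assumptions (the original formulas are rendered as images); here the Canny edge map is smoothed with a Gaussian filter h and used to reduce the masking estimate near edges, which is the usual reading of the Yang-style weight.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from skimage.feature import canny

def texture_masking_threshold(y, eta=0.117):
    """Sketch of eqs. (8)-(9): T_t = eta * g_Y * w_Y, with g_Y approximated by
    the gradient magnitude and w_Y by a Gaussian-smoothed Canny edge map that
    lowers the masking estimate near edges (an assumed reading of eq. (9))."""
    y = y.astype(np.float64)
    g = np.hypot(sobel(y, axis=0), sobel(y, axis=1))     # stand-in for g_Y(x,y)
    e = canny(y / 255.0, sigma=2.0).astype(np.float64)   # e_Y: Canny edge map
    w = 1.0 - np.clip(gaussian_filter(e, sigma=1.5) * 4.0, 0.0, 1.0)  # edge-related weight
    return eta * g * w
```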
Step (3): add the thresholds obtained in steps (1) and (2), and subtract their overlapping part to obtain the basic JND threshold, as shown in the following formula:
JND_basic(x,y) = T_l(x,y) + T_t(x,y) - C_lt · min{ T_l(x,y), T_t(x,y) }    (10)
where C_lt = 0.3 for the luminance (Y) component.
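Equation (10) is a nonlinear additivity rule for masking: the two thresholds are summed and their overlap is discounted. A minimal sketch:

```python
import numpy as np

def basic_jnd(t_lum, t_tex, c_lt=0.3):
    """Eq. (10): combine luminance adaptation and texture masking thresholds,
    deducting the overlapping part with C_lt = 0.3 (luminance component)."""
    return t_lum + t_tex - c_lt * np.minimum(t_lum, t_tex)
```

With the two sketches above, the basic threshold of this example would be obtained as basic_jnd(luminance_adaptation_threshold(y), texture_masking_threshold(y)).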
Steps (4)-(11) combine the hierarchical-selection visual attention mechanism with the Yang JND model of document 1 to obtain the new model of the present invention.
Step (4): set the level value L of the hierarchical-selection attention mechanism; in this example, L = 2 for the 768*512 image.
Step (5): downsample the original input image to different resolutions, here 1 and 1/2 times the original size, and perform saliency map detection on the image at each resolution with the PQFT saliency detection method.
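The PQFT method of document 4 builds a quaternion image from intensity, two colour-opponent channels and a motion channel, keeps only the phase of its quaternion Fourier transform, and reconstructs a saliency map. The sketch below is a simplified stand-in that applies the phase-only reconstruction per channel and sums the results; it keeps the phase-spectrum idea but is not the exact quaternion formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def phase_spectrum_saliency(rgb):
    """Simplified stand-in for PQFT saliency (step (5)): per-channel phase-only
    Fourier reconstruction on intensity and two colour-opponent channels."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    rg = r - g                      # red-green opponency
    by = b - (r + g) / 2.0          # blue-yellow opponency
    sal = np.zeros(intensity.shape)
    for chan in (intensity, rg, by):
        f = np.fft.fft2(chan)
        phase_only = np.exp(1j * np.angle(f))        # discard the amplitude spectrum
        sal += np.abs(np.fft.ifft2(phase_only)) ** 2
    sal = gaussian_filter(sal, sigma=3.0)            # post-smoothing, as in spectral saliency methods
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

At each hierarchy level i, the image would first be resized to (1/2)^i of its original size before calling this function, and the resulting map upsampled back in step (6).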
Step (6): upsample the saliency maps at the different resolutions obtained in step (5) to the original image size; the results are shown in Fig. 3 and Fig. 4.
Step (7): divide each saliency map into a salient region and a non-salient region using a threshold T_i equal to 0.7 times the threshold computed by Otsu's method:
M_i(x,y) = 1  if SM_i(x,y) ≥ T_i;   M_i(x,y) = 0  otherwise    (11)
where M_i denotes the salient-type map of the i-th layer saliency map, SM_i(x,y) is the saliency value of each pixel of the i-th layer saliency map, and T_i is 0.7 times the threshold obtained by Otsu's method. The results are shown in Fig. 5 and Fig. 6, where black denotes the non-salient region and white denotes the salient region.
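Step (7), as reconstructed in equation (11), reduces to Otsu thresholding with a 0.7 scaling factor. A sketch, assuming scikit-image's threshold_otsu as the adaptive threshold determination method:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.transform import resize

def segment_saliency(sal_map, full_shape, factor=0.7):
    """Steps (6)-(7) sketch: upsample a saliency map to the original resolution
    and binarise it with T_i = 0.7 * (Otsu threshold), cf. eq. (11)."""
    sal_up = resize(sal_map, full_shape, anti_aliasing=True)  # step (6)
    t_i = factor * threshold_otsu(sal_up)                     # T_i
    return (sal_up >= t_i).astype(np.uint8)                   # 1 = salient, 0 = non-salient
```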
Step (8): nest the salient-type maps of the different layers obtained in step (7) to obtain the multi-level salient-type map:
[Equation (12): the nesting rule for combining the layer-wise maps M_i, given as an image in the original document]    (12)
The result is shown in Fig. 7, where black denotes the 1st-layer salient region, grey the 2nd-layer salient region, and white the 3rd-layer salient region.
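Because equation (12) is available only as an image, the nesting rule below is one plausible reading: a pixel's layer index is 1 plus the number of scales at which it is salient, so finer salient regions are nested inside coarser ones. With the L = 2 binary maps of this example this yields the three layers of Fig. 7.

```python
import numpy as np

def nest_salient_masks(masks):
    """Step (8) sketch under an assumed nesting rule: layer = 1 + number of
    binary salient masks (from step (7)) that mark the pixel as salient."""
    stacked = np.stack([np.asarray(m, dtype=np.int32) for m in masks], axis=0)
    return 1 + stacked.sum(axis=0)   # values 1 .. L+1, larger = more salient
```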
Step (9): apply the Canny edge detector to the original image and divide the image into texture, edge and smooth regions. The result is shown in Fig. 8, where white denotes the texture region, black the smooth region, and grey the edge region.
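The patent only states that the Canny detector drives the three-way split into smooth, edge and texture regions. One common way to realise such a split, sketched below, is to classify fixed-size blocks by their edge-pixel density; the block size (8) and the density thresholds (0.1, 0.3) are assumptions for illustration.

```python
import numpy as np
from skimage.feature import canny

def classify_blocks(y, block=8, edge_lo=0.1, edge_hi=0.3):
    """Step (9) sketch: label smooth (0), edge (1) and texture (2) regions from
    the edge-pixel density of the Canny edge map in each block."""
    edges = canny(y.astype(np.float64) / 255.0, sigma=2.0)
    h = (y.shape[0] // block) * block            # crop to a multiple of the block size
    w = (y.shape[1] // block) * block
    density = edges[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    labels = np.ones(density.shape, dtype=np.uint8)   # default: edge region
    labels[density < edge_lo] = 0                      # smooth region
    labels[density > edge_hi] = 2                      # texture region
    return np.kron(labels, np.ones((block, block), dtype=np.uint8))  # back to pixel resolution
```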
Step (10): combine the multi-layer salient-region segmentation result obtained in step (8) with the block classification result obtained in step (9) to partition the image more finely. Except for the top salient region, each layer of salient region is divided into a smooth region, an edge region and a texture region, giving 3L+1 different block types in total: the texture region of the first-layer salient region, the edge region of the first-layer salient region, the smooth region of the first-layer salient region, the texture region of the second-layer salient region, the edge region of the second-layer salient region, ..., and the (L+1)-th-layer salient region. For the different blocks, a multi-level modulation function is constructed taking into account the saliency and texture characteristics of each layer, and the modulation value is computed:
[Equation (13): modulation function, given as an image in the original document]    (13)
[Equation (14): modulation function, given as an image in the original document]    (14)
[Equation (15): modulation function, given as an image in the original document]    (15)
Step (11): modulate the JND threshold obtained in step (3) with the modulation function values obtained in step (10) to obtain the final JND threshold:
[Equation (16): final JND threshold as the basic threshold modulated by the multi-level modulation value, given as an image in the original document]    (16)
The JND threshold of the image computed by the above steps takes into account the spatial contrast effect, the luminance adaptation effect, the block-classification contrast masking effect and the hierarchical-selection visual attention mechanism, so it agrees better with the human visual system and is more accurate.
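Since the modulation functions of equations (13)-(15) are available only as images, the sketch below shows only the structure of steps (10)-(11): each pixel receives a modulation factor according to its saliency layer and region type, and the basic JND threshold of equation (10) is scaled by that factor (equation (16) is assumed here to be a pixel-wise product). The numbers in the lookup table are illustrative placeholders, not the patented values.

```python
import numpy as np

# Hypothetical per-block modulation factors keyed by (saliency layer, region type).
MOD_TABLE = {
    (1, 'smooth'): 1.3, (1, 'edge'): 1.15, (1, 'texture'): 1.4,   # least salient layer
    (2, 'smooth'): 1.1, (2, 'edge'): 1.0,  (2, 'texture'): 1.2,
    (3, 'top'): 0.9,                                              # top salient layer
}

def modulate_jnd(jnd_basic, level_map, region_map, top_level=3):
    """Steps (10)-(11) sketch: look up a modulation factor for each pixel from
    its saliency layer (1..L+1) and region type (0 smooth, 1 edge, 2 texture),
    then scale the basic JND threshold by that factor."""
    names = {0: 'smooth', 1: 'edge', 2: 'texture'}
    factor = np.ones_like(jnd_basic, dtype=np.float64)
    for lvl in range(1, top_level + 1):
        if lvl == top_level:
            factor[level_map == lvl] = MOD_TABLE[(lvl, 'top')]
            continue
        for code, name in names.items():
            sel = (level_map == lvl) & (region_map == code)
            factor[sel] = MOD_TABLE[(lvl, name)]
    return jnd_basic * factor
```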

Claims (1)

1. An image JND threshold calculation method based on a hierarchical-selection visual attention mechanism in the pixel domain, comprising the following steps:
Step S1: calculating a background luminance adaptation threshold for the original input image;
Step S2: calculating an edge-based texture masking threshold for the image;
Step S3: adding the luminance adaptation threshold obtained in step S1 and the texture masking threshold obtained in step S2, and subtracting their overlapping part to obtain the basic JND threshold;
Step S4: setting the level value L of hierarchical selection according to the size of the input image,
wherein L=2 for images of approximately 512*512,
L=3 for 720*1280,
and L=4 for larger images;
Step S5: downsampling the original input image to different resolutions, which are respectively (1/2)^0 to (1/2)^(L-1) times the size of the original image, and performing saliency map detection on the image at each resolution with the PQFT saliency detection method;
Step S6: upsampling the saliency maps at the different resolutions to the original image resolution;
Step S7: determining the threshold T_i of each saliency map with an adaptive threshold determination method, and using this threshold to divide the saliency map into a salient region and a non-salient region;
Step S8: nesting all the segmented saliency maps from large to small to obtain a multi-level saliency masking map;
Step S9: applying the Canny edge detector to the original image and dividing the image into texture, edge and smooth regions;
Step S10: based on the multi-level salient regions obtained in steps S6, S8 and S9 and the texture characteristics, establishing a comprehensive multi-level masking modulation function, modulating the threshold obtained in S3, and obtaining the final JND threshold.
CN201310563526.6A 2013-11-14 2013-11-14 JND threshold value computational methods based on hierarchy selection visual attention mechanism Expired - Fee Related CN103607589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310563526.6A CN103607589B (en) 2013-11-14 2013-11-14 JND threshold value computational methods based on hierarchy selection visual attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310563526.6A CN103607589B (en) 2013-11-14 2013-11-14 JND threshold value computational methods based on hierarchy selection visual attention mechanism

Publications (2)

Publication Number Publication Date
CN103607589A true CN103607589A (en) 2014-02-26
CN103607589B CN103607589B (en) 2016-08-24

Family

ID=50125786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310563526.6A Expired - Fee Related CN103607589B (en) 2013-11-14 2013-11-14 JND threshold value computational methods based on hierarchy selection visual attention mechanism

Country Status (1)

Country Link
CN (1) CN103607589B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101605272A (en) * 2009-07-09 2009-12-16 浙江大学 A kind of method for evaluating objective quality of partial reference type image
CN101621708A (en) * 2009-07-29 2010-01-06 武汉大学 Method for computing perceptible distortion of color image based on DCT field
US20110243228A1 (en) * 2010-03-30 2011-10-06 Hong Kong Applied Science and Technology Research Institute Company Limited Method and apparatus for video coding by abt-based just noticeable difference model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DONGDONG ZHANG et al.: "A DCT-Domain JND Model Based on Visual Attention for Image", 《2013 IEEE INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING APPLICATIONS》, 10 October 2013 (2013-10-10) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104754320A (en) * 2015-03-27 2015-07-01 同济大学 Method for calculating 3D-JND threshold value
CN104754320B (en) * 2015-03-27 2017-05-31 同济大学 A kind of 3D JND threshold values computational methods
CN105611272A (en) * 2015-12-28 2016-05-25 宁波大学 Eye exactly perceptible stereo image distortion analyzing method based on texture complexity
CN105635743A (en) * 2015-12-30 2016-06-01 福建师范大学 Minimum noticeable distortion method and system based on saliency detection and total variation
CN108521572A (en) * 2018-03-22 2018-09-11 四川大学 A kind of residual filtering method based on pixel domain JND model
CN108521572B (en) * 2018-03-22 2021-07-16 四川大学 Residual filtering method based on pixel domain JND model
CN108965879A (en) * 2018-08-31 2018-12-07 杭州电子科技大学 A kind of Space-time domain adaptively just perceives the measure of distortion
CN108965879B (en) * 2018-08-31 2020-08-25 杭州电子科技大学 Space-time domain self-adaptive just noticeable distortion measurement method
CN112634278A (en) * 2020-10-30 2021-04-09 上海大学 Superpixel-based just noticeable distortion model
CN112634278B (en) * 2020-10-30 2022-06-14 上海大学 Super-pixel-based just noticeable distortion method
CN115187519A (en) * 2022-06-21 2022-10-14 上海市计量测试技术研究院 Image quality evaluation method, system and computer readable medium

Also Published As

Publication number Publication date
CN103607589B (en) 2016-08-24

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20160824
Termination date: 20181114