CN111339974A - Method for identifying modern ceramics and ancient ceramics - Google Patents


Info

Publication number
CN111339974A
Authority
CN
China
Prior art keywords
image
ceramics
training
scale
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010139176.0A
Other languages
Chinese (zh)
Other versions
CN111339974B (en)
Inventor
程翔
程钰鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Jingdezhen Ceramic Institute
Original Assignee
Southeast University
Jingdezhen Ceramic Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University and Jingdezhen Ceramic Institute
Priority to CN202010139176.0A
Publication of CN111339974A
Application granted
Publication of CN111339974B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/80 Recognising image objects characterised by unique random patterns
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for identifying modern ceramics and ancient ceramics. The method comprises: constructing a positive sample corresponding to ancient ceramics and a negative sample corresponding to modern ceramics; converting the RGB images into the HSV color space to obtain HSV images; obtaining feature descriptors of the HSV images and inputting them into a support vector machine for training to obtain the training parameters of the support vector machine; inputting the RGB images into a deep convolutional neural network architecture for training to obtain the network parameters of the convolutional neural network; determining a deep learning model according to the training parameters of the support vector machine and the network parameters of the convolutional neural network; inputting the gray-scale maps of the positive and negative samples into the deep learning model for training to obtain an identification model; and obtaining a picture of the porcelain to be identified, inputting it into the identification model, and determining from the model's output whether the porcelain is modern ceramics or ancient ceramics, thereby improving the efficiency of ceramic identification.

Description

Method for identifying modern ceramics and ancient ceramics
Technical Field
The invention relates to the technical field of computers, in particular to a method for identifying modern ceramics and ancient ceramics.
Background
Ancient ceramics often have extremely high value. For example, at an auction of Chinese cultural relics and artworks held in Hong Kong in April 1999, a perfectly preserved Ming dynasty Chenghua doucai "chicken cup" fetched a record price of HK$29.17 million. On 8 April 2014, at Sotheby's Hong Kong important Chinese porcelain and works of art spring sale, the Meiyintang-collection Ming Chenghua doucai chicken cup sold for HK$281.24 million. A wave of antique-porcelain imitation has therefore arisen in Jingdezhen and similar regions. Because these regions were the production sites of the ancient imperial porcelain, imitations good enough to pass for genuine pieces have always abounded, and many lovers of ancient porcelain travel there hoping to unearth treasures. The question is how to tell ancient porcelain from modern imitations; to this end, many experts and scholars have set up appraisal WeChat public accounts and online platforms that identify ancient ceramics for the public free of charge. However, conventional authentication schemes are often inefficient because of time and place limitations.
Disclosure of Invention
To address these problems, the invention provides a method for identifying modern ceramics and ancient ceramics.
To achieve this aim, the invention provides a method for identifying modern ceramics and ancient ceramics, comprising the following steps:
s10, obtaining a plurality of micrographs of ancient ceramics to construct a positive sample corresponding to the ancient ceramics, obtaining a plurality of micrographs of modern ceramics to construct a negative sample corresponding to the modern ceramics, respectively obtaining RGB images of the positive sample and the negative sample, and converting the RGB images into the HSV color space to obtain HSV images;
s20, acquiring a feature descriptor of the HSV image, inputting the feature descriptor into a support vector machine for training, and acquiring training parameters of the support vector machine;
s30, inputting the RGB image into a deep convolutional neural network architecture for training to obtain network parameters of a convolutional neural network;
s40, determining a deep learning model according to the training parameters of the support vector machine and the network parameters of the convolutional neural network, and inputting the gray-scale image of the positive sample and the gray-scale image of the negative sample into an input layer of the deep learning model respectively for training to obtain an identification model;
s50, obtaining the picture to be identified of the porcelain to be identified, inputting the picture to be identified into the identification model, and determining the porcelain to be identified as modern ceramics or ancient ceramics according to the output result of the identification model.
In one embodiment, obtaining a feature descriptor for an HSV image comprises:
constructing a scale space, carrying out scale transformation on a gray level image and a gradient image corresponding to the HSV image, establishing a Gaussian difference pyramid, obtaining a scale space representation sequence of the HSV image under multiple scales, extracting a main contour of the scale space from the scale space representation sequence, and taking the extracted main contour as a feature vector;
finding key points according to the feature vectors by adopting a Gaussian difference algorithm, and positioning the key points to obtain the positions of the key points;
and determining the information distribution of the key points according to the positions of the key points, and generating descriptors according to the information distribution of the key points.
As an embodiment, generating the descriptor according to the information distribution of the key points includes:
and determining a fuzzy image of the gradient direction histogram of the descriptor in the scale of the key point according to the information distribution of the key point, and calculating the descriptor according to the fuzzy image.
In one embodiment, the respectively inputting the gray-scale map of the positive sample and the gray-scale map of the negative sample into the input layer of the deep learning model for training, and obtaining the identification model includes:
respectively inputting the gray-scale image of the positive sample and the gray-scale image of the negative sample into an input layer of a deep learning model, and training the input layer by adopting a training matrix of the deep learning model to obtain the characteristics of the input layer and parameters corresponding to the characteristics of the input layer;
taking the input layer as a target layer, and acquiring an adjacent hidden layer of the target layer;
training an adjacent hidden layer of the target layer according to the features of the target layer to obtain the features of the adjacent hidden layer and parameters corresponding to the features of the adjacent hidden layer;
and taking the adjacent hidden layer as the target layer, and iteratively executing the step of obtaining the adjacent hidden layer of the target layer; when the loss function value of the deep learning model is smaller than a set threshold or the number of iterations exceeds an iteration threshold, determining the identification model according to the current parameters of the deep learning model.
In one embodiment, converting the RGB image into an HSV color space, the obtaining the HSV image includes:
and converting the RGB image into an HSV color space, and performing multi-resolution decomposition on the image converted into the HSV color space to obtain HSV images with multiple resolutions.
As an embodiment, performing multi-resolution decomposition on the image after being converted into the HSV color space includes:
the method comprises the steps of taking a gray image as an initial image, using a Gaussian pyramid to perform downsampling on the initial image to obtain a first downsampled image, taking the first downsampled image as the initial image, using the Gaussian pyramid to perform downsampling on the initial image to obtain a second downsampled image, repeating the process of obtaining the first downsampled image and the process of obtaining the second downsampled image twice, obtaining three groups of downsampled images in total, and obtaining HSV images with multiple resolutions.
In the method for identifying modern ceramics and ancient ceramics, a plurality of micrographs of ancient ceramics are obtained to construct a positive sample corresponding to the ancient ceramics, and a plurality of micrographs of modern ceramics are obtained to construct a negative sample corresponding to the modern ceramics; RGB images of the positive sample and the negative sample are obtained respectively and converted into the HSV color space to obtain HSV images; feature descriptors of the HSV images are obtained and input into a support vector machine for training to obtain the training parameters of the support vector machine; the RGB images are input into a deep convolutional neural network architecture for training to obtain the network parameters of the convolutional neural network; a deep learning model is determined according to the training parameters of the support vector machine and the network parameters of the convolutional neural network; the gray-scale maps of the positive and negative samples are input into the input layer of the deep learning model for training to obtain an identification model; and a picture of the porcelain to be identified is obtained and input into the identification model, which determines from its output whether the porcelain is modern ceramics or ancient ceramics, thereby improving the efficiency of ceramic identification.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of a method for identifying modern ceramics and ancient ceramics;
FIG. 2 is a schematic diagram of a deep convolutional neural network architecture of an embodiment;
FIG. 3 is a schematic diagram of a difference of gaussians operation performed on images of different scales according to an embodiment;
FIG. 4 is a schematic diagram of determining extreme points between images of different scale spaces in a Gaussian pyramid according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a method for identifying modern ceramics and ancient ceramics, comprising the following steps:
and S10, obtaining micrographs of the ancient ceramics to construct a positive sample corresponding to the ancient ceramics, obtaining the micrographs of the ancient ceramics to construct a negative sample corresponding to the ancient ceramics, respectively obtaining RGB images of the positive sample and the negative sample, and converting the RGB images into HSV color space to obtain HSV images.
In the steps, the sample space of the ancient ceramic can be established as a positive sample, and the sample space of the ancient ceramic can be established as a secondary sample. And converting the positive sample and the negative sample from the RGB color space to the HSV color space for subsequent processing.
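As a minimal sketch of the color-space handling in this step (assuming OpenCV is installed; the file path is hypothetical), the conversions could look like:

```python
import cv2

# Hedged sketch of step S10's image handling: load one micrograph and
# derive the RGB, HSV, and gray-scale versions used in later steps.
bgr = cv2.imread("micrograph.png")            # OpenCV loads images as BGR
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)    # RGB image fed to the CNN (S30)
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)    # HSV image used for SIFT (S20)
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # gray-scale map used in S40
```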
In one example, a hydrofluoric acid solution may be used to etch the glaze of modern ceramics (antique-imitation porcelain) to dull its fresh gloss. Because glaze composition, acid concentration, and etching time differ from piece to piece, several judgments can be made about this delustring treatment:
a. Under the microscope, the glaze bubbles are burst in large numbers and distributed uniformly, which is the result of over-aggressive artificial aging.
b. The dull parts of the glaze surface show brush or tear marks. This is because the forger may dip a brush or writing brush directly in concentrated acid and apply it to the glaze; where the acid lands first the glaze is corroded most severely and traps dirt most easily, and vice versa.
c. The glaze has a dull or oily lustre, whereas naturally devitrified glaze that has lost its glassy shine looks warm, moist, and soft to the naked eye.
d. The glaze surface carries a white frost. Wares with complex surface shapes readily develop this white frost after acid treatment, and it is extremely difficult to clean off.
e. The glaze surface shows obvious cracks. This phenomenon occurs most readily on wares whose glaze has a fine crackle pattern.
Compared with genuine ancient porcelain, such antique-imitation porcelain thus exhibits clearly different characteristics. Therefore, artificial-intelligence methods can be used to classify and identify the micrographs corresponding to ancient ceramics and modern ceramics respectively.
S20, acquiring a feature descriptor of the HSV image, inputting the feature descriptor into a support vector machine for training, and acquiring training parameters of the support vector machine.
In this step, SIFT feature descriptors are extracted from the images at each resolution output by step S10, and the SIFT feature descriptors of the training samples are input into the support vector machine for training to obtain the training parameters of the support vector machine.
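A hedged sketch of this step, assuming OpenCV's SIFT implementation and scikit-learn's SVC; `hsv_images` and `labels` are hypothetical containers for the training images and their positive/negative labels:

```python
import cv2
import numpy as np
from sklearn import svm

sift = cv2.SIFT_create()

def image_feature(img_hsv):
    # Compute SIFT descriptors on the V channel and average-pool them into
    # one fixed-length vector so the SVM sees one vector per image.
    _, desc = sift.detectAndCompute(img_hsv[:, :, 2], None)
    if desc is None:
        return np.zeros(128, dtype=np.float32)
    return desc.mean(axis=0)

X = np.stack([image_feature(img) for img in hsv_images])
clf = svm.SVC(kernel="rbf")
clf.fit(X, labels)  # the fitted model holds the SVM's training parameters
```

The average pooling is an illustrative simplification; the patent itself inputs the per-keypoint SIFT vectors into the support vector machine.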
S30, inputting the RGB image into a deep convolutional neural network architecture for training to obtain network parameters of the convolutional neural network.
In one example, the above steps may establish a deep convolutional neural network architecture, and the RGB image is input into the deep convolutional neural network architecture as shown in fig. 2 to be trained, so as to obtain network parameters of the convolutional neural network.
S40, determining a deep learning model according to the training parameters of the support vector machine and the network parameters of the convolutional neural network, and inputting the gray-scale map of the positive sample and the gray-scale map of the negative sample into the input layer of the deep learning model respectively for training to obtain the identification model.
Before the above steps, the gray scale map of the positive sample and the gray scale map of the negative sample can be obtained respectively.
S50, obtaining the picture to be identified of the porcelain to be identified, inputting the picture to be identified into the identification model, and determining the porcelain to be identified as modern ceramics or ancient ceramics according to the output result of the identification model.
The picture to be identified can be a micrograph of the porcelain to be identified.
This embodiment provides a neural-network deep-learning method: pictures of the ceramic surface are acquired locally under a microscope to build a training image library of new and old ceramics; the Scale-Invariant Feature Transform (SIFT) algorithm is used to extract features from the porcelain images, yielding key points (corner points) together with descriptors of their scale and orientation; the feature points and descriptors are input into a support vector machine to train an SVM model; a deep-learning neural network model is trained on the images; and the two models are fused to distinguish ancient ceramics from modern imitations.
In the method for identifying modern ceramics and ancient ceramics, a plurality of micrographs of ancient ceramics are obtained to construct a positive sample corresponding to the ancient ceramics, and a plurality of micrographs of modern ceramics are obtained to construct a negative sample corresponding to the modern ceramics; RGB images of the positive sample and the negative sample are obtained respectively and converted into the HSV color space to obtain HSV images; feature descriptors of the HSV images are obtained and input into a support vector machine for training to obtain the training parameters of the support vector machine; the RGB images are input into a deep convolutional neural network architecture for training to obtain the network parameters of the convolutional neural network; a deep learning model is determined according to the training parameters of the support vector machine and the network parameters of the convolutional neural network; the gray-scale maps of the positive and negative samples are input into the input layer of the deep learning model for training to obtain an identification model; and a picture of the porcelain to be identified is obtained and input into the identification model, which determines from its output whether the porcelain is modern ceramics or ancient ceramics, thereby improving the efficiency of ceramic identification.
In one embodiment, obtaining a feature descriptor for an HSV image comprises:
constructing a scale space, carrying out scale transformation on a gray level image and a gradient image corresponding to the HSV image, establishing a Gaussian difference pyramid, obtaining a scale space representation sequence of the HSV image under multiple scales, extracting a main contour of the scale space from the scale space representation sequence, and taking the extracted main contour as a feature vector;
finding key points according to the feature vectors by adopting a Gaussian difference algorithm, and positioning the key points to obtain the positions of the key points;
and determining the information distribution of the key points according to the positions of the key points, and generating descriptors according to the information distribution of the key points.
As an embodiment, generating the descriptor according to the information allocation of the key point includes:
and determining a fuzzy image of the gradient direction histogram of the descriptor in the scale of the key point according to the information distribution of the key point, and calculating the descriptor according to the fuzzy image.
In one example, the feature descriptor may also be referred to as SIFT feature descriptor, and the obtaining process may include:
a) Constructing the scale space: scale transformation is performed on the gray-scale image and the gradient image of the HSV image, a Gaussian difference pyramid is established, and a sequence of scale-space representations of the image at multiple scales is obtained; the main contours of the scale space are extracted from this sequence and used as a feature vector, enabling edge and corner detection and key-point extraction at different resolutions. The scale space of an image is defined as a function L(x, y, σ):
L(x,y,σ)=G(x,y,σ)*I(x,y) (1)
G(x, y, σ) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²))   (2)
L(x, y, σ) is generated by convolution with a variable-scale Gaussian: G(x, y, σ) in formula (1) is the scale-variable Gaussian function given by formula (2), the symbol * denotes the convolution operation, I(x, y) is the original gray-scale image or gradient image of the ceramics, x and y are the coordinates of an image pixel, and σ is the scale parameter of the Gaussian function.
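Formula (1) can be sketched directly in code. The following assumes OpenCV, the gray-scale image `gray` from the earlier sketch, and illustrative values σ₀ = 1.6 and S = 3 (assumptions, not values stated in the patent):

```python
import cv2

# One octave of the Gaussian scale space L(x, y, sigma) = G * I, plus the
# difference-of-Gaussians layers used in step b) below.
s, sigma0 = 3, 1.6
k = 2 ** (1.0 / s)
octave = [cv2.GaussianBlur(gray, (0, 0), sigma0 * k ** i) for i in range(s + 3)]
dog = [cv2.subtract(octave[i + 1], octave[i]) for i in range(len(octave) - 1)]
```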
b) To effectively detect stable keypoint locations in the scale space, the keypoints are found using a difference-of-Gaussian (DoG) algorithm.
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)   (3)
The Gaussian-difference scale space is chosen for computing extreme points for two reasons: first, it is easy to compute, requiring only subtraction of different scale layers; second, the difference-of-Gaussians is approximately equal to the scale-normalized Laplacian of Gaussian. For an image I(x, y), images at different scales are organized into octaves; the octaves serve scale invariance, i.e., they ensure that corresponding feature points can be found at any scale. The first octave is at the original image size, and each subsequent octave is the result of downsampling the previous one to 1/4 of its area (halving length and width), forming the next, higher level of the pyramid. The difference-of-Gaussians operation on images of different scales is illustrated in fig. 3.
As shown in fig. 4, each detection point is compared with its 8 neighbours at the same scale and the 9 × 2 points at the corresponding positions in the two adjacent scales, 26 points in total, to ensure that extreme points are detected in both scale space and the two-dimensional image space.
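This neighbourhood test can be written compactly; the sketch below assumes NumPy and the `dog` list from the previous sketch:

```python
import numpy as np

# A DoG sample is an extreme point only if it is the maximum or minimum of
# the 3 x 3 x 3 cube spanning its own scale and the two adjacent scales
# (8 + 9 + 9 = 26 neighbours, as in fig. 4).
def is_extremum(dog, layer, y, x):
    cube = np.stack([d[y - 1:y + 2, x - 1:x + 2].astype(np.float32)
                     for d in dog[layer - 1:layer + 2]])
    center = cube[1, 1, 1]
    return center == cube.max() or center == cube.min()
```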
During the extremum comparison, the first and last layers of each octave cannot take part; to preserve continuity of scale, three additional images are generated at the top of each octave by Gaussian blurring, so each octave of the Gaussian pyramid has S + 3 layers and each octave of the DoG pyramid has S + 2 layers.
Suppose S = 3, i.e., 3 layers per octave are searched, and k = 2^(1/S) = 2^(1/3). Initially the Gaussian space has 3 (= S) layers and the DoG space has 2 (= S − 1): the two DoG terms of the first octave are σ and kσ, and those of the second octave are 2σ and 2kσ. Because extrema cannot be compared on these alone, Gaussian-blur terms must keep being added until the Gaussian space contains σ, kσ, k²σ, k³σ, k⁴σ. The middle three terms of the DoG space, kσ, k²σ, k³σ, can then be searched for extrema (only they have neighbours on both sides). The three corresponding terms of the next octave (obtained by downsampling the previous layer) are 2kσ, 2k²σ, 2k³σ; the first of them, 2kσ = 2^(4/3)σ, follows exactly on the last term of the previous octave, k³σ = 2^(3/3)σ, so the scale change is continuous. For this reason 3 terms are added to the Gaussian space each time, each octave (tower) has S + 3 layers of images, and the corresponding DoG pyramid has S + 2 layers.
The Laplacian of Gaussian (LoG) finds the interest points in an image well but is computationally expensive, so the scale-normalized LoG operator is simply approximated by the DoG operator, and the feature points are found from the maxima and minima of the DoG images.
c) Accurately localizing the key points. At each candidate location, the position and scale are determined by fitting a fine model, and the key points are selected according to their degree of stability. The position and scale of each point are determined precisely by fitting a 3-D quadratic function, while low-contrast key points and unstable edge responses are removed, which strengthens matching stability and robustness to noise. Curve fitting is performed on the scale-space DoG function, which is expanded as:

D(X) = D + (∂D/∂X)ᵀ X + ½ Xᵀ (∂²D/∂X²) X   (4)
where X = (x, y, σ)ᵀ. Taking the derivative and setting it to zero, the offset of the extreme point is found to be:

X̂ = −(∂²D/∂X²)⁻¹ · (∂D/∂X)   (5)
among the features that have been detected, feature points of low contrast and unstable edge response points are removed. The points of low contrast are removed. Substituting equation (5) into equation (4) can take only the first two terms as:
Figure BDA0002398442190000073
Here X̂ represents the offset from the interpolation center. When the offset exceeds 0.5 in any dimension (x, y, or σ), the interpolation center has shifted to a neighbouring point, so the position of the current key point must be updated and the interpolation repeated at the new position until convergence. Points that exceed the set number of iterations or fall outside the image boundary are deleted at this stage. In addition, points with too small |D(X̂)| are easily disturbed by noise and become unstable, so extreme points with |D(X̂)| below an empirical threshold (0.03 in Lowe's paper) are deleted. In the same process the precise position of each feature point (its original position plus the fitted offset) and its scales (σ(o, s) and σ_oct(s)) are obtained.
The low-contrast test applies to the extreme points obtained from the DoG: if |D(X̂)| ≥ 0.03, the feature point is retained; otherwise it is discarded. Since the DoG operator produces a strong edge response, edge points remain among the extreme points. An extremum of a poorly defined difference-of-Gaussians has a large principal curvature across the edge and a small principal curvature perpendicular to it, so the principal curvatures can be examined through a 2 × 2 Hessian matrix H:

H = | Dxx  Dxy |
    | Dxy  Dyy |   (7)
The principal curvatures of D are proportional to the eigenvalues of H. Let α be the larger eigenvalue and β the smaller one. Rather than solving for the eigenvalues directly, we work with the trace Tr and the determinant Det:
Tr(H)=Dxx+Dyy=α+β (8)
Det(H)=DxxDyy-(Dxy)2=αβ (9)
Let α = γβ; then

Tr(H)²/Det(H) = (α + β)²/(αβ) = (γβ + β)²/(γβ²) = (γ + 1)²/γ   (10)
The value of (γ + 1)²/γ is smallest when the two eigenvalues are equal and increases as γ grows, so to check whether the principal curvature is below a threshold γ it suffices to test whether Tr(H)²/Det(H) < (γ + 1)²/γ. Candidates whose ratio is negative (Det(H) < 0) are discarded directly, as are those with Tr(H)²/Det(H) > (γ + 1)²/γ; in Lowe's paper γ is taken as 10.
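As an illustrative sketch (not the patent's own code), the edge-response test of formulas (8)-(10) reduces to a few arithmetic operations on the second derivatives:

```python
# Reject edge responses using the trace and determinant of the 2 x 2
# Hessian, with gamma = 10 as in Lowe's paper.
def passes_edge_test(dxx, dyy, dxy, gamma=10.0):
    tr = dxx + dyy                 # formula (8): alpha + beta
    det = dxx * dyy - dxy * dxy    # formula (9): alpha * beta
    if det <= 0:                   # curvatures of opposite sign: discard
        return False
    return tr * tr / det < (gamma + 1) ** 2 / gamma
```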
d) Assigning orientation information to the key points. Stable extreme points have been extracted across the different scale spaces; for any key point, the gradient magnitude is expressed as:

m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]   (11)
the gradient direction is as follows:
θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]   (12)
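Formulas (11) and (12) translate directly into per-pixel code; this sketch assumes NumPy and a Gaussian-blurred layer `L` stored as a 2-D array:

```python
import numpy as np

# Gradient magnitude (11) and orientation (12) at pixel (x, y), using
# central differences on the blurred layer L.
def gradient_mag_ori(L, y, x):
    dx = float(L[y, x + 1]) - float(L[y, x - 1])
    dy = float(L[y + 1, x]) - float(L[y - 1, x])
    m = np.sqrt(dx * dx + dy * dy)
    theta = np.arctan2(dy, dx)  # quadrant-aware form of the arctan in (12)
    return m, theta
```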
e) a descriptor is generated. Local gradients of the image are measured at a selected scale in a neighborhood around each keypoint. These gradients are transformed into a representation that allows for relatively large local shape deformations and illumination variations.
First, the image area required for calculating the descriptor is determined, and the gradient direction histogram of the descriptor is generated by calculating the blurred image of the scale where the key point is located.
The coordinate axes of the plane coordinate system are rotated to the main direction of the key point to ensure rotation invariance. A 16 × 16 window centered on the key point is divided into 16 small blocks of 4 × 4; in each block the gradient direction of every sampling point is calculated and the accumulated values over 8 directions are counted, so each key point is characterized by a descriptor of 4 × 4 × 8 = 128 dimensions. The SIFT features of the gray-scale and gradient images are then fused: if no gray-scale feature point lies within the scale range of a gradient feature point, the gradient feature point is added; otherwise only the weight of the gray-scale point is increased and the gradient point is not recorded.
In one embodiment, the respectively inputting the gray-scale map of the positive sample and the gray-scale map of the negative sample into the input layer of the deep learning model for training, and obtaining the identification model includes:
respectively inputting the gray-scale image of the positive sample and the gray-scale image of the negative sample into an input layer of a deep learning model, and training the input layer by adopting a training matrix of the deep learning model to obtain the characteristics of the input layer and parameters corresponding to the characteristics of the input layer;
taking the input layer as a target layer, and acquiring an adjacent hidden layer of the target layer;
training an adjacent hidden layer of the target layer according to the features of the target layer to obtain the features of the adjacent hidden layer and parameters corresponding to the features of the adjacent hidden layer;
and taking the adjacent hidden layer as the target layer, and iteratively executing the step of obtaining the adjacent hidden layer of the target layer, and determining an identification model according to the current parameters of the deep learning model when the loss function value of the deep learning model is smaller than a set threshold value or the iteration times is larger than a time threshold value.
In one example, the deep learning model may be built on a CNN. The general architecture, shown in fig. 2, comprises 8 weighted layers: the first five are convolutional and the remaining three are fully connected. The output of the last fully connected layer is fed to a 2-way softmax, which produces a distribution over the 2 class labels. The multinomial logistic regression objective is maximized, which is equivalent to maximizing the average over the training cases of the log-probability of the correct label under the predicted distribution.
The kernels of the fourth and fifth convolutional layers are connected only to the kernel maps of the preceding layer that reside on the same GPU (see fig. 2); the kernels of the third convolutional layer are connected to all kernel maps of the second layer. The neurons of the fully connected layers are connected to all neurons of the previous layer. Response-normalization layers follow the first and second convolutional layers; max-pooling layers follow the response-normalization layers and the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully connected layer.
The first convolutional layer filters the 224 × 224 × 3 input image with 96 kernels of size 11 × 11 × 3 at a stride of 4 pixels (the distance between the receptive-field centers of neighbouring neurons in a kernel map). The second convolutional layer takes the (response-normalized and pooled) output of the first convolutional layer as input and filters it with 256 kernels of size 5 × 5 × 48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers: the third convolutional layer has 384 kernels of size 3 × 3 × 256 connected to the (normalized, pooled) output of the second convolutional layer, the fourth has 384 kernels of size 3 × 3 × 192, and the fifth has 256 kernels of size 3 × 3 × 192. The fully connected layers have 4096 neurons each.
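A hedged single-GPU PyTorch sketch of this architecture (the two GPU halves are merged into full-width layers, and the local-response-normalization parameters are assumptions, not values from the patent):

```python
import torch.nn as nn

# Five convolutional layers, three fully connected layers, 2-way output;
# sized for the 227 x 227 x 3 input used in the layer computations below.
net = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.LocalResponseNorm(5), nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.LocalResponseNorm(5), nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 2),  # fed to a 2-way softmax during training
)
```

With a 227 × 227 input this reproduces the 55 → 27 → 13 → 6 feature-map progression computed layer by layer later in the text.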
Further, the process of inputting the grayscale images of the positive samples and the grayscale images of the negative samples into the input layer of the deep learning model for training may further include:
inputting an original gray image into an input layer of a deep learning model, and training the input layer according to the training matrix to obtain the characteristics of the input layer and parameters corresponding to the characteristics of the input layer;
taking the input layer as a target layer, and acquiring an adjacent hidden layer of the target layer;
training an adjacent hidden layer of the target layer according to the features of the target layer to obtain the features of the adjacent hidden layer and parameters corresponding to the features of the adjacent hidden layer;
taking the adjacent hidden layer as the target layer, and iteratively executing the step of obtaining the adjacent hidden layer of the target layer;
the step of training the input layer of the deep learning model according to the training matrix comprises the following steps:
according to the formula

Cost = min_{a,φ} Σ_{j=1}^{m} ‖ x^(j) − Σ_i a_i^(j) φ_i ‖²

the cost function is minimized to solve for the features and parameters of the deep learning model's input layer, where x^(j) denotes the j-th column vector of the training matrix, φ_i denotes a feature, a_i^(j) denotes the parameter corresponding to each φ_i, Cost denotes the cost function, m denotes the number of column vectors of the training matrix, and i indexes the features.
Whether a termination condition is met is then judged: if not, the deep learning model continues to be trained iteratively on the training data; if so, the identification model is determined.
The termination condition may include: the loss function value of the deep learning model is smaller than a set threshold, or the number of iterations exceeds an iteration threshold.
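A minimal sketch of this termination rule, with a hypothetical `train_step` callable that runs one training iteration and returns the current loss, and assumed threshold values:

```python
# Stop when the loss falls below a set threshold or the iteration count
# exceeds the iteration threshold; the parameters at that point define
# the identification model.
def train_until_converged(train_step, loss_threshold=1e-3, max_iters=10_000):
    for _ in range(max_iters):
        if train_step() < loss_threshold:
            break
```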
In one embodiment, converting the RGB image into an HSV color space, the obtaining the HSV image includes:
and converting the RGB image into an HSV color space, and performing multi-resolution decomposition on the image converted into the HSV color space to obtain HSV images with multiple resolutions.
As an embodiment, performing multi-resolution decomposition on the image after being converted into the HSV color space includes:
the method comprises the steps of taking a gray image as an initial image, using a Gaussian pyramid to perform downsampling on the initial image to obtain a first downsampled image, taking the first downsampled image as the initial image, using the Gaussian pyramid to perform downsampling on the initial image to obtain a second downsampled image, repeating the process of obtaining the first downsampled image and the process of obtaining the second downsampled image twice, obtaining three groups of downsampled images in total, and obtaining HSV images with multiple resolutions.
In this embodiment, the gray map may be used as the initial image; a Gaussian pyramid is used to downsample it (Gaussian-kernel convolution of the gray map followed by removal of the even rows and columns) to obtain a first downsampled image; the first downsampled image is then taken as the initial image and downsampled in the same way to obtain a second downsampled image; repeating these steps twice yields three downsampled images in total, i.e., HSV images at multiple resolutions. Processing HSV images at several resolutions in the subsequent steps helps ensure the accuracy of the resulting identification model.
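OpenCV's `pyrDown` performs exactly this blur-then-drop-even-rows-and-columns step, so the decomposition can be sketched as follows (the file path is hypothetical):

```python
import cv2

# Three rounds of Gaussian-pyramid downsampling starting from the gray map,
# yielding the initial image plus three successively halved resolutions.
gray = cv2.imread("micrograph.png", cv2.IMREAD_GRAYSCALE)
levels = [gray]
for _ in range(3):
    levels.append(cv2.pyrDown(levels[-1]))
```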
In an embodiment, in the application of the method for identifying modern ceramics and ancient ceramics, when extracting the SIFT feature descriptor, the method may further include:
1. SIFT feature point detection
a) Detecting extreme points in the scale space
Two-dimensional images at different scales are obtained by convolving the image with Gaussian kernels:
L(x,y,σ)=G(x,y,σ)*I(x,y), (15)
when detecting an extremum in the DoG space, the key point needs to be compared with 8 pixels in the surrounding neighborhood of the same scale and 26 pixels in total in the surrounding neighborhood of 9 × 2 pixels at the corresponding position of the adjacent scale, so as to ensure that a local extremum is detected in the scale space and the two-dimensional image space at the same time.
b) Accurately locating the extreme points
The Taylor expansion of the scale-space function D(x, y, σ) at a local extreme point (x₀, y₀, σ₀) is:

D(X) = D(X₀) + (∂D/∂X)ᵀ (X − X₀) + ½ (X − X₀)ᵀ (∂²D/∂X²) (X − X₀)   (16)
the above formula is derived and made 0 to obtain the precise position XmaxAs shown in formula (9):
Figure BDA0002398442190000112
among the feature points that have been detected, feature points of low contrast and unstable edge response points are removed. Removing low-contrast points: substituting equation (17) into equation (16) only the first two terms are taken:
Figure BDA0002398442190000113
if | D (X)max) If | ≧ 0.03, the characteristic point is retained, otherwise, it is discarded.
2. Generating SIFT feature vector with length of 128
a) The coordinate axes are first rotated to the direction of the key point to ensure rotational invariance.
b) Then a 16 × 16 window centered on the key point is taken and divided into 4 × 4 sub-regions; a gradient histogram over 8 directions is computed on each sub-region, and the accumulated value of each gradient direction is drawn to form a seed point. A total of 16 seed points is thus generated, so a vector of length 128 is produced for each key point.
3. Inputting SIFT feature vectors of key points into a support vector machine for training
In this embodiment, the corresponding deep convolution model can be divided into 8 layers: 5 convolutional layers + 3 fully connected layers. The input image size is 227 × 227 × 3. Convolutional layer 1 is trained on two GPUs simultaneously, each GPU running 48 convolution kernels of size 11 × 11 with stride 4, using the ReLU activation function, followed by max pooling (kernel size 3, stride 2). The activation output size is (227 − 11)/4 + 1 = 55, i.e., 55 × 55 × 96; the pooled output size is (55 − 3)/2 + 1 = 27, i.e., 27 × 27 × 96.
Convolutional layer 2 is trained on two GPUs simultaneously, each GPU running 128 convolution kernels of size 5 × 5 with stride 1, using the ReLU activation function, followed by max pooling (kernel size 3, stride 2). The input feature map is first padded by 2 pixels, i.e., to 31 × 31; the activation output size is (31 − 5)/1 + 1 = 27, i.e., 27 × 27 × 256; the pooled output size is (27 − 3)/2 + 1 = 13, i.e., 13 × 13 × 256.
Convolutional layer 3 is trained on two GPUs simultaneously, each GPU running 192 convolution kernels of size 3 × 3 with stride 1, using the ReLU activation function. The input feature map is first padded by 1 pixel, i.e., to 15 × 15; the activation output size is (15 − 3)/1 + 1 = 13, i.e., 13 × 13 × 384.
Convolutional layer 4 is trained on two GPUs simultaneously, each GPU running 192 convolution kernels of size 3 × 3 with stride 1, using the ReLU activation function. The input feature map is first padded by 1 pixel, i.e., to 15 × 15; the activation output size is (15 − 3)/1 + 1 = 13, i.e., 13 × 13 × 384.
Convolutional layer 5 is trained on two GPUs simultaneously, each GPU running 128 convolution kernels of size 3 × 3 with stride 1, using the ReLU activation function, followed by max pooling (kernel size 3, stride 2). The input feature map is first padded by 1 pixel, i.e., to 15 × 15; the activation output size is (15 − 3)/1 + 1 = 13, i.e., 13 × 13 × 256; the pooled output size is (13 − 3)/2 + 1 = 6, i.e., 6 × 6 × 256.
The fully connected layer 6 is trained by two GPUs simultaneously, has 4096 neurons in total, and outputs a 4096 × 1 vector.
The fully connected layer 7 is trained by two GPUs simultaneously, has 4096 neurons in total, and outputs a 4096 × 1 vector.
The fully connected layer 8 is trained by two GPUs simultaneously, has 2 neurons in total, and outputs a 2 × 1 vector.
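The layer sizes quoted above all follow the standard output-size formula (W − K + 2P)/S + 1; as a small illustrative check:

```python
# Verify the feature-map sizes quoted above.
def out_size(w, k, stride, pad=0):
    return (w - k + 2 * pad) // stride + 1

assert out_size(227, 11, 4) == 55  # convolutional layer 1
assert out_size(55, 3, 2) == 27    # pooling after layer 1
assert out_size(31, 5, 1) == 27    # layer 2 on its padded 31 x 31 input
assert out_size(13, 3, 2) == 6     # pooling after layer 5
```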
A softmax regression layer is added after the pre-trained deep learning network model, and the whole network is fine-tuned in the reverse (back-propagation) direction. The abstract features of the ceramic image output by the last hidden layer of the network are fed into the softmax regression layer, whose output is a genuine-or-fake label for each ceramic image; comparing the predicted labels with the actual classes yields the classification accuracy of the images.
Further, the identification model can run on a computer or a mobile phone. When a user predicts with a mobile phone, the model established in the preceding three steps serves as the back end, and a front end based on iOS or Android is developed and integrated into a prediction system: the microscope image of the ceramic is input on the phone, the software is run, and the model's prediction is obtained immediately. When a user predicts with a computer, the same back end is combined with a front end based on a Windows or Linux system: the microscope image of the ceramic is input on the computer, and the result is obtained after running the software.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
It should be noted that the terms "first/second/third" in the embodiments of the present application merely distinguish similar objects and do not imply any specific ordering; where permitted, "first/second/third" may be interchanged in order or sequence, so that the embodiments described herein can be implemented in an order other than that illustrated or described.
The terms "comprising" and "having" and any variations thereof in the embodiments of the present application are intended to cover non-exclusive inclusions. For example, a process, method, apparatus, product, or device that comprises a list of steps or modules is not limited to the listed steps or modules but may alternatively include other steps or modules not listed or inherent to such process, method, product, or device.
The above embodiments express only several implementations of the present application, and although their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within the scope of protection of the application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (6)

1. A method for identifying modern ceramics and ancient ceramics, characterized by comprising the following steps:
s10, obtaining a plurality of micrographs of ancient ceramics to construct a positive sample corresponding to the ancient ceramics, obtaining a plurality of micrographs of modern ceramics to construct a negative sample corresponding to the modern ceramics, respectively obtaining RGB images of the positive sample and the negative sample, and converting the RGB images into the HSV color space to obtain HSV images;
s20, acquiring a feature descriptor of the HSV image, inputting the feature descriptor into a support vector machine for training, and acquiring training parameters of the support vector machine;
s30, inputting the RGB image into a deep convolutional neural network architecture for training to obtain network parameters of a convolutional neural network;
s40, determining a deep learning model according to the training parameters of the support vector machine and the network parameters of the convolutional neural network, and inputting the gray-scale image of the positive sample and the gray-scale image of the negative sample into an input layer of the deep learning model respectively for training to obtain an identification model;
s50, obtaining the picture to be identified of the porcelain to be identified, inputting the picture to be identified into the identification model, and determining the porcelain to be identified as modern ceramics or ancient ceramics according to the output result of the identification model.
2. The method of claim 1, wherein the obtaining of the feature descriptor of the HSV image comprises:
constructing a scale space, carrying out scale transformation on a gray level image and a gradient image corresponding to the HSV image, establishing a Gaussian difference pyramid, obtaining a scale space representation sequence of the HSV image under multiple scales, extracting a main contour of the scale space from the scale space representation sequence, and taking the extracted main contour as a feature vector;
finding key points according to the feature vectors by adopting a Gaussian difference algorithm, and positioning the key points to obtain the positions of the key points;
and determining the information distribution of the key points according to the positions of the key points, and generating descriptors according to the information distribution of the key points.
3. The method of claim 2, wherein the generating descriptors according to the information distribution of the key points comprises:
and determining a fuzzy image of the gradient direction histogram of the descriptor in the scale of the key point according to the information distribution of the key point, and calculating the descriptor according to the fuzzy image.
4. The method for identifying modern ceramics and ancient ceramics according to claim 1, wherein the training of inputting the gray-scale map of the positive sample and the gray-scale map of the negative sample into the input layer of the deep learning model respectively to obtain the identification model comprises:
respectively inputting the gray-scale image of the positive sample and the gray-scale image of the negative sample into an input layer of a deep learning model, and training the input layer by adopting a training matrix of the deep learning model to obtain the characteristics of the input layer and parameters corresponding to the characteristics of the input layer;
taking the input layer as a target layer, and acquiring an adjacent hidden layer of the target layer;
training an adjacent hidden layer of the target layer according to the features of the target layer to obtain the features of the adjacent hidden layer and parameters corresponding to the features of the adjacent hidden layer;
and taking the adjacent hidden layer as the target layer, and iteratively executing the step of obtaining the adjacent hidden layer of the target layer; when the loss function value of the deep learning model is smaller than a set threshold or the number of iterations exceeds an iteration threshold, determining the identification model according to the current parameters of the deep learning model.
5. The method of claim 1, wherein converting the RGB image into HSV color space to obtain the HSV image comprises:
and converting the RGB image into an HSV color space, and performing multi-resolution decomposition on the image converted into the HSV color space to obtain HSV images with multiple resolutions.
6. The method for identifying modern ceramics and ancient ceramics according to claim 5, wherein performing multi-resolution decomposition on the image after being converted into HSV color space comprises:
the method comprises the steps of taking a gray image as an initial image, using a Gaussian pyramid to perform downsampling on the initial image to obtain a first downsampled image, taking the first downsampled image as the initial image, using the Gaussian pyramid to perform downsampling on the initial image to obtain a second downsampled image, repeating the process of obtaining the first downsampled image and the process of obtaining the second downsampled image twice, obtaining three groups of downsampled images in total, and obtaining HSV images with multiple resolutions.
CN202010139176.0A 2020-03-03 2020-03-03 Method for identifying modern ceramics and ancient ceramics Active CN111339974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010139176.0A CN111339974B (en) 2020-03-03 2020-03-03 Method for identifying modern ceramics and ancient ceramics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010139176.0A CN111339974B (en) 2020-03-03 2020-03-03 Method for identifying modern ceramics and ancient ceramics

Publications (2)

Publication Number Publication Date
CN111339974A true CN111339974A (en) 2020-06-26
CN111339974B CN111339974B (en) 2023-04-07

Family

ID=71185746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010139176.0A Active CN111339974B (en) 2020-03-03 2020-03-03 Method for identifying modern ceramics and ancient ceramics

Country Status (1)

Country Link
CN (1) CN111339974B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663401A (en) * 2012-04-18 2012-09-12 哈尔滨工程大学 Image characteristic extracting and describing method
CN103336942A (en) * 2013-04-28 2013-10-02 中山大学 Traditional Chinese painting identification method based on Radon BEMD (bidimensional empirical mode decomposition) transformation
CN104715254A (en) * 2015-03-17 2015-06-17 东南大学 Ordinary object recognizing method based on 2D and 3D SIFT feature fusion
CN105243667A (en) * 2015-10-13 2016-01-13 中国科学院自动化研究所 Target re-identification method based on local feature fusion
CN107832718A (en) * 2017-11-13 2018-03-23 重庆工商大学 Finger vena anti false authentication method and system based on self-encoding encoder
CN109583376A (en) * 2018-11-30 2019-04-05 陕西科技大学 The disconnected source periodization method of ancient pottery and porcelain based on multicharacteristic information fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIA CHAOGUI: "Research on Identification Techniques for Official Document Seals", China Masters' Theses Full-text Database, Information Science and Technology *
SUN QIANG ET AL.: "Method for Authenticating Cultural Relics Based on Radon Directional Projection Features", Natural Science Journal of Jilin University of Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113724339A (en) * 2021-05-10 2021-11-30 华南理工大学 Color separation method for few-sample ceramic tile based on color space characteristics
CN113724339B (en) * 2021-05-10 2023-08-18 华南理工大学 Color space feature-based color separation method for tiles with few samples
CN113724238A (en) * 2021-09-08 2021-11-30 佛山科学技术学院 Ceramic tile color difference detection and classification method based on feature point neighborhood color analysis

Also Published As

Publication number Publication date
CN111339974B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
Cerutti et al. A parametric active polygon for leaf segmentation and shape estimation
CN110866924B (en) Line structured light center line extraction method and storage medium
Xie et al. TEXEMS: Texture exemplars for defect detection on random textured surfaces
Li et al. Expression-robust 3D face recognition via weighted sparse representation of multi-scale and multi-component local normal patterns
CN106682598A (en) Multi-pose facial feature point detection method based on cascade regression
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
CN105335725B (en) A kind of Gait Recognition identity identifying method based on Fusion Features
CN110021024B (en) Image segmentation method based on LBP and chain code technology
CN109902565B (en) Multi-feature fusion human behavior recognition method
Elbakary et al. Shadow detection of man-made buildings in high-resolution panchromatic satellite images
CN110458192B (en) Hyperspectral remote sensing image classification method and system based on visual saliency
CN108509925B (en) Pedestrian re-identification method based on visual bag-of-words model
Khan et al. A review of human pose estimation from single image
CN105701495B (en) Image texture feature extraction method
CN103927511A (en) Image identification method based on difference feature description
CN109902585A (en) A kind of three modality fusion recognition methods of finger based on graph model
CN111127417B (en) Printing defect detection method based on SIFT feature matching and SSD algorithm improvement
CN111339974B (en) Method for identifying modern ceramics and ancient ceramics
CN108550165A (en) A kind of image matching method based on local invariant feature
CN111753119A (en) Image searching method and device, electronic equipment and storage medium
CN111815640B (en) Memristor-based RBF neural network medical image segmentation algorithm
Chetverikov et al. Texture Anisotropy, Symmetry, Regularity: Recovering Structure and Orientation from Interaction Maps.
CN110516638B (en) Sign language recognition method based on track and random forest
CN105404883B (en) A kind of heterogeneous three-dimensional face identification method
CN111666813A (en) Subcutaneous sweat gland extraction method based on three-dimensional convolutional neural network of non-local information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant