CN111539470A - Image processing method, image processing device, computer equipment and storage medium


Info

Publication number: CN111539470A
Application number: CN202010313007.4A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 韦鹏程, 黄思行, 颜蓓
Applicant/Assignee: Chongqing University of Education
Legal status: Pending

Classifications

    • G06F18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06N3/045 Neural network architectures; combinations of networks
    • G06N3/084 Neural network learning methods; backpropagation, e.g. using gradient descent
    • G06V10/462 Extraction of image features; salient features, e.g. scale invariant feature transforms [SIFT]


Abstract

The present disclosure relates to an image processing method. The method comprises the following steps: acquiring an image to be processed; the to-be-processed image comprises a food area; searching an initial target detection algorithm, and adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm; extracting a food area in the image to be processed according to a target detection algorithm, and extracting visual features in the food area through a deep neural network algorithm; and classifying the images to be processed according to the visual features. The accuracy of extracting the food area in the image to be processed can be improved by adjusting the parameters in the initial target detection algorithm, and the deep neural network algorithm has good visual feature recognition capability, has a better effect in food image processing, and can improve the accuracy of image processing.

Description

Image processing method, image processing device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a computer device, and a storage medium.
Background
The development of computer technology has brought about wide dissemination and application of image information, and image retrieval and classification technology is also widely applied. Retrieval or classification of images is often performed by text, that is, image processing based on text recognition in which images are described by text information such as titles, time, and environment. Since image retrieval and classification by text can no longer meet people's daily needs, image processing methods based on image content have been studied. The processing of food images is widely applied in fields such as food and health; food images can further help people evaluate the calories of food, analyze eating habits, and provide personalized services.
Because a captured food picture contains not only the visual information of the food itself but also various kinds of background information, traditional image processing methods suffer from low accuracy when performing image classification or retrieval.
Disclosure of Invention
In order to solve the above technical problem, an image processing method, an image processing apparatus, a computer device, and a storage medium are provided, which can improve the accuracy of image processing.
A method of image processing, the method comprising:
acquiring an image to be processed; the image to be processed comprises a food area;
searching an initial target detection algorithm, and adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm;
extracting a food area in the image to be processed according to the target detection algorithm, and extracting visual features in the food area through a deep neural network algorithm;
and classifying the image to be processed according to the visual features.
In one embodiment, the adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm includes:
acquiring an image database, and searching a food image from the image database;
generating a food image dataset from the food image;
adjusting parameters in the initial target detection algorithm using the food image dataset to obtain the target detection algorithm.
In one embodiment, the extracting the food area in the image to be processed according to the target detection algorithm includes:
respectively acquiring candidate frames of the image to be processed according to the target detection algorithm;
calculating the candidate frame proportion of each candidate frame;
and extracting the food area in the image to be processed according to the candidate frame proportion.
In one embodiment, the extracting visual features in the food area through a deep neural network algorithm includes:
acquiring a specific gravity threshold, and taking a food region corresponding to the specific gravity of the candidate frame higher than the specific gravity threshold as a candidate region;
obtaining candidate frame coordinates of the candidate region according to the candidate frame proportion of the candidate region;
extracting the sub-features of the candidate region according to the candidate frame coordinates;
and obtaining the visual characteristics in the food area according to the sub-characteristics.
In one embodiment, the classifying the image to be processed according to the visual features includes:
extracting color features in the visual features and extracting shape features in the visual features;
and classifying the images to be processed according to the color features and the shape features to obtain the categories of the images to be processed.
In one embodiment, the classifying the image to be processed according to the color feature and the shape feature to obtain a category of the image to be processed includes:
acquiring an edge feature map of the image to be processed according to the color feature and the shape feature;
calculating the number of edge points according to the edge feature graph, and obtaining an image histogram according to the number of the edge points;
and inputting the image histogram into an image classifier to obtain the category of the image to be processed.
In one embodiment, the method further comprises:
acquiring an image to be retrieved, and extracting image characteristics of the image to be retrieved;
carrying out image retrieval in an image library according to the image characteristics to obtain an initial retrieval image;
calculating the similarity between the initial retrieval image and the image to be retrieved;
and obtaining a target retrieval image according to the similarity.
An image processing apparatus, the apparatus comprising:
the image acquisition module is used for acquiring an image to be processed; the image to be processed comprises a food area;
the parameter adjusting module is used for searching an initial target detection algorithm and adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm;
the characteristic extraction module is used for extracting a food area in the image to be processed according to the target detection algorithm and extracting visual characteristics in the food area through a deep neural network algorithm;
and the image classification module is used for classifying the image to be processed according to the visual features.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring an image to be processed; the image to be processed comprises a food area;
searching an initial target detection algorithm, and adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm;
extracting a food area in the image to be processed according to the target detection algorithm, and extracting visual features in the food area through a deep neural network algorithm;
and classifying the image to be processed according to the visual features.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring an image to be processed; the image to be processed comprises a food area;
searching an initial target detection algorithm, and adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm;
extracting a food area in the image to be processed according to the target detection algorithm, and extracting visual features in the food area through a deep neural network algorithm;
and classifying the image to be processed according to the visual features.
The image processing method, the image processing device, the computer equipment and the storage medium acquire the image to be processed; the to-be-processed image comprises a food area; searching an initial target detection algorithm, and adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm; extracting a food area in the image to be processed according to a target detection algorithm, and extracting visual features in the food area through a deep neural network algorithm; and classifying the images to be processed according to the visual features. The accuracy of extracting the food area in the image to be processed can be improved by adjusting the parameters in the initial target detection algorithm, and the deep neural network algorithm has good visual feature recognition capability, has a better effect in food image processing, and can improve the accuracy of image processing.
Drawings
FIG. 1 is a diagram of an application environment of an image processing method in one embodiment;
FIG. 2 is a flow diagram illustrating a method for image processing according to one embodiment;
FIG. 3 is a schematic flow diagram of extracting visual features in a food area in one embodiment;
FIG. 4 is a diagram showing the analysis of the results of the accuracy comparison of different methods in the experiment;
FIG. 5 is a schematic diagram of the classification results of 15 randomly selected dishes under different methods in the experiment;
FIG. 6 is a graph showing the retrieval performance of different methods in the experiment using Precision@K;
FIG. 7 is a graph showing the retrieval performance of different methods in the experiment using MAP@K;
FIG. 8 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image processing method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. As shown in FIG. 1, the application environment includes a computer device 110. The computer device 110 may obtain the image to be processed; the to-be-processed image comprises a food area; the computer device 110 may search for an initial target detection algorithm and adjust parameters in the initial target detection algorithm to obtain a target detection algorithm; the computer device 110 may extract the food area in the image to be processed according to a target detection algorithm and extract visual features in the food area through a deep neural network algorithm; the computer device 110 may classify the image to be processed according to visual characteristics. The computer device 110 may be, but is not limited to, various personal computers, laptops, smartphones, robots, unmanned aerial vehicles, tablets, portable wearable devices, and the like.
In one embodiment, as shown in fig. 2, there is provided an image processing method including the steps of:
step 202, acquiring an image to be processed; the image to be processed contains a food area.
The image to be processed can be an image which needs to be subjected to image classification and image retrieval. The computer device may present a display interface through which a user may input images to be processed. Wherein, the food area can be contained in the image to be processed.
And step 204, searching for an initial target detection algorithm, and adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm.
The initial target detection algorithm may be used for target detection; in particular, the initial target detection algorithm may be a Convolutional Neural Network (CNN) algorithm.
A neural network consists of a set of neural nodes, which are divided into a number of layers; adjacent layers are fully connected, that is, each node is associated with all nodes in the previous layer. The input of each node is the output of all nodes in the previous layer, and the current node takes the weighted sum of its input data as its output. The weight parameters are obtained through training and represent the importance of the corresponding input nodes of the previous layer for the task. A larger weight value increases the output value of the corresponding node and exerts a greater influence on the deeper layers of the network.
Convolutional neural networks have three core ideas, namely local receptive fields, weight sharing and pooling. With a local receptive field, each neural node is related only to a local image area rather than to the global image. A neural node may learn some basic features of its local receptive field, such as corners and the like. At the same time, since each node only needs to be associated with a small local area in the image, the number of weight parameters that need to be trained can be reduced. Weight sharing means that in a convolutional neural network the weights of the same convolution kernel are identical across different local regions of the input image. That is, if a feature detector is able to detect a feature in one region of the image, the detector is also effective at other locations in the image, so that changes such as image distortion or offset can be overcome. Pooling refers to reducing the dimensions of an image so as to reduce its size; the exact location of the learned image features is not critical, and only the relative locations need to be recorded. In addition, since different images may exhibit offset, deformation, and similar problems, precise positioning of the features would adversely affect the subsequent learning.
The convolutional neural network is a feed-forward neural network and mainly comprises convolution layers, pooling layers, fully-connected layers and a loss function layer. Convolution layers perform convolution operations on the input data, and each convolution layer contains a plurality of convolution kernels. A convolution kernel is essentially a filter matrix with fixed weights. The convolution operation slides the kernel over the input data as a window and computes an inner product at each position. The convolution layer reflects the ideas of local receptive fields and weight sharing in the network; each convolution kernel, which can also be called a filter, is used to learn one class of characteristics of the input image, and different convolution kernels can obtain different characteristics, such as color depth, outline and the like. Generally, a shallower convolution layer can only extract some low-level visual features, such as shapes, edges and textures, while a deeper network can extract, through the preceding low-level features, some high-level visual semantic features relevant to the target task.
The pooling layer down-samples the input data. The non-linear pooling function has various forms, such as max pooling (MaxPooling), mean pooling (MeanPooling) and Gaussian pooling (GaussianPooling), among which max pooling is the most common; max pooling divides the input image into several rectangular regions and outputs the maximum value of each sub-region. The pooling layer continuously reduces the spatial size of the data, and hence the number of parameters and the amount of calculation are also reduced, which also inhibits overfitting to some extent.
The fully-connected layer and the loss function layer correspond to a classical neural network. The fully-connected layer can be regarded as a hidden layer of the neural network, and the loss function layer, usually the last layer of the network, is the training target of the whole network and is used to calculate the relationship between the network's predicted result and the actual result.
Convolutional neural network parameter training is the same as neural networks, and still adopts a back propagation algorithm. The algorithm firstly sends training data to a network to obtain network output, then calculates the error between the network output and target output through a loss function, and inversely calculates the residual error of each layer of the network according to the obtained error so as to update parameters.
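For example, a minimal sketch of such a network in PyTorch — convolution, pooling, fully-connected and loss function layers, trained by back-propagation — may look as follows; the layer sizes, class count and input resolution are illustrative assumptions rather than values fixed by this application.

```python
# Minimal sketch of the convolution / pooling / fully-connected / loss structure
# described above; layer sizes and the number of classes are illustrative assumptions.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution: local receptive field, shared weights
            nn.ReLU(),
            nn.MaxPool2d(2),                             # max pooling: down-sample, keep relative positions
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # fully-connected layer (224x224 input assumed)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
criterion = nn.CrossEntropyLoss()                        # loss function layer
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# One back-propagation step on a dummy mini-batch
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 10, (4,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()                                          # residuals propagated back layer by layer
optimizer.step()                                         # parameters updated from the error
```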
The computer device may adjust parameters in the initial target detection algorithm to obtain the target detection algorithm. The target detection algorithm can be an R-CNN network, and the R-CNN network can realize a target detection technology based on algorithms such as a Convolutional Neural Network (CNN), a linear regression and a Support Vector Machine (SVM).
And step 206, extracting the food area in the image to be processed according to a target detection algorithm, and extracting visual features in the food area through a deep neural network algorithm.
Specifically, the computer device may detect the food area in the image to be processed using a target detection algorithm, thereby extracting the food area in the image to be processed. The deep neural network algorithm may be a CNN deep neural network through which the computer device may extract visual features in the food area.
And step 208, classifying the image to be processed according to the visual characteristics.
In the embodiment, the computer equipment acquires the image to be processed; the to-be-processed image comprises a food area; searching an initial target detection algorithm, and adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm; extracting a food area in the image to be processed according to a target detection algorithm, and extracting visual features in the food area through a deep neural network algorithm; and classifying the images to be processed according to the visual features. The accuracy of extracting the food area in the image to be processed can be improved by adjusting the parameters in the initial target detection algorithm, and the deep neural network algorithm has good visual feature recognition capability, has a better effect in food image processing, and can improve the accuracy of image processing.
In an embodiment, the provided image processing method may further include a process of obtaining a target detection algorithm, where the specific process includes: acquiring an image database, and searching a food image from the image database; generating a food image dataset from the food image; and adjusting parameters in the initial target detection algorithm by using the food image data set to obtain a target detection algorithm.
The image database can store a plurality of images of different types, and the computer device can identify the images in the image database and mark the images containing food. After the computer device acquires the image database, the marked image, namely the food image, can be searched from the image database. The computer device may derive a set of food images from the food images to generate a food image dataset, and then use the food image dataset to adjust parameters in the initial target detection algorithm to derive the target detection algorithm.
Since most food images contain irrelevant background information, performing feature extraction only on the food area reduces the influence of this background information; the food area in the image therefore needs to be detected first, and it can be detected through the R-CNN. Specifically, the computer device may first select images of food-related categories from the Visual Genome library and then fine-tune the R-CNN using the selected food image data set. The loss formula of the target detection algorithm is as follows:
L({p_i}, {t_i}) = (1/N_cls) · Σ_i L_cls(p_i, p_i*) + λ · (1/N_reg) · Σ_i p_i* · L_reg(t_i, t_i*)
where i is the index of an anchor in a mini-batch and p_i denotes the predicted probability that anchor i is a target; p_i* is 1 if the anchor is positive and 0 otherwise; t_i represents the four parameterized coordinates of the predicted bounding box, and t_i* is the coordinate vector of the ground-truth box corresponding to a positive anchor. The classification loss L_cls is the logarithmic loss over the two classes (target and non-target).
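For illustration, fine-tuning a pre-trained Faster R-CNN on a food image data set may be sketched as follows; the torchvision implementation, the two-class (background/food) setting and the `food_loader` of annotated images are assumptions made only for this example.

```python
# Hedged sketch of fine-tuning a pre-trained Faster R-CNN on a food image data set.
# `food_loader` (images plus box/label targets) is assumed to exist; torchvision is
# used purely as an illustrative implementation of the detector.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2  # assumption: background + "food"
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)  # replace the detection head

optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=0.0005)
model.train()
for images, targets in food_loader:       # targets: [{"boxes": ..., "labels": ...}, ...]
    loss_dict = model(images, targets)    # includes the RPN classification and regression losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```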
In one embodiment, the provided image processing method may further include a process of extracting a food area, where the specific process includes: respectively acquiring candidate frames of the image to be processed according to a target detection algorithm; calculating the proportion of the candidate frames of each candidate frame; and extracting the food area in the image to be processed according to the candidate frame weight.
After the initial target detection algorithm has been fine-tuned, the computer device may use the fine-tuned target detection algorithm to compute the candidate frame proportion of each candidate frame of the image to be processed, and thereby extract the food area in the image to be processed.
As shown in fig. 3, in an embodiment, the provided image processing method may further include a process of obtaining a visual feature, and the specific steps include:
step 302, obtaining a specific gravity threshold, and taking the food area corresponding to the specific gravity of the candidate frame higher than the specific gravity threshold as a candidate area.
The specific gravity threshold may be a preset value for determining the level of the specific gravity of the candidate frame. The computer device may obtain a specific gravity threshold and compare the candidate frame specific gravity with the specific gravity threshold to obtain a comparison result. When the comparison result obtained by the computer device is that the specific gravity of the candidate frame is higher than the specific gravity threshold, the computer device may regard the food area corresponding to the specific gravity of the candidate frame as the candidate area.
And step 304, obtaining candidate frame coordinates of the candidate region according to the candidate frame proportion of the candidate region.
And step 306, extracting the sub-features of the candidate region according to the candidate frame coordinates.
Since the candidate frame weight of the candidate region is above the weight threshold, the computer device may extract features of the FC7 layer using an AlexNet network based on the candidate frame coordinates and label the extracted features of the FC7 layer as sub-features.
Based on the sub-features, a visual characteristic of the food area is obtained, step 308.
The computer device can concatenate the sub-features to obtain a visual feature in the food area.
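Steps 302 to 308 may be sketched as follows; the torchvision AlexNet, the score threshold of 0.7 and the 224 × 224 region crops are illustrative assumptions and are not fixed by this application.

```python
# Keep detector boxes whose weight exceeds a threshold, crop the corresponding
# regions, pass them through AlexNet up to the FC7 layer, and concatenate the
# resulting sub-features into one visual feature.
import torch
import torchvision
import torchvision.transforms.functional as TF

alexnet = torchvision.models.alexnet(weights="DEFAULT").eval()
fc7_extractor = torch.nn.Sequential(           # everything up to and including FC7
    alexnet.features, alexnet.avgpool, torch.nn.Flatten(1), alexnet.classifier[:6]
)

def food_visual_feature(image, boxes, scores, score_threshold=0.7):
    """image: 3xHxW tensor; boxes: Nx4 (x1, y1, x2, y2); scores: N candidate frame weights."""
    sub_features = []
    for box, score in zip(boxes, scores):
        if score < score_threshold:                       # step 302: discard low-weight candidates
            continue
        x1, y1, x2, y2 = [int(v) for v in box]            # step 304: candidate frame coordinates
        crop = TF.resize(image[:, y1:y2, x1:x2], [224, 224])
        with torch.no_grad():
            sub_features.append(fc7_extractor(crop.unsqueeze(0)).squeeze(0))  # step 306: FC7 sub-feature
    return torch.cat(sub_features) if sub_features else torch.empty(0)        # step 308: concatenation
```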
In an embodiment, the provided image processing method may further include a process of classifying the image to be processed, where the specific process includes: extracting color features in the visual features and extracting shape features in the visual features; and classifying the images to be processed according to the color characteristics and the shape characteristics to obtain the categories of the images to be processed.
Since color features are intuitive, they are among the most widely used features in the fields of image classification and image retrieval. The RGB color space follows the three-primary-color mixing principle used by image color display: any color can be obtained by additive mixing of the primaries, F = r[R] + g[G] + b[B]. The RGB model can be represented in a three-dimensional Cartesian coordinate system whose three axes R, G and B represent the three primary colors red, green and blue respectively and whose origin represents black; any color can be represented by a linear combination of the three primaries. Although the RGB color space can represent any color, the RGB coordinates of two perceptually similar colors may differ greatly in the Cartesian coordinate system, so when performing color-feature similarity matching on an image, the RGB color space (the general representation form of images) is converted into another color space, for example the HSV space; the conversion formula is:
V = max
S = 0 if max = 0, otherwise S = (max − min) / max
H = 60° × (g − b) / (max − min) (mod 360°), if max = r
H = 60° × (b − r) / (max − min) + 120°, if max = g
H = 60° × (r − g) / (max − min) + 240°, if max = b
where r, g, and b are coordinate values in RGB cartesian space, max and min are the maximum and minimum values of the three, and H, S and V represent hue, saturation, and brightness in the HSV space model.
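Written out directly, the conversion may be sketched as follows, assuming r, g and b are normalized to [0, 1]; Python's standard colorsys.rgb_to_hsv performs an equivalent conversion.

```python
# Direct implementation of the RGB-to-HSV conversion formula above;
# r, g, b are assumed to be normalized to the range [0, 1].
def rgb_to_hsv(r, g, b):
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx                                        # brightness
    s = 0.0 if mx == 0 else (mx - mn) / mx        # saturation
    if mx == mn:
        h = 0.0                                   # hue is undefined for greys
    elif mx == r:
        h = (60 * (g - b) / (mx - mn)) % 360
    elif mx == g:
        h = 60 * (b - r) / (mx - mn) + 120
    else:
        h = 60 * (r - g) / (mx - mn) + 240
    return h, s, v
```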
The shape features may be used to represent the contours of objects in the image. The computer equipment can extract the shape features in the visual features, so that the images to be processed are classified according to the color features and the shape features to obtain the categories of the images to be processed.
In an embodiment, the provided image processing method may further include a process of obtaining a category of an image to be processed, where the specific process includes: acquiring an edge feature map of the image to be processed according to the color feature and the shape feature; calculating the number of edge points according to the edge feature graph, and obtaining an image histogram according to the number of the edge points; and inputting the image histogram into an image classifier to obtain the category of the image to be processed.
When image classification is performed, similarity matching of shape features also needs to be added. Shape features are mainly reflected in the contour features and edge features of the image. At present, many edge detection algorithms, such as the Sobel algorithm, the Canny algorithm and the Laplacian algorithm, are used to extract the edge features of an image.
Edge detection is the first step in extracting the shape features of an image, and the computer device may extract the shape features of the image from the edge feature map. A common method for extracting shape features is: calculating the number of edge points according to the angle variable over the edge detection image and drawing a histogram; the area, eccentricity, shape parameters and shape feature histogram are then used as the description of the image, and the histogram is input into a trained classifier to obtain the category information of the image. The computer device can extract visual features from the food images of all training and test sets, train the classifier on the training set, and obtain the classification result from the trained classification model.
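This edge-based step may be sketched as follows; the Canny detector, the 36-bin direction histogram and the SVM classifier are illustrative choices rather than ones fixed by this application.

```python
# Detect edges, bin the edge points by gradient direction into a histogram,
# and feed the normalized histogram to a trained classifier.
import cv2
import numpy as np

def edge_direction_histogram(image_bgr, bins=36):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                        # edge feature map
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angles = np.degrees(np.arctan2(gy, gx))[edges > 0]       # angle variable at the edge points
    hist, _ = np.histogram(angles, bins=bins, range=(-180, 180))
    return hist / max(hist.sum(), 1)                         # normalized edge-point histogram

# The histogram is then fed to a classifier trained on the training set, e.g.:
# from sklearn.svm import SVC
# category = classifier.predict([edge_direction_histogram(img)])
```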
In one embodiment, the provided image processing method may further include a process of retrieving an image, where the specific process includes: acquiring an image to be retrieved, and extracting image characteristics of the image to be retrieved; carrying out image retrieval in an image library according to the image characteristics to obtain an initial retrieval image; calculating the similarity between the initial retrieval image and the image to be retrieved; and obtaining a target retrieval image according to the similarity.
The computer device can acquire an image to be retrieved, where the image to be retrieved can be an image input by a user to be searched for. The computer device can extract the image features of the image to be retrieved using the CNN algorithm, convert the image features into a 1 × 4096 vector, and perform image retrieval in an image library according to the image features to obtain initial retrieval images. Then, the computer device can calculate the similarity between the initial retrieval images and the image to be retrieved according to the extracted image features. In this embodiment, the similarity between images can be characterized by the Euclidean distance formula or by the angle between feature vectors; since the feature vectors are of uniform dimension, the larger the dot product between two vectors, the more similar the two images. The calculation formula is: a · b = |a| × |b| × cos θ, where a and b are two feature vectors, |a| and |b| are their moduli, and θ is the angle between them; the larger the dot product, the more similar the two feature vectors. The computer device can sort the images by dot product and select the first N images from the image database as the initial retrieval result.
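The ranking step may be sketched as follows, assuming a pre-computed feature matrix `library_features` for the image library and an illustrative N of 10.

```python
# Compare the 1 x 4096 query feature with every library feature using the dot
# product / cosine measure a . b = |a| |b| cos(theta) and return the top-N matches.
import numpy as np

def top_n_matches(query_feature, library_features, n=10):
    """query_feature: shape (4096,); library_features: shape (num_images, 4096)."""
    q = query_feature / np.linalg.norm(query_feature)
    lib = library_features / np.linalg.norm(library_features, axis=1, keepdims=True)
    similarity = lib @ q                      # cos(theta); larger means more similar
    return np.argsort(-similarity)[:n]        # indices of the N most similar images
```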
The computer device may then classify the N images according to the BOW model. The accuracy of the BOW classification model is above 50% even though it uses only one image feature, namely the SIFT feature, so the classification accuracy can be greatly improved by multi-feature and multi-kernel methods. Because the accuracy of the CNN is at least 70%, most of the N images belong to the same category and only a few belong to different categories; the images of the different categories are eliminated and only the images of the majority category are retained. The computer device can feed back to the user the final retrieval result obtained after processing the initial retrieval result with the BOW model.
In one embodiment, taking a spectral-hashing-based image retrieval algorithm as an example, the essence of image retrieval is to find, for a given query, the one or more points in the database that are closest to it; these points may be referred to as neighbors. Given m samples of n variables, an m × n matrix is constructed:
X = (x_i^j), i = 1, …, m, j = 1, …, n
where i is the sequence number of the sample, j is the dimension of the sample vector, and x_i^j is the j-th dimensional feature value of the i-th sample. The value of n is typically large, so the data often contain redundancy that is difficult for humans to recognize.
In one embodiment, the BOW (bag-of-words) model is a model commonly used in text classification: a document is represented by a feature vector composed of its words and phrases, while the grammar and word order of the document are ignored; because of the uniqueness of words and phrases within a document, they can be used to represent the features of the document. Here the BOW model classifies an image based on SIFT features, which are invariant not only to displacement, scale and deformation but also to illumination, i.e., they give good detection results for similar images with large brightness differences and can make up for the CNN's shortcomings with respect to illumination characteristics. The steps for extracting the SIFT features of an image may include: establishing a scale space, whose main purpose is to describe the scale invariance of the image, with the image scale space defined as L(x, y, σ) = G(x, y, σ) * I(x, y), where x, y are the spatial coordinates of the image pixels and the size of σ determines the degree of smoothing of the image; detecting extreme points in the DoG (difference of Gaussians) space and selecting candidate feature points; determining the positions of the feature points by fitting and removing unstable points according to the curvature at each point; and finally computing the gradient magnitude and orientation at each feature point to form a 128-dimensional feature descriptor:
m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]
θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]
Since the number of SIFT feature points is enormous, using them directly as the codebook of the BOW model would make the data volume large and the computational complexity high.
After the SIFT features are clustered into k classes, the centroid of each class is placed into the word bag, so the codebook size of the bag-of-words model is k. A null array of size k is constructed, and for each image in the test data set the distance from each of its feature points to every entry in the codebook is calculated; each feature point is assigned to its nearest codebook entry and the corresponding bin of the array is incremented. The obtained word-frequency histogram can be normalized according to the following formula:
y_i = x_i / Σ_{j=1..k} x_j
where i is the codebook sequence number, x_i is the count of SIFT features assigned to the i-th codebook entry, and y_i is the normalized value.
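The bag-of-words pipeline may be sketched as follows; OpenCV's SIFT and scikit-learn's KMeans are used only as illustrative stand-ins, and k = 500 is an assumed codebook size.

```python
# Extract SIFT features, cluster them into k centroids (the codebook), and
# represent each image as a normalized word-frequency histogram over the codebook.
import cv2
import numpy as np
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()

def sift_descriptors(gray_image):
    _, descriptors = sift.detectAndCompute(gray_image, None)    # 128-dimensional SIFT features
    return descriptors if descriptors is not None else np.empty((0, 128), np.float32)

def build_codebook(training_images, k=500):
    all_desc = np.vstack([sift_descriptors(img) for img in training_images])
    return KMeans(n_clusters=k, n_init=10).fit(all_desc)         # the k centroids form the word bag

def bow_histogram(gray_image, codebook):
    words = codebook.predict(sift_descriptors(gray_image))       # nearest codebook entry per feature
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)                           # normalization as in the formula above
```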
In one embodiment, the proposed scheme was tested experimentally; the experimental procedure and results are as follows. The original dish data set comprises 117504 images in 11611 dish categories; the dish categories containing at least 15 images were selected, finally yielding 233 dish categories and 49168 images. This data set is called Dish-233.
In order to detect the food image area using the Faster R-CNN, the Faster R-CNN needs to be fine-tuned. For this purpose, a food image data set with region boundary labels is required. The Visual Genome database contains 108077 annotated images, including a large number of food images. A query dictionary is constructed from the category names of the Dish-233 data set, translated into English, and used to select food images from the Visual Genome data set. In order to obtain more food pictures, the category information of other food databases is further used to select food pictures, which are then manually filtered to remove non-food pictures, finally yielding 10641 food pictures with corresponding annotated regions; this set is called Visgenome-11K.
For the parameter settings of the model: in the Faster R-CNN training process, each mini-batch consists of 256 anchors sampled from an image, and the number of iterations is 80000. The learning rate is set to 0.001 for the first 60000 iterations and to 0.0001 for the remaining 20000 iterations; the momentum parameter is set to 0.9 and the weight decay parameter to 0.0005. When the Alex-Net model is fine-tuned, the initial learning rate is set to 0.001, the learning rate is multiplied by 0.1 every 20 epochs, and the maximum number of iterations is set to 60 epochs. The Visgenome-11K image set is divided into two parts, 80% for the training set and 20% for the validation set. The Faster R-CNN is fine-tuned with Visgenome-11K; the fine-tuned Faster R-CNN model is then used to perform region detection on the food images of the Dish data set to obtain the food regions in the images, after which an Alex-Net model pre-trained on ImageNet is used to extract the visual features of the food regions. After region detection and visual feature extraction have been performed on all images, the features are used for retrieval and classification of the food images.
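The quoted learning-rate schedule for the detector can be expressed, for example, with a step scheduler; in this standalone sketch a single dummy parameter stands in for the detector's parameters.

```python
# Sketch of the schedule above: lr 0.001 for the first 60000 iterations, then
# 0.0001 for the remaining 20000, with momentum 0.9 and weight decay 0.0005.
import torch

params = [torch.nn.Parameter(torch.zeros(1))]     # stand-in for the detector's parameters
optimizer = torch.optim.SGD(params, lr=0.001, momentum=0.9, weight_decay=0.0005)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60000], gamma=0.1)

for iteration in range(80000):
    # ... forward/backward pass on a mini-batch of 256 sampled anchors would go here ...
    optimizer.step()
    scheduler.step()      # learning rate becomes 0.0001 from iteration 60000 onward
```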
Accuracy and MAP are common evaluation indexes used in information retrieval. To verify the effectiveness of the method herein, it was compared with CNN-G, CNN-G-F and Faster R-CNN-G: CNN-G uses the 7-layer Alex network to directly extract the visual features of the global image; compared with CNN-G, CNN-G-F first fine-tunes the Alex network with the training set and then uses the fine-tuned network to extract the visual features of the whole image; Faster R-CNN-G directly uses the Faster R-CNN network to detect candidate food areas in the image and then extracts visual features from the candidate area with the highest weight using the fine-tuned AlexNet network. CNN-G and CNN-G-F do not use the Faster R-CNN for region detection, and the Faster R-CNN-G method further illustrates the effect of fine-tuning the Faster R-CNN with Visgenome-11K.
For the image classification task, 75% of each category in the data set is used as the training set and 25% as the test set. Since the classification task is single-label, accuracy is adopted as the evaluation index. The accuracy comparison results of the different methods are shown in Table 1, and the analysis of these results is shown in fig. 4.
Table 1:
Method                       Accuracy
CNN-G                        0.356
CNN-G-F                      0.704
Faster R-CNN-G               0.748
Method of this application   0.754
As shown in FIG. 4, the performance of Faster R-CNN-G is improved by 5% compared with CNN-G-F.
In one embodiment, the classification results of 15 randomly selected dishes under the different methods were analyzed, and the results are shown in table 2 and fig. 5.
Table 2:
(Table 2: classification results of the 15 randomly selected dishes under the different methods.)
Combining table 2 and fig. 5, the classification results of the 15 randomly selected dishes under the different methods can be obtained. The accuracy of the best classification result of the CNN-G method is 0.41, that of the CNN-G-F method is 0.73, and that of the Faster R-CNN-G method is 0.78. It can be seen that in most experiments the classification performance of the Faster R-CNN-G method is the best.
A retrieval experiment was performed on the Dish-233 data set using Precision@K and MAP@K (K represents the number of candidate images returned during retrieval), with K = 1, 20, 40, 60, 80, 100; the retrieval results of the four methods under these two indices are shown in fig. 6 and 7.
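For reference, the two retrieval indices can be computed as in the following sketch; the 0/1 relevance labels over the ranked candidates (same dish category as the query or not) are assumed to be available.

```python
# Precision@K and the per-query average precision used for MAP@K.
import numpy as np

def precision_at_k(relevant, k):
    """relevant: 0/1 array over the ranked candidates; fraction of hits among the top k."""
    return float(np.mean(relevant[:k]))

def average_precision_at_k(relevant, k):
    """Mean of precision@i over every rank i <= k at which a relevant image appears."""
    hits = [precision_at_k(relevant, i + 1)
            for i in range(min(k, len(relevant))) if relevant[i]]
    return float(np.mean(hits)) if hits else 0.0

# MAP@K is the mean of average_precision_at_k over all query images.
```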
From fig. 6 and 7 the following can be concluded: CNN-G-F has better retrieval performance than CNN-G, which indicates that fine-tuning the Alex-Net network yields visual features better suited to the Dish data set; Faster R-CNN-G is in turn better than CNN-G-F, which shows that using the Faster R-CNN to detect the food image area can effectively reduce the interference of the background information of the food image and thereby improve retrieval performance; this also demonstrates that fine-tuning with Visgenome-11K can improve the accuracy of food image detection.
It should be understood that, although the steps in the respective flowcharts described above are shown in sequence as indicated by the arrows, the steps are not necessarily performed in sequence as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a portion of the steps in each of the flowcharts described above may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or the stages is not necessarily sequential, but may be performed alternately or alternatingly with other steps or at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided an image processing apparatus including: an image acquisition module 810, a parameter adjustment module 820, a feature extraction module 830, and an image classification module 840, wherein:
an image obtaining module 810, configured to obtain an image to be processed; the image to be processed contains a food area.
And the parameter adjusting module 820 is configured to search for an initial target detection algorithm and adjust parameters in the initial target detection algorithm to obtain a target detection algorithm.
And the feature extraction module 830 is configured to extract a food area in the image to be processed according to a target detection algorithm, and extract visual features in the food area through a deep neural network algorithm.
An image classification module 840, configured to classify the image to be processed according to the visual features.
In one embodiment, the parameter adjustment module 820 is further configured to obtain an image database, and search the image database for a food image; generating a food image dataset from the food image; and adjusting parameters in the initial target detection algorithm by using the food image data set to obtain a target detection algorithm.
In one embodiment, the feature extraction module 830 is further configured to respectively obtain candidate frames of the image to be processed according to a target detection algorithm; calculating the proportion of the candidate frames of each candidate frame; and extracting the food area in the image to be processed according to the candidate frame weight.
In one embodiment, the feature extraction module 830 is further configured to obtain a specific gravity threshold, and use a food region corresponding to a specific gravity of a candidate frame higher than the specific gravity threshold as a candidate region; obtaining candidate frame coordinates of the candidate region according to the candidate frame proportion of the candidate region; extracting the sub-features of the candidate region according to the candidate frame coordinates; from the sub-features, visual features in the food area are derived.
In one embodiment, the image classification module 840 is further configured to extract color features from the visual features and extract shape features from the visual features; and classifying the images to be processed according to the color characteristics and the shape characteristics to obtain the categories of the images to be processed.
In one embodiment, the image classification module 840 is further configured to obtain an edge feature map of the image to be processed according to the color feature and the shape feature; calculating the number of edge points according to the edge feature graph, and obtaining an image histogram according to the number of the edge points; and inputting the image histogram into an image classifier to obtain the category of the image to be processed.
In an embodiment, the image processing apparatus provided may further include an image retrieval module, configured to obtain an image to be retrieved, and extract an image feature of the image to be retrieved; carrying out image retrieval in an image library according to the image characteristics to obtain an initial retrieval image; calculating the similarity between the initial retrieval image and the image to be retrieved; and obtaining a target retrieval image according to the similarity.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring an image to be processed; the to-be-processed image comprises a food area;
searching an initial target detection algorithm, and adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm;
extracting a food area in the image to be processed according to a target detection algorithm, and extracting visual features in the food area through a deep neural network algorithm;
and classifying the images to be processed according to the visual features.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring an image database, and searching a food image from the image database; generating a food image dataset from the food image; and adjusting parameters in the initial target detection algorithm by using the food image data set to obtain a target detection algorithm.
In one embodiment, the processor, when executing the computer program, further performs the steps of: respectively acquiring candidate frames of the image to be processed according to a target detection algorithm; calculating the proportion of the candidate frames of each candidate frame; and extracting the food area in the image to be processed according to the candidate frame weight.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a specific gravity threshold, and taking a food region corresponding to the specific gravity of the candidate frame higher than the specific gravity threshold as a candidate region; obtaining candidate frame coordinates of the candidate region according to the candidate frame proportion of the candidate region; extracting the sub-features of the candidate region according to the candidate frame coordinates; from the sub-features, visual features in the food area are derived.
In one embodiment, the processor, when executing the computer program, further performs the steps of: extracting color features in the visual features and extracting shape features in the visual features; and classifying the images to be processed according to the color characteristics and the shape characteristics to obtain the categories of the images to be processed.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring an edge feature map of the image to be processed according to the color feature and the shape feature; calculating the number of edge points according to the edge feature graph, and obtaining an image histogram according to the number of the edge points; and inputting the image histogram into an image classifier to obtain the category of the image to be processed.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring an image to be retrieved, and extracting image characteristics of the image to be retrieved; carrying out image retrieval in an image library according to the image characteristics to obtain an initial retrieval image; calculating the similarity between the initial retrieval image and the image to be retrieved; and obtaining a target retrieval image according to the similarity.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image to be processed; the to-be-processed image comprises a food area;
searching an initial target detection algorithm, and adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm;
extracting a food area in the image to be processed according to a target detection algorithm, and extracting visual features in the food area through a deep neural network algorithm;
and classifying the images to be processed according to the visual features.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring an image database, and searching a food image from the image database; generating a food image dataset from the food image; and adjusting parameters in the initial target detection algorithm by using the food image data set to obtain a target detection algorithm.
In one embodiment, the computer program when executed by the processor further performs the steps of: respectively acquiring candidate frames of the image to be processed according to a target detection algorithm; calculating the proportion of the candidate frames of each candidate frame; and extracting the food area in the image to be processed according to the candidate frame weight.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a specific gravity threshold, and taking a food region corresponding to the specific gravity of the candidate frame higher than the specific gravity threshold as a candidate region; obtaining candidate frame coordinates of the candidate region according to the candidate frame proportion of the candidate region; extracting the sub-features of the candidate region according to the candidate frame coordinates; from the sub-features, visual features in the food area are derived.
In one embodiment, the computer program when executed by the processor further performs the steps of: extracting color features in the visual features and extracting shape features in the visual features; and classifying the images to be processed according to the color characteristics and the shape characteristics to obtain the categories of the images to be processed.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring an edge feature map of the image to be processed according to the color feature and the shape feature; calculating the number of edge points according to the edge feature graph, and obtaining an image histogram according to the number of the edge points; and inputting the image histogram into an image classifier to obtain the category of the image to be processed.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring an image to be retrieved, and extracting image characteristics of the image to be retrieved; carrying out image retrieval in an image library according to the image characteristics to obtain an initial retrieval image; calculating the similarity between the initial retrieval image and the image to be retrieved; and obtaining a target retrieval image according to the similarity.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring an image to be processed; the image to be processed comprises a food area;
searching for an initial target detection algorithm, and adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm;
extracting a food area in the image to be processed according to the target detection algorithm, and extracting visual features in the food area through a deep neural network algorithm;
and classifying the image to be processed according to the visual features.
2. The method of claim 1, wherein the adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm comprises:
acquiring an image database, and searching a food image from the image database;
generating a food image dataset from the food image;
adjusting parameters in the initial target detection algorithm using the food image dataset to obtain the target detection algorithm.
3. The method of claim 1, wherein extracting food regions in the image to be processed according to the target detection algorithm comprises:
respectively acquiring candidate frames of the image to be processed according to the target detection algorithm;
calculating the candidate frame proportion of each candidate frame;
and extracting the food area in the image to be processed according to the candidate frame proportion.
4. The method of claim 3, wherein extracting visual features in the food area by a deep neural network algorithm comprises:
acquiring a proportion threshold, and taking the food region corresponding to a candidate frame whose candidate frame proportion is higher than the proportion threshold as a candidate region;
obtaining candidate frame coordinates of the candidate region according to the candidate frame proportion of the candidate region;
extracting the sub-features of the candidate region according to the candidate frame coordinates;
and obtaining the visual characteristics in the food area according to the sub-characteristics.
5. The method of claim 1, wherein the classifying the image to be processed according to the visual features comprises:
extracting color features in the visual features and extracting shape features in the visual features;
and classifying the image to be processed according to the color features and the shape features to obtain the category of the image to be processed.
6. The method according to claim 5, wherein the classifying the image to be processed according to the color feature and the shape feature to obtain a category of the image to be processed comprises:
acquiring an edge feature map of the image to be processed according to the color feature and the shape feature;
calculating the number of edge points according to the edge feature map, and obtaining an image histogram according to the number of edge points;
and inputting the image histogram into an image classifier to obtain the category of the image to be processed.
7. The method of claim 1, further comprising:
acquiring an image to be retrieved, and extracting image characteristics of the image to be retrieved;
carrying out image retrieval in an image library according to the image characteristics to obtain an initial retrieval image;
calculating the similarity between the initial retrieval image and the image to be retrieved;
and obtaining a target retrieval image according to the similarity.
8. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring an image to be processed; the image to be processed comprises a food area;
the parameter adjusting module is used for searching for an initial target detection algorithm and adjusting parameters in the initial target detection algorithm to obtain a target detection algorithm;
the characteristic extraction module is used for extracting a food area in the image to be processed according to the target detection algorithm and extracting visual characteristics in the food area through a deep neural network algorithm;
and the image classification module is used for classifying the image to be processed according to the visual features.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202010313007.4A 2020-04-20 2020-04-20 Image processing method, image processing device, computer equipment and storage medium Pending CN111539470A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010313007.4A CN111539470A (en) 2020-04-20 2020-04-20 Image processing method, image processing device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111539470A true CN111539470A (en) 2020-08-14

Family

ID=71979153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010313007.4A Pending CN111539470A (en) 2020-04-20 2020-04-20 Image processing method, image processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111539470A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170206646A1 (en) * 2016-01-14 2017-07-20 Electronics And Telecommunications Research Institute Apparatus and method for food search service
CN105809121A (en) * 2016-03-03 2016-07-27 电子科技大学 Multi-characteristic synergic traffic sign detection and identification method
CN109002851A (en) * 2018-07-06 2018-12-14 东北大学 It is a kind of based on the fruit classification method of image multiple features fusion and application
CN109903836A (en) * 2019-03-31 2019-06-18 山西慧虎健康科技有限公司 A kind of diet intelligent recommendation and matching system and method based on constitution and big data
CN110705621A (en) * 2019-09-25 2020-01-17 北京影谱科技股份有限公司 Food image identification method and system based on DCNN and food calorie calculation method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIAOYANG LIU et al.: "A Detection Method for Apple Fruits Based on Color and Shape Features", IEEE Access *
SUN HAORONG: "Research on an SVM-based Food Image Classification Algorithm", China Master's Theses Full-text Database (Information Science and Technology) *
MEI SHUHUAN et al.: "Food Image Retrieval and Classification Based on Faster R-CNN", Journal of Nanjing University of Information Science & Technology (Natural Science Edition) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287193A (en) * 2020-10-30 2021-01-29 腾讯科技(深圳)有限公司 Data clustering method and device, computer equipment and storage medium
CN112287193B (en) * 2020-10-30 2022-10-04 腾讯科技(深圳)有限公司 Image segmentation method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
Peng et al. Self-paced joint sparse representation for the classification of hyperspectral images
US10353948B2 (en) Content based image retrieval
WO2019100724A1 (en) Method and device for training multi-label classification model
CN111126482B (en) Remote sensing image automatic classification method based on multi-classifier cascade model
CN107563442B (en) Hyperspectral image classification method based on sparse low-rank regular graph tensor embedding
CN109766858A (en) Three-dimensional convolution neural network hyperspectral image classification method combined with bilateral filtering
Keivani et al. Automated analysis of leaf shape, texture, and color features for plant classification.
CN113361495A (en) Face image similarity calculation method, device, equipment and storage medium
CN111709313B (en) Pedestrian re-identification method based on local and channel combination characteristics
CN106909895B (en) Gesture recognition method based on random projection multi-kernel learning
Ahmed et al. Deep image sensing and retrieval using suppression, scale spacing and division, interpolation and spatial color coordinates with bag of words for large and complex datasets
CN109213886B (en) Image retrieval method and system based on image segmentation and fuzzy pattern recognition
CN112580480B (en) Hyperspectral remote sensing image classification method and device
CN103064941A (en) Image retrieval method and device
Zhao et al. Combining multiple SVM classifiers for adult image recognition
Mohammed et al. Proposed approach for automatic underwater object classification
CN111539470A (en) Image processing method, image processing device, computer equipment and storage medium
CN105844299B (en) A kind of image classification method based on bag of words
Huang et al. Dq-detr: Detr with dynamic query for tiny object detection
Drzewiecki et al. Applicability of multifractal features as global characteristics of WorldView-2 panchromatic satellite images
Wang et al. Cascading classifier with discriminative multi-features for a specific 3D object real-time detection
Devesh et al. Retrieval of monuments images through ACO optimization approach
Hoang Unsupervised LBP histogram selection for color texture classification via sparse representation
Johnson et al. A study on eye fixation prediction and salient object detection in supervised saliency
Lee et al. Octagonal prism LBP representation for face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200814)