CN110751225A - Image classification method, device and storage medium - Google Patents
- Publication number: CN110751225A
- Application number: CN201911029109.7A
- Authority: CN (China)
- Prior art keywords: image, image classification, detected, data, feature
- Prior art date: 2019-10-28
- Legal status: Pending (the status is an assumption, not a legal conclusion)
Classifications
All four classifications fall under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/00—Pattern recognition; G06F18/20—Analysing:

- G06F18/24—Classification techniques
- G06F18/214—Design or setup of recognition systems or techniques; Extraction of features in feature space; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/253—Fusion techniques of extracted features
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
Landscapes: Engineering & Computer Science; Data Mining & Analysis; Theoretical Computer Science; Computer Vision & Pattern Recognition; Bioinformatics & Cheminformatics; Bioinformatics & Computational Biology; Artificial Intelligence; Evolutionary Biology; Evolutionary Computation; Physics & Mathematics; General Engineering & Computer Science; General Physics & Mathematics; Life Sciences & Earth Sciences; Image Analysis
Abstract
The application belongs to the technical field of machine vision and provides an image classification method, an image classification device, and a storage medium. The method comprises the following steps: segmenting an original image to obtain a plurality of images to be detected; for each image to be detected, inputting the image to be detected into an image classification model for calculation to obtain first feature data; performing image feature extraction on the image to be detected, extracting preset target features to obtain second feature data; merging the first feature data and the second feature data to generate a feature image; and performing image classification on the feature image through the image classification model and outputting a classification result. By merging the second feature data, extracted from the image to be detected as preset target features, with the first feature data obtained by convolution to generate a feature map, and feeding the feature map back into the image classification model for classification, the network structure of the image classification model is improved and the problem of low classification accuracy is solved.
Description
Technical Field
The invention relates to the technical field of machine vision, in particular to an image classification method, an image classification device and a storage medium.
Background
After wave soldering on a production line, an immature soldering process often causes defects such as solder bridging (continuous soldering) and missed solder joints, which greatly affect the service life and appearance of the product, so the product must undergo quality inspection. In the prior art, the solder joint image is typically processed with the traditional machine-vision pipeline of denoising, transformation, segmentation, and feature extraction, and a support vector machine (SVM) then classifies the solder joints; alternatively, after the same traditional image processing, the SVM classifies the joints when the number of solder joint samples is small, while a convolutional neural network (CNN) detects them when the number is large.
However, the first approach is limited by the low accuracy of the SVM output, so using an SVM to detect, that is, classify, the solder joints yields low accuracy. In the second approach, although the CNN detects solder joints with higher accuracy when samples are abundant, the CNN algorithm suffers from a black-box effect, which makes accurate identification and classification difficult.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image classification method, an image classification device, and a storage medium to solve the problem of low solder joint detection accuracy.
A first aspect of an embodiment of the present invention provides an image classification method, including:
segmenting an original image to obtain a plurality of images to be detected;
for each image to be detected, inputting the image to be detected into an image classification model for calculation to obtain first feature data;
extracting image features of the image to be detected, and extracting preset target features to obtain second feature data;
combining the first feature data and the second feature data to generate a feature image;
and performing image classification on the feature image through the image classification model, and outputting a classification result.
In an implementation example, before performing image feature extraction on the image to be detected and extracting a preset target feature to obtain second feature data, the method includes:
graying the image to be detected and equally dividing the grayed image to be detected according to a preset image segmentation rule.
In one embodiment, the preset image segmentation rule is set according to the size of the first feature data, so that the spatial size of each divided image matches that of the first feature data.
In an implementation example, before the step of inputting the image to be detected into an image classification model for calculation to obtain first feature data, the method further includes:
and establishing a difficult case pool, and performing iterative training on the image classification model according to the error-prone samples in the difficult case pool.
In one example implementation, the method further comprises:
carrying out image classification training on the image classification model according to training data; the training data includes positive samples and samples obtained by oversampling the difficult case pool.
In one implementation example, the image classification model includes an input layer, a convolutional layer, a pooling layer, a fully-connected layer, and a classifier;
and calculating the image to be detected through the input layer, the convolutional layers, and the pooling layers to obtain first feature data.
In an implementation example, the performing image classification on the feature image through the image classification model and outputting a classification result includes:
inputting the feature image into a fully connected layer of the image classification model for image classification to obtain classification data;
and calculating probabilities over the classification data through the classifier to determine a classification result.
A second aspect of an embodiment of the present invention provides an image classification apparatus, including:
the image segmentation module is used for segmenting the original image to obtain a plurality of images to be detected;
the calculation module is used for inputting, for each image to be detected, the image to be detected into an image classification model for calculation to obtain first feature data;
the image feature extraction module is used for extracting image features of the image to be detected and extracting preset target features to obtain second feature data;
the data merging module is used for merging the first feature data and the second feature data to generate a feature image;
and the image classification module is used for performing image classification on the feature image through the image classification model and outputting a classification result.
A third aspect of embodiments of the present invention provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the steps of the method of the first aspect.
A fourth aspect of an embodiment of the present invention provides an image classification device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the image classification method of the first aspect when executing the computer program.
According to the image classification method, image classification device, and storage medium provided by the embodiments of the present invention, a plurality of images to be detected are obtained by segmenting an original image; for each image to be detected, the image is input into an image classification model for calculation to obtain first feature data; image feature extraction is performed on the image to be detected, with preset target features extracted to obtain second feature data; the first and second feature data are merged to generate a feature image; and the feature image is classified by the image classification model, which outputs a classification result. Merging the second feature data, extracted from the image to be detected as preset target features, with the first feature data obtained by convolution generates a feature map that is fed back into the image classification model for classification; this improves the network structure of the model and overcomes its black-box effect, and therefore improves the recognition accuracy of the classification result the model outputs after classifying the feature image.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image classification method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an implementation of an image classification method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of an image classification method according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of an image classification apparatus according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image classification apparatus according to a fifth embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from these embodiments without creative effort fall within the protection scope of the present invention.
The terms "comprises" and "comprising", and any variations thereof, in the description, claims, and drawings of the present invention are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements but may include other steps or elements not expressly listed or inherent to such a process, method, system, article, or apparatus. Furthermore, the terms "first", "second", "third", and the like are used to distinguish between different objects, not to describe a particular order.
Example one
Fig. 1 is a schematic flow chart of an image classification method according to the first embodiment of the present invention. The method can be applied to scenarios in which solder joint defects are detected in an image to be detected, and can be executed by a detection device, which may be a server, an intelligent terminal, a tablet, a PC (personal computer), or the like. In the embodiments of the present invention, the detection device is taken as the execution subject, and the method specifically includes the following steps:
s110, segmenting the original image to obtain a plurality of images to be detected;
the image classification method can be applied to various application scenes in which image classification is required, and in the embodiment, the application scene in which the image classification method is used for detecting the defects of the welding spots is described, so that the welding spots on the workpiece or the product can be photographed to obtain the original image with the information of the welding spots. The detection device realizes the detection of the welding spot defects by carrying out image recognition and classification on the welding spots on the original image. Specifically, after the detection device acquires an original image, segmenting welding spots on the original image to obtain a plurality of to-be-detected images containing welding spot information in the original image; or by segmenting the original image manually. Optionally, the detecting device may divide the original image to obtain the image to be detected by identifying each welding point on the original image according to a preset welding point image feature, and after determining the position of each welding point on the original image, dividing the welding point on the detected image to obtain an image of each welding point, i.e., the image to be detected. Wherein, the images to be detected all comprise complete welding spot information. And the size of each image to be detected can be consistent, for example, uniformly adjusted to 64 × 64, in order to facilitate the subsequent analysis processing of the image to be detected.
S120, for each image to be detected, inputting the image to be detected into an image classification model for calculation to obtain first feature data;
and inputting each obtained image to be detected into a pre-trained image classification model one by one for calculation to obtain first characteristic data. Optionally, the image classification model may be a convolutional neural network model, and the image classification model includes an input layer, a convolutional layer, a pooling layer, a full-link layer, and a classifier. And after the image to be detected is input into the image classification model, the first characteristic data is obtained by calculation through the input, convolution and pooling layers in the model.
In an implementation example, the CNN optionally uses 3 × 3 convolution kernels and takes the three RGB channels as input; dropout is added to each layer of the network, randomly dropping 20% of the units. Specifically, the detection device feeds the image to be detected into the image classification model through the three RGB channels. The convolutional layers use a kernel size of 3 × 3 and a convolution stride of 1, and padding may be set to SAME so that the borders of the image to be detected continue to be sampled. Optionally, the four convolutional layers of the model use 8, 16, 32, and 64 kernels, respectively, from the input end to the output end. Each convolutional layer is followed by a Leaky ReLU activation function, whose non-linearity mitigates the vanishing gradient and dead neuron problems of the model. The 20% dropout added to each layer prevents the neural network from overfitting.
After the image to be detected passes through the input layer and the several convolutional layers of the model, the resulting feature map is fed into the pooling layers for feature compression, which extracts the main features and reduces the computational complexity of the network, yielding first feature data of a fixed size. Fig. 2 is a schematic diagram of an implementation of the image classification method according to the first embodiment. Optionally, the detection device feeds the 64 × 64 × 3 image to be detected into the model: the first convolutional layer (8 kernels, stride 1) yields a 32 × 32 × 8 feature map; the second layer (16 kernels) yields 16 × 16 × 16; the third layer (32 kernels) yields 8 × 8 × 32; and the fourth layer (64 kernels) is followed by pooling with a 2 × 2 kernel and a stride of 2. Each halving of the spatial size implies that a 2 × 2, stride-2 pooling follows each convolution. In this way, convolving the image to be detected through the input and convolutional layers and compressing it through the pooling layers produces first feature data of size 4 × 4 × 64.
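A PyTorch sketch of this backbone under the reading above (four 3 × 3, stride-1, SAME-padded convolutions with 8, 16, 32, and 64 kernels, each followed by a Leaky ReLU, 20% dropout, and 2 × 2, stride-2 pooling); it illustrates the described structure and is not the patentee's code:

```python
import torch
import torch.nn as nn

class ConvBackbone(nn.Module):
    """Computes the 'first feature data': 64x64x3 -> 4x4x64."""

    def __init__(self):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in (8, 16, 32, 64):   # kernel counts per convolutional layer
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),  # SAME padding
                nn.LeakyReLU(),                         # avoids dead neurons / vanishing gradients
                nn.Dropout(0.2),                        # randomly drop 20% per layer
                nn.MaxPool2d(kernel_size=2, stride=2),  # halves the spatial size
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)

    def forward(self, x):                # x: (N, 3, 64, 64)
        return self.features(x)          # -> (N, 64, 4, 4)

first_feature_data = ConvBackbone()(torch.randn(1, 3, 64, 64))
print(first_feature_data.shape)          # torch.Size([1, 64, 4, 4])
```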
S130, extracting image features of the image to be detected, and extracting preset target features to obtain second feature data;
when each image to be detected is input into the image classification model one by one for convolution calculation, image feature extraction is carried out on each image to be detected, and preset target features are extracted from the image to be detected to obtain second feature data.
In an embodiment, before extracting the preset target features, the detection device grays the image to be detected and divides the grayscale image equally according to a preset image segmentation rule. To detect defects in the solder joint image, the detection device must combine the first feature data, obtained by convolving the image to be detected in the image classification model, with the second feature data, obtained by extracting the preset target features from the same image. To unify the sizes of these data, the preset image segmentation rule can be set according to the size of the first feature data output by the convolutional part of the model, so that the spatial size of each sub-image obtained by equally dividing the grayscale image matches that of the first feature data. For example, as shown in fig. 2, if the image to be detected is 64 × 64, graying it yields a 64 × 64 × 1 grayscale image, which is divided equally according to the preset rule. Since the first feature data is of size 4 × 4 × 64, the preset rule divides the 64 × 64 × 1 grayscale image into (64/4) × (64/4) = 256 sub-images, each of size 4 × 4 × 1. The detection device then extracts the preset target features from these sub-images to obtain the second feature data.
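The graying and equal division translate into a few lines of NumPy (OpenCV assumed only for the grayscale conversion); the reshape/transpose trick below splits the 64 × 64 grayscale image into the 256 non-overlapping 4 × 4 sub-images computed above:

```python
import cv2
import numpy as np

def gray_and_tile(image_bgr, tile=4):
    """Gray a 64x64 image and divide it equally into
    (64/4) x (64/4) = 256 sub-images of size 4x4 each."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # 64 x 64 x 1
    h, w = gray.shape
    tiles = (gray.reshape(h // tile, tile, w // tile, tile)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, tile, tile))
    return tiles                                         # shape (256, 4, 4)
```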
In one example, the preset target features may include, but are not limited to: gray-histogram mean, standard deviation, skewness, high-magnitude ratio, peak, energy, solder joint perimeter, area, shape parameter, hydraulic radius, sphericity, eccentricity, center of gravity, compactness, cross-correlation, and wavelet frequency. Following the traditional image feature extraction approach, the detection device applies the feature-calculation function for each of these sixteen features to the sub-images of the solder joint image, obtaining second feature data of size 4 × 4 × 16.
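By way of illustration, the sketch below computes a few of the listed statistics and broadcasts each scalar over a 4 × 4 plane so that sixteen of them would stack into the stated 4 × 4 × 16 second feature data. The patent does not spell out how the features map onto that shape, so this arrangement, like the placeholder entries, is an assumption:

```python
import numpy as np

def handcrafted_features(gray, grid=4, n_features=16):
    """Hypothetical layout: each scalar statistic is tiled over a
    grid x grid plane, so 16 statistics stack to (16, 4, 4)."""
    g = gray.astype(np.float64).ravel()
    mu, sigma = g.mean(), g.std()
    feats = [
        mu,                                            # gray-histogram mean
        sigma,                                         # standard deviation
        ((g - mu) ** 3).mean() / (sigma ** 3 + 1e-8),  # skewness
        (g ** 2).mean(),                               # energy
    ]
    # The remaining statistics (peak, perimeter, area, sphericity, ...)
    # would be appended here; zeros keep the sketch self-contained.
    feats += [0.0] * (n_features - len(feats))
    return np.tile(np.asarray(feats)[:, None, None], (1, grid, grid))
```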
S140, merging the first feature data and the second feature data to generate a feature image;
in order to overcome the black box effect of the convolutional neural network, the detection device needs to combine the first characteristic data obtained by calculating the image to be detected by the image classification model and the second characteristic data obtained by extracting the preset target characteristic from the same image to be detected, and then the welding spot defect in the image to be detected is detected. And combining the first characteristic data and the second characteristic data to generate a characteristic image. As shown in fig. 2, if the first feature data is 4 × 64 and the second feature data is 4 × 16, the combined feature image may be 4 × 64+16 data.
S150, performing image classification on the feature image through the image classification model, and outputting a classification result.
After the feature image is generated by merging the first and second feature data, the detection device performs image classification, that is, solder joint detection, on the feature image through the image classification model and outputs the classification result. The design improves the CNN structure by combining the black-box character of the deep algorithm with the white-box character of traditional image processing: external features are embedded into the CNN to improve the reliability of the classification result. This compensates for the image detail lost by traditional feature extraction and counters the neural network's tendency to overfit in a random direction of convergence when data is scarce. Specifically, the feature image is fed into the fully connected layer of the image classification model, which integrates the image features and outputs the image classification data; the data is then fed into the classifier, which calculates probabilities and outputs the classification result, completing the solder joint defect detection.
In one implementation example, the model classifies the feature image as follows: the feature image is fed into the fully connected layer of the image classification model for image classification, yielding classification data. Optionally, as shown in fig. 2, the fully connected layer may be of size 1280 × 1000, and before input the detection device flattens the three-dimensional 4 × 4 × 80 feature image into a 1 × 1280 vector. Feeding this 1 × 1280 vector into the fully connected layer produces neurons of size 1 × 1000, and classification through the fully connected layers yields the classification data.
In one implementation example, the detection device balances the samples in the training data by downsampling and trains the image classification model on the balanced samples; the training data comprises positive and negative samples. Optionally, a positive sample may be one corresponding to a preset detection result (good, exposed copper, missing pin, or continuous solder), and the image classification model is generated by training on positive and negative samples for these four preset solder joint detection results. As shown in fig. 2, when the feature image passes through the final 1000 × 4 fully connected layer of the model for classification, classification data corresponding to the four detection results (good, exposed copper, missing pin, continuous solder) is output. The classifier then calculates probabilities over the classification data to determine the classification result. Optionally, the classifier may be a softmax function: the scores for the four solder joint detection results are converted into probabilities, the result with the highest probability is output, and the detection of solder joint defects in the image is complete.
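A sketch of this classification head under the sizes above (flatten 4 × 4 × 80 = 1280, fully connected 1280 → 1000, then 1000 → 4, softmax over the four results); the activation between the two fully connected layers and the class ordering are assumptions:

```python
import torch
import torch.nn as nn

CLASSES = ["good", "exposed copper", "missing pin", "continuous solder"]

head = nn.Sequential(
    nn.Flatten(),            # (N, 80, 4, 4) -> (N, 1280): flatten the feature image
    nn.Linear(1280, 1000),   # the 1280 x 1000 fully connected layer
    nn.LeakyReLU(),          # assumed activation between the two layers
    nn.Linear(1000, 4),      # the 1000 x 4 layer: one score per detection result
    nn.Softmax(dim=1),       # scores -> probabilities
)

probs = head(torch.randn(1, 80, 4, 4))
print(CLASSES[probs.argmax(dim=1).item()])   # the result with the highest probability
```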
According to the image classification method provided by this embodiment, the original image is segmented to obtain the images to be detected; each image to be detected is input into an image classification model for calculation to obtain first feature data; image feature extraction is performed on the image to be detected, with preset target features extracted to obtain second feature data; the first and second feature data are merged to generate a feature image; and the feature image is classified by the image classification model, which outputs a classification result. Merging the second feature data, extracted from the image to be detected as preset target features, with the first feature data obtained by convolution generates a feature map that is fed back into the image classification model for classification; this improves the network structure of the model, overcomes its black-box effect, and thus improves the recognition accuracy of the classification result output after the model classifies the feature image.
Example two
Fig. 3 is a schematic flow chart of an image classification method according to the second embodiment of the present invention. On the basis of the first embodiment, this embodiment further provides the training process of the image classification model, which addresses the hard sample problem. The method specifically includes the following steps:
s210, establishing a difficult case pool, and performing iterative training on the image classification model according to error-prone samples in the difficult case pool;
after the image classification model is trained, iterative training can be carried out on the image classification model aiming at the error-prone sample, and the problem of difficult samples is solved by carrying out repeated iterative training in an oversampling mode. And storing the error-prone samples with wrong classification into the difficult sample pool after the image classification model is trained each time. The error-prone sample used in each iterative training is obtained from a difficult-to-sample pool, and specifically, the samples obtained by oversampling the difficult-to-sample pool include: and (4) according to the sorting of the score deviations of the sample classifications in the difficult sample pool, the more easily to be mistakenly sorted in the front, and oversampling is carried out according to the proportion of the score in the total score. If there are three difficult samples, their score deviation is 0.9, 0.6, 0.3, their total score deviation is 1.8, and the preset expansion multiple is 10, then the number of oversampling is 0.9/1.8 × 10 ═ 5, 0.6/1.8 × 10 ═ 3, 0.1/1.8 × 10 ═ 1, and after the difficult samples are expanded, the original training data is added to train the image classification model. Alternatively, the learning rate of the image classification model may be initially set to 0.0001, and the Adam learning rate adaptive optimizer is used for gradient descent. The batch size is 64, training 100 rounds converge.
S220, carrying out image classification training on the image classification model according to training data; the training data includes positive samples and samples obtained by oversampling the difficult case pool.
When the image classification model is built by training on training data, the training data may include positive samples and samples obtained by oversampling the difficult case pool. Raising the proportion of error-prone samples in the training data optimizes the image classification model and improves recognition accuracy. In the solder joint defect detection scenario, a positive sample may be a solder joint sample labeled with a preset detection result; the preset detection results may include good, exposed copper, missing pin, and continuous solder. After the trained model processes an input image to be detected, the output classification result is therefore one of the preset detection results.
Since the pre-trained image classification model used in the first embodiment may be a convolutional neural network, it can be generated by training on training data. To detect solder joints, the training data may comprise positive and negative solder joint samples together with samples obtained by oversampling the difficult case pool. If the amount of training data is insufficient, the sample set can be expanded by several online data expansion methods such as flipping, horizontal shifting, and color change. Specifically, the flipping method randomly mirrors a sample picture horizontally, vertically, or both to generate a new picture; the horizontal shifting method randomly translates the original sample picture 1 to 5 pixels to the left or right; and the color change method alters the brightness (offset ±30), contrast (scale 0.9 to 1.1), chroma (offset ±0.2), and saturation (scale 0.9 to 1.1) of the sample picture. Once enough sample data is available, the amounts of positive and negative samples in the training data must be balanced. Specifically, in the solder joint defect detection scenario the positive and negative solder joint samples can be balanced by downsampling: reducing the number of samples whose detection result is good balances the data and shortens the training of the image classification model. Optionally, the downsampling may target a ratio of 10:1; if there are 100 bad-joint samples and 10,000 good-joint samples, 1,000 samples are drawn at random from the 10,000 good ones, giving good : bad = 10 : 1.
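A NumPy sketch of the online expansion and the 10:1 downsampling described above (only the brightness part of the color change is shown, and np.roll wraps pixels around instead of padding as a faithful translation would; both simplifications are for brevity):

```python
import numpy as np

rng = np.random.default_rng()

def augment(img):
    """Online expansion: random flips, a 1-5 pixel horizontal shift,
    and a brightness offset within +/-30."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                     # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]                     # vertical flip
    shift = int(rng.choice([-5, -4, -3, -2, -1, 1, 2, 3, 4, 5]))
    img = np.roll(img, shift, axis=1)          # shift 1-5 px left or right
    img = img.astype(np.int16) + int(rng.integers(-30, 31))
    return np.clip(img, 0, 255).astype(np.uint8)

def balance_by_downsampling(good, bad, ratio=10):
    """Reduce the good samples so that good : bad = ratio : 1,
    e.g. 10,000 good and 100 bad become 1,000 good and 100 bad."""
    keep = min(len(good), ratio * len(bad))
    idx = rng.choice(len(good), size=keep, replace=False)
    return [good[i] for i in idx], bad
```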
EXAMPLE III
Fig. 4 shows an image classification apparatus according to the third embodiment of the present invention. On the basis of the first or second embodiment, this embodiment further provides an image classification apparatus 4, comprising:
the image segmentation module 401 is configured to segment an original image to obtain a plurality of images to be detected;
a calculating module 402, configured to, for each image to be detected, input the image to be detected into an image classification model for calculation to obtain first feature data;
in an implementation example, before the calculating module 402 inputs the image to be detected into the image classification model for calculation, and obtains the first feature data, the apparatus further includes:
and the difficult case pool establishing module is used for establishing a difficult case pool so as to carry out iterative training on the image classification model according to the error-prone samples in the difficult case pool.
In one implementation example, the image classification model includes an input layer, convolutional layers, pooling layers, a fully connected layer, and a classifier; when the image to be detected is input into the image classification model for calculation to obtain the first feature data, the calculating module 402 includes:
and the calculating unit is used for calculating the image to be detected through the input layer, the convolutional layers, and the pooling layers to obtain the first feature data.
An image feature extraction module 403, configured to perform image feature extraction on the image to be detected, and extract a preset target feature to obtain second feature data;
in an implementation example, before the image feature extraction module 403 performs image feature extraction on the image to be detected, and extracts a preset target feature to obtain second feature data, the apparatus further includes:
and the image equally dividing module is used for graying the image to be detected and equally dividing the grayed image to be detected according to a preset image segmentation rule.
A data merging module 404, configured to merge the first feature data and the second feature data to generate a feature image;
and an image classification module 405, configured to perform image classification on the feature image through the image classification model, and output a classification result.
In an implementation example, when the feature image is classified by the image classification model and a classification result is output, the image classification module 405 includes:
the image classification unit is used for inputting the feature image into a fully connected layer of the image classification model for image classification to obtain classification data;
and the detection result determining unit is used for calculating probabilities over the classification data through the classifier to determine a classification result.
In one example, the apparatus further comprises:
the model training module is used for carrying out image classification training on the image classification model according to training data; the training data includes positive samples and samples obtained by oversampling the difficult case pool.
According to the image classification apparatus provided by the embodiment of the present invention, a plurality of images to be detected are obtained by segmenting the original image; for each image to be detected, the image is input into an image classification model for calculation to obtain first feature data; image feature extraction is performed on the image to be detected, with preset target features extracted to obtain second feature data; the first and second feature data are merged to generate a feature image; and the feature image is classified by the image classification model, which outputs a classification result. Merging the second feature data, extracted from the image to be detected as preset target features, with the first feature data obtained by convolution generates a feature map that is fed back into the image classification model for classification; this improves the network structure of the model, overcomes its black-box effect, and thus improves the recognition accuracy of the classification result output after the model classifies the feature image.
Example four
The embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the image classification method in the first embodiment or the second embodiment.
Of course, the processor-executable instructions of the computer-readable storage medium provided by the embodiment of the present invention are not limited to the method operations described above, and may also perform related operations in the image classification method provided by any embodiment of the present invention.
EXAMPLE five
Fig. 5 is a schematic structural diagram of an image classification apparatus according to a fifth embodiment of the present invention. The device 5 comprises: a processor 1, a memory 2 and a computer program 3, such as a program for an image classification method, stored in said memory 2 and executable on said processor 1. The processor 1, when executing the computer program 3, implements the steps in the above-described embodiment of the image classification method, such as the steps S110 to S150 shown in fig. 1.
Illustratively, the computer program 3 may be divided into one or more modules, which are stored in the memory 2 and executed by the processor 1 to complete the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 3 in the apparatus. For example, the computer program 3 may be divided into an image segmentation module, a calculation module, an image feature extraction module, a data merging module and an image classification module, and each module has the following specific functions:
the image segmentation module is used for segmenting the original image to obtain a plurality of images to be detected;
the calculation module is used for inputting, for each image to be detected, the image to be detected into an image classification model for calculation to obtain first feature data;
the image feature extraction module is used for extracting image features of the image to be detected and extracting preset target features to obtain second feature data;
the data merging module is used for merging the first feature data and the second feature data to generate a feature image;
and the image classification module is used for performing image classification on the feature image through the image classification model and outputting a classification result.
The apparatus may include, but is not limited to, the processor 1, the memory 2, and the computer program 3 stored in the memory 2. Those skilled in the art will appreciate that fig. 5 is merely an example of an image classification apparatus and does not constitute a limitation of the apparatus; it may include more or fewer components than shown, combine certain components, or use different components. For example, the apparatus may further include input-output devices, network access devices, buses, and so on.
The processor 1 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 2 may be an internal storage unit of the detection apparatus, such as its hard disk or main memory. The memory 2 may also be an external storage device of the detection apparatus, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card. Further, the memory 2 may include both an internal storage unit and an external storage device of the apparatus. The memory 2 is used to store the computer program and the other programs and data required by the image classification method, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (10)
1. An image classification method, comprising:
segmenting an original image to obtain a plurality of images to be detected;
for each image to be detected, inputting the image to be detected into an image classification model for calculation to obtain first feature data;
extracting image features of the image to be detected, and extracting preset target features to obtain second feature data;
combining the first feature data and the second feature data to generate a feature image;
and performing image classification on the feature image through the image classification model, and outputting a classification result.
2. The image classification method according to claim 1, before performing image feature extraction on the image to be detected and extracting preset target features to obtain second feature data, comprising:
graying the image to be detected and equally dividing the grayed image to be detected according to a preset image segmentation rule.
3. The image classification method according to claim 2, wherein the preset image segmentation rule is set according to the size of the first feature data, so that the size of each divided image matches the spatial size of the first feature data.
4. The image classification method according to claim 1, before inputting the image to be detected into an image classification model for calculation to obtain first feature data, further comprising:
and establishing a difficult case pool, and performing iterative training on the image classification model according to the error-prone samples in the difficult case pool.
5. The image classification method of claim 4, characterized in that the method further comprises:
carrying out image classification training on the image classification model according to training data; the training data includes positive samples and samples obtained by oversampling the difficult case pool.
6. The image classification method of any of claims 1-5, characterized in that the image classification model comprises an input layer, a convolutional layer, a pooling layer, a fully-connected layer, and a classifier;
and calculating the image to be detected through the input layer, the convolution layer and the pooling layer to obtain first characteristic data.
7. The image classification method according to claim 6, wherein the image classification of the feature image by the image classification model and outputting a classification result comprises:
inputting the feature image into a fully connected layer of the image classification model for image classification to obtain classification data;
and calculating probabilities over the classification data through the classifier to determine a classification result.
8. An image classification apparatus, comprising:
the image segmentation module is used for segmenting the original image to obtain a plurality of images to be detected;
the calculation module is used for inputting, for each image to be detected, the image to be detected into an image classification model for calculation to obtain first feature data;
the image feature extraction module is used for extracting image features of the image to be detected and extracting preset target features to obtain second feature data;
the data merging module is used for merging the first feature data and the second feature data to generate a feature image;
and the image classification module is used for performing image classification on the feature image through the image classification model and outputting a classification result.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the image classification method according to any one of claims 1 to 7.
10. An image classification apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the image classification method according to any one of claims 1 to 7 when executing the computer program.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201911029109.7A | 2019-10-28 | 2019-10-28 | Image classification method, device and storage medium
Publications (1)

Publication Number | Publication Date
---|---
CN110751225A | 2020-02-04
Family

ID=69280329

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201911029109.7A (pending) | Image classification method, device and storage medium | 2019-10-28 | 2019-10-28

Country Status (1)

Country | Link
---|---
CN | CN110751225A (en)
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112541447A (en) * | 2020-12-18 | 2021-03-23 | 深圳地平线机器人科技有限公司 | Machine model updating method, device, medium and equipment |
CN113379678A (en) * | 2021-05-14 | 2021-09-10 | 珠海格力智能装备有限公司 | Circuit board detection method and device, electronic equipment and storage medium |
CN114275416A (en) * | 2022-01-19 | 2022-04-05 | 平安国际智慧城市科技股份有限公司 | Kitchen waste classification method, device, equipment and medium based on image recognition |
US12124928B2 (en) | 2020-12-18 | 2024-10-22 | Shenzhen Horizon Robotics Technology Co., Ltd. | Machine model update method and apparatus, medium, and device |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105488515A (en) * | 2014-09-17 | 2016-04-13 | 富士通株式会社 | Method for training convolutional neural network classifier and image processing device |
CN106372658A (en) * | 2016-08-30 | 2017-02-01 | 广东工业大学 | Vehicle classifier training method |
CN106529424A (en) * | 2016-10-20 | 2017-03-22 | 中山大学 | Vehicle logo recognition method and system based on selective search algorithm |
CN109460787A (en) * | 2018-10-26 | 2019-03-12 | 北京交通大学 | IDS Framework method for building up, device and data processing equipment |
CN110047069A (en) * | 2019-04-22 | 2019-07-23 | 北京青燕祥云科技有限公司 | A kind of image detection device |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200204