CN113065609A - Image classification method and device, electronic equipment and readable storage medium - Google Patents

Image classification method and device, electronic equipment and readable storage medium

Info

Publication number
CN113065609A
Authority
CN
China
Prior art keywords
image
image set
classification
labeled
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110438241.4A
Other languages
Chinese (zh)
Other versions
CN113065609B (en)
Inventor
张玉琪
曹锋铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Saiante Technology Service Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd filed Critical Ping An International Smart City Technology Co Ltd
Priority to CN202110438241.4A priority Critical patent/CN113065609B/en
Publication of CN113065609A publication Critical patent/CN113065609A/en
Application granted granted Critical
Publication of CN113065609B publication Critical patent/CN113065609B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F18/24 — Pattern recognition: Classification techniques
    • G06F18/214 — Pattern recognition: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/045 — Neural networks: Combinations of networks
    • G06V10/25 — Image preprocessing: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/267 — Image preprocessing: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of intelligent decision making and discloses an image classification method comprising the following steps: training a pre-constructed first convolutional neural network model with a first labeled image set to obtain a first image classification model; performing image screening and segmentation on an image set to be labeled to obtain a segmented image set; classifying and labeling the segmented image set with the first image classification model to obtain a second labeled image set; merging the first labeled image set and the second labeled image set to obtain a labeled image set; performing iterative labeling training on a pre-constructed second convolutional neural network model with the labeled image set to obtain a target image classification model; and classifying an image to be classified with the target image classification model to obtain a classification result. The invention also relates to blockchain technology: the labeled image set may be stored in blockchain nodes. The invention further provides an image classification device, an electronic device, and a storage medium. The invention can improve the accuracy of image classification.

Description

Image classification method and device, electronic equipment and readable storage medium
Technical Field
The invention relates to the field of intelligent decision making, in particular to an image classification method and device, electronic equipment and a readable storage medium.
Background
With the development of science and technology, artificial intelligence is gradually being applied in many fields. For example, to better extract information from crop images and assess crop growth, people increasingly use artificial intelligence to classify crop images.
At present, models trained on label-annotated images are mainly used to classify images. For example, crop images are classified by such models to judge the crop's growth stage (for example, wheat growth can be divided into the stages of emergence, rooting, leaf growth, tillering, jointing, booting, heading, flowering, and fruiting), so that crop growth can be assessed and suggestions provided to growers.
However, because image labeling is inefficient, label-annotated images are difficult to obtain. As a result, the model has few training samples, which degrades training and lowers the accuracy of image classification.
Disclosure of Invention
The invention provides an image classification method, an image classification device, electronic equipment and a computer-readable storage medium, and mainly aims to improve the accuracy of image classification.
In order to achieve the above object, the present invention provides an image classification method, including:
acquiring a first labeled image set, and training a pre-constructed first convolutional neural network model with the first labeled image set to obtain a first image classification model;
acquiring an image set to be labeled, and performing image screening and segmentation on the image set to be labeled to obtain a segmented image set;
classifying and labeling the segmented image set with the first image classification model to obtain a second labeled image set;
merging the first labeled image set and the second labeled image set to obtain a labeled image set, and performing iterative labeling training on a pre-constructed second convolutional neural network model with the labeled image set to obtain a target image classification model;
and when an image to be classified is received, classifying the image to be classified with the target image classification model to obtain a classification result.
Optionally, training the pre-constructed first convolutional neural network model with the first labeled image set to obtain the first image classification model includes:
Step A: performing a preset number of convolution pooling operations on the first labeled image set with the first convolutional neural network model to obtain a feature set;
Step B: calculating each feature datum in the feature set with a preset activation function to obtain the label predicted value corresponding to each feature datum;
Step C: acquiring the label real value corresponding to the label of the labeled image corresponding to the feature data, and calculating the loss value between the label predicted value and the label real value with a pre-constructed loss function;
Step D: if the loss value is greater than or equal to a preset threshold, updating the parameters of the first convolutional neural network model and returning to step A;
Step E: if the loss value is smaller than the preset threshold, obtaining and outputting the first image classification model.
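Steps A through E above amount to a train-until-converged loop. A minimal sketch, with `model_step` and `compute_loss` as hypothetical callables standing in for the patent's network forward pass/update and loss function:

```python
def train_until_converged(model_step, compute_loss, max_iters=100, loss_threshold=0.01):
    """Repeat steps A-D until the loss drops below the preset threshold (step E).

    model_step() runs one round of convolution pooling, prediction, and
    parameter update; compute_loss() returns the current loss value.
    Both are hypothetical stand-ins for the patent's model.
    """
    loss = float("inf")
    for _ in range(max_iters):
        model_step()           # steps A-B: convolution pooling + label prediction
        loss = compute_loss()  # step C: loss between predicted and real label values
        if loss < loss_threshold:
            return loss        # step E: converged, output the model
        # step D: parameters were updated inside model_step; iterate again
    return loss
```

In practice the loop would return the trained model object; here the final loss is returned so the convergence check is visible.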
Optionally, acquiring the image set to be labeled and performing image screening and segmentation on it to obtain the segmented image set includes:
performing image quality screening on the image set to be labeled to obtain a screened image set;
performing image segmentation on each image in the screened image set to obtain the region of interest of each image;
and adjusting all pixels outside the region of interest of each image in the screened image set to a preset gray value to obtain the segmented image set.
Optionally, performing image quality screening on the image set to be labeled to obtain the screened image set includes:
performing gray-level conversion on each image in the image set to be labeled to obtain a grayscale image array for each image;
calculating the variance of the grayscale image array to obtain the gray variance;
and selecting the images in the image set to be labeled whose gray variance is smaller than a preset variance value to obtain the screened image set.
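The variance-based screening above can be sketched in plain Python. This is a simplified sketch: the variance is computed directly on the grayscale values (the patent's Laplacian convolution variant is omitted), and the function names and threshold are hypothetical:

```python
def gray_variance(gray):
    """Variance of a 2-D grayscale array (list of rows of 0-255 values)."""
    pixels = [p for row in gray for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def screen_images(gray_images, variance_threshold):
    """Keep the images whose gray variance is smaller than the preset
    variance value, as described in the section above."""
    return [g for g in gray_images if gray_variance(g) < variance_threshold]
```

A real implementation would first convert each color image to grayscale and would typically apply the Laplacian operator before taking the variance.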
Optionally, performing iterative labeling training on the pre-constructed second convolutional neural network model with the labeled image set to obtain the target image classification model includes:
Step I: performing iterative training on the pre-constructed second convolutional neural network model with the labeled image set to obtain a second image classification model;
Step II: performing a classification test on the second image classification model with a preset test image set to obtain the classification accuracy, and judging whether the classification accuracy is greater than or equal to a preset accuracy threshold;
Step III: if the classification accuracy of the second image classification model is smaller than the preset accuracy threshold, updating the labeled image set and the second convolutional neural network model and returning to step I; if the classification accuracy is greater than or equal to the preset accuracy threshold, taking the second image classification model as the target image classification model.
Optionally, performing the classification test on the second image classification model with the preset test image set to obtain the classification accuracy includes:
classifying each image in the test image set with the second image classification model to obtain the corresponding classification result;
comparing the class label of each image in the test image set with the corresponding classification result for consistency to obtain a comparison result;
and computing the percentage from the comparison result to obtain the classification accuracy.
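The percentage computation above is a simple matching ratio; a sketch with hypothetical names:

```python
def classification_accuracy(true_labels, predicted_labels):
    """Percentage of test images whose predicted class matches its class label."""
    assert len(true_labels) == len(predicted_labels)
    matches = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return matches / len(true_labels) * 100
```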
Optionally, updating the labeled image set and the second convolutional neural network model includes:
labeling the segmented image set with the second image classification model to obtain a third labeled image set;
merging the labeled image set and the third labeled image set to obtain an updated labeled image set;
and adding layers to or deleting layers from the structure of the second convolutional neural network model to obtain an updated second convolutional neural network model.
In order to solve the above problem, the present invention also provides an image classification apparatus including:
the model training module is used for acquiring a first labeled image set and training a pre-constructed first convolutional neural network model with the first labeled image set to obtain a first image classification model;
the image labeling module is used for acquiring an image set to be labeled and performing image screening and segmentation on it to obtain a segmented image set; classifying and labeling the segmented image set with the first image classification model to obtain a second labeled image set; and merging the first labeled image set and the second labeled image set to obtain a labeled image set;
the image classification module is used for performing iterative labeling training on a pre-constructed second convolutional neural network model with the labeled image set to obtain a target image classification model, and, when an image to be classified is received, classifying it with the target image classification model to obtain a classification result.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one computer program; and
a processor executing the computer program stored in the memory to implement the image classification method described above.
In order to solve the above problem, the present invention also provides a computer-readable storage medium, in which at least one computer program is stored, the at least one computer program being executed by a processor in an electronic device to implement the image classification method described above.
In the embodiment of the invention, a pre-constructed first convolutional neural network model is trained with the first labeled image set to obtain a first image classification model; an image set to be labeled is screened and segmented to obtain a segmented image set; the segmented image set is classified and labeled with the first image classification model to obtain a second labeled image set, so labeling images with a model improves labeling efficiency; the first labeled image set and the second labeled image set are merged to obtain a labeled image set; a pre-constructed second convolutional neural network model is then subjected to iterative labeling training with the labeled image set to obtain a target image classification model, whereby the model repeatedly relabels the image set, the continuous updates improve the labeling accuracy of the labeled image set, and the updated labeled image set in turn retrains the model, improving the performance of the target image classification model and hence the accuracy of image classification; and when an image to be classified is received, it is classified with the target image classification model to obtain a classification result.
Therefore, the image classification method, the image classification device, the electronic equipment and the readable storage medium provided by the embodiment of the invention improve the accuracy of image classification.
Drawings
Fig. 1 is a schematic flowchart of an image classification method according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of an image classification apparatus according to an embodiment of the present invention;
fig. 3 is a schematic internal structural diagram of an electronic device implementing an image classification method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides an image classification method. The execution subject of the image classification method includes, but is not limited to, at least one of electronic devices such as a server and a terminal, which can be configured to execute the method provided by the embodiments of the present application. In other words, the image classification method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Referring to fig. 1, which is a schematic flow chart of an image classification method according to an embodiment of the present invention, in an embodiment of the present invention, the image classification method includes:
s1, obtaining a first labeled image set, and training a pre-constructed first convolution neural network model by using the first labeled image set to obtain a first image classification model.
In the embodiment of the present invention, the first labeled image set is a set of labeled crop images, i.e., crop images labeled with crop growth-stage labels. If the crop images are wheat-field images, wheat growth can be divided into 9 stages: emergence, rooting, leaf growth, tillering, jointing, booting, heading, flowering, and fruiting, so the growth-stage labels cover these 9 stages; if a crop image shows the heading stage, it is labeled "heading". Crop growth is a continuous process that can span several months, growth stages are difficult to judge, and classification labeling must be done by professionals, so labeling crop images is inefficient and the first labeled image set is not easy to obtain.
In the embodiment of the invention, in order to train an image classification model that accurately judges crop growth, the first labeled image set is used to train the pre-constructed first convolutional neural network model. A convolutional neural network model typically contains hidden layers and an output layer. The hidden layers comprise convolution layers and pooling layers: a convolution layer extracts features from the input image to obtain an output feature map, and a pooling layer performs feature selection and information filtering on the output feature map to obtain feature data. The output layer combines the feature data nonlinearly and outputs a classification label for the input data using a logistic function or a normalized exponential (softmax) function, thereby classifying the image. Optionally, in this embodiment of the present invention, the first convolutional neural network model is an EfficientNet-b3 model.
In detail, in the embodiment of the present invention, training the pre-constructed first convolutional neural network model with the first labeled image set includes:
Step A: performing a preset number of convolution pooling operations on the first labeled image set with the first convolutional neural network model to obtain a feature set;
In detail, the first labeled image set is input into the first convolutional neural network model; each image in the first labeled image set is convolved by a convolution layer of the model's hidden layer to obtain a feature map, the feature map is then pooled by a pooling layer to obtain feature data, and all the feature data are collected to obtain the feature set.
Step B: calculating each feature datum in the feature set with a preset activation function to obtain the label predicted value corresponding to each feature datum;
Optionally, in this embodiment of the present invention, the activation function is a ReLU function.
Step C: acquiring the label real value corresponding to the label of the labeled image corresponding to the feature data, and calculating the loss value between the label predicted value and the label real value with a pre-constructed loss function;
In the embodiment of the present invention, the label real value is determined by the label of the labeled image; for example, if the label of labeled image A is the "A" label, the corresponding label real value is 1.
Step D: if the loss value is greater than or equal to a preset threshold, updating the parameters of the first convolutional neural network model and returning to step A;
Step E: if the loss value is smaller than the preset threshold, obtaining and outputting the first image classification model.
In detail, the convolution pooling operation includes: convolution operations and pooling operations.
Further, the output size of the convolution operation varies as:
ω' = (ω − k + 2p) / f + 1
where ω' is the size of the intermediate image in the convolution data set, ω is the size of an image in the first labeled image set, k is the size of the preset convolution kernel, f is the stride of the preset convolution operation, and p is the preset zero-padding amount.
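A quick numeric check of this relation, assuming the standard form ω' = (ω − k + 2p)/f + 1 for input size ω, kernel size k, stride f, and padding p:

```python
def conv_output_size(w, k, f, p):
    """Output size of a convolution: w' = (w - k + 2p) / f + 1."""
    return (w - k + 2 * p) // f + 1
```

For example, a 3×3 kernel with stride 1 and padding 1 preserves the input size, which is the usual "same" configuration.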
Preferably, in the embodiment of the present invention, the pooling operation is a max pooling operation performed on the convolution data set to obtain the feature set.
Further, the activation function according to the preferred embodiment of the present invention includes:
μ_t = max(0, s)
where μ_t represents the label predicted value and s represents the feature data in the feature set.
In detail, the loss function according to the preferred embodiment of the present invention includes:
L_ce = −(1/N) Σᵢ yᵢ · log(pᵢ)
where L_ce represents the loss value, N is the total number of images in the first labeled image set, i is a positive integer indexing those images, yᵢ is the label real value corresponding to the i-th image in the first labeled image set, and pᵢ is the label predicted value corresponding to the i-th image.
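A plain-Python sketch of cross-entropy over the label real values yᵢ and label predicted values pᵢ (assuming the standard form of the loss; the original formula image is not reproduced in the text, so this is an interpretation of the surviving variable definitions):

```python
import math

def cross_entropy_loss(y_true, p_pred, eps=1e-12):
    """L_ce = -(1/N) * sum_i y_i * log(p_i).

    y_true: label real values (one-hot, 0 or 1); p_pred: label predicted
    values (probabilities). eps guards against log(0).
    """
    n = len(y_true)
    return -sum(y * math.log(p + eps) for y, p in zip(y_true, p_pred)) / n
```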
S2, acquiring an image set to be labeled, and performing image screening and segmentation on the image set to be labeled to obtain a segmented image set.
In the embodiment of the present invention, in order for the image classification model to judge image classes more accurately, more labeled images need to be acquired for training. The embodiment therefore acquires an image set to be labeled, whose images are unlabeled images with the same kind of content as the images in the first labeled image set. Optionally, the images in the image set to be labeled are images of the farmland where the crops grow; for example, unlabeled wheat-field images can be obtained from an image database.
Further, in order to train the subsequent model more accurately, the embodiment of the present invention needs to ensure the quality of the images in the image set to be labeled. The image set to be labeled is therefore screened: high-quality images are retained and low-quality images with a blurry visual effect or excessive noise are removed, yielding the screened image set to be labeled. In detail, the embodiment converts each image in the image set to be labeled to gray level to obtain the corresponding grayscale image array, and calculates the variance of the grayscale image array to obtain the gray variance; the images whose gray variance is smaller than a preset variance value are selected to obtain the screened image set to be labeled. Optionally, the embodiment performs a convolution operation with a pre-constructed Laplacian operator to obtain the gray variance; the variance results of all images are then compared and images with smaller variance results are retained to obtain images with high definition.
Further, to facilitate accurate labeling, the interference of irrelevant background regions in each image of the screened image set to be labeled needs to be removed: each image is segmented and its Region of Interest (ROI) is marked. Preferably, in the embodiment of the present invention, the region of interest is the region containing the crop. Further, the region outside the region of interest in each image is subjected to pixel adjustment to obtain the segmented image set; optionally, all pixels outside the region of interest in each image of the screened image set to be labeled are adjusted to a preset gray value. For example, all pixels outside the region of interest are set to 0, so that the region outside the region of interest becomes black, removing the interference of the irrelevant background region and yielding the segmented image set.
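The pixel-adjustment step can be sketched as follows (pure Python over a 2-D pixel array; the rectangular ROI tuple is a hypothetical simplification of the patent's segmentation output):

```python
def mask_outside_roi(image, roi, fill=0):
    """Set every pixel outside the region of interest to a preset gray value.

    image: 2-D list of pixel values; roi: (top, left, bottom, right) with
    bottom/right exclusive. fill=0 blacks out the background, as in the
    example above.
    """
    top, left, bottom, right = roi
    return [
        [px if top <= r < bottom and left <= c < right else fill
         for c, px in enumerate(row)]
        for r, row in enumerate(image)
    ]
```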
S3, classifying and labeling the segmented image set with the first image classification model to obtain a second labeled image set.
In the embodiment of the invention, in order for the image classification model to judge image classes more accurately, for example the growth stage in crop images, more samples need to be obtained for iterative training. The images of the segmented image set can be labeled with the first image classification model to obtain the second labeled image set as samples. Specifically, the first image classification model performs classification prediction on the images of the segmented image set, a predicted pseudo label is attached to each image according to the prediction result, and the second labeled image set is obtained when prediction finishes. For example, the first image classification model classifies and labels the segmented wheat-field images; if the wheat in an image is in the flowering stage, the model attaches the pseudo label "flowering" to it.
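The pseudo-labeling step reduces to attaching the model's prediction to each segmented image. A sketch with a hypothetical `classify` callable standing in for the first image classification model:

```python
def pseudo_label(images, classify):
    """Attach the model's predicted class to each segmented image as a
    pseudo label, producing (image, label) pairs for the second labeled
    image set. classify is a hypothetical stand-in for the first image
    classification model."""
    return [(img, classify(img)) for img in images]
```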
S4, merging the first labeled image set and the second labeled image set to obtain a labeled image set, and performing iterative labeling training on the pre-constructed second convolutional neural network model with the labeled image set to obtain a target image classification model.
In the embodiment of the invention, the first labeled image set and the second labeled image set are merged to obtain a labeled image set. Further, in order for the image classification model to judge crop growth more accurately, the pre-constructed second convolutional neural network model is subjected to iterative labeling training with the labeled image set, which includes the following steps:
Step I: performing iterative training on the pre-constructed second convolutional neural network model with the labeled image set to obtain a second image classification model;
In the embodiment of the present invention, the training in step I proceeds in the same manner as S1 and is not repeated here.
Step II: performing a classification test on the second image classification model with a preset test image set to obtain the classification accuracy, and judging whether the classification accuracy is greater than or equal to a preset accuracy threshold;
In detail, in order to determine whether the second image classification model can classify images accurately, it is necessary to judge whether its classification accuracy meets the preset threshold.
Further, the embodiment of the present invention performs the classification test on the second image classification model with the preset test image set as follows: each image in the test image set is classified with the second image classification model to obtain the corresponding classification result, where the images in the test image set are images labeled with class labels; the class label of each image is compared with the corresponding classification result for consistency to obtain a comparison result; and the percentage is computed from the comparison result to obtain the classification accuracy. For example: the test image set contains 100 images; if the class labels of 95 images are consistent with their classification results and 5 are inconsistent, the classification accuracy is 95/100 × 100% = 95%.
Step III: if the classification accuracy of the second image classification model is smaller than the preset accuracy threshold, updating the labeled image set and the second convolutional neural network model and returning to step I; if the classification accuracy is greater than or equal to the preset accuracy threshold, taking the second image classification model as the target image classification model.
In detail, suppose the preset accuracy threshold is 95%: if the classification accuracy of the second image classification model is less than 95%, the labeled image set and the second convolutional neural network model are updated, and the updated convolutional neural network model is iteratively trained with the updated labeled image set until the classification accuracy reaches 95% or more.
Further, in an embodiment of the present invention, updating the labeled image set and the second convolutional neural network model includes: labeling the segmented image set with the second image classification model to obtain a third labeled image set, and merging the labeled image set with the third labeled image set to obtain the updated labeled image set; and performing structure adjustment on the second convolutional neural network model to obtain the updated second convolutional neural network model. Specifically, the structure adjustment includes adding layers to or deleting layers from the second convolutional neural network, such as adding or deleting convolution layers and pooling layers, and modifying the parameters of the activation function and the loss function.
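Steps I through III form a self-training loop. A minimal sketch, with `train`, `evaluate`, `relabel`, and `adjust` as hypothetical stand-ins for the patent's training, testing, pseudo-labeling, and structure-adjustment operations:

```python
def iterative_label_training(train, evaluate, relabel, adjust, labeled_set,
                             accuracy_threshold=95.0, max_rounds=10):
    """Steps I-III: train on the labeled set, test the resulting model,
    and, while accuracy stays below the threshold, relabel the segmented
    images with the current model, merge them in, adjust the network
    structure, and retrain. All callables are hypothetical stand-ins.
    """
    model = None
    for _ in range(max_rounds):
        model = train(labeled_set)                 # step I
        if evaluate(model) >= accuracy_threshold:  # step II
            return model                           # target image classification model
        labeled_set = labeled_set + relabel(model) # step III: merge in third labeled set
        adjust()                                   # add/delete layers, tune parameters
    return model
```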
In another embodiment of the present invention, the labeled image set is stored in blockchain nodes, and the high data throughput of blockchain nodes is utilized to improve the access efficiency of the labeled image set.
And S5, when the image to be classified is received, classifying the image to be classified by using the target image classification model to obtain a classification result.
In the embodiment of the invention, the image to be classified is a crop image to be classified. The image to be classified is classified by using the target image classification model to obtain a classification result, the growth stage of the crop is judged according to the classification result, a suggestion is provided for the grower, and a claim settlement basis can also be provided for the agricultural insurance industry. For example: the target image classification model is an image classification model for judging the growth of wheat, and a wheat image to be classified is received. The image classification model for judging the growth of wheat is used to classify the wheat image, that is, to judge its growth stage, such as rooting. After the judgment is finished, a growth-stage label is attached to the wheat image to be classified, yielding the classification result. This classification result can provide suggestions for wheat growers and can also provide a claim settlement basis for agricultural insurance.
As shown in fig. 2, it is a functional block diagram of the image classification apparatus of the present invention.
The image classification apparatus 100 according to the present invention may be installed in an electronic device. According to the implemented functions, the image classification apparatus may include a model training module 101, an image labeling module 102, and an image classification module 103, which may also be referred to as units. A module here refers to a series of computer program segments that can be executed by a processor of an electronic device, can perform fixed functions, and are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the model training module 101 is configured to obtain a first labeled image set, train a pre-constructed first convolutional neural network model by using the first labeled image set, and obtain a first image classification model;
in the embodiment of the present invention, the first labeled image set is a set of labeled crop images, i.e., crop images labeled with crop growth stage labels. If the crop images are wheat field images, wheat can be divided into 9 growth stages: seedling emergence, rooting, leaf growing, tillering, node pulling, booting, ear sprouting, flowering and fruiting, so there are 9 growth stage labels in total; if a certain crop image is in the ear sprouting stage, the crop image is marked with the "ear sprouting" label. Crop growth is a continuous process that can last several months, judging the growth stage is difficult, and classification labeling must be carried out by professional personnel, so the labeling efficiency of crop images is low and the first labeled image set is not easy to obtain.
In the embodiment of the present invention, in order to train an image classification model that accurately judges crop growth, the model training module 101 needs to train a pre-constructed first convolutional neural network model by using the first labeled image set. The convolutional neural network model typically contains a hidden layer and an output layer. The hidden layer comprises a convolutional layer and a pooling layer: the convolutional layer extracts features from an input image to obtain an output feature map, and the pooling layer performs feature selection and information filtering on the output feature map to obtain feature data. The output layer performs nonlinear combination on the feature data and outputs a classification label for the input data by using a logistic function or a normalized exponential (softmax) function, thereby realizing image classification. Optionally, in this embodiment of the present invention, the first convolutional neural network model is an EfficientNet-b3 model.
In detail, the model training module 101 in the embodiment of the present invention trains a pre-constructed first convolutional neural network model by using the following means, including:
step A: performing convolution pooling operation on the first labeled image set for preset times by using the first convolution neural network model to obtain a feature set;
in detail, the first labeled image set is input into the first convolutional neural network model; each image of the first labeled image set is convolved by a convolutional layer of the hidden layer of the first convolutional neural network model to obtain a feature map, the feature map is then pooled by a pooling layer to obtain feature data, and all the feature data are collected to obtain the feature set.
And B: calculating each feature data in the feature set by using a preset activation function to obtain a label predicted value corresponding to each feature data;
optionally, in this embodiment of the present invention, the activation function is a Relu function.
And C: acquiring the label real value corresponding to the label contained in the labeled image corresponding to the feature data, and calculating a loss value between the label predicted value and the label real value by using a pre-constructed loss function;
in the embodiment of the present invention, the label real value is determined by the label contained in the labeled image. For example: if the label of labeled image A is the "A" label, the corresponding label real value is 1.
Step D: if the loss value is larger than or equal to a preset threshold value, updating the parameters of the first convolution neural network model, and returning to the step A;
step E: and if the loss value is smaller than a preset threshold value, obtaining and outputting the first image classification model.
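Steps A–E form a standard train-until-converged loop. The sketch below illustrates the control flow only, with a toy one-parameter model standing in for the convolutional network (all names and the gradient-descent update are illustrative assumptions, not the patent's implementation):

```python
def train_until_converged(loss_fn, update_fn, params, threshold=1e-3, max_iters=1000):
    """Steps A-C: evaluate the loss; step D: if the loss is >= the preset
    threshold, update the parameters and repeat; step E: otherwise output
    the trained model (here, just its parameters)."""
    for _ in range(max_iters):
        loss = loss_fn(params)        # forward pass and loss value (steps A-C)
        if loss < threshold:          # converged, output the model (step E)
            return params, loss
        params = update_fn(params)    # update parameters, return to step A (step D)
    return params, loss_fn(params)

# Toy stand-in for the network: fit w toward 2.0 on the loss (w - 2)^2.
toy_loss = lambda w: (w - 2.0) ** 2
toy_update = lambda w: w - 0.1 * 2.0 * (w - 2.0)   # one gradient-descent step
w, final_loss = train_until_converged(toy_loss, toy_update, 0.0)
```

The same skeleton applies unchanged when `params` is a full set of network weights and `update_fn` is an optimizer step.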
In detail, the convolution pooling operation includes: convolution operations and pooling operations.
Further, the convolution operation channel varies as:
$$\omega' = \frac{\omega - k + 2p}{f} + 1$$

where ω′ is the size of the intermediate images of the convolution data set, ω is the size of the images in the first labeled image set, k is the size of the preset convolution kernel, f is the stride of the preset convolution operation, and p is the preset zero-padding amount.
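As a quick check on this relationship, the output size can be computed from the input size, kernel size, stride, and padding (a sketch; the function name is an assumption, and the variables follow the symbols described in the text):

```python
def conv_output_size(omega, k, f, p):
    """Output size of one convolution: (omega - k + 2p) / f + 1, where omega
    is the input size, k the kernel size, f the stride, and p the zero padding."""
    numerator = omega - k + 2 * p
    if numerator % f != 0:
        raise ValueError("input size, kernel, stride and padding do not align")
    return numerator // f + 1

# A 3x3 kernel with stride 1 and padding 1 preserves the input size.
```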
Preferably, in the embodiment of the present invention, the pooling operation is a maximal pooling operation performed on the convolution data set to obtain the feature set.
Further, the activation function according to the preferred embodiment of the present invention includes:
$$\mu_t = \max(0, s)$$

where μt represents the label predicted value and s represents the feature data in the feature set.
In detail, the loss function according to the preferred embodiment of the present invention includes:
$$L_{ce} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log p_i + (1 - y_i)\log(1 - p_i)\right]$$

where Lce represents the loss value, N is the total number of images in the labeled image set, i is a positive integer indexing the images in the first labeled image set, yi is the label real value corresponding to the i-th image in the first labeled image set, and pi is the label predicted value corresponding to the i-th image in the first labeled image set.
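The loss described here, a cross-entropy between label real values yi and label predicted values pi averaged over N images, can be sketched as follows (assuming the standard binary cross-entropy form, since the patent's exact expression is reproduced from an image and may differ):

```python
import math

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    """Average cross-entropy over N images:
    L_ce = -(1/N) * sum_i [ y_i*log(p_i) + (1 - y_i)*log(1 - p_i) ]."""
    n = len(y_true)
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return -total / n
```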
The image annotation module 102 is configured to obtain an image set to be annotated, and perform image screening and segmentation processing on the image set to be annotated to obtain a segmented image set; classifying and labeling the segmentation image set by using the first image classification model to obtain a second labeled image set; merging the first annotation image set and the second annotation image set to obtain an annotation image set;
in the embodiment of the present invention, in order to enable the image classification model to classify images more accurately, more labeled images need to be acquired for training. The image labeling module 102 therefore acquires an image set to be labeled, where the images in the image set to be labeled are unlabeled images with the same kind of content as the images in the first labeled image set. Optionally, the images in the image set to be labeled are images of the farmland where the crops are located; for example, unlabeled wheat farmland pictures may be obtained from a picture database.
Further, in order to train the subsequent model more accurately, the embodiment of the present invention needs to ensure the quality of the images in the image set to be labeled. Therefore, the image labeling module 102 screens the image set to be labeled, retaining the high-quality images and removing the low-quality images with blurred visual effect or excessive noise, to obtain the screened image set to be labeled. In detail, the image labeling module 102 performs gray-scale conversion on each image in the image set to be labeled to obtain a corresponding gray-scale map array, and calculates the variance of the gray-scale map array to obtain the gray variance; the images in the image set to be labeled whose gray variance meets the preset variance threshold are then selected to obtain the screened image set to be labeled. Optionally, the embodiment of the present invention performs a convolution operation using a pre-constructed Laplacian operator to obtain the gray variance; since a low Laplacian-response variance indicates blur, the images with higher variance are retained so that only pictures with high definition remain.
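The Laplacian-based sharpness screening can be sketched as follows, using a plain nested-list grayscale image and the common 4-neighbour Laplacian kernel (the kernel choice, threshold handling, and function names are illustrative assumptions):

```python
def laplacian_variance(gray):
    """Convolve a grayscale image (list of rows) with the 3x3 Laplacian
    kernel [[0,1,0],[1,-4,1],[0,1,0]] and return the variance of the
    response; a low variance suggests a blurred image."""
    h, w = len(gray), len(gray[0])
    responses = [
        gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1] + gray[y][x + 1]
        - 4 * gray[y][x]
        for y in range(1, h - 1)
        for x in range(1, w - 1)
    ]
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def screen_images(images, variance_threshold):
    """Keep only the images whose Laplacian variance reaches the threshold."""
    return [img for img in images if laplacian_variance(img) >= variance_threshold]
```

A perfectly flat (featureless) image scores variance 0, while a high-contrast image scores high, which is why low variance flags blur.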
Further, in order to facilitate accurate labeling, it is necessary to remove the interference of irrelevant background regions in each image in the screened image set to be labeled: each image is segmented and its Region of Interest (ROI) is marked. Preferably, in an embodiment of the present invention, the region of interest is the region where the crops are located. The region outside the region of interest in each image in the screened image set to be labeled is then subjected to pixel adjustment to obtain the segmented image set; optionally, all pixels in the region outside the region of interest are adjusted to a preset gray value. For example, all pixels of the region outside the region of interest are set to 0, so that the region outside the region of interest becomes black, thereby removing the interference of the irrelevant background region and obtaining the segmented image set.
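The pixel adjustment outside the region of interest can be sketched as follows, assuming a rectangular ROI on a nested-list grayscale image (the function name and rectangle representation are illustrative):

```python
def mask_outside_roi(image, roi, fill=0):
    """Set every pixel outside the rectangular region of interest to the
    preset gray value `fill` (0 turns the background black).
    roi = (top, left, bottom, right), with bottom and right exclusive."""
    top, left, bottom, right = roi
    return [
        [pixel if (top <= y < bottom and left <= x < right) else fill
         for x, pixel in enumerate(row)]
        for y, row in enumerate(image)
    ]
```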
In the embodiment of the invention, in order to enable the image classification model to judge the classification of an image more accurately, for example to judge the growth stage in a crop image, more samples need to be obtained for iterative training. The image labeling module 102 may label the images of the segmented image set by using the first image classification model, obtaining a second labeled image set as samples. Specifically, the first image classification model performs classification prediction on the images of the segmented image set, a predicted pseudo label is attached to each image according to the classification prediction result, and the second labeled image set is obtained after prediction is finished. For example, the first image classification model is used to classify and label the segmented wheat field images; if the wheat in a certain image is in the flowering stage, the first image classification model marks the image with the pseudo label "flowering".
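The pseudo-labeling step can be sketched as follows, with a stand-in classifier in place of the first image classification model (the classifier and the stage list are illustrative assumptions):

```python
GROWTH_STAGES = ["seedling emergence", "rooting", "flowering"]

def pseudo_label(segmented_images, classify):
    """Attach the model's predicted class to each segmented image as a
    pseudo label, yielding (image, pseudo_label) pairs."""
    return [(image, classify(image)) for image in segmented_images]

# Stand-in for the first image classification model: maps an image
# (here just a flat list of pixels) to one of the stage labels.
dummy_model = lambda image: GROWTH_STAGES[sum(image) % len(GROWTH_STAGES)]
second_labeled_set = pseudo_label([[0, 1], [1, 1]], dummy_model)
```

In the real pipeline `classify` would be the trained first image classification model, and the resulting pairs are merged with the first labeled image set for iterative training.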
In an embodiment of the present invention, the image annotation module 102 merges the first annotated image set and the second annotated image set to obtain an annotated image set.
The image classification module 103 is configured to perform iterative annotation training on a pre-constructed second convolutional neural network model by using the annotated image set to obtain a target image classification model; and when receiving the image to be classified, classifying the image to be classified by using the target image classification model to obtain a classification result.
Further, in order to enable the image classification model to more accurately judge the growth vigor of crops, the image classification module 103 performs iterative labeling training on the second convolutional neural network model which is pre-constructed by using the following means, including:
step I: performing iterative training on a pre-constructed second convolutional neural network model by using the labeled image set to obtain a second image classification model;
in the embodiment of the present invention, the training manner of step I is the same as that of S1, and is not repeated herein.
Step II: carrying out classification test on the second image classification model by using a preset test image set to obtain classification accuracy, and judging whether the classification accuracy of the second image classification model is greater than or equal to a preset accuracy threshold value or not;
in detail, in order to ensure that the second image classification model can accurately classify images, it is necessary to determine whether the classification accuracy of the second image classification model meets the preset threshold.
Further, the image classification module 103 according to the embodiment of the present invention performs a classification test on the second image classification model by using a preset test image set to obtain the classification accuracy, including: classifying each image in the test image set by using the second image classification model to obtain a corresponding classification result, wherein the images in the test image set are images already marked with class labels; comparing the class label of each image in the test image set with the corresponding classification result for consistency to obtain a comparison result; and calculating a percentage according to the comparison result to obtain the classification accuracy. For example: the test image set contains 100 images, and the comparison result shows that the class labels of 95 images are consistent with the corresponding classification results while 5 images are inconsistent, so the classification accuracy is 95/100 × 100% = 95%.
Step III: if the classification accuracy of the second image classification model is smaller than a preset accuracy threshold, updating the labeled image set and the second convolutional neural network model, and returning to the step I; if the classification accuracy of the second image classification model is greater than or equal to a preset accuracy threshold, taking the second image classification model as the target image classification model;
in detail, if the preset accuracy threshold is 95% and the classification accuracy of the second image classification model is less than 95%, the labeled image set and the second convolutional neural network model are updated, and the updated convolutional neural network model is iteratively trained with the updated labeled image set until the classification accuracy of the second image classification model is greater than or equal to 95%.
Further, in the embodiment of the present invention, the updating the labeled image set and the second convolutional neural network model by the image classification module 103 includes: classifying and labeling the segmented image set by using the second image classification model to obtain a third labeled image set, and merging the labeled image set and the third labeled image set to obtain an updated labeled image set; and performing structure adjustment on the second convolutional neural network model to obtain an updated second convolutional neural network model. Specifically, the updated second convolutional neural network model is the structure-adjusted second convolutional neural network model, where the adjustment includes adding or deleting parts of the structure of the second convolutional neural network, such as adding or deleting convolutional layers and pooling layers, and modifying parameters of the activation function and the loss function.
In another embodiment of the present invention, the image classification module 103 stores the labeled image set in blockchain nodes, and improves the access efficiency of the labeled image set by utilizing the high data throughput of blockchain nodes.
In the embodiment of the present invention, the image to be classified is a crop image to be classified. The image classification module 103 classifies the image to be classified by using the target image classification model to obtain a classification result, and judges the growth stage of the crop according to the classification result, which provides a suggestion for the grower and can also provide a claim settlement basis for the agricultural insurance industry. For example: the target image classification model is an image classification model for judging the growth of wheat, and a wheat image to be classified is received. The image classification model for judging the growth of wheat is used to classify the wheat image, that is, to judge its growth stage, such as rooting. After the judgment is finished, a growth-stage label is attached to the wheat image to be classified, yielding the classification result. This classification result can provide suggestions for wheat growers and can also provide a claim settlement basis for agricultural insurance.
Fig. 3 is a schematic structural diagram of an electronic device implementing the image classification method according to the present invention.
The electronic device may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as an image classification program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as codes of an image classification program, etc., but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the whole electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (e.g., image classification programs, etc.) stored in the memory 11 and calling data stored in the memory 11.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The communication bus 12 is arranged to enable connection communication between the memory 11 and the at least one processor 10, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
Fig. 3 shows only an electronic device having components, and those skilled in the art will appreciate that the structure shown in fig. 3 does not constitute a limitation of the electronic device, and may include fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Optionally, the communication interface 13 may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which is generally used to establish a communication connection between the electronic device and other electronic devices.
Optionally, the communication interface 13 may further include a user interface, which may be a Display (Display), an input unit (such as a Keyboard (Keyboard)), and optionally, a standard wired interface, or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The image classification program stored in the memory 11 of the electronic device is a combination of a plurality of computer programs, which when executed in the processor 10, enable:
acquiring a first labeled image set, and training a pre-constructed first convolution neural network model by using the first labeled image set to obtain a first image classification model;
acquiring an image set to be annotated, and carrying out image screening and segmentation processing on the image set to be annotated to obtain a segmented image set;
classifying and labeling the segmentation image set by using the first image classification model to obtain a second labeled image set;
merging the first annotation image set and the second annotation image set to obtain an annotation image set; performing iterative labeling training on a pre-constructed second convolutional neural network model by using the labeled image set to obtain a target image classification model;
and when receiving the image to be classified, classifying the image to be classified by using the target image classification model to obtain a classification result.
Specifically, the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the computer program, which is not described herein again.
Further, the electronic device integrated module/unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. The computer readable medium may be non-volatile or volatile. The computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
Embodiments of the present invention may also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor of an electronic device, the computer program may implement:
acquiring a first labeled image set, and training a pre-constructed first convolution neural network model by using the first labeled image set to obtain a first image classification model;
acquiring an image set to be annotated, and carrying out image screening and segmentation processing on the image set to be annotated to obtain a segmented image set;
classifying and labeling the segmentation image set by using the first image classification model to obtain a second labeled image set;
merging the first annotation image set and the second annotation image set to obtain an annotation image set; performing iterative labeling training on a pre-constructed second convolutional neural network model by using the labeled image set to obtain a target image classification model;
and when receiving the image to be classified, classifying the image to be classified by using the target image classification model to obtain a classification result.
Further, the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A method of image classification, the method comprising:
acquiring a first labeled image set, and training a pre-constructed first convolution neural network model by using the first labeled image set to obtain a first image classification model;
acquiring an image set to be annotated, and carrying out image screening and segmentation processing on the image set to be annotated to obtain a segmented image set;
classifying and labeling the segmentation image set by using the first image classification model to obtain a second labeled image set;
merging the first annotation image set and the second annotation image set to obtain an annotation image set; performing iterative labeling training on a pre-constructed second convolutional neural network model by using the labeled image set to obtain a target image classification model;
and when receiving the image to be classified, classifying the image to be classified by using the target image classification model to obtain a classification result.
2. The image classification method of claim 1, wherein the training of the pre-constructed first convolutional neural network model with the first labeled image set to obtain a first image classification model comprises:
step A: performing convolution pooling operation on the first labeled image set for preset times by using the first convolution neural network model to obtain a feature set;
and B: calculating each feature data in the feature set by using a preset activation function to obtain a label predicted value corresponding to each feature data;
and C: acquiring the label real value corresponding to the label contained in the labeled image corresponding to the feature data, and calculating a loss value between the label predicted value and the label real value by using a pre-constructed loss function;
step D: if the loss value is larger than or equal to a preset threshold value, updating the parameters of the first convolution neural network model, and returning to the step A;
step E: and if the loss value is smaller than a preset threshold value, obtaining and outputting the first image classification model.
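Steps A through E describe a threshold-gated training loop. The sketch below shows only that control flow; the claimed CNN feature extractor is replaced by synthetic features and a tiny sigmoid model, and the threshold and learning rate are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))               # stand-in for the feature set (step A elided)
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # "label real values"
w = np.zeros(4)                            # model parameters
threshold, lr = 0.25, 0.5                  # invented demo values

for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-X @ w))       # step B: activation -> label predicted values
    loss = float(np.mean((p - y) ** 2))    # step C: loss between prediction and truth
    if loss < threshold:                   # step E: loss small enough -> stop, keep model
        break
    grad = X.T @ ((p - y) * p * (1 - p)) / len(y)
    w -= lr * grad                         # step D: update parameters and return to the top
print(loss < threshold)
```

The essential structure is that the loss comparison, not a fixed epoch count, decides when the model is "obtained and output".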
3. The image classification method according to claim 1, wherein acquiring the image set to be labeled and performing image screening and segmentation processing on the image set to be labeled to obtain the segmented image set comprises:
performing image quality screening on the image set to be labeled to obtain a screened image set;
performing image segmentation on each image in the screened image set to obtain a region of interest of each image in the screened image set; and
adjusting all pixels outside the region of interest of each image in the screened image set to a preset gray value to obtain the segmented image set.
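The last step of claim 3 is a masking operation: pixels outside the region of interest are overwritten with one preset gray value. A minimal NumPy sketch, assuming a boolean ROI mask and an arbitrary preset value of 128 (the patent does not specify the value):

```python
import numpy as np

def mask_outside_roi(image, roi_mask, gray_value=128):
    """Set every pixel outside the region of interest to a preset gray value."""
    out = np.full_like(image, gray_value)  # start from a uniform gray canvas
    out[roi_mask] = image[roi_mask]        # copy back only the ROI pixels
    return out

image = np.arange(16, dtype=np.uint8).reshape(4, 4)
roi = np.zeros((4, 4), dtype=bool)
roi[1:3, 1:3] = True                       # keep only the central 2x2 region
print(mask_outside_roi(image, roi))
```

Flattening the background to a constant value this way removes irrelevant context before the images are fed to the classifier.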
4. The image classification method according to claim 3, wherein performing image quality screening on the image set to be labeled to obtain the screened image set comprises:
performing gray-scale conversion on each image in the image set to be labeled to obtain a gray-scale image array for each image;
calculating the variance of each gray-scale image array to obtain a gray variance; and
selecting the images in the image set to be labeled whose gray variance is smaller than a preset variance value to obtain the screened image set.
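Claim 4's screening reduces to: convert to grayscale, take the pixel variance, and compare it with a preset value. The sketch below follows the claim as written (keeping images whose variance is *below* the threshold); the luminance weights and the threshold value of 10.0 are conventional choices not specified by the patent.

```python
import numpy as np

def gray_variance(image_rgb):
    """Convert an RGB array to grayscale (ITU-R luminance weights) and
    return the variance of the resulting gray-scale image array."""
    gray = image_rgb @ np.array([0.299, 0.587, 0.114])
    return float(gray.var())

def screen(images, variance_threshold):
    """Keep the images whose gray variance is smaller than the preset
    variance value, as recited in the claim."""
    return [img for img in images if gray_variance(img) < variance_threshold]

flat = np.full((8, 8, 3), 100.0)                 # uniform image: variance 0
rng = np.random.default_rng(0)
noisy = rng.uniform(0, 255, size=(8, 8, 3))      # high-contrast random image
kept = screen([flat, noisy], variance_threshold=10.0)
print(len(kept))  # prints 1: only the low-variance image passes
```

Gray-level variance is a cheap global statistic, so this screen costs one pass over each image and needs no trained model.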
5. The image classification method according to any one of claims 1 to 4, wherein performing iterative labeling training on the pre-constructed second convolutional neural network model by using the labeled image set to obtain the target image classification model comprises:
step I: performing iterative training on the pre-constructed second convolutional neural network model by using the labeled image set to obtain a second image classification model;
step II: performing a classification test on the second image classification model by using a preset test image set to obtain a classification accuracy, and judging whether the classification accuracy of the second image classification model is greater than or equal to a preset accuracy threshold; and
step III: if the classification accuracy of the second image classification model is smaller than the preset accuracy threshold, updating the labeled image set and the second convolutional neural network model, and returning to step I; if the classification accuracy of the second image classification model is greater than or equal to the preset accuracy threshold, taking the second image classification model as the target image classification model.
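Steps I through III form an accuracy-gated retrain loop: train, test, and if the accuracy threshold is not met, grow the labeled set and train again. The sketch below shows only this loop; `train` and `evaluate` are hypothetical majority-vote stand-ins for the claimed CNN and test procedure, and the example data are invented.

```python
def train(labeled):
    """Stand-in trainer: predict the majority label of the training set."""
    labels = [lbl for _, lbl in labeled]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def evaluate(model, test_set):
    """Classification accuracy of the model on a labeled test set, in percent."""
    hits = sum(model(x) == lbl for x, lbl in test_set)
    return 100.0 * hits / len(test_set)

def iterative_training(labeled, extra, test_set, accuracy_threshold):
    # Steps I-III: retrain and grow the labeled set until the accuracy
    # threshold is met (or the pool of extra labeled images runs out).
    model = train(labeled)                               # step I
    while evaluate(model, test_set) < accuracy_threshold and extra:  # step II
        labeled = labeled + [extra.pop()]                # step III: update the set
        model = train(labeled)                           # ...and return to step I
    return model

test_set = [("t1", "dog"), ("t2", "dog"), ("t3", "cat")]
model = iterative_training([("a", "cat"), ("b", "cat")],
                           [("c", "dog"), ("d", "dog"), ("e", "dog")],
                           test_set, accuracy_threshold=60.0)
print(round(evaluate(model, test_set), 1))  # 66.7
```

The held-out test set is what makes the stopping rule meaningful: the loop halts on measured generalization, not on training loss.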
6. The image classification method according to claim 5, wherein performing the classification test on the second image classification model by using the preset test image set to obtain the classification accuracy comprises:
classifying each image in the test image set by using the second image classification model to obtain a corresponding classification result;
comparing the category label of each image in the test image set with the corresponding classification result for consistency to obtain a comparison result; and
calculating a percentage from the comparison result to obtain the classification accuracy.
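Claim 6's accuracy is simply the percentage of test images whose predicted class matches the stored category label. As a minimal sketch:

```python
def classification_accuracy(true_labels, predicted_labels):
    """Percentage of test images whose classification result is
    consistent with the image's category label."""
    matches = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return 100.0 * matches / len(true_labels)

print(classification_accuracy(["cat", "dog", "cat", "dog"],
                              ["cat", "dog", "dog", "dog"]))  # 75.0
```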
7. The image classification method of claim 5, wherein updating the labeled image set and the second convolutional neural network model comprises:
labeling the segmented image set by using the second image classification model to obtain a third labeled image set;
merging the labeled image set and the third labeled image set to obtain an updated labeled image set; and
adding to or removing from the structure of the second convolutional neural network model to obtain an updated second convolutional neural network model.
8. An image classification apparatus, comprising:
a model training module configured to acquire a first labeled image set, and train a pre-constructed first convolutional neural network model with the first labeled image set to obtain a first image classification model;
an image labeling module configured to acquire an image set to be labeled, perform image screening and segmentation processing on the image set to be labeled to obtain a segmented image set, classify and label the segmented image set by using the first image classification model to obtain a second labeled image set, and merge the first labeled image set and the second labeled image set to obtain a labeled image set; and
an image classification module configured to perform iterative labeling training on a pre-constructed second convolutional neural network model by using the labeled image set to obtain a target image classification model, and, when an image to be classified is received, classify the image to be classified by using the target image classification model to obtain a classification result.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores computer program instructions executable by the at least one processor to enable the at least one processor to perform the image classification method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, implements the image classification method according to any one of claims 1 to 7.
CN202110438241.4A 2021-04-22 2021-04-22 Image classification method, device, electronic equipment and readable storage medium Active CN113065609B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110438241.4A CN113065609B (en) 2021-04-22 2021-04-22 Image classification method, device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN113065609A true CN113065609A (en) 2021-07-02
CN113065609B CN113065609B (en) 2024-04-09

Family

ID=76567581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110438241.4A Active CN113065609B (en) 2021-04-22 2021-04-22 Image classification method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113065609B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106202930A (en) * 2016-07-12 2016-12-07 东北大学 Lung nodule risk prediction system and method based on SS-ELM
US20190073447A1 (en) * 2017-09-06 2019-03-07 International Business Machines Corporation Iterative semi-automatic annotation for workload reduction in medical image labeling
CN109543731A (en) * 2018-11-09 2019-03-29 江南大学 Tri-preference semi-supervised regression algorithm under a self-training framework
CN109582793A (en) * 2018-11-23 2019-04-05 深圳前海微众银行股份有限公司 Model training method, customer service system, data labeling system, and readable storage medium
CN109741346A (en) * 2018-12-30 2019-05-10 上海联影智能医疗科技有限公司 Region-of-interest extraction method, apparatus, device, and storage medium
CN110517759A (en) * 2019-08-29 2019-11-29 腾讯科技(深圳)有限公司 Method for determining images to be labeled, and model training method and apparatus
CN110909803A (en) * 2019-11-26 2020-03-24 腾讯科技(深圳)有限公司 Image recognition model training method and device and computer readable storage medium
CN111222409A (en) * 2019-11-26 2020-06-02 北京迈格威科技有限公司 Vehicle brand labeling method, device and system
CN111783635A (en) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Image annotation method, device, equipment and storage medium
CN111898696A (en) * 2020-08-10 2020-11-06 腾讯云计算(长沙)有限责任公司 Method, device, medium and equipment for generating pseudo label and label prediction model
CN112016633A (en) * 2020-09-25 2020-12-01 北京百度网讯科技有限公司 Model training method and device, electronic equipment and storage medium
CN112016053A (en) * 2020-08-25 2020-12-01 北京金山云网络技术有限公司 Assessment method and device for data annotation and electronic equipment
CN112149705A (en) * 2019-06-28 2020-12-29 京东数字科技控股有限公司 Method and system for training classification model, computer equipment and storage medium
CN112364898A (en) * 2020-10-27 2021-02-12 星火科技技术(深圳)有限责任公司 Image identification automatic labeling method, device, equipment and storage medium
CN112418264A (en) * 2020-10-14 2021-02-26 上海眼控科技股份有限公司 Training method and device for detection model, target detection method and device and medium
CN112560964A (en) * 2020-12-18 2021-03-26 深圳赛安特技术服务有限公司 Method and system for training Chinese herbal medicine pest and disease identification model based on semi-supervised learning
CN112669298A (en) * 2020-12-31 2021-04-16 武汉科技大学 Foundation cloud image cloud detection method based on model self-training

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAIXIA BI et al.: "An Active Deep Learning Approach for Minimally-Supervised PolSAR Image Classification", IEEE Transactions on Geoscience and Remote Sensing, pages 1-18 *
ZHU Dongyang et al.: "Industrial fault identification based on active learning and weighted support vector machine", Journal of Zhejiang University (Engineering Science), vol. 51, no. 4, pages 697-705 *
WANG Jian: "Fault feature extraction methods for sample-imbalanced data", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 06, pages 140-62 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516194A (en) * 2021-07-20 2021-10-19 海南长光卫星信息技术有限公司 Hyperspectral remote sensing image semi-supervised classification method, device, equipment and storage medium
CN113516194B (en) * 2021-07-20 2023-08-08 海南长光卫星信息技术有限公司 Semi-supervised classification method, device, equipment and storage medium for hyperspectral remote sensing images
CN113705686A (en) * 2021-08-30 2021-11-26 平安科技(深圳)有限公司 Image classification method and device, electronic equipment and readable storage medium
CN113705686B (en) * 2021-08-30 2023-09-15 平安科技(深圳)有限公司 Image classification method, device, electronic equipment and readable storage medium
CN115661142A (en) * 2022-12-14 2023-01-31 广东工业大学 Tongue diagnosis image processing method, device and medium based on key point detection

Also Published As

Publication number Publication date
CN113065609B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN113065609B (en) Image classification method, device, electronic equipment and readable storage medium
CN109086811B (en) Multi-label image classification method and device and electronic equipment
CN111862096B (en) Image segmentation method and device, electronic equipment and storage medium
CN112465060A (en) Method and device for detecting target object in image, electronic equipment and readable storage medium
CN113283446B (en) Method and device for identifying object in image, electronic equipment and storage medium
CN111932547B (en) Method and device for segmenting target object in image, electronic device and storage medium
CN111652845A (en) Abnormal cell automatic labeling method and device, electronic equipment and storage medium
CN112699775A (en) Certificate identification method, device and equipment based on deep learning and storage medium
WO2021217852A1 (en) Damage detection method and apparatus, and electronic device and medium
CN111932534A (en) Medical image picture analysis method and device, electronic equipment and readable storage medium
CN113268665A (en) Information recommendation method, device and equipment based on random forest and storage medium
CN111914939A (en) Method, device and equipment for identifying blurred image and computer readable storage medium
CN112137591A (en) Target object position detection method, device, equipment and medium based on video stream
CN114757854B (en) Night vision image quality improving method, device and equipment based on multispectral analysis
CN112132232A (en) Medical image classification labeling method and system and server
CN114708461A (en) Multi-modal learning model-based classification method, device, equipment and storage medium
CN113487621A (en) Medical image grading method and device, electronic equipment and readable storage medium
CN114511733A (en) Fine-grained image identification method and device based on weak supervised learning and readable medium
CN112101481A (en) Method, device and equipment for screening influence factors of target object and storage medium
CN115049836B (en) Image segmentation method, device, equipment and storage medium
CN115294426B (en) Method, device and equipment for tracking interventional medical equipment and storage medium
CN117036947A (en) Image recognition-based agricultural risk early warning method, device, equipment and medium
CN111652282A (en) Big data based user preference analysis method and device and electronic equipment
CN116188855A (en) Multi-scale plant disease identification method, device, storage medium and apparatus
CN112233194B (en) Medical picture optimization method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211019

Address after: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant after: Shenzhen saiante Technology Service Co.,Ltd.

Address before: 1-34 / F, Qianhai free trade building, 3048 Xinghai Avenue, Mawan, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong 518000

Applicant before: Ping An International Smart City Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant