CN111767959B - Plush fiber classifying method and device - Google Patents

Plush fiber classifying method and device

Info

Publication number
CN111767959B
CN111767959B (application CN202010623714.3A)
Authority
CN
China
Prior art keywords
wool
prediction
classified
pile
proportion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010623714.3A
Other languages
Chinese (zh)
Other versions
CN111767959A (en)
Inventor
艾国
张帅
张发恩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alnnovation Guangzhou Technology Co ltd
Original Assignee
Alnnovation Guangzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alnnovation Guangzhou Technology Co ltd filed Critical Alnnovation Guangzhou Technology Co ltd
Priority to CN202010623714.3A
Publication of CN111767959A
Application granted
Publication of CN111767959B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/40 Extraction of image or video features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a plush fiber classification method and device. The method comprises the following steps: acquiring a plush fiber image to be classified; inputting the plush fiber image to be classified into a preset feature extraction model to obtain a corresponding feature extraction image; inputting the feature extraction image into a preset classification model to obtain a wool prediction probability and a cashmere prediction probability; inputting the feature extraction image into a preset semantic segmentation model and calculating a wool prediction proportion and a cashmere prediction proportion; and obtaining a classification result of the plush fiber to be classified according to the wool prediction probability, the cashmere prediction probability, the wool prediction proportion and the cashmere prediction proportion. The method and device can greatly improve the accuracy of plush fiber classification, achieve high-accuracy classification of wool and cashmere, and improve classification efficiency.

Description

Plush fiber classifying method and device
Technical Field
The application relates to the technical field of fiber classification, in particular to a plush fiber classification method and device.
Background
In fiber composition analysis, plush fiber classification has long been a difficult and challenging task. At present, plush fibers are classified mainly by manual inspection, which is inefficient, makes classification accuracy hard to guarantee, and incurs high labor costs.
With the continuous progress of science and technology, deep-learning-based plush fiber classification has become a potentially efficient solution. However, because wool and cashmere fibers look very similar and it is difficult to collect large amounts of plush sample data, existing neural-network-based classification models struggle to reach high accuracy in plush fiber classification, making it hard to complete the classification quickly and reliably.
Disclosure of Invention
The embodiments of the present application aim to provide a plush fiber classification method and device that can greatly improve the accuracy of plush fiber classification, achieve high-accuracy classification, and improve classification efficiency.
In a first aspect, an embodiment of the present application provides a plush fiber classification method, including:
acquiring a plush fiber image to be classified;
inputting the plush fiber image to be classified into a preset feature extraction model to obtain a corresponding feature extraction image;
inputting the feature extraction image into a preset classification model to obtain a wool prediction probability and a cashmere prediction probability;
inputting the feature extraction image into a preset semantic segmentation model, and calculating a wool prediction proportion and a cashmere prediction proportion;
and obtaining a classification result of the plush fiber to be classified according to the wool prediction probability, the cashmere prediction probability, the wool prediction proportion and the cashmere prediction proportion.
In the implementation process, the plush fiber classification method of this embodiment inputs the acquired plush fiber image to be classified into a preset feature extraction model to obtain a corresponding feature extraction image; inputs the feature extraction image into a preset classification model and a preset semantic segmentation model to obtain a wool prediction probability, a cashmere prediction probability, a wool prediction proportion and a cashmere prediction proportion; and obtains the classification result of the plush fiber to be classified from these values. The method effectively combines the respective advantages of the classification model and the semantic segmentation model: the classification model captures global information of the plush fiber image to be classified and distinguishes wool from cashmere as a whole, while the semantic segmentation model captures detail information and distinguishes wool from cashmere at the pixel level. This greatly reduces the amount of plush sample data required, and fusing the outputs of the two models greatly improves the accuracy of plush fiber classification, achieving high-accuracy classification while also improving classification efficiency.
Further, obtaining the classification result of the plush fiber to be classified according to the wool prediction probability, the cashmere prediction probability, the wool prediction proportion and the cashmere prediction proportion includes:
multiplying the wool prediction probability by the wool prediction proportion to obtain a comprehensive wool prediction probability;
multiplying the cashmere prediction probability by the cashmere prediction proportion to obtain a comprehensive cashmere prediction probability;
and obtaining the classification result of the plush fiber to be classified according to the greater of the comprehensive wool prediction probability and the comprehensive cashmere prediction probability.
In the implementation process, multiplying the wool prediction probability by the wool prediction proportion and the cashmere prediction probability by the cashmere prediction proportion better fuses the outputs of the classification model and the semantic segmentation model, makes the resulting comprehensive wool and cashmere prediction probabilities more reliable and definite, and thus allows the class of the plush fiber to be classified to be determined more accurately.
Further, inputting the feature extraction image into a preset semantic segmentation model and calculating the wool prediction proportion and the cashmere prediction proportion includes:
inputting the feature extraction image into a preset semantic segmentation model to obtain the corresponding background pixels, wool pixels and cashmere pixels;
counting the number of wool pixels and the number of cashmere pixels respectively;
and calculating the wool prediction proportion and the cashmere prediction proportion from the number of wool pixels, the number of cashmere pixels and the total number of pixels of the feature extraction image.
In the implementation process, the wool prediction proportion and the cashmere prediction proportion can be calculated quickly and accurately, which further improves plush fiber classification efficiency.
Further, the preset classification model is trained by the following steps:
acquiring classified plush fiber mask images and the corresponding classification labels;
training a preset first neural network model with the classified plush fiber mask images and the classification labels to obtain the preset classification model.
In the implementation process, the preset classification model is not used to classify plush fibers directly on its own; it mainly captures global information of the plush fiber image to be classified and distinguishes wool from cashmere as a whole. Training the preset first neural network model therefore requires far less plush sample data, which makes training easier and allows the preset classification model to be obtained more quickly.
Further, the preset semantic segmentation model is trained by the following steps:
acquiring classified plush fiber mask images and the corresponding classification labels;
training a preset second neural network model with the classified plush fiber mask images and the classification labels to obtain the preset semantic segmentation model.
In the implementation process, the preset semantic segmentation model is not used to classify plush fibers directly on its own; it mainly captures detail information of the plush fiber image to be classified and distinguishes wool from cashmere at the pixel level. Training the preset second neural network model therefore requires far less plush sample data, which makes training easier and allows the preset semantic segmentation model to be obtained more quickly.
In a second aspect, an embodiment of the present application provides a plush fiber classification apparatus, including:
the acquisition module is used for acquiring the plush fiber image to be classified;
the feature extraction module is used for inputting the plush fiber images to be classified into a preset feature extraction model to obtain corresponding feature extraction images;
the first calculation module is used for inputting the feature extraction image into a preset classification model to obtain a wool prediction probability and a cashmere prediction probability;
the second calculation module is used for inputting the feature extraction image into a preset semantic segmentation model and calculating a wool prediction proportion and a cashmere prediction proportion;
and the classification module is used for obtaining a classification result of the plush fiber to be classified according to the wool prediction probability, the cashmere prediction probability, the wool prediction proportion and the cashmere prediction proportion.
In the implementation process, the plush fiber classification apparatus of this embodiment inputs the acquired plush fiber image to be classified into a preset feature extraction model to obtain a corresponding feature extraction image; inputs the feature extraction image into a preset classification model and a preset semantic segmentation model to obtain a wool prediction probability, a cashmere prediction probability, a wool prediction proportion and a cashmere prediction proportion; and obtains the classification result of the plush fiber to be classified from these values. The apparatus effectively combines the respective advantages of the classification model and the semantic segmentation model: the classification model captures global information of the plush fiber image to be classified and distinguishes wool from cashmere as a whole, while the semantic segmentation model captures detail information and distinguishes wool from cashmere at the pixel level. This greatly reduces the amount of plush sample data required, and fusing the outputs of the two models greatly improves the accuracy of plush fiber classification, achieving high-accuracy classification while also improving classification efficiency.
Further, the classification module is specifically configured to:
multiplying the wool prediction probability by the wool prediction proportion to obtain a comprehensive wool prediction probability;
multiplying the cashmere prediction probability by the cashmere prediction proportion to obtain a comprehensive cashmere prediction probability;
and obtaining a classification result of the plush fiber to be classified according to the greater of the comprehensive wool prediction probability and the comprehensive cashmere prediction probability.
In the implementation process, multiplying the wool prediction probability by the wool prediction proportion and the cashmere prediction probability by the cashmere prediction proportion better fuses the outputs of the classification model and the semantic segmentation model, makes the resulting comprehensive wool and cashmere prediction probabilities more reliable and definite, and thus allows the class of the plush fiber to be classified to be determined more accurately.
Further, the second computing module is specifically configured to:
inputting the feature extraction image into a preset semantic segmentation model to obtain the corresponding background pixels, wool pixels and cashmere pixels;
counting the number of wool pixels and the number of cashmere pixels respectively;
and calculating the wool prediction proportion and the cashmere prediction proportion from the number of wool pixels, the number of cashmere pixels and the total number of pixels of the feature extraction image.
In the implementation process, the wool prediction proportion and the cashmere prediction proportion can be calculated quickly and accurately, which further improves plush fiber classification efficiency.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory is configured to store a computer program and the processor is configured to run the computer program to cause the electronic device to execute the plush fiber classification method described above.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium storing a computer program used in the electronic device described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be regarded as limiting the scope; a person skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a flow chart of a pile fiber classifying method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a method for classifying pile fibers according to an embodiment of the present application;
fig. 3 is a flowchart of step S140 according to the first embodiment of the present application;
fig. 4 is a flowchart of step S150 according to the first embodiment of the present application;
fig. 5 is a block diagram of a pile fiber classifying device according to a second embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
In plush fiber classification, because wool and cashmere fibers look very similar and it is difficult to collect large amounts of plush sample data, existing neural-network-based classification models struggle to reach high accuracy, making it hard to complete plush fiber classification quickly and reliably.
To address these problems in the prior art, the present application provides a plush fiber classification method and device that can greatly improve the accuracy of plush fiber classification, achieve high-accuracy classification, and improve classification efficiency.
Embodiment 1
Referring to fig. 1, fig. 1 is a schematic flow chart of the plush fiber classification method according to the first embodiment of the present application. The execution subject of the plush fiber classification method described below may be a server.
The plush fiber classification method of this embodiment comprises the following steps:
step S110, a plush fiber image to be classified is acquired.
In this embodiment, the plush fiber image to be classified is an image of a single plush fiber, for example an image of the single fiber captured under a biological microscope.
Step S120, inputting the plush fiber image to be classified into a preset feature extraction model to obtain a corresponding feature extraction image.
In this embodiment, the preset feature extraction model may be an EfficientNet-B4 network or another neural network for image feature extraction.
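As an illustration only, this feature-extraction step could be sketched as follows in Python, assuming PyTorch/torchvision provide an EfficientNet-B4 backbone whose convolutional part yields the feature extraction image (feature map); the function name, input size and use of torchvision are assumptions rather than details fixed by this application.

import torch
from torchvision import models, transforms
from PIL import Image

def extract_features(image_path: str) -> torch.Tensor:
    # Illustrative EfficientNet-B4 backbone; pretrained weights are optional here.
    backbone = models.efficientnet_b4(weights=None)
    backbone.eval()
    preprocess = transforms.Compose([
        transforms.Resize((380, 380)),  # assumed EfficientNet-B4 input size
        transforms.ToTensor(),
    ])
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feature_map = backbone.features(x)  # the "feature extraction image" used downstream
    return feature_map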
Step S130, inputting the feature extraction image into a preset classification model to obtain a wool prediction probability and a cashmere prediction probability.
In this embodiment, the preset classification model is a pre-trained classification model; it may consist of a global pooling layer and a fully connected layer, and its loss function may be Focal Loss.
Adopting Focal Loss as the loss function of the preset classification model mitigates problems such as class imbalance and imbalance in sample difficulty: the cross-entropy loss is modified with a class weight alpha and a sample-difficulty modulating factor, which alleviates these problems and improves accuracy.
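For illustration, a minimal Focal Loss could look like the following sketch, assuming the standard formulation FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t); the alpha and gamma values are placeholders rather than values specified by this application, and alpha is applied uniformly here for simplicity.

import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    # logits: raw class scores of shape (N, C), e.g. C = 2 for wool / cashmere
    # targets: true class indices of shape (N,)
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp()
    pt = probs.gather(1, targets.unsqueeze(1)).squeeze(1)         # p_t for the true class
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    loss = -alpha * (1.0 - pt) ** gamma * log_pt                  # down-weights easy samples
    return loss.mean()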
The wool prediction probability and the cashmere prediction probability produced by the preset classification model are obtained by first distinguishing wool and cashmere at the global level and then predicting the probability that the plush fiber to be classified is wool and the probability that it is cashmere.
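A minimal sketch of such a classification head (global pooling followed by a fully connected layer) is given below, assuming a PyTorch feature map as input; the channel count and class ordering are illustrative assumptions, not values fixed by this application.

import torch
import torch.nn as nn

class PlushClassifier(nn.Module):
    def __init__(self, in_channels: int = 1792, num_classes: int = 2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # global pooling layer
        self.fc = nn.Linear(in_channels, num_classes)    # fully connected layer

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        x = self.pool(feature_map).flatten(1)
        return self.fc(x)  # raw logits; softmax over them gives the probabilities

# usage (illustrative): probs = torch.softmax(PlushClassifier()(feature_map), dim=1)
#                       wool_prob, cashmere_prob = probs[0].tolist()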
Step S140, inputting the feature extraction image into a preset semantic segmentation model, and calculating a wool prediction proportion and a cashmere prediction proportion.
In this embodiment, the preset semantic segmentation model is a pre-trained semantic segmentation model; it may be a PSP-Net or another neural network for semantic segmentation, and its loss function may also be Focal Loss.
Similarly, using Focal Loss as the loss function of the preset semantic segmentation model mitigates class imbalance and imbalance in sample difficulty by modifying the cross-entropy loss with a class weight alpha and a sample-difficulty modulating factor, which improves accuracy.
The feature extraction image is input into the preset semantic segmentation model, which distinguishes wool from cashmere at the level of detail.
The wool prediction proportion and the cashmere prediction proportion are the predicted proportions of wool and cashmere in the feature extraction image.
Step S150, obtaining a classification result of the plush fiber to be classified according to the wool prediction probability, the cashmere prediction probability, the wool prediction proportion and the cashmere prediction proportion.
In this embodiment, after wool and cashmere are distinguished at the global level to obtain the wool prediction probability and the cashmere prediction probability, and distinguished at the detail level to obtain the wool prediction proportion and the cashmere prediction proportion, the class of the plush fiber to be classified is determined by fusing the outputs of the classification model and the semantic segmentation model.
The classification result may be that the plush fiber to be classified is wool, or that it is cashmere.
Based on the above, the plush fiber classification method of this embodiment can be summarized by the frame diagram shown in fig. 2.
The plush fiber classification method of this embodiment inputs the acquired plush fiber image to be classified into a preset feature extraction model to obtain a corresponding feature extraction image; inputs the feature extraction image into a preset classification model and a preset semantic segmentation model to obtain the wool prediction probability, the cashmere prediction probability, the wool prediction proportion and the cashmere prediction proportion; and obtains the classification result of the plush fiber to be classified from these values. The method effectively combines the respective advantages of the classification model and the semantic segmentation model: the classification model captures global information of the plush fiber image and distinguishes wool from cashmere as a whole, while the semantic segmentation model captures detail information and distinguishes wool from cashmere at the pixel level. This greatly reduces the amount of plush sample data required, and fusing the outputs of the two models greatly improves the accuracy of plush fiber classification, achieving high-accuracy classification while also improving classification efficiency.
It should be noted that, in other embodiments, step S140 may be executed before step S130 after step S120 is executed, followed by step S150; or steps S130 and S140 may be executed simultaneously after step S120, followed by step S150. The present application does not limit the execution order of the step "inputting the feature extraction image into a preset classification model to obtain a wool prediction probability and a cashmere prediction probability" and the step "inputting the feature extraction image into a preset semantic segmentation model and calculating a wool prediction proportion and a cashmere prediction proportion".
In order to calculate the wool prediction proportion and the cashmere prediction proportion quickly and accurately, this embodiment provides a possible implementation. Referring to fig. 3, which is a schematic flow chart of step S140, inputting the feature extraction image into a preset semantic segmentation model and calculating the wool prediction proportion and the cashmere prediction proportion may include the following steps:
Step S141, inputting the feature extraction image into a preset semantic segmentation model to obtain the corresponding background pixels, wool pixels and cashmere pixels;
Step S142, counting the number of wool pixels and the number of cashmere pixels respectively;
Step S143, calculating the wool prediction proportion and the cashmere prediction proportion from the number of wool pixels, the number of cashmere pixels and the total number of pixels of the feature extraction image.
In this process, the wool prediction proportion and the cashmere prediction proportion can be calculated quickly and accurately, which further improves plush fiber classification efficiency.
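Steps S141 to S143 could be sketched as follows, assuming the segmentation model outputs a per-pixel class mask and that class indices 0, 1 and 2 denote background, wool and cashmere; these indices and the function name are assumptions made only for illustration.

import numpy as np

def prediction_proportions(seg_mask: np.ndarray) -> tuple[float, float]:
    # seg_mask: 2-D array of per-pixel class indices predicted by the segmentation model
    total_pixels = seg_mask.size
    wool_pixels = int(np.count_nonzero(seg_mask == 1))       # step S142: count wool pixels
    cashmere_pixels = int(np.count_nonzero(seg_mask == 2))   # step S142: count cashmere pixels
    wool_proportion = wool_pixels / total_pixels              # step S143
    cashmere_proportion = cashmere_pixels / total_pixels      # step S143
    return wool_proportion, cashmere_proportion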
In order to better fuse the outputs of the classification model and the semantic segmentation model when determining the class of the plush fiber to be classified, this embodiment provides a possible implementation. Referring to fig. 4, which is a schematic flow chart of step S150, obtaining the classification result of the plush fiber to be classified according to the wool prediction probability, the cashmere prediction probability, the wool prediction proportion and the cashmere prediction proportion may include the following steps:
Step S151, multiplying the wool prediction probability by the wool prediction proportion to obtain a comprehensive wool prediction probability;
Step S152, multiplying the cashmere prediction probability by the cashmere prediction proportion to obtain a comprehensive cashmere prediction probability;
Step S153, obtaining the classification result of the plush fiber to be classified according to the larger of the comprehensive wool prediction probability and the comprehensive cashmere prediction probability.
It should be noted that, when executing steps S151 to S153, step S152 may be executed before step S151, followed by step S153; or steps S151 and S152 may be executed simultaneously, followed by step S153. The present application does not limit the execution order of the step "multiplying the wool prediction probability by the wool prediction proportion to obtain a comprehensive wool prediction probability" and the step "multiplying the cashmere prediction probability by the cashmere prediction proportion to obtain a comprehensive cashmere prediction probability".
In this process, multiplying the wool prediction probability by the wool prediction proportion and the cashmere prediction probability by the cashmere prediction proportion better fuses the outputs of the classification model and the semantic segmentation model, makes the resulting comprehensive wool and cashmere prediction probabilities more reliable and definite, and thus allows the class of the plush fiber to be classified to be determined more accurately.
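Steps S151 to S153 reduce to a simple fusion rule, sketched below with illustrative function and variable names that are not mandated by this application.

def classify_plush_fiber(wool_prob: float, cashmere_prob: float,
                         wool_proportion: float, cashmere_proportion: float) -> str:
    comprehensive_wool = wool_prob * wool_proportion                 # step S151
    comprehensive_cashmere = cashmere_prob * cashmere_proportion     # step S152
    # step S153: the larger comprehensive prediction probability decides the class
    return "wool" if comprehensive_wool >= comprehensive_cashmere else "cashmere"

# usage (illustrative): classify_plush_fiber(0.7, 0.3, 0.6, 0.2) returns "wool",
# since 0.7 * 0.6 = 0.42 exceeds 0.3 * 0.2 = 0.06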
As an alternative embodiment, the preset classification model may be trained by the following steps:
acquiring classified plush fiber mask images and the corresponding classification labels;
training a preset first neural network model with the classified plush fiber mask images and the classification labels to obtain the preset classification model.
The preset classification model is not used to classify plush fibers directly on its own; it mainly captures global information of the plush fiber image to be classified and distinguishes wool from cashmere as a whole. Training the preset first neural network model therefore requires far less plush sample data, which makes training easier and allows the preset classification model to be obtained more quickly.
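A minimal training-loop sketch for the preset classification model is given below, assuming a PyTorch DataLoader that yields pairs of classified plush fiber mask images and classification labels, a model that outputs logits, and the focal loss sketched earlier; the optimizer, learning rate and epoch count are illustrative placeholders.

import torch
from torch.utils.data import DataLoader

def train_classifier(model: torch.nn.Module, loader: DataLoader,
                     epochs: int = 10, lr: float = 1e-4) -> torch.nn.Module:
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for mask_image, label in loader:          # classified mask images + labels
            logits = model(mask_image)
            loss = focal_loss(logits, label)      # focal loss as sketched above
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model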
As an alternative embodiment, the preset semantic segmentation model may be trained by the following steps:
acquiring classified plush fiber mask images and the corresponding classification labels;
training a preset second neural network model with the classified plush fiber mask images and the classification labels to obtain the preset semantic segmentation model.
The preset semantic segmentation model is not used to classify plush fibers directly on its own; it mainly captures detail information of the plush fiber image to be classified and distinguishes wool from cashmere at the pixel level. Training the preset second neural network model therefore requires far less plush sample data, which makes training easier and allows the preset semantic segmentation model to be obtained more quickly.
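Training the preset semantic segmentation model follows the same pattern, except that the targets are per-pixel labels (background, wool or cashmere). The sketch below flattens the spatial dimensions so that the focal loss sketched earlier can be applied per pixel; the shapes and hyperparameters are again illustrative assumptions.

import torch
from torch.utils.data import DataLoader

def train_segmenter(model: torch.nn.Module, loader: DataLoader,
                    epochs: int = 10, lr: float = 1e-4) -> torch.nn.Module:
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for mask_image, pixel_labels in loader:
            logits = model(mask_image)            # assumed shape (N, 3, H, W): per-pixel class scores
            flat_logits = logits.permute(0, 2, 3, 1).reshape(-1, logits.shape[1])
            flat_labels = pixel_labels.reshape(-1)
            loss = focal_loss(flat_logits, flat_labels)  # focal loss as sketched above, per pixel
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model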
Embodiment 2
In order to implement the method of the above embodiment and achieve the corresponding functions and technical effects, a plush fiber classification apparatus is provided as follows.
Referring to fig. 5, fig. 5 is a structural block diagram of the plush fiber classification apparatus according to the second embodiment of the present application.
The plush fiber classification apparatus of this embodiment comprises:
an acquisition module 210, configured to acquire a pile fiber image to be classified;
the feature extraction module 220 is configured to input a pile fiber image to be classified into a preset feature extraction model to obtain a corresponding feature extraction image;
the first calculation module 230 is configured to input the feature extraction image into a preset classification model to obtain a wool prediction probability and a cashmere prediction probability;
the second calculation module 240 is configured to input the feature extraction image into a preset semantic segmentation model and calculate a wool prediction proportion and a cashmere prediction proportion;
the classification module 250 is configured to obtain a classification result of the plush fiber to be classified according to the wool prediction probability, the cashmere prediction probability, the wool prediction proportion and the cashmere prediction proportion.
The plush fiber classification apparatus of this embodiment inputs the acquired plush fiber image to be classified into a preset feature extraction model to obtain a corresponding feature extraction image; inputs the feature extraction image into a preset classification model and a preset semantic segmentation model to obtain the wool prediction probability, the cashmere prediction probability, the wool prediction proportion and the cashmere prediction proportion; and obtains the classification result of the plush fiber to be classified from these values. The apparatus effectively combines the respective advantages of the classification model and the semantic segmentation model: the classification model captures global information of the plush fiber image and distinguishes wool from cashmere as a whole, while the semantic segmentation model captures detail information and distinguishes wool from cashmere at the pixel level. This greatly reduces the amount of plush sample data required, and fusing the outputs of the two models greatly improves the accuracy of plush fiber classification, achieving high-accuracy classification while also improving classification efficiency.
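For illustration, the modules of fig. 5 could be composed as in the following sketch; the class and attribute names simply mirror the module names above and are assumptions for this example, not an interface mandated by this application.

class PlushFiberClassifierDevice:
    def __init__(self, feature_extractor, classifier, segmenter):
        self.feature_extractor = feature_extractor  # feature extraction module 220
        self.classifier = classifier                # first calculation module 230
        self.segmenter = segmenter                  # second calculation module 240

    def classify(self, image):
        features = self.feature_extractor(image)                     # acquisition module 210 supplies `image`
        wool_prob, cashmere_prob = self.classifier(features)         # global probabilities
        wool_prop, cashmere_prop = self.segmenter(features)          # pixel-level proportions
        # classification module 250: fuse the two outputs and take the larger product
        return "wool" if wool_prob * wool_prop >= cashmere_prob * cashmere_prop else "cashmere"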
As an alternative embodiment, the second calculation module 240 may be specifically configured to:
input the feature extraction image into a preset semantic segmentation model to obtain the corresponding background pixels, wool pixels and cashmere pixels;
count the number of wool pixels and the number of cashmere pixels respectively;
and calculate the wool prediction proportion and the cashmere prediction proportion from the number of wool pixels, the number of cashmere pixels and the total number of pixels of the feature extraction image.
As an alternative embodiment, the classification module 250 may be specifically configured to:
multiply the wool prediction probability by the wool prediction proportion to obtain a comprehensive wool prediction probability;
multiply the cashmere prediction probability by the cashmere prediction proportion to obtain a comprehensive cashmere prediction probability;
and obtain the classification result of the plush fiber to be classified according to the larger of the comprehensive wool prediction probability and the comprehensive cashmere prediction probability.
The plush fiber classification apparatus can implement the plush fiber classification method of the first embodiment, and the options described in the first embodiment also apply to this embodiment; they are not repeated here.
For the remaining details, reference may be made to the first embodiment; no further description is given in this embodiment.
Embodiment 3
The embodiment of the application provides an electronic device comprising a memory and a processor, where the memory is configured to store a computer program and the processor runs the computer program to cause the electronic device to execute the plush fiber classification method described above.
Alternatively, the above-mentioned electronic device may be a server or the like.
In addition, the embodiment of the application also provides a computer readable storage medium which stores the computer program used in the electronic equipment.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above description is only an example of the present application and is not intended to limit its scope; various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement or improvement made within the spirit and principle of the present application shall be included in its protection scope.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Claims (6)

1. A plush fiber classification method, comprising:
acquiring a plush fiber image to be classified;
inputting the plush fiber image to be classified into a preset feature extraction model to obtain a corresponding feature extraction image;
inputting the feature extraction image into a preset classification model to obtain a wool prediction probability and a cashmere prediction probability, wherein the wool prediction probability and the cashmere prediction probability are obtained by first distinguishing wool and cashmere at the global level and then predicting the probability that the plush fiber to be classified is wool and the probability that it is cashmere;
inputting the feature extraction image into a preset semantic segmentation model, and calculating a wool prediction proportion and a cashmere prediction proportion;
obtaining a classification result of the plush fiber to be classified according to the wool prediction probability, the cashmere prediction probability, the wool prediction proportion and the cashmere prediction proportion;
wherein obtaining the classification result of the plush fiber to be classified according to the wool prediction probability, the cashmere prediction probability, the wool prediction proportion and the cashmere prediction proportion comprises:
multiplying the wool prediction probability by the wool prediction proportion to obtain a comprehensive wool prediction probability;
multiplying the cashmere prediction probability by the cashmere prediction proportion to obtain a comprehensive cashmere prediction probability;
obtaining the classification result of the plush fiber to be classified according to the greater of the comprehensive wool prediction probability and the comprehensive cashmere prediction probability;
and wherein inputting the feature extraction image into a preset semantic segmentation model and calculating the wool prediction proportion and the cashmere prediction proportion comprises:
inputting the feature extraction image into the preset semantic segmentation model to obtain the corresponding background pixels, wool pixels and cashmere pixels;
counting the number of wool pixels and the number of cashmere pixels respectively;
and calculating the wool prediction proportion and the cashmere prediction proportion from the number of wool pixels, the number of cashmere pixels and the total number of pixels of the feature extraction image.
2. The plush fiber classification method according to claim 1, wherein the preset classification model is trained by the following steps:
acquiring classified plush fiber mask images and the corresponding classification labels;
training a preset first neural network model with the classified plush fiber mask images and the classification labels to obtain the preset classification model.
3. The plush fiber classification method according to claim 1, wherein the preset semantic segmentation model is trained by the following steps:
acquiring classified plush fiber mask images and the corresponding classification labels;
training a preset second neural network model with the classified plush fiber mask images and the classification labels to obtain the preset semantic segmentation model.
4. A plush fiber classification apparatus, comprising:
the acquisition module is used for acquiring the plush fiber image to be classified;
the feature extraction module is used for inputting the plush fiber images to be classified into a preset feature extraction model to obtain corresponding feature extraction images;
the first calculation module is used for inputting the feature extraction image into a preset classification model to obtain a wool prediction probability and a cashmere prediction probability, wherein the wool prediction probability and the cashmere prediction probability are obtained by first distinguishing wool and cashmere at the global level and then predicting the probability that the plush fiber to be classified is wool and the probability that it is cashmere;
the second calculation module is used for inputting the feature extraction image into a preset semantic segmentation model and calculating a wool prediction proportion and a cashmere prediction proportion;
the classification module is used for obtaining a classification result of the plush fiber to be classified according to the wool prediction probability, the cashmere prediction probability, the wool prediction proportion and the cashmere prediction proportion;
wherein the classification module is specifically configured to:
multiply the wool prediction probability by the wool prediction proportion to obtain a comprehensive wool prediction probability;
multiply the cashmere prediction probability by the cashmere prediction proportion to obtain a comprehensive cashmere prediction probability;
and obtain the classification result of the plush fiber to be classified according to the greater of the comprehensive wool prediction probability and the comprehensive cashmere prediction probability;
and wherein the second calculation module is specifically configured to:
input the feature extraction image into the preset semantic segmentation model to obtain the corresponding background pixels, wool pixels and cashmere pixels;
count the number of wool pixels and the number of cashmere pixels respectively;
and calculate the wool prediction proportion and the cashmere prediction proportion from the number of wool pixels, the number of cashmere pixels and the total number of pixels of the feature extraction image.
5. An electronic device comprising a memory for storing a computer program and a processor that runs the computer program to cause the electronic device to perform the plush fiber classification method according to any one of claims 1 to 3.
6. A computer-readable storage medium, characterized in that it stores a computer program for use in the electronic device of claim 5.
CN202010623714.3A 2020-06-30 2020-06-30 Plush fiber classifying method and device Active CN111767959B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010623714.3A CN111767959B (en) 2020-06-30 2020-06-30 Plush fiber classifying method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010623714.3A CN111767959B (en) 2020-06-30 2020-06-30 Plush fiber classifying method and device

Publications (2)

Publication Number Publication Date
CN111767959A CN111767959A (en) 2020-10-13
CN111767959B true CN111767959B (en) 2023-10-31

Family

ID=72724589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010623714.3A Active CN111767959B (en) 2020-06-30 2020-06-30 Plush fiber classifying method and device

Country Status (1)

Country Link
CN (1) CN111767959B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361655B (en) * 2021-07-12 2022-09-27 武汉智目智能技术合伙企业(有限合伙) Differential fiber classification method based on residual error network and characteristic difference fitting

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014109708A1 (en) * 2013-01-08 2014-07-17 Agency For Science, Technology And Research A method and system for assessing fibrosis in a tissue
CN105009174A (en) * 2013-01-08 2015-10-28 新加坡科技研究局 Method and system for assessing fibrosis in tissue
CN105760877A (en) * 2016-02-19 2016-07-13 天纺标检测科技有限公司 Wool and cashmere identification algorithm based on gray level co-occurrence matrix model
CN110717368A (en) * 2018-07-13 2020-01-21 北京服装学院 Qualitative classification method for textiles
CN109583307A (en) * 2018-10-31 2019-04-05 东华大学 A kind of Cashmere and Woolens fiber recognition method based on local feature Yu word packet model
CN110163300A (en) * 2019-05-31 2019-08-23 北京金山云网络技术有限公司 A kind of image classification method, device, electronic equipment and storage medium
CN110738261A (en) * 2019-10-16 2020-01-31 北京百度网讯科技有限公司 Image classification and model training method and device, electronic equipment and storage medium
CN111126384A (en) * 2019-12-12 2020-05-08 创新奇智(青岛)科技有限公司 Commodity classification system and method based on feature fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cotton foreign fiber classification method based on adaptive threshold segmentation and moments; Liu Shuangxi et al.; Transactions of the Chinese Society of Agricultural Engineering (Supplement 2); full text *
Identification of cashmere and wool fibers by applying convolutional networks and deep learning theory; Wang Fei et al.; Journal of Textile Research; Vol. 38, No. 12, pp. 150-156 *

Also Published As

Publication number Publication date
CN111767959A (en) 2020-10-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant