CN111191665A - Image classification method and device and electronic equipment - Google Patents
Image classification method and device and electronic equipment
- Publication number
- CN111191665A (publication of application CN201811354310.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- analyzed
- classification
- subclass
- determining
- Prior art date
- 2018-11-14
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the application discloses an image classification method and device and electronic equipment. The method comprises the following steps: acquiring at least one image to be analyzed of a target object; classifying each image to be analyzed and determining the image classification corresponding to each image to be analyzed; calculating the similarity between each image to be analyzed and the standard image corresponding to at least one image subclass under its image classification; determining each image subclass whose similarity is greater than a preset similarity threshold as an image subclass corresponding to that image to be analyzed; and determining a comprehensive classification result for the target object according to the image subclasses respectively corresponding to the images to be analyzed. The method and the device thereby accurately classify the at least one image to be analyzed of the target object, provide a basis for subsequently determining the comprehensive classification result, and in turn improve the accuracy of the comprehensive classification result.
Description
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to an image classification method and apparatus, and an electronic device.
Background
Image classification refers to a technique in which a computer, rather than human visual interpretation, assigns an image, or each pixel or region in an image, to one of several categories. Image classification improves the efficiency of later analysis based on the classification result. However, related image classification techniques in the prior art can only achieve a preliminary classification of images and cannot further determine the subclass to which an image belongs within its classification. For example, an image to be analyzed can only be determined to show a peony, but not which variety of peony it is.
Disclosure of Invention
In view of this, embodiments of the present application provide an image classification method and apparatus, and an electronic device, which can solve the above technical problems.
In order to solve the above problem, the embodiments of the present application mainly provide the following technical solutions:
in a first aspect, an embodiment of the present application provides an image classification method, where the method includes:
acquiring at least one image to be analyzed of a target object;
classifying each image to be analyzed, and determining the image classification corresponding to any image to be analyzed, wherein the image classification comprises at least one image subclass;
calculating the similarity of any image to be analyzed and a standard image corresponding to at least one image subclass corresponding to the image to be analyzed;
determining the image subclasses with the similarity larger than a preset similarity threshold as image subclasses corresponding to any images to be analyzed;
and determining a comprehensive classification result aiming at the target object according to the image subclasses respectively corresponding to the images to be analyzed.
In a second aspect, an embodiment of the present application further provides an image classification apparatus, including:
the image acquisition module is used for acquiring at least one image to be analyzed of the target object;
the image classification module is used for classifying each image to be analyzed and determining the image classification corresponding to any image to be analyzed, wherein the image classification comprises at least one image subclass;
the similarity calculation module is used for calculating the similarity of any image to be analyzed and the standard image corresponding to the at least one image subclass corresponding to the image to be analyzed;
the subclass determining module is used for determining the image subclass with the similarity larger than a preset similarity threshold as the image subclass corresponding to any image to be analyzed;
and the comprehensive analysis module is used for determining a comprehensive classification result for the target object according to the image subclasses respectively corresponding to the images to be analyzed.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
at least one processor;
and at least one memory and a bus connected with the processor; wherein,
the processor and the memory complete mutual communication through the bus;
the processor is used for calling program instructions in the memory so as to execute the image classification method.
In a fourth aspect, embodiments of the present application further provide a non-transitory computer-readable storage medium storing computer instructions, which cause a computer to execute the image classification method.
The technical scheme provided by the embodiment of the application has the following beneficial effects: the method comprises the steps of obtaining at least one image to be analyzed of a target object, classifying the images to be analyzed, determining the image classification corresponding to any image to be analyzed, wherein the image classification comprises at least one image subclass, calculating the similarity of any image to be analyzed and a standard image corresponding to the at least one image subclass corresponding to the image to be analyzed, and determining the image subclass with the similarity larger than a preset similarity threshold value as the image subclass corresponding to any image to be analyzed, so that the purpose of accurately classifying the at least one image to be analyzed of the target object is achieved, and a basis is provided for subsequently determining a comprehensive classification result; and determining a comprehensive classification result aiming at the target object according to the image subclasses respectively corresponding to the images to be analyzed, so that a multi-dimensional and comprehensive classification result with higher accuracy aiming at the target object can be obtained.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic flowchart illustrating an image classification method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart illustrating an image classification method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram illustrating an image classification apparatus provided in an embodiment of the present application;
fig. 4 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all of the associated listed items and all combinations of one or more of them.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
In the prior art, image classification can only determine the major category to which an image belongs and cannot determine the minor category; for example, it can only be determined that a certain image shows a peony, but not which variety of peony it is. The image classification precision is therefore low, which directly affects the efficiency of subsequent analysis based on the classification result.
The application provides an image classification method, an image classification device, an electronic device and a computer-readable storage medium, which aim to solve the above technical problems in the prior art.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
In one embodiment, as shown in fig. 1, the present application provides an image classification method, including: step S101 to step S105.
Step S101, at least one image to be analyzed of the target object is obtained.
In the embodiment of the present application, the target object may be determined according to actual needs, and specifically, the target object may be directed to a human, an animal, a plant, or another type.
In practical application, at least one image to be analyzed of the target object may be an image of the same position of the target object or an image of different positions. For example, assuming that two images to be analyzed of the target object are acquired, if the target object is a person, one of the two images may be a tongue image and the other may be a whole face image.
In practical application, the image to be analyzed may be acquired by a current user in real time by using a terminal device having an image acquisition function, or may be acquired from image data stored in a storage medium such as a database. Specifically, the terminal device may be an electronic device such as a mobile phone, a Pad, a notebook, and a wearable device.
Step S102, classifying the images to be analyzed, and determining the image classification corresponding to any image to be analyzed, wherein the image classification comprises at least one image subclass.
In the embodiment of the present application, the image classification is used to represent the type (i.e. the large class) corresponding to the target object. For example, if the target object is a person (for convenience of description, the object to which the target object points is referred to below as a person), the image classification may be the tongue, the face, and so on. Determining the image classification realizes a preliminary classification of the image to be analyzed, narrows the range of candidate image subclasses, and thereby provides a guarantee for subsequently improving the efficiency of the similarity calculation.
For example, if the image classification corresponding to the image to be analyzed is the tongue, the image subclasses under that classification correspond to more specific classifications of the tongue, for instance an image subclass for tongue color or an image subclass for tongue texture.
Step S103, calculating the similarity of the standard image corresponding to any image to be analyzed and at least one image subclass corresponding to the image to be analyzed.
In the embodiment of the application, the similarity is used for representing how similar any image to be analyzed is to the standard image corresponding to each of the at least one image subclass under its image classification.
In practical application, assume that there are six image classifications, namely image classification 1 to image classification 6, and that two images to be analyzed are obtained, namely image 1 to be analyzed and image 2 to be analyzed, where image 1 to be analyzed corresponds to image classification 1 and image 2 to be analyzed corresponds to image classification 2. If the subclasses of image classification 1 are subclass a and subclass b, and the subclass of image classification 2 is subclass c, then the similarities between image 1 to be analyzed and the standard images corresponding to subclass a and subclass b, and the similarity between image 2 to be analyzed and the standard image corresponding to subclass c, are calculated respectively.
In practical application, the histogram algorithm, the perceptual hash algorithm, the average hash algorithm, the difference hash algorithm, the scale-invariant feature transform (SIFT) algorithm, and the like may be used to calculate the similarity, which is not described herein again; a minimal sketch of one such measure is given below.
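As a concrete illustration only, the following is a minimal sketch of one of the measures mentioned above (the average hash). The patent does not prescribe any particular implementation; the function names, the Pillow dependency, and the 8x8 hash size are assumptions made here for illustration.

```python
# Minimal average-hash similarity sketch (assumes Pillow is available and that the
# image to be analyzed and the standard image are ordinary image files).
from PIL import Image

def average_hash(path, hash_size=8):
    """Downscale to hash_size x hash_size grayscale and threshold each pixel at the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hash_similarity(path_a, path_b, hash_size=8):
    """Similarity in [0, 1]: 1 minus the normalized Hamming distance between the two hashes."""
    ha, hb = average_hash(path_a, hash_size), average_hash(path_b, hash_size)
    distance = sum(a != b for a, b in zip(ha, hb))
    return 1.0 - distance / (hash_size * hash_size)
```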
Step S104, determining the image subclass with similarity greater than a predetermined similarity threshold as the image subclass corresponding to any image to be analyzed.
In practical application, the greater the similarity, the more alike the two images are; setting the similarity threshold establishes the criterion by which image subclasses are assigned.
For example, suppose the image classification corresponding to image 1 to be analyzed is the tongue (i.e. the image classification), and this classification includes 3 image subclasses, namely subclass 1, subclass 2, and subclass 3. The similarity between image 1 to be analyzed and the standard image corresponding to subclass 1 is calculated to be 70%, that for subclass 2 is 80%, and that for subclass 3 is 75%. If the preset similarity threshold is 70%, subclass 2 and subclass 3 are determined to be the image subclasses corresponding to image 1 to be analyzed, since only their similarities are strictly greater than the threshold.
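The selection rule of step S104 can be sketched as follows; the dictionary keys and the 0.70 threshold simply mirror the example above and are not fixed by the patent.

```python
# Keep every subclass whose similarity is strictly greater than the preset threshold.
def select_subclasses(similarities, threshold=0.70):
    """similarities: dict mapping subclass name -> similarity in [0, 1]."""
    return [name for name, s in similarities.items() if s > threshold]

# Reproduces the example above: subclass 2 (0.80) and subclass 3 (0.75) are kept,
# subclass 1 (0.70) is not, because 0.70 is not strictly greater than the threshold.
print(select_subclasses({"subclass 1": 0.70, "subclass 2": 0.80, "subclass 3": 0.75}))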
Step S105, determining a comprehensive classification result for the target object according to the image subclasses respectively corresponding to the images to be analyzed.
According to the method and the device, the comprehensive analysis is carried out on the image subclasses respectively corresponding to the images to be analyzed, so that the purpose of comprehensively determining the comprehensive classification result is achieved, and the accuracy of the comprehensive classification result is improved.
For example, if the image subclasses determined in step S104 include three image subclasses, namely image subclass A, image subclass B, and image subclass C, the embodiment of the present application performs analysis again on these three image subclasses to obtain the comprehensive classification result corresponding to the target object.
In the embodiment of the application, at least one image to be analyzed of a target object is obtained, each image to be analyzed is classified, the image classification corresponding to any image to be analyzed is determined, the image classification comprises at least one image subclass, the similarity of the standard image corresponding to the image to be analyzed and the image subclass corresponding to the image to be analyzed is calculated, the image subclass with the similarity larger than a preset similarity threshold value is determined as the image subclass corresponding to the image to be analyzed, the purpose of accurately classifying the at least one image to be analyzed of the target object is achieved, and a basis is provided for subsequently determining a comprehensive classification result; and determining a comprehensive classification result aiming at the target object according to the image subclasses respectively corresponding to the images to be analyzed, so that a multi-dimensional and comprehensive classification result with higher accuracy aiming at the target object can be obtained.
In the embodiment of the present application, when a human being is a target object, the image classification is divided according to the structure of the human body, for example, the image classification may be divided into tongue, hand, eye, face, limb, and the like.
In one implementation manner, in this embodiment of the present application, the step S102 of classifying each image to be analyzed and determining an image classification corresponding to any image to be analyzed includes: step S1021 and step S1022.
Step S1021 (not shown), inputting any image to be analyzed into a preset image recognition model, and determining a corresponding image classification.
In practical application, the image recognition model can be a model constructed using a convolutional neural network. Before the model is used, the constructed convolutional neural network is trained on a pre-collected sample set to obtain the image recognition model, which is then used to automatically recognize the image classification corresponding to the image to be analyzed; a hedged sketch of such a model is shown below.
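By way of illustration only, a preset image recognition model of this kind could be assembled roughly as below. The patent does not name a network, so the ResNet-18 backbone, the PyTorch/torchvision stack, the class list, and the weight-file name are all assumptions, not the patent's method.

```python
# Hedged sketch of step S1021: a generic CNN classifier for the large classes.
import torch
from PIL import Image
from torchvision import models, transforms

IMAGE_CLASSES = ["tongue", "hand", "eye", "face", "limb"]  # example large classes from the description

model = models.resnet18(weights=None)                       # assumed backbone, torchvision >= 0.13
model.fc = torch.nn.Linear(model.fc.in_features, len(IMAGE_CLASSES))
# model.load_state_dict(torch.load("image_recognition_model.pt"))  # hypothetical trained weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_image(path):
    """Return the predicted image classification (large class) for one image to be analyzed."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return IMAGE_CLASSES[int(logits.argmax(dim=1))]
```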
Step S1022 (not shown in the figure), determining the image classification corresponding to any image to be analyzed according to a received selection instruction in which the user specifies the image classification for that image.
In practical application, selectable image classifications (such as image classification 1, image classification 2 and image classification 3) can be configured on a preset interactive interface, and a user inputs the selected image classification through the interactive interface so as to determine the image classification corresponding to any image to be analyzed according to a selection instruction input by the user.
The embodiment of the present application thus provides two ways of determining the image classification; in practical applications, either step S1021 or step S1022 alone may be used.
In another implementation, as shown in fig. 1, step S103 includes: step S1031 and step S1032, wherein,
step S1031 (not shown in the drawings), extracting image features of any image to be analyzed based on a preset feature extraction model for image classification;
step S1032 (not shown in the figure), calculating the similarity between the extracted image features and the image features of the standard images corresponding to the image subclasses corresponding to any image to be analyzed.
By using a feature extraction model specific to each image classification, the embodiment of the application ensures the accuracy of the extracted image features, increases the extraction speed, and thereby ensures the accuracy of the similarity calculation.
In practical application, the feature extraction models for different image classifications can be convolutional neural network (CNN) models, so that the image features of any image to be analyzed are extracted in a multi-dimensional manner. In particular, the image features of the standard images corresponding to the respective image subclasses may be pre-stored, so that the step of extracting those features is omitted during the similarity calculation, which simplifies the calculation; a sketch follows.
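The following sketch of steps S1031/S1032 rests on assumptions not stated in the patent: the feature extractor produces fixed-length vectors, the standard-image features are pre-computed, and cosine similarity stands in for whatever measure is actually chosen.

```python
# Illustrative similarity over feature vectors (NumPy assumed; names are not the patent's).
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def subclass_similarities(image_feature, standard_features):
    """standard_features: dict mapping subclass name -> pre-stored feature vector of its standard image."""
    return {name: cosine_similarity(image_feature, feat)
            for name, feat in standard_features.items()}
```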
In another implementation manner, in this embodiment of the present application, the step S103 of calculating the similarity between any image to be analyzed and the standard images corresponding to the at least one image subclass corresponding to that image includes: when the number of standard images corresponding to any image subclass exceeds a predetermined number threshold, calculating the average of the similarities between the corresponding image to be analyzed and each standard image of that image subclass, and taking the average as the similarity between the image to be analyzed and that image subclass.
In practical application, let the image to be analyzed be image X. If the number of standard images corresponding to image subclass A is 1 (i.e. equal to the predetermined number threshold and not exceeding it), the similarity between image X and image subclass A has only one result, namely the similarity between image X and that single standard image. If the number of standard images corresponding to image subclass A is 3, there are three similarity results, and in this case the average of the three similarities is taken as the similarity between image X and image subclass A, as in the sketch below.
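The averaging rule can be written down directly; this is only a sketch of the description above, with names chosen for illustration.

```python
# When a subclass has several standard images, its similarity is the mean of the
# per-standard-image similarities (the single-image case reduces to that one score).
def subclass_similarity(image_feature, standard_feature_list, similarity_fn):
    scores = [similarity_fn(image_feature, f) for f in standard_feature_list]
    return sum(scores) / len(scores)
```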
In yet another implementation, as shown in fig. 1, the method further comprises:
step S106 (not shown in the figure), obtaining the priority information of the image classification corresponding to each determined image to be analyzed;
In this case, the step S105 of determining a comprehensive classification result for the target object according to the image subclasses respectively corresponding to the images to be analyzed includes: determining the comprehensive classification result for the target object according to the image subclasses corresponding to the images to be analyzed, based on the priority information of the image classifications corresponding to those images.
In the embodiment of the application, the priority information is used for representing the importance degree for determining the comprehensive classification result.
In practical applications, suppose the image classifications determined in step S102 for the images to be analyzed include classification 1, classification 2, and classification 3, and that the three classifications share the same set of image subclasses, namely subclasses 1 to 5. If the image subclass determined from classification 1 is subclass 2, the subclass determined from classification 2 is subclass 3, and the subclass determined from classification 3 is subclass 4, the comprehensive classification result can be decided according to the priority information: assuming classification 1 has the highest priority, the final comprehensive classification result is subclass 2, as in the sketch below.
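The priority rule of this example can be sketched as follows; the data structures and names are assumptions made for illustration, not part of the claims.

```python
# The subclass coming from the highest-priority image classification wins.
def comprehensive_result(subclass_by_classification, priority):
    """
    subclass_by_classification: dict classification -> subclass determined from it
    priority: dict classification -> priority value (higher means more important)
    """
    best = max(subclass_by_classification, key=lambda c: priority.get(c, 0))
    return subclass_by_classification[best]

# Reproduces the example above: classification 1 has the highest priority, so subclass 2 is the result.
print(comprehensive_result(
    {"classification 1": "subclass 2", "classification 2": "subclass 3", "classification 3": "subclass 4"},
    {"classification 1": 3, "classification 2": 2, "classification 3": 1},
))
```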
In another implementation manner, the step S105 determines a comprehensive classification result for the target object according to the image subclasses respectively corresponding to the images to be analyzed, including:
and determining a comprehensive classification result aiming at the target object according to the image subclasses corresponding to the images to be analyzed and the object related information of the target object.
Specifically, the object-related information includes at least one of: object attribute information, and environment information corresponding to the target object.
In practical application, the object attribute information may be determined according to the type of the target object; for example, when the target object is a person, the object attribute information may be the age, sex, occupation, and so on of the target object, and the environmental information corresponding to the target object may be factors such as the living area (e.g., Beijing), the climate, and the like.
The method and the device for determining the comprehensive classification result take the object related information of the target object as a consideration factor for determining the comprehensive classification result, and provide necessary basis for analyzing reasons and providing suggestions after the comprehensive classification result is determined.
In yet another embodiment, as shown in fig. 2, the method comprises: step S201 to step S206, wherein the steps S201, S202, S203, S204, and S205 are the same as or similar to the steps S101, S102, S103, S104, and S105, respectively, and are not repeated herein.
In step S206, if the comprehensive classification result is a predetermined result and a predetermined image classification is not among the determined image classifications, prompt information for acquiring at least one image to be analyzed of the predetermined image classification is generated, so that the comprehensive classification result can be re-determined.
For example, assume that the determined comprehensive classification result is an image subclass indicating an abnormal tongue-coating color. If the face image classification is not among the image classifications determined in step S202, the result of abnormal tongue-coating color may be inaccurate; to improve the accuracy of the comprehensive classification result, an image to be analyzed of the face image classification (i.e., the predetermined image classification) needs to be acquired so that the comprehensive classification result can be re-determined.
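Step S206 amounts to a completeness check. The sketch below assumes string labels for results and classifications and simply returns the prompt text (or None) rather than driving an actual user interface; the names are illustrative.

```python
# If the comprehensive result is a predetermined one and a required image classification
# was never provided, generate prompt information so the missing image can be acquired
# and the comprehensive classification result re-determined.
def check_required_classification(comprehensive_result, determined_classifications,
                                  predetermined_result, required_classification):
    if (comprehensive_result == predetermined_result
            and required_classification not in determined_classifications):
        return (f"Result '{comprehensive_result}' needs confirmation: please provide "
                f"at least one image to be analyzed of classification '{required_classification}'.")
    return None

print(check_required_classification(
    "abnormal tongue coating color", {"tongue"}, "abnormal tongue coating color", "face"))
```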
In an embodiment, fig. 3 shows a schematic structural diagram of an image classification apparatus provided in an embodiment of the present application; as shown in fig. 3, the apparatus 40 includes:
an image obtaining module 401, configured to obtain at least one image to be analyzed of a target object;
an image classification module 402, configured to classify each image to be analyzed, and determine an image classification corresponding to any image to be analyzed, where the image classification includes at least one image subclass;
a similarity calculation module 403, configured to calculate the similarity between any image to be analyzed and the standard image corresponding to each of the at least one image subclass corresponding to that image;
a subclass determining module 404, configured to determine an image subclass with a similarity greater than a predetermined similarity threshold as an image subclass corresponding to any image to be analyzed;
and a comprehensive analysis module 405, configured to determine a comprehensive classification result for the target object according to the image subclasses respectively corresponding to the images to be analyzed.
The image classification device obtains at least one image to be analyzed of a target object, classifies the images to be analyzed, determines the image classification corresponding to any image to be analyzed, wherein the image classification comprises at least one image subclass, calculates the similarity of the standard image corresponding to the image to be analyzed and the image subclass corresponding to the image to be analyzed, and determines the image subclass with the similarity larger than a preset similarity threshold value as the image subclass corresponding to the image to be analyzed, so that the purpose of accurately classifying the at least one image to be analyzed of the target object is realized, and a basis is provided for subsequently determining a comprehensive classification result; and determining a comprehensive classification result aiming at the target object according to the image subclasses respectively corresponding to the images to be analyzed, so that a multi-dimensional and comprehensive classification result with higher accuracy aiming at the target object can be obtained.
Further, when a human being is a target object, the image classification is divided by a human body structure.
Further, the image classification module 402 is configured to input any image to be analyzed into a preset image recognition model, and determine a corresponding image classification; or determining the image classification corresponding to any image to be analyzed according to the received selection instruction of the user for the image classification input by any image to be analyzed.
Further, the similarity calculation module 403 is further configured to extract image features of any image to be analyzed based on a preset feature extraction model for image classification; and calculating the similarity of the image characteristics and the image characteristics of the standard images corresponding to the image subclasses corresponding to any image to be analyzed.
Further, the similarity calculation module 403 is further configured to calculate an average value of the similarity between the corresponding image to be analyzed and each standard image corresponding to any image subclass when the number of standard images corresponding to any image subclass exceeds a predetermined number threshold, and take the average value as the similarity between the image to be analyzed and each standard image corresponding to any image subclass.
Further, the comprehensive analysis module 405 is further configured to obtain priority information of image classification corresponding to each determined image to be analyzed; and determining a comprehensive classification result aiming at the target object based on the priority information of each image to be analyzed and according to the image subclass corresponding to each image to be analyzed.
Further, the comprehensive analysis module 405 is further configured to determine a comprehensive classification result for the target object according to the image subclasses corresponding to the images to be analyzed respectively and by combining the object-related information of the target object.
Further, the comprehensive analysis module 405 is further configured to generate prompt information for acquiring at least one to-be-analyzed image of the predetermined image classification when the comprehensive classification result is the predetermined result and the predetermined image classification is not included in the determined image classification, so as to re-determine the comprehensive classification result.
Further, the object-related information comprises at least one of: object attribute information, and environment information corresponding to the target object.
The image classification apparatus of the present embodiment can perform the image classification method provided in the embodiments of the present application, and the implementation principles thereof are similar, which is not described herein again.
In one embodiment, an embodiment of the present application provides an electronic device. As shown in fig. 4, the electronic device 600 includes a processor 6001 and a memory 6003, which are coupled, for example via a bus 6002. The electronic device 600 may further include a transceiver 6006 used to provide the electronic device with communication with other devices. It should be noted that, in practical applications, the number of transceivers 6006 is not limited to one, and the structure of the electronic device 600 does not limit the embodiments of the present application.
The processor 6001 may be a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 6001 may also be a combination that performs a computing function, for example a combination including one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 6002 may include a path that conveys information between the aforementioned components. The bus 6002 may be a PCI bus, an EISA bus, or the like. The bus 6002 can be divided into an address bus, a data bus, a control bus, and so forth. For ease of illustration, only one thick line is shown in FIG. 4, but this does not indicate only one bus or one type of bus.
The memory 6003 is used to store application code that implements aspects of the subject application, and execution is controlled by the processor 6001. The processor 6001 is configured to execute application program code stored in the memory 6003 to implement the image classification apparatus provided by the embodiment shown in FIG. 3.
The electronic device provided by the embodiment of the application obtains at least one image to be analyzed of a target object, classifies each image to be analyzed, determines the image classification corresponding to any image to be analyzed, wherein the image classification comprises at least one image subclass, calculates the similarity of each image to be analyzed and a standard image corresponding to the at least one image subclass corresponding to the image to be analyzed, and determines the image subclass with the similarity larger than a preset similarity threshold value as the image subclass corresponding to any image to be analyzed, so that the purpose of accurately classifying the at least one image to be analyzed of the target object is realized, and a basis is provided for subsequently determining a comprehensive classification result; and determining a comprehensive classification result aiming at the target object according to the image subclasses respectively corresponding to the images to be analyzed, so that a multi-dimensional and comprehensive classification result with higher accuracy aiming at the target object can be obtained.
In one embodiment, the present application provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the image classification method described above.
Compared with the prior art, the non-transitory computer-readable storage medium provided by the embodiment of the application obtains at least one image to be analyzed of a target object, classifies each image to be analyzed, determines the image classification corresponding to any image to be analyzed, wherein the image classification comprises at least one image subclass, calculates the similarity of any image to be analyzed and a standard image corresponding to at least one image subclass corresponding to any image to be analyzed, determines the image subclass with the similarity larger than a preset similarity threshold as the image subclass corresponding to any image to be analyzed, achieves the purpose of accurately classifying at least one image to be analyzed of the target object, and provides a basis for subsequently determining a comprehensive classification result; and determining a comprehensive classification result aiming at the target object according to the image subclasses respectively corresponding to the images to be analyzed, so that a multi-dimensional and comprehensive classification result with higher accuracy aiming at the target object can be obtained.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present application, and these modifications and improvements should also fall within the protection scope of the present application.
Claims (10)
1. An image classification method, comprising:
acquiring at least one image to be analyzed of a target object;
classifying each image to be analyzed, and determining an image classification corresponding to any image to be analyzed, wherein the image classification comprises at least one image subclass;
calculating the similarity between any image to be analyzed and the standard image corresponding to at least one image subclass corresponding to the image to be analyzed;
determining the image subclass with the similarity larger than a preset similarity threshold as the image subclass corresponding to any image to be analyzed;
and determining a comprehensive classification result aiming at the target object according to the image subclasses respectively corresponding to the images to be analyzed.
2. The method according to claim 1, wherein the classifying each image to be analyzed and determining the image classification corresponding to any image to be analyzed comprises:
inputting any image to be analyzed into a preset image recognition model, and determining corresponding image classification; or
determining the image classification corresponding to any image to be analyzed according to a received selection instruction in which the user specifies the image classification for that image to be analyzed.
3. The method according to claim 1, wherein the calculating the similarity between any image to be analyzed and the standard image corresponding to the at least one image subclass corresponding to the image to be analyzed comprises:
extracting image features of any image to be analyzed based on a preset feature extraction model aiming at the image classification;
and calculating the similarity of the image characteristics and the image characteristics of the standard images corresponding to the image subclasses corresponding to any image to be analyzed.
4. The method according to claim 1, wherein the calculating the similarity between any image to be analyzed and the standard image corresponding to the at least one image subclass corresponding to the image to be analyzed comprises:
when the number of standard images corresponding to any image subclass exceeds a preset number threshold, calculating the average value of the similarities between the corresponding image to be analyzed and each standard image of that image subclass, and taking the average value as the similarity between the image to be analyzed and that image subclass.
5. The method of claim 1, further comprising:
acquiring the priority information of image classification corresponding to each determined image to be analyzed;
determining a comprehensive classification result for the target object according to the image subclasses respectively corresponding to the images to be analyzed, wherein the comprehensive classification result comprises the following steps:
and determining a comprehensive classification result aiming at the target object based on the priority information of each image to be analyzed and according to the image subclass corresponding to each image to be analyzed.
6. The method according to claim 1, wherein the determining a comprehensive classification result for the target object according to the image subclasses respectively corresponding to the images to be analyzed comprises:
and determining a comprehensive classification result aiming at the target object according to the image subclasses corresponding to the images to be analyzed and the object related information of the target object.
7. The method of claim 6, wherein the object-related information comprises at least one of:
object attribute information; environmental information corresponding to the target object.
8. An image classification apparatus, comprising:
the image acquisition module is used for acquiring at least one image to be analyzed of the target object;
the image classification module is used for classifying each image to be analyzed and determining the image classification corresponding to any image to be analyzed, wherein the image classification comprises at least one image subclass;
the similarity calculation module is used for calculating the similarity between any image to be analyzed and the standard image corresponding to the at least one image subclass corresponding to the image to be analyzed;
a subclass determining module, configured to determine an image subclass with a similarity greater than a predetermined similarity threshold as an image subclass corresponding to the any image to be analyzed;
and the comprehensive analysis module is used for determining a comprehensive classification result aiming at the target object according to the image subclasses respectively corresponding to the images to be analyzed.
9. An electronic device, comprising:
at least one processor;
and at least one memory and a bus connected with the processor; wherein,
the processor and the memory complete mutual communication through the bus;
the processor is configured to invoke program instructions in the memory to perform the image classification method of any of claims 1 to 7.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the image classification method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811354310.8A CN111191665A (en) | 2018-11-14 | 2018-11-14 | Image classification method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811354310.8A CN111191665A (en) | 2018-11-14 | 2018-11-14 | Image classification method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111191665A (en) | 2020-05-22 |
Family
ID=70710603
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811354310.8A Pending CN111191665A (en) | 2018-11-14 | 2018-11-14 | Image classification method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111191665A (en) |
- 2018-11-14: application CN201811354310.8A filed in China (CN); published as CN111191665A (en); status: active, Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103034836A (en) * | 2011-09-29 | 2013-04-10 | 株式会社理光 | Road sign detection method and device |
US20170270115A1 (en) * | 2013-03-15 | 2017-09-21 | Gordon Villy Cormack | Systems and Methods for Classifying Electronic Information Using Advanced Active Learning Techniques |
CN104933044A (en) * | 2014-03-17 | 2015-09-23 | 北京奇虎科技有限公司 | Application uninstalling reason classification method and classification apparatus |
CN104573744A (en) * | 2015-01-19 | 2015-04-29 | 上海交通大学 | Fine granularity classification recognition method and object part location and feature extraction method thereof |
CN105911602A (en) * | 2016-05-03 | 2016-08-31 | 东莞市华盾电子科技有限公司 | Metal classification image display method and device |
CN106250821A (en) * | 2016-07-20 | 2016-12-21 | 南京邮电大学 | The face identification method that a kind of cluster is classified again |
CN106960017A (en) * | 2017-03-03 | 2017-07-18 | 掌阅科技股份有限公司 | E-book is classified and its training method, device and equipment |
CN108304847A (en) * | 2017-11-30 | 2018-07-20 | 腾讯科技(深圳)有限公司 | Image classification method and device, personalized recommendation method and device |
CN108304882A (en) * | 2018-02-07 | 2018-07-20 | 腾讯科技(深圳)有限公司 | A kind of image classification method, device and server, user terminal, storage medium |
Non-Patent Citations (2)
Title |
---|
RICARDO CERRI et al.: "Hierarchical multi-label classification using local neural networks", Journal of Computer and System Sciences, vol. 80, no. 01, 28 February 2014 (2014-02-28), pages 39-56 *
ZHAO Bo: "Research on Key Technologies of Fine-Grained Image Classification, Segmentation, Generation and Retrieval", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 2018, 15 July 2018 (2018-07-15), pages 138-70 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115587228A (en) * | 2022-11-10 | 2023-01-10 | 百度在线网络技术(北京)有限公司 | Object query method, object storage method and device |
CN115587228B (en) * | 2022-11-10 | 2024-05-14 | 百度在线网络技术(北京)有限公司 | Object query method, object storage method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||