CN110705653A - Image classification method, image classification device and terminal equipment

Info

Publication number
CN110705653A
Authority
CN
China
Prior art keywords
image
detected
features
feature
histogram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911006182.2A
Other languages
Chinese (zh)
Inventor
贾玉虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2019-10-22
Filing date
2019-10-22
Publication date
2020-01-17
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911006182.2A
Publication of CN110705653A
Legal status: Pending

Classifications

    • G06F18/24 Pattern recognition; Analysing; Classification techniques
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/253 Fusion techniques of extracted features
    • G06V10/56 Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides an image classification method, an image classification device and a terminal device. The method comprises: acquiring an image to be detected; respectively acquiring global features and color histogram features of the image to be detected; reducing the dimensionality of the color histogram features to one dimension to obtain one-dimensional color histogram features; performing feature splicing on the one-dimensional color histogram features and the one-dimensional global features to obtain one-dimensional splicing features of the image to be detected; and inputting the one-dimensional splicing features of the image to be detected into a classifier, which outputs the category of the image to be detected. By the method and the device, the accuracy of image classification can be improved.

Description

Image classification method, image classification device and terminal equipment
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image classification method, an image classification device, and a terminal device.
Background
With the rapid development of deep learning technology, deep learning has become the state of the art in image classification. Existing image classification methods generally use deep learning to extract global features of the whole image for classification. For images of similar categories, however, global features alone are difficult to distinguish, so the accuracy of image classification is low.
Disclosure of Invention
The application provides an image classification method, an image classification device and a terminal device, which are used for improving the accuracy of image classification.
A first aspect of the present application provides an image classification method, the image classification method comprising:
acquiring an image to be detected;
respectively acquiring a global feature and a color histogram feature of the image to be detected, wherein the dimension of the global feature is one-dimensional, and the dimension of the color histogram feature is two-dimensional;
reducing the dimensionality of the color histogram feature to one dimension to obtain a one-dimensional color histogram feature;
performing feature splicing on the one-dimensional color histogram features and the one-dimensional global features to obtain one-dimensional splicing features of the image to be detected;
inputting the one-dimensional splicing features of the image to be detected into a classifier, and outputting the category of the image to be detected by the classifier.
A second aspect of the present application provides an image classification apparatus comprising:
the image acquisition module is used for acquiring an image to be detected;
the characteristic acquisition module is used for respectively acquiring the global characteristic and the color histogram characteristic of the image to be detected, wherein the dimension of the global characteristic is one dimension, and the dimension of the color histogram characteristic is two dimensions;
the characteristic dimension reduction module is used for reducing the dimension of the color histogram characteristic to one dimension to obtain the one-dimensional color histogram characteristic;
the characteristic splicing module is used for carrying out characteristic splicing on the one-dimensional color histogram characteristic and the one-dimensional global characteristic to obtain a one-dimensional splicing characteristic of the image to be detected;
and the category output module is used for inputting the one-dimensional splicing features of the image to be detected into a classifier, and the classifier outputs the category of the image to be detected.
A third aspect of the present application provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the image classification method according to the first aspect when executing the computer program.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the image classification method as described in the first aspect above.
A fifth aspect of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to perform the steps of the image classification method as described in the first aspect above.
According to the scheme, the image to be detected is processed in two branches: the global features and the color histogram features of the image to be detected are obtained respectively, the dimensionality of the color histogram features is reduced to one dimension to reduce the amount of computation, the one-dimensional splicing features obtained by splicing the one-dimensional global features and the one-dimensional color histogram features are input to the classifier, and the category of the image to be detected is obtained from the classifier. Because the color discrimination between different images is large (for example, between snow scenes, beaches, sunsets and other categories of natural-scene images), color histogram features with high discrimination are introduced as auxiliary features on top of the global features of the image to be detected. Reducing the dimensionality of the color histogram features to one dimension reduces the amount of computation and improves the efficiency of image classification; combining the global features of the image to be detected with the dimension-reduced color histogram features increases the number of features used for image classification, improves the accuracy of image classification, and solves the problem that images of similar categories are difficult to distinguish by global features alone.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without inventive effort.
Fig. 1 is a schematic flow chart of an implementation of an image classification method according to an embodiment of the present application;
FIG. 2a is a diagram illustrating an example of a training process for a classification model; FIG. 2b is a diagram illustrating an example of a training process for a classifier;
fig. 3 is a schematic flow chart illustrating an implementation of an image classification method according to a second embodiment of the present application;
FIG. 4a is an exemplary graph of the color histogram features of a snow scene; FIG. 4b is an exemplary graph of the color histogram features of a beach;
fig. 5 is a schematic diagram of an image classification apparatus according to a third embodiment of the present application;
fig. 6 is a schematic diagram of a terminal device provided in the fourth embodiment of the present application;
fig. 7 is a schematic diagram of a terminal device provided in the fifth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In particular implementations, the terminal devices described in the embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads). It should also be understood that in some embodiments the device is not a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).
In the discussion that follows, a terminal device that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal device may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
It should be understood that the sequence numbers of the steps in the embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Referring to fig. 1, which is a schematic view of an implementation flow of an image classification method provided in an embodiment of the present application, where the image classification method is applied to a terminal device, as shown in the figure, the image classification method may include the following steps:
and step S101, acquiring an image to be detected.
In this embodiment of the application, the image to be detected may be obtained locally from the terminal device, or may be received from another device, which is not limited here. Obtaining the image to be detected locally may mean reading it from the memory of the terminal device, or obtaining a picture that has just been taken by the terminal device and has not yet been stored in the memory. When the image to be detected is a picture that has been taken but not yet stored, the picture can be classified before being stored in the memory, so that it does not need to be classified later.
The image to be detected may refer to an image whose category is to be detected. An image generally comprises a main body and a background: the main body is the object the image mainly represents, and the background is the scene that sets off the main body. The category of the image is determined by its main body; for example, if the main body of the image is a sunset, the category of the image is a sunset category, and if the main body is a blue sky, the category of the image is a sky category.
And step S102, respectively obtaining the global characteristic and the color histogram characteristic of the image to be detected.
The global features and the color histogram features are both image features of the image to be detected; the dimension of the global features is one dimension, and the dimension of the color histogram features is two dimensions. The global features are a feature vector extracted from the whole image to be detected, i.e., features of the image as a whole. The color histogram features are a feature vector extracted from the colors of the image to be detected, describing the distribution of colors in the image, and generally comprise a histogram feature of the red (R) channel, a histogram feature of the green (G) channel, and a histogram feature of the blue (B) channel, which describe the distributions of pixels on the R, G, and B channels respectively.
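For concreteness, the following Python sketch shows one way such per-channel histogram features could be computed. OpenCV and NumPy are illustrative choices; the patent does not prescribe a library, and the function name and file path are hypothetical.

```python
import cv2
import numpy as np

def extract_color_histogram(image_bgr: np.ndarray) -> np.ndarray:
    """Two-dimensional color histogram feature: one 256-bin histogram per
    channel, stacked into a (3, 256) array in R, G, B row order."""
    hists = []
    for channel in (2, 1, 0):   # OpenCV stores images as BGR; read R, G, B
        h = cv2.calcHist([image_bgr], [channel], None, [256], [0, 256])
        hists.append(h.ravel())
    return np.stack(hists, axis=0)

image = cv2.imread("scene.jpg")          # hypothetical path
hist_2d = extract_color_histogram(image)
print(hist_2d.shape)                     # (3, 256): the 2-D feature
```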
In this embodiment of the application, the image to be detected may be processed in two branches. Specifically, the image to be detected is copied to obtain two identical images, which may be referred to as a first image to be detected and a second image to be detected; one branch processes the first image to obtain the global features of the image to be detected, and the other branch processes the second image to obtain its color histogram features.
Optionally, the acquiring the global feature of the image to be detected includes:
down-sampling the image to be detected;
and inputting the down-sampled image to be detected into a classification model, and outputting the global features by the classification model.
In the embodiment of the application, down-sampling the image to be detected reduces its size and therefore the amount of computation. The classification model may be a model for obtaining the global features of the image to be detected and is used to output those global features; the specific type of the classification model is not limited here.
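A minimal sketch of this branch, assuming PyTorch with a ResNet-18 backbone standing in for the classification model; the patent specifies neither the model type, the down-sampled size, nor the feature length, so all three are assumptions here.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# A ResNet-18 stands in for the unspecified classification model (assumption);
# replacing its final layer with Identity makes it output a 1-D feature vector.
backbone = models.resnet18(weights=None)  # torchvision >= 0.13 API
backbone.fc = torch.nn.Identity()
backbone.eval()

def extract_global_feature(image: torch.Tensor) -> torch.Tensor:
    """image: (1, 3, H, W) float tensor; returns a 1-D global feature (512,)."""
    # Down-sample first to reduce the size of the image and the computation.
    small = F.interpolate(image, size=(224, 224), mode="bilinear",
                          align_corners=False)
    with torch.no_grad():
        feat = backbone(small)    # shape (1, 512)
    return feat.squeeze(0)        # one-dimensional global feature
```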
Optionally, the embodiment of the present application further includes:
and training the classification model.
In the embodiment of the application, before the classification model is used to obtain the global features of the image to be detected, it needs to be trained first, and the trained classification model is then used to obtain the global features. As shown in FIG. 2a, when training the classification model, a training sample is down-sampled, the down-sampled training sample is input to the classification model, the classification model outputs the global features of the training sample, and a loss function is used for back-propagation training.
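A hedged sketch of such a training loop; the cross-entropy loss, Adam optimizer, and hyperparameters below are illustrative assumptions, not prescribed by the patent, and the model is assumed to still carry a classification head whose output size matches the label set.

```python
import torch
import torch.nn.functional as F

def train_classification_model(model, loader, epochs=10, lr=1e-3, device="cpu"):
    """FIG. 2a: down-sample each training sample, run the classification model,
    and back-propagate a loss function (cross-entropy assumed here)."""
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()     # an illustrative loss function
    for _ in range(epochs):
        for images, labels in loader:           # loader yields (N,3,H,W), (N,)
            images, labels = images.to(device), labels.to(device)
            images = F.interpolate(images, size=(224, 224), mode="bilinear",
                                   align_corners=False)   # down-sample first
            loss = criterion(model(images), labels)
            optimizer.zero_grad()
            loss.backward()                     # back-propagation training
            optimizer.step()
    return model
```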
And step S103, reducing the dimensionality of the color histogram feature to one dimension to obtain a one-dimensional color histogram feature.
Optionally, the color histogram feature includes a histogram feature of a red channel, a histogram feature of a green channel, and a histogram feature of a blue channel, and before reducing the dimension of the color histogram feature to one dimension, the method further includes:
performing feature splicing on the histogram feature of the red channel, the histogram feature of the green channel and the histogram feature of the blue channel to obtain spliced histogram features;
accordingly, the reducing the dimension of the color histogram feature to one dimension includes:
and reducing the dimension of the spliced histogram features to one dimension.
In the embodiment of the application, the histogram features of the R, G, and B channels of the image to be detected may be feature-spliced according to a first preset condition. The first preset condition is a preset splicing strategy, including but not limited to the splicing order of the three channel histogram features; for example, splicing may be performed in R, G, B order or in G, R, B order, which is not limited here. Splicing the histogram features of the R, G, and B channels of the image to be detected yields the spliced histogram features.
Illustratively, suppose the histogram feature of the R channel of the image to be detected is v_R, that of the G channel is v_G, and that of the B channel is v_B. If the histogram features of the three channels are spliced in RGB order, the spliced histogram feature is v = {v_R, v_G, v_B}. If the global feature is w_1, then after splicing the global feature with the spliced histogram feature, the resulting splicing feature of the image to be detected may be w = {v_R, v_G, v_B, w_1} or w = {w_1, v_R, v_G, v_B}.
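Continuing the example, a NumPy sketch of steps S103 and S104 under the RGB splicing order assumed above; the helper name is hypothetical.

```python
import numpy as np

def splice_features(hist_2d: np.ndarray, global_feat: np.ndarray) -> np.ndarray:
    """hist_2d: (3, 256) color histogram feature; global_feat: 1-D global feature."""
    # Reduce the 2-D histogram feature to one dimension; row order R, G, B
    # matches the RGB splicing order (the first preset condition), so the
    # flattened vector is v = {v_R, v_G, v_B} of length 768.
    hist_1d = hist_2d.reshape(-1)
    # Splice the 1-D global feature after the 1-D histogram feature
    # (one choice of the second preset condition): w = {v_R, v_G, v_B, w_1}.
    return np.concatenate([hist_1d, global_feat])
```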
And step S104, performing feature splicing on the one-dimensional color histogram features and the one-dimensional global features to obtain one-dimensional splicing features of the image to be detected.
Because one-dimensional features require little computation and are processed efficiently, the two-dimensional color histogram features are first reduced to one dimension, and the one-dimensional color histogram features are then spliced with the one-dimensional global features; the dimension of the spliced feature vector is one dimension, which improves the efficiency of image classification.
In this embodiment of the present application, the one-dimensional color histogram features and the one-dimensional global features may be feature-spliced according to a second preset condition. The second preset condition may refer to a preset splicing strategy, including but not limited to the splicing order of the global features and the color histogram features; for example, the one-dimensional global features of the image to be detected may be spliced after its one-dimensional color histogram features, or before them, which is not limited here. The feature obtained by splicing the one-dimensional global features of the image to be detected with its one-dimensional color histogram features is the one-dimensional splicing feature of the image to be detected, meaning that the dimension of the splicing feature is one dimension.
And S105, inputting the one-dimensional splicing features of the image to be detected into a classifier, and outputting the category of the image to be detected by the classifier.
In the embodiment of the application, the splicing feature obtained by splicing the one-dimensional global features and the one-dimensional color histogram features of the image to be detected is input to the classifier, so that the classifier can classify the image according to both the global features and the color histogram features; that is, color histogram features that can distinguish the colors of the image are introduced into image classification, which can solve the problem that images of similar categories cannot be distinguished by global features alone. The classifier may be a model that classifies the image to be detected according to its image features. Optionally, the classifier in the present application may be a nonlinear classifier, such as a Support Vector Machine (SVM). A nonlinear classifier can effectively expand the classification dimensionality and mitigate the shortcomings of linear classifiers, such as softmax and fully connected layers, in nonlinear classification.
Optionally, the embodiment of the present application further includes:
training the classifier;
the training the classifier comprises:
obtaining the global characteristics of the training samples through the trained classification model;
acquiring color histogram features of the training samples;
and performing feature splicing on the global features and the color histogram features of the training samples to obtain splicing features of the training samples, and training the classifier according to the splicing features of the training samples.
In the embodiment of the application, before the classifier is used to obtain the category of the image to be detected, it needs to be trained first, and the trained classifier is then used to obtain the category. The training process of the classifier is shown in FIG. 2b: the trained classification model is used to obtain the global features of the training samples; meanwhile, the color histogram features of the training samples are obtained, the histogram features of the R, G, and B channels are feature-spliced, the global features of the training samples are spliced with the spliced histogram features, the resulting splicing features are input into the classifier, and target supervision is used for back-propagation training of the classifier. Target supervision here means supervised learning in deep learning, e.g., via a loss function. Optionally, after the color histogram features are spliced, the dimension of the spliced histogram features may be reduced to one dimension, and the global features of the training samples may then be spliced with the one-dimensional spliced histogram features.
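A minimal sketch of this training pipeline, reusing the hypothetical helpers from the earlier sketches and assuming scikit-learn's SVC as the nonlinear SVM; the patent names an SVM as one option but does not prescribe an implementation or a data format.

```python
import numpy as np
import torch
from sklearn.svm import SVC

def to_tensor(image_bgr: np.ndarray) -> torch.Tensor:
    """Convert a BGR uint8 image to a (1, 3, H, W) float RGB tensor."""
    rgb = image_bgr[:, :, ::-1].astype(np.float32) / 255.0
    return torch.from_numpy(rgb.copy()).permute(2, 0, 1).unsqueeze(0)

def train_classifier(train_images_bgr, train_labels):
    """Fit the SVM on spliced (histogram + global) features, as in FIG. 2b."""
    features = []
    for img in train_images_bgr:
        g = extract_global_feature(to_tensor(img)).numpy()  # trained model output
        h = extract_color_histogram(img)                    # (3, 256) histograms
        features.append(splice_features(h, g))              # 1-D spliced feature
    clf = SVC(kernel="rbf")       # a nonlinear SVM, one option the patent names
    clf.fit(np.stack(features), train_labels)
    return clf
```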
According to the embodiment of the application, on the basis of the global features of the image to be detected, color histogram features with high discrimination are introduced as auxiliary features. Reducing the dimensionality of the color histogram features to one dimension reduces the amount of computation and improves the efficiency of image classification; combining the global features of the image to be detected with the dimension-reduced color histogram features increases the number of features used for image classification, improves the accuracy of image classification, and solves the problem that images of similar categories are difficult to distinguish by global features alone. Because color histogram features require little computation, the method is also suitable for deployment on embedded devices.
Referring to fig. 3, which is a schematic view of an implementation flow of an image classification method provided in the second embodiment of the present application, where the image classification method is applied to a terminal device, as shown in the figure, the image classification method may include the following steps:
step S301, acquiring an image to be detected.
The step is the same as step S101, and reference may be made to the related description of step S101, which is not repeated herein.
Step S302, obtaining the scene of the image to be detected.
The scene to which the image to be detected belongs may refer to a scene to which a main body in the image to be detected belongs, for example, if the main body of the image is a flower, the scene to which the image belongs is a plant scene; the main body of the image is a sand beach, a valley, etc., and the scene to which the image belongs is a natural scene.
Step S303, if the scene to which the image to be detected belongs is a preset scene, respectively acquiring the global feature and the color histogram feature of the image to be detected.
The step is partially the same as step S102, and the same parts may specifically refer to the related description of step S102, which is not described herein again.
The preset scene may refer to a scene with high color discrimination; images belonging to natural scenes have high color discrimination. For example, FIG. 4a shows the color histogram features of a snow scene and FIG. 4b those of a beach; comparing the two figures, it can be observed that images of different categories have different color histogram features. In the figures, R, G, and B denote the histogram features of the R, G, and B channels respectively, the horizontal axis represents pixel values from 0 to 255, and the vertical axis represents the number of pixels.
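The scene gating of steps S302 to S306 could then look like the following sketch; the scene_of helper and the preset-scene set are assumptions, since the patent does not specify how the scene is obtained.

```python
PRESET_SCENES = {"natural"}   # assumed set of scenes with high color discrimination

def classify_image(image_bgr, scene_of, clf):
    """Steps S301-S306: gate the combined-feature path on the scene."""
    scene = scene_of(image_bgr)              # step S302 (hypothetical helper)
    if scene not in PRESET_SCENES:           # step S303 gate
        return None                          # another classification path applies
    g = extract_global_feature(to_tensor(image_bgr)).numpy()
    h = extract_color_histogram(image_bgr)
    w = splice_features(h, g)                # steps S304 and S305
    return clf.predict(w.reshape(1, -1))[0]  # step S306: classifier outputs category
```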
Step S304, reducing the dimension of the color histogram feature to one dimension to obtain a one-dimensional color histogram feature.
The step is the same as step S103, and reference may be made to the related description of step S103, which is not described herein again.
Step S305, performing feature splicing on the one-dimensional color histogram features and the one-dimensional global features to obtain one-dimensional splicing features of the image to be detected.
The step is the same as step S104, and reference may be made to the related description of step S104, which is not repeated herein.
Step S306, inputting the one-dimensional splicing features of the image to be detected into a classifier, and outputting the category of the image to be detected by the classifier.
The step is the same as step S105, and reference may be made to the related description of step S105, which is not repeated herein.
According to this embodiment of the application, when the scene to which the image to be detected belongs is a preset scene, color histogram features that can distinguish colors are introduced, and the global features and color histogram features of the image to be detected are combined to classify it. This can solve the problem that images of different categories in the preset scene are similar and difficult to distinguish by global features alone, and improves the accuracy of image classification.
Fig. 5 is a schematic diagram of an image classification apparatus provided in the third embodiment of the present application, and for convenience of description, only the relevant portions of the third embodiment of the present application are shown.
The image classification apparatus includes:
an image obtaining module 51, configured to obtain an image to be detected;
a feature obtaining module 52, configured to obtain a global feature and a color histogram feature of the image to be detected, respectively, where a dimension of the global feature is one dimension, and a dimension of the color histogram feature is two dimensions;
a feature dimension reduction module 53, configured to reduce the dimension of the color histogram feature to one dimension, so as to obtain a one-dimensional color histogram feature;
a feature stitching module 54, configured to perform feature stitching on the one-dimensional color histogram feature and the one-dimensional global feature to obtain a one-dimensional stitching feature of the image to be detected;
and the category output module 55 is configured to input the one-dimensional splicing features of the image to be detected to a classifier, and the classifier outputs the category of the image to be detected.
Optionally, the color histogram features include a histogram feature of a red channel, a histogram feature of a green channel, and a histogram feature of a blue channel, and the image classification apparatus further includes:
the histogram splicing module is used for performing feature splicing on the histogram feature of the red channel, the histogram feature of the green channel and the histogram feature of the blue channel to obtain the spliced histogram feature;
the feature dimension reduction module 53 is specifically configured to:
and reducing the dimension of the spliced histogram features to one dimension.
Optionally, the feature obtaining module 52 includes:
the down-sampling unit is used for down-sampling the image to be detected;
and the feature output unit is used for inputting the down-sampled image to be detected into a classification model, and the classification model outputs the global features.
Optionally, the image classification apparatus further includes:
the classification model training module is used for training the classification model;
a classifier training module for training the classifier;
the classifier training module comprises:
the global feature obtaining unit is used for obtaining the global features of the training samples through the trained classification model;
a histogram feature obtaining unit, configured to obtain color histogram features of the training samples;
and the feature processing unit is used for performing feature splicing on the global features and the color histogram features of the training samples, and training the classifier according to the splicing features of the training samples.
Optionally, the image classification apparatus further includes:
the scene acquisition module is used for acquiring the scene of the image to be detected;
correspondingly, the feature obtaining module 52 is specifically configured to:
and if the scene to which the image to be detected belongs is a preset scene, respectively acquiring the global features and the color histogram features of the image to be detected.
The image classification device provided in the embodiment of the present application can be applied to the first method embodiment and the second method embodiment, and for details, reference is made to the description of the first method embodiment and the second method embodiment, and details are not repeated here.
Fig. 6 is a schematic diagram of a terminal device according to a fourth embodiment of the present application. The terminal device as shown in the figure may include: one or more processors 601 (only one shown); one or more input devices 602 (only one shown), one or more output devices 603 (only one shown), and memory 604. The processor 601, the input device 602, the output device 603, and the memory 604 are connected by a bus 605. The memory 604 is used for storing instructions and the processor 601 is used for executing instructions stored by the memory 604. Wherein:
the processor 601 is configured to obtain an image to be detected; respectively acquiring a global feature and a color histogram feature of the image to be detected, wherein the dimension of the global feature is one-dimensional, and the dimension of the color histogram feature is two-dimensional; reducing the dimensionality of the color histogram feature to one dimension to obtain a one-dimensional color histogram feature; performing feature splicing on the one-dimensional color histogram features and the one-dimensional global features to obtain one-dimensional splicing features of the image to be detected; inputting the splicing characteristics of the one-dimensional image to be detected into a classifier, and outputting the category of the image to be detected by the classifier.
Optionally, the color histogram features include a histogram feature of a red channel, a histogram feature of a green channel, and a histogram feature of a blue channel, and the processor 601 is further configured to:
and performing feature splicing on the histogram feature of the red channel, the histogram feature of the green channel and the histogram feature of the blue channel to obtain a spliced histogram feature.
Optionally, the processor 601 is specifically configured to:
and reducing the dimension of the spliced histogram features to one dimension.
Optionally, the processor 601 is specifically configured to:
down-sampling the image to be detected;
and inputting the down-sampled image to be detected into a classification model, and outputting the global features by the classification model.
Optionally, the processor 601 is further configured to:
training the classification model;
the classifier is trained.
Optionally, the processor 601 is specifically configured to:
obtaining the global characteristics of the training samples through the trained classification model;
acquiring color histogram features of the training samples;
and performing feature splicing on the global features and the color histogram features of the training samples to obtain splicing features of the training samples, and training the classifier according to the splicing features of the training samples.
Optionally, the processor 601 is further configured to:
and acquiring the scene of the image to be detected.
Optionally, the processor 601 is specifically configured to:
and if the scene to which the image to be detected belongs is a preset scene, respectively acquiring the global features and the color histogram features of the image to be detected.
It should be understood that, in the embodiment of the present application, the processor 601 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 602 may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, a data receiving interface, and the like. The output device 603 may include a display (LCD, etc.), speakers, a data transmission interface, and the like.
The memory 604 may include both read-only memory and random access memory, and provides instructions and data to the processor 601. A portion of the memory 604 may also include non-volatile random access memory. For example, the memory 604 may also store device type information.
In a specific implementation, the processor 601, the input device 602, the output device 603, and the memory 604 described in this embodiment of the present application may execute the implementation described in the embodiment of the image classification method provided in this embodiment of the present application, or may execute the implementation described in the image classification apparatus described in the third embodiment of the present application, which is not described herein again.
Fig. 7 is a schematic diagram of a terminal device provided in the fifth embodiment of the present application. As shown in fig. 7, the terminal device 7 of this embodiment includes: a processor 70, a memory 71 and a computer program 72 stored in said memory 71 and executable on said processor 70. The processor 70, when executing the computer program 72, implements the steps in the various image classification method embodiments described above. Alternatively, the processor 70 implements the functions of the modules/units in the above-described apparatus embodiments when executing the computer program 72.
The terminal device 7 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, a processor 70 and a memory 71. It will be appreciated by those skilled in the art that FIG. 7 is merely an example of the terminal device 7 and does not constitute a limitation of it: the terminal device may comprise more or fewer components than shown, some components may be combined, or different components may be used; for example, the terminal device may further comprise input/output devices, network access devices, buses, etc.
The processor 70 may be a central processing unit CPU, but may also be other general purpose processors, digital signal processors DSP, application specific integrated circuits ASIC, off-the-shelf programmable gate arrays FPGA or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the terminal device 7, such as a hard disk or a memory of the terminal device 7. The memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used for storing the computer program and other programs and data required by the terminal device. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application further provide a computer program product which, when run on a terminal device, causes the terminal device to implement the steps in the above method embodiments.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image classification method, characterized in that the image classification method comprises:
acquiring an image to be detected;
respectively acquiring a global feature and a color histogram feature of the image to be detected, wherein the dimension of the global feature is one-dimensional, and the dimension of the color histogram feature is two-dimensional;
reducing the dimensionality of the color histogram feature to one dimension to obtain a one-dimensional color histogram feature;
performing feature splicing on the one-dimensional color histogram features and the one-dimensional global features to obtain one-dimensional splicing features of the image to be detected;
inputting the one-dimensional splicing features of the image to be detected into a classifier, and outputting the category of the image to be detected by the classifier.
2. The image classification method of claim 1, wherein the color histogram features include histogram features of a red channel, histogram features of a green channel, and histogram features of a blue channel, and further comprising, before reducing the dimensionality of the color histogram features to one dimension:
performing feature splicing on the histogram feature of the red channel, the histogram feature of the green channel and the histogram feature of the blue channel to obtain spliced histogram features;
accordingly, the reducing the dimension of the color histogram feature to one dimension includes:
and reducing the dimension of the spliced histogram features to one dimension.
3. The image classification method according to claim 1, wherein the obtaining of the global features of the image to be detected includes:
down-sampling the image to be detected;
and inputting the down-sampled image to be detected into a classification model, and outputting the global features by the classification model.
4. The image classification method according to claim 3, further comprising:
training the classification model;
the classifier is trained.
5. The image classification method of claim 4, wherein the training the classifier comprises:
obtaining the global characteristics of the training samples through the trained classification model;
acquiring color histogram features of the training samples;
and performing feature splicing on the global features and the color histogram features of the training samples to obtain splicing features of the training samples, and training the classifier according to the splicing features of the training samples.
6. The image classification method according to any one of claims 1 to 5, further comprising, after acquiring the image to be measured:
acquiring a scene to which the image to be detected belongs;
correspondingly, the respectively obtaining the global feature and the color histogram feature of the image to be detected includes:
and if the scene to which the image to be detected belongs is a preset scene, respectively acquiring the global features and the color histogram features of the image to be detected.
7. An image classification apparatus, characterized by comprising:
the image acquisition module is used for acquiring an image to be detected;
the characteristic acquisition module is used for respectively acquiring the global characteristic and the color histogram characteristic of the image to be detected, wherein the dimension of the global characteristic is one dimension, and the dimension of the color histogram characteristic is two dimensions;
the characteristic dimension reduction module is used for reducing the dimension of the color histogram characteristic to one dimension to obtain the one-dimensional color histogram characteristic;
the characteristic splicing module is used for carrying out characteristic splicing on the one-dimensional color histogram characteristic and the one-dimensional global characteristic to obtain a one-dimensional splicing characteristic of the image to be detected;
and the category output module is used for inputting the one-dimensional splicing features of the image to be detected into a classifier, and the classifier outputs the category of the image to be detected.
8. The image classification device of claim 7, wherein the color histogram features include histogram features of a red channel, histogram features of a green channel, and histogram features of a blue channel, the image classification device further comprising:
the histogram splicing module is used for performing feature splicing on the histogram feature of the red channel, the histogram feature of the green channel and the histogram feature of the blue channel to obtain the spliced histogram feature;
and the feature dimension reduction module is used for reducing the dimension of the spliced histogram features to one dimension.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the image classification method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the image classification method according to any one of claims 1 to 6.
CN201911006182.2A 2019-10-22 2019-10-22 Image classification method, image classification device and terminal equipment Pending CN110705653A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911006182.2A CN110705653A (en) 2019-10-22 2019-10-22 Image classification method, image classification device and terminal equipment

Publications (1)

Publication Number Publication Date
CN110705653A true CN110705653A (en) 2020-01-17

Family

ID=69200907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911006182.2A Pending CN110705653A (en) 2019-10-22 2019-10-22 Image classification method, image classification device and terminal equipment

Country Status (1)

Country Link
CN (1) CN110705653A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090252413A1 (en) * 2008-04-04 2009-10-08 Microsoft Corporation Image classification
CN104680173A (en) * 2015-01-26 2015-06-03 河海大学 Scene classification method for remote sensing images
CN105354581A (en) * 2015-11-10 2016-02-24 西安电子科技大学 Color image feature extraction method fusing color feature and convolutional neural network
CN105488510A (en) * 2015-11-20 2016-04-13 上海华力创通半导体有限公司 Construction method and system for color histogram of static picture
CN107341440A (en) * 2017-05-08 2017-11-10 西安电子科技大学昆山创新研究院 Indoor RGB D scene image recognition methods based on multitask measurement Multiple Kernel Learning
CN108647703A (en) * 2018-04-19 2018-10-12 北京联合大学 A kind of type judgement method of the classification image library based on conspicuousness
CN108596102A (en) * 2018-04-26 2018-09-28 北京航空航天大学青岛研究院 Indoor scene object segmentation grader building method based on RGB-D

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘国帅, 仲伟峰, 殷飞, 刘成林: "Fast Classification of Natural Scene Images and Synthetic Images", Journal of Image and Graphics *
李学龙, 史建华, 董永生, 陶大程: "A Survey of Scene Image Classification Techniques", Scientia Sinica Informationis *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340124A (en) * 2020-03-03 2020-06-26 Oppo广东移动通信有限公司 Method and device for identifying entity category in image
JP7232376B1 (en) * 2022-11-29 2023-03-02 株式会社レフ・テクノロジー Plant etiolation information acquisition method, computer program, plant etiolation information acquisition device, and plant etiolation information acquisition system

Similar Documents

Publication Publication Date Title
US11151397B2 (en) Liveness testing methods and apparatuses and image processing methods and apparatuses
CN110751218B (en) Image classification method, image classification device and terminal equipment
CN109345553B (en) Palm and key point detection method and device thereof, and terminal equipment
US20200279358A1 (en) Method, device, and system for testing an image
Pajankar Raspberry Pi computer vision programming
CN112102164B (en) Image processing method, device, terminal and storage medium
CN108961267B (en) Picture processing method, picture processing device and terminal equipment
CN108961183B (en) Image processing method, terminal device and computer-readable storage medium
CN108898082B (en) Picture processing method, picture processing device and terminal equipment
CN110119733B (en) Page identification method and device, terminal equipment and computer readable storage medium
CN108924440B (en) Sticker display method, device, terminal and computer-readable storage medium
CN109118447B (en) Picture processing method, picture processing device and terminal equipment
CN111325271A (en) Image classification method and device
WO2021129466A1 (en) Watermark detection method, device, terminal and storage medium
CN111047509A (en) Image special effect processing method and device and terminal
CN112668577A (en) Method, terminal and device for detecting target object in large-scale image
Spizhevoi et al. OpenCV 3 Computer Vision with Python Cookbook: Leverage the power of OpenCV 3 and Python to build computer vision applications
CN110705653A (en) Image classification method, image classification device and terminal equipment
CN110163095B (en) Loop detection method, loop detection device and terminal equipment
CN110618852A (en) View processing method, view processing device and terminal equipment
CN108932703B (en) Picture processing method, picture processing device and terminal equipment
CN110677586B (en) Image display method, image display device and mobile terminal
Zhu et al. Recaptured image forensics based on normalized local ternary count histograms of residual maps
CN108776959B (en) Image processing method and device and terminal equipment
CN111754435A (en) Image processing method, image processing device, terminal equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200117