CN114140655A - Image classification method and device, storage medium and electronic equipment - Google Patents


Publication number
CN114140655A
CN114140655A (application CN202210111057.3A)
Authority
CN
China
Prior art keywords
image
image set
classification
landscape
semantic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210111057.3A
Other languages
Chinese (zh)
Inventor
张学银 (Zhang Xueyin)
王尚文 (Wang Shangwen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhongxun Wanglian Technology Co., Ltd.
Original Assignee
Shenzhen Zhongxun Wanglian Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhongxun Wanglian Technology Co ltd filed Critical Shenzhen Zhongxun Wanglian Technology Co ltd
Priority claimed from application CN202210111057.3A
Published as CN114140655A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods


Abstract

The application discloses an image classification method and device, a storage medium, and electronic equipment. The image classification method comprises the steps of: obtaining an original image set to be processed; inputting the original image set into a preset style classification model for classification processing to obtain a landscape image set, a person image set, and other image sets; performing semantic analysis on each landscape image in the landscape image set to obtain a semantic style factor; classifying the landscape image set based on the semantic style factor and a first preset classification rule; extracting a face image from each person image in the person image set; and classifying the person image set based on the face images and a second preset classification rule. This scheme can classify images accurately according to user requirements.

Description

Image classification method and device, storage medium and electronic equipment
Technical Field
The embodiment of the application relates to the field of image processing, in particular to an image classification method, an image classification device, a storage medium and electronic equipment.
Background
With the development of science and technology, mobile terminals with photographing functions, such as mobile phones and digital cameras, have become lighter and thinner, and the quality of the photos they take has improved. People often carry a mobile terminal with a photographing function when going out, so that they can take photos of scenes they like at any time.
However, photos taken with a camera are currently stored and classified imprecisely, so that finding a photo may require an excessive number of operations. For example, when a user needs to find a specific photo, the user has to search the gallery and re-sort the photos, which takes too many steps; the difficulty increases further when there are many photos. With photos classified only by time and place, the user can search only by time or place, and if the user does not know the time or place of the target photo, or the corresponding target folder contains too many photos, a large amount of operations are still needed and the target photo cannot be found quickly.
Disclosure of Invention
The embodiment of the application provides an image classification method, an image classification device, a storage medium and electronic equipment, which can accurately classify images according to user requirements.
In a first aspect, an embodiment of the present application provides an image classification method, including:
acquiring an original image set to be processed;
inputting the original image set into a preset style classification model for classification processing to obtain a landscape image set, a person image set and other image sets;
performing semantic analysis on each landscape image in the landscape image set to obtain a semantic style factor;
classifying the landscape image set based on the semantic style factor and a first preset classification rule;
extracting a face image in each person image in the person image set;
and classifying the person image set based on the face image and a second preset classification rule.
In the image classification method provided in the embodiment of the present application, performing semantic analysis on each of the landscape images in the landscape image set to obtain a semantic style factor includes:
inputting the landscape image set into a convolutional network for processing to obtain an abstract semantic vector of each landscape image in the landscape image set;
extracting a color vector of each landscape image in the landscape image set;
and carrying out vector fusion on the abstract semantic vector and the color vector to obtain a semantic style factor.
In the image classification method provided in the embodiment of the present application, the classifying the person image set based on the face image and a second preset classification rule includes:
extracting features from the face images to obtain a feature set corresponding to each person image;
matching the feature set of each person image against the feature sets of the other person images to obtain similarity values;
and classifying the person image set based on the similarity values.
In the image classification method provided in the embodiment of the present application, the classifying the person image set based on the similarity value includes:
comparing the similarity value with a preset value to obtain a comparison result;
and classifying the person image set according to the comparison result.
In the image classification method provided in the embodiment of the present application, the extracting a face image in each person image in the person image set includes:
extracting face key point information and background key point information of each person image in the person image set;
determining a segmentation line between the face and the background in the person image based on the face key point information and the background key point information;
and segmenting the person image based on the segmentation line to obtain a face image in each person image in the person image set.
In a second aspect, an embodiment of the present application provides an image classification apparatus, including:
the image acquisition unit is used for acquiring an original image set to be processed;
the first classification unit is used for inputting the original image set into a preset style classification model for classification processing to obtain a landscape image set, a person image set and other image sets;
the semantic analysis unit is used for performing semantic analysis on each landscape image in the landscape image set to obtain a semantic style factor;
the second classification unit is used for classifying the landscape image set based on the semantic style factor and a first preset classification rule;
the image extraction unit is used for extracting a face image in each person image in the person image set;
and the third classification unit is used for classifying the person image set based on the face image and a second preset classification rule.
In the image classification device provided in the embodiment of the present application, the semantic analysis unit is configured to:
inputting the landscape image set into a convolutional network for processing to obtain an abstract semantic vector of each landscape image in the landscape image set;
extracting a color vector of each landscape image in the landscape image set;
and carrying out vector fusion on the abstract semantic vector and the color vector to obtain a semantic style factor.
In the image classification apparatus provided in an embodiment of the present application, the third classification unit is configured to:
extracting features from the face images to obtain a feature set corresponding to each person image;
matching the feature set of each person image against the feature sets of the other person images to obtain similarity values;
and classifying the person image set based on the similarity values.
In a third aspect, an embodiment of the present application provides a storage medium, where the storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to perform the steps in the image classification method according to any one of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps in the image classification method according to any one of the embodiments of the present application when executing the computer program.
The image classification method provided by the embodiment of the application acquires an original image set to be processed; inputs the original image set into a preset style classification model for classification processing to obtain a landscape image set, a person image set, and other image sets; performs semantic analysis on each landscape image in the landscape image set to obtain a semantic style factor; classifies the landscape image set based on the semantic style factor and a first preset classification rule; extracts a face image from each person image in the person image set; and classifies the person image set based on the face images and a second preset classification rule. This scheme can classify images accurately according to user requirements.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of an image classification method according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of an image classification apparatus according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a server according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first" and "second", etc. in this application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "include" and "have", as well as any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to the listed steps or modules, but may also include other steps or modules that are not listed or that are inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Next, a method, an apparatus, a storage medium, and an electronic device for image classification according to embodiments of the present application will be respectively described.
Referring to fig. 1, fig. 1 is a schematic flowchart of an image classification method according to an embodiment of the present disclosure. The specific flow of the image classification method can be as follows:
101. an original image set to be processed is acquired.
It will be appreciated that the original image set may include a plurality of images.
In some embodiments, after the original image set to be processed is obtained, sharpening, edge compensation, and noise reduction may be performed on each image in the original image set in sequence. The sharpening process may include a non-linear transformation or a linear transformation, among others. The noise reduction processing may employ a method including gaussian filtering, median filtering, or high-low pass filtering.
Specifically, Gaussian filtering replaces the value of each pixel in each image of the original image set with a weighted average of that pixel's own value and the values of the pixels in its neighborhood. Median filtering sets the gray value of each pixel in each image to the median of the gray values of all pixels in a neighborhood window around that pixel. High-low pass filtering refers to applying at least one of high-pass filtering and low-pass filtering: high-pass filtering removes the low-frequency components of an image and keeps the high-frequency components, while low-pass filtering removes the high-frequency components and keeps the low-frequency components. Here, the high-frequency components are the parts of the image where intensity (brightness/gray scale) changes sharply, such as edges and fine detail; the low-frequency components are the parts where intensity changes gradually. The edge compensation process enhances the contrast of the edges of each image in the original image set.
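As a rough sketch of the noise-reduction step, the median filtering described above can be implemented as follows. This is a minimal NumPy version; the 3×3 window size and edge padding are illustrative assumptions, not details taken from this application:

```python
import numpy as np

def median_filter(img, k=3):
    """Set each pixel to the median gray value of its k x k neighborhood,
    as described for the median filtering method above."""
    pad = k // 2
    # Pad with edge values so border pixels also have a full window.
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

An isolated bright pixel (impulse noise) surrounded by dark pixels is replaced by the neighborhood median and thus removed, which is why median filtering suits this kind of noise.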
102. And inputting the original image set into a preset style classification model for classification processing to obtain a landscape image set, a person image set and other image sets.
Specifically, feature extraction may be performed on each image in the original image set through the preset style classification model to obtain the image features of each image; the image features may then be recognized to determine their classification, and thus the classification of the corresponding image. For example, when the image features are face features, the corresponding image may be stored in the person image set; when the image features are landscape features, the corresponding image may be stored in the landscape image set.
In some embodiments, the image features may be compared with preset face features and preset landscape features, respectively, to determine their classification. The images in the other image sets are images that contain neither faces nor landscapes.
It should be noted that the other image sets may contain, for example, object images, document images, and the like. In the embodiment of the present application, the other image sets may be left unclassified.
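The three-way split produced by step 102 can be sketched as follows. The `classify` callable is a stand-in for the preset style classification model, whose internal structure the application does not fix:

```python
def route_images(images, classify):
    """Dispatch each image into the landscape, person, or other image set
    according to the label returned by the style classifier.

    `classify` is a stand-in for the preset style classification model;
    any label other than "landscape" or "person" falls into "other".
    """
    sets = {"landscape": [], "person": [], "other": []}
    for img in images:
        label = classify(img)
        sets[label if label in sets else "other"].append(img)
    return sets
```

In a real system `classify` would be the trained CNN's prediction; here any function mapping an image to a label works.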
103. And performing semantic analysis on each landscape image in the landscape image set to obtain a semantic style factor.
Specifically, the scenic image set can be input into a convolutional network for processing to obtain an abstract semantic vector of each scenic image in the scenic image set; extracting a color vector of each landscape image in the landscape image set; and carrying out vector fusion on the abstract semantic vector and the color vector to obtain a semantic style factor.
It should be noted that the convolutional network is a convolutional neural network. Extracting the abstract semantic vector of an image through a convolutional neural network is known in the art and is not described in detail herein.
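One simple reading of the "vector fusion" in step 103 is concatenation of the CNN's abstract semantic vector with a color vector. The per-channel mean used as the color vector below is an illustrative assumption; the application does not specify how the color vector is computed:

```python
import numpy as np

def color_vector(img):
    """Mean intensity per RGB channel as a simple color descriptor.
    `img` is an H x W x 3 array."""
    return img.reshape(-1, 3).mean(axis=0)

def semantic_style_factor(abstract_vec, img):
    """Fuse the abstract semantic vector produced by the convolutional
    network with the color vector by concatenation (one possible form
    of the vector fusion described above)."""
    return np.concatenate([abstract_vec, color_vector(img)])
```

The resulting semantic style factor carries both content information (from the CNN) and overall color style, which the first preset classification rule can then act on.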
104. And classifying the landscape image set based on the semantic style factor and a first preset classification rule.
It should be noted that the first preset classification rule may be set by the user. Such as a landscape type category, a geographic location category, etc.
105. And extracting the face image in each person image in the person image set.
Specifically, the face key point information and the background key point information of each person image in the person image set can be extracted; a segmentation line between the face and the background in the person image can be determined based on the face key point information and the background key point information; and the person image can be segmented along the segmentation line to obtain the face image in each person image in the person image set.
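The segmentation-line step above can be illustrated with a deliberately simplified geometry: assuming the segmentation line is horizontal and the face lies above the relevant background key points (neither assumption comes from the application, which leaves the line's form open):

```python
def split_line(face_points, background_points):
    """Place a horizontal face/background segmentation line halfway
    between the lowest face key point and the nearest background key
    point below it. Points are (x, y) tuples with y growing downward.

    Simplifying assumptions: the line is horizontal and the face sits
    above the background points used for the split.
    """
    face_bottom = max(y for _, y in face_points)
    below = [y for _, y in background_points if y > face_bottom]
    # If no background point lies below the face, fall back to the
    # face's own lower boundary.
    return (face_bottom + min(below)) / 2 if below else face_bottom
```

A real implementation would fit a segmentation boundary of arbitrary shape from dense key points; this sketch only shows how face and background key points jointly determine where the cut is made.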
106. Classifying the person image set based on the face image and a second preset classification rule.
Specifically, feature extraction can be performed on the face images to obtain a feature set corresponding to each person image; the feature set of each person image can be matched against the feature sets of the other person images to obtain similarity values; and the person image set can be classified based on the similarity values.
In some embodiments, each similarity value may be compared with a preset value to obtain a comparison result, and the person image set may then be classified according to the comparison result. For example, person images whose similarity value is greater than or equal to the preset value may be classified into one category.
The preset value can be set according to the actual situation.
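Step 106's similarity matching and threshold comparison can be sketched as a greedy grouping over face feature vectors. The cosine measure and the 0.9 default threshold are illustrative choices, not values given in the application:

```python
def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def group_by_similarity(features, threshold=0.9):
    """Greedy grouping: each image joins the first existing group whose
    representative (first member) it matches at or above the threshold,
    mirroring the comparison with a preset value described above."""
    groups = []  # each group is a list of image indices
    for i, feat in enumerate(features):
        for group in groups:
            if cosine_similarity(feat, features[group[0]]) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups
```

Two near-identical face features end up in one group, while an unrelated face starts a new one; the preset value directly controls how strict the grouping is.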
In summary, the image classification method provided by the embodiment of the present application acquires an original image set to be processed; inputs the original image set into a preset style classification model for classification processing to obtain a landscape image set, a person image set, and other image sets; performs semantic analysis on each landscape image in the landscape image set to obtain a semantic style factor; classifies the landscape image set based on the semantic style factor and a first preset classification rule; extracts a face image from each person image in the person image set; and classifies the person image set based on the face images and a second preset classification rule. This scheme can classify images accurately according to user requirements.
Referring to fig. 2, an image classification apparatus is further provided in the present application. The image classification apparatus 200 may include:
an image acquisition unit 201, configured to acquire an original image set to be processed;
the first classification unit 202 is configured to input the original image set into a preset style classification model for classification processing, so as to obtain a landscape image set, a person image set, and other image sets;
the semantic analysis unit 203 is configured to perform semantic analysis on each of the landscape images in the landscape image set to obtain a semantic style factor;
a second classification unit 204, configured to classify the landscape image set based on the semantic style factor and a first preset classification rule;
an image extraction unit 205 for extracting a face image from each person image in the person image set;
and a third classification unit 206, configured to classify the person image set based on the face image and a second preset classification rule.
In some embodiments, the semantic analysis unit 203 may be configured to:
inputting the landscape image set into a convolution network for processing to obtain an abstract semantic vector of each landscape image in the landscape image set;
extracting a color vector of each landscape image in the landscape image set;
and carrying out vector fusion on the abstract semantic vector and the color vector to obtain a semantic style factor.
In some embodiments, the third classification unit 206 may be configured to:
extracting features from the face images to obtain a feature set corresponding to each person image;
matching the feature set of each person image against the feature sets of the other person images to obtain similarity values;
and classifying the person image set based on the similarity values.
All the above technical solutions can be combined arbitrarily to form optional embodiments of the present application, and are not described again here. The terms are the same as those in the image classification method, and implementation details can be found in the description of the method embodiment.
The image classification device 200 provided by the embodiment of the application acquires an original image set to be processed through the image acquisition unit 201; the original image set is input into a preset style classification model by the first classification unit 202 for classification processing, so that a landscape image set, a person image set, and other image sets are obtained; semantic analysis is performed on each landscape image in the landscape image set by the semantic analysis unit 203 to obtain a semantic style factor; the landscape image set is classified by the second classification unit 204 based on the semantic style factor and the first preset classification rule; a face image is extracted from each person image in the person image set by the image extraction unit 205; and the person image set is classified by the third classification unit 206 based on the face images and the second preset classification rule. This scheme can classify images accurately according to user requirements.
The embodiment of the present application further provides a server, as shown in fig. 3, which shows a schematic structural diagram of the server according to the embodiment of the present application, specifically:
the server may include components such as a processor 301 of one or more processing cores, memory 302 of one or more computer-readable storage media, a power supply 303, and an input unit 304. Those skilled in the art will appreciate that the server architecture shown in FIG. 3 is not meant to be limiting, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 301 is a control center of the server, connects various parts of the entire server using various interfaces and lines, and performs various functions of the server and processes data by running or executing software programs and/or modules stored in the memory 302 and calling data stored in the memory 302, thereby performing overall monitoring of the server. Optionally, processor 301 may include one or more processing cores; preferably, the processor 301 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 301.
The memory 302 may be used to store software programs and modules, and the processor 301 executes various functional applications and data processing by operating the software programs and modules stored in the memory 302. The memory 302 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the server, and the like. Further, the memory 302 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 302 may also include a memory controller to provide the processor 301 with access to the memory 302.
The server further includes a power supply 303 for supplying power to the various components, and preferably, the power supply 303 may be logically connected to the processor 301 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The power supply 303 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The server may also include an input unit 304, the input unit 304 being operable to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the server may further include a display unit and the like, which will not be described in detail herein. Specifically, in this embodiment, the processor 301 in the server loads the executable file corresponding to the process of one or more application programs into the memory 302 according to the following instructions, and the processor 301 runs the application programs stored in the memory 302, thereby implementing various functions as follows:
acquiring an original image set to be processed;
inputting the original image set into a preset style classification model for classification processing to obtain a landscape image set, a person image set and other image sets;
performing semantic analysis on each landscape image in the landscape image set to obtain a semantic style factor;
classifying the landscape image set based on the semantic style factor and a first preset classification rule;
extracting a face image in each person image in the person image set;
and classifying the figure image set based on the face image and a second preset classification rule.
The above operations can be specifically referred to the previous embodiments, and are not described herein.
Accordingly, an electronic device according to an embodiment of the present disclosure may include, as shown in fig. 4, a Radio Frequency (RF) circuit 401, a memory 402 including one or more computer-readable storage media, an input unit 403, a display unit 404, a sensor 405, an audio circuit 406, a Wireless Fidelity (WiFi) module 407, a processor 408 including one or more processing cores, and a power supply 409. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 4 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 401 may be used for receiving and transmitting signals during a message transmission or communication process, and in particular, for receiving downlink information of a base station and then sending the received downlink information to the one or more processors 408 for processing; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuitry 401 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 401 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 402 may be used to store software programs and modules, and the processor 408 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the electronic device, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 408 and the input unit 403 access to the memory 402.
The input unit 403 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In a particular embodiment, the input unit 403 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near it (e.g., operations performed with a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predetermined program. Optionally, the touch-sensitive surface may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the position and direction of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 408, and it can also receive and execute commands from the processor 408. The touch-sensitive surface may be implemented using resistive, capacitive, infrared, or surface acoustic wave technologies. Besides the touch-sensitive surface, the input unit 403 may include other input devices, including but not limited to one or more of a physical keyboard, function keys (such as volume control keys or a switch key), a trackball, a mouse, a joystick, and the like.
The display unit 404 may be used to display information input by or provided to the user as well as the various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 404 may include a display panel, which may optionally be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. Further, the touch-sensitive surface may overlay the display panel; when a touch operation is detected on or near the touch-sensitive surface, it is transmitted to the processor 408 to determine the type of touch event, and the processor 408 then provides a corresponding visual output on the display panel according to that type. Although in FIG. 4 the touch-sensitive surface and the display panel are shown as two separate components implementing the input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement both.
The electronic device may also include at least one sensor 405, such as a light sensor, a motion sensor, or another sensor. In particular, the light sensor may include an ambient light sensor, which may adjust the brightness of the display panel according to the brightness of the ambient light, and a proximity sensor, which may turn off the display panel and/or the backlight when the electronic device is moved to the user's ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration along each direction (generally three axes) and, when stationary, the magnitude and direction of gravity. It can therefore be used for applications that recognize the posture of the electronic device (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The electronic device may further be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described in detail here.
The audio circuit 406, a speaker, and a microphone may provide an audio interface between the user and the electronic device. On the one hand, the audio circuit 406 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 406 receives and converts into audio data. The audio data is then output to the processor 408 for processing, after which it may be sent through the RF circuit 401 to, for example, another electronic device, or output to the memory 402 for further processing. The audio circuit 406 may also include an earbud jack to allow a peripheral headset to communicate with the electronic device.
WiFi is a short-range wireless transmission technology. Through the WiFi module 407, the electronic device can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although FIG. 4 shows the WiFi module 407, it is understood that it is not an essential part of the electronic device and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 408 is the control center of the electronic device. It connects the various parts of the entire electronic device using various interfaces and lines, and it performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the electronic device as a whole. Optionally, the processor 408 may include one or more processing cores. Preferably, the processor 408 may integrate an application processor, which mainly handles the operating system, user interface, applications, and so on, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 408.
The electronic device also includes a power supply 409 (e.g., a battery) for powering the various components. Preferably, the power supply may be logically coupled to the processor 408 via a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 409 may also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the electronic device may further include a camera, a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 408 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 408 runs the application programs stored in the memory 402, thereby implementing various functions:
acquiring an original image set to be processed;
inputting the original image set into a preset style classification model for classification processing to obtain a landscape image set, a person image set and other image sets;
performing semantic analysis on each landscape image in the landscape image set to obtain a semantic style factor;
classifying the landscape image set based on the semantic style factor and a first preset classification rule;
extracting a face image in each person image in the person image set;
and classifying the person image set based on the face image and a second preset classification rule.
For details of the above operations, reference may be made to the previous embodiments; they are not repeated here.
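As an illustration only, the routing described in the steps above can be sketched as follows. The `classify_style` stub here is a hypothetical stand-in for the preset style classification model; the embodiments do not fix its actual architecture, so this sketch only shows how the three resulting image sets are assembled.

```python
def classify_style(image):
    # Hypothetical stub: a real style classification model would infer
    # the label from the image pixels rather than from metadata.
    return image.get("label", "other")


def split_image_set(original_images):
    """Partition an original image set into landscape, person, and other sets."""
    sets = {"landscape": [], "person": [], "other": []}
    for image in original_images:
        label = classify_style(image)
        # Any label outside the two named styles falls into the "other" set.
        sets[label if label in sets else "other"].append(image)
    return sets["landscape"], sets["person"], sets["other"]
```

The landscape set would then go on to semantic analysis, and the person set to face extraction, as described above.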
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in any one of the image classification methods provided in the present application. For example, the instructions may perform the steps of:
acquiring an original image set to be processed;
inputting the original image set into a preset style classification model for classification processing to obtain a landscape image set, a person image set and other image sets;
performing semantic analysis on each landscape image in the landscape image set to obtain a semantic style factor;
classifying the landscape image set based on the semantic style factor and a first preset classification rule;
extracting a face image in each person image in the person image set;
and classifying the person image set based on the face image and a second preset classification rule.
For the specific implementation of the above operations, refer to the foregoing embodiments; details are not repeated here.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any image classification method provided in the embodiments of the present application, they can achieve the beneficial effects achievable by any such method; for details, see the foregoing embodiments, which are not repeated here.
The image classification method, the image classification apparatus, the storage medium, and the electronic device provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and the core idea of the present application. Meanwhile, those skilled in the art may, following the idea of the present application, make variations in the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An image classification method, comprising:
acquiring an original image set to be processed;
inputting the original image set into a preset style classification model for classification processing to obtain a landscape image set, a person image set and other image sets;
performing semantic analysis on each landscape image in the landscape image set to obtain a semantic style factor;
classifying the landscape image set based on the semantic style factor and a first preset classification rule;
extracting a face image in each person image in the person image set;
and classifying the person image set based on the face image and a second preset classification rule.
2. The image classification method according to claim 1, wherein the performing semantic analysis on each landscape image in the landscape image set to obtain a semantic style factor comprises:
inputting the landscape image set into a convolutional network for processing to obtain an abstract semantic vector of each landscape image in the landscape image set;
extracting a color vector of each landscape image in the landscape image set;
and carrying out vector fusion on the abstract semantic vector and the color vector to obtain a semantic style factor.
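A minimal sketch of the vector fusion in claim 2, under the assumption that "fusion" means concatenating the convolutional network's abstract semantic vector with a color vector; the claim does not fix the fusion operator or the color representation, so the mean-RGB color vector used here is an illustrative choice only.

```python
def color_vector(image_rgb):
    """Mean R, G, B of an image given as rows of (r, g, b) pixel tuples."""
    pixels = [p for row in image_rgb for p in row]
    n = len(pixels)
    return [sum(p[c] for p in pixels) / n for c in range(3)]


def fuse(semantic_vector, color_vec):
    """Concatenate the two vectors into one semantic style factor."""
    return list(semantic_vector) + list(color_vec)
```

A real implementation would obtain `semantic_vector` from the convolutional network's penultimate layer; here it is simply an input.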
3. The image classification method according to claim 1, wherein the classifying the person image set based on the face image and a second preset classification rule comprises:
extracting features of the face images to obtain a feature set corresponding to each person image;
performing similarity matching between the feature set corresponding to each person image and the feature sets corresponding to the other person images to obtain similarity values;
and classifying the person image set based on the similarity values.
4. The image classification method according to claim 3, wherein the classifying the person image set based on the similarity value comprises:
comparing the similarity value with a preset value to obtain a comparison result;
and classifying the person image set according to the comparison result.
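Claims 3 and 4 together describe a match-then-threshold scheme. As a hedged sketch (the claims fix neither the similarity measure nor the preset value), the comparison step could look like this with cosine similarity and an assumed threshold of 0.8:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two face feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def same_person(features_a, features_b, threshold=0.8):
    """Compare the similarity value with a preset value (the threshold).

    The threshold of 0.8 is an illustrative assumption, not a value from
    the claims.
    """
    return cosine_similarity(features_a, features_b) >= threshold
```

Person images whose pairwise comparison result is "same" would then be grouped into one class.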
5. The image classification method according to claim 3, wherein the extracting the face image in each person image in the person image set comprises:
extracting face key point information and background key point information of each person image in the person image set;
determining a segmentation line between the face and the background in the person image based on the face key point information and the background key point information;
and segmenting the person image based on the segmentation line to obtain the face image in each person image in the person image set.
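A minimal sketch of the segmentation step in claim 5, assuming the "segmentation line" can be reduced to the axis-aligned boundary enclosing the face key points; real key-point detection (e.g., a facial landmark model) is outside the scope of this sketch, so key points are taken as given coordinates.

```python
def face_bounding_box(face_keypoints):
    """Boundary enclosing all face key points, as (x_min, y_min, x_max, y_max)."""
    xs = [x for x, _ in face_keypoints]
    ys = [y for _, y in face_keypoints]
    return min(xs), min(ys), max(xs), max(ys)


def crop_face(image_rows, box):
    """Segment the person image along the boundary to obtain the face image."""
    x_min, y_min, x_max, y_max = box
    return [row[x_min:x_max + 1] for row in image_rows[y_min:y_max + 1]]
```

Background key points would, in this reading, serve to confirm that the boundary excludes the background region.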
6. An image classification apparatus, comprising:
the image acquisition unit is used for acquiring an original image set to be processed;
the first classification unit is used for inputting the original image set into a preset style classification model for classification processing to obtain a landscape image set, a person image set and other image sets;
the semantic analysis unit is used for performing semantic analysis on each landscape image in the landscape image set to obtain a semantic style factor;
the second classification unit is used for classifying the landscape image set based on the semantic style factor and a first preset classification rule;
the image extraction unit is used for extracting a face image in each person image in the person image set;
and the third classification unit is used for classifying the person image set based on the face image and a second preset classification rule.
7. The image classification apparatus of claim 6, wherein the semantic analysis unit is configured to:
inputting the landscape image set into a convolutional network for processing to obtain an abstract semantic vector of each landscape image in the landscape image set;
extracting a color vector of each landscape image in the landscape image set;
and carrying out vector fusion on the abstract semantic vector and the color vector to obtain a semantic style factor.
8. The image classification apparatus according to claim 6, wherein the third classification unit is configured to:
extracting features of the face images to obtain a feature set corresponding to each person image;
performing similarity matching between the feature set corresponding to each person image and the feature sets corresponding to the other person images to obtain similarity values;
and classifying the person image set based on the similarity values.
9. A storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the method of any of claims 1-5.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1-5 when executing the computer program.
CN202210111057.3A 2022-01-29 2022-01-29 Image classification method and device, storage medium and electronic equipment Pending CN114140655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210111057.3A CN114140655A (en) 2022-01-29 2022-01-29 Image classification method and device, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN114140655A 2022-03-04

Family

ID=80381873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210111057.3A Pending CN114140655A (en) 2022-01-29 2022-01-29 Image classification method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114140655A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007094990A (en) * 2005-09-30 2007-04-12 Fujifilm Corp Image sorting device, method, and program
US20120281887A1 (en) * 2010-01-25 2012-11-08 Koichiro Yamaguchi Image sorting device, method, program, and integrated circuit and storage medium storing said program
CN105426497A (en) * 2015-11-24 2016-03-23 上海斐讯数据通信技术有限公司 Automatic classification method and system for photo album in intelligent terminal
CN106776662A (en) * 2015-11-25 2017-05-31 腾讯科技(深圳)有限公司 A kind of taxonomic revision method and apparatus of photo
CN107153838A (en) * 2017-04-19 2017-09-12 中国电子科技集团公司电子科学研究院 A kind of photo automatic grading method and device
CN107977431A (en) * 2017-11-30 2018-05-01 广东欧珀移动通信有限公司 Image processing method, device, computer equipment and computer-readable recording medium
CN111242074A (en) * 2020-01-20 2020-06-05 佛山科学技术学院 Certificate photo background replacement method based on image processing
CN112001302A (en) * 2020-08-21 2020-11-27 无锡锡商银行股份有限公司 Face recognition method based on face interesting region segmentation
CN112183672A (en) * 2020-11-05 2021-01-05 北京金山云网络技术有限公司 Image classification method, and training method and device of feature extraction network
CN112507155A (en) * 2020-12-22 2021-03-16 哈尔滨师范大学 Information processing method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU ZHONGAN: "Research and Implementation of a Personal Intelligent Photo Album Based on Face Recognition Technology", China Masters' Theses Full-text Database, Information Science and Technology *
SUN MIN: "Research and Implementation of an Android-based Image Classification System", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116274170A (en) * 2023-03-27 2023-06-23 中建三局第一建设工程有限责任公司 Control method, system and related device of laser cleaning equipment
CN116274170B (en) * 2023-03-27 2023-10-13 中建三局第一建设工程有限责任公司 Control method, system and related device of laser cleaning equipment

Similar Documents

Publication Publication Date Title
CN107124555B (en) Method and device for controlling focusing, computer equipment and computer readable storage medium
CN108259758B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN107241552B (en) Image acquisition method, device, storage medium and terminal
CN107749046B (en) Image processing method and mobile terminal
CN105989572B (en) Picture processing method and device
CN110969056B (en) Document layout analysis method, device and storage medium for document image
US10706282B2 (en) Method and mobile terminal for processing image and storage medium
CN111556248B (en) Shooting method, shooting device, storage medium and mobile terminal
CN105513098B (en) Image processing method and device
CN109976848A (en) A kind of image display method, device, equipment and storage medium
CN113421211A (en) Method for blurring light spots, terminal device and storage medium
CN114140655A (en) Image classification method and device, storage medium and electronic equipment
CN110336917B (en) Picture display method and device, storage medium and terminal
CN110717486B (en) Text detection method and device, electronic equipment and storage medium
CN108595104B (en) File processing method and terminal
CN107734049B (en) Network resource downloading method and device and mobile terminal
CN111027406B (en) Picture identification method and device, storage medium and electronic equipment
CN113283552A (en) Image classification method and device, storage medium and electronic equipment
CN108829600B (en) Method and device for testing algorithm library, storage medium and electronic equipment
CN107194363B (en) Image saturation processing method and device, storage medium and computer equipment
CN112837222A (en) Fingerprint image splicing method and device, storage medium and electronic equipment
CN110866488A (en) Image processing method and device
CN111787228A (en) Shooting method, shooting device, storage medium and mobile terminal
CN111402273A (en) Image processing method and electronic equipment
CN114140864B (en) Trajectory tracking method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220304