WO2019153286A1 - Image classification method and device - Google Patents

Image classification method and device

Info

Publication number: WO2019153286A1
Authority: WIPO (PCT)
Prior art keywords: image, information, image file, feature information, classification
Application number: PCT/CN2018/076081
Other languages: French (fr), Chinese (zh)
Inventors: Sun Wei (孙伟), Tan Liwen (谭利文), Du Mingliang (杜明亮)
Original Assignee: Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2018/076081 (WO2019153286A1)
Priority to CN201880085333.5A (CN111566639A)
Publication of WO2019153286A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

Description

  • the embodiments of the present application relate to the field of image processing, and in particular, to an image classification method and device.
  • Image classification is a method in which a device automatically divides a plurality of images into at least two types of images according to feature information of each image in a plurality of images for classification management. In the process of image classification, the amount of calculation required for the device to analyze multiple images to obtain their feature information is large.
  • For example, after the image a is transmitted to another device, when the other device performs image classification on the image a, it needs to perform image analysis on the image a again to acquire the feature information of the image a. Repeatedly acquiring the feature information of the same image in this way generates a large amount of redundant calculation.
  • the embodiment of the present application provides an image classification method and device, which can reduce the calculation amount in the image classification process and improve the image classification efficiency.
  • an embodiment of the present application provides an image classification method, where the image classification method includes: capturing image data by a device, and acquiring image generation information when capturing image data, and generating a first image file including image data and image generation information. And then performing an image classification operation on the first image file using the image generation information; the device displays the first image file in the catalogue of the gallery in response to the operation for viewing the gallery.
  • the image generation information when the image data is captured may be acquired, and then the first image file including the image data and the image generation information is generated.
  • In this way, the image classification operation can be directly performed by using the image generation information. That is, the device can skip recognizing the image data to obtain the image generation information. In this way, the amount of calculation in the image classification process can be reduced, and the image classification efficiency can be improved.
  • the image generation information may include: information of a shooting parameter, information of a shooting mode, information of a shooting scene, and information of a camera type.
  • the shooting parameter may include a parameter such as an exposure value; the shooting mode may include a panoramic mode, a normal mode, and the like;
  • the shooting scene may include a person shooting scene, a building shooting scene, a natural scenery shooting scene, an indoor shooting scene, and an outdoor shooting scene.
  • the classification feature information is feature information obtained by performing an image recognition operation on the image data.
  • the camera type is used to indicate that the image data was captured using a front camera or a rear camera.
  • the image generation information further includes character feature information.
  • the character feature information includes information such as the number of faces, face indication information, face location information, indication information of other objects (such as animals), and the number of other objects.
  • the face indication information is used to indicate whether the image data of the first image file includes a face; the indication information of the other objects is used to indicate whether the image data includes other objects; and the face location information is used to indicate the position of the face in the image data.
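  • The following is a minimal sketch, in Python, of how the image generation information enumerated above might be represented in memory; the field names and types are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ImageGenerationInfo:
    """Feature information collected while the camera captures the image data.

    Field names/types are illustrative; the patent only enumerates the kinds of
    information (shooting parameter, shooting mode, shooting scene, camera type,
    and character feature information).
    """
    exposure_value: Optional[float] = None   # shooting parameter, e.g. exposure value
    shooting_mode: Optional[str] = None      # e.g. "panoramic" or "normal"
    shooting_scene: Optional[str] = None     # e.g. "person", "building", "natural scenery"
    camera_type: Optional[str] = None        # "front" or "rear" camera
    has_face: bool = False                   # face indication information
    face_count: int = 0                      # number of faces
    face_positions: List[Tuple[int, int, int, int]] = field(default_factory=list)  # face locations (x, y, w, h)
    has_other_objects: bool = False          # indication information of other objects (e.g. animals)
    other_object_count: int = 0              # number of other objects
```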
  • When performing the image classification operation, the device does not need to perform an image recognition operation on the image data to analyze the image data to obtain the image generation information. That is, the device can skip the image recognition operation on the image data that would otherwise be needed to obtain the image generation information; in other words, the above image generation information is not obtained by performing an image recognition operation. In this way, the amount of calculation in the image classification process can be reduced, and the image classification efficiency can be improved.
  • the feature information required by the device to perform the image classification operation includes not only the image generation information but also the classification feature information.
  • the foregoing apparatus performs image classification operation on the first image file by using image generation information, including: performing, by the device, an image recognition operation on the image data to analyze the image data to obtain classification feature information; and the device uses the image generation information and the classification feature information, An image classification operation is performed on the first image file.
  • the device may further save the classification feature information in the first image file to obtain the updated first image file.
  • the feature information includes the image generation information and the classification feature information.
  • the image classification algorithm used by the device to analyze the image data to obtain different feature information is different.
  • the image classification algorithm used by the device to analyze image data to obtain image generation information includes a first algorithm.
  • That is, the device skips performing an image recognition operation on the image data by using the first algorithm to analyze the image data to obtain the image generation information corresponding to the first algorithm.
  • the first algorithm in the embodiment of the present application may include one or more image classification algorithms.
  • the image generation information in the embodiment of the present application may include feature information of one or more attributes, and the feature information of each attribute corresponds to an image classification algorithm.
  • the device may perform an image recognition operation on the image data by using a second algorithm to analyze the image data to obtain classification feature information corresponding to the second algorithm; and then use the image generation information and The feature information is classified, and an image classification operation is performed on the first image file.
  • the first algorithm is different from the second algorithm.
  • When the device performs an image classification operation on the first image file again, the device may read the feature information (image generation information and classification feature information) saved in the first image file.
  • If first feature information corresponding to a third algorithm is already saved in the first image file, the image recognition operation using the third algorithm may be skipped, and the image classification operation may be directly performed on the first image file by using the first feature information. That is, the device can skip identifying the image data by using the third algorithm to analyze the image data to obtain the first feature information, and directly perform the image classification operation on the first image file by using the first feature information, thereby reducing the calculation amount of performing the image classification operation.
  • Certainly, the first feature information may not be included in the first image file. In this case, the device may identify the image data by using the third algorithm to acquire the first feature information, and perform the image classification operation by using the first feature information.
  • The device may further save the first feature information in the first image file to obtain the updated first image file, so that when the device or another device performs an image classification operation on the first image file again, it may directly use the first feature information saved in the first image file, without re-using the third algorithm to identify the image data of the first image file.
  • The algorithm version used by the device to perform the image classification operation is continuously updated over time, and the feature information obtained by identifying the image data of the first image file with different versions of the same algorithm may be different. Based on this, the classification feature information further includes a version of the image classification algorithm. In this case, even if the first feature information is stored in the first image file, the algorithm version used to obtain the first feature information and the version of the third algorithm are not necessarily the same.
  • Performing the image classification operation by using the first feature information may include: determining, by the device, that the algorithm version used to obtain the first feature information is the same as the version of the third algorithm; and directly performing, by the device, the image classification operation by using the first feature information, skipping identifying the image data of the first image file by using the third algorithm to obtain the first feature information. In this way, the amount of calculation in the image classification process can be reduced, and the image classification efficiency can be improved.
  • Further, the method of the embodiment of the present application further includes: determining, by the device, that the algorithm version used to obtain the first feature information is different from the version of the third algorithm; using, by the device, the third algorithm to identify the image data to acquire new first feature information, and performing the image classification operation by using the new first feature information; and updating the first feature information saved in the first image file with the newly identified first feature information, to obtain the updated first image file.
  • In this way, when the image classification operation is performed on the first image file again, the new first feature information saved in the first image file can be directly utilized without re-using the third algorithm to identify the image data of the first image file.
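  • A minimal sketch of the version-aware reuse described in the preceding paragraphs is shown below, assuming a hypothetical in-memory representation of the image file; the function and key names are illustrative, not the patent's.

```python
from typing import Any, Callable, Dict

def get_feature_info(image_file: Dict[str, Any],
                     algorithm_name: str,
                     algorithm_version: str,
                     identify: Callable[[bytes], Dict[str, Any]]) -> Dict[str, Any]:
    """Return the feature information produced by `algorithm_name`, reusing the
    copy saved in the image file when its algorithm version matches.

    `image_file` is a hypothetical in-memory representation with keys
    "image_data" (bytes) and "features" (dict keyed by algorithm name).
    """
    saved = image_file.setdefault("features", {}).get(algorithm_name)
    if saved is not None and saved.get("version") == algorithm_version:
        # Versions match: skip running the algorithm and reuse the saved result.
        return saved["info"]

    # Missing or produced by a different algorithm version: re-identify the
    # image data and update the feature information saved in the image file.
    info = identify(image_file["image_data"])
    image_file["features"][algorithm_name] = {"version": algorithm_version, "info": info}
    return info
```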
  • the format of the first image file is an Exchangeable image file format (EXIF).
  • EXIF Exchangeable image file format
  • the image generation information is saved in a Maker Note field of the first image file.
  • the above classification feature information is also saved in the Maker Note field of the first image file.
  • the image generation information is saved in a Maker Note field of the first image file in a Tagged Image File Format (TIFF) format.
  • TIFF Tagged Image File Format
  • the above classification feature information is also saved in the Maker Note field of the first image file using TIFF.
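  • The following sketch shows one way such feature information could be written to and read from the Maker Note field of an EXIF/JPEG file, assuming the third-party piexif library; unlike the TIFF (IFD) layout described in the patent, the feature information is serialized as JSON here purely to keep the sketch short.

```python
import json
import piexif  # assumed third-party EXIF library (pip install piexif)

MAKER_NOTE = piexif.ExifIFD.MakerNote  # tag of the Maker Note field

def save_feature_info(jpeg_path: str, feature_info: dict) -> None:
    """Write feature information (e.g. image generation information) into the
    Maker Note field of an EXIF/JPEG file."""
    exif_dict = piexif.load(jpeg_path)
    exif_dict["Exif"][MAKER_NOTE] = json.dumps(feature_info).encode("utf-8")
    piexif.insert(piexif.dump(exif_dict), jpeg_path)

def load_feature_info(jpeg_path: str) -> dict:
    """Read the feature information back so a classifier can skip image recognition."""
    raw = piexif.load(jpeg_path)["Exif"].get(MAKER_NOTE, b"")
    return json.loads(raw.decode("utf-8")) if raw else {}
```

  • Under this layout, classification feature information obtained later could be merged into the same dictionary and written back with save_feature_info, so that a later classification pass (on this device or another) only reads the field.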
  • An embodiment of the present application further provides an image classification method, where the image classification method includes: the device captures image data by using a camera; the device acquires image generation information when capturing the image data, and generates a first image file including the image data and the image generation information, where the format of the first image file is EXIF and the image generation information is saved in a Maker Note field of the first image file; the image generation information includes at least one of information of a shooting parameter, information of a shooting mode, information of a shooting scene, and information of a camera type; the device directly performs an image classification operation on the first image file by using the image generation information, instead of performing an image recognition operation on the image data to analyze the image data to obtain the image generation information and then performing the image classification operation on the first image file by using the image generation information obtained by the analysis; finally, in response to the operation for viewing the gallery, the first image file is displayed in the classification directory of the gallery.
  • When the camera captures image data, the device may acquire the image generation information at the time the image data is captured, and then generate a first image file including the image data and the image generation information.
  • In this way, the image classification operation can be directly performed by using the image generation information. That is, the device can skip recognizing the image data to obtain the image generation information. In this way, the amount of calculation in the image classification process can be reduced, and the image classification efficiency can be improved.
  • the foregoing apparatus performs image classification operation on the first image file by using image generation information, including: performing, by the device, an image recognition operation on the image data, to analyze the image data to obtain classification feature information; The device performs an image classification operation on the first image file using the image generation information and the classification feature information.
  • the device may save the classification feature information in the first image file to obtain the updated first image file.
  • the image generation information is saved in a Maker Note field by using TIFF.
  • an embodiment of the present application provides an apparatus, where the apparatus includes: an acquiring unit, a classifying unit, and a display unit.
  • The acquiring unit is configured to capture image data, acquire image generation information when the image data is captured, and generate a first image file including the image data and the image generation information; the classification unit is configured to perform an image classification operation on the first image file by using the image generation information acquired by the acquiring unit; and the display unit is configured to display the first image file in the classification directory of the gallery in response to the operation for viewing the gallery.
  • the classification unit is further configured to perform an image recognition operation on the image data; wherein the image generation information is not obtained by the classification unit identifying the image data.
  • The foregoing classification unit is specifically configured to perform an image recognition operation on the image data to analyze the image data to obtain classification feature information, and to perform an image classification operation on the first image file by using the image generation information and the classification feature information.
  • the foregoing apparatus further includes: an update unit.
  • The updating unit is configured to, after the classification unit performs the image recognition operation on the image data to analyze the image data to obtain the classification feature information, save the classification feature information in the first image file to obtain the updated first image file.
  • An embodiment of the present application provides an apparatus, including: a processor, a memory, a camera, and a display; the memory and the display are coupled to the processor, the display is configured to display an image file, and the memory includes a non-volatile storage medium for storing computer program code, where the computer program code comprises computer instructions. When the processor executes the computer instructions, the camera is configured to capture image data; the processor is configured to acquire image generation information when the camera captures the image data, generate a first image file including the image data and the image generation information, and perform an image classification operation on the first image file by using the image generation information.
  • the image generation information is not obtained by the processor performing an image recognition operation on the image data.
  • The processor is specifically configured to perform an image recognition operation on the image data to analyze the image data to obtain classification feature information, and to perform an image classification operation on the first image file by using the image generation information and the classification feature information.
  • The processor is further configured to, after performing the image recognition operation on the image data to analyze the image data to obtain the classification feature information, save the classification feature information in the first image file to obtain the updated first image file.
  • For the image generation information, the format of the first image file, the location of the image generation information and the classification feature information in the first image file, and the format of the image generation information and the classification feature information in the Maker Note field described in the second aspect, the third aspect, and the fourth aspect of the embodiments of the present application and any possible design manner thereof, reference may be made to the related description in the first aspect and its possible design manners, and details are not described herein.
  • An embodiment of the present application provides a control device, where the control device includes a processor and a memory, the memory is used to store computer program code, and the computer program code includes computer instructions. When the processor executes the computer instructions, the control device performs the method described in the first aspect and the second aspect of the embodiments of the present application and any possible design manner thereof.
  • An embodiment of the present application provides a computer storage medium, where the computer storage medium includes computer instructions, and when the computer instructions are run on a device, the device is caused to perform the method described in the first aspect of the embodiments of the present application and any possible design manner thereof.
  • An embodiment of the present application provides a computer program product, and when the computer program product is run on a computer, the computer is caused to perform the method described in the first aspect and the second aspect of the embodiments of the present application and any possible design manner thereof.
  • For the technical effects brought by the second aspect to the seventh aspect and any design manner thereof, reference may be made to the technical effects brought by the different design manners in the foregoing first aspect, and details are not described herein.
  • FIG. 1 is a schematic structural diagram of hardware of a mobile phone according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a data structure of an EXIF image file according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram 1 of a system principle framework of an image classification method according to an embodiment of the present application.
  • FIG. 5 is a flowchart 1 of an image classification method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram 2 of a system principle framework of an image classification method according to an embodiment of the present application.
  • FIG. 7A is a schematic diagram of an example of a mobile phone interface according to an embodiment of the present application.
  • FIG. 7B is a second flowchart of an image classification method according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram showing the data structure of a Maker Note field of the EXIF image shown in FIG. 3;
  • FIG. 9 is a schematic diagram 1 of a data structure of a TIFF field in the Maker Note field shown in FIG. 8;
  • FIG. 10 is a second schematic diagram of a data structure of a TIFF field in the Maker Note field shown in FIG. 8;
  • FIG. 11 is a schematic diagram 3 of a system principle framework of an image classification method according to an embodiment of the present disclosure.
  • FIG. 12 is a flowchart 3 of an image classification method according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic structural diagram 1 of a device according to an embodiment of the present disclosure.
  • FIG. 14 is a schematic structural diagram 2 of a device according to an embodiment of the present disclosure.
  • first and second are used for descriptive purposes only, and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • A feature defined with "first" or "second" may explicitly or implicitly include one or more such features.
  • In the embodiments of the present application, the first feature information and the second feature information refer to different feature information in the first image file, and do not mean that the first image file includes only two pieces of feature information.
  • the embodiment of the present application provides an image classification method, which can be applied to an image classification of a first image file by a device.
  • the image file (such as the first image file) in the embodiment of the present application refers to an image file obtained by encoding and compressing an image captured by the camera, such as a JPEG image file.
  • the image described in the embodiment of the present application can be understood as an electronic picture (hereinafter referred to as a picture).
  • The image classification in the present application refers to the device dividing a plurality of image files into at least two types of image files (i.e., clustering the plurality of image files) according to the feature information of the image data in each image file, such as shooting mode information (such as the panoramic mode), shooting scene information (such as a character scene), and information such as the number of faces (such as 3 faces).
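  • A minimal sketch of such classification by feature information is shown below; the rules and category names (matching the "person"/"animal"/"landscape" albums used in the later example of FIG. 7A) are illustrative assumptions only.

```python
from collections import defaultdict
from typing import Dict, List

def classify_by_feature_info(image_files: List[dict]) -> Dict[str, List[dict]]:
    """Divide image files into categories using their saved feature information.

    Each image file is assumed to be a dict with a "features" entry such as
    {"face_count": 3, "has_other_objects": False, ...}; the rules are
    illustrative only.
    """
    albums: Dict[str, List[dict]] = defaultdict(list)
    for image_file in image_files:
        features = image_file.get("features", {})
        if features.get("face_count", 0) > 0:
            albums["person"].append(image_file)      # faces present: character scene
        elif features.get("has_other_objects"):
            albums["animal"].append(image_file)      # other objects, e.g. animals
        else:
            albums["landscape"].append(image_file)   # everything else
    return dict(albums)
```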
  • The device may be a mobile phone (such as the mobile phone 100 shown in FIG. 1), a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a vehicle-mounted computer, or another terminal device.
  • the device may manage the image saved in the device, and perform the method provided in the embodiment of the present application to perform image classification on the image saved in the device.
  • a client or an application for managing a picture may be installed in the device, and the client may manage a picture saved in the cloud server after logging in to a picture management account; and the client may also be used for The method provided in the embodiment of the present application performs image classification on a picture in the cloud server.
  • the device in the embodiment of the present application may be a cloud server for storing and managing a picture, and the cloud server may receive the picture uploaded by the terminal, and then perform the method provided by the embodiment of the present application to perform image classification on the picture uploaded by the terminal.
  • the specific form of the above device is not specifically limited in the embodiment of the present application.
  • the mobile phone 100 is used as the device.
  • The mobile phone 100 may specifically include: a processor 101, a radio frequency (RF) circuit 102, a memory 103, a touch screen 104, a Bluetooth device 105, one or more sensors 106, a Wireless Fidelity (WiFi) device 107, a positioning device 108, an audio circuit 109, a peripheral interface 110, a power supply device 111, and the like.
  • These components can communicate over one or more communication buses or signal lines (not shown in Figure 1).
  • the hardware structure shown in FIG. 1 does not constitute a limitation to a mobile phone, and the mobile phone 100 may include more or less components than those illustrated, or some components may be combined, or different component arrangements.
  • The processor 101 is a control center of the mobile phone 100; it connects various parts of the mobile phone 100 by using various interfaces and lines, and performs various functions of the mobile phone 100 and processes data by running or executing an application stored in the memory 103 and calling data stored in the memory 103.
  • processor 101 can include one or more processing units.
  • the processor 101 may further include a fingerprint verification chip for verifying the collected fingerprint.
  • the radio frequency circuit 102 can be used to receive and transmit wireless signals during transmission or reception of information or calls.
  • Generally, the radio frequency circuit 102 can receive downlink data from a base station and deliver it to the processor 101 for processing; in addition, it can transmit uplink data to the base station.
  • radio frequency circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency circuit 102 can also communicate with other devices through wireless communication.
  • the wireless communication can use any communication standard or protocol including, but not limited to, global mobile communication systems, general packet radio services, code division multiple access, wideband code division multiple access, long term evolution, email, short message service, and the like.
  • the memory 103 is used to store applications and data, and the processor 101 executes various functions and data processing of the mobile phone 100 by running applications and data stored in the memory 103.
  • The memory 103 mainly includes a storage program area and a storage data area, where the storage program area can store an operating system and an application required by at least one function (such as a sound playing function or an image playing function), and the storage data area can store data created according to the use of the mobile phone 100 (such as audio data and a phone book).
  • The memory 103 may include a high-speed random access memory (RAM), and may also include a non-volatile memory such as a magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • The memory 103 can store various operating systems.
  • the above memory 103 may be independent and connected to the processor 101 via the above communication bus; the memory 103 may also be integrated with the processor 101.
  • the touch screen 104 may specifically include a touch panel 104-1 and a display 104-2.
  • The touch panel 104-1 can collect touch events performed by the user of the mobile phone 100 on or near it (for example, an operation performed by the user on or near the touch panel 104-1 by using any suitable object such as a finger or a stylus), and send the collected touch information to another device (for example, the processor 101).
  • A touch event performed by the user near the touch panel 104-1 may be referred to as a hovering touch; the hovering touch may mean that the user does not need to directly touch the touchpad in order to select, move, or drag a target (for example, an icon), but only needs to be located near the device to perform the desired function.
  • the touch panel 104-1 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • The display (also referred to as a display screen) 104-2 can be used to display information entered by the user or information provided to the user, as well as various menus of the mobile phone 100.
  • the display 104-2 can be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the touchpad 104-1 can be overlaid on the display 104-2, and when the touchpad 104-1 detects a touch event on or near it, it is transmitted to the processor 101 to determine the type of touch event, and then the processor 101 may provide a corresponding visual output on display 104-2 depending on the type of touch event.
  • Although in FIG. 1 the touchpad 104-1 and the display 104-2 are implemented as two separate components to provide the input and output functions of the mobile phone 100, in some embodiments the touchpad 104-1 may be integrated with the display screen 104-2 to implement the input and output functions of the mobile phone 100. It should be understood that the touch screen 104 is formed by stacking multiple layers of materials; only the touch panel (layer) and the display screen (layer) are shown in the embodiment of the present application, and other layers are not described herein.
  • The touch panel 104-1 may be disposed on the front surface of the mobile phone 100 in the form of a full panel, and the display screen 104-2 may also be disposed on the front surface of the mobile phone 100 in the form of a full panel, so that a borderless structure can be achieved on the front of the mobile phone.
  • the mobile phone 100 can also have a fingerprint recognition function.
  • the fingerprint reader 112 can be configured on the back of the handset 100 (eg, below the rear camera) or on the front side of the handset 100 (eg, below the touch screen 104).
  • the fingerprint collection device 112 can be configured in the touch screen 104 to implement the fingerprint recognition function, that is, the fingerprint collection device 112 can be integrated with the touch screen 104 to implement the fingerprint recognition function of the mobile phone 100.
  • the fingerprint capture device 112 is disposed in the touch screen 104 and may be part of the touch screen 104 or may be otherwise disposed in the touch screen 104.
  • the main component of the fingerprint collection device 112 in the embodiment of the present application is a fingerprint sensor, which can employ any type of sensing technology, including but not limited to optical, capacitive, piezoelectric or ultrasonic sensing technologies.
  • the mobile phone 100 may also include a Bluetooth device 105 for enabling data exchange between the handset 100 and other short-range devices (eg, mobile phones, smart watches, etc.).
  • the Bluetooth device in the embodiment of the present application may be an integrated circuit or a Bluetooth chip or the like.
  • the handset 100 can also include at least one type of sensor 106, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display of the touch screen 104 according to the brightness of the ambient light, and the proximity sensor may turn off the power of the display when the mobile phone 100 moves to the ear.
  • The accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes), and can detect the magnitude and direction of gravity when stationary. It can be used to identify the posture of the mobile phone (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration), vibration recognition related functions (such as a pedometer and tapping), and the like.
  • The mobile phone 100 can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein.
  • the WiFi device 107 is configured to provide the mobile phone 100 with network access complying with the WiFi-related standard protocol, and the mobile phone 100 can access the WiFi access point through the WiFi device 107, thereby helping the user to send and receive emails, browse web pages, and access streaming media. It provides users with wireless broadband Internet access.
  • the WiFi device 107 can also function as a WiFi wireless access point, and can provide WiFi network access for other devices.
  • the positioning device 108 is configured to provide a geographic location for the mobile phone 100. It can be understood that the positioning device 108 can be specifically a receiver of a positioning system such as a Global Positioning System (GPS) or a Beidou satellite navigation system, or a Russian GLONASS. After receiving the geographical location transmitted by the positioning system, the positioning device 108 sends the information to the processor 101 for processing, or sends it to the memory 103 for storage. In some other embodiments, the positioning device 108 can also be a receiver of an Assisted Global Positioning System (AGPS), which assists the positioning device 108 in performing ranging and positioning services by acting as an auxiliary server.
  • the secondary location server provides location assistance over a wireless communication network in communication with a location device 108 (i.e., a GPS receiver) of the device, such as handset 100.
  • The positioning device 108 can also be based on WiFi access point positioning technology. Because each WiFi access point has a globally unique Media Access Control (MAC) address, the device can scan and collect broadcast signals of surrounding WiFi access points when WiFi is turned on, and thus obtain the MAC addresses broadcast by the WiFi access points; the device sends data capable of identifying the WiFi access points (such as the MAC addresses) to a location server through the wireless communication network, and the location server retrieves the geographical location of each WiFi access point, calculates the geographical location of the device in combination with the strength of the WiFi broadcast signals, and sends it to the positioning device 108 of the device.
  • the audio circuit 109, the speaker 113, and the microphone 114 can provide an audio interface between the user and the handset 100.
  • On the one hand, the audio circuit 109 can convert received audio data into an electrical signal and transmit it to the speaker 113, and the speaker 113 converts it into a sound signal for output; on the other hand, the microphone 114 converts a collected sound signal into an electrical signal, which is received by the audio circuit 109 and converted into audio data, and the audio data is then output to the RF circuit 102 for transmission to, for example, another mobile phone, or output to the memory 103 for further processing.
  • The peripheral interface 110 is used to provide various interfaces for external input/output devices (such as a keyboard, a mouse, an external display, an external memory, and a subscriber identity module card). For example, it is connected to a mouse through a Universal Serial Bus (USB) interface, and is connected to a Subscriber Identification Module (SIM) card provided by the service provider through metal contacts in the card slot of the subscriber identity module. The peripheral interface 110 can be used to couple the external input/output peripherals described above to the processor 101 and the memory 103.
  • the mobile phone 100 can communicate with other devices in the device group through the peripheral interface 110.
  • the peripheral interface 110 can receive display data sent by other devices for display, etc. No restrictions are imposed.
  • the mobile phone 100 may further include a power supply device 111 (such as a battery and a power management chip) that supplies power to the various components.
  • The battery may be logically connected to the processor 101 through the power management chip, so that functions such as charging, discharging, and power consumption management are implemented through the power supply device 111.
  • the mobile phone 100 may further include a camera (front camera and/or rear camera), a flash, a micro projection device, a near field communication (NFC) device, and the like, and details are not described herein.
  • The execution body of the image classification method provided by the embodiment of the present application may be an image processing device, which may be a device that can be used for managing images (such as the mobile phone 100 shown in FIG. 1), a central processing unit (Central Processing Unit, CPU) of the device, a control module for image processing in the device, or a client for managing images in the device.
  • In the following, the image classification method performed by the above device is taken as an example to describe the image classification method provided by the present application.
  • JPEG is an international image compression standard.
  • JPEG/JFIF (JPEG File Interchange Format) is the most commonly used format for storing and transmitting pictures on the World Wide Web.
  • the JPEG image file (that is, the image file in the JPEG format) starts with the character string "0xFFD8" and ends with the character string "0xFFD9".
  • The JPEG image file has a series of "0xFF**" format characters in the file header, called a "JPEG mark" or "JPEG segment", which is used to mark an information segment of the JPEG image file.
  • “0xFFD8” is used to mark the beginning of image information
  • “0xFFD9” is used to mark the end of image information
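  • The following standard-library sketch walks the "0xFF**" markers described above and yields each header segment with its declared length; it is a simplified reader and does not decode the image data.

```python
import struct
from typing import Iterator, Tuple

def list_jpeg_segments(path: str) -> Iterator[Tuple[str, int]]:
    """Yield (marker, declared_length) pairs for the header segments of a JPEG file."""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":                      # SOI: start of image (0xFFD8)
            raise ValueError("not a JPEG file (missing 0xFFD8)")
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker == b"\xff\xd9":  # EOI: end of image (0xFFD9)
                break
            (length,) = struct.unpack(">H", f.read(2))    # big-endian segment length
            yield marker.hex(), length
            if marker == b"\xff\xda":                     # SOS: compressed image data follows
                break
            f.seek(length - 2, 1)                         # skip the rest of this segment
```

  • The EXIF information described below is carried in the application segments ("0xFFE0" to "0xFFEF") that this loop yields.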
  • the EXIF image file (that is, the image file in the EXIF format) is one of the above JPEG image files, and conforms to the JPEG standard.
  • The EXIF image file adds, to the JPEG image file, shooting parameters (referred to as first shooting parameters) of the image captured by the camera.
  • the first shooting parameter may include: shooting date, shooting equipment parameters (such as camera brand and model, lens parameters, flash parameters, etc.), shooting parameters (such as shutter speed, aperture F value, ISO speed, focal length, metering) Mode, etc.), image processing parameters (such as sharpening, contrast, saturation, white balance, etc.) and GPS positioning data of captured images.
  • the EXIF image file may include EXIF information, where the EXIF information includes an EXIF field 301 (a field in the EXIF image file for saving the first shooting parameter) and a Maker Note field 302.
  • the parameters are saved in the EXIF field 301.
  • the Maker Note field 302 is a field reserved for the vendor to hold vendor-specific annotation data.
  • In the EXIF image file, the EXIF information starts with the character string "0xFFE0" and ends with the character string "0xFFEF", with a maximum size of 64 KB (kilobytes).
  • the format of the first image file in the embodiment of the present application may be EXIF (ie, the first image file may be an EXIF image file), and the feature information in the first image file may be saved in the Maker Note field of the EXIF image file.
  • the image data field 303 of the EXIF image file is used to save image data.
  • In this way, the feature information in the first image file can be read, and the image classification operation can be directly performed on the first image file by using the feature information in the first image file, instead of performing an image recognition operation on the image data in the first image file to obtain, through a large amount of calculation, the feature information for performing the image classification operation. Through this scheme, the amount of calculation in the image classification process can be reduced, and the image classification efficiency can be improved.
  • the format of the first image file in the embodiment of the present application includes, but is not limited to, the EXIF.
  • The first image file in the embodiment of the present application may also be a file in another format that includes a reserved field, and the reserved field may be used to save the feature information for performing the image classification operation. The reserved field includes, but is not limited to, the Maker Note field, and may be any field in the first image file that can be used to save the feature information for performing the image classification operation; this is not limited in the embodiment of the present application.
  • The first device may save the feature information of the image data in the first image file during the process of capturing the image and performing the image classification operation, and directly use the feature information saved in the image file when subsequently performing the image classification operation.
  • The device may acquire feature information (the image generation information in the embodiment of the present application) when the image data is collected by the camera, and save the feature information in the first image file (i.e., perform 401), generating a first image file 403. Subsequently, when the image classification engine 402 of the device performs an image classification operation, it may directly read the feature information (i.e., the image generation information) in the first image file 403, identify the image data of the first image file to obtain new feature information (such as the classification feature information in the embodiment of the present application), perform the image classification operation on the first image file by using the read feature information and the new feature information, and finally update the feature information in the first image file with the newly identified feature information (e.g., add the new feature information).
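  • A minimal in-memory sketch of this framework (capture bundles the image generation information with the image data; the classification engine reuses it and computes only the missing classification feature information) is given below; the class and function names are illustrative assumptions.

```python
from typing import Any, Callable, Dict

class ImageClassificationEngine:
    """In-memory sketch: image files carry their feature information, and the
    engine only computes what is missing before classifying."""

    def __init__(self,
                 second_algorithm: Callable[[bytes], Dict[str, Any]],
                 decide_category: Callable[[Dict[str, Any]], str]) -> None:
        self.second_algorithm = second_algorithm  # image recognition for classification features
        self.decide_category = decide_category    # maps feature information to a category name

    def capture(self, image_data: bytes, generation_info: Dict[str, Any]) -> Dict[str, Any]:
        """Bundle captured image data with the image generation information
        collected at capture time (no image recognition is performed here)."""
        return {"image_data": image_data, "features": dict(generation_info)}

    def classify(self, image_file: Dict[str, Any]) -> str:
        """Reuse the saved feature information, compute only the missing
        classification feature information, and write it back into the file."""
        features = image_file["features"]
        if "classification" not in features:
            features["classification"] = self.second_algorithm(image_file["image_data"])
        return self.decide_category(features)
```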
  • the embodiment of the present application provides an image classification method.
  • In the following, the case in which the format of the first image file is EXIF (i.e., the first image file is an EXIF image file) and the feature information is stored in the Maker Note field of the EXIF image file is used as an example to explain the image classification method provided by the embodiment of the present application. As shown in FIG. 5, the method in this embodiment of the present application includes S501-S503:
  • the first device captures image data, and acquires image generation information when capturing image data, and generates a first image file including image data and image generation information.
  • the preset field (such as the Maker Note field) of the first image file (such as the EXIF image file) of the embodiment of the present application may be used to save the feature information of the image data of the first image file.
  • the feature information may include image generation information.
  • the image generation information is feature information acquired by the first device when the camera of the first device captures the image data. For a detailed example of the image generation information, reference may be made to the subsequent description of the embodiments of the present application, and details are not described herein.
  • a first image file including image data may be generated after the first device captures image data.
  • the method for capturing an image file by the first device is different from the method for capturing an image by the device in the conventional solution. Specifically, the first device not only acquires image data captured by the camera, but also acquires image generation information when the image data is captured, and then generates an image file including the image data and the image generation information.
  • the image generation information may include a shooting parameter (referred to as a second shooting parameter) when the camera captures image data.
  • The second shooting parameters described above may include information of a shooting mode (such as a panoramic mode and a normal mode), information of a shooting scene (such as a person shooting scene, a building shooting scene, a natural scenery shooting scene, an indoor shooting scene, and an outdoor shooting scene), and information of a camera type (the camera type indicates whether the image data is captured by the front camera or the rear camera).
  • the first device may determine information such as the shooting mode information, the shooting scene information, and the camera type, in response to the selection of the shooting mode, the shooting scene, and the front camera or the rear camera when the camera captures the image data.
  • the normal mode in the embodiment of the present application refers to a mode in which a photo is taken using a rear camera.
  • the image generation information further includes character feature information, where the character feature information includes a number of faces, face indication information, face position information, and indications of other objects (such as animals). Information such as the number of information and other objects.
  • The face indication information is used to indicate whether the image data of the first image file includes a face; the indication information of the other objects is used to indicate whether the image data includes other objects; and the face location information is used to indicate the position of the face in the image data.
  • the second shooting parameter in the embodiment of the present application is different from the first shooting parameter saved in the EXIF field 301 of the EXIF image shown in FIG. 3.
  • The first shooting parameter is only saved in the image and does not record the second shooting parameter in the embodiment of the present application, whereas the second shooting parameter can be used to perform an image classification operation on the image.
  • the first device may generate a first image file including the image data and the second shooting parameter when the image data of the first image file is captured.
  • In this way, when the image classification operation is performed on the image, the second shooting parameter can be directly read from the first image file and used to perform the image classification operation, so that the calculation amount when the image classification operation is performed on the image can be reduced, thereby improving the image classification efficiency.
  • In the first image file (such as an EXIF image file), the first shooting parameter is saved in the EXIF field of the EXIF image file.
  • FIG. 6 is a schematic diagram showing the principle of generating an image file including feature information of image data when an image is captured according to an embodiment of the present application.
  • As shown in FIG. 6, the camera engine can call an algorithm such as a scene algorithm and a face algorithm when the camera captures an image (i.e., 61), and recognize a user operation when the camera captures the image data (i.e., 62), to collect the image generation information of the image (i.e., 63).
  • The collected image generation information and the image captured by the camera are passed to the JPEG maker 64; the image generation information from 63 is packed into a byte array (referred to as the feature byte array) by the MakerNote maker in the JPEG maker 64, and the image from 61 is packed into a byte array by the EXIF maker in the JPEG maker 64; then, an image file including the image data and the above-described feature byte array (i.e., the first image file) is generated by the JPEG maker 64.
  • the first device may periodically perform the following S502 to perform image classification on the plurality of image files including the first image file.
  • the first device is the mobile phone 100 shown in (a) of FIG. 7A.
  • The photo album of the mobile phone 100 includes photos 1 to 8 and the like, and the first image file is any photo in the album of the mobile phone 100, such as one of the photos shown in (a) of FIG. 7A.
  • the mobile phone 100 can periodically perform image classification on photos in the album of the mobile phone 100.
  • the mobile phone 100 can display the album interface shown in (b) of FIG. 7A, in which the mobile phone 100 displays the result of image classification of the photos in the album.
  • the mobile phone 100 divides the photos 1 - 8 into “person” album b, "animal” album a, and "landscape” album c.
  • the "person” album b includes the photo 1, the photo 3, the photo 5, and the photo 8 shown in (a) of FIG. 7A
  • The "animal" album a includes the photo 2 and the photo 7 shown in (a) of FIG. 7A, and the "landscape" album c includes the photo 4 and the photo 6 shown in (a) of FIG. 7A.
  • The mobile phone 100 can display the "person" album interface shown in (c) of FIG. 7A, and the "person" album interface includes the photo 1, the photo 3, the photo 5, and the photo 8.
  • the first device may perform image classification on the pictures in the album of the first device by performing S502 in response to the user operation.
  • For example, the mobile phone 100 can also perform image classification on the photos 1 - 8 in response to the user's click operation on the "alliance" button shown in (a) of FIG. 7A, and then display the album interface shown in (b) of FIG. 7A.
  • the first device performs image classification operation on the first image file by using image generation information.
  • The image generation information is not obtained by the first device performing an image recognition operation on the image data; that is, the first device does not perform an image recognition operation on the image data in order to obtain the image generation information. In other words, the first device may skip the steps of performing an image recognition operation on the image data and analyzing the image data to obtain the image generation information, and may directly perform the image classification operation on the first image file by using the image generation information saved in the first image file.
  • the image classification algorithm adopted by the first device to analyze the image data to obtain different feature information is different.
  • the image classification algorithm used by the first device to analyze the image data to obtain the image generation information includes the first algorithm. In this way, the first device does not perform an image recognition operation on the image data by using the first algorithm, and analyzes the image data to obtain image generation information corresponding to the first algorithm.
  • the first algorithm in the embodiment of the present application may include one or more image classification algorithms.
  • the image generation information in the embodiment of the present application may include feature information of one or more attributes, and the feature information of each attribute corresponds to an image classification algorithm.
  • the device displays the first image file in a category directory of the gallery in response to the operation for viewing the gallery.
  • The device may display the first image file in the classification directory of the gallery according to the classification result obtained by executing S502.
  • For example, in response to the user's click operation on the "alliance" button shown in (a) of FIG. 7A, the mobile phone 100 may also display the photo 1 - photo 8 in the album interface shown in (b) of FIG. 7A (i.e., the classification directory of the gallery).
  • Alternatively, the foregoing operation for viewing the gallery may be that the user inputs a keyword in a search box of the gallery; in response to the user inputting the keyword in the search box of the gallery, the device may display, in the classification directory of the gallery, a plurality of image files (including the first image file) that match the keyword entered by the user.
  • For example, the mobile phone 100 performs the image classification operation on the image files (such as photo 1 - photo 8) by executing the above S501-S502.
  • the mobile phone 100 can display the character image files of the photo 1, the photo 3, the photo 5, and the photo 8 in the catalogue of the gallery.
  • the mobile phone 100 can display the character image files of the photo 3 and the photo 5 in the catalogue of the gallery.
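  • A minimal sketch of such a keyword search over the classification directories is shown below, reusing the album mapping produced by the earlier grouping sketch; the matching rule is an illustrative assumption.

```python
from typing import Dict, List

def search_gallery(albums: Dict[str, List[dict]], keyword: str) -> List[dict]:
    """Return image files whose classification directory matches the keyword
    typed into the gallery search box (simple substring match, illustrative only)."""
    keyword = keyword.strip().lower()
    return [image_file
            for category, files in albums.items()
            if keyword in category.lower()
            for image_file in files]
```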
  • the image file captured by the first device includes not only image data, but also image generation information.
  • The first device can directly perform the image classification operation on the first image file by using the image generation information, without performing an image recognition operation on the image data to analyze the image data to obtain the image generation information. In this way, the amount of calculation in the image classification process can be reduced, and the efficiency of image classification can be improved.
  • The feature information required for the first device to perform the image classification operation includes not only the image generation information but also the classification feature information (feature information obtained by performing an image recognition operation on the image data; the classification feature information is different from the image generation information). Therefore, before the first device performs the image classification operation on the first image file, the first device may further perform an image recognition operation on the image data to analyze the image data to obtain the classification feature information, and then perform the image classification operation on the first image file by using the image generation information and the classification feature information.
  • the foregoing S502 may include S701-S702:
  • the first device performs an image recognition operation on the image data to analyze the image data to obtain classification feature information.
  • the first device may perform image recognition operations on the image data by using different image classification algorithms (such as the second algorithm) different from the foregoing first algorithm to analyze the image data to obtain classification feature information corresponding to the second algorithm.
  • the second algorithm is different from the first algorithm described above, and the classification feature information is different from the image generation information.
  • the method for performing the image recognition operation on the image data by the first device to obtain the classification feature information by using the second algorithm may refer to the method for performing the image recognition operation on the image data to obtain the classification feature information in the conventional technology, which is not described herein. .
  • the first device performs image classification operation on the first image file by using image generation information and classification feature information.
  • for the method in which the first device performs the image classification operation on the first image file by using the image generation information and the classification feature information, reference may be made to the method for performing an image classification operation on an image file according to the feature information of the image file in the conventional technology, and details are not described here in the embodiment of the present application.
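  • The flow of S701-S702 can be sketched in Python as follows. This is only an illustration under assumed names (run_second_algorithm, assign_category) and made-up category rules, not the patent's actual algorithm; it only shows that the capture-time image generation information is reused as-is while only the classification feature information is computed by recognition.

```python
# Minimal sketch of S701-S702; helper names and category rules are hypothetical.

def run_second_algorithm(image_data):
    """S701: image recognition with the 'second algorithm' (placeholder).

    Returns classification feature information, which is different from the
    image generation information recorded at capture time.
    """
    # A real implementation would run a scene/face recognition model here.
    return {"scene": "landscape", "algorithm_version": "2.0"}

def assign_category(image_generation_info, classification_info):
    """S702: classify the image file using both kinds of feature information."""
    if image_generation_info.get("face_indication"):
        return "People"
    if image_generation_info.get("shooting_mode") == "panorama":
        return "Panoramas"
    if classification_info.get("scene") == "landscape":
        return "Landscapes"
    return "Others"

def classify_first_image_file(image_file):
    # The image generation information was stored when the image was captured
    # (S501), so no recognition is needed to obtain it.
    gen_info = image_file["image_generation_info"]
    cls_info = run_second_algorithm(image_file["image_data"])   # S701
    return assign_category(gen_info, cls_info)                  # S702

photo = {"image_data": b"...", "image_generation_info": {"shooting_mode": "panorama"}}
print(classify_first_image_file(photo))   # "Panoramas"
```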
  • the method in the embodiment of the present application further includes S703:
  • the first device saves the classification feature information in the first image file to obtain the updated first image file.
  • the image generation information and the classification feature information are included in the first image file.
  • the image generation information and the classification feature information are collectively referred to as feature information of the first image file.
  • the feature information (image generation information and classification feature information) in the embodiment of the present application is saved in a preset field of the first image file.
  • the Maker Note field 302 shown in FIG. 3 is used as an example to describe the format of the preset field and the manner in which the feature information is saved in the preset field:
  • the feature information in the embodiment of the present application may be saved in the Maker Note field by using TIFF.
  • the data format for storing the feature information in the Maker Note field 302 includes, but is not limited to, TIFF. Other data formats for storing the feature information in the Maker Note field 302 are not described herein.
  • the Maker Note field 302 includes an information header 302a, a check field 302b, a Tagged Image File Format (TIFF) header 302c, and a TIFF field 302d.
  • the information header 302a is used to store vendor information. For example, "huawei" may be saved in the information header 302a. The check field 302b is used to store check information, which identifies the integrity and accuracy of the information held in the Maker Note field 302; for example, the check information may be a Cyclic Redundancy Check (CRC).
  • the TIFF header 302c is configured to save TIFF indication information for indicating that the format of the feature information stored in the TIFF field 302d is an Image File Directory (IFD) format.
  • the TIFF field 302d is used to hold feature information such as image generation information and classification feature information.
  • after the device performs an image classification operation on the image and obtains new feature information (ie, classification feature information), the feature information saved in the Maker Note field 302 can be updated (ie, the Maker Note field 302 is modified).
  • when the device updates the feature information saved in the Maker Note field 302, new check information is generated for the check field 302b.
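  • To make the Maker Note layout above more concrete, here is a minimal sketch, assuming a byte layout of information header | check field | TIFF header | TIFF field and a CRC32 check value; the field sizes, the vendor string, and the helper names are assumptions for illustration, not the actual vendor format.

```python
import binascii
import struct

def build_maker_note(vendor: bytes, tiff_payload: bytes) -> bytes:
    """Pack a Maker Note-like blob: information header | check field | TIFF header | TIFF field.

    The 4-byte check field covers the TIFF payload, so updating the feature
    information requires regenerating the check value (as described for 302b).
    """
    info_header = vendor.ljust(8, b"\x00")          # e.g. b"huawei"
    tiff_header = b"II*\x00"                        # little-endian TIFF marker
    crc = binascii.crc32(tiff_payload) & 0xFFFFFFFF
    return info_header + struct.pack("<I", crc) + tiff_header + tiff_payload

def verify_maker_note(blob: bytes) -> bool:
    """Check that the stored check value still matches the TIFF payload."""
    stored_crc = struct.unpack("<I", blob[8:12])[0]
    payload = blob[16:]
    return stored_crc == (binascii.crc32(payload) & 0xFFFFFFFF)

note = build_maker_note(b"huawei", b"\x01\x00demo-feature-info")
assert verify_maker_note(note)
```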
  • FIG. 8 is a schematic diagram showing an example of a data structure of a Maker Note field provided by an embodiment of the present application.
  • the format in which the feature information is saved in the TIFF field 302d in FIG. 8 is the IFD format.
  • one or more IFDs may be saved in the TIFF field 302d.
  • IFD0 and IFD1 are included in the TIFF field 302d.
  • taking IFD0 as an example, the IFD information in the TIFF field 302d is described as follows:
  • IFD0 includes a directory field and a data field
  • the directory field of IFD0 is used to store a directory of sub-IFDs (such as sub-IFD1 and sub-IFD2) in IFD0 and a connection end tag of the IFD0.
  • the data field of IFD0 is used to store sub-IFDs (such as sub-IFD1 and sub-IFD2, etc.).
  • the connection end tag of IFD0 is used to indicate the location where IFD0 ends.
  • the feature information in the embodiment of the present application may be saved in a directory or may be saved in a data domain.
  • each sub-IFD may also include a directory domain and a data domain.
  • the sub-IFD1 in IFD0 includes a directory domain and a data domain.
  • for the functions of the directory domain and the data domain of the sub-IFD1, reference may be made to the description of the directory domain and the data domain of the IFD in the embodiment of the present application.
  • FIG. 9 shows a schematic structural diagram of a directory of the IFD shown in FIG. 8.
  • the directory of the IFD includes a plurality of tag items, and each tag item includes a tag identifier (Identity, ID), a tag type, and a tag value.
  • the tag value in the embodiment of the present application may be feature information; or, the feature information is stored in a data domain of the IFD, and the tag value is an address offset of the feature information in the data domain.
  • when a piece of feature information is less than or equal to 4 bytes, the tag value is the feature information itself; when a piece of feature information is greater than 4 bytes, the feature information needs to be saved in the data domain, and the tag value is the address offset of the feature information in the data domain.
  • the tag IDs of the four tag entries are 0x001, 0x002, 0x003, and 0x004, respectively.
  • the tag type corresponding to the tag ID 0x001 is Unsigned long, and the tag value is used to indicate the shooting mode information of the image (for example, when the tag value is 0, it indicates that the image was taken in the self-timer mode; when the tag value is 1, it indicates that the image was taken in the panorama mode).
  • the tag type corresponding to the tag ID 0x002 is Unsigned byte, and its tag value is used to indicate the camera type of the image (for example, when the tag value is 0, it indicates that the image was taken using the rear camera; when the tag value is 1, it indicates that the image was shot with the front camera).
  • the tag type corresponding to the tag ID 0x003 is Undefined, and the tag value is used to indicate the face indication information (for example, when the tag value is 0, it indicates that there is no face in the image; when the tag value is 1, it indicates that there is a human face in the image).
  • the tag type corresponding to tag ID 0x004 is Unsigned byte, and its tag value is address offset 1, which is the address offset of the face location information in the data field of IFD0.
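  • The tag items of FIG. 9 can be illustrated with the following sketch, which applies the 4-byte rule described above: a value of at most 4 bytes is stored inline as the tag value, while a longer value is placed in the data domain and referenced by an address offset. The 12-byte entry layout and the numeric type codes are assumptions loosely modelled on TIFF IFD entries, not the patent's exact encoding.

```python
import struct

# Assumed numeric type codes, loosely following TIFF conventions.
UNSIGNED_BYTE, UNSIGNED_LONG, UNDEFINED = 1, 4, 7

def encode_tag_items(items):
    """Encode (tag_id, tag_type, raw_bytes) triples into a directory and a data domain.

    Each directory entry is 12 bytes: tag ID (2), type (2), byte count (4), value/offset (4).
    Values longer than 4 bytes are appended to the data domain and referenced by offset.
    """
    directory, data_domain = b"", b""
    for tag_id, tag_type, raw in items:
        if len(raw) <= 4:
            value_field = raw.ljust(4, b"\x00")                 # inline tag value
        else:
            value_field = struct.pack("<I", len(data_domain))   # offset into the data domain
            data_domain += raw
        directory += struct.pack("<HHI", tag_id, tag_type, len(raw)) + value_field
    return directory, data_domain

# Example entries mirroring the 0x001-0x004 tags described above.
face_positions = struct.pack("<4I", 10, 20, 120, 160)   # > 4 bytes, so stored in the data domain
directory, data_domain = encode_tag_items([
    (0x001, UNSIGNED_LONG, struct.pack("<I", 1)),   # shooting mode: panorama
    (0x002, UNSIGNED_BYTE, b"\x00"),                # camera type: rear camera
    (0x003, UNDEFINED,     b"\x01"),                # face indication: face present
    (0x004, UNSIGNED_BYTE, face_positions),         # face location -> address offset
])
print(len(directory), len(data_domain))             # 48 16
```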
  • the feature information that the first device needs to use when performing the image classification operation on the first image file may include the feature information of the preset multiple attributes.
  • the first device may use different algorithms to identify image data in the first image file to obtain feature information of the corresponding attribute.
  • the "preset multiple attributes" in the embodiment of the present application is determined by an image classification client (referred to as a client) in the first device. Specifically, the “preset multiple attributes” is determined by the attribute of the feature information that needs to be identified when the image classification client in the first device performs an image classification operation on the image.
  • the attributes of the feature information that the client of the first device needs to recognize when performing an image classification operation on the image file include: a face attribute, a scene attribute, and a mode attribute.
  • the face attribute corresponds to the number of faces and the face indication information, the scene attribute corresponds to the shooting scene information, and the mode attribute corresponds to the shooting mode information.
  • the preset plurality of attributes include a face attribute, a scene attribute, and a mode attribute.
  • the feature information of each attribute corresponds to an image classification algorithm.
  • the feature information of the face attribute corresponds to the face algorithm.
  • the feature information of an attribute may be saved in the sub-IFD of an IFD.
  • the feature information of the face attribute may include: a version of the face algorithm, a number of faces, and face position information. Assume that IFD0 includes three sub-IFDs (such as sub-IFD1-sub-IFD3).
  • FIG. 10 shows the structure of the directory of the sub-IFD in the IFD0 shown in FIG. 8.
  • the directory of the sub-IFD in the IFD includes a plurality of tag items, and each tag item includes a tag ID, a tag type, and a tag value.
  • for the sub-IFD1 of IFD0, it is assumed that the sub-IFD1 includes three tag items, and the tag IDs of the three tag items are 0x001, 0x002, and 0x003, respectively.
  • the tag type corresponding to the tag ID 0x001 is Unsigned long, and the tag value is used to indicate the version of the face algorithm; the tag type corresponding to the tag ID 0x002 is Unsigned long, and the tag value is used to indicate the number of faces.
  • the tag type corresponding to the tag ID 0x003 is Unsigned byte, and the tag value is the address offset 2, which is the address offset of the face location information in the data field of the sub-IFD1 of the IFD0.
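  • For readability, the nesting just described (a sub-IFD whose tags hold the face algorithm version, the number of faces, and an offset to the face position information) can also be pictured as the purely illustrative in-memory structure below; the dictionary keys and values are made up for the example.

```python
# Illustrative in-memory view of the sub-IFD1 described above (tags 0x001-0x003);
# values are made-up examples, not real data.
sub_ifd1 = {
    "directory": {
        0x001: {"type": "Unsigned long", "value": 3},            # version of the face algorithm
        0x002: {"type": "Unsigned long", "value": 2},            # number of faces
        0x003: {"type": "Unsigned byte", "value": "offset_2"},   # face position info lives in the data domain
    },
    "data_domain": {
        "offset_2": [(10, 20, 120, 160), (200, 40, 300, 180)],   # face bounding boxes
    },
}

def face_count(ifd):
    """Read the number of faces from the directory of the sub-IFD."""
    return ifd["directory"][0x002]["value"]

assert face_count(sub_ifd1) == 2
```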
  • the tag IDs in the tag entries in different IFDs in this embodiment may be the same. Specifically, since the identifiers (such as IDs) of different IFDs are different, even if the label items in the two IFDs use the same label ID, the device can distinguish the label items in different IFDs. Also, the tag IDs in the tag entries in different sub-IFDs may be the same. Specifically, since the identifiers (such as IDs) of different sub-IFDs are different, even if the label items in the two sub-IFDs use the same label ID, the device can distinguish the label items in different sub-IFDs. Moreover, the tag types in the embodiments of the present application include, but are not limited to, the Unsigned long, the Unsigned byte, and the Undefined.
  • a tag item, an IFD or a sub-IFD may be added in the TIFF field 302d for saving the feature information of the new attribute.
  • for example, IFD2 is added to the TIFF field 302d shown in FIG. 8.
  • FIG. 11 is a schematic diagram of a principle for performing an image classification operation according to an embodiment of the present application.
  • the image classification engine may first read the feature information saved in the Maker Note field of the image file; the feature information is obtained by parsing the Maker Note field.
  • then, the feature information is provided to the image classification algorithm (ie, 1102) to perform the image classification operation (ie, 1103); the image classification engine may further obtain new feature information (ie, 1104), and the new feature information 1104 can be used to update the feature information (ie, 1105) stored in the Maker Note field of the image file to obtain an updated image file.
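  • A hedged end-to-end sketch of this engine flow follows: read the feature information cached in the Maker Note field, run an image classification algorithm only for attributes whose feature information is still missing, classify, and write any newly obtained feature information back. The helper functions are trivial stand-ins, not the engine's real implementation.

```python
# Sketch of the classification engine flow; helpers are stand-ins.

def parse_maker_note(image_file):
    # Stand-in: a real engine would decode the Maker Note TIFF field.
    return dict(image_file.get("maker_note", {}))

def write_maker_note(image_file, features):
    # Stand-in: a real engine would re-encode the TIFF field and refresh the check field.
    image_file["maker_note"] = dict(features)

def decide_category(features):
    return "People" if features.get("faces", 0) > 0 else "Others"

def classify_with_cache(image_file, algorithms):
    features = parse_maker_note(image_file)             # read cached feature information
    new_features = {}
    for attribute, algorithm in algorithms.items():     # image classification algorithm (1102):
        if attribute not in features:                   # run only for missing attributes
            new_features[attribute] = algorithm(image_file["image_data"])
    features.update(new_features)
    category = decide_category(features)                # image classification operation (1103)
    if new_features:                                    # new feature information (1104) is used to
        write_maker_note(image_file, features)          # update the Maker Note field (1105)
    return category

photo = {"image_data": b"...", "maker_note": {"shooting_mode": "panorama"}}
print(classify_with_cache(photo, {"faces": lambda data: 1}))   # "People"; 'faces' is now cached
```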
  • the first device may acquire the image file before performing the image classification operation on the image file.
  • the method in this embodiment of the present application includes S1200:
  • the first device acquires the first image file.
  • the manner in which the first device acquires the first image file includes, but is not limited to, the manner shown in the above S501, and the first device may further receive the image file sent by the second device. That is, S1200 may include S1200a: the first device receives the first image file sent by the second device. Wherein, in different implementation manners (implementation manners a-d), the first image file received by the first device from the second device is different:
  • Implementation manner a: The second device generates the first image file by capturing an image file in the manner shown in S501. In this case, the first image file received by the first device from the second device includes the image generation information.
  • Implementation manner b: The second device generates the first image file in the manner shown in S501, and the second device performs the image classification operation on the first image file by performing the image classification method provided by the embodiment of the present application. After the second device performs the image classification operation on the first image file, the classification feature information of the image data in the first image file is saved in the first image file. That is, the first image file received by the first device from the second device includes the image generation information and the classification feature information.
  • Implementation manner c: The second device does not have the function of capturing an image file in the manner shown in S501. In this case, the first image file received by the first device from the second device includes neither the image generation information nor the classification feature information.
  • Implementation manner d: The second device does not have the function of capturing the image file in the manner shown in S501; however, the second device performs the image classification operation on the first image file by performing the image classification method provided by the embodiment of the present application. After the second device performs the image classification operation on the first image file, the first image file includes the classification feature information but does not include the image generation information.
  • when performing an image classification operation on the first image file, the first device may first read the feature information saved in the first image file; when it is determined that the first feature information (image generation information and/or classification feature information) is stored in the first image file, the first device may skip performing the image recognition operation on the image data by using the third algorithm to analyze the image data to obtain the first feature information corresponding to the third algorithm, and may directly perform the image classification operation on the first image file by using the first feature information.
  • the device can skip the recognition of the image data by using the third algorithm to analyze the image data to obtain the first feature information, and directly perform the image classification operation on the first image file by using the first feature information, thereby reducing the calculation amount of performing the image classification operation.
  • the method of the embodiment of the present application further includes S1201-S1205:
  • S1201 The first device acquires feature information in the first image file.
  • the feature information (such as image generation information and classification feature information) in the embodiment of the present application is saved in a preset field of the first image file (such as a Maker Note field).
  • S1202 The first device determines whether the first feature information is included in the first image file.
  • when the first feature information is included in the first image file, the first device may skip using the third algorithm to recognize the image data to obtain the first feature information (ie, skip S1203), and directly perform the image classification operation by using the first feature information (ie, perform S1205).
  • when the first feature information is not included in the first image file, the first device may adopt the third algorithm to identify the image data in the first image file to obtain the first feature information (ie, perform S1203), and then perform the image classification operation by using the first feature information obtained in S1203 (ie, perform S1205).
  • the first feature information may be feature information of a first attribute, and the third algorithm is an image classification algorithm for identifying the feature information of the first attribute.
  • the feature information of an image can be divided into the feature information of the attribute a and the feature information of the attribute b according to the attribute of the feature information; then the plurality of preset attributes may include: the attribute a and the attribute b.
  • the feature information of one image may be classified into feature information of a shooting attribute and feature information of a face attribute according to attributes of the feature information.
  • the preset attributes may include: attribute a-sub-attribute 1, attribute a-sub-attribute 2, and attribute b.
  • the feature information of the shooting attribute may include shooting mode information and shooting scene information, etc.; the feature information of the face attribute may be divided into: face indication information, a version of the face algorithm, and a number of faces.
  • the Maker Note field of the first image file may save the feature information of the image data according to different attributes. For example, as shown in FIG. 8, the feature information of one attribute is stored in one IFD, and the attribute information of the feature information stored in different IFDs is different.
  • the IFD0 shown in FIG. 8 can be used to save the feature information of shooting attributes such as the above-described shooting mode information and shooting scene information, where the shooting mode information is saved in the sub-IFD1 in IFD0 and the shooting scene information is saved in the sub-IFD2 in IFD0; the IFD1 shown in FIG. 8 can be used to save the feature information of the face attribute (the version of the face algorithm, the number of faces, the face position information, and the like), where the version of the face algorithm is saved in the sub-IFD1 in IFD1, the number of faces is saved in the sub-IFD2 in IFD1, and the face position information is saved in the sub-IFD3 in IFD1.
  • each IFD saves the feature information of the tag corresponding to an attribute.
  • multiple tags can be saved in each IFD, and each tag includes a tag ID, a tag type, and a tag value.
  • when the feature information of the first attribute is not saved, the tag corresponding to the first attribute may not be included in the IFD of the TIFF field; or, the tag value of the feature information of the first attribute may be set to empty (such as Null) to indicate that the feature information of the first attribute is not saved.
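  • The check in S1202 can then be sketched as below: the feature information of the first attribute counts as missing either when its tag is absent or when its tag value is empty (Null), and that decides whether S1203 needs to be performed. Representing the tags as a dictionary is an assumption for illustration only.

```python
# Sketch of the S1202 decision: is the first feature information present,
# or is its tag absent / set to an empty (Null) value?

def has_first_feature(tags, first_attribute_tag_id):
    value = tags.get(first_attribute_tag_id)    # the tag may be absent ...
    return value is not None and value != b""   # ... or present with an empty value

tags = {0x001: b"\x01", 0x003: b""}            # 0x003 is present but set to empty
print(has_first_feature(tags, 0x001))          # True  -> skip S1203, go to S1205
print(has_first_feature(tags, 0x003))          # False -> perform S1203, then S1204-S1205
print(has_first_feature(tags, 0x002))          # False -> perform S1203, then S1204-S1205
```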
  • the first device uses a third algorithm to identify image data in the first image file to obtain first feature information.
  • for the method of identifying the image data of the first image file by using an image classification algorithm (such as the third algorithm) to obtain the first feature information, reference may be made to the method in which a device identifies the feature information of image data by using an image classification algorithm when performing an image classification operation in the conventional technology, and details are not described herein again in the embodiment of the present application.
  • the first device saves the first feature information in the first image file to obtain the updated first image file.
  • the attribute of the feature information saved by each IFD may be pre-agreed.
  • the tag ID of each IFD corresponds to the feature information of one attribute; therefore, the first device may save the feature information of the first attribute in the Maker Note field as the tag value corresponding to the tag ID of the first attribute.
  • the first device performs an image classification operation on the first image file by using the first feature information.
  • for the method in which the first device performs the image classification operation on the first image file by using the first feature information, reference may be made to the method for performing an image classification operation on an image file according to the feature information of the image file in the conventional technology, and details are not described here again.
  • for the principle of performing the image classification operation provided by the embodiment of the present application, reference may be made to the schematic diagram shown in FIG. 11, which is not repeatedly described herein.
  • when performing the image classification operation on the first image file, the first device may first acquire the feature information in the first image file, and determine whether the first feature information is included in the first image file; when the first feature information is included in the first image file, the first device may skip identifying the image data of the first image file by using the third algorithm to obtain the first feature information. In this way, the amount of calculation in the image classification process can be reduced, and the image classification efficiency can be improved.
  • the version of the algorithm used by the device to perform the image classification operation is continuously updated with time, and the feature information obtained by using the same algorithm of different versions to identify the image data of the first image file is different.
  • the classification feature information further includes a version of the image classification algorithm.
  • the algorithm version identifying the first feature information is not necessarily the same as the version of the third algorithm.
  • the method of the embodiment of the present application further includes S1301, and the S1204 may be replaced by S1204a:
  • the first device determines whether the algorithm version identifying the first feature information is the same as the version of the third algorithm.
  • when the algorithm version identifying the first feature information is the same as the version of the third algorithm, the first device may skip using the third algorithm to identify the image data in the first image file to obtain the first feature information (ie, skip S1203), and directly perform the image classification operation on the first image file by using the first feature information (ie, perform S1205).
  • when the algorithm version identifying the first feature information is different from the version of the third algorithm, the first device may identify the image data by using the third algorithm to obtain the first feature information (ie, perform S1203), and then perform the image classification operation on the first image file by using the first feature information obtained in S1203 (ie, perform S1205).
  • S1204a The first device updates, by using the identified first feature information, the first feature information saved in the first image file, to obtain the updated first image file.
  • it should be noted that the method for the first device to save the feature information in the first image file is: the first device adds the first feature information to the preset field of the first image file; the method for the first device to update the feature information saved in the first image file is: the first device replaces the first feature information saved in the preset field of the first image file with the identified first feature information.
  • the case where the algorithm version identifying the first feature information is different from the version of the third algorithm may be divided into two cases: (1) the algorithm version identifying the first feature information is lower than the version of the third algorithm; and (2) the algorithm version identifying the first feature information is higher than the version of the third algorithm.
  • the method of the embodiment of the present application further includes S1401:
  • the first device determines whether an algorithm version that identifies the first feature information is lower than a version of the third algorithm.
  • when the algorithm version identifying the first feature information is lower than the version of the third algorithm, the first device may use the higher-version third algorithm to identify the image data in the first image file to obtain the feature information of the first attribute (ie, S1203 is performed), and then perform the image classification operation on the first image file by using the first feature information obtained in S1203 (ie, S1205 is performed).
  • when the algorithm version identifying the first feature information is higher than the version of the third algorithm, the first device does not need to use the lower-version third algorithm to identify the image data to obtain the feature information of the first attribute; it may skip using the third algorithm to identify the image data to obtain the feature information of the first attribute (ie, skip S1203), and directly perform the image classification operation on the first image file by using the first feature information (ie, perform S1205).
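  • The version handling of S1301/S1401 can be sketched as follows: reuse the cached first feature information when it was produced by the same or a higher algorithm version, otherwise re-run the third algorithm and update the preset field. Dotted integer version strings and the helper names are assumptions; a real implementation may encode versions differently.

```python
# Sketch of the S1301/S1401 decision; version format and helpers are assumed.

def parse_version(version: str):
    return tuple(int(part) for part in version.split("."))

def resolve_first_feature(cached, third_algorithm, third_algorithm_version, image_data):
    """Return (first_feature_info, needs_update) following the S1203/S1204a/S1205 flow."""
    if cached is not None and parse_version(cached["version"]) >= parse_version(third_algorithm_version):
        # Same or higher version: skip S1203 and reuse the cached information (S1205).
        return cached, False
    # Lower version or no cached information: re-identify (S1203) and update the file (S1204a).
    fresh = {"version": third_algorithm_version, "value": third_algorithm(image_data)}
    return fresh, True

cached = {"version": "1.2", "value": "two faces"}
info, needs_update = resolve_first_feature(cached, lambda data: "three faces", "2.0", b"...")
print(info["value"], needs_update)   # "three faces" True -> rewrite the Maker Note field
```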
  • the first device may update the feature information saved in the preset field by using the feature information identified by the algorithm of the updated version. In this way, when the image classification operation is performed on the image again, the updated feature information saved in the preset field can be directly used, which can reduce the calculation amount in the image classification process and improve the image classification efficiency.
  • the first device and the second device and the like described above include hardware structures and/or software modules corresponding to each function.
  • the embodiments of the present application can be implemented in a combination of hardware or hardware and computer software in combination with the elements and algorithm steps of the various examples described in the embodiments disclosed herein. Whether a function is implemented in hardware or computer software to drive hardware depends on the specific application and design constraints of the solution. A person skilled in the art can use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the embodiments of the present application.
  • the embodiment of the present application may perform the division of the function module on the first device according to the foregoing method example.
  • each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the module in the embodiment of the present application is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • the device 1300 is the first device in the foregoing method embodiment.
  • the device 1300 includes an acquisition unit 1301, a classification unit 1302, and a display unit 1303.
  • the obtaining unit 1301 is configured to support the device 1300 to perform S501, S1200, S1201 in the foregoing method embodiments, and/or other processes for the techniques described herein.
  • the foregoing classification unit 1302 is configured to support the device 1300 to perform S502, S701, S702, S1203, S1205 in the foregoing method embodiments, and/or other processes for the techniques described herein.
  • the above display unit 1303 is used to support the device 1300 to perform S503 in the above method embodiments, and/or other processes for the techniques described herein.
  • the above device 1300 further includes an update unit.
  • the update unit is configured to support the device 1300 to perform S703, S1204, S1204a in the above method embodiments, and/or other processes for the techniques described herein.
  • the above device 1300 further includes a determining unit.
  • the determining unit is configured to support the device 1300 to perform S1202, S1301, S1401 in the above method embodiments, and/or other processes for the techniques described herein.
  • the above device 1300 may also include other unit modules.
  • the above device 1300 further includes: a storage unit.
  • the storage unit is for saving the first image file.
  • the first image file may be saved in a cloud server, and the device 1300 may perform an image classification operation on the image file in the cloud server.
  • the above device 1300 may further include: a transceiver unit.
  • the device 1300 can interact with other devices through a transceiver unit.
  • the device 1300 can transmit an image file to other devices through the transceiver unit, or receive an image file sent by other devices.
  • the obtaining unit 1301 and the classifying unit 1302 and the like may be integrated into one processing module.
  • the transceiver unit may be an RF circuit, a WiFi module, or a Bluetooth module of the device 1300, and the storage unit may be the memory of the device 1300.
  • the display unit 1303 may be a display module such as a display (touch screen).
  • FIG. 14 is a schematic diagram showing a possible structure of a terminal involved in the above embodiment.
  • the device 1400 includes a processing module 1401, a storage module 1402, and a display module 1403.
  • the processing module 1401 is configured to perform control management on the device 1400.
  • the display module 1403 is configured to display classification results of image files and image files.
  • the storage module 1402 is configured to save program code and data of the device 1400.
  • the device 1400 described above may also include a communication module 1404 for communicating with other devices.
  • the communication module 1404 is used to receive or send a message or image file to other devices.
  • the processing module 1401 may be a processor or a controller, and may be, for example, a CPU, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), and field programmable. Field Programmable Gate Array (FPGA) or other programmable logic device, transistor logic device, hardware component, or any combination thereof. It is possible to implement or carry out the various illustrative logical blocks, modules and circuits described in connection with the present disclosure.
  • the processor may also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the communication module 1404 can be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 1402 can be a memory.
  • when the processing module 1401 is a processor (such as the processor 101 shown in FIG. 1), the communication module 1404 is a radio frequency circuit (such as the radio frequency circuit 102 shown in FIG. 1), the storage module 1402 is a memory (such as the memory shown in FIG. 1), and the display module 1403 is a touch screen (including the touch panel 104-1 and the display panel 104-2 shown in FIG. 1), the device provided by the present application may be the mobile phone 100 shown in FIG. 1.
  • the communication module 1404 may include not only a radio frequency circuit but also a WiFi module and a Bluetooth module.
  • the communication modules such as the radio frequency circuit, the WiFi module, and the Bluetooth module may be collectively referred to as a communication interface, wherein the processor, the communication interface, the touch screen, and the memory may be coupled through a bus. together.
  • the embodiment of the present application further provides a control device, including a processor and a memory, where the memory is used to store computer program code, and the computer program code includes computer instructions; when the processor executes the computer instructions, the control device performs the related method steps in the foregoing embodiments to implement the image classification method in the foregoing embodiments.
  • the embodiment of the present application further provides a computer storage medium, where computer program code is stored; when a processor executes the computer program code, the device performs the related method steps in FIG. 5 or FIG. 12 to implement the method in the foregoing embodiments.
  • the embodiment of the present application further provides a computer program product, when the computer program product is run on a computer, causing the computer to perform the related method steps in FIG. 5 or FIG. 12 to implement the method in the foregoing embodiment.
  • the device 1300, the device 1400, the computer storage medium, and the computer program product provided by the present application are all used to perform the corresponding methods provided above. Therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects of the corresponding methods provided above, and details are not described here.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation, there may be another division manner, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the computer readable storage medium includes a number of instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the foregoing storage medium includes: a flash memory, a mobile hard disk, a read only memory, a random access memory, a magnetic disk, or an optical disk, and the like, which can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to the field of image processing. Provided are an image classification method and device capable of reducing a computation load in an image classification process and improving image classification efficiency. The method comprises: a first device capturing image data, acquiring image generation information upon capturing the image data, and generating a first image file comprising the image data and the image generation information (S501); the first device performing, on the basis of the image generation information, an image classification operation on the first image file (S502); and in response to an operation to view a gallery, the device displaying the first image file in a classification directory of the gallery (S503).

Description

一种图像分类方法及设备Image classification method and device 技术领域Technical field
本申请实施例涉及图像处理领域,尤其涉及一种图像分类方法及设备。The embodiments of the present application relate to the field of image processing, and in particular, to an image classification method and device.
背景技术Background technique
图像分类是一种由设备根据多个图像中每个图像的特征信息,自动将多个图像划分为至少两类图像,进行分类管理的方式。在图像分类的过程中,设备分析多个图像获取其特征信息所需的计算量较大。Image classification is a method in which a device automatically divides a plurality of images into at least two types of images according to feature information of each image in a plurality of images for classification management. In the process of image classification, the amount of calculation required for the device to analyze multiple images to obtain their feature information is large.
并且,一个设备对图像a进行图像分类后,该图像a被传输至另一设备,该另一设备对图像a进行图像分类时,还需要重新对图像a进行图像分析,以获取图像a的特征信息。其中,重复获取同一图像的特征信息,会产生大量的冗余计算。Moreover, after one device performs image classification on the image a, the image a is transmitted to another device, and when the other device performs image classification on the image a, it is necessary to perform image analysis on the image a again to acquire the feature of the image a. information. Among them, repeatedly acquiring the feature information of the same image will generate a large amount of redundant calculation.
发明内容Summary of the invention
本申请实施例提供一种图像分类方法及设备,可以减少图像分类过程中的计算量,提高图像分类效率。The embodiment of the present application provides an image classification method and device, which can reduce the calculation amount in the image classification process and improve the image classification efficiency.
第一方面,本申请实施例提供一种图像分类方法,该图像分类方法包括:设备捕获图像数据,并获取捕获图像数据时的图像生成信息,生成包括图像数据和图像生成信息的第一图像文件;然后利用图像生成信息,对第一图像文件执行图像分类操作;设备响应于用于查看图库的操作,在图库的分类目录显示第一图像文件。In a first aspect, an embodiment of the present application provides an image classification method, where the image classification method includes: capturing image data by a device, and acquiring image generation information when capturing image data, and generating a first image file including image data and image generation information. And then performing an image classification operation on the first image file using the image generation information; the device displays the first image file in the catalogue of the gallery in response to the operation for viewing the gallery.
本申请实施例提供的图像分类方法,设备在拍摄图像数据时,便可以获取捕获图像数据时的图像生成信息,然后生成包括图像数据和图像生成信息的第一图像文件。如此,设备对第一图像文件执行图像分类操作时,便可以直接利用图像生成信息执行图像分类操作。也就是说,设备可以跳过识别图像数据得到图像生成信息。这样,可以减少图像分类过程中的计算量,进而可以提高图像分类效率。In the image classification method provided by the embodiment of the present application, when the device captures image data, the image generation information when the image data is captured may be acquired, and then the first image file including the image data and the image generation information is generated. In this way, when the device performs an image classification operation on the first image file, the image classification operation can be directly performed using the image generation information. That is, the device can skip the recognition image data to obtain image generation information. In this way, the amount of calculation in the image classification process can be reduced, and the image classification efficiency can be improved.
在第一方面的一种可能的设计方式中,上述图像生成信息可以包括:拍摄参数的信息、拍摄模式的信息、拍摄场景的信息和摄像头类型的信息。In a possible design manner of the first aspect, the image generation information may include: information of a shooting parameter, information of a shooting mode, information of a shooting scene, and information of a camera type.
在第一方面的另一种可能的设计方式中,上述拍摄参数可以包括曝光值等参数,该拍摄模式包括全景模式和普通模式等,该拍摄场景可以包括人物拍摄场景、建筑物拍摄场景、自然风光拍摄场景、室内拍摄场景和室外拍摄场景等。分类特征信息是对图像数据执行图像识别操作得到的特征信息。摄像头类型用于指示图像数据是采用前置摄像头或者后置摄像头捕获的。In another possible design manner of the first aspect, the shooting parameter may include a parameter such as an exposure value, the panoramic mode, a normal mode, and the like, and the shooting scene may include a person shooting scene, a building shooting scene, and a natural Scenery shooting scenes, indoor shooting scenes, and outdoor shooting scenes. The classification feature information is feature information obtained by performing an image recognition operation on the image data. The camera type is used to indicate that the image data was captured using a front camera or a rear camera.
在第一方面的另一种可能的设计方式中,当上述拍摄场景为人物拍摄场景时,上述图像生成信息还包括人物特征信息。示例性的,该人物特征信息包括人脸个数、人脸指示信息、人脸位置信息、其他对象(如动物)的指示信息和其他对象的个数等信息。其中,人脸指示信息用于指示第一图像文件的图像数据中包括人脸或者不包括人脸;其他对象的指示信息用于指示图像数据中包括其他对象或者不包括其他对象;人脸位置信息用于指示人脸在图像数据中的位置。In another possible design manner of the first aspect, when the shooting scene is a person shooting scene, the image generation information further includes character feature information. Exemplarily, the character feature information includes information such as the number of faces, face indication information, face location information, indication information of other objects (such as animals), and the number of other objects. The face indication information is used to indicate that the image data of the first image file includes a face or does not include a face; the indication information of the other object is used to indicate that the image data includes other objects or does not include other objects; Used to indicate the position of the face in the image data.
在第一方面的另一种可能的设计方式中,设备在执行图像分类操作时,可以对图像数据不执行图像识别操作以分析图像数据得到图像生成信息。即设备可以跳过对图像数据执行图像识别操作得到图像生成信息。也就是说,上述图像生成信息不是执行图像识别操作得到的。这样,可以减少图像分类过程中的计算量,进而可以提高图像分类效率。In another possible design manner of the first aspect, when performing the image classification operation, the device may perform an image recognition operation on the image data to analyze the image data to obtain image generation information. That is, the device can skip the image recognition operation on the image data to obtain the image generation information. That is to say, the above image generation information is not obtained by performing an image recognition operation. In this way, the amount of calculation in the image classification process can be reduced, and the image classification efficiency can be improved.
在第一方面的另一种可能的设计方式中,设备执行图像分类操作所需要的特征信息不仅包括上述图像生成信息,还包括分类特征信息。具体的,上述设备利用图像生成信息,对第一图像文件执行图像分类操作,包括:设备对图像数据执行图像识别操作,以分析图像数据得到分类特征信息;设备利用图像生成信息和分类特征信息,对第一图像文件执行图像分类操作。In another possible design manner of the first aspect, the feature information required by the device to perform the image classification operation includes not only the image generation information but also the classification feature information. Specifically, the foregoing apparatus performs image classification operation on the first image file by using image generation information, including: performing, by the device, an image recognition operation on the image data to analyze the image data to obtain classification feature information; and the device uses the image generation information and the classification feature information, An image classification operation is performed on the first image file.
在第一方面的另一种可能的设计方式中,设备在分析图像数据得到分类特征信息之后,还可以在第一图像文件中保存分类特征信息,得到更新后的第一图像文件。这样,该设备或者其他设备对该第一图像文件再次执行图像分类操作时,可以直接利用保存在第一图像文件中的特征信息(图像生成信息和分类特征信息),而不需要重新识别第一图像文件的图像数据得到该特征信息。In another possible design manner of the first aspect, after analyzing the image data to obtain the classification feature information, the device may further save the classification feature information in the first image file to obtain the updated first image file. In this way, when the device or other device performs the image classification operation again on the first image file, the feature information (image generation information and classification feature information) stored in the first image file can be directly used without re-recognizing the first The image data of the image file obtains the feature information.
在第一方面的另一种可能的设计方式中,设备分析图像数据得到不同的特征信息所采用的图像分类算法不同。其中,设备分析图像数据得到图像生成信息所采用的图像分类算法包括第一算法。这样,上述设备对图像数据不执行图像识别操作,以分析图像数据得到图像生成信息的方法包括:设备不采用第一算法对图像数据执行图像识别操作,以分析图像数据得到与第一算法对应的图像生成信息。In another possible design manner of the first aspect, the image classification algorithm used by the device to analyze the image data to obtain different feature information is different. The image classification algorithm used by the device to analyze image data to obtain image generation information includes a first algorithm. In this way, the device does not perform an image recognition operation on the image data, and the method for analyzing the image data to obtain the image generation information includes: the device performs an image recognition operation on the image data without using the first algorithm, and analyzes the image data to obtain the image corresponding to the first algorithm. Image generation information.
需要说明的是,本申请实施例中的第一算法可以包括一个或多个图像分类算法。本申请实施例中的图像生成信息可以包括一个或多个属性的特征信息,每一种属性的特征信息对应一个图像分类算法。It should be noted that the first algorithm in the embodiment of the present application may include one or more image classification algorithms. The image generation information in the embodiment of the present application may include feature information of one or more attributes, and the feature information of each attribute corresponds to an image classification algorithm.
在第一方面的另一种可能的设计方式中,设备可以采用第二算法对图像数据执行图像识别操作,以分析图像数据得到与第二算法对应的分类特征信息;然后再利用图像生成信息和分类特征信息,对第一图像文件执行图像分类操作。第一算法与第二算法不同。In another possible design manner of the first aspect, the device may perform an image recognition operation on the image data by using a second algorithm to analyze the image data to obtain classification feature information corresponding to the second algorithm; and then use the image generation information and The feature information is classified, and an image classification operation is performed on the first image file. The first algorithm is different from the second algorithm.
在第一方面的另一种可能的设计方式中,当设备再次对第一图像文件执行图像分类操作时,该设备可以读取第一图像文件中保存的特征信息(图像生成信息和分类特征信息);当该设备确定第一图像文件中保存有第一特征信息(图像生成信息和/或分类特征信息)时,则可以不采用第三算法对图像数据执行图像识别操作,以分析图像数据得到与第三算法对应的第一特征信息;直接利用第一特征信息对第一图像文件进行图像分类操作。即设备可以跳过采用第三算法识别图像数据,以分析图像数据得到第一特征信息,直接利用第一特征信息对第一图像文件执行图像分类操作,可以减少执行图像分类操作的计算量。In another possible design manner of the first aspect, when the device performs an image classification operation on the first image file again, the device may read the feature information (image generation information and classification feature information) saved in the first image file. When the device determines that the first feature information (image generation information and/or classification feature information) is stored in the first image file, the image recognition operation may be performed on the image data without using the third algorithm to analyze the image data. First feature information corresponding to the third algorithm; directly performing image classification operation on the first image file by using the first feature information. That is, the device can skip the recognition of the image data by using the third algorithm to analyze the image data to obtain the first feature information, and directly perform the image classification operation on the first image file by using the first feature information, thereby reducing the calculation amount of performing the image classification operation.
在第一方面的另一种可能的设计方式中,第一图像文件中可能不包括第一特征信息。在设备确定第一图像文件中不包括第一特征信息时,设备可以采用第三算法识别图像数据,以获取第一特征信息,并利用第一特征信息执行图像分类操作。并且,设备可以在第一图像文件中保存第一特征信息,得到更新后的第一图像文件,以便于该 设备或者其他设备对该第一图像文件再次执行图像分类操作时,可以直接利用保存在第一图像文件中的第一特征信息,而不需要重新采用第一算法识别第一图像文件的图像数据。In another possible design manner of the first aspect, the first feature information may not be included in the first image file. When the device determines that the first feature information is not included in the first image file, the device may identify the image data by using a third algorithm to acquire the first feature information, and perform an image classification operation by using the first feature information. Moreover, the device may save the first feature information in the first image file to obtain the updated first image file, so that when the device or other device performs the image classification operation on the first image file again, the device may directly save and save the image file. The first feature information in the first image file does not need to re-use the first algorithm to identify the image data of the first image file.
在第一方面的另一种可能的设计方式中,设备执行图像分类操作所采用的算法版本会随着时间的变化不断更新,并且采用不同版本的同一算法识别第一图像文件的图像数据得到的特征信息不同。基于此,上述分类特征信息中还包括图像分类算法的版本。在这种情况下,即使上述第一图像文件中保存有第一特征信息,识别该第一特征信息的算法版本与第三算法的版本也不一定相同。鉴于这种情况,上述利用第一特征信息执行图像分类操作可以包括:设备确定识别第一特征信息的算法版本与第三算法的版本相同;设备直接利用第一特征信息执行图像分类操作,跳过采用第三算法识别第一图像以得到第一特征信息。如此,可以减少图像分类过程中的计算量,进而可以提高图像分类效率。In another possible design manner of the first aspect, the algorithm version used by the device to perform the image classification operation is continuously updated over time, and the same algorithm of different versions is used to identify the image data of the first image file. Characteristic information is different. Based on this, the classification feature information further includes a version of the image classification algorithm. In this case, even if the first feature information is stored in the first image file, the algorithm version identifying the first feature information and the version of the third algorithm are not necessarily the same. In view of the above, performing the image classification operation by using the first feature information may include: determining, by the device, that the algorithm version identifying the first feature information is the same as the version of the third algorithm; and the device directly performing the image classification operation by using the first feature information, skipping The first image is identified using a third algorithm to obtain first feature information. In this way, the amount of calculation in the image classification process can be reduced, and the image classification efficiency can be improved.
在第一方面的另一种可能的设计方式中,本申请实施例的方法还包括:设备确定识别第一特征信息的算法版本与第一算法的版本不同;该设备采用第三算法识别图像数据,以获取第一特征信息,并利用第一特征信息执行图像分类操作;采用识别到的第一特征信息,更新第一图像文件中保存的第一特征信息,得到更新后的第一图像文件。如此,该设备或者其他设备对该第一图像文件再次执行图像分类操作时,则可以直接利用保存在第一图像文件中的新的第一特征信息,而不需要重新采用第一算法识别第一图像文件的图像数据。In another possible design manner of the first aspect, the method of the embodiment of the present application further includes: determining, by the device, that the algorithm version that identifies the first feature information is different from the version of the first algorithm; the device uses the third algorithm to identify the image data. And acquiring the first feature information, and performing the image classification operation by using the first feature information; and updating the first feature information saved in the first image file by using the first feature information that is identified, to obtain the updated first image file. In this way, when the device or other device performs the image classification operation on the first image file again, the new first feature information saved in the first image file can be directly utilized without re-using the first algorithm to identify the first Image data of an image file.
在第一方面的另一种可能的设计方式中,第一图像文件的格式为可交换图像文件格式(Exchangeable image file format,EXIF)。In another possible design manner of the first aspect, the format of the first image file is an Exchangeable image file format (EXIF).
在第一方面的另一种可能的设计方式中,图像生成信息保存在第一图像文件的厂商注释(Maker Note)字段。当然,上述分类特征信息也保存在第一图像文件的Maker Note字段。In another possible design of the first aspect, the image generation information is saved in a Maker Note field of the first image file. Of course, the above classification feature information is also saved in the Maker Note field of the first image file.
在第一方面的另一种可能的设计方式中,图像生成信息采用标签图像文件格式(Tagged Image File Format,TIFF)保存在第一图像文件的Maker Note字段。当然,上述分类特征信息也采用TIFF保存在第一图像文件的Maker Note字段。In another possible design of the first aspect, the image generation information is saved in a Maker Note field of the first image file in a Tagged Image File Format (TIFF) format. Of course, the above classification feature information is also saved in the Maker Note field of the first image file using TIFF.
第二方面,本申请实施例提供一种图像分类方法,该图像分类方法包括:设备通过摄像头捕获图像数据;设备获取捕获图像数据时的图像生成信息,生成包括图像数据和图像生成信息的第一图像文件;其中,第一图像文件的格式为EXIF,图像生成信息保存在第一图像文件的Maker Note字段;该图像生成信息包括拍摄参数的信息、拍摄模式的信息、拍摄场景的信息和摄像头类型的信息中的至少一种;设备直接利用图像生成信息对第一图像文件执行图像分类操作;而不是对图像数据执行图像识别操作以分析图像数据得到图像生成信息,再利用分析得到的图像生成信息对第一图像文件执行图像分类操作;最后,响应于用于查看图库的操作,在图库的分类目录显示第一图像文件。In a second aspect, an embodiment of the present application provides an image classification method, where the image classification method includes: the device captures image data by using a camera; and the device acquires image generation information when capturing image data, and generates a first image including image data and image generation information. An image file; wherein the format of the first image file is EXIF, and the image generation information is saved in a Maker Note field of the first image file; the image generation information includes information of a shooting parameter, information of a shooting mode, information of a shooting scene, and a camera type At least one of the information; the device directly performs an image classification operation on the first image file using the image generation information; instead of performing an image recognition operation on the image data to analyze the image data to obtain image generation information, and then using the image generation information obtained by the analysis An image classification operation is performed on the first image file; finally, in response to the operation for viewing the gallery, the first image file is displayed in the catalog of the gallery.
本申请实施例提供的图像分类方法,设备在摄像头拍摄图像数据时,便可以获取捕获图像数据时的图像生成信息,然后生成包括图像数据和图像生成信息的第一图像文件。如此,设备对第一图像文件执行图像分类操作时,便可以直接利用图像生成 信息执行图像分类操作。也就是说,设备可以跳过识别图像数据得到图像生成信息。这样,可以减少图像分类过程中的计算量,进而可以提高图像分类效率。In the image classification method provided by the embodiment of the present application, when the camera captures image data, the device may acquire image generation information when capturing image data, and then generate a first image file including image data and image generation information. Thus, when the device performs an image classification operation on the first image file, the image classification operation can be directly performed using the image generation information. That is, the device can skip the recognition image data to obtain image generation information. In this way, the amount of calculation in the image classification process can be reduced, and the image classification efficiency can be improved.
在第二方面的一种可能的设计方式中,上述设备利用图像生成信息,对第一图像文件执行图像分类操作,包括:设备对图像数据执行图像识别操作,以分析图像数据得到分类特征信息;设备利用图像生成信息和所述分类特征信息,对第一图像文件执行图像分类操作。In a possible design manner of the second aspect, the foregoing apparatus performs image classification operation on the first image file by using image generation information, including: performing, by the device, an image recognition operation on the image data, to analyze the image data to obtain classification feature information; The device performs an image classification operation on the first image file using the image generation information and the classification feature information.
在第二方面的另一种可能的设计方式中,设备可以在分析得到分类特征信息后,在第一图像文件中保存分类特征信息,得到更新后的第一图像文件。In another possible design manner of the second aspect, after analyzing the obtained classification feature information, the device may save the classification feature information in the first image file to obtain the updated first image file.
在第二方面的另一种可能的设计方式中,上述图像生成信息采用TIFF保存在Maker Note字段。In another possible design manner of the second aspect, the image generation information is saved in a Maker Note field by using TIFF.
第三方面,本申请实施例提供一种设备,该设备包括:获取单元、分类单元和显示单元。获取单元,用于捕获图像数据,并获取捕获所述图像数据时的图像生成信息,生成包括所述图像数据和所述图像生成信息的第一图像文件;分类单元,用于利用所述获取单元获取的所述图像生成信息,对所述第一图像文件执行图像分类操作;显示单元,用于响应于用于查看图库的操作,在图库的分类目录显示第一图像文件。In a third aspect, an embodiment of the present application provides an apparatus, where the apparatus includes: an acquiring unit, a classifying unit, and a display unit. An acquisition unit, configured to capture image data, and acquire image generation information when the image data is captured, generate a first image file including the image data and the image generation information; and a classification unit configured to utilize the acquisition unit Obtaining the image generation information, performing an image classification operation on the first image file; and displaying means for displaying the first image file in a classification directory of the gallery in response to the operation for viewing the gallery.
在第三方面的一种可能的设计方式中,上述分类单元,还用于对图像数据执行图像识别操作;其中,上述图像生成信息不是分类单元识别图像数据得到的。In a possible design manner of the third aspect, the classification unit is further configured to perform an image recognition operation on the image data; wherein the image generation information is not obtained by the classification unit identifying the image data.
在第三方面的另一种可能的设计方式中，上述分类单元，具体用于对图像数据执行图像识别操作，以分析图像数据得到分类特征信息；利用图像生成信息和分类特征信息，对第一图像文件执行图像分类操作。In another possible design manner of the third aspect, the classification unit is specifically configured to perform an image recognition operation on the image data to analyze the image data to obtain classification feature information, and to perform an image classification operation on the first image file by using the image generation information and the classification feature information.
在第三方面的另一种可能的设计方式中,上述设备还包括:更新单元。该更新单元,用于在分类单元对图像数据执行图像识别操作,以分析图像数据得到分类特征信息之后,在第一图像文件中保存所述分类特征信息,得到更新后的第一图像文件。In another possible design manner of the third aspect, the foregoing apparatus further includes: an update unit. The updating unit is configured to perform an image recognition operation on the image data by the classification unit to analyze the image data to obtain the classification feature information, and save the classification feature information in the first image file to obtain the updated first image file.
第四方面，本申请实施例提供一种设备，该设备包括：处理器、存储器、摄像头和显示器；存储器和显示器与处理器耦合，显示器用于显示图像文件，存储器包括非易失性存储介质，存储器用于存储计算机程序代码，计算机程序代码包括计算机指令，当处理器执行计算机指令时，所述摄像头，用于捕获图像数据；所述处理器，用于获取所述摄像头捕获所述图像数据时的图像生成信息，生成包括所述图像数据和所述图像生成信息的第一图像文件；利用所述图像生成信息，对所述第一图像文件执行图像分类操作。显示器，用于响应于用于查看图库的操作，在图库的分类目录显示第一图像文件。In a fourth aspect, an embodiment of the present application provides a device. The device includes a processor, a memory, a camera, and a display. The memory and the display are coupled to the processor, the display is configured to display image files, the memory includes a non-volatile storage medium, and the memory is configured to store computer program code, where the computer program code includes computer instructions. When the processor executes the computer instructions, the camera is configured to capture image data; the processor is configured to acquire image generation information when the camera captures the image data, generate a first image file including the image data and the image generation information, and perform an image classification operation on the first image file by using the image generation information; and the display is configured to display the first image file in a catalog of a gallery in response to an operation for viewing the gallery.
在第四方面的一种可能的设计方式中,上述图像生成信息不是处理器对图像数据执行图像识别操作得到的。In a possible design manner of the fourth aspect, the image generation information is not obtained by the processor performing an image recognition operation on the image data.
在第四方面的另一种可能的设计方式中，上述处理器，具体用于对图像数据执行图像识别操作，以分析图像数据得到分类特征信息；利用图像生成信息和分类特征信息，对第一图像文件执行图像分类操作。In another possible design manner of the fourth aspect, the processor is specifically configured to perform an image recognition operation on the image data to analyze the image data to obtain classification feature information, and to perform an image classification operation on the first image file by using the image generation information and the classification feature information.
在第四方面的另一种可能的设计方式中，上述处理器，还用于在对图像数据执行图像识别操作，以分析图像数据得到分类特征信息之后，在第一图像文件中保存分类特征信息，得到更新后的第一图像文件。In another possible design manner of the fourth aspect, the processor is further configured to: after performing an image recognition operation on the image data to analyze the image data to obtain classification feature information, save the classification feature information in the first image file to obtain an updated first image file.
需要说明的是，本申请实施例第二方面、第三方面和第四方面及其任一种可能的设计方式中所述的图像生成信息、第一图像文件的格式、图像生成信息和分类特征信息在第一图像文件中的位置、厂商注释字段中保存图像生成信息和分类特征信息的格式，均可以参考第一方面的可能的设计方式中的相关描述，本申请实施例这里不予赘述。It should be noted that, for the image generation information, the format of the first image file, the locations of the image generation information and the classification feature information in the first image file, and the format in which the image generation information and the classification feature information are saved in the vendor annotation (Maker Note) field described in the second aspect, the third aspect, and the fourth aspect of the embodiments of the present application and any possible design manner thereof, reference may be made to the related descriptions in the possible design manners of the first aspect, and details are not described herein again.
第五方面，本申请实施例提供一种控制设备，该控制设备包括处理器和存储器，该存储器用于存储计算机程序代码，该计算机程序代码包括计算机指令，当处理器执行该计算机指令时，控制设备执行如本申请实施例第一方面和第二方面及其任一种可能的设计方式所述的方法。In a fifth aspect, an embodiment of the present application provides a control device. The control device includes a processor and a memory. The memory is configured to store computer program code, where the computer program code includes computer instructions. When the processor executes the computer instructions, the control device performs the method described in the first aspect and the second aspect of the embodiments of the present application and any possible design manner thereof.
第六方面，本申请实施例提供一种计算机存储介质，所述计算机存储介质包括计算机指令，当所述计算机指令在设备上运行时，使得所述设备执行如本申请实施例第一方面及其任一种可能的设计方式所述的方法。In a sixth aspect, an embodiment of the present application provides a computer storage medium. The computer storage medium includes computer instructions. When the computer instructions run on a device, the device is caused to perform the method described in the first aspect of the embodiments of the present application and any possible design manner thereof.
第七方面，本申请实施例提供一种计算机程序产品，当所述计算机程序产品在计算机上运行时，使得所述计算机执行如本申请实施例第一方面和第二方面及其任一种可能的设计方式所述的方法。In a seventh aspect, an embodiment of the present application provides a computer program product. When the computer program product runs on a computer, the computer is caused to perform the method described in the first aspect and the second aspect of the embodiments of the present application and any possible design manner thereof.
另外，第三方面、第四方面及其任一种设计方式，以及第二方面、第四方面至第七方面所带来的技术效果可参见上述第一方面中不同设计方式所带来的技术效果，此处不再赘述。In addition, for the technical effects brought by the third aspect, the fourth aspect, and any design manner thereof, and by the second aspect and the fourth aspect to the seventh aspect, reference may be made to the technical effects brought by the different design manners in the foregoing first aspect, and details are not described herein again.
附图说明BRIEF DESCRIPTION OF DRAWINGS
图1为本申请实施例提供的一种手机的硬件结构示意图;1 is a schematic structural diagram of hardware of a mobile phone according to an embodiment of the present application;
图2为本申请实施例提供的一种联合图像专家小组(Joint Photographic Experts Group,JPEG)图像文件的数据结构示意图;2 is a schematic diagram of a data structure of a Joint Photographic Experts Group (JPEG) image file according to an embodiment of the present application;
图3为本申请实施例提供的一种EXIF图像文件的数据结构示意图;FIG. 3 is a schematic diagram of a data structure of an EXIF image file according to an embodiment of the present disclosure;
图4为本申请实施例提供的一种图像分类方法的系统原理框架示意图一;4 is a schematic diagram 1 of a system principle framework of an image classification method according to an embodiment of the present application;
图5为本申请实施例提供的一种图像分类方法的流程图一;FIG. 5 is a flowchart 1 of an image classification method according to an embodiment of the present application;
图6为本申请实施例提供的一种图像分类方法的系统原理框架示意图二;6 is a schematic diagram 2 of a system principle framework of an image classification method according to an embodiment of the present application;
图7A为本申请实施例提供的一种手机界面示例示意;FIG. 7A is a schematic diagram of an example of a mobile phone interface according to an embodiment of the present application;
图7B为本申请实施例提供的一种图像分类方法的流程图二;FIG. 7B is a second flowchart of an image classification method according to an embodiment of the present disclosure;
图8为图3所示的EXIF图像的Maker Note字段的数据结构示意图;8 is a schematic diagram showing the data structure of a Maker Note field of the EXIF image shown in FIG. 3;
图9为图8所示的Maker Note字段中TIFF字段的数据结构示意图一;9 is a schematic diagram 1 of a data structure of a TIFF field in the Maker Note field shown in FIG. 8;
图10为图8所示的Maker Note字段中TIFF字段的数据结构示意图二;10 is a second schematic diagram of a data structure of a TIFF field in the Maker Note field shown in FIG. 8;
图11为本申请实施例提供的一种图像分类方法的系统原理框架示意图三;FIG. 11 is a schematic diagram 3 of a system principle framework of an image classification method according to an embodiment of the present disclosure;
图12为本申请实施例提供的一种图像分类方法的流程图三;FIG. 12 is a flowchart 3 of an image classification method according to an embodiment of the present disclosure;
图13为本申请实施例提供的一种设备的结构组成示意图一;FIG. 13 is a schematic structural diagram 1 of a device according to an embodiment of the present disclosure;
图14为本申请实施例提供的一种设备的结构组成示意图二。FIG. 14 is a schematic structural diagram 2 of a device according to an embodiment of the present disclosure.
具体实施方式DESCRIPTION OF EMBODIMENTS
以下，术语“第一”、“第二”仅用于描述目的，而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此，限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。例如，第一特征信息和第二特征信息是指第一图像文件中的不同特征信息，而非第一图像文件有两个特征信息。In the following, the terms "first" and "second" are used for descriptive purposes only, and shall not be understood as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical features. Therefore, a feature limited by "first" or "second" may explicitly or implicitly include one or more such features. For example, the first feature information and the second feature information refer to different feature information in the first image file, rather than meaning that the first image file has two pieces of feature information.
本申请实施例提供一种图像分类方法,该图像分类方法可以应用于设备对第一图像文件进行图像分类。本申请实施例中的图像文件(如第一图像文件)是指对摄像头采集的图像进行编码压缩等处理后得到的图像文件,如JPEG图像文件。本申请实施例中所述的图像可以理解为电子图片(后续简称图片)。The embodiment of the present application provides an image classification method, which can be applied to an image classification of a first image file by a device. The image file (such as the first image file) in the embodiment of the present application refers to an image file obtained by encoding and compressing an image captured by the camera, such as a JPEG image file. The image described in the embodiment of the present application can be understood as an electronic picture (hereinafter referred to as a picture).
其中，本申请中的图像分类是指设备根据多个图像文件中每个图像文件中的图像数据的特征信息，例如拍摄模式（如全景模式）信息、拍摄场景（如人物场景）信息和人脸个数（如3张人脸）等信息，将该多个图像文件分为至少两类图像文件（即对多个图像文件进行聚类）。The image classification in this application means that the device divides a plurality of image files into at least two types of image files (that is, clusters the plurality of image files) according to the feature information of the image data in each of the plurality of image files, for example, shooting mode information (such as a panoramic mode), shooting scene information (such as a person scene), and information such as the number of faces (such as 3 faces).
本申请实施例中的设备（如第一设备和第二设备）可以是手机（如图1所示的手机100）、平板电脑、个人计算机（Personal Computer，PC）、个人数字助理（personal digital assistant，PDA）、上网本、可穿戴电子设备、增强现实（augmented reality，AR）\虚拟现实（virtual reality，VR）设备、车载电脑等终端设备。The devices (such as the first device and the second device) in the embodiments of the present application may be terminal devices such as a mobile phone (such as the mobile phone 100 shown in FIG. 1), a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a netbook, a wearable electronic device, an augmented reality (AR) or virtual reality (VR) device, or an in-vehicle computer.
例如，上述设备可以管理该设备中保存的图片，并执行本申请实施例提供的方法，对该设备中保存的图片进行图像分类。或者，上述设备中可以安装用于管理图片的客户端（或应用程序），该客户端可以在登录一图片管理账户后，管理保存在云服务器中的图片；并且，该客户端还可以用于执行本申请实施例提供的方法，对云服务器中的图片进行图像分类。For example, the device may manage the pictures saved in the device and perform the method provided in the embodiments of the present application to classify the pictures saved in the device. Alternatively, a client (or application) for managing pictures may be installed in the device. After logging in to a picture management account, the client may manage pictures saved in a cloud server, and the client may also be used to perform the method provided in the embodiments of the present application to classify the pictures in the cloud server.
或者,本申请实施例中的设备还可以是用于存储和管理图片的云服务器,该云服务器可以接收终端上传的图片,然后执行本申请实施例提供的方法,对终端上传的图片进行图像分类。本申请实施例对上述设备的具体形式不做特殊限制。Alternatively, the device in the embodiment of the present application may be a cloud server for storing and managing a picture, and the cloud server may receive the picture uploaded by the terminal, and then perform the method provided by the embodiment of the present application to perform image classification on the picture uploaded by the terminal. . The specific form of the above device is not specifically limited in the embodiment of the present application.
如图1所示,以手机100作为上述设备为例,手机100具体可以包括:处理器101、射频(Radio Frequency,RF)电路102、存储器103、触摸屏104、蓝牙装置105、一个或多个传感器106、无线保真(Wireless Fidelity,WiFi)装置107、定位装置108、音频电路109、外设接口110以及电源装置111等部件。这些部件可通过一根或多根通信总线或信号线(图1中未示出)进行通信。本领域技术人员可以理解,图1中示出的硬件结构并不构成对手机的限定,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。As shown in FIG. 1 , the mobile phone 100 is used as the device. The mobile phone 100 may specifically include: a processor 101, a radio frequency (RF) circuit 102, a memory 103, a touch screen 104, a Bluetooth device 105, and one or more sensors. 106, Wireless Fidelity (WiFi) device 107, positioning device 108, audio circuit 109, peripheral interface 110, and power supply device 111 and the like. These components can communicate over one or more communication buses or signal lines (not shown in Figure 1). It will be understood by those skilled in the art that the hardware structure shown in FIG. 1 does not constitute a limitation to a mobile phone, and the mobile phone 100 may include more or less components than those illustrated, or some components may be combined, or different component arrangements.
下面结合图1对手机100的各个部件进行具体的介绍:The various components of the mobile phone 100 will be specifically described below with reference to FIG. 1 :
处理器101是手机100的控制中心,利用各种接口和线路连接手机100的各个部分,通过运行或执行存储在存储器103内的应用程序,以及调用存储在存储器103内的数据,执行手机100的各种功能和处理数据。在一些实施例中,处理器101可包括一个或多个处理单元。在本申请实施例一些实施例中,上述处理器101还可以包括指纹验证芯片,用于对采集到的指纹进行验证。The processor 101 is a control center of the mobile phone 100, and connects various parts of the mobile phone 100 by using various interfaces and lines, and executes the mobile phone 100 by running or executing an application stored in the memory 103 and calling data stored in the memory 103. Various functions and processing data. In some embodiments, processor 101 can include one or more processing units. In some embodiments of the present application, the processor 101 may further include a fingerprint verification chip for verifying the collected fingerprint.
射频电路102可用于在收发信息或通话过程中,无线信号的接收和发送。特别地,射频电路102可以将基站的下行数据接收后,给处理器101处理;另外,将涉及上行的数据发送给基站。通常,射频电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频电路102还可以通过无线通信和其他设备通信。所述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯 系统、通用分组无线服务、码分多址、宽带码分多址、长期演进、电子邮件、短消息服务等。The radio frequency circuit 102 can be used to receive and transmit wireless signals during transmission or reception of information or calls. In particular, the radio frequency circuit 102 can process the downlink data of the base station and then process it to the processor 101; in addition, transmit the data related to the uplink to the base station. Generally, radio frequency circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency circuit 102 can also communicate with other devices through wireless communication. The wireless communication can use any communication standard or protocol including, but not limited to, global mobile communication systems, general packet radio services, code division multiple access, wideband code division multiple access, long term evolution, email, short message service, and the like.
存储器103用于存储应用程序以及数据，处理器101通过运行存储在存储器103的应用程序以及数据，执行手机100的各种功能以及数据处理。存储器103主要包括存储程序区以及存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的应用程序（比如声音播放功能、图像播放功能等）；存储数据区可以存储根据使用手机100时所创建的数据（比如音频数据、电话本等）。此外，存储器103可以包括高速随机存取存储器（Random Access Memory，RAM），还可以包括非易失存储器，例如磁盘存储器件、闪存器件或其他易失性固态存储器件等。存储器103可以存储各种操作系统，例如（Figure PCTCN2018076081-appb-000001）操作系统、（Figure PCTCN2018076081-appb-000002）操作系统等。上述存储器103可以是独立的，通过上述通信总线与处理器101相连接；存储器103也可以和处理器101集成在一起。The memory 103 is configured to store applications and data, and the processor 101 executes various functions of the mobile phone 100 and processes data by running the applications and data stored in the memory 103. The memory 103 mainly includes a program storage area and a data storage area, where the program storage area can store an operating system and an application required by at least one function (such as a sound playing function or an image playing function), and the data storage area can store data (such as audio data and a phone book) created during use of the mobile phone 100. In addition, the memory 103 may include a high-speed random access memory (RAM), and may also include a non-volatile memory such as a magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. The memory 103 can store various operating systems, for example, the operating system shown in Figure PCTCN2018076081-appb-000001, the operating system shown in Figure PCTCN2018076081-appb-000002, and the like. The memory 103 may be independent and connected to the processor 101 through the foregoing communication bus, or the memory 103 may be integrated with the processor 101.
触摸屏104具体可以包括触控板104-1和显示器104-2。The touch screen 104 may specifically include a touch panel 104-1 and a display 104-2.
其中,触控板104-1可采集手机100的用户在其上或附近的触摸事件(比如用户使用手指、触控笔等任何适合的物体在触控板104-1上或在触控板104-1附近的操作),并将采集到的触摸信息发送给其他器件(例如处理器101)。其中,用户在触控板104-1附近的触摸事件可以称之为悬浮触控;悬浮触控可以是指,用户无需为了选择、移动或拖动目标(例如图标等)而直接接触触控板,而只需用户位于设备附近以便执行所想要的功能。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型来实现触控板104-1。The touch panel 104-1 can collect touch events on or near the user of the mobile phone 100 (for example, the user uses any suitable object such as a finger, a stylus, or the like on the touch panel 104-1 or on the touchpad 104. The operation near -1), and the collected touch information is sent to other devices (for example, processor 101). The touch event of the user in the vicinity of the touch panel 104-1 may be referred to as a hovering touch; the hovering touch may mean that the user does not need to directly touch the touchpad in order to select, move or drag a target (eg, an icon, etc.) , and only the user is located near the device to perform the desired function. In addition, the touch panel 104-1 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
显示器(也称为显示屏)104-2可用于显示由用户输入的信息或提供给用户的信息以及手机100的各种菜单。可以采用液晶显示器、有机发光二极管等形式来配置显示器104-2。触控板104-1可以覆盖在显示器104-2之上,当触控板104-1检测到在其上或附近的触摸事件后,传送给处理器101以确定触摸事件的类型,随后处理器101可以根据触摸事件的类型在显示器104-2上提供相应的视觉输出。虽然在图1中,触控板104-1与显示屏104-2是作为两个独立的部件来实现手机100的输入和输出功能,但是在某些实施例中,可以将触控板104-1与显示屏104-2集成而实现手机100的输入和输出功能。可以理解的是,触摸屏104是由多层的材料堆叠而成,本申请实施例中只展示出了触控板(层)和显示屏(层),其他层在本申请实施例中不予记载。另外,触控板104-1可以以全面板的形式配置在手机100的正面,显示屏104-2也可以以全面板的形式配置在手机100的正面,这样在手机的正面就能够实现无边框的结构。A display (also referred to as display) 104-2 can be used to display information entered by the user or information provided to the user as well as various menus of the mobile phone 100. The display 104-2 can be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The touchpad 104-1 can be overlaid on the display 104-2, and when the touchpad 104-1 detects a touch event on or near it, it is transmitted to the processor 101 to determine the type of touch event, and then the processor 101 may provide a corresponding visual output on display 104-2 depending on the type of touch event. Although in FIG. 1, the touchpad 104-1 and the display 104-2 are implemented as two separate components to implement the input and output functions of the handset 100, in some embodiments, the touchpad 104- 1 is integrated with the display screen 104-2 to implement the input and output functions of the mobile phone 100. It is to be understood that the touch screen 104 is formed by stacking a plurality of layers of materials. In the embodiment of the present application, only the touch panel (layer) and the display screen (layer) are shown, and other layers are not described in the embodiment of the present application. . In addition, the touch panel 104-1 may be disposed on the front surface of the mobile phone 100 in the form of a full-board, and the display screen 104-2 may also be disposed on the front surface of the mobile phone 100 in the form of a full-board, so that the front of the mobile phone can be borderless. Structure.
另外,手机100还可以具有指纹识别功能。例如,可以在手机100的背面(例如后置摄像头的下方)配置指纹识别器112,或者在手机100的正面(例如触摸屏104的下方)配置指纹识别器112。又例如,可以在触摸屏104中配置指纹采集器件112来实现指纹识别功能,即指纹采集器件112可以与触摸屏104集成在一起来实现手机100的指纹识别功能。在这种情况下,该指纹采集器件112配置在触摸屏104中,可以是触摸屏104的一部分,也可以以其他方式配置在触摸屏104中。本申请实施例中的指纹采集器件112的主要部件是指纹传感器,该指纹传感器可以采用任何类型的感测技术,包括但不限于光学式、电容式、压电式或超声波传感技术等。In addition, the mobile phone 100 can also have a fingerprint recognition function. For example, the fingerprint reader 112 can be configured on the back of the handset 100 (eg, below the rear camera) or on the front side of the handset 100 (eg, below the touch screen 104). For another example, the fingerprint collection device 112 can be configured in the touch screen 104 to implement the fingerprint recognition function, that is, the fingerprint collection device 112 can be integrated with the touch screen 104 to implement the fingerprint recognition function of the mobile phone 100. In this case, the fingerprint capture device 112 is disposed in the touch screen 104 and may be part of the touch screen 104 or may be otherwise disposed in the touch screen 104. The main component of the fingerprint collection device 112 in the embodiment of the present application is a fingerprint sensor, which can employ any type of sensing technology, including but not limited to optical, capacitive, piezoelectric or ultrasonic sensing technologies.
手机100还可以包括蓝牙装置105,用于实现手机100与其他短距离的设备(例如手机、智能手表等)之间的数据交换。本申请实施例中的蓝牙装置可以是集成电路或者蓝牙芯片等。The mobile phone 100 may also include a Bluetooth device 105 for enabling data exchange between the handset 100 and other short-range devices (eg, mobile phones, smart watches, etc.). The Bluetooth device in the embodiment of the present application may be an integrated circuit or a Bluetooth chip or the like.
手机100还可以包括至少一种传感器106,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节触摸屏104的显示器的亮度,接近传感器可在手机100移动到耳边时,关闭显示器的电源。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机100还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。The handset 100 can also include at least one type of sensor 106, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display of the touch screen 104 according to the brightness of the ambient light, and the proximity sensor may turn off the power of the display when the mobile phone 100 moves to the ear. . As a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes). When it is stationary, it can detect the magnitude and direction of gravity. It can be used to identify the gesture of the mobile phone (such as horizontal and vertical screen switching, related Game, magnetometer attitude calibration), vibration recognition related functions (such as pedometer, tapping), etc. As for the mobile phone 100 can also be configured with gyroscopes, barometers, hygrometers, thermometers, infrared sensors and other sensors, here Let me repeat.
WiFi装置107,用于为手机100提供遵循WiFi相关标准协议的网络接入,手机100可以通过WiFi装置107接入到WiFi接入点,进而帮助用户收发电子邮件、浏览网页和访问流媒体等,它为用户提供了无线的宽带互联网访问。在其他一些实施例中,该WiFi装置107也可以作为WiFi无线接入点,可以为其他设备提供WiFi网络接入。The WiFi device 107 is configured to provide the mobile phone 100 with network access complying with the WiFi-related standard protocol, and the mobile phone 100 can access the WiFi access point through the WiFi device 107, thereby helping the user to send and receive emails, browse web pages, and access streaming media. It provides users with wireless broadband Internet access. In some other embodiments, the WiFi device 107 can also function as a WiFi wireless access point, and can provide WiFi network access for other devices.
定位装置108,用于为手机100提供地理位置。可以理解的是,该定位装置108具体可以是全球定位系统(Global Positioning System,GPS)或北斗卫星导航系统、俄罗斯GLONASS等定位系统的接收器。定位装置108在接收到上述定位系统发送的地理位置后,将该信息发送给处理器101进行处理,或者发送给存储器103进行保存。在另外的一些实施例中,该定位装置108还可以是辅助全球卫星定位系统(Assisted Global Positioning System,AGPS)的接收器,AGPS系统通过作为辅助服务器来协助定位装置108完成测距和定位服务,在这种情况下,辅助定位服务器通过无线通信网络与设备例如手机100的定位装置108(即GPS接收器)通信而提供定位协助。在另外的一些实施例中,该定位装置108也可以是基于WiFi接入点的定位技术。由于每一个WiFi接入点都有一个全球唯一的媒体访问控制(Media Access Control,MAC)地址,设备在开启WiFi的情况下即可扫描并收集周围的WiFi接入点的广播信号,因此可以获取到WiFi接入点广播出来的MAC地址;设备将这些能够标示WiFi接入点的数据(例如MAC地址)通过无线通信网络发送给位置服务器,由位置服务器检索出每一个WiFi接入点的地理位置,并结合WiFi广播信号的强弱程度,计算出该设备的地理位置并发送到该设备的定位装置108中。The positioning device 108 is configured to provide a geographic location for the mobile phone 100. It can be understood that the positioning device 108 can be specifically a receiver of a positioning system such as a Global Positioning System (GPS) or a Beidou satellite navigation system, or a Russian GLONASS. After receiving the geographical location transmitted by the positioning system, the positioning device 108 sends the information to the processor 101 for processing, or sends it to the memory 103 for storage. In some other embodiments, the positioning device 108 can also be a receiver of an Assisted Global Positioning System (AGPS), which assists the positioning device 108 in performing ranging and positioning services by acting as an auxiliary server. In this case, the secondary location server provides location assistance over a wireless communication network in communication with a location device 108 (i.e., a GPS receiver) of the device, such as handset 100. In still other embodiments, the positioning device 108 can also be a WiFi access point based positioning technology. Since each WiFi access point has a globally unique Media Access Control (MAC) address, the device can scan and collect broadcast signals of surrounding WiFi access points when WiFi is turned on, so that it can be obtained. The MAC address broadcasted to the WiFi access point; the device sends the data (such as the MAC address) capable of indicating the WiFi access point to the location server through the wireless communication network, and the location server retrieves the geographical location of each WiFi access point. And in combination with the strength of the WiFi broadcast signal, the geographic location of the device is calculated and sent to the location device 108 of the device.
音频电路109、扬声器113、麦克风114可提供用户与手机100之间的音频接口。音频电路109可将接收到的音频数据转换后的电信号,传输到扬声器113,由扬声器113转换为声音信号输出;另一方面,麦克风114将收集的声音信号转换为电信号,由音频电路109接收后转换为音频数据,再将音频数据输出至RF电路102以发送给比如另一手机,或者将音频数据输出至存储器103以便进一步处理。The audio circuit 109, the speaker 113, and the microphone 114 can provide an audio interface between the user and the handset 100. The audio circuit 109 can transmit the converted electrical data of the received audio data to the speaker 113 for conversion to the sound signal output by the speaker 113; on the other hand, the microphone 114 converts the collected sound signal into an electrical signal by the audio circuit 109. After receiving, it is converted into audio data, and then the audio data is output to the RF circuit 102 for transmission to, for example, another mobile phone, or the audio data is output to the memory 103 for further processing.
外设接口110,用于为外部的输入/输出设备(例如键盘、鼠标、外接显示器、外部存储器、用户识别模块卡等)提供各种接口。例如通过通用串行总线(Universal Serial Bus,USB)接口与鼠标连接,通过用户识别模块卡卡槽上的金属触点与电信运营商提供的用户识别模块卡(Subscriber Identification Module,SIM)卡进行连接。外设接口 110可以被用来将上述外部的输入/输出外围设备耦接到处理器101和存储器103。The peripheral interface 110 is used to provide various interfaces for external input/output devices (such as a keyboard, a mouse, an external display, an external memory, a subscriber identity module card, etc.). For example, it is connected to the mouse through a Universal Serial Bus (USB) interface, and is connected to a Subscriber Identification Module (SIM) card provided by the service provider through a metal contact on the card slot of the subscriber identity module. . Peripheral interface 110 can be used to couple the external input/output peripherals described above to processor 101 and memory 103.
在本发明实施例中,手机100可通过外设接口110与设备组内的其他设备进行通信,例如,通过外设接口110可接收其他设备发送的显示数据进行显示等,本发明实施例对此不作任何限制。In the embodiment of the present invention, the mobile phone 100 can communicate with other devices in the device group through the peripheral interface 110. For example, the peripheral interface 110 can receive display data sent by other devices for display, etc. No restrictions are imposed.
手机100还可以包括给各个部件供电的电源装置111(比如电池和电源管理芯片),电池可以通过电源管理芯片与处理器101逻辑相连,从而通过电源装置111实现管理充电、放电、以及功耗管理等功能。The mobile phone 100 may further include a power supply device 111 (such as a battery and a power management chip) that supplies power to the various components. The battery may be logically connected to the processor 101 through the power management chip to manage charging, discharging, and power management through the power supply device 111. And other functions.
尽管图1未示出,手机100还可以包括摄像头(前置摄像头和/或后置摄像头)、闪光灯、微型投影装置、近场通信(Near Field Communication,NFC)装置等,在此不再赘述。Although not shown in FIG. 1, the mobile phone 100 may further include a camera (front camera and/or rear camera), a flash, a micro projection device, a near field communication (NFC) device, and the like, and details are not described herein.
其中，本申请实施例提供的图像分类方法的执行主体可以为图像处理装置，该图像处理装置是可以用于管理图像的设备（如图1所示的手机100），或者该设备的中央处理器（Central Processing Unit，CPU），或者该设备中的用于进行图像处理的控制模块，或者该设备中用于管理图像的客户端。本申请实施例中以上述设备执行图像分类方法为例，对本申请提供的图像分类方法进行说明。The execution body of the image classification method provided in the embodiments of the present application may be an image processing apparatus. The image processing apparatus may be a device that can be used to manage images (such as the mobile phone 100 shown in FIG. 1), or a central processing unit (CPU) of the device, or a control module in the device for performing image processing, or a client in the device for managing images. In the embodiments of the present application, the image classification method provided in the present application is described by using an example in which the foregoing device performs the image classification method.
以下对本申请中涉及的术语进行介绍:The following refers to the terms involved in this application:
(1)JPEG是一种国际图像压缩标准。(1) JPEG is an international image compression standard.
(2)JPEG档案交换格式(JPEG File Interchange Format,JFIF)是一个图像文件格式标准,JFIF是一种符合JPEG交换格式标准的JPEG编码文件的格式。JFIF文件中的图像数据使用JPEG压缩;因此,JFIF也被称为“JPEG/JFIF”。(2) JPEG File Interchange Format (JFIF) is an image file format standard. JFIF is a JPEG-encoded file format conforming to the JPEG exchange format standard. The image data in the JFIF file is compressed using JPEG; therefore, JFIF is also called "JPEG/JFIF".
其中,JPEG/JFIF是最普遍在万维网(World Wide Web)上被用来储存和传输图片的格式。Among them, JPEG/JFIF is the most commonly used format for storing and transmitting pictures on the World Wide Web.
如图2所示，JPEG图像文件（即JPEG格式的图像文件）均以字符串“0xFFD8”开头，并以字符串“0xFFD9”结束。其中，JPEG图像文件的文件头中有一系列“0xFF**”格式的字符串，称为“JPEG标识”或“JPEG段”，用于标记JPEG图像文件的信息段。其中，“0xFFD8”用于标记图像信息开始，“0xFFD9”用于标记图像信息结束，这两个JPEG标识后面没有信息，而其它JPEG标识后会紧跟一些信息字符。As shown in FIG. 2, a JPEG image file (that is, an image file in the JPEG format) starts with the string "0xFFD8" and ends with the string "0xFFD9". The header of a JPEG image file contains a series of strings in the "0xFF**" format, called "JPEG markers" or "JPEG segments", which are used to mark the information segments of the JPEG image file. Among them, "0xFFD8" marks the start of the image information and "0xFFD9" marks the end of the image information; these two JPEG markers are followed by no information, while the other JPEG markers are followed by some information characters.
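As a rough illustration of the JPEG segment layout described above, the following Python sketch walks the "0xFF**" markers of a small synthetic byte stream. It assumes the common convention that markers other than SOI (0xFFD8) and EOI (0xFFD9) are followed by a 2-byte big-endian length that includes the length field itself; it is a simplified reader, not a full JPEG parser.

import struct

STANDALONE = {0xD8, 0xD9}          # SOI / EOI carry no payload

def list_segments(jpeg_bytes):
    segments = []
    i = 0
    while i + 1 < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break                   # entropy-coded image data reached
        marker = jpeg_bytes[i + 1]
        if marker in STANDALONE:
            segments.append((marker, b""))
            i += 2
            continue
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        segments.append((marker, jpeg_bytes[i + 4:i + 2 + length]))
        i += 2 + length
    return segments

# A tiny synthetic file: SOI, one APP1 segment holding "Exif\0\0", EOI.
app1_payload = b"Exif\x00\x00"
demo = (b"\xFF\xD8" +
        b"\xFF\xE1" + struct.pack(">H", 2 + len(app1_payload)) + app1_payload +
        b"\xFF\xD9")
for marker, payload in list_segments(demo):
    print(hex(marker), payload)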
(3)EXIF图像文件(即EXIF格式的图像文件)是上述JPEG图像文件的一种,遵循JPEG标准。(3) The EXIF image file (that is, the image file in the EXIF format) is one of the above JPEG image files, and conforms to the JPEG standard.
EXIF图像在JPEG图像文件中增加了相机拍摄图像的拍摄参数（称为第一拍摄参数）。例如，该第一拍摄参数可以包括：拍摄日期、拍摄器材参数（如相机的品牌和型号、镜头参数、闪光灯参数等）、拍摄参数（如快门速度、光圈F值、ISO速度、焦距、测光模式等）、图像处理参数（如锐化、对比度、饱和度、白平衡等）和拍摄图像的GPS定位数据等。The EXIF image adds, to the JPEG image file, shooting parameters (referred to as first shooting parameters) used when the camera captures the image. For example, the first shooting parameters may include: the shooting date, shooting equipment parameters (such as the camera brand and model, lens parameters, and flash parameters), shooting parameters (such as the shutter speed, aperture F-number, ISO speed, focal length, and metering mode), image processing parameters (such as sharpening, contrast, saturation, and white balance), GPS positioning data of the captured image, and the like.
其中，如图3所示，EXIF图像文件中可以包括EXIF信息，该EXIF信息包括EXIF字段301（EXIF图像文件中用于保存上述第一拍摄参数的字段）和Maker Note字段302，上述第一拍摄参数保存在EXIF字段301中。Maker Note字段302是为厂商预留的用于保存厂商专有注释数据的字段。其中，上述EXIF信息在EXIF图像中以字符串“0xFFE0”开头，并以字符串“0xFFEF”结束，该EXIF信息是64KB（千字节）。As shown in FIG. 3, an EXIF image file may include EXIF information. The EXIF information includes an EXIF field 301 (a field in the EXIF image file used to save the foregoing first shooting parameters) and a Maker Note field 302, and the first shooting parameters are saved in the EXIF field 301. The Maker Note field 302 is a field reserved for vendors to save vendor-specific annotation data. The EXIF information starts with the string "0xFFE0" and ends with the string "0xFFEF" in the EXIF image, and the EXIF information is 64 KB (kilobytes).
本申请实施例中的第一图像文件的格式可以是EXIF(即第一图像文件可以是EXIF图像文件),该第一图像文件中的特征信息可以保存在EXIF图像文件的Maker Note字段中。EXIF图像文件的图像数据字段303用于保存图像数据。如此,(其他设备或者该设备)对该第一图像文件执行图像分类操作时,便可以读第一图像文件中的特征信息,直接利用第一图像文件中的特征信息对第一图像文件执行图像分类操作;而不是对第一图像文件中的图像数据执行图像识别操作,通过大量的计算得到用于进行图像分类操作的特征信息。通过本方案,可以减少图像分类过程中的计算量,进而可以提高图像分类效率。The format of the first image file in the embodiment of the present application may be EXIF (ie, the first image file may be an EXIF image file), and the feature information in the first image file may be saved in the Maker Note field of the EXIF image file. The image data field 303 of the EXIF image file is used to save image data. In this way, when the other device or the device performs the image classification operation on the first image file, the feature information in the first image file can be read, and the image is directly executed on the first image file by using the feature information in the first image file. The classification operation; instead of performing an image recognition operation on the image data in the first image file, the feature information for performing the image classification operation is obtained by a large number of calculations. Through this scheme, the amount of calculation in the image classification process can be reduced, and the image classification efficiency can be improved.
当然，本申请实施例中的第一图像文件的格式包括但不限于EXIF，本申请实施例中的第一图像文件还可以是其他包括预留字段的格式文件，该预留字段可以用于保存用于进行图像分类操作的特征信息。并且，上述预留字段包括但不限于上述Maker Note字段，该预留字段可以是第一图像文件中可以用于保存用于进行图像分类操作的特征信息的任一字段，本申请实施例对此不作限制。Certainly, the format of the first image file in the embodiments of the present application includes but is not limited to EXIF. The first image file in the embodiments of the present application may also be a file in another format that includes a reserved field, where the reserved field can be used to save the feature information used for the image classification operation. In addition, the reserved field includes but is not limited to the foregoing Maker Note field; the reserved field may be any field in the first image file that can be used to save the feature information used for the image classification operation, which is not limited in the embodiments of the present application.
请参考图4,其示出了本申请实施例提供的一种图像分类方法的系统原理框架示意图。在本申请实施例中,第一设备可以在拍摄图像和执行图像分类操作的过程中,在第一图像文件保存图像数据的特征信息,并在执行图像分类操作时直接使用保存在图像文件中的特征信息。Please refer to FIG. 4 , which is a schematic diagram of a system principle framework of an image classification method provided by an embodiment of the present application. In the embodiment of the present application, the first device may save the feature information of the image data in the first image file during the process of capturing the image and performing the image classification operation, and directly use the image file stored in the image file when performing the image classification operation. Feature information.
具体的，如图4所示，该设备可以获取摄像头采集图像数据时的特征信息（本申请实施例中的图像生成信息），并在第一图像文件中保存该特征信息（即执行401），生成第一图像文件403；随后，设备的图像分类引擎402执行图像分类操作时，可以直接读取第一图像文件403中的特征信息（即图像生成信息），并识别第一图像文件的图像数据得到新的特征信息（如本申请实施例中的分类特征信息）；然后利用读取到的特征信息和新的特征信息对第一图像文件执行图像分类操作；最后，还可以采用识别到的新的特征信息，更新第一图像文件中的特征信息（例如，增加新的特征信息）。Specifically, as shown in FIG. 4, the device may acquire feature information (the image generation information in the embodiments of the present application) when the camera collects image data, and save the feature information in the first image file (that is, perform 401) to generate a first image file 403. Subsequently, when the image classification engine 402 of the device performs an image classification operation, it may directly read the feature information (that is, the image generation information) in the first image file 403, and recognize the image data of the first image file to obtain new feature information (such as the classification feature information in the embodiments of the present application); then perform the image classification operation on the first image file by using the read feature information and the new feature information; and finally, the feature information in the first image file may further be updated by using the newly recognized feature information (for example, the new feature information is added).
以下通过具体实施例详细说明本申请实施例提供的一种图像分类方法。An image classification method provided by an embodiment of the present application is described in detail below by using specific embodiments.
本申请实施例提供一种图像分类方法，这里以第一图像文件的格式为EXIF（即第一图像文件是EXIF图像文件），特征信息保存在EXIF图像文件的Maker Note字段为例，对本申请实施例提供的图像分类方法进行说明。如图5所示，本申请实施例的方法包括S501-S503：An embodiment of the present application provides an image classification method. Here, the image classification method provided in the embodiments of the present application is described by using an example in which the format of the first image file is EXIF (that is, the first image file is an EXIF image file) and the feature information is saved in the Maker Note field of the EXIF image file. As shown in FIG. 5, the method in this embodiment of the present application includes S501-S503:
S501、第一设备捕获图像数据,并获取捕获图像数据时的图像生成信息,生成包括图像数据和图像生成信息的第一图像文件。S501. The first device captures image data, and acquires image generation information when capturing image data, and generates a first image file including image data and image generation information.
其中,本申请实施例的第一图像文件(如EXIF图像文件)的预设字段(如Maker Note字段)可以用于保存该第一图像文件的图像数据的特征信息。该特征信息可以包括:图像生成信息。该图像生成信息是第一设备的摄像头捕获图像数据时,由第一设备获取的特征信息。其中,图像生成信息的详细示例可以参考本申请实施例后续描述, 这里不予赘述。The preset field (such as the Maker Note field) of the first image file (such as the EXIF image file) of the embodiment of the present application may be used to save the feature information of the image data of the first image file. The feature information may include image generation information. The image generation information is feature information acquired by the first device when the camera of the first device captures the image data. For a detailed example of the image generation information, reference may be made to the subsequent description of the embodiments of the present application, and details are not described herein.
一般而言,第一设备捕获图像数据后可以生成包括图像数据的第一图像文件。而本申请实施例中,第一设备拍摄得到图像文件的方法与传统方案中设备拍摄图像的方法不同。具体的,第一设备不仅获取摄像头捕获的图像数据,还可以获取捕获图像数据时的图像生成信息,然后生成包括该图像数据和图像生成信息的图像文件。In general, a first image file including image data may be generated after the first device captures image data. In the embodiment of the present application, the method for capturing an image file by the first device is different from the method for capturing an image by the device in the conventional solution. Specifically, the first device not only acquires image data captured by the camera, but also acquires image generation information when the image data is captured, and then generates an image file including the image data and the image generation information.
示例性的，上述图像生成信息可以包括：摄像头捕获图像数据时的拍摄参数（称为第二拍摄参数）。例如，上述第二拍摄参数可以包括：拍摄模式（如全景模式和普通模式等）的信息、拍摄场景（如人物拍摄场景、建筑物拍摄场景、自然风光拍摄场景、室内拍摄场景和室外拍摄场景等）的信息和摄像头类型（摄像头类型指示图像数据是采用前置摄像头或者后置摄像头捕获的）等。其中，第一设备可以响应于摄像头捕获图像数据时，用户对拍摄模式、拍摄场景以及前置摄像头或者后置摄像头的选择，确定上述拍摄模式信息、拍摄场景信息和摄像头类型等信息。本申请实施例中的普通模式是指使用后置摄像头拍摄照片的模式。For example, the image generation information may include shooting parameters (referred to as second shooting parameters) used when the camera captures the image data. For example, the second shooting parameters may include: information of a shooting mode (such as a panoramic mode and a normal mode), information of a shooting scene (such as a person shooting scene, a building shooting scene, a natural scenery shooting scene, an indoor shooting scene, and an outdoor shooting scene), a camera type (the camera type indicates whether the image data is captured by a front camera or a rear camera), and the like. The first device may determine the shooting mode information, the shooting scene information, the camera type, and other information in response to the user's selection of the shooting mode, the shooting scene, and the front camera or the rear camera when the camera captures the image data. The normal mode in the embodiments of the present application refers to a mode in which a photo is taken by using the rear camera.
其中，当上述拍摄场景为人物拍摄场景时，上述图像生成信息还包括人物特征信息，该人物特征信息包括人脸个数、人脸指示信息、人脸位置信息、其他对象（如动物）的指示信息和其他对象的个数等信息。其中，人脸指示信息用于指示第一图像文件的图像数据中包括人脸或者不包括人脸；其他对象的指示信息用于指示图像数据中包括其他对象或者不包括其他对象；人脸位置信息用于指示人脸在图像数据中的位置。When the shooting scene is a person shooting scene, the image generation information further includes person feature information. The person feature information includes information such as the number of faces, face indication information, face position information, indication information of other objects (such as animals), and the number of other objects. The face indication information indicates whether the image data of the first image file includes a face; the indication information of other objects indicates whether the image data includes other objects; and the face position information indicates the position of a face in the image data.
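The following data model is an illustrative (not normative) way to represent the image generation information listed above in Python: the second shooting parameters plus the person feature information used for person shooting scenes. Field names and default values are assumptions made for the example.

from dataclasses import dataclass, field, asdict
from typing import List, Tuple

@dataclass
class ImageGenerationInfo:
    shooting_mode: str = "normal"          # e.g. "normal" or "panorama"
    scene: str = "unknown"                 # e.g. "portrait", "building", "landscape"
    camera_type: str = "rear"              # "front" or "rear"
    # Person feature information, meaningful only for person shooting scenes.
    has_face: bool = False
    face_count: int = 0
    face_boxes: List[Tuple[int, int, int, int]] = field(default_factory=list)
    has_other_objects: bool = False
    other_object_count: int = 0

info = ImageGenerationInfo(shooting_mode="normal", scene="portrait",
                           camera_type="front", has_face=True, face_count=2,
                           face_boxes=[(120, 80, 200, 160), (400, 90, 480, 170)])
print(asdict(info))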
值得注意的是，本申请实施例中的第二拍摄参数与图3所示的EXIF图像的EXIF字段301中保存的第一拍摄参数不同。一般而言，相机在拍摄图像时，仅可以在图像中保存上述第一拍摄参数，而不会记录本申请实施例中的第二拍摄参数；而该第二拍摄参数可以用于对图像执行图像分类操作。如此，在对图像执行图像分类操作时，则需要识别图像得到第二拍摄参数，则会造成图像分类操作的计算量较大的问题。本申请实施例中，第一设备可以在拍摄第一图像文件的图像数据时，生成包括图像数据和上述第二拍摄参数的第一图像文件。这样，在对图像执行图像分类操作时，则可以直接从第一图像文件中读取并利用第二拍摄参数执行图像分类操作，可以减少对图像执行图像分类操作时的计算量，进而可以提高图像分类效率。当然，本申请实施例中的第一图像文件（如EXIF图像文件）中也可以包括上述第一拍摄参数。该第一拍摄参数保存在EXIF图像文件的EXIF字段中。It should be noted that the second shooting parameters in the embodiments of the present application are different from the first shooting parameters saved in the EXIF field 301 of the EXIF image shown in FIG. 3. Generally, when capturing an image, a camera can only save the foregoing first shooting parameters in the image and does not record the second shooting parameters in the embodiments of the present application, while the second shooting parameters can be used to perform an image classification operation on the image. Consequently, when an image classification operation is performed on the image, the image needs to be recognized to obtain the second shooting parameters, which causes a large amount of calculation in the image classification operation. In the embodiments of the present application, the first device may generate, when capturing the image data of the first image file, the first image file including the image data and the foregoing second shooting parameters. In this way, when an image classification operation is performed on the image, the second shooting parameters can be directly read from the first image file and used to perform the image classification operation, which can reduce the amount of calculation when performing the image classification operation on the image and thereby improve image classification efficiency. Certainly, the first image file (such as an EXIF image file) in the embodiments of the present application may also include the foregoing first shooting parameters, and the first shooting parameters are saved in the EXIF field of the EXIF image file.
请参考图6，其示出了本申请实施例提供的一种拍摄图像时生成包括图像数据的特征信息的图像文件的原理示意图。如图6所示，相机引擎可以在摄像头捕获图像（即61）时，调用场景算法和人脸算法等算法，并识别摄像头捕获图像数据时的用户操作（即62），以收集图像的图像生成信息（即63）；然后将收集的图像生成信息和摄像头捕获的图像交给JPEG制作器64；由JPEG制作器64中的MakerNote制作器将来自63的图像生成信息打包成字节数组（简称特征字节数组），由JPEG制作器64中的EXIF制作器将来自61的图像打包成字节数组；然后，由JPEG制作器64生成包括图像数据和上述特征字节数组的图像文件（即第一图像文件）。Please refer to FIG. 6, which shows a schematic diagram of the principle of generating, when an image is captured, an image file that includes the feature information of the image data according to an embodiment of the present application. As shown in FIG. 6, when the camera captures an image (that is, 61), the camera engine may invoke algorithms such as a scene algorithm and a face algorithm, and recognize the user operations performed when the camera captures the image data (that is, 62), to collect the image generation information of the image (that is, 63). The collected image generation information and the image captured by the camera are then handed to the JPEG maker 64. The MakerNote maker in the JPEG maker 64 packs the image generation information from 63 into a byte array (referred to as the feature byte array), and the EXIF maker in the JPEG maker 64 packs the image from 61 into a byte array. Finally, the JPEG maker 64 generates an image file including the image data and the foregoing feature byte array (that is, the first image file).
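A minimal sketch of the capture-time flow of FIG. 6 is given below, under the assumption that the feature information is first serialized into a compact byte array before the JPEG maker assembles the file. JSON is used here purely for illustration; as described later, the actual Maker Note payload uses a TIFF/IFD layout. All function names are hypothetical.

import json

def collect_generation_info(ui_state, algorithms):
    """Merge what the scene/face algorithms report with the user's choices."""
    info = {"shooting_mode": ui_state["mode"], "camera_type": ui_state["camera"]}
    info.update(algorithms(ui_state))      # e.g. scene label, face count
    return info

def make_maker_note(info):
    # "MakerNote maker": pack the feature info into a byte array.
    return json.dumps(info, separators=(",", ":")).encode("utf-8")

def make_image_file(image_data, maker_note):
    # "JPEG maker": bundle pixels and feature bytes into one image file.
    # A dict stands in for the real EXIF container described later.
    return {"image_data": image_data, "maker_note": maker_note}

ui = {"mode": "normal", "camera": "front"}
fake_algorithms = lambda _ui: {"scene": "portrait", "face_count": 1}
first_image_file = make_image_file(
    b"<jpeg bytes>", make_maker_note(collect_generation_info(ui, fake_algorithms)))
print(first_image_file["maker_note"])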
可选的,第一设备可以周期性的执行以下S502,对包括第一图像文件在内的多个图像文件进行图像分类。Optionally, the first device may periodically perform the following S502 to perform image classification on the plurality of image files including the first image file.
示例性的,假设第一设备是图7A中的(a)所示的手机100。如图7A中的(a)所示,该手机100的相册中包括照片1-照片8等照片,第一图像文件是手机100的相册中的任一张照片,如第一图像文件是图7A中的(a)所示的照片1。手机100可以周期性的对手机100的相册中的照片进行图像分类。如此,手机100便可以显示图7A中的(b)所示的相簿界面,在该相簿界面中,手机100显示对相册中的照片进行图像分类的结果。例如,手机100将照片1-照片8分为“人物”相簿b、“动物”相簿a和“景观”相簿c。其中,“人物”相簿b中包括图7A中的(a)所示的照片1、照片3、照片5和照片8,“动物”相簿a中包括图7A中的(a)所示的照片2和照片7,“景观”相簿c中包括图7A中的(a)所示的照片4和照片6。例如,以“人物”相簿b为例,当用户点击图7A中的(b)所示的“人物”相簿b时,手机100可以显示图7A中的(c)所示的“人物”相簿界面,该“人物”相簿界面中包括照片1、照片3、照片5和照片8。Exemplarily, it is assumed that the first device is the mobile phone 100 shown in (a) of FIG. 7A. As shown in (a) of FIG. 7A, the photo album of the mobile phone 100 includes photos 1 to 8 and the like, and the first image file is any photo in the album of the mobile phone 100, such as the first image file is FIG. 7A. Photo 1 shown in (a). The mobile phone 100 can periodically perform image classification on photos in the album of the mobile phone 100. Thus, the mobile phone 100 can display the album interface shown in (b) of FIG. 7A, in which the mobile phone 100 displays the result of image classification of the photos in the album. For example, the mobile phone 100 divides the photos 1 - 8 into "person" album b, "animal" album a, and "landscape" album c. Wherein, the "person" album b includes the photo 1, the photo 3, the photo 5, and the photo 8 shown in (a) of FIG. 7A, and the "animal" photo album a includes the one shown in (a) of FIG. 7A. Photo 2 and Photo 7, "Landscape" album c includes Photo 4 and Photo 6 shown in (a) of Fig. 7A. For example, taking the "person" album b as an example, when the user clicks on the "person" album b shown in (b) of FIG. 7A, the mobile phone 100 can display the "person" shown in (c) of FIG. 7A. In the album interface, the "People" album interface includes Photo 1, Photo 3, Photo 5, and Photo 8.
或者，第一设备可以响应于用户操作执行S502，对该第一设备的相册中的图片进行图像分类。基于图7A所示的实例，手机100也可以响应于用户对图7A中的(a)所示的“相簿”按钮的点击操作，对照片1-照片8进行图像分类，然后显示图7A中的(b)所示的相簿界面。Alternatively, the first device may perform S502 in response to a user operation to classify the pictures in the album of the first device. Based on the example shown in FIG. 7A, the mobile phone 100 may also, in response to the user's tap on the "Albums" button shown in (a) of FIG. 7A, classify Photo 1 to Photo 8 and then display the album interface shown in (b) of FIG. 7A.
S502、第一设备利用图像生成信息,对第一图像文件执行图像分类操作。S502. The first device performs image classification operation on the first image file by using image generation information.
本申请实施例中，上述图像生成信息不是第一设备对图像数据执行图像识别操作得到的，即第一设备不会为了得到图像生成信息，而对图像数据执行图像识别操作。也就是说，第一设备可以跳过以下步骤：对图像数据执行图像识别操作，以分析图像数据得到图像生成信息；而直接利用第一图像文件中已保存的图像生成信息，对第一图像文件执行图像分类操作。In the embodiments of the present application, the image generation information is not obtained by the first device performing an image recognition operation on the image data; that is, the first device does not perform an image recognition operation on the image data in order to obtain the image generation information. In other words, the first device can skip the step of performing an image recognition operation on the image data to analyze the image data to obtain the image generation information, and can directly perform the image classification operation on the first image file by using the image generation information already saved in the first image file.
可以理解,第一设备分析图像数据得到不同的特征信息所采用的图像分类算法不同。其中,第一设备分析图像数据得到图像生成信息所采用的图像分类算法包括第一算法。这样,第一设备不采用第一算法对图像数据执行图像识别操作,以分析图像数据得到与第一算法对应的图像生成信息。It can be understood that the image classification algorithm adopted by the first device to analyze the image data to obtain different feature information is different. The image classification algorithm used by the first device to analyze the image data to obtain the image generation information includes the first algorithm. In this way, the first device does not perform an image recognition operation on the image data by using the first algorithm, and analyzes the image data to obtain image generation information corresponding to the first algorithm.
需要说明的是,本申请实施例中的第一算法可以包括一个或多个图像分类算法。本申请实施例中的图像生成信息可以包括一个或多个属性的特征信息,每一种属性的特征信息对应一个图像分类算法。It should be noted that the first algorithm in the embodiment of the present application may include one or more image classification algorithms. The image generation information in the embodiment of the present application may include feature information of one or more attributes, and the feature information of each attribute corresponds to an image classification algorithm.
S503、设备响应于用于查看图库的操作,在图库的分类目录显示第一图像文件。S503. The device displays the first image file in a category directory of the gallery in response to the operation for viewing the gallery.
其中,本申请实施例中的图像分类目录可以按照执行S502得到的分类结果显示第一图像文件。The image classification directory in the embodiment of the present application may display the first image file according to the classification result obtained by executing S502.
示例性的，基于图7A所示的实例，手机100也可以响应于用户对图7A中的(a)所示的“相簿”按钮的点击操作，分类显示照片1-照片8，即显示图7A中的(b)所示的相簿界面（即图库的分类目录）。For example, based on the example shown in FIG. 7A, the mobile phone 100 may also, in response to the user's tap on the "Albums" button shown in (a) of FIG. 7A, display Photo 1 to Photo 8 by category, that is, display the album interface (that is, the catalog of the gallery) shown in (b) of FIG. 7A.
可选的，上述用于查询图库的操作可以是用户在图库的搜索框中输入关键字，设备可以响应于用户在图库的搜索框中输入关键字，在图库的分类目录显示包括第一图像文件的多个图像文件，这多个图像文件与用户输入的关键字匹配。Optionally, the foregoing operation for querying the gallery may be that the user enters a keyword in a search box of the gallery. In response to the user entering the keyword in the search box of the gallery, the device may display, in the catalog of the gallery, a plurality of image files including the first image file, where the plurality of image files match the keyword entered by the user.
示例性的,以手机100中保存图7A中的(a)所示的照片1-照片8为例,手机100执行上述S501-S502对图像文件(如照片1-照片8)执行了图像分类操作。当用户在 图库的搜索框中输入关键字“人物”时,手机100便可以在图库的分类目录显示照片1、照片3、照片5和照片8等人物图像文件。当用户在图库的搜索框中输入关键字“两个人”时,手机100便可以在图库的分类目录显示照片3和照片5等人物图像文件。Exemplarily, taking the photo 1 - photo 8 shown in (a) of FIG. 7A in the mobile phone 100 as an example, the mobile phone 100 performs the image classification operation on the image file (such as photo 1 - photo 8) by executing the above S501-S502. . When the user enters the keyword "person" in the search box of the gallery, the mobile phone 100 can display the character image files of the photo 1, the photo 3, the photo 5, and the photo 8 in the catalogue of the gallery. When the user inputs the keyword "two people" in the search box of the gallery, the mobile phone 100 can display the character image files of the photo 3 and the photo 5 in the catalogue of the gallery.
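The following toy example shows how such keyword queries could be answered from the stored classification results alone, without re-analyzing any pixels. The label names, keyword strings, and matching rules are invented for illustration.

photos = {
    "photo1": {"category": "People", "face_count": 1},
    "photo3": {"category": "People", "face_count": 2},
    "photo4": {"category": "Landscape", "face_count": 0},
    "photo5": {"category": "People", "face_count": 2},
}

def search(keyword):
    # Answer the query by filtering stored labels instead of running recognition.
    if keyword == "people":
        return [name for name, f in photos.items() if f["category"] == "People"]
    if keyword == "two people":
        return [name for name, f in photos.items() if f["face_count"] == 2]
    return []

print(search("people"))      # -> photo1, photo3, photo5
print(search("two people"))  # -> photo3, photo5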
本申请实施例提供的图像分类方法，第一设备拍摄得到的图像文件不仅包括图像数据，还包括图像生成信息。这样，第一设备便可以直接利用图像生成信息，对第一图像文件执行图像分类操作；而不需要对图像数据执行图像识别操作以分析图像数据得到图像生成信息，可以减少图像分类过程中的计算量，进而可以提高图像分类效率。In the image classification method provided by the embodiments of the present application, the image file captured by the first device includes not only the image data but also the image generation information. In this way, the first device can directly perform the image classification operation on the first image file by using the image generation information, without performing an image recognition operation on the image data to analyze the image data to obtain the image generation information. This reduces the amount of calculation in the image classification process and thereby improves image classification efficiency.
进一步的,由于第一设备执行图像分类操作所需要的特征信息不仅包括上述图像生成信息,还包括分类特征信息(对图像数据执行图像识别操作得到的特征信息,分类特征信息与图像生成信息不同);因此,在第一设备对第一图像文件执行图像分类操作之前,第一设备还可以对图像数据执行图像识别操作,以分析图像数据得到分类特征信息;然后再利用图像生成信息和分类特征信息,对第一图像文件执行图像分类操作。具体的,如图7B所示,上述S502可以包括S701-S702:Further, the feature information required for the first device to perform the image classification operation includes not only the image generation information but also the classification feature information (the feature information obtained by performing the image recognition operation on the image data, and the classification feature information is different from the image generation information) Therefore, before the first device performs an image classification operation on the first image file, the first device may further perform an image recognition operation on the image data to analyze the image data to obtain classification feature information; and then use the image generation information and the classification feature information. And performing an image classification operation on the first image file. Specifically, as shown in FIG. 7B, the foregoing S502 may include S701-S702:
S701、第一设备对图像数据执行图像识别操作,以分析图像数据得到分类特征信息。S701. The first device performs an image recognition operation on the image data to analyze the image data to obtain classification feature information.
其中，第一设备可以采用不同于上述第一算法的其他图像分类算法（如第二算法）对图像数据执行图像识别操作，以分析图像数据得到与第二算法对应的分类特征信息。其中，第二算法与上述第一算法不同，分类特征信息与上述图像生成信息不同。第一设备采用第二算法对图像数据执行图像识别操作以得到分类特征信息的方法，可以参考传统技术中对图像数据执行图像识别操作以得到分类特征信息的方法，本申请实施例这里不予赘述。The first device may perform an image recognition operation on the image data by using another image classification algorithm (such as a second algorithm) different from the foregoing first algorithm, to analyze the image data to obtain classification feature information corresponding to the second algorithm. The second algorithm is different from the foregoing first algorithm, and the classification feature information is different from the foregoing image generation information. For the method in which the first device performs an image recognition operation on the image data by using the second algorithm to obtain the classification feature information, reference may be made to the method of performing an image recognition operation on image data to obtain classification feature information in the conventional technology, and details are not described herein in the embodiments of the present application.
S702、第一设备利用图像生成信息和分类特征信息,对第一图像文件执行图像分类操作。S702. The first device performs image classification operation on the first image file by using image generation information and classification feature information.
其中,第一设备利用图像生成信息和分类特征信息对第一图像文件执行图像分类操作的方法,可以参考常规技术中设备根据图像文件的特征信息对图像文件执行图像分类操作的方法,本申请实施例这里不再赘述。The method for performing an image classification operation on the first image file by using the image generation information and the classification feature information by the first device may refer to a method for performing an image classification operation on the image file according to the feature information of the image file in the conventional technology. The examples are not described here.
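A hedged sketch of S701-S702 is shown below: only the missing analysis (the second algorithm) is run on the pixels, and the classification decision then uses the union of the stored image generation information and the newly computed classification feature information. The recognizer, feature keys, and grouping rules are placeholders, not the algorithms actually used by the device.

def classify_first_image_file(image_file, second_algorithm):
    generation_info = image_file["image_generation_info"]             # already stored
    classification_info = second_algorithm(image_file["image_data"])  # S701
    features = {**generation_info, **classification_info}             # S702
    if features.get("face_count", 0) >= 1:
        return "People", classification_info
    if features.get("label") == "pet":
        return "Animals", classification_info
    return "Landscape", classification_info

image_file = {"image_data": b"...",
              "image_generation_info": {"scene": "portrait", "face_count": 2}}
category, new_info = classify_first_image_file(
    image_file, second_algorithm=lambda data: {"label": "group photo"})
print(category)   # -> People; new_info can then be written back into the file (S703)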
进一步的,如图7B所示,在S701之后,本申请实施例的方法还包括S703:Further, as shown in FIG. 7B, after S701, the method in the embodiment of the present application further includes S703:
S703、第一设备在第一图像文件中保存分类特征信息,得到更新后的第一图像文件。S703. The first device saves the classification feature information in the first image file to obtain the updated first image file.
可以理解,在S703之后,第一图像文件中包括图像生成信息和分类特征信息。本申请实施例中,将图像生成信息和分类特征信息统称为第一图像文件的特征信息。本申请实施例中的特征信息(图像生成信息和分类特征信息)保存在第一图像文件的预设字段。本申请实施例以预设字段是图3所示的Maker Note字段302为例,对预设字段的格式以及预设字段中保存特征信息的方式进行说明:It can be understood that, after S703, the image generation information and the classification feature information are included in the first image file. In the embodiment of the present application, the image generation information and the classification feature information are collectively referred to as feature information of the first image file. The feature information (image generation information and classification feature information) in the embodiment of the present application is saved in a preset field of the first image file. In the embodiment of the present application, the Maker Note field 302 shown in FIG. 3 is used as an example to describe the format of the preset field and the manner in which the feature information is saved in the preset field:
For example, the feature information in this embodiment of the present application may be saved in the Maker Note field in TIFF format. The data format used to store the feature information in the Maker Note field 302 includes but is not limited to TIFF; other data formats for storing the feature information in the Maker Note field 302 are not described herein in this embodiment of the present application.
As shown in FIG. 3, the Maker Note field 302 includes an information header 302a, a check field 302b, a Tagged Image File Format (TIFF) header 302c, and a TIFF field 302d.
As shown in FIG. 8, the information header 302a is used to store vendor information; for example, "huawei" may be saved in the information header 302a. The check field 302b is used to store check information, and the check information indicates the integrity and accuracy of the information held in the Maker Note field 302; for example, the check information may be a cyclic redundancy check (Cyclic Redundancy Check, CRC). The TIFF header 302c is used to store TIFF indication information, and the TIFF indication information indicates that the feature information in the TIFF field 302d is stored in the Image File Directory (IFD) format. The TIFF field 302d is used to store the feature information (such as the image generation information and the classification feature information).
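Under the assumption that the four parts of the Maker Note field can be modeled as simple typed members, a minimal sketch of the layout might look as follows; the Python types and default values are illustrative and do not describe the actual byte-level encoding.

```python
from dataclasses import dataclass, field

@dataclass
class MakerNote:
    """Illustrative layout of the Maker Note field 302; the real field is a
    binary EXIF sub-structure, modeled here only for explanation."""
    info_header: bytes = b"huawei"                  # 302a: vendor information
    check_info: int = 0                             # 302b: e.g. a CRC over the payload
    tiff_header: bytes = b"IFD"                     # 302c: indicates IFD-format payload
    tiff_field: dict = field(default_factory=dict)  # 302d: feature info stored as IFDs
```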
In this embodiment of the present application, after the device performs an image classification operation on an image and obtains new feature information (that is, classification feature information), the device may update the feature information saved in the Maker Note field 302 (that is, modify the Maker Note field 302). In this case, to prevent the check information saved in the check field 302b from indicating that the Maker Note field 302 has been tampered with merely because the device updated the feature information in the Maker Note field 302, the device may generate new check information for the check field 302b when updating the feature information saved in the Maker Note field 302.
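A minimal sketch of this update-and-recheck step, assuming CRC-32 is used as the check information and using a JSON stand-in for the real serialization of the TIFF field:

```python
import json
import zlib

def refresh_check_field(tiff_field: dict, new_features: dict) -> int:
    """After new classification feature info is written into the TIFF field,
    recompute the check info for check field 302b so the legitimately updated
    Maker Note is not reported as tampered. The JSON serialization is only an
    assumed stand-in for the real byte layout of the TIFF field."""
    tiff_field.update(new_features)
    payload = json.dumps(tiff_field, sort_keys=True).encode("utf-8")
    return zlib.crc32(payload)
```

The design point mirrored here is simply that any legitimate writer of the Maker Note field must regenerate the check information together with the payload.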
Refer to FIG. 8, which is a schematic diagram of an example data structure of the Maker Note field according to an embodiment of the present application. In FIG. 8, the feature information in the TIFF field 302d is stored in the IFD format.
Specifically, one or more IFDs may be stored in the TIFF field 302d. For example, as shown in FIG. 8, the TIFF field 302d includes IFD0 and IFD1. Using IFD0 as an example, the IFD information in the TIFF field 302d is described as follows:
IFD0 includes a directory area and a data area. The directory area of IFD0 is used to store the directory of the sub-IFDs in IFD0 (such as sub-IFD1 and sub-IFD2) and the connection end tag of IFD0. The data area of IFD0 is used to store the sub-IFDs (such as sub-IFD1 and sub-IFD2). The connection end tag of IFD0 indicates where IFD0 ends. The feature information in this embodiment of the present application may be stored in the directory area or in the data area.
Optionally, each sub-IFD may also include a directory area and a data area. For example, sub-IFD1 in IFD0 includes a directory area and a data area. For the functions of the directory area and the data area of sub-IFD1, reference may be made to the description of the directory area and the data area of an IFD in this embodiment of the present application; details are not described herein again.
Assume that the TIFF field 302d includes three IFDs (for example, IFD0 to IFD2), and that none of IFD0 to IFD2 includes a sub-IFD. Refer to FIG. 9, which is a schematic structural diagram of the directory of the IFD shown in FIG. 8. As shown in FIG. 9, the directory of an IFD includes a plurality of tag entries, and each tag entry includes a tag identifier (Identity, ID), a tag type, and a tag value.
The tag value in this embodiment of the present application may be the feature information itself; alternatively, the feature information is stored in the data area of the IFD, and the tag value is the address offset of the feature information in the data area. For example, when a piece of feature information is less than or equal to 4 bytes, the tag value is the feature information itself; when a piece of feature information is greater than 4 bytes, the feature information is stored in the data area, and the tag value is the address offset of the feature information in the data area.
For example, assume that IFD0 includes four tag entries whose tag IDs are 0x001, 0x002, 0x003, and 0x004. The tag type corresponding to tag ID 0x001 is Unsigned long, and its tag value indicates the shooting mode information of the image (for example, a tag value of 0 indicates that the image was taken in selfie mode, and a tag value of 1 indicates that the image was taken in panorama mode). The tag type corresponding to tag ID 0x002 is Unsigned byte, and its tag value indicates the camera type used for the image (for example, a tag value of 0 indicates that the image was taken with the rear camera, and a tag value of 1 indicates that the image was taken with the front camera). The tag type corresponding to tag ID 0x003 is Undefined, and its tag value indicates the face indication information (for example, a tag value of 0 indicates that there is no face in the image, and a tag value of 1 indicates that there is a face in the image). The tag type corresponding to tag ID 0x004 is Unsigned byte, and its tag value is address offset 1, which is the address offset of the face position information in the data area of IFD0.
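The inline-versus-offset rule and the example tags above can be sketched as follows; the byte order, the helper name `encode_feature`, and the use of a `bytearray` for the data area are assumptions made only for illustration.

```python
from typing import NamedTuple

class TagEntry(NamedTuple):
    tag_id: int     # e.g. 0x001 shooting mode, 0x002 camera type,
                    #      0x003 face indication, 0x004 face position offset
    tag_type: str   # "Unsigned long", "Unsigned byte", "Undefined", ...
    tag_value: int  # the feature info itself, or its offset in the data area

def encode_feature(tag_id: int, tag_type: str, raw: bytes, data_area: bytearray) -> TagEntry:
    """Feature info of 4 bytes or less is stored inline as the tag value;
    larger feature info is appended to the IFD data area and the tag value
    becomes its address offset."""
    if len(raw) <= 4:
        value = int.from_bytes(raw, "little")   # assumed byte order
    else:
        value = len(data_area)                  # offset of the payload in the data area
        data_area.extend(raw)
    return TagEntry(tag_id, tag_type, value)

# Example: face position info (say four 32-bit coordinates, 16 bytes) cannot be
# inlined, so tag 0x004 stores an offset into IFD0's data area instead.
ifd0_data = bytearray()
face_position_tag = encode_feature(0x004, "Unsigned byte", bytes(16), ifd0_data)
```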
The feature information that the first device needs when performing the image classification operation on the first image file may include feature information of a plurality of preset attributes. In general, the first device may use different algorithms to recognize the image data in the first image file to obtain the feature information of the corresponding attributes. The "plurality of preset attributes" in this embodiment of the present application is determined by an image classification client (client for short) in the first device. Specifically, the "plurality of preset attributes" is determined by the attributes of the feature information that the image classification client in the first device needs to recognize when performing an image classification operation on an image.
For example, assume that the attributes of the feature information that the client of the first device needs to recognize when performing an image classification operation on an image file include a face attribute, a scene attribute, and a mode attribute. The face attribute corresponds to the number of faces, the face indication information, and the like; the scene attribute corresponds to the shooting scene information; and the mode attribute corresponds to the shooting mode information. In that case, the plurality of preset attributes includes the face attribute, the scene attribute, and the mode attribute. The feature information of each attribute corresponds to one image classification algorithm; for example, the feature information of the face attribute corresponds to a face algorithm.
When the feature information of an attribute is relatively complex or extensive, the feature information of that attribute may be stored in a sub-IFD of an IFD, which makes the feature information easier to extract and store. For example, the feature information of the face attribute may include the version of the face algorithm, the number of faces, the face position information, and the like. Assume that IFD0 includes three sub-IFDs (for example, sub-IFD1 to sub-IFD3).
Refer to FIG. 10, which shows the structure of the directory of a sub-IFD in IFD0 shown in FIG. 8. As shown in FIG. 10, the directory of a sub-IFD in an IFD includes a plurality of tag entries, and each tag entry includes a tag ID, a tag type, and a tag value. Using sub-IFD1 of IFD0 as an example, assume that sub-IFD1 of IFD0 includes three tag entries whose tag IDs are 0x001, 0x002, and 0x003. The tag type corresponding to tag ID 0x001 is Unsigned long, and its tag value indicates the version of the face algorithm; the tag type corresponding to tag ID 0x002 is Unsigned long, and its tag value indicates the number of faces; and the tag type corresponding to tag ID 0x003 is Unsigned byte, and its tag value is address offset 2, which is the address offset of the face position information in the data area of sub-IFD1 of IFD0.
It should be noted that, in this embodiment of the present application, tag IDs in tag entries of different IFDs may be the same. Specifically, because different IFDs have different identifiers (such as IDs), the device can still distinguish tag entries in different IFDs even if tag entries in two IFDs use the same tag ID. Likewise, tag IDs in tag entries of different sub-IFDs may be the same: because different sub-IFDs have different identifiers (such as IDs), the device can distinguish tag entries in different sub-IFDs even if tag entries in two sub-IFDs use the same tag ID. In addition, the tag types in this embodiment of the present application include but are not limited to the foregoing Unsigned long, Unsigned byte, and Undefined.
It can be understood that, when the first device adds new feature information to the TIFF field 302d, a tag entry, an IFD, or a sub-IFD may be added to the TIFF field 302d to store the feature information of the new attribute. For example, IFD2 is added to the TIFF field 302d shown in FIG. 8.
Refer to FIG. 11, which is a schematic diagram of the principle of performing an image classification operation according to an embodiment of the present application. As shown in FIG. 11, when the device is to perform an image classification operation on an image file, the image classification engine may first read the feature information saved in the Maker Note field of the image file; after the feature information is read, the Maker Note parser and the EXIF parser parse the read feature information (that is, 1101). For the feature information of some attributes (that is, feature information not saved in the Maker Note field), an image classification algorithm (that is, 1102) may be used to recognize the image data and obtain the feature information of those attributes (that is, new feature information 1104). The image classification operation may then be performed on the image by using the feature information read in 1101 together with the new feature information 1104 (that is, 1103). In addition, the image classification engine may update the feature information saved in the Maker Note field of the image file by using the new feature information 1104 (that is, 1105), to obtain an updated image file.
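The FIG. 11 flow can be sketched as follows; the attribute names, the `image_file` interface, and the single illustrative classification rule are assumptions, not the actual engine implementation.

```python
def classification_engine(image_file, required_attrs, algorithms):
    """Sketch of the FIG. 11 flow. `image_file` is assumed to expose the parsed
    Maker Note feature info (a dict), the raw pixels, and an update method;
    `algorithms` maps an attribute name to a recognition function."""
    # 1101: read and parse the feature info already saved in the Maker Note field.
    features = dict(image_file.maker_note_features)

    # 1102 / 1104: for attributes not yet saved, recognize the image data.
    new_features = {
        attr: algorithms[attr](image_file.pixels)
        for attr in required_attrs
        if attr not in features
    }
    features.update(new_features)

    # 1103: classify using both the stored and the newly recognized feature info.
    category = "People" if features.get("face_count", 0) > 0 else "Other"  # illustrative rule

    # 1105: write the new feature info back so it need not be recomputed next time.
    if new_features:
        image_file.update_maker_note(new_features)
    return category
```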
It can be understood that the first device may obtain the image file before performing the image classification operation on it. Specifically, as shown in FIG. 12, the method in this embodiment of the present application includes S1200:
S1200. The first device obtains the first image file.
The manner in which the first device obtains the first image file includes but is not limited to the manner shown in S501; the first device may also receive an image file sent by a second device. That is, S1200 may include S1200a: the first device receives the first image file sent by the second device. In different implementations (implementations a to d), the first image file received by the first device from the second device differs:
Implementation a: the second device captures the first image file in the image file capturing manner shown in S501. In this case, the first image file received by the first device from the second device includes the image generation information.
Implementation b: the second device captures the first image file in the image file capturing manner shown in S501, and the second device has performed an image classification operation on the first image file by using the image classification method provided in the embodiments of the present application. After performing the image classification operation on the first image file, the second device saves the classification feature information of the image data of the first image file in the first image file. That is, the first image file received by the first device from the second device includes both the image generation information and the classification feature information.
Implementation c: the second device does not have the capability of capturing an image file in the manner shown in S501. The first image file received by the first device from the second device includes neither the image generation information nor the classification feature information.
Implementation d: the second device does not have the capability of capturing an image file in the manner shown in S501; however, the second device has performed an image classification operation on the first image file by using the image classification method provided in the embodiments of the present application. After the second device performs the image classification operation on the first image file, the first image file includes the classification feature information but does not include the image generation information.
After S502, when the first device performs an image classification operation on the first image file again, or when the first device performs an image classification operation on the first image file received from the second device, the first device may first read the feature information saved in the first image file. When the first device determines that first feature information (the image generation information and/or the classification feature information) is saved in the first image file, the first device may skip performing an image recognition operation on the image data with a third algorithm to obtain the first feature information corresponding to the third algorithm, and may directly perform the image classification operation on the first image file by using the saved first feature information. In other words, the device can skip recognizing the image data with the third algorithm and directly use the first feature information to perform the image classification operation on the first image file, which reduces the amount of computation required for the image classification operation. Specifically, after S502, or after S1200a, the method in this embodiment of the present application further includes S1201 to S1205. For example, as shown in FIG. 12, after S1200, the method in this embodiment of the present application further includes S1201 to S1205:
S1201. The first device obtains the feature information in the first image file.
The feature information (such as the image generation information and the classification feature information) in this embodiment of the present application is saved in a preset field (such as the Maker Note field) of the first image file. For the specific manner of storing the feature information in the image file, reference may be made to the detailed description of FIG. 8 to FIG. 10 in this embodiment of the present application; details are not described herein again.
S1202. The first device determines whether the first image file includes the first feature information.
Specifically, when the first device determines that the first image file includes the first feature information, that is, the first feature information is saved in the preset field (such as the Maker Note field) of the first image file, the first device may skip recognizing the image data with the third algorithm to obtain the first feature information (that is, skip S1203), and directly perform the image classification operation by using the first feature information (that is, perform S1205).
When the first device determines that the first image file does not include the first feature information, that is, the first feature information is not saved in the preset field (such as the Maker Note field) of the first image file, the first device may recognize the image data in the first image file by using the third algorithm to obtain the first feature information (that is, perform S1203), and then perform the image classification operation by using the first feature information obtained in S1203 (that is, perform S1205).
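A minimal sketch of S1201 to S1205, assuming the preset field has already been parsed into a dictionary and that the third algorithm and the classification rule are supplied by the caller:

```python
def classify_with_cached_feature(image_file, attribute, third_algorithm, classify):
    """Sketch of S1201-S1205. `image_file.preset_field` is assumed to be the
    parsed preset field (e.g. Maker Note) as a dict; `third_algorithm` and
    `classify` are caller-supplied functions."""
    # S1201 / S1202: read the preset field and check for the first feature info.
    first_feature = image_file.preset_field.get(attribute)

    if first_feature is None:
        # S1203: only analyze the image data when the feature info is missing.
        first_feature = third_algorithm(image_file.pixels)
        # S1204: save it so neither this device nor a receiving device repeats the work.
        image_file.preset_field[attribute] = first_feature

    # S1205: perform the image classification operation with the feature info.
    return classify(first_feature)
```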
The first feature information may be feature information of a first attribute, and the third algorithm is the image classification algorithm used to recognize the feature information of the first attribute. For example, assume that the feature information of an image can be divided, by attribute, into feature information of attribute a and feature information of attribute b; the foregoing plurality of preset attributes may then include attribute a and attribute b. For example, the feature information of an image may be divided, by attribute, into feature information of a shooting attribute and feature information of a face attribute.
When the feature information of attribute a can be further divided, by sub-attribute, into feature information of sub-attribute 1 of attribute a and feature information of sub-attribute 2 of attribute a, the preset attributes may include sub-attribute 1 of attribute a, sub-attribute 2 of attribute a, and attribute b. For example, the feature information of the shooting attribute may include the shooting mode information, the shooting scene information, and the like; and the feature information of the face attribute may be divided into the face indication information, the version of the face algorithm, and the number of faces.
The Maker Note field of the first image file may store the feature information of the image data by attribute. For example, as shown in FIG. 8, each IFD stores the feature information of one attribute, and different IFDs store feature information of different attributes.
For example, IFD0 shown in FIG. 8 may be used to store the feature information of shooting attributes such as the shooting mode information and the shooting scene information, where sub-IFD1 of IFD0 stores the shooting mode information and sub-IFD2 of IFD0 stores the shooting scene information. IFD1 shown in FIG. 8 may be used to store the feature information of the face attribute (the version of the face algorithm, the number of faces, the face position information, and so on), where sub-IFD1 of IFD1 stores the version of the face algorithm, sub-IFD2 of IFD1 stores the number of faces, and sub-IFD3 of IFD1 stores the face position information.
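Such a pre-agreed layout could be represented, purely for illustration, as a small lookup table; the nested-dictionary form and the attribute names are assumptions, since the real field is binary TIFF/IFD data.

```python
# Pre-agreed Maker Note layout matching the example above.
MAKER_NOTE_LAYOUT = {
    "IFD0": {                     # shooting attribute
        "sub-IFD1": "shooting_mode",
        "sub-IFD2": "shooting_scene",
    },
    "IFD1": {                     # face attribute
        "sub-IFD1": "face_algorithm_version",
        "sub-IFD2": "face_count",
        "sub-IFD3": "face_positions",
    },
}

def locate(attribute: str):
    """Return the (IFD, sub-IFD) pair that is agreed to store a given attribute."""
    for ifd, sub_ifds in MAKER_NOTE_LAYOUT.items():
        for sub_ifd, stored_attribute in sub_ifds.items():
            if stored_attribute == attribute:
                return ifd, sub_ifd
    raise KeyError(attribute)
```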
The attribute of the feature information stored in each IFD may be agreed on in advance. For example, each tag stored in an IFD corresponds to the feature information of one attribute. Each IFD may store a plurality of tags, and each tag includes a tag ID, a tag type, and a tag value.
When the feature information of the first attribute is not saved in the Maker Note field (the TIFF field within the Maker Note field), the IFD of the TIFF field may simply not include the tag corresponding to the first attribute. Alternatively, in this embodiment of the present application, the absence of the feature information of the first attribute may be indicated by setting the tag value of the feature information of the first attribute to empty (for example, Null).
S1203. The first device recognizes the image data in the first image file by using the third algorithm, to obtain the first feature information.
For the manner in which the first device recognizes the image data of the first image file by using an image classification algorithm (such as the third algorithm) to obtain the first feature information, reference may be made to conventional methods in which a device, when performing an image classification operation on an image, uses an image classification algorithm to recognize the feature information of the image data; details are not described herein again in this embodiment of the present application.
S1204. The first device saves the first feature information in the first image file, to obtain an updated first image file.
Because the attribute of the feature information stored in each IFD may be agreed on in advance (for example, each tag ID of an IFD corresponds to the feature information of one attribute), the first device may save the feature information of the first attribute in the tag value corresponding to the tag ID of the first attribute in the TIFF field of the Maker Note field.
S1205. The first device performs an image classification operation on the first image file by using the first feature information.
For the manner in which the first device performs the image classification operation on the first image file by using the first feature information, reference may be made to conventional methods in which a device performs an image classification operation on an image file according to the feature information of the image file; details are not described herein again. For the principle of performing an image classification operation provided in this embodiment of the present application, reference may be made to the schematic diagram shown in FIG. 11; details are not described herein again.
According to the image classification method provided in this embodiment of the present application, when performing an image classification operation on the first image file, the first device may first obtain the feature information in the first image file and determine whether the first image file includes the first feature information. When the first image file includes the first feature information, the first device can skip recognizing the image data of the first image file with the third algorithm to obtain the first feature information. In this way, the amount of computation in the image classification process can be reduced, and the image classification efficiency can be improved.
It can be understood that the version of the algorithm used by the device to perform the image classification operation is updated over time, and recognizing the image data of the first image file with different versions of the same algorithm yields different feature information. On this basis, the classification feature information further includes the version of the image classification algorithm. In this case, even if the first feature information is saved in the preset field, the version of the algorithm that recognized the first feature information is not necessarily the same as the version of the third algorithm. In view of this, after S1202, when the first image file includes the first feature information, the method in this embodiment of the present application further includes S1301, and the foregoing S1204 may be replaced with S1204a:
S1301. The first device determines whether the version of the algorithm that recognized the first feature information is the same as the version of the third algorithm.
Specifically, when the version of the algorithm that recognized the first feature information is the same as the version of the third algorithm, the first device may skip recognizing the image data in the first image file with the third algorithm to obtain the first feature information (that is, skip S1203), and directly perform the image classification operation on the first image file by using the first feature information (that is, perform S1205).
When the version of the algorithm that recognized the first feature information is different from the version of the third algorithm, the first device may recognize the image data by using the third algorithm to obtain the first feature information (that is, perform S1203), and then perform the image classification operation on the first image file by using the first feature information obtained in S1203 (that is, perform S1205).
S1204a. The first device updates the first feature information already saved in the first image file with the newly obtained first feature information, to obtain an updated first image file.
It can be understood that, when the first image file does not include the first feature information, the first device updates the feature information saved in the first image file as follows: the first device adds the first feature information to the preset field of the first image file.
When the first image file includes the first feature information but the version of the algorithm that recognized it is different from the version of the third algorithm, the first device updates the feature information saved in the first image file as follows: the first device replaces the first feature information already saved in the preset field of the first image file with the newly recognized first feature information.
Further, the case in which the version of the algorithm that recognized the first feature information differs from the version of the third algorithm can be divided into two cases: (1) the version of the algorithm that recognized the first feature information is lower than the version of the third algorithm; and (2) the version of the algorithm that recognized the first feature information is higher than the version of the third algorithm. Specifically, after S1301, when the version of the algorithm that recognized the first feature information is different from the version of the third algorithm, the method in this embodiment of the present application further includes S1401:
S1401. The first device determines whether the version of the algorithm that recognized the first feature information is lower than the version of the third algorithm.
Specifically, when the version of the algorithm that recognized the first feature information is lower than the version of the third algorithm, the first device may recognize the image data in the first image file by using the higher-version third algorithm to obtain the feature information of the first attribute (that is, perform S1203), and then perform the image classification operation on the first image file by using the first feature information obtained in S1203 (that is, perform S1205).
When the version of the algorithm that recognized the first feature information is higher than the version of the third algorithm, the first device does not need to recognize the image data with the lower-version third algorithm to obtain the feature information of the first attribute. It may skip recognizing the image data with the third algorithm (that is, skip S1203) and directly perform the image classification operation on the first image file by using the first feature information (that is, perform S1205).
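The decisions of S1301 and S1401 reduce to a version comparison, sketched below under the assumption that algorithm versions are stored as directly comparable values (the actual version encoding in the Maker Note field is not specified by the text).

```python
def decide_recognition(saved_version, third_algorithm_version):
    """Decide whether the third algorithm must be re-run (S1301 / S1401)."""
    if saved_version is None:
        return "run_third_algorithm"              # no cached feature info (S1202 -> S1203)
    if saved_version == third_algorithm_version:
        return "reuse_saved_feature_info"         # S1301: same version, skip S1203
    if saved_version < third_algorithm_version:
        return "run_third_algorithm_and_update"   # S1401: cached info is outdated
    return "reuse_saved_feature_info"             # cached info comes from a newer version
```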
According to the image classification method provided in this embodiment of the present application, after the version of the algorithm used for the image classification operation is updated, the first device can promptly update the feature information saved in the preset field with the feature information recognized by the updated algorithm. In this way, when an image classification operation is performed on the image again, the updated feature information saved in the preset field can be used directly, which reduces the amount of computation in the image classification process and improves the image classification efficiency.
It can be understood that, to implement the foregoing functions, the first device, the second device, and the like include corresponding hardware structures and/or software modules for performing each function. A person skilled in the art should be readily aware that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered as going beyond the scope of the embodiments of the present application.
In the embodiments of the present application, the first device may be divided into functional modules according to the foregoing method examples. For example, each functional module may be obtained through division corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division into modules in the embodiments of the present application is schematic and is merely a logical function division; there may be other division manners in an actual implementation.
When each functional module is obtained through division corresponding to each function, as shown in FIG. 13, an embodiment of the present application provides a device 1300, where the device 1300 is the first device in the foregoing method embodiments. The device 1300 includes an obtaining unit 1301, a classification unit 1302, and a display unit 1303.
The obtaining unit 1301 is configured to support the device 1300 in performing S501, S1200, and S1201 in the foregoing method embodiments, and/or other processes of the technology described herein. The classification unit 1302 is configured to support the device 1300 in performing S502, S701, S702, S1203, and S1205 in the foregoing method embodiments, and/or other processes of the technology described herein. The display unit 1303 is configured to support the device 1300 in performing S503 in the foregoing method embodiments, and/or other processes of the technology described herein.
Further, the device 1300 further includes an updating unit. The updating unit is configured to support the device 1300 in performing S703, S1204, and S1204a in the foregoing method embodiments, and/or other processes of the technology described herein.
Further, the device 1300 further includes a determining unit. The determining unit is configured to support the device 1300 in performing S1202, S1301, and S1401 in the foregoing method embodiments, and/or other processes of the technology described herein.
Certainly, the device 1300 may further include other unit modules. For example, the device 1300 may further include a storage unit configured to store the first image file. Alternatively, the first image file may be stored on a cloud server, and the device 1300 may perform an image classification operation on an image file on the cloud server. The device 1300 may further include a transceiver unit, through which the device 1300 interacts with other devices; for example, the device 1300 may send an image file to, or receive an image file from, another device through the transceiver unit.
When an integrated unit is used, the obtaining unit 1301, the classification unit 1302, and the like may be integrated into one processing module; the transceiver unit may be an RF circuit, a WiFi module, or a Bluetooth module of the device 1300; the storage unit may be a storage module of the device 1300; and the display unit 1303 may be a display module, such as a display (touchscreen).
FIG. 14 is a schematic diagram of a possible structure of the terminal involved in the foregoing embodiments. The device 1400 includes a processing module 1401, a storage module 1402, and a display module 1403.
The processing module 1401 is configured to control and manage the device 1400. The display module 1403 is configured to display image files and classification results of the image files. The storage module 1402 is configured to store program code and data of the device 1400. The device 1400 may further include a communication module 1404, where the communication module 1404 is configured to communicate with other devices, for example, to receive messages or image files from, or send them to, other devices.
The processing module 1401 may be a processor or a controller, for example, a CPU, a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various example logical blocks, modules, and circuits described with reference to the disclosure of the present application. The processor may also be a combination that implements a computing function, for example, a combination including one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 1404 may be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module 1402 may be a memory.
When the processing module 1401 is a processor (the processor 101 shown in FIG. 1), the communication module 1404 is a radio frequency circuit (the radio frequency circuit 102 shown in FIG. 1), the storage module 1402 is a memory (the memory 103 shown in FIG. 1), and the display module 1403 is a touchscreen (including the touchpad 104-1 and the display panel 104-2 shown in FIG. 1), the device provided in the present application may be the mobile phone 100 shown in FIG. 1. The communication module 1404 may include not only a radio frequency circuit but also a WiFi module and a Bluetooth module; communication modules such as the radio frequency circuit, the WiFi module, and the Bluetooth module may be collectively referred to as a communication interface. The processor, the communication interface, the touchscreen, and the memory may be coupled together through a bus.
An embodiment of the present application further provides a control device, including a processor and a memory. The memory is configured to store computer program code, where the computer program code includes computer instructions. When the processor executes the computer instructions, the image classification method described in the foregoing method embodiments is performed.
An embodiment of the present application further provides a computer storage medium storing computer program code. When the foregoing processor executes the computer program code, the device performs the related method steps in FIG. 5 or FIG. 12 to implement the method in the foregoing embodiments.
An embodiment of the present application further provides a computer program product. When the computer program product runs on a computer, the computer is caused to perform the related method steps in FIG. 5 or FIG. 12 to implement the method in the foregoing embodiments.
The device 1300, the device 1400, the computer storage medium, and the computer program product provided in the present application are all configured to perform the corresponding methods provided above. Therefore, for the beneficial effects that they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above; details are not described herein again.
From the foregoing description of the implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the foregoing functional modules is used as an example. In actual applications, the foregoing functions may be allocated to different functional modules as required; that is, the internal structure of the apparatus is divided into different functional modules to complete all or some of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely examples. For example, the division into modules or units is merely a logical function division, and there may be other division manners in an actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across a plurality of network units. Some or all of the units may be selected according to actual requirements to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or all or some of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (30)

  1. An image classification method, comprising:
    capturing, by a device, image data, obtaining image generation information at the time the image data is captured, and generating a first image file that comprises the image data and the image generation information;
    performing, by the device, an image classification operation on the first image file by using the image generation information; and
    displaying, by the device, the first image file in a catalogue of a gallery in response to an operation for viewing the gallery.
  2. The method according to claim 1, wherein the image generation information comprises at least one of information about a shooting parameter, information about a shooting mode, information about a shooting scene, and information about a camera type.
  3. The method according to claim 2, wherein the shooting parameter comprises an exposure value, the shooting mode comprises a panorama mode and a normal mode, the shooting scene comprises a person shooting scene, a building shooting scene, and a natural scenery shooting scene, and the camera type indicates whether the image data is captured by a front-facing camera or a rear-facing camera;
    wherein, when the shooting scene is the person shooting scene, the image generation information further comprises person feature information.
  4. The method according to any one of claims 1 to 3, wherein the image generation information is not obtained by performing an image recognition operation.
  5. The method according to any one of claims 1 to 4, wherein the performing, by the device, an image classification operation on the first image file by using the image generation information comprises:
    performing, by the device, an image recognition operation on the image data to analyze the image data and obtain classification feature information; and
    performing, by the device, an image classification operation on the first image file by using the image generation information and the classification feature information.
  6. The method according to claim 5, wherein after the performing, by the device, an image recognition operation on the image data to analyze the image data and obtain classification feature information, the method further comprises:
    saving, by the device, the classification feature information in the first image file to obtain an updated first image file.
  7. The method according to any one of claims 1 to 6, wherein the format of the first image file is the exchangeable image file format (EXIF).
  8. The method according to any one of claims 1 to 7, wherein the image generation information is saved in a vendor comment field of the first image file.
  9. The method according to claim 8, wherein the image generation information is saved in the vendor comment field in the Tagged Image File Format (TIFF).
  10. A device, comprising:
    an obtaining unit, configured to capture image data, obtain image generation information at the time the image data is captured, and generate a first image file that comprises the image data and the image generation information;
    a classification unit, configured to perform an image classification operation on the first image file by using the image generation information obtained by the obtaining unit; and
    a display unit, configured to display the first image file by classification according to a classification result of the classification unit, in response to an operation for viewing a gallery.
  11. The device according to claim 10, wherein the image generation information obtained by the obtaining unit comprises at least one of information about a shooting parameter, information about a shooting mode, information about a shooting scene, and information about a camera type.
  12. The device according to claim 11, wherein the shooting parameter comprises an exposure value, the shooting mode comprises a panorama mode and a normal mode, the shooting scene comprises a person shooting scene, a building shooting scene, and a natural scenery shooting scene, and the camera type indicates whether the image data is captured by a front-facing camera or a rear-facing camera;
    wherein, when the shooting scene is the person shooting scene, the image generation information further comprises person feature information.
  13. The device according to any one of claims 10 to 12, wherein the classification unit is further configured to perform an image recognition operation on the image data;
    wherein the image generation information is not obtained by the classification unit recognizing the image data.
  14. The device according to any one of claims 10 to 13, wherein the classification unit is specifically configured to: perform an image recognition operation on the image data to analyze the image data and obtain classification feature information; and perform an image classification operation on the first image file by using the image generation information and the classification feature information.
  15. The device according to claim 14, wherein the device further comprises:
    an updating unit, configured to: after the classification unit performs the image recognition operation on the image data to analyze the image data and obtain the classification feature information, save the classification feature information in the first image file to obtain an updated first image file.
  16. The device according to any one of claims 10 to 15, wherein the format of the first image file is the exchangeable image file format (EXIF).
  17. The device according to any one of claims 10 to 16, wherein the image generation information is saved in a vendor comment field of the first image file.
  18. 据权利要求17所述的设备,其特征在于,所述图像生成信息采用标签图像文件格式TIFF保存在所述厂商注释字段。The apparatus according to claim 17, wherein said image generation information is stored in said vendor comment field in a tag image file format TIFF.
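As a hedged illustration of claims 16-18 only: the image generation information could be serialized as a small TIFF-structured blob and embedded in the EXIF vendor annotation (MakerNote) field. The private tag ids below are made up, and the piexif usage in the trailing comment is an assumption about one third-party library, not the patent's implementation.

```python
import struct

def build_makernote_tiff(entries: dict[int, str]) -> bytes:
    """Serialize {private_tag_id: ascii_value} into a minimal little-endian TIFF blob
    (header + a single IFD), suitable for embedding in the EXIF MakerNote field."""
    header = struct.pack("<2sHI", b"II", 42, 8)   # byte order "II", magic 42, IFD at offset 8
    n = len(entries)
    data_offset = 8 + 2 + 12 * n + 4              # values too large to inline start here
    ifd_entries = b""
    data_area = b""
    for tag, text in sorted(entries.items()):     # TIFF requires tags in ascending order
        value = text.encode("ascii") + b"\x00"    # ASCII (type 2) values are NUL-terminated
        if len(value) <= 4:                       # small values are stored inline
            ifd_entries += struct.pack("<HHI4s", tag, 2, len(value), value.ljust(4, b"\x00"))
        else:                                     # larger values go after the IFD, by offset
            ifd_entries += struct.pack("<HHII", tag, 2, len(value), data_offset + len(data_area))
            data_area += value
    ifd = struct.pack("<H", n) + ifd_entries + struct.pack("<I", 0)  # count ... next-IFD = 0
    return header + ifd + data_area

# Hypothetical private tag ids for the image generation information:
makernote = build_makernote_tiff({0x0001: "panorama", 0x0002: "front", 0x0003: "person"})

# The blob could then be written into the MakerNote tag of the image's EXIF block,
# for example with the third-party piexif library (illustrative only):
#   exif = piexif.load("photo.jpg")
#   exif["Exif"][piexif.ExifIFD.MakerNote] = makernote
#   piexif.insert(piexif.dump(exif), "photo.jpg")
```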
  19. A device, comprising: a processor, a memory, a camera, and a display, wherein the memory and the display are coupled to the processor, the display is configured to display an image file, the memory comprises a non-volatile storage medium and is configured to store computer program code, and the computer program code comprises computer instructions; and when the processor executes the computer instructions:
    the camera is configured to capture image data;
    the processor is configured to acquire image generation information when the camera captures the image data, generate a first image file comprising the image data and the image generation information, and perform an image classification operation on the first image file by using the image generation information; and
    the display is configured to display the first image file in a classified manner in response to an operation for viewing a gallery.
  20. The device according to claim 19, wherein the image generation information acquired by the processor comprises at least one of: information of a shooting parameter, information of a shooting mode, information of a shooting scene, and information of a camera type.
  21. The device according to claim 20, wherein the shooting parameter comprises an exposure value, the shooting mode comprises a panorama mode and a normal mode, the shooting scene comprises a person shooting scene, a building shooting scene, and a natural scenery shooting scene, and the camera type indicates whether the image data is captured by a front camera or a rear camera;
    wherein, when the shooting scene is the person shooting scene, the image generation information further comprises person feature information.
  22. The device according to any one of claims 19-21, wherein the image generation information is not obtained by the processor performing an image recognition operation on the image data.
  23. The device according to any one of claims 19-22, wherein the processor is specifically configured to: perform an image recognition operation on the image data to analyze the image data and obtain classification feature information; and perform an image classification operation on the first image file by using the image generation information and the classification feature information.
  24. The device according to claim 23, wherein the processor is further configured to save the classification feature information in the first image file to obtain an updated first image file, after performing the image recognition operation on the image data to analyze the image data and obtain the classification feature information.
  25. The device according to any one of claims 19-24, wherein the format of the first image file is the exchangeable image file format (EXIF).
  26. The device according to any one of claims 19-25, wherein the image generation information is saved in a vendor annotation field of the first image file.
  27. The device according to claim 26, wherein the image generation information is saved in the vendor annotation field in the tag image file format (TIFF).
  28. A control device, comprising a processor and a memory, wherein the memory is configured to store computer program code, the computer program code comprises computer instructions, and when the processor executes the computer instructions, the control device performs the method according to any one of claims 1-9.
  29. A computer storage medium, comprising computer instructions which, when run on a device, cause the device to perform the method according to any one of claims 1-9.
  30. A computer program product which, when run on a computer, causes the computer to perform the method according to any one of claims 1-9.
PCT/CN2018/076081 2018-02-09 2018-02-09 Image classification method and device WO2019153286A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/076081 WO2019153286A1 (en) 2018-02-09 2018-02-09 Image classification method and device
CN201880085333.5A CN111566639A (en) 2018-02-09 2018-02-09 Image classification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/076081 WO2019153286A1 (en) 2018-02-09 2018-02-09 Image classification method and device

Publications (1)

Publication Number Publication Date
WO2019153286A1 true WO2019153286A1 (en) 2019-08-15

Family

ID=67549189

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/076081 WO2019153286A1 (en) 2018-02-09 2018-02-09 Image classification method and device

Country Status (2)

Country Link
CN (1) CN111566639A (en)
WO (1) WO2019153286A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111191522A (en) * 2019-12-11 2020-05-22 武汉光庭信息技术股份有限公司 Image scene information storage method and system

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113986096A (en) * 2021-12-29 2022-01-28 北京亮亮视野科技有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN115327562A (en) * 2022-10-16 2022-11-11 常州海图信息科技股份有限公司 Handheld visual laser rangefinder

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1955909A (en) * 2005-10-26 2007-05-02 奥林巴斯映像株式会社 Image managing apparatus and image managing method
US20100157096A1 (en) * 2008-12-18 2010-06-24 Samsung Electronics Co., Ltd Apparatus to automatically tag image and method thereof
CN103685815A (en) * 2012-09-20 2014-03-26 卡西欧计算机株式会社 Image classifying apparatus, electronic album creating apparatus, image classifying method, and program
CN108235765A (en) * 2017-12-05 2018-06-29 华为技术有限公司 A kind of display methods and device of story photograph album

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105824859A (en) * 2015-01-09 2016-08-03 中兴通讯股份有限公司 Picture classification method and device as well as intelligent terminal
CN105138578A (en) * 2015-07-30 2015-12-09 北京奇虎科技有限公司 Sorted storage method for target picture and terminal employing sorted storage method
CN105302872A (en) * 2015-09-30 2016-02-03 努比亚技术有限公司 Image processing device and method

Also Published As

Publication number Publication date
CN111566639A (en) 2020-08-21

Similar Documents

Publication Publication Date Title
CN109644229B (en) Method for controlling camera and electronic device thereof
US9584694B2 (en) Predetermined-area management system, communication method, and computer program product
US20110184980A1 (en) Apparatus and method for providing image
KR101485458B1 (en) Method For Creating Image File Including Information of Individual And Apparatus Thereof
EP3706015A1 (en) Method and device for displaying story album
CN110059686B (en) Character recognition method, device, equipment and readable storage medium
CN111125601B (en) File transmission method, device, terminal, server and storage medium
US20170048581A1 (en) Method and device for generating video content
WO2018184260A1 (en) Correcting method and device for document image
WO2019169587A1 (en) Method for installing application according to function modules
WO2019153286A1 (en) Image classification method and device
US20220215050A1 (en) Picture Search Method and Device
CN104798065A (en) Enabling a metadata storage subsystem
KR20150027934A (en) Apparatas and method for generating a file of receiving a shoot image of multi angle in an electronic device
US20150019579A1 (en) Method for an electronic device to execute an operation corresponding to a common object attribute among a plurality of objects
CN115115679A (en) Image registration method and related equipment
US11238622B2 (en) Method of providing augmented reality contents and electronic device therefor
US8477215B2 (en) Wireless data module for imaging systems
CN114842069A (en) Pose determination method and related equipment
WO2021244614A1 (en) Storage content searching method and system and electronic device
CN115134316B (en) Topic display method, device, terminal and storage medium
KR20100101960A (en) Digital camera, system and method for grouping photography
US11202107B2 (en) Method and apparatus for providing task based multimedia data
EP3751431A1 (en) Terminal searching for vr resource by means of image
JP5651975B2 (en) Image browsing device and camera

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18905541

Country of ref document: EP

Kind code of ref document: A1