WO2015058650A1 - Systems and methods for image processing

Systems and methods for image processing

Info

Publication number
WO2015058650A1
Authority
WO
WIPO (PCT)
Prior art keywords
additional information
image
terminal
identification
storing
Prior art date
Application number
PCT/CN2014/088724
Other languages
French (fr)
Inventor
Daosheng TANG
Zhihao Zheng
Zhu Liang
Yunsheng Wu
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2015058650A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32106 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title separate from the image data, e.g. in a different computer file
    • H04N1/32122 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title separate from the image data, e.g. in a different computer file in a separate device, e.g. in a memory or on a display separate from image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51 Indexing; Data structures therefor; Storage structures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32106 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title separate from the image data, e.g. in a different computer file
    • H04N1/32112 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title separate from the image data, e.g. in a different computer file in a separate computer file, document page or paper sheet, e.g. a fax cover sheet
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N2201/3201 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N2201/328 Processing of the additional information

Definitions

  • Certain embodiments of the present invention are directed to computer technology. More particularly, some embodiments of the invention provide devices and methods for data processing. Merely by way of example, some embodiments of the invention have been applied to image processing. But it would be recognized that the invention has a much broader range of applicability.
  • Images can be seen on various occasions and bear basic visual information. With technological development, images are used to bear more additional information in addition to visual information.
  • EXIF: Exchangeable Image File Format
  • JPG: Joint Photographic Experts Group
  • the additional information is shown on the image in a watermark manner, where the watermark overlays information, such as a geographical position, weather, time, etc., on the image in a semi-transparent picture display format.
  • a social application “PaPa” can bear voice in a picture.
  • a user can share a picture with friends via “PaPa,” with a voice introduction, and friends can comment and interact through voice.
  • the above-noted conventional technology has some disadvantages.
  • fields of image file formats are limited.
  • a particular file format is often applicable to only one scenario and cannot bear the increasingly abundant additional information.
  • many restrictions apply to the additional information.
  • the EXIF field in the JPG file cannot conveniently store sound information.
  • the additional information is usually statically appended to the file, which increases the file size and may make transmission difficult. Directly overlapping the additional information on the image may not accommodate sufficient information and may negatively affect the image. In addition, deeper secondary processing of the image may not be performed.
  • the additional information is often fixedly overlaid on the image, which may make it hard to add abundant additional information to the image.
  • a method for image processing. For example, an image shooting function is started; first additional information associated with an image is acquired; an identification for the first additional information is configured; the identification is stored associated with the image and the first additional information; and the first additional information and the image are stored separately.
  • a terminal includes: an acquisition unit configured to start an image shooting function and acquire first additional information associated with an image; a configuration unit configured to configure an identification for the first additional information; and a storage processing unit configured to store the identification associated with the image and the first additional information and store the first additional information and the image separately.
  • an image processing system includes: a terminal; and a cloud server configured to communicate with the terminal.
  • the terminal includes: an acquisition unit configured to start an image shooting function and acquire additional information associated with an image; a configuration unit configured to configure an identification for the additional information; and a storage processing unit configured to store the identification associated with the image and the additional information and store the additional information and the image separately.
  • the cloud server is configured to: store the additional information transmitted by the terminal and the identification of the additional information; search for the additional information according to the identification carried in an additional information request upon the receipt of the additional information request; and transmit the additional information to the terminal.
  • a non-transitory computer readable storage medium includes programming instructions for image processing.
  • the programming instructions are configured to cause one or more data processors to execute certain operations. For example, an image shooting function is started; first additional information associated with an image is acquired; an identification for the first additional information is configured; the identification is stored associated with the image and the first additional information; and the first additional information and the image are stored separately.
  • Figure 1 is a simplified diagram showing a method for image processing according to one embodiment of the present invention.
  • Figures 2–4 are simplified diagrams showing application scenarios for image processing according to some embodiments of the present invention.
  • Figure 5 is a simplified diagram showing a terminal for image processing according to one embodiment of the present invention.
  • Figure 6 is a simplified diagram showing a terminal for image processing according to another embodiment of the present invention.
  • Figure 7 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
  • Figure 8 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
  • Figure 9 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
  • Figure 10 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
  • Figure 11 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
  • Figure 12 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
  • Figure 13 is a simplified diagram showing a system for image processing according to one embodiment of the present invention.
  • FIG. 1 is a simplified diagram showing a method for image processing according to one embodiment of the present invention.
  • the diagram is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the method 100 includes processes 101-103.
  • the method 100 is executed by a terminal that includes a device with a shooting function, such as a mobile phone, a Personal Digital Assistant (PDA), or a tablet.
  • PDA: Personal Digital Assistant
  • the process 101 includes: starting an image shooting function and acquiring first additional information associated with an image.
  • the first additional information associated with the image is acquired from the start of the shoot function and includes information such as texts or voices input by a user after the image is shot.
  • the additional information associated with the image includes: a resolution associated with a photo or a video, a weather situation at image shooting, sound information, speed information and geographical position information, photo-related information such as shot content (e.g., food, people, etc.), a mood input by a user, a user's voice, face charm values of the user acquired via face detection, etc.
  • the geographical position information can be acquired via a Global Positioning System (GPS) mounted on the terminal, the sound information can be acquired via a sound sensor, and the speed information can be acquired via a motion sensor, according to some embodiments.
  • GPS: Global Positioning System
  • external input information can be acquired through user input.
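The acquisition step above can be sketched as assembling one record from sensor readings and user input. The snippet below is a minimal illustration; every field name (`resolution`, `mood`, `voice`, etc.) is an assumption chosen for readability, since the patent does not fix a schema.

```python
import json
import time

def acquire_additional_info(resolution, weather, sound_level_db, speed_mps,
                            latitude, longitude, user_mood=None, user_voice=None):
    """Assemble the first additional information for a freshly shot image.

    Field names are illustrative assumptions, not part of the patent.
    """
    info = {
        "resolution": resolution,          # e.g. "3264x2448"
        "weather": weather,                # e.g. "sunny"
        "sound_db": sound_level_db,        # from a sound sensor
        "speed_mps": speed_mps,            # from a motion sensor
        "position": {"lat": latitude, "lng": longitude},  # from GPS
        "shot_time": time.strftime("%Y-%m-%d %H:%M:%S"),
    }
    if user_mood is not None:              # external input typed by the user
        info["mood"] = user_mood
    if user_voice is not None:             # e.g. a path to a recorded clip
        info["voice"] = user_voice
    return info

record = acquire_additional_info("3264x2448", "sunny", 15, 5.0,
                                 22.54, 114.06, user_mood="good morning")
print(json.dumps(record, indent=2))
```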
  • the process 102 includes: configuring an identification for the first additional information.
  • the terminal configures a unique identification for additional information of each image.
  • the unique identification can be obtained by calculating a message digest of the additional information via the Message-Digest Algorithm (MD5), or can be directly assigned by the terminal.
  • MD5: Message-Digest Algorithm
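The digest-based variant of the identification step can be sketched as follows, realizing the "information abstract value" as an MD5 digest of the serialized additional information. Serializing with sorted keys is an assumption added here so that the digest does not depend on dictionary key order.

```python
import hashlib
import json

def make_identification(additional_info: dict) -> str:
    """Derive a unique identification for a piece of additional information
    by hashing its canonical JSON encoding with MD5."""
    canonical = json.dumps(additional_info, sort_keys=True).encode("utf-8")
    return hashlib.md5(canonical).hexdigest()

info = {"mood": "goddess, good morning", "temperature_c": 25}
ident = make_identification(info)
print(ident)   # 32 hex characters
```

The same information always yields the same identification, which is what lets the image, the terminal, and the cloud server agree on one key without coordination.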
  • the process 103 includes: storing the identification associated with the image and the first additional information and storing the first additional information and the image separately.
  • the unique identification is associatively stored with the image and the additional information respectively, and the additional information and the image are separately stored.
  • the additional information can be directly stored on the terminal or a storage device connected with the terminal, or can be transmitted to a cloud server for storage, and thus the memory space of the terminal can be saved.
  • the image and the additional information are encoded and stored separately and different types of additional information can be flexibly displayed when the image is displayed, thereby improving coordination of the image and the additional information, in some embodiments.
  • the method 100 further includes: classifying the first additional information based on at least information associated with one or more preset classification rules to obtain different types of second additional information and packaging the different types of second additional information using package formats corresponding to the different types of second additional information.
  • the preset classification rules include a numerical type, a simple text type, a complex text information type and a multimedia type.
  • the numerical type is set for specific values. For example, an altitude of 500 m, a sound level of 15 dB, etc., belong to the numerical type.
  • the terminal classifies the additional information in the numerical type, according to certain embodiments.
  • the simple text type is set for additional information that contains a few texts.
  • a threshold value of simple text information is preset on the terminal.
  • the terminal classifies the additional information in the simple text type, according to certain embodiments. For instance, when the threshold value is 5 and the additional information contains content such as “good morning,” “get up after a sound sleep,” etc., the terminal classifies the additional information in the simple text type.
  • the complex text information type is set for additional information containing many texts.
  • when the additional information includes a number of words/terms that exceeds the threshold value of the simple text type, the additional information is classified into the complex text information type.
  • for example, when the threshold value is 5 and the additional information contains more than 5 words, such as “let us go to climb together,” the terminal classifies the additional information into the complex text information type.
  • sounds and videos can be classified in a multimedia type. For example, when the terminal identifies that the format of the additional information is in the format of sounds and videos, the terminal classifies the additional information in the multimedia type.
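The four preset classification rules above can be sketched as a single dispatch function. The word-count threshold of 5 follows the text's example; the list of media file extensions is an illustrative assumption, since the text only says the terminal identifies sound and video formats.

```python
def classify(value, simple_text_threshold=5):
    """Classify one piece of additional information into one of the four
    preset types: numerical, simple_text, complex_text, multimedia."""
    media_exts = (".mp3", ".wav", ".amr", ".mp4", ".avi")  # assumption
    if isinstance(value, (int, float)):
        return "numerical"                      # e.g. 500 m altitude, 15 dB
    if isinstance(value, str):
        if value.lower().endswith(media_exts):  # a reference to sound/video
            return "multimedia"
        if len(value.split()) <= simple_text_threshold:
            return "simple_text"                # e.g. "good morning"
        return "complex_text"                   # more words than the threshold
    return "complex_text"

print(classify(500))                                      # numerical
print(classify("good morning"))                           # simple_text
print(classify("let us all go climbing together today"))  # complex_text
print(classify("voice_note.mp3"))                         # multimedia
```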
  • the classified additional information is packaged in formatting.
  • the additional information in different classifications corresponds to different package formats.
  • the additional information can be combined together by utilizing a JSON (JavaScript Object Notation) structure.
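A minimal sketch of type-specific packaging, assuming a JSON payload for the numerical and text types and a reference-only entry for multimedia (a real terminal would encode the sound itself, e.g. into MP3). The exact package layout is an assumption; the text only requires one format per type.

```python
import json

def package(classified: dict) -> dict:
    """Package classified additional information, one format per type.

    `classified` maps a type name ("numerical", "simple_text", ...) to
    the items of that type. Multimedia items are kept as references
    (e.g. MP3 paths) rather than raw bytes in this sketch.
    """
    packages = {}
    for type_name, items in classified.items():
        if type_name == "multimedia":
            packages[type_name] = {"format": "mp3", "refs": items}
        else:
            packages[type_name] = {"format": "json",
                                   "payload": json.dumps(items, sort_keys=True)}
    return packages

pkg = package({
    "numerical": {"temperature_c": 25, "humidity_pct": 50},
    "simple_text": {"mood": "goddess, good morning"},
    "multimedia": ["voice_note.mp3"],
})
print(pkg["numerical"]["payload"])
```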
  • a template identification (ID) and the additional information can be stored together, where the watermark corresponds to overlapping certain information (e.g., a geographical position, weather and time) on the image in a semi-transparent picture display format and displaying such information more clearly.
  • a watermark template includes a pre-designed template applied to the watermark.
  • sound can first be segmented and encoded into an MP3 (MPEG-1 Audio Layer III) format, etc.
  • the process 103 further includes: storing the identification associated with the first additional information and the image respectively on a terminal, the first additional information being stored in a first region, the image being stored in a second region.
  • the unique identification of the first additional information is generated for the packaged information by utilizing MD5, and the unique identification is stored in a comment field of the EXIF (Exchangeable Image File Format) segment of a JPG (Joint Photographic Experts Group) file.
  • the image, the first additional information and the unique identification of the first additional information are all stored on the terminal.
  • the unique identification is associated with the image and the first additional information respectively.
  • the image and the first additional information are separately stored, which is different from the situation where an image and associated additional information are overlapped together for encoding and are stored together.
  • the method 100 facilitates flexible configuration of the image and the first additional information, according to certain embodiments.
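The terminal-side separate storage described above — image in one region, additional information in another, both keyed by the shared identification — can be sketched with two file-system directories. Writing the identification into the JPG's EXIF comment field, as the text describes, would additionally require an EXIF library; this sketch uses plain sidecar files instead, which is an assumption.

```python
import json
import os
import tempfile

def store_separately(image_bytes: bytes, identification: str,
                     additional_info: dict, root: str):
    """Store the image and its additional information in separate regions,
    both named by the same identification so either can find the other."""
    image_region = os.path.join(root, "images")   # "second region"
    info_region = os.path.join(root, "info")      # "first region"
    os.makedirs(image_region, exist_ok=True)
    os.makedirs(info_region, exist_ok=True)
    with open(os.path.join(image_region, identification + ".jpg"), "wb") as f:
        f.write(image_bytes)
    with open(os.path.join(info_region, identification + ".json"), "w") as f:
        json.dump(additional_info, f)

def load_info(identification: str, root: str) -> dict:
    """Retrieve the additional information by its identification."""
    with open(os.path.join(root, "info", identification + ".json")) as f:
        return json.load(f)

root = tempfile.mkdtemp()
store_separately(b"\xff\xd8fake-jpeg", "abc123", {"mood": "morning"}, root)
print(load_info("abc123", root))
```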
  • the process 103 further includes: storing the identification associated with the image on a terminal; and transmitting the identification and the first additional information to a storage device connected with the terminal so that the storage device stores the identification associated with the first additional information.
  • the image and the unique identification are associatively stored on the terminal.
  • the first additional information and the unique identification are associatively stored on the storage device connected with the terminal, and the unique identification and the first additional information are transmitted via the terminal and a communication interface of the storage device.
  • the terminal directly reads the additional information from the storage device via the communication interface.
  • the storage device includes a storage card mounted on the terminal.
  • the process 103 further includes: storing the identification and the image on the terminal; and transmitting the identification and the first additional information to a cloud server so that the cloud server stores the identification associated with the first additional information.
  • the unique identification and the image are associatively stored on the terminal.
  • the unique identification and the first additional information are associatively stored on the cloud server.
  • the terminal can transmit an additional information acquisition request carrying the unique identification of the first additional information to the cloud server, and the cloud server can search for the corresponding additional information according to the unique identification of the first additional information and return the corresponding additional information to the terminal, thereby reducing the storage load on the terminal.
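The cloud-server side of this exchange — store under the identification, look up on request — can be sketched as a small in-memory class. The request and reply dictionaries here stand in for whatever network protocol a real deployment would use; their shapes are assumptions.

```python
class CloudServer:
    """A minimal sketch of the cloud server's role: store additional
    information under its identification and answer lookup requests."""

    def __init__(self):
        self._store = {}

    def store(self, identification, additional_info):
        """Store additional information transmitted by the terminal."""
        self._store[identification] = additional_info

    def handle_request(self, request):
        """Search by the identification carried in the request and
        return the matching additional information, if any."""
        info = self._store.get(request["identification"])
        if info is None:
            return {"status": "not_found"}
        return {"status": "ok", "additional_info": info}

server = CloudServer()
server.store("abc123", {"weather": "sunny", "mood": "good morning"})
reply = server.handle_request({"identification": "abc123"})
print(reply["additional_info"]["weather"])   # sunny
```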
  • the method 100 further includes: upon the image being displayed, acquiring the first additional information according to the identification associated with the first additional information; and displaying at least part of the first additional information over the image. For example, when a user browses the image, the terminal displays the image. After displaying the image, the terminal acquires the first additional information according to the unique identification of the first additional information associated with the image. As an example, the terminal can display all of the first additional information associated with the image over the image once and can also display part of the first additional information. As another example, after acquiring the additional information, the terminal can de-package the packaged additional information, check the safety of the additional information and display an alarm prompt message if the additional information is checked to be modified or has other potential safety hazards.
  • the displaying at least part of the first additional information over the image includes: receiving a display instruction input by a user; determining an interface partition associated with the display instruction input by the user via a display interface; determining second additional information corresponding to the interface partition according to a preset association relationship between the second additional information and the interface partition; and displaying the second additional information corresponding to the interface partition over the image.
  • the terminal displays the corresponding additional information according to user instructions.
  • the terminal displays the additional information of weather according to user instructions.
  • a specific implementation process includes: a developer divides a display interface of the image into 10 (or another number of) interface partitions in advance, and an interface partition is preset for each type of additional information, according to some embodiments.
  • a first interface partition is associated with the additional information of weather
  • a second interface partition is associated with the additional information of a user's moods.
  • when the user touches the first interface partition, the additional information of weather is displayed.
  • when the user touches the second interface partition, the information of the user's moods is displayed.
  • for other interface partitions associated with different types of additional information, the user touches the corresponding interface partition and the corresponding additional information is displayed.
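The partition-based display logic above can be sketched as a touch-to-partition mapping plus a preset partition-to-type association. The 2x5 grid geometry and the particular associations are illustrative assumptions; the text only requires that each partition be preset for one type of additional information.

```python
# Preset association between interface partitions and types of
# additional information (an assumption for illustration).
PARTITION_MAP = {0: "weather", 1: "mood", 2: "position", 3: "sound"}

def partition_for_touch(x, y, width, height, cols=5, rows=2):
    """Map a touch point on the display interface to a partition index,
    assuming the interface is divided into a rows-by-cols grid."""
    col = min(int(x / width * cols), cols - 1)
    row = min(int(y / height * rows), rows - 1)
    return row * cols + col

def info_for_touch(x, y, width, height, additional_info):
    """Return the additional information associated with the touched
    partition, or None if the partition has no preset association."""
    part = partition_for_touch(x, y, width, height)
    kind = PARTITION_MAP.get(part)
    return additional_info.get(kind) if kind else None

info = {"weather": "sunny, 25 C", "mood": "good morning"}
print(info_for_touch(10, 10, 1000, 1000, info))   # first partition -> weather
```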
  • the method 100 further includes: receiving an additional information modification instruction input by a user, wherein the additional information modification instruction is configured to modify specified second additional information; and replacing the specified second additional information with third additional information according to the modification instruction.
  • the original additional information is replaced with the additional information newly input by the user.
  • the original additional information of the user's mood is “good morning.” If the user intends to modify it to “morning,” the user replaces “good morning” with “morning.”
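The modification flow — replacing specified second additional information with third additional information per the user's instruction — reduces to a keyed update. The store shape and the `mood` field below are hypothetical, matching the example in the text.

```python
def modify_additional_info(store, identification, field, new_value):
    """Replace the specified additional information with the new value
    supplied by the user's modification instruction; return the old value."""
    info = store[identification]
    old = info.get(field)
    info[field] = new_value
    return old

store = {"abc123": {"mood": "good morning"}}
old = modify_additional_info(store, "abc123", "mood", "morning")
print(old, "->", store["abc123"]["mood"])   # good morning -> morning
```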
  • Figures 2–4 are simplified diagrams showing application scenarios for image processing according to some embodiments of the present invention.
  • the diagrams are merely examples, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the mobile phone acquires additional information, according to some embodiments.
  • the additional information includes a resolution (e.g., 8 million pixels), a mobile phone model (e.g., iPhone 5S), a temperature (e.g., 25 degrees centigrade), a humidity (e.g., 50%), a snapshot speed (e.g., 5 m/s), a mood input by the user (e.g., “goddess, good morning”), and a voice/sound (e.g., “get up with a good mood and start a new life of the day”).
  • the mobile phone classifies the additional information. For example, the mobile phone identifies that the resolution (e.g., 8 million pixels), the mobile phone model (e.g., iPhone 5S), the temperature (e.g., 25 degrees centigrade), the humidity (e.g., 50%), and the snapshot speed (e.g., 5 m/s) contain numerical information, and the mobile phone classifies such additional information as a numerical type.
  • the mobile phone identifies that the number of words/terms in the mood input by the user (e.g., “goddess, good morning”) is smaller than a preset threshold value of 5, and the mobile phone classifies the mood input by the user as a simple text type.
  • the mobile phone identifies that “get up with a good mood and start a new life of the day” is in a sound format, and the mobile phone classifies “get up with a good mood and start a new life of the day” as a multimedia type.
  • the mobile phone packages the classified additional information respectively.
  • the mobile phone packages (e.g., according to a preset JSON structure) the additional information of the numerical type, such as the resolution (e.g., 8 million pixels), the mobile phone model (e.g., iPhone 5S), the temperature (e.g., 25 degrees centigrade), the humidity (e.g., 50%), and the snapshot speed (e.g., 5 m/s).
  • the mobile phone packages (e.g., according to the JSON structure) the additional information of the simple text type, such as “goddess, good morning.”
  • the mobile phone packages (e.g., according to a preset MP3 format) the additional information of the multimedia type, such as “get up with a good mood and start a new life of the day.”
  • the packaged additional information is stored on the mobile phone, and a unique identification is generated for the additional information using a message digest algorithm (e.g., MD5).
  • the additional information and the unique identification of the additional information are transmitted to a cloud server and stored associatively.
  • the unique identification and the related image are stored associatively.
  • the mobile phone acquires the additional information associated with the photo from a storage of the mobile phone or from the cloud server for displaying over the photo.
  • the image display interface can be divided into a plurality of partitions in advance, e.g., 10 (or any other number of) interface partitions.
  • a first interface partition is associated with the additional information of weather
  • a second interface partition is associated with the additional information of user's moods
  • other interface partitions are associated with other types of corresponding additional information, as shown in Figure 3.
  • the mobile phone displays the corresponding additional information (e.g., the user's mood). For example, “goddess, good morning” is displayed over the image.
  • FIG. 5 is a simplified diagram showing a terminal for image processing according to one embodiment of the present invention.
  • the diagram is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the terminal 20 includes: an acquisition unit 201 configured to start an image shooting function and acquire first additional information associated with an image; a configuration unit 202 configured to configure an identification for the first additional information; and a storage processing unit 203 configured to store the identification associated with the image and the first additional information and store the first additional information and the image separately.
  • the terminal 20 separately stores the additional information and the image, and flexibly matches the image with the additional information when the additional information needs to be displayed, so as to display more abundant additional information.
  • FIG. 6 is a simplified diagram showing a terminal for image processing according to another embodiment of the present invention.
  • the diagram is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the terminal 20 further includes: a classification unit 204 configured to classify the first additional information based on at least information associated with one or more preset classification rules to obtain different types of second additional information; and a package unit 205 configured to package the different types of second additional information using package formats corresponding to the different types of second additional information.
  • a classification unit 204 configured to classify the first additional information based on at least information associated with one or more preset classification rules to obtain different types of second additional information
  • a package unit 205 configured to package the different types of second additional information using package formats corresponding to the different types of second additional information.
  • the storage processing unit 203 is configured to store the identification associated with the first additional information and the image respectively on the terminal 20, the first additional information being stored in a first region, the image being stored in a second region.
  • FIG. 7 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
  • the diagram is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the storage processing unit 203 includes: a first storage processing subunit 2031 configured to store the identification associated with the image on the terminal 20; and a first transmission subunit 2032 configured to transmit the identification and the first additional information to a storage device connected with the terminal 20 so that the storage device stores the identification associated with the first additional information.
  • FIG. 8 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
  • the diagram is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the storage processing unit 203 includes: a second storage processing subunit 2033 configured to store the identification and the image on the terminal; and a second transmission subunit 2034 configured to transmit the identification and the first additional information to a cloud server so that the cloud server stores the identification associated with the first additional information.
  • FIG. 9 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
  • the diagram is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the acquisition unit 201 is further configured to, upon the image being displayed, acquire the first additional information according to the identification associated with the first additional information.
  • the terminal 20 further includes: a display unit 206 configured to display at least part of the first additional information over the image.
  • FIG. 10 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
  • the diagram is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the display unit 206 includes: a reception subunit 2061 configured to receive a display instruction input by a user; a first determination subunit 2062 configured to determine an interface partition associated with the display instruction input by the user via a display interface; a second determination subunit 2063 configured to determine second additional information corresponding to the interface partition according to a preset association relationship between the second additional information and the interface partition; and a display subunit 2064 configured to display the second additional information corresponding to the interface partition over the image.
  • FIG. 11 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
  • the diagram is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the terminal 20 further includes: a reception unit 207 configured to receive an additional information modification instruction input by a user, wherein the additional information modification instruction is configured to modify specified second additional information; and a modification unit 208 configured to replace the specified second additional information with third additional information according to the modification instruction.
  • FIG. 12 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
  • the diagram is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the terminal 1200 (e.g., a mobile phone) includes an RF (i.e., radio frequency) circuit 110, a memory 120 (e.g., including one or more computer-readable storage media), an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a wireless communication module 170, one or more processors 180 each including one or more processing cores, and a power supply 190.
  • the terminal 1200 further includes a camera 105 configured to shoot an image (e.g., take a photo or record a video), and a GPS module 106 configured to acquire geographical-position additional information.
  • the RF circuit 110 is configured to send/receive messages or signals in communication.
  • the RF circuit 110 receives a base station’s downlink information, delivers it to the processors 180 for processing, and sends uplink data to the base station.
  • the RF circuit 110 includes an antenna, at least one amplifier, a tuner, one or several oscillators, a SIM (Subscriber Identity Module) card, a transceiver, a coupler, an LNA (Low Noise Amplifier) and/or a duplexer.
  • the RF circuit 110 communicates with the network and other equipment via wireless communication based on any communication standard or protocol, such as GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), email, SMS (Short Messaging Service), etc.
  • the memory 120 is configured to store software programs and modules.
  • the processors 180 are configured to execute various functional applications and data processing by running the software programs and modules stored in the memory 120.
  • the memory 120 includes a program storage area and a data storage area, where the program storage area may store the operating system and the application(s) required by one or more functions (e.g., an audio player or a video player), in some embodiments.
  • the data storage area stores the data created based on the use of the terminal 1200 (e.g., audio data or a phone book).
  • the memory 120 includes a high-speed random-access storage, a non-volatile memory, one or more floppy-disc storage devices, a flash storage device, or other solid-state storage devices.
  • the memory 120 further includes a memory controller to enable access to the memory 120 by the processors 180 and the input unit 130.
  • the input unit 130 is configured to receive additional information input by a user, a display instruction input by the user, and an additional information modification instruction.
  • the input unit 130 is configured to receive input numeric or character data and to generate keyboard, mouse, joystick, optical, or trackball signals related to user settings and functional control.
  • the input unit 130 includes a touch-sensitive surface 131 and other input devices 132.
  • the touch-sensitive surface 131 (e.g., a touch screen or a touch panel) is configured to receive the user’s touch operations thereon or nearby.
  • the touch-sensitive surface 131 includes two parts, namely a touch detector and a touch controller.
  • the touch detector detects the position of user touch and the signals arising from such touches and sends the signals to the touch controller.
  • the touch controller receives touch data from the touch detector, converts the touch data into the coordinates of the touch point, sends the coordinates to the processors 180 and receives and executes the commands received from the processors 180.
  • the touch-sensitive surface 131 is of a resistive type, a capacitive type, an infrared type, or a surface-acoustic-wave type.
  • the input unit 130 includes the other input devices 132.
  • the other input devices 132 include one or more physical keyboards, one or more functional keys (e.g., volume control keys or switch keys), a track ball, a mouse, and/or a joystick.
  • the display unit 140 is configured to display images and display additional information over the images.
  • the display unit 140 is configured to display data input from a user or provided to the user, and includes various graphical user interfaces of the terminal 1200.
  • these graphical user interfaces include menus, graphs, texts, icons, videos and a combination thereof.
  • the display unit 140 includes a display panel 141, which contains an LCD (liquid crystal display) or an OLED (organic light-emitting diode).
  • the touch-sensitive surface can cover the display panel 141.
  • the touch-sensitive surface, upon detecting any touch operations thereon or nearby, sends signals to the processors 180 to determine the type of the touch events, and the processors 180 then provide corresponding visual outputs on the display panel 141 according to the type of the touch events.
  • although the touch-sensitive surface 131 and the display panel 141 are two independent parts for input and output respectively, the touch-sensitive surface 131 and the display panel 141 can be integrated for input and output, in some embodiments.
  • the terminal 1200 includes a sensor 150 (e.g., an optical sensor for collecting brightness data, a motion sensor for collecting speed data associated with image shooting, or other sensors).
  • the sensor 150 includes an environment optical sensor and adjusts the brightness of the display panel 141 according to the environmental luminance.
  • the sensor 150 includes a proximity sensor and turns off the display panel and/or its backlight when the terminal 1200 moves close to an ear of a user.
  • the sensor 150 includes a motion sensor (e.g., a gravity acceleration sensor) and detects a magnitude of acceleration in all directions (e.g., three axes). Particularly, the sensor 150 detects a magnitude and a direction of gravity when staying still.
  • the sensor 150 is used for identifying movements of a cell phone (e.g., a switch of screen direction between horizontal and vertical, related games, and a calibration related to a magnetometer) and features related to vibration identification (e.g., a pedometer or a strike).
  • the sensor 150 includes a gyroscope, a barometer, a hygroscope, a thermometer and/or an infrared sensor.
  • the audio circuit 160 (for recording or playing audio additional information), a speaker 161 (for outputting audio additional information), and a microphone 162 are configured to provide an audio interface between a user and the terminal 1200.
  • the audio circuit 160 is configured to transmit electrical signals converted from certain audio data to the speaker that converts such electrical signals into some output audio signals.
  • the microphone 162 is configured to convert audio signals into electrical signals which are converted into audio data by the audio circuit 160.
  • the audio data are processed in the processors 180 and received by the RF circuit 110 before being sent to another terminal, in some embodiments.
  • the audio data are output to the memory 120 for further processing.
  • the audio circuit 160 includes an earphone jack for communication between a peripheral earphone and the terminal 1200.
  • the wireless communication module 170 includes a WiFi (wireless fidelity, a short-distance wireless transmission technology) module, a Bluetooth module, an infrared communication module, etc.
  • the terminal 1200 enables the user to receive and send emails, browse webpages, and/or access stream media.
  • the terminal 1200 is configured to provide the user with a wireless broadband Internet access.
  • the wireless communication module 170 is omitted in the terminal 1200.
  • the processors 180 are the control center of the terminal 1200.
  • the processors 180 are connected to various parts of the terminal 1200 (e.g., a cell phone) via various interfaces and circuits, and execute various features of the terminal 1200 and process various data through operating or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, so as to monitor and control the terminal 1200 (e.g., a cell phone).
  • the processors 180 include one or more processing cores.
  • the processors 180 are integrated with an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface and the applications, and the modem processor mainly handles wireless communications. In some embodiments, the modem processor is not integrated into the processors 180.
  • the terminal 1200 includes the power supply 190 (e. g. , a battery) that powers up various parts.
  • the power supply 190 is logically connected to the processors 180 via a power source management system so that the charging, discharging and power consumption can be managed via the power source management system.
  • the power supply 190 includes one or more DC or AC power sources, a recharging system, a power-failure-detection circuit, a power converter, an inverter, a power source state indicator, or other components.
  • the terminal 1200 includes a camcorder, a Bluetooth module, a near field communication module, etc.
  • the processors 180 of the terminal 1200 load executable files/codes associated with one or more applications to the memory 120 and run the applications stored in the memory 120 according to the method 100 as shown in Figure 1.
  • FIG. 13 is a simplified diagram showing a system for image processing according to one embodiment of the present invention.
  • the diagram is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the system 1300 includes a terminal 20 and a cloud server 30 in communication with the terminal 20.
  • the terminal 20 is configured to start an image shooting function and acquire additional information associated with an image; configure an identification for the additional information; and store the identification associated with the image and the additional information.
  • the terminal 20 is configured to transmit the additional information and the identification of the additional information to the server 30 that is configured to store the additional information and the identification of the additional information.
  • the terminal 20 is further configured to send an additional information request to the server 30 upon the image being displayed, where the additional information request carries the identification of the additional information.
  • the server 30 is configured to search for the additional information according to the identification carried in the additional information request upon the receipt of the additional information request; and transmit the additional information to the terminal 20.
  • the terminal 20 is further configured to display at least part of the additional information over the image.
  • a method for image processing. For example, an image shooting function is started; first additional information associated with an image is acquired; an identification for the first additional information is configured; the identification is stored associated with the image and the first additional information; and the first additional information and the image are stored separately.
  • the method is implemented according to at least Figure 1.
  • a terminal includes: an acquisition unit configured to start an image shooting function and acquire first additional information associated with an image; a configuration unit configured to configure an identification for the first additional information; and a storage processing unit configured to store the identification associated with the image and the first additional information and store the first additional information and the image separately.
  • the terminal is implemented according to at least Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, and/or Figure 11.
  • an image processing system includes: a terminal; and a cloud server configured to communicate with the terminal.
  • the terminal includes: an acquisition unit configured to start an image shooting function and acquire additional information associated with an image; a configuration unit configured to configure an identification for the additional information; and a storage processing unit configured to store the identification associated with the image and the additional information and store the additional information and the image separately.
  • the cloud server is configured to: store the additional information transmitted by the terminal and the identification of the additional information; search for the additional information according to the identification carried in an additional information request upon the receipt of the additional information request; and transmit the additional information to the terminal.
  • the system is implemented according to at least Figure 13.
  • a non-transitory computer readable storage medium includes programming instructions for image processing.
  • the programming instructions are configured to cause one or more data processors to execute certain operations. For example, an image shooting function is started; first additional information associated with an image is acquired; an identification for the first additional information is configured; the identification is stored associated with the image and the first additional information; and the first additional information and the image are stored separately.
  • the storage medium is implemented according to at least Figure 1.
  • some or all components of various embodiments of the present invention each are, individually and/or in combination with at least another component, implemented using one or more software components, one or more hardware components, and/or one or more combinations of software and hardware components.
  • some or all components of various embodiments of the present invention each are, individually and/or in combination with at least another component, implemented in one or more circuits, such as one or more analog circuits and/or one or more digital circuits.
  • various embodiments and/or examples of the present invention can be combined.
  • the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem.
  • the software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform the methods and operations described herein.
  • Other implementations may also be used, however, such as firmware or even appropriately designed hardware configured to perform the methods and systems described herein.
  • data (e.g., associations, mappings, data input, data output, intermediate data results, final data results, etc.) may be stored and implemented in one or more different types of data stores, such as different types of storage devices and programming constructs (e.g., RAM, ROM, EEPROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, application programming interfaces, etc.).
  • data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.
  • the systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer’s hard drive, DVD, etc.) that contain instructions (e.g., software) for use in execution by a processor to perform the methods’ operations and implement the systems described herein.
  • the computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations.
  • a module or processor includes a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm) , or as an applet, or in a computer script language, or as another type of computer code.
  • the software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.
  • the computing system can include client devices and servers.
  • a client device and server are generally remote from each other and typically interact through a communication network.
  • the relationship of client device and server arises by virtue of computer programs running on the respective computers and having a client device-server relationship to each other.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Systems and methods are provided for image processing. For example, an image shooting function is started; first additional information associated with an image is acquired; an identification for the first additional information is configured; the identification is stored associated with the image and the first additional information; and the first additional information and the image are stored separately.

Description

SYSTEMS AND METHODS FOR IMAGE PROCESSING
CROSS-REFERENCES TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201310511634.9, filed October 24, 2013, incorporated by reference herein for all purposes.
BACKGROUND OF THE INVENTION
Certain embodiments of the present invention are directed to computer technology. More particularly, some embodiments of the invention provide devices and methods for data processing. Merely by way of example, some embodiments of the invention have been applied to image processing. But it would be recognized that the invention has a much broader range of applicability.
As one of the most common information carriers in the information era, images can be seen in various occasions and bear basic visual information. With technological development, images are used to bear more additional information as well as visual information.
At present, there are some methods of using images to bear additional information:
1) information bearing through defining related fields of an image file format. For example, the EXIF (Exchangeable image file format) field available in the JPG file format defined by JPEG (Joint Photographic Experts Group) is configured to store additional information of a picture, such as a resolution, a shooting device, a geographical position, etc.
2) information bearing by directly superimposing the additional information on the image. For example, the additional information is shown on the image in a watermark manner, where a watermark overlaps information, such as a geographical position, weather, time, etc., on the image in a semi-transparent picture display format.
3) information bearing through a specific application. For example, a social application “PaPa” can bear voice in a picture. A user can share a picture with friends via “PaPa,” with a voice introduction, and friends can comment and interact through voice.
The above-noted conventional technology has some disadvantages. For example, fields of image file formats are limited. A particular file format is often applicable to one scenario and cannot bear increasingly abundant additional information. In addition, many restrictions apply to the additional information. As an example, the EXIF field in a JPG file cannot conveniently store sound information. Besides, the additional information is usually statically appended to the file, which increases the file size and may make the file difficult to transmit. Directly overlapping the additional information on the image may not accommodate sufficient information, and may negatively affect the image. In addition, deeper secondary processing of the image may not be performed. The additional information is often fixedly overlapped on the image, which may make it hard to add abundant additional information to the image.
Hence it is highly desirable to improve the techniques for image processing.
BRIEF SUMMARY OF THE INVENTION
According to one embodiment, a method is provided for image processing. For example, an image shooting function is started; first additional information associated with an image is acquired; an identification for the first additional information is configured; the identification is stored associated with the image and the first additional information; and the first additional information and the image are stored separately.
According to another embodiment, a terminal includes: an acquisition unit configured to start an image shooting function and acquire first additional information associated with an image; a configuration unit configured to configure an identification for the first additional information; and a storage processing unit configured to store the identification associated with the image and the first additional information and store the first additional information and the image separately.
According to yet another embodiment, an image processing system includes: a terminal; and a cloud server configured to communicate with the terminal. The terminal includes: an acquisition unit configured to start an image shooting function and acquire additional information associated with an image; a configuration unit configured to configure an identification for the additional information; and a storage processing unit configured to store the identification associated with the image and the additional information and store the additional information and the image separately. The cloud server is configured to: store the additional information transmitted by the terminal and the identification of the additional information; search for the additional information according to the identification carried in an additional information request upon the receipt of the additional information request; and transmit the additional information to the terminal.
According to yet another embodiment, a non-transitory computer readable storage medium includes programming instructions for image processing. The programming instructions are configured to cause one or more data processors to execute certain operations. For example, an image shooting function is started; first additional information associated with an image is acquired; an identification for the first additional information is configured; the identification is stored  associated with the image and the first additional information; and the first additional information and the image are stored separately.
Depending upon embodiment, one or more benefits may be achieved. These benefits and various additional objects, features and advantages of the present invention can be fully appreciated with reference to the detailed description and accompanying drawings that follow.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a simplified diagram showing a method for image processing according to one embodiment of the present invention.
Figure 2–Figure 4 are simplified diagrams showing application scenarios for image processing according to some embodiments of the present invention.
Figure 5 is a simplified diagram showing a terminal for image processing according to one embodiment of the present invention.
Figure 6 is a simplified diagram showing a terminal for image processing according to another embodiment of the present invention.
Figure 7 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
Figure 8 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
Figure 9 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
Figure 10 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
Figure 11 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
Figure 12 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention.
Figure 13 is a simplified diagram showing a system for image processing according to one embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
Figure 1 is a simplified diagram showing a method for image processing according to one embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The method 100 includes processes 101-103. For example, the method 100 is executed by a terminal that includes a device with a shooting function, such as a mobile phone, a Personal Digital Assistant (PDA), or a pad.
According to one embodiment, the process 101 includes: starting an image shooting function and acquiring first additional information associated with an image. For example, the first additional information associated with the image is acquired from the start of the shooting function and includes information such as texts or voices input by a user after the image is shot. As an example, the additional information associated with the image includes: a resolution associated with a photo or a video, a weather situation at image shooting, sound information, speed information and geographical position information, photo-related information such as shot content (e.g., food, people, etc.), a mood input by a user, a user's voice, human-face charm values of the user acquired via human-face detection, etc.
When the image is shot, the geographical position information can be acquired via a Global Positioning System (GPS) mounted on the terminal, the sound information can be acquired via a sound sensor, and the speed information can be acquired via a motion sensor, according to some embodiments. As an example, external input information can be acquired through user input.
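As a rough sketch of this acquisition step, the pieces of additional information named above might be gathered into a single record. The sensor-reading callables below are hypothetical stand-ins for the terminal's GPS module, sound sensor, and motion sensor, not APIs of the disclosed terminal:

```python
def acquire_additional_info(read_gps, read_sound_db, read_speed, user_text=""):
    """Collect additional information for a freshly shot image.

    The read_* callables stand in for the terminal's GPS module,
    sound sensor, and motion sensor described above.
    """
    return {
        "geo_position": read_gps(),   # e.g. (latitude, longitude)
        "sound_db": read_sound_db(),  # ambient sound level
        "speed": read_speed(),        # from the motion sensor
        "user_text": user_text,       # mood/comment typed by the user
    }

# Example with stubbed sensors:
info = acquire_additional_info(
    read_gps=lambda: (22.54, 114.06),
    read_sound_db=lambda: 15,
    read_speed=lambda: 0.0,
    user_text="good morning",
)
```

The record can then be classified, packaged, and stored separately from the image, as the later processes describe.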
According to another embodiment, the process 102 includes: configuring an identification for the first additional information. For example, the terminal configures a unique identification for the additional information of each image. As an example, the unique identification can be obtained by calculating a message digest value of the additional information via the Message-Digest Algorithm (MD5), or can be directly assigned by the terminal.
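The MD5-based identification of process 102 can be sketched as follows; serializing the additional information to canonical JSON before hashing is an assumption of this sketch, not a detail of the disclosure:

```python
import hashlib
import json

def make_identification(additional_info):
    """Derive a unique identification for an image's additional information
    by hashing a canonical JSON serialization of it with MD5 (process 102)."""
    payload = json.dumps(additional_info, sort_keys=True).encode("utf-8")
    return hashlib.md5(payload).hexdigest()

ident = make_identification({"weather": "sunny", "altitude_m": 500})
# An MD5 digest is 128 bits, i.e. 32 hexadecimal characters.
```

Because the serialization sorts keys, the same additional information always yields the same identification regardless of field order.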
According to yet another embodiment, the process 103 includes: storing the identification associated with the image and the first additional information and storing the first additional information and the image separately. For example, the unique identification is associatively stored with the image and the additional information respectively, and the additional information and the image are separately stored. As an example, the additional information can be directly stored on the terminal or a storage device connected with the terminal, or can be transmitted to a cloud server for storage, and thus the memory space of the terminal can be saved. The image and the additional information are encoded and stored separately, and different types of additional information can be flexibly displayed when the image is displayed, thereby improving coordination of the image and the additional information, in some embodiments.
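A minimal sketch of this separate-storage arrangement, with plain dictionaries standing in for the terminal's local storage and the cloud server; the store names and function names are illustrative assumptions, not part of the disclosed terminal:

```python
# Stand-ins for the terminal's local image store and the cloud server's
# additional-information store (illustrative only).
terminal_images = {}
cloud_info = {}

def store_separately(identification, image_bytes, additional_info):
    """Store the image and its additional information in different places,
    linked only by the shared identification (process 103)."""
    terminal_images[identification] = image_bytes
    cloud_info[identification] = additional_info

def request_additional_info(identification):
    """Terminal-side lookup: fetch additional information from the 'cloud
    server' by the identification carried in the request."""
    return cloud_info.get(identification)

store_separately(
    "a1b2c3",  # hypothetical identification value
    b"\xff\xd8 jpeg bytes",
    {"weather": "sunny", "mood": "good morning"},
)
fetched = request_additional_info("a1b2c3")
```

A real deployment would replace `cloud_info` with a network round-trip to the server of Figure 13, but the id-keyed lookup is the same.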
According to yet another embodiment, after acquiring the first additional information associated with the image, the method 100 further includes: classifying the first additional information based on at least information associated with one or more preset classification rules to obtain different types of second additional information, and packaging the different types of second additional information using package formats corresponding to the different types of second additional information. In some embodiments, the preset classification rules include a numerical type, a simple text type, a complex text information type, and a multimedia type. As an example, the numerical type is set for specific values. For example, 500 m of altitude, 15 dB of sound, etc., belong to the numerical type. When identifying that values are available in the additional information, the terminal classifies the additional information in the numerical type, according to certain embodiments.
In one embodiment, the simple text type is set for additional information that contains a small amount of text. For example, a threshold value for simple text information is preset on the terminal. When the terminal identifies that the additional information includes text information and the number of words/terms in the additional information is smaller than the threshold value, the terminal classifies the additional information in the simple text type, according to certain embodiments. For instance, when the threshold value is 5 and the additional information contains contents such as “good morning” or “get up after a sound sleep,” the terminal classifies the additional information in the simple text type.
In another embodiment, the complex text information type is set for additional information containing a large amount of text. For example, when the additional information includes a number of words/terms that exceeds the threshold value of the simple text type, the additional information is classified into the complex text information type. As an example, when the threshold value is 5 and the additional information contains more than 5 words, such as “let us go to climb together,” the terminal classifies the additional information into the complex text information type.
In yet another embodiment, sounds and videos can be classified into a multimedia type. For example, when the terminal identifies that the additional information is in a sound or video format, the terminal classifies the additional information into the multimedia type.
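The classification rules described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the word-count threshold of 5 comes from the examples in the text, while the function name, the file-suffix test for multimedia, and the type labels are assumptions made for this sketch.

```python
SIMPLE_TEXT_THRESHOLD = 5  # preset threshold from the examples in the text
# Assumed sound/video suffixes used to detect the multimedia type.
MULTIMEDIA_SUFFIXES = (".mp3", ".wav", ".mp4", ".avi")

def classify(info):
    """Classify one piece of additional information per the preset rules."""
    if isinstance(info, (int, float)):
        return "numerical"                 # e.g. 500 (m of altitude), 15 (dB)
    if isinstance(info, str) and info.lower().endswith(MULTIMEDIA_SUFFIXES):
        return "multimedia"                # sound or video format
    if isinstance(info, str):
        if len(info.split()) < SIMPLE_TEXT_THRESHOLD:
            return "simple_text"           # e.g. "good morning"
        return "complex_text"              # e.g. "let us go to climb together"
    raise ValueError("unrecognized additional information")
```

With the threshold of 5, “good morning” (2 words) falls into the simple text type and “let us go to climb together” (6 words) into the complex text information type, matching the examples above.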
According to some embodiments, the classified additional information is packaged according to its classification. For example, the additional information in different classifications corresponds to different package formats. As an example, for simple text and numerical information, the additional information can be combined together by utilizing a JSON (JavaScript Object Notation) structure. As another example, for watermark template additional information, a template identification (ID) and the additional information (e.g., movement, editing, etc., of a watermark) can be stored together, where the watermark corresponds to overlapping certain information (e.g., a geographical position, weather and time) on the image in a semitransparent picture display format so that such information is displayed more clearly. As yet another example, a watermark template includes a pre-designed template applied to the watermark. In some embodiments, for the sound additional information, the sound can first be segmented and encoded into an MP3 (Moving Picture Experts Group Audio Layer-3) format, etc.
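Packaging the numerical and simple-text additional information into a single JSON structure, as described above, could look like the following sketch; the field names are illustrative assumptions taken from the examples, not a defined schema.

```python
import json

def package_numeric_and_text(fields):
    """Combine numerical and simple-text additional information into one
    JSON structure, per the packaging step described in the text."""
    # ensure_ascii=False preserves any non-ASCII text in the mood strings.
    return json.dumps(fields, ensure_ascii=False)

packed = package_numeric_and_text({
    "altitude_m": 500,          # numerical type
    "sound_db": 15,             # numerical type
    "mood": "good morning",     # simple text type
})
```

Multimedia additional information would be packaged separately (e.g., encoded as MP3), so it is not included in this JSON structure.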
According to certain embodiments, the process 103 further includes: storing the identification associated with the first additional information and the image respectively on a terminal, the first additional information being stored in a first region, the image being stored in a second region. For example, the unique identification of the first additional information is generated for the packaged information by utilizing MD5, and the unique identification is stored in a comment domain of an EXIF (Exchangeable Image File Format) segment of a JPEG (Joint Photographic Experts Group) file. According to some embodiments, the image, the first additional information and the unique identification of the first additional information are all stored on the terminal. For example, the unique identification is associated with the image and the first additional information respectively. As an example, the image and the first additional information are separately stored, which is different from the situation where an image and associated additional information are overlapped together for encoding and are stored together. The method 100 facilitates flexible configuration of the image and the first additional information, according to certain embodiments.
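Generating the unique identification for the packaged additional information with MD5, as the text describes, can be sketched as below. Writing the identification into the EXIF comment field of the JPEG would require an EXIF library and is omitted here; this shows only the digest step.

```python
import hashlib

def make_identification(packaged_bytes):
    """Generate the unique identification for the packaged additional
    information using the MD5 message digest, as described in the text."""
    return hashlib.md5(packaged_bytes).hexdigest()

# The same packaged bytes always yield the same identification, so the
# identification can be used later to look the additional information up.
ident = make_identification(b'{"mood": "good morning"}')
```

The 32-character hex digest is compact enough to fit in an EXIF comment field while still uniquely keying the separately stored additional information.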
According to some embodiments, the process 103 further includes: storing the identification associated with the image on a terminal; and transmitting the identification and the first additional information to a storage device connected with the terminal so that the storage device stores the identification associated with the first additional information. For example, the image and the unique identification are associatively stored on the terminal. As an example, the first additional information and the unique identification are associatively stored on the storage device connected with the terminal, and the unique identification and the first additional information are transmitted via the terminal and a communication interface of the storage device. As another example, when the terminal needs the additional information, the terminal directly reads the additional information from the storage device via the communication interface. As yet another example, the storage device includes a storage card mounted on the terminal.
According to certain embodiments, the process 103 further includes: storing the identification and the image on the terminal; and transmitting the identification and the first additional information to a cloud server so that the cloud server stores the identification associated with the first additional information. For example, the unique identification and the image are associatively stored on the terminal. As an example, the unique identification and the first additional information are associatively stored on the cloud server. As another example, the terminal can transmit an additional information acquisition request carrying the unique identification of the first additional information to the cloud server, and the cloud server can search for the corresponding additional information according to the unique identification of the first additional information and return the corresponding additional information to the terminal, thereby reducing the storage burden on the terminal.
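The terminal/cloud exchange described above can be sketched with an in-memory stand-in: the terminal keeps only the image and the identification, while the cloud server maps the identification to the additional information. The class, method names, and the dict standing in for real network transport and persistence are all assumptions of this sketch.

```python
class CloudServer:
    """In-memory stand-in for the cloud server 30 described in the text."""

    def __init__(self):
        self._store = {}  # identification -> additional information

    def store(self, identification, additional_info):
        # Associatively store the identification with the additional information.
        self._store[identification] = additional_info

    def fetch(self, identification):
        # Search for the corresponding additional information by identification.
        return self._store.get(identification)

server = CloudServer()
server.store("abc123", {"weather": "sunny"})

# Terminal side: the image record carries only the identification, so the
# additional information itself consumes no terminal storage.
terminal_record = {"image": "photo.jpg", "identification": "abc123"}
info = server.fetch(terminal_record["identification"])
```

When the image is displayed, the terminal sends the identification in an acquisition request and receives the additional information back, as in the `fetch` call above.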
According to some embodiments, the method 100 further includes: upon the image being displayed, acquiring the first additional information according to the identification associated with the first additional information; and displaying at least part of the first additional information over the image. For example, when a user browses the image, the terminal displays the image. After displaying the image, the terminal acquires the first additional information according to the unique identification of the first additional information associated with the image. As an example, the terminal can display all of the first additional information associated with the image over the image at once, or can display only part of the first additional information. As another example, after acquiring the additional information, the terminal can de-package the packaged additional information, verify the integrity of the additional information, and display an alarm prompt message if the additional information is found to have been modified or to pose other potential safety hazards.
According to certain embodiments, the displaying at least part of the first additional information over the image includes: receiving a display instruction input by a user; determining an interface partition associated with the display instruction input by the user via a display interface; determining second additional information corresponding to the interface partition according to a preset association relationship between the second additional information and the interface partition; and displaying the second additional information corresponding to the interface partition over the image.
According to some embodiments, after acquiring the additional information associated with the image, the terminal displays the corresponding additional information according to user instructions. For example, the terminal displays the additional information of weather according to user instructions. A specific implementation process includes: a developer divides a display interface of the image into 10 interface partitions or another number of interface partitions in advance, and an interface partition is preset for each type of additional information, according to some embodiments. For example, the developer divides the display interface into 10 interface partitions in advance. A first interface partition is associated with the additional information of weather, and a second interface partition is associated with the additional information of a user's moods. As an example, when the user touches the first interface partition, the additional information of weather is displayed. When the user touches the second interface partition, the information of the user's moods is displayed. When other interface partitions are associated with different types of additional information, the user touches the corresponding interface partition, and the corresponding additional information is displayed.
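The preset association between interface partitions and additional-information types described above can be sketched as a simple lookup; the partition numbers, type names, and function name are illustrative assumptions based on the weather/mood example.

```python
# Preset association relationship: interface partition -> additional info type.
PARTITION_TO_TYPE = {1: "weather", 2: "mood"}

def on_touch(partition, additional_info):
    """Return the additional information to display for a touched partition."""
    info_type = PARTITION_TO_TYPE.get(partition)
    if info_type is None:
        return None  # partition has no associated additional information
    return additional_info.get(info_type)

info = {"weather": "25 C, sunny", "mood": "goddess, good morning"}
```

Touching partition 1 would surface the weather, and partition 2 the user's mood, mirroring the behavior shown in Figures 3 and 4.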
According to certain embodiments, the method 100 further includes: receiving an additional information modification instruction input by a user, wherein the additional information modification instruction is configured to modify specified second additional information; and replacing the specified second additional information with third additional information according to the modification instruction. For example, when the user wants to modify the additional information, the original additional information is replaced with the additional information newly input by the user. As an example, the original additional information of the user's mood is “good morning. ” If the user intends to modify it to “morning” , the user replaces “good morning” with “morning. ”
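The replacement step above is a simple overwrite of the stored value; in this sketch a plain dict stands in for the terminal's additional-information storage, which is an assumption for illustration.

```python
def modify_additional_info(store, info_type, new_value):
    """Replace the specified second additional information with the third
    additional information, per the modification instruction."""
    store[info_type] = new_value
    return store

# The original additional information of the user's mood is "good morning";
# the user's modification instruction replaces it with "morning".
store = {"mood": "good morning"}
modify_additional_info(store, "mood", "morning")
```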
Figure 2–Figure 4 are simplified diagrams showing application scenarios for image processing according to some embodiments of the present invention. The diagrams are merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
As shown in Figure 2, when a user takes a photo of himself/herself using a mobile phone, the mobile phone acquires additional information, according to some embodiments. For example, the additional information includes a resolution (e.g., 8 million pixels), a mobile phone model (e.g., iPhone 5S), a temperature (e.g., 25 degrees centigrade), a humidity (e.g., 50%), a snapshot speed (e.g., 5 m/s), a mood input by the user (e.g., “goddess, good morning”), a voice/sound (e.g., “get up with a good mood and start a new life of the day”), and a human face charm value of the user (e.g., a score of 90). The mobile phone classifies the additional information. For example, the mobile phone identifies that the resolution (e.g., 8 million pixels), the mobile phone model (e.g., iPhone 5S), the temperature (e.g., 25 degrees centigrade), the humidity (e.g., 50%), and the snapshot speed (e.g., 5 m/s) contain numerical information, and the mobile phone classifies such additional information as a numerical type. As an example, the mobile phone identifies that the number of words/terms in the mood input by the user (e.g., “goddess, good morning”) is smaller than a preset threshold value of 5, and the mobile phone classifies the mood input by the user as a simple text type. As another example, the mobile phone identifies that “get up with a good mood and start a new life of the day” is in a sound format, and the mobile phone classifies “get up with a good mood and start a new life of the day” as a multimedia type.
According to another embodiment, the mobile phone packages the classified additional information respectively. For example, the mobile phone packages (e. g. , according to a preset JSON structure) the additional information of the numerical type, such as the resolution (e. g. , 8 million pixels) , the mobile phone model (e. g. , iPhone 5S) , the temperature (e. g. , 25 degrees centigrade) , the humidity (e. g. , 50%) , and the snapshot speed (e. g. , 5 m/s) . In another example, the mobile phone packages (e. g. , according to the JSON structure) the additional information of the simple text type, such as “goddess, good morning. ” In yet another example, the mobile phone packages (e. g. , according to a preset MP3 format) the additional information of the multimedia type, such as “get up with a good mood and start a new life of the day. ”
According to yet another embodiment, the packaged additional information is stored on the mobile phone, and a unique identification is generated for the additional information using a message digest algorithm (e.g., MD5). For example, the additional information and the unique identification of the additional information are transmitted to a cloud server and stored associatively. As an example, the unique identification and the related image are stored associatively. As another example, when the user browses the photo, the mobile phone acquires the additional information associated with the photo from a storage of the mobile phone or from the cloud server for displaying over the photo. In another example, the image display interface can be divided into a plurality of partitions in advance, e.g., 10 (or any other number of) interface partitions. For instance, a first interface partition is associated with the additional information of weather, a second interface partition is associated with the additional information of the user's moods, and other interface partitions are associated with other types of corresponding additional information, as shown in Figure 3. In one embodiment, when the user touches the second interface partition, as shown in Figure 4, the mobile phone displays the corresponding additional information (e.g., the user's mood). For example, “goddess, good morning” is displayed over the image.
Figure 5 is a simplified diagram showing a terminal for image processing according to one embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
According to one embodiment, the terminal 20 includes: an acquisition unit 201 configured to start an image shooting function and acquire first additional information associated with an image; a configuration unit 202 configured to configure an identification for the first additional information; and a storage processing unit 203 configured to store the identification  associated with the image and the first additional information and store the first additional information and the image separately.
According to some embodiments, the terminal 20 separately stores the additional information and the image, and flexibly matches the image with the additional information when the additional information needs to be displayed, so as to display more abundant additional information.
Figure 6 is a simplified diagram showing a terminal for image processing according to another embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
According to certain embodiments, the terminal 20 further includes: a classification unit 204 configured to classify the first additional information based on at least information associated with one or more preset classification rules to obtain different types of second additional information; and a package unit 205 configured to package the different types of second additional information using package formats corresponding to the different types of second additional information.
According to one embodiment, the storage processing unit 203 is configured to store the identification associated with the first additional information and the image respectively on the terminal 20, the first additional information being stored in a first region, the image being stored in a second region.
Figure 7 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
According to one embodiment, the storage processing unit 203 includes: a first storage processing subunit 2031 configured to store the identification associated with the image on the terminal 20; and a first transmission subunit 2032 configured to transmit the identification and the first additional information to a storage device connected with the terminal 20 so that the storage device stores the identification associated with the first additional information.
Figure 8 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
According to one embodiment, the storage processing unit 203 includes: a second storage processing subunit 2033 configured to store the identification and the image on the terminal; and a second transmission subunit 2034 configured to transmit the identification and the first additional information to a cloud server so that the cloud server stores the identification associated with the first additional information.
Figure 9 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
According to one embodiment, the acquisition unit 201 is further configured to, upon the image being displayed, acquire the first additional information according to the identification associated with the first additional information. For example, the terminal 20 further includes: a display unit 206 configured to display at least part of the first additional information over the image. 
Figure 10 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
According to one embodiment, the display unit 206 includes: a reception subunit 2061 configured to receive a display instruction input by a user; a first determination subunit 2062 configured to determine an interface partition associated with the display instruction input by the user via a display interface; a second determination subunit 2063 configured to determine second additional information corresponding to the interface partition according to a preset association relationship between the second additional information and the interface partition; and a display subunit 2064 configured to display the second additional information corresponding to the interface partition over the image.
Figure 11 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
According to one embodiment, the terminal 20 further includes: a reception unit 207 configured to receive an additional information modification instruction input by a user, wherein the additional information modification instruction is configured to modify specified second additional  information; and a modification unit 208 configured to replace the specified second additional information with third additional information according to the modification instruction.
Figure 12 is a simplified diagram showing a terminal for image processing according to yet another embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
According to one embodiment, the terminal 1200 (e.g., a mobile phone) includes an RF (i.e., radio frequency) circuit 110, a memory 120 (e.g., including one or more computer-readable storage media), an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a wireless communication module 170, one or more processors 180 that include one or more processing cores, and a power supply 190. For example, the terminal 1200 further includes a camera 105 configured to shoot an image (e.g., taking a photo or recording a video), and a GPS module 106 configured to acquire geographical position additional information. In another example, the RF circuit 110 is configured to send/receive messages or signals in communication. As an example, the RF circuit 110 receives a base station's downlink information, delivers it to the processors 180 for processing, and sends uplink data to the base station. For example, the RF circuit 110 includes an antenna, at least one amplifier, a tuner, one or several oscillators, a SIM (Subscriber Identity Module) card, a transceiver, a coupler, an LNA (Low Noise Amplifier) and/or a duplexer. In another example, the RF circuit 110 communicates with the network and other equipment via wireless communication based on any communication standard or protocol, such as GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), email, SMS (Short Messaging Service), etc.
According to another embodiment, the memory 120 is configured to store software programs and modules. For example, the processors 180 are configured to execute various functional applications and data processing by running the software programs and modules stored in the memory 120. The memory 120 includes a program storage area and a data storage area, where the program storage area may store the operating system and the application(s) required by one or more functions (e.g., an audio player or a video player), in some embodiments. For example, the data storage area stores the data created based on the use of the terminal 1200 (e.g., audio data or a phone book). In another example, the memory 120 includes a high-speed random access storage, a non-volatile memory, one or more magnetic disk storage devices, a flash storage device or other non-volatile solid-state storage devices. As an example, the memory 120 further includes a memory controller to enable access to the memory 120 by the processors 180 and the input unit 130.
According to yet another embodiment, the input unit 130 is configured to receive additional information input by a user, a display instruction input by the user, and an additional information modification instruction. As an example, the input unit 130 is configured to receive input numbers or character data, and to generate keyboard, mouse, joystick, optical or trackball signal inputs relating to user settings and functional control. For example, the input unit 130 includes a touch-sensitive surface 131 and other input devices 132. The touch-sensitive surface 131 (e.g., a touch screen or a touch panel) is configured to receive the user's touch operations thereon or nearby (e.g., the user's operations on or near the touch-sensitive surface with a finger, a touch pen or any other appropriate object or attachment) and drive the corresponding connected devices according to a predetermined program. For example, the touch-sensitive surface 131 includes two parts, namely a touch detector and a touch controller. The touch detector detects the position of a user touch and the signals arising from such touches and sends the signals to the touch controller. The touch controller receives touch data from the touch detector, converts the touch data into the coordinates of the touch point, sends the coordinates to the processors 180, and receives and executes the commands received from the processors 180. For example, the touch-sensitive surface 131 may be of a resistive type, a capacitive type, an infrared type or a surface-acoustic-wave type. In another example, in addition to the touch-sensitive surface, the input unit 130 includes the other input devices 132. For example, the other input devices 132 include one or more physical keyboards, one or more functional keys (e.g., volume control keys or switch keys), a track ball, a mouse and/or a joystick.
According to yet another embodiment, the display unit 140 is configured to display images and display additional information over the images. As an example, the display unit 140 is configured to display data input from a user or provided to the user, and includes various graphical user interfaces of the terminal 1200. For example, these graphical user interfaces include menus, graphs, texts, icons, videos and a combination thereof. The display unit 140 includes a display panel 141, which may be configured in the form of an LCD (liquid crystal display) or an OLED (organic light-emitting diode) display. As an example, the touch-sensitive surface can cover the display panel 141. For example, upon detecting any touch operations thereon or nearby, the touch-sensitive surface sends signals to the processors 180 to determine the type of the touch events, and then the processors 180 provide corresponding visual outputs on the display panel 141 according to the type of the touch events. Although the touch-sensitive surface 131 and the display panel 141 are two independent parts for input and output respectively, the touch-sensitive surface 131 and the display panel 141 can be integrated for input and output, in some embodiments.
In one embodiment, the terminal 1200 includes a sensor 150 (e.g., an optical sensor for collecting brightness data, a motion sensor for collecting speed data associated with image shooting, or other sensors). For example, the sensor 150 includes an environment optical sensor and adjusts the brightness of the display panel 141 according to the environmental luminance. In another example, the sensor 150 includes a proximity sensor and turns off the display panel 141 and/or its backlight when the terminal 1200 moves close to an ear of a user. In yet another example, the sensor 150 includes a motion sensor (e.g., a gravity acceleration sensor) and detects a magnitude of acceleration in all directions (e.g., three axes). Particularly, the sensor 150 detects a magnitude and a direction of gravity when staying still. In some embodiments, the sensor 150 is used for identifying movements of a cell phone (e.g., a switch of screen direction between horizontal and vertical, related games, and a calibration related to a magnetometer) and features related to vibration identification (e.g., a pedometer or tapping). In certain embodiments, the sensor 150 includes a gyroscope, a barometer, a hygrometer, a thermometer and/or an infrared sensor.
In another embodiment, the audio circuit 160 for recording or playing audio additional information, a speaker 161 for outputting audio additional information, and a microphone 162 are configured to provide an audio interface between a user and the terminal 1200. For example, the audio circuit 160 is configured to transmit electrical signals converted from certain audio data to the speaker, which converts such electrical signals into output audio signals. In another example, the microphone 162 is configured to convert audio signals into electrical signals, which are converted into audio data by the audio circuit 160. The audio data are processed by the processors 180 and received by the RF circuit 110 before being sent to another terminal, in some embodiments. For example, the audio data are output to the memory 120 for further processing. As an example, the audio circuit 160 includes an earphone jack for communication between a peripheral earphone and the terminal 1200.
According to some embodiments, the wireless communication module 170 includes a WiFi (e. g. , wireless fidelity, a short-distance wireless transmission technology) module, a Bluetooth module, an infrared communication module, etc. In some embodiments, through the wireless communication module 170, the terminal 1200 enables the user to receive and send emails, browse webpages, and/or access stream media. For example, the terminal 1200 is configured to provide the user with a wireless broadband Internet access. In some embodiments, the wireless communication module 170 is omitted in the terminal 1200.
According to one embodiment, the processors 180 are the control center of the terminal 1200. For example, the processors 180 are connected to various parts of the terminal 1200 (e.g., a cell phone) via various interfaces and circuits, and execute various features of the terminal 1200 and process various data by operating or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, so as to monitor and control the terminal 1200 (e.g., a cell phone). As an example, the processors 180 include one or more processing cores. In another example, the processors 180 are integrated with an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface and the applications, and the modem processor mainly handles wireless communications. In some embodiments, the modem processor is not integrated into the processors 180.
According to another embodiment, the terminal 1200 includes the power supply 190 (e. g. , a battery) that powers up various parts. For example, the power supply 190 is logically connected to the processors 180 via a power source management system so that the charging, discharging and power consumption can be managed via the power source management system. In another example, the power supply 190 includes one or more DC or AC power sources, a recharging system, a power-failure-detection circuit, a power converter, an inverter, a power source state indicator, or other components. In yet another example, the terminal 1200 includes a camcorder, a Bluetooth module, a near field communication module, etc. According to some embodiments, the processors 180 of the terminal 1200 load executable files/codes associated with one or more applications to the memory 120 and run the applications stored in the memory 120 according to the method 100 as shown in Figure 1.
Figure 13 is a simplified diagram showing a system for image processing according to one embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The system 1300 includes a terminal 20 and a cloud server 30 in communication with the terminal 20.
According to one embodiment, the terminal 20 is configured to start an image shooting function and acquire additional information associated with an image; configure an identification for the additional information; and store the identification associated with the image and the additional information. For example, the terminal 20 is configured to transmit the additional information and the identification of the additional information to the server 30 that is configured to store the additional information and the identification of the additional information. As an example, the terminal 20 is further configured to send an additional information request to the server 30 upon the image being displayed, where the additional information request carries the identification of the additional information. As another example, the server 30 is configured to search for the additional information according to the identification carried in the additional information request upon the receipt of the additional information request; and transmit the additional information to the terminal  20. As yet another example, the terminal 20 is further configured to display at least part of the additional information over the image.
According to another embodiment, a method is provided for image processing. For example, an image shooting function is started; first additional information associated with an image is acquired; an identification for the first additional information is configured; the identification is stored associated with the image and the first additional information; and the first additional information and the image are stored separately. For example, the method is implemented according to at least Figure 1.
According to another embodiment, a terminal includes: an acquisition unit configured to start an image shooting function and acquire first additional information associated with an image; a configuration unit configured to configure an identification for the first additional information; and a storage processing unit configured to store the identification associated with the image and the first additional information and store the first additional information and the image separately. For example, the terminal is implemented according to at least Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, and/or Figure 11.
According to yet another embodiment, an image processing system includes: a terminal; and a cloud server configured to communicate with the terminal. The terminal includes: an acquisition unit configured to start an image shooting function and acquire additional information associated with an image; a configuration unit configured to configure an identification for the additional information; and a storage processing unit configured to store the identification associated with the image and the additional information and store the additional information and the image separately. The cloud server is configured to: store the additional information transmitted by the terminal and the identification of the additional information; search for the additional information according to the identification carried in an additional information request upon the receipt of the additional information request; and transmit the additional information to the terminal. For example, the system is implemented according to at least Figure 13.
According to yet another embodiment, a non-transitory computer readable storage medium includes programming instructions for image processing. The programming instructions are configured to cause one or more data processors to execute certain operations. For example, an image shooting function is started; first additional information associated with an image is acquired; an identification for the first additional information is configured; the identification is stored associated with the image and the first additional information; and the first additional information and the image are stored separately. For example, the storage medium is implemented according to at least Figure 1.
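The operations recited above (acquire additional information for a captured image, configure an identification, then store the image and the additional information separately under that shared identification) can be sketched as follows. The function name process_capture and the use of a UUID as the identification are illustrative assumptions, not details of the embodiments.

```python
import json
import uuid


def process_capture(image_bytes, additional_info, image_store, info_store):
    """Store the image and its additional information in separate
    regions, linked by a shared identification."""
    identification = uuid.uuid4().hex  # configure an identification
    image_store[identification] = image_bytes  # first storage region
    # Additional information is serialized and stored separately.
    info_store[identification] = json.dumps(additional_info)
    return identification
```

Either store could then be looked up independently by the identification, for example when the image is later displayed and its additional information must be retrieved.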
The above describes only several embodiments of the present invention, and although the description is relatively specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art may make various changes and modifications without departing from the concept of the invention, and all such changes and modifications fall within the scope of the invention. Accordingly, the scope of protection shall be defined by the appended claims.
For example, some or all components of various embodiments of the present invention each are, individually and/or in combination with at least another component, implemented using one or more software components, one or more hardware components, and/or one or more combinations of software and hardware components. In another example, some or all components of various embodiments of the present invention each are, individually and/or in combination with at least another component, implemented in one or more circuits, such as one or more analog circuits and/or one or more digital circuits. In yet another example, various embodiments and/or examples of the present invention can be combined.
Additionally, the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform the methods and operations described herein. Other implementations may also be used, however, such as firmware or even appropriately designed hardware configured to perform the methods and systems described herein.
The systems’ and methods’ data (e.g., associations, mappings, data input, data output, intermediate data results, final data results, etc.) may be stored and implemented in one or more different types of computer-implemented data stores, such as different types of storage devices and programming constructs (e.g., RAM, ROM, EEPROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, application programming interfaces, etc.). It is noted that data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.
The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer’s hard drive, DVD, etc.) that contain instructions (e.g., software) for use in execution by a processor to perform the methods’ operations and implement the systems described herein. The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.
The computing system can include client devices and servers. A client device and server are generally remote from each other and typically interact through a communication network. The relationship of client device and server arises by virtue of computer programs running on the respective computers and having a client device-server relationship to each other.
This specification contains many specifics for particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations, one or more features from a combination can in some cases be removed from the combination, and a combination may, for example, be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Although specific embodiments of the present invention have been described, it is understood by those of skill in the art that there are other embodiments that are equivalent to the described embodiments. Accordingly, it is to be understood that the invention is not to be limited by the specific illustrated embodiments, but only by the scope of the appended claims.

Claims (19)

  1. A method for image processing, comprising:
    starting an image shooting function;
    acquiring first additional information associated with an image;
    configuring an identification for the first additional information;
    storing the identification associated with the image and the first additional information; and
    storing the first additional information and the image separately.
  2. The method of claim 1, further comprising:
    classifying the first additional information based on at least information associated with one or more preset classification rules to obtain different types of second additional information; and
    packaging the different types of second additional information using package formats corresponding to the different types of second additional information.
  3. The method of claim 1 or 2, wherein the storing the identification associated with the image and the first additional information and the storing the first additional information and the image separately include:
    storing the identification associated with the first additional information and the image respectively on a terminal, the first additional information being stored in a first region, the image being stored in a second region.
  4. The method of claim 1 or 2, wherein the storing the identification associated with the image and the first additional information and the storing the first additional information and the image separately include:
    storing the identification associated with the image on a terminal; and
    transmitting the identification and the first additional information to a storage device connected with the terminal so that the storage device stores the identification associated with the first additional information.
  5. The method of claim 1 or 2, wherein the storing the identification associated with the image and the first additional information and the storing the first additional information and the image separately include:
    storing the identification and the image on a terminal; and
    transmitting the identification and the first additional information to a cloud server so that the cloud server stores the identification associated with the first additional information.
  6. The method of claim 1 or 2, further comprising:
    upon the image being displayed, acquiring the first additional information according to the identification associated with the first additional information; and
    displaying at least part of the first additional information over the image.
  7. The method of claim 6, wherein the displaying at least part of the first additional information over the image includes:
    receiving a display instruction input by a user;
    determining an interface partition associated with the display instruction input by the user via a display interface;
    determining second additional information corresponding to the interface partition according to a preset association relationship between the second additional information and the interface partition; and
    displaying the second additional information corresponding to the interface partition over the image.
  8. The method of claim 6, further comprising:
    receiving an additional information modification instruction input by a user, wherein the additional information modification instruction is configured to modify specified second additional information; and
    replacing the specified second additional information with third additional information according to the additional information modification instruction.
  9. A terminal comprising:
    an acquisition unit configured to start an image shooting function and acquire first additional information associated with an image;
    a configuration unit configured to configure an identification for the first additional information; and
    a storage processing unit configured to store the identification associated with the image and the first additional information and store the first additional information and the image separately.
  10. The terminal of claim 9, further comprising:
    a classification unit configured to classify the first additional information based on at least information associated with one or more preset classification rules to obtain different types of second additional information; and
    a package unit configured to package the different types of second additional information using package formats corresponding to the different types of second additional information.
  11. The terminal of claim 9 or 10, wherein the storage processing unit is configured to store the identification associated with the first additional information and the image respectively on the terminal, the first additional information being stored in a first region, the image being stored in a second region.
  12. The terminal of claim 9 or 10, wherein the storage processing unit includes:
    a first storage processing subunit configured to store the identification associated with the image on the terminal; and
    a first transmission subunit configured to transmit the identification and the first additional information to a storage device connected with the terminal so that the storage device stores the identification associated with the first additional information.
  13. The terminal of claim 9 or 10, wherein the storage processing unit includes:
    a second storage processing subunit configured to store the identification and the image on the terminal; and
    a second transmission subunit configured to transmit the identification and the first additional information to a cloud server so that the cloud server stores the identification associated with the first additional information.
  14. The terminal of claim 9 or 10, wherein:
    the acquisition unit is further configured to, upon the image being displayed, acquire the first additional information according to the identification associated with the first additional information; and
    the terminal further includes: a display unit configured to display at least part of the first additional information over the image.
  15. The terminal of claim 14, wherein the display unit includes:
    a reception subunit configured to receive a display instruction input by a user;
    a first determination subunit configured to determine an interface partition associated with the display instruction input by the user via a display interface;
    a second determination subunit configured to determine second additional information corresponding to the interface partition according to a preset association relationship between the second additional information and the interface partition; and
    a display subunit configured to display the second additional information corresponding to the interface partition over the image.
  16. The terminal of claim 14, further comprising:
    a reception unit configured to receive an additional information modification instruction input by a user, wherein the additional information modification instruction is configured to modify specified second additional information; and
    a modification unit configured to replace the specified second additional information with third additional information according to the additional information modification instruction.
  17. The terminal of claim 9, further comprising:
    one or more data processors; and
    a computer-readable storage medium;
    wherein one or more of the acquisition unit, the configuration unit, and the storage processing unit are stored in the storage medium and configured to be executed by the one or more data processors.
  18. An image processing system comprising:
    a terminal; and
    a cloud server configured to communicate with the terminal;
    wherein the terminal includes:
    an acquisition unit configured to start an image shooting function and acquire additional information associated with an image;
    a configuration unit configured to configure an identification for the additional information; and
    a storage processing unit configured to store the identification associated with the image and the additional information and store the additional information and the image separately; and
    wherein the cloud server is configured to:
    store the additional information transmitted by the terminal and the identification of the additional information;
    search for the additional information according to the identification carried in an additional information request upon the receipt of the additional information request; and
    transmit the additional information to the terminal.
  19. A non-transitory computer readable storage medium comprising programming instructions for image processing, the programming instructions configured to cause one or more data processors to execute operations comprising:
    starting an image shooting function;
    acquiring first additional information associated with an image;
    configuring an identification for the first additional information;
    storing the identification associated with the image and the first additional information; and
    storing the first additional information and the image separately.
PCT/CN2014/088724 (priority date 2013-10-24, filing date 2014-10-16): Systems and methods for image processing, published as WO2015058650A1 (en)

Applications Claiming Priority (2)

- CN201310511634.9, priority date 2013-10-24
- CN201310511634.9A (published as CN103533248A), priority date 2013-10-24, filing date 2013-10-24: Image processing method, terminal and system

Publications (1)

- WO2015058650A1, published 2015-04-30



Citations (5)

* Cited by examiner, † Cited by third party

- JP2004179783A* (priority 2002-11-25, published 2004-06-24, NTT): Mobile client apparatus, mobile client support system, ID interpreting method, and ID interpreting program
- CN101359989A* (priority 2008-09-03, published 2009-02-04, Shenzhen Huawei Communication Technologies): Method, apparatus and mobile communication terminal generating safe digital photograph
- CN101556617A* (priority 2008-04-11, published 2009-10-14, Adobe): Systems and methods for associating metadata with media
- CN102547090A* (priority 2010-11-24, published 2012-07-04, Samsung Electronics): Digital photographing apparatus and methods of providing pictures thereof
- CN103533248A* (priority 2013-10-24, published 2014-01-22, Tencent Technology (Shanghai)): Image processing method, terminal and system





Legal Events

- 121: The EPO has been informed by WIPO that EP was designated in this application (ref document 14855440, country EP, kind code A1)
- NENP: Non-entry into the national phase (country DE)
- 32PN: Public notification in the EP bulletin, as the address of the addressee cannot be established (noting of loss of rights, EPO Form 1205A dated 07-09-2016)
- 122: PCT application non-entry in European phase (ref document 14855440, country EP, kind code A1)