WO2021017932A1 - Image display method and electronic device - Google Patents

Image display method and electronic device

Info

Publication number
WO2021017932A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
input operation
mobile phone
thumbnail
Prior art date
Application number
PCT/CN2020/103115
Other languages
English (en)
French (fr)
Inventor
罗芊
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to CN202080052474.4A priority Critical patent/CN114127713A/zh
Priority to EP20847568.1A priority patent/EP3979100A4/en
Priority to US17/626,701 priority patent/US20220269720A1/en
Publication of WO2021017932A1 publication Critical patent/WO2021017932A1/zh

Classifications

    • G06F16/54 Information retrieval of still image data; Browsing; Visualisation therefor
    • G06F16/55 Information retrieval of still image data; Clustering; Classification
    • G06F16/583 Retrieval characterised by using metadata automatically derived from the content
    • G06F16/5866 Retrieval characterised by using metadata, using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] using icons
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G06F3/04845 Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F3/04883 Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04N1/00395 Arrangements for reducing operator input
    • H04N1/00453 Simultaneous viewing of a plurality of images, e.g. using a mosaic display arrangement of thumbnails arranged in a two dimensional array
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with interactive means for internal management of messages
    • H04M2250/52 Details of telephonic subscriber devices including functional features of a camera

Definitions

  • This application relates to the field of terminal technology, and in particular to an image display method and electronic equipment.
  • the image capture function is one of the functions that users use frequently. Therefore, a large number of pictures may be stored in the mobile phone.
  • This application provides an image display method and electronic equipment, which can help users quickly locate a target image and facilitate user operations.
  • an embodiment of the present application provides an image display method, which can be executed by an electronic device.
  • the method includes: detecting an input operation; in response to the input operation, displaying an image selection interface on the display screen; determining at least one image related to the input operation from a set of associated images stored locally or in the cloud; displaying thumbnails of the at least one image in the image selection interface and hiding the remaining images; detecting a first operation of selecting a first thumbnail in the image selection interface; and performing, on the image corresponding to the first thumbnail, the processing flow corresponding to the input operation.
  • the electronic device may determine at least one image associated with the input operation from a set of images according to the input operation.
  • when the electronic device detects that the user selects a target image from the at least one image, it can perform the processing flow corresponding to the input operation on the target image.
  • the electronic device can filter out images that meet the conditions (related to the input operation) from a large number of images, and then the user can find the target image from the images that have been filtered out by the electronic device, which facilitates user operations and improves user experience.
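To make this filter-then-act flow concrete, the following is a minimal Kotlin sketch of the behaviour described above. All names (ImageRecord, OperationType, ImageSelectionController) and the tag-based relatedness test are illustrative assumptions, not part of the patent.

```kotlin
// Illustrative sketch only; types, names and the tag-based test are hypothetical.
data class ImageRecord(val id: Long, val tags: Set<String>)

enum class OperationType { PUBLISH_IMAGE, SEND_TO_CONTACT }

class ImageSelectionController(private val associatedImages: List<ImageRecord>) {

    // Steps 1-3: on an input operation, keep only the related images; the UI would
    // show thumbnails of the returned list and hide the rest.
    fun imagesFor(operation: OperationType): List<ImageRecord> =
        associatedImages.filter { isRelated(it, operation) }

    // Steps 4-5: when the user selects the first thumbnail, run the processing flow
    // corresponding to the input operation on the underlying image.
    fun onThumbnailSelected(image: ImageRecord, operation: OperationType) =
        when (operation) {
            OperationType.PUBLISH_IMAGE -> publish(image)
            OperationType.SEND_TO_CONTACT -> sendToContact(image)
        }

    // Placeholder relatedness test; the criteria discussed below (operation type,
    // application information, time information) would plug in here.
    private fun isRelated(image: ImageRecord, operation: OperationType): Boolean =
        when (operation) {
            OperationType.PUBLISH_IMAGE -> "suitable_for_publishing" in image.tags
            OperationType.SEND_TO_CONTACT -> "suitable_for_contact" in image.tags
        }

    private fun publish(image: ImageRecord) { /* hand off to the image publishing flow */ }
    private fun sendToContact(image: ImageRecord) { /* hand off to the messaging flow */ }
}
```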
  • hiding the remaining images includes: hiding the images in the group of images other than the at least one image.
  • the electronic device may determine at least one image associated with the input operation from a set of images according to the input operation.
  • when the electronic device displays the at least one image, it can hide the other images in the group of images, which is convenient for the user to view, helps the user quickly locate the target image, and facilitates user operations.
  • the electronic device may also display marking information, where the marking information is used to mark that the at least one image is related to the input operation.
  • the electronic device may display tag information to help the user quickly locate the target picture and facilitate user operations.
  • displaying the marking information includes: displaying the marking information on the thumbnail of each image in the at least one image; or displaying the marking information in an area of the image selection interface where the at least one image is not displayed.
  • the electronic device may display marking information in any form, as long as the marking information can indicate that the at least one image is related to the input operation, which is not limited in the embodiment of the present application.
  • the marking information includes one or more of icons, text, and pictures; or, displaying the marking information on the thumbnail of each image in the at least one image includes: highlighting the edge of the thumbnail of each image in the at least one image.
  • the associated group of images includes: a group of images containing the same subject; and/or a group of images whose shooting time difference is less than a preset time difference; and/or a group of images shot at the same place; and/or a group of images belonging to the same album; and/or a group of images containing the same content but with different resolutions; and/or a group of images obtained by retouching the same image.
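As a rough illustration of how such an associated group could be detected, the sketch below combines several of the listed criteria into one pairwise test; the field names and the one-minute default time difference are assumptions made for this example.

```kotlin
// Hypothetical pairwise association rule; fields and threshold are illustrative.
import kotlin.math.abs

data class ImageMeta(
    val subjectIds: Set<String>,   // people or objects recognised in the image
    val takenAtMillis: Long,       // shooting time
    val location: String?,         // shooting place, if known
    val albumId: String?           // album the image belongs to, if any
)

fun areAssociated(a: ImageMeta, b: ImageMeta, maxTimeDiffMillis: Long = 60_000L): Boolean =
    a.subjectIds.intersect(b.subjectIds).isNotEmpty() ||              // same subject
    abs(a.takenAtMillis - b.takenAtMillis) < maxTimeDiffMillis ||      // close shooting times
    (a.location != null && a.location == b.location) ||                // same place
    (a.albumId != null && a.albumId == b.albumId)                      // same album
```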
  • the electronic device may also preset associated images for each type of input operation.
  • the electronic device may set the associated images for each type of input operation in advance. In this case, after detecting the input operation, the electronic device can determine at least one image corresponding to the input operation. With this method, the user does not need to find the target image among a large number of images, which is convenient for the user and enhances the user experience.
  • the input operation is an operation for publishing an image, and performing the processing flow corresponding to the input operation on the first thumbnail includes: performing an image publishing process on the image corresponding to the first thumbnail; or the input operation is an operation for sending an image to a contact, and performing the processing flow corresponding to the input operation on the first thumbnail includes: sending the image corresponding to the first thumbnail to the contact.
  • when the electronic device detects an operation for publishing an image, it displays a thumbnail of at least one image related to the publishing operation.
  • when the electronic device detects the user's operation of selecting the first thumbnail from among the thumbnails of the at least one image, it can perform a publishing process on the image corresponding to the first thumbnail.
  • when the electronic device detects an operation for sending an image to a contact, it displays a thumbnail of at least one image related to that operation.
  • when the electronic device detects the user's operation of selecting the first thumbnail from the thumbnails of the at least one image, it may send the image corresponding to the first thumbnail to the contact.
  • the electronic device can filter out the images related to the input operation based on the input operation; that is, the electronic device can filter out the images that meet the condition (being related to the input operation) from a large number of images, and the user can then select the target image from the images screened out by the electronic device, which facilitates user operations and improves user experience.
  • determining at least one image related to the input operation from a group of associated images stored in local storage or cloud storage includes: determining an operation type of the input operation; and determining, according to the operation type, at least one image associated with the operation type.
  • for example, when the operation type of the input operation is publishing an image, the electronic device determines images suitable for publishing; for another example, when the operation type of the input operation is sharing an image with a contact, the electronic device determines images suitable for sharing with that contact.
  • determining the operation type of the input operation includes: determining that the input operation is an operation for publishing an image; and determining, according to the operation type, at least one image associated with the operation type includes: determining, according to the operation type, at least one image suitable for publishing.
  • the electronic device can determine at least one image suitable for publishing from a large number of images, and the user does not need to find a target image from a large number of images, which facilitates user operations and improves user experience.
  • determining the operation type of the input operation includes: determining that the input operation is an operation of communicating with another contact; and determining, according to the operation type, at least one image associated with the operation type includes: determining, according to the operation type, at least one image suitable for sending to the other contact.
  • the electronic device can determine, from a large number of images, at least one image suitable for sharing with a contact, so the user does not need to find the target image among a large number of images, which facilitates user operations and improves user experience.
  • the at least one image suitable for publishing includes: an image of the same type as an image that has been published; and/or at least one image whose number of retouches reaches a preset number.
  • the at least one image suitable for sending to another contact includes: an image containing the other contact; and/or an image of the same type as an image previously sent to the other contact.
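The two suitability criteria above can be expressed as simple filters. The sketch below is one possible reading; the category labels, field names, and the retouch threshold are illustrative assumptions.

```kotlin
// Hypothetical suitability filters; categories, fields and thresholds are made up.
data class GalleryImage(
    val id: Long,
    val category: String,            // coarse classification label, e.g. "food" or "scenery"
    val retouchCount: Int,           // how many times the user has retouched the image
    val peopleInImage: Set<String>   // contacts recognised in the image
)

// Suitable for publishing: same type as an already published image, or retouched
// at least a preset number of times.
fun suitableForPublishing(
    candidates: List<GalleryImage>,
    publishedCategories: Set<String>,
    minRetouches: Int = 1
): List<GalleryImage> =
    candidates.filter { it.category in publishedCategories || it.retouchCount >= minRetouches }

// Suitable for sending to a contact: contains that contact, or same type as an image
// previously sent to that contact.
fun suitableForContact(
    candidates: List<GalleryImage>,
    contactName: String,
    categoriesSentBefore: Set<String>
): List<GalleryImage> =
    candidates.filter { contactName in it.peopleInImage || it.category in categoriesSentBefore }
```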
  • determining at least one image related to the input operation from a set of associated images stored in local storage or cloud storage includes: determining related information of the application targeted by the input operation; and determining, according to the related information of the application, at least one image associated with the related information of the application.
  • the electronic device may determine at least one image associated with the relevant information of the application according to the relevant information of the application.
  • determining the related information of the application targeted by the input operation includes: determining the type or function of the application targeted by the input operation; and determining the at least one image associated with the related information of the application includes: determining, according to the type or function of the application, at least one image that matches the type or function.
  • the electronic device may determine at least one image according to the type or function of the application. With this method, the user does not need to find the target image among a large number of images, which is convenient for the user and enhances the user experience.
  • determining the related information of the application targeted by the input operation includes: determining a history of publishing or sharing images through the application targeted by the input operation; and determining, according to the related information of the application, at least one image associated with the related information of the application includes: determining, according to the history record of the application, at least one image that matches the history record.
  • the electronic device may determine at least one image based on the history record of the application. With this method, the user does not need to find the target image among a large number of images, which is convenient for the user and enhances the user experience.
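One way to act on the application's type, function, or publish/share history is sketched below; AppInfo, ShareRecord, the category strings, and the type-to-category mapping are all assumptions made for illustration.

```kotlin
// Hypothetical selection based on the target application's type and its share history.
data class AppInfo(val packageName: String, val appType: String)            // e.g. "social", "messaging"
data class ShareRecord(val packageName: String, val imageCategory: String)  // one past publish/share event
data class CandidateImage(val id: Long, val category: String)

fun imagesForApp(
    app: AppInfo,
    typeToCategories: Map<String, Set<String>>,  // e.g. "social" -> {"scenery", "food"}
    history: List<ShareRecord>,
    images: List<CandidateImage>
): List<CandidateImage> {
    // Categories matching the application's type or function.
    val byType = typeToCategories[app.appType].orEmpty()
    // Categories of images previously published or shared through this application.
    val byHistory = history
        .filter { it.packageName == app.packageName }
        .map { it.imageCategory }
        .toSet()
    val wanted = byType + byHistory
    return images.filter { it.category in wanted }
}
```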
  • determining at least one image related to the input operation from a group of associated images stored in local storage or cloud storage includes: determining time information corresponding to the input operation; and determining, according to the time information, at least one image matching the time information.
  • the electronic device may determine at least one image according to the time information of the input operation. With this method, the user does not need to find the target image among a large number of images, which is convenient for the user and enhances the user experience.
  • determining at least one image related to the input operation from a set of associated images in local storage or cloud storage includes: reading or loading all the images in the associated group of images from the local storage or the cloud, and determining, from all the images in the group, at least one image related to the input operation; displaying the thumbnail of the at least one image in the image selection interface and hiding the remaining images includes: displaying the thumbnail of the at least one image in the image selection interface and not displaying the thumbnails of the other images in the image selection interface.
  • the electronic device may read all images in a group of images from local storage or cloud storage, and then select at least one image from all the images read.
  • the electronic device may only display thumbnails of at least one selected image in the image selection interface, and not display thumbnails of other images, for example, may discard other images.
  • displaying the thumbnail of the at least one image in the image selection interface and hiding the remaining images includes: reading or loading the at least one image from the local storage or the cloud and displaying the thumbnail of the at least one image in the image selection interface, while not reading or loading the other images in the group of images.
  • the electronic device may only read at least one image related to the input operation from local storage or cloud storage, and not read other images. Therefore, the electronic device may only display the thumbnail of at least one read image in the image selection interface, and not display the thumbnails of other images.
  • displaying the thumbnail of the at least one image in the image selection interface and hiding the remaining images includes: preloading the thumbnail of the at least one image from the local storage or the cloud, not preloading the thumbnails of the other images in the group of images, and displaying the thumbnail of the at least one image in the image selection interface.
  • the electronic device may not completely load any images, but pre-load thumbnails of at least one image related to the input operation, without pre-loading thumbnails of other images. Therefore, the electronic device can display the preloaded thumbnail of at least one image in the image selection interface, and not display the thumbnails of other images.
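The three loading strategies above (load everything and then filter, load only the related images, or preload only the related thumbnails) all aim to show just the relevant thumbnails. The sketch below illustrates the thumbnail-only variant; the Storage interface is a stand-in for local or cloud I/O, not a real API.

```kotlin
// Illustrative thumbnail-only loading; Storage is a placeholder for local/cloud access.
interface Storage {
    fun listImageIds(): List<Long>
    fun loadThumbnail(id: Long): ByteArray   // small preview data only
    fun loadFullImage(id: Long): ByteArray   // full-resolution data
}

class LazyImagePicker(private val storage: Storage) {

    // Preload thumbnails only for the images related to the input operation; the other
    // thumbnails and all full-resolution images are left untouched in storage.
    fun thumbnailsToShow(relatedIds: Set<Long>): Map<Long, ByteArray> =
        storage.listImageIds()
            .filter { it in relatedIds }
            .associateWith { storage.loadThumbnail(it) }

    // The full image is read or downloaded only after the user selects its thumbnail.
    fun onThumbnailSelected(id: Long): ByteArray = storage.loadFullImage(id)
}
```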
  • an embodiment of the present application also provides an electronic device.
  • the electronic device includes a display screen, at least one processor, and a memory; the memory is used to store one or more computer programs; when the one or more computer programs stored in the memory are executed by the at least one processor,
  • the electronic device can implement the foregoing first aspect and any possible design technical solution of the first aspect.
  • an embodiment of the present application also provides an electronic device.
  • the electronic device includes modules/units that execute the above-mentioned first aspect or any possible design method of the first aspect; these modules/units can be implemented by hardware, or by hardware executing corresponding software.
  • an embodiment of the present application also provides a chip, which is coupled with a memory in an electronic device and is used to call a computer program stored in the memory and execute the technical solutions of the first aspect of the embodiments of the present application and any possible design of the first aspect; "coupled" in the embodiments of this application means that two components are directly or indirectly combined with each other.
  • an embodiment of the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium includes a computer program.
  • when the computer program runs on an electronic device, the electronic device is caused to execute the technical solutions of the first aspect and any possible design of the first aspect.
  • a program product in the embodiments of the present application includes instructions; when the program product runs on an electronic device, the electronic device is caused to execute the technical solutions of the first aspect of the embodiments of the present application and any possible design of the first aspect.
  • FIG. 1A is a schematic diagram of the hardware structure of a mobile phone 100 according to an embodiment of this application;
  • FIG. 1B is a schematic diagram of the software structure of the mobile phone 100 according to an embodiment of this application;
  • FIG. 2A is a schematic diagram of a graphical user interface of the mobile phone 100 according to an embodiment of this application;
  • FIG. 2B is a schematic diagram of a graphical user interface of the mobile phone 100 according to an embodiment of this application;
  • FIG. 3A is a schematic diagram of a graphical user interface of the mobile phone 100 according to an embodiment of this application;
  • FIG. 3B is a schematic diagram of a graphical user interface of the mobile phone 100 according to an embodiment of this application;
  • FIG. 4A is a schematic diagram of a graphical user interface of the mobile phone 100 according to an embodiment of this application;
  • FIG. 4B is a schematic diagram of a graphical user interface of the mobile phone 100 according to an embodiment of this application;
  • FIG. 5A is a schematic diagram of a graphical user interface of the mobile phone 100 according to an embodiment of this application;
  • FIG. 5B is a schematic diagram of a graphical user interface of the mobile phone 100 according to an embodiment of this application;
  • FIG. 6 is a schematic flowchart of an image classification method according to an embodiment of this application;
  • FIG. 7 is a schematic diagram of a model according to an embodiment of this application;
  • FIG. 8 is a schematic diagram of a model training process according to an embodiment of this application;
  • FIG. 9A is a schematic diagram of a graphical user interface of the mobile phone 100 according to an embodiment of this application;
  • FIG. 9B is a schematic diagram of a graphical user interface of the mobile phone 100 according to an embodiment of this application;
  • FIG. 9C is a schematic diagram of a graphical user interface of the mobile phone 100 according to an embodiment of this application;
  • FIG. 9D is a schematic diagram of a graphical user interface of the mobile phone 100 according to an embodiment of this application;
  • FIG. 10 is a schematic flowchart of an image display method according to an embodiment of this application.
  • the application program (application, app for short) involved in the embodiments of the present application is a software program that can implement one or more specific functions.
  • multiple applications can be installed in the terminal.
  • the application program mentioned in the following may be an application program that has been installed when the terminal leaves the factory, or an application program that the user downloads from the network or obtains from other terminals when using the terminal.
  • the social applications (or called social platforms) involved in the embodiments of the present application are applications that can realize content (such as pictures and text) sharing.
  • the image selection interface involved in the embodiments of this application is an interface that can display thumbnails of multiple images for the user to select, for example, the interface 203 in FIG. 2B below or the interface 305 in FIG. 3B.
  • the thumbnails involved in the embodiments of the present application are incomplete images produced from an image to help the user browse images or to display more images at once.
  • the incomplete image may be an image obtained by compressing an image, an image obtained by reducing the size of an image, an image obtained by sampling some of the pixels of an image, an image that shows only part of the content of an image, or, for an image stored in the cloud, an image of which only a blurred outline (the image not yet downloaded from the cloud) can be displayed locally.
  • the interface 203 in FIG. 2B or the interface 305 in FIG. 3B may display thumbnails, and the user selects an image from the thumbnails.
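As a simple example of the "reduced size" form of thumbnail mentioned above, an Android implementation might downscale the bitmap while preserving its aspect ratio; the 200-pixel target width here is an arbitrary illustrative value, not a value from the patent.

```kotlin
// One possible way to produce a reduced-size thumbnail on Android; not from the patent.
import android.graphics.Bitmap

fun makeThumbnail(source: Bitmap, targetWidth: Int = 200): Bitmap {
    // Shrink the image while keeping its aspect ratio.
    val targetHeight = (source.height.toLong() * targetWidth / source.width).toInt()
    return Bitmap.createScaledBitmap(source, targetWidth, targetHeight, /* filter = */ true)
}
```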
  • the electronic device may be a portable terminal including a display screen, such as a mobile phone, a tablet computer, and the like.
  • portable electronic devices include, but are not limited to, portable electronic devices running various operating systems.
  • the aforementioned portable electronic device may also be other portable electronic devices, such as a digital camera.
  • the above-mentioned electronic device may not be a portable electronic device, but a desktop computer with a display screen.
  • electronic devices can support multiple applications, for example, one or more of the following: a camera application, instant messaging applications, a photo management application, etc. There can be multiple instant messaging applications, such as WeChat, Weibo, Tencent QQ, WhatsApp Messenger, Line, Instagram, Kakao Talk, and DingTalk. Users can send text, voice, pictures, video files, and various other files to other contacts through instant messaging applications, or make video or audio calls with other contacts through instant messaging applications.
  • FIG. 1A shows a schematic structural diagram of the mobile phone 100.
  • the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the mobile phone 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
  • the processor 110 may run the software code of the image sharing algorithm provided in the embodiment of the present application to implement the image sharing process.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the mobile phone 100, and can also be used to transfer data between the mobile phone 100 and peripheral devices.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the wireless communication function of the mobile phone 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the mobile phone 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution including 2G/3G/4G/5G and the like applied on the mobile phone 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the wireless communication module 160 can provide wireless communication solutions applied to the mobile phone 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
  • the antenna 1 of the mobile phone 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the mobile phone 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the mobile phone 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • the display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the mobile phone 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the camera 193 is used to capture still images or videos.
  • the camera 193 may include a front camera and a rear camera.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the mobile phone 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area. Among them, the storage program area can store an operating system, and software codes of at least one application program (such as a camera application, a WeChat application, etc.).
  • the data storage area can store data (such as images, videos, etc.) generated during the use of the mobile phone 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the internal memory 121 may also store the software code of the image sharing method provided in the embodiment of the present application.
  • the processor 110 runs the software code, the process steps of the image sharing method are executed to realize the image sharing process.
  • the internal memory 121 may also store images, models, classification tags of pictures, etc. obtained by shooting.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the software code of the image sharing method provided in the embodiment of the present application can also be stored in an external memory, and the processor 110 can run the software code through the external memory interface 120 to execute the process steps of the image sharing method to realize the image sharing process.
  • the images, models, classification tags of pictures, etc. captured by the mobile phone 100 may also be stored in an external memory.
  • the user can specify whether to store the image in the internal memory 121 or the external memory.
  • when the mobile phone 100 is currently connected to an external memory and the mobile phone 100 captures an image, a prompt message can be popped up to ask the user whether to store the image in the external memory or the internal memory 121; of course, other specifying methods are possible, which is not limited in the embodiments of this application.
  • alternatively, when the mobile phone 100 detects that the free space of the internal memory 121 is less than a preset amount, it can automatically store the image in the external memory.
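The storage behaviour described in the last two paragraphs can be summarised as a small decision rule; the threshold and the prompt callback below are illustrative assumptions, not values from the patent.

```kotlin
// Hypothetical storage-location decision for a newly captured image.
enum class StorageTarget { INTERNAL, EXTERNAL }

fun chooseStorageTarget(
    externalConnected: Boolean,
    internalFreeBytes: Long,
    minInternalFreeBytes: Long,
    askUser: () -> StorageTarget
): StorageTarget = when {
    !externalConnected -> StorageTarget.INTERNAL                        // no external memory available
    internalFreeBytes < minInternalFreeBytes -> StorageTarget.EXTERNAL  // switch automatically when space is low
    else -> askUser()                                                   // otherwise prompt the user
}
```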
  • the mobile phone 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the gyroscope sensor 180B can be used to determine the movement posture of the mobile phone 100. In some embodiments, the angular velocity of the mobile phone 100 around three axes (ie, x, y, and z axes) can be determined by the gyroscope sensor 180B. The gyro sensor 180B can be used for image stabilization.
  • the air pressure sensor 180C is used to measure air pressure.
  • the mobile phone 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the mobile phone 100 may use the magnetic sensor 180D to detect the opening and closing of the flip holster.
  • based on the opening and closing state of the flip cover detected by the magnetic sensor 180D, features such as automatic unlocking of the flip cover can be set.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the mobile phone 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the mobile phone 100 is stationary. It can also be used to identify the posture of electronic devices, and used in applications such as horizontal and vertical screen switching, pedometers and so on.
  • the mobile phone 100 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the mobile phone 100 may use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the mobile phone 100 emits infrared light to the outside through the light emitting diode.
  • the mobile phone 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the mobile phone 100. When insufficient reflected light is detected, the mobile phone 100 can determine that there is no object near the mobile phone 100.
  • the mobile phone 100 can use the proximity light sensor 180G to detect that the user holds the mobile phone 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather-case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the mobile phone 100 can adaptively adjust the brightness of the display 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the mobile phone 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the mobile phone 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 180J is used to detect temperature.
  • the mobile phone 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the mobile phone 100 performs a reduction in the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • the mobile phone 100 when the temperature is lower than another threshold, the mobile phone 100 heats the battery 142 to avoid abnormal shutdown of the mobile phone 100 due to low temperature.
  • the mobile phone 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
  • the touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also called a “touch screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the mobile phone 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the button 190 includes a power button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the mobile phone 100 can receive key input, and generate key signal input related to user settings and function control of the mobile phone 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations applied to different applications can correspond to different vibration feedback effects.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be connected to and separated from the mobile phone 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the mobile phone 100.
  • the mobile phone 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • generally, thumbnails of all images are stored in the mobile phone 100, and when the mobile phone 100 detects an input operation (such as an operation for publishing an image, or an operation for sending an image to other contacts), it displays the thumbnails of all the images.
  • thumbnails usually cannot clearly display the image content, so users cannot accurately select the thumbnail of the target image with the naked eye.
  • they need to tap the thumbnail of an image to display that image, then slide left or right to display other images, and finally select the target image; the operation is cumbersome and the user experience is poor.
  • the mobile phone 100 may analyze the user's operation behavior on the images, and divide the images into different image types according to the operation behavior. For example, the images can be divided into types "liked by the user" and "disliked by the user"; or into types "suitable for publishing" and "not suitable for publishing"; or into types "suitable for sending to other contacts" and "not suitable for sending to other contacts", and so on.
  • when the mobile phone 100 detects an operation for publishing an image, it can recommend to the user an image "liked by the user" or an image "suitable for publishing"; when the mobile phone 100 detects an operation for sending an image to other contacts, it can recommend to the user an image "liked by the user" or an image "suitable for sending to other contacts". Therefore, the mobile phone 100 can recommend an image related to the input operation according to the user's input operation, so the user does not need to look for an image among a large number of images, which is convenient for the user.
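A very small sketch of such behaviour-based labelling is shown below: viewing, retouching, and sharing counts are turned into the labels mentioned above, which can then drive the recommendation. The thresholds and label strings are invented for illustration.

```kotlin
// Hypothetical behaviour-based labelling and recommendation; thresholds are made up.
data class UsageStats(val viewCount: Int, val retouchCount: Int, val shareCount: Int)

fun labelsFor(stats: UsageStats): Set<String> {
    val labels = mutableSetOf<String>()
    if (stats.viewCount >= 5 || stats.retouchCount >= 1) labels += "liked_by_user"
    if (stats.retouchCount >= 2 || stats.shareCount >= 1) labels += "suitable_for_publishing"
    if (stats.shareCount >= 1) labels += "suitable_for_sending_to_contacts"
    return labels
}

// When the user starts a publishing operation, only matching images are recommended.
fun recommendForPublishing(stats: Map<Long, UsageStats>): List<Long> =
    stats.filterValues { "suitable_for_publishing" in labelsFor(it) }.keys.toList()
```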
  • the software system of the mobile phone 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the following embodiments exemplarily describe the software structure of the mobile phone 100 by taking an android system with a layered architecture as an example.
  • FIG. 1B is a block diagram of the software structure of the mobile phone 100 provided by an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through software interfaces.
  • the android system can be divided into four layers, from top to bottom, they are the application layer, the application framework layer, the android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages. As shown in Figure 1B, the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides application programming interfaces (application programming interface, API) and programming frameworks for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include video, image, audio, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text and controls that display pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the mobile phone 100. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify the download completion, message reminder, etc.
  • the notification manager can also present notifications that appear in the status bar at the top of the system in the form of a graph or scrolling text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text messages are prompted in the status bar, a prompt sound is played, the electronic device vibrates, or an indicator light flashes.
  • the Android runtime includes core libraries and virtual machines. The Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the system library may also include an image processing library.
  • the image can be classified. For example, when a behavior operation on at least one image in a set of associated images is detected, the at least one image is classified.
  • the image processing library may determine at least one image related to the input operation from the associated group of images, and recommend the at least one image.
  • the mobile phone 100 displays a main interface 201, and the main interface 201 includes application icons of multiple applications (a camera application, a gallery application, a WeChat application, etc.); when the mobile phone 100 detects that the user triggers the gallery application icon,
  • the mobile phone 100 displays an interface 203 of the gallery application, as shown in (b) of FIG. 2A.
  • the interface 203 of the gallery application displayed by the mobile phone 100 includes thumbnails of pictures stored in the mobile phone 100; in (b) of FIG. 2A, thumbnails of 3 pictures are taken as an example.
  • a mark is displayed on the thumbnail of a picture that is "liked by the user", and the mark is used to indicate that the picture is of the image type "liked by the user"; that is, a picture whose thumbnail carries the mark is of the picture type "liked by the user",
  • while pictures whose thumbnails do not contain a mark are of the picture type "disliked by the user".
  • a mark 207 is displayed on the picture 204,
  • and a mark 208 is displayed on the picture 206; that is, the picture 204 and the picture 206 are of the picture type "liked by the user".
  • when the mobile phone 100 detects that the user triggers the mark 207, prompt information is displayed.
  • the prompt information is used to remind the user that the picture 204 is a picture that the user likes.
  • the mobile phone 100 may also display a confirmation control and a cancellation control.
  • when the confirmation control is triggered, the mobile phone 100 determines that the picture 204 is of the type that the user likes.
  • when the cancellation control is triggered, the mobile phone 100 determines that the picture 204 is not of the type that the user likes, and cancels the display of the mark 207 on the picture 204.
  • the edges of the thumbnails of the pictures "liked by the user" are bold, while the edges of the thumbnails of other pictures are not bold; in other examples, the thumbnails of the pictures "liked by the user" are displayed at a larger size, while the thumbnails of other pictures are smaller.
  • the sizes of the thumbnail 204 and the thumbnail 206 are larger than the size of the thumbnail 205.
  • the interface 203 of the gallery application includes a "picture classification" control 210. When the control 210 is triggered, the mobile phone 100 displays multiple options, as shown in FIG. 2B.
  • the mobile phone 100 displays an option 211 of "pictures the user likes", an option 212 of "favorite pictures", an option 213 of "pictures suitable for publishing", and an option 214 of "pictures suitable for sending to other contacts". Assuming that the option 211 is selected, the mobile phone 100 only displays pictures "liked by the user"; as shown in (d) in FIG. 2B, only the pictures 204 and 206 are displayed.
  • the mobile phone 100 displays an interface 301 of the WeChat application, and the interface 301 includes a control 302.
  • when the control 302 is triggered, the mobile phone 100 displays a shooting option 303 and an option 304 for selecting a picture from the gallery, see (b) in FIG. 3A.
  • when the mobile phone 100 detects that the option 304 for selecting a picture from the gallery is selected, the mobile phone 100 displays a picture to-be-selected interface 305, as shown in (c) in FIG. 3A.
  • the picture to be selected interface 305 includes one or more pictures.
  • the picture to-be-selected interface 305 includes thumbnails of multiple pictures in the gallery of the mobile phone 100, where a mark can be displayed on the thumbnails of the pictures "liked by the user", and the pictures whose thumbnails do not include the mark are pictures that the user does not like.
  • the picture to-be-selected interface 305 includes thumbnails of multiple pictures in the gallery of the mobile phone 100, wherein the thumbnails of the pictures "suitable for publishing" are marked, and the thumbnails of the pictures "not suitable for publishing" are not marked.
  • the picture to-be-selected interface 305 includes thumbnails of multiple pictures in the gallery of the mobile phone 100, wherein a first mark is displayed on the thumbnails of the pictures "liked by the user", and a second mark is displayed on the thumbnails of the pictures "suitable for publishing";
  • the first mark and the second mark are different.
  • the thumbnails of the pictures 306 and 308 display marks, that is, the pictures 306 and 308 are pictures “liked by the user” or pictures "suitable for publishing".
  • when the mobile phone 100 detects that the picture 308 is selected, an interface as shown in (d) in FIG. 3A is displayed.
  • the thumbnails of the pictures "liked by the user" or the pictures "suitable for publishing" are displayed at a relatively large size compared with the thumbnails of other pictures.
  • the picture to-be-selected interface 305 only includes the thumbnails of two pictures, which are the pictures "liked by the user" and/or the pictures "suitable for publishing" that the mobile phone 100 has selected from the massive pictures in the gallery.
  • the picture candidate interface 305 only includes recommended pictures, and does not include non-recommended pictures. For example, non-recommended pictures can be hidden.
  • the picture to-be-selected interface 305 includes a "see more" control. When the mobile phone 100 detects that the control is triggered, the mobile phone 100 displays more thumbnails.
  • the mobile phone 100 displays multiple options, that is, an option 311 of "pictures the user likes", an option 312 of "favorite pictures", an option 313 of "pictures suitable for publishing", and an option 314 of "pictures suitable for sending to other contacts". Assuming that the option 311 is selected, the mobile phone 100 only displays pictures "liked by the user"; as shown in (d) in FIG. 3B, the mobile phone 100 only displays the pictures 306 and 308.
  • different areas of the picture to be selected interface 305 display different types of images.
  • the first area displays images that the user likes.
  • the second area displays images suitable for publishing.
  • the same image may exist in the image that the user likes and the image suitable for publishing.
  • when the mobile phone 100 detects an operation for posting an image on a social platform, it can display images "suitable for publishing" or images "liked by the user", or display different types of images with different identification information (for example, images "suitable for publishing" display a first identifier and images "liked by the user" display a second identifier), which makes it convenient for users to find images quickly.
  • because the image types are classified by the mobile phone 100 according to the user's operation behavior on the images, the classification of the image types conforms to the user's operating habits and helps to improve the user experience.
  • the mobile phone 100 displays an interface 401 of a short message application, and the interface 401 is a communication interface between the user and other contacts.
  • the interface 401 includes a control 402.
  • when the control 402 is triggered, a gallery control 403 and a shooting control 404 are displayed.
  • the mobile phone 100 when the mobile phone 100 detects that the user triggers the gallery control 403, it displays an interface 405 as shown in (b) in FIG. 4A.
  • the interface 405 includes thumbnails of multiple pictures; a mark 407 is set on the picture 406 to indicate that the picture 406 is a picture "liked by the user", and a mark 412 is set on the picture 411 to indicate that the picture 411 is a picture "liked by the user".
  • the mark 407 is used to indicate that the picture 406 is a picture "suitable for sending other contacts"
  • the mark 412 is used to indicate that the picture 411 is a picture "suitable for sending other contacts”.
  • different marks are displayed on the picture 406 and the picture 411.
  • the first mark is displayed on the picture 406 and the second mark is displayed on the picture 411.
  • the first mark is used to indicate that the picture 406 is a picture "liked by the user", and the second mark is used to indicate that the picture 411 is a picture "suitable for sending other contacts".
  • when the mobile phone 100 detects that the picture 406 is selected and then detects that the user triggers the "send" control 408, the mobile phone 100 sends the picture 406 to the contact, and the mobile phone 100 displays an interface 409 as shown in (c) in FIG. 4A; no mark is displayed on the picture 410 in the interface 409.
  • the mobile phone 100 may determine the image type "suitable for sending to a specific contact".
  • the specific contact may include a specific type of contact, or a specific contact, and so on.
  • a specific type of contact can be contacts belonging to the same group. Taking the WeChat application as an example, a specific type of contact can be all contacts belonging to the same WeChat chat group in the WeChat application, or all contacts belonging to the same group (for example, family) in the WeChat application, or all contacts whose remark names contain a common word (for example, teacher) in the WeChat application.
  • a specific contact may be a specific contact, and the electronic device can determine whether the contact is a specific contact according to the remark name of the contact.
  • a specific contact may include contacts whose contact remarks are "Dad”, contact remarks are "Mom", and so on.
  • if the mobile phone 100 learns that certain images (such as landscape images or portrait images) are often sent to a specific contact, it determines that such images are "suitable for sending to a specific contact".
  • when the mobile phone 100 displays a chat interface with a specific contact (for example, a WeChat chat interface or a short message chat interface)
  • and the mobile phone 100 detects an operation for sending an image, it displays images suitable for sending to that specific contact.
  • for example, the mobile phone 100 learns that landscape images have been sent many times to the contacts whose remark names are "Dad" or "Mom", and therefore determines that images suitable for sending to "Dad" or "Mom" are landscape images; or the mobile phone 100 learns that portrait images have been sent many times to the contact whose remark name is "Amy", and therefore determines that images suitable for sending to Amy are portrait images.
  • when the mobile phone 100 displays a chat interface with "Dad" or "Mom", or with a group containing "Dad" and/or "Mom", and the mobile phone 100 detects an operation for sending an image, it can display thumbnails of the images suitable for sending to "Dad" or "Mom".
  • the thumbnails of the pictures "liked by the user" or the pictures "suitable for sending other contacts" in the interface 405 are displayed at a larger size than the thumbnails of other pictures.
  • alternatively, the thumbnails of the pictures "liked by the user" or the pictures "suitable for sending other contacts" in the interface 405 (the picture 406 and the picture 411) have bold edges, while the edges of the thumbnails of other pictures are not bold.
  • the interface 405 only includes thumbnails of 2 pictures; these 2 pictures are the pictures "liked by the user" and/or the pictures "suitable for sending other contacts" that the mobile phone 100 has selected from the massive pictures in the gallery.
  • the interface 405 does not include other pictures.
  • the interface 405 includes a "see more" control. When the mobile phone 100 detects an operation on the control, the mobile phone 100 displays more thumbnails.
  • when the "picture type" control in the interface 305 is triggered, the mobile phone 100 displays multiple options, that is, an option of "pictures the user likes", an option of "favorite pictures", an option of "pictures suitable for publishing", and an option of "pictures suitable for sending to other contacts". Assuming that the option "pictures suitable for sending to other contacts" is selected, the mobile phone 100 only displays pictures suitable for sending to other contacts.
  • when the mobile phone 100 detects an operation for sending an image to other contacts, it can display images "suitable for sending other contacts" or images "liked by the user", or display different types of images with different identification information (for example, images "suitable for sending other contacts" display a first identifier, and images "liked by the user" display a second identifier), which makes it convenient for users to find images quickly.
  • because the image types are classified by the mobile phone 100 according to the user's operation behavior on the images, the classification of the image types conforms to the user's operating habits, which helps to improve the user experience.
  • the mobile phone 100 displays a main interface 501
  • the main interface 501 includes application icons of multiple applications; when the mobile phone 100 detects that the user triggers the camera application icon 502, the mobile phone 100 displays the interface 503 of the camera application, as shown in (b) of FIG. 5A.
  • the interface 503 of the camera application includes a preview image.
  • the preview image is an image captured by the mobile phone 100 based on the camera's preset shooting parameters, where the preset shooting parameters may be shooting parameters that the mobile phone 100 has derived by analyzing the pictures the user likes. For example, if the mobile phone 100 learns that the pictures the user likes have high brightness, the mobile phone 100 adjusts the shooting parameters of the camera, for example by increasing the exposure value. Therefore, in this manner, after the mobile phone 100 starts the camera application, the mobile phone 100 captures images with the preset shooting parameters by default, so that the captured images are more likely to be images that the user likes.
  • the interface 503 of the camera application includes a control 504.
  • the interface 503 displays prompt information 505, and the prompt information 505 is used to remind the user that the mobile phone 100 shoots by taking a picture that the user likes as a template.
  • the mobile phone 100 adjusts the shooting parameters to the shooting parameters analyzed according to the pictures liked by the user.
  • the interface 503 of the camera application includes a control 504.
  • the interface 503 displays a shooting mode selection box.
  • the mode selection box includes controls corresponding to various shooting modes, including a control 505 of "taking a picture that the user likes as a template".
  • the mobile phone 100 adjusts the shooting parameters to the shooting parameters analyzed according to the picture that the user likes.
  • the interface 503 of the camera application includes a "light stick” control 504.
  • when the mobile phone 100 detects that the "light stick" control 504 is triggered, the mobile phone 100 displays one or more options, such as an option 505 of "use a favorite picture as a template".
  • the mobile phone 100 adjusts the shooting parameters to the shooting parameters analyzed according to the user's favorite pictures.
  • the following embodiment introduces a process in which the mobile phone 100 divides the stored pictures into two image types: "user likes” and “user dislikes”.
  • FIG. 6 is a schematic diagram of the image classification process provided by this embodiment of the application. As shown in Figure 6, the process may include:
  • the mobile phone 100 detects an operation behavior for a picture, and the operation behavior includes behaviors such as deleting, viewing, sharing, collecting, and editing the picture.
  • Table 1 is an example of the operation behavior of the mobile phone 100 for each picture.
  • the mobile phone 100 divides the pictures into images that are "liked by the user” and images that are not liked by the user.
  • the mobile phone 100 can use artificial intelligence (AI) learning methods (such as an AI model) to divide the pictures in the gallery into the image types "liked by the user" and "disliked by the user", and then add a "like" tag to the pictures that the user likes and a "dislike" tag to the pictures that the user does not like.
  • the mobile phone 100 may mark pictures that have been viewed more than a preset number of times as pictures that the user likes, and mark pictures that have been viewed no more than the preset number of times as pictures that the user dislikes.
  • the mobile phone 100 may mark pictures with a number of sharing times greater than a preset number as pictures that the user likes, and mark pictures with a number of sharing times less than or equal to the preset times as pictures that the user dislikes.
  • see Table 2 for an example in which the mobile phone 100 determines the user's degree of liking for each picture.
  • the degree of liking can be characterized by "yes" or "no", where "yes" means like and "no" means dislike; or the degree of liking can also be characterized by scores.
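  • as a minimal illustrative sketch (not part of the original disclosure), the degree of liking described above could be derived from per-picture operation counts roughly as follows; the weights and the threshold below are assumptions chosen only for illustration:

      // Hypothetical sketch (Java): derive a like-degree score from per-picture operation counts.
      // The weights and the 5.0 threshold are illustrative assumptions, not values from the disclosure.
      final class LikeDegreeEstimator {
          static double likeScore(int views, int shares, int edits, int favorites, int deletes) {
              return 1.5 * views + 3.0 * shares + 2.0 * edits + 4.0 * favorites - 5.0 * deletes;
          }
          static boolean isLiked(int views, int shares, int edits, int favorites, int deletes) {
              // "yes" (liked) when the score exceeds the assumed threshold, otherwise "no" (disliked)
              return likeScore(views, shares, edits, favorites, deletes) > 5.0;
          }
          public static void main(String[] args) {
              System.out.println(isLiked(12, 2, 1, 1, 0)); // viewed and shared often -> true (liked)
              System.out.println(isLiked(1, 0, 0, 0, 1));  // rarely viewed, then deleted -> false
          }
      }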
  • the following embodiment introduces the process in which the mobile phone 100 uses the AI model to classify pictures into image types of "user likes” and “users dislike”.
  • the model may be, for example, a neural network unit, a machine learning model, etc.
  • the model may include model parameters.
  • the mobile phone 100 uses input parameters, model parameters, and related algorithms to obtain an output result, and the output result may be a classification label. See Figure 7 for an example of an algorithm related to model parameters:
  • x1, x2, ..., xn are multiple input parameters; w1, w2, ..., wn are the coefficients (also called weights) of the input parameters; b is the offset (used to indicate the intercept between u and the coordinate origin); and f is a function used to ensure that the value range of the output result is within the interval [0, 1] (such as the sigmoid function or the tanh function), so that y = f(w1·x1 + w2·x2 + ... + wn·xn + b), which is referred to as formula (1) below.
  • the input parameters are x1 to xn, the model parameters are the weights wi and the offset b, and the output parameter is y.
  • the input parameter x is one or more images (hereinafter referred to as input images).
  • the output result can be obtained through the algorithm related to the model. It can be the classification label to which one or more images belong.
  • the classification label can be "user likes" or “user dislikes”.
  • in the model use process, one or more images are used as input parameters, the model parameters (for example, model parameters obtained by training) are used, and the algorithm related to the model is run to get the output result, which can be the label of the input image, for example, "yes" or "no".
  • the output result can also be obtained based on the probability that the input image belongs to the "like" tag (or the probability of the "dislike" tag). For example, when the probability that an image belongs to the "like" tag is 0.9, the mobile phone 100 can determine that the input image belongs to the classification label "liked by the user", and the output result can be "yes".
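  • the inference step described above can be sketched as follows; this is a hedged illustration in which the feature values, weights, and decision threshold are assumed, and the sigmoid is used as the function f that keeps the output within [0, 1]:

      // Minimal sketch (Java) of the inference step: weighted sum of the inputs plus the offset b,
      // passed through a sigmoid f so that the output stays within [0, 1]. All values are illustrative.
      final class LikeClassifier {
          static double sigmoid(double u) { return 1.0 / (1.0 + Math.exp(-u)); }
          static double probabilityOfLike(double[] x, double[] w, double b) {
              double u = b;
              for (int i = 0; i < x.length; i++) {
                  u += w[i] * x[i];          // w1*x1 + w2*x2 + ... + wn*xn
              }
              return sigmoid(u);             // f(...) in formula (1)
          }
          public static void main(String[] args) {
              double[] x = {0.8, 0.1, 0.6};  // assumed features, e.g. brightness, crowd density, sharpness
              double[] w = {1.2, -2.0, 1.5};
              double p = probabilityOfLike(x, w, 0.1);
              System.out.println(p >= 0.5 ? "yes (liked by the user)" : "no (disliked by the user)");
          }
      }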
  • model training process is the process of determining model parameters.
  • the following embodiment describes the model training process. Referring to Figure 8, the flow of the model training process may include:
  • the "associated" group of images may be at least two images, and “associated” may be that a group of images are related in content, shooting time, shooting location, and so on.
  • the mobile phone 100 continuously shoots 3 images, then these 3 images are a group of related images.
  • the mobile phone 100 has captured three images of the same object, that is, the three images contain the same object (also referred to as the captured object), and the three images are also a set of related images.
  • the mobile phone 100 has captured 3 images within a certain period of time (for example, within 30 minutes), then these 3 images may be a group of related images.
  • the user can perform different operations on the images in the associated set of images.
  • a group of related images includes 3 images containing the same person
  • when the mobile phone 100 detects an image publishing operation for one of the images (such as publishing to WeChat Moments), the mobile phone 100 determines that the image is of the image type "suitable for publishing"
  • and may add a tag to the image, such as a tag "suitable for publishing".
  • the mobile phone 100 can determine the image type of the image according to the user's operation behavior on the image in the associated set of images, and then add appropriate tags to the image.
  • S804 Use the first image as an input parameter, determine initial model parameters, and run calculations related to the model parameters to obtain an output result, which may be a label of the first image.
  • S805 Determine whether the output result is the same as the label of the first image determined in S803, if yes, the training ends, if not, execute S806.
  • S807 Use the first image as an input parameter, and use the adjusted model parameter to run an algorithm related to the model parameter to obtain a new output result.
  • the model training process is the process of determining wi and b when xi and y are known.
  • for example, the initial w0 and b0 are determined, and the above formula (1) is calculated to obtain y0. It is then checked whether the difference between y0 and the known y is small. If so, the model training is completed; if not, the initial w0 and b0 are adjusted, for example to w1 and b1, and then, with xi, w1, and b1 known, formula (1) is calculated again to obtain y1, and the difference between y1 and the known y is compared. This continues until the difference between the obtained yn and the known y is small, at which point the model training ends.
  • S808 Determine whether the output result is the same as the label of the first image determined in S803, if so, the training ends, if not, execute S806.
  • the mobile phone 100 uses the first image as the positive training set and the second image as the negative training set.
  • the mobile phone 100 takes the first image and the second image as input parameters, uses the model parameters, and runs the algorithm related to the model to obtain a first output result and a second output result. If the first output result indicates that the first image is a picture the user likes, and the second output result indicates that the second image is a picture the user does not like, that is, the first output result is consistent with the label of the first image and the second output result is consistent with the label of the second image, the mobile phone 100 does not need to adjust the model parameters.
  • the labels used by the mobile phone 100 in the model training process are "liked by the user" or "disliked by the user", so the function of the model parameters obtained by training is to divide pictures into the type the user likes or the type the user does not like.
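  • for illustration only, the iterative procedure of S804 to S808 (compute the output with the current wi and b, compare it with the known label, adjust, and repeat until the difference is small) might be sketched as below; plain gradient descent on a logistic model is one possible way to perform the adjustment and is an assumption here, not an update rule prescribed by the disclosure:

      // Illustrative training loop (Java): start from initial parameters, compute the output for each
      // labelled image, compare it with the known label, and adjust w and b until the difference is small.
      final class ModelTrainer {
          static double sigmoid(double u) { return 1.0 / (1.0 + Math.exp(-u)); }
          // xs: feature vectors of the training images; labels: 1.0 = positive set, 0.0 = negative set;
          // w: weights w1..wn followed by the offset b in the last slot.
          static void train(double[][] xs, double[] labels, double[] w, double lr, int maxIters) {
              int n = xs[0].length;
              for (int iter = 0; iter < maxIters; iter++) {
                  double maxDiff = 0.0;
                  for (int s = 0; s < xs.length; s++) {
                      double u = w[n];
                      for (int i = 0; i < n; i++) u += w[i] * xs[s][i];
                      double diff = sigmoid(u) - labels[s];      // compare output with the known label
                      maxDiff = Math.max(maxDiff, Math.abs(diff));
                      for (int i = 0; i < n; i++) w[i] -= lr * diff * xs[s][i];
                      w[n] -= lr * diff;                         // adjust the offset b
                  }
                  if (maxDiff < 0.1) break;                      // difference is small: training ends
              }
          }
      }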
  • one or more models can be stored in the mobile phone 100. If multiple models are stored, each model can have a different function. For example, one model is used to classify images into types that users like or dislike, and the other model is to classify images into types suitable for publishing or not suitable for publishing.
  • the process of using the model includes: the mobile phone 100 uses one or more images as input parameters of the model, and the known model parameters (determined during the model training process, for example, w i and b ), run the algorithm related to the model to determine the output result, which is the classification label of the input one or more images.
  • the mobile phone 100 can periodically train a model or use the model to classify images, or the mobile phone 100 can also train the model or use the model to perform image processing when idle (for example, the user has not operated the mobile phone 100 for a long time). Classification is not limited in the embodiment of this application.
  • the following introduces several examples in which the mobile phone 100 uses the AI model to classify pictures into user likes or dislikes.
  • the mobile phone 100 captures three images. As shown in FIG. 9A, the user deletes the first two images and keeps the third image (or the user views the third image more often and the first two images less often, or modifies the third image and does not modify the first two images, etc.). The mobile phone 100 detects the user's different operation behaviors for these three images and can determine that the user likes the third image.
  • the first two images are then used as the positive training set and the third image as the negative training set to train the AI model and obtain the trained model.
  • the trained model can classify pictures with fewer people in the background as pictures that users like, and classify pictures with more people in the background as pictures that users don't like.
  • the mobile phone 100 can input the image into the AI model and run the AI model for calculation. If it is judged that there are fewer people in the background of the picture, the output result is "yes"; if there are more people, the output result is "no". "Yes" indicates that the user likes the picture, and "no" indicates that the user does not like it.
  • the mobile phone 100 displays a logo on the thumbnail of the image.
  • the mobile phone 100 may output a prompt message to prompt the user to delete the picture.
  • the mobile phone 100 captures three images. As shown in FIG. 9B, the user deletes the first two images and keeps the third image.
  • the mobile phone 100 detects the user's different operation behaviors on the three images and can determine that the user likes the third image; it then uses the first two images as the positive training set and the third image as the negative training set to train the AI model and obtain the trained model.
  • the model can classify pictures without watermarks in the images as pictures that users like, and divide pictures with watermarks in the images as pictures that users do not like.
  • after the mobile phone 100 captures an image again, the mobile phone 100 inputs the image into the AI model and runs the AI model for calculation. If it is judged that there is no watermark on the picture, the output result is "yes"; if there is, the output result is "no". "Yes" indicates that the user likes the picture, and "no" indicates that the user does not like it.
  • the mobile phone 100 captures three images. As shown in FIG. 9C, the user deletes the first two images and keeps the third image.
  • the mobile phone 100 detects the user's different operation behaviors on these three images and can determine that the user likes the third image; it then uses the first two images as the positive training set and the third image as the negative training set to train the AI model and obtain the trained model.
  • the trained model can classify the unshaded pictures of the faces of the characters in the images as pictures that the user likes, and divide the pictures with shadows on the faces of the characters in the image as the pictures that the user does not like.
  • after the mobile phone 100 captures an image again, the mobile phone 100 inputs the image into the AI model and runs the AI model for calculation. If it is judged that there is no shadow on the face of the person in the picture, the result "yes" is output; otherwise, the result "no" is output. "Yes" indicates that the user likes the picture, and "no" indicates that the user does not like it.
  • the mobile phone 100 captures three images. As shown in FIG. 9D, the user deletes the first two images and keeps the third image.
  • the mobile phone 100 detects the user's different operation behaviors on the three images and can determine that the user likes the third image; it then uses the first two images as the positive training set and the third image as the negative training set to train the AI model and obtain the trained model.
  • the model can classify pictures with suitable brightness and high definition as pictures that users like, and classify pictures that are too bright or too dark or have low definition as pictures that users do not like.
  • after the mobile phone 100 captures an image again, it inputs the image into the AI model and runs the AI model for calculation. If it is judged that the image has high definition and moderate brightness, the output result is "yes"; if the brightness is too high or too low, or the definition is low, the output result is "no". "Yes" indicates that the user likes the picture, and "no" indicates that the user does not like it.
  • the mobile phone 100 detects the user's different operation behaviors on multiple images and can determine the images that the user likes (for example, it confirms that shared images, retained images, or favorite images are images that the user likes). For example, through AI learning the mobile phone 100 determines that the angle of the face in the images that the user likes is usually 60 degrees to the left.
  • if the mobile phone 100 determines that the angle of the face in an image is 60 degrees to the left, it can recommend to the user to keep the image (or prompt the user that the image can be used for sharing). If the mobile phone 100 determines that the angle of the face in an image is not 60 degrees to the left, the user may be prompted to delete the image.
  • Example 5 is only an example of a face angle in a selfie.
  • the mobile phone 100 can also learn the facial expressions, postures, and the user's own position in a group photo in the pictures that the user likes. Assuming that the facial expression in the pictures that the user likes is smiling and the posture is standing or the position is centered, then after the mobile phone 100 captures an image, if it judges that the facial expression in the picture is smiling and the posture is standing or the position is centered, the mobile phone 100 retains the image (or can prompt the user that the image can be used for sharing, etc.).
  • the mobile phone 100 can learn according to any one or a combination of the above methods, which is not limited in the embodiments of this application.
  • when the mobile phone 100 recognizes a picture "disliked by the user" through the AI model, it can output a prompt message to prompt the user to delete the image, or automatically delete the image; or, after the mobile phone 100 detects that the images have been backed up to the cloud, it can automatically delete all images labeled "disliked by the user", or output a prompt message asking the user whether to delete the images labeled "disliked by the user", or display a control, and when the control is triggered, the mobile phone 100 deletes all the pictures "disliked by the user".
  • the following embodiment introduces a process in which the mobile phone 100 divides an image into "suitable for publishing” and "not suitable for publishing".
  • the mobile phone 100 can use the AI model to determine which images are “suitable for publishing” and which images are “not suitable for publishing.”
  • the mobile phone 100 detects first feature information of the published images among the stored images. When the mobile phone 100 acquires an image, it determines whether the second feature information of the image satisfies the first feature information. If so, the mobile phone 100 adds the label "suitable for publishing" to the image; if not, the mobile phone 100 adds the label "not suitable for publishing" to the image. For example, the mobile phone 100 detects that a landscape image has been published among the stored images; after the mobile phone 100 obtains an image, if the image is a landscape image, the mobile phone 100 adds the label "suitable for publishing" to the image.
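  • a hedged sketch of this feature-matching idea is given below; the single "category" field stands in for the first/second feature information, and the record and method names are hypothetical rather than taken from the disclosure:

      // Hedged sketch (Java): remember feature information of already-published images, then tag a newly
      // acquired image according to whether its features match.
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;

      final class PublishTagger {
          record Image(String id, String category, boolean published) {}

          static Set<String> publishedCategories(List<Image> stored) {
              Set<String> categories = new HashSet<>();
              for (Image img : stored) {
                  if (img.published()) categories.add(img.category());   // first feature information
              }
              return categories;
          }

          static String tagFor(Image newImage, Set<String> publishedCategories) {
              // second feature information satisfies the first -> "suitable for publishing"
              return publishedCategories.contains(newImage.category())
                      ? "suitable for publishing" : "not suitable for publishing";
          }
      }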
  • the label may also be "user likes and is suitable for publishing", “user likes but not suitable for publishing”, “user does not like but suitable for publishing” or “user does not like and is not suitable for publishing” . That is to say, when the label of the image is recognized by the model, multiple classifications of the image can be recognized, which is not limited in the embodiment of the present application.
  • when the mobile phone 100 detects an operation for publishing an image, it recommends images labeled "suitable for publishing" to the user. Taking (b) in FIG. 3A as an example, when the mobile phone 100 detects that the user triggers the "select from gallery" option 304, it displays thumbnails of multiple images, and the thumbnails of some of the images display a mark, where the mark is used to indicate that the image belongs to the image category "suitable for publishing".
  • the mobile phone 100 detects the user's different operation behaviors for these three images, and it can be determined that the first two images are suitable for publishing while the third image has not been shared. The first two images are then used as the positive training set and the third image as the negative training set to perform training and obtain the trained model.
  • the trained model can determine whether an input image meets the conditions suitable for publishing (for example, whether it matches the feature information of the published images). If it does, the image is given the classification label suitable for publishing; if it does not, the image is given the classification label not suitable for publishing.
  • after the mobile phone 100 captures an image again, it inputs the image into the AI model and runs the AI model for calculation. If it is judged that the picture meets the conditions suitable for publishing, the output result is "yes"; if the picture does not meet those conditions, the output result is "no".
  • the mobile phone 100 may divide a moving picture into a plurality of static pictures at a time interval of 100 ms (this value is an example, and is not limited in the embodiment of this application), and input each static picture into the AI model.
  • the output result of each frame is obtained, such as the probability that each frame belongs to the classification label "liked by the user", and the mobile phone 100 selects the frame with the highest probability as the cover image of the moving picture.
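  • the cover-selection step can be sketched as follows; scoreFrame is a hypothetical stand-in for running the AI model on one still frame (extracted, for example, every 100 ms) and returning the probability of the "liked by the user" label:

      // Illustrative sketch (Java): score every still frame extracted from the moving picture and keep
      // the index of the highest-probability frame as the cover image.
      import java.util.List;
      import java.util.function.ToDoubleFunction;

      final class CoverSelector {
          static int selectCoverIndex(List<double[]> frames, ToDoubleFunction<double[]> scoreFrame) {
              int best = 0;
              double bestScore = Double.NEGATIVE_INFINITY;
              for (int i = 0; i < frames.size(); i++) {
                  double p = scoreFrame.applyAsDouble(frames.get(i)); // probability of "liked by the user"
                  if (p > bestScore) { bestScore = p; best = i; }
              }
              return best;                                            // frame used as the cover image
          }
      }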
  • the mobile phone 100 can use the AI model to determine which images are “suitable for sending contacts” and which images are “not suitable for sending contacts”.
  • the mobile phone 100 detects first feature information of the images that have been sent to one or more contacts (which can be any contacts) among all the stored images, and when the mobile phone 100 acquires an image, it determines whether the second feature information of the image satisfies the first feature information; if so, the mobile phone 100 adds the label "suitable for sending contacts" to the image, and if not, the mobile phone 100 adds the label "not suitable for sending contacts" to the image. For example, the mobile phone 100 detects that, among the stored images, images obtained by taking screenshots on the mobile phone have been sent to contacts; after the mobile phone 100 obtains an image, if the image is an image obtained by taking a screenshot on the mobile phone 100, the mobile phone 100 adds the label "suitable for sending contacts" to the image.
  • when the mobile phone 100 detects an operation for sending an image to other contacts, it recommends images labeled "suitable for sending a contact" to the user. Taking (a) in FIG. 4A as an example, when the mobile phone 100 detects that the user triggers the gallery control 403, it displays multiple images, and the thumbnails of some of the images display icons, which are used to indicate that those images are suitable for sending to a contact.
  • the mobile phone 100 can also determine which images are "suitable for sending a specific contact" through an AI model.
  • the specific contact may include a specific type of contact, or a specific contact, and so on.
  • a specific type of contact can be contacts belonging to the same group. Taking the WeChat application as an example, a specific type of contact can be all contacts belonging to the same WeChat chat group in the WeChat application, or all contacts belonging to the same group in the WeChat application, or all contacts whose remark names contain a common word (for example, teacher) in the WeChat application.
  • a specific contact may be a specific contact, and the electronic device can determine whether the contact is a specific contact according to the remark name of the contact.
  • the mobile phone 100 can detect first feature information of the images sent to a specific contact (for example, a parent) among all the stored images, and when the mobile phone 100 acquires an image, it can determine whether the second feature information of the image satisfies the first feature information; if it is satisfied, the mobile phone 100 adds the tag "suitable for sending a specific contact" to the image.
  • an embodiment of the present application provides an image display method, which can be implemented in a mobile phone 100 or other electronic devices as shown in FIG. 1A. As shown in Figure 10, the method may include the following steps:
  • the input operation may be an operation of clicking the "select from gallery" control 304.
  • the image selection interface (also referred to as the image to-be-selected interface) can display thumbnails of multiple images for the user to select, for example, the interface 203 in FIG. 2B, or the interface 305 in FIG. 3B, and so on.
  • the local storage may be a storage inside the electronic device.
  • the image stored in the cloud may be an image stored in a cloud server by an electronic device.
  • the associated group of images includes: a group of images containing the same subject, such as the three images shown in FIG. 9C; and/or, a group of images whose shooting time difference is less than a preset time difference; and/or, a group of images shot at the same location; and/or, a group of images belonging to the same album; and/or, a group of images containing the same content but with different resolutions; and/or, a group of images obtained by applying different retouching methods to the same image.
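  • as an illustrative sketch under assumed criteria drawn from the list above, two images might be treated as associated when their shooting times are within a preset difference, or when they share the same subject or album; the metadata record and its fields are hypothetical:

      // Sketch (Java): decide whether two images belong to the same associated group.
      import java.time.Duration;
      import java.time.Instant;
      import java.util.Objects;

      final class AssociationChecker {
          record Meta(Instant shotAt, String subjectId, String album) {}

          static boolean associated(Meta a, Meta b, Duration maxGap) {
              boolean closeInTime = Duration.between(a.shotAt(), b.shotAt()).abs().compareTo(maxGap) <= 0;
              boolean sameSubject = Objects.equals(a.subjectId(), b.subjectId());
              boolean sameAlbum   = Objects.equals(a.album(), b.album());
              return closeInTime || sameSubject || sameAlbum;
          }
      }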
  • the electronic device may determine the image type of each image according to the user's operation behavior on the image.
  • the electronic device acquires a group of associated images and detects an operation behavior for each image in the group of images; the operation behavior includes one or more of deletion, retention, retouching, image publishing, and sending to a contact; the image type of each image is determined according to the operation behavior; the image type includes an image type suitable for publishing or an image type suitable for sending to other contacts.
  • the electronic device acquires the three images shown in FIG. 9C, and determines that the third image in the images is published on the social platform, and then determines that the third image belongs to the type suitable for image publishing.
  • the electronic device obtains the three images shown in FIG. 9A, determines that the third image in the images is sent to other contacts, and then determines that the third image is of a type suitable for sending other contacts.
  • a possible implementation manner is that the electronic device can determine the operation type of the input operation; according to the operation type, determine at least one image associated with the operation type.
  • the electronic device determines that the input operation is an operation for publishing an image and, according to the operation type, determines at least one image suitable for publishing. In other words, after the electronic device detects the operation for publishing an image, it can display only the thumbnails of the images determined to be suitable for publishing, to facilitate the user's selection.
  • the electronic device determines that the input operation is an operation of communicating with other contacts and, according to the operation type, determines at least one image suitable for sending to the other contacts. In other words, when the electronic device detects an operation for sending an image to other contacts, it may display only the thumbnails of the images suitable for sending to other contacts, to facilitate the user's viewing.
  • the at least one image suitable for publishing may include: at least one image that has been published; it may also include images of the same type as the images that have been published. For example, if at least one image that has been published is a portrait (for example, the person in the image occupies a larger area), then portrait images are suitable for publishing; for another example, if at least one image that has been published is an image with a leg-lengthening effect applied, then the electronic device determines that images processed with the leg-lengthening effect are suitable for publishing.
  • the at least one image suitable for publishing may also include at least one image whose number of retouching reaches a preset number of times.
  • the at least one image suitable for sending to other contacts includes: images that contain the other contacts; and images of the same type as the images once sent to the other contacts. For example, if an image once sent to a contact is a mobile phone screenshot, then images obtained by taking screenshots on the mobile phone are suitable for sending to that contact.
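  • a minimal sketch of mapping the operation type to the recommended images is shown below; the tag strings mirror the labels used above, while the types and method names are illustrative assumptions:

      // Minimal mapping sketch (Java): the detected operation type selects which tag to look for when
      // recommending images from the associated group.
      import java.util.List;
      import java.util.stream.Collectors;

      final class RecommendationByOperation {
          enum OperationType { PUBLISH_IMAGE, SEND_TO_CONTACT }

          record TaggedImage(String id, List<String> tags) {}

          static List<TaggedImage> recommend(OperationType op, List<TaggedImage> group) {
              String wanted = (op == OperationType.PUBLISH_IMAGE)
                      ? "suitable for publishing" : "suitable for sending other contacts";
              return group.stream()
                      .filter(img -> img.tags().contains(wanted))
                      .collect(Collectors.toList());
          }
      }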
  • Another possible implementation manner is that the electronic device determines information related to the application targeted by the input operation; and determines at least one image associated with the related information of the application according to the related information of the application.
  • the electronic device may determine the type or function of the application targeted by the input operation; according to the type or function of the application, determine at least one image that matches the type or function.
  • the application targeted by the input operation is Baihe.com, and the electronic device determines that at least one image matching the application is at least one selfie image.
  • the application targeted by the input operation is a game application, and the electronic device determines that at least one image matching the application is at least one image of a game screen.
  • the electronic device may also determine the historical record of publishing or sharing images with the application targeted by the input operation; and determine at least one image matching the historical record according to the historical record of the application.
  • based on the historical record, the electronic device determines that the at least one image is at least one image in the "mine" album.
  • the electronic device may also determine time information corresponding to the input operation and, according to the time information, determine at least one image that matches the time information.
  • the time information may include date information, time information (for example, 12:10) and so on.
  • the electronic device determines that the current time information is May 1, and the electronic device may determine that at least one image corresponding to the time information is at least one image taken on May 1, or at least one image published or shared on May 1 last year, or at least one image whose content includes "5.1".
  • the associated images of each type of input operation may be preset. For example, for an input operation used to send an image to a contact, the electronic device determines that the associated images of the input operation are specific images, such as images that have been retouched many times; or, for an input operation used for publishing on a social platform, the electronic device determines that the associated images of the input operation are specific images, for example, images with the leg-lengthening effect. Therefore, after the electronic device detects an input operation, at least one image associated with the detected input operation is determined according to the preset images associated with that input operation.
  • the electronic device may also display marking information in the image selection interface, where the marking information is used to mark that the at least one image is related to the input operation.
  • the electronic device displays the mark information on the thumbnail of each of the at least one image, for example displaying the mark on the thumbnail 204 and the thumbnail 206 in FIG. 2A; or, the mark information is displayed in an area of the image selection interface in which the at least one image is not displayed.
  • the mark information includes one or more of icons, text, and pictures; or, displaying the mark information on the thumbnail of each of the at least one image includes: highlighting the edge of the thumbnail of each of the at least one image. For example, the edges of the thumbnail 306 and the thumbnail 308 in FIG. 3B are highlighted.
  • the electronic device can read or load all the images of the associated group of images from local storage or cloud storage, determine from all the images of the group at least one image related to the input operation, and then display the thumbnails of the at least one image on the image selection interface without displaying the thumbnails of the other images on the image selection interface.
  • the mobile phone can download all the images in the associated group of images from cloud storage, and only display the thumbnails of the two images related to the input operation in the interface 305.
  • when the mobile phone detects an operation on the "see more" control, it can display the thumbnails of the images other than the two images.
  • the electronic device can read or load, from local storage or cloud storage, the at least one image associated with the input operation and display the thumbnails of the at least one image on the image selection interface; the other images in the group, excluding the at least one image, are not read or loaded from the local storage or cloud storage.
  • the mobile phone may only download two images related to the input operation from the cloud storage, and then display the thumbnails of the two images in the interface 305.
  • when the phone detects an operation on the "see more" control, it can download the other images from cloud storage and display their thumbnails.
  • the electronic device may preload only the thumbnails of the at least one image from the local storage or cloud storage, without preloading the other images in the group (excluding the at least one image); then, the thumbnails of the at least one image are displayed in the image selection interface.
  • the mobile phone can preload only the thumbnails of the two images related to the input operation from the cloud storage, and there is no need to completely download the original images of the two images (the definition of a thumbnail is lower than that of the original image), and there is no need to preload the thumbnails of the other images.
  • the mobile phone can display the thumbnails of the two images in the interface 305. When the mobile phone detects an operation on the "see more" control, it can preload the thumbnails of the other images from cloud storage and display them.
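  • the third loading strategy (preloading only the thumbnails of the related images) can be sketched as follows; the Store interface is a hypothetical placeholder for local-storage or cloud-storage access, and full originals or other thumbnails would only be fetched when the "see more" control is triggered:

      // Sketch (Java) of the thumbnail-preload strategy: only the thumbnails of the images related to the
      // input operation are fetched; other images are loaded later.
      import java.util.ArrayList;
      import java.util.List;

      final class SelectionInterfaceLoader {
          interface Store { byte[] loadThumbnail(String imageId); }

          static List<byte[]> preloadRelatedThumbnails(List<String> relatedImageIds, Store store) {
              List<byte[]> thumbnails = new ArrayList<>();
              for (String id : relatedImageIds) {
                  thumbnails.add(store.loadThumbnail(id)); // thumbnail only, not the full original
              }
              return thumbnails;                           // shown in the image selection interface
          }
      }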
  • a first operation is detected, and the first operation is used to select a first thumbnail on the image selection interface.
  • the input operation may be an operation for publishing an image. Then, after the electronic device selects the first thumbnail, it can perform an image publishing process on the first thumbnail. Taking a microblog application as an example, the image publishing process may include the electronic device sending the image to a server corresponding to the microblog application, so as to publish the image to the microblog platform through the server.
  • the input operation may be an operation for sending an image to a contact. Then, after the electronic device selects the first thumbnail, the image corresponding to the first thumbnail may be sent to the contact.
  • the method provided in the embodiments of the present application is introduced from the perspective of the electronic device (mobile phone 100) as the execution subject.
  • the terminal device may include a hardware structure and/or software module, and realize the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above-mentioned functions is executed in a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraint conditions of the technical solution.
  • the term "when" can be interpreted as meaning "if", "after", "in response to determining", or "in response to detecting".
  • the phrase "when determining ..." or "if (a stated condition or event) is detected" can be interpreted as meaning "if it is determined ...", "in response to determining ...", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

An image display method and an electronic device. The method can be applied to fields such as artificial intelligence (AI) and human-computer interaction. The method includes: detecting an input operation (S1001); in response to the input operation, displaying an image selection interface on a display screen (S1002); determining, from an associated group of images in local storage or cloud storage, at least one image related to the input operation (S1003); displaying thumbnails of the at least one image in the image selection interface and hiding the remaining images (S1004); detecting a first operation for selecting a first thumbnail in the image selection interface (S1005); and performing, on the first thumbnail, a processing procedure corresponding to the input operation (S1006). With this method, the user does not need to select a target image from a massive number of images, which makes the operation convenient.

Description

Image display method and electronic device
Cross-reference to related application
This application claims priority to the Chinese patent application No. 201910683677.2, filed with the Chinese Patent Office on July 26, 2019 and entitled "Image display method and electronic device", which is incorporated herein by reference in its entirety.
Technical field
This application relates to the field of terminal technologies, and in particular, to an image display method and an electronic device.
Background
With the progress of terminal technologies, the functions of electronic devices have gradually improved. Taking a mobile phone as an example, image shooting is one of the functions most frequently used by users. Therefore, a large number of pictures may be stored in the mobile phone.
However, if the user wants to perform an operation on a certain picture (for example, to share or delete it), the user needs to manually find the picture among a massive number of photos; the operation is cumbersome, and the user experience is poor.
Summary
This application provides an image display method and an electronic device. The method can help the user quickly locate a target image, which makes the operation convenient.
According to a first aspect, an embodiment of this application provides an image display method, which may be performed by an electronic device. The method includes: detecting an input operation; in response to the input operation, displaying an image selection interface on a display screen; determining, from an associated group of images in local storage or cloud storage, at least one image related to the input operation; displaying thumbnails of the at least one image in the image selection interface and hiding the remaining images; detecting a first operation of selecting a first thumbnail in the image selection interface; and performing, on the first thumbnail, a processing procedure corresponding to the input operation.
In some embodiments, the electronic device may determine, from a group of images according to the input operation, at least one image associated with the input operation. After the electronic device detects that the user selects a target image from the at least one image, it may perform the processing procedure corresponding to the input operation on the target image. In this method, the electronic device can screen out, from a massive number of images, images that meet the condition (that is, images related to the input operation), and the user can then look for the target image among the images that the electronic device has already screened out, which makes the operation convenient and improves the user experience.
在一种可能的设计中,所述隐藏其余图像,包括:隐藏所述一组图像中除去所述至少一张图像中的其它图像。
在一些实施例中,电子设备可以根据输入操作,从一组图像中确定与输入操作相关联的至少一张图像。电子设备显示所述至少一张图像时,可以隐藏所述一组图像中的其它图像,方便用户查看,可以帮助用户快速定位目标图片,方便用户操作。
在一种可能的设计中,电子设备还可以显示标记信息,所述标记信息用于标记所述至少一张图像与所述输入操作相关。
在一些实施例中,电子设备可以显示标记信息,以帮助用户快速定位目标图片,方便 用户操作。
在一种可能的设计中,所述显示标记信息,包括:在所述至少一张图像中每张图像的缩略图上显示所述标记信息;或者,在所述图像选择界面中不显示所述至少一张图像的区域显示所述标记信息。
应理解,电子设备可以以任何形式显示标记信息,只要该标记信息能够表征所述至少一张图像与所述输入操作相关即可,本申请实施例不作限定。
在一种可能的设计中,所述标记信息包括图标、文字、图片中的一种或多种;或者,在所述至少一张图像中每张图像的缩略图上显示所述标记信息,包括:所述至少一张图像中每张图像的缩略图的边缘突出显示。
应理解,上述仅是列举了几种标识信息的示例,并非限定。
在一种可能的设计中,所述相关联的一组图像包括:包含相同拍摄对象的一组图像;和/或,拍摄时间差小于预设时间差的一组图像;和/或,拍摄地点处于同一地点的一组图像;和/或,属于同一相册的一组图像,和/或,包含相同内容但分辨率不同的一组图像;和/或,针对同一张图像经过不同修图方式后得到一组图像。
应理解,上述关于一组图像的描述,仅是举例,不是限定。
在一种可能的设计中，在所述检测输入操作之前，电子设备还可以预先设置每种类型的输入操作的关联图像。
在一些实施例中,电子设备可以事先设置好每种类型的输入操作的关联图像,这样的话,电子设备检测到输入操作之后,可以确定与该输入操作对应的至少一张图像。通过该方法,用户无需从海量的图像中查找目标图像,方便用户操作,提升用户体验。
在一种可能的设计中,所述输入操作是用于发布图像的操作,对所述第一缩略图执行与所述输入操作相对应的处理流程,包括:对所述第一缩略图对应的图像执行图像发布流程;或者所述输入操作是用于向联系人发送图像的操作,对所述第一缩略图执行与所述输入操作相对应的处理流程,包括:将所述第一缩略图对应的图像发送所述联系人。
在一些实施例中，电子设备检测到用于发布图像的操作时，显示与该发布操作相关的至少一张图像的缩略图。当电子设备检测到用户从至少一张图像的缩略图中选择第一缩略图的操作后，可以对第一缩略图对应的图像执行发布流程。或者，电子设备检测到用于将图像发送给联系人的操作时，显示与该操作相关的至少一张图像的缩略图。当电子设备检测到用户从至少一张图像的缩略图中选择第一缩略图的操作后，可以将第一缩略图对应的图像发送给联系人。通过该方法，电子设备可以基于输入操作，筛选出与输入操作相关的图像，即电子设备可以实现从海量的图像中筛选出较为符合条件（与输入操作相关）的图像，然后，用户可以从电子设备已经筛选出的图像中选择目标图像，方便用户操作，提升用户体验。
在一种可能的设计中,所述从本地存储器或云存储的相关联的一组图像中确定与所述输入操作相关的至少一张图像,包括:确定所述输入操作的操作类型;根据所述操作类型,确定与所述操作类型相关联的至少一张图像。
举例来说,输入操作的操作类型是图像发布时,电子设备确定适合发布的图像。再例如,输入操作的操作类型是图像分享给联系人时,电子设备确定适合分享联系人的图像。
在一种可能的设计中,所述确定所述输入操作的操作类型,包括:确定所述输入操作是用于发布图像的操作;根据所述操作类型,确定与所述操作类型相关联的至少一张图像,包括:根据所述操作类型,确定适合发布的至少一张图像。
在一些实施例中，电子设备可以从较多图像中确定适合发布的至少一张图像，用户无需从海量的图像中查找目标图像，方便用户操作，提升用户体验。
在一种可能的设计中,所述确定所述输入操作的操作类型,包括:确定所述输入操作是与其他联系人通信的操作;根据所述操作类型,确定与所述操作类型相关联的至少一张图像,包括:根据所述操作类型,确定适合发送其它联系人的至少一张图像。
在一些实施例中,电子设备可以从较多图像中确定适合分享联系人的至少一张图像,用户无需从海量的图像中查找目标图像,方便用户操作,提升用户体验。
在一种可能的设计中,所述适合发布的至少一张图像,包括:与曾经发布过的图像属于同一类型的图像;和/或修图次数达到预设次数的至少一张图像。
应理解,以上仅是适合发布的图像的举例,不是限定,在实际应用中,电子设备还可以通过其他方式确定哪些图像是适合发布的图像。
在一种可能的设计中,所述适合发送其它联系人的至少一张图像,包括:图像中包括所述其它联系人的图像;和/或与曾经发送给所述其它联系人的图像属于同一类型的图像。
应理解,以上仅是适合发布的图像的举例,不是限定,在实际应用中,电子设备还可以通过其他方式确定哪些图像是适合分享给联系人的图像。
在一种可能的设计中,所述从本地存储器或云存储的相关联的一组图像中确定与所述输入操作相关的至少一张图像,包括:确定与所述输入操作所针对的应用的相关信息;根据所述应用的相关信息,确定与所述应用的相关信息相关联的至少一张图像。
在一些实施例中,电子设备可以根据应用的相关信息,确定与所述应用的相关信息相关联的至少一张图像。通过该方法,用户无需从海量的图像中查找目标图像,方便用户操作,提升用户体验。
在一种可能的设计中,确定与所述输入操作所针对的应用的相关信息,包括:确定与所述输入操作所针对的应用的类型或功能;根据所述应用的相关信息,确定与所述应用的相关信息相关联的至少一张图像,包括:根据所述应用的类型或功能,确定与所述类型或功能相匹配的至少一张图像。
在一些实施例中,电子设备可以根据应用的类型或功能确定至少一张图像。通过该方法,用户无需从海量的图像中查找目标图像,方便用户操作,提升用户体验。
在一种可能的设计中,确定与所述输入操作所针对的应用的相关信息,包括:确定与所述输入操作所针对的应用发布或分享图像的历史记录;根据所述应用的相关信息,确定与所述应用的相关信息相关联的至少一张图像,包括:根据所述应用的历史记录,确定与所述历史记录相匹配的至少一张图像。
在一些实施例中,电子设备可以根据应用的历史记录,确定至少一张图像。通过该方法,用户无需从海量的图像中查找目标图像,方便用户操作,提升用户体验。
在一种可能的设计中，所述从本地存储器或云存储的相关联的一组图像中确定与所述输入操作相关的至少一张图像，包括：确定与所述输入操作对应的时间信息；根据所述时间信息，确定与所述时间信息匹配的至少一张图像。
在一些实施例中,电子设备可以根据输入操作的时间信息,确定至少一张图像。通过该方法,用户无需从海量的图像中查找目标图像,方便用户操作,提升用户体验。
在一种可能的设计中，从本地存储器或者云存储的相关联的一组图像中确定与所述输入操作相关的至少一张图像，包括：从所述本地存储器或云存储读取或加载所述相关联的一组图像中的全部图像；从所述一组图像的全部图像中确定与所述输入操作相关的至少一张图像；在所述图像选择界面中显示至少一张图像的缩略图，并隐藏其余图像，包括：将所述至少一张图像的缩略图显示于所述图像选择界面，不在所述图像选择界面中显示所述其它图像的缩略图。
在一些实施例中,电子设备可以从本地存储器或云存储读取一组图像中所有图像,然后从读取的所有图像中选择至少一张图像。电子设备可以仅在图像选择界面中显示选择出的至少一张图像的缩略图,不显示其它图像的缩略图,例如可以丢弃其它图像。
在一种可能的设计中,在所述图像选择界面中显示至少一张图像的缩略图,并隐藏其余图像,包括:从所述本地存储器或云存储中读取或加载所述至少一张图像,将所述至少一张图像的缩略图显示于所述图像选择界面;未从所述本地存储器或云存储中读取或加载所述一组图像中除去所述至少一张图像之外的其它图像。
在一些实施例中,电子设备可以仅从本地存储器或云存储读取与输入操作相关的至少一张图像,不读取其它图像。因此,电子设备可以仅在图像选择界面中显示读取出的至少一张图像的缩略图,不显示其它图像的缩略图。
在一种可能的设计中，在所述图像选择界面中显示至少一张图像的缩略图，并隐藏其余图像，包括：从所述本地存储器或云存储预加载所述至少一张图像的缩略图，未预加载所述一组图像中除去所述至少一张图像之外的其它图像的缩略图；在所述图像选择界面中显示至少一张图像的缩略图。
在一些实施例中,电子设备可以不完全加载任何图像,而是预加载与输入操作相关的至少一张图像的缩略图,无需预加载其它图像的缩略图。因此,电子设备可以在图像选择界面中显示预加载的至少一张图像的缩略图,不显示其它图像的缩略图。
第二方面,本申请实施例还提供一种电子设备。该电子设备包括显示屏,至少一个处理器和存储器;所述存储器用于存储一个或多个计算机程序;当所述存储器存储的一个或多个计算机程序被所述至少一个处理器执行时,使得所述电子设备能够实现上述第一方面及其第一方面任一可能设计的技术方案。
第三方面,本申请实施例还提供了一种电子设备,所述电子设备包括执行上述第一方面或者第一方面的任意一种可能的设计的方法的模块/单元;这些模块/单元可以通过硬件实现,也可以通过硬件执行相应的软件实现。
第四方面,本申请实施例还提供一种芯片,所述芯片与电子设备中的存储器耦合,用于调用存储器中存储的计算机程序并执行本申请实施例第一方面及其第一方面任一可能设计的技术方案;本申请实施例中“耦合”是指两个部件彼此直接或间接地结合。
第五方面,本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质包括计算机程序,当计算机程序在电子设备上运行时,使得所述电子设备执行本申请实施例第一方面及其第一方面任一可能设计的技术方案。
第六方面，本申请实施例还提供一种程序产品，包括指令，当所述程序产品在电子设备上运行时，使得所述电子设备执行本申请实施例第一方面及其第一方面任一可能设计的技术方案。
附图说明
图1A为本申请一实施例提供的手机100的硬件结构示意图;
图1B为本申请一实施例提供的手机100的软件结构示意图;
图2A为本申请一实施例提供的手机100的用户图形界面的示意图;
图2B为本申请一实施例提供的手机100的用户图形界面的示意图;
图3A为本申请一实施例提供的手机100的用户图形界面的示意图;
图3B为本申请一实施例提供的手机100的用户图形界面的示意图;
图4A为本申请一实施例提供的手机100的用户图形界面的示意图;
图4B为本申请一实施例提供的手机100的用户图形界面的示意图;
图5A为本申请一实施例提供的手机100的用户图形界面的示意图;
图5B为本申请一实施例提供的手机100的用户图形界面的示意图;
图6为本申请一实施例提供的一种图像分类方法的流程的示意图;
图7为本申请一实施例提供的一种模型的示意图;
图8为本申请一实施例提供的一种模型训练的流程的示意图;
图9A为本申请一实施例提供的手机100的用户图形界面的示意图;
图9B为本申请一实施例提供的手机100的用户图形界面的示意图;
图9C为本申请一实施例提供的手机100的用户图形界面的示意图;
图9D为本申请一实施例提供的手机100的用户图形界面的示意图;
图10为本申请一实施例提供的图像显示方法的流程示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。显然,所描述的实施例仅仅是本申请一部分实施例,并不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
以下,对本申请实施例中的部分用语进行解释说明,以便于本领域技术人员理解。
本申请实施例涉及的应用程序(application,简称app),为能够实现某项或多项特定功能的软件程序。通常,终端中可以安装多个应用程序。比如,相机应用、图库应用、短信应用、彩信应用、各种邮箱应用、微信、腾讯聊天软件(QQ)、WhatsApp Messenger、连我(Line)、照片分享(instagram)、Kakao Talk、钉钉等。下文中提到的应用程序,可以是终端出厂时已安装的应用程序,也可以是用户在使用终端的过程中从网络下载或其他终端获取的应用程序。
本申请实施例涉及的社交应用(或称为社交平台),能够实现内容(比如图片、文字)分享的应用程序。比如脸书(facebook),推特(twitter),微博,微信,instagram、知乎、linkedin、豆瓣、天涯、小红书等。
本申请实施例涉及的图像选择界面(也可以称为图像待选界面),可以显示多张图像的缩略图以供用户选择的界面,比如,下文中的图2B中的界面203,或者图3B中的界面305等。
本申请实施例涉及的缩略图,为了方便用户浏览图像,或者显示更多的图像,而制作的一张图像的不完全的图像。其中,不完全可以是对一张图像压缩后得到的图像,或者,将一张图像的尺寸缩小之后得到的图像,或者,通过采样一张图像上的部分像素点得到的图像,或者仅显示一张图像上的部分内容的图像,或者,存储在云端的图像,而在本地只 能显示该图像的模糊的轮廓(未从云端下载的图像)。比如,图2B中的界面203,或者图3B中的界面305等中可以显示缩略图,用户从缩略图中选择图像。
本申请实施例涉及的多个,是指大于或等于两个。
需要说明的是,本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,如无特殊说明,一般表示前后关联对象是一种“或”的关系。且在本申请实施例的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。
以下介绍电子设备、用于这样的电子设备的图形用户界面（graphical user interface，GUI）、和用于使用这样的电子设备的实施例。在本申请一些实施例中，电子设备可以是包含显示屏的便携式终端，诸如手机、平板电脑等。便携式电子设备的示例性实施例包括但不限于搭载特定操作系统（原文以图形商标示出，此处从略）或者其它操作系统的便携式电子设备。上述便携式电子设备也可以是其它便携式电子设备，例如数码相机。还应当理解的是，在本申请其他一些实施例中，上述电子设备也可以不是便携式电子设备，而是具有显示屏的台式计算机等。
通常情况下,电子设备可以支持多种应用。比如以下应用中的一个或多个:相机应用、即时消息收发应用、照片管理应用等。其中,即时消息收发应用可以有多种。比如微信(Wechat)、微博、腾讯聊天软件(QQ)、WhatsApp Messenger、连我(Line)、照片分享(Instagram)、Kakao Talk、钉钉等。用户通过即时消息收发应用,可以将文字、语音、图片、视频文件以及其他各种文件等信息发送给其他联系人(或其它联系人);或者,用户可以通过即时消息收发应用实现与其他联系人的视频或音频通话。
下文以电子设备是手机为例,图1A示出了手机100的结构示意图。
手机100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是手机100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110 中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
处理器110可以运行本申请实施例提供的图像分享算法的软件代码，实现图像分享过程。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为手机100充电,也可以用于手机100与外围设备之间传输数据。
充电管理模块140用于从充电器接收充电输入。电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。
手机100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。手机100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在手机100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以提供应用在手机100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,手机100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得手机100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包 括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
手机100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像，视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏（liquid crystal display，LCD），有机发光二极管（organic light-emitting diode，OLED），有源矩阵有机发光二极体或主动矩阵有机发光二极体（active-matrix organic light emitting diode，AMOLED），柔性发光二极管（flex light-emitting diode，FLED），Mini LED，Micro LED，Micro OLED，量子点发光二极管（quantum dot light emitting diodes，QLED）等。在一些实施例中，手机100可以包括1个或N个显示屏194，N为大于1的正整数。
摄像头193用于捕获静态图像或视频。摄像头193可以包括前置摄像头和后置摄像头。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行手机100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,以及至少一个应用程序(比如相机应用,微信应用等)的软件代码等。存储数据区可存储手机100使用过程中所产生的数据(比如图像、视频等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
内部存储器121还可以存储本申请实施例提供的图像分享方法的软件代码,当处理器110运行所述软件代码时,执行图像分享方法的流程步骤,实现图像分享过程。
内部存储器121还可以存储拍摄得到的图像、模型、图片的分类标签等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展手机100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
当然,本申请实施例提供的图像分享方法的软件代码也可以存储在外部存储器中,处理器110可以通过外部存储器接口120运行所述软件代码,执行图像分享方法的流程步骤,实现图像分享过程。手机100拍摄得到的图像、模型、图片的分类标签等也可以存储在外部存储器中。
应理解,用户可以指定将图像存储在内部存储器121还是外部存储器中。比如,手机100当前与外部存储器连接时,若手机100拍摄得到一张图像时,可以弹出提示信息,以提示用户将图像存储在外部存储器还是内部存储器121;当然,还有其它的指定方式,本申请实施例不作限定;或者,手机100检测到内部存储器121的内存量小于预设量时,可以自动将图像存储在外部存储器中。
手机100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。
陀螺仪传感器180B可以用于确定手机100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定手机100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。
气压传感器180C用于测量气压。在一些实施例中,手机100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。手机100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当手机100是翻盖机时,手机100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测手机100在各个方向上(一般为三轴)加速度的大小。当手机100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。手机100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,手机100可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。手机100通过发光二极管向外发射红外光。手机100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定手机100附近有物体。当检测到不充分的反射光时,手机100可以确定手机100附近没有物体。手机100可以利用接近光传感器180G检测用户手持手机100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。手机100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测手机100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。手机100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,手机100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,手机100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,手机100对电池142加热,以避免低温导致手机100异常关机。在其他一些实施例中,当温度低于又一阈值时,手机100对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于手机100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。手机100可以接收按键输入,产生与手机100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和手机100的接触和分离。
可以理解的是,本申请实施例示意的结构并不构成对手机100的具体限定。在本申请另一些实施例中,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
需要说明的是,现有技术中,手机100中会存储海量的图像,当手机100检测到某个输入操作(比如用于发布图像的操作,或者用于将图像发送给其他联系人的操作)时,会显示所有图像的缩略图,用户需要从海量的缩略图中选择目标图像的缩略图,操作繁琐。另外,缩略图通常无法清楚的显示图像内容,所以用户凭借肉眼无法准确的选择出目标图像的缩略图,通常需要点击某张图像的缩略图,即显示该张图像,然后左滑或右滑该图像,以显示其他图像,最终选择出目标图像,操作繁琐,用户体验较低。
在本申请实施例中,手机100可以分析用户对图像的操作行为,根据该操作行为将图像分成不同的图像种类。比如,分成“用户喜欢”和“用户不喜欢”的图像种类;或者,分成“适合发布”和“不适合发布”的图像种类;或者,分成“适合发送其他联系人”和“不适合发送其他联系人”的图像种类等等。当手机100检测到用于发布图像的操作时,可以将“用户喜欢”的图像或“适合发布”的图像推荐给用户;当手机100检测到用于将图像发送给其他联系人的操作时,可以将“用户喜欢”的图像或“适合发送其他联系人”图像推荐给用户。因此,手机100可以根据用户的输入操作,推荐与该输入操作相关的图像,无需在海量的图像中寻找图像,方便用户操作。
在一些实施例中,手机100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。以下实施例以分层架构的安卓(android)系统为例,示例性说明手机100的软件结构。
图1B是本申请实施例提供的手机100的软件结构框图。分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,android系统可以分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(android runtime)和系统库,以及内核层。应用程序层可以包括一系列应用程序包。如图1B所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图1B所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。窗口管理器用于管理窗口程序。窗口管理器可以获 取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。电话管理器用于提供手机100的通信功能。例如通话状态的管理(包括接通,挂断等)。资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
android runtime包括核心库和虚拟机。android runtime负责安卓系统的调度和管理。核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
在本申请实施例中,参见图1B,系统库中还可以包括图像处理库。在检测到针对图像的操作行为时,可以对图像进行分类。比如,在检测到针对相关联的一组图像中至少一张图像的行为操作时,对所述至少一张图像分类。当再次检测到输入操作时,图像处理库可以从相关联的一组图像中确定与该输入操作相关的至少一张图像,并推荐该至少一张图像。
为了便于理解,本申请以下实施例将以具有图1A和图1B所示结构的手机为例,结合附图对本申请实施例提供的图像显示方法进行具体阐述。下面分不同的应用场景介绍本申请技术方案的实现过程。
场景1-图库应用。
参见图2A中的(a)所示,手机100显示主界面201,主界面201中包括多个应用(相机应用、图库应用、微信应用等)的应用图标,当手机100检测到用户触发图库应用的图标202的操作时,手机100显示图库应用的界面203,如图2A中的(b)所示。
参见图2A中的(b)所示,手机100显示的图库应用的界面203,界面203中包括手机100存储的图片的缩略图,在图2A中的(b)中,以3张图片的缩略图为例。
一种示例中，“用户喜欢”的图片的缩略图上显示一标记，该标记用于表征该图片是“用户喜欢”的图像类型，即，缩略图上包含标记的图片是“用户喜欢”的图片类型，不包含标记的图片是“用户不喜欢”的图片类型。参见图2A中的(b)所示，图片204上显示标记207、图片206上显示标记208，即，图片204和图片206是“用户喜欢”的图片类型。
示例性的,当手机100检测到用户触发标记207的操作时,显示提示信息,该提示信息用于提示用户图片204是用户喜欢的图片,参见图2A中的(c)所示,手机100还可以显示确定控件和取消控件,当确定控件被触发时,手机100确定图片204是用户喜欢的类型,当取消控件被触发时,手机100确定图片204不是用户喜欢的类型,则取消在图片204上显示标记207。
另一些示例中,参见图2B中的(a)所示,“用户喜欢”的图片的缩略图的边缘是加粗,其它图像的缩略图的边缘未加粗;又一些示例中,“用户喜欢”的图片的缩略图的尺寸较大,而其它图片的缩略图的尺寸较小,比如,参见图2B中的(b)所示,缩略图204和缩略图206的尺寸大于缩略图205的尺寸。又一些示例中,参见图2B中的(c)所示,图库应用的界面203中包括“图片分类”的控件210,当控件210被触发时,手机100显示多个选项,参见图2B中的(d)所示,手机100显示“用户喜欢的图片”的选项211,“收藏的图片”选项212、“适合发布的图片”选项213,“适合发送其他联系人的图片”选项214。假设选项211被选中时,手机100仅显示“用户喜欢”的图片,参见图2B中的(d)所示,仅显示图片204和图片206。
在场景1中,在图库应用的界面中,不同种类的图像显示不同的标识信息,方便用户快速查找图像,而且,图像的种类是手机100根据用户对图像的操作行为划分的,所以图像种类的划分符合用户的操作习惯,有助于提升用户体验。
场景2-社交平台。
参见图3A中的(a)所示,手机100显示微信应用的界面301,在界面301中包括控件302,当控件302被触发时,手机100显示拍摄选项303和从图库中选择图片选项304,参见图3A中的(b)所示。
当手机100检测到从图库中选择图片选项304被选中时,手机100显示图片待选界面305,如图3A中的(c)所示。图片待选界面305中包括一张或多张图片。
作为一种示例，参见图3A中的(c)所示，图片待选界面305中包括手机100中图库中的多张图片的缩略图，其中，“用户喜欢”的图像的缩略图上可以显示标记，缩略图上不包含标记的图片是“用户不喜欢”的图片。或者，图片待选界面305中包括手机100中图库中的多张图片的缩略图，其中，“适合发布”的图片的缩略图上显示标记，“不适合发布”的图片的缩略图上不显示标记。或者，图片待选界面305中包括手机100中图库中的多张图片的缩略图，其中，“用户喜欢”的图片的缩略图上显示第一标记，“适合发布”的图片的缩略图上显示第二标记，第一标记和第二标记不同。
如图3A中的(c)所示,图片306和图片308的缩略图上显示标记,即图片306和图片308是“用户喜欢”的图片或是“适合发布”的图片。当手机100检测到图片308被选中时,显示如图3A中的(d)所示的界面。
作为另一些示例,参见图3B中的(a)所示,图片待选界面305中,“用户喜欢”的 图片或者“适合发布”的图片的缩略图(比如图片306和图片308)相对于其它图片(比如图片307)的缩略图的尺寸较大。
再比如,参见图3B中的(b),图片待选界面305中,“用户喜欢”的图片或者“适合发布”的图片的缩略图(图片306和图片308)的边缘加粗。
再比如,参见图3B中的(c)所示,图片待选界面305仅包括2张图片的缩略图,这2张图片是手机100从图库中的海量图片中选择出的“用户喜欢”的图片和/或“适合发布”的图片,在该示例中,图片待选界面305中只包括推荐的图片,不包括不推荐的图片,例如,可以隐藏不推荐的图片。当然,继续参见图3B中的(c)所示,图片待选界面305中包括“查看更多”控件,当手机100检测到该控件被触发时,手机100显示更多的缩略图。
再比如，参见图3B中的(d)所示，图片待选界面305中包括“图片类型”控件，当该控件被触发时，手机100显示多个选项，即“用户喜欢的图片”的选项311，“收藏的图片”选项312、“适合发布的图片”选项313，“适合发送其他联系人的图片”选项314。假设选项311被选中时，手机100仅显示“用户喜欢”的图片，参见图3B中的(d)所示，手机100仅显示图片306和图片308。
再比如,参见图3B中的(e)所示,图片待选界面305中不同区域显示不同类型的图像。第一区域显示用户喜欢的图像。第二区域显示适合发布的图像。在一些实施例中,用户喜欢的图像和适合发布的图像中可以存在相同的图像。
在图3A和图3B中,以微信朋友圈为例,在实际应用中,对于其它社交平台(比如微博应用、小红书、facebook、推特等),也可以采用类似的方式,不再赘述。
在场景2中,手机100检测到用于在社交平台发布图像的操作时,可以显示“适合发布”的图像或者“用户喜欢”的图像,或者显示不同种类的图像显示不同的标识信息(比如“适合发布”的图像显示第一标识,“用户喜欢”的图像显示第二标识),方便用户快速查找图像,而且,图像的种类是手机100根据用户对图像的操作行为划分的,所以图像种类的划分符合用户的操作习惯,有助于提升用户体验。
场景3-即时通信应用。
参见图4A中的(a)所示,手机100显示短信应用的一个界面401,该界面401是用户与其他联系人的通信界面。界面401中包括控件402,当手机100检测到控件402被触发时,显示图库控件403和拍摄控件404。
作为一种示例,当手机100检测到用户触发图库控件403时,显示如图4A中的(b)所示的界面405,界面405中包括多张图片的缩略图,其中图片406上设置标记407,用于表征图片406是“用户喜欢”的图片,图片411上设置标记412,用于表征图片411是“用户喜欢”的图片。或者,标记407用于表征图片406是“适合发送其他联系人”的图片,标记412用于表征图片411是“适合发送其他联系人”的图片。或者,图片406和图片411上显示不同的标记,假设图片406上显示第一标记,图片411上显示第二标记,第一标记用于表征图片406是“用户喜欢”的图片,第二标记用于表征图片411是“适合发送其他联系人”的图片。假设手机100检测到图片406被选中,然后检测到用户触发“发送”控件408的操作,则手机100将图片406发送给联系人,手机100显示如图4A中的(c)所示的界面409,界面409中图片410上不显示标记。
在一些实施例中，手机100可以确定“适合发送给特定联系人”的图像类型。其中，特定联系人可以包括特定类型的联系人，或者某个特定的联系人等等。其中特定类型的联系人可以是属于同一群组/分组的联系人，以微信应用为例，特定类型的联系人可以是微信应用中属于同一微信聊天群内的所有联系人，或者，微信应用中属于同一分组（例如家人）的所有联系人，或者微信应用中备注名中具有共同词（例如，老师）的所有联系人。某个特定的联系人可以是具体的联系人，电子设备可以根据联系人的备注名确定该联系人是否是特定的联系人。例如，特定联系人可以包括联系人备注是“爸爸”、联系人备注是“妈妈”等的联系人。例如，手机100学习到将某些图像（比如风景图，或者人像图）发送给特定联系人的次数较多，则确定这些图像属于“适合发送给特定联系人”。当手机100显示与特定联系人的聊天界面（比如，微信的聊天界面，或者短信的聊天界面）时，手机100检测到用于发送图像的操作时，显示适合发送给特定联系人的图像。再例如，手机100检测到将风景图像发送给联系人备注是“爸爸”或“妈妈”的次数较多，手机100确定适合发给“爸爸”或“妈妈”的图像是风景图，或者，学习到将人物图像发送给联系人备注是“Amy”的次数较多，手机100确定适合发送给Amy的图像是人像图。当手机100显示与“爸爸”或“妈妈”或者包含“爸爸”和/或“妈妈”的群的聊天界面时，手机100检测到用于发送图像的操作时，可以显示适合发送给“爸爸”或“妈妈”的图像的缩略图。
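作为一种示意（其中的字段名与次数阈值均为假设，并非对本申请实施例的限定），下面的Python代码示意统计发送给各联系人的图像类别、从而学习“适合发送给特定联系人”的图像类型的一种可能实现：

```python
from collections import Counter, defaultdict

# 示意性代码：统计发送给每个联系人的图像类别，推断“适合发送给特定联系人”的类别（阈值为假设）
def learn_contact_preferences(send_history, min_count=3):
    """send_history 为若干条发送记录，例如 {"contact": "爸爸", "category": "风景"}"""
    stats = defaultdict(Counter)
    for rec in send_history:
        stats[rec["contact"]][rec["category"]] += 1
    prefers = {}
    for contact, counter in stats.items():
        category, count = counter.most_common(1)[0]   # 该联系人收到最多的图像类别
        if count >= min_count:
            prefers[contact] = category
    return prefers

history = [{"contact": "爸爸", "category": "风景"}] * 4 + [{"contact": "Amy", "category": "人像"}] * 3
print(learn_contact_preferences(history))   # 例如输出 {'爸爸': '风景', 'Amy': '人像'}
```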
作为另一些示例,参见图4B中的(a)所示,界面405中“用户喜欢”的图片或者“适合发送其他联系人”的图片的缩略图(比如图片406和图片411)相对于其它图片的缩略图的尺寸较大。
再比如，参见图4B中的(b)，界面405中“用户喜欢”的图片或者“适合发送其他联系人”的图片的缩略图(图片406和图片411)的边缘加粗，其它图片的边缘未加粗。
再比如，参见图4B中的(c)所示，界面405仅包括2张图片的缩略图，这2张图片是手机100从图库中的海量图片中选择出的“用户喜欢”的图片和/或“适合发送其他联系人”的图片，在该示例中，界面405不包括其它图片。当然，继续参见图4B中的(c)所示，界面405中包括“查看更多”控件，当手机100检测到针对该控件的操作时，手机100显示更多的缩略图。
再比如，参见图4B中的(d)所示，界面405中包括“图片类型”控件，当该控件被触发时，手机100显示多个选项，即“用户喜欢的图片”的选项，“收藏的图片”选项、“适合发布的图片”选项，“适合发送其他联系人的图片”选项。假设“适合发送其他联系人的图片”选项被选中时，手机100仅显示适合发送其他联系人的图片。
应理解,在图4A中以短信应用为例,对于其他即时通讯应用,可以采用类似的方式,不多赘述。
需要说明的是,图4A所示的实施例中,手机100检测到用于向其他联系人发送图像的操作时,可以显示“适合发送其他联系人”的图像或者“用户喜欢”的图像,或者显示不同种类的图像显示不同的标识信息(比如“适合发送其他联系人”的图像显示第一标识,“用户喜欢”的图像显示第二标识),方便用户快速查找图像,而且,图像的种类是手机100根据用户对图像的操作行为划分的,所以图像种类的划分符合用户的操作习惯,有助于提升用户体验。
场景4-图片拍摄。
参见图5A中的(a)所示,手机100显示主界面501,主界面501中包括多个应用的 应用图标,当手机100检测到用户触发相机应用的图标502时,手机100显示相机应用的界面503,如图5A中的(b)所示。
在一种示例中,参见图5A中的(b)所示,相机应用的界面503中包括预览图像,预览图像是手机100基于相机的预设拍摄参数捕捉的图像,其中,预设拍摄参数可以是手机100根据用户喜欢的图片分析出的拍摄参数。比如,手机100学习到用户喜欢的图片的亮度较高,那么手机100调整相机的拍摄参数,比如将曝光取值调大等。因此,这种方式中,手机100启动相机应用后,手机100默认以预设拍摄参数捕捉图像,这样拍摄得到的图像更可能是用户喜欢的图像。
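作为一种示意（其中的亮度字段、基准值与档位换算均为假设，并非对本申请实施例的限定），下面的Python代码示意根据“用户喜欢”的图片的亮度统计得到曝光补偿建议的一种可能实现：

```python
# 示意性代码：根据“用户喜欢”的图片统计平均亮度，据此给出曝光补偿档位建议（字段与映射关系均为假设）
def suggest_exposure(liked_images, baseline_brightness=0.5, step=0.3):
    """liked_images 为若干张喜欢图片的统计信息，例如 {"brightness": 0.7}，取值范围 0~1"""
    if not liked_images:
        return 0
    avg = sum(img["brightness"] for img in liked_images) / len(liked_images)
    return round((avg - baseline_brightness) / step)   # 返回曝光补偿档位，正值表示将曝光调大

print(suggest_exposure([{"brightness": 0.8}, {"brightness": 0.7}]))   # 例如输出 1，表示将曝光调大一档
```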
在另一种示例中,参见图5B中的(a)所示,相机应用的界面503中包括控件504,当手机100检测到控件504被触发时,界面503中显示提示信息505,提示信息505用于提示用户手机100以用户喜欢的图片为模板进行拍摄。手机100调整拍摄参数为根据用户喜欢的图片分析出的拍摄参数。
在又一种示例中，参见图5B中的(b)所示，相机应用的界面503中包括控件504，当手机100检测到控件504被触发时，界面503中显示拍摄模式选择框，在拍摄模式选择框中包括多种拍摄模式对应的控件，其中包括“以用户喜欢的图片为模板”的控件505。当手机100检测到控件505被触发时，手机100调整拍摄参数为根据用户喜欢的图片分析出的拍摄参数。
在又一种示例中,参见图5B中的(c)所示,相机应用的界面503中包括“荧光棒”控件504,手机100检测到“荧光棒”控件504被触发时,手机100显示一个或多个选项,比如“以喜欢的图片为模板”选项505,当手机100检测到选项505被选中时,手机100调整拍摄参数为根据用户喜欢的图片分析出的拍摄参数。
以下实施例介绍手机100将存储的图片划分为“用户喜欢”和“用户不喜欢”两种图像类型的过程。
参见图6所示,为本申请实施例提供的图片分类的流程示意图。如图6所示,该流程可以包括:
S601:手机100检测针对图片的操作行为,所述操作行为包括对图片的删除、查看、分享、收藏、编辑等行为。
假设手机100中存储的图片较多,其中部分图片用户查看次数较多,或者部分图片被编辑(比如使用修图软件进行修图)等;还有些图片用户会删除或长时间未查看等,手机100可以统计针对每张图片的操作行为。
示例性的,参见表1,为手机100统计的针对每张图片的操作行为的示例。
图片标识 操作行为
图片ID1 查看次数3次/天
图片ID2 分享朋友圈
图片ID3 删除
图片ID4 查看次数0次/天
表1
S602:手机100根据该操作行为,将图片分为“用户喜欢”的图像和“用户不喜欢”的图像。
应理解,手机100对图片进行分类的方式有多种。比如,手机100可以通过人工智能 (artificial intelligence,AI)学习的方式(比如使用AI模型),将图库中的图片分为“用户喜欢”的图像和“用户不喜欢”的图像类型,然后将“用户喜欢”的图片添加“喜欢”标签,将“用户不喜欢”的图片添加“不喜欢”的标签。
示例性的,以操作行为是查看为例,手机100可以将查看次数大于预设次数的图片标记为用户喜欢的图片,将查看次数小于等于预设次数的图片标记为用户不喜欢的图片。示例性的,以操作行为是分享为例,手机100可以将分享次数大于预设次数的图片标记为用户喜欢的图片,将分享次数小于等于预设次数的图片标记为用户不喜欢的图片。
示例性的,参见表2,为手机100确定用户对每张图片的喜欢程度的示例。
图片标识 操作行为 喜欢程度
图片ID1 查看次数3次/天 喜欢
图片ID2 分享至一种社交软件 喜欢
图片ID3 删除 不喜欢
图片ID4 查看次数0次/天 不喜欢
表2
需要说明的是,喜欢程度可以通过“是/yes”或“否/no”表征,“是/yes”表征喜欢,“否/no”表征不喜欢;或者,喜欢程度也可以通过分数表征,分数越高表征喜欢程度越大,分数越低表征喜欢程度越小;分数可以是10分制、100分制等,本申请实施例不作限定。
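作为一种便于理解的示意（其中的函数名、字段名与阈值均为假设，并非对本申请实施例的限定），下面的Python代码以表2的示例数据为例，示意根据操作行为判断喜欢程度的一种可能实现：

```python
# 示意性代码：根据用户对图片的操作行为统计判断喜欢程度（阈值、字段名均为假设）
def label_by_behavior(record, view_threshold=2, share_threshold=1):
    """record 为一张图片的操作统计，例如 {"views_per_day": 3, "shares": 1, "deleted": False}"""
    if record.get("deleted"):
        return "不喜欢"
    if record.get("views_per_day", 0) > view_threshold:
        return "喜欢"
    if record.get("shares", 0) >= share_threshold:
        return "喜欢"
    return "不喜欢"

# 对应表2的示例数据
records = {
    "图片ID1": {"views_per_day": 3, "shares": 0, "deleted": False},
    "图片ID2": {"views_per_day": 0, "shares": 1, "deleted": False},
    "图片ID3": {"views_per_day": 0, "shares": 0, "deleted": True},
    "图片ID4": {"views_per_day": 0, "shares": 0, "deleted": False},
}
for pic_id, rec in records.items():
    print(pic_id, label_by_behavior(rec))
```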
以下实施例介绍手机100使用AI模型将图片划分为“用户喜欢”和“用户不喜欢”的图像类型的过程。
在一些实施例中,模型可以是比如神经网络单元、机器学习模型等。通常,模型可以包括模型参数,手机100使用输入参数、模型参数以及相关的算法,可以得到输出结果,该输出结果可以是分类标签。参见图7所示,为一种与模型参数相关的算法的示例:
y = f(u)，其中 u = w1·x1 + w2·x2 + … + wn·xn + b        （公式(1)）
其中，x1,x2,…,xn是多个输入参数；w1,w2,…,wn是每个输入参数的系数（也称之为权重）；b是偏移量（用于指示u与坐标原点的截距）；f是用于保证输出结果的取值范围为区间[0,1]内的函数（比如Sigmoid函数、tanh函数等）。在一些实施例中，输入参数即x，模型参数即为权重wi和偏移量b，输出参数为y。当w、b、x给出具体取值时，可以通过上述公式得到输出结果y。需要说明的是，图7仅是为了方便理解而列举的一种模型的示例，并不是对本申请的模型的限定。
应理解,在本申请实施例中,输入参数x即一张或多张图像(下文简称:输入图像),在模型参数确定的情况下,通过与模型相关的算法可以得到输出结果,该输出结果可以是输入的一张或多张图像所属于的分类标签。比如分类标签可以是“用户喜欢”或“用户不喜欢”。模型使用过程可以是,将一张或多张图像作为输入参数,使用模型参数(比如,训练得到的模型参数),运行与模型相关的算法,得到输出结果,该输出结果可以是该输入图像的标签,比如输出结果为“是/yes”或“否/no”。在一些实施例中,输出结果也可以基于该输入图像属于“喜欢”的标签的概率(或者属于“不喜欢”的标签的概率)得到的,举例来说,当一张图像属于“喜欢”的标签的概率是0.9,手机100可以确定该输入图像属于“用户喜欢”的分类标签,那么输出结果可以是“是/yes”。
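作为一种示意性的实现（其中的特征取值、参数取值与0.9的概率阈值仅为上文举例中的假设），下面的Python代码示意公式(1)的前向计算以及根据输出概率确定分类标签的过程：

```python
import math

# 示意性代码：公式(1)的前向计算，w、b 为已训练好的模型参数（取值仅为假设）
def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def model_forward(x, w, b):
    u = sum(wi * xi for wi, xi in zip(w, x)) + b   # u = w1·x1 + ... + wn·xn + b
    return sigmoid(u)                              # y 落在区间(0,1)内，可视为属于“喜欢”标签的概率

# 假设某张图像提取出的特征向量 x 与已训练的参数 w、b
x = [0.8, 0.1, 0.5]
w = [1.2, -0.7, 0.9]
b = 0.3
p_like = model_forward(x, w, b)
label = "用户喜欢" if p_like >= 0.9 else "用户不喜欢"   # 0.9 为正文举例的概率阈值
print(round(p_like, 3), label)
```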
需要说明的是,模型的使用分为“训练过程”和“使用过程”。其中模型训练过程,即确定模型参数的过程。以下实施例介绍模型训练过程。参见图8,模型的训练过程的流程可以包括:
S801:获取相关联的一组图像。
在一些实施例中,“相关联”的一组图像可以是至少两张图像,“相关联”可以是一组图像在内容、拍摄时间、拍摄地点等等相关联。作为一种示例,手机100连拍3张图像,那么这3张图像就是相关联的一组图像。作为另一些示例,手机100对同一物体拍摄了3张图像,即这三张图像中包含相同的物体(也可以称为拍摄对象),则这三张图像也是相关联的一组图像。作为又一种示例,手机100在某一个时长(比如30分钟内)内拍摄了3张图像,那么这3张图像可以是相关联的一组图像。
S802:检测针对所述一组图像中的第一图像的操作行为。
用户可以对相关联的一组图像中的图像执行不同的操作。比如,相关联的一组图像包括3张包含同一人的图像,手机100检测到针对其中一张图像的图像发布操作(比如发布到微信朋友圈),手机100确定该图像是“适合发布”的图像类型。手机100可以为该图像添加标签,比如“适合发布”的标签。
S803：根据所述操作行为，为所述第一图像添加标签。
应理解,手机100根据用户对相关联的一组图像中的图像的操作行为,可以确定该图像的图像类型,进而为该图像添加合适的标签。
S804：将所述第一图像作为输入参数，确定初始的模型参数，运行与模型参数相关的算法，得到输出结果，该输出结果可以是该第一图像的标签。
S805:判断输出结果与S803中确定的所述第一图像的标签是否相同,若是,训练结束,若否,则执行S806。
S806:调整模型参数。
S807:将所述第一图像作为输入参数,使用调整后的模型参数运行与模型参数相关的算法,得到新的输出结果。
以图7为例，模型训练过程即：在已知xi和y的情况下，确定wi和b的过程。在一些实施例中，在已知xi的情况下，确定初始w0和b0，运算上述公式(1)，得到y0，比较该y0与已知的y之间的差异是否较小，若是，则模型训练完成，若否，则调整初始w0和b0，比如调整成w1和b1，然后在已知xi、w1和b1的情况下，再次运算上述公式(1)，得到y1，再比较y1和已知的y之间的差异。直到得到的yn与已知的y之间的差异较小，则模型训练结束。
S808:判断输出结果与S803中确定的所述第一图像的标签是否相同,若是,则训练结束,若否,则执行S806。
举例来说,手机100拍摄得到相关联的两张图像,手机100检测到用户在朋友圈发布第一张图像,未发布第二张图像,则第一张图像的标签是用户喜欢的图片,第二张图片的标签是用户不喜欢的图片,手机100将第一张图像作为正训练集,将第二张图像作为反训练集。
手机100将第一张图像和第二张图像作为输入参数，使用模型参数，运行与模型相关的算法，得到第一输出结果和第二输出结果，若该第一输出结果表征第一图像是用户喜欢的图片，且第二输出结果表征第二图像是用户不喜欢的图片，即第一输出结果和第一张图像的标签是一致的，第二输出结果与第二张图像的标签是一致的，所以手机100无需调整模型参数。当第一输出结果与第一张图像的标签不一致，或者第二输出结果与第二张图像的标签不一致，则调整模型参数，直到第一输出结果与第一张图像的标签一致，且第二输出结果与第二张图像的标签一致，即模型训练结束。
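作为一种示意（仅为帮助理解“调整模型参数直到输出结果与标签一致”的迭代过程，学习率、迭代轮数与特征取值均为假设，并非对本申请实施例所使用的具体训练算法的限定），下面给出一个以逻辑回归为例的Python训练过程草图：

```python
import math

# 示意性训练过程：在已知输入特征 x 与标签 y 的情况下迭代调整 w、b（学习率、轮数均为假设）
def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def train(samples, n_features, lr=0.5, epochs=200):
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:                          # y=1 表示“用户喜欢”，y=0 表示“用户不喜欢”
            u = sum(wi * xi for wi, xi in zip(w, x)) + b
            y_hat = sigmoid(u)
            err = y_hat - y                           # 输出结果与标签之间的差异
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err                             # 差异不为零时调整模型参数
    return w, b

# 以正文的例子为例：被发布的第一张图像作为正样本，未发布的第二张作为负样本（特征取值仅为假设）
samples = [([1.0, 0.2], 1), ([0.1, 0.9], 0)]
w, b = train(samples, n_features=2)
print([round(v, 2) for v in w], round(b, 2))
```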
在上述过程中，手机100训练模型的过程中使用的标签是用户喜欢或者用户不喜欢的标签，所以训练得到的模型参数的作用是将图片划分为用户喜欢的类型或用户不喜欢的类型。在一些实施例中，手机100中可以存储一个或多个模型，若存储多个模型，则每个模型的作用可以不同。比如，一个模型是用于将图片划分为用户喜欢或不喜欢的类型，另一个模型是用于将图片划分为适合发布或不适合发布的类型。
在一些实施例中，模型训练结束后，模型的使用过程包括：手机100将一张或多张图像作为模型的输入参数，在已知模型参数（模型训练过程确定的，比如，wi和b）的情况下，运行与模型相关的算法，确定输出结果，该输出结果即输入的一张或多张图像的分类标签。
在一些实施例中,手机100可以周期性的训练模型或使用模型对图像进行分类,或者,手机100也可以在空闲(比如用户较长时间未操作手机100)时训练模型或使用模型对图像进行分类,本申请实施例不作限定。
下面介绍手机100通过AI模型将图片分类为用户喜欢或用户不喜欢的几种示例。
示例1:
手机100拍摄得到三张图像，参见图9A所示，用户删除了前两张图像，保留第三张图像（或者用户查看第三张的次数较多，查看前两张的次数较少，或者第三张图像被修改（比如修图），前两张图像未被修改等），手机100检测到用户针对这三张图像的不同操作行为，可确定用户喜欢第三张图像，然后将第三张图像作为正训练集，将前两张图像作为反训练集，对AI模型进行训练，得到训练之后的模型。训练之后的模型可以将图像中背景人较少的图片划分为用户喜欢的图片，将图像背景中人较多的图片划分为用户不喜欢的图片。
若手机100再次拍摄得到一张图像,手机100可以将该图像输入到AI模型,运行该AI模型进行计算,若判断该图片的背景中人数较少,则输出结果“yes”,若人数较多则输出结果“no”,“yes”用于指示用户喜欢,“no”用于指示用户不喜欢。
可选的,对于用户喜欢的图像,手机100在该图像的缩略图上显示标识。对于输出结果是“no”的图片,手机100可以输出提示信息,以提示用户删除该图片。
示例2:
手机100拍摄得到三张图像，如图9B所示，用户删除了前两张图像，保留第三张图像，手机100检测到用户针对这三张图像的不同操作行为，可确定用户喜欢第三张图像，然后将第三张图像作为正训练集，将前两张图像作为反训练集，对AI模型进行训练，得到训练之后的模型。训练之后的模型可以将图像中没有水印的图片划分为用户喜欢的图片，将图像中有水印的图片划分为用户不喜欢的图片。
手机100再次拍摄得到一张图像之后,手机100将该图像输入到AI模型,运行该AI模型进行计算,若判断该图片上无水印,输出结果“yes”,若有,则输出结果“no”,“yes”用于指示用户喜欢,“no”用于指示用户不喜欢。
示例3:
手机100拍摄得到三张图像，如图9C所示，用户删除了前两张图像，保留第三张图像，手机100检测到用户针对这三张图像的不同操作行为，可确定用户喜欢第三张图像，然后将第三张图像作为正训练集，将前两张图像作为反训练集，对AI模型进行训练，得到训练之后的模型。训练之后的模型可以将图像中人物面部无阴影的图片划分为用户喜欢的图片，将图像中人物面部有阴影的图片划分为用户不喜欢的图片。
手机100再次拍摄得到一张图像之后,手机100将该图像输入到AI模型,运行该AI模型进行计算,若判断该图片上人物的面部无阴影,则输出结果“yes”,若有,输出结果“no”,“yes”用于指示用户喜欢,“no”用于指示用户不喜欢。
示例4:
手机100拍摄得到三张图像，如图9D所示，用户删除了前两张图像，保留第三张图像，手机100检测到用户针对这三张图像的不同操作行为，可确定用户喜欢第三张图像，然后将第三张图像作为正训练集，将前两张图像作为反训练集，对AI模型进行训练，得到训练之后的模型。训练之后的模型可以将亮度适中、清晰度较高的图片划分为用户喜欢的图片，将亮度过亮或过暗、清晰度较低的图片划分为用户不喜欢的图片。
手机100再次拍摄得到一张图像之后,将该图像输入到AI模型,运行该AI模型进行计算,若判断该图片的清晰度较高、亮度适中,输出结果“yes”,若亮度较高或较低,清晰度较低,输出结果“no”,“yes”用于指示用户喜欢,“no”用于指示用户不喜欢。
示例5:
以自拍像为例,人在自拍时总会在多种角度拍摄,例如左边侧脸60°、左边侧脸30°、正脸、右边侧脸30°、右边侧脸60°等等。通常,在拍摄多张后,用户会进行筛选,保留筛选出的图像,删除其它图像,或者用户可能将某张图像分享、收藏、反复查看等。手机100检测到用户对多张图像的不同操作行为,可以确定用户喜欢的图像(比如确定被分享的图像、保留的图像或收藏的图像是用户喜欢的图像),例如手机100通过AI学习方式确定用户喜欢的图像中人脸角度通常是左边侧脸60度。
当用户通过手机100再次自拍一张图像时,若手机100判断一张图像中人脸角度是左边侧脸60度,则可以向用户推荐保留该图像(或者提示用户该图像可以用于分享),若手机100判断一张图像中人脸角度不是左边侧脸60度,可以提示用户删除该图像。
示例5仅是以自拍照中人脸角度为例，在实际应用中，手机100还可以学习用户喜欢的图片中人脸表情、姿势、合影中自己的站位等。假设用户喜欢的图片中人脸表情是微笑、姿态是站姿或站位居中，那么手机100拍摄到一张图像之后，若判断该图片中人脸表情是微笑、姿态是站姿或站位居中，则手机100保留该图像（或者可以提示用户该图片可以用于分享等）。
以上列举了几种手机100根据用户对图片的操作行为，学习哪些图片是用户喜欢的类型的过程，在实际应用中，手机100可以根据上述任一或几种方式组合来进行学习，本申请实施例不作限定。
需要说明的是,手机100通过AI模型识别出“用户不喜欢”的图片时,可以输出提示信息,以提示用户删除该图像,或者自动删除该图像,或者手机100检测到图像备份到云端之后,自动删除所有标签为“用户不喜欢”的图像,或者,手机100检测到图像备份到云端之后,输出提示信息,以提示用户是否删除标签为“用户不喜欢”的图像,或者,手机100检测到图像备份到云端之后,显示一控件,该控件被触发时,手机100删除所有 “用户不喜欢”的图片。
以下实施例介绍手机100将图像划分为“适合发布”和“不适合发布”的过程。
同样的,手机100可以使用AI模型的方式,确定哪些图像“适合发布”、哪些图像“不适合发布”。
手机100检测存储的图像中被发布过的图像的第一特征信息,当手机100获取一张图像时,判断该图像上的第二特征信息是否满足第一特征信息,若满足,则手机100为该图像添加“适合发布”的标签,若不满足,则手机100为该图像添加“不适合发布”的标签。举例来说,手机100检测到存储的图像中,风景图被发布过,手机100获取一张图像后,若该图像是风景图,则手机100为该图像添加“适合发布”的标签。
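作为一种示意（其中以图像的内容类别作为“特征信息”仅是一种假设的实现方式，并非对本申请实施例的限定），下面的Python代码示意根据曾发布过的图像的特征为新图像添加“适合发布”或“不适合发布”标签的过程：

```python
# 示意性代码：根据曾发布图像的特征为新图像打标签（特征提取方式与类别名均为假设）
def extract_category(image_meta):
    """假设从图像元数据中得到其内容类别，例如“风景”“人像”“截图”"""
    return image_meta.get("category", "未知")

def label_for_publish(new_image, published_images):
    published_categories = {extract_category(m) for m in published_images}   # 第一特征信息
    if extract_category(new_image) in published_categories:                  # 第二特征信息是否满足
        return "适合发布"
    return "不适合发布"

published = [{"category": "风景"}, {"category": "风景"}]
print(label_for_publish({"category": "风景"}, published))   # 适合发布
print(label_for_publish({"category": "截图"}, published))   # 不适合发布
```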
需要说明的是，本申请实施例中，标签还可以是“用户喜欢且适合发布”、“用户喜欢但是不适合发布”、“用户不喜欢但适合发布”或者“用户不喜欢且不适合发布”。也就是说，通过模型识别图像的标签时，可以识别出该图像的多种分类，本申请实施例不作限定。
当手机100检测到用于发布图像的操作时,向用户推荐标签为“适合发布”的图像。以图3A中的(b)为例,手机100检测到用户触发“从图库中选择”选项304的操作时,显示多张图像的缩略图,其中部分图像的缩略图上显示标记,该标记用于表征该图像属于“适合发布”的图像种类。
下面介绍手机100将图片分为“适合发布”或“不适合发布”的示例。
示例6:
手机100中存储三张图像，前两张图像被发布过（或发布次数较多），第三张图像未被发布过（或发布次数较少），手机100检测到用户针对这三张图像的不同操作行为，可确定前两张图像是适合发布的图片，第三张图片是不适合发布的，然后将前两张图像作为正训练集，将第三张图像作为反训练集，对AI模型进行训练，得到训练之后的模型。训练之后的模型可以判断输入图像是否满足适合发布的条件（比如满足被发布过的图像上的特征信息），若满足，则将图片划分为适合发布的分类标签，若不满足，则将图片划分为不适合发布的分类标签。
手机100再次拍摄得到一张图像之后,将该图像输入到AI模型,运行该AI模型进行计算,若判断该图片满足适合发布的条件,则输出结果“yes”,若该图片不满足适合发布的条件,输出结果“no”。
示例7:
在一些实施例中,手机100可以将一张动图按照100ms(该值为举例,本申请实施例不作限定)的时间间隔分割成多张静态图,将每张静态图输入到AI模型中,得到每帧图像的输出结果,比如每帧图像的属于“用户喜欢”的分类标签的概率,手机100从中选择概率最高的图像作为该动图的封面图。
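作为一种示意（假设使用Pillow库读取动图，score_frame为已训练模型的推理函数，其具体实现不作限定），下面的Python代码示意将动图拆分为静态帧、逐帧打分并选择属于“用户喜欢”概率最高的一帧作为封面图的过程：

```python
from PIL import Image, ImageSequence   # 假设使用 Pillow 库读取动图

# 示意性代码：将一张动图拆分为静态帧，逐帧送入模型打分，选概率最高的一帧作为封面
def choose_cover(gif_path, score_frame):
    """score_frame 为已训练模型的推理函数，输入一帧 PIL 图像，返回属于“用户喜欢”的概率"""
    best_frame, best_score = None, -1.0
    with Image.open(gif_path) as gif:
        for frame in ImageSequence.Iterator(gif):
            score = score_frame(frame.convert("RGB"))
            if score > best_score:
                best_frame, best_score = frame.copy(), score
    return best_frame, best_score

# 用法示例（score_frame 的具体实现取决于所用的AI模型，此处仅为假设）：
# cover, score = choose_cover("demo.gif", score_frame=my_model_predict)
```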
下面介绍手机100将图像划分为“适合发送联系人”和“不适合发送联系人”的过程。
同样的,手机100可以使用AI模型的方式,确定哪些图像属于“适合发送联系人”的,哪些图像属于“不适合发送联系人”。
在一些实施例中，手机100检测存储的所有图像中被发送给一个或多个联系人（可以是任何联系人）的图像的第一特征信息，当手机100获取一张图像时，判断该图像上的第二特征信息是否满足第一特征信息，若满足，则手机100为该图像添加“适合发送联系人”的标签，若不满足，则手机100为该图像添加“不适合发送联系人”的标签。举例来说，手机100检测到存储的图像中，手机截屏得到的图被发送给过联系人，手机100获取一张图像后，若该图像是手机100的截屏得到的图，则手机100为该图像添加“适合发送联系人”的标签。
当手机100检测到用于将图像发送给其他联系人的操作时，向用户推荐标签为“适合发送联系人”的图像。以图4A为例，手机100检测到用户触发图库控件403的操作时，显示多张图像的缩略图，其中部分图像的缩略图上显示图标，该图标用于表征该图像适合发送给联系人。
在另一些实施例中,手机100还可以通过AI模型,确定哪些图像属于“适合发送特定联系人”。其中特定联系人可以包括特定类型的联系人,或者某个特定的联系人等等。其中特定类型的联系人可以是属于同一群组/分组的联系人,以微信应用为例,特定类型的联系人可以是微信应用中属于同一微信聊天群内的所有联系人,或者,微信应用中属于同一分组的所有联系人,或者微信应用中备注名中具有共同词(例如,老师)的所有联系人。某个特定的联系人可以是具体的联系人,电子设备可以根据联系人的备注名确定该联系人是否是特定的联系人。例如,手机100可以检测存储的所有图像中被发送给特定联系人(例如,父母)的图像的第一特征信息,当手机100获取一张图像时,判断该图像上的第二特征信息是否满足第一特征信息,若满足,则手机100为该图像添加“适合发送特定联系人”的标签。
本申请的各个实施方式可以任意进行组合,以实现不同的技术效果。
结合上述实施例及相关附图,本申请实施例提供了一种图像显示方法,该方法可以在如图1A所示的手机100或其他电子设备中实现。如图10所示,该方法可以包括以下步骤:
1001,检测到输入操作。
作为一种示例,以图3A中的(b)为例,输入操作可以是点击“从图库中选择”控件304的操作。
1002,响应于所述输入操作,在显示屏上显示图像选择界面。
在一些实施例中,图像选择界面(也可以称为图像待选界面),可以显示多张图像的缩略图以供用户选择的界面,比如,图2B中的界面203,或者图3B中的界面305等。
1003,从本地存储器或者云存储的相关联的一组图像中确定与所述输入操作相关的至少一张图像。
在一些实施例中,本地存储器可以是电子设备内部的存储器。云存储的图像可以是电子设备存储在云端服务器中的图像。
在一些实施例中,所述相关联的一组图像包括:包含相同拍摄对象的一组图像,比如图9C所示的三张图;和/或,拍摄时间差小于预设时间差的一组图像;和/或,拍摄地点处于同一地点的一组图像;和/或,属于同一相册的一组图像,和/或,包含相同内容但分辨率不同的一组图像;和/或,针对同一张图像经过不同修图方式后得到一组图像。
在一些实施例中,电子设备可以根据用户对图像的操作行为,确定每个图像的图像类型。示例性的,电子设备获取相关联的一组图像;检测针对所述一组图像中每张图像的操作行为;所述操作行为包括删除、保留、修图、图像发布、发送联系人中的一种或多种;根据所述操作行为确定所述每张图像的图像类型;所述图像类型包括适合发布的图像类型,或适合发送其它联系人的图像类型。
举例来说，电子设备获取图9C所示的三张图像，确定其中第三张图像曾在社交平台上发布，则确定第三张图像属于适合发布的图像类型。
再例如，电子设备获取图9A所示的三张图像，确定其中第三张图像曾发送给其它联系人，则确定第三张图像属于适合发送其它联系人的类型。
一种可能的实现方式为，电子设备可以确定输入操作的操作类型；根据所述操作类型，确定与所述操作类型相关联的至少一张图像。作为一种示例，电子设备确定所述输入操作是用于发布图像的操作；根据所述操作类型，确定适合发布的至少一张图像。也就是说，电子设备检测到用于发布图像的操作后，可以仅显示确定出的适合发布的图像的缩略图，以方便用户选择。作为另一些示例，电子设备确定所述输入操作是与其他联系人通信的操作；根据所述操作类型，确定适合发送其它联系人的至少一张图像。也就是说，电子设备检测到用于向其它联系人发送图像的操作时，可以仅显示适合发送其它联系人的图像的缩略图，以方便用户查看。
在一些实施例中，所述适合发布的至少一张图像可以包括：曾经发布过的至少一张图像；还可以包括与曾经发布过的图像属于同一类型的图像；比如，曾经发布过的至少一张图像是人像（比如，图像中人物所在的面积较大），则人像类的图像属于适合发布的图像；再比如，曾经发布过的至少一张图像是经过拉长腿的图像，则电子设备确定经过拉长腿处理的图像是适合发布的图像。适合发布的至少一张图像还可以包括修图次数达到预设次数的至少一张图像。
在一些实施例中,所述适合发送其它联系人的至少一张图像,包括:图像中包括所述其它联系人的图像;还可以包括与曾经发送给所述其它联系人的图像属于同一类型的图像。比如,曾经发送给某个联系人的图像是手机截图,那么手机截图得到的图像属于适合发送该联系人的图像。
另一种可能的实现方式为,电子设备确定与所述输入操作所针对的应用的相关信息;根据所述应用的相关信息,确定与所述应用的相关信息相关联的至少一张图像。
作为一种示例,电子设备可以确定与所述输入操作所针对的应用的类型或功能;根据所述应用的类型或功能,确定与所述类型或功能相匹配的至少一张图像。
举例来说,输入操作针对的应用是百合网,电子设备确定与所述应用相匹配的至少一张图像是至少一张自拍图像。再例如,输入操作针对的应用是游戏类应用,电子设备确定与所述应用相匹配的至少一张图像是至少一张游戏画面的图像。
作为另一些示例,电子设备还可以确定与所述输入操作所针对的应用发布或分享图像的历史记录;根据所述应用的历史记录,确定与所述历史记录相匹配的至少一张图像。
举例来说,电子设备检测到用户上一次使用微信朋友圈发布的图像来自相册“我的”,则电子设备根据该历史记录,确定所述至少一张图像是“我的”相册中的至少一张图像。
其它可能的情况为，电子设备还可以确定与所述输入操作对应的时间信息；根据所述时间信息，确定与所述时间信息匹配的至少一张图像。时间信息可以包括日期信息、时刻信息（比如，12点10分）等。
举例来说,电子设备确定当前时间信息是5月1日,电子设备可以确定与该时间信息对应的至少一张图像是5月1日拍摄的至少一张图像,或者,去年5月1日发布/分享过的至少一张图像,或者,图像中包括5.1的至少一张图像。
在一些实施例中，在1001之前，即电子设备检测到输入操作之前，可以预先设置好每种类型的输入操作的关联图像。比如，对于用于将图像发送给联系人的输入操作，电子设备确定该输入操作的关联图像是特定的图像，比如修图次数较多的图像，或者，对于用于在社交平台上发布的输入操作，电子设备确定该输入操作的关联图像是特定的图像，比如，拉长腿的图像。因此，在电子设备检测到输入操作之后，可以从预先设置的与输入操作关联的图像中确定与检测到的输入操作相关联的至少一张图像。
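作为一种示意（其中的操作类型名称与筛选条件均为假设，并非对本申请实施例的限定），下面的Python代码示意预先设置每种类型的输入操作的关联图像筛选条件，并在检测到输入操作后据此确定相关图像的过程：

```python
# 示意性代码：预先设置每种输入操作类型的关联图像筛选条件（操作类型名与条件均为假设）
OPERATION_IMAGE_RULES = {
    "发送联系人": lambda img: img.get("edit_count", 0) >= 2,       # 例如修图次数较多的图像
    "社交平台发布": lambda img: img.get("retouched_legs", False),  # 例如经过拉长腿处理的图像
}

def related_images(operation_type, images):
    rule = OPERATION_IMAGE_RULES.get(operation_type, lambda img: True)
    return [img for img in images if rule(img)]

images = [
    {"id": 1, "edit_count": 3, "retouched_legs": False},
    {"id": 2, "edit_count": 0, "retouched_legs": True},
]
print([img["id"] for img in related_images("发送联系人", images)])    # 例如输出 [1]
print([img["id"] for img in related_images("社交平台发布", images)])  # 例如输出 [2]
```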
1004,在所述图像选择界面中显示至少一张图像的缩略图,并隐藏其余图像。
作为一种示例,以图3B中的(b)和图3B中的(c)为例,图像选择界面中只显示选择出的与输入操作相关的图像的缩略图,不显示其它图像的缩略图。比如,一组图像中包括三张图像,电子设备选择出与输入操作相关联的一张图像,则图像选择界面中仅显示该图像,而不显示其它两张图像。
可选的,电子设备还可以在图像选择界面中显示标记信息,所述标记信息用于标记所述至少一张图像与所述输入操作相关。作为一种示例,电子设备在所述至少一张图像中每张图像的缩略图上显示所述标记信息,比如图2A中的缩略图204和缩略图206上显示标记;或者,在所述图像选择界面中不显示所述至少一张图像的区域显示所述标记信息。
可选的,所述标记信息包括图标、文字、图片中的一种或多种;或者,在所述至少一张图像中每张图像的缩略图上显示所述标记信息,包括:所述至少一张图像中每张图像的缩略图的边缘突出显示,比如,图3B中缩略图306和缩略图308的边缘突出显示。
一种可能的情况为，电子设备可以从本地存储器或云存储读取或加载所述相关联的一组图像中的全部图像；从所述一组图像的全部图像中确定与所述输入操作相关的至少一张图像；然后将所述至少一张图像的缩略图显示于所述图像选择界面，不在所述图像选择界面显示所述其它图像的缩略图。
示例性的,假设图像存储在云存储中。以图3B中的(c)为例,手机可以将相关联的一组图像中的所有图像从云存储下载,而仅将与输入操作相关的两张图像的缩略图显示在界面305中。当手机检测到针对查看更多的控件的操作时,可以将除去两张图像之外的其它图像的缩略图显示出来。
另一种可能的情况为,电子设备可以从本地存储器或云存储中读取或加载与输入操作相关联的至少一张图像,将所述至少一张图像的缩略图显示于所述图像选择界面;未从所述本地存储器或云存储中读取或加载所述一组图像中除去所述至少一张图像之外的其它图像。
示例性的,假设图像存储在云存储中。继续以图3B中的(c)为例,手机可以仅从云存储中下载与输入操作相关的两张图像,然后将这两张图像的缩略图显示在界面305中。当手机检测到针对查看更多的控件的操作时,可以再从云存储中下载其它图像,并显示其它图像的缩略图。
又一些可能的情况为，电子设备也可以仅从本地存储器或云存储预加载所述至少一张图像的缩略图；不预加载所述一组图像中除去所述至少一张图像之外的其它图像的缩略图；然后，在所述图像选择界面中显示至少一张图像的缩略图。
示例性的,假设图像存储在云存储中。继续以图3B中的(c)为例,手机可以仅从云存储中预加载与输入操作相关的两张图像的缩略图,可以无需完全下载这两张图像的原图(缩略图的清晰度小于原图的清晰度),也无需预加载其它图像的缩略图。手机可以将这两张图像的缩略图显示在界面305中。当手机检测到针对查看更多的控件的操作时,可以再从云存储中预加载其它图像的缩略图,并显示其它图像的缩略图。
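作为一种示意（其中fetch_thumbnail等接口名均为假设，并非对本申请实施例的限定），下面的Python代码示意仅为与输入操作相关的图像预加载缩略图、其余图像既不加载也不在界面中显示的过程：

```python
# 示意性代码：仅预加载与输入操作相关图像的缩略图，其余图像不加载（接口名均为假设）
def build_selection_view(group_image_ids, related_ids, fetch_thumbnail):
    """fetch_thumbnail(image_id) 为从本地存储器或云存储下载单张缩略图的函数（假设存在）"""
    thumbnails = {}
    for image_id in group_image_ids:
        if image_id in related_ids:        # 只为与输入操作相关的图像获取缩略图
            thumbnails[image_id] = fetch_thumbnail(image_id)
        # 其余图像既不下载原图也不下载缩略图，即在图像选择界面中被隐藏
    return thumbnails

# 用法示例：
# thumbs = build_selection_view(all_ids, related_ids={"a", "b"}, fetch_thumbnail=cloud_client.get_thumb)
```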
1005,检测到第一操作,该第一操作用于在所述图像选择界面选择第一缩略图。
1006,对所述第一缩略图执行与所述输入操作相对应的处理流程。
在一些实施例中,输入操作可以是用于发布图像的操作,那么,电子设备选择出第一缩略图后,可以对第一缩略图执行图像发布流程。以微博应用为例,图像发布流程可以包括电子设备将图像发送给微博应用对应的服务器,以通过该服务器将图像发布微博平台。在另一些实施例中,输入操作可以是用于向联系人发送图像的操作,那么,电子设备选择出第一缩略图之后,可以将所述第一缩略图对应的图像发送所述联系人。
上述本申请提供的实施例中,从电子设备(手机100)作为执行主体的角度对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,终端设备可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
上述实施例中所用,根据上下文,术语“当…时”可以被解释为意思是“如果…”或“在…后”或“响应于确定…”或“响应于检测到…”。类似地,根据上下文,短语“在确定…时”或“如果检测到(所陈述的条件或事件)”可以被解释为意思是“如果确定…”或“响应于确定…”或“在检测到(所陈述的条件或事件)时”或“响应于检测到(所陈述的条件或事件)”。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如DVD)、或者半导体介质(例如固态硬盘)等。
为了解释的目的,前面的描述是通过参考具体实施例来进行描述的。然而,上面的示例性的讨论并非意图是详尽的,也并非意图要将本申请限制到所公开的精确形式。根据以上教导内容,很多修改形式和变型形式都是可能的。选择和描述实施例是为了充分阐明本申请的原理及其实际应用,以由此使得本领域的其他技术人员能够充分利用具有适合于所构想的特定用途的各种修改的本申请以及各种实施例。

Claims (23)

  1. 一种图像显示方法,应用于具有显示屏的电子设备,其特征在于,所述方法包括:
    检测到输入操作;
    响应于所述输入操作,在显示屏上显示图像选择界面;
    从本地存储器或者云存储的相关联的一组图像中确定与所述输入操作相关的至少一张图像;
    在所述图像选择界面中显示至少一张图像的缩略图,并隐藏其余图像;
    检测到第一操作,所述第一操作用于在所述图像选择界面选择第一缩略图;
    对所述第一缩略图执行与所述输入操作相对应的处理流程。
  2. 如权利要求1所述的方法,其特征在于,所述隐藏其余图像,包括:
    隐藏所述一组图像中除去所述至少一张图像中的其它图像。
  3. 如权利要求1或2所述的方法,其特征在于,所述方法还包括:
    显示标记信息,所述标记信息用于标记所述至少一张图像与所述输入操作相关。
  4. 如权利要求3所述的方法,其特征在于,所述显示标记信息,包括:
    在所述至少一张图像中每张图像的缩略图上显示所述标记信息;或者,
    在所述图像选择界面中不显示所述至少一张图像的区域显示所述标记信息。
  5. 如权利要求3或4所述的方法,其特征在于,所述标记信息包括图标、文字、图片中的一种或多种;或者,
    在所述至少一张图像中每张图像的缩略图上显示所述标记信息,包括:
    所述至少一张图像中每张图像的缩略图的边缘突出显示。
  6. 如权利要求1-5任一所述的方法,其特征在于,所述相关联的一组图像包括:包含相同拍摄对象的一组图像;和/或,拍摄时间差小于预设时间差的一组图像;和/或,拍摄地点处于同一地点的一组图像;和/或,属于同一相册的一组图像,和/或,包含相同内容但分辨率不同的一组图像;和/或,针对同一张图像经过不同修图方式后得到一组图像。
  7. 如权利要求1-6任一所述的方法，其特征在于，在所述检测输入操作之前，还包括：预先设置每种类型的输入操作的关联图像。
  8. 如权利要求1-7任一所述的方法,其特征在于,所述输入操作是用于发布图像的操作,对所述第一缩略图执行与所述输入操作相对应的处理流程,包括:对所述第一缩略图对应的图像执行图像发布流程;或者
    所述输入操作是用于向联系人发送图像的操作,对所述第一缩略图执行与所述输入操作相对应的处理流程,包括:将所述第一缩略图对应的图像发送所述联系人。
  9. 如权利要求1-8任一所述的方法,其特征在于,所述从本地存储器或云存储的相关联的一组图像中确定与所述输入操作相关的至少一张图像,包括:
    确定所述输入操作的操作类型;
    根据所述操作类型,确定与所述操作类型相关联的至少一张图像。
  10. 如权利要求9所述的方法,其特征在于,所述确定所述输入操作的操作类型,包括:
    确定所述输入操作是用于发布图像的操作;
    根据所述操作类型,确定与所述操作类型相关联的至少一张图像,包括:
    根据所述操作类型,确定适合发布的至少一张图像。
  11. 如权利要求9所述的方法,其特征在于,所述确定所述输入操作的操作类型,包括:
    确定所述输入操作是与其他联系人通信的操作;
    根据所述操作类型,确定与所述操作类型相关联的至少一张图像,包括:
    根据所述操作类型,确定适合发送所述其他联系人的至少一张图像。
  12. 如权利要求10所述的方法,其特征在于,所述适合发布的至少一张图像,包括:
    与曾经发布过的图像属于同一类型的图像;和/或
    修图次数达到预设次数的至少一张图像。
  13. 如权利要求11所述的方法,其特征在于,所述适合发送其它联系人的至少一张图像,包括:
    图像中包括所述其它联系人的图像;和/或
    与曾经发送给所述其它联系人的图像属于同一类型的图像。
  14. 如权利要求1-8任一所述的方法,其特征在于,所述从本地存储器或云存储的相关联的一组图像中确定与所述输入操作相关的至少一张图像,包括:
    确定与所述输入操作所针对的应用的相关信息;
    根据所述应用的相关信息,确定与所述应用的相关信息相关联的至少一张图像。
  15. 如权利要求14所述的方法,其特征在于,确定与所述输入操作所针对的应用的相关信息,包括:
    确定与所述输入操作所针对的应用的类型或功能;
    根据所述应用的相关信息,确定与所述应用的相关信息相关联的至少一张图像,包括:
    根据所述应用的类型或功能,确定与所述类型或功能相匹配的至少一张图像。
  16. 如权利要求14所述的方法,其特征在于,确定与所述输入操作所针对的应用的相关信息,包括:
    确定与所述输入操作所针对的应用发布或分享图像的历史记录;
    根据所述应用的相关信息,确定与所述应用的相关信息相关联的至少一张图像,包括:
    根据所述应用的历史记录,确定与所述历史记录相匹配的至少一张图像。
  17. 如权利要求1-8任一所述的方法,其特征在于,所述从本地存储器或云存储的相关联的一组图像中确定与所述输入操作相关的至少一张图像,包括:
    确定与所述输入操作对应的时间信息;
    根据所述时间信息，确定与所述时间信息匹配的至少一张图像。
  18. 如权利要求1-17任一所述的方法,其特征在于,从本地存储器或者云存储的相关联的一组图像中确定与所述输入操作相关的至少一张图像,包括:
    从所述本地存储器或云存储读取或加载所述相关联的一组图像中的全部图像；
    从所述一组图像的全部图像中确定与所述输入操作相关的至少一张图像;
    在所述图像选择界面中显示至少一张图像的缩略图,并隐藏其余图像,包括:
    将所述至少一张图像的缩略图显示于所述图像选择界面,不在所述图像选择界面显示所述其它图像的缩略图。
  19. 如权利要求1-17任一所述的方法,其特征在于,在所述图像选择界面中显示至少一张图像的缩略图,并隐藏其余图像,包括:
    从所述本地存储器或云存储中读取或加载所述至少一张图像,将所述至少一张图像的 缩略图显示于所述图像选择界面;未从所述本地存储器或云存储中读取或加载所述一组图像中除去所述至少一张图像之外的其它图像。
  20. 如权利要求1-17任一所述的方法,其特征在于,在所述图像选择界面中显示至少一张图像的缩略图,并隐藏其余图像,包括:
    从所述本地存储器或云存储预加载所述至少一张图像的缩略图;未预加载所述一组图像中除去所述至少一张图像中的其它图像的缩略图;
    在所述图像选择界面中显示至少一张图像的缩略图。
  21. 一种电子设备,其特征在于,包括:显示屏;一个或多个处理器;存储器;一个或多个程序;其中所述一个或多个程序被存储在所述存储器中,所述一个或多个程序包括指令,当所述指令被所述电子设备执行时,使得所述电子设备执行如权利要求1-20中任一所述的方法步骤。
  22. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-20中任一项所述的方法。
  23. 一种程序产品,其特征在于,当所述程序产品在电子设备上运行时,使得所述电子设备执行如权利要求1-20中任一项所述的方法。
PCT/CN2020/103115 2019-07-26 2020-07-20 一种图像显示方法与电子设备 WO2021017932A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202080052474.4A CN114127713A (zh) 2019-07-26 2020-07-20 一种图像显示方法与电子设备
EP20847568.1A EP3979100A4 (en) 2019-07-26 2020-07-20 PICTURE DISPLAY METHOD AND ELECTRONIC DEVICE
US17/626,701 US20220269720A1 (en) 2019-07-26 2020-07-20 Image Display Method and Electronic Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910683677.2A CN110543579A (zh) 2019-07-26 2019-07-26 一种图像显示方法与电子设备
CN201910683677.2 2019-07-26

Publications (1)

Publication Number Publication Date
WO2021017932A1 true WO2021017932A1 (zh) 2021-02-04

Family

ID=68709845

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/103115 WO2021017932A1 (zh) 2019-07-26 2020-07-20 一种图像显示方法与电子设备

Country Status (4)

Country Link
US (1) US20220269720A1 (zh)
EP (1) EP3979100A4 (zh)
CN (2) CN110543579A (zh)
WO (1) WO2021017932A1 (zh)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543579A (zh) * 2019-07-26 2019-12-06 华为技术有限公司 一种图像显示方法与电子设备
CN112580400B (zh) * 2019-09-29 2022-08-05 荣耀终端有限公司 图像选优方法及电子设备
CN111143431B (zh) * 2019-12-10 2022-12-13 云南电网有限责任公司信息中心 一种智能化量费核查与异常识别系统
CN111104533A (zh) * 2019-12-26 2020-05-05 维沃移动通信有限公司 一种图片处理方法及电子设备
CN111538997A (zh) * 2020-03-31 2020-08-14 宇龙计算机通信科技(深圳)有限公司 图像处理方法、装置、存储介质以及终端
JP2022086520A (ja) * 2020-11-30 2022-06-09 キヤノン株式会社 画像処理装置、画像処理方法、及びプログラム
USD992593S1 (en) * 2021-01-08 2023-07-18 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
USD992592S1 (en) * 2021-01-08 2023-07-18 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
CN113067766B (zh) * 2021-03-12 2023-05-12 网易(杭州)网络有限公司 图片发送方法、装置和电子设备
CN113286040B (zh) * 2021-05-17 2024-01-23 广州三星通信技术研究有限公司 用于选择图像的方法及其装置
CN113268187B (zh) * 2021-06-02 2023-04-04 杭州网易云音乐科技有限公司 一种图片聚合显示的方法和装置及设备
CN113885748A (zh) * 2021-09-23 2022-01-04 维沃移动通信有限公司 对象切换方法、装置、电子设备和可读存储介质
CN113873081B (zh) * 2021-09-29 2023-03-14 维沃移动通信有限公司 关联图像的发送方法、装置及电子设备
CN117234324A (zh) * 2022-06-08 2023-12-15 北京字跳网络技术有限公司 信息输入页面的图像获取方法、装置、设备、介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834687A (zh) * 2015-04-17 2015-08-12 深圳市金立通信设备有限公司 一种图片显示方法
CN107153708A (zh) * 2017-05-16 2017-09-12 珠海市魅族科技有限公司 一种图片查看方法及装置、计算机装置、计算机可读存储介质
CN109325137A (zh) * 2018-09-30 2019-02-12 华勤通讯技术有限公司 一种图片存储方法、显示方法及终端设备
US20190095067A1 (en) * 2016-07-13 2019-03-28 Tencent Technology (Shenzhen) Company Limited Method and apparatus for uploading photographed file
CN110543579A (zh) * 2019-07-26 2019-12-06 华为技术有限公司 一种图像显示方法与电子设备

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4198004B2 (ja) * 2003-07-18 2008-12-17 オリンパス株式会社 画像分類プログラム、画像分類装置
US7890858B1 (en) * 2005-10-11 2011-02-15 Google Inc. Transferring, processing and displaying multiple images using single transfer request
US8438495B1 (en) * 2009-08-17 2013-05-07 Adobe Systems Incorporated Methods and systems for creating wireframes and managing containers
US20110093520A1 (en) * 2009-10-20 2011-04-21 Cisco Technology, Inc.. Automatically identifying and summarizing content published by key influencers
JP2011215963A (ja) * 2010-03-31 2011-10-27 Sony Corp 電子機器、画像処理方法及びプログラム
US8996625B1 (en) * 2011-02-01 2015-03-31 Google Inc. Aggregate display of messages
US20120324002A1 (en) * 2011-02-03 2012-12-20 Afolio Inc. Media Sharing
JP5834895B2 (ja) * 2011-12-26 2015-12-24 ブラザー工業株式会社 画像処理装置及びプログラム
US20130332840A1 (en) * 2012-06-10 2013-12-12 Apple Inc. Image application for creating and sharing image streams
US9591050B1 (en) * 2013-02-28 2017-03-07 Google Inc. Image recommendations for thumbnails for online media items based on user activity
WO2015194237A1 (ja) * 2014-06-18 2015-12-23 ソニー株式会社 情報処理装置、情報処理システム、情報処理装置の制御方法およびプログラム
US10891485B2 (en) * 2017-05-16 2021-01-12 Google Llc Image archival based on image categories

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834687A (zh) * 2015-04-17 2015-08-12 深圳市金立通信设备有限公司 一种图片显示方法
US20190095067A1 (en) * 2016-07-13 2019-03-28 Tencent Technology (Shenzhen) Company Limited Method and apparatus for uploading photographed file
CN107153708A (zh) * 2017-05-16 2017-09-12 珠海市魅族科技有限公司 一种图片查看方法及装置、计算机装置、计算机可读存储介质
CN109325137A (zh) * 2018-09-30 2019-02-12 华勤通讯技术有限公司 一种图片存储方法、显示方法及终端设备
CN110543579A (zh) * 2019-07-26 2019-12-06 华为技术有限公司 一种图像显示方法与电子设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3979100A1 *

Also Published As

Publication number Publication date
CN114127713A (zh) 2022-03-01
CN110543579A (zh) 2019-12-06
EP3979100A4 (en) 2022-08-03
EP3979100A1 (en) 2022-04-06
US20220269720A1 (en) 2022-08-25

Similar Documents

Publication Publication Date Title
WO2021017932A1 (zh) 一种图像显示方法与电子设备
CN113542485B (zh) 一种通知处理方法、电子设备及计算机可读存储介质
US11914850B2 (en) User profile picture generation method and electronic device
WO2020019873A1 (zh) 图像处理方法、装置、终端及计算机可读存储介质
US20160247034A1 (en) Method and apparatus for measuring the quality of an image
WO2020207326A1 (zh) 一种对话消息的发送方法及电子设备
CN112783379B (zh) 一种选择图片的方法和电子设备
WO2020259554A1 (zh) 可进行学习的关键词搜索方法和电子设备
CN114390139B (zh) 一种电子设备在来电时呈现视频的方法、电子设备和存储介质
US20220343648A1 (en) Image selection method and electronic device
CN109086680A (zh) 图像处理方法、装置、存储介质和电子设备
WO2020192761A1 (zh) 记录用户情感的方法及相关装置
CN112150499A (zh) 图像处理方法及相关装置
WO2020073317A1 (zh) 文件管理方法及电子设备
CN115525783B (zh) 图片显示方法及电子设备
CN113497835B (zh) 多屏交互方法、电子设备及计算机可读存储介质
CN116527805A (zh) 卡片显示方法、电子设备及计算机可读存储介质
WO2020150979A1 (zh) 一种分享图像方法和移动设备
WO2022127609A1 (zh) 图像处理方法及电子设备
CN116522400B (zh) 图像处理方法和终端设备
WO2023072241A1 (zh) 一种媒体文件管理方法及相关装置
CN114244951B (zh) 应用程序打开页面的方法及其介质和电子设备
WO2023207682A1 (zh) 一种文本编辑方法及电子设备
WO2024046162A1 (zh) 一种图片推荐方法及电子设备
WO2022111640A1 (zh) 应用分类方法、电子设备及芯片系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20847568

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020847568

Country of ref document: EP

Effective date: 20211230

NENP Non-entry into the national phase

Ref country code: DE