CN110008364B - Image processing method, device and system - Google Patents

Image processing method, device and system

Info

Publication number
CN110008364B
Authority
CN
China
Prior art keywords
image
images
image group
group
typical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910231390.6A
Other languages
Chinese (zh)
Other versions
CN110008364A (en)
Inventor
柯海滨 (Ke Haibin)
靳玉茹 (Jin Yuru)
胡娜 (Hu Na)
李杨 (Li Yang)
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201910231390.6A
Publication of CN110008364A
Application granted
Publication of CN110008364B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/54: Browsing; Visualisation therefor
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866: Retrieval characterised by using metadata using information manually generated, e.g. tags, keywords, comments, manually generated location and time information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides an image processing method, the method including: acquiring a plurality of images; dividing the plurality of images into one or more first image groups according to a predetermined rule; determining one or more images in each first image group as first representative images for characterizing each first image group; and responsive to the first acquisition request, presenting at least one first representative image of at least one of the one or more first image groups. The present disclosure also provides an image processing apparatus and an image processing system.

Description

Image processing method, device and system
Technical Field
The present disclosure relates to an image processing method, apparatus and system.
Background
As living standards rise, people are increasingly willing to record their lives in photos and videos, and therefore tend to store large numbers of photos and videos on terminal devices. Moreover, as electronic technology has advanced, the pixel counts of the image capture apparatuses that take these photos and videos have risen rapidly, so the amount of data per photo or video has also gradually grown.
In implementing the concepts of the present disclosure, the inventors found that the prior art has at least the following problem: existing terminal devices typically store photos and videos simply in chronological order, and present them to the user as a chronological list or as thumbnails. When searching for a particular photo or video, the user must therefore browse the items one by one to locate it, which reduces search efficiency and degrades the user experience.
Disclosure of Invention
An aspect of the present disclosure provides an image processing method for improving response efficiency. The method includes: acquiring a plurality of images; dividing the plurality of images into one or more first image groups according to a predetermined rule; determining one or more images in each first image group as first representative images that characterize that first image group; and, in response to a first acquisition request, presenting at least one first representative image of at least one of the one or more first image groups.
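The patent publishes no source code; purely as an illustrative sketch, the claimed flow (group by a predetermined rule, pick typical images per group, present them on request) might look as follows, where `key_fn` stands in for the predetermined rule and `score_fn` for the typical-image criterion, and images are represented as plain dictionaries (all names here are hypothetical):

```python
from collections import defaultdict

def group_images(images, key_fn):
    """Divide images into groups according to a predetermined rule (key_fn)."""
    groups = defaultdict(list)
    for img in images:
        groups[key_fn(img)].append(img)
    return dict(groups)

def pick_representatives(groups, score_fn, n=1):
    """Pick the n highest-scoring images of each group as its representative images."""
    return {k: sorted(v, key=score_fn, reverse=True)[:n] for k, v in groups.items()}

def respond_to_request(representatives):
    """Simulate answering a first acquisition request: one representative per group."""
    return [imgs[0] for imgs in representatives.values()]
```

A browsing UI would then render only the short list returned by `respond_to_request`, rather than every stored image.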
Optionally, the image processing method further includes: in response to the first acquisition request, displaying at least one operation control in one-to-one correspondence with the at least one first image group; and, in response to an operation selecting a first operation control among the at least one operation control, displaying some or all of the images of the first image group corresponding to the first operation control, where the partial images include images other than the first typical images.
Optionally, the image processing method further includes: in response to an operation selecting, in a first manner, a first image among the at least one first typical image, displaying at least one other image, besides the first image, of the first image group characterized by the first image, where the at least one other image includes first typical images and/or images other than the first typical images; and/or, in response to an operation selecting, in a second manner, a second image among the at least one first typical image, displaying the first typical images other than the second image in the first image group characterized by the second image, where the first image group characterized by the second image includes a plurality of first typical images, and only one first typical image of each of the at least one first image group is displayed in response to the first acquisition request.
Optionally, the image processing method further includes: processing the first typical images of each first image group to obtain a label for each first image group; and, in response to a second acquisition request, displaying the first typical images included in the first image groups that match the second acquisition request.
Optionally, displaying, in response to the second acquisition request, the first typical images included in the first image groups matching the second acquisition request includes: extracting request features from the second acquisition request using a first neural network model; determining, among the one or more first image groups, the image groups to be displayed, namely those whose labels match the request features; and displaying the first typical images included in the image groups to be displayed, where the second acquisition request includes voice information, image information and/or text information.
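As a deliberately simplified stand-in for the first neural network model, feature extraction from a text acquisition request and the label-matching step could be sketched as keyword overlap; the label format and request wording below are hypothetical:

```python
def extract_request_features(request_text):
    # Stand-in for the first neural network model:
    # tokenize the request into lowercase keywords.
    return set(request_text.lower().split())

def matching_groups(labels, request_text):
    """Return ids of image groups whose label appears among the request features.

    labels: mapping of group id -> label string obtained from the typical images.
    """
    feats = extract_request_features(request_text)
    return [gid for gid, label in labels.items() if label.lower() in feats]
```

A real system would replace the tokenizer with learned embeddings so that, e.g., "basketball" could also match a "sports" label.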
Optionally, the image processing method further includes: in response to a rearrangement request, dividing the plurality of images into one or more second image groups; determining one or more images of each second image group as second representative images that characterize each second image group; and updating the displayed at least one first representative image of the at least one first image group to at least one second representative image of the at least one second image group.
Optionally, dividing the plurality of images into one or more first image groups according to a predetermined rule includes: dividing images, among the plurality of images, whose shooting times belong to the same time period into the same image group to obtain the one or more first image groups; or dividing images, among the plurality of images, that include similar or identical objects into the same image group to obtain the one or more first image groups; or dividing images, among the plurality of images, shot in the same shooting mode into the same image group to obtain the one or more first image groups; or dividing images, among the plurality of images, shot in the same area into the same image group to obtain the one or more first image groups.
Optionally, dividing the plurality of images into one or more first image groups according to the predetermined rule includes: dividing the plurality of images into one or more first image groups using a second neural network model; and/or determining one or more images in each first image group as the first typical images includes: determining, for each first image group, one or more of its images as first typical images using a third neural network model.
Another aspect of the present disclosure provides an image processing apparatus including an acquisition module, a grouping module, a typical image determination module, and a display module. The acquisition module acquires a plurality of images; the grouping module divides the plurality of images into one or more first image groups according to a predetermined rule; the typical image determination module determines one or more images in each first image group as first typical images that characterize each first image group; and the display module displays, in response to a first acquisition request, at least one first typical image of at least one of the one or more first image groups.
Optionally, the display module is further configured to: in response to the first acquisition request, display at least one operation control in one-to-one correspondence with the at least one first image group; and, in response to an operation selecting a first operation control among the at least one operation control, display some or all of the images of the first image group corresponding to the first operation control, where the partial images include images other than the first typical images.
Optionally, the display module is further configured to: in response to an operation selecting, in the first manner, a first image among the at least one first typical image, display at least one other image, besides the first image, of the first image group characterized by the first image, where the at least one other image includes first typical images and/or images other than the first typical images. And/or the display module is further configured to, in response to an operation selecting, in the second manner, a second image among the at least one first typical image, display the first typical images other than the second image in the first image group characterized by the second image, where the first image group characterized by the second image includes a plurality of first typical images, and only one first typical image of each of the at least one first image group is displayed in response to the first acquisition request.
Optionally, the image processing apparatus further includes a processing module configured to process the first typical images of each first image group to obtain a label for each first image group. The display module is further configured to display, in response to the second acquisition request, the first typical images included in the first image groups that match the second acquisition request.
Optionally, the display module includes an extraction unit, a determination unit, and a display unit. The extraction unit extracts the request features of the second acquisition request using a first neural network model; the determination unit determines, among the one or more first image groups, the image groups to be displayed, namely those whose labels match the request features; and the display unit displays the first typical images included in the image groups to be displayed. The second acquisition request includes voice information, image information and/or text information.
Optionally, the grouping module is further configured to divide the plurality of images into one or more second image groups in response to a rearrangement request; the typical image determination module is further configured to determine one or more images in each second image group as second typical images that characterize each second image group; and the display module is further configured to update the displayed at least one first typical image of the at least one first image group to at least one second typical image of the at least one second image group.
Optionally, the grouping module is specifically configured to: divide images, among the plurality of images, whose shooting times belong to the same time period into the same image group to obtain the one or more first image groups; or divide images, among the plurality of images, that include similar or identical objects into the same image group to obtain the one or more first image groups; or divide images shot in the same shooting mode into the same image group to obtain the one or more first image groups; or divide images shot in the same area into the same image group to obtain the one or more first image groups.
Optionally, the grouping module is specifically configured to divide the plurality of images into one or more first image groups using a second neural network model. And/or the typical image determination module is specifically configured to determine, for each first image group, one or more of its images as first typical images using a third neural network model.
Another aspect of the present disclosure provides an image processing system including one or more processors; and a storage means for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the image processing method described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, cause the processor to perform the above-described image processing method.
Another aspect of the present disclosure provides a computer program comprising computer executable instructions which, when executed, are for implementing an image processing method as described above.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
FIG. 1 schematically illustrates an application scenario diagram of an image processing method, apparatus, and system according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flowchart of an image processing method according to a first embodiment of the present disclosure;
FIG. 3A schematically shows a flowchart of an image processing method according to a second embodiment of the present disclosure;
FIG. 3B schematically illustrates a presentation effect diagram in response to a user operation in the second embodiment of the present disclosure;
FIG. 4A schematically illustrates a flowchart of an image processing method according to a third embodiment of the present disclosure;
FIG. 4B schematically illustrates a presentation effect diagram in response to a user operation in the third embodiment of the present disclosure;
FIG. 5A schematically shows a flowchart of an image processing method according to a fourth embodiment of the present disclosure;
FIG. 5B schematically illustrates a presentation effect diagram in response to a user operation in the fourth embodiment of the present disclosure;
FIG. 6A schematically shows a flowchart of an image processing method according to a fifth embodiment of the present disclosure;
FIG. 6B schematically illustrates a flowchart for displaying an image in response to a second acquisition request, according to an embodiment of the present disclosure;
FIG. 7A schematically shows a flowchart of an image processing method according to a sixth embodiment of the present disclosure;
FIG. 7B schematically illustrates a presentation effect diagram in response to a rearrangement operation in the sixth embodiment of the present disclosure;
FIG. 8 schematically shows a block diagram of the structure of an image processing apparatus according to an embodiment of the present disclosure; and
FIG. 9 schematically illustrates a block diagram of an image processing system adapted to perform an image processing method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where expressions like "at least one of A, B and C" are used, they should generally be interpreted according to their ordinary meaning as understood by those skilled in the art (e.g., "a system having at least one of A, B and C" includes, but is not limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together). Where expressions like "at least one of A, B or C" are used, they should likewise be interpreted according to the ordinary understanding of one skilled in the art (e.g., "a system having at least one of A, B or C" includes, but is not limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together).
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). Additionally, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon, the computer program product being for use by or in connection with an instruction execution system.
Embodiments of the present disclosure provide an image processing method for improving response efficiency, the method including: acquiring a plurality of images; dividing the plurality of images into one or more first image groups according to a predetermined rule; determining one or more images in each first image group as first representative images for characterizing each first image group; and responsive to the first acquisition request, presenting at least one first representative image of at least one of the one or more first image groups.
According to the image processing method of the present disclosure, when a user browses and searches for images, only some or all of the typical images of each image group are displayed rather than every stored image, which improves response efficiency to a certain extent. Moreover, because the image groups are divided according to a predetermined rule, the user can quickly determine, from the displayed typical images, which image group contains the desired image; this improves the efficiency of locating the desired image and improves the user experience.
Fig. 1 schematically illustrates an application scenario diagram of an image processing method, apparatus and system according to an embodiment of the present disclosure. It should be noted that fig. 1 is merely an example of a scenario in which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, an application scenario 100 according to an embodiment of the present disclosure includes terminal devices 111, 112, 113.
The terminal devices 111, 112, 113 may have, for example, a storage function capable of storing images in response to a user's operation. According to an embodiment of the present disclosure, the terminal device 111, 112, 113 may also have a display function to present part or all of the stored images in response to a presentation request by a user. In particular, the terminal devices 111, 112, 113 may be various electronic devices having a display screen and provided with a storage unit, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
According to an embodiment of the present disclosure, the images stored in response to the user's operation may, for example, be photographed by the terminal devices 111, 112, 113, in which case the terminal devices 111, 112, 113 also have a photographing function. Alternatively, the images stored in response to the user's operation may be acquired from an external image capture apparatus or from another external storage apparatus.
According to an embodiment of the present disclosure, the terminal devices 111, 112, 113 may further have a processing function for the stored images, for example to identify, classify, and display them. And/or the terminal devices 111, 112, 113 may process each image group obtained by the classification and determine typical images that can characterize each image group, so that only the image page 140 is presented when images are shown; that is, when displaying images, only one or more typical images of each image group are displayed. For example, the images may be divided into three image groups (persons, sports, and dining) according to image content; when these three image groups are displayed, only a person typical image 141 (a person with arms and legs spread), a sports typical image 142 (a basketball free-throw area), and a dining typical image 143 (a restaurant sign) are shown, each representing its image group.
According to an embodiment of the present disclosure, as shown in fig. 1, the application scenario 100 may further include, for example, a network 120 and a server 130. The network 120 is the medium used to provide communication links between the terminal devices 111, 112, 113 and the server 130. The network 120 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The server 130 may be a server providing various services and may, for example, identify and classify the images stored by the terminal devices 111, 112, 113. Accordingly, the terminal devices 111, 112, 113 may send photographed, acquired, or stored images to the server 130 for processing. Furthermore, the server 130 may push the images a user requests to display to the terminal devices 111, 112, 113 in response to request instructions sent by those devices.
It should be noted that, the image processing method provided by the embodiments of the present disclosure may be generally performed by the terminal devices 111, 112, 113, or may be partially performed by the server 130. Accordingly, the image processing apparatus provided by the embodiments of the present disclosure may be generally provided in the terminal devices 111, 112, 113, or may be partially provided in the server 130, and partially provided in the terminal devices 111, 112, 113.
It should be understood that the number of terminal devices, networks and servers in fig. 1 and the content and number of images in the image page presented are merely illustrative. There may be any number of terminal devices, networks, and servers, and the image page may be presented with any type and number of images, as desired for implementation.
Fig. 2 schematically shows a flowchart of an image processing method according to a first embodiment of the present disclosure.
As shown in fig. 2, the image processing method of the embodiment of the present disclosure includes operations S201 to S204.
In operation S201, a plurality of images are acquired.
The plurality of images may, for example, be captured in real time or acquired from an external image capture apparatus or storage apparatus. They may be images shot at various times and places, in various shooting modes, or of various scenes, and may be photographs or first frame images of video clips, and so on. The present disclosure does not limit the manner in which the plurality of images are acquired, their specific types, or the like.
In operation S202, a plurality of images are divided into one or more first image groups according to a predetermined rule.
The predetermined rule may be, for example: dividing images whose shooting times belong to the same time period into the same image group, thereby dividing the plurality of images into at least one first image group. For example, if, among the shooting times of the plurality of images, the earliest is January 1, 2018 and the latest is January 31, 2019, images whose shooting times fall within the same month may be divided into the same image group. If the shooting times include times in each month from January 2018 through January 2019, the plurality of images are divided into 13 image groups.
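The month-based grouping in this example can be sketched directly; here the shooting timestamps are assumed to come from, e.g., EXIF metadata:

```python
from datetime import datetime

def month_key(taken_at):
    """Map a shooting timestamp to its year-month period, e.g. '2018-01'."""
    return taken_at.strftime("%Y-%m")

def group_by_month(timestamps):
    """Divide shooting timestamps into groups, one per calendar month."""
    groups = {}
    for ts in timestamps:
        groups.setdefault(month_key(ts), []).append(ts)
    return groups
```

With one photo in every month from January 2018 through January 2019, this yields exactly the 13 image groups described above.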
The predetermined rule may also be, for example: dividing images that have similar or identical objects into the same image group, thereby dividing the plurality of images into at least one image group. For example, when a person appears in both of two images, the two images may be divided into the same image group. According to an embodiment of the present disclosure, operation S202 may further include identifying the plurality of images to determine the objects included in each image. Furthermore, operation S202 may group the plurality of images using the second neural network model: the plurality of images are used as its input, it outputs the types of objects included in the images, and the images can then be classified according to those object types.
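In place of the second neural network model, a minimal sketch of object-similarity grouping might greedily cluster per-image feature vectors by cosine similarity; the feature extractor producing these vectors, and the 0.9 threshold, are assumptions for illustration only:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group_by_object(features, threshold=0.9):
    """Greedy grouping: each image joins the first group whose seed image is
    similar enough, otherwise it starts a new group. Returns lists of indices."""
    groups = []
    for i, f in enumerate(features):
        for g in groups:
            if cosine(features[g[0]], f) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```

Real systems would use embeddings from a trained classifier and a proper clustering algorithm; the greedy pass here only conveys the idea.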
The predetermined rule may also be, for example: dividing images shot in the same shooting mode into the same image group to obtain at least one first image group. According to an embodiment of the present disclosure, considering that images shot in different shooting modes (for example, a panoramic mode versus a front-camera mode) differ in data amount when stored, the predetermined rule may specifically group the plurality of images according to their data amounts. Alternatively, classifying images by shooting mode may be performed by means of the second neural network model described above, which is not limited in the present disclosure.
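A minimal sketch of the data-amount heuristic, under the stated assumption that panoramic shots produce much larger files than ordinary ones; the byte threshold is an arbitrary illustrative value, not taken from the patent:

```python
def mode_bucket(byte_size, panorama_threshold=8_000_000):
    """Crudely infer the shooting-mode group from an image's stored size.

    Assumption (illustrative only): panorama files are much larger than
    standard front-camera shots.
    """
    return "panorama" if byte_size >= panorama_threshold else "standard"
```

Grouping then reduces to applying `mode_bucket` as the grouping key over each image's file size.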
The predetermined rule may also be, for example: dividing images, among the plurality of images, shot in the same area into the same image group to obtain at least one first image group. According to an embodiment of the present disclosure, the predetermined rule may specifically classify the plurality of images according to their shooting locations. Considering that existing image capture apparatuses generally have a positioning function, the shooting location can be recorded as an image is captured, which facilitates the execution of operation S202. According to an embodiment of the present disclosure, when the image capture apparatus lacks a positioning function or has it turned off, operation S202 may instead determine the shooting location by identifying objects in the image; for example, when the Imperial Palace (the Forbidden City) appears in an image, the shooting location may be determined to be Beijing. Determining the shooting location by recognizing the image can likewise be carried out by means of the second neural network model described above, which is not described in detail here.
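Where positioning data is available, grouping by shooting area can be sketched by bucketing GPS coordinates into coarse grid cells; the one-degree cell size is an arbitrary illustrative choice, and a production system would use reverse geocoding instead:

```python
def region_key(lat, lon, cell_deg=1.0):
    """Bucket a shooting location into a grid cell roughly cell_deg degrees wide.

    Images whose coordinates fall in the same cell are treated as shot
    in the same area.
    """
    return (int(lat // cell_deg), int(lon // cell_deg))
```

For example, two photos taken in central Beijing land in the same cell, while a photo from Shanghai lands in a different one.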
According to embodiments of the present disclosure, the second neural network model, which is trained to classify a plurality of images, may be constructed based on, for example, a back-propagation neural network, a radial basis function (RBF) neural network, or a perceptron neural network. Those skilled in the art will appreciate that the second neural network model may have different structures and parameters depending on the predetermined rule; the constructions given here are merely examples to facilitate understanding of the present disclosure, which is not limited thereto.
In operation S203, one or more images in each first image group are determined as first typical images for characterizing said each first image group.
the operation S203 may determine the first typical image according to, for example, the sharpness, the pixel size, the proportion of the photographed object to the image, the occurrence frequency of the object in the image, and the like of the plurality of images included in each first image group. Specifically, for example, an image in which the proportion of the object captured in the plurality of images to the image is greater than a preset proportion (50%) may be determined as the first representative image; or may be, for example, to determine an image having the largest pixel size among the plurality of images as the first representative image, or the like.
In order to improve the processing efficiency, according to an embodiment of the present disclosure, operation S203 may specifically determine the first typical image using a third neural network model. In this process, all the images included in each first image group are taken as the input of the third neural network model, which outputs the images (or the numbers of the images) that can serve as first typical images, so that one or more images of each first image group are determined as its first typical images. Specifically, for example, images sharing more common features may be obtained through screening as the first typical images. The third neural network model may be, for example, a model of the same type as the second neural network model, or a model of a different type. The third neural network model is trained to determine the most representative images among a plurality of images, and may be constructed based on, for example, a back-propagation neural network or a convolutional neural network.
According to an embodiment of the present disclosure, the number of first typical images of each first image group may be determined according to all the images included in that group; for example, when the first image group is a person image group containing images of children, young people, and old people, three images may be determined as the first typical images of the group. It will be appreciated that the above method of determining the number of typical images of an image group is merely an example to facilitate understanding of the present disclosure, which is not limited thereto.
In operation S204, in response to the first acquisition request, at least one first typical image of at least one of the one or more first image groups is presented.
According to an embodiment of the present disclosure, the first acquisition request may be triggered when a user opens the storage space storing the plurality of images, or when the user opens an image browsing program. Then, in order to improve the response efficiency, only the first typical images of each first image group may be presented to the user at that time.
According to embodiments of the present disclosure, all first typical images of each first image group may be presented. In order to further improve the response efficiency, only one first typical image of each first image group may be presented, which may be selected according to the user's browsing history or the like. For example, when the first typical images include a child image, a young-person image, and an old-person image, and the browsing history indicates that the user browses the child image most frequently, the child image may be determined as the first typical image to present.
In an embodiment of the present disclosure, in order to further improve the response efficiency, operation S202 may group the plurality of images only according to attributes the images themselves carry (e.g., shooting time, shooting location, etc.), so that no recognition processing is required for each image. Specifically, the image processing method may be: first, sorting the images to be processed by shooting date; then dividing images whose shooting dates belong to the same time period into one image group to obtain one or more image groups; then randomly selecting several (e.g., 3) images from each image group as typical images; and then preferentially performing image recognition on the selected typical images to determine the label of each image group. In this way, images can be presented promptly in response to the user's acquisition request. The non-selected atypical images can be processed in the background of the terminal device, to be presented when the user requests more images.
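This fast path — group by an intrinsic attribute, sample typical images, defer recognition — can be sketched as follows. The month-sized time period and the sample size of 3 are assumptions for illustration; the deferred background recognition step is omitted.

```python
import random
from collections import defaultdict
from datetime import date

def quick_group(images, per_group=3, seed=0):
    """Group images by shooting month and randomly sample typical images.

    `images` is a list of (name, shooting_date) pairs.  Grouping uses
    only the date attribute the image already carries, so no per-image
    recognition is needed; up to `per_group` images per group are
    sampled as typical images for immediate presentation, and the rest
    can be processed later in the background.
    """
    groups = defaultdict(list)
    for name, d in sorted(images, key=lambda item: item[1]):
        groups[(d.year, d.month)].append(name)
    rng = random.Random(seed)
    typical = {k: rng.sample(v, min(per_group, len(v))) for k, v in groups.items()}
    return dict(groups), typical

groups, typical = quick_group([("a.jpg", date(2019, 1, 5)),
                               ("b.jpg", date(2019, 1, 20)),
                               ("c.jpg", date(2019, 2, 1))], per_group=1)
```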
As can be seen from the above, in the image processing method according to the embodiment of the present disclosure, a plurality of images are grouped according to a predetermined rule, and typical images are determined for each image group. When the user requests to browse images, only the typical images of each image group may be presented to the user, which improves the response efficiency, allows the user to quickly locate the image group to which a desired image belongs, improves the efficiency of locating images, and improves the user experience.
Fig. 3A schematically shows a flowchart of an image processing method according to a second embodiment of the present disclosure; fig. 3B schematically illustrates a presentation effect diagram in response to a user operation in the second embodiment of the present disclosure.
As shown in fig. 3A, the image processing method of the embodiment of the present disclosure may include operation S304 and operation S305 in addition to operation S201 to operation S203.
In operation S304, in response to the first acquisition request, at least one first typical image of at least one of the one or more first image groups is presented, together with at least one operation control corresponding one-to-one to the at least one first image group. In operation S305, in response to an operation in which a first operation control among the at least one operation control is selected, a partial image or all images of the first image group corresponding to the first operation control are presented.
This operation S304 differs from operation S204 in fig. 2 only in that, in response to the first acquisition request, at least one operation control corresponding one-to-one to at least one first image group is also presented. The operating control may specifically be a touch button, for example. The partial image may specifically include, for example, an image other than the first typical image. Through the operations S304 to S305, in the case where the user locates the first image group to which the desired image belongs and the displayed first typical image is not the desired image, more images may be acquired by clicking the operation control corresponding to the located first image group, thereby further locating the desired image.
According to an embodiment of the present disclosure, as shown in fig. 3B, in the case where the plurality of first image groups include a person image group, a sports image group, and a dining image group, the presentation page presents, through the above-described operation S304, a first operation control 301 corresponding to the person image group, a second operation control 302 corresponding to the sports image group, and a third operation control 303 corresponding to the dining image group, as shown in the left-hand diagram in fig. 3B. When the user clicks the first operation control 301, as shown in the right-hand diagram in fig. 3B, through the above-described operation S305, the person typical image 141 and the images 311 to 315 included in the person image group may be presented in the presentation page in response to the operation in which the first operation control 301 is selected (i.e., clicked).
Here, it may be considered that the user clicks the first operation control 301 because, in the presentation page of the left-hand diagram in fig. 3B, the person typical image 141 is not the desired image, while the desired image belongs to the person image group. Therefore, in order for the presentation page of the right-hand diagram in fig. 3B to present more images, the person typical image 141 may be omitted from that page.
According to an embodiment of the present disclosure, in the case where only one first typical image of each first image group is presented in operation S304 and the person image group includes at least two first typical images, the right-hand presentation page in fig. 3B obtained through the above-described operation S305 may present not only the first typical images of the person image group other than the person typical image 141, but also the images of the person image group other than the first typical images (i.e., the atypical images). In the case where all the first typical images of each first image group are presented in operation S304, the right-hand presentation page in fig. 3B obtained through operation S305 may present all the atypical images of the person image group. The present disclosure is not limited as to the number, type, and the like of the images presented through operation S305.
As can be seen from the above, according to the embodiment of the present disclosure, by setting the operation control corresponding to the first image group, a user can conveniently and quickly obtain other images of the first image group located by the user, and therefore, efficiency of locating a required image by the user can be further improved, and user experience is improved.
Fig. 4A schematically illustrates a flowchart of an image processing method according to a third embodiment of the present disclosure; fig. 4B schematically illustrates a presentation effect diagram in response to a user operation in the third embodiment of the present disclosure.
As shown in fig. 4A, the image processing method of the embodiment of the present disclosure includes operation S406 in addition to operation S201 to operation S204.
In operation S406, in response to an operation of selecting a first image of the at least one first typical image in a first manner, at least one other image, other than the first image, of the first image group characterized by the first image is presented.
According to an embodiment of the present disclosure, in order to facilitate the user quickly acquiring more images of an image group after locating the image group to which a desired image belongs, a first image among the at least one first typical image presented for each first image group may serve as a hyperlink to the other images; when this image is selected in the first manner, at least one other image of the image group to which it belongs can be acquired. The first manner may be, for example, the user clicking on the first image, and the at least one other image includes first typical images and/or images other than the first typical images. Specifically, in the case where not all the first typical images of the first image group are presented in operation S204, the at least one other image in operation S406 may include the first typical images not yet presented as well as the images other than the first typical images. In the case where all the first typical images of the first image group are presented in operation S204, the at least one other image in operation S406 may include only the images other than the first typical images.
In an embodiment of the present disclosure, in the case where the first image groups obtained by dividing the plurality of images include a person image group, a sports image group, and a dining image group, as shown in the left-hand diagram in fig. 4B, when the user clicks the sports typical image 142 of the sports image group, a presentation page as shown in the right-hand diagram in fig. 4B can be presented to the user. That is, a tennis court image 421, a basketball court image 422, a rugby field image 423, and a badminton court image 424, which all belong to the same sports image group as the sports typical image 142, are presented. It is to be understood that the above images 421-424 are merely examples to facilitate understanding of the present disclosure, and the present disclosure is not limited thereto.
Fig. 5A schematically shows a flowchart of an image processing method according to a fourth embodiment of the present disclosure; fig. 5B schematically illustrates a presentation effect diagram in response to a user operation in a fourth embodiment of the present disclosure.
According to an embodiment of the present disclosure, in the case where only one first typical image of each first image group is presented in operation S204 and at least one first image group includes at least two first typical images, in order to facilitate the user viewing the other typical images of the at least one first image group, as shown in fig. 5A, the image processing method of the embodiment of the present disclosure includes operation S507 in addition to operations S201 to S204.
In operation S507, in response to an operation of selecting a second image of the at least one first typical image in a second manner, the first typical images, other than the second image, of the first image group characterized by the second image are presented.
The second manner may specifically be, for example, a sliding manner, and operation S507 then sequentially presents the other first typical images in response to the user sliding the second image of the first image group. As shown in fig. 5B, in the case where the user's finger, while pressing on the person typical image 141, slides from the first position 501 in the left-hand diagram to the second position 502 in the right-hand diagram, the typical image 311 and the typical image 312, which belong to the same person image group as the person typical image 141, may be presented in the right-hand diagram.
Similarly, in the case where the person image group includes other typical images in addition to the typical image 141, the typical image 311, and the typical image 312, the image processing method of the embodiment of the present disclosure may further continue to present the other typical images in response to the user's sliding operation from the right side to the left side of the presentation page.
Similarly, in the case where the current presentation page is the right-hand diagram in fig. 5B, the image processing method of the embodiment of the present disclosure may further gradually move the typical image 311 and the typical image 312 to the right in response to the user's sliding operation from the second position 502 to the first position 501, while gradually presenting the person typical image 141 again.
As can be seen from the above, by the setting of operation S507, the image processing method according to the embodiment of the present disclosure may facilitate the user to view typical images except for the second image in the first image group, so as to facilitate the user to determine whether the required image belongs to the first image group, and thus may further improve the positioning efficiency and improve the user experience.
Fig. 6A schematically shows a flowchart of an image processing method according to a fifth embodiment of the present disclosure.
As shown in fig. 6A, the image processing method of the embodiment of the present disclosure includes operations S608 to S609 in addition to operations S201 to S204 described in fig. 2.
In operation S608, the first typical images of each first image group are processed to obtain a label of each first image group.
Operation S608 may be performed between operation S203 and operation S204. Alternatively, in the case where operation S202 performs recognition processing on the plurality of images in order to divide them into image groups, operation S608 may be performed in synchronization with operation S202, with the label of each first image group being determined after the first typical images are determined in operation S203.
According to an embodiment of the disclosure, operation S608 may specifically perform image recognition on the first typical images to extract image features, determine the features common to the plurality of first typical images of a group from the extracted features, and determine the label of each first image group according to the common features. According to an embodiment of the present disclosure, operation S608 may also be performed by a pre-trained machine learning model; specifically, the determined typical images of each image group are taken as input to the machine learning model, which outputs the common feature of each image group, thereby obtaining the label of each image group.
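A minimal sketch of deriving a label from the common features, assuming a recognizer has already produced a tag set for each typical image; the `"misc"` fallback and the joining of multiple shared tags are illustrative assumptions.

```python
def group_label(typical_features):
    """Derive a group label from the tag sets of its typical images.

    `typical_features` lists the recognized tags of each first typical
    image in one group; the label is the feature shared by all of
    them, or "misc" when nothing is common.
    """
    common = set.intersection(*map(set, typical_features))
    return "/".join(sorted(common)) if common else "misc"

label = group_label([{"food", "table"}, {"food", "plate"}])  # "food"
```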
In operation S609, in response to the second acquisition request, a first typical image included in the first image group matching the second acquisition request is presented.
According to an embodiment of the present disclosure, considering the diversity of images, dividing according to the predetermined rule may yield many image groups, and when the typical images of the plurality of image groups are presented in operation S204, the typical images of all the image groups may not fit on one page. In order to facilitate the user quickly locating the image group to which a desired image belongs, a search function may also be provided, so that the user may search the plurality of image groups by inputting voice information, an image, a search sentence, or the like.
Accordingly, the second acquisition request may include voice information, image information, and/or text information. The second acquisition request may be generated when the user's voice information is captured, or when an image or a search sentence input by the user is received, and the request carries that voice information, image, search sentence, or the like. For example, when the user inputs the voice command "please help me find the food pictures", the typical images of the food image group may be presented to the user for browsing and selection through the above operation S609.
According to an embodiment of the present disclosure, operation S609 may further require, for example, performing recognition processing on the second acquisition request to obtain the matched typical images to present to the user. In this case, the execution of operation S609 is described with reference to fig. 6B and is not detailed here.
Fig. 6B schematically illustrates a flowchart for exposing an image in response to a second acquisition request in accordance with an embodiment of the present disclosure.
According to an embodiment of the present disclosure, as shown in fig. 6B, the above-described operation S609 may specifically include operations S6091 to S6093.
In operation S6091, the request feature of the second acquisition request is extracted using the first neural network model. In operation S6092, an image group to be presented among the one or more first image groups is determined, the image group to be presented having a label matching the request feature. In operation S6093, the first typical images included in the image group to be presented are presented.
According to an embodiment of the disclosure, the process of presenting images in response to the second acquisition request may specifically be: the second acquisition request (specifically, the captured voice information, or the input image, search sentence, or the like) is taken as the input of the first neural network model, which outputs the request feature of the second acquisition request; for example, when the user inputs the voice information "please help me find the food pictures", the request feature "food" can be extracted. The extracted request feature is then compared with the label of each first image group, and the first image groups whose labels match the request feature are taken as the image groups to be presented. Finally, the typical images of the image groups to be presented are presented directly.
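The matching step of operations S6092-S6093 can be sketched as follows. This is a simplified sketch: the neural-network feature extraction of operation S6091 is stubbed out, and matching is plain substring comparison against the labels obtained in operation S608.

```python
def match_groups(request_feature, labels):
    """Return ids of image groups whose label matches the request feature.

    `labels` maps each first image group id to the label obtained in
    operation S608; a group matches when the extracted request feature
    appears in its label.
    """
    return [gid for gid, label in labels.items() if request_feature in label]

to_present = match_groups("food", {"g1": "food", "g2": "sports", "g3": "person"})  # ['g1']
```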
According to an embodiment of the present disclosure, the first neural network model may specifically be, for example, a convolutional neural network model for extracting features of the second acquisition request (specifically, extracting features from voice information, images, or text). It is understood that the first neural network model may have different structures and parameters according to the different types of the second acquisition request. The first neural network model may be a model of the same type as the second neural network model and/or the third neural network model described above, or a model of a different type, which is not limited in this disclosure.
Fig. 7A schematically shows a flowchart of an image processing method according to a sixth embodiment of the present disclosure; fig. 7B schematically illustrates a presentation effect diagram in response to a rearrangement operation in the sixth embodiment according to the present disclosure.
In consideration of various requirements of a user in searching for a required image, a plurality of images can be grouped according to different preset rules to obtain various image groups of different types. For example, the plurality of images may be divided into a plurality of first image groups according to the included image content; and the plurality of images can be further divided into a plurality of second image groups according to shooting time, so that a plurality of modes are provided for searching of a user. For example, when the user only knows what the required image includes, the user can quickly locate the image group to which the required image belongs by viewing the typical images of the plurality of first image groups; and when the user only knows the shooting time of the required image, the user can quickly locate the image group to which the required image belongs by viewing the typical images of the plurality of second image groups.
Therefore, the image processing method of the embodiment of the present disclosure should also provide the user with a selection of the division manner of the image group, and should also update the content presented on the current presentation page in response to the different division manners selected by the user. Then the image processing method of the embodiment of the present disclosure should further include operations S710 to S712, as shown in fig. 7A.
In operation S710, the plurality of images are divided into one or more second image groups in response to the rearrangement request. In operation S711, one or more images in each second image group are determined as second typical images for characterizing each second image group. At operation S712, at least one first representative image of the displayed at least one first image group is updated to at least one second representative image of the at least one second image group. The operations of dividing the plurality of images into image groups in the operation S710 and determining the second typical image of the second image group in the operation S711 are similar to the operations S202 and S203, respectively, and are not repeated here.
Specifically, as shown in fig. 7B, the image processing method of the embodiment of the present disclosure may provide, for example, four different image-group division manners: division according to image content, division according to shooting mode, division according to shooting time, and division according to shooting location. In the case where the image groups in the current presentation page shown in the left-hand diagram are divided according to image content, a rearrangement request may be generated in response to the user selecting shooting time. According to the rearrangement request, the images are re-divided into image groups to obtain at least one second image group. The second typical image of each second image group is then determined, and finally the presentation page of the left-hand diagram in fig. 7B is replaced with that of the right-hand diagram. For example, when grouping according to shooting time yields a second image group shot in a first period and a second image group shot in a second period, the typical image 701 (a bed) and the typical image 702 (a table) of the two second image groups are presented respectively in the presentation page of the right-hand diagram. It will be appreciated that the presentation pages and the generation of the rearrangement request described above with reference to fig. 7B are merely examples to facilitate understanding of the present disclosure, which is not limited thereto.
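The re-division of operation S710 can be sketched as a keyed regrouping. This simplified sketch covers only two of the four manners; each image dict carries the attribute its manner needs, and the month-sized time period is an assumption.

```python
from datetime import date

def regroup(images, manner):
    """Re-divide images for a rearrangement request (operation S710).

    `manner` selects the division manner; "content" groups by the
    recognized image content and "time" groups by shooting month.
    """
    key_funcs = {
        "content": lambda im: im["content"],
        "time": lambda im: im["date"].strftime("%Y-%m"),
    }
    key = key_funcs[manner]
    groups = {}
    for im in images:
        groups.setdefault(key(im), []).append(im["name"])
    return groups

pics = [{"name": "x.jpg", "content": "bed", "date": date(2019, 1, 5)},
        {"name": "y.jpg", "content": "table", "date": date(2019, 2, 3)}]
by_time = regroup(pics, "time")  # {'2019-01': ['x.jpg'], '2019-02': ['y.jpg']}
```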
As can be seen from the above description, the embodiments of the present disclosure may provide more choices for the user through the settings of operations S710 to S712, and thus may facilitate the user to quickly locate the required image, thereby further improving the response efficiency and user experience.
Fig. 8 schematically shows a block diagram of the structure of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 8, an image processing apparatus 800 of an embodiment of the present disclosure includes an acquisition module 810, a grouping module 820, a typical image determination module 830, and a presentation module 840.
Wherein the acquiring module 810 is configured to acquire a plurality of images. The grouping module 820 is configured to divide the plurality of images into one or more first image groups according to a predetermined rule. The representative image determination module 830 is configured to determine one or more images in each first image group as first representative images, where the first representative images of each first image group are used to characterize the each first image group. The presentation module 840 is configured to present at least one first representative image of at least one of the one or more first image groups in response to the first acquisition request. The acquiring module 810, the grouping module 820, the typical image determining module 830, and the displaying module 840 may be used to perform operations S201 to S204 described in fig. 2, respectively, and are not described herein.
According to embodiments of the present disclosure, the presentation module 840 described above may also be used, for example, to: in response to the first acquisition request, present at least one operation control corresponding one-to-one to the at least one first image group; and in response to an operation in which a first operation control among the at least one operation control is selected, present a partial image or all images of the first image group corresponding to the first operation control, the partial image including images other than the first typical image. The presentation module 840 may also be used to perform operations S304-S305 described in fig. 3A according to an embodiment of the present disclosure, which is not described herein.
According to embodiments of the present disclosure, the presentation module 840 described above may also be used, for example, to: in response to an operation of selecting a first image of the at least one first typical image in a first manner, present at least one other image, other than the first image, of the first image group characterized by the first image. The at least one other image includes first typical images and/or images other than the first typical images. The presentation module 840 may also be used to perform operation S406 described in fig. 4A according to an embodiment of the present disclosure, which is not described herein.
According to an embodiment of the present disclosure, the presentation module 840 described above may also be used, for example, to: in response to an operation of selecting a second image of the at least one first typical image in a second manner, present the first typical images, other than the second image, of the first image group characterized by the second image. Here, the first image group characterized by the second image includes a plurality of first typical images, and the presentation module 840 presents only one first typical image of each of the at least one first image group in response to the first acquisition request. The presentation module 840 may also be used to perform operation S507 described in fig. 5A according to an embodiment of the present disclosure, which is not described herein.
According to an embodiment of the present disclosure, as shown in fig. 8, the image processing apparatus 800 may further include a processing module 850. The processing module 850 is configured to process the first typical image of each first image group to obtain a label of each first image group. Accordingly, the presenting module 840 may be further configured to present, in response to the second acquisition request, the first typical image included in the first image group that matches the second acquisition request. The processing module 850 and the display module 840 may also be used to perform the operations S608 and S609 described in fig. 6A, respectively, according to the embodiments of the present disclosure, which are not described herein.
According to an embodiment of the present disclosure, as shown in fig. 8, the presentation module 840 may specifically include an extraction unit 841, a determination unit 842, and a presentation unit 843. The extracting unit 841 is configured to extract the request feature of the second acquisition request by using the first neural network model. The determining unit 842 is configured to determine a set of images to be displayed in the one or more first image sets, where the set of images to be displayed has a tag matching the requested feature. The display unit 843 is configured to display a first typical image included in the image group to be displayed. Wherein the second acquisition request comprises voice information, image information and/or text information. The extracting unit 841, the determining unit 842, and the presenting unit 843 may be used to perform operations S6091, S6092, and S6093, respectively, described in fig. 6B, according to an embodiment of the present disclosure, which are not described herein.
The grouping module 820 described above may also be used, for example, to divide the plurality of images into one or more second image groups in response to a rearrangement request, according to embodiments of the present disclosure. The typical image determining module 830 described above may also be configured, for example, to determine one or more images in each second image group as second typical images, where the second typical images of each second image group are used to characterize that second image group. The presentation module 840 described above may also be used, for example, to update the at least one presented first typical image of the at least one first image group to at least one second typical image of the at least one second image group. The grouping module 820, the typical image determining module 830, and the presentation module 840 described above may also be used to perform operations S710-S712, respectively, described with reference to fig. 7A according to an embodiment of the present disclosure, which are not described again herein.
According to an embodiment of the present disclosure, the grouping module 820 may divide the plurality of images in at least four ways. In the first way, images of the plurality of images whose shooting times belong to the same time period are divided into the same image group, so as to obtain one or more first image groups. In the second way, images of the plurality of images that include similar or identical objects are divided into the same image group, so as to obtain one or more first image groups. In the third way, images of the plurality of images that are shot in the same shooting mode are divided into the same image group, so as to obtain one or more first image groups. In the fourth way, images of the plurality of images that are shot in the same area are divided into the same image group, so as to obtain one or more first image groups.
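The first of these rules, grouping by shooting time, can be sketched as follows. This is a minimal illustration assuming timestamped images and calendar-day buckets; the embodiment does not fix the granularity of the time period.

```python
from collections import defaultdict
from datetime import datetime

def group_by_time_period(images):
    """Divide images whose shooting times fall in the same period
    (here: the same calendar day) into the same first image group."""
    groups = defaultdict(list)
    for name, shot_at in images:
        groups[shot_at.date()].append(name)   # one bucket per day
    return list(groups.values())

images = [
    ("a.jpg", datetime(2019, 3, 25, 9, 0)),
    ("b.jpg", datetime(2019, 3, 25, 18, 30)),
    ("c.jpg", datetime(2019, 3, 26, 7, 15)),
]
print(group_by_time_period(images))  # [['a.jpg', 'b.jpg'], ['c.jpg']]
```

The other three rules have the same structure with a different bucketing key (detected object, shooting mode, or geographic area).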
According to embodiments of the present disclosure, the grouping module 820 may be specifically configured to classify the plurality of images into one or more first image groups using a second neural network model. The typical image determining module 830 may be specifically configured to determine, for each first image group, one or more images of the first image group as first typical images using a third neural network model.
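Selecting the first typical image with the third neural network model might look like the following sketch. The scoring function is a placeholder (a real model would score, e.g., sharpness or aesthetics), and the top-k selection is an assumption; the embodiment only requires that one or more images per group be chosen.

```python
def score(image) -> float:
    """Placeholder for the third neural network model's quality score."""
    return image["quality"]

def pick_typical_images(group, k=1):
    """Return the k highest-scoring images of a first image group
    as its first typical images."""
    return sorted(group, key=score, reverse=True)[:k]

group = [
    {"name": "blurry.jpg", "quality": 0.2},
    {"name": "sharp.jpg", "quality": 0.9},
    {"name": "ok.jpg", "quality": 0.5},
]
print([img["name"] for img in pick_typical_images(group)])  # ['sharp.jpg']
```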
According to embodiments of the present disclosure, any number of the modules, sub-modules, units, or sub-units, or at least some of the functionality of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an Application Specific Integrated Circuit (ASIC), or in hardware or firmware in any other reasonable manner of integrating or packaging the circuit, or as any one of, or a suitable combination of, the three implementations of software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules which, when executed, may perform the corresponding functions.
For example, any number of the acquisition module 810, the grouping module 820, the typical image determining module 830, the presentation module 840, the processing module 850, the extraction unit 841, the determination unit 842, and the presentation unit 843 may be combined and implemented in one module, or any one of them may be split into multiple modules. Alternatively, at least some of the functionality of one or more of these modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the acquisition module 810, the grouping module 820, the typical image determining module 830, the presentation module 840, the processing module 850, the extraction unit 841, the determination unit 842, and the presentation unit 843 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an Application Specific Integrated Circuit (ASIC), or in hardware or firmware in any other reasonable manner of integrating or packaging the circuit, or as any one of, or a suitable combination of, the three implementations of software, hardware, and firmware. Alternatively, at least one of the acquisition module 810, the grouping module 820, the typical image determining module 830, the presentation module 840, the processing module 850, the extraction unit 841, the determination unit 842, and the presentation unit 843 may be at least partially implemented as a computer program module which, when executed, may perform the corresponding functions.
Fig. 9 schematically illustrates a block diagram of an image processing system adapted to perform an image processing method according to an embodiment of the present disclosure. It will be appreciated that the image processing system illustrated in fig. 9 is merely an example, and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 9, the image processing system 900 includes a processor 910 and a computer-readable storage medium 920. The image processing system 900 may perform an image processing method according to an embodiment of the present disclosure.
In particular, processor 910 can include, for example, a general purpose microprocessor, an instruction set processor, and/or an associated chipset and/or special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 910 may also include on-board memory for caching purposes. Processor 910 may be a single processing unit or multiple processing units for performing different actions in accordance with the method flows of embodiments of the disclosure.
The computer-readable storage medium 920 may be, for example, a non-volatile computer-readable storage medium. Specific examples include, but are not limited to: magnetic storage devices, such as magnetic tape or a hard disk drive (HDD); optical storage devices, such as a compact disc read-only memory (CD-ROM); and memories, such as a random access memory (RAM) or a flash memory.
The computer-readable storage medium 920 may include a computer program 921, which computer program 921 may include code/computer-executable instructions that, when executed by the processor 910, cause the processor 910 to perform a method according to an embodiment of the disclosure, or any variation thereof.
The computer program 921 may be configured with computer program code comprising, for example, computer program modules. For example, in an example embodiment, the code in the computer program 921 may include one or more program modules, such as module 921A, module 921B, … … It should be noted that the division and number of the modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation. When these program modules are executed by the processor 910, the processor 910 is enabled to perform a method according to an embodiment of the disclosure, or any variation thereof.
At least one of the acquisition module 810, the grouping module 820, the typical image determining module 830, the presentation module 840, the processing module 850, the extraction unit 841, the determination unit 842, and the presentation unit 843 may be implemented as the computer program modules described with reference to fig. 9, which, when executed by the processor 910, may implement the corresponding operations described above.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the present disclosure and/or in the claims may be combined and/or integrated in various ways, even if such combinations or integrations are not explicitly recited in the present disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be combined and/or integrated in various ways without departing from the spirit and teachings of the present disclosure. All such combinations and/or integrations fall within the scope of the present disclosure.
While the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. The scope of the disclosure should, therefore, not be limited to the above-described embodiments, but should be determined not only by the following claims, but also by the equivalents of the following claims.

Claims (9)

1. An image processing method, comprising:
acquiring a plurality of images;
dividing the plurality of images into one or more first image groups according to a predetermined rule, comprising: dividing images with shooting time belonging to the same time period in the plurality of images into the same image group to obtain one or more first image groups;
Randomly determining one or more images in each first image group as first typical images;
preferentially performing, with respect to the first typical images of each first image group and the other images in each first image group other than the first typical images, image recognition processing on the first typical images of each first image group to obtain a label of each first image group;
displaying at least one first representative image of at least one of the one or more first image groups in response to a first acquisition request; and
in response to a second acquisition request, displaying, based on the label of each first image group, the first typical images included in a first image group matching the second acquisition request.
2. The method of claim 1, further comprising:
in response to the first acquisition request, displaying at least one operation control in one-to-one correspondence with the at least one first image group; and
in response to an operation of selecting a first operation control among the at least one operation control, displaying partial images or all images of the first image group corresponding to the first operation control, wherein the partial images comprise the other images except the first typical images.
3. The method of claim 1, further comprising:
in response to an operation of selecting, in a first manner, a first image among the at least one first typical image, displaying at least one other image, other than the first image, of the first image group characterized by the first image, wherein the at least one other image comprises a first typical image and/or an image other than the first typical images; and/or
in response to an operation of selecting, in a second manner, a second image among the at least one first typical image, displaying the first typical images other than the second image in the first image group characterized by the second image, wherein the first image group characterized by the second image comprises a plurality of first typical images, and wherein, in response to the first acquisition request, only one first typical image of each of the at least one first image group is displayed.
4. The method of claim 3, wherein in response to a second acquisition request, presenting a first representative image included in a first image group matching the second acquisition request comprises:
extracting request features of the second acquisition request by adopting a first neural network model;
determining an image group to be displayed among the one or more first image groups, wherein the image group to be displayed has a label matching the request feature; and
displaying a first typical image included in the image group to be displayed,
wherein the second acquisition request includes voice information, image information, and/or text information.
5. The method of claim 1, further comprising:
in response to the rearrangement request, dividing the plurality of images into one or more second image groups;
determining one or more images in each second image group as second typical images, wherein the second typical images of each second image group are used to characterize that second image group; and
updating the at least one first typical image, of the at least one first image group, that is to be presented to at least one second typical image of at least one second image group.
6. The method of claim 1, wherein the dividing the plurality of images into one or more first image groups according to a predetermined rule further comprises:
dividing images including similar or identical objects in the plurality of images into the same image group to obtain the one or more first image groups; or
dividing images shot in the same shooting mode in the plurality of images into the same image group to obtain the one or more first image groups; or
dividing images shot in the same area in the plurality of images into the same image group to obtain the one or more first image groups.
7. The method according to claim 1, wherein:
dividing the plurality of images into one or more first image groups according to a predetermined rule comprises: dividing the plurality of images into the one or more first image groups using a second neural network model; and/or
determining one or more images in each first image group as first typical images comprises: determining, using a third neural network model, one or more images of each first image group as the first typical images.
8. An image processing apparatus comprising:
the acquisition module is used for acquiring a plurality of images;
a grouping module for dividing the plurality of images into one or more first image groups according to a predetermined rule, comprising: dividing images with shooting time belonging to the same time period in the plurality of images into the same image group to obtain one or more first image groups;
a typical image determining module for randomly determining one or more images in each first image group as first typical images;
a processing module for preferentially performing, with respect to the first typical images of each first image group and the other images in each first image group other than the first typical images, image recognition processing on the first typical images of each first image group to obtain a label of each first image group;
a display module for displaying, in response to a first acquisition request, at least one first typical image of at least one first image group of the one or more first image groups, and for displaying, in response to a second acquisition request and based on the label of each first image group, the first typical images included in a first image group matching the second acquisition request.
9. An image processing system, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-7.
CN201910231390.6A 2019-03-25 2019-03-25 Image processing method, device and system Active CN110008364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910231390.6A CN110008364B (en) 2019-03-25 2019-03-25 Image processing method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910231390.6A CN110008364B (en) 2019-03-25 2019-03-25 Image processing method, device and system

Publications (2)

Publication Number Publication Date
CN110008364A CN110008364A (en) 2019-07-12
CN110008364B true CN110008364B (en) 2023-05-02

Family

ID=67168150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910231390.6A Active CN110008364B (en) 2019-03-25 2019-03-25 Image processing method, device and system

Country Status (1)

Country Link
CN (1) CN110008364B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111859019A (en) * 2020-07-17 2020-10-30 Tencent Music Entertainment Technology (Shenzhen) Co., Ltd. Method for acquiring page switching response time and related equipment

Citations (6)

Publication number Priority date Publication date Assignee Title
JP2005157909A (en) * 2003-11-27 2005-06-16 Olympus Corp Device and method for managing images, program, and image display device
CN105426904A (en) * 2015-10-28 2016-03-23 Xiaomi Technology Co., Ltd. Photo processing method, apparatus and device
CN106557523A (en) * 2015-09-30 2017-04-05 Canon Inc. Representative image selection method and device, and object image retrieval method and device
CN108134906A (en) * 2017-12-21 2018-06-08 Lenovo (Beijing) Co., Ltd. Image processing method and system
CN108228852A (en) * 2018-01-10 2018-06-29 Shanghai Zhanyang Communication Technology Co., Ltd. Electronic album cover generation method, apparatus, and computer-readable storage medium
US10163173B1 (en) * 2013-03-06 2018-12-25 Google Llc Methods for generating a cover photo with user provided pictures

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
JP2006236218A (en) * 2005-02-28 2006-09-07 Fuji Photo Film Co Ltd Electronic album display system, electronic album display method, and electronic album display program
CN201037938Y (en) * 2006-12-27 2008-03-19 Nanjing Fengsu Network System Co., Ltd. Electronic photo album system capable of automatically creating and classifying photos
JP2011191382A (en) * 2010-03-12 2011-09-29 Olympus Imaging Corp Electronic photo album
CN102323936A (en) * 2011-08-31 2012-01-18 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Method and device for automatically classifying photos
CN106156247B (en) * 2015-04-28 2020-09-15 ZTE Corporation Image management method and device
US9674426B2 (en) * 2015-06-07 2017-06-06 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
CN105528450A (en) * 2015-12-23 2016-04-27 Beijing Qihoo Technology Co., Ltd. Method and device for naming photo album
CN107016004A (en) * 2016-01-28 2017-08-04 Baidu Online Network Technology (Beijing) Co., Ltd. Image processing method and device
CN205942662U (en) * 2016-05-20 2017-02-08 Apple Inc. Electronic device with a device for grouping a plurality of images
CN106503693B (en) * 2016-11-28 2019-03-15 Beijing ByteDance Technology Co., Ltd. Method and device for providing a video cover
CN107704519B (en) * 2017-09-01 2022-08-19 Mao Weiqing Cloud-computing-based user-side photo album management system and interaction method thereof
CN107977674B (en) * 2017-11-21 2020-02-18 Guangdong OPPO Mobile Telecommunications Co., Ltd. Image processing method and device, mobile terminal, and computer-readable storage medium
CN107977431A (en) * 2017-11-30 2018-05-01 Guangdong OPPO Mobile Telecommunications Co., Ltd. Image processing method and device, computer device, and computer-readable storage medium
CN109508321B (en) * 2018-09-30 2022-01-28 Guangdong OPPO Mobile Telecommunications Co., Ltd. Image display method and related product

Non-Patent Citations (2)

Title
Konrad Schindler. An Overview and Comparison of Smooth Labeling Methods for Land-Cover Classification. IEEE Transactions on Geoscience and Remote Sensing, 2012, 4534-4545. *
Lin Lan. Research on Automatic Image Annotation Methods Based on Semi-Supervised Learning. China Masters' Theses Full-text Database, Information Science and Technology, 2019, I138-3982. *

Also Published As

Publication number Publication date
CN110008364A (en) 2019-07-12

Similar Documents

Publication Publication Date Title
US11606622B2 (en) User interface for labeling, browsing, and searching semantic labels within video
US10303984B2 (en) Visual search and retrieval using semantic information
US9972113B2 (en) Computer-readable recording medium having stored therein album producing program, album producing method, and album producing device for generating an album using captured images
US20230148049A1 (en) Methods, systems, and media for presenting media content items belonging to a media content group
US9538116B2 (en) Relational display of images
US9652534B1 (en) Video-based search engine
CN106028134A (en) Detect sports video highlights for mobile computing devices
US20150185599A1 (en) Audio based on captured image data of visual content
CN111314759B (en) Video processing method and device, electronic equipment and storage medium
CN109408672B (en) Article generation method, article generation device, server and storage medium
US20160335493A1 (en) Method, apparatus, and non-transitory computer-readable storage medium for matching text to images
JP6366626B2 (en) Generating device, generating method, and generating program
US20150189384A1 (en) Presenting information based on a video
BR122021013788B1 (en) METHOD FOR IDENTIFYING AN INDIVIDUAL IN A CONTINUOUS FLOW OF MEDIA CONTENT, AND, SYSTEM FOR IDENTIFYING AN INDIVIDUAL IN A CONTINUOUS FLOW OF MEDIA CONTENT
US20150131967A1 (en) Computerized systems and methods for generating models for identifying thumbnail images to promote videos
KR102592904B1 (en) Apparatus and method for summarizing image
JP2019159537A (en) Image search apparatus, image search method, electronic device and its control method
CN107391608B (en) Picture display method and device, storage medium and electronic equipment
CN110008364B (en) Image processing method, device and system
CN116049490A (en) Material searching method and device and electronic equipment
US20200074218A1 (en) Information processing system, information processing apparatus, and non-transitory computer readable medium
US20180189602A1 (en) Method of and system for determining and selecting media representing event diversity
KR20150096552A (en) System and method for providing online photo gallery service by using photo album or photo frame
KR20180053221A (en) Display device and method for control thereof
US20230148007A1 (en) System and method for playing audio corresponding to an image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant