CN108108461B - Method and device for determining cover image - Google Patents

Method and device for determining cover image

Info

Publication number
CN108108461B
Authority
CN
China
Prior art keywords
image
preset
information
target
matching
Prior art date
Legal status
Active
Application number
CN201711475410.1A
Other languages
Chinese (zh)
Other versions
CN108108461A (en)
Inventor
王熙
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201711475410.1A
Publication of CN108108461A
Application granted
Publication of CN108108461B

Classifications

    • G06F16/5866 — Information retrieval of still image data; retrieval characterised by metadata using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G06F16/5838 — Information retrieval of still image data; retrieval characterised by metadata automatically derived from the content, using colour
    • G06Q50/01 — Systems or methods specially adapted for specific business sectors; social networking

Abstract

The present disclosure provides a method and a device for determining a cover image. The method comprises: acquiring characteristic information of preset images included in a preset image set; matching preset reference information according to the characteristic information to obtain a target image, wherein the characteristic information of the target image is matched with the preset reference information; and determining a cover image of the image set according to the target image; wherein the preset reference information includes at least one of: geographic position information of a preset scene and sample image information of a preset scene. By the method for determining the cover image, a representative cover image can be intelligently selected for the preset image set, a user can conveniently and quickly locate the image set according to the representative cover image, and the user experience is improved.

Description

Method and device for determining cover image
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for determining a cover image.
Background
With the development of digital camera technology, people can use electronic equipment with an image acquisition function, such as a digital camera or a smart phone, to acquire a large number of digital images, providing great convenience for recording moments of daily life. Taking the example of a user shooting travel souvenir images, the user can use electronic equipment to collect a large number of souvenir images when arriving at a place such as a preset tourist attraction. In the subsequent image arrangement process, the images collected in one trip can be stored in a preset file, such as an electronic photo album or a folder, so as to form a preset image set.
In order to facilitate the user in identifying the image information stored in the preset document, the related art may automatically select one or more images in the image set as a cover image of the preset document. However, the conventional practice in the related art is to default the first image, or the first few images, of an image set to the cover image of the preset document according to a preset rule, so the cover image of the preset document is often not representative in practice and the user experience is poor.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a method and an apparatus for determining a cover image, which can intelligently select a representative cover image for a preset image set.
According to a first aspect of embodiments of the present disclosure, there is provided a method of determining a cover image, comprising:
acquiring characteristic information of a preset image included in a preset image set;
matching preset reference information according to the characteristic information to obtain a target image, wherein the characteristic information of the target image is matched with the preset reference information;
determining a cover image of the image set according to the target image;
wherein the preset reference information includes at least one of: the method comprises the steps of presetting geographic position information of a scene, and presetting sample image information of the scene.
Optionally, the obtaining of the feature information of the preset image included in the preset image set includes at least one of:
acquiring shooting geographical position information of the preset image;
and extracting the preset image characteristics of the preset image to obtain the image characteristic extraction information of the preset image.
Optionally, the preset reference information includes: geographic location information of at least one preset scene;
the matching of preset reference information according to the feature information to obtain a target image comprises:
acquiring target geographical position information matched with the shooting geographical position information of the preset image from the geographical position information of the preset scenery;
and determining the target image according to the target geographical position information.
Optionally, the determining the target image according to the target geographic location information includes any one of:
determining an image meeting a preset position matching condition as the target image, wherein the preset position matching condition comprises: the distance between the shooting geographical position of the image and the geographical position of the preset scenery meets the preset position matching precision;
and determining a sample image of a preset scene corresponding to the target geographical position information as the target image.
Optionally, the preset reference information includes: sample image information of at least one preset scene;
the matching of preset reference information according to the feature information to obtain a target image comprises:
acquiring a target sample image matched with the image feature extraction information of the preset image from the sample image information of the preset scene;
determining the target image from the target sample image.
Optionally, the matching preset reference information according to the feature information to obtain the target image includes:
matching the geographical position information of a preset scene according to the shooting geographical position information of the preset image;
if the distance between the shooting geographical position of the preset image and the geographical position of the preset scenery is within a preset distance range, determining the preset image as an image to be determined, which contains a preset scenery image;
determining the image matching degree between the image feature extraction information of the image to be determined and a sample image of a preset scenery, wherein the sample image of the preset scenery is the sample image corresponding to the geographical position of the preset scenery;
and if the image matching degree exceeds a preset matching threshold value, determining the preset image as the target image.
Optionally, if the number of the target images is greater than the number of preset cover images, determining the cover image of the preset image set according to the target images includes:
determining the matching degree of each target image;
and acquiring a preset number of target images according to the sequence of the matching degrees from high to low, and determining the target images as cover images of the preset image set.
Optionally, after the determining a cover image of the image set from the target image, the method further comprises:
determining the similarity between a cover image of the historical image set and a current cover image;
and if the similarity is larger than or equal to a preset threshold value, updating the current cover image.
Optionally, before the matching of the preset reference information according to the feature information, the method further includes:
acquiring historical user data in a preset time range before the image set shooting time;
determining the preset reference information by analyzing the historical user data.
According to a second aspect of embodiments of the present disclosure, there is provided an apparatus for determining a cover image, the apparatus comprising:
the image processing device comprises a characteristic acquisition module, a processing module and a processing module, wherein the characteristic acquisition module is configured to acquire characteristic information of preset images included in a preset image set;
the matching module is configured to match preset reference information according to the characteristic information to obtain a target image;
a cover image determination module configured to determine a cover image of the image set from the target image;
wherein the preset reference information includes at least one of: the method comprises the steps of presetting geographic position information of a scene, and presetting sample image information of the scene.
Optionally, the feature obtaining module includes at least one of the following sub-modules:
a first feature acquisition sub-module configured to acquire shooting geographical position information of the preset image;
and the second feature acquisition sub-module is configured to acquire image feature extraction information of the preset image by performing preset image feature extraction on the preset image.
Optionally, the preset reference information includes: geographic location information of at least one preset scene;
the matching module comprises:
the position matching sub-module is configured to acquire target geographical position information matched with the shooting geographical position information of the preset image from the geographical position information of the preset scenery;
a first target determination sub-module configured to determine the target image from the target geographic location information.
Optionally, the first target determining sub-module includes:
a first target image determination unit configured to determine, as the target image, an image that satisfies a preset position matching condition including: the distance between the shooting geographical position of the image and the geographical position of the preset scenery meets the preset position matching precision;
and the second target image determining unit is configured to determine a sample image of a preset scene corresponding to the target geographical position information as the target image.
Optionally, the preset reference information includes: sample image information of at least one preset scene;
the matching module comprises:
the image feature matching sub-module is configured to acquire a target sample image matched with the image feature extraction information of the preset image from the sample image information of the preset scene;
a second target determination sub-module configured to determine the target image from the target sample image.
Optionally, the matching module includes:
the initial matching sub-module is configured to match the geographical position information of a preset scene according to the shooting geographical position information of the preset image;
the initial judgment sub-module is configured to determine that the preset image is an image to be determined, which contains a preset scenery image, if the distance between the shooting geographical position of the preset image and the geographical position of the preset scenery is within a preset distance range;
the accurate matching sub-module is configured to determine the image matching degree between the image feature extraction information of the image to be determined and a sample image of a preset scenery, wherein the sample image of the preset scenery is a sample image corresponding to the geographical position of the preset scenery;
a target image determination sub-module configured to determine the preset image as the target image if the image matching degree exceeds a preset matching threshold.
Optionally, if the number of the target images is greater than the number of preset cover images, the cover image determining module includes:
a matching degree determination sub-module configured to determine a matching degree of each of the target images;
and the cover image determining submodule is configured to acquire a preset number of target images in the sequence from high to low according to the matching degree and determine the target images as the cover images of the preset image set.
Optionally, the apparatus further comprises:
a similarity determination module configured to determine a similarity between a cover image of the historical image set and a current cover image;
an image updating module configured to update the current cover image if the similarity is greater than or equal to a preset threshold.
Optionally, the apparatus further comprises:
the data acquisition module is configured to acquire historical user data within a preset time range before the image set shooting time;
a reference information determination module configured to determine the preset reference information by analyzing the historical user data.
According to a third aspect of embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any one of the first aspect described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor and a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring characteristic information of a preset image included in a preset image set;
matching preset reference information according to the characteristic information to obtain a target image, wherein the characteristic information of the target image is matched with the preset reference information;
determining a cover image of the image set according to the target image;
wherein the preset reference information includes at least one of: the method comprises the steps of presetting geographic position information of a scene, and presetting sample image information of the scene.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the method and the device, the electronic equipment can match the preset reference information according to the characteristic information of the preset image included in the preset image set, precisely match the representative target image in the image set in a geographic position matching and/or preset image characteristic matching mode, and determine the cover image of the image set according to the target image, so that a user can visually know the geographic position characteristic information corresponding to the image set, the target image set can be quickly positioned, and the visual experience of the user can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 2 is a flow chart illustrating another method of determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 3 is a flow chart illustrating another method of determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 4 is a flow chart illustrating another method of determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 5 is a flow chart illustrating another method of determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 6 is a flow chart illustrating another method of determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 7 is a flow chart illustrating another method of determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 8 is a schematic diagram illustrating an application scenario for determining a cover image according to an exemplary embodiment of the present disclosure.
FIG. 9 is a block diagram of an apparatus for determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 10 is a block diagram illustrating another apparatus for determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 11 is a block diagram illustrating another apparatus for determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 12 is a block diagram illustrating another apparatus for determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 13 is a block diagram illustrating another apparatus for determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 14 is a block diagram illustrating another apparatus for determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 15 is a block diagram illustrating another apparatus for determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 16 is a block diagram illustrating another apparatus for determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 17 is a block diagram illustrating another apparatus for determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 18 is a block diagram illustrating another apparatus for determining a cover image according to one exemplary embodiment of the present disclosure.
FIG. 19 is a block diagram illustrating an electronic device according to an exemplary embodiment of the present disclosure.
FIG. 20 is a block diagram of another electronic device shown in accordance with an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
The embodiment of the present disclosure provides a method of determining a cover image, which can be applied to an electronic device. The electronic device may be a user terminal, such as a smart phone, a personal digital assistant (PDA), a personal computer, a tablet computer, or a wearable device, or a server, such as a cloud server.
Referring to FIG. 1, a flow diagram of a method of determining a cover image is shown according to an exemplary embodiment, which may include the steps of:
in step 11, acquiring feature information of a preset image included in a preset image set;
in this disclosure, the preset image set may be an image set corresponding to an image container, such as an electronic album or a folder. For example, assuming that the user has gone to Paris for a trip at a small time and has taken 500 images, the 500 images taken during the trip may be stored in an electronic album or folder of the personal computer when the travel records are collated. Taking an electronic album as an example, the 500 images are the preset image set.
In the embodiment of the present disclosure, the preset image may be any image in the preset image set; continuing the above example, the preset image may be any one of the 500 images. In determining the cover image, the electronic device may obtain characteristic information for each image.
In the present disclosure, the feature information of the image may include at least one of: shooting geographical position information of the image and image feature extraction information of the preset image. The image feature extraction information of the preset image can be image features such as an image background, a building outline and the like which can represent scene features.
If the characteristic information is shooting geographical position information of an image, the shooting geographical position information of each image can be obtained by at least two modes:
the first mode is to directly obtain the shooting geographical position information of a preset image from the image data of the image
In this case, when the electronic device collects an image, it may determine geographic location information of the image collection location, such as GPS positioning information with a positioning accuracy of less than 3 meters, where the positioning accuracy is a difference between a measured location obtained by the positioning device and a theoretical location. Assuming that the electronic device is a smart phone, the smart phone can determine the geographical location information of the currently shot image of the user, namely the shooting geographical location information of the preset image, through a preset positioning device such as a satellite positioning device like a GPS positioning device when the image is collected. When image acquisition is completed and image storage is performed, the geographical position information can be stored together as attribute information of a preset image to obtain image data of the image. The subsequent electronic device can directly acquire shooting geographical position information of the image from the image data of the preset image.
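The stored location attributes described above are typically carried in the image data as degrees/minutes/seconds rationals plus a hemisphere reference (as in EXIF GPS tags). The following minimal sketch converts that representation into signed decimal degrees; the function name and the sample coordinates are illustrative, not taken from the disclosure.

```python
# Hypothetical sketch: convert EXIF-style GPS data (degrees, minutes, seconds
# plus a hemisphere reference) into signed decimal degrees, as a device might
# do when reading shooting-location attributes stored with an image.

def dms_to_decimal(dms, ref):
    """Convert a (degrees, minutes, seconds) tuple and a hemisphere
    reference ('N'/'S'/'E'/'W') to signed decimal degrees."""
    degrees, minutes, seconds = dms
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative.
    return -value if ref in ("S", "W") else value

# Example: the Eiffel Tower is at roughly 48 deg 51' 30" N, 2 deg 17' 40" E.
lat = dms_to_decimal((48, 51, 30), "N")
lon = dms_to_decimal((2, 17, 40), "E")
```

A real implementation would first read the GPS tags from the image's attribute information (e.g. its EXIF block) before applying this conversion.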
The second mode is to indirectly acquire the shooting geographical position information of the image according to related information of the preset image.
If the shooting geographical location information is not stored in the image data of an image, the electronic device may perform deep learning with an artificial neural network according to related information of the image, such as tags the user has set for the image and image-related information published in social application software, for example, information published to a circle of friends together with the image, or a historical chat record in which the image was shared with a certain contact, so as to obtain the shooting geographical location information of the image. For example, the shooting geographical position information of the image may be estimated according to the image's tag "Eiffel Tower" and information such as the depth of field of the image.
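The disclosure describes a learned (neural-network) model for this second mode. As a toy illustration of its input/output relationship only, the sketch below replaces the model with a static tag-to-landmark lookup table; the table contents and function name are hypothetical.

```python
# Hypothetical stand-in for the second mode: where no location attribute is
# stored, map user-assigned tags to known landmark coordinates. The disclosure
# describes a learned neural-network model; this static lookup table is only
# a toy illustration of the same input/output relationship.

LANDMARK_COORDS = {
    "Eiffel Tower": (48.8584, 2.2945),
    "Louvre": (48.8606, 2.3376),
}

def infer_shooting_location(tags):
    """Return coordinates for the first tag naming a known landmark,
    or None if no tag matches."""
    for tag in tags:
        if tag in LANDMARK_COORDS:
            return LANDMARK_COORDS[tag]
    return None
```

In practice the estimate would also weigh signals such as depth of field and social-post context, as the passage above notes.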
If the feature information is image feature extraction information of a preset image, the process of acquiring the image feature extraction information may be: performing preset image feature extraction on the original image by adopting a preset digital image processing algorithm to obtain the image feature extraction information of the preset image.
For example, assume the preset image is a photo of the user in front of the Eiffel Tower. The original image data of the image includes person image information and scene image information, where the scene image information includes image information of the Eiffel Tower. In the embodiment of the present disclosure, the electronic device may adopt a preset digital image processing algorithm, such as a contour detection algorithm, to perform preset image feature extraction on the original image data of the preset image and filter out the image information portion of the Eiffel Tower; that portion is the image feature extraction information of the preset image. Alternatively, the electronic device may extract the person image information from the original image data by adopting a preset digital image processing algorithm, and determine the remaining image information, with the person image information removed, as the image feature extraction information of the preset image.
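The second strategy above (remove the person, keep the scene) can be sketched as a simple masking operation. Person detection itself (contour detection, etc.) is out of scope here, so the sketch assumes the person mask is already available; the array sizes are toy values.

```python
import numpy as np

# Illustrative sketch of the second extraction strategy: remove the detected
# person region from the original image and treat the remaining pixels as the
# scene's image feature extraction information. The person mask is supplied
# directly; producing it (e.g. via contour detection) is not shown.

def extract_scene_features(image, person_mask):
    """Zero out pixels covered by person_mask (True = person) and
    return the remaining scene-only image."""
    scene = image.copy()
    scene[person_mask] = 0
    return scene

# 4x4 toy "image" with a 2x2 "person" in the top-left corner.
image = np.full((4, 4), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
scene = extract_scene_features(image, mask)
```

The first strategy (keeping only the landmark's contour region) would invert the mask rather than the operation.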
In step 12, matching preset reference information according to the feature information to obtain a target image, wherein the feature information of the target image is matched with the preset reference information;
In this disclosure, after obtaining the feature information of each image in the preset image set, the electronic device may match the feature information with preset reference information to obtain a target image. Alternatively, in a one-by-one matching mode, the feature information of each preset image is matched with the preset reference information as soon as it is obtained, and if the feature information of a preset image matches the preset reference information, that image is determined to be a target image.
In the present disclosure, the target image may subsequently be determined directly as a cover image of the preset image set, or serve as a backup cover image of the preset image set.
As above, depending on the characteristic information used, the preset reference information may include at least one of the following: geographical position information of a preset scene and sample image information of a preset scene. For example, the sample image information of the preset scene may be a panoramic image of the Eiffel Tower.
Accordingly, the implementation of step 12 may include at least three cases:
In the first case, the preset reference information includes geographical location information of at least one preset scene. Referring to FIG. 2, another flowchart of determining a cover image according to an exemplary embodiment, step 12 may include:
in step 1211, acquiring target geographical location information matched with the photographed geographical location information of the preset image, among geographical location information of preset scenes;
In an embodiment of the present disclosure, after the shooting geographical location information of one preset image is determined, it is matched with the geographical location information of the at least one preset scene to find whether there is matching target geographical location information. If there is, the preset image may contain image information of a preset famous scene, or may be judged from its image information to be a group photo of the user with the preset famous scene. If there is not, it is determined that the preset image is not a group photo of the user with a preset famous scene, and therefore not a representative target image.
In another embodiment of the present disclosure, after acquiring the shooting geographical location information of all the images in the preset image set, the electronic device may also perform geographical location information matching with the geographical location information of the at least one preset scene.
In the present disclosure, when the electronic device matches geographic location information, it may match positions according to a preset position matching precision, where the preset position matching precision refers to an effective distance from the geographic position of a preset scene; for example, the preset position matching precision may be 100 meters or 1000 meters. The matching condition may be: the distance between the shooting geographic position of the current image and the geographic position of the preset scene is not greater than the preset position matching precision. That is, if the distance between the shooting location of the current image and the preset scene is not greater than the preset effective distance, it can be determined that the current image contains image information of the preset scene, i.e., the current image is determined to be a target image.
The preset position matching precision may be set by default in the system, or the electronic device may obtain information manually input by the user through a preset human-computer interaction interface, which is not limited in the present disclosure.
When the position matching precision is input manually, the effective distance for identifying a preset scene can be customized by the user according to prior information. For example, the position matching precision for identifying a famous statue may be set to 50 meters, while the precision for identifying a famous high-rise structure such as the Eiffel Tower may be set to 500 meters. Setting the position matching precision of each preset scene according to its outline information and the optical imaging principle of the object allows the electronic device to match position information with a more accurate precision for different preset scenes, improving the accuracy with which the electronic device locates a cover image.
In step 1212, the target image is determined according to the target geographic location information.
In the present disclosure, the target image may be directly used as a cover image of a preset image set, or as a backup cover image of the preset image set.
Still taking the 500 images as an example, the shooting geographical position information corresponding to the 500 images is matched with the geographical position information of the at least one preset scene one by one to obtain a matching result.
If the matching result indicates that the shooting geographical position information of at least one image and the geographical information of a preset scene satisfy the preset geographical position matching condition, the matched geographical position information of the preset scene is the target geographical position information.
For each target geographical location information, the target image may be determined in any of the following ways:
in a first way, the current image meeting the geographic position matching condition can be determined as the target image.
It is assumed that the geographical position information of one image matches the geographical position information of a preset scene, for example, the distance therebetween is within the above-mentioned preset position matching accuracy range. Assuming that the image number of the current image meeting the position matching condition is 001, in an embodiment of the present disclosure, the electronic device may determine the current image 001 as the target image.
In a second way, the electronic device can determine the sample image of the preset scene corresponding to the target geographical position information as the target image.
Corresponding to the above example, a preset geographic information list may be stored in advance in the electronic device, where the list records the correspondence between preset scene names and the geographical locations of the preset scenes, as exemplarily shown in Table 1:

Preset scene name        Geographical location information
Eiffel Tower             First position information
Notre-Dame de Paris      Second position information
Arc de Triomphe          Third position information
Luxembourg Gardens       Fourth position information
……                       ……

Table 1
For example, assume that the target geographical position information corresponding to the geographical position information of image 001 is the first position information, which is the geographical position information of the Eiffel Tower. In this embodiment of the present disclosure, the electronic device may determine the sample image of the scene corresponding to the target geographic location information, that is, a sample image of the Eiffel Tower, as the target image, so that the sample image of the Eiffel Tower can subsequently be determined as the cover image of the preset image set according to the preset rule.
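The second way above amounts to a lookup from matched target geographical position information to a stored sample image. The following sketch is a hypothetical in-memory version of the Table 1 list; the coordinates and sample file paths are illustrative assumptions only.

```python
# Hypothetical geographic information list mirroring Table 1: each entry holds
# a preset scene name, its (lat, lon) location, and a stored sample image path.
PRESET_SCENES = [
    {"name": "Eiffel Tower", "location": (48.8584, 2.2945), "sample": "samples/eiffel_tower.jpg"},
    {"name": "Arc de Triomphe", "location": (48.8738, 2.2950), "sample": "samples/arc_de_triomphe.jpg"},
]

def sample_image_for(target_location):
    # Way two: once target geographical position information has been matched,
    # return the preset scene's sample image to use as the target image.
    for scene in PRESET_SCENES:
        if scene["location"] == target_location:
            return scene["sample"]
    return None  # no preset scene registered at this location
```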
For users who pay more attention to the image display effect, and considering the limitations of the user's photography skills, a better-shot sample image of the preset scene can be used as the cover image of the preset image set according to user requirements, thereby meeting the individual needs of different users and improving user experience.
In case two, the preset reference information includes sample image information of at least one preset scene.
Referring to FIG. 3, another flowchart for determining a cover image according to an exemplary embodiment, step 12 may include:
in step 1221, in the sample image information of the preset scene, a target sample image matched with the image feature extraction information of the preset image is obtained;
similar to the geographic location information matching process, after feature extraction is performed on a preset image in the preset image set, the electronic device matches the extraction result, namely the image feature extraction information of the preset image, against the sample image information of the preset scene to obtain a matching result.
In step 1222, the target image is determined from the target sample image.
Similarly, in the embodiment of the present disclosure, the preset image meeting the preset image matching condition in the preset image set may be determined as the target image. The preset image matching condition may be: the image matching degree between the image feature extraction information of the preset image and the sample image information of the preset scenery exceeds a preset threshold value.
Similarly, in another embodiment of the present disclosure, the electronic device may also directly determine the matched target sample image, that is, the sample image of the target scene, as the target image, and when the target image is subsequently used as the cover image, the electronic device may enable the user to have a better visual experience.
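The image-feature matching condition of case two can be sketched as follows. As an assumption for illustration, image features are represented as numeric vectors and the matching degree is computed as cosine similarity; the disclosure does not specify a particular feature representation or similarity measure.

```python
import math

def cosine_match_degree(feat_a, feat_b):
    """Image matching degree modeled as cosine similarity of feature vectors."""
    dot = sum(x * y for x, y in zip(feat_a, feat_b))
    na = math.sqrt(sum(x * x for x in feat_a))
    nb = math.sqrt(sum(y * y for y in feat_b))
    return dot / (na * nb) if na and nb else 0.0

def find_target_images(preset_features, sample_feature, threshold=0.8):
    # Preset image matching condition: the matching degree between a preset
    # image's extracted features and the scene's sample image exceeds the
    # preset threshold.
    return [image_id for image_id, feat in preset_features.items()
            if cosine_match_degree(feat, sample_feature) > threshold]
```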
In case three, the preset reference information includes: geographical location information of at least one preset scene and sample image information of at least one preset scene.
Referring to FIG. 4, another flowchart for determining a cover image according to an exemplary embodiment, step 12 may include:
in step 1231, performing feature matching according to first feature information to obtain a first matching result, where the first feature information includes: shooting geographical position information or image feature extraction information of a preset image;
in step 1232, second feature information matching is performed on the image in the first matching result, so as to obtain a target image.
In the embodiment of the present disclosure, if the first feature information is the shooting geographical location information, the second feature information is the image feature extraction information of the preset image; conversely, if the first feature information is the image feature extraction information of the preset image, the second feature information is the shooting geographical position information. The embodiment of the disclosure aims to improve the accuracy of determining the target image through multiple rounds of feature matching and does not limit the order in which feature information is matched.
For example, the shooting geographical position information of the 500 images may first be feature-matched with the geographical position information of the preset scene to obtain images to be determined that meet the preset position matching condition. In some scenes, although the shooting geographical position information of a preset image meets the preset position matching condition, the image does not contain image information of the preset scene because the camera was not facing the scene when the image was shot. In order to accurately select a representative cover image, the image matching degree between the image feature extraction information of the image to be determined, obtained in step 11, and the sample image of the preset scene may be further determined, where the sample image of the preset scene is the sample image corresponding to the geographical position of the preset scene; if the image matching degree exceeds a preset matching threshold, the preset image is determined to be the target image.
In the embodiment of the present disclosure, the image output in step 1231 may be further subjected to image feature matching, and the target image may be further matched on the basis of the first matching result, so as to improve the accuracy of target image matching. The image matching process may refer to the matching process described in the first case and the matching process described in the second case, which is not described herein again.
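The two-stage matching of case three, first by position and then by image features, can be sketched as below. The distance approximation, the vector feature representation, and all names are illustrative assumptions, not the disclosure's implementation.

```python
import math

def _distance_m(a, b):
    # Coarse equirectangular approximation of the distance in meters between
    # two (lat, lon) points; adequate over a few kilometers.
    lat = math.radians((a[0] + b[0]) / 2)
    dx = math.radians(b[1] - a[1]) * math.cos(lat) * 6371000.0
    dy = math.radians(b[0] - a[0]) * 6371000.0
    return math.hypot(dx, dy)

def _feature_match(feat, sample_feat):
    # Image matching degree modeled as cosine similarity (an assumption).
    dot = sum(x * y for x, y in zip(feat, sample_feat))
    na = math.sqrt(sum(x * x for x in feat))
    nb = math.sqrt(sum(y * y for y in sample_feat))
    return dot / (na * nb) if na and nb else 0.0

def select_targets(images, scene, precision_m=500.0, threshold=0.8):
    # Stage one (step 1231): keep images shot within the position matching
    # precision of the preset scene -> images to be determined.
    candidates = [img for img in images
                  if _distance_m(img["location"], scene["location"]) <= precision_m]
    # Stage two (step 1232): among candidates, keep those whose image features
    # actually match the scene's sample image, which filters out photos taken
    # nearby but facing away from the scene.
    return [img["id"] for img in candidates
            if _feature_match(img["features"], scene["sample_features"]) > threshold]
```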
In another embodiment of the present disclosure, before any of the above feature matching steps, such as step 1211, step 1221, or step 1231, or even before step 11, as shown in fig. 5, the method may further include:
in step 101, acquiring historical user data within a preset time range before the image set shooting time;
assume that the initial shooting time of the preset image set is January 1, 2017. Considering that a user may consult travel information or make travel plans before starting to travel, the electronic device can obtain historical user data, such as the user's internet browsing records, within a preset time range one month in advance, namely December 1, 2016 to December 31, 2016, so that the user's travel destination can be analyzed later.
In step 102, the preset reference information is determined by analyzing the historical user data.
For example, information such as the current user's historical communication records and information query records within a preset historical time period before the shooting time of the preset image set may be acquired through artificial neural network technology, and location information possibly corresponding to the preset image set, such as Paris, may be learned in depth; the geographical positions of the famous scenes included in that location and/or reference image information of those scenes may then be stored in advance as reference information. When the preset reference information includes such information, the priority of this portion of the reference information can be raised, so that when the electronic device determines the cover image, it preferentially matches against the preset reference information with higher priority, improving the matching efficiency of the target image.
In step 13, a cover image of the image collection is determined from the target image.
In the embodiment of the present disclosure, the electronic device may determine a cover image of a preset image set based on the target image, so that the determined cover image is set as a cover of the preset image set for display.
The implementation of step 13 may include at least two of the following cases:
in the first case, the number of the target images is less than or equal to the number of the cover images to be determined, and the target images are directly determined as the cover images of the preset image set.
For example, assuming that the number of the preset cover images is 1, if the number of the target images determined in any of the above cases is also 1, the target image determined in the above step 12 may be directly determined as the cover image of the preset image set.
In the second case, the number of the target images is larger than that of the cover images to be determined, and the cover images are determined according to the matching degree of each target image.
The implementation of this case applies to either of the cases of step 12 described above. Referring to FIG. 6, another flowchart for determining a cover image according to an exemplary embodiment, step 13 may include:
in step 131, determining a matching degree of each target image;
the matching degree can be a distance matching degree, an image feature matching degree, or a comprehensive matching degree value calculated according to the distance matching degree and the image feature matching degree and preset weight.
Taking the image feature matching degree as an example, assuming that a plurality of target images, such as the image 001, the image 020, the image 065, and the image 080, are matched from the preset image set according to the above steps 1221 and 1222, in this embodiment of the present disclosure, the matching degree between each target image and the corresponding sample image may also be determined.
In step 132, a preset number of target images are acquired in the order of the matching degree from high to low, and the target images are determined as cover images of the preset image set.
Continuing the above example, assume that the matching degree between each target image and the corresponding sample image is as shown in Table 2 below:

[Table 2 is rendered as an image in the original publication; it lists the matching degree of each of the target images 001, 020, 065 and 080, with image 065 having the highest matching degree and image 080 the second highest.]

Table 2
For example, if the preset image set needs one cover image, in the embodiment of the present disclosure, the target image with the highest matching degree, that is, image 065 shown in Table 2, may be determined as the cover image. Similarly, if the preset number of cover images is 2, meaning that two images need to be selected as the cover image combination, the two target images with the highest matching degrees, namely images 065 and 080 in Table 2, are determined as the cover images.
The manner of determining the cover image based on the distance matching degree, the comprehensive matching degree, and the like of the target image is similar to the above example, and will not be described here again.
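Steps 131 and 132, sorting target images by matching degree and taking the preset number of cover images, can be sketched as follows; the image numbers and degree values below are hypothetical stand-ins for the Table 2 data.

```python
def pick_covers(match_degrees, cover_count=1):
    # Sort target images by matching degree from high to low (step 131) and
    # take the preset number of cover images (step 132).
    ranked = sorted(match_degrees.items(), key=lambda kv: kv[1], reverse=True)
    return [image_id for image_id, _ in ranked[:cover_count]]
```

The same helper works whether the degree is a distance matching degree, an image feature matching degree, or a weighted comprehensive value, since only the ordering matters.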
In another embodiment of the present disclosure, other image sets that already have cover images may be stored in the electronic device; these may be referred to as historical image sets. Referring to FIG. 7, another flowchart for determining a cover image according to an exemplary embodiment, after step 13, the method may further include:
in step 14, determining a similarity between a cover image of the historical image set and a current cover image;
the current cover image is a cover image determined for a preset image set at present. Taking the currently determined cover image as an image, after determining the cover image of a preset image set, in order to avoid that the recognizability of the current cover image is lower than that of the cover image of the historical image set, before setting the cover image for the preset image set, the electronic device may further calculate the similarity between the current cover image and the cover images of the historical image sets according to a related image matching algorithm, and determine whether the currently determined cover image is easily visually confused with the existing cover image, resulting in the lower recognizability of the preset image set.
In step 15, if the similarity is greater than or equal to a preset threshold, the current cover image is updated.
In the embodiment of the present disclosure, the similarity value obtained in step 14 may be compared with a preset threshold, for example 80%. If the similarity is greater than or equal to the preset threshold, it is determined that the current cover image needs to be replaced, or that other feature information of the preset image set needs to be added to the current cover image so that it is clearly distinguishable from the cover images of the historical image sets; the updated cover image may then be set as the cover image of the preset image set. Conversely, if the similarity is smaller than the preset threshold, the current cover image is not visually confused with the cover images of the historical image sets, and it may be set as the cover image of the preset image set in a subsequent step.
In the embodiment of the present disclosure, if the current cover image is similar to the cover image of the history image set, the current cover image may be updated in any one of the following ways:
the method comprises the steps of adding preset characteristic information, such as shooting time information of the preset image set, into a current cover image, so that the current cover image has remarkable characteristics, a user can distinguish the cover image of a historical image set conveniently, and the user can position the preset image set quickly.
In a second way, the cover image is reselected from the target image list according to a preset rule; for example, the target image whose matching degree ranks immediately after the current cover image is updated to be the cover image of the preset image set, so as to distinguish it from the cover images of the historical image sets. Compared with the historical cover image, the updated cover image may be an image that has the same scene background but contains different character image information.
In a third way, a specified image of the preset image set may be determined as the updated cover image by default according to the related art.
In a fourth way, a user operation entrance is provided in the application interface displaying the target image list, for the user to reselect a cover image from the matched target images; image selection indication information triggered by the user based on the target image list is acquired; and the cover image is redetermined from the remaining target images according to the image selection indication information.
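The similarity check of steps 14 and 15 combined with the second update way can be sketched as follows; the ranked candidate list and the pairwise similarity function are illustrative assumptions.

```python
def update_cover(ranked_targets, history_covers, similarity, threshold=0.8):
    # ranked_targets: target image ids ordered by matching degree, best first.
    # similarity: hypothetical function returning a value in [0, 1] for two images.
    # Walk down the ranked list (the second update way) until a candidate is
    # visually distinct from every historical cover; fall back to the best
    # candidate if none qualifies.
    for candidate in ranked_targets:
        if all(similarity(candidate, old) < threshold for old in history_covers):
            return candidate
    return ranked_targets[0]
```

With no historical covers stored, the top-ranked target image is used directly; otherwise a lower-ranked but distinct image may be chosen to keep the preset image set recognizable.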
Referring to fig. 8, a schematic diagram of an application scene for determining a cover image according to an exemplary embodiment, the cover image determined for the current electronic album, namely album 3, is shown in fig. 8; an image of the Eiffel Tower, that is, the target image, is determined as the cover of electronic album 3, so that the user can quickly locate the album of the visit to Paris, improving user experience.
In the embodiment of the disclosure, when the electronic device finds that the currently determined cover image has a high similarity with a cover image of a historical image set, the current cover image can be updated automatically or by manual selection in order to enhance the distinctiveness of the cover image of the preset image set, actively avoiding the use of a highly similar image as the cover of the current image set. The updated cover image improves the recognizability of the preset image set, allowing the user to locate it quickly and intuitively, which increases the degree of intelligence of the electronic device and improves user experience.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or a combination of acts, it will be appreciated by those skilled in the art that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders or concurrently.
Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
Corresponding to the embodiment of the application function implementation method, the disclosure also provides an embodiment of an application function implementation device and a corresponding terminal.
Referring to FIG. 9, a block diagram of an apparatus for determining a cover image is shown according to an exemplary embodiment, the apparatus may include:
a feature obtaining module 21 configured to obtain feature information of a preset image included in a preset image set;
the matching module 22 is configured to match preset reference information according to the characteristic information to obtain a target image;
a cover image determination module 23 configured to determine a cover image of the image set from the target image;
wherein the preset reference information includes at least one of: the method comprises the steps of presetting geographic position information of a scene, and presetting sample image information of the scene.
Referring to fig. 10, a block diagram of another apparatus for determining a cover image according to an exemplary embodiment is shown, and based on the embodiment of the apparatus shown in fig. 9, the feature obtaining module 21 includes at least one of the following sub-modules:
a first feature acquisition sub-module 211 configured to acquire shooting geographical location information of the preset image;
the second feature obtaining sub-module 212 is configured to obtain image feature extraction information of the preset image by performing preset image feature extraction on the preset image.
In an embodiment of the apparatus of the present disclosure, the preset reference information may include: geographical location information of at least one preset scene. Accordingly, referring to fig. 11, a block diagram of another apparatus for determining a cover image according to an exemplary embodiment, on the basis of the embodiment of the apparatus shown in fig. 9, the matching module 22 may include:
a location matching sub-module 2211 configured to acquire, among the geographical location information of the preset scene, target geographical location information matched with the shooting geographical location information of the preset image;
a first target determination sub-module 2212 configured to determine the target image according to the target geographical location information.
Referring to fig. 12, a block diagram of another apparatus for determining a cover image according to an exemplary embodiment, on the basis of the apparatus embodiment shown in fig. 11, the position matching sub-module 2211 may include:
a matching accuracy determination unit 22111 configured to determine preset position matching accuracy of the geographical position information;
a position matching unit 22112 configured to match the shooting geographical position information of the image with preset geographical position information according to the position matching accuracy.
Referring to fig. 13, another block diagram of an apparatus for determining a cover image according to an exemplary embodiment, based on the apparatus embodiment shown in fig. 11, the first object determining sub-module 2212 may include:
a first target image determining unit 22121 configured to determine an image satisfying a preset position matching condition as the target image, the preset position matching condition including: the distance between the shooting geographical position of the image and the geographical position of the preset scene satisfies the preset position matching precision; alternatively,
a second target image determining unit 22122 configured to determine a sample image of a preset scene corresponding to the target geographical position information as the target image.
In another apparatus embodiment of the present disclosure, the preset reference information may include: at least one sample image of a preset scene;
referring to fig. 14, a block diagram of another apparatus for determining a cover image according to an exemplary embodiment is shown, and based on the embodiment of the apparatus shown in fig. 9, the matching module 22 may include:
an image feature matching sub-module 2221 configured to acquire, from the sample image information of the preset scene, a target sample image matched with the image feature extraction information of the preset image;
a second target determination submodule 2222 configured to determine the target image from the target sample image.
Referring to fig. 15, a block diagram of another apparatus for determining a cover image according to an exemplary embodiment is shown, and based on the embodiment of the apparatus shown in fig. 9, the matching module 22 may include:
an initial matching sub-module 2231 configured to match the geographical location information of the preset scene according to the shooting geographical location information of the preset image;
an initial determination sub-module 2232, configured to determine that the preset image is an image to be determined that includes a preset scene image if a distance between a shooting geographical position of the preset image and a geographical position of a preset scene is within a preset distance range;
an exact matching sub-module 2233 configured to determine an image matching degree between the image feature information of the image to be determined and a sample image of a preset scene, where the sample image of the preset scene is a sample image corresponding to a geographic location of the preset scene;
a target image determining sub-module 2234 configured to determine the preset image as the target image if the image matching degree exceeds a preset matching threshold.
In an embodiment of the present disclosure, if the number of the target images is greater than the number of the preset cover images, referring to fig. 16, a block diagram of another apparatus for determining a cover image according to an exemplary embodiment, on the basis of the apparatus embodiment shown in fig. 9, the cover image determining module 23 may include:
a matching degree determination sub-module 231 configured to determine a matching degree of each of the target images;
a cover image determination sub-module 232 configured to acquire a preset number of target images in the order of the matching degree from high to low, and determine the target images as cover images of the preset image set.
Referring to fig. 17, a block diagram of another apparatus for determining a cover image according to an exemplary embodiment is shown, and based on the embodiment of the apparatus shown in fig. 9, the apparatus may further include:
a similarity determination module 24 configured to determine a similarity between a cover image of the historical image set and a current cover image;
an image updating module 25 configured to update the current cover image if the similarity is greater than or equal to a preset threshold.
Referring to fig. 18, a block diagram of another apparatus for determining a cover image according to an exemplary embodiment is shown, and on the basis of the embodiment of the apparatus shown in fig. 9, before the matching module 22, the apparatus may further include:
a data acquisition module 201 configured to acquire historical user data within a preset time range before the image set shooting time;
a reference information determination module 202 configured to determine the preset reference information by analyzing the historical user data.

With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments related to the method, and will not be elaborated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
In addition, the present disclosure also provides an electronic device, which may include: a processor and a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring characteristic information of a preset image included in a preset image set;
matching preset reference information according to the characteristic information to obtain a target image, wherein the characteristic information of the target image is matched with the preset reference information;
determining a cover image of the preset image set according to the target image;
wherein the preset reference information includes at least one of: the method comprises the steps of presetting geographic position information of a scene, and presetting sample image information of the scene.
It should be further noted that, for other programs stored in the memory, reference is specifically made to the description in the foregoing method flow, and details are not described here again, and the processor is also configured to execute the other programs stored in the memory.
Fig. 19 is a block diagram illustrating an electronic device according to an exemplary embodiment of the present disclosure.
Referring to fig. 19, the electronic device 1900 may be, for example, a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, etc., having a routing function.
Device 1900 includes one or more of the following components: a processing component 1902, a memory 1904, a power component 1909, a multimedia component 1908, an audio component 1910, an input/output (I/O) interface 1912, a sensor component 1914, and a communications component 1916.
The processing component 1902 generally controls the overall operation of the device 1900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1902 may include one or more processors 1920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1902 can include one or more modules that facilitate interaction between the processing component 1902 and other components. For example, the processing component 1902 can include a multimedia module to facilitate interaction between the multimedia component 1908 and the processing component 1902.
The memory 1904 is configured to store various types of data to support operation at the device 1900. Examples of such data include instructions for any application or method operated on device 1900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1904 may be implemented by any type or combination of volatile or non-volatile memory devices, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
A power supply component 1909 provides power to the various components of the device 1900. The power components 1909 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1900.
The multimedia component 1908 includes a screen that provides an output interface between the device 1900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1908 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 1900 is in an operating mode, such as a capture mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1910 is configured to output and/or input audio signals. For example, the audio component 1910 may include a microphone (MIC) configured to receive external audio signals when the device 1900 is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory 1904 or transmitted via the communication component 1916. In some embodiments, the audio component 1910 further includes a speaker for outputting audio signals.
The I/O interface 1912 provides an interface between the processing component 1902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 1914 includes one or more sensors to provide status assessments of various aspects of the device 1900. For example, the sensor component 1914 may detect an open/closed state of the device 1900 and the relative positioning of components, such as the display and keypad of the device 1900. The sensor component 1914 may also detect a change in position of the device 1900 or a component of the device 1900, the presence or absence of user contact with the device 1900, the orientation or acceleration/deceleration of the device 1900, and a change in temperature of the device 1900. The sensor component 1914 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 1914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1916 is configured to facilitate wired or wireless communication between the device 1900 and other devices. The device 1900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1916 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1916 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 1904 comprising instructions, executable by the processor 1920 of the device 1900 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of a device, enable the device to perform a method of determining a cover image, the method comprising:
acquiring characteristic information of a preset image included in a preset image set;
matching preset reference information according to the characteristic information to obtain a target image, wherein the characteristic information of the target image is matched with the preset reference information;
determining a cover image of the preset image set according to the target image;
wherein the preset reference information includes at least one of: geographic position information of a preset scene, and sample image information of the preset scene.
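The three-step flow just recited (acquire feature information, match it against the preset reference information, determine the cover image from the matching target images) can be sketched as below. This is an illustrative reading only; the names `determine_cover_image`, `preset_images`, and the `matches` predicate are hypothetical stand-ins for the geo-location and sample-image matching that the claims detail.

```python
def determine_cover_image(preset_images, reference_info, matches):
    """Sketch of the claimed flow: images whose feature information matches
    the preset reference information become target images, and the cover
    image is chosen from them (here, simply the first match)."""
    target_images = [img for img in preset_images if matches(img, reference_info)]
    return target_images[0] if target_images else None
```

For example, with a trivial tag-equality matcher, an image set tagged sea/mountain and reference information naming "mountain" yields the mountain image as the cover.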
Fig. 20 is a schematic structural diagram of another electronic device 2000 according to an exemplary embodiment.
For example, the electronic device 2000 may be provided as a server. Referring to fig. 20, the electronic device 2000 includes a processing component 2022, which further includes one or more processors, and memory resources, represented by memory 2032, for storing instructions, e.g., applications, executable by the processing component 2022. The application programs stored in the memory 2032 may include one or more modules each corresponding to a set of instructions. Further, the processing component 2022 is configured to execute instructions to perform the above-described method of determining a cover image.
The device 2000 may also include a power component 2026 configured to perform power management of the device 2000, a wired or wireless network interface 2050 configured to connect the device 2000 to a network, and an input/output (I/O) interface 2058. The device 2000 may operate based on an operating system stored in the memory 2032, such as Android, iOS, Windows Server, Mac OS X™, Unix™, Linux, FreeBSD™, or the like.
Wherein the instructions in the memory 2032, when executed by the processing component 2022, enable the electronic device 2000 to perform the above-described method of determining a cover image, comprising:
acquiring characteristic information of a preset image included in a preset image set;
matching preset reference information according to the characteristic information to obtain a target image, wherein the characteristic information of the target image is matched with the preset reference information;
determining a cover image of the preset image set according to the target image;
wherein the preset reference information includes at least one of: geographic position information of a preset scene, and sample image information of the preset scene.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (16)

1. A method of determining a cover image, the method comprising:
acquiring historical user data within a preset time range before the shooting time of a preset image set, and analyzing the historical user data through a deep learning technology to take the geographical position information of a preset scene obtained through analysis and/or the sample image information of the preset scene as preset reference information;
acquiring characteristic information of a preset image included in the preset image set;
matching the preset reference information according to the characteristic information to obtain a target image, wherein the characteristic information of the target image is matched with the preset reference information;
determining a cover image of the image set according to the target image, and updating the current cover image when the similarity between the cover image of the historical image set and the current cover image is greater than or equal to a preset threshold value;
wherein, in the case that the preset reference information contains both the geographical position information of the preset scenery and the sample image information of the preset scenery, the method further comprises: determining, by analyzing the historical user data, the priority of the geographical position information of the preset scenery and of the sample image information of the preset scenery as the preset reference information, so that information matching is preferentially performed on the preset reference information with higher priority when determining the cover image.
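The priority rule at the end of claim 1 — when both kinds of reference information are present, match against the higher-priority kind first — can be sketched as follows. The entry fields (`kind`, `priority`, `match`) are illustrative names, not terms from the patent.

```python
def match_by_priority(image, reference_entries):
    """Try each kind of preset reference information in descending priority
    order; the first kind whose matcher accepts the image wins."""
    ordered = sorted(reference_entries, key=lambda e: e["priority"], reverse=True)
    for entry in ordered:
        if entry["match"](image):
            return entry["kind"]
    return None  # no reference information matched
```

So if sample-image information is analyzed to have the higher priority, it is tried before geographic-position information even when both would match.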
2. The method according to claim 1, wherein the obtaining of the feature information of the preset images included in the preset image set includes at least one of:
acquiring shooting geographical position information of the preset image;
and extracting the preset image characteristics of the preset image to obtain the image characteristic extraction information of the preset image.
3. The method of claim 2, wherein the preset reference information comprises: geographic location information of at least one preset scene;
the matching of preset reference information according to the feature information to obtain a target image comprises:
acquiring target geographical position information matched with the shooting geographical position information of the preset image from the geographical position information of the preset scenery;
and determining the target image according to the target geographical position information.
4. The method of claim 3, wherein determining the target image according to the target geographic location information comprises any one of:
determining an image meeting a preset position matching condition as the target image, wherein the preset position matching condition comprises: the distance between the shooting geographical position of the image and the geographical position of the preset scenery meets the preset position matching precision;
and determining a sample image of a preset scene corresponding to the target geographical position information as the target image.
5. The method of claim 2, wherein the preset reference information comprises: sample image information of at least one preset scene;
the matching of the preset reference information according to the feature information to obtain a target image comprises:
acquiring a target sample image matched with the image feature extraction information of the preset image from the sample image information of the preset scene;
determining the target image from the target sample image.
6. The method according to claim 2, wherein the matching the preset reference information according to the feature information to obtain a target image comprises:
matching the geographical position information of a preset scene according to the shooting geographical position information of the preset image;
if the distance between the shooting geographical position of the preset image and the geographical position of the preset scenery is within a preset distance range, determining the preset image as an image to be determined, which contains a preset scenery image;
determining the image matching degree between the image feature extraction information of the image to be determined and a sample image of a preset scenery, wherein the sample image of the preset scenery is the sample image corresponding to the geographical position of the preset scenery;
and if the image matching degree exceeds a preset matching threshold value, determining the preset image as the target image.
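Claim 6's two-stage matching — a coarse geographic filter followed by a fine comparison against the preset scene's sample image — could look like the sketch below. The haversine distance and cosine similarity are stand-ins chosen for illustration; the patent does not specify the distance formula or how the image matching degree is computed, and the thresholds here are hypothetical.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (latitude, longitude) points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cosine_similarity(a, b):
    """Illustrative image matching degree between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def two_stage_match(image, scene, max_distance_m=500.0, match_threshold=0.8):
    """Stage 1: the shooting position must lie within the preset distance
    range of the scene. Stage 2: the matching degree against the scene's
    sample image must exceed the preset matching threshold."""
    d = haversine_m(image["lat"], image["lon"], scene["lat"], scene["lon"])
    if d > max_distance_m:
        return False  # not even an image-to-be-determined
    score = cosine_similarity(image["features"], scene["sample_features"])
    return score > match_threshold
```

The cheap geographic test prunes most images before the more expensive image-feature comparison runs, which is presumably the point of ordering the two stages this way.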
7. The method of claim 1, wherein if the number of target images is greater than a predetermined number of cover images, the determining the cover image of the predetermined set of images from the target images comprises:
determining the matching degree of each target image;
and acquiring a preset number of target images according to the sequence of the matching degrees from high to low, and determining the target images as cover images of the preset image set.
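Claim 7's selection step — take the preset number of cover images in descending order of matching degree — reduces to a sort and a slice. A minimal sketch; the `(image_id, matching_degree)` tuple shape is an assumption made for illustration.

```python
def select_cover_images(target_images, preset_count):
    """Return up to `preset_count` image ids, highest matching degree first."""
    ranked = sorted(target_images, key=lambda t: t[1], reverse=True)
    return [image_id for image_id, _ in ranked[:preset_count]]
```

With candidates [("a", 0.70), ("b", 0.95), ("c", 0.85)] and a preset count of 2, this returns ["b", "c"].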
8. An apparatus for determining a cover image, the apparatus comprising:
the data acquisition module is used for acquiring historical user data within a preset time range before the shooting time of a preset image set and analyzing the historical user data through a deep learning technology so as to take the geographical position information of a preset scene obtained through analysis and/or the sample image information of the preset scene as preset reference information;
the characteristic acquisition module is configured to acquire characteristic information of preset images included in the preset image set;
the matching module is configured to match the preset reference information according to the characteristic information to obtain a target image;
a cover image determining module configured to determine a cover image of the image set according to the target image and update the current cover image when a similarity between the cover image of the history image set and the current cover image is greater than or equal to a preset threshold;
wherein, in the case that the preset reference information contains both the geographical position information of the preset scenery and the sample image information of the preset scenery, the apparatus further comprises: a reference information determining module configured to determine, by analyzing the historical user data, the priority of the geographical position information of the preset scenery and of the sample image information of the preset scenery as the preset reference information, so that when the cover image is determined, information matching is preferentially performed on the preset reference information with higher priority.
9. The apparatus of claim 8, wherein the feature acquisition module comprises at least one of the following sub-modules:
a first feature acquisition sub-module configured to acquire shooting geographical position information of the preset image;
and the second feature acquisition sub-module is configured to acquire image feature extraction information of the preset image by performing preset image feature extraction on the preset image.
10. The apparatus of claim 9, wherein the preset reference information comprises: geographic location information of at least one preset scene;
the matching module comprises:
the position matching sub-module is configured to acquire target geographical position information matched with the shooting geographical position information of the preset image from the geographical position information of the preset scenery;
a first target determination sub-module configured to determine the target image from the target geographic location information.
11. The apparatus of claim 10, wherein the first target determination submodule comprises:
a first target image determination unit configured to determine, as the target image, an image that satisfies a preset position matching condition including: the distance between the shooting geographical position of the image and the geographical position of the preset scenery meets the preset position matching precision;
and the second target image determining unit is configured to determine a sample image of a preset scene corresponding to the target geographical position information as the target image.
12. The apparatus of claim 9, wherein the preset reference information comprises: sample image information of at least one preset scene;
the matching module comprises:
the image feature matching sub-module is configured to acquire a target sample image matched with the image feature extraction information of the preset image from the sample image information of the preset scene;
a second target determination sub-module configured to determine the target image from the target sample image.
13. The apparatus of claim 9, wherein the matching module comprises:
the initial matching sub-module is configured to match the geographical position information of a preset scene according to the shooting geographical position information of the preset image;
the initial judgment sub-module is configured to determine that the preset image is an image to be determined, which contains a preset scenery image, if the distance between the shooting geographical position of the preset image and the geographical position of the preset scenery is within a preset distance range;
the accurate matching sub-module is configured to determine the image matching degree between the image feature extraction information of the image to be determined and a sample image of a preset scenery, wherein the sample image of the preset scenery is a sample image corresponding to the geographical position of the preset scenery;
a target image determination sub-module configured to determine the preset image as the target image if the image matching degree exceeds a preset matching threshold.
14. The apparatus of claim 8, wherein if the number of target images is greater than a predetermined number of cover images, the cover image determination module comprises:
a matching degree determination sub-module configured to determine a matching degree of each of the target images;
and the cover image determining submodule is configured to acquire a preset number of target images in the sequence from high to low according to the matching degree and determine the target images as the cover images of the preset image set.
15. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements the steps of the method of any of claims 1 to 7.
16. An electronic device, comprising:
a processor and a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring historical user data within a preset time range before the shooting time of a preset image set, and analyzing the historical user data through a deep learning technology to take the geographical position information of a preset scene obtained through analysis and/or the sample image information of the preset scene as preset reference information;
acquiring characteristic information of a preset image included in the preset image set;
matching the preset reference information according to the characteristic information to obtain a target image, wherein the characteristic information of the target image is matched with the preset reference information;
determining a cover image of the image set according to the target image, and updating the current cover image when the similarity between the cover image of the historical image set and the current cover image is greater than or equal to a preset threshold value;
the processor is further configured to:
and, in the case that the preset reference information contains both the geographic position information of the preset scenery and the sample image information of the preset scenery, determining, by analyzing the historical user data, the priority of the geographic position information of the preset scenery and of the sample image information of the preset scenery as the preset reference information, so that information matching is preferentially performed on the preset reference information with higher priority when determining the cover image.
CN201711475410.1A 2017-12-29 2017-12-29 Method and device for determining cover image Active CN108108461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711475410.1A CN108108461B (en) 2017-12-29 2017-12-29 Method and device for determining cover image

Publications (2)

Publication Number Publication Date
CN108108461A CN108108461A (en) 2018-06-01
CN108108461B true CN108108461B (en) 2021-08-31

Family

ID=62214815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711475410.1A Active CN108108461B (en) 2017-12-29 2017-12-29 Method and device for determining cover image

Country Status (1)

Country Link
CN (1) CN108108461B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111400534B (en) * 2020-03-05 2023-09-19 杭州海康威视系统技术有限公司 Cover determination method and device for image data and computer storage medium
CN111479129B (en) * 2020-04-02 2023-04-25 广州酷狗计算机科技有限公司 Live cover determination method, device, server, medium and system
CN112395037B (en) * 2020-12-07 2024-01-05 深圳云天励飞技术股份有限公司 Dynamic cover selection method and device, electronic equipment and storage medium
CN114359933B (en) * 2021-11-18 2022-09-20 珠海读书郎软件科技有限公司 Cover image identification method
CN114329023A (en) * 2021-12-28 2022-04-12 北京市商汤科技开发有限公司 File processing method and device, electronic equipment and computer storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101740086A (en) * 2008-11-11 2010-06-16 三星电子株式会社 Apparatus and method of albuming content
CN103365887A (en) * 2012-03-31 2013-10-23 腾讯科技(深圳)有限公司 Photo album cover forming method and device
CN104123339A (en) * 2014-06-24 2014-10-29 小米科技有限责任公司 Method and device for image management
CN105528450A (en) * 2015-12-23 2016-04-27 北京奇虎科技有限公司 Method and device for naming photo album
CN106021405A (en) * 2016-05-12 2016-10-12 北京奇虎科技有限公司 Method and device for generating photo album cover
WO2017209568A1 (en) * 2016-06-03 2017-12-07 삼성전자주식회사 Electronic device and operation method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9071735B2 (en) * 2013-05-01 2015-06-30 Htc Corporation Name management and group recovery methods and systems for burst shot

Also Published As

Publication number Publication date
CN108108461A (en) 2018-06-01

Similar Documents

Publication Publication Date Title
CN108108461B (en) Method and device for determining cover image
EP3125154B1 (en) Photo sharing method and device
RU2679199C1 (en) Method and device for controlling photoshoot of unmanned aircraft
RU2659746C2 (en) Method and device for image processing
EP3287745B1 (en) Information interaction method and device
WO2017054358A1 (en) Navigation method and device
EP3012796A1 (en) Method and device for acquiring user information
CN105956091B (en) Extended information acquisition method and device
US20140089401A1 (en) System and method for camera photo analytics
EP3125188A1 (en) Method and device for determining associated user
US10313537B2 (en) Method, apparatus and medium for sharing photo
RU2656978C2 (en) Method and device for cloud business card recommendation and method and apparatus
CN104008129B (en) Position information processing method, device and terminal
US10356160B2 (en) Methods and devices for acquiring user information
US20180122421A1 (en) Method, apparatus and computer-readable medium for video editing and video shooting
CN105488074B (en) Photo clustering method and device
CN112146676B (en) Information navigation method, device, equipment and storage medium
CN108027821B (en) Method and device for processing picture
CN113870195A (en) Target map detection model training and map detection method and device
KR20160094307A (en) Apparatus and method for managing photos
CN112825544A (en) Picture processing method and device and storage medium
CN113132531B (en) Photo display method and device and storage medium
JP7359074B2 (en) Information processing device, information processing method, and system
CN113885550B (en) Information processing apparatus, information processing method, and non-transitory storage medium
CN110019894B (en) Position searching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant