CN108924439B - Image processing method and related product - Google Patents

Image processing method and related product Download PDF

Info

Publication number
CN108924439B
CN108924439B (application CN201810751346.3A)
Authority
CN
China
Prior art keywords
image
images
target
information
screening
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810751346.3A
Other languages
Chinese (zh)
Other versions
CN108924439A (en)
Inventor
曹威
陈标
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810751346.3A priority Critical patent/CN108924439B/en
Publication of CN108924439A publication Critical patent/CN108924439A/en
Application granted granted Critical
Publication of CN108924439B publication Critical patent/CN108924439B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Abstract

The application discloses an image processing method and a related product. In the method, an electronic device first obtains M reference images from an image library, where M is an integer greater than 1; generates a target theme according to the M reference images; determines an image screening strategy corresponding to the target theme; screens N target images from the image library according to the image screening strategy; and finally generates a recall video according to the N target images, where the N target images include the M reference images and N is an integer greater than M.

Description

Image processing method and related product
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an image processing method and a related product.
Background
With the rapid development and growing popularity of intelligent terminals (such as smartphones), these devices have become indispensable electronic products in users' daily lives. A mobile phone album and similar applications can screen pictures sharing the same theme according to screening conditions such as time and place, forming a dedicated picture set that allows a user to review pictures from a specific time period or place.
Disclosure of Invention
The embodiments of the present application provide an image processing method and a related product, offering a way to create a recall video and helping to improve the efficiency and speed with which an electronic device creates such a video.
In a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
acquiring M reference images from an image library, wherein M is an integer greater than 1;
generating a target theme according to the M reference images, and determining an image screening strategy corresponding to the target theme;
and screening N target images from the image library according to the image screening strategy, and generating a recall video according to the N target images, wherein the N target images comprise M reference images, and N is an integer greater than M.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring M reference images from an image library, and M is an integer larger than 1;
the determining unit is used for generating a target theme according to the M reference images and determining an image screening strategy corresponding to the target theme;
the screening unit is used for screening N target images from the image library according to the image screening strategy, wherein the N target images comprise the M reference images, and N is an integer larger than M;
and the creating unit is used for generating a recall video according to the N target images.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps described in any one of the methods of the first aspect of the present application.
It can be seen that, in the embodiment of the present application, the electronic device first obtains M reference images from the image library, where M is an integer greater than 1; then generates a target theme according to the M reference images and determines the image screening strategy corresponding to the target theme; screens N target images from the image library according to the image screening strategy; and finally generates a recall video according to the N target images, where the N target images include the M reference images and N is an integer greater than M. In this way, the electronic device determines the target theme of the recall video from only a small number (M) of reference images and then screens and assembles the N target images of the recall video based on that theme, without the user spending extra time selecting the N target images. This achieves fast creation of the recall video and improves the efficiency and speed with which the electronic device creates it.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
Fig. 1A is a schematic structural diagram of an example electronic device provided in an embodiment of the present application;
fig. 1B is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of another image processing method provided in the embodiments of the present application;
FIG. 3 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
fig. 4 is another schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another electronic device provided in the embodiment of the present application.
Detailed description of the invention
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic devices involved in the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices (e.g., smartwatches, smartbands, pedometers, etc.), computing devices or other processing devices connected to a wireless modem, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal equipment (terminal device), and so on, which have wireless communication functions. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The following describes embodiments of the present application in detail.
Referring to fig. 1A, fig. 1A is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 includes a casing 110, a circuit board 120 disposed in the casing 110, and a display screen 130 disposed on the casing 110. A processor 121 is provided on the circuit board 120 and is connected to the display screen 130. The display screen may include a touch display screen, which is used to receive touch operations performed by the user on the electronic device.
Referring to fig. 1B, fig. 1B is a schematic flowchart of an image processing method according to an embodiment of the present application, where the image processing method includes:
101. m reference images are obtained from an image library, wherein M is an integer larger than 1.
In this embodiment of the application, the image library refers to the area of the electronic device where images are stored. M is a small integer, for example a value between 5 and 10. Optionally, the M reference images obtained from the image library may be M images selected by the user through a selection operation on the touch display screen; that is, the user may pick several reference images from the image library by himself. Alternatively, they may be M images that the electronic device recently captured, or downloaded over a network, and then stored.
102. And generating a target theme according to the M reference images, and determining an image screening strategy corresponding to the target theme.
In the embodiment of the present application, the target theme refers to a theme not defined among the existing image themes in the electronic device. The target theme of the M images can be determined from the image information contained in the M reference images: since the image information of each reference image includes a plurality of pieces of image sub-information, the sub-information shared by the M images can be identified, and the target theme is then determined from that shared sub-information.
Optionally, in the step 102, generating the target subject according to the M reference images may include the following steps:
21. acquiring image information of each reference image in the M reference images;
22. determining one or more keywords corresponding to each reference image in the M reference images according to the image information of each reference image in the M reference images;
23. and determining a target keyword according to one or more keywords corresponding to each reference image in the M reference images, and taking the target keyword as the target subject.
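As a concrete illustration, steps 21-23 above can be sketched as follows. This is a minimal, hypothetical sketch: the dictionary field names ("object", "scene") and the frequency heuristic are assumptions made for illustration, not the patent's prescribed implementation.

```python
# Sketch of steps 21-23: derive a target theme from the metadata
# ("image information") of M reference images. Field names are assumed.
from collections import Counter

def generate_target_theme(reference_images):
    """reference_images: one dict of image sub-information per image."""
    keywords = Counter()
    for info in reference_images:              # step 21: image info per image
        for dimension, value in info.items():  # step 22: keywords per image
            keywords[(dimension, value)] += 1
    # step 23: the keyword shared by the most reference images is taken
    # as the target keyword, hence as the target theme.
    (dimension, value), _count = keywords.most_common(1)[0]
    return value

refs = [
    {"object": "Person A", "scene": "seaside"},
    {"object": "Person A", "scene": "street"},
    {"object": "Person A", "scene": "seaside"},
]
print(generate_target_theme(refs))  # "Person A": present in all three images
```

A fuller implementation would break ties using the priority mapping described below rather than raw counts alone.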
In an embodiment of the present application, the image information may include at least one of the following: the shooting object, the shooting time, the shooting location, the shooting scene, and the shooting environment. The shooting object is an object included in the image, for example the target focused on when the image was captured, or the main shooting content among several contents contained in the image; the main shooting content may be the content located in the middle of the image or the content occupying the largest area of the image. The shooting scene is the scene in which the image was captured, and may be, for example and without limitation, daytime, nighttime, sunlight, shade, indoors, outdoors, the sea, or a street. The shooting environment may include at least one of the following: the temperature environment, the lighting environment, and the weather environment, such as a sunny day, a rainy day, a cloudy day, a snowy day, or wind.
The one or more keywords corresponding to each reference image may be derived from image sub-information of one or more dimensions, for example: the shooting object is a person, the shooting time is 20180101, the shooting scene is the seaside. A plurality of keywords can be determined from the sub-information of these dimensions, and the target keyword is then determined from among those keywords.
Optionally, in step 22, the image information of each of the M reference images includes a plurality of pieces of image sub-information, and determining, according to the image information of each of the M reference images, the one or more keywords corresponding to each reference image includes:
a1, counting the number of each type of image sub information in all the image sub information contained in the M reference images, and screening out at least one image sub information of which the number exceeds a preset value in all the image sub information;
a2, extracting at least one keyword in the at least one image sub-information, wherein each image sub-information corresponds to one keyword;
Here, counting the number of each kind of image sub-information among all the sub-information contained in the M reference images means determining how many images share each identical piece of sub-information — for example, the number of images whose shooting object is a person versus a scene; whose shooting time is January versus February; whose shooting location is home versus the office; whose shooting scene is the sea versus a street; whose weather environment is rainy versus snowy; or whose illumination environment is a first illumination intensity of 80 lux versus a second illumination intensity of 100 lux. At least one piece of sub-information whose count exceeds the preset value is then screened out, for example: the shooting object is a person, the shooting location is home, the weather is rainy, and the illumination environment is the first illumination intensity of 80 lux.
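Step A1 can be sketched with a simple counting pass. This is a hedged illustration: the dimension names and the preset value are assumed, and real image sub-information would come from image metadata and image analysis.

```python
# Sketch of step A1: count each kind of image sub-information across the
# reference images and keep the pieces whose count exceeds a preset value.
from collections import Counter

def frequent_sub_information(images, preset_value):
    counts = Counter()
    for info in images:
        for dimension, value in info.items():
            counts[(dimension, value)] += 1
    # Screen out sub-information shared by more than `preset_value` images.
    return {pair for pair, n in counts.items() if n > preset_value}

images = [
    {"object": "person", "location": "home", "weather": "rainy"},
    {"object": "person", "location": "home", "weather": "snowy"},
    {"object": "person", "location": "office", "weather": "rainy"},
]
print(frequent_sub_information(images, 1))
# a set containing ("object", "person"), ("location", "home"), ("weather", "rainy")
```

Step A2 then reduces each surviving piece of sub-information to a keyword, here simply its value.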
Optionally, in step 23, the determining a target keyword according to one or more keywords corresponding to each reference image in the M reference images includes:
a3, determining a target keyword corresponding to the target image sub-information with the highest priority in the one or more keywords according to the preset mapping relation between the image sub-information and the priority.
In the process of extracting at least one keyword from the at least one piece of image sub-information, the keywords may be, for example, "person", "home", "rainy day", or "80 lux". In addition, priorities can be preset for the shooting object, shooting location, shooting weather, and illumination environment, establishing a mapping relationship between image sub-information and priority. The target keyword is then the keyword, among the extracted keywords, that corresponds to the target image sub-information with the highest priority according to this mapping, and that target keyword is taken as the target theme. The following table shows the mapping relationship between image sub-information and priority provided in the embodiment of the present application.
[Table, rendered as an image in the original publication: the mapping relationship between image sub-information and priority.]
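The priority-based selection of step A3 can be sketched as follows, assuming an illustrative priority table; the patent does not fix the concrete ordering, so the values here (lower number means higher priority) are assumptions.

```python
# Assumed priority mapping: dimension -> priority (lower = higher priority).
PRIORITY = {"object": 0, "location": 1, "weather": 2, "illumination": 3}

def target_keyword(candidates):
    """candidates: (dimension, keyword) pairs from step A2. Return the
    keyword whose dimension has the highest preset priority (step A3)."""
    dimension, keyword = min(candidates, key=lambda c: PRIORITY[c[0]])
    return keyword

cands = [("weather", "rainy day"), ("object", "person"), ("location", "home")]
print(target_keyword(cands))  # "person": the shooting object outranks the rest
```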
103. And screening N target images from the image library according to the image screening strategy, and generating a recall video according to the N target images, wherein the N target images comprise the M reference images, and N is an integer greater than M.
In the embodiment of the application, the image screening strategy determined by the electronic device according to the target theme works as follows: images containing the same image sub-information are preliminarily screened from the image library according to the image information in the library, yielding a plurality of primary selection images; each primary selection image is scored as it is screened out; and the N target images are then further screened according to the score values of the primary selection images, so that the target images screened by the electronic device better meet the personalized needs of the user.
Optionally, in the step 103, the step of screening N target images from the image library according to the image screening policy may include the following steps:
31. acquiring image information of other images except the M reference images in the image library to obtain a plurality of image information;
32. screening, from the images within the specified range of the image library, K primary selection images whose image information comprises the target keyword, wherein K is an integer larger than M;
33. scoring the K primary selection images to obtain a plurality of score values, wherein each primary selection image corresponds to one score value;
34. and screening, in descending order of score value, N target images whose number meets a preset numerical range from the K primary selection images, wherein K > N > M and N is a positive integer.
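Steps 31-34 together form a filter-score-rank pipeline. The sketch below is a hypothetical illustration (the dictionary layout, the score function, and counting the M reference images toward the N total are assumptions consistent with the description, not the patent's API):

```python
def screen_target_images(library, references, keyword, score_of, n):
    """Filter the library by the target keyword, score the K primary
    selection images, and keep the highest-scoring until N total."""
    others = [img for img in library if img not in references]            # 31
    primary = [img for img in others if keyword in img["info"].values()]  # 32
    primary.sort(key=score_of, reverse=True)                              # 33-34
    return references + primary[: n - len(references)]

# Toy library: even ids depict "Person A", odd ids depict scenery.
library = [
    {"id": i, "info": {"object": "Person A" if i % 2 == 0 else "Scenery"}}
    for i in range(10)
]
refs = library[:2]
picked = screen_target_images(library, refs, "Person A",
                              lambda img: img["id"], n=4)
print([img["id"] for img in picked])  # [0, 1, 8, 6]
```

The two reference images are kept unconditionally, and the remaining slots are filled by the best-scoring matches (here the score is just the id, so 8 and 6 win).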
The specified range may include any one of the following: a time range, a sorting range, or an area range. Specifically, the K primary selection images may be selected from the images within a specified time range; or from within a specified sorting range of a sequence of images; or from within a specified area range among the images displayed on the display screen, where the specified area range may be a shaped area, for example a rectangular area.
Optionally, in this embodiment of the present application, the following steps may also be performed:
b1, acquiring a preset image sorting mode, and re-sorting all the images in the image library according to the image sorting mode to obtain all the rearranged images;
b2, acquiring a preset image screening starting point and a preset image screening end point based on all the rearranged images;
and B3, determining the specified range according to the image screening starting point and the image screening end point.
The image sorting mode may include any one of several orderings of the library, for example sorting by shooting time. According to the method, the image sorting mode set by the user can be obtained, and the images in the image library are then reordered according to that mode, so that the ordered images make it easier for the user to quickly determine the specified range.
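Steps B1-B3 can be sketched in a few lines. This is a hedged illustration: using shooting time as the sort key and index positions as the screening start and end points are assumptions, since the patent leaves both configurable.

```python
def specified_range_images(library, sort_key, start, end):
    """B1: re-sort all images in the library by the preset sorting mode.
    B2-B3: slice between the preset screening start point and end point
    to obtain the specified range."""
    ordered = sorted(library, key=sort_key)   # B1
    return ordered[start:end]                 # B2-B3

library = [{"time": t} for t in ("20180506", "20180301", "20180601", "20180501")]
window = specified_range_images(library, lambda img: img["time"], 1, 3)
print([img["time"] for img in window])  # ['20180501', '20180506']
```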
During screening, the target keyword is matched against the image information of each candidate image; if the matching succeeds, the image information of that image contains the target keyword, and the image is screened out as a primary selection image.
Optionally, in the step 33, scoring the K primary selection images to obtain a plurality of score values may include the following steps:
c1, acquiring the feature data of each primary selected image in the K primary selected images;
and C2, determining a score value corresponding to the image feature of each primary selection image in the K primary selection images according to the preset mapping relation between the feature data and the score values, and obtaining a plurality of score values.
The feature data may include at least one of the following: for example, facial feature information, such as the expression of a face appearing in the image. For each kind of feature data, a mapping relationship between feature data and score values can be preset, and the score value corresponding to a given feature parameter is then determined according to that mapping.
Optionally, for multiple kinds of feature data, corresponding weights may be set. After the score values corresponding to the multiple kinds of feature data are obtained, a target score value is determined from those score values and the corresponding weights, and the target score value serves as the basis for evaluating the corresponding image. The target score value is calculated as follows:
the target score value is the first score value, the first weight, the second score value, the second weight, the third score value, and the third weight.
For example, the number of images in an image library is typically large — from several hundred to tens of thousands or more — while the number of images used to build a recall video is usually 40 to 100. If a user selected more than 40 images directly, it could take considerable time. Instead, the user may directly select a small number of images, for example 8 reference images, and the target theme is then generated from the image information of those 8 reference images. Specifically, image sub-information such as the shooting object, shooting time, shooting location, shooting scene, and shooting environment is obtained from the image information of each of the 8 reference images, and the count of each piece of sub-information is determined. The sub-information of the 8 images may be as shown in the following table:
Reference image | Shooting object | Shooting time | Shooting location | Shooting scene | Shooting environment
1 | Person A | 20180501 | Place A | Seaside | Sunny
2 | Person A | 20180501 | Place B | Seaside | Sunny
3 | Person A | 20180301 | Place A | Street | Rainy
4 | Scenery 1 | 20180501 | Place C | Street | Cloudy
5 | Scenery 2 | 20180506 | Place C | Indoors | Sunny
6 | Person A | 20180501 | Place C | Outdoors | Snowy
7 | Scenery 2 | 20180601 | Place A | Indoors | Cloudy
8 | Person B | 20180601 | Place B | Outdoors | Snowy
As shown in the table above, half of the 8 images share the following image sub-information: the shooting object is Person A, and the shooting time is 20180501. Two keywords can therefore be determined: "Person A" and the time "20180501". According to the preset mapping relationship between image sub-information and priority, the target keyword is determined to be "Person A"; that is, the target theme the user wants is that the shooting object is Person A. Then, 80 primary selection images containing Person A can be screened from a specified range, for example images shot in the last week; the 80 primary selection images are scored to obtain 80 score values; and finally the 40 highest-scoring target images are screened from the 80 primary selection images. A recall video about Person A is then created from those 40 target images. In this way, most of the images used to create the recall video are screened automatically based on the images selected by the user.
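The majority count behind the worked example can be verified with a few lines. The metadata below is transcribed from the table above; "Person A" and the scenery labels are the translated names:

```python
from collections import Counter

# (shooting object, shooting time) for the eight reference images.
rows = [
    ("Person A", "20180501"), ("Person A", "20180501"),
    ("Person A", "20180301"), ("Scenery 1", "20180501"),
    ("Scenery 2", "20180506"), ("Person A", "20180501"),
    ("Scenery 2", "20180601"), ("Person B", "20180601"),
]
objects = Counter(obj for obj, _ in rows)
times = Counter(t for _, t in rows)
# Both candidate keywords appear in half of the eight images; since the
# shooting object outranks the shooting time in the priority mapping,
# "Person A" becomes the target keyword.
print(objects["Person A"], times["20180501"])  # 4 4
```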
It can be seen that, in the embodiment of the present application, the electronic device first obtains M reference images from the image library, where M is an integer greater than 1; then generates a target theme according to the M reference images and determines the image screening strategy corresponding to the target theme; screens N target images from the image library according to the image screening strategy; and finally generates a recall video according to the N target images, where the N target images include the M reference images and N is an integer greater than M. Thus, the electronic device determines the target theme of the recall video from a small number (M) of reference images and then screens and assembles the N target images of the recall video based on that theme, without the user spending extra time selecting the N target images, so the recall video can be created quickly and the efficiency and speed with which the electronic device creates it are improved.
Referring to fig. 2, fig. 2 is a schematic flow chart of another image processing method according to an embodiment of the present application, and the image processing method described in this embodiment is applied to the electronic device shown in fig. 1A, and the method may include the following steps:
201. m reference images are obtained from an image library, wherein M is an integer larger than 1.
202. And acquiring the image information of each reference image in the M reference images.
203. And determining one or more keywords corresponding to each reference image in the M reference images according to the image information of each reference image in the M reference images.
204. And determining a target keyword according to one or more keywords corresponding to each reference image in the M reference images, and taking the target keyword as the target subject.
205. And determining an image screening strategy corresponding to the target subject, and screening N target images from the image library according to the image screening strategy.
206. And generating a recall video according to the N target images, wherein the N target images comprise the M reference images, and N is an integer larger than M.
The specific implementation process of the steps 201-206 can refer to the corresponding description in the method shown in fig. 1B, and will not be described herein again.
It can be seen that, in the embodiment of the present application, the electronic device first obtains M reference images from an image library, where M is an integer greater than 1; then obtains the image information of each of the M reference images; determines one or more keywords corresponding to each reference image according to that image information; determines a target keyword from those keywords and takes it as the target theme; determines the image screening policy corresponding to the target theme; screens N target images from the image library according to that policy; and finally generates a recall video according to the N target images, where N is an integer greater than M. Thus, the electronic device determines the target theme of the recall video from a small number (M) of reference images and then screens and creates the N target images of the recall video based on that theme, without the user spending extra time, so the recall video can be created quickly and the efficiency and speed with which the electronic device creates it are improved.
Referring to fig. 3, fig. 3 is a schematic flowchart of another image processing method according to an embodiment of the present application, applied to the electronic device shown in fig. 1A. The image processing method includes:
301. m reference images are obtained from an image library, wherein M is an integer larger than 1.
302. And acquiring the image information of each reference image in the M reference images.
303. And determining one or more keywords corresponding to each reference image in the M reference images according to the image information of each reference image in the M reference images.
304. And determining a target keyword according to one or more keywords corresponding to each reference image in the M reference images, and taking the target keyword as the target subject.
305. And determining an image screening strategy corresponding to the target theme, and acquiring image information of other images except the M images in the image library to obtain a plurality of image information.
306. And screening, from the images within a specified range of the image library, K primary selection images whose image information comprises the target keyword, wherein K is an integer larger than M.
307. And scoring the K primary selection images to obtain a plurality of score values, wherein each primary selection image corresponds to one score value.
308. And screening, in descending order of score value, N target images whose number meets a preset numerical range from the K primary selection images, wherein K > N > M and N is a positive integer.
309. And generating a recall video according to the N target images, wherein N is an integer larger than M.
The specific implementation process of steps 301-309 can refer to the corresponding description in the method shown in fig. 1B, and will not be described herein again.
It can be seen that, in the embodiment of the present application, the electronic device first obtains M reference images from an image library; then obtains the image information of each of the M reference images; determines one or more keywords corresponding to each reference image according to that image information; determines a target keyword from those keywords and takes it as the target theme; determines the image screening policy corresponding to the target theme; obtains the image information of the images in the library other than the M reference images; screens, from the images within a specified range of the image library, K primary selection images whose image information includes the target keyword; scores the K primary selection images to obtain a score value for each; screens, in descending order of score value, N target images whose number meets a preset numerical range from the K primary selection images; and finally generates a recall video according to the N target images. Thus, the electronic device determines the target theme of the recall video from a small number (M) of reference images and then screens and creates the N target images of the recall video based on that theme, without the user spending extra time selecting them, so the recall video can be created quickly and the efficiency and speed with which the electronic device creates it are improved.
The following describes a device for implementing the above image processing method.
In accordance with the above, please refer to fig. 4, which shows an electronic device according to an embodiment of the present application, including: a processor and a memory; and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of:
acquiring M reference images from an image library, wherein M is an integer greater than 1;
generating a target theme according to the M reference images, and determining an image screening strategy corresponding to the target theme;
and screening N target images from the image library according to the image screening strategy, and generating a recall video according to the N target images, wherein the N target images comprise M reference images, and N is an integer greater than M.
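The three steps above can be sketched end to end as follows. This is a minimal illustration only: the image records, the `score` and `keywords` fields, and the function name `create_recall_video` are assumptions made for the sketch, and the actual video rendering step is out of scope.

```python
from collections import Counter

def create_recall_video(library, reference_images, n_targets):
    """Derive a target theme from M reference images, screen N target
    images from the library, and return them for video assembly."""
    # Step 1: derive a target theme (here, the most common keyword).
    keywords = Counter()
    for img in reference_images:
        keywords.update(img["keywords"])
    target_theme = keywords.most_common(1)[0][0]

    # Step 2: screen candidates (excluding the references) whose
    # image information includes the target theme.
    candidates = [img for img in library
                  if img not in reference_images
                  and target_theme in img["keywords"]]

    # Step 3: keep the highest-scoring candidates; the N target images
    # always include the M reference images.
    candidates.sort(key=lambda img: img["score"], reverse=True)
    return reference_images + candidates[:n_targets - len(reference_images)]
```

A real implementation would then render the returned images into the recall video.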
In one possible example, in the generating of the target subject from the M reference images, the program includes instructions for performing the steps of:
acquiring image information of each reference image in the M reference images;
determining one or more keywords corresponding to each reference image in the M reference images according to the image information of each reference image in the M reference images;
and determining a target keyword according to one or more keywords corresponding to each reference image in the M reference images, and taking the target keyword as the target subject.
In one possible example, the image information of each of the M reference images includes a plurality of image sub-information, and the program includes instructions for performing the following steps in determining one or more keywords corresponding to each of the M reference images according to the image information of each of the M reference images:
counting the quantity of each type of image sub-information in all image sub-information contained in the M reference images, and screening out at least one image sub-information of which the quantity exceeds a preset value in all the image sub-information;
extracting at least one keyword in the at least one image sub-information, wherein each image sub-information corresponds to one keyword;
in the aspect of determining a target keyword based on one or more keywords corresponding to each of the M reference images, the program includes instructions for:
and determining a target keyword corresponding to the target image sub-information with the highest priority in the one or more keywords according to a mapping relation between preset image sub-information and priority.
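The two-stage selection above — count each type of image sub-information across the M reference images, keep those whose count exceeds a preset value, then take the keyword of the sub-information type with the highest priority — can be sketched as follows. The sub-information types and the priority table are hypothetical, since the patent leaves them open:

```python
from collections import Counter

# Hypothetical priority table: lower number = higher priority.
PRIORITY = {"face": 0, "location": 1, "timestamp": 2, "scene": 3}

def target_keyword(reference_images, preset_value):
    """reference_images: list of dicts mapping sub-info type -> keyword."""
    # Count how often each (type, keyword) pair occurs across the M images.
    counts = Counter((t, kw) for img in reference_images
                     for t, kw in img.items())
    # Keep sub-information whose count exceeds the preset value.
    frequent = [(t, kw) for (t, kw), c in counts.items() if c > preset_value]
    # Pick the keyword whose sub-information type has the highest priority.
    t, kw = min(frequent, key=lambda pair: PRIORITY[pair[0]])
    return kw
```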
In one possible example, in the screening of N target images from the image library according to the image screening policy, the program includes instructions for:
acquiring image information of the images in the image library other than the M reference images, to obtain a plurality of pieces of image information;
screening, from the images in a specified range of the image library, K primary selection images whose image information includes the target keyword, wherein K is an integer greater than M;
scoring the K primary selection images to obtain a plurality of score values, wherein each primary selection image corresponds to one score value;
and screening, in descending order of the score values, N target images whose number meets a preset numerical range from the K primary selection images, wherein K &gt; N &gt; M.
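The screening described above — one score per primary selection image, sorted from high to low, keeping the top N — reduces to a sort-and-slice. The helper below is an illustrative sketch, not the claimed implementation; `score_of` stands in for whatever scoring function is used:

```python
def select_top_n(primary_images, score_of, n):
    """Score K primary selection images and keep the N highest-scoring ones."""
    scored = [(score_of(img), img) for img in primary_images]  # one score each
    scored.sort(key=lambda pair: pair[0], reverse=True)        # high to low
    return [img for _, img in scored[:n]]                      # keep top N of K
```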
In one possible example, the program includes instructions for further performing the steps of:
acquiring a preset image sorting mode, and reordering all images in the image library according to the image sorting mode to obtain all rearranged images;
acquiring a preset image screening starting point and an image screening end point based on all the rearranged images;
and determining the specified range according to the image screening starting point and the image screening end point.
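A minimal sketch of deriving the specified range, assuming the preset sorting mode is an ordinary sort key (e.g., shooting time) and the screening start and end points are indices into the reordered library — both assumptions, since the patent does not fix their form:

```python
def specified_range(library, sort_key, start, end):
    """Reorder the whole library with a preset sorting mode, then take the
    slice between the preset screening start point and end point."""
    reordered = sorted(library, key=sort_key)  # e.g., by shooting time
    return reordered[start:end + 1]            # end point inclusive
```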
In one possible example, in said scoring the K pre-selected images, resulting in a plurality of score values, the program further comprises instructions for performing the steps of:
acquiring characteristic data of each primary selected image in the K primary selected images;
and determining a score value corresponding to the image characteristics of each primary selection image in the K primary selection images according to a preset mapping relation between the characteristic data and the score values to obtain a plurality of score values.
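Scoring via a preset mapping between feature data and score values can be sketched as a table lookup; the feature names and score values below are invented for illustration, as the patent leaves the concrete features open:

```python
# Hypothetical preset mapping from feature data to score contributions.
FEATURE_SCORES = {"sharp": 3, "faces_visible": 2, "well_exposed": 1}

def score_image(feature_data):
    """Sum the preset score value of each feature the image exhibits."""
    return sum(FEATURE_SCORES.get(f, 0) for f in feature_data)
```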
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image processing apparatus according to the present embodiment. The image processing apparatus is applied to an electronic device, and includes an acquisition unit 501, a determination unit 502, a filtering unit 503, and a creation unit 504, wherein,
the acquiring unit 501 is configured to acquire M reference images from an image library, where M is an integer greater than 1;
the determining unit 502 is configured to generate a target theme according to the M reference images, and determine an image screening policy corresponding to the target theme;
the screening unit 503 is configured to screen N target images from the image library according to the image screening policy;
the creating unit 504 is configured to generate a recall video according to the N target images, where N is an integer greater than M.
Optionally, in the aspect of generating the target subject according to the M reference images, the determining unit 502 is specifically configured to:
acquiring image information of each reference image in the M reference images;
determining one or more keywords corresponding to each reference image in the M reference images according to the image information of each reference image in the M reference images;
and determining a target keyword according to one or more keywords corresponding to each reference image in the M reference images, and taking the target keyword as the target subject.
Optionally, the image information of each reference image in the M reference images includes a plurality of image sub information, and in the aspect of determining one or more keywords corresponding to each reference image in the M reference images according to the image information of each reference image in the M reference images, the determining unit 502 is specifically configured to:
counting the quantity of each type of image sub-information in all image sub-information contained in the M reference images, and screening out at least one image sub-information of which the quantity exceeds a preset value in all the image sub-information;
extracting at least one keyword in the at least one image sub-information, wherein each image sub-information corresponds to one keyword;
in the aspect of determining a target keyword according to one or more keywords corresponding to each reference image in the M reference images, the determining unit 502 is specifically configured to:
and determining a target keyword corresponding to the target image sub-information with the highest priority in the one or more keywords according to a mapping relation between preset image sub-information and priority.
Optionally, the screening unit 503 is specifically configured to:
acquiring image information of the images in the image library other than the M reference images, to obtain a plurality of pieces of image information;
screening, from the images in a specified range of the image library, K primary selection images whose image information includes the target keyword, wherein K is an integer greater than M;
scoring the K primary selection images to obtain a plurality of score values, wherein each primary selection image corresponds to one score value;
and screening, in descending order of the score values, N target images whose number meets a preset numerical range from the K primary selection images.
Optionally, the obtaining unit 501 is further configured to obtain a preset image sorting mode, and reorder all the images in the image library according to the image sorting mode to obtain all the rearranged images;
acquiring a preset image screening starting point and a preset image screening end point based on all the rearranged images;
the determining unit 502 is further configured to determine the specified range according to the image screening start point and the image screening end point.
Optionally, in the aspect of scoring the K primary selection images to obtain a plurality of score values, the screening unit 503 is specifically configured to:
acquiring characteristic data of each primary selected image in the K primary selected images;
and determining a score value corresponding to the image characteristics of each primary selection image in the K primary selection images according to a preset mapping relation between the characteristic data and the score values to obtain a plurality of score values.
It can be seen that, in the image processing apparatus described in this embodiment of the present application, the electronic device first obtains M reference images from the image library, where M is an integer greater than 1, then generates a target theme according to the M reference images and determines an image screening strategy corresponding to the target theme, screens N target images from the image library according to that strategy, and finally generates a recall video from the N target images, where the N target images include the M reference images and N is an integer greater than M. The electronic device thus determines the target theme of the recall video from only a small number (M) of reference images and then screens the N target images based on that theme, so the user does not need to spend time selecting them. The recall video can therefore be created quickly, which improves the efficiency and speed with which the electronic device creates recall videos.
It is to be understood that the functions of each program module of the image processing apparatus of this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not described herein again.
Fig. 6 shows, for convenience of description, only the portions related to the embodiments of the present application; for details of the specific technology that are not disclosed, please refer to the method portion of the embodiments. The electronic device may be any terminal device, including a mobile phone, a tablet computer, a PDA (personal digital assistant), a POS (point-of-sale) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example.
Fig. 6 is a block diagram illustrating a partial structure of an electronic device provided in an embodiment of the present application. As shown in fig. 6, the electronic device 610 may include control circuitry, which may include storage and processing circuitry 630. The storage and processing circuitry 630 may be a memory, such as hard-drive memory, non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid-state drive), or volatile memory (e.g., static or dynamic random access memory); the embodiments of the present application are not limited thereto. Processing circuitry in the storage and processing circuitry 630 may be used to control the operation of the electronic device 610. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application-specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuit 630 may be used to run software in the electronic device 610 such as an internet browsing application, a Voice Over Internet Protocol (VOIP) phone call application, an email application, a media playing application, operating system functions, and the like. Such software may be used to perform control operations such as, for example, camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as status indicator lights of light emitting diodes, touch event detection based on a touch sensor, functionality associated with displaying information on multiple (e.g., layered) displays, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the electronic device 610, and the like, without limitation of embodiments of the present application.
The electronic device 610 may also include input-output circuitry 642. The input-output circuitry 642 may be used to enable the electronic device 610 to receive data from external devices and to output data to external devices. The input-output circuitry 642 may further include a sensor 632. The sensor 632 may include an ambient light sensor, a proximity sensor based on light and capacitance, a touch sensor (e.g., an optical and/or capacitive touch sensor or an ultrasonic sensor, where the touch sensor may be part of a touch display screen or may be used independently as a touch sensor structure), an acceleration sensor, and other sensors.
Input-output circuitry 642 may also include one or more displays, such as display 614. Display 614 may include one or a combination of liquid crystal displays, organic light emitting diode displays, electronic ink displays, plasma displays, displays using other display technologies. Display 614 may include an array of touch sensors (i.e., display 614 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The electronic device 610 can also include an audio component 636. The audio component 636 can be used to provide audio input and output functionality for the electronic device 610. The audio component 636 in the electronic device 610 may include speakers, microphones, buzzers, tone generators, and other components for generating and detecting sound.
The communications circuitry 638 may be used to provide the electronic device 610 with the ability to communicate with external devices. The communication circuits 638 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 638 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, wireless communication circuitry in communication circuitry 638 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving near field coupled electromagnetic signals. For example, the communication circuitry 638 may include a near field communication antenna and a near field communication transceiver. The communications circuit 638 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and so forth.
The electronic device 610 may further include a battery, power management circuitry, and other input-output units 640. The input-output unit 640 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, etc.
A user may input commands through the input-output circuitry 642 to control operation of the electronic device 610 and may use output data of the input-output circuitry 642 to enable receipt of status information and other outputs from the electronic device 610.
In the foregoing embodiments shown in fig. 1B, fig. 2, or fig. 3, the method flows of the steps may be implemented based on the structure of the electronic device.
In the embodiments shown in fig. 4 and 5, the functions of the units may be implemented based on the structure of the electronic device.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute a part or all of the steps of any one of the image processing methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the image processing methods as set forth in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable memory, which may include a flash memory disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring M reference images from an image library, wherein M is an integer greater than 1;
generating a target theme according to the M reference images, comprising: the image information of each reference image comprises a plurality of image sub-information, the same image sub-information contained in the M reference images is determined, and the target theme is determined according to the same image sub-information, wherein the target theme is a theme which is not defined yet;
determining an image screening strategy corresponding to the target subject;
and screening N target images from the image library according to the image screening strategy, and generating a recall video according to the N target images, wherein the N target images comprise M reference images, and N is an integer greater than M.
2. The method of claim 1, wherein generating a target subject from the M reference images comprises:
acquiring image information of each reference image in the M reference images;
determining one or more keywords corresponding to each reference image in the M reference images according to the image information of each reference image in the M reference images;
and determining a target keyword according to one or more keywords corresponding to each reference image in the M reference images, and taking the target keyword as the target subject.
3. The method according to claim 2, wherein the image information of each of the M reference images comprises a plurality of image sub-information, and the determining the one or more keywords corresponding to each of the M reference images according to the image information of each of the M reference images comprises:
counting the quantity of each type of image sub-information in all image sub-information contained in the M reference images, and screening out at least one image sub-information of which the quantity exceeds a preset value in all the image sub-information;
extracting at least one keyword in the at least one image sub-information, wherein each image sub-information corresponds to one keyword;
determining a target keyword according to one or more keywords corresponding to each reference image in the M reference images, including:
and determining a target keyword corresponding to the target image sub-information with the highest priority in the one or more keywords according to a mapping relation between preset image sub-information and priority.
4. The method of any one of claims 1 to 3, wherein the screening N target images from the image library according to the image screening policy comprises:
acquiring image information of other images except the M reference images in the image library to obtain a plurality of image information;
screening, from the images in a specified range of the image library, K primary selection images whose image information includes the target keyword, wherein K is an integer greater than M;
scoring the K primary selection images to obtain a plurality of score values, wherein each primary selection image corresponds to one score value;
and screening, in descending order of the score values, N target images whose number meets a preset numerical range from the K primary selection images, wherein K &gt; N &gt; M.
5. The method of claim 4, further comprising:
acquiring a preset image sorting mode, and reordering all images in the image library according to the image sorting mode to obtain all rearranged images;
acquiring a preset image screening starting point and an image screening end point based on all the rearranged images;
and determining the specified range according to the image screening starting point and the image screening end point.
6. The method according to claim 4 or 5, wherein the scoring the K primary images to obtain a plurality of score values comprises:
acquiring characteristic data of each primary selected image in the K primary selected images;
and determining a score value corresponding to the image characteristics of each primary selection image in the K primary selection images according to a preset mapping relation between the characteristic data and the score values to obtain a plurality of score values.
7. An image processing apparatus characterized by comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring M reference images from an image library, and M is an integer larger than 1;
the determining unit is used for generating a target theme according to the M reference images and comprises the following steps: the image information of each reference image comprises a plurality of image sub-information, the same image sub-information contained in the M reference images is determined, the target theme is determined according to the same image sub-information, wherein the target theme is a theme which is not defined yet, and an image screening strategy corresponding to the target theme is determined;
the screening unit is used for screening N target images from the image library according to the image screening strategy, wherein the N target images comprise the M reference images, and N is an integer larger than M;
and the creating unit is used for generating a recall video according to the N target images.
8. The apparatus according to claim 7, wherein, in said generating a target subject from said M reference images, said determining unit is specifically configured to:
acquiring image information of each reference image in the M reference images;
determining one or more keywords corresponding to each reference image in the M reference images according to the image information of each reference image in the M reference images;
and determining a target keyword according to one or more keywords corresponding to each reference image in the M reference images, and taking the target keyword as the target subject.
9. An electronic device comprising a processor, a memory, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-6.
10. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-6.
CN201810751346.3A 2018-07-10 2018-07-10 Image processing method and related product Active CN108924439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810751346.3A CN108924439B (en) 2018-07-10 2018-07-10 Image processing method and related product


Publications (2)

Publication Number Publication Date
CN108924439A CN108924439A (en) 2018-11-30
CN108924439B true CN108924439B (en) 2021-07-09

Family

ID=64412411


Country Status (1)

Country Link
CN (1) CN108924439B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903260B (en) * 2019-01-30 2023-05-23 华为技术有限公司 Image processing method and image processing apparatus
CN111669620A (en) * 2020-06-05 2020-09-15 北京字跳网络技术有限公司 Theme video generation method and device, electronic equipment and readable storage medium
CN114390345B (en) * 2022-01-24 2024-02-09 惠州Tcl移动通信有限公司 Video generation method, device, electronic equipment and computer readable storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090278949A1 (en) * 2008-05-06 2009-11-12 Mcmahan David Michael Camera system and method for providing information on subjects displayed in a camera viewfinder
CN104239336B (en) * 2013-06-19 2018-03-16 华为技术有限公司 A kind of method for screening images, device and terminal
US9330110B2 (en) * 2013-07-17 2016-05-03 Xerox Corporation Image search system and method for personalized photo applications using semantic networks
CN105893412A (en) * 2015-11-24 2016-08-24 乐视致新电子科技(天津)有限公司 Image sharing method and apparatus
CN105760448B (en) * 2016-02-03 2019-11-15 北京金山安全软件有限公司 Picture processing method and device and electronic equipment
CN105975612A (en) * 2016-05-18 2016-09-28 北京金山安全软件有限公司 Picture processing method, device and equipment
CN106250916B (en) * 2016-07-22 2020-02-21 西安酷派软件科技有限公司 Method and device for screening pictures and terminal equipment
CN108009251A (en) * 2017-12-01 2018-05-08 珠海市魅族科技有限公司 A kind of image file searching method and device
CN108134948B (en) * 2017-12-25 2021-09-03 深圳创维-Rgb电子有限公司 Television program recommendation method, device and system and readable storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant