WO2020119254A1 - Filter recommendation method and apparatus, electronic device, and storage medium - Google Patents

Filter recommendation method and apparatus, electronic device, and storage medium

Info

Publication number
WO2020119254A1
Authority
WO
WIPO (PCT)
Prior art keywords
category
filter
image
preset
smart
Prior art date
Application number
PCT/CN2019/112572
Other languages
English (en)
French (fr)
Inventor
张渊 (Zhang Yuan)
郑文 (Zheng Wen)
Original Assignee
北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Publication of WO2020119254A1 publication Critical patent/WO2020119254A1/zh

Classifications

    • G06T5/60
    • GPHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Definitions

  • the present application relates to the field of image processing technology, and in particular, to a filter recommendation method, device, electronic equipment, and storage medium.
  • the mobile terminal has various functions such as calling, shooting images, playing audio, playing video, positioning, scanning barcodes, etc., which brings great convenience to people's lives.
  • the embodiments of the present application provide a filter recommendation method, device, electronic device, and storage medium.
  • an embodiment of the present application provides a filter recommendation method, including: after receiving an instruction to add a filter to an original image, identifying, among the categories included in a preset image feature, the category to which the original image belongs; querying, according to a preset correspondence between categories and smart filters, the smart filter corresponding to the category to which the original image belongs; and performing filter recommendation according to the queried smart filter.
  • an embodiment of the present application provides a filter recommendation device, including: a recognition unit configured to identify, after receiving an instruction to add a filter to an original image, the category to which the original image belongs among the categories included in a preset image feature; a query unit configured to query, based on the preset correspondence between categories and smart filters, the smart filter corresponding to the category to which the original image belongs; and a recommendation unit configured to perform filter recommendation according to the queried smart filter.
  • an embodiment of the present application provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein, the processor is configured to perform the above filter recommendation method.
  • an embodiment of the present application provides a non-transitory computer-readable storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device can perform the filter recommendation method described above.
  • an embodiment of the present application provides a computer program product; when instructions in the computer program product are executed by a processor of an electronic device, the electronic device is enabled to execute the above filter recommendation method.
  • the category to which the original image belongs is identified among the categories included in the preset image feature; the smart filter corresponding to that category is queried according to the correspondence between preset categories and smart filters; and filter recommendation is performed based on the queried smart filter.
  • the smart filter can be recommended according to the category of the original image; the recommendation process is more objective, the recommended smart filter is better adapted to the situation of the original image, the recommendation result is accurate, and the user experience can be improved.
  • Fig. 1 is a flow chart of a method for recommending a filter according to an exemplary embodiment.
  • Fig. 2 is a flow chart showing a method for recommending a filter according to an exemplary embodiment.
  • Fig. 3 is a schematic diagram of a neural network model according to an exemplary embodiment.
  • Fig. 4 is a schematic diagram of a filter list according to an exemplary embodiment.
  • Fig. 5 is a schematic diagram showing a filter effect according to an exemplary embodiment.
  • Fig. 6 is a schematic diagram showing a filter effect according to an exemplary embodiment.
  • Fig. 7 is a schematic diagram showing a filter effect according to an exemplary embodiment.
  • Fig. 8 is a block diagram of a filter recommendation device according to an exemplary embodiment.
  • Fig. 9 is a block diagram of a device for filter recommendation according to an exemplary embodiment.
  • Fig. 1 is a flowchart of a filter recommendation method according to an exemplary embodiment. As shown in Fig. 1, the filter recommendation method is used in an electronic device and includes the following steps S11 to S13.
  • step S11 after receiving an instruction to add a filter to the original image, the category to which the original image belongs is identified among the categories included in the preset image feature.
  • the original image can be an image previously saved in the album, or the image currently being captured.
  • the original image can be a photo or a video frame in a video.
  • the solution provided by the embodiment of the present application can be applied to recognize the saved image, or it can be applied to recognize the image being taken in real time during direct shooting, and then directly recommend the filter after recognition.
  • Users can add filters to the original image through the electronic device.
  • the user can select an editing operation in the electronic device to import the original image into the editing interface.
  • in the editing interface, options such as one-key beautification, cropping, rotating, adding filters, adding stickers, and graffiti can be displayed.
  • the user can select the add filter option to trigger the electronic device to add a filter to the original image.
  • the above-mentioned adding a filter to the original image refers to performing filter processing on the original image, so as to realize the effect of adding a filter to the original image.
  • various filters correspond to different filter parameters.
  • the filter parameters include exposure, hue, saturation, and white balance. Different filter parameters will cause different filter effects in the image.
  • the parameters of the original image are adjusted according to the filter parameters, thereby adding the filter to the original image.
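As a rough illustration of how filter parameters might adjust an image, the following sketch (hypothetical, not taken from the patent; real filters also adjust hue, white balance, and tone curves) applies an exposure shift and a saturation scale to an RGB array:

```python
import numpy as np

def apply_filter(image, exposure=0.0, saturation=1.0):
    """Toy sketch of applying filter parameters to an RGB image.

    `exposure` shifts brightness; `saturation` scales each pixel's
    distance from its gray value. Illustrative only.
    """
    img = image.astype(np.float32)
    img = img + exposure * 255.0              # exposure shift
    gray = img.mean(axis=-1, keepdims=True)   # per-pixel luminance proxy
    img = gray + saturation * (img - gray)    # scale color away from gray
    return np.clip(img, 0, 255).astype(np.uint8)

original = np.full((2, 2, 3), 100, dtype=np.uint8)
filtered = apply_filter(original, exposure=0.1, saturation=1.2)
```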
  • the electronic device may identify the category to which the original image belongs among the categories included in the preset image feature.
  • the categories included in a preset image feature are the categories obtained by classifying images according to that preset image feature.
  • there may be one or more preset image features.
  • the preset image features may include image targets, image scenes, and image quality.
  • Each preset image feature can contain multiple categories.
  • the image target is the target object contained in the image
  • the category included in the image target can be adults, babies, cats, dogs, food, etc.
  • the image scene is the scene in which the image content is located, and the categories included in the image scene can be indoor, stage, crowd, lawn, etc.
  • the categories included in image quality can be high quality, backlighting, blurring, etc.
  • step S12 the smart filter corresponding to the category to which the original image belongs is queried according to the correspondence between the preset category and the smart filter.
  • the smart filter is a preset filter, which may specifically be a filter selected from a large number of filters, or a filter corresponding to a manually set filter parameter.
  • the correspondence between categories and smart filters may be preset. Therefore, after identifying the category to which the original image belongs, the smart filter corresponding to that category can be queried from the preset correspondence between categories and smart filters.
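A minimal sketch of such a preset correspondence and the query step; the filter names are invented here, and the category names follow examples used elsewhere in the text ("stage", "dark light", etc.):

```python
# Hypothetical correspondence table between categories and smart filters.
FILTER_TABLE = {
    "stage":      ["StageGlow", "NightPop"],
    "baby":       ["SoftSkin"],
    "dark light": ["NightPop", "LowLightBoost"],
}

def query_smart_filters(image_categories):
    """Return every smart filter whose corresponding category
    hits one of the categories to which the image belongs."""
    hits = []
    for category in image_categories:
        for f in FILTER_TABLE.get(category, []):
            if f not in hits:
                hits.append(f)
    return hits

filters = query_smart_filters(["stage", "adult", "dark light"])
```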
  • step S13 filter recommendation is performed according to the inquired smart filter.
  • the inquired smart filter is the smart filter corresponding to the category to which the original image belongs.
  • the inquired smart filter may include one or more filters.
  • the filter recommendation may be performed according to the correlation between the inquired smart filter and the category to which the original image belongs.
  • the correlation between a smart filter and the categories to which the original image belongs may include: the degree of matching between the smart filter and those categories, and the number of the original image's categories that appear among the categories corresponding to the smart filter in the above correspondence.
  • the smart filter can be recommended according to the category to which the original image belongs.
  • the recommendation process is more objective, the recommended smart filter can be more adapted to the situation of the original image, the recommendation result is accurate, and the user experience can be improved.
  • Fig. 2 is a flowchart of a filter recommendation method according to an exemplary embodiment.
  • the filter recommendation method is used in an electronic device. As shown in Fig. 2, the above method includes the following steps S21 to S25.
  • step S21 after receiving an instruction to add a filter to the original image, the category to which the original image belongs is identified among the categories included in the preset image feature.
  • the category to which the original image belongs can be identified according to the Gist (image global feature) feature of the original image.
  • the Gist feature of the original image can be extracted, and then the category to which the original image belongs can be identified in the category included in the preset image feature according to the Gist feature.
  • the Gist feature is a biological heuristic feature, which is a feature obtained by simulating human visually capturing context information in an image, and is a spatial representation of the external world.
  • the noise in the original image can be removed first.
  • the original image can be filtered through a multi-scale and multi-directional Gabor filter bank, thereby reducing the noise in the original image.
  • the noise-removed image is divided into a 4×4 grid, and each grid cell is subjected to a discrete Fourier transform and a windowed Fourier transform to extract the Gist feature of the image.
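The grid-plus-Fourier step above can be approximated in a few lines. This simplified sketch skips the Gabor filter bank and keeps only the 4×4 partition with one Fourier-magnitude statistic per cell, so it is illustrative rather than a faithful Gist implementation:

```python
import numpy as np

def gist_like_feature(gray_image, grid=4):
    """Very simplified Gist-style descriptor: split the image into a
    grid x grid lattice and take the mean Fourier magnitude of each
    cell. A real Gist pipeline first filters the image with a
    multi-scale, multi-orientation Gabor bank (omitted here)."""
    h, w = gray_image.shape
    ch, cw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = gray_image[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            feats.append(np.abs(np.fft.fft2(cell)).mean())
    return np.array(feats)

feature = gist_like_feature(np.random.rand(64, 64))
```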
  • the Gist feature has some disadvantages: it can only characterize shallow features of the image and ignores structural information within the image; its accuracy is not high and it is not robust for complex scenes; and based on the Gist feature alone, only a single category can be assigned to an image, so the multiple categories to which the image belongs cannot be identified, nor can high-level semantic information in the image be captured.
  • an embodiment of the present application proposes to use a neural network model to identify the category to which the original image belongs.
  • the training of the neural network model can be achieved as follows: acquire multiple sample images, each marked with at least one category from the categories included in the preset image features; for each category included in the preset image features, calculate the sampling weight corresponding to that category's sample images according to the number of sample images belonging to it; according to each category's sampling weight, select sample images for model training from the sample images belonging to that category; and train the neural network model with the selected sample images.
  • the category marked on a sample image is a label assigned according to the category to which the sample image belongs among the categories included in at least one preset image feature.
  • the marked category is label metadata attached to the sample image and does not change the image itself.
  • the image features may include at least one of an image target, an image scene, and an image quality.
  • the image features may also include other features.
  • taking image features including image targets, image scenes, and image quality as an example, an image may contain various image targets.
  • the image scenes of different images may also be diverse, and there may be differences in image quality.
  • data sets for these categories, that is, sample images, can be collected from different channels.
  • the above channels can be Kuaishou data, ImageNet (a large visual database) data, and PlaceData competition data.
  • images from the above channels may have been labeled in advance with the categories to which their image features belong, i.e., images obtained from these channels may already carry their own category labels.
  • sampling weights are set in advance.
  • the main steps are as follows: separately count the number of sample images k_i corresponding to each category i and the total number S of all sample images; the sampling weight corresponding to category i is then calculated as S/k_i.
  • the sampling weight of a category with more sample images is smaller, and the sampling weight of a category with fewer sample images is larger, which ensures that the number of sample images of each category in the set selected for training the neural network model is balanced, preventing bias in the trained model.
  • the above-mentioned unbalanced number of samples refers to an unbalanced number of selected sample images across different categories.
  • the above sampling weight refers to the proportion of sample images selected for model training from the sample images belonging to each category.
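The S/k_i weighting described above can be sketched directly (the category names are illustrative):

```python
from collections import Counter

def sampling_weights(labels):
    """Weight each category by S / k_i, where S is the total number of
    sample images and k_i the number of images labelled with category i,
    so over-represented categories get smaller weights."""
    counts = Counter(labels)
    total = len(labels)
    return {cat: total / k for cat, k in counts.items()}

# 8 samples: "cat" is over-represented, so it gets the smaller weight.
weights = sampling_weights(["cat"] * 6 + ["stage"] * 2)
```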
  • a neural network model is designed based on a convolutional neural network.
  • Fig. 3 is a schematic diagram of a neural network model according to an exemplary embodiment.
  • the neural network model mainly includes two parts.
  • the first part is a stack of convolutional layers (Conv2d) with a first number of layers;
  • the second part is a second number of sibling fully connected layers fc1, fc2, fc3, ...
  • the convolutional layer of the first number of layers is a shared network layer, and the structure of the shared network layer can be designed according to the task characteristics.
  • each of the subsequent sibling fully connected layers corresponds to one image feature and classifies the image according to that feature: the image target (Object Classifier), the image scene (Scene Classifier), the image quality (Image Quality), and so on.
  • the above-mentioned first number may be 73
  • the second number may be 3.
  • input (image) represents the image of the input neural network model
  • Conv2d (73 layers) represents the 73 convolutional layers in the neural network model
  • fc1, fc2, and fc3 respectively represent three fully connected layers.
  • Object Classifier indicates that the image feature corresponding to the fully connected layer fc1 is the image target
  • Scene Classifier indicates that the image feature corresponding to the fully connected layer fc2 is the image scene
  • Image Quality indicates that the image feature corresponding to the fully connected layer fc3 is the image quality.
  • the embodiments of the present application do not limit the number of layers of the convolutional layer and the number of sibling fully connected layers.
  • the number of layers of the convolutional layer may be set according to application scenarios.
  • the convolutional layer may be 70 layers, 80 layers, 75 layers, and so on.
  • the number of sibling fully connected layers is related to the number of image features that need to be identified, and each sibling fully connected layer corresponds to one image feature. When classification needs to be based on multiple image features, multiple sibling fully connected layers can be set accordingly.
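The shared-trunk-plus-sibling-heads layout can be sketched with plain NumPy; here a random projection stands in for the 73 convolutional layers of Fig. 3, and the head sizes and category counts are illustrative, not the patent's actual dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_trunk(x):
    """Stand-in for the shared convolutional layers: a fixed random
    projection to a 16-dim feature vector followed by a ReLU."""
    W = rng.standard_normal((x.size, 16))
    return np.maximum(x.reshape(-1) @ W, 0)   # ReLU

# One sibling fully connected head per image feature.
heads = {
    "object":  rng.standard_normal((16, 5)),  # e.g. adult/baby/cat/dog/food
    "scene":   rng.standard_normal((16, 4)),  # e.g. indoor/stage/crowd/lawn
    "quality": rng.standard_normal((16, 3)),  # e.g. high quality/backlit/blurry
}

features = shared_trunk(rng.standard_normal((8, 8)))   # shared features
outputs = {name: features @ W for name, W in heads.items()}
```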
  • the neural network model can be trained according to the prepared data set, and the neural network model can be optimized according to the loss function and the optimizer during the training process. And the optimized neural network model is compared with the preset evaluation index to realize the evaluation of the neural network model until the neural network model meets the evaluation index, and the training of the neural network model ends.
  • the image may contain multiple targets, and may also match multiple scenes; therefore, an image may belong to multiple categories. For each category, a multi-label cross-entropy loss function can be used to compute the loss of the neural network model's classification output on the data set relative to the samples' marked categories.
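A minimal NumPy version of a multi-label cross-entropy (independent per-class sigmoids averaged over classes, assuming 0/1 target vectors; the patent does not specify this exact formulation):

```python
import numpy as np

def multilabel_cross_entropy(logits, targets):
    """Multi-label cross-entropy: each class gets an independent
    sigmoid, so an image can belong to several categories at once.
    `targets` is a 0/1 matrix with one column per category."""
    probs = 1.0 / (1.0 + np.exp(-logits))     # independent sigmoids
    eps = 1e-12                               # numerical safety
    losses = -(targets * np.log(probs + eps)
               + (1 - targets) * np.log(1 - probs + eps))
    return losses.mean()

logits = np.array([[2.0, -2.0, 0.0]])
targets = np.array([[1.0, 0.0, 1.0]])
loss = multilabel_cross_entropy(logits, targets)
```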
  • the parameters of the neural network model can be adjusted according to the loss calculated by the above loss function. Specifically, the model parameters may be adjusted by stochastic gradient descent.
  • the evaluation index is used to evaluate the neural network model after each round of training. Once the neural network model meets the evaluation index, the training of the model can be considered successfully completed; otherwise, training needs to continue until the evaluation index is met. Among others, the evaluation index can be set based on the TOP1-accuracy standard.
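TOP1-accuracy can be computed as the fraction of samples whose highest-scoring class matches the label; a small sketch with made-up scores:

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Fraction of samples whose highest-scoring class equals the label."""
    predictions = np.argmax(logits, axis=1)
    return float((predictions == labels).mean())

scores = np.array([[0.1, 0.9],
                   [0.8, 0.2],
                   [0.3, 0.7]])
labels = np.array([1, 0, 0])   # third sample is misclassified
acc = top1_accuracy(scores, labels)
```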
  • a convolutional neural network is used to design an image classification model, which can more effectively obtain image content information.
  • the convolutional neural network can extract high-level semantic information from the image and can identify various image targets, image scenes, and picture qualities; recognition is more accurate and more robust to external interference such as different scenes, occlusion, deformation, and lighting.
  • the sampling weights are calculated for the sample images of each category.
  • the corresponding number of sample images are selected according to the sampling weights to achieve the purpose of data balance and prevent model training deviation.
  • the data is enhanced by random disturbance during the training of the model.
  • the above random disturbances can be horizontal flipping, left-right rotation, random cropping, image pixel perturbation, etc., which can improve the interference resistance of the trained model.
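A sketch of such random perturbations on a grayscale array (flip, crop, and pixel noise; rotation omitted, and the crop ratio and noise magnitude are illustrative choices, not values from the patent):

```python
import numpy as np

def random_augment(image, rng):
    """Apply random horizontal flip, random crop, and pixel noise."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                      # horizontal flip
    h, w = out.shape[:2]
    top = rng.integers(0, h // 8 + 1)           # random crop offsets
    left = rng.integers(0, w // 8 + 1)
    out = out[top:top + h - h // 8, left:left + w - w // 8]
    noise = rng.normal(0, 2.0, out.shape)       # small pixel perturbation
    return np.clip(out + noise, 0, 255)

rng = np.random.default_rng(0)
augmented = random_augment(np.full((32, 32), 128.0), rng)
```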
  • the trained model can determine the category of an image well even for images with weak lighting, occlusion, and low contrast.
  • applying the neural network model to category recognition can extract image features ranging from low-level texture to high-level semantics; without manual participation in feature engineering, the network model can learn automatically from supervised data.
  • the step of identifying the category to which the original image belongs may include: inputting the original image into the trained neural network model, which includes at least one fully connected layer, each corresponding to a preset image feature; and obtaining the category of the original image output by each fully connected layer, where the category output by each fully connected layer belongs to the categories contained in the preset image feature corresponding to that layer.
  • the neural network model includes three fully connected layers, and the three fully connected layers respectively output the original image in the category included in the image object, in the image The category that belongs to the category included in the scene and the category that belongs to the category included in the image quality.
  • the original image can be objectively classified according to the image target, image scene, and image quality of the original image, and can combine the characteristics of the image itself to have good robustness for different application scenarios.
  • in step S22, according to the correspondence between categories and smart filters, the smart filters whose corresponding categories hit the categories to which the original image belongs are queried.
  • the categories corresponding to the smart filter include at least one category to which the original image belongs. For example, suppose the category corresponding to a smart filter is "stage", and the categories to which the original image belongs include "stage", "adult", and "dark light". Since the category corresponding to the smart filter contains "stage", a category to which the original image belongs, the smart filter can be considered to hit the categories of the original image.
  • a smart filter library may be preset, and the correspondence between categories and smart filters is stored in the smart filter library.
  • the smart filter library may include multiple smart filters, each corresponding to at least one category. For example, a large number of filters can be analyzed according to at least one of the categories included in the image target, image scene, and image quality; corresponding filters are matched to each category, and the matched filters are used as smart filters. For each smart filter, at least one of its corresponding categories under the image target, image scene, and image quality is stored.
  • the above analysis of a filter according to at least one category refers to analyzing the categories of images to which the filter is suited.
  • the categories of images to which a filter is suited can be analyzed manually, or determined through multiple experiments: for example, for a given filter, add it to images belonging to different categories and, according to the effect after adding the filter, determine which type of image the filter suits.
  • the neural network model is used to identify the category to which the original image belongs, and the identified category of the original image includes at least one.
  • the categories to which the original image belongs may include at least one of: a category under the image target, a category under the image scene, and a category under the image quality.
  • the smart filter corresponding to the category to which the original image belongs can be queried from the smart filter library.
  • the query process may include: according to the corresponding relationship between the category and the smart filter, querying the corresponding category to hit the smart filter of the category to which the original image belongs.
  • "the categories corresponding to a smart filter hit the categories of the original image" may mean: at least one category corresponding to the smart filter is a category to which the original image belongs.
  • suppose the categories corresponding to a smart filter include one category under the image target, one under the image scene, and one under the image quality, and that the categories to which the original image belongs likewise include one category under each of these three image features.
  • if the smart filter's category under any one image feature is the same as the original image's category under that feature (for example, the same image-target category, the same image-scene category, or the same image-quality category), the smart filter can be considered to hit the categories of the original image.
  • likewise, the smart filter hits the categories of the original image when its categories under any two image features match those of the original image (for example, both the image-target and image-quality categories, or both the image-scene and image-quality categories), or when its categories under all three image features match.
  • step S23 the smart filter with the largest number of hit categories is selected from the inquired smart filters as the recommended smart filter.
  • there may be multiple queried smart filters, and at least one of them can be selected as the recommended smart filter; for example, the smart filter with the largest number of hit categories can be selected as the recommended smart filter.
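Step S23's selection can be sketched as a hit count over per-filter category sets; the filter names and category sets below are hypothetical:

```python
# Hypothetical per-filter category sets; a filter's hit count is the
# number of the image's categories that its categories cover.
FILTER_CATEGORIES = {
    "StageGlow":     {"stage"},
    "NightPop":      {"stage", "dark light"},
    "LowLightBoost": {"dark light"},
}

def recommend(image_categories):
    """Pick the queried smart filter with the most category hits."""
    image_categories = set(image_categories)
    candidates = {name: len(cats & image_categories)
                  for name, cats in FILTER_CATEGORIES.items()
                  if cats & image_categories}
    return max(candidates, key=candidates.get)

best = recommend(["stage", "adult", "dark light"])
```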
  • a filter list may be displayed, the recommended smart filter is displayed in the filter list, and the recommended smart filter is marked.
  • the recommended smart filter can be marked as "smart".
  • "intelligence” is a filter logo. You can also highlight the recommended smart filters, and you can also use special symbols to mark smart filters, where the special symbols can be " ⁇ ", "*", etc.
  • when there is a filter in the filter list belonging to the same category as the recommended smart filter, at least one of those same-category filters is marked. Specifically, the same-category filters may be marked as "recommended" or "REC (recommend)", or may be highlighted.
  • the solution provided in the above embodiment may be executed according to the following steps S24 and S25:
  • step S24 a filter list is displayed, a recommended smart filter is displayed in the filter list, and the recommended smart filter is marked as "smart".
  • the filter list may also include other filters recommended by the system, and the user may select any filter from the filter list to add to the original image.
  • step S25 when there is a filter of the same category as the recommended smart filter in the filter list, at least one of the filters of the same category is marked as "recommended”.
  • the other filters in the filter list may also correspond to categories to which images belong. The category corresponding to each filter in the list can be compared with the category corresponding to the recommended smart filter; if the categories are the same, at least one of those filters is marked as recommended.
  • other filters may also correspond to the category to which the image belongs. That is, the filter is suitable for images belonging to its corresponding category.
  • suppose the category corresponding to another filter is "sky"; that filter is then suitable for images belonging to the "sky" category.
  • Fig. 4 is a schematic diagram of a filter list according to an exemplary embodiment. It can be seen from FIG. 4 that the “None” option and five filter options are currently displayed in the filter list.
  • the filter options include the "smart" filter option, that is, the above-mentioned recommended smart filter marked as smart, and four other filter options: "Puff", "Shu Fu Lei", "Fu Rui Bai", and "Jelly", among which "Shu Fu Lei" is marked as recommended.
  • if the user does not want to add a filter to the original image, he can select the first "None" option; if he wants to add a filter, he can choose the smart filter, or another filter such as one marked as recommended.
  • Fig. 5 is a schematic diagram showing a filter effect according to an exemplary embodiment. As shown in FIG. 5, the category to which the original image belongs among the categories included in the image target is infant, its category among those included in the image scene is indoor, and its category among those included in image quality is blurry.
  • Fig. 6 is a schematic diagram showing a filter effect according to an exemplary embodiment. As shown in FIG. 6, the category to which the original image belongs among the categories included in the image target is adult, its category among those included in the image scene is stage, and its category among those included in image quality is dark.
  • Fig. 7 is a schematic diagram showing a filter effect according to an exemplary embodiment. As shown in FIG. 7, the category to which the original image belongs among the categories included in the image target is other, its category among those included in the image scene is night scene, and its category among those included in image quality is dark.
  • the smart filter can be recommended according to the category to which the original image belongs.
  • the recommendation process is more objective, the recommended smart filter can be more adapted to the situation of the original image, the recommendation result is accurate, and the user experience can be improved.
  • Fig. 8 is a block diagram of a filter recommendation device according to an exemplary embodiment. 8, the device includes an identification unit 701, a query unit 702 and a recommendation unit 703.
  • the recognition unit 701 is configured to, after receiving an instruction to add a filter to the original image, identify the category to which the original image belongs among the categories included in the preset image feature;
  • the query unit 702 is configured to query the smart filter corresponding to the category to which the original image belongs according to the preset correspondence between the category and the smart filter;
  • the recommendation unit 703 is configured to perform filter recommendation according to the inquired smart filter.
  • the recognition unit 701 includes an image input module configured to, after receiving an instruction to add a filter to the original image, input the original image into a preset neural network model, the neural network model including at least one fully connected layer, each fully connected layer corresponding to one preset image feature, and each preset image feature including at least one category; and a category acquisition module configured to acquire the category, output by each fully connected layer, to which the original image belongs, wherein the category output by each fully connected layer belongs to the categories contained in the preset image feature corresponding to that fully connected layer.
  • the filter recommendation device further includes: an acquisition unit configured to acquire a plurality of sample images, each sample image is marked with a category belonging to at least one category included in the preset image feature;
  • the calculation unit is configured to calculate, for each category included in the preset image features, the sampling weight corresponding to the sample images belonging to that category according to the number of sample images belonging to the category;
  • a selection unit is configured to select, according to the sampling weight corresponding to each category, sample images for model training from the multiple sample images belonging to that category; a training unit is configured to train the neural network model using the sample images to be trained.
  • the category to which the original image belongs includes at least one of the categories included in the preset image features, and in the preset correspondence between categories and smart filters, each smart filter corresponds to at least one category.
  • the query unit 702 includes: a category query module configured to query, according to the preset correspondence between categories and smart filters, the smart filters whose corresponding categories hit the category to which the original image belongs.
  • the recommendation unit 703 includes: a filter selection module configured to select, from the queried smart filters, the smart filter hitting the largest number of categories to which the original image belongs, as the recommended smart filter, and to perform filter recommendation.
  • the filter recommendation device further includes: a display unit configured to display a filter list after the filter selection module selects the recommended smart filter from the queried smart filters, display the recommended smart filter in the filter list, and mark the recommended smart filter; and a marking unit configured to, when a filter of the same category as the recommended smart filter exists in the filter list, mark at least one of the filters of the same category.
  • the preset image features include image targets, image scenes, and image quality.
  • the smart filter can be recommended according to the category of the original image, the recommendation process is more objective, the recommended smart filter can be more adapted to the situation of the original image, the recommendation result is accurate, and the user experience can be improved.
  • Fig. 9 is a block diagram of a device 800 for filter recommendation according to an exemplary embodiment.
  • the apparatus 800 is provided as an electronic device, and the electronic device may be a mobile terminal.
  • the device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and so on.
  • the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
  • the processing component 802 generally controls the overall operations of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps in the above method.
  • the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components.
  • the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
  • the memory 804 is configured to store various types of data to support operation at the device 800. Examples of these data include instructions for any application or method operating on the device 800, contact data, phone book data, messages, pictures, videos, and so on.
  • the memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 806 provides power to various components of the device 800.
  • the power component 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
  • the multimedia component 808 includes a screen that provides an output interface between the device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC).
  • when the device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals.
  • the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
  • the audio component 810 further includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, or buttons. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
  • the sensor component 814 includes one or more sensors for providing the device 800 with status assessment in various aspects.
  • the sensor component 814 can detect the open/closed state of the device 800 and the relative positioning of components, for example, the display and keypad of the device 800; the sensor component 814 can also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and temperature changes of the device 800.
  • the sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may further include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices.
  • the device 800 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for implementing the above filter recommendation method.
  • a non-transitory computer-readable storage medium including instructions is also provided, for example, a memory 804 including instructions, which can be executed by the processor 820 of the device 800 to complete the filter recommendation method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, or the like.
  • a computer program product is also provided, which when the instructions in the computer program product are executed by a processor of an electronic device, enables the electronic device to perform the above filter recommendation method.
  • as for the electronic device embodiment, the computer-readable storage medium embodiment, and the computer program product embodiment, since they are substantially similar to the method embodiments, the description is relatively brief; for relevant details, refer to the description of the method embodiments.

Abstract

A filter recommendation method and apparatus, an electronic device, and a storage medium. The method includes: after receiving an instruction to add a filter to an original image, identifying, among the categories included in preset image features, the category to which the original image belongs (S11); querying, according to a preset correspondence between categories and smart filters, the smart filter corresponding to the category to which the original image belongs (S12); and performing filter recommendation according to the queried smart filter (S13). The method can recommend a smart filter according to the category of the original image, so that the recommendation process is more objective, the recommended smart filter is better adapted to the original image, the recommendation result is accurate, and user experience can be improved.

Description

Filter recommendation method and apparatus, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 201811505873.2, filed with the China National Intellectual Property Administration on December 10, 2018 and entitled "Filter recommendation method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image processing technology, and in particular to a filter recommendation method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of the mobile Internet and mobile terminals, mobile terminals have gradually become an indispensable part of people's lives. A mobile terminal provides various functions such as making calls, capturing images, playing audio, playing video, positioning, and scanning barcodes, which brings great convenience to people's lives.
When capturing images with a mobile terminal, in order to obtain better image quality, many users add filters to the images on the mobile terminal so as to beautify them.
In the related art, a user usually selects the filter to be added according to personal preference. In view of this, the inventors realized that filter selection is often subjective, and many users are not professional photographers; as a result, the selected filter does not match the image captured by the mobile terminal, the filter selection is inaccurate, and the user experience is poor.
Summary
To overcome the problems in the related art, the embodiments of the present application provide a filter recommendation method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present application provides a filter recommendation method, including: after receiving an instruction to add a filter to an original image, identifying, among the categories included in preset image features, the category to which the original image belongs; querying, according to a preset correspondence between categories and smart filters, the smart filter corresponding to the category to which the original image belongs; and performing filter recommendation according to the queried smart filter.
In a second aspect, an embodiment of the present application provides a filter recommendation apparatus, including: a recognition unit configured to, after receiving an instruction to add a filter to an original image, identify, among the categories included in preset image features, the category to which the original image belongs; a query unit configured to query, according to a preset correspondence between categories and smart filters, the smart filter corresponding to the category to which the original image belongs; and a recommendation unit configured to perform filter recommendation according to the queried smart filter.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to perform the above filter recommendation method.
In a fourth aspect, an embodiment of the present application provides a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the above filter recommendation method.
In a fifth aspect, an embodiment of the present application provides a computer program product; when the instructions in the computer program product are executed by a processor of an electronic device, the electronic device is enabled to perform the above filter recommendation method.
The technical solutions provided by the embodiments of the present application may include the following beneficial effects:
In the solution provided by the embodiments of the present application, after an instruction to add a filter to an original image is received, the category to which the original image belongs is identified among the categories included in preset image features; the smart filter corresponding to that category is queried according to a preset correspondence between categories and smart filters; and filter recommendation is performed according to the queried smart filter. Therefore, in the solution provided by the embodiments of the present application, a smart filter can be recommended according to the category of the original image, the recommendation process is more objective, the recommended smart filter is better adapted to the original image, the recommendation result is accurate, and user experience can be improved.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present application and of the prior art more clearly, the drawings required for the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a filter recommendation method according to an exemplary embodiment.
Fig. 2 is a flowchart of a filter recommendation method according to an exemplary embodiment.
Fig. 3 is a schematic diagram of a neural network model according to an exemplary embodiment.
Fig. 4 is a schematic diagram of a filter list according to an exemplary embodiment.
Fig. 5 is a schematic diagram of a filter effect according to an exemplary embodiment.
Fig. 6 is a schematic diagram of a filter effect according to an exemplary embodiment.
Fig. 7 is a schematic diagram of a filter effect according to an exemplary embodiment.
Fig. 8 is a block diagram of a filter recommendation apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram of an apparatus for filter recommendation according to an exemplary embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the drawings and embodiments. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Fig. 1 is a flowchart of a filter recommendation method according to an exemplary embodiment. As shown in Fig. 1, the filter recommendation method is used in an electronic device and includes the following steps S11 to S13.
In step S11, after an instruction to add a filter to an original image is received, the category to which the original image belongs is identified among the categories included in preset image features.
The original image may be a previously captured image saved in an album, or an image currently being captured. The original image may be a photo or a video frame in a video.
The solution provided by the embodiments of the present application can be applied to recognizing saved images, and can also be applied, during shooting, to recognizing the image being captured in real time, with filter recommendation performed directly after recognition.
A user can add a filter to the original image through the electronic device. For example, the user can select an editing operation on the electronic device to import the original image into an editing interface. The editing interface may display options such as one-tap beautification, cropping, rotation, adding a filter, adding stickers, and doodling; the user can select the add-filter option, thereby triggering an instruction for the electronic device to add a filter to the original image.
Here, adding a filter to the original image refers to performing filter processing on the original image so as to apply a filter effect to it. Specifically, different filters correspond to different filter parameters, including exposure, hue, saturation, white balance, and the like; different filter parameters produce different filter effects on the image. The parameters of the original image are adjusted according to the filter parameters of a filter, thereby adding the filter to the original image.
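The parameter adjustment described above can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the disclosed embodiments; the function name, the pixel-tuple representation, and the exposure/saturation formulas are assumptions made for illustration:

```python
def apply_filter(pixels, params):
    """Adjust image parameters according to (hypothetical) filter parameters.

    pixels: list of (r, g, b) tuples in 0..255; params: dict of filter
    parameters such as exposure (gain) and saturation (color intensity).
    """
    exposure = params.get("exposure", 1.0)
    saturation = params.get("saturation", 1.0)
    out = []
    for r, g, b in pixels:
        # exposure: scale all channels by a gain factor
        r, g, b = r * exposure, g * exposure, b * exposure
        # saturation: interpolate between the grayscale value and the color
        gray = 0.299 * r + 0.587 * g + 0.114 * b
        r = gray + (r - gray) * saturation
        g = gray + (g - gray) * saturation
        b = gray + (b - gray) * saturation
        out.append(tuple(min(255.0, max(0.0, c)) for c in (r, g, b)))
    return out
```

A real filter would adjust further parameters (hue, white balance) in the same spirit: each parameter maps the original pixel values to new ones.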
After receiving the instruction to add a filter to the original image, the electronic device can identify, among the categories included in preset image features, the category to which the original image belongs. Here, the categories included in a preset image feature are the categories to which images belong when images are classified according to that preset image feature.
There may be at least one preset image feature. For example, the preset image features may include image target, image scene, image quality, and the like. Each preset image feature may include multiple categories. The image target is the target object contained in the image, and its categories may include adult, infant, cat, dog, food, and so on; the image scene is the scene in which the image content is located, and its categories may include indoor, stage, crowd, lawn, and so on; the categories of image quality may include high quality, backlit, blurry, and so on.
In step S12, the smart filter corresponding to the category to which the original image belongs is queried according to the preset correspondence between categories and smart filters.
A smart filter is a preset filter; it may be a filter selected from a large number of filters, or a filter whose filter parameters are set manually.
In the embodiments of the present application, the correspondence between categories and smart filters can be set in advance. Therefore, after the category to which the original image belongs is identified, the smart filter corresponding to that category can be queried from the preset correspondence between categories and smart filters.
In step S13, filter recommendation is performed according to the queried smart filter.
The queried smart filter is the smart filter corresponding to the category to which the original image belongs, and may include one or more filters.
In an embodiment of the present application, filter recommendation may be performed according to the degree of correlation between the queried smart filters and the category of the original image. The correlation between a smart filter and the category of the original image may include: the degree of matching between the smart filter and the category, the number of categories of the original image contained among the categories corresponding to the smart filter in the above correspondence, and so on.
In the solution provided by the embodiments of the present application, a smart filter can be recommended according to the category to which the original image belongs; the recommendation process is more objective, the recommended smart filter is better adapted to the original image, the recommendation result is accurate, and user experience can be improved.
Fig. 2 is a flowchart of a filter recommendation method according to an exemplary embodiment. The filter recommendation method is used in an electronic device and, as shown in Fig. 2, includes the following steps S21 to S25.
In step S21, after an instruction to add a filter to an original image is received, the category to which the original image belongs is identified among the categories included in preset image features.
In an embodiment of the present application, the category of the original image may be identified according to its Gist (global image) feature. Specifically, the Gist feature of the original image can be extracted, and the category of the original image is then identified, according to the Gist feature, among the categories included in the preset image features. The Gist feature is a biologically inspired feature obtained by simulating the way human vision captures contextual information in an image; it is a spatial representation of the outside world. When extracting the Gist feature, noise in the original image can first be removed; for example, the original image can be filtered with a multi-scale, multi-orientation Gabor filter bank to reduce noise. The denoised image is divided into a 4×4 grid, and a discrete Fourier transform and a windowed Fourier transform are applied to each grid cell to extract the Gist feature. However, the Gist feature has some drawbacks: it only characterizes shallow features of an image and ignores relevant structural information between images; its accuracy is not high and it is not robust for complex scenes; only one category of an image can be identified from the Gist feature, multiple categories cannot, and high-level semantic information in the image cannot be captured.
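The grid-based extraction step described above can be sketched as follows. This is a simplified, assumed illustration: it summarizes each cell of a 4×4 grid by the mean magnitude of its 2-D DFT and omits the Gabor filter bank and windowed transform of a full Gist pipeline:

```python
import numpy as np

def gist_like_descriptor(image, grid=4):
    """Divide a grayscale image into a grid x grid mesh and summarize each
    cell by the mean magnitude of its 2-D discrete Fourier transform."""
    h, w = image.shape
    ch, cw = h // grid, w // grid
    descriptor = []
    for i in range(grid):
        for j in range(grid):
            cell = image[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            spectrum = np.abs(np.fft.fft2(cell))  # DFT magnitude per cell
            descriptor.append(spectrum.mean())
    return np.array(descriptor)  # one value per grid cell
```

For a 4×4 grid this yields a 16-dimensional descriptor, illustrating why such a shallow summary discards the high-level semantics that the neural network model of the next paragraphs can capture.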
Therefore, an embodiment of the present application proposes using a neural network model to identify the category to which the original image belongs.
The neural network model can be trained as follows: obtain multiple sample images, each labeled with the category to which it belongs among the categories included in at least one preset image feature; for each category included in the preset image features, calculate the sampling weight corresponding to the sample images belonging to that category according to the number of sample images belonging to that category; select, according to the sampling weight corresponding to each category, sample images for model training from the multiple sample images belonging to that category; and train the neural network model using the sample images to be trained.
Here, the category labeled on a sample image refers to the category with which the sample image is annotated according to the category it belongs to among the categories included in at least one preset image feature. The label is independent of the content of the image itself and does not change the image.
As for the number of sample images belonging to each category: since every sample image is labeled with its category, sample images bearing the same label belong to the same category. The number of sample images belonging to a category can therefore be obtained by counting the sample images labeled with that category.
(1) Dataset preparation
In an embodiment of the present application, different image features of an image can be processed; the image features may include at least one of image target, image scene, and image quality, and of course may include other features. Taking image target, image scene, and image quality as an example, an image may contain various image targets, the image scenes of different images may be diverse, and image quality may also differ. For example, there may be 10 categories for the image target: adult, infant, cat, dog, food, green plants, text, flowers, fruit, and other; 10 categories for the image scene: indoor, stage, crowd, lawn, waterfall, sky, snow, beach, night scene, and other; and 5 categories for image quality: high quality, backlit, blurry, hazy, and dark. Datasets, that is, sample images, are collected for these categories; for example, they can be collected from different channels such as Kuaishou data, ImageNet (a large visual database) data, and PlaceData competition data.
Since images from the above channels may have been pre-labeled with categories for image features, images obtained from these channels may carry their own categories. For such images, when preparing the dataset and labeling categories, the categories carried by the images can be used, or the categories to which the images belong can be re-identified.
The numbers of sample images differ between categories. If a set of sample images were selected randomly for training the network model, the problem of imbalanced sample numbers might arise, making the trained model inaccurate. Therefore, in an embodiment of the present application, sampling weights are preset to address the imbalance in the numbers of sample images of different categories. In view of this, the sampling weights of sample images of different categories are designed as follows: count the number of sample images k_i corresponding to each category and the total number S of all sample images; the sampling weight corresponding to each category is then S/k_i. Hence, the more sample images a category has, the smaller its sampling weight, and the fewer it has, the larger its sampling weight; this ensures that the numbers of sample images of the various categories in the set selected for training the neural network model are balanced, preventing bias in the trained model.
Here, the above imbalance in sample numbers means that the numbers of selected sample images belonging to different categories are imbalanced. The above sampling weight refers to the proportion of sample images selected for model training from the sample images belonging to each category.
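The S/k_i weighting described above can be sketched as follows (illustrative only; the function name is an assumption):

```python
def sampling_weights(labels):
    """Compute per-category sampling weights S / k_i, where k_i is the
    number of samples labeled with category i and S is the total count."""
    counts = {}
    for category in labels:
        counts[category] = counts.get(category, 0) + 1
    total = len(labels)  # S
    return {category: total / k for category, k in counts.items()}
```

Rare categories receive large weights and frequent categories small ones, so sampling proportionally to these weights balances the training batch.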
(2) Model structure design
In an embodiment of the present application, the neural network model is designed based on a convolutional neural network. Fig. 3 is a schematic diagram of a neural network model according to an exemplary embodiment.
In an embodiment of the present application, the neural network model mainly comprises two parts. The first part is a first number of convolutional layers Conv2d, and the second part is a second number of sibling fully connected layers fc1, fc2, fc3, and so on. The first number of convolutional layers are shared network layers, whose structure can be designed according to task characteristics. The subsequent sibling fully connected layers each correspond to one image feature and are used to classify the image according to the image target (Object Classifier), image scene (Scene Classifier), image quality (Image Quality), and the like. The input image is fed into the first number of convolutional layers Conv2d for feature extraction to obtain image features, and the extracted image features are fed into the fully connected layers fc1, fc2, fc3, ... for separate classification, yielding the categories to which the image belongs. The first number may be 73 and the second number may be 3. As shown in Fig. 3, "input image" denotes the image input to the neural network model, Conv2d (73 layer) denotes the 73 convolutional layers in the neural network model, and fc1, fc2, fc3 denote the 3 fully connected layers, where Object Classifier indicates that the image feature corresponding to fc1 is the image target, Scene Classifier indicates that the image feature corresponding to fc2 is the image scene, and Image Quality indicates that the image feature corresponding to fc3 is the image quality.
It should be noted that the embodiments of the present application do not limit the number of convolutional layers or the number of sibling fully connected layers. In practical applications, the number of convolutional layers can be set according to the application scenario; for example, there may be 70, 80, or 75 convolutional layers. The number of sibling fully connected layers is related to the number of image features to be identified, with each sibling fully connected layer corresponding to one image feature. When classification needs to be based on multiple image features, multiple sibling fully connected layers can be set accordingly.
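The shared-trunk-plus-sibling-heads structure described above can be sketched as follows. This is a deliberately simplified, assumed illustration: a single linear layer with ReLU stands in for the 73 shared Conv2d layers, and sigmoid-activated heads stand in for the sibling layers fc1, fc2, fc3:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MultiHeadClassifier:
    """Shared trunk plus sibling heads, one head per preset image feature."""

    def __init__(self, in_dim, feat_dim, head_sizes):
        # stand-in for the shared Conv2d layers
        self.trunk = rng.standard_normal((in_dim, feat_dim)) * 0.01
        # one sibling head per image feature (target, scene, quality, ...)
        self.heads = [rng.standard_normal((feat_dim, n)) * 0.01
                      for n in head_sizes]

    def forward(self, x):
        features = np.maximum(x @ self.trunk, 0.0)  # shared features (ReLU)
        # each sibling head scores the categories of one image feature
        return [sigmoid(features @ w) for w in self.heads]
```

With head_sizes=[10, 10, 5], the three outputs correspond to the 10 image-target categories, the 10 image-scene categories, and the 5 image-quality categories mentioned above.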
(3) Model optimization and evaluation
After the neural network model is built, it can be trained on the prepared dataset; during training, the model is optimized based on a loss function and an optimizer. The optimized neural network model is compared with a preset evaluation metric to evaluate it, and training ends once the neural network model satisfies the evaluation metric.
Loss function: an image may contain multiple targets and may also match multiple scenes, so an image may belong to multiple categories. For each category, a multi-label cross-entropy loss function can be used to compute the loss of the classification result output by the neural network model for the data in the dataset relative to the category labeled on the sample.
Optimizer: during training of the neural network model, the parameters of the model can be adjusted according to the loss computed by the above loss function. Specifically, when adjusting the parameters of the neural network model, they can be adjusted by stochastic gradient descent.
Evaluation metric: used to evaluate the neural network model after each round of training. When the model satisfies the evaluation metric, training is considered successfully completed; otherwise, training continues until the metric is satisfied. The evaluation metric can be set based on the TOP1-accuracy standard.
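The multi-label cross-entropy loss mentioned above can be sketched as follows. This is an assumed formulation that treats every category as an independent binary decision, which is what allows one image to belong to several categories at once; the disclosure does not fix the exact formula:

```python
import numpy as np

def multilabel_cross_entropy(probs, targets, eps=1e-12):
    """Mean binary cross-entropy over all categories.

    probs: predicted probabilities per category; targets: 0/1 labels,
    where several entries may be 1 because an image may belong to
    multiple categories.
    """
    probs = np.clip(probs, eps, 1.0 - eps)  # avoid log(0)
    losses = -(targets * np.log(probs)
               + (1.0 - targets) * np.log(1.0 - probs))
    return losses.mean()
```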
Designing the image classification model with a convolutional neural network, as in an embodiment of the present application, allows image content information to be obtained more effectively. A convolutional neural network can extract high-level semantic information from an image, can identify image targets, image scenes, and image quality of various categories, identifies them more accurately, and is more robust to external interference such as different scenes, occlusion, deformation, and lighting.
When training the neural network model, sampling weights are calculated for the sample images of each category, and a corresponding number of sample images is selected according to the sampling weights, achieving data balance and preventing training bias. For the training data, data augmentation is performed during training through random perturbations such as horizontal flipping, left-right rotation, random cropping, and pixel perturbation, which can improve the anti-interference ability of the trained model; the trained model can then determine the category of images with weak lighting, occlusion, low contrast, and so on. Applying the neural network model to the identification of image categories makes it possible to extract features from low-level texture to high-level semantics; no manual feature engineering is required, and the network model can learn automatically from supervised data.
With the trained neural network model, the category to which an image belongs can be identified. The step of identifying the category of the original image may include: inputting the original image into the trained neural network model, the model including at least one fully connected layer, each fully connected layer corresponding to one preset image feature; and acquiring the category of the original image output by each fully connected layer, where the category output by each fully connected layer belongs to the categories included in the preset image feature corresponding to that layer.
For example, if the image features include image target, image scene, and image quality, the neural network model includes three fully connected layers, which respectively output the category of the original image among the categories included in the image target, among those included in the image scene, and among those included in image quality. The solution provided in the above embodiment can objectively classify the original image with respect to its image target, image scene, and image quality, can incorporate the characteristics of the image itself, and has good robustness for different application scenarios.
In step S22, the smart filters whose corresponding categories hit the category to which the original image belongs are queried according to the correspondence between categories and smart filters.
Here, "hit" means that the categories corresponding to a smart filter contain at least one category to which the original image belongs. For example, suppose the category corresponding to a smart filter is "stage" and the categories of the original image include "stage", "adult", and "dark"; since the categories corresponding to the smart filter contain one category of the original image, namely "stage", the categories corresponding to the smart filter can be considered to hit the category of the original image.
In an embodiment of the present application, a smart filter library can be set in advance, and the correspondence between categories and smart filters is stored in the library. The smart filter library may include multiple smart filters, each corresponding to at least one category. For example, a large number of filters can be analyzed against at least one of the categories included in image target, image scene, and image quality; a corresponding filter is matched for each category, the matched filters are used as smart filters, and for each smart filter, at least one of its category among those included in the image target, its category among those included in the image scene, and its category among those included in image quality is stored.
Here, analyzing a filter against at least one category means analyzing which categories of images the filter is suitable for. Specifically, the categories of images a filter suits can be analyzed manually, or the filter can be analyzed through repeated experiments; for example, the filter is added to images belonging to different categories, and which category of images the filter suits is determined according to the effect after the filter is added.
The neural network model identifies the categories to which the original image belongs, and at least one category is identified. For example, the categories of the original image may include at least one of its category among those included in the image target, its category among those included in the image scene, and its category among those included in image quality.
According to the categories to which the original image belongs, the smart filter corresponding to those categories can be queried from the smart filter library. The query process may include: querying, according to the correspondence between categories and smart filters, smart filters whose corresponding categories hit the category of the original image.
Here, a smart filter's corresponding categories hitting the category of the original image may mean that at least one category corresponding to the smart filter hits a category to which the original image belongs.
For example, the categories corresponding to a smart filter include its category among those included in the image target, its category among those included in the image scene, and its category among those included in image quality; the categories of the original image likewise include its category among those included in the image target, in the image scene, and in image quality.
If one category corresponding to the smart filter is the same as one category of the original image, for example, the smart filter's category among those included in the image target is the same as the original image's category among those included in the image target, or their categories among those included in the image scene are the same, or their categories among those included in image quality are the same, then the categories corresponding to the smart filter can be considered to hit the category of the original image.
If two categories corresponding to the smart filter are the same as two categories of the original image, for example, their categories among those included in the image target are the same and their categories among those included in the image scene are the same, or their categories among those included in the image target are the same and their categories among those included in image quality are the same, or their categories among those included in the image scene are the same and their categories among those included in image quality are the same, then the categories corresponding to the smart filter can be considered to hit the categories of the original image.
If the three categories corresponding to the smart filter are the same as the three categories of the original image, that is, their categories among those included in the image target, among those included in the image scene, and among those included in image quality are all the same, then the categories corresponding to the smart filter can be considered to hit the categories of the original image.
In step S23, the smart filter hitting the largest number of categories is selected from the queried smart filters as the recommended smart filter.
There may be multiple queried smart filters, and at least one of them can be selected as the recommended smart filter; for example, the smart filter hitting the largest number of categories can be selected as the recommended smart filter.
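The hit-counting selection in step S23 can be sketched as follows (illustrative only; the data structures are assumptions and not part of the disclosed embodiments):

```python
def recommend_filter(image_categories, filter_library):
    """Pick the smart filter whose categories hit the most categories of
    the image; filter_library maps filter name -> set of categories."""
    image_categories = set(image_categories)
    best_name, best_hits = None, 0
    for name, categories in filter_library.items():
        hits = len(categories & image_categories)  # number of hit categories
        if hits > best_hits:  # keep the filter with the most hits
            best_name, best_hits = name, hits
    return best_name  # None when no filter hits any category
```

For instance, with an image classified as {"stage", "adult", "dark"}, a filter corresponding to all three of those categories would be preferred over one corresponding to "stage" alone.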
In an embodiment of the present application, after the recommended smart filter is obtained, a filter list can further be displayed; the recommended smart filter is displayed in the filter list and marked. Specifically, the recommended smart filter can be marked as "Smart", where "Smart" is a filter label. The recommended smart filter can also be highlighted, or marked with a special symbol such as "☆" or "*".
In an embodiment of the present application, when a filter of the same category as the recommended smart filter exists in the filter list, at least one of the filters of the same category is marked. Specifically, the filters of the same category can be marked as "Recommended" or as "REC (recommend)", or highlighted, and so on.
Specifically, in the case where the recommended smart filter is marked as "Smart" and filters of the same category are marked as "Recommended", the solution provided in the above embodiment can be performed according to the following steps S24 and S25:
In step S24, a filter list is displayed; the recommended smart filter is displayed in the filter list and marked as "Smart".
When adding a filter to the original image, the filter list can be displayed at the bottom of the editing interface. After the recommended smart filter is obtained, it is displayed in the filter list and marked as "Smart". Therefore, when selecting a filter to add, the user can select the recommended smart filter, that is, the filter marked as "Smart", from the filter list.
Of course, the filter list may also include other filters recommended by the system, and the user can select any filter from the list to add to the original image.
In step S25, when a filter of the same category as the recommended smart filter exists in the filter list, at least one of the filters of the same category is marked as "Recommended".
In one implementation, it can further be determined whether a filter of the same category as the recommended smart filter exists in the filter list. If such a filter exists, at least one of the filters of the same category can be marked as "Recommended". If no filter of the same category as the recommended smart filter exists, the other filters need not be marked.
The other filters in the filter list may also correspond to the category to which an image belongs. The categories corresponding to the filters in the filter list can be compared with the category corresponding to the recommended smart filter; if the categories are the same, at least one of the filters of the same category can be marked as recommended.
Specifically, other filters may also correspond to the category to which an image belongs; that is, a filter is suitable for images belonging to its corresponding category. For example, if the category corresponding to another filter is "sky", that filter is suitable for images belonging to the "sky" category.
Fig. 4 is a schematic diagram of a filter list according to an exemplary embodiment. As can be seen from Fig. 4, the filter list currently displays a "None" option and five filter options. The filter options include the "Smart" filter option, that is, the above-mentioned recommended smart filter marked as smart, as well as four other filter options, "Puff", "Soufflé", "Flat White", and "Jelly", among which the filter "Soufflé" is marked as recommended. If the user does not want to add a filter to the original image, the first option, "None", can be selected; to add a filter, the user can choose the smart filter or another filter, such as one marked as recommended.
Fig. 5 is a schematic diagram of a filter effect according to an exemplary embodiment. As shown in Fig. 5, the category to which the original image belongs among the categories included in the image target is infant, its category among those included in the image scene is indoor, and its category among those included in image quality is blurry.
Fig. 6 is a schematic diagram of a filter effect according to an exemplary embodiment. As shown in Fig. 6, the category to which the original image belongs among the categories included in the image target is adult, its category among those included in the image scene is stage, and its category among those included in image quality is dark.
Fig. 7 is a schematic diagram of a filter effect according to an exemplary embodiment. As shown in Fig. 7, the category to which the original image belongs among the categories included in the image target is other, its category among those included in the image scene is night scene, and its category among those included in image quality is dark.
In the embodiments of the present application, a smart filter can be recommended according to the category to which the original image belongs; the recommendation process is more objective, the recommended smart filter is better adapted to the original image, the recommendation result is accurate, and user experience can be improved.
Fig. 8 is a block diagram of a filter recommendation apparatus according to an exemplary embodiment. Referring to Fig. 8, the apparatus includes a recognition unit 701, a query unit 702, and a recommendation unit 703.
The recognition unit 701 is configured to, after receiving an instruction to add a filter to an original image, identify, among the categories included in preset image features, the category to which the original image belongs;
the query unit 702 is configured to query, according to a preset correspondence between categories and smart filters, the smart filter corresponding to the category to which the original image belongs;
the recommendation unit 703 is configured to perform filter recommendation according to the queried smart filter.
In one implementation, the recognition unit 701 includes: an image input module configured to, after receiving an instruction to add a filter to the original image, input the original image into a preset neural network model, the neural network model including at least one fully connected layer, each fully connected layer corresponding to one preset image feature, and each preset image feature including at least one category; and a category acquisition module configured to acquire the category, output by each fully connected layer, to which the original image belongs, where the category output by each fully connected layer belongs to the categories included in the preset image feature corresponding to that fully connected layer.
In one implementation, the filter recommendation apparatus further includes: an acquisition unit configured to acquire multiple sample images, each labeled with the category to which it belongs among the categories included in at least one preset image feature; a calculation unit configured to calculate, for each category included in the preset image features, the sampling weight corresponding to the sample images belonging to that category according to the number of sample images belonging to that category; a selection unit configured to select, according to the sampling weight corresponding to each category, sample images for model training from the multiple sample images belonging to that category; and a training unit configured to train the neural network model using the sample images to be trained.
In one implementation, the category to which the original image belongs includes at least one of the categories included in the preset image features, and in the preset correspondence between categories and smart filters, each smart filter corresponds to at least one category. The query unit 702 includes: a category query module configured to query, according to the preset correspondence between categories and smart filters, smart filters whose corresponding categories hit the category to which the original image belongs.
In one implementation, the recommendation unit 703 includes: a filter selection module configured to select, from the queried smart filters, the smart filter hitting the largest number of categories to which the original image belongs as the recommended smart filter, and to perform filter recommendation.
In one implementation, the filter recommendation apparatus further includes: a display unit configured to display a filter list after the filter selection module selects the recommended smart filter from the queried smart filters, display the recommended smart filter in the filter list, and mark the recommended smart filter; and a marking unit configured to, when a filter of the same category as the recommended smart filter exists in the filter list, mark at least one of the filters of the same category.
In one implementation, the preset image features include image target, image scene, and image quality.
In the embodiments of the present application, a smart filter can be recommended according to the category of the original image; the recommendation process is more objective, the recommended smart filter is better adapted to the original image, the recommendation result is accurate, and user experience can be improved.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs operations has been described in detail in the embodiments relating to the method, and will not be elaborated here.
Fig. 9 is a block diagram of an apparatus 800 for filter recommendation according to an exemplary embodiment. For example, the apparatus 800 is provided as an electronic device, which may be a mobile terminal. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and so on.
Referring to Fig. 9, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operations of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the apparatus 800. Examples of such data include instructions for any application or method operating on the apparatus 800, contact data, phone book data, messages, pictures, videos, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 806 provides power to the various components of the apparatus 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the apparatus 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure related to the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the apparatus 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC); when the apparatus 800 is in an operation mode such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the apparatus 800. For example, the sensor component 814 can detect the open/closed state of the apparatus 800 and the relative positioning of components, for example, the display and keypad of the apparatus 800; the sensor component 814 can also detect a change in position of the apparatus 800 or one of its components, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and temperature changes of the apparatus 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 can access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above filter recommendation method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example, a memory 804 including instructions, which can be executed by the processor 820 of the apparatus 800 to complete the above filter recommendation method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided; when the instructions in the computer program product are executed by a processor of an electronic device, the electronic device is enabled to perform the above filter recommendation method.
As for the apparatus embodiment, the electronic device embodiment, the computer-readable storage medium embodiment, and the computer program product embodiment, since they are substantially similar to the method embodiments, the description is relatively brief; for relevant details, refer to the description of the method embodiments.
The above are only preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included in the protection scope of the present application.

Claims (22)

  1. A filter recommendation method, comprising:
    after receiving an instruction to add a filter to an original image, identifying, among categories included in preset image features, the category to which the original image belongs;
    querying, according to a preset correspondence between categories and smart filters, the smart filter corresponding to the category to which the original image belongs; and
    performing filter recommendation according to the queried smart filter.
  2. The filter recommendation method according to claim 1, wherein the step of identifying, among the categories included in the preset image features, the category to which the original image belongs comprises:
    inputting the original image into a preset neural network model, the neural network model comprising at least one fully connected layer, each fully connected layer corresponding to one preset image feature, and each preset image feature including multiple categories; and
    acquiring the category, output by each fully connected layer, to which the original image belongs, wherein the category output by each fully connected layer belongs to the categories included in the preset image feature corresponding to that fully connected layer.
  3. The filter recommendation method according to claim 2, wherein the neural network model is obtained as follows:
    acquiring a plurality of sample images, each sample image being labeled with the category to which it belongs among the categories included in at least one of the preset image features;
    for each category included in the preset image features, calculating, according to the number of sample images belonging to that category among the sample images, the sampling weight corresponding to the sample images belonging to that category;
    selecting, according to the sampling weight corresponding to each category, sample images for model training from the plurality of sample images belonging to that category; and
    training the neural network model using the sample images to be trained.
  4. The filter recommendation method according to claim 1, wherein the category to which the original image belongs comprises at least one of the categories included in the preset image features; in the preset correspondence between categories and smart filters, each smart filter corresponds to at least one category; and the step of querying, according to the preset correspondence between categories and smart filters, the smart filter corresponding to the category to which the original image belongs comprises:
    querying, according to the preset correspondence between categories and smart filters, smart filters whose corresponding categories hit the category to which the original image belongs.
  5. The filter recommendation method according to claim 4, wherein the step of performing filter recommendation according to the queried smart filter comprises:
    selecting, from the queried smart filters, the smart filter hitting the largest number of categories to which the original image belongs as the recommended smart filter, and performing filter recommendation.
  6. The filter recommendation method according to claim 5, further comprising, after the step of selecting, from the queried smart filters, the smart filter hitting the largest number of categories to which the original image belongs as the recommended smart filter:
    displaying a filter list, displaying the recommended smart filter in the filter list, and marking the recommended smart filter; and
    when a filter of the same category as the recommended smart filter exists in the filter list, marking at least one of the filters of the same category.
  7. The filter recommendation method according to claim 1, wherein the preset image features comprise image target, image scene, and image quality.
  8. A filter recommendation apparatus, comprising:
    a recognition unit configured to, after receiving an instruction to add a filter to an original image, identify, among categories included in preset image features, the category to which the original image belongs;
    a query unit configured to query, according to a preset correspondence between categories and smart filters, the smart filter corresponding to the category to which the original image belongs; and
    a recommendation unit configured to perform filter recommendation according to the queried smart filter.
  9. The filter recommendation apparatus according to claim 8, wherein the recognition unit comprises:
    an image input module configured to, after receiving an instruction to add a filter to the original image, input the original image into a preset neural network model, the neural network model comprising at least one fully connected layer, each fully connected layer corresponding to one preset image feature, and each preset image feature including at least one category; and
    a category acquisition module configured to acquire the category, output by each fully connected layer, to which the original image belongs, wherein the category output by each fully connected layer belongs to the categories included in the preset image feature corresponding to that fully connected layer.
  10. The filter recommendation apparatus according to claim 9, further comprising:
    an acquisition unit configured to acquire a plurality of sample images, each sample image being labeled with the category to which it belongs among the categories included in at least one of the preset image features;
    a calculation unit configured to calculate, for each category included in the preset image features, the sampling weight corresponding to the sample images belonging to that category according to the number of sample images belonging to that category;
    a selection unit configured to select, according to the sampling weight corresponding to each category, sample images for model training from the plurality of sample images belonging to that category; and
    a training unit configured to train the neural network model using the sample images to be trained.
  11. The filter recommendation apparatus according to claim 8, wherein the category to which the original image belongs comprises at least one of the categories included in the preset image features; in the preset correspondence between categories and smart filters, each smart filter corresponds to at least one category; and the query unit comprises:
    a category query module configured to query, according to the preset correspondence between categories and smart filters, smart filters whose corresponding categories hit the category to which the original image belongs.
  12. The filter recommendation apparatus according to claim 11, wherein the recommendation unit comprises:
    a filter selection module configured to select, from the queried smart filters, the smart filter hitting the largest number of categories to which the original image belongs as the recommended smart filter, and to perform filter recommendation.
  13. The filter recommendation apparatus according to claim 12, further comprising:
    a display unit configured to display a filter list after the filter selection module selects the recommended smart filter from the queried smart filters, display the recommended smart filter in the filter list, and mark the recommended smart filter; and
    a marking unit configured to, when a filter of the same category as the recommended smart filter exists in the filter list, mark at least one of the filters of the same category.
  14. The filter recommendation apparatus according to claim 8, wherein the preset image features comprise image target, image scene, and image quality.
  15. An electronic device, comprising: a processor; and
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to perform the following operations:
    after receiving an instruction to add a filter to an original image, identifying, among categories included in preset image features, the category to which the original image belongs;
    querying, according to a preset correspondence between categories and smart filters, the smart filter corresponding to the category to which the original image belongs; and
    performing filter recommendation according to the queried smart filter.
  16. The electronic device according to claim 15, wherein the step of identifying, among the categories included in the preset image features, the category to which the original image belongs comprises:
    inputting the original image into a preset neural network model, the neural network model comprising at least one fully connected layer, each fully connected layer corresponding to one preset image feature, and each preset image feature including multiple categories; and
    acquiring the category, output by each fully connected layer, to which the original image belongs, wherein the category output by each fully connected layer belongs to the categories included in the preset image feature corresponding to that fully connected layer.
  17. The electronic device according to claim 15, wherein the neural network model is obtained as follows:
    acquiring a plurality of sample images, each sample image being labeled with the category to which it belongs among the categories included in at least one of the preset image features;
    for each category included in the preset image features, calculating, according to the number of sample images belonging to that category among the sample images, the sampling weight corresponding to the sample images belonging to that category;
    selecting, according to the sampling weight corresponding to each category, sample images for model training from the plurality of sample images belonging to that category; and
    training the neural network model using the sample images to be trained.
  18. The electronic device according to claim 15, wherein the category to which the original image belongs comprises at least one of the categories included in the preset image features; in the preset correspondence between categories and smart filters, each smart filter corresponds to at least one category; and the step of querying, according to the preset correspondence between categories and smart filters, the smart filter corresponding to the category to which the original image belongs comprises:
    querying, according to the preset correspondence between categories and smart filters, smart filters whose corresponding categories hit the category to which the original image belongs.
  19. The electronic device according to claim 18, wherein the step of performing filter recommendation according to the queried smart filter comprises:
    selecting, from the queried smart filters, the smart filter hitting the largest number of categories to which the original image belongs as the recommended smart filter, and performing filter recommendation.
  20. The electronic device according to claim 19, further comprising, after the step of selecting, from the queried smart filters, the smart filter hitting the largest number of categories to which the original image belongs as the recommended smart filter:
    displaying a filter list, displaying the recommended smart filter in the filter list, and marking the recommended smart filter; and
    when a filter of the same category as the recommended smart filter exists in the filter list, marking at least one of the filters of the same category.
  21. The electronic device according to claim 15, wherein the preset image features comprise image target, image scene, and image quality.
  22. A non-transitory computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the filter recommendation method according to any one of claims 1 to 7.
PCT/CN2019/112572 2018-12-10 2019-10-22 Filter recommendation method and apparatus, electronic device, and storage medium WO2020119254A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811505873.2A CN109727208A (zh) 2018-12-10 2018-12-10 Filter recommendation method and apparatus, electronic device, and storage medium
CN201811505873.2 2018-12-10

Publications (1)

Publication Number Publication Date
WO2020119254A1 true WO2020119254A1 (zh) 2020-06-18

Family

ID=66295271

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/112572 WO2020119254A1 (zh) 2018-12-10 2019-10-22 Filter recommendation method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN109727208A (zh)
WO (1) WO2020119254A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109727208A (zh) Filter recommendation method and apparatus, electronic device, and storage medium
CN112819685B (zh) Image style mode recommendation method and terminal
CN112561827A (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN113194254A (zh) Image capturing method and apparatus, electronic device, and storage medium
CN115797723B (zh) Filter recommendation method and apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104636759A (zh) Method for obtaining recommended filter information for a picture and picture filter information recommendation system
CN105224950A (zh) Filter category identification method and apparatus
CN107730461A (zh) Image processing method, apparatus, device, and medium
CN108897786A (zh) Application recommendation method and apparatus, storage medium, and mobile terminal
CN109727208A (zh) Filter recommendation method and apparatus, electronic device, and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN103927372B (zh) * 2014-04-24 2017-09-29 厦门美图之家科技有限公司 Image processing method based on user semantics
CN104700442A (zh) * 2015-03-30 2015-06-10 厦门美图网科技有限公司 Image processing method and system for automatically adding filters and text
CN108898082B (zh) * 2018-06-19 2020-07-03 Oppo广东移动通信有限公司 Picture processing method, picture processing apparatus, and terminal device

Also Published As

Publication number Publication date
CN109727208A (zh) 2019-05-07

Similar Documents

Publication Publication Date Title
WO2020119254A1 (zh) Filter recommendation method and apparatus, electronic device, and storage medium
US11520824B2 Method for displaying information, electronic device and system
US10534972B2 Image processing method, device and medium
CN105069083B (zh) Method and apparatus for determining associated users
WO2022037307A1 (zh) Information recommendation method and apparatus, and electronic device
CN109189986B (zh) Information recommendation method and apparatus, electronic device, and readable storage medium
EP3173969B1 Method, apparatus and terminal device for playing music based on a target face photo album
JP2016531362A (ja) Skin color adjustment method, skin color adjustment apparatus, program, and recording medium
KR20160031992A (ko) Real-time video providing method, apparatus, server, terminal device, program, and recording medium
CN112672208B (zh) Video playback method and apparatus, electronic device, server, and system
CN109040605A (zh) Shooting guidance method and apparatus, mobile terminal, and storage medium
WO2022227393A1 (zh) Image shooting method and apparatus, electronic device, and computer-readable storage medium
WO2021063096A1 (zh) Video synthesis method and apparatus, electronic device, and storage medium
CN106550252A (zh) Information pushing method, apparatus, and device
CN109033991A (zh) Image recognition method and apparatus
CN105203456B (zh) Plant species identification method and apparatus
CN112069951A (zh) Video clip extraction method, video clip extraction apparatus, and storage medium
CN110019897B (zh) Method and apparatus for displaying pictures
CN107992839A (zh) Person tracking method, apparatus, and readable storage medium
CN107801282A (zh) Desk lamp, and desk lamp control method and apparatus
CN107122697A (zh) Automatic photo acquisition method and apparatus, and electronic device
CN107105311A (zh) Live streaming method and apparatus
CN107122801B (zh) Image classification method and apparatus
US11715234B2 Image acquisition method, image acquisition device, and storage medium
CN114189719A (zh) Video information extraction method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19897014

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19897014

Country of ref document: EP

Kind code of ref document: A1