CN114257730A - Image data processing method and device, storage medium and computer equipment - Google Patents

Info

Publication number
CN114257730A
Authority
CN
China
Prior art keywords
filter
data
shooting
image
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011003453.1A
Other languages
Chinese (zh)
Inventor
田雷
王彬
潘攀
Current Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202011003453.1A
Publication of CN114257730A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • G06T3/04
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Abstract

The invention discloses an image data processing method and apparatus, a storage medium, and computer equipment. The method comprises: acquiring shooting data of a shooting object, wherein the shooting data comprises at least one of the following: a shot picture and a shot video; analyzing the shooting data and determining, from multiple kinds of filter models, a filter model matched with the shooting data, wherein the filter model is an adversarial model; and performing filter processing on the shooting data using the selected filter model to generate a filter image. The invention solves the technical problems that processing images at the user end demands high professional skill and suffers from low processing efficiency.

Description

Image data processing method and device, storage medium and computer equipment
Technical Field
The present invention relates to the field of image processing, and in particular to an image data processing method and apparatus, a storage medium, and a computer device.
Background
In the field of image processing, images and videos shot by a user often look flat and lifeless because of external factors such as poor contrast between the image subject and its surroundings, insufficient color, and over- or underexposure of the imaging equipment. To improve the result, the user can either process the image with image processing software or retake it. However, the former approach demands considerable professional skill and expertise from the user and is time-consuming and labor-intensive, while the latter is inefficient and dampens the user's enthusiasm.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present invention provide an image data processing method and apparatus, a storage medium, and computer equipment, so as to at least solve the technical problems that processing images at the user end demands high professional skill and suffers from low processing efficiency.
According to one aspect of the embodiments of the present invention, an image data processing method is provided, comprising: acquiring shooting data of a shooting object, wherein the shooting data comprises at least one of the following: a shot picture and a shot video; analyzing the shooting data and determining, from multiple kinds of filter models, a filter model matched with the shooting data, wherein the filter model is an adversarial model; and performing filter processing on the shooting data using the selected filter model to generate a filter image.
According to another aspect of the embodiments of the present invention, an image data processing method is also provided, comprising: displaying, on an interactive interface, shooting data collected by shooting equipment, wherein the shooting data comprises at least one of the following: a shot picture and a shot video; if a filter instruction is detected in any area of the interactive interface, triggering analysis of the picture of the shooting data and determining a filter model matched with the shooting data, wherein the filter model is an adversarial model; and displaying a filter image on the interactive interface, wherein the filter image is an image generated by performing filter processing on the shooting data using the selected filter model.
According to still another aspect of the embodiments of the present invention, an image data processing method is also provided, comprising: displaying shooting data on an interactive interface, wherein the shooting data comprises at least one of the following: a shot picture and a shot video; sensing, in the interactive interface, a filter instruction matched with the shooting data; determining, in response to the filter instruction, a filter model matched with the shooting data, wherein the filter model is an adversarial model; outputting a selection page on the interactive interface, wherein the selection page provides at least one filter option, and different filter options represent filter models of different levels for the shooting data; and displaying a filter image on the interactive interface, wherein the filter image is an image obtained by performing filter processing on the shooting data based on the selected filter model.
According to still another aspect of the embodiments of the present invention, an image data processing method is also provided, comprising: a front-end client uploading shooting data of a shooting object, wherein the shooting data comprises at least one of the following: a shot picture and a shot video; the front-end client transmitting the shooting data to a background server; the front-end client receiving a filter model, returned by the background server, that matches the shooting data, wherein the filter model is an adversarial model determined from multiple kinds of filter models; and the front-end client performing filter processing on the shooting data using the selected filter model to generate a filter image.
According to one aspect of the embodiments of the present invention, an image data processing apparatus is provided, comprising: a first acquisition module configured to collect shooting data of a shooting object, wherein the shooting data comprises at least one of the following: a shot picture and a shot video; a first determining module configured to analyze the shooting data and determine, from multiple kinds of filter models, a filter model matched with the shooting data, wherein the filter model is an adversarial model; and a first generation module configured to perform filter processing on the shooting data using the selected filter model to generate a filter image.
According to another aspect of the embodiments of the present invention, an image data processing apparatus is also provided, comprising: a first display module configured to display, on an interactive interface, shooting data collected by shooting equipment, wherein the shooting data comprises at least one of the following: a shot picture and a shot video; a second determining module configured to, when a filter instruction is detected in any area of the interactive interface, trigger analysis of the picture of the shooting data and determine a filter model matched with the shooting data, wherein the filter model is an adversarial model; and a second display module configured to display a filter image on the interactive interface, wherein the filter image is an image generated by performing filter processing on the shooting data using the selected filter model.
According to still another aspect of the embodiments of the present invention, an image data processing apparatus is also provided, comprising: a third display module configured to display shooting data on an interactive interface, wherein the shooting data comprises at least one of the following: a shot picture and a shot video; a first sensing module configured to sense, in the interactive interface, a filter instruction matched with the shooting data; a third determining module configured to determine, in response to the filter instruction, a filter model matched with the shooting data, wherein the filter model is an adversarial model; a first output module configured to output a selection page on the interactive interface, wherein the selection page provides at least one filter option, and different filter options represent filter models of different levels for the shooting data; and a fourth display module configured to display a filter image on the interactive interface, wherein the filter image is an image obtained by performing filter processing on the shooting data based on the selected filter model.
According to still another aspect of the embodiments of the present invention, an image data processing apparatus is also provided, comprising: a first uploading module configured to upload, from a front-end client, shooting data of a shooting object, wherein the shooting data comprises at least one of the following: a shot picture and a shot video; a first transmission module configured to transmit the shooting data from the front-end client to a background server; a first receiving module configured to receive, at the front-end client, a filter model, returned by the background server, that matches the shooting data, wherein the filter model is an adversarial model determined from multiple kinds of filter models; and a second generation module configured to perform, at the front-end client, filter processing on the shooting data using the selected filter model to generate a filter image.
According to one aspect of the embodiments of the present invention, a storage medium is also provided, comprising a stored program, wherein when the program runs, the device on which the storage medium is located is controlled to execute any one of the image data processing methods described above.
According to another aspect of the embodiments of the present invention, there is also provided a computer apparatus including: a memory and a processor, the memory storing a computer program; the processor is configured to execute the computer program stored in the memory, and when the computer program runs, the processor is enabled to execute any one of the image data processing methods described above.
In the embodiments of the invention, shooting data of a shooting object is acquired, a filter model matched with the shooting data is determined, and the filter model is used to perform filter processing on the shooting data. This achieves the aim of generating a filter image processed by the filter model, realizes the technical effect of improving the imaging quality of the shot image, and solves the technical problems that processing images at the user end demands high professional skill and suffers from low processing efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a block diagram of the hardware configuration of a computer terminal for implementing an image data processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a first method for processing image data according to an embodiment of the present invention;
FIG. 3 is a flowchart of a second method for processing image data according to an embodiment of the present invention;
FIG. 4 is a flowchart of a third method for processing image data according to an embodiment of the present invention;
FIG. 5 is a flowchart of a fourth method for processing image data according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a method of processing image data provided in accordance with an alternative embodiment of the invention;
FIG. 7 is a block diagram of a first apparatus for processing image data according to an embodiment of the present invention;
FIG. 8 is a block diagram of a second image data processing apparatus according to an embodiment of the present invention;
FIG. 9 is a block diagram of a third apparatus for processing image data according to an embodiment of the present invention;
FIG. 10 is a block diagram showing a fourth configuration of an image data processing apparatus according to an embodiment of the present invention;
FIG. 11 is a block diagram of a computer terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms or terms appearing in the description of the embodiments of the present application are applicable to the following explanations:
A filter: an image data processing technique that manipulates the pixel values in an image to achieve various special effects.
An intelligent filter: automatically adds a filter effect suited to a given image by means of machine learning, without requiring the user to manually select a filter and adjust the effect. It usually processes image elements such as channels, pixels, and layers jointly to strengthen or weaken certain parts of the image or video, producing visual effects such as gradients, halos, and tonal changes, so that the whole image or video satisfies human aesthetic perception and finally achieves a better artistic effect.
U-Net: an algorithm for semantic segmentation using fully convolutional networks, referring to the network structure proposed in the paper "U-Net: Convolutional Networks for Biomedical Image Segmentation".
Generative Adversarial Network (GAN): an unsupervised learning algorithm that learns by having two neural networks play a game against each other, producing a neural network model (i.e., an adversarial model) that meets the requirements.
A batch normalization layer is a technique for improving the performance and training stability of deep neural networks; it addresses the problem that convergence slows down as the number of layers in the network increases. The technique presents a zero-mean, unit-variance input to any layer in the network.
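The zero-mean/unit-variance transform described above can be sketched in a few lines. This is an illustrative plain-Python version: the learnable scale `gamma` and shift `beta` correspond to a real batch normalization layer's affine parameters, but the function and variable names are our own.

```python
# Minimal sketch of the zero-mean / unit-variance transform that a
# batch normalization layer applies to one feature across a mini-batch.
# gamma (scale) and beta (shift) stand in for the layer's learnable
# parameters; eps guards against division by zero.

def batch_normalize(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a list of feature values to zero mean / unit variance."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in batch]

values = [2.0, 4.0, 6.0, 8.0]
normalized = batch_normalize(values)
# After normalization the batch mean is ~0 and the variance is ~1,
# which keeps gradients well-scaled as the network deepens.
```

In a trained network, `gamma` and `beta` are learned per feature, which is what lets the patent's per-scene-category normalization layers specialize.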
Example 1
According to an embodiment of the present invention, a method embodiment for processing image data is provided. It should be noted that the steps illustrated in the flowchart of the figure may be performed in a computer system capable of executing a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps illustrated or described may be performed in an order different from the one here.
The method provided by the first embodiment of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. FIG. 1 shows a block diagram of the hardware configuration of a computer terminal (or mobile device) for implementing the image data processing method. As shown in FIG. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, ..., 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. In addition, the terminal may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in FIG. 1 is only an illustration and does not limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the image data processing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing, i.e., implements the image data processing method described above, by executing the software programs and modules stored in the memory 104. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
Under the above operating environment, the present application provides a method for processing image data as shown in fig. 2. Fig. 2 is a flowchart of a first image data processing method according to an embodiment of the present invention, and as shown in fig. 2, the flowchart includes the following steps:
step S202, shooting data of a shooting object is collected, wherein the shooting data comprises at least one of the following data: shooting pictures and videos;
step S204, analyzing the shooting data, and determining a filter model matched with the shooting data from a plurality of filter models, wherein the filter model is a countermeasure model;
in step S206, filter processing is performed on the captured data using the selected filter model, and a filter image is generated.
Through the above processing, shooting data of a shooting object is collected, a filter model matched with the shooting data is determined, and the filter model is used to perform filter processing on the shooting data. This achieves the purpose of generating a filter image processed by the filter model, realizes the technical effect of improving the imaging quality of the shot image, and solves the technical problems that processing images at the user end demands high professional skill and suffers from low processing efficiency.
As an alternative embodiment, determining a filter model matched with the shooting data comprises: performing scene classification on the shooting data to obtain the scene category to which the shooting data belongs; and invoking, from multiple kinds of filter models, a filter model matched with the shooting data based on the scene category of the shooting object in the shooting data. The scene category of the shooting data is particularly important for selecting the filter model: shooting objects in different scenes place different demands on factors such as image style, overall tone, and lighting, so a single uniform image processing model cannot simply be applied to all of them.
Optionally, the present embodiment may first perform scene classification on the shooting data and obtain the scene category. For example, the scene categories may include: gourmet food, clothing, outdoor scenes, digital products, and so on. Scene recognition and classification can be done in various ways. For example, an algorithm model capable of recognizing and classifying image scenes may be trained in advance through machine learning, and the scene category of a shot image is obtained by feeding the image data into that model. Alternatively, features may be extracted from the image data automatically, and when the extracted image features match feature data in a preset scene feature library, the scene category of the shot image is determined.
The embodiment may then invoke a matched filter model based on the scene category. For example, in a "gourmet" scene, a gourmet filter model matched with food images is invoked; this model can raise the contrast and saturation of the foreground, so that the food in the processed image looks richer and fuller in color, with clearer detail, a more prominent subject, and a better overall imaging effect. As another example, in an "outdoor" scene, an outdoor filter model matched with the outdoor environment is invoked; this model can strengthen the blurring of the distant view and the background in the image, avoiding excessive distracting elements in the outdoor environment and making the shooting object stand out in the picture.
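The scene-to-model dispatch described above can be sketched as a simple lookup. The category names and adjustment values below are illustrative placeholders of our own, not parameters disclosed by the patent:

```python
# Illustrative dispatch from a recognized scene category to a matching
# filter model. Category names and parameter values are hypothetical.

FILTER_MODELS = {
    "gourmet": {"contrast": 1.2, "saturation": 1.3, "background_blur": 0.0},
    "outdoor": {"contrast": 1.0, "saturation": 1.0, "background_blur": 0.6},
    "clothing": {"contrast": 1.1, "saturation": 1.1, "background_blur": 0.2},
}

def select_filter_model(scene_category, default="clothing"):
    """Return the filter model matched to the recognized scene category."""
    return FILTER_MODELS.get(scene_category, FILTER_MODELS[default])

model = select_filter_model("gourmet")
# A gourmet image gets boosted contrast/saturation so food colors look fuller;
# an outdoor image instead gets stronger background blurring.
```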
As an alternative embodiment, the scene category to which the shooting data belongs may be obtained as follows: extracting image features from the shooting data, wherein the image features comprise at least one of the following: features of the shooting object and features of the background image; determining scene parameters of the shooting data based on the image features, wherein the scene parameters represent the product category to which the shooting object recorded in the shooting data belongs; and constructing the scene category of the shooting data from its scene parameters. Through this processing, both the object image features and the background image features of the shooting data are extracted, so the type of the shooting object and the type of the environment behind it can both be taken into account, achieving the technical effect of accurately determining the scene category of the shooting data. Features of the shooting object may include its type, color, size, and shape. For example, the object may be a cake that is white, occupies 50% of the image area, and is circular. Features of the background image may include: clear light with moderate brightness, an indoor background scene, and background objects such as tables and chairs. By extracting these features, the scene parameters of the shooting data are determined, and a scene category is constructed from them; for example, the scene category may be: indoor close-range gourmet photography with sufficient light. Determining an accurate scene category for the shooting object lets the subsequent filter model process the image data with more precision and obtain a better filter effect.
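As a toy illustration of turning extracted scene parameters into a scene category label like the one above, assuming parameter names and thresholds of our own:

```python
# Hypothetical construction of a scene category label from extracted
# image features (object features + background features). The area-ratio
# threshold and all names are illustrative assumptions.

def build_scene_category(object_type, object_area_ratio, light, indoor):
    """Combine scene parameters into a scene-category label."""
    # An object covering a large share of the frame suggests a close-range shot.
    distance = "close-range" if object_area_ratio >= 0.4 else "distant"
    place = "indoor" if indoor else "outdoor"
    return f"{place} {distance} {object_type} photography with {light} light"

# The cake example: white circular object covering 50% of the frame,
# shot indoors under sufficient light.
category = build_scene_category("gourmet", 0.5, "sufficient", indoor=True)
# -> "indoor close-range gourmet photography with sufficient light"
```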
As an alternative embodiment, the network structure of each filter model comprises: a U-Net model structure with global features, and independent batch normalization layers corresponding to different scene categories.
Optionally, performing filter processing on the shooting data with the selected filter model comprises: extracting global elements from the shooting data based on the U-Net model structure with global features in the filter model, wherein the global elements comprise at least one of the following: light, composition, foreground, and background; migrating, according to the global elements, the pixel data distribution of the shooting data to the pixel data distribution of the scene category matched with the shooting data; and generating a filter image based on the migration result.
Each filter model can process image data matched with a particular scene category for that category. The U-Net model structure with global features can extract global factors such as light, composition, foreground, and background from the shooting data, enabling a more accurate image data transformation with a consistent style. In the related art, a generative adversarial network can extract local image features through a local-feature U-Net structure and use them for pixel-level processing, thereby filtering the image. However, because the extraction and processing of global image features is lacking, global elements such as the overall light, composition, foreground, and background of the image are not taken into account; regions that should be emphasized are not enhanced, regions that should be de-emphasized are enhanced, and the generated filter image rarely achieves a satisfactory artistic effect. The U-Net model structure with global features can extract the characteristics of the scene category of the shot image more accurately and comprehensively, and process the pixel data of the shooting data in a way that conforms to those characteristics. Specifically, the pixel data distribution of the shooting data is migrated to the pixel data distribution of the scene category matched with the shooting data, which strengthens the integrity of the image data processed by the filter model, makes the processing better fit the style of the scene category, and avoids an inconsistent image style.
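The distribution migration step can be illustrated, per channel, by simple moment matching: shift the source pixel statistics toward the target scene category's statistics. The learned adversarial model performs a much richer mapping; this sketch and its target statistics are only illustrative:

```python
# Simplified per-channel distribution migration: map the source pixel
# values so their mean/std match a target scene category's statistics.
# The adversarial filter model learns a far richer transform; this only
# illustrates the moment-matching idea. Target values are hypothetical.

def migrate_distribution(pixels, target_mean, target_std):
    """Map pixel values so their mean/std match the target distribution."""
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5 or 1.0
    return [(p - mean) / std * target_std + target_mean for p in pixels]

channel = [100.0, 120.0, 140.0, 160.0]
# Shift a dull channel toward a brighter, higher-contrast target style.
migrated = migrate_distribution(channel, target_mean=150.0, target_std=40.0)
```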
The independent batch normalization layers normalize the sample features in the training set, making the gradient direction at convergence more accurate and effective, and thereby overcoming the slowdown in convergence as the network deepens: the model can take larger gradient-descent steps, the filter model is easier to obtain through training, and the model itself is more stable. For example, once the network structure of the filter model includes independent batch normalization layers, filter models for different scene categories can be trained more quickly, yielding filter models with a better classification effect.
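A minimal sketch of keeping independent batch normalization parameters per scene category and selecting them at run time; the class, category names, and parameter layout are our own assumptions about what "independent batch normalization layers corresponding to different scene categories" could look like:

```python
# Sketch of per-scene-category batch normalization: one set of learnable
# (gamma, beta) parameters is kept per category and selected at run time,
# so each category can normalize toward its own style. Names are illustrative.

class SceneConditionalBN:
    def __init__(self, categories):
        # one (gamma, beta) pair per scene category
        self.params = {c: {"gamma": 1.0, "beta": 0.0} for c in categories}

    def forward(self, batch, category, eps=1e-5):
        p = self.params[category]
        n = len(batch)
        mean = sum(batch) / n
        var = sum((x - mean) ** 2 for x in batch) / n
        return [p["gamma"] * (x - mean) / (var + eps) ** 0.5 + p["beta"]
                for x in batch]

bn = SceneConditionalBN(["gourmet", "outdoor", "clothing"])
bn.params["gourmet"]["gamma"] = 1.5  # each category trains its own scale/shift
out = bn.forward([1.0, 2.0, 3.0], "gourmet")
```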
As an alternative embodiment, after a filter model matched with the shooting data is determined from multiple kinds of filter models, the method comprises: outputting a selection page on the interactive interface, wherein the selection page provides at least one filter option, and different filter options represent filter models of different levels for the shooting data; and, when any filter option is triggered, obtaining the filter model matched with that option. After the filter model matched with the shooting data is determined, the user is allowed to choose independently how strongly the determined filter model processes the image data; this strengthens the user's freedom of choice and makes it easier to obtain an image that meets the user's needs. For example, after the filter model is determined, the interactive interface may provide options such as "preliminary filter processing", "medium filter processing", and "high filter processing"; corresponding to the different filter options, the filter model adjusts its internal parameter settings to process the original image data to different degrees. When any filter option is triggered, the corresponding filter model automatically changes its internal parameters according to the triggered option, and the adjusted filter model is invoked.
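One simple way the different filter levels could be realized (purely hypothetical; the patent does not specify the internal parameters) is to blend the original and fully filtered pixel values with an option-dependent weight:

```python
# Hypothetical mapping from the filter options on the selection page to a
# processing strength, applied as a blend between the original pixel values
# and the fully filtered ones. Option names and weights are illustrative.

STRENGTH = {"preliminary": 0.3, "medium": 0.6, "high": 1.0}

def apply_filter_level(original, filtered, option):
    """Blend original and filtered pixels according to the chosen option."""
    alpha = STRENGTH[option]
    return [(1 - alpha) * o + alpha * f for o, f in zip(original, filtered)]

orig = [100.0, 100.0]
full = [200.0, 50.0]
medium = apply_filter_level(orig, full, "medium")
# "medium" applies 60% of the full filter effect: [160.0, 70.0]
```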
Fig. 3 is a flowchart of a second image data processing method according to an embodiment of the present invention. As shown in fig. 3, the process includes the following steps:
step S302, shooting data collected by the shooting equipment is displayed on an interactive interface, wherein the shooting data comprises at least one of the following data: shooting pictures and videos;
step S304, if a filter instruction is detected in any one area of the interactive interface, triggering and analyzing the picture of the shot data, and determining a filter model matched with the shot data, wherein the filter model is a countermeasure model;
and S306, displaying a filter image on the interactive interface, wherein the filter image is an image generated by filtering the shooting data by using the selected filter model.
Through the above processing, the shooting data, the analysis of its picture, and the filter image are all displayed on the interactive interface: the filter model is determined according to the detected filter instruction, the shooting data is processed using that filter model, and the process of processing the image data with the filter model is shown on the interactive interface. This achieves the technical effect of visually displaying the filter effect and brings the user a better experience.
Fig. 4 is a flowchart of a third method for processing image data according to an embodiment of the present invention. As shown in fig. 4, the process includes the following steps:
step S402, shooting data are displayed on the interactive interface, wherein the shooting data comprise at least one of the following data: shooting pictures and videos;
s404, sensing a filter instruction matched with the shooting data in the interactive interface;
step S406, responding to a filter instruction, and determining a filter model matched with the shooting data, wherein the filter model is a countermeasure model;
step S408, outputting a selection page on the interactive interface, wherein the selection page provides at least one filter option, and different filter options are used for representing filter models with different levels aiming at shooting data;
and S410, displaying a filter image on the interactive interface, wherein the filter image is an image obtained by filtering the shot data based on the selected filter model.
Through the above processing, a specific filter model is determined on the interactive interface according to the filter instruction corresponding to the type of filter model and the filter options representing different levels of the filter model, and the shooting data is filter-processed with the determined filter model, thereby achieving the technical effect of providing an interactive means for selecting both the type and the level of the filter model.
As an alternative embodiment, if the picture quality of the shooting data is lower than that of standard data, a filter instruction matching the shooting data is triggered. When the picture quality of the shooting data is too low, a conventional filter model may not process the shooting data effectively. In this case, a filter instruction may be triggered to call a filter model that matches the picture quality of the current shooting data. For example, the called filter model may be a model that performs filter processing adapted to the picture quality of the current shooting data, or a model that first improves the picture quality of the current shooting data according to a predetermined algorithm and then performs filter processing.
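As a hedged sketch (the embodiment does not specify a quality metric), the quality gate can be pictured as comparing a simple sharpness proxy, here the mean absolute difference between horizontally adjacent pixels, against an assumed standard-data threshold before triggering the quality-matched filter instruction:

```python
def sharpness_score(gray):
    """Mean absolute difference between horizontally adjacent pixels of a
    2-D grayscale image (higher = sharper). A stand-in quality metric."""
    total, count = 0.0, 0
    for row in gray:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count if count else 0.0

STANDARD_QUALITY = 5.0  # assumed threshold representing "standard data"

def needs_quality_matched_filter(gray):
    """Trigger the quality-matched filter instruction for low-quality frames."""
    return sharpness_score(gray) < STANDARD_QUALITY

blurry = [[100, 101, 100], [101, 100, 101]]
print(needs_quality_matched_filter(blurry))  # -> True
```

A real system would likely combine several such measures (noise, exposure, resolution); the single threshold is only for illustration.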
As an alternative embodiment, before the filter image is displayed on the interactive interface, the method further comprises: receiving a selection instruction in the area where the corresponding filter option on the selection page is located; and, in response to the selection instruction, triggering the corresponding filter option and calling the filter model of the corresponding level. Through this processing, an interaction means is provided so that the user can independently select the level of the filter model and obtain a filter image that meets the user's requirements.
Fig. 5 is a flowchart of a fourth method for processing image data according to an embodiment of the present invention. As shown in fig. 5, the process includes the following steps:
step S502, the front-end client uploads shooting data of a shooting object, wherein the shooting data comprises at least one of the following data: shooting pictures and videos;
step S504, the front-end client transmits the shooting data to the background server;
step S506, the front-end client receives a filter model matched with the shooting data returned by the background server, wherein the filter model is an antagonistic model determined from a plurality of types of filter models;
and step S508, the front-end client uses the selected filter model to perform filter processing on the shooting data to generate a filter image.
Through the above processing, the front-end client performs a model call to the background server: the background server returns the matched filter model based on the shooting data transmitted by the front-end client, and the front-end client processes the shooting data with that filter model to generate the filter image.
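The S502–S508 flow can be sketched with in-process stand-ins for the network calls; every name, endpoint, and model identifier below is an assumption for illustration only:

```python
# Hypothetical scene-to-model table held by the background server.
FILTER_MODELS = {"food": "food_gan", "clothing": "clothing_gan"}

def backend_match_filter(shooting_data):
    """Server side (S504/S506): classify the scene, return the matched model.
    A real deployment would run the scene classification network here."""
    scene = shooting_data["scene_hint"]
    return FILTER_MODELS.get(scene, "generic_gan")

def client_generate_filter_image(shooting_data):
    """Client side: upload data, receive matched model, apply it locally (S508)."""
    model_name = backend_match_filter(shooting_data)
    return {"model": model_name,
            "image": f"{shooting_data['image']}::filtered_by::{model_name}"}

result = client_generate_filter_image({"image": "dish.jpg", "scene_hint": "food"})
print(result["model"])  # -> food_gan
```

In the actual system the two functions would sit on opposite sides of an HTTP boundary; the split shown (matching on the server, filter processing on the client) mirrors steps S502–S508.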
Fig. 6 is a schematic diagram of a processing method of image data provided in accordance with an alternative embodiment of the present invention. As shown in fig. 6, the image processing method according to the alternative embodiment of the present invention includes two major units, front-end interaction and background algorithm.
The flow in the front-end interaction unit in fig. 6 includes the following steps:
s1, uploading images or videos by the merchant; s2, displaying scene/category information, analyzing the scene/category of the image by using a scene classification module, and displaying the scene category corresponding to the image; s3, intelligent filter processing, namely processing the image data by using a filter model corresponding to the scene type of the image in the intelligent filter module; and S4, obtaining an intelligent filter image.
The background algorithm unit in fig. 6 includes a scene classification module and an intelligent filter module. The scene classification module predicts the scene category of an image by analyzing the image uploaded by the user, and different filter models are then used to process the image data according to the obtained scene category. The filter model is a countermeasure model comprising a generative adversarial network, and the generative adversarial network includes a U-Net network structure with global features and an independent batch normalization layer with a specific style.
A generative adversarial network is a neural network learning algorithm that can generate a brand-new image with a filter effect from an original image taken as the input sample image. The generative adversarial network comprises a generator and a discriminator: the generator, through an encoder-decoder, generates a filter-processed image data set from the original image data serving as the sample set; the discriminator compares the images before and after processing and distinguishes whether an image is an original real image or an image produced by the generator. The generator and the discriminator continuously improve their respective performance through this adversarial process, reaching a dynamic balance and yielding a countermeasure model that meets the requirements. The generative adversarial network may employ a max-min loss function to constrain the model's learning process. By adding a U-Net network structure based on global features and an independent batch normalization layer with a specific style to the generative adversarial network, the network can be optimized and the countermeasure model guided to learn features that are both global and style-specific. For example, a generative adversarial network themed on food is trained to obtain a food-themed countermeasure model; this model can process images whose scene category is food in a targeted way, so that the overall style of the processed image is consistent and fits the food theme well.
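For reference, the standard GAN minimax objective is shown below; the description above names a max-min loss but does not write it out, so this is the textbook formulation rather than the patent's exact loss:

```latex
\min_G \max_D \, V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here the discriminator \(D\) maximizes its ability to separate real images \(x\) from generated images \(G(z)\), while the generator \(G\) minimizes the same objective; in the filter setting the generator's input is the original image rather than random noise \(z\).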
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the image data processing method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
According to an embodiment of the present invention, there is also provided an apparatus for implementing the first image data processing method, and fig. 7 is a block diagram of a first image data processing apparatus according to an embodiment of the present invention, as shown in fig. 7, the apparatus includes: a first acquisition module 702, a first determination module 704 and a first generation module 706, which are described below.
A first collecting module 702, configured to collect shooting data of a shooting object, where the shooting data includes at least one of: shooting pictures and videos; a first determining module 704, connected to the first collecting module 702, for analyzing the shooting data and determining a filter model matching the shooting data from a plurality of kinds of filter models, wherein the filter model is a countermeasure model; and a first generating module 706, connected to the first determining module 704, for performing filter processing on the shooting data using the selected filter model to generate a filter image.
It should be noted here that the first acquiring module 702, the first determining module 704 and the first generating module 706 correspond to steps S202 to S206 in embodiment 1, and the three modules are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in the first embodiment. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
According to an embodiment of the present invention, there is further provided an apparatus for implementing the second image data processing method, and fig. 8 is a block diagram of a second image data processing apparatus according to an embodiment of the present invention, and as shown in fig. 8, the apparatus includes: a first display module 802, a second determination module 804, and a second display module 806, which are described below.
The first display module 802 is configured to display shooting data collected by a shooting device on an interactive interface, where the shooting data includes at least one of: shooting pictures and videos; a second determining module 804, connected to the first displaying module 802, for triggering and analyzing the picture of the shot data and determining a filter model matched with the shot data when a filter instruction is detected in any one area of the interactive interface, where the filter model is a countermeasure model; and a second display module 806, connected to the second determining module 804, configured to display a filter image on the interactive interface, where the filter image is an image generated by performing filter processing on the shot data using the selected filter model.
It should be noted here that the first display module 802, the second determination module 804 and the second display module 806 correspond to steps S302 to S306 in embodiment 1, and the three modules are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
According to an embodiment of the present invention, there is also provided an apparatus for implementing the third method for processing image data, and fig. 9 is a block diagram of a third apparatus for processing image data according to an embodiment of the present invention, and as shown in fig. 9, the apparatus includes: a third display module 902, a first sensing module 904, a third determining module 906, a first output module 908, and a fourth display module 910, which will be described below.
A third display module 902, configured to display shooting data on the interactive interface, where the shooting data includes at least one of: shooting pictures and videos; a first sensing module 904, connected to the third display module 902, for sensing a filter instruction matching the shooting data in the interactive interface; a third determining module 906, connected to the first sensing module 904, for determining, in response to the filter instruction, a filter model matched with the shooting data, wherein the filter model is a countermeasure model; a first output module 908, connected to the third determining module 906, configured to output a selection page on the interactive interface, where the selection page provides at least one filter option, and different filter options are used to characterize filter models with different levels for the shooting data; and a fourth display module 910, connected to the first output module 908, configured to display a filter image on the interactive interface, where the filter image is an image obtained by performing filter processing on the shooting data based on the selected filter model.
It should be noted that, the third display module 902, the first sensing module 904, the third determining module 906, the first output module 908 and the fourth display module 910 correspond to steps S402 to S410 in embodiment 1, and the five modules are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
According to an embodiment of the present invention, there is also provided an apparatus for implementing the fourth method for processing image data, and fig. 10 is a block diagram of a fourth apparatus for processing image data according to an embodiment of the present invention, as shown in fig. 10, the apparatus includes: a first uploading module 1002, a first transmitting module 1004, a first receiving module 1006 and a second generating module 1008, which are explained below.
The first uploading module 1002 is configured to upload, by a front-end client, shooting data of a shooting object, where the shooting data includes at least one of the following: shooting pictures and videos; a first transmission module 1004, connected to the first upload module 1002, for transmitting the shooting data to the background server by the front-end client; a first receiving module 1006, connected to the first transmitting module 1004, configured to receive, by the front-end client, a filter model matched with the shooting data and returned by the background server, where the filter model is an antagonistic model determined from multiple types of filter models; and a second generating module 1008, connected to the first receiving module 1006, for performing filter processing on the shooting data by using the selected filter model at the front-end client, so as to generate a filter image.
It should be noted here that the first uploading module 1002, the first transmitting module 1004, the first receiving module 1006 and the second generating module 1008 correspond to steps S502 to S508 in embodiment 1, and the four modules are the same as the corresponding steps in the implementation example and application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
Example 3
The embodiment of the invention can provide a computer terminal which can be any computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute program codes of the following steps in the processing method of image data of an application program: acquiring shooting data of a shooting object, wherein the shooting data comprises at least one of the following data: shooting pictures and videos; analyzing the shooting data, and determining a filter model matched with the shooting data from a plurality of kinds of filter models, wherein the filter model is a countermeasure model; and performing filter processing on the shooting data by using the selected filter model to generate a filter image.
Alternatively, fig. 11 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 11, the computer terminal may include: one or more processors 1102 (only one of which is shown), memory 1104, and the like.
The memory may be used to store software programs and modules, such as program instructions/modules corresponding to the image data processing method and apparatus in the embodiments of the present invention, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, that is, implements the image data processing method described above. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: acquiring shooting data of a shooting object, wherein the shooting data comprises at least one of the following data: shooting pictures and videos; analyzing the shooting data, and determining a filter model matched with the shooting data from a plurality of kinds of filter models, wherein the filter model is a countermeasure model; and performing filter processing on the shooting data by using the selected filter model to generate a filter image.
Optionally, the processor may further execute the program code of the following steps: analyzing the shot data, and determining a filter model matched with the shot data from a plurality of kinds of filter models, wherein the filter model comprises the following steps: carrying out scene classification on the shooting data to obtain a scene type to which the shooting data belongs; based on the scene type of the shooting object in the shooting data, a filter model matched with the shooting data is called from a plurality of kinds of filter models.
Optionally, the processor may further execute the program code of the following steps: carrying out scene classification on the shooting data to acquire the scene category to which the shooting data belongs, and the method comprises the following steps: extracting image features in the shooting data, wherein the image features comprise at least one of the following: shooting the characteristics of an object and the characteristics of a background image; determining scene parameters of the shooting data based on the image characteristics, wherein the scene parameters are used for representing product categories to which the shooting objects recorded in the shooting data belong; and constructing a scene category to which the shooting data belongs based on the scene parameters of the shooting data.
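As an illustrative sketch only (the classifier itself is not specified here), the feature-to-category step above can be pictured as a nearest-centroid lookup over image feature vectors; the centroid values and product-category names are assumptions:

```python
import math

# Assumed per-category feature centroids (e.g. averaged image embeddings
# of shooting-object and background-image features).
CATEGORY_CENTROIDS = {
    "food":      [0.9, 0.1, 0.2],
    "clothing":  [0.2, 0.8, 0.3],
    "furniture": [0.1, 0.3, 0.9],
}

def classify_scene(image_features):
    """Return the product category whose centroid is nearest in feature space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CATEGORY_CENTROIDS,
               key=lambda c: dist(image_features, CATEGORY_CENTROIDS[c]))

print(classify_scene([0.85, 0.15, 0.25]))  # -> food
```

The scene parameters in the embodiment play the role of these feature vectors: they summarize the shooting object and background, and the scene category is constructed from whichever product category they fall closest to.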
Optionally, the processor may further execute the program code of the following steps: the network structure of each filter model comprises: the method comprises a U-Net model structure of global features and independent batch standardization layers corresponding to different scene types.
Optionally, the processor may further execute the program code of the following steps: performing filter processing on the shooting data by using the selected filter model, wherein the filter processing comprises the following steps: extracting global elements from the shooting data based on a U-Net model structure with global features in the filter model, wherein the global elements comprise at least one of the following components: light, composition, foreground and background; according to the global elements, transferring the pixel data distribution of the shooting data to the pixel data distribution of the scene type matched with the shooting data; based on the migration result, a filter image is generated.
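The distribution-migration step above can be illustrated by a simple mean–variance transfer (a classical stand-in for the idea, not necessarily the mechanism inside the patent's GAN): shift the source pixels so their statistics match the target scene category's pixel distribution:

```python
import math

def match_distribution(src_pixels, target_mean, target_std):
    """Shift and scale source pixel values so their mean and standard
    deviation match the target scene category's pixel distribution
    (a Reinhard-style statistics transfer)."""
    n = len(src_pixels)
    mean = sum(src_pixels) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in src_pixels) / n) or 1.0
    return [(p - mean) / std * target_std + target_mean for p in src_pixels]

migrated = match_distribution([10.0, 20.0, 30.0], target_mean=100.0, target_std=5.0)
# The migrated pixels now have mean ~100 and standard deviation ~5.
```

In the full system this would run per channel and be conditioned on the global elements (light, composition, foreground, background) extracted by the U-Net structure; the one-channel version is only a sketch.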
Optionally, the processor may further execute the program code of the following steps: after determining a filter model matching the shot data from among a plurality of kinds of filter models, the method includes: outputting a selection page on the interactive interface, wherein the selection page provides at least one filter option, and different filter options are used for representing filter models with different levels aiming at shooting data; and under the condition that any one filter option is triggered, obtaining a filter model matched with the filter option.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: displaying shooting data collected by the shooting equipment on an interactive interface, wherein the shooting data comprises at least one of the following data: shooting pictures and videos; if a filter instruction is detected in any one area of the interactive interface, triggering and analyzing a picture of the shot data, and determining a filter model matched with the shot data, wherein the filter model is a countermeasure model; and displaying a filter image on the interactive interface, wherein the filter image is an image generated by filtering the shooting data by using the selected filter model.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: displaying shooting data on the interactive interface, wherein the shooting data comprises at least one of the following data: shooting pictures and videos; sensing a filter instruction matched with the shooting data in the interactive interface; responding to a filter instruction, and determining a filter model matched with the shooting data, wherein the filter model is a countermeasure model; outputting a selection page on the interactive interface, wherein the selection page provides at least one filter option, and different filter options are used for representing filter models with different levels aiming at shooting data; and displaying a filter image on the interactive interface, wherein the filter image is an image obtained by carrying out filter processing on the shooting data based on the selected filter model.
Optionally, the processor may further execute the program code of the following steps: and if the picture quality of the shooting data is lower than that of the standard data, triggering a filter instruction matched with the shooting data.
Optionally, the processor may further execute the program code of the following steps: before displaying the filter image on the interactive interface, the method further comprises: receiving a selection instruction in an area where a corresponding filter option on a selection page is located; and responding to the selection instruction, triggering the corresponding filter option, and calling the filter model at the corresponding level.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: the front-end client uploads shooting data of a shooting object, wherein the shooting data comprises at least one of the following data: shooting pictures and videos; the front-end client transmits the shooting data to the background server; the method comprises the steps that a front-end client receives a filter model matched with shooting data returned by a background server, wherein the filter model is an antagonistic model determined from a plurality of filter models; and the front-end client uses the selected filter model to perform filter processing on the shot data to generate a filter image.
According to the embodiment of the invention, by acquiring shooting data of a shooting object, determining a filter model matched with the shooting data, and performing filter processing on the shooting data with the filter model, the purpose of generating a filter image processed by the filter model is achieved, the technical effect of improving the imaging effect of the shot image is realized, and the technical problems of high professional requirements and low processing efficiency when images are processed at the user end are solved.
It can be understood by those skilled in the art that the structure shown in fig. 11 is only illustrative, and the computer terminal may also be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD. Fig. 11 does not limit the structure of the above electronic device. For example, the computer terminal may include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 11, or have a different configuration from that shown in fig. 11.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 4
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may be configured to store a program code executed by the image data processing method provided in the first embodiment.
Optionally, in this embodiment, the storage medium may be located in any one of computer terminals in a computer terminal group in a computer network, or in any one of mobile terminals in a mobile terminal group.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: acquiring shooting data of a shooting object, wherein the shooting data comprises at least one of the following data: shooting pictures and videos; analyzing the shooting data, and determining a filter model matched with the shooting data from a plurality of kinds of filter models, wherein the filter model is a countermeasure model; and performing filter processing on the shooting data by using the selected filter model to generate a filter image.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: analyzing the shot data, and determining a filter model matched with the shot data from a plurality of kinds of filter models, wherein the filter model comprises the following steps: carrying out scene classification on the shooting data to obtain a scene type to which the shooting data belongs; based on the scene type of the shooting object in the shooting data, a filter model matched with the shooting data is called from a plurality of kinds of filter models.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: carrying out scene classification on the shooting data to acquire the scene category to which the shooting data belongs, and the method comprises the following steps: extracting image features in the shooting data, wherein the image features comprise at least one of the following: shooting the characteristics of an object and the characteristics of a background image; determining scene parameters of the shooting data based on the image characteristics, wherein the scene parameters are used for representing product categories to which the shooting objects recorded in the shooting data belong; and constructing a scene category to which the shooting data belongs based on the scene parameters of the shooting data.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the network structure of each filter model comprises: the method comprises a U-Net model structure of global features and independent batch standardization layers corresponding to different scene types.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: performing filter processing on the shooting data by using the selected filter model, wherein the filter processing comprises the following steps: extracting global elements from the shooting data based on a U-Net model structure with global features in the filter model, wherein the global elements comprise at least one of the following components: light, composition, foreground and background; according to the global elements, transferring the pixel data distribution of the shooting data to the pixel data distribution of the scene type matched with the shooting data; based on the migration result, a filter image is generated.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: after determining a filter model matching the shot data from among a plurality of kinds of filter models, the method includes: outputting a selection page on the interactive interface, wherein the selection page provides at least one filter option, and different filter options are used for representing filter models with different levels aiming at shooting data; and under the condition that any one filter option is triggered, obtaining a filter model matched with the filter option.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: displaying shooting data collected by the shooting equipment on an interactive interface, wherein the shooting data comprises at least one of the following data: shooting pictures and videos; if a filter instruction is detected in any one area of the interactive interface, triggering and analyzing a picture of the shot data, and determining a filter model matched with the shot data, wherein the filter model is a countermeasure model; and displaying a filter image on the interactive interface, wherein the filter image is an image generated by filtering the shooting data by using the selected filter model.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: displaying shooting data on the interactive interface, wherein the shooting data includes at least one of the following: shot pictures and videos; sensing, in the interactive interface, a filter instruction matched with the shooting data; responding to the filter instruction and determining a filter model matched with the shooting data, wherein the filter model is an adversarial model; outputting a selection page on the interactive interface, wherein the selection page provides at least one filter option, and different filter options represent filter models of different levels for the shooting data; and displaying a filter image on the interactive interface, wherein the filter image is an image obtained by performing filter processing on the shooting data based on the selected filter model.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following step: if the picture quality of the shooting data is lower than that of standard data, triggering a filter instruction matched with the shooting data.
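One way to realize such a quality comparison is sketched below. The sharpness proxy (variance of local pixel differences) and the notion of "standard data" as a reference frame are assumptions for illustration; the embodiment does not specify a quality metric:

```python
import numpy as np

def sharpness_score(img: np.ndarray) -> float:
    """Crude picture-quality proxy: variance of horizontal and vertical
    pixel differences (higher = sharper / more detailed)."""
    gray = img.mean(axis=-1) if img.ndim == 3 else img.astype(float)
    return float(np.diff(gray, axis=0).var() + np.diff(gray, axis=1).var())

def should_trigger_filter(shot: np.ndarray, standard: np.ndarray) -> bool:
    """Trigger the filter instruction when the shot scores below the standard."""
    return sharpness_score(shot) < sharpness_score(standard)

flat = np.full((8, 8), 128.0)                           # featureless low-quality frame
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0  # high-contrast reference
triggered = should_trigger_filter(flat, checker)
```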
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: before displaying the filter image on the interactive interface, receiving a selection instruction in the area of the corresponding filter option on the selection page; and, in response to the selection instruction, triggering the corresponding filter option and invoking the filter model of the corresponding level.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the front-end client uploads shooting data of a shooting object, wherein the shooting data includes at least one of the following: shot pictures and videos; the front-end client transmits the shooting data to the background server; the front-end client receives, from the background server, a filter model matched with the shooting data, wherein the filter model is an adversarial model determined from a plurality of kinds of filter models; and the front-end client performs filter processing on the shooting data using the selected filter model to generate a filter image.
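The client/server exchange above can be sketched with an in-process stub. The JSON wire format, field names, and model ids are hypothetical; the embodiment specifies only that the client uploads shooting data and receives the matched adversarial model back:

```python
import json

class BackendStub:
    """Stand-in for the background server that determines the matched
    filter model from the uploaded shooting data's metadata."""
    def handle_upload(self, payload: str) -> str:
        shot = json.loads(payload)
        scene = shot.get("scene", "generic")  # hypothetical metadata field
        return json.dumps({"model": f"gan_filter_{scene}"})

class FrontendClient:
    def __init__(self, backend: BackendStub):
        self.backend = backend

    def request_filter_model(self, shot_meta: dict) -> str:
        """Upload shot metadata and return the matched model id."""
        reply = self.backend.handle_upload(json.dumps(shot_meta))
        return json.loads(reply)["model"]

model_id = FrontendClient(BackendStub()).request_filter_model({"scene": "clothing"})
```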
The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the described division of units is merely a division by logical function, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also fall within the protection scope of the present invention.

Claims (17)

1. A method of processing image data, comprising:
acquiring shooting data of a shooting object, wherein the shooting data comprises at least one of the following data: shooting pictures and videos;
analyzing the shooting data, and determining, from a plurality of kinds of filter models, a filter model matched with the shooting data, wherein the filter model is an adversarial model;
and performing filter processing on the shooting data by using the selected filter model to generate a filter image.
2. The method of claim 1, wherein analyzing the captured data to determine a filter model matching the captured data from a plurality of classes of filter models comprises:
carrying out scene classification on the shooting data to acquire a scene type to which the shooting data belongs;
and calling a filter model matched with the shooting data from a plurality of kinds of filter models based on the scene type of the shooting object in the shooting data.
3. The method of claim 2, wherein the scene classification of the shooting data and the obtaining of the scene category to which the shooting data belongs comprises:
extracting image features in the shooting data, wherein the image features comprise at least one of the following: shooting the characteristics of an object and the characteristics of a background image;
determining scene parameters of the shooting data based on the image features, wherein the scene parameters are used for representing product categories to which shooting objects recorded in the shooting data belong;
and constructing a scene category to which the shooting data belongs based on the scene parameters of the shooting data.
4. The method according to any one of claims 1 to 3, wherein the network structure of each filter model comprises: a U-Net model structure with global features, and independent batch normalization layers corresponding to different scene categories.
5. The method of claim 4, wherein filter processing the captured data using the selected filter model comprises:
extracting a global element from the shooting data based on a U-Net model structure with global features in the filter model, wherein the global element comprises at least one of the following: lighting, composition, foreground, and background;
according to the global element, transferring the pixel data distribution of the shooting data to the pixel data distribution of the scene type matched with the shooting data;
generating the filter image based on the migration result.
6. The method according to claim 1, after determining a filter model matching the shooting data from a plurality of kinds of filter models, comprising:
outputting a selection page on an interactive interface, wherein the selection page provides at least one filter option, and different filter options are used for representing filter models with different levels for the shooting data;
and under the condition that any one filter option is triggered, obtaining a filter model matched with the filter option.
7. A method of processing image data, comprising:
displaying shooting data collected by shooting equipment on an interactive interface, wherein the shooting data comprises at least one of the following data: shooting pictures and videos;
if a filter instruction is detected in any area of the interactive interface, triggering analysis of the picture of the shooting data, and determining a filter model matched with the shooting data, wherein the filter model is an adversarial model;
and displaying a filter image on the interactive interface, wherein the filter image is an image generated by performing filter processing on the shooting data by using the selected filter model.
8. A method of processing image data, comprising:
displaying shooting data on an interactive interface, wherein the shooting data comprises at least one of the following: shooting pictures and videos;
a filter instruction matched with the shooting data is sensed in the interactive interface;
responding to the filter instruction, and determining a filter model matched with the shooting data, wherein the filter model is an adversarial model;
outputting a selection page on the interactive interface, wherein the selection page provides at least one filter option, and different filter options are used for representing filter models with different levels for the shooting data;
and displaying a filter image on the interactive interface, wherein the filter image is an image obtained by carrying out filter processing on the shooting data based on the selected filter model.
9. The method according to claim 8, wherein, if the picture quality of the shooting data is lower than that of standard data, a filter instruction matched with the shooting data is triggered.
10. The method of claim 8, wherein prior to displaying the filter image on the interactive interface, the method further comprises:
receiving a selection instruction in an area where a corresponding filter option on the selection page is located;
and responding to the selection instruction, triggering the corresponding filter option, and calling the filter model at the corresponding level.
11. A method of processing image data, comprising:
the front-end client uploads shooting data of a shooting object, wherein the shooting data comprises at least one of the following data: shooting pictures and videos;
the front-end client transmits the shooting data to a background server;
the front-end client receives a filter model matched with the shooting data returned by the background server, wherein the filter model is an adversarial model determined from a plurality of kinds of filter models;
and the front-end client uses the selected filter model to carry out filter processing on the shooting data to generate a filter image.
12. An apparatus for processing image data, comprising:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring shooting data of a shooting object, and the shooting data comprises at least one of the following data: shooting pictures and videos;
the first determining module is used for analyzing the shooting data and determining, from a plurality of kinds of filter models, a filter model matched with the shooting data, wherein the filter model is an adversarial model;
and the first generation module is used for carrying out filter processing on the shooting data by using the selected filter model to generate a filter image.
13. An apparatus for processing image data, comprising:
the first display module is used for displaying shooting data collected by shooting equipment on an interactive interface, wherein the shooting data comprises at least one of the following data: shooting pictures and videos;
the second determining module is used for triggering analysis of the picture of the shooting data and determining a filter model matched with the shooting data when a filter instruction is detected in any area of the interactive interface, wherein the filter model is an adversarial model;
and the second display module is used for displaying a filter image on the interactive interface, wherein the filter image is an image generated by performing filter processing on the shooting data by using the selected filter model.
14. An apparatus for processing image data, comprising:
the third display module is used for displaying shooting data on the interactive interface, wherein the shooting data comprises at least one of the following data: shooting pictures and videos;
the first sensing module is used for sensing a filter instruction matched with the shooting data in the interactive interface;
the third determining module is used for responding to the filter instruction and determining a filter model matched with the shooting data, wherein the filter model is an adversarial model;
the first output module is used for outputting a selection page on the interactive interface, wherein the selection page provides at least one filter option, and different filter options are used for representing filter models with different levels for the shooting data;
and the fourth display module is used for displaying a filter image on the interactive interface, wherein the filter image is an image obtained by filtering the shooting data based on the selected filter model.
15. An apparatus for processing image data, comprising:
the first uploading module is used for uploading shooting data of a shooting object by a front-end client, wherein the shooting data comprises at least one of the following data: shooting pictures and videos;
the first transmission module is used for transmitting the shooting data to a background server by the front-end client;
the first receiving module is used for the front-end client to receive a filter model which is returned by the background server and matched with the shooting data, wherein the filter model is an adversarial model determined from a plurality of kinds of filter models;
and the second generation module is used for performing filter processing on the shooting data by using the selected filter model by the front-end client to generate a filter image.
16. A storage medium, comprising a stored program, wherein, when the program runs, a device on which the storage medium is located is controlled to execute the image data processing method according to any one of claims 1 to 11.
17. A computer device, comprising: a memory and a processor, wherein
the memory stores a computer program; and
the processor is configured to execute the computer program stored in the memory, and the computer program, when executed, causes the processor to perform the image data processing method according to any one of claims 1 to 11.
CN202011003453.1A 2020-09-22 2020-09-22 Image data processing method and device, storage medium and computer equipment Pending CN114257730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011003453.1A CN114257730A (en) 2020-09-22 2020-09-22 Image data processing method and device, storage medium and computer equipment

Publications (1)

Publication Number Publication Date
CN114257730A (en) 2022-03-29

Family

ID=80788430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011003453.1A Pending CN114257730A (en) 2020-09-22 2020-09-22 Image data processing method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN114257730A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105323456A (en) * 2014-12-16 2016-02-10 维沃移动通信有限公司 Image previewing method for photographing device and image photographing device
CN109068056A (en) * 2018-08-17 2018-12-21 Oppo广东移动通信有限公司 A kind of electronic equipment and its filter processing method of shooting image, storage medium
CN109191403A (en) * 2018-09-07 2019-01-11 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN109379572A (en) * 2018-12-04 2019-02-22 北京达佳互联信息技术有限公司 Image conversion method, device, electronic equipment and storage medium
EP3709209A1 (en) * 2019-03-15 2020-09-16 Koninklijke Philips N.V. Device, system, method and computer program for estimating pose of a subject

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曹仰杰 (Cao Yangjie): "A survey of generative adversarial networks and their computer vision applications", Journal of Image and Graphics (《中国图象图形学报》), 16 October 2018 (2018-10-16) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115623323A (en) * 2022-11-07 2023-01-17 荣耀终端有限公司 Shooting method and electronic equipment
CN116681788A (en) * 2023-06-02 2023-09-01 萱闱(北京)生物科技有限公司 Image electronic dyeing method, device, medium and computing equipment
CN116681788B (en) * 2023-06-02 2024-04-02 萱闱(北京)生物科技有限公司 Image electronic dyeing method, device, medium and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230829

Address after: Room 516, floor 5, building 3, No. 969, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba Damo Academy (Hangzhou) Technology Co., Ltd.

Address before: Fourth Floor, One Capital Place, P.O. Box 847, Grand Cayman, Cayman Islands

Applicant before: ALIBABA GROUP HOLDING Ltd.