WO2021112350A1 - Method and electronic device for modifying a candidate image using a reference image - Google Patents
- Publication number
- WO2021112350A1 (PCT/KR2020/006445)
- Authority
- WO
- WIPO (PCT)
Classifications
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
- G06N3/045—Combinations of networks
- G06N3/047—Probabilistic or stochastic networks
- G06N3/088—Non-supervised learning, e.g. competitive learning
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
- G06T5/77—Retouching; Inpainting; Scratch removal
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
- G06T2207/10024—Color image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- the embodiments herein relate to performing actions in an electronic device. More particularly, the embodiments relate to a method and electronic device for modifying a portion of a candidate image using a version of a reference image.
- users of electronic devices share content on various platforms, where the content includes multimedia data such as text information, images, and videos.
- the users of the electronic devices usually edit the images before sharing them on the various platforms, using various photo editing tools such as filters, effects, and overlays to enhance the content of the images aesthetically.
- the various photo editing tools generally modify a candidate image by transferring an image style of a reference image to the candidate image completely, as shown in FIG. 1.
- a user has no choice in determining the image style transformation of the candidate image, as the photo editing tool is applied to the entire candidate image.
- the photo editing tools do not provide the user an option to modify only a selected portion of the candidate image. Therefore, the existing methods do not provide the user the flexibility to determine the kind of modification to the candidate image that may be desired.
- the principal object of the embodiments herein is to provide a method for modifying a portion of a candidate image using a version of a reference image in an electronic device.
- Another object of the embodiments herein is to display a plurality of reference images corresponding to the candidate image.
- Another object of the embodiments herein is to generate a plurality of versions of at least one selected reference image, each of which comprises a variable visual parameter.
- Another object of the embodiments herein is to apply a first Generative Adversarial Network (GAN) model on the at least one reference image to generate a plurality of versions of the at least one reference image.
- Another object of the embodiments herein is to apply a second Generative Adversarial Network (GAN) model to modify at least one portion of the candidate image based on at least one version of the at least one reference image.
- an embodiment herein discloses a method for modifying a candidate image by an electronic device (100).
- the method comprises obtaining the candidate image; identifying at least one reference image selected from a plurality of reference images associated with the candidate image; generating a plurality of versions of the at least one reference image, wherein each version of the at least one reference image comprises a variable visual parameter; modifying at least one portion of the candidate image using at least one version of the plurality of versions; and displaying the modified candidate image.
- the plurality of reference images associated with the candidate image is identified by segmenting the candidate image into a plurality of segments; extracting features from each of the plurality segments of the candidate image; determining a pattern of the candidate image based on the extracted features; and identifying the plurality of reference images based on the pattern of the candidate image.
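The identification steps above (segment the candidate image, extract per-segment features, match a pattern) can be sketched in simplified form. In this hedged example, the per-segment features are reduced to a coarse color histogram and the stored reference images are ranked by histogram intersection; the function names, the histogram feature, and the similarity measure are illustrative assumptions, not the patented algorithm.

```python
# Hypothetical sketch of the reference-image lookup: extract a coarse
# color-histogram feature from the candidate image and rank stored
# reference images by feature similarity.

def color_histogram(pixels, bins=4):
    """Coarse RGB histogram as a flat feature vector (bins**3 entries)."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[idx] += 1
    total = float(len(pixels)) or 1.0
    return [h / total for h in hist]

def similarity(f1, f2):
    """Histogram intersection: 1.0 for identical distributions."""
    return sum(min(a, b) for a, b in zip(f1, f2))

def rank_references(candidate_pixels, reference_library):
    """Return reference-image names sorted by similarity to the candidate."""
    cand = color_histogram(candidate_pixels)
    scored = [(similarity(cand, color_histogram(px)), name)
              for name, px in reference_library.items()]
    return [name for _, name in sorted(scored, reverse=True)]

# Tiny synthetic example: a mostly-blue candidate should rank the
# blue reference above the red one.
blue = [(10, 20, 200)] * 8
red = [(220, 30, 10)] * 8
ranking = rank_references(blue, {"sky_ref": blue, "sunset_ref": red})
```

In practice the features would come from the segmentation engine rather than raw pixels, but the ranking step would follow the same shape.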
- the method for generating the plurality of versions of the at least one selected reference image comprises providing the at least one reference image to a first Generative Adversarial Network (GAN) model; and generating the plurality of versions of the at least one selected reference image using the first GAN model (166).
- the variable visual parameter is at least one of a color, a light, an intensity, and a gradient.
- the method for modifying the candidate image using at least one version of the plurality of versions comprises providing the at least one version of the plurality of versions of the at least one reference image to a second GAN model (168); identifying the at least one portion of the candidate image to be modified; and applying the second GAN model (168) to the at least one portion of the candidate image based on the at least one version of the plurality of versions.
- an embodiment herein discloses a method for modifying a candidate image by an electronic device (100).
- the method comprises obtaining at least one reference image associated with the candidate image; applying a first Generative Adversarial Network (GAN) model on the at least one reference image to generate a plurality of versions of the at least one reference image; applying a second Generative Adversarial Network (GAN) model to modify at least one portion of the candidate image based on at least one version of the at least one reference image; and storing the modified candidate image.
- an embodiment herein discloses an electronic device (100) for modifying a candidate image.
- the electronic device (100) comprises a memory (120); and at least one processor (160) coupled to the memory (120).
- the at least one processor (160) is configured to: obtain the candidate image; identify at least one reference image selected from a plurality of reference images associated with the candidate image; generate a plurality of versions of the at least one reference image, wherein each version of the at least one reference image comprises a variable visual parameter; modify at least one portion of the candidate image using at least one version of the plurality of versions; and display the modified candidate image.
- an embodiment herein discloses an electronic device (100) for modifying a candidate image.
- the electronic device (100) comprises a memory (120); and at least one processor (160) coupled to the memory (120).
- the at least one processor (160) is configured to: obtain at least one reference image associated with the candidate image; apply a first Generative Adversarial Network (GAN) model on the at least one reference image to generate a plurality of versions of the at least one reference image; apply a second Generative Adversarial Network (GAN) model to modify at least one portion of the candidate image based on at least one version of the at least one reference image; and display the modified candidate image.
- FIG. 1 is an example illustrating a method for modifying a candidate image using a reference image in an electronic device, according to the prior art;
- FIG. 2 is a block diagram of the electronic device for modifying the candidate image using the reference image, according to an embodiment as disclosed herein;
- FIG. 3A is a flow chart illustrating a method for modifying the candidate image using the reference image in the electronic device, according to an embodiment as disclosed herein;
- FIG. 3B is a flow chart illustrating a method for modifying the candidate image using the reference image in the electronic device, according to another embodiment as disclosed herein;
- FIG. 4 is an example illustrating a method for performing an image segmentation of the candidate image by a candidate image processing engine of the electronic device, according to an embodiment as disclosed herein;
- FIG. 5 is an example illustrating a plurality of reference images corresponding to the candidate image recommended by the electronic device, according to an embodiment as disclosed herein;
- FIG. 6A is an example illustrating a color graph used to generate a plurality of versions of the reference image selected by the user in the electronic device, according to an embodiment as disclosed herein;
- FIG. 6B is an example illustrating the plurality of versions of the reference image generated by the electronic device, according to an embodiment as disclosed herein;
- FIG. 6C is an example illustrating a color bar used to generate the plurality of versions of the reference image selected by the user in the electronic device, according to an embodiment as disclosed herein;
- FIG. 7 is an example illustrating the plurality of versions of the reference image generated by the electronic device based on different perspective angles, according to an embodiment as disclosed herein;
- FIG. 8 is an example illustrating a color and texture extracting technique performed by the electronic device, according to an embodiment as disclosed herein;
- FIG. 9A is an example illustrating an overview of the method for modifying the candidate image based on the plurality of versions of the reference image by the electronic device, according to an embodiment as disclosed herein;
- FIG. 9B illustrates an example of a model architecture of a convolution stage of the encoder-decoder of a second GAN model, according to an embodiment as disclosed herein;
- FIG. 9C illustrates an example of a style transfer pipeline architecture of modifying the candidate image using at least one of the versions of the reference image by the electronic device, according to an embodiment as disclosed herein;
- FIG. 10A is an example illustrating a modification of at least one portion of the candidate image using at least one reference image by the electronic device, according to an embodiment as disclosed herein;
- FIG. 10B is another example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device, according to an embodiment as disclosed herein;
- FIG. 10C is another example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device, according to an embodiment as disclosed herein;
- FIG. 10D is another example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device, according to an embodiment as disclosed herein.
- circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like.
- circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block.
- Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the disclosure.
- the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the disclosure.
- the embodiments herein provide a method for modifying a candidate image using an electronic device (100).
- the method includes receiving, by the electronic device (100), the candidate image and displaying, by the electronic device (100), a plurality of reference images corresponding to the candidate image. Further, the method includes detecting, by the electronic device (100), at least one reference image selected from the plurality of reference images and generating, by the electronic device (100), a plurality of versions of the at least one selected reference image, where each version of the at least one selected reference image comprises a variable visual parameter. Furthermore, the method includes modifying, by the electronic device (100), at least one portion of the candidate image using at least one version of the plurality of versions of the at least one selected reference image; and storing, by the electronic device (100), the modified candidate image.
- In the conventional methods, the image style of the reference image is superimposed over the candidate image completely, and the user is not provided any flexibility to select only a portion of the candidate image for modification.
- the electronic device (100) allows the user to select a portion of the candidate image for modification based on the selected reference image.
- In the electronic device (100), a plurality of versions of the reference image is generated, and the user is allowed to select at least one version of the plurality of versions for modifying the at least one portion of the candidate image.
- Referring now to the drawings, and more particularly to FIGS. 2 through 10D, where similar reference characters denote corresponding features consistently throughout the figures, preferred embodiments are shown.
- FIG. 2 is a block diagram of the electronic device (100) for modifying a portion of a candidate image using a version of a reference image, according to an embodiment as disclosed herein.
- the electronic device (100) can be, for example, a mobile phone, a smart phone, Personal Digital Assistant (PDA), a tablet, a wearable device, or the like.
- the electronic device (100) includes a memory (120), a display (140) and a processor (160).
- the memory (120) can include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
- the memory (120) may, in some examples, be considered a non-transitory storage medium.
- the term "non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted that the memory (120) is non-movable.
- In certain examples, the memory (120) can be configured to store larger amounts of information.
- a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
- the display (140) is configured to display a candidate image and allow the user to select at least one portion of the candidate image to be modified by the electronic device (100).
- the display (140) is also configured to display a plurality of reference images corresponding to the candidate image.
- the display (140) is also configured to display a plurality of versions of the modified candidate image on the electronic device (100).
- the processor (160) includes a candidate image processing engine (162), a reference image generation engine (164), a first generative adversarial network (GAN) model (166) and a second generative adversarial network (GAN) model (168).
- the candidate image processing engine (162) is configured to receive the candidate image and perform image segmentation of the candidate image based on at least one of colors and light intensities.
- the image segmentation of the candidate image is performed to determine a number of segments of colors which are present in the candidate image and extract features from each of the segments of the candidate image.
- the segments in the candidate image may include sky, water, green patches, buildings, etc.
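As a rough illustration of segmentation based on colors and light intensities, the sketch below quantizes each pixel into a coarse color bucket and reports the distinct buckets as segments. This is an assumption-laden stand-in for the engine described above, not its actual implementation.

```python
# Crude color-based segmentation: quantize each pixel into a coarse
# (r, g, b) bucket; distinct buckets stand in for segments such as
# "sky" or "grass". The bucket count is an illustrative choice.

def segment_by_color(image, levels=4):
    """Map each pixel to a quantized (r, g, b) bucket; return the
    per-pixel label grid and the set of distinct buckets found."""
    step = 256 // levels
    labels = [[(r // step, g // step, b // step) for (r, g, b) in row]
              for row in image]
    segments = {bucket for row in labels for bucket in row}
    return labels, segments

# A toy 2x2 image: blue "sky" pixels on top, green "grass" below.
image = [
    [(30, 120, 230), (40, 125, 235)],
    [(40, 180, 50), (50, 190, 60)],
]
labels, segments = segment_by_color(image)
```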
- the candidate image processing engine (162) includes a convolutional neural network (CNN) which performs the image segmentation.
- the reference image generation engine (164) is configured to receive the segmented candidate image from the candidate image processing engine (162). Further, the reference image generation engine (164) is configured to determine a pattern of the candidate image based on the extracted features of each of the segments of the candidate image and generate the plurality of reference images from the electronic device (100) based on the pattern of the candidate image.
- the first GAN model (166) is configured to receive the at least one reference image selected by the user from the plurality of reference images provided by the reference image generation engine (164).
- the first GAN model (166) performs the image segmentation on the at least one reference image selected by the user and determines a plurality of colors in the at least one reference image. Further, a color graph is applied on the selected reference image to generate a plurality of versions of the reference image selected by the user in the electronic device (100).
- Each version of the at least one selected reference image comprises a variable visual parameter such as, for example, a variation in color intensity and light.
- the first GAN model (166) is configured to generate the plurality of versions of the reference image based on the color graph by mapping the functions of the selected reference image and the color graph (as described in FIGS. 6A-6C).
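The color-graph idea can be approximated, without a GAN, by re-tinting the reference image with per-channel gains that loosely mimic different color temperatures. The gain table below is an illustrative assumption rather than a calibrated Kelvin-to-RGB mapping.

```python
# Simplified, non-GAN stand-in for color-graph version generation:
# one re-tinted copy of the reference image per tint entry.
# The gains are made up for illustration.

TINTS = {
    "candle_2000K": (1.25, 1.00, 0.70),    # warm: boost red, cut blue
    "tungsten_3200K": (1.15, 1.00, 0.85),
    "daylight_5500K": (1.00, 1.00, 1.00),  # neutral
    "shade_7500K": (0.85, 1.00, 1.15),     # cool: cut red, boost blue
}

def apply_tint(pixels, gains):
    """Scale each RGB channel by its gain, clamped to [0, 255]."""
    gr, gg, gb = gains
    clamp = lambda v: max(0, min(255, int(round(v))))
    return [(clamp(r * gr), clamp(g * gg), clamp(b * gb))
            for (r, g, b) in pixels]

def generate_versions(pixels):
    """One tinted version of the reference image per tint entry."""
    return {name: apply_tint(pixels, g) for name, g in TINTS.items()}

reference = [(100, 100, 100), (200, 150, 50)]
versions = generate_versions(reference)
```

A learned model could replace the fixed gain table, but the interface (one reference image in, several tinted versions out) stays the same.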
- the first GAN model (166) is also configured to generate a plurality of versions of the reference image based on different perspective angles i.e., by rotating the perspective angle of a region of interest/object in the reference image to generate the plurality of versions of the reference image (as described in FIG. 7).
- the second GAN model (168) is configured to determine the at least one portion of the candidate image selected by the user for modification. Further, the second GAN model (168) is configured to modify the at least one portion of the candidate image based on at least one version of the at least one reference image. Therefore, the proposed method provides multiple versions of the modified candidate image based on the multiple versions of the reference image.
- FIG. 2 shows the hardware elements of the electronic device (100), but it is to be understood that other embodiments are not limited thereto.
- the electronic device (100) may include fewer or more elements.
- the labels or names of the elements are used only for illustrative purposes and do not limit the scope of the invention.
- One or more components can be combined together to perform same or substantially similar function.
- FIG. 3A is a flow chart illustrating a method for modifying the candidate image using the reference image in the electronic device (100), according to an embodiment as disclosed herein.
- the electronic device (100) obtains the candidate image.
- the at least one processor (160) can be configured to obtain the candidate image.
- the candidate image may be obtained by capturing an image using a capturing device of the electronic device, or receiving the image from an external electronic device.
- the electronic device (100) identifies at least one reference image selected from a plurality of reference images.
- the at least one processor (160) can be configured to display a plurality of reference images associated with the candidate image, and identify at least one reference image selected from the plurality of reference images.
- the electronic device (100) generates a plurality of versions of the at least one reference image.
- the at least one processor (160) can be configured to generate the plurality of versions of the at least one reference image.
- Each version of the at least one reference image comprises a variable visual parameter, such as a color, a light, an intensity, and a gradient.
- the electronic device (100) modifies at least one portion of the candidate image using at least one version of the plurality of versions.
- the at least one processor (160) can be configured to modify at least one portion of the candidate image using the at least one version of the at least one reference image.
- the electronic device (100) displays the modified candidate image.
- the at least one processor (160) can be configured to display the modified candidate image.
- the memory (120) can be configured to store the modified candidate image.
- FIG. 3B is a flow chart illustrating a method for modifying the candidate image using the reference image in the electronic device (100), according to another embodiment as disclosed herein.
- the electronic device (100) obtains the at least one reference image associated with the candidate image.
- the at least one processor (160) can be configured to obtain the at least one reference image associated with the candidate image.
- the electronic device (100) applies the first generative adversarial network (GAN) model on the at least one reference image to generate a plurality of versions of the at least one reference image.
- the at least one processor (160) can be configured to apply the first generative adversarial network (GAN) model on the at least one reference image to generate a plurality of versions of the at least one reference image.
- the electronic device (100) applies the second generative adversarial network (GAN) model to modify at least one portion of the candidate image based on at least one version of the at least one reference image.
- the at least one processor (160) can be configured to apply the second generative adversarial network (GAN) model to modify the at least one portion of the candidate image based on at least one version of the at least one reference image.
- the electronic device (100) stores the modified candidate image.
- the at least one processor (160) can be configured to display the modified candidate image.
- the memory (120) can be configured to store the modified candidate image.
- FIG. 4 is an example illustrating the method for performing the image segmentation of the candidate image by the candidate image processing engine (162) of the electronic device (100), according to an embodiment as disclosed herein.
- the image segmentation is performed by the candidate image processing engine (162) using the convolutional neural network (CNN), which receives the candidate image as the input and provides a segmented version of the candidate image as the output.
- the CNN is part of the candidate image processing engine (162) which performs the image segmentation of the candidate image.
- the CNN performs the image segmentation based on a plurality of classes using which the CNN has been trained. Therefore, the segmented candidate image will have segments corresponding to the plurality of classes of the CNN.
- the plurality of classes may include for example sky, clouds, human beings, waterfall, mountain, grass-patches, rivers, buildings, etc.
- the segments are the individual components and classes based on which the candidate image is divided.
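The class-based output described above can be illustrated by the final labeling step of such a network: given per-class score maps (one score per class per pixel, as a CNN segmentation head would produce), each pixel is assigned its highest-scoring class. The class list and scores below are invented for illustration.

```python
# Final step of a class-based segmentation: per-pixel argmax over
# per-class scores. A trained CNN would produce the score maps;
# here they are hard-coded toy values.

CLASSES = ["sky", "grass", "water", "building"]

def label_pixels(score_maps):
    """score_maps[y][x] is a list of per-class scores; return the
    per-pixel class-name grid (argmax over classes)."""
    return [[CLASSES[max(range(len(CLASSES)), key=scores.__getitem__)]
             for scores in row]
            for row in score_maps]

# 1x2 toy image: the first pixel scores highest for "sky",
# the second for "grass".
scores = [[[2.0, 0.1, 0.3, 0.2], [0.2, 1.5, 0.4, 0.1]]]
labels = label_pixels(scores)
```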
- the proposed method for modifying the candidate image using the reference image in the electronic device (100) includes performing image segmentation of both the candidate image and the reference image.
- FIG. 4 illustrates the image segmentation of the candidate image; the same procedure is applicable to the reference image as well.
- FIG. 5 is an example illustrating the plurality of reference images corresponding to the candidate image recommended by the electronic device (100), according to an embodiment as disclosed herein.
- the electronic device (100) generates the segmented version of the candidate image as described in the FIG. 4.
- the electronic device (100) determines the plurality of reference images which are stored in the electronic device (100) which have a similar content as the segmented candidate image based on the clustering of the content into the plurality of classes. Further, the electronic device (100) recommends and displays the plurality of reference images based on the clustering of the content of the segmented candidate image. The user is allowed to select at least one reference image of the plurality of reference images to be used for modifying the candidate image.
- FIG. 6A is an example illustrating a color graph used to generate the plurality of versions of the reference image selected by the user in the electronic device (100), according to an embodiment as disclosed herein.
- the color graph comprises a plurality of colors arranged by color temperature in Kelvin.
- the color graph is applied on the selected reference image to generate the plurality of versions of the reference image selected by the user in the electronic device (100).
- the plurality of versions represents variations in color intensity and light.
- the first GAN model (166) is used to generate the plurality of versions of the reference image based on the color graph by mapping the functions of the selected reference image and the color graph.
- FIG. 6B is an example illustrating the plurality of versions of the reference image generated by the electronic device (100), according to an embodiment as disclosed herein.
- the first GAN model (166) is provided with the reference image selected by the user as the input.
- the first GAN model (166) is configured to generate the plurality of versions of the reference image based on the intensity of the color, i.e., color differences determined using the color graph, as shown in FIG. 6B. Therefore, a first version of the reference image may have a highlight of yellow color, a second version of the reference image may have a highlight of blue color, a third version of the reference image may have a highlight of orange color, etc., based on the various intensities of the color graph.
- FIG. 6C is an example illustrating a color bar used to generate the plurality of versions of the reference image selected by the user in the electronic device (100), according to an embodiment as disclosed herein.
- the first GAN model (166) comprises a plurality of layers for detecting the light colors and the intensities on the basis of the segmentation of the content in the reference image.
- the first GAN model (166) generates five different versions of the reference image selected by the user which may be used for modifying the candidate image.
- the plurality of versions of the selected reference image includes an outdoor shade, an evening sun, tungsten, sunrise/sunset and candle flame. Unlike the conventional methods and systems, in the proposed method each of the plurality of versions of the selected reference image has different hues of color intensities and light components, which provides an enhanced number of options to the user for modifying the candidate image.
- FIG. 7 is an example illustrating the plurality of versions of the reference image generated by the electronic device (100) based on different perspective angles, according to an embodiment as disclosed herein.
- the proposed method includes the generation of the plurality of versions of the reference image based on the different perspective angles such as, for example, θ1, θ2, θ3, etc.
- the electronic device (100) rotates the perspective angle of the region of interest in the reference image to generate the plurality of versions of the reference image.
- the proposed method applies eight transformations to the region/object of interest to obtain the plurality of versions of the reference image.
- the number of transformations required to be applied to the reference image depends on the region/object of interest, as the number of transformations can be increased up to a point where the region/object of interest starts to distort.
- the plurality of versions of the reference image generated by varying the perspective angle of the region of interest may be used by the user to modify the candidate image. Further, the plurality of versions of the reference image is also stored in the electronic device (100) which may be used by the user.
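A minimal sketch of generating angle-based versions is shown below; a plain in-plane rotation of the image stands in for the full perspective warp of the region of interest, which the patent does not specify in detail.

```python
import numpy as np

# Hedged sketch: produce versions of a reference image by rotating it
# through several angles (theta_1, theta_2, ...). A nearest-neighbor
# in-plane rotation is a simplifying stand-in for the perspective warp.

def rotate_roi(img, angle_deg):
    """Rotate the image about its center (nearest-neighbor sampling)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(angle_deg)
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find the source pixel.
    sy = cy + (ys - cy) * np.cos(a) - (xs - cx) * np.sin(a)
    sx = cx + (ys - cy) * np.sin(a) + (xs - cx) * np.cos(a)
    sy, sx = np.rint(sy).astype(int), np.rint(sx).astype(int)
    ok = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out[ys[ok], xs[ok]] = img[sy[ok], sx[ok]]
    return out

roi = np.arange(25, dtype=float).reshape(5, 5)
angles = [0, 45, 90, 135, 180, 225, 270, 315]   # eight transformations
versions = [rotate_roi(roi, a) for a in angles]
```

As the text notes, the number of usable angles is bounded by the point at which the region of interest starts to distort.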
- a total of 5 versions of the reference image are generated based on the variations in the color intensities and 8 versions of the reference image are generated based on the variations in the perspective angles. Therefore, a total of 40 versions of the reference image are generated using the first GAN model (166) by the electronic device (100).
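The count of 40 follows from pairing every color-intensity version with every perspective-angle version; a trivial sketch (the labels are illustrative placeholders):

```python
from itertools import product

# Each of the 5 color-intensity versions is paired with each of the 8
# perspective-angle versions, yielding 5 x 8 = 40 combined versions.

color_versions = ["outdoor shade", "evening sun", "tungsten",
                  "sunrise/sunset", "candle flame"]           # 5 versions
angle_versions = [f"theta_{i}" for i in range(1, 9)]          # 8 versions

all_versions = list(product(color_versions, angle_versions))  # 40 pairs
```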
- FIG. 8 is an example illustrating a color and texture extracting technique performed by the electronic device (100), according to an embodiment as disclosed herein.
- one of the candidate image and the reference image selected by the user is received by the electronic device (100). Further, the electronic device (100) extracts the color and texture for one of the candidate image and the reference image selected by the user.
- the color and texture are extracted by using multiple levels of texture filters such as a level-level filter, an edge-edge filter, a ripple-ripple filter, a spot-spot filter, etc.
- the filtered images are obtained and at step 4, texture-energy maps are formed for the filtered images.
- the normalized maps are formed for the images and at step 6, the features of the images such as the gradient, the color and the texture of the image are obtained.
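The filtering steps above can be sketched with Laws-style texture-energy kernels, reading the "level-level", "edge-edge", "spot-spot" and "ripple-ripple" filters as outer products of Laws' 1-D kernels with themselves; the kernel values and the "valid" convolution are assumptions, since the patent does not give them.

```python
import numpy as np

# Hedged sketch of multi-level texture filtering, modeled on Laws'
# texture-energy measures (level, edge, spot, ripple kernels).

L5 = np.array([1, 4, 6, 4, 1], dtype=float)      # level
E5 = np.array([-1, -2, 0, 2, 1], dtype=float)    # edge
S5 = np.array([-1, 0, 2, 0, -1], dtype=float)    # spot
R5 = np.array([1, -4, 6, -4, 1], dtype=float)    # ripple

def convolve2d_valid(img, kernel):
    """Plain 'valid' 2-D convolution (no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def texture_energy_maps(img):
    """Filter with each 2-D kernel, then form absolute-energy maps."""
    kernels = {name: np.outer(k, k) for name, k in
               [("level-level", L5), ("edge-edge", E5),
                ("spot-spot", S5), ("ripple-ripple", R5)]}
    return {name: np.abs(convolve2d_valid(img, ker))
            for name, ker in kernels.items()}

img = np.random.default_rng(1).random((10, 10))
maps = texture_energy_maps(img)
```

A flat (textureless) image yields zero energy under the edge kernel, which is what makes these maps usable as texture features.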
- the code for feature and texture extraction is as below:
- top_val = row[ result.top() ];
- max_area = max(area, max_area);
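The `result.top()` and `max(area, max_area)` fragments above match the classic stack-based largest-rectangle-in-histogram routine; a hedged, runnable Python rendering follows. Its exact role in the feature and texture extraction (e.g., locating a dominant uniform region in a filtered map) is an assumption.

```python
# Hedged reconstruction of the stack-based largest-rectangle-in-
# histogram routine suggested by the code fragments above.

def max_hist_area(row):
    """Largest rectangle area under the histogram `row`."""
    result = []      # stack of indices with non-decreasing heights
    max_area = 0
    i = 0
    while i < len(row):
        if not result or row[result[-1]] <= row[i]:
            result.append(i)
            i += 1
        else:
            top_val = row[result.pop()]              # top_val = row[result.top()]
            width = i if not result else i - result[-1] - 1
            area = top_val * width
            max_area = max(area, max_area)
    while result:                                    # drain remaining bars
        top_val = row[result.pop()]
        width = i if not result else i - result[-1] - 1
        area = top_val * width
        max_area = max(area, max_area)
    return max_area
```

For example, `max_hist_area([2, 1, 5, 6, 2, 3])` finds the 5-by-2 rectangle spanning the two tallest bars.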
- FIG. 9A is an example illustrating an overview of the method for modifying the candidate image based on the plurality of versions of the reference image by the electronic device (100), according to an embodiment as disclosed herein.
- the at least one portion of the candidate image to be modified is selected by the user.
- the electronic device (100) recommends the plurality of reference images which are similar to the candidate image and allows the user to select at least one reference image of the plurality of reference images, which is used to modify the candidate image. Further, the electronic device (100) generates the plurality of versions of the reference image selected by the user (as shown in FIG. 9A).
- the electronic device (100) generates the plurality of versions of the modified candidate image by applying each of the versions of the plurality of versions of the reference image.
- the second GAN model (168) includes a Rectified Unet based generator (168a) and a RESNET based discriminator (168b).
- the Rectified Unet based generator (168a) is configured to produce enhanced candidate images using the at least one version of the plurality of versions of the reference image.
- the RESNET based discriminator (168b) is configured to identify whether the generated candidate image and the at least one version of the plurality of versions of the reference image relate to each other over an expected distribution. Both the Rectified Unet based generator (168a) and the RESNET based discriminator (168b) are trained together, but only the Rectified Unet based generator (168a) is used in the final deployment.
- the Rectified Unet based generator (168a) is trained on both losses, i.e., the L2_loss (to account for the difference between the reference image and the candidate image) and the L1_loss (modified version, to account for noise produced in the candidate image).
- the images generated using the Rectified Unet based generator (168a) are chosen based on the L2 distance from the reference image selected by the user.
- the loss L2 is calculated by selecting an average of the l2 distances between the top images generated using the Rectified Unet based generator (168a) and the reference image selected by the user.
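This selection-and-averaging step can be sketched as follows; the top-k size and the flat-image stand-ins are assumptions for illustration.

```python
import numpy as np

# Sketch of loss L2: generated candidates are ranked by l2 distance to
# the user-selected reference, and the loss is the mean distance over
# the k closest images (k is an assumed parameter).

def l2(a, b):
    return float(np.linalg.norm(a - b))

def loss_l2_top(generated, reference, k=3):
    """Average l2 distance between the k closest generated images
    and the reference image."""
    dists = sorted(l2(g, reference) for g in generated)
    return sum(dists[:k]) / k

ref = np.zeros((4, 4))
gen = [np.full((4, 4), v) for v in (0.1, 0.2, 0.3, 0.9)]
score = loss_l2_top(gen, ref, k=3)   # outlier at 0.9 is excluded
```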
- the RESNET based discriminator (168b) is trained on cross entropy loss for performing the classification.
- the RESNET model helps to approximate complex functions by stacking a series of residual blocks.
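The residual idea can be shown with a toy block: each block outputs x + F(x), so a stack of blocks approximates a complex function while the identity path keeps gradients flowing. F here is a toy linear map, not the discriminator's actual convolutional residual function.

```python
import numpy as np

# Toy sketch of residual stacking: y = x + F(x) per block, with
# F(x) = weight @ x as an illustrative residual function.

def residual_block(x, weight):
    """One residual block: identity path plus learned residual."""
    return x + weight @ x

def stack_blocks(x, weights):
    for w in weights:
        x = residual_block(x, w)
    return x

x = np.ones(4)
out = stack_blocks(x, [np.zeros((4, 4))] * 3)   # zero residuals: identity
```

With zero residual weights the stack is exactly the identity, which is why very deep residual networks remain trainable.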
- the noise reduction pipeline is based on conditional generative adversarial networks.
- the image with the least l2 distance from the reference image is selected:
- error_generator = error_discriminator + loss_L1 + loss_L2
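The combined objective above reads as error_generator = error_discriminator + loss_L1 + loss_L2; a trivial sketch, where the three terms stand for values computed elsewhere in the training loop:

```python
# Sketch of the combined generator objective: the adversarial term and
# the two reconstruction terms are simply summed (weights of 1 each are
# an assumption; the patent states no weighting).

def generator_error(error_discriminator, loss_l1, loss_l2):
    """error_generator = error_discriminator + loss_L1 + loss_L2"""
    return error_discriminator + loss_l1 + loss_l2

total = generator_error(0.5, 0.2, 0.3)
```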
- FIG. 9B illustrates an example of the model architecture of the convolution stage of the encoder-decoder of the second GAN model (168), according to an embodiment as disclosed herein.
- the decoder stage includes up-sampling the encoded image and concatenating the encoded image with the low-level features of the input image.
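The decoder step described above (up-sample, then concatenate with low-level encoder features, i.e., the U-Net skip connection) can be sketched with shapes only; the channel counts and nearest-neighbor up-sampling are illustrative assumptions.

```python
import numpy as np

# Sketch of a U-Net-style decoder step: up-sample the encoded feature
# map 2x, then concatenate it with the encoder's low-level features
# along the channel axis.

def upsample2x(feat):
    """Nearest-neighbor 2x up-sampling of a (C, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def decoder_step(encoded, skip):
    """Up-sample, then concatenate along the channel axis."""
    up = upsample2x(encoded)
    return np.concatenate([up, skip], axis=0)

encoded = np.random.default_rng(0).random((8, 4, 4))   # deep features
skip = np.random.default_rng(1).random((4, 8, 8))      # low-level features
out = decoder_step(encoded, skip)                      # (8+4, 8, 8)
```

The concatenation is what lets the decoder recover fine detail that the encoder's down-sampling discarded.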
- FIG. 9C illustrates an example of a style transfer pipeline architecture of modifying the candidate image using at least one of the versions of the reference image by the electronic device (100), according to an embodiment as disclosed herein.
- a standard adversarial discriminator D is used to distinguish the stylized output G(E(xi)) from real examples yj ∈ Y.
- a single image y0 is given with a set Y of at least one reference image yj ∈ Y.
- the transformed image loss is defined as:
- C × H × W is the size of the image x, and for training, T is initialized with uniform weights
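The formula itself appears to have been lost in extraction. A hedged reconstruction, consistent with the surrounding definitions (a transformer block T, stylized output G(E(x)), image size C × H × W) and with the style-transfer literature this passage tracks, would be:

```latex
\mathcal{L}_{T}\bigl(x,\, G(E(x))\bigr)
  = \frac{1}{C\,H\,W}\,
    \left\lVert\, T(x) - T\bigl(G(E(x))\bigr) \,\right\rVert_2^2
```

This should be read as a plausible reading of the transformed image loss, not the patent's exact definition.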
- FIG. 10A is an example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device (100), according to an embodiment as disclosed herein.
- at step 1, the user selects a portion of the candidate image comprising the green grass as the region of interest to be modified by the electronic device (100).
- the electronic device (100) performs the image segmentation of the candidate image and automatically determines a plurality of reference images which are related to the candidate image. Further, the electronic device (100) displays the plurality of reference images on the screen of the electronic device (100) and allows the user to select the at least one reference image from the plurality of reference images to be used to modify the region of interest in the candidate image.
- the user selects a reference image from the plurality of reference images, where the reference image comprises a similar landscape as the candidate image, which includes green grass with white flowers along a road. Since, at step 1, the user had selected the portion of the candidate image comprising the green grass as the region of interest to be modified, the electronic device (100) automatically modifies the green grass in the candidate image with the effects of the green grass with white flowers using the plurality of versions of the reference image and presents the results to the user, as shown in step 4. Further, at step 5, the user may select one version of the modified image to be used, for example, to publish on a social networking site. Further, all the versions of the modified candidate image will be available to the user in the electronic device (100).
- FIG. 10B is another example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device (100), according to an embodiment as disclosed herein.
- the user selects a portion of the candidate image comprising the cloudy sky as the region of interest to be modified by the electronic device (100).
- the electronic device (100) performs the image segmentation of the candidate image and automatically determines a plurality of reference images which are related to the candidate image. Further, the electronic device (100) displays the plurality of reference images on the screen of the electronic device (100) and allows the user to select the at least one reference image from the plurality of reference images to be used to modify the region of interest in the candidate image.
- the user selects a reference image from the plurality of reference images, where the reference image comprises a similar landscape as the candidate image, which includes a scene of a road along with a sunny sky with clouds. Since, at step 1, the user had selected the portion of the candidate image comprising the cloudy sky as the region of interest to be modified, the electronic device (100) automatically modifies the cloudy sky of the candidate image with the effects of the sunny sky with clouds using the plurality of versions of the reference image and presents the results to the user, as shown in step 4. Further, at step 5, the user may select one version of the modified candidate image to be used. Further, all the versions of the modified candidate image will be available to the user in the electronic device (100).
- FIG. 10C is another example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device (100), according to an embodiment as disclosed herein.
- at step 1, the user selects a portion of the candidate image comprising the bright sunny sky as the region of interest to be modified by the electronic device (100).
- the electronic device (100) performs the image segmentation of the candidate image and automatically determines a plurality of reference images which are related to the candidate image. Further, the electronic device (100) displays the plurality of reference images on the screen of the electronic device (100) and allows the user to select the at least one reference image from the plurality of reference images to be used to modify the region of interest in the candidate image.
- the user selects a reference image from the plurality of reference images, where the reference image comprises a canyon-like structure with an evening sky. Since, at step 1, the user had selected the portion of the candidate image comprising the bright sunny sky as the region of interest to be modified, the electronic device (100) automatically modifies the bright sunny sky of the candidate image using the plurality of versions of the evening sky of the reference image and presents the results to the user, as shown in step 4. Further, at step 5, the user may select one version of the modified candidate image to be used. Further, all the versions of the modified candidate image will be available to the user in the electronic device (100).
- FIG. 10D is another example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device (100), according to an embodiment as disclosed herein.
- at step 1, the user selects a portion of the candidate image comprising the tall and lush green grass as the region of interest to be modified by the electronic device (100).
- the electronic device (100) performs the image segmentation of the candidate image and automatically determines a plurality of reference images which are related to the candidate image. Further, the electronic device (100) displays the plurality of reference images on the screen of the electronic device (100) and allows the user to select the at least one reference image from the plurality of reference images to be used to modify the region of interest in the candidate image.
- the user selects a reference image from the plurality of reference images, where the reference image comprises a similar landscape as the candidate image, which includes green grass along the mountains. Since, at step 1, the user had selected the portion of the candidate image comprising the tall and lush green grass as the region of interest to be modified, the electronic device (100) automatically modifies the selected portion of the candidate image using the plurality of versions of the reference image and presents the results to the user, as shown in step 4. Further, at step 5, the user may select one version of the modified candidate image to be, for example, shared over a messaging platform. Further, all the versions of the modified candidate image will be available to the user in the electronic device (100).
Abstract
Embodiments herein provide a method for modifying a candidate image by an electronic device (100). The method includes obtaining the candidate image and displaying, by the electronic device (100), a plurality of reference images corresponding to the candidate image. Further, the method includes detecting, by the electronic device (100), at least one reference image selected from the plurality of reference images and generating, by the electronic device (100), a plurality of versions of the at least one selected reference image, where each version of the at least one selected reference image comprises a variable visual parameter. Furthermore, the method includes modifying, by the electronic device (100), at least one portion of the candidate image using at least one version of the plurality of versions of the at least one selected reference image; and storing, by the electronic device (100), the modified candidate image.
Description
The embodiments herein relate to performing actions in an electronic device. More particularly, the embodiments relate to a method and an electronic device for modifying a portion of a candidate image using a version of a reference image.
Generally users of electronic devices share a huge amount of content on various platforms like social networking sites, blogging sites, news applications, etc. The content includes multimedia data such as text information, images and videos. The users of the electronic devices usually edit the images before sharing the same on the various platforms by using various photo editing tools such as for example filters, effects, overlays, etc to enhance the content in the images aesthetically.
The various photo editing tools generally modify a candidate image by transferring an image style of a reference image to the candidate image completely, as shown in FIG. 1. However, a user has no choice in determining the image style transformation of the candidate image, as the photo editing tools are applied to the entire candidate image. Further, in case the user wants to modify only a portion of the candidate image using a portion of the reference image, the photo editing tools do not provide such an option to the user. Therefore, the existing methods do not provide the user flexibility to determine the kind of modification to the candidate image that may be desired by the user.
The above information is presented as background information only to help the reader to understand the present invention. Applicants have made no determination and make no assertion as to whether any of the above might be applicable as prior art with regard to the present application.
The principal object of the embodiments herein is to provide a method for modifying a portion of a candidate image using a version of a reference image in an electronic device.
Another object of the embodiments herein is to display a plurality of reference images corresponding to the candidate image.
Another object of the embodiments herein is to generate a plurality of versions of at least one selected reference image, each of which comprises a variable visual parameter.
Another object of the embodiments herein is to apply a first Generative Adversarial Network (GAN) model on the at least one reference image to generate a plurality of versions of the at least one reference image.
Another object of the embodiments herein is to apply a second Generative Adversarial Network (GAN) model to modify at least one portion of the candidate image based on at least one version of the at least one reference image.
In accordance with an aspect of the present disclosure, an embodiment herein discloses a method for modifying a candidate image by an electronic device (100). The method comprises obtaining the candidate image; identifying at least one reference image selected from a plurality of reference images associated with the candidate image; generating a plurality of versions of the at least one reference image, wherein each version of the at least one reference image comprises a variable visual parameter; modifying at least one portion of the candidate image using at least one version of the plurality of versions; and displaying the modified candidate image.
In an embodiment, the plurality of reference images associated with the candidate image is identified by segmenting the candidate image into a plurality of segments; extracting features from each of the plurality of segments of the candidate image; determining a pattern of the candidate image based on the extracted features; and identifying the plurality of reference images based on the pattern of the candidate image.
In an embodiment, the method for generating the plurality of versions of the at least one selected reference image comprises providing the at least one reference image to a first Generative Adversarial Network (GAN) model; and generating the plurality of versions of the at least one selected reference image using the first GAN model (166).
In an embodiment, the variable visual parameter is at least one of a color, a light, an intensity and a gradient.
In an embodiment, the method for modifying the candidate image using at least one version of the plurality of versions comprises providing the at least one version of the plurality of versions of the at least one reference image to a second GAN model (168); identifying the at least one portion of the candidate image to be modified; and applying the second GAN model (168) to the at least one portion of the candidate image based on the at least one version of the plurality of versions.
In accordance with another aspect of the present disclosure, an embodiment herein discloses a method for modifying a candidate image by an electronic device (100). The method comprises obtaining at least one reference image associated with the candidate image; applying a first Generative Adversarial Network (GAN) model on the at least one reference image to generate a plurality of versions of the at least one reference image; applying a second Generative Adversarial Network (GAN) model to modify at least one portion of the candidate image based on at least one version of the at least one reference image; and storing the modified candidate image.
In accordance with another aspect of the present disclosure, an embodiment herein discloses an electronic device (100) for modifying a candidate image. The electronic device (100) comprises a memory (120); and at least one processor (160) coupled to the memory (120). The at least one processor (160) is configured to: obtain the candidate image; identify at least one reference image selected from a plurality of reference images associated with the candidate image; generate a plurality of versions of the at least one reference image, wherein each version of the at least one reference image comprises a variable visual parameter; modify at least one portion of the candidate image using at least one version of the plurality of versions; and display the modified candidate image.
In accordance with another aspect of the present disclosure, an embodiment herein discloses an electronic device (100) for modifying a candidate image. The electronic device (100) comprises a memory (120); and at least one processor (160) coupled to the memory (120). The at least one processor (160) is configured to: obtain at least one reference image associated with the candidate image; apply a first Generative Adversarial Network (GAN) model on the at least one reference image to generate a plurality of versions of the at least one reference image; apply a second Generative Adversarial Network (GAN) model to modify at least one portion of the candidate image based on at least one version of the at least one reference image; and display the modified candidate image.
These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
This invention is illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
FIG. 1 is an example illustrating a method for modifying a candidate image using a reference image in an electronic device, according to a prior art;
FIG. 2 is a block diagram of the electronic device for modifying the candidate image using the reference image, according to an embodiment as disclosed herein;
FIG. 3A is a flow chart illustrating a method for modifying the candidate image using the reference image in the electronic device, according to an embodiment as disclosed herein;
FIG. 3B is a flow chart illustrating a method for modifying the candidate image using the reference image in the electronic device, according to another embodiment as disclosed herein;
FIG. 4 is an example illustrating a method for performing an image segmentation of the candidate image by a candidate image processing engine of the electronic device, according to an embodiment as disclosed herein;
FIG. 5 is an example illustrating a plurality of reference images corresponding to the candidate image recommended by the electronic device, according to an embodiment as disclosed herein;
FIG. 6A is an example illustrating a color graph used to generate a plurality of versions of the reference image selected by the user in the electronic device, according to an embodiment as disclosed herein;
FIG. 6B is an example illustrating the plurality of versions of the reference image generated by the electronic device, according to an embodiment as disclosed herein;
FIG. 6C is an example illustrating a color bar used to generate the plurality of versions of the reference image selected by the user in the electronic device, according to an embodiment as disclosed herein;
FIG. 7 is an example illustrating the plurality of versions of the reference image generated by the electronic device based on different perspective angles, according to an embodiment as disclosed herein;
FIG. 8 is an example illustrating a color and texture extracting technique performed by the electronic device, according to an embodiment as disclosed herein;
FIG. 9A is an example illustrating an overview of the method for modifying the candidate image based on the plurality of versions of the reference image by the electronic device, according to an embodiment as disclosed herein;
FIG. 9B illustrates an example of a model architecture of a convolution stage of the encoder-decoder of a second GAN model, according to an embodiment as disclosed herein;
FIG. 9C illustrates an example of a style transfer pipeline architecture of modifying the candidate image using at least one of the versions of the reference image by the electronic device, according to an embodiment as disclosed herein;
FIG. 10A is an example illustrating a modification of at least one portion of the candidate image using at least one reference image by the electronic device, according to an embodiment as disclosed herein;
FIG. 10B is another example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device, according to an embodiment as disclosed herein;
FIG. 10C is another example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device, according to an embodiment as disclosed herein; and
FIG. 10D is another example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device, according to an embodiment as disclosed herein.
Various embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. In the following description, specific details such as detailed configuration and components are merely provided to assist the overall understanding of these embodiments of the present disclosure. Therefore, it should be apparent to those skilled in the art that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness. Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments. Herein, the term "or" as used herein, refers to a non-exclusive or, unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those skilled in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
As is traditional in the field, embodiments may be described and illustrated in terms of blocks which carry out a described function or functions. These blocks, which may be referred to herein as units, engines, manager, modules or the like, are physically implemented by analog and/or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits and the like, and may optionally be driven by firmware and/or software. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like. The circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block. Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the disclosure. Likewise, the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the disclosure.
Accordingly, the embodiments herein provide a method for modifying a candidate image using an electronic device (100). The method includes receiving, by the electronic device (100), the candidate image and displaying, by the electronic device (100), a plurality of reference images corresponding to the candidate image. Further, the method includes detecting, by the electronic device (100), at least one reference image selected from the plurality of reference images and generating, by the electronic device (100), a plurality of versions of the at least one selected reference image, where each version of the at least one selected reference image comprises a variable visual parameter. Furthermore, the method includes modifying, by the electronic device (100), at least one portion of the candidate image using at least one version of the plurality of versions of the at least one selected reference image; and storing, by the electronic device (100), the modified candidate image.
In the conventional methods and systems, an image style of the reference image is superimposed over the candidate image completely and the user is not provided any flexibility to select only a portion of the candidate image for modification.
Unlike the conventional methods and systems, in the proposed method the electronic device (100) allows the user to select a portion of the candidate image for modification based on the selected reference image.
Unlike the conventional methods and systems, in the proposed method the electronic device (100) generates a plurality of versions of the reference image, and the user is allowed to select at least one version of the plurality of versions for modifying the at least one portion of the candidate image.
Referring now to the drawings, and more particularly to FIGS. 2 through 10D, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
FIG. 2 is a block diagram of the electronic device (100) for modifying portion of candidate image using version of reference image, according to an embodiment as disclosed herein.
Referring to the FIG. 2, the electronic device (100) can be, for example, a mobile phone, a smart phone, Personal Digital Assistant (PDA), a tablet, a wearable device, or the like. In an embodiment, the electronic device (100) includes a memory (120), a display (140) and a processor (160).
In an embodiment, the memory (120) can include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the memory (120) may, in some examples, be considered a non-transitory storage medium. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that the memory (120) is non-movable. In some examples, the memory (120) is configured to store larger amounts of information than a volatile memory. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
In an embodiment, the display (140) is configured to display a candidate image and allow the user to select at least one portion of the candidate image to be modified by the electronic device (100). The display (140) is also configured to display a plurality of reference images corresponding to the candidate image. The display (140) is also configured to display a plurality of versions of the modified candidate image on the electronic device (100).
In an embodiment, the processor (160) includes a candidate image processing engine (162), a reference image generation engine (164), a first generative adversarial network (GAN) model (166) and a second generative adversarial network (GAN) model (168).
The candidate image processing engine (162) is configured to receive the candidate image and perform image segmentation of the candidate image based on at least one of colors and light intensities. The image segmentation of the candidate image is performed to determine the number of color segments which are present in the candidate image and to extract features from each of the segments of the candidate image. For example, the segments in the candidate image may include sky, water, green patches, buildings, etc. The candidate image processing engine (162) includes a convolutional neural network (CNN) which performs the image segmentation.
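The color-segment counting idea behind this step can be sketched as follows. This is a minimal illustration with a hypothetical coarse-bucket quantization, not the CNN the engine actually uses:

```cpp
#include <set>
#include <vector>

struct Pixel { int r, g, b; };

// Quantize a pixel into one of 4 x 4 x 4 = 64 coarse color buckets
// (an illustrative scheme; the patent does not specify bucket sizes).
int colorBucket(const Pixel& p) {
    return (p.r / 64) * 16 + (p.g / 64) * 4 + (p.b / 64);
}

// Count how many distinct color segments appear in the image.
int countColorSegments(const std::vector<Pixel>& image) {
    std::set<int> buckets;
    for (const Pixel& p : image) buckets.insert(colorBucket(p));
    return static_cast<int>(buckets.size());
}
```

For example, two near-identical sky-blue pixels fall into the same bucket, while a green pixel lands in a different one, giving two segments.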
The reference image generation engine (164) is configured to receive the segmented candidate image from the candidate image processing engine (162). Further, the reference image generation engine (164) is configured to determine a pattern of the candidate image based on the extracted features of each of the segments of the candidate image and generate the plurality of reference images from the electronic device (100) based on the pattern of the candidate image.
The first GAN model (166) is configured to receive the at least one reference image selected by the user from the plurality of reference images provided by the reference image generation engine (164). The first GAN model (166) performs the image segmentation on the at least one reference image selected by the user and determines a plurality of colors in the at least one reference image. Further, a color graph is applied on the selected reference image to generate a plurality of versions of the reference image selected by the user in the electronic device (100). Each version of the at least one selected reference image comprises a variable visual parameter, such as, for example, a variation in color intensity or lighting. The first GAN model (166) is configured to generate the plurality of versions of the reference image based on the color graph by mapping the functions of the selected reference image and the color graph (as described in FIGS. 6A-6C).
Further, the first GAN model (166) is also configured to generate a plurality of versions of the reference image based on different perspective angles i.e., by rotating the perspective angle of a region of interest/object in the reference image to generate the plurality of versions of the reference image (as described in FIG. 7).
The second GAN model (168) is configured to determine the at least one portion of the candidate image selected by the user for modification. Further, the second GAN model (168) is configured to modify the at least one portion of the candidate image based on at least one version of the at least one reference image. Therefore, the proposed method provides multiple versions of the modified candidate image based on the multiple versions of the reference image.
Although FIG. 2 shows the hardware elements of the electronic device (100), it is to be understood that other embodiments are not limited thereto. In other embodiments, the electronic device (100) may include fewer or more elements. Further, the labels or names of the elements are used only for illustrative purposes and do not limit the scope of the invention. One or more components can be combined together to perform the same or a substantially similar function.
FIG. 3A is a flow chart illustrating a method for modifying the candidate image using the reference image in the electronic device (100), according to an embodiment as disclosed herein.
Referring to the FIG. 3A, at step 310a the electronic device (100) obtains the candidate image. For example, in the electronic device (100) as illustrated in the FIG. 2, the at least one processor (160) can be configured to obtain the candidate image. The candidate image may be obtained by capturing an image using a capturing device of the electronic device, or receiving the image from an external electronic device.
At step 320a the electronic device (100) identifies at least one reference image selected from a plurality of reference images. For example, in the electronic device (100) as illustrated in the FIG. 2, the at least one processor (160) can be configured to display a plurality of reference images associated with the candidate image, and identify at least one reference image selected from the plurality of reference images.
At step 330a the electronic device (100) generates a plurality of versions of the at least one reference image. For example, in the electronic device (100) as illustrated in the FIG. 2, the at least one processor (160) can be configured to generate the plurality of versions of the at least one reference image. Each version of the at least one reference image comprises a variable visual parameter, such as a color, a light, an intensity, or a gradient.
At step 340a the electronic device (100) modifies at least one portion of the candidate image using at least one version of the plurality of versions. For example, in the electronic device (100) as illustrated in the FIG. 2, the at least one processor (160) can be configured to modify at least one portion of the candidate image using the at least one version of the at least one reference image.
At step 350a the electronic device (100) displays the modified candidate image. For example, in the electronic device (100) as illustrated in the FIG. 2, the at least one processor (160) can be configured to display the modified candidate image, and the memory (120) can be configured to store the modified candidate image.
The various actions, acts, blocks, steps, or the like in the method may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
FIG. 3B is a flow chart illustrating a method for modifying the candidate image using the reference image in the electronic device (100), according to another embodiment as disclosed herein.
Referring to the FIG. 3B, at step 310b the electronic device (100) obtains the at least one reference image associated with the candidate image. For example, in the electronic device (100) as illustrated in the FIG. 2, the at least one processor (160) can be configured to obtain the at least one reference image associated with the candidate image.
At step 320b the electronic device (100) applies the first generative adversarial network (GAN) model on the at least one reference image to generate a plurality of versions of the at least one reference image. For example, in the electronic device (100) as illustrated in the FIG. 2, the at least one processor (160) can be configured to apply the first generative adversarial network (GAN) model on the at least one reference image to generate a plurality of versions of the at least one reference image.
At step 330b the electronic device (100) applies the second generative adversarial network (GAN) model to modify at least one portion of the candidate image based on at least one version of the at least one reference image. For example, in the electronic device (100) as illustrated in the FIG. 2, the at least one processor (160) can be configured to apply the second generative adversarial network (GAN) model to modify the at least one portion of the candidate image based on at least one version of the at least one reference image.
At step 340b the electronic device (100) stores the modified candidate image. For example, in the electronic device (100) as illustrated in the FIG. 2, the at least one processor (160) can be configured to display the modified candidate image, and the memory (120) can be configured to store the modified candidate image.
The various actions, acts, blocks, steps, or the like in the method may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
FIG. 4 is an example illustrating the method for performing the image segmentation of the candidate image by the candidate image processing engine (162) of the electronic device (100), according to an embodiment as disclosed herein.
Referring to the FIG. 4, consider a candidate image which comprises a landscape of mountains, sky, clouds, grass, etc. The image segmentation is performed by the candidate image processing engine (162) using the Convolutional Neural Network (CNN), which receives the candidate image as the input and provides a segmented version of the candidate image as the output.
The CNN is part of the candidate image processing engine (162) which performs the image segmentation of the candidate image. The CNN performs the image segmentation based on a plurality of classes using which the CNN has been trained. Therefore, the segmented candidate image will have segments corresponding to the plurality of classes of the CNN. The plurality of classes may include for example sky, clouds, human beings, waterfall, mountain, grass-patches, rivers, buildings, etc. The segments are the individual components and classes based on which the candidate image is divided.
The proposed method for modifying the candidate image using the reference image in the electronic device (100) includes performing image segmentation of both the candidate image and the reference image. The FIG. 4 illustrates the image segmentation of the candidate image; the same procedure is applicable to the reference image as well.
FIG. 5 is an example illustrating the plurality of reference images corresponding to the candidate image recommended by the electronic device (100), according to an embodiment as disclosed herein.
Referring to the FIG. 5, the electronic device (100) generates the segmented version of the candidate image as described in the FIG. 4. The electronic device (100) determines the plurality of reference images, stored in the electronic device (100), which have similar content to the segmented candidate image, based on clustering the content into the plurality of classes. Further, the electronic device (100) recommends and displays the plurality of reference images based on the clustering of the content of the segmented candidate image. The user is allowed to select at least one reference image of the plurality of reference images to be used for modifying the candidate image.
FIG. 6A is an example illustrating a color graph used to generate the plurality of versions of the reference image selected by the user in the electronic device (100), according to an embodiment as disclosed herein.
Referring to the FIG. 6A, the color graph, comprising a plurality of colors arranged by color temperature in Kelvin, is provided. The color graph is applied on the selected reference image to generate the plurality of versions of the reference image selected by the user in the electronic device (100). The plurality of versions represents a plurality of color-intensity and light variations. The first GAN model (166) is used to generate the plurality of versions of the reference image based on the color graph by mapping the functions of the selected reference image and the color graph.
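The color-graph step can be sketched as applying per-channel multipliers to the reference image, one multiplier triple per entry in the graph. The `Tint` values here are illustrative stand-ins, not values taken from the patent's color graph:

```cpp
#include <vector>

struct Tint { double r, g, b; };  // per-channel multipliers for one graph entry
struct Px { int r, g, b; };

// Clamp a scaled channel value back into the valid 0..255 range.
int clamp255(double v) {
    return v < 0 ? 0 : (v > 255 ? 255 : static_cast<int>(v));
}

// Apply one tint to every pixel, producing one "version" of the image.
std::vector<Px> applyTint(const std::vector<Px>& img, const Tint& t) {
    std::vector<Px> out;
    out.reserve(img.size());
    for (const Px& p : img)
        out.push_back({clamp255(p.r * t.r), clamp255(p.g * t.g),
                       clamp255(p.b * t.b)});
    return out;
}

// Generate one version per tint in the (illustrative) color graph.
std::vector<std::vector<Px>> generateVersions(const std::vector<Px>& img,
                                              const std::vector<Tint>& graph) {
    std::vector<std::vector<Px>> versions;
    for (const Tint& t : graph) versions.push_back(applyTint(img, t));
    return versions;
}
```

A warm tint such as {1.1, 1.0, 0.9} boosts red and suppresses blue, while {0.9, 1.0, 1.1} does the opposite, yielding the yellow-leaning and blue-leaning versions described for FIG. 6B.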
FIG. 6B is an example illustrating the plurality of versions of the reference image generated by the electronic device (100), according to an embodiment as disclosed herein.
Referring to the FIG. 6B in conjunction with the FIG. 6A, the first GAN model (166) is provided with the reference image selected by the user as the input. The first GAN model (166) is configured to generate the plurality of versions of the reference image based on the intensity of the color i.e., color differences determined using the color graph, as shown in FIG. 6B. Therefore, a first version of the reference image may have a highlight of yellow color, a second version of the reference image may have a highlight of blue color, a third version of the reference image may have a highlight of orange color, etc based on the various intensities of the color graph.
FIG. 6C is an example illustrating a color bar used to generate the plurality of versions of the reference image selected by the user in the electronic device (100), according to an embodiment as disclosed herein.
Referring to the FIG. 6C, in the proposed method the first GAN model (166) comprises a plurality of layers for detecting the light colors and the intensities on the basis of the segmentation of the content in the reference image. In the example, the first GAN model (166) generates five different versions of the reference image selected by the user which may be used for modifying the candidate image.
The plurality of versions of the selected reference image includes an outdoor shade, an evening sun, tungsten, sunrise/sunset and candle flame. Unlike the conventional methods and systems, in the proposed method each of the plurality of versions of the selected reference image has different hues of color intensities and light components, which provides an enhanced number of options to the user for modifying the candidate image.
FIG. 7 is an example illustrating the plurality of versions of the reference image generated by the electronic device (100) based on different perspective angles, according to an embodiment as disclosed herein.
Referring to the FIG. 7, the proposed method includes the generation of the plurality of versions of the reference image based on different perspective angles, such as, for example, θ1, θ2, θ3, etc. The electronic device (100) rotates the perspective angle of the region of interest in the reference image to generate the plurality of versions of the reference image. In the example, the proposed method applies eight transformations to the region/object of interest to obtain the plurality of versions of the reference image. The number of transformations required to be applied to the reference image depends on the region/object of interest, as the number of transformations can be increased only up to the point where the region/object of interest starts to distort.
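The per-angle version generation can be sketched on the region-of-interest geometry alone. A real pipeline would warp pixels; this sketch only rotates the ROI's corner points through each angle in a hypothetical angle list:

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Rotate a point about the origin by `deg` degrees.
Pt rotate(const Pt& p, double deg) {
    const double kPi = 3.14159265358979323846;
    double rad = deg * kPi / 180.0;
    return {p.x * std::cos(rad) - p.y * std::sin(rad),
            p.x * std::sin(rad) + p.y * std::cos(rad)};
}

// One rotated copy of the region of interest per angle.
std::vector<std::vector<Pt>> angleVersions(const std::vector<Pt>& roi,
                                           const std::vector<double>& angles) {
    std::vector<std::vector<Pt>> versions;
    for (double a : angles) {
        std::vector<Pt> v;
        for (const Pt& p : roi) v.push_back(rotate(p, a));
        versions.push_back(v);
    }
    return versions;
}
```

With eight angles, eight rotated versions of the region of interest are produced, matching the example above; in practice the list would stop at the angle where the object starts to distort.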
The plurality of versions of the reference image generated by varying the perspective angle of the region of interest may be used by the user to modify the candidate image. Further, the plurality of versions of the reference image is also stored in the electronic device (100) which may be used by the user.
Hence, based on the FIGS. 6A-6C and the FIG. 7, in the example provided a total of 5 versions of the reference image are generated based on the variations in the color intensities and 8 versions of the reference image are generated based on the variations in the perspective angles. Therefore, a total of 5 × 8 = 40 versions of the reference image are generated using the first GAN model (166) by the electronic device (100).
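The count of 40 versions is simply the Cartesian product of the color versions and the angle versions, which can be sketched as:

```cpp
#include <utility>
#include <vector>

// Every (color version, angle version) pair yields one distinct
// reference-image version.
std::vector<std::pair<int, int>> versionPairs(int colorVersions,
                                              int angleVersions) {
    std::vector<std::pair<int, int>> pairs;
    for (int c = 0; c < colorVersions; ++c)
        for (int a = 0; a < angleVersions; ++a)
            pairs.push_back({c, a});
    return pairs;
}
```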
FIG. 8 is an example illustrating a color and texture extracting technique performed by the electronic device (100), according to an embodiment as disclosed herein.
Referring to the FIG. 8, at step 1, one of the candidate image and the reference image selected by the user is received by the electronic device (100). Further, the electronic device (100) extracts the color and texture for one of the candidate image and the reference image selected by the user. At step 2, the color and texture are extracted by using multiple levels of texture filters, such as the level-level filter, edge-edge filter, ripple-ripple filter, spot-spot filter, etc. Further, at step 3, the filtered images are obtained and, at step 4, texture-energy maps are formed for the filtered images. At step 5, the normalized maps are formed for the images and, at step 6, the features of the images, such as the gradient, the color and the texture of the image, are obtained. Further, the code for feature and texture extraction is as below:
// Find color extract: maxHist() computes the largest rectangle under one
// histogram row; maxRectangle() applies it row by row to find the largest
// rectangle of 1s in a binary image.
#include <algorithm>
#include <stack>
using namespace std;

const int R = 4; // number of rows in the binary image A[][]
const int C = 4; // number of columns in the binary image A[][]

int maxHist(int row[])
{
    // Create an empty stack. The stack holds indexes of the
    // row[] array. The bars stored in the stack are always
    // in increasing order of their heights.
    stack<int> result;
    int top_val;      // Top of stack
    int max_area = 0; // Initialize max area in current row (or histogram)
    int area = 0;     // Initialize area with current top

    // Run through all bars of the given histogram (or row)
    int i = 0;
    while (i < C)
    {
        // If this bar is higher than the bar on top of the stack,
        // push it to the stack
        if (result.empty() || row[result.top()] <= row[i])
            result.push(i++);
        else
        {
            // If this bar is lower than the top of the stack, then
            // calculate the area of the rectangle with the stack top as
            // the smallest (or minimum height) bar. 'i' is the
            // 'right index' for the top and the element before the
            // top in the stack is the 'left index'.
            top_val = row[result.top()];
            result.pop();
            area = top_val * i;
            if (!result.empty())
                area = top_val * (i - result.top() - 1);
            max_area = max(area, max_area);
        }
    }

    // Pop the remaining bars and compute their areas
    while (!result.empty())
    {
        top_val = row[result.top()];
        result.pop();
        area = top_val * i;
        if (!result.empty())
            area = top_val * (i - result.top() - 1);
        max_area = max(area, max_area);
    }
    return max_area;
}

// Returns the area of the largest rectangle with all 1s in the image A[][]
int maxRectangle(int A[][C])
{
    // Calculate the area for the first row and initialize it as the result
    int result = maxHist(A[0]);

    // Iterate over the remaining rows to find the maximum rectangular area,
    // considering each row as a histogram
    for (int i = 1; i < R; i++)
    {
        for (int j = 0; j < C; j++)
            // If A[i][j] is 1 then add A[i - 1][j]
            if (A[i][j]) A[i][j] += A[i - 1][j];

        // Update the result if the area with the current row (as the last
        // row of the rectangle) is larger
        result = max(result, maxHist(A[i]));
    }
    return result;
}
FIG. 9A is an example illustrating an overview of the method for modifying the candidate image based on the plurality of versions of the reference image by the electronic device (100), according to an embodiment as disclosed herein.
Referring to the FIG. 9A, at step 1, the at least one portion of the candidate image to be modified is selected by the user. At step 2, the electronic device (100) recommends the plurality of reference images which are similar to the candidate image and allows the user to select at least one reference image of the plurality of reference images, which is used to modify the candidate image. Further, the electronic device (100) generates the plurality of versions of the reference image selected by the user (as shown in FIG. 9A).
Further, at step 3, the electronic device (100) generates the plurality of versions of the modified candidate image by applying each of the versions of the plurality of versions of the reference image.
The second GAN model (168) includes a Rectified Unet based generator (168a) and a RESNET based discriminator (168b). The Rectified Unet based generator (168a) is configured to produce enhanced candidate images using the at least one version of the plurality of versions of the reference image. The RESNET based discriminator (168b) is configured to identify whether the generated candidate image and the at least one version of the plurality of versions of the reference image relate to each other over an expected distribution. Both the Rectified Unet based generator (168a) and the RESNET based discriminator (168b) are trained together, but in the final deployment only the Rectified Unet based generator (168a) is used.
The Rectified Unet based generator (168a) is trained on both adversarial losses, i.e., the L2_loss (to account for the difference between the reference image and the candidate image) and the L1_loss (modified version) to account for the noise produced in the candidate image. The images generated using the Rectified Unet based generator (168a) are chosen based on their L2 distance from the reference image selected by the user.
The L2 loss is calculated by taking the average of the l2 distances between the top images generated using the Rectified Unet based generator (168a) and the reference image selected by the user.
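That averaging step can be sketched over flattened image vectors, assuming each image is represented as an equal-length vector of `double` values:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Euclidean (l2) distance between two flattened images of equal size.
double l2(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}

// L2 loss: average l2 distance from the reference image to the top
// generated images.
double l2Loss(const std::vector<std::vector<double>>& generated,
              const std::vector<double>& reference) {
    double total = 0.0;
    for (const auto& g : generated) total += l2(g, reference);
    return generated.empty() ? 0.0 : total / generated.size();
}
```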
The RESNET based discriminator (168b) is trained on cross entropy loss for performing the classification. The RESNET model helps to approximate complex functions by stacking a series of residual blocks.
In the proposed method, checking whether a given image pair, consisting of the reference image selected by the user and the modified candidate image, is real or fake can be treated as a classification problem. Hence, for the classification problem, the real image pair is assigned class label 1 and the modified image pair is assigned class label 0.
Further, the classification task is addressed using the RESNET model, which provides better classification accuracy and adaptability to variable input sizes. The noise-reduction pipeline is based on conditional generative adversarial networks.
Further, for the other error terms, the image with the least l2 distance from the reference is selected:
error_generator = error_discriminator + loss_L1 + loss_L2
FIG. 9B illustrates an example of the model architecture of the convolution stage of the encoder-decoder of the second GAN model (168), according to an embodiment as disclosed herein.
Referring to the FIG. 9B, at the convolution stage multiple layers are added for detecting light colors and intensities on the basis of objects at the encoder, as shown in FIG. 9B. The decoder stage includes up-sampling the encoded image and concatenating the encoded image with the low-level features of the input image.
FIG. 9C illustrates an example of a style transfer pipeline architecture of modifying the candidate image using at least one of the versions of the reference image by the electronic device (100), according to an embodiment as disclosed herein.
The second GAN model (168) includes the encoder-decoder architecture that utilizes an encoder network E to map an input candidate image x onto a latent representation z = E(x). A generative decoder G then plays the role of a painter and generates the modified output image y = G(z) from the sketchy content representation z.
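The composition y = G(E(x)) can be sketched with toy stand-ins for E and G. The real networks are learned; these linear maps only show the data flow through the encoder-decoder pipeline:

```cpp
#include <vector>

using Image = std::vector<double>;
using Latent = std::vector<double>;

// Hypothetical encoder E: compresses the image into a "content summary"
// (here simply halving every value to stand in for a learned mapping).
Latent encode(const Image& x) {
    Latent z;
    for (double v : x) z.push_back(v * 0.5);
    return z;
}

// Hypothetical generative decoder G: the "painter" that maps the latent
// representation back into image space.
Image decode(const Latent& z) {
    Image y;
    for (double v : z) y.push_back(v * 2.0);
    return y;
}

// The stylization pipeline is the composition y = G(E(x)).
Image stylize(const Image& x) { return decode(encode(x)); }
```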
To train E and G, a standard adversarial discriminator D is used to distinguish the stylized output G(E(xi)) from real examples yj ∈ Y. A single image y0 is given together with a set Y of at least one reference image yj ∈ Y.
The transformed image loss is defined over the image x of size C × H × W; for training, T is initialized with uniform weights, and λ controls the relative importance of the adversarial loss.
FIG. 10A is an example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device (100), according to an embodiment as disclosed herein.
Referring to the FIG. 10A, consider the candidate image comprising a landscape with green grass along a road. At step 1, the user selects a portion of the candidate image comprising the green grass as the region of interest to be modified by the electronic device (100).
At step 2, the electronic device (100) performs the image segmentation of the candidate image and automatically determines a plurality of reference images which are related to the candidate image. Further, the electronic device (100) displays the plurality of reference images on the screen of the electronic device (100) and allows the user to select the at least one reference image from the plurality of reference images to be used to modify the region of interest in the candidate image.
At step 3, the user selects a reference image from the plurality of reference images, where the reference image comprises a similar landscape to the candidate image, which includes green grass with white flowers along a road. Since, at step 1, the user had selected the portion of the candidate image comprising the green grass as the region of interest to be modified, the electronic device (100) automatically modifies the green grass in the candidate image with the effects of the green grass with white flowers using the plurality of versions of the reference image and presents the results to the user, as shown in step 4. Further, at step 5, the user may select one version of the modified image, for example, to publish on a social networking site, etc. Further, all the versions of the modified candidate image will be available to the user in the electronic device (100).
FIG. 10B is another example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device (100), according to an embodiment as disclosed herein.
Referring to the FIG. 10B, consider the candidate image comprising a landscape with cloudy sky, green grass and a tree. At step 1, the user selects a portion of the candidate image comprising the cloudy sky as the region of interest to be modified by the electronic device (100).
At step 2, the electronic device (100) performs the image segmentation of the candidate image and automatically determines a plurality of reference images which are related to the candidate image. Further, the electronic device (100) displays the plurality of reference images on the screen of the electronic device (100) and allows the user to select the at least one reference image from the plurality of reference images to be used to modify the region of interest in the candidate image.
At step 3, the user selects a reference image from the plurality of reference images, where the reference image comprises a similar landscape to the candidate image, which includes a scene of a road along with a sunny sky with clouds. Since, at step 1, the user had selected the portion of the candidate image comprising the cloudy sky as the region of interest to be modified, the electronic device (100) automatically modifies the cloudy sky of the candidate image with the effects of the sunny sky with clouds using the plurality of versions of the reference image and presents the results to the user, as shown in step 4. Further, at step 5, the user may select one version of the modified candidate image to be used. Further, all the versions of the modified candidate image will be available to the user in the electronic device (100).
FIG. 10C is another example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device (100), according to an embodiment as disclosed herein.
Referring to the FIG. 10C, consider the candidate image comprising snow-capped mountains with a bright sunny sky. At step 1, the user selects a portion of the candidate image comprising the bright sunny sky as the region of interest to be modified by the electronic device (100).
At step 2, the electronic device (100) performs the image segmentation of the candidate image and automatically determines a plurality of reference images which are related to the candidate image. Further, the electronic device (100) displays the plurality of reference images on the screen of the electronic device (100) and allows the user to select the at least one reference image from the plurality of reference images to be used to modify the region of interest in the candidate image.
At step 3, the user selects a reference image from the plurality of reference images, where the reference image comprises a canyon-like structure with an evening sky. Since, at step 1, the user had selected the portion of the candidate image comprising the bright sunny sky as the region of interest to be modified, the electronic device (100) automatically modifies the bright sunny sky of the candidate image using the plurality of versions of the evening sky of the reference image and presents the results to the user, as shown in step 4. Further, at step 5, the user may select one version of the modified candidate image to be used. Further, all the versions of the modified candidate image will be available to the user in the electronic device (100).
FIG. 10D is another example illustrating the modification of at least one portion of the candidate image using at least one reference image by the electronic device (100), according to an embodiment as disclosed herein.
Referring to the FIG. 10D, consider the candidate image comprising a lady standing in front of tall, lush green grass. At step 1, the user selects a portion of the candidate image comprising the tall, lush green grass as the region of interest to be modified by the electronic device (100).
At step 2, the electronic device (100) performs the image segmentation of the candidate image and automatically determines a plurality of reference images which are related to the candidate image. Further, the electronic device (100) displays the plurality of reference images on the screen of the electronic device (100) and allows the user to select the at least one reference image from the plurality of reference images to be used to modify the region of interest in the candidate image.
At step 3, the user selects a reference image from the plurality of reference images, where the reference image comprises a similar landscape to the candidate image, which includes green grass along the mountains. Since, at step 1, the user had selected the portion of the candidate image comprising the tall, lush green grass as the region of interest to be modified, the electronic device (100) automatically modifies the selected portion of the candidate image using the plurality of versions of the reference image and presents the results to the user, as shown in step 4. Further, at step 5, the user may select one version of the modified candidate image to be, for example, shared over a messaging platform, etc. Further, all the versions of the modified candidate image will be available to the user in the electronic device (100).
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
Claims (15)
- A method for modifying a candidate image by an electronic device, the method comprising:obtaining the candidate image;identifying at least one reference image selected from a plurality of reference images associated with the candidate image;generating a plurality of versions of the at least one reference image, wherein each version of the at least one reference image comprises variable visual parameter;modifying at least one portion of the candidate image using at least one version of the plurality of versions; anddisplaying the modified candidate image.
- The method of claim 1, wherein the plurality of reference images associated with the candidate image is identified by:segmenting the candidate image into a plurality of segments;extracting features from each of the plurality of segments of the candidate image;determining a pattern of the candidate image based on the extracted features; andidentifying the plurality of reference images based on the pattern of the candidate image.
- The method of claim 2, wherein generating the plurality of versions of the at least one reference image comprises:
  providing the at least one reference image to a first Generative Adversarial Network (GAN) model; and
  generating the plurality of versions of the at least one reference image using the first GAN model.
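The first GAN model itself is not specified in the claim; the loop below only illustrates the version-generation step, with a placeholder generator that scales brightness standing in for a trained GAN's learned latent-to-image mapping (an assumption for illustration, not the disclosed model):

```python
import numpy as np

def first_gan_generate(reference, z):
    """Placeholder for the first GAN's generator: the latent code z simply
    scales brightness here, standing in for a learned transformation."""
    return np.clip(reference * (0.6 + 0.8 * z), 0, 255)

def generate_versions(reference, n_versions=4):
    """Produce a plurality of versions, each with a different visual parameter."""
    return [first_gan_generate(reference, z)
            for z in np.linspace(0.0, 1.0, n_versions)]

reference = np.full((4, 4, 3), 100.0)
versions = generate_versions(reference)
print(len(versions))  # 4
print([round(float(v.mean()), 1) for v in versions])  # [60.0, 86.7, 113.3, 140.0]
```

Sweeping the latent code yields versions whose visual parameter (brightness, in this toy stand-in) varies monotonically, matching the claim's requirement that each version differ in a visual parameter.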
- The method of claim 1, wherein the variable visual parameter is at least one of a color, a light, an intensity and a gradient.
- The method of claim 1, wherein modifying the candidate image using at least one version of the plurality of versions comprises:
  providing the at least one version of the plurality of versions of the at least one reference image to a second GAN model;
  identifying the at least one portion of the candidate image to be modified; and
  applying the second GAN model to the at least one portion of the candidate image based on the at least one version of the plurality of versions.
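As with the first model, the second GAN is left unspecified; the masked blend below is only an illustrative stand-in showing how a chosen version could be applied to the user-selected portion while the rest of the candidate image is left untouched:

```python
import numpy as np

def second_gan_apply(candidate, version, mask, alpha=0.7):
    """Placeholder for the second GAN: blends the chosen version into the
    selected portion (mask == True) and leaves the rest of the image intact."""
    out = candidate.astype(float).copy()
    out[mask] = (1 - alpha) * out[mask] + alpha * version[mask]
    return out

candidate = np.zeros((4, 4, 3))
version = np.full((4, 4, 3), 200.0)  # one generated version of the reference
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :] = True                   # user-selected portion (top half)
modified = second_gan_apply(candidate, version, mask)
print(modified[0, 0, 0], modified[3, 3, 0])  # 140.0 0.0
```

Only pixels inside the mask change, which mirrors the claimed behavior of modifying just the identified portion of the candidate image based on the selected version.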
- A method for modifying a candidate image by an electronic device, the method comprising:
  obtaining at least one reference image associated with the candidate image;
  applying a first Generative Adversarial Network (GAN) model on the at least one reference image to generate a plurality of versions of the at least one reference image;
  applying a second Generative Adversarial Network (GAN) model to modify at least one portion of the candidate image based on at least one version of the at least one reference image; and
  displaying the modified candidate image.
- The method of claim 6, wherein obtaining the at least one reference image comprises:
  identifying at least one portion of the candidate image to be modified;
  displaying a plurality of reference images associated with the candidate image; and
  determining the at least one reference image among the plurality of reference images, based on user selection.
- The method of claim 6, wherein applying the first GAN model on the at least one reference image to generate the plurality of versions of the reference image comprises:
  providing the at least one reference image to the first GAN model;
  identifying at least one visual parameter associated with the at least one reference image; and
  generating the plurality of versions of the at least one reference image using the first GAN model based on the at least one visual parameter.
- The method of claim 8, wherein the at least one visual parameter is at least one of a color, a light, an intensity and a gradient.
- The method of claim 6, wherein applying the second GAN model to modify at least one portion of the candidate image based on at least one version of the at least one reference image comprises:
  determining the at least one portion of the candidate image to be modified;
  applying the at least one version of the at least one reference image to the at least one portion of the candidate image using the second GAN model; and
  modifying the at least one portion of the candidate image based on the at least one version of the at least one reference image.
- An electronic device for modifying a candidate image, the electronic device comprising:
  a memory; and
  at least one processor coupled to the memory and configured to:
    obtain the candidate image;
    identify at least one reference image selected from a plurality of reference images associated with the candidate image;
    generate a plurality of versions of the at least one reference image, wherein each version of the at least one reference image comprises a variable visual parameter;
    modify at least one portion of the candidate image using at least one version of the plurality of versions; and
    display the modified candidate image.
- The electronic device of claim 11, wherein the at least one processor is further configured to:
  segment the candidate image into a plurality of segments;
  extract features from each of the plurality of segments of the candidate image;
  determine a pattern of the candidate image based on the extracted features; and
  identify the plurality of reference images based on the pattern of the candidate image.
- The electronic device of claim 11, wherein the at least one processor is further configured to:
  provide the at least one reference image to a first Generative Adversarial Network (GAN) model; and
  generate the plurality of versions of the at least one reference image using the first GAN model.
- The electronic device of claim 11, wherein the variable visual parameter is at least one of a color, a light, an intensity and a gradient.
- The electronic device of claim 11, wherein the at least one processor is further configured to:
  provide the at least one version of the plurality of versions of the at least one reference image to a second GAN model;
  identify the at least one portion of the candidate image to be modified; and
  apply the second GAN model to the at least one portion of the candidate image based on the at least one version of the plurality of versions.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN201941050181 | 2019-12-05 | ||
IN201941050181 | 2019-12-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021112350A1 true WO2021112350A1 (en) | 2021-06-10 |
Family
ID=76222034
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/006445 WO2021112350A1 (en) | 2019-12-05 | 2020-05-15 | Method and electronic device for modifying a candidate image using a reference image |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021112350A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018111786A1 (en) * | 2016-12-16 | 2018-06-21 | Microsoft Technology Licensing, Llc | Image stylization based on learning network |
WO2018194863A1 (en) * | 2017-04-20 | 2018-10-25 | Microsoft Technology Licensing, Llc | Visual style transfer of images |
US20180357800A1 (en) * | 2017-06-09 | 2018-12-13 | Adobe Systems Incorporated | Multimodal style-transfer network for applying style features from multi-resolution style exemplars to input images |
US20190164012A1 (en) * | 2017-06-13 | 2019-05-30 | Digital Surgery Limited | State detection using machine-learning model trained on simulated image data |
Non-Patent Citations (1)
Title |
---|
XINYUAN CHEN; CHANG XU; XIAOKANG YANG; LI SONG; DACHENG TAO: "Gated-GAN: Adversarial Gated Networks for Multi-Collection Style Transfer", arXiv.org, Cornell University Library, 201 Olin Library Cornell University, Ithaca, NY 14853, 4 April 2019 (2019-04-04), XP081164743 * |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20895212; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20895212; Country of ref document: EP; Kind code of ref document: A1