WO2023067603A1 - Semantic blending of images - Google Patents

Semantic blending of images

Info

Publication number
WO2023067603A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
machine learning
learning model
vector
generate
Prior art date
Application number
PCT/IL2022/051109
Other languages
French (fr)
Inventor
Hila CHEFER
Sagie BENAIM
Roni PAISS
Lior Wolf
Original Assignee
Ramot At Tel-Aviv University Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ramot At Tel-Aviv University Ltd.
Publication of WO2023067603A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0475Generative networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/094Adversarial learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/60Image enhancement or restoration using machine learning, e.g. neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • Some embodiments described in the present disclosure relate to generative machine learning models and, more specifically, but not exclusively, to blending images according to semantic properties.
  • Style transfer may be the closest work to the disclosed method. Style transfer aims to borrow the style of a target image while keeping the content of a source image.
  • Recent works, such as that of Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge, "Image style transfer using convolutional neural networks," published in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016, and X. Huang and S. Belongie, "Arbitrary style transfer in real-time with adaptive instance normalization," published in 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 1510-1519, are based on neural networks. These works typically transfer elements of style involving texture and are derived from fixed losses and normalization techniques such as the Gram matrix and Adaptive Instance Normalization.
  • StyleCLIP is specific to faces, and considers text-driven manipulations.
  • the disclosure’s method concerns essence transfer from one image to another.
  • a system for image generation comprising at least one processing circuitry, configured to: receive at least one source image and a target image; apply a first machine learning model to generate a first vector comprising a first latent representation of the at least one source image; apply the first machine learning model to generate a second vector comprising a second latent representation of the target image; generate a third vector comprising a latent representation of at least one blended image, using the first vector and the second vector; and apply a second machine learning model to generate the at least one blended image from the third vector.
  • a method for image generation comprising: receiving at least one source image and a target image; applying a first machine learning model to generate a first vector comprising a first latent representation of the at least one source image; applying the first machine learning model to generate a second vector comprising a second latent representation of the target image; generating a third vector comprising a latent representation of at least one blended image, using the first vector and the second vector; and applying a second machine learning model to generate the at least one blended image from the third vector.
  • one or more computer program products comprising instructions for image generation, wherein execution of the instructions by one or more processors of a computing system is to cause the computing system to: receive at least one source image and a target image; apply a first machine learning model to generate a first vector comprising a first latent representation of the at least one source image; apply the first machine learning model to generate a second vector comprising a second latent representation of the target image; generate a third vector comprising a latent representation of at least one blended image, using the first vector and the second vector; and apply a second machine learning model to generate the at least one blended image from the third vector.
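  • As a non-authoritative illustration only, the following Python sketch mirrors the claimed flow above: the first machine learning model encodes the source and target images into latent vectors, a blending operator produces a third vector, and the second machine learning model decodes it into a blended image. The names semantic_encoder and generator, and the simple additive blend, are assumptions introduced here for illustration, not elements of the disclosure.

```python
# Minimal sketch of the claimed flow; semantic_encoder and generator are placeholder
# callables standing in for the first and second machine learning models.
import torch

def blend_images(source_imgs: torch.Tensor, target_img: torch.Tensor,
                 semantic_encoder, generator, alpha: float = 1.0) -> torch.Tensor:
    """source_imgs: (N, C, H, W) batch; target_img: (1, C, H, W)."""
    with torch.no_grad():
        first_vec = semantic_encoder(source_imgs)   # first vector: latents of the source image(s)
        second_vec = semantic_encoder(target_img)   # second vector: latent of the target image
    # Third vector: one possible blending operator (an additive shift toward the target).
    third_vec = first_vec + alpha * (second_vec - first_vec)
    return generator(third_vec)                     # the at least one blended image
```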
  • the first machine learning model was trained on at least two modalities wherein a first modality is visual, a second modality is semantic, and using a loss function comprising a contrastive factor.
  • the second machine learning model is a generative machine learning model trained to generate a visual having local features of a first visual input, and global features of a second visual input.
  • the second machine learning model is trained using a loss function comprising a difference between the encodings generated by the first machine learning model for the at least one blended image and the at least one source image.
  • the second machine learning model is trained using a loss function comprising a difference between the encodings generated by the first machine learning model for the at least one blended image and the target image.
  • the first latent representation comprises elements associated with semantic aspects or global features and elements associated with style aspects or local features.
  • FIG. 1 is a schematic illustration of an exemplary system for semantic image blending, according to some embodiments of the present disclosure
  • FIG. 2 is a schematic block diagram of an exemplary image blending module, according to some embodiments of the present disclosure
  • FIG. 3 is a flowchart of an exemplary process for semantic image blending, according to some embodiments of the present disclosure
  • FIG. 4 is a table of images comparing the essence transfer disclosed to latent blending in the StyleGAN and CLIP spaces and the style transfer method, according to some embodiments of the present disclosure
  • FIG. 5 is a table of images comparing the disclosure to latent blending in the CLIP space and the style transfer method, according to some embodiments of the present disclosure.
  • FIG. 6 is a set of three tables of images generated using the essence transfer disclosed, according to some embodiments of the present disclosure.
  • Some embodiments described in the present disclosure relate to generative machine learning models and, more specifically, but not exclusively, to blending images according to semantic properties.
  • Some digital creations may be described as borrowing the essence of a "target" image I_t and transferring it to a "source" image I_s, creating an output image I_{s,t}, which should blend information from I_s and I_t in a manner that draws semantic attributes from I_t while preserving the identity of image I_s.
  • a definition of essence may transcend that of known style transfer methods, which focused on what is often referred to as style, i.e., low-level feature statistics, which are usually local.
  • the essence the disclosure considers is more general and includes unique style elements, such as complexion or texture, but also semantic elements, such as those illustrated in the examples below.
  • the disclosure experiments with two targets, one of a young boy and one of an older man.
  • the young boy transfers the age property to source images
  • the older man transfers the age property, as well as the hair color and wrinkles.
  • a viewer may observe that the disclosed method preserves the identity of source images, while transferring the most noticeable semantic features of target images.
  • a more rigorous definition of the disclosure's goal may be elusive; however, the benefit may be observed as semantic properties that may be straightforwardly described, such as human age, are transferred.
  • latent spaces of high-level vision networks, such as those with image understanding capabilities, may be additive.
  • the disclosure assumes that the learned transformation is doubly additive, i.e. both in the latent space of the semantic-image generator, and in the latent space of the image understanding engine.
  • the disclosure may obtain a transformation that is based on a constant shift in the generator space and leads to a constant difference in the high-level description of the image.
  • StyleGAN may be used as an exemplary generator, also referred to herein as a second machine learning model. Additivity in the space of StyleGAN was demonstrated in some works exploring latent semantic spaces of generative adversarial networks (GAN) for linearly interpolating between different images along semantic directions, as well as for the manipulation of semantic attributes as done in StyleCLIP.
  • the CLIP network, which has shown zero-shot capabilities across multiple domains such as image classification and adaptation of generated images, may be used. CLIP was also shown to behave additively.
  • the disclosure may derive style from CLIP, a method for the semantic association of text and images.
  • two images are close, or similar, to each other if their textual association is close.
  • Such similarity may consider unique style elements, such as texture or complexion. It may also consider semantic elements, such as gender and facial attributes, but not other semantic elements such as identity.
  • the disclosure argues that this notion of style, which the disclosure uses here, is more general.
  • Image manipulation: the disclosure's work is also related to recent image manipulation works based on a pre-trained generator or CLIP; however, other models may be used.
  • the method disclosed may apply training using two types of loss terms.
  • the first term may ensure that the transformed image is semantically similar to the target image I_t in the latent space of the first machine learning model, for example CLIP.
  • the second type of constraint links the constant shift in the latent space of the generator to a constant shift in the latent space of the first machine learning model.
  • the disclosure demonstrates the ability to transfer the essence of a target image I_t to a source image I_s, while preserving the identity of I_s.
  • the disclosure demonstrates coherent results even when the target images are out of the domain of the images generated by StyleGAN; moreover, the disclosure's method may not require inversion of the target image.
  • the disclosure demonstrates that the directions found by the disclosure's method may be global and may be applied to any other source image, for performing the same semantic transfer of essence.
  • the blending operator may be optimized to be simultaneously additive in both semantic latent spaces and generative latent spaces.
  • the disclosed method comprises four components, given as inputs: (i) a generator G, which, given a vector z, generates an image G(z), also referred to as the second machine learning model; (ii) an image recognition engine C, which, given an image I, provides a latent representation of the essence of the image, C(I), in some latent space, also referred to as the first machine learning model; (iii) a target image I_t, from which the essence is extracted; and (iv) a set of, i.e., at least one, source images S, which are used to provide the statistics of images for which the method is applied.
  • the disclosure defines S as a collection of z vectors, and the set of source images as G(z).
  • representing a source image using a vector z rather than the set of image values in I_s allows the disclosed method to directly define the double-linearity property.
  • the image may be converted to z by finding an optimal z using a StyleGAN encoding inversion method.
  • The disclosure notes, however, that the disclosure's formulation does not require, at any stage, the inversion of I_t.
  • linearity is expressed by:
  • H(z) = G(z + b) (1), for some shift vector b, where b represents the essence latent.
  • Linearity in the latent space of the image recognition engine is expressed as: C(G(z + b)) − C(G(z)) = d (2), for some fixed d.
  • modifying the latent z in G’s latent space with b induces a constant semantic change in the latent space of C.
  • This property is otherwise referred to in the literature as a global semantic direction, as shown by StyleCLIP.
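  • As a minimal sketch, assuming a PyTorch generator G and an essence latent b as in Eq. (1), the doubly-additive parameterization H(z) = G(z + b) may be written as follows; the class name and interfaces are assumptions for illustration only.

```python
# Sketch of the essence shift of Eq. (1): H(z) = G(z + b), with b a learnable,
# constant shift in the generator's latent space.
import torch

class EssenceShift(torch.nn.Module):
    def __init__(self, generator: torch.nn.Module, latent_dim: int):
        super().__init__()
        self.G = generator                                     # second machine learning model
        self.b = torch.nn.Parameter(torch.zeros(latent_dim))   # essence latent (shift vector)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # The same shift b is applied to every latent z; the losses discussed below
        # encourage the induced semantic change d in C's space (Eq. (2)) to be constant.
        return self.G(z + self.b)
```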
  • the disclosure have defined the problem of learning a pair of semantic directions b,d, in two different latent spaces, such that (b,d) match.
  • the disclosure wishes to add a constraint that ties this direction to I_t.
  • the disclosure wishes to maximize similarity in the semantic space provided by the recognition engine C between I_t and the generated images H(z).
  • equivalently, the disclosure wishes to minimize the sum of differences between the semantic encodings of the generated images and that of the target, i.e., the sum over z in S of ||C(H(z)) − C(I_t)||.
  • the disclosure now define the disclosure’s method, based on the implementations of (1) to (5).
  • the disclosure first assumes that when considering vectors in the semantic space of the first machine learning model, for example CLIP, it may be beneficial to employ their normalized version, i.e., employ the cosine distance instead of the L2 norm, in accordance with the training process of CLIP, where text and image encoding are both normalized.
  • the loss term L_transfer may be applied over batches of source images drawn from S: L_transfer(b) = (1/N) · Σ_i D_cos(C(G(z_i + b)), C(I_t)), summing over the N latents z_i in the batch, where N is the batch size and D_cos denotes the cosine distance discussed above. This transfer loss estimates the similarity between the image encoding of the target image and the encodings of the blended images generated from the source batch.
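  • A minimal sketch of one way such a transfer loss could be computed is shown below, assuming a PyTorch generator G, a semantic encoder C, and the cosine distance over normalized encodings as discussed above; all names are placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cosine_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # With L2-normalized encodings, the cosine distance is 1 minus the dot product.
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    return 1.0 - (a * b).sum(dim=-1)

def transfer_loss(C, G, b: torch.Tensor, z_batch: torch.Tensor,
                  target_img: torch.Tensor) -> torch.Tensor:
    """Mean cosine distance between C of the blended images G(z_i + b) and C(I_t)."""
    blended_codes = C(G(z_batch + b))        # encodings of the blended images
    target_code = C(target_img)              # encoding of the target image I_t
    return cosine_distance(blended_codes, target_code).mean()
```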
  • the second concept the disclosure aims to maintain is consistency.
  • the goal of essence transfer is to change the essence of the source image using a collection of semantic attributes that encapsulates the essence of the target image. These attributes are independent of the source image.
  • the disclosure demands that the semantic edits induced by the direction b be consistent across the source images, using CLIP's latent space. This is expressed in (4) above and translates to the following loss: L_consistency(b) = Σ_{i<j} D_cos(d_i, d_j), where d_i = C(G(z_i + b)) − C(G(z_i)), D_cos denotes the cosine distance, and, as before, N is the batch size.
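  • Continuing the sketch above (and reusing cosine_distance), one possible form of such a consistency loss penalizes disagreement between the per-source semantic edit directions d_i; the pairwise formulation below is an assumption for illustration.

```python
def consistency_loss(C, G, b: torch.Tensor, z_batch: torch.Tensor) -> torch.Tensor:
    """Encourage d_i = C(G(z_i + b)) - C(G(z_i)) to point in the same direction for all i."""
    d = C(G(z_batch + b)) - C(G(z_batch))    # per-source semantic edit directions
    n = d.shape[0]
    total, pairs = d.new_zeros(()), 0
    for i in range(n):
        for j in range(i + 1, n):
            total = total + cosine_distance(d[i:i + 1], d[j:j + 1]).squeeze()
            pairs += 1
    return total / max(pairs, 1)
```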
  • the first machine learning model, such as CLIP, may be trained on a large corpus of matched images and text captions and is, therefore, much richer semantically than networks that perform multiclass classification for a limited number of classes only. It has been shown to be extremely suitable for zero-shot computer vision tasks; here, the disclosure demonstrates its ability to support semantic blending. While the StyleGAN space already performs reasonable blending for images of, e.g., two children, it struggles when blending images with different attributes. On the other hand, CLIP by itself struggles to maintain identity when blending. The combination of the two seems to provide a powerful blending technique, which enjoys the benefits of both representations. This is enabled through a novel method, which assumes additivity in the first latent space and ensures additivity in the second through optimization.
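  • The sketch below shows how the essence latent b might be optimized with the two loss terms above; the optimizer choice, learning rate, weighting factor, and step count are illustrative assumptions rather than values prescribed by the disclosure.

```python
def optimize_essence(C, G, z_batch: torch.Tensor, target_img: torch.Tensor,
                     latent_dim: int, steps: int = 300, lr: float = 0.05,
                     lam: float = 0.5) -> torch.Tensor:
    b = torch.zeros(latent_dim, device=z_batch.device, requires_grad=True)
    opt = torch.optim.Adam([b], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (transfer_loss(C, G, b, z_batch, target_img)
                + lam * consistency_loss(C, G, b, z_batch))
        loss.backward()
        opt.step()
    # The learned direction b may then be added to any other source latent z to
    # perform the same semantic transfer of essence.
    return b.detach()
```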
  • Embodiments may be a system, a method, and/or a computer program product.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the embodiments.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of embodiments may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of embodiments.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • FIG. 1 is a schematic illustration of an exemplary system for semantic image blending, according to some embodiments of the present disclosure.
  • An exemplary computer environment 100 may be used for executing processes such as 300 for generating blended images. Further details about these exemplary processes follow as FIG. 2 and FIG. 3 are described.
  • CPP embodiment is a term used in the present disclosure to describe any set of one, or more, storage media (also called “mediums”) collectively included in a set of one, or more, storage devices that collectively include machine readable code corresponding to instructions and/or data for performing computer operations specified in a given CPP claim.
  • storage device is any tangible device that can retain and store instructions for use by a computer processor.
  • the computer readable storage medium may be an electronic storage medium, a magnetic storage medium, an optical storage medium, an electromagnetic storage medium, a semiconductor storage medium, a mechanical storage medium, or any suitable combination of the foregoing.
  • Some known types of storage devices that include these mediums include: diskette, hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or Flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disk (DVD), memory stick, floppy disk, mechanically encoded device (such as punch cards or pits / lands formed in a major surface of a disc) or any suitable combination of the foregoing.
  • a computer readable storage medium is not to be construed as storage in the form of transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide, light pulses passing through a fiber optic cable, electrical signals communicated through a wire, and/or other transmission media.
  • data is typically moved at some occasional points in time during normal operations of a storage device, such as during access, de-fragmentation or garbage collection, but this does not render the storage device as transitory because the data is not transitory while it is stored.
  • Computing environment 100 contains an example of an environment for the execution of at least some of the computer code involved in performing the inventive methods, such as a blending module for images 200.
  • computing environment 100 includes, for example, computer 102, wide area network (WAN) 108, end user device (EUD) 132, remote server 104, public cloud 150, and private cloud 106.
  • computer 102 includes processor set 110 (including processing circuitry 120 and cache 134), communication fabric 160, volatile memory 112, persistent storage 116 (including operating system 122 and block 200, as identified above), peripheral device set 114 (including user interface (UI) device set 126, storage 124, and Internet of Things (IoT) sensor set 128), and network module 118.
  • Remote server 104 includes remote database 130.
  • Public cloud 150 includes gateway 140, cloud orchestration module 146, host physical machine set 142, virtual machine set 148, and container set 144.
  • COMPUTER 102 may take the form of a desktop computer, laptop computer, tablet computer, smart phone, smart watch or other wearable computer, mainframe computer, quantum computer or any other form of computer or mobile device now known or to be developed in the future that is capable of running a program, accessing a network or querying a database, such as remote database 130.
  • performance of a computer-implemented method may be distributed among multiple computers and/or between multiple locations.
  • this presentation of computing environment 100 detailed discussion is focused on a single computer, specifically computer 102, to keep the presentation as simple as possible.
  • Computer 102 may be located in a cloud, even though it is not shown in a cloud in Figure 1.
  • computer 102 is not required to be in a cloud except to any extent as may be affirmatively indicated.
  • PROCESSOR SET 110 includes one, or more, computer processors of any type now known or to be developed in the future.
  • a processor set may include one or more of a central processing unit (CPU), a microcontroller, a parallel processor supporting multiple data streams, a digital signal processing (DSP) unit, a graphics processing unit (GPU) module, and the like, as well as optical processors, quantum processors, and processing units based on technologies that may be developed in the future.
  • Processing circuitry 120 may be distributed over multiple packages, for example, multiple, coordinated integrated circuit chips. Processing circuitry 120 may implement multiple processor threads and/or multiple processor cores.
  • Cache 134 is memory that is located in the processor chip package(s) and is typically used for data or code that should be available for rapid access by the threads or cores running on processor set 110. Cache memories are typically organized into multiple levels depending upon relative proximity to the processing circuitry. Alternatively, some, or all, of the cache for the processor set may be located “off chip.” In some computing environments, processor set 110 may be designed for working with qubits and performing quantum computing.
  • Computer readable program instructions are typically loaded onto computer 102 to cause a series of operational steps to be performed by processor set 110 of computer 102 and thereby effect a computer-implemented method, such that the instructions thus executed will instantiate the methods specified in flowcharts and/or narrative descriptions of computer-implemented methods included in this document (collectively referred to as “the inventive methods”).
  • These computer readable program instructions are stored in various types of computer readable storage media, such as cache 134 and the other storage media discussed below.
  • the program instructions, and associated data are accessed by processor set 110 to control and direct performance of the inventive methods.
  • at least some of the instructions for performing the inventive methods may be stored in block 200 in persistent storage 116.
  • COMMUNICATION FABRIC 160 is the signal conduction paths that allow the various components of computer 102 to communicate with each other.
  • this fabric is made of switches and electrically conductive paths, such as the switches and electrically conductive paths that make up busses, bridges, physical input / output ports and the like.
  • Other types of signal communication paths may be used, such as fiber optic communication paths and/or wireless communication paths.
  • VOLATILE MEMORY 112 is any type of volatile memory now known or to be developed in the future. Examples include dynamic type random access memory (RAM) or static type RAM. Typically, the volatile memory is characterized by random access, but this is not required unless affirmatively indicated. In computer 102, the volatile memory 112 is located in a single package and is internal to computer 102, but, alternatively or additionally, the volatile memory may be distributed over multiple packages and/or located externally with respect to computer 102.
  • PERSISTENT STORAGE 116 is any form of non-volatile storage for computers that is now known or to be developed in the future.
  • the non-volatility of this storage means that the stored data is maintained regardless of whether power is being supplied to computer 102 and/or directly to persistent storage 116.
  • Persistent storage 116 may be a read only memory (ROM), but typically at least a portion of the persistent storage allows writing of data, deletion of data and re-writing of data. Some familiar forms of persistent storage include magnetic disks and solid state storage devices.
  • Operating system 122 may take several forms, such as various known proprietary operating systems or open source Portable Operating System Interface type operating systems that employ a kernel.
  • the code included in block 200 typically includes at least some of the computer code involved in performing the inventive methods.
  • PERIPHERAL DEVICE SET 114 includes the set of peripheral devices of computer 102.
  • Data communication connections between the peripheral devices and the other components of computer 102 may be implemented in various ways, such as Bluetooth connections, Near-Field Communication (NFC) connections, connections made by cables (such as universal serial bus (USB) type cables), insertion type connections (for example, secure digital (SD) card), connections made though local area communication networks and even connections made through wide area networks such as the internet.
  • UI device set 126 may include components such as a display screen, speaker, microphone, wearable devices (such as goggles and smart watches), keyboard, mouse, printer, touchpad, game controllers, and haptic devices.
  • Storage 124 is external storage, such as an external hard drive, or insertable storage, such as an SD card. Storage 124 may be persistent and/or volatile. In some embodiments, storage 124 may take the form of a quantum computing storage device for storing data in the form of qubits. In embodiments where computer 102 is required to have a large amount of storage (for example, where computer 102 locally stores and manages a large database) then this storage may be provided by peripheral storage devices designed for storing very large amounts of data, such as a storage area network (SAN) that is shared by multiple, geographically distributed computers.
  • IoT sensor set 128 is made up of sensors that can be used in Internet of Things applications. For example, one sensor may be a thermometer and another sensor may be a motion detector.
  • Network module 118 is the collection of computer software, hardware, and firmware that allows computer 102 to communicate with other computers through WAN 108.
  • Network module 118 may include hardware, such as modems or Wi-Fi signal transceivers, software for packetizing and/or de-packetizing data for communication network transmission, and/or web browser software for communicating data over the internet.
  • network control functions and network forwarding functions of network module 118 are performed on the same physical hardware device. In other embodiments (for example, embodiments that utilize software-defined networking (SDN)), the control functions and the forwarding functions of network module 118 are performed on physically separate devices, such that the control functions manage several different network hardware devices.
  • Computer readable program instructions for performing the inventive methods can typically be downloaded to computer 102 from an external computer or external storage device through a network adapter card or network interface included in network module 118.
  • WAN 108 is any wide area network (for example, the internet) capable of communicating computer data over non-local distances by any technology for communicating computer data, now known or to be developed in the future.
  • the WAN may be replaced and/or supplemented by local area networks (LANs) designed to communicate data between devices located in a local area, such as a Wi-Fi network.
  • the WAN and/or LANs typically include computer hardware such as copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and edge servers.
  • EUD 132 is any computer system that is used and controlled by an end user (for example, a customer of an enterprise that operates computer 102), and may take any of the forms discussed above in connection with computer 102.
  • EUD 132 typically receives helpful and useful data from the operations of computer 102. For example, in a hypothetical case where computer 102 is designed to provide a recommendation to an end user, this recommendation would typically be communicated from network module 118 of computer 102 through WAN 108 to EUD 132. In this way, EUD 132 can display, or otherwise present, the recommendation to an end user.
  • EUD 132 may be a client device, such as thin client, heavy client, mainframe computer, desktop computer and so on.
  • REMOTE SERVER 104 is any computer system that serves at least some data and/or functionality to computer 102.
  • Remote server 104 may be controlled and used by the same entity that operates computer 102.
  • Remote server 104 represents the machine(s) that collect and store helpful and useful data for use by other computers, such as computer 102. For example, in a hypothetical case where computer 102 is designed and programmed to provide a recommendation based on historical data, then this historical data may be provided to computer 102 from remote database 130 of remote server 104.
  • PUBLIC CLOUD 150 is any computer system available for use by multiple entities that provides on-demand availability of computer system resources and/or other computer capabilities, especially data storage (cloud storage) and computing power, without direct active management by the user. Cloud computing typically leverages sharing of resources to achieve coherence and economies of scale.
  • the direct and active management of the computing resources of public cloud 150 is performed by the computer hardware and/or software of cloud orchestration module 146.
  • the computing resources provided by public cloud 150 are typically implemented by virtual computing environments that run on various computers making up the computers of host physical machine set 142, which is the universe of physical computers in and/or available to public cloud 150.
  • the virtual computing environments (VCEs) typically take the form of virtual machines from virtual machine set 148 and/or containers from container set 144.
  • VCEs may be stored as images and may be transferred among and between the various physical machine hosts, either as images or after instantiation of the VCE.
  • Cloud orchestration module 146 manages the transfer and storage of images, deploys new instantiations of VCEs and manages active instantiations of VCE deployments.
  • Gateway 140 is the collection of computer software, hardware, and firmware that allows public cloud 150 to communicate through WAN 108.
  • VCEs can be stored as “images.” A new active instance of the VCE can be instantiated from the image.
  • Two familiar types of VCEs are virtual machines and containers.
  • a container is a VCE that uses operating-system-level virtualization. This refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances, called containers. These isolated user-space instances typically behave as real computers from the point of view of programs running in them.
  • a computer program running on an ordinary operating system can utilize all resources of that computer, such as connected devices, files and folders, network shares, CPU power, and quantifiable hardware capabilities.
  • programs running inside a container can only use the contents of the container and devices assigned to the container, a feature which is known as containerization.
  • PRIVATE CLOUD 106 is similar to public cloud 150, except that the computing resources are only available for use by a single enterprise. While private cloud 106 is depicted as being in communication with WAN 108, in other embodiments a private cloud may be disconnected from the internet entirely and only accessible through a local/private network.
  • a hybrid cloud is a composition of multiple clouds of different types (for example, private, community or public cloud types), often respectively implemented by different vendors. Each of the multiple clouds remains a separate and discrete entity, but the larger hybrid cloud architecture is bound together by standardized or proprietary technology that enables orchestration, management, and/or data/application portability between the multiple constituent clouds.
  • public cloud 150 and private cloud 106 are both part of a larger hybrid cloud.
  • FIG. 2 is a schematic block diagram of an exemplary image blending module, according to some embodiments of the present disclosure.
  • FIG. 2 shows an exemplary blending module 250, which may receive one or more source images 210 and a target image 212.
  • the source images, as well as the target image may be processed by the first machine learning model 220 to generate a latent representation or an embedding.
  • CLIP is an example of a first machine learning model; however, other models are capable of embedding visual information and may be used.
  • Some alternative implementations may use a somewhat different model; however, the latent representations should be made compatible.
  • the operator 238, which may be additive, may be used to generate a third vector 234 for each source image.
  • the operator may apply additivity or other characteristics of the embedding space to generate a blended representation.
  • the second machine learning model 240 may generate a blended image 260 for each source image.
  • StyleGAN is an example of a second machine learning model, however other generative neural networks and similar models are known, and are expected to be developed in the future, and may also be used.
  • FIG. 3 is a flowchart of an exemplary process for semantic image blending, according to some embodiments of the present disclosure.
  • the processing circuitry 120 may execute the exemplary process 300 for blending images for a variety of purposes such as generating visualizations, artistic goals, decoration, and/or the like.
  • the exemplary process 300 starts, as shown in 302, with receiving at least one source image and a target image
  • the process may aim at creating an image corresponding to a source image, or a plurality of source images, which shares a property with a target image.
  • for example, a plurality of blended images may be a plurality of portraits of the people in the source images as they are expected to appear when elderly.
  • Some implementations may receive the image as a fixed-size bitmap, with three color channels or greyscale, for example an integer array of a size such as 256x256x3, or an HD image; however, other implementations may be adapted to receive images of various sizes, in different formats such as PNG or JPEG, as a text description, as an embedding, and/or the like.
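  • As one possible illustration (not part of the disclosure), the following sketch receives an image file and converts it into a fixed-size 256x256x3 integer array, assuming the Pillow and NumPy libraries as dependencies.

```python
import numpy as np
from PIL import Image

def load_image(path: str, size: int = 256) -> np.ndarray:
    """Load a PNG/JPEG/greyscale image and return a (size, size, 3) uint8 array."""
    img = Image.open(path).convert("RGB")           # force three color channels
    img = img.resize((size, size), Image.LANCZOS)   # fixed-size bitmap
    return np.asarray(img, dtype=np.uint8)
```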
  • the images may be received from an end user device 132, the public cloud 150, the private cloud 106, the persistent storage 116, a device from the peripheral device set 114, the volatile memory 112, and/or the like, and may be transferred using the communication fabric 160.
  • the exemplary process 300 continues, as shown in 304, with applying a first machine learning model to generate a first vector comprising a first latent representation of the at least one source image.
  • the first machine learning model may be executed using the processing circuitry 120, by one or more processors, and may benefit from dedicated processors with enhanced parallel processing, digital signal processing, graphic processing units, and/or the like.
  • the first machine learning model was trained on at least two modalities wherein a first modality is visual, a second modality is semantic, and using a loss function comprising a contrastive factor, for example CLIP.
  • Other implementations may be based on extracting data from auto encoders, neural networks trained in adversarial manner, machine learning models trained for classification or segmentation, and/or the like, as well as models expected to be developed in the future.
  • Mapping to a latent space may be generated for purposes of dimensionality reduction, as well as extracting certain characteristics of an image, sound, table, and/or the like. Characteristics may comprise objects present in the image, their locations, relations, background, lighting, color, size, pose, expression, texture, artistic style aspects, and/or the like, as well as tacit properties which are difficult to express semantically.
  • the first vector may comprise an embedding, or a latent representation, of each of the at least one source image, and may also comprise metadata and the like.
  • the first latent representation may comprise elements associated with semantic aspects or global features and elements associated with style aspects or local features.
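  • One possible way to generate such a first vector is sketched below using the open-source CLIP package as an example of the first machine learning model; the model variant "ViT-B/32" and the surrounding code are illustrative choices, not requirements of the disclosure.

```python
# Encoding an image with CLIP (https://github.com/openai/CLIP); the encoding is
# normalized, in line with the cosine-distance discussion above.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def encode_image(path: str) -> torch.Tensor:
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        features = model.encode_image(image)
    return features / features.norm(dim=-1, keepdim=True)
```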
  • the exemplary process 300 continues, as shown in 306, with applying the first machine learning model to generate a second vector comprising a second latent representation of the target image.
  • the second latent representation may be compatible with the first latent representation and comprise elements associated with semantic aspects or global features and elements associated with style aspects or local features in corresponding parts of the associated vector.
  • the second latent representation may be generated by the same model used for generating the first latent representation; however, alternative implementations may apply ablations and modifications to the model, or use a different model, provided adequate compatibility pertaining to at least one property the implementation aims to extract and apply to the source image. Mapping to the latent space may be similar to that of block 304; however, a different mapping that maintains adequate compatibility pertaining to the at least one property may be used.
  • the exemplary process 300 continues, as shown in 308, with generating a third vector comprising latent representation of at least one blended image, using the first vector and the second vector.
  • Generating the third vector may comprise a linear additive operator, a multiplicative operator, or a different operator adapted to the characteristic of the latent, or the embedding space. Some known latent spaces have been shown to be linearly additive.
  • the implementation may aim to generate blending which gives rise to an image having some, or all, of the properties of an associated source image, and one or more properties extracted from the target image.
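  • Two simple blending operators are sketched below for illustration; the weighting parameter alpha and the specific operators are assumptions, and the appropriate choice depends on the characteristics of the latent space.

```python
import torch

def blend_additive(src_vec: torch.Tensor, tgt_vec: torch.Tensor,
                   alpha: float = 1.0) -> torch.Tensor:
    # Additive shift of the source representation toward the target representation.
    return src_vec + alpha * tgt_vec

def blend_interpolate(src_vec: torch.Tensor, tgt_vec: torch.Tensor,
                      alpha: float = 0.5) -> torch.Tensor:
    # Linear interpolation, suitable when the latent space behaves linearly additively.
    return (1.0 - alpha) * src_vec + alpha * tgt_vec
```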
  • the process 300 may continue by applying the second machine learning based model, executed by the processing circuitry 120, to generate the at least one blended image from the third vector.
  • the second machine learning model is a generative machine learning model trained to generate a visual having local features of a first visual input, and global features of a second visual input.
  • the second machine learning model may be based on generative adversarial network (GAN) architectures such as StyleGAN. It should be noted that StyleGAN may have a plurality of versions and implementations.
  • Alternative implementations may derive the second machine learning model from DALL-E, Make-A-Scene, GauGAN, BigGAN, CycleGAN, deep convolutional GAN (DCGAN), self-attention GAN (SAGAN), transformer GAN (TransGAN), bidirectional GAN (BiGAN), adversarial autoencoder based generative models such as variational autoencoder GAN (VAEGAN), and other generative image models, either known or developed in the future.
  • the second machine learning model may be trained using a loss function comprising a difference between the encodings generated by the first machine learning model for the at least one blended image and the at least one source image.
  • the second machine learning model may also be trained using a loss function comprising a difference between the encodings generated by the first machine learning model for the at least one blended image and the target image.
  • the loss function may comprise other elements used by known models, an adversarial loss, and other elements that may be developed in the future.
  • the second machine learning model may also be executed using the processing circuitry 120, by one or more processors, and also benefit from accelerators based on parallel processing, either by single instruction multiple data (SIMD) or by different methods, graphic processing units (GPU) and/or the like.
  • the at least one blended image may have the same size and color depth as the at least one source image and/or the target image; however, some implementations may generate the blended image in a different resolution, a fixed resolution, or an adjustable resolution, or may apply super-resolution, compression, and/or the like.
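  • The sketch below illustrates generating the blended image from the third vector with a placeholder generator and optionally matching the output resolution; the generator interface, the bilinear resizing, and the output value range are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def generate_blended(generator, third_vec: torch.Tensor, out_size: int = 256) -> torch.Tensor:
    with torch.no_grad():
        img = generator(third_vec)                    # e.g., a StyleGAN-like decoder
    # Optionally adjust the resolution to match the source/target images.
    img = F.interpolate(img, size=(out_size, out_size), mode="bilinear", align_corners=False)
    return img.clamp(-1.0, 1.0)                       # assumed output range of the generator
```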
  • FIG. 4 is a table of images comparing the essence transfer disclosed to latent blending in the StyleGAN and CLIP spaces and the style transfer method, according to some embodiments of the present disclosure.
  • FIG. 4 shows images generated through application of the disclosed method, comparing the disclosed method, shown as 'Ours', with StyleGAN latent blending (1), CLIP latent blending (2), and style transfer (3).
  • the first row depicts the source images, and the first column of each row depicts the target image.
  • the second row depicts the result of applying the disclosed method to transfer the essence of the target to the source.
  • the third, fourth, and fifth rows present the results of prior art.
  • the implementation of the disclosure used to generate the images in the table is based on a standard latent optimizer. The same hyperparameters are used for each type of experiment.
  • source and target images are generated randomly (FIG. 4)
  • FIG. 4 shows that the disclosure produces results that transcend both latent blending in a single latent space and traditional style transfer.
  • FIG. 4 shows results of using the disclosed method for essence transfer in comparison to latent blending in the StyleGAN and CLIP spaces and in comparison to the style transfer method described in "Image style transfer using convolutional neural networks," published in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
  • Some implementations may define blending in the StyleGAN space as the average of the latent representations of the target and the source, i.e., for latents z_s and z_t for which G(z_s) = I_s and G(z_t) = I_t hold, some implementations may define the blended image as I_{s,t} = G((z_s + z_t) / 2).
  • some implementations may define blending in the CLIP space as the image I_{s,t} for which C(I_{s,t}) = (C(I_s) + C(I_t)) / 2 holds.
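  • For illustration, the two single-space baselines above may be sketched as follows, under the reconstructed definitions; G, C, the optimizer settings, and the initialization are placeholders assumed here, not prescribed by the disclosure.

```python
import torch
import torch.nn.functional as F

def stylegan_space_blend(G, z_s: torch.Tensor, z_t: torch.Tensor) -> torch.Tensor:
    # Average the generator latents of the source and target, then decode.
    return G((z_s + z_t) / 2.0)

def clip_space_blend(G, C, z_init: torch.Tensor, I_s: torch.Tensor, I_t: torch.Tensor,
                     steps: int = 200, lr: float = 0.05) -> torch.Tensor:
    # Optimize a latent of G whose semantic encoding matches the average of C(I_s) and C(I_t).
    target_code = (C(I_s) + C(I_t)) / 2.0
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (1.0 - F.cosine_similarity(C(G(z)), target_code, dim=-1)).mean()
        loss.backward()
        opt.step()
    return G(z.detach())
```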
  • FIG. 5 is a table of images comparing the disclosure to latent blending in the CLIP space and the style transfer method, according to some embodiments of the present disclosure.
  • Blending results for StyleGAN which was trained on churches are shown in FIG. 5.
  • the first row presents the source images and the first column is the target.
  • This figure presents essence transfer using the disclosed method, blending in the CLIP space, and style transfer following "Image style transfer using convolutional neural networks". This example does not present blending in StyleGAN's space since the target is a real image and may not be inverted using existing models.
  • Some implementations may find the image corresponding to the blended CLIP encoding by optimizing, in the latent space of G, for a latent whose encoding matches it. It may be observed that while single-representation blending methods either change the identity or change unrelated semantic attributes, such as the background of the image, the disclosure's method is able to preserve the identity of the person in the source image while adopting only the essential semantic features from the target image. Similarly, FIG. 5 compares the results of the disclosure's method to blending in the CLIP space and to the results of the style transfer method. Since the target images in the churches experiment are all real images, some implementations do not invert them. Thus, StyleGAN blending is not presented in this case.
  • FIG. 6 is a set of three tables of images generated using the essence transfer disclosed, according to some embodiments of the present disclosure.
  • the tables show images as follows: (a) Results of the disclosure's method with StyleGAN for faces. The inputs are the source images (top row) and target images (left column). Notice how the disclosure's method is able to produce coherent results, even with out-of-domain targets. (b) Target consistency results. (c) As in (a), but with churches.
  • Some implementations show that the semantic properties defined as the essence of a target are consistent. To this end, FIG. 6(b) presents the results of applying the disclosure's method to the same set of source images, with two different target images of the same person. As can be seen, the transferred semantic properties are hair and skin color, and wrinkles, for both targets. Notice how the second target image has slightly brighter hair, resulting in slightly brighter hair being transferred to the sources, in addition to the distinct expression of the first target, which is also transferred to the sources.
  • FIG. 6 (a) demonstrates some results for out of domain targets and shows that the method is able to produce coherent results, even though some targets are animated.
  • the results of the transfer process in this case resemble animated images; the results shown were generated using only StyleGAN, with no other generators involved in the process.
  • FIG. 6(c) demonstrates results of using the disclosure’s method for StyleGAN with churches.
  • a compound or “at least one compound” may include a plurality of compounds, including mixtures thereof.
  • range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of embodiments. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
  • a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range.
  • the phrases "ranging/ranges between" a first indicated number and a second indicated number and "ranging/ranges from" a first indicated number "to" a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Databases & Information Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The conceptual blending of two signals is a semantic task that may underlie both creativity and intelligence. The disclosure proposes to perform such blending in a way that incorporates two latent spaces: that of the generator network and that of the semantic network. For the generator, the disclosure may employ the StyleGAN generative neural network, and for the semantic network, the image-language matching network of CLIP. The disclosure comprises a blending operator that is optimized to be simultaneously additive in both latent spaces. The disclosure may generate blendings that appear more natural than those obtained in each space separately.

Description

SEMANTIC BLENDING OF IMAGES
RELATED APPLICATION
This application claims the benefit of priority of U.S. Provisional Patent Application No. 63/270,389 filed on October 21, 2021, the contents of which are incorporated herein by reference in their entirety.
BACKGROUND
Some embodiments described in the present disclosure relate to generative machine learning models and, more specifically, but not exclusively, to blending images according to semantic properties.
Style transfer may be the closest work to the disclosed method. Style transfer aims to borrow the style of a target image, while keeping the content of a source image. Recent works, such as that of Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge, "Image style transfer using convolutional neural networks," published in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, and [5] X. Huang and S. Belongie, "Arbitrary style transfer in real-time with adaptive instance normalization," published in 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 1510-1519, are based on neural networks. These works typically transfer elements of style involving texture and are derived from fixed losses and normalization techniques such as the Gram matrix and Adaptive Instance Normalization.
One set of works typically manipulates an image based on finding a set of possibly disentangled and semantic directions. These works typically borrow the semantic meaning from the generator itself. A recent work, StyleCLIP, borrows the semantic meaning from the CLIP space, in a manner similar to the disclosure. However, unlike the disclosure’s method, StyleCLIP is specific to faces and considers text-driven manipulations. In contrast, the disclosure’s method concerns essence transfer from one image to another.
SUMMARY
It is an object of the present disclosure to describe a system and a method for blending images using a first machine learning model to generate embeddings of the images, an operator in a latent space using the embeddings of the images, and at least one blended image generated using the embedding produced by the operator, wherein the training of the blending operator aims for simultaneous additivity in a plurality of latent spaces. The foregoing and other objects are achieved by the features of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.
According to an aspect of some embodiments of the present invention there is provided a system for image generation, comprising at least one processing circuitry, configured to: receive at least one source image and a target image; apply a first machine learning model to generate a first vector comprising a first latent representation of the at least one source image; apply the first machine learning model to generate a second vector comprising a second latent representation of the target image; generate a third vector comprising a latent representation of at least one blended image, using the first vector and the second vector; and apply a second machine learning model to generate the at least one blended image from the third vector.
According to an aspect of some embodiments of the present invention there is provided a method for image generation, comprising: receiving at least one source image and a target image; applying a first machine learning model to generate a first vector comprising a first latent representation of the at least one source image; applying the first machine learning model to generate a second vector comprising a second latent representation of the target image; generating a third vector comprising a latent representation of at least one blended image, using the first vector and the second vector; and applying a second machine learning model to generate the at least one blended image from the third vector.
According to an aspect of some embodiments of the present invention there is provided one or more computer program products comprising instructions for image generation, wherein execution of the instructions by one or more processors of a computing system is to cause the computing system to: receive at least one source image and a target image; apply a first machine learning model to generate a first vector comprising a first latent representation of the at least one source image; apply the first machine learning model to generate a second vector comprising a second latent representation of the target image; generate a third vector comprising a latent representation of at least one blended image, using the first vector and the second vector; and apply a second machine learning model to generate the at least one blended image from the third vector.
Optionally, the first machine learning model was trained on at least two modalities wherein a first modality is visual, a second modality is semantic, and using a loss function comprising a contrastive factor.
Optionally, wherein the second machine learning model is a generative machine learning model trained to generate a visual having local features of a first visual input, and global features of a second visual input.
Optionally, wherein generating the third vector comprises a linear additive operator.
Optionally, wherein the second machine learning model is trained using a loss function comprising a difference between the encodings generated by the first machine learning model of the at least one blended image and the at least one source image.
Optionally, wherein the second machine learning model is trained using a loss function comprising a difference between the encodings generated by the first machine learning model of the at least one blended image and the target image.
Optionally, wherein the first latent representation comprises elements associated with semantic aspects or global features and elements associated with style aspects or local features.
Other systems, methods, features, and advantages of the present disclosure will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the present disclosure, and be protected by the accompanying claims.
Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which embodiments pertain. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
Some embodiments are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments may be practiced.
In the drawings:
FIG. 1 is a schematic illustration of an exemplary system for semantic image blending, according to some embodiments of the present disclosure;
FIG. 2 is a schematic block diagram of an exemplary image blending module, according to some embodiments of the present disclosure;
FIG. 3 is a flowchart of an exemplary process for semantic image blending, according to some embodiments of the present disclosure;
FIG. 4 is a table of images comparing the essence transfer disclosed to latent blending in the StyleGAN and CLIP spaces and the style transfer method, according to some embodiments of the present disclosure;
FIG. 5 is a table of images comparing the disclosure to latent blending in the CLIP space and the style transfer method, according to some embodiments of the present disclosure; and
FIG. 6 is a set of three tables of images generated using the essence transfer disclosed, according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
Some embodiments described in the present disclosure relate to generative machine learning models and, more specifically, but not exclusively, to blending images according to semantic properties.
Some digital creations may be described as borrowing the essence of a “target” image It and transferring it to a “source” image Is, creating an output image Is,t, which should blend information from Is and It in a manner that draws semantic attributes from It while preserving the identity of image Is. A definition of essence may transcend that of known style transfer methods, which focus on what is often referred to as style, i.e., low-level feature statistics, which are usually local. The essence the disclosure considers is more general and includes unique style elements, such as complexion or texture, but also semantic elements, such as apparent gender, age, and unique facial attributes, when considering faces. For example, the disclosure experiments with two targets, one of a young boy and one of an older man. Using the disclosed method, the young boy transfers the age property to source images, and the older man transfers the age property, as well as the hair color and wrinkles. In comparison to performing blending in either the StyleGAN latent space or CLIP’s latent space, a viewer may observe that the disclosed method preserves the identity of source images, while transferring the most noticeable semantic features of target images. A more rigorous definition of the disclosure’s goal may be elusive; however, the benefit may be observed as semantic properties that may be straightforwardly described, such as human age, are transferred.
On numerous occasions, it was shown in the literature that latent spaces of high-level vision networks, such as those involving capabilities such as image understanding, may be additive. The disclosure assumes that the learned transformation is doubly additive, i.e., both in the latent space of the semantic-image generator and in the latent space of the image understanding engine. In other words, out of all possible ways to transform an image Is in the direction of It, the disclosure may obtain a transformation that is based on a constant shift in the generator space and leads to a constant difference in the high-level description of the image.
StyleGAN may be used as an exemplary generator, also referred to herein as a second machine learning model. Additivity in the space of StyleGAN was demonstrated in works exploring latent semantic spaces of generative adversarial networks (GAN), for linearly interpolating between different images along semantic directions, as well as for the manipulation of semantic attributes as done in StyleCLIP. For the image recognition engine, also referred to herein as a first machine learning model, the CLIP network may be used, which has shown zero-shot capabilities across multiple domains such as image classification and adaptation of generated images. CLIP was also shown to behave additively.
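By way of illustration only, the following is a minimal Python sketch of linear interpolation in a generator's latent space, of the kind referred to above. The generator G is a hypothetical StyleGAN-like callable mapping a latent vector to an image tensor; the name, the number of steps, and the latent shape are assumptions made for the sketch, not the disclosure's exact interface.

```python
import torch

def interpolate_latents(G, z_a, z_b, steps=8):
    """Illustrative linear walk between two latents of a StyleGAN-like
    generator G; each intermediate latent is decoded into an image."""
    images = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        z = (1.0 - alpha) * z_a + alpha * z_b  # additive/linear latent blend
        images.append(G(z))                    # G maps a latent to an image tensor
    return images
```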
The disclosure may derive style from CLIP, a method for the semantic association of text and images. In CLIP’S space, two images are close, or similar, to each other if their textual association is close. Such similarity may consider unique style elements, such as texture or complexion. It may also consider semantic elements, such as gender and facial attributes, but not other semantic elements such as identity. The disclosure argues that this notion of style, which is the one used here, is more general. Regarding image manipulation, the disclosure’s work is also related to recent image manipulation works based on a pre-trained generator or CLIP; however, other models may be used.
The method disclosed may apply training using two types of loss terms. The first term may ensure that the transformed image is semantically similar to the target image It in the latent space of the first machine learning model, for example CLIP. The second type of constraint links the constant shift in the latent space of the generator to a constant shift in the latent space of the first machine learning model. The disclosure demonstrates the ability to transfer the essence of a target image It to a source image Is, while preserving the identity of Is. The disclosure demonstrates coherent results even when the target images are out of the domain of the images generated by StyleGAN; on top of that, the disclosure’s method may not require inversion of the target image. In addition, the disclosure demonstrates that the directions found by the disclosure’s method may be global and may be applied to any other source image, for performing the same semantic transfer of essence. Thereby, the blending operator may be optimized to be simultaneously additive in both the semantic latent space and the generative latent space.
The disclosed method comprises four components, given as inputs: (i) a generator G, which, given a vector z, generates an image G(z), also referred to as the second machine learning model; (ii) an image recognition engine C, which, given an image I, provides a latent representation of the essence of the image, C(I), in some latent space, also referred to as the first machine learning model; (iii) a target image It, from which the essence is extracted; and (iv) a set of, i.e., at least one, source images S, which are used to provide the statistics of images for which the method is applied. For convenience, the disclosure defines S as a collection of z vectors, and the set of source images as the images G(z) for z in S.
By processing these four inputs, the disclosure’s goal is to provide a generator H such that the image H(z) blends semantically a source image Is = G(z) with the target image It. Referring to a source image using a vector z rather than the set of image values in Is allows the disclosed method to directly define the double-linearity property. When transforming an image Is using H, the image may be converted to z by finding an optimal z, using a StyleGAN encoding inversion method. The disclosure notes, however, that the disclosure’s formulation does not require, at any stage, the inversion of It. On the generator side, linearity is expressed by:
H(z) = G(z + b)    (1)
for some shift vector b, where b represents the essence latent.
Linearity in the latent space of the image recognition engine is expressed as:
C(H(z)) − C(G(z)) = d    (2)
for some fixed d. Put differently, modifying the latent z in G’s latent space with b induces a constant semantic change in the latent space of C. This property is otherwise referred to in the literature as a global semantic direction, as shown by StyleCLIP.
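As a hedged illustration of this global-direction property, the sketch below estimates how constant the induced shift d is across a batch of latents. Here G and C stand for hypothetical generator and image-encoder callables (StyleGAN-like and CLIP-like models); the names and shapes are assumptions of the sketch, not the disclosure's exact interfaces.

```python
import torch
import torch.nn.functional as F

def semantic_shift_consistency(G, C, b, latents):
    """Measure how 'global' the direction b is: compute the semantic shift
    d_i = C(G(z_i + b)) - C(G(z_i)) per latent and report the mean pairwise
    cosine similarity between the shifts (closer to 1.0 means more constant)."""
    shifts = []
    for z in latents:
        d = C(G(z + b)) - C(G(z))               # shift induced in C's latent space
        shifts.append(F.normalize(d.reshape(-1), dim=0))
    shifts = torch.stack(shifts)                # (N, embed_dim), unit-normalized
    sims = shifts @ shifts.T                    # pairwise cosine similarities
    n = sims.shape[0]
    off_diagonal = sims[~torch.eye(n, dtype=torch.bool)]
    return off_diagonal.mean()
```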
In practice, the disclosure minimizes the following over H and d:
Σ_{z∈S} ||C(H(z)) − C(G(z)) − d||²    (3)
Since the minimal point is obtained when d is the mean of the differences C(H(z)) − C(G(z)), this is equivalent, up to a scale, to minimizing over H:
Σ_{z,z′∈S} ||(C(H(z)) − C(G(z))) − (C(H(z′)) − C(G(z′)))||²    (4)
So far, the disclosure has defined the problem of learning a pair of semantic directions b, d, in two different latent spaces, such that (b, d) match. The disclosure wishes to add a constraint that ties this direction to It. To this end, the disclosure wishes to maximize similarity in the semantic space provided by the recognition engine C between It and the generated images H(z). Put differently, the disclosure wishes to minimize the sum of differences:
Σ_{z∈S} ||C(H(z)) − C(It)||²    (5)
The disclosure now defines the disclosure’s method, based on the implementations of (1) to (5). The disclosure first assumes that when considering vectors in the semantic space of the first machine learning model, for example CLIP, it may be beneficial to employ their normalized version, i.e., to employ the cosine distance instead of the L2 norm, in accordance with the training process of CLIP, where text and image encodings are both normalized.
Therefore, the following loss term (Ltransfer) may be applied over batches of source images drawn from S:
Ltransfer = Σ_{i=1..N} [1 − cos(C(H(zi)), C(It))]    (6)
where N is the batch size. This transfer loss estimates the similarity between the image encoding of the target image and the image encodings of the blended images produced from the source images.
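A minimal sketch of a transfer loss of the form of equation (6) follows, assuming C is a CLIP-like image encoder that accepts a batch of image tensors and returns one embedding per image; the function and argument names are illustrative only.

```python
import torch.nn.functional as F

def transfer_loss(C, blended_images, target_image):
    """Ltransfer: sum over the batch of (1 - cosine similarity) between each
    blended image's encoding and the target image's encoding."""
    target_emb = C(target_image)                                  # (1, embed_dim)
    blended_emb = C(blended_images)                               # (N, embed_dim)
    cos = F.cosine_similarity(blended_emb, target_emb, dim=-1)    # (N,)
    return (1.0 - cos).sum()
```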
The second concept the disclosure aims to maintain is consistency. The goal of essence transfer is to change the essence of the source image using a collection of semantic attributes that encapsulates the essence of the target image. These attributes are independent of the source image. To that end, the disclosure demands that the semantic edits induced by the direction b be consistent across the source images, using CLIP’S latent space. This is expressed in (4) above and translates to the following loss (Lconsistency):
Lconsistency = Σ_{i=1..N} Σ_{j=i+1..N} [1 − cos(Δi, Δj)]    (7)
where Δi = C(H(zi)) − C(G(zi)), and, as before, N is the batch size.
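Under the same assumptions (a CLIP-like encoder C, illustrative names), a consistency loss in the spirit of equation (7) may be sketched as follows.

```python
import torch.nn.functional as F

def consistency_loss(C, blended_images, source_images):
    """Lconsistency: the shifts Delta_i = C(H(z_i)) - C(G(z_i)) should agree
    across the batch; penalize pairwise cosine dissimilarity between them."""
    deltas = C(blended_images) - C(source_images)   # (N, embed_dim)
    n = deltas.shape[0]
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            loss = loss + (1.0 - F.cosine_similarity(deltas[i], deltas[j], dim=0))
    return loss
```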
The optimization problem solved during training in order to recover H, as defined in (1), may be rephrased as:
b* = argmin_b [Ltransfer + λ · Lconsistency]    (8)
where λ is a hyperparameter.
For each target image It the disclosure considers a batch of N = 6 randomly drawn source images in order to recover b. When possible, the disclosure may initialize the direction b to be the latent z that corresponds to the target image It. However, as shown in FIG. 6(a), the disclosure experiments with out-of-domain targets for the generator G. In such cases, the disclosure initializes the essence vector b randomly. In contrast to prior art methods, the disclosure does not rely on any specific properties of the semantic latent space of G, and the disclosure does not use any face recognition models to prevent identity loss. In order to maintain the identity of the source images I1,...,IN, the disclosure employs standard weight decay to limit the magnitude of the effect that b has on source images. In addition, the disclosure’s consistency loss (7) aids identity preservation by ensuring that the impact of applying direction b is identical across different images, i.e., that semantic changes are consistent. This discourages b from modifying properties for a single image only.
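Combining the pieces, the following is a hedged sketch of an optimization in the spirit of equation (8) with the hyperparameters mentioned above (a batch of N = 6 source latents, weight decay, and λ). It reuses the transfer_loss and consistency_loss sketches and again assumes hypothetical G and C callables; it is an illustration rather than the disclosure's exact implementation.

```python
import torch

def recover_essence_direction(G, C, target_image, source_latents,
                              steps=300, lr=0.01, lam=0.7, weight_decay=0.007):
    """Optimize the essence direction b so that H(z) = G(z + b) transfers the
    target's essence to every source latent."""
    # Random initialization; when the target can be inverted, b may instead be
    # initialized to the target's latent.
    b = torch.randn_like(source_latents[0], requires_grad=True)
    opt = torch.optim.Adam([b], lr=lr, weight_decay=weight_decay)  # weight decay limits b's magnitude
    for _ in range(steps):
        opt.zero_grad()
        sources = torch.stack([G(z) for z in source_latents])      # G(z_i), assumed shape (3, H, W) each
        blended = torch.stack([G(z + b) for z in source_latents])  # H(z_i) = G(z_i + b)
        loss = transfer_loss(C, blended, target_image) \
               + lam * consistency_loss(C, blended, sources)
        loss.backward()
        opt.step()
    return b.detach()
```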
The first machine learning model, such as CLIP, may be trained on a large corpus of matched images and text captions and is, therefore, much richer semantically than networks that perform multiclass classification for a limited number of classes only. It has been shown to be extremely suitable for zero-shot computer vision tasks; here, the disclosure demonstrates its ability to support semantic blending. While the StyleGAN space already performs reasonable blending for images of, e.g., two children, it struggles when blending images with different attributes. On the other hand, CLIP by itself struggles to maintain identity when blending. The combination of the two seems to provide a powerful blending technique, which enjoys the benefits of both representations. This is enabled through a novel method, which assumes additivity in the first latent space and ensures additivity in the second through optimization.
Before explaining at least one embodiment in detail, it is to be understood that embodiments are not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. Implementations described herein are capable of other embodiments or of being practiced or carried out in various ways.
Embodiments may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the embodiments.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of embodiments may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of embodiments.
Aspects of embodiments are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Referring now to the drawings, FIG. 1 is a schematic illustration of an exemplary system for semantic image blending, according to some embodiments of the present disclosure. An exemplary computer environment 100 may be used for executing processes such as 300 for generating blended images. Further details about these exemplary processes follow as FIG. 2 and FIG. 3 are described.
Various aspects of the present disclosure are described by narrative text, flowcharts, block diagrams of computer systems and/or block diagrams of the machine logic included in computer program product (CPP) embodiments. With respect to any flowcharts, depending upon the technology involved, the operations may be performed in a different order than what is shown in a given flowchart. For example, again depending upon the technology involved, two operations shown in successive flowchart blocks may be performed in reverse order, as a single integrated step, concurrently, or in a manner at least partially overlapping in time.
A computer program product embodiment ("CPP embodiment" or “CPP”) is a term used in the present disclosure to describe any set of one, or more, storage media (also called "mediums") collectively included in a set of one, or more, storage devices that collectively include machine readable code corresponding to instructions and/or data for performing computer operations specified in a given CPP claim. A "storage device" is any tangible device that can retain and store instructions for use by a computer processor. Without limitation, the computer readable storage medium may be an electronic storage medium, a magnetic storage medium, an optical storage medium, an electromagnetic storage medium, a semiconductor storage medium, a mechanical storage medium, or any suitable combination of the foregoing. Some known types of storage devices that include these mediums include: diskette, hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or Flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disk (DVD), memory stick, floppy disk, mechanically encoded device (such as punch cards or pits / lands formed in a major surface of a disc) or any suitable combination of the foregoing. A computer readable storage medium, as that term is used in the present disclosure, is not to be construed as storage in the form of transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide, light pulses passing through a fiber optic cable, electrical signals communicated through a wire, and/or other transmission media. As will be understood by those of skill in the art, data is typically moved at some occasional points in time during normal operations of a storage device, such as during access, de-fragmentation or garbage collection, but this does not render the storage device as transitory because the data is not transitory while it is stored.
Computing environment 100 contains an example of an environment for the execution of at least some of the computer code involved in performing the inventive methods, such as a blending module for images 200. In addition to block 200, computing environment 100 includes, for example, computer 102, wide area network (WAN) 108, end user device (EUD) 132, remote server 104, public cloud 150, and private cloud 106. In this embodiment, computer 102 includes processor set 110 (including processing circuitry 120 and cache 134), communication fabric 160, volatile memory 112, persistent storage 116 (including operating system 122 and block 200, as identified above), peripheral device set 114 (including user interface (UI), device set 126, storage 124, and Internet of Things (loT) sensor set 128), and network module 118. Remote server 104 includes remote database 130. Public cloud 150 includes gateway 140, cloud orchestration module 146, host physical machine set 142, virtual machine set 148, and container set 144.
COMPUTER 102 may take the form of a desktop computer, laptop computer, tablet computer, smart phone, smart watch or other wearable computer, mainframe computer, quantum computer or any other form of computer or mobile device now known or to be developed in the future that is capable of running a program, accessing a network or querying a database, such as remote database 130. As is well understood in the art of computer technology, and depending upon the technology, performance of a computer-implemented method may be distributed among multiple computers and/or between multiple locations. On the other hand, in this presentation of computing environment 100, detailed discussion is focused on a single computer, specifically computer 102, to keep the presentation as simple as possible. Computer 102 may be located in a cloud, even though it is not shown in a cloud in Figure 1. On the other hand, computer 102 is not required to be in a cloud except to any extent as may be affirmatively indicated.
PROCESSOR SET 110 includes one, or more, computer processors of any type now known or to be developed in the future. For example, a processor set may include one or more of a central processing unit (CPU), a microcontroller, a parallel processor, supporting multiple data such as a digital signal processing (DSP) unit, a graphical processing unit (GPU) module, and the like, as well as optical processors, quantum processors, and processing units based on technologies that may be developed in the future. Processing circuitry 120 may be distributed over multiple packages, for example, multiple, coordinated integrated circuit chips. Processing circuitry 120 may implement multiple processor threads and/or multiple processor cores. Cache 134 is memory that is located in the processor chip package(s) and is typically used for data or code that should be available for rapid access by the threads or cores running on processor set 110. Cache memories are typically organized into multiple levels depending upon relative proximity to the processing circuitry. Alternatively, some, or all, of the cache for the processor set may be located “off chip.” In some computing environments, processor set 110 may be designed for working with qubits and performing quantum computing.
Computer readable program instructions are typically loaded onto computer 102 to cause a series of operational steps to be performed by processor set 110 of computer 102 and thereby effect a computer-implemented method, such that the instructions thus executed will instantiate the methods specified in flowcharts and/or narrative descriptions of computer-implemented methods included in this document (collectively referred to as “the inventive methods”). These computer readable program instructions are stored in various types of computer readable storage media, such as cache 134 and the other storage media discussed below. The program instructions, and associated data, are accessed by processor set 110 to control and direct performance of the inventive methods. In computing environment 100, at least some of the instructions for performing the inventive methods may be stored in block 200 in persistent storage 116.
COMMUNICATION FABRIC 160 is the signal conduction paths that allow the various components of computer 102 to communicate with each other. Typically, this fabric is made of switches and electrically conductive paths, such as the switches and electrically conductive paths that make up busses, bridges, physical input / output ports and the like. Other types of signal communication paths may be used, such as fiber optic communication paths and/or wireless communication paths.
VOLATILE MEMORY 112 is any type of volatile memory now known or to be developed in the future. Examples include dynamic type random access memory (RAM) or static type RAM. Typically, the volatile memory is characterized by random access, but this is not required unless affirmatively indicated. In computer 102, the volatile memory 112 is located in a single package and is internal to computer 102, but, alternatively or additionally, the volatile memory may be distributed over multiple packages and/or located externally with respect to computer 102.
PERSISTENT STORAGE 116 is any form of non-volatile storage for computers that is now known or to be developed in the future. The non-volatility of this storage means that the stored data is maintained regardless of whether power is being supplied to computer 102 and/or directly to persistent storage 116. Persistent storage 116 may be a read only memory (ROM), but typically at least a portion of the persistent storage allows writing of data, deletion of data and re-writing of data. Some familiar forms of persistent storage include magnetic disks and solid state storage devices. Operating system 122 may take several forms, such as various known proprietary operating systems or open source Portable Operating System Interface type operating systems that employ a kernel. The code included in block 200 typically includes at least some of the computer code involved in performing the inventive methods.
PERIPHERAL DEVICE SET 114 includes the set of peripheral devices of computer 102. Data communication connections between the peripheral devices and the other components of computer 102 may be implemented in various ways, such as Bluetooth connections, Near-Field Communication (NFC) connections, connections made by cables (such as universal serial bus (USB) type cables), insertion type connections (for example, secure digital (SD) card), connections made though local area communication networks and even connections made through wide area networks such as the internet. In various embodiments, UI device set 126 may include components such as a display screen, speaker, microphone, wearable devices (such as goggles and smart watches), keyboard, mouse, printer, touchpad, game controllers, and haptic devices. Storage 124 is external storage, such as an external hard drive, or insertable storage, such as an SD card. Storage 124 may be persistent and/or volatile. In some embodiments, storage 124 may take the form of a quantum computing storage device for storing data in the form of qubits. In embodiments where computer 102 is required to have a large amount of storage (for example, where computer 102 locally stores and manages a large database) then this storage may be provided by peripheral storage devices designed for storing very large amounts of data, such as a storage area network (SAN) that is shared by multiple, geographically distributed computers. loT sensor set 128 is made up of sensors that can be used in Internet of Things applications. For example, one sensor may be a thermometer and another sensor may be a motion detector.
NETWORK MODULE 118 is the collection of computer software, hardware, and firmware that allows computer 102 to communicate with other computers through WAN 108. Network module 118 may include hardware, such as modems or Wi-Fi signal transceivers, software for packetizing and/or de-packetizing data for communication network transmission, and/or web browser software for communicating data over the internet. In some embodiments, network control functions and network forwarding functions of network module 118 are performed on the same physical hardware device. In other embodiments (for example, embodiments that utilize software- defined networking (SDN)), the control functions and the forwarding functions of network module 118 are performed on physically separate devices, such that the control functions manage several different network hardware devices. Computer readable program instructions for performing the inventive methods can typically be downloaded to computer 102 from an external computer or external storage device through a network adapter card or network interface included in network module 118.
WAN 108 is any wide area network (for example, the internet) capable of communicating computer data over non-local distances by any technology for communicating computer data, now known or to be developed in the future. In some embodiments, the WAN may be replaced and/or supplemented by local area networks (LANs) designed to communicate data between devices located in a local area, such as a Wi-Fi network. The WAN and/or LANs typically include computer hardware such as copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and edge servers.
END USER DEVICE (EUD) 132 is any computer system that is used and controlled by an end user (for example, a customer of an enterprise that operates computer 102), and may take any of the forms discussed above in connection with computer 102. EUD 132 typically receives helpful and useful data from the operations of computer 102. For example, in a hypothetical case where computer 102 is designed to provide a recommendation to an end user, this recommendation would typically be communicated from network module 118 of computer 102 through WAN 108 to EUD 132. In this way, EUD 132 can display, or otherwise present, the recommendation to an end user. In some embodiments, EUD 132 may be a client device, such as thin client, heavy client, mainframe computer, desktop computer and so on.
REMOTE SERVER 104 is any computer system that serves at least some data and/or functionality to computer 102. Remote server 104 may be controlled and used by the same entity that operates computer 102. Remote server 104 represents the machine(s) that collect and store helpful and useful data for use by other computers, such as computer 102. For example, in a hypothetical case where computer 102 is designed and programmed to provide a recommendation based on historical data, then this historical data may be provided to computer 102 from remote database 130 of remote server 104.
PUBLIC CLOUD 150 is any computer system available for use by multiple entities that provides on-demand availability of computer system resources and/or other computer capabilities, especially data storage (cloud storage) and computing power, without direct active management by the user. Cloud computing typically leverages sharing of resources to achieve coherence and economies of scale. The direct and active management of the computing resources of public cloud 150 is performed by the computer hardware and/or software of cloud orchestration module 146. The computing resources provided by public cloud 150 are typically implemented by virtual computing environments that run on various computers making up the computers of host physical machine set 142, which is the universe of physical computers in and/or available to public cloud 150. The virtual computing environments (VCEs) typically take the form of virtual machines from virtual machine set 148 and/or containers from container set 144. It is understood that these VCEs may be stored as images and may be transferred among and between the various physical machine hosts, either as images or after instantiation of the VCE. Cloud orchestration module 146 manages the transfer and storage of images, deploys new instantiations of VCEs and manages active instantiations of VCE deployments. Gateway 140 is the collection of computer software, hardware, and firmware that allows public cloud 150 to communicate through WAN 108.
Some further explanation of virtualized computing environments (VCEs) will now be provided. VCEs can be stored as “images.” A new active instance of the VCE can be instantiated from the image. Two familiar types of VCEs are virtual machines and containers. A container is a VCE that uses operating- system- level virtualization. This refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances, called containers. These isolated user-space instances typically behave as real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can utilize all resources of that computer, such as connected devices, files and folders, network shares, CPU power, and quantifiable hardware capabilities. However, programs running inside a container can only use the contents of the container and devices assigned to the container, a feature which is known as containerization.
PRIVATE CLOUD 106 is similar to public cloud 150, except that the computing resources are only available for use by a single enterprise. While private cloud 106 is depicted as being in communication with WAN 108, in other embodiments a private cloud may be disconnected from the internet entirely and only accessible through a local/private network. A hybrid cloud is a composition of multiple clouds of different types (for example, private, community or public cloud types), often respectively implemented by different vendors. Each of the multiple clouds remains a separate and discrete entity, but the larger hybrid cloud architecture is bound together by standardized or proprietary technology that enables orchestration, management, and/or data/application portability between the multiple constituent clouds. In this embodiment, public cloud 150 and private cloud 106 are both part of a larger hybrid cloud.
Referring now to FIG. 2 which is a schematic block diagram of an exemplary image blending module, according to some embodiments of the present disclosure.
FIG. 2 shows an exemplary blending module 250, which may receive one or more source images 210 and a target image 212. The source images, as well as the target image, may be processed by the first machine learning model 220 to generate a latent representation, or an embedding. CLIP is an example of a first machine learning model; however, other models are capable of embedding visual information and may be used. The latent representation generated for each source image is referred to as the first vector 230, and that generated for the target image is referred to as the second vector 232. Some alternative implementations may use a somewhat different model; however, the latent representations should be made compatible.
The operator, which may be additive 238, may be used to generate a third vector 234 for each source image. The operator may apply additivity or other characteristics of the embedding space to generate a blended representation. Subsequently, the second machine learning model 240 may generate a blended image 260 for each source image. StyleGAN is an example of a second machine learning model; however, other generative neural networks and similar models are known, and are expected to be developed in the future, and may also be used.
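The following is a schematic sketch of the data flow of FIG. 2, in which first_model, operator, and second_model stand in for blocks 220, 238, and 240 respectively; all names are placeholders for illustration.

```python
def blend_module(first_model, second_model, operator, source_images, target_image):
    """Schematic blending module 250: encode the inputs, blend each source
    representation with the target representation, and decode the results."""
    first_vectors = [first_model(image) for image in source_images]  # first vectors 230
    second_vector = first_model(target_image)                        # second vector 232
    blended_images = []
    for first_vector in first_vectors:
        third_vector = operator(first_vector, second_vector)         # third vector 234
        blended_images.append(second_model(third_vector))            # blended image 260
    return blended_images
```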
Referring now to FIG. 3, which is a flowchart of an exemplary process for semantic image blending, according to some embodiments of the present disclosure. The processing circuitry 120 may execute the exemplary process 300 for blending images for a variety of purposes such as generating visualizations, artistic goals, decoration, and/or the like.
The exemplary process 300 starts, as shown in 302, with receiving at least one source image and a target image.
The process may aim at creating an image corresponding with a source image, or a plurality of source images, which shares a property with a target image. For example, when the plurality of source images are portraits of people, and the target image is of an elderly person, the plurality of blended images may be a plurality of portraits of the people as they may be expected to appear when elderly.
Some implementations may receive the images as fixed-size bitmaps, with three color channels or greyscale, for example an integer array of a size such as 256x256x3, or an HD image; however, other implementations may be adapted to receive images of various sizes, different formats such as PNG or JPEG, a text description, an embedding, and/or the like.
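As a simple illustration of receiving such inputs, the sketch below uses the Pillow and NumPy libraries to decode a PNG or JPEG file into a fixed-size 256x256x3 integer array; the file name is a placeholder.

```python
import numpy as np
from PIL import Image

def load_as_bitmap(path, size=(256, 256)):
    """Decode an image file, force three color channels, and resize it into a
    256x256x3 uint8 array."""
    image = Image.open(path).convert("RGB").resize(size)
    return np.asarray(image, dtype=np.uint8)   # shape (256, 256, 3)

bitmap = load_as_bitmap("source_portrait.jpg")  # illustrative file name
```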
The images may be received from an end user device 132, the public cloud 150, the private cloud 106, the persistent storage 116, a device from the peripheral device set 114, the volatile memory 112, and/or the like, and may be transferred using the communication fabric 160.
The exemplary process 300 continues, as shown in 304, with applying a first machine learning model to generate a first vector comprising a first latent representation of the at least one source image. The first machine learning model may be executed using the processing circuitry 120, by one or more processors, and may benefit from dedicated processors with enhanced parallel processing, digital signal processing, graphic processing units, and/or the like.
In some implementations, the first machine learning model was trained on at least two modalities wherein a first modality is visual, a second modality is semantic, and using a loss function comprising a contrastive factor, for example CLIP. Other implementations may be based on extracting data from auto encoders, neural networks trained in adversarial manner, machine learning models trained for classification or segmentation, and/or the like, as well as models expected to be developed in the future.
Mapping to a latent space may be generated for purposes of dimensionality reduction, as well as extracting certain characteristics of an image, sound, table, and/or the like. Characteristics may comprise objects present in the image, their locations, relations, background, lighting, color, size, pose, expression, texture, artistic style aspects, and/or the like, as well as tacit properties which are difficult to express semantically.
The first vector may comprise an embedding, or a latent representation, of each of the at least one source image, and may also comprise metadata and the like.
The first latent representation may comprise elements associated with semantic aspects or global features and elements associated with style aspects or local features.
The exemplary process 300 continues, as shown in 306, with applying the first machine learning model to generate a second vector comprising a second latent representation of the target image.
The second latent representation may be compatible with the first latent representation and comprise elements associated with semantic aspects or global features and elements associated with style aspects or local features in corresponding parts of the associated vector.
In some implementations, the second latent representation may be generated by the same model used for generating the first latent representation; however, alternative implementations may apply ablations and modifications to the model, or use a different model, provided adequate compatibility pertaining to at least one property the implementation aims to extract and apply to the source image. Mapping to the latent space may be similar to that of block 304; however, a different mapping that maintains adequate compatibility pertaining to the at least one property may also be used.
The exemplary process 300 continues, as shown in 308, with generating a third vector comprising a latent representation of at least one blended image, using the first vector and the second vector. Generating the third vector may comprise a linear additive operator, a multiplicative operator, or a different operator adapted to the characteristics of the latent, or embedding, space. Some known latent spaces have been shown to be linearly additive, as illustrated by the sketch following the next paragraph.
The implementation may aim to generate a blending which gives rise to an image having some, or all, of the properties of an associated source image, and one or more properties extracted from the target image.
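Two illustrative operators for generating the third vector are sketched below; the additive form mirrors the linear additive operator mentioned above, while the interpolation form is a generic alternative. The parameter names and the default weight are assumptions of the sketch.

```python
def additive_blend(first_vector, essence_direction):
    """Linear additive operator: shift the source representation by a learned
    or precomputed essence direction."""
    return first_vector + essence_direction

def interpolation_blend(first_vector, second_vector, alpha=0.5):
    """Convex combination of the source and target representations."""
    return (1.0 - alpha) * first_vector + alpha * second_vector
```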
Subsequently, as shown in 310, the process 300 may continue by applying the second machine learning based model, executed by the processing circuitry 120, to generate the at least one blended image from the third vector.
In some implementations, the second machine learning model is a generative machine learning model trained to generate a visual having local features of a first visual input, and global features of a second visual input. The second machine learning model may be based on generative adversarial network (GAN) architectures such as StyleGAN. It should be noted that StyleGAN may have a plurality of versions and implementations.
Alternative implementations may derive the second machine learning model from DALL-E, Make-A-Scene, GauGAN, BigGAN, CycleGAN, deep convolutional GAN (DCGAN), self-attention GAN (SAGAN), transformer GAN (TransGAN), bidirectional GAN (BiGAN), adversarial autoencoder based generative models such as variational autoencoder GAN (VAE-GAN), and other generative image models, either known or developed in the future.
The second machine learning model may be trained using a loss function comprising a difference between the encodings generated by the first machine learning model of the at least one blended image and the at least one source image.
The second machine learning model may also be trained using a loss function comprising a difference between the encodings generated by the first machine learning model of the at least one blended image and the target image.
The loss function may comprise other elements used by known models, such as an adversarial loss, and other elements that may be developed in the future.
The second machine learning model may also be executed using the processing circuitry 120, by one or more processors, and also benefit from accelerators based on parallel processing, either by single instruction multiple data (SIMD) or by different methods, graphic processing units (GPU) and/or the like.
In some implementations, the at least one blended image has the same size and color depth as the at least one source image and/or the target image; however, some implementations may generate the blended image in a different resolution, a fixed resolution, or an adjustable resolution, or may apply super-resolution, compression, and/or the like.
Referring now to FIG. 4, which is a table of images comparing the essence transfer disclosed to latent blending in the StyleGAN and CLIP spaces and the style transfer method, according to some embodiments of the present disclosure.
The essence the disclosure considers is reflected in FIG. 4 through application of the disclosed method, showing a comparison between the disclosed method, shown as ‘Ours’, and: StyleGAN latent blending (1), CLIP latent blending (2), and style transfer (3). The first row and the first column depict the source and target images. The second row depicts the result of applying the disclosed method to transfer the essence of the target to the source. The third, fourth, and fifth rows present the results of the prior art.
The implementation of the disclosure used to generate the images in the table is based on a standard latent optimizer. The same hyperparameters are used for each type of experiment. When source and target images are generated randomly (FIG. 4), some implementations use a weight decay of 0.007 and λ = 0.7 for the consistency loss. For out-of-domain targets, some implementations use a weight decay of 0.001 and λ = 0.6 for the consistency loss, in order to allow the consideration of directions that induce more significant edits to the source images.
FIG. 4 shows that the disclosure produces results that transcend both latent blending in a single latent space and traditional style transfer. FIG. 4 shows results of using the disclosed method for essence transfer in comparison to latent blending in the StyleGAN and CLIP spaces and in comparison to the style transfer method described in “Image style transfer using convolutional neural networks,” published in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. Some implementations may define blending in the StyleGAN space as the average of the latent representations of the target and the source, i.e., for latents zs and zt that hold G(zs) = Is and G(zt) = It, some implementations may define the blended image as:
Is,t = G((zs + zt) / 2)
Similarly, some implementations may define blending in the CLIP space as the image Is,t that holds:
C(Is,t) = (C(Is) + C(It)) / 2
Referring now to FIG. 5, which is a table of images comparing the disclosure to latent blending in the CLIP space and the style transfer method, according to some embodiments of the present disclosure. Blending results for a StyleGAN which was trained on churches are shown in FIG. 5. The first row presents the source images and the first column is the target. This figure presents essence transfer using the disclosed method, blending in the CLIP space, and style transfer following “Image style transfer using convolutional neural networks”. This example does not present blending in StyleGAN’s space, since the target is a real image and may not be inverted using existing models.
Some implementations may find the image corresponding to C(Is,t) by optimizing in the latent space of G for a latent zs,t such that C(G(zs,t)) = (C(Is) + C(It)) / 2.
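For reference, the two single-space baselines may be sketched as follows, again with hypothetical G and C callables and illustrative optimizer settings.

```python
import torch
import torch.nn.functional as F

def stylegan_space_blend(G, z_s, z_t):
    """StyleGAN-space baseline: decode the average of the two latents."""
    return G((z_s + z_t) / 2.0)

def clip_space_blend(G, C, image_s, image_t, z_init, steps=200, lr=0.05):
    """CLIP-space baseline: optimize a latent z so that C(G(z)) approaches the
    average of the two image encodings."""
    target = ((C(image_s) + C(image_t)) / 2.0).detach()
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 1.0 - F.cosine_similarity(C(G(z)), target, dim=-1).mean()
        loss.backward()
        opt.step()
    return G(z.detach())
```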
It may be observed that while single representation blending methods either change the identity or change unrelated semantic attributes, such as the background of the image, the disclosure’s method is able to preserve the identity of the person in the source image while adopting only the essential semantic features from the target image. Similarly, FIG. 5 compares the results of the disclosure’s method to blending in the CLIP space and to the results of [4]. Since the target images in the churches experiment are all real images, some implementations do not invert them. Thus, StyleGAN blending is not presented in this case.
Referring now to FIG. 6, which is a set of three tables of images generated using the essence transfer disclosed, according to some embodiments of the present disclosure.
The tables show images as follows: (a) Results of the disclosure’s method with StyleGAN for faces. The inputs are the source images (top row) and target images (left column). Notice how the disclosure’s method is able to produce coherent results, even with out-of-domain targets. (b) Target consistency results. (c) As in (a), but with churches.
Some implementations show that the semantic properties defined as the essence of a target are consistent. To this end, FIG. 6(b) presents the results of applying the disclosure’s method to the same set of source images with two different target images of the same person. As can be seen, the transferred semantic properties, hair and skin color and wrinkles, are the same for both targets. Notice how the second target image has slightly brighter hair, resulting in slightly brighter hair being transferred to the sources, and how the distinct expression of the first target is also transferred to the sources.
FIG. 6(a) demonstrates some results for out-of-domain targets and shows that the method is able to produce coherent results, even though some targets are animated. The results of the transfer process in this case resemble animated images; the results shown were generated using only StyleGAN, with no other generators involved in the process. FIG. 6(c) demonstrates results of using the disclosure’s method for StyleGAN with churches.
The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
It is expected that during the life of a patent maturing from this application many relevant machine learning models, generative, discriminative, embedding, and the like, will be developed, and the scope of the terms machine learning model, neural network, and the exemplary neural networks CLIP and StyleGAN is intended to include all such new technologies a priori.
As used herein the term “about” refers to ± 10 %.
The terms "comprises", "comprising", "includes", "including", “having” and their conjugates mean "including but not limited to". This term encompasses the terms "consisting of" and "consisting essentially of".
As used herein, the singular form "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.
The word “exemplary” is used herein to mean “serving as an example, instance or illustration”. Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments.
The word “optionally” is used herein to mean “is provided in some embodiments and not provided in other embodiments”. Any particular embodiment may include a plurality of “optional” features unless such features conflict.
Throughout this application, various embodiments may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of embodiments. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases “ranging/ranges between” a first indicated number and a second indicated number and “ranging/ranges from” a first indicated number “to” a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
It is appreciated that certain features of embodiments, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of embodiments, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
Although embodiments have been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
It is the intent of the applicant(s) that all publications, patents and patent applications referred to in this specification are to be incorporated in their entirety by reference into the specification, as if each individual publication, patent or patent application was specifically and individually noted when referenced that it is to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting. In addition, any priority document(s) of this application is/are hereby incorporated herein by reference in its/their entirety.

Claims

WHAT IS CLAIMED IS:
1. A system for image generation, comprising at least one processing circuitry, configured to: receive at least one source image and a target image; apply a first machine learning model to generate a first vector comprising a first latent representation of the at least one source image; apply the first machine learning model to generate a second vector comprising a second latent representation of the target image; generate a third vector comprising a latent representation of at least one blended image, using the first vector and the second vector; and apply a second machine learning model to generate the at least one blended image from the third vector.
2. The system of claim 1 wherein the first machine learning model was trained on at least two modalities, wherein a first modality is visual and a second modality is semantic, and using a loss function comprising a contrastive factor.
3. The system of claim 1 wherein the second machine learning model is a generative machine learning model trained to generate a visual having local features of a first visual input, and global features of a second visual input.
4. The system of claim 1 wherein generating the third vector comprises applying a linear additive operator.
5. The system of claim 1 wherein the second machine learning model is trained using a loss function comprising a difference between the encodings generated by the first machine learning model of the at least one blended image and the at least one source image.
6. The system of claim 1 wherein the second machine learning model is trained using a loss function comprising a difference between the encodings generated by the first machine learning model of the at least one blended image and the target image.
7. The system of claim 1 wherein the first latent representation comprises elements associated with semantic aspects or global features and elements associated with style aspects or local features.
8. A method for image generation, comprising: receiving at least one source image and a target image; applying a first machine learning model to generate a first vector comprising a first latent representation of the at least one source image; applying the first machine learning model to generate a second vector comprising a second latent representation of the target image; generating a third vector comprising a latent representation of at least one blended image, using the first vector and the second vector; and applying a second machine learning model to generate the at least one blended image from the third vector.
9. The method of claim 8 wherein the first machine learning model was trained on at least two modalities, wherein a first modality is visual and a second modality is semantic, and using a loss function comprising a contrastive factor.
10. The method of claim 8 wherein the second machine learning model is a generative machine learning model trained to generate a visual having local features of a first visual input, and global features of a second visual input.
11. The method of claim 8 wherein generating the third vector comprises applying a linear additive operator.
12. The method of claim 8 wherein the second machine learning model is trained using a loss function comprising a difference between the encodings generated by the first machine learning model of the at least one blended image and the at least one source image.
13. The method of claim 8 wherein the second machine learning model is trained using a loss function comprising a difference between the encodings generated by the first machine learning model of the at least one blended image and the target image.
14. The method of claim 8 wherein the first latent representation comprises elements associated with semantic aspects or global features and elements associated with style aspects or local features.
15. One or more computer program products comprising instructions for image generation, wherein execution of the instructions by one or more processors of a computing system is to cause the computing system to: receive at least one source image and a target image; apply a first machine learning model to generate a first vector comprising a first latent representation of the at least one source image; apply the first machine learning model to generate a second vector comprising a second latent representation of the target image; generate a third vector comprising a latent representation of at least one blended image, using the first vector and the second vector; and apply a second machine learning model to generate the at least one blended image from the third vector.
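For illustration only, the following is a minimal Python sketch of the pipeline recited in claims 1, 8, and 15, assuming a CLIP-like encoder as the first machine learning model, a StyleGAN-like generator (possibly wrapped by a latent optimizer as sketched earlier) as the second machine learning model, and a simple linear additive blend for the third vector; the function and parameter names, including the interpolation weight alpha, are hypothetical.

```python
import torch

def semantic_blend(source_images, target_image, first_model, second_model, alpha=0.5):
    # first_model : image  -> latent vector (e.g. a CLIP-like encoder)
    # second_model: vector -> image         (e.g. a StyleGAN-like generator)
    with torch.no_grad():
        target_vec = first_model(target_image)               # second vector
        blended = []
        for src in source_images:
            src_vec = first_model(src)                       # first vector
            # Linear additive operator (claims 4 and 11): third vector.
            third_vec = (1.0 - alpha) * src_vec + alpha * target_vec
            blended.append(second_model(third_vec))          # blended image
    return blended
```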

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163270389P 2021-10-21 2021-10-21
US63/270,389 2021-10-21

Publications (1)

Publication Number Publication Date
WO2023067603A1 true WO2023067603A1 (en) 2023-04-27

Family

ID=86058831

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2022/051109 WO2023067603A1 (en) 2021-10-21 2022-10-20 Semantic blending of images

Country Status (1)

Country Link
WO (1) WO2023067603A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200242774A1 (en) * 2019-01-25 2020-07-30 Nvidia Corporation Semantic image synthesis for generating substantially photorealistic images using neural networks
US20200372621A1 (en) * 2019-05-20 2020-11-26 Disney Enterprises, Inc. Automated Image Synthesis Using a Comb Neural Network Architecture
US20210097691A1 (en) * 2019-09-30 2021-04-01 Nvidia Corporation Image generation using one or more neural networks
US20220101577A1 (en) * 2020-09-28 2022-03-31 Adobe Inc. Transferring hairstyles between portrait images utilizing deep latent representations
US20220180602A1 (en) * 2020-12-03 2022-06-09 Nvidia Corporation Generating images of virtual environments using one or more neural networks
US20220198617A1 (en) * 2020-12-18 2022-06-23 Meta Platforms, Inc. Altering a facial identity in a video stream

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117789185A (en) * 2024-02-28 2024-03-29 浙江驿公里智能科技有限公司 Automobile oil hole gesture recognition system and method based on deep learning
CN117789185B (en) * 2024-02-28 2024-05-10 浙江驿公里智能科技有限公司 Automobile oil hole gesture recognition system and method based on deep learning


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22883114

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE