WO2020180084A1 - Method for completing the coloring of a target image, and associated device and computer program - Google Patents

Method for completing the coloring of a target image, and associated device and computer program Download PDF

Info

Publication number
WO2020180084A1
WO2020180084A1 (PCT/KR2020/002992; KR2020002992W)
Authority
WO
WIPO (PCT)
Prior art keywords
mask
target
image
colored
neural network
Prior art date
Application number
PCT/KR2020/002992
Other languages
English (en)
Korean (ko)
Inventor
장재혁
강성민
이가영
Original Assignee
네이버웹툰 주식회사
Priority date
Filing date
Publication date
Application filed by 네이버웹툰 주식회사 filed Critical 네이버웹툰 주식회사
Publication of WO2020180084A1 publication Critical patent/WO2020180084A1/fr
Priority to US17/464,899 priority Critical patent/US20210398331A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Definitions

  • the present invention relates to a method, an apparatus, and a computer program for completing coloring of a target image to be colored by using a reference image.
  • a webtoon service that provides online cartoon content is a representative example of such online content services.
  • the present invention is intended to enable online content to be produced more efficiently.
  • the present invention aims to reduce the amount of time a user spends on coloring when producing content that requires coloring by the user, such as a webtoon, so that the content can be produced faster.
  • in the automatic coloring of an image (that is, coloring not performed by a user), the present invention performs the coloring by referring to a reference image provided together with the target image, so that the coloring better suits the user's needs and the working time spent on the user's subsequent corrections is reduced.
  • the method of completing the coloring of a target image to be colored using a reference image according to one embodiment of the present invention includes: generating at least one target mask including a partial region of the target image using a learned first artificial neural network; generating, using the first artificial neural network, at least one reference mask corresponding to each of the at least one target mask and including at least a partial region of the reference image; generating at least one colored target mask by coloring each of the at least one target mask with reference to a color of the at least one reference mask; and generating a colored target image from the target image, the at least one target mask, and the at least one colored target mask using a learned second artificial neural network.
  • the first artificial neural network may be a neural network trained to divide the target image into at least one area based on the similarity of the color to be colored in each area, and to generate the at least one target mask, each including one of the divided areas.
  • the at least one target mask may include, for each of a plurality of points constituting the target image, information on whether that point is included in the mask.
  • the first artificial neural network may be a neural network trained to divide the reference image into at least one area based on the similarity of the color to be colored in each area and the shape similarity with the areas included in the at least one target mask, and to generate the at least one reference mask, each including one of the divided areas.
  • the at least one reference mask may include, for each of a plurality of points constituting the reference image, information on whether that point is included in the mask.
  • the second artificial neural network may be a neural network trained to generate the colored target image from the at least one colored target mask by referring to the target image and the at least one target mask.
  • the second artificial neural network may be a neural network trained to generate the colored target image from the at least one colored target mask by referring to the target image and the at least one target mask, and to generate the colored target image by applying a predetermined image effect to the at least one colored target mask.
  • the predetermined image effect may be an effect such that the color difference between each pixel included in the at least one colored target mask and its adjacent pixels becomes equal to or less than a predetermined threshold difference.
  • the generating of the colored target mask may include determining a representative color of a region included in a first reference mask according to a predetermined method, and generating the colored target mask by setting the representative color as the color of the target mask corresponding to the first reference mask.
  • the predetermined method may be a method of determining the average color of the region included in the first reference mask as the representative color.
  • the apparatus for completing the coloring of a target image according to one embodiment includes a processor, and the processor generates, using a learned first artificial neural network, at least one target mask including a partial region of the target image; generates, using the first artificial neural network, at least one reference mask corresponding to each of the at least one target mask and including at least a partial region of the reference image; generates at least one colored target mask by coloring each of the at least one target mask with reference to the color of the at least one reference mask; and generates, using a learned second artificial neural network, a colored target image from the target image, the at least one target mask, and the at least one colored target mask.
  • the first artificial neural network may be a neural network trained to divide the target image into at least one area based on the similarity of the color to be colored in each area, and to generate the at least one target mask, each including one of the divided areas.
  • the at least one target mask may include, for each of a plurality of points constituting the target image, information on whether that point is included in the mask.
  • the first artificial neural network may be a neural network trained to divide the reference image into at least one area based on the similarity of the color to be colored in each area and the shape similarity with the areas included in the at least one target mask, and to generate the at least one reference mask, each including one of the divided areas.
  • the at least one reference mask may include, for each of a plurality of points constituting the reference image, information on whether that point is included in the mask.
  • the second artificial neural network may be a neural network trained to generate the colored target image from the at least one colored target mask by referring to the target image and the at least one target mask.
  • the second artificial neural network may be a neural network trained to generate the colored target image from the at least one colored target mask by referring to the target image and the at least one target mask, and to generate the colored target image by applying a predetermined image effect to the at least one colored target mask.
  • the predetermined image effect may be an effect such that the color difference between each pixel included in the at least one colored target mask and its adjacent pixels becomes equal to or less than a predetermined threshold difference.
  • the processor may determine a representative color of an area included in the first reference mask according to a predetermined method, and may generate the colored target mask by setting the representative color as the color of the target mask corresponding to the first reference mask.
  • the predetermined method may be a method of determining the average color of the region included in the first reference mask as the representative color.
  • online content can be produced more efficiently.
  • in the production of content that requires the user's coloring, such as a webtoon, the time the user spends on coloring can be reduced, so that the content can be produced faster.
  • FIG. 1 is a diagram illustrating an example of a network environment according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating the internal configuration of the user terminal 100 and the server 200 according to an embodiment of the present invention.
  • FIG. 3 is a diagram for explaining the structure of an exemplary artificial neural network learned by the processor 212.
  • FIG. 4 is a diagram illustrating an example of a target image.
  • FIGS. 5A to 5D are diagrams illustrating exemplary target masks generated by the processor 212 from the target image of FIG. 4 according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating an example of a reference image.
  • FIGS. 7A to 7D are diagrams illustrating exemplary reference masks generated by the processor 212 from the reference image of FIG. 6.
  • FIG. 8 is a diagram illustrating an example of a colored target mask.
  • FIGS. 10A and 10B are diagrams schematically illustrating a process of generating a colored target image by the processor 212.
  • FIG. 11 is a flowchart illustrating a method of completing the coloring of a target image performed by the server 200 according to an embodiment of the present invention.
  • FIG. 1 is a diagram illustrating an example of a network environment according to an embodiment of the present invention.
  • the network environment of FIG. 1 shows an example including a plurality of user terminals 101, 102, 103, and 104, a server 200, and a network 300.
  • FIG. 1 is an example for explaining the invention, and the number of user terminals 101, 102, 103, and 104 or the number of servers 200 is not limited to that shown in FIG. 1.
  • the plurality of user terminals 101, 102, 103, and 104 may transmit a target image to be colored and a reference image to be referred to for coloring to the server 200 according to a user's manipulation, and may receive the colored target image from the server 200.
  • the plurality of user terminals 101, 102, 103, and 104 may also determine a target image to be colored and a reference image to be referred to for coloring according to a user's manipulation, and complete the coloring of the target image themselves.
  • the plurality of user terminals 101, 102, 103 and 104 may be a fixed terminal implemented as a computer device or a mobile terminal.
  • Examples of the plurality of user terminals 101, 102, 103, and 104 include smartphones, mobile phones, navigation devices, computers, notebook computers, digital broadcasting terminals, personal digital assistants (PDAs), portable multimedia players (PMPs), and tablet PCs.
  • the plurality of user terminals 101, 102, 103, and 104 may be connected to each other and/or to the server 200 through the network 300 using a wireless or wired communication method.
  • the communication method of the plurality of user terminals 101, 102, 103, and 104 is not limited, and may include not only communication methods using a communication network that the network 300 can include (for example, a mobile communication network, wired Internet, wireless Internet, or a broadcasting network), but also short-range wireless communication between devices.
  • the network 300 may include any one or more of networks such as a Personal Area Network (PAN), a Local Area Network (LAN), a Campus Area Network (CAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a Broadband Network (BBN), and the Internet.
  • the network 300 may include any one or more of network topologies including a bus network, a star network, a ring network, a mesh network, a star-bus network, and a tree or hierarchical network, but is not limited thereto.
  • hereinafter, the user terminals 101, 102, 103, and 104 are collectively referred to as the user terminal 100.
  • the server 200 may receive a target image to be colored and a reference image to be referred to for the coloring of the target image from the user terminal 100 described above, and may color the target image using learned artificial neural networks. This is described in detail later.
  • the server 200 may be implemented as a computer device or a plurality of computer devices that provide commands, codes, files, contents, services, etc. to the user terminal 100 through the network 300.
  • FIG. 2 is a block diagram illustrating the internal configuration of the user terminal 100 and the server 200 according to an embodiment of the present invention.
  • the user terminal 100 and the server 200 may include memories 111 and 211, processors 112 and 212, communication modules 113 and 213, and input/output interfaces 114 and 214.
  • the memories 111 and 211 are computer-readable recording media, and may include a random access memory (RAM), a read only memory (ROM), and a permanent mass storage device such as a disk drive.
  • the memories 111 and 211 may store an operating system and at least one program code (for example, code for a program that is installed in the user terminal 100 to color an image through data transmission/reception with the server 200).
  • These software components may be loaded from a computer-readable recording medium separate from the memories 111 and 211 using a drive mechanism.
  • a separate computer-readable recording medium may include a computer-readable recording medium such as a floppy drive, a disk, a tape, a DVD/CD-ROM drive, and a memory card.
  • software components may be loaded into the memories 111 and 211 through the communication modules 113 and 213 instead of a computer-readable recording medium.
  • at least one program may be loaded into the memories 111 and 211 based on a program installed by files provided through the network 300 by a file distribution system (for example, the server 200 described above) that distributes installation files of developers or applications.
  • the processors 112 and 212 may be configured to process instructions of a computer program by performing basic arithmetic, logic, and input/output operations.
  • the instructions may be provided to the processors 112 and 212 by the memories 111 and 211 or the communication modules 113 and 213.
  • the processors 112 and 212 may be configured to execute an instruction received according to a program code stored in a recording device such as the memories 111 and 211.
  • the communication modules 113 and 213 may provide a function for the user terminal 100 and the server 200 to communicate with each other through the network 300, and a function to communicate with other user terminals (not shown) or other servers (not shown).
  • a request generated by the processor 112 of the user terminal 100 according to a program code stored in a recording device such as the memory 111 may be transmitted to the server 200 through the network 300 under the control of the communication module 113.
  • conversely, control signals, commands, contents, files, etc. provided under the control of the processor 212 of the server 200 may be received by the user terminal 100 via the communication module 213 and the network 300, through the communication module 113 of the user terminal 100.
  • the input/output interfaces 114 and 214 may be means for interfacing with the input/output device 115.
  • the input device may include a device such as a keyboard or a mouse, and the output device may include a device such as a display for displaying an image.
  • the input/output interfaces 114 and 214 may be a means for an interface with a device in which input and output functions are integrated into one, such as a touch screen.
  • the user terminal 100 and the server 200 may include more components than those shown in FIG. 2. However, most such prior-art components need not be shown explicitly.
  • the user terminal 100 may be implemented to include at least some of the input/output devices 115 described above, or may further include other components such as a transceiver, a global positioning system (GPS) module, a camera, various sensors, and a database.
  • the processor 212 of the server 200 may complete the coloring of the target image using learned artificial neural networks.
  • a 'target image' is an image to be colored, and may mean an image in which at least some areas are not yet colored (that is, there is no color information for at least some areas).
  • the 'area' may mean a part of an image that is divided by an outline in the image.
  • a 'reference image' is an image to be referred to for the coloring of the above-described target image, and may mean an image in which the coloring is already complete. Such a reference image may be acquired together with the target image and used for coloring the target image.
  • the reference image may be provided by the user.
  • the user may provide a reference image colored in a desired style (i.e., in a desired color combination) together with the target image, so that the target image is colored in a style similar to that of the reference image.
  • the reference image may be provided by the processor 212 based on a predetermined image analysis result.
  • the processor 212 may select any one of a plurality of candidate reference images as the reference image based on the shape similarity between the candidate reference images and the target image (i.e., the similarity of the shape of at least one area constituting each image). In this case, the user can obtain the colored target image by providing only the target image.
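  • As an illustration of this candidate-selection step, the following sketch picks the candidate whose regions best match the target masks; the intersection-over-union measure and the region_masks helper are assumptions made for illustration, not details given in this disclosure.

```python
import numpy as np

def shape_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union of two binary masks (one possible shape measure)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / float(union) if union else 0.0

def select_reference_image(target_masks, candidate_images, region_masks):
    """Pick the candidate image whose regions are most similar in shape to the
    target masks. `region_masks(image)` is a hypothetical helper that returns
    binary region masks for an already-colored candidate image."""
    best_image, best_score = None, -1.0
    for candidate in candidate_images:
        candidate_masks = region_masks(candidate)
        # Score each target mask by its best-matching candidate region,
        # then average over all target masks.
        score = float(np.mean([
            max(shape_similarity(t, c) for c in candidate_masks)
            for t in target_masks
        ]))
        if score > best_score:
            best_image, best_score = candidate, score
    return best_image
```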
  • a 'mask', such as a 'target mask' or a 'reference mask', is an image including at least a partial area of an original image (for example, a reference image or a target image), and may mean an image including information on whether each of a plurality of points constituting the original image is included in the corresponding mask.
  • for example, the first target mask generated from the target image may include, in the form of a ground truth (i.e., as 1 or 0), whether each of a plurality of points constituting the target image is included in the first target mask.
  • similarly, the first reference mask generated from the reference image may include, in the form of a ground truth (i.e., as 1 or 0), whether each of a plurality of points constituting the reference image is included in the first reference mask.
  • a 'colored mask' may mean an image in which color information is further added to the above-described mask.
  • for example, the 'colored first target mask' may mean an image that includes, in the form of a ground truth (i.e., as 1 or 0), whether each of a plurality of points constituting the target image is included in the first target mask, and that further includes color information of the included area.
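  • To make the mask representation above concrete, the minimal sketch below stores a target mask as a binary (1-or-0) array over the image's points and a colored target mask as that array combined with per-pixel color information; the array size and color values are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

# A first target mask over a hypothetical 4x4 target image: 1 where the point
# belongs to the mask, 0 elsewhere (the "ground truth" form described above).
first_target_mask = np.array([[1, 1, 0, 0],
                              [1, 1, 0, 0],
                              [0, 0, 0, 0],
                              [0, 0, 0, 0]], dtype=np.uint8)

# A colored first target mask: the same membership information plus an RGB
# color for the included area (the color value here is arbitrary).
representative_color = np.array([200, 180, 90], dtype=np.uint8)  # assumed RGB
colored_first_target_mask = first_target_mask[..., None] * representative_color

print(colored_first_target_mask.shape)  # (4, 4, 3): membership plus color
```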
  • an 'artificial neural network', such as the first artificial neural network and the second artificial neural network, is a neural network trained appropriately for its use and/or purpose, and may have been trained by machine learning or deep learning.
  • the 'first artificial neural network' may be a neural network trained to divide the target image into at least one area based on the similarity of the color to be colored in each area of the target image, and to generate at least one target mask, each including one of the divided areas.
  • in other words, the first artificial neural network is a neural network trained to generate at least one mask by dividing an input image into a plurality of areas, and it can create masks by dividing the input image into a plurality of areas based on the similarity of the color to be colored.
  • the first artificial neural network may also be a neural network trained to divide the reference image (here, the reference image means an outline image generated from the colored reference image) into at least one area, based on the similarity of the color to be colored in each area of the reference image and the shape similarity with the partial areas of the target image included in the at least one target mask generated by the above-described process, and to generate at least one reference mask, each including one of the divided areas.
  • in other words, similar to the case of the target image, the first artificial neural network is a neural network trained to generate at least one mask by dividing an input image into a plurality of regions, and it can generate masks by dividing the input image into a plurality of regions with reference to the input masks together with the similarity of the color to be colored.
  • the 'second artificial neural network' is a neural network trained to generate a colored target image from at least one colored target mask by referring to the target image to be colored and the at least one target mask generated from the target image.
  • the second artificial neural network may be trained to generate a colored target image by applying a predetermined image effect to at least one colored target mask.
  • the colored target mask is described later.
  • hereinafter, the first artificial neural network and the second artificial neural network are sometimes collectively referred to as an 'artificial neural network'.
  • FIG. 3 is a diagram for explaining the structure of an exemplary artificial neural network learned by the processor 212.
  • the artificial neural network may be a neural network according to a convolutional neural network (CNN) model.
  • the CNN model may be a hierarchical model used to finally extract features of input data by alternately passing the data through a plurality of computational layers (convolutional layers and pooling layers).
  • for example, the processor 212 may generate a convolution layer for extracting feature values of the target image included in the training data, and a pooling layer that constitutes a feature map by combining the extracted feature values.
  • the processor 212 may combine the generated feature maps to generate a fully connected layer that prepares to generate at least one target mask.
  • the processor 212 may calculate an output layer including at least one target mask.
  • in FIG. 3, the input data of the convolution layer (for example, the target image) is divided into 5x7 blocks, unit blocks of size 5x3 are used to generate the convolution layer, and unit blocks of size 1x4 or 1x2 are used to generate the pooling layer; however, this is illustrative and the spirit of the present invention is not limited thereto. The size of the image blocks used to generate each layer may therefore be variously set.
  • FIG. 3 illustrates the structure of an exemplary first artificial neural network as described above, and the structure of the first artificial neural network may be different from FIG. 3 according to the type and/or quantity of input data.
  • the first artificial neural network may include a block for receiving at least one target mask in addition to the reference image as input data of the convolutional layer.
  • the processor 212 may calculate an output layer including at least one reference mask according to the above-described process.
  • in the case of the second artificial neural network, the convolution layer may include blocks for receiving a target image, at least one target mask generated from the target image, and at least one colored target mask.
  • the output layer may include a block for outputting a colored target image.
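  • As a rough illustration of the layer structure described above (convolution and pooling layers followed by a fully connected layer and an output layer that yields masks), the following is a minimal PyTorch sketch; the input resolution, channel counts, and fixed number of masks are assumptions and do not reproduce the block sizes of FIG. 3.

```python
import torch
import torch.nn as nn

class MaskGenerator(nn.Module):
    """Toy stand-in for the first artificial neural network: an input image is
    passed through convolution and pooling layers, a fully connected layer,
    and an output layer producing a fixed number of target masks."""

    def __init__(self, n_masks: int = 4, size: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # size -> size/2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # size/2 -> size/4
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (size // 4) ** 2, n_masks * size * size),
        )
        self.n_masks, self.size = n_masks, size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.fc(self.features(x))
        # Each output channel is one mask; sigmoid gives per-point membership.
        return torch.sigmoid(h.view(-1, self.n_masks, self.size, self.size))

# Example: one 1x32x32 grayscale target image -> four candidate target masks.
masks = MaskGenerator()(torch.rand(1, 1, 32, 32))  # shape (1, 4, 32, 32)
```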
  • Such an artificial neural network may be stored in the above-described memory 211 in the form of coefficients of at least one node constituting the artificial neural network, a weight of the node, and coefficients of a function defining a relationship between a plurality of layers constituting the artificial neural network.
  • the structure of the neural network may also be stored in the memory 211 in the form of a source code and/or a program.
  • the processor 212 may build or train a neural network model by processing training data according to a supervised learning technique.
  • the training data may include a target image and at least one target mask generated from the target image.
  • the processor 212 may repeatedly perform learning so that the first artificial neural network learns a correspondence relationship between the target image and at least one target mask generated from the target image based on the learning data.
  • the processor 212 may train the first artificial neural network to reflect the characteristics of image segmentation using the original image and at least one mask obtained by dividing the original image. Accordingly, the first artificial neural network may be trained to output at least one target mask in response to an input of a target image.
  • the training data may include a reference image, at least one reference mask generated from the reference image, and at least one target mask that is referred to for generation of at least one reference mask.
  • the processor 212 may perform training such that the first artificial neural network learns a correspondence relationship between the reference image and at least one target mask and at least one reference mask generated from the reference image.
  • the processor 212 may further train the first artificial neural network to reflect the characteristics of image segmentation using reference information, by using the original image, the information referenced for segmentation of the original image, and at least one mask obtained by segmenting the original image. Accordingly, the first artificial neural network may be trained to output at least one reference mask with respect to the input of the reference image and at least one target mask. Of course, in this case too, learning can be performed repeatedly.
  • the training data may include a target image, at least one target mask generated from the target image, at least one colored target mask, and a colored target image.
  • the processor 212 may perform learning so that the second artificial neural network learns the correspondence relationship between, on one side, the target image, at least one target mask generated from the target image, and at least one colored target mask and, on the other side, the colored target image. Accordingly, the second artificial neural network may be trained to output a colored target image with respect to the input of the target image, the at least one target mask generated from the target image, and the at least one colored target mask. Of course, in this case too, learning can be performed repeatedly.
  • here, 'training' an artificial neural network means updating the coefficients of at least one node constituting the artificial neural network, the weights of the nodes, and/or the coefficients of the functions defining the relationships between the plurality of layers constituting the artificial neural network.
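  • A single supervised update of such a network might look like the sketch below; the tiny stand-in model, the binary cross-entropy loss, and the Adam optimizer are assumptions chosen so the example runs on its own, and are not details specified by this disclosure.

```python
import torch
import torch.nn as nn

# Stand-in for the first artificial neural network (a real model would follow
# the conv/pooling structure of FIG. 3; this one is deliberately tiny).
model = nn.Sequential(nn.Conv2d(1, 4, kernel_size=3, padding=1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCELoss()  # each point's membership in a mask is 1 or 0

def training_step(target_image: torch.Tensor, true_masks: torch.Tensor) -> float:
    """One supervised update: predict masks for the image, compare them with
    the masks obtained by segmenting the original image, and backpropagate."""
    optimizer.zero_grad()
    loss = criterion(model(target_image), true_masks)
    loss.backward()   # gradients w.r.t. node weights / coefficients
    optimizer.step()  # update, e.g. by gradient descent
    return loss.item()

# Illustrative call with random data: one 1x64x64 image, four 64x64 masks.
loss_value = training_step(torch.rand(1, 1, 64, 64),
                           torch.randint(0, 2, (1, 4, 64, 64)).float())
```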
  • in the present embodiment, the artificial neural network has been described as a neural network according to a convolutional neural network (CNN) model, but this is exemplary and the neural network model is not limited thereto. Therefore, the artificial neural network may be a neural network according to various other types of neural network models.
  • the processor 212 may acquire a target image to be colored.
  • the processor 212 may receive the target image from the user terminal 100 described above, or may read the target image from the memory 211 in the server 200.
  • the processor 212 may acquire a reference image for reference to the coloring of the target image.
  • a reference image may also be obtained by receiving from the user terminal 100 or reading from the memory 211.
  • FIG. 4 is a diagram illustrating an example of a target image.
  • as described above, a 'target image' is an image to be colored, and may mean an image in which at least some areas are not yet colored.
  • the target image may be a scene (or one frame) of a cartoon, or may be various types of images such as posters and illustrations.
  • in the following description, it is assumed that the target image is the image shown in FIG. 4.
  • the processor 212 may generate at least one target mask including a partial region of the acquired target image using the learned first artificial neural network.
  • the first artificial neural network may be an artificial neural network in which features related to image segmentation are learned using an original image and at least one mask obtained by dividing the original image. Accordingly, the processor 212 may generate at least one target mask from the target image using the first artificial neural network. In other words, the processor 212 may input the target image to the first artificial neural network and obtain at least one target mask as the output.
  • FIGS. 5A to 5D are diagrams illustrating exemplary target masks generated by the processor 212 from the target image of FIG. 4 according to an embodiment of the present invention.
  • the processor 212 may generate a first target mask including an upper region of the background as illustrated in FIG. 5A from the target image as illustrated in FIG. 4.
  • areas included in one target mask correspond to areas that can be colored with colors similar to each other. Accordingly, the area included in the mask shown in FIG. 5A (i.e., the white area) may correspond to an area that can be colored with one color or with similar colors.
  • similarly, from the target image shown in FIG. 4, the processor 212 may generate a second target mask including the lower area of the background as shown in FIG. 5B, a third target mask including the speech balloon area as shown in FIG. 5C, and a fourth target mask including the hair area as shown in FIG. 5D.
  • regions included on each mask may correspond to regions that can be colored with similar colors.
  • the target masks shown in FIGS. 5A to 5D are exemplary, and the quantity or specific shape of the target masks is not limited thereto. Accordingly, in addition to the target mask described in FIGS. 5A to 5D, the processor 212 may further generate a mask including a face region, a mask including a body region, and the like.
  • the processor 212 may generate, using the learned first artificial neural network, at least one reference mask corresponding to each of the at least one target mask generated by the above-described process and including at least a partial region of the obtained reference image.
  • as described above, the first artificial neural network is a neural network that has been trained to generate at least one mask by dividing an input image into a plurality of regions with reference to input masks (that is, input target masks).
  • the processor 212 may generate at least one reference mask from the reference image and at least one target mask using the first artificial neural network.
  • the processor 212 may input a reference image and at least one target mask to the first artificial neural network and obtain at least one reference mask as the output.
  • FIG. 6 is a diagram illustrating an example of a reference image.
  • FIGS. 7A to 7D are diagrams illustrating exemplary reference masks generated by the processor 212 from the reference image of FIG. 6.
  • for example, the processor 212 may generate, from the reference image shown in FIG. 6, a first reference mask including the upper area of the background as shown in FIG. 7A, by referring to the target mask shown in FIG. 5A that includes the upper background area of the target image (FIG. 4).
  • likewise, the processor 212 may generate a second reference mask including the lower area of the background as shown in FIG. 7B from the reference image shown in FIG. 6, by referring to the target mask shown in FIG. 5B that includes the lower area of the background.
  • the processor 212 may also generate a reference mask as shown in FIG. 7C by referring to the target mask shown in FIG. 5C that includes the speech balloon area, and may generate a reference mask as shown in FIG. 7D by referring to the target mask shown in FIG. 5D that includes the hair area.
  • the reference masks illustrated in FIGS. 7A to 7D are exemplary, and the quantity or specific shape of the reference masks is not limited thereto. Accordingly, in addition to the reference masks shown in FIGS. 7A to 7D, the processor 212 may further generate a mask including a face region, a mask including a body region, and the like.
  • the processor 212 may generate at least one colored target mask by coloring each of the at least one target mask by referring to the color of at least one reference mask.
  • the processor 212 may determine a representative color of a region included in the reference mask according to a predetermined method.
  • as described above, a 'mask' such as a reference mask includes at least a partial area of the original image, but includes only information on whether each of a plurality of points constituting the original image is included in the corresponding mask, without color information.
  • the processor 212 may refer to the reference mask and the reference image together to determine a representative color of a region included in the reference mask as the color of the reference mask.
  • for example, the processor 212 may refer to the area included in the first reference mask shown in FIG. 7A and determine the average color of the corresponding area in the reference image of FIG. 6 as the representative color of the area included in the first reference mask, that is, as the color of the mask.
  • similarly, the processor 212 may refer to the area included in the second reference mask shown in FIG. 7B and determine the average color of the corresponding area in the reference image of FIG. 6 as the representative color of the area included in the second reference mask (that is, as the color of that mask).
  • the processor 212 may generate a colored target mask by determining the representative color determined by the above-described process as the color of the target mask corresponding to the reference mask. In other words, the processor 212 may generate a colored target mask by coloring the target mask with a color of a reference mask corresponding to each target mask.
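  • A minimal sketch of this coloring step, assuming the reference image is held as an RGB array and using the average-color rule described above (the helper names are illustrative):

```python
import numpy as np

def representative_color(reference_image: np.ndarray, reference_mask: np.ndarray) -> np.ndarray:
    """Average RGB color of the reference-image area covered by the reference mask."""
    region = reference_mask.astype(bool)
    return reference_image[region].mean(axis=0)

def color_target_mask(target_mask: np.ndarray, color: np.ndarray) -> np.ndarray:
    """Fill the target mask's area with the representative color to obtain a colored target mask."""
    colored = np.zeros(target_mask.shape + (3,), dtype=np.float32)
    colored[target_mask.astype(bool)] = color
    return colored

# Usage for one corresponding (reference mask, target mask) pair:
# colored = color_target_mask(target_mask,
#                             representative_color(reference_image, reference_mask))
```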
  • FIG. 8 is a diagram illustrating an example of a colored target mask.
  • as described above, the processor 212 may refer to the area included in the first reference mask shown in FIG. 7A and determine the average color of the corresponding area in the reference image of FIG. 6 as the representative color of the area included in the first reference mask.
  • the processor 212 may then generate a colored target mask as shown in FIG. 8 by coloring the first target mask shown in FIG. 5A with the determined representative color, in consideration of the correspondence between the reference mask and the target mask.
  • in the same manner, the processor 212 may generate colored target masks, like the one shown in FIG. 8, by coloring each of the target masks shown in FIGS. 5B, 5C, and 5D with the representative color determined for each.
  • the target mask colored by the processor 212 may be used to generate a colored target image.
  • the processor 212 may generate a colored target image from a target image, at least one target mask, and at least one colored target mask using the learned second artificial neural network.
  • the second artificial neural network may be a neural network that has been trained to generate a colored target image from at least one colored target mask by referring to the target image to be colored and the at least one target mask generated from the target image.
  • the second artificial neural network may be a neural network trained to generate a colored target image from at least one colored target mask.
  • in other words, the processor 212 may generate a colored target image by merging at least one colored target mask using the second artificial neural network, and may generate the colored target image by applying a predetermined image effect to the at least one colored target mask.
  • the predetermined image effect may be an effect such that a color difference between pixels included in each of the at least one colored target mask and an adjacent pixel becomes less than or equal to a predetermined threshold difference.
  • this image effect may serve to mitigate unnatural parts included in the image generated by the second artificial neural network (e.g., borders generated by checkerboard artifacts) or unnatural parts caused by the merging of a plurality of masks, thereby improving the completeness of the colored target image.
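  • One way such an effect could be approximated, sketched here purely as an assumption (the disclosure does not specify the operator), is to repeatedly blend each pixel with its neighbors until adjacent color differences fall below the threshold. In practice, such smoothing might be applied only near mask borders so that large uniform regions keep their colors.

```python
import numpy as np

def soften_color_edges(image: np.ndarray, threshold: float = 8.0,
                       max_iters: int = 50) -> np.ndarray:
    """Repeatedly average each pixel with its four neighbors until the color
    difference between adjacent pixels is at most `threshold` (or the
    iteration budget runs out). `image` is an HxWx3 array of color values."""
    out = image.astype(np.float32).copy()
    for _ in range(max_iters):
        horizontal = np.abs(np.diff(out, axis=1)).max()
        vertical = np.abs(np.diff(out, axis=0)).max()
        if max(horizontal, vertical) <= threshold:
            break
        padded = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode="edge")
        out = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:] + out) / 5.0
    return out
```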
  • in this way, regions corresponding to each other in the target image and the reference image may be colored with the same color by the first artificial neural network and the second artificial neural network.
  • in particular, in the automatic coloring of an image not performed by the user, the coloring is performed by referring to the reference image provided together with the target image, so that coloring better suited to the user's needs can be performed and the working time spent on the user's subsequent corrections can be reduced.
  • the processor 212 may generate at least one target mask (FIGS. 5A to 5D) from the target image (FIG. 4) using the first artificial neural network.
  • the processor 212 may generate a reference image from which coloring is removed (ie, an image in which only outlines remain) from the reference image (FIG. 6 ).
  • the processor 212 may use the first artificial neural network to generate at least one reference mask (FIGS. 7A to 7D) from the reference image from which coloring has been removed.
  • in doing so, the processor refers to the at least one target mask (FIGS. 5A to 5D) generated previously, and generates at least one reference mask (FIGS. 7A to 7D) corresponding to each of the at least one target mask.
  • the processor 212 may then generate at least one colored target mask by referring to the at least one target mask, the at least one reference mask, and the reference image generated by the above-described process.
  • the processor 212 may color each target mask based on an average color on the reference image of an area included in the reference mask corresponding to each target mask (a partial area of the reference image).
  • finally, the processor 212 may use the second artificial neural network to generate a colored target image from the target image, the at least one target mask, and the at least one colored target mask.
  • the present invention can generate a target image that is automatically colored in a style similar to the reference image provided with it.
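  • Putting the steps of FIGS. 10A and 10B together, the overall flow can be summarized by the pseudo-pipeline below; every callable here is a hypothetical helper standing in for the components described above, not an API defined by this disclosure.

```python
def color_target_image(target_image, reference_image,
                       first_ann, second_ann, remove_coloring,
                       representative_color, color_target_mask):
    """High-level flow of FIGS. 10A and 10B under assumed helper signatures:
    `first_ann(image, guide_masks=None)` returns a list of masks and
    `second_ann(image, masks, colored_masks)` returns the colored image."""
    # 1. Segment the target image into target masks.
    target_masks = first_ann(target_image)

    # 2. Remove the coloring from the reference image (keep only outlines) and
    #    segment it into reference masks corresponding to the target masks.
    outline_reference = remove_coloring(reference_image)
    reference_masks = first_ann(outline_reference, guide_masks=target_masks)

    # 3. Color each target mask with the representative (e.g. average) color
    #    of the corresponding reference-mask area in the reference image.
    colored_masks = [
        color_target_mask(t, representative_color(reference_image, r))
        for t, r in zip(target_masks, reference_masks)
    ]

    # 4. Merge everything with the second network into the colored target image.
    return second_ann(target_image, target_masks, colored_masks)
```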
  • FIG. 11 is a flowchart illustrating a method of completing coloring of a target image performed by the server 200 according to an embodiment of the present invention.
  • description will be made with reference to FIGS. 1 to 10B together, but descriptions of contents overlapping with FIGS. 1 to 10B will be omitted.
  • first, the server 200 may generate, using the learned first artificial neural network, at least one target mask including a partial region of an acquired target image (S111).
  • as described above, the first artificial neural network may be an artificial neural network in which features related to image segmentation are learned using an original image and at least one mask obtained by dividing the original image. Accordingly, the server 200 may generate at least one target mask from the target image using the first artificial neural network. In other words, the server 200 may input the target image to the first artificial neural network and obtain at least one target mask as its output.
  • the server 200 may generate a first target mask including an upper region of the background as illustrated in FIG. 5A from the target image as illustrated in FIG. 4.
  • areas included in one target mask correspond to areas that can be colored with colors similar to each other. Accordingly, the area included in the mask shown in FIG. 5A (i.e., the white area) may correspond to an area that can be colored with one color or with similar colors.
  • similarly, from the target image shown in FIG. 4, the server 200 may generate a second target mask including the lower area of the background as shown in FIG. 5B, a third target mask including the speech balloon area as shown in FIG. 5C, and a fourth target mask including the hair area as shown in FIG. 5D.
  • regions included on each mask may correspond to regions that can be colored with similar colors.
  • the server 200 may further generate a mask including a face region, a mask including a body region, and the like, in addition to the target mask described in FIGS. 5A to 5D.
  • the server 200 may generate, using the learned first artificial neural network, at least one reference mask corresponding to each of the at least one target mask generated by the above-described process and including at least a partial region of the obtained reference image (S112).
  • as described above, the first artificial neural network is a neural network that has been trained to generate at least one mask by dividing an input image into a plurality of regions with reference to input masks (that is, input target masks).
  • the server 200 may generate at least one reference mask from the reference image and at least one target mask using the first artificial neural network.
  • in other words, the server 200 may input the reference image and the at least one target mask to the first artificial neural network and obtain at least one reference mask as its output.
  • for example, the server 200 may generate, from the reference image shown in FIG. 6, a first reference mask including the upper area of the background as shown in FIG. 7A, by referring to the target mask shown in FIG. 5A that includes the upper background area of the target image (FIG. 4).
  • likewise, the server 200 may generate a second reference mask including the lower area of the background as shown in FIG. 7B from the reference image of FIG. 6, by referring to the target mask shown in FIG. 5B that includes the lower area of the background.
  • the server 200 may also generate a reference mask as shown in FIG. 7C by referring to the target mask shown in FIG. 5C that includes the speech bubble area, and may generate a reference mask as shown in FIG. 7D by referring to the target mask shown in FIG. 5D that includes the hair area.
  • the reference masks illustrated in FIGS. 7A to 7D are exemplary, and the quantity or specific shape of the reference masks is not limited thereto. Accordingly, in addition to the reference masks shown in FIGS. 7A to 7D, the server 200 may further generate a mask including a face area, a mask including a body area, and the like.
  • the server 200 may generate at least one colored target mask by coloring each of the at least one target mask by referring to the color of at least one reference mask (S113).
  • the server 200 may determine a representative color of an area included in the reference mask according to a predetermined method.
  • as described above, a 'mask' such as a reference mask includes at least a partial area of the original image, but includes only information on whether each of a plurality of points constituting the original image is included in the corresponding mask, without color information.
  • the server 200 may refer to the reference mask and the reference image together to determine a representative color of an area included in the reference mask as the color of the reference mask.
  • for example, the server 200 may refer to the area included in the first reference mask shown in FIG. 7A and determine the average color of the corresponding area in the reference image of FIG. 6 as the representative color of the area included in the first reference mask, that is, as the color of the mask.
  • similarly, the server 200 may refer to the area included in the second reference mask shown in FIG. 7B and determine the average color of the corresponding area in the reference image of FIG. 6 as the representative color of the area included in the second reference mask (that is, as the color of that mask).
  • the server 200 may generate a colored target mask by determining the representative color determined by the above-described process as the color of the target mask corresponding to the reference mask. In other words, the server 200 may generate a colored target mask by coloring the target mask with the color of the reference mask corresponding to each target mask.
  • as described above, the server 200 may refer to the area included in the first reference mask shown in FIG. 7A and determine the average color of the corresponding area in the reference image of FIG. 6 as the representative color of the area included in the first reference mask.
  • the server 200 may then generate a colored target mask as shown in FIG. 8 by coloring the first target mask shown in FIG. 5A with the determined representative color, in consideration of the correspondence between the reference mask and the target mask.
  • in the same manner, the server 200 may generate colored target masks, like the one shown in FIG. 8, by coloring each of the target masks shown in FIGS. 5B, 5C, and 5D with the representative color determined for each.
  • the target mask colored by the server 200 may be used to generate a colored target image.
  • the server 200 may generate a colored target image from the target image, at least one target mask, and at least one colored target mask using the learned second artificial neural network.
  • the second artificial neural network may be a neural network that has been trained to generate a colored target image from at least one colored target mask by referring to the target image to be colored and the at least one target mask generated from the target image.
  • the second artificial neural network may be a neural network trained to generate a colored target image from at least one colored target mask.
  • in other words, the server 200 may generate a colored target image by merging at least one colored target mask using the second artificial neural network, and may generate the colored target image by applying a predetermined image effect to the at least one colored target mask.
  • the predetermined image effect may be an effect such that a color difference between pixels included in each of the at least one colored target mask and an adjacent pixel becomes less than or equal to a predetermined threshold difference.
  • this image effect may serve to mitigate unnatural parts included in the image generated by the second artificial neural network (e.g., borders generated by checkerboard artifacts) or unnatural parts caused by the merging of a plurality of masks, thereby improving the completeness of the colored target image.
  • in this way, regions corresponding to each other in the target image and the reference image may be colored with the same color by the first artificial neural network and the second artificial neural network.
  • in particular, in the automatic coloring of an image not performed by the user, the coloring is performed by referring to the reference image provided together with the target image, so that coloring better suited to the user's needs can be performed and the working time spent on the user's subsequent corrections can be reduced.
  • the user terminal 100 may perform the method of completing the coloring of the target image described in FIG. 11.
  • in other words, the method for completing the coloring of the target image, described in FIGS. 1 to 11 as being performed by the server 200 and/or the processor 212 of the server 200, may be performed by the user terminal 100 and/or the processor 112 of the user terminal 100.
  • in this case, the user terminal 100 may determine the target image and the reference image based on the user's manipulation, and may generate, using the learned first artificial neural network, at least one target mask including at least a partial region of the acquired target image, as described in step S111.
  • the user terminal 100 may also generate, using the learned first artificial neural network, at least one reference mask corresponding to each of the at least one target mask generated by the above-described process and including at least a partial area of the acquired reference image.
  • the user terminal 100 may generate at least one colored target mask by coloring each of the at least one target mask by referring to the color of at least one reference mask.
  • the user terminal 100 may generate a colored target image from the target image, at least one target mask, and at least one colored target mask using the learned second artificial neural network.
  • detailed descriptions of steps S111 to S114 are omitted here, as they overlap with the descriptions above.
  • the apparatus described above may be implemented as a hardware component, a software component, and/or a combination of a hardware component and a software component.
  • the devices and components described in the embodiments may be implemented using one or more general-purpose computers or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
  • the processing device may execute an operating system (OS) and one or more software applications executed on the operating system.
  • the processing device may access, store, manipulate, process, and generate data in response to the execution of software.
  • it can be seen that the processing device may include a plurality of processing elements and/or a plurality of types of processing elements.
  • the processing device may include a plurality of processors or one processor and one controller.
  • other processing configurations are possible, such as a parallel processor.
  • the software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired, or may command the processing device independently or collectively.
  • software and/or data may be permanently or temporarily embodied in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, in order to be interpreted by the processing device or to provide instructions or data to the processing device.
  • the software may be distributed over networked computer systems and stored or executed in a distributed manner. Software and data may be stored on one or more computer-readable recording media.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, and the like alone or in combination.
  • the program instructions recorded on the medium may be specially designed and configured for the embodiment, or may be known and usable to those skilled in computer software.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine language codes such as those produced by a compiler but also high-level language codes that can be executed by a computer using an interpreter or the like.
  • the hardware device described above may be configured to operate as one or more software modules to perform the operation of the embodiment, and vice versa.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

According to one embodiment, the present invention relates to a method for completing the coloring of a target image using a reference image, comprising the steps of: generating at least one target mask including a part of the target image using a first artificial neural network that has been trained; generating, using the first artificial neural network, at least one reference mask that corresponds to the target mask(s) and that includes at least a part of the reference image; generating at least one colored target mask by coloring the target mask(s) with reference to the color of the reference mask(s); and generating a colored target image from the target image, the target mask(s), and the colored target mask(s) using a second artificial neural network that has been trained.
PCT/KR2020/002992 2019-03-05 2020-03-03 Method for completing the coloring of a target image, and associated device and computer program WO2020180084A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/464,899 US20210398331A1 (en) 2019-03-05 2021-09-02 Method for coloring a target image, and device and computer program therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0025357 2019-03-05
KR1020190025357A KR102216749B1 (ko) 2019-03-05 2019-03-05 타겟 이미지의 채색 완성 방법, 장치 및 컴퓨터 프로그램

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/464,899 Continuation US20210398331A1 (en) 2019-03-05 2021-09-02 Method for coloring a target image, and device and computer program therefor

Publications (1)

Publication Number Publication Date
WO2020180084A1 true WO2020180084A1 (fr) 2020-09-10

Family

ID=72337086

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/002992 WO2020180084A1 (fr) 2019-03-05 2020-03-03 Procédé permettant d'achever la coloration d'une image cible, et dispositif et programme informatique associés

Country Status (3)

Country Link
US (1) US20210398331A1 (fr)
KR (1) KR102216749B1 (fr)
WO (1) WO2020180084A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113411550A (zh) * 2020-10-29 2021-09-17 腾讯科技(深圳)有限公司 视频上色方法、装置、设备及存储介质

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102247662B1 (ko) * 2021-01-29 2021-05-03 주식회사 아이코드랩 만화의 스케치 이미지를 자동으로 채색하기 위한 장치 및 방법
KR102470821B1 (ko) * 2022-01-27 2022-11-28 주식회사 위딧 웹툰 배경 이미지 생성 장치
KR102477798B1 (ko) * 2022-01-27 2022-12-15 주식회사 위딧 웹툰 캐릭터 채색 장치
KR102449790B1 (ko) * 2022-02-23 2022-09-30 주식회사 아이코드랩 스케치 이미지 자동 채색 장치 및 방법
KR102449795B1 (ko) * 2022-02-23 2022-09-30 주식회사 아이코드랩 스케치 이미지 자동 채색 장치 및 방법

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010084996A (ko) * 2001-07-09 2001-09-07 한희철 단일 이미지를 이용한 3차원 아바타 제작 방법 및 이를이용한 자판기
KR100478767B1 (ko) * 1998-08-20 2005-03-24 애플 컴퓨터, 인크. 그래픽 렌더링 방법, 컴퓨터 그래픽 파이프라인용 상태 감시 장치 및 3차원 그래픽 렌더링용 계산처리 시스템
KR101676575B1 (ko) * 2015-07-24 2016-11-15 주식회사 카카오 만화 컨텐츠의 공유 영역 추출 장치 및 그 방법

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6459495B1 (en) * 1997-07-15 2002-10-01 Silverbrook Research Pty Ltd Dot center tracking in optical storage systems using ink dots
WO1999023586A2 (fr) * 1997-10-30 1999-05-14 Dr. Baldeweg Gmbh Procede et dispositif de traitement d'objets d'image
US6577826B1 (en) * 2000-03-24 2003-06-10 Fuji Xerox Co., Ltd. Image forming apparatus which sets parameters for the formation of paper
FR2825817B1 (fr) * 2001-06-07 2003-09-19 Commissariat Energie Atomique Procede de traitement d'images pour l'extraction automatique d'elements semantiques
JP5678584B2 (ja) * 2009-12-16 2015-03-04 株式会社リコー 画像処理装置、画像処理方法、およびプログラム
US11087504B2 (en) * 2017-05-19 2021-08-10 Google Llc Transforming grayscale images into color images using deep neural networks
JP7477260B2 (ja) * 2018-01-30 2024-05-01 株式会社Preferred Networks 情報処理装置、情報処理プログラム及び情報処理方法
US20220301227A1 (en) * 2019-09-11 2022-09-22 Google Llc Image colorization using machine learning
US10997752B1 (en) * 2020-03-09 2021-05-04 Adobe Inc. Utilizing a colorization neural network to generate colorized images based on interactive color edges

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100478767B1 (ko) * 1998-08-20 2005-03-24 애플 컴퓨터, 인크. 그래픽 렌더링 방법, 컴퓨터 그래픽 파이프라인용 상태 감시 장치 및 3차원 그래픽 렌더링용 계산처리 시스템
KR20010084996A (ko) * 2001-07-09 2001-09-07 한희철 단일 이미지를 이용한 3차원 아바타 제작 방법 및 이를이용한 자판기
KR101676575B1 (ko) * 2015-07-24 2016-11-15 주식회사 카카오 만화 컨텐츠의 공유 영역 추출 장치 및 그 방법

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUPTA, Raj Kumar et al., "Image colorization using similar images", MM '12: Proceedings of the 20th ACM International Conference on Multimedia, 20 October-02 November 2012, pp. 369-378. See abstract; p. 370; and figure 1. *
ZHANG, Richard; ZHU, Jun-Yan; ISOLA, Phillip; GENG, Xinyang; LIN, Angela S.; YU, Tianhe; EFROS, Alexei A., "Real-Time User-Guided Image Colorization with Learned Deep Priors", arXiv.org, Cornell University Library, 8 May 2017 (2017-05-08), XP080946745 *

Also Published As

Publication number Publication date
KR20200106754A (ko) 2020-09-15
US20210398331A1 (en) 2021-12-23
KR102216749B1 (ko) 2021-02-17

Similar Documents

Publication Publication Date Title
WO2020180084A1 (fr) Procédé permettant d'achever la coloration d'une image cible, et dispositif et programme informatique associés
WO2020091207A1 (fr) Procédé, appareil et programme informatique pour compléter une peinture d'une image et procédé, appareil et programme informatique pour entraîner un réseau neuronal artificiel
WO2019031714A1 (fr) Procédé et appareil de reconnaissance d'objet
WO2019164251A1 (fr) Procédé de réalisation d'apprentissage d'un réseau neuronal profond et appareil associé
WO2021107204A1 (fr) Procédé de modélisation tridimensionnelle pour vêtements
WO2020180134A1 (fr) Système de correction d'image et son procédé de correction d'image
WO2020231226A1 (fr) Procédé de réalisation, par un dispositif électronique, d'une opération de convolution au niveau d'une couche donnée dans un réseau neuronal, et dispositif électronique associé
WO2022050719A1 (fr) Procédé et dispositif de détermination d'un niveau de démence d'un utilisateur
WO2022196945A1 (fr) Appareil pour prévoir une répartition de la population sur la base d'un modèle de simulation de répartition de la population, et procédé de prévision de répartition de la population à l'aide de celui-ci
WO2023171981A1 (fr) Dispositif de gestion de caméra de surveillance
WO2022255529A1 (fr) Procédé d'apprentissage pour générer une vidéo de synchronisation des lèvres sur la base d'un apprentissage automatique et dispositif de génération de vidéo à synchronisation des lèvres pour l'exécuter
WO2022265262A1 (fr) Procédé d'extraction de données pour l'entraînement d'intelligence artificielle basé sur des mégadonnées, et programme informatique enregistré sur support d'enregistrement pour l'exécuter
WO2021040105A1 (fr) Dispositif d'intelligence artificielle générant une table d'entité nommée et procédé associé
WO2020262721A1 (fr) Système de commande pour commander une pluralité de robots par l'intelligence artificielle
WO2019190171A1 (fr) Dispositif électronique et procédé de commande associé
WO2021075758A1 (fr) Appareil électronique et procédé de commande associé
WO2019190142A1 (fr) Procédé et dispositif de traitement d'image
EP3707678A1 (fr) Procédé et dispositif de traitement d'image
EP3659073A1 (fr) Appareil électronique et procédé de commande associé
WO2021107202A1 (fr) Procédé de modélisation tridimensionnelle de vêtement
WO2023022321A1 (fr) Serveur d'apprentissage distribué et procédé d'apprentissage distribué
WO2018182066A1 (fr) Procédé et appareil d'application d'un effet dynamique à une image
WO2021221394A1 (fr) Procédé et dispositif électronique pour une augmentation d'image
WO2020251151A1 (fr) Procédé et appareil d'estimation de la pose d'un utilisateur en utilisant un modèle virtuel d'espace tridimensionnel
WO2024143731A1 (fr) Procédé et système de construction de données de contenu de métavers en temps réel multi-vues sur la base d'une super-résolution sélective

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20766103

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry into European phase

Ref document number: 20766103

Country of ref document: EP

Kind code of ref document: A1