WO2023207500A1 - An image generation method and device - Google Patents
An image generation method and device
- Publication number
- WO2023207500A1 (PCT/CN2023/085006)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- clothing
- image
- target
- target object
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Definitions
- the present disclosure relates to the field of image processing technology, and in particular, to an image generation method and device.
- Virtual fitting refers to using virtual technology to output an image of the effect after a fitting subject tries on new clothing. Because virtual fitting lets users view the effect of new clothing without actually taking off or putting on clothes, it greatly improves fitting efficiency, and it therefore has very broad application prospects.
- embodiments of the present disclosure provide an image generation method, including:
- the first image and the second image are fused to obtain an effect image after the target object wears the virtual clothing corresponding to the target clothing model.
- the method further includes:
- Rendering the target clothing model according to the lighting information and generating the second image includes: rendering the target clothing model according to the clothing state of the target clothing model corresponding to the target object and the lighting information, and generating the second image.
- constructing a first model corresponding to the target object based on the first image includes:
- the first model is constructed based on the body shape and/or posture of the target object.
- the method further includes:
- the body shape and/or posture of the first model are corrected.
- determining the clothing status of the target clothing model corresponding to the target object according to the first model includes:
- the clothing state of the target clothing model corresponding to the target object is determined according to the first model and the second model.
- determining the clothing status of the target clothing model corresponding to the target object based on the first model and the second model includes:
- the model sequence includes a plurality of models, and the plurality of models gradually change from the second model to the first model;
- n is an integer greater than 1;
- the clothing state corresponding to the last model in the model sequence is determined as the clothing state of the target clothing model corresponding to the target object.
- rendering the target clothing model according to the lighting information and generating a second image includes:
- the target clothing model is rendered according to the clothing state of the target clothing model corresponding to the target object and the lighting information, and a second image is generated, including:
- the target clothing model is rendered according to the clothing state of the target clothing model corresponding to the target object, the lighting information, and the material information of the virtual clothing, and the second image is generated.
- before rendering the target clothing model according to the lighting information, the method further includes:
- the clothing model that received the selection operation is determined as the target clothing model.
- the method further includes:
- the effect image is corrected.
- an image generation device including:
- an acquisition unit configured to acquire the first image including the target object
- a processing unit configured to perform illumination estimation on the first image and obtain illumination information of the first image
- a rendering unit configured to render the target clothing model according to the lighting information and generate a second image
- a fusion unit configured to fuse the first image and the second image to obtain an effect image after the target object wears the virtual clothing corresponding to the target clothing model.
- the processing unit is further configured to construct a first model corresponding to the target object based on the first image; and determine the clothing status of the target clothing model corresponding to the target object based on the first model;
- the rendering unit is specifically configured to render the target clothing model according to the clothing state of the target clothing model corresponding to the target object and the lighting information, and generate a second image.
- the processing unit is specifically configured to perform key point detection on the target object to obtain position information of multiple key points of the target object; obtain the body shape and/or posture of the target object based on the position information of the multiple key points; and construct the first model according to the body shape and/or posture of the target object.
- the image generation device further includes:
- a model correction unit configured to receive a correction operation on the body shape and/or posture of the first model, and in response to the correction operation, correct the body shape and/or posture of the first model.
- the processing unit is specifically configured to construct a second model corresponding to the target object according to the initial state of the target clothing model, and determine the clothing state of the target clothing model corresponding to the target object based on the first model and the second model.
- the processing unit is specifically configured to generate a model sequence according to the first model and the second model, where the model sequence includes multiple models that gradually change from the second model to the first model; simulate the target clothing model in its initial state based on the first model in the model sequence to obtain the clothing state corresponding to the first model; simulate the target clothing model in the clothing state corresponding to the (n-1)th model based on the nth model in the model sequence to obtain the clothing state corresponding to the nth model, where n is an integer greater than 1; and determine the clothing state corresponding to the last model in the model sequence as the clothing state of the target clothing model corresponding to the target object.
- the rendering unit is specifically configured to generate a light map corresponding to the first image according to the lighting information, and render the target clothing model according to the light map to generate the second image.
- the rendering unit is specifically configured to obtain material information of the virtual clothing, and render the target clothing model according to the clothing state of the target clothing model corresponding to the target object, the lighting information, and the material information of the virtual clothing to generate the second image.
- the processing unit is further configured to display a clothing selection interface before rendering the target clothing model according to the lighting information, where the clothing selection interface displays at least one clothing model; receive a selection operation input on the clothing selection interface; and determine the clothing model that received the selection operation as the target clothing model.
- the image generation device further includes an effect correction unit configured to receive a correction operation on the effect image, and in response to the correction operation, correct the effect image.
- embodiments of the present disclosure provide an electronic device, including a memory and a processor, where the memory is used to store a computer program, and the processor is used to enable the electronic device, when executing the computer program, to implement the image generation method described in any of the above embodiments.
- embodiments of the present disclosure provide a computer-readable storage medium storing a computer program which, when executed by a computing device, causes the computing device to implement the image generation method described in any of the above embodiments.
- embodiments of the present disclosure provide a computer program product which, when run on a computer, causes the computer to implement the image generation method described in any of the above embodiments.
- the image generation method first obtains a first image including a target object, then performs illumination estimation on the first image to obtain illumination information of the first image, renders the target clothing model according to the illumination information to generate a second image, and fuses the first image and the second image to generate an effect image of the target object wearing the virtual clothing corresponding to the target clothing model.
- Figure 1 is one of the step flow charts of an image generation method provided by an embodiment of the present disclosure
- Figure 2 is the second flow chart of the steps of the image generation method provided by the embodiment of the present disclosure.
- Figure 3 is a schematic diagram of a first model provided by an embodiment of the present disclosure.
- Figure 4 is a schematic diagram of a second model provided by an embodiment of the present disclosure.
- Figure 5 is a schematic diagram of a model sequence provided by an embodiment of the present disclosure.
- Figure 6 is a schematic diagram of a gradient clothing state provided by an embodiment of the present disclosure.
- Figure 7 is one of the structural schematic diagrams of the image generation device provided by an embodiment of the present disclosure.
- Figure 8 is a second structural schematic diagram of an image generation device provided by an embodiment of the present disclosure.
- FIG. 9 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present disclosure.
- words such as “exemplary” or “such as” are used to represent examples, illustrations or explanations. Any embodiment or design described as “exemplary” or “such as” in the embodiments of the present disclosure should not be construed as being preferred or advantageous over other embodiments or designs. Rather, the use of the words “exemplary” or “such as” is intended to present the relevant concept in a concrete manner. Furthermore, in the description of the embodiments of the present disclosure, unless otherwise specified, “plurality” means two or more.
- the commonly used virtual fitting solution is to collect an image of the fitting object and then fuse the image of the fitting object with an image of the virtual clothing to obtain an effect image of the fitting object wearing the virtual clothing.
- the light source information of the image including the fitting object comes from the real environment, whereas the light source information of the image of the virtual clothing is manually configured by the developer, so the two often do not match.
- embodiments of the present disclosure provide an image generation method and device to solve the problem that the mismatch between real light source information and manually configured light source information affects the realism of the effect image.
- Scenario 1 The embodiment of the present disclosure can be applied to clothing try-on on online platforms such as e-commerce, special effects, and short videos.
- the implementation process may include: the user selects the clothing to try on in an application or web page and uploads an image containing the target object; then, through the image generation method and the pre-built clothing model provided by the embodiments of the present disclosure, an effect image of the target object wearing the selected clothing is generated and output.
- Scenario 2 The embodiment of the present disclosure can be applied to clothing trying on offline platforms such as shopping malls and supermarkets.
- the implementation process may include: when the target object wants to try on a certain physical garment, an image acquisition device captures an image containing the target object; then, through the image generation method and the pre-built clothing model provided by the embodiments of the present disclosure, an effect image of the target object wearing the physical clothing is generated and output.
- An embodiment of the present disclosure provides an image generation method.
- the image generation method includes the following steps S11 to S14:
- the implementation of acquiring the first image may include: acquiring an image of a target object through an image acquisition device, and acquiring a first image including the target object.
- the implementation of obtaining the first image may include: receiving a first image uploaded or imported by a user and including the target object.
- the target object in the embodiments of the present disclosure can be any physical entity, such as a person, a pet, or a humanoid clothes stand; the embodiments of the present disclosure do not limit the target object.
- S12. Perform illumination estimation on the first image and obtain illumination information of the first image.
- illumination information of the first image can be obtained by performing illumination estimation on the first image through illumination estimation algorithms such as Gardner’s algorithm, Dominant Light algorithm, and multi-illumination algorithm.
- the embodiments of the present disclosure do not limit the illumination estimation algorithm used to perform illumination estimation on the first image, as long as the illumination map corresponding to the first image can be obtained.
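- As an illustration of illumination estimation, the following is a minimal toy sketch (not the Gardner or Dominant Light algorithm named above): it assumes brighter pixels face the light source and takes the brightness-weighted centroid of a grayscale image, relative to the image center, as a coarse 2D dominant-light direction. All names here are hypothetical.

```python
import numpy as np

def estimate_dominant_light(image: np.ndarray) -> np.ndarray:
    """Coarse 2D dominant-light direction from a grayscale image.

    Toy stand-in for real illumination estimation: the brightness-weighted
    centroid of the pixels, relative to the image center, points roughly
    toward the lit side of the image.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    weights = image.astype(np.float64)
    total = weights.sum()
    cx = (weights * xs).sum() / total - (w - 1) / 2.0
    cy = (weights * ys).sum() / total - (h - 1) / 2.0
    direction = np.array([cx, cy])
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else direction

# synthetic image lit from the right: brightness increases with x
demo = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
print(estimate_dominant_light(demo))  # unit vector pointing toward +x
```

Real systems would instead regress an HDR environment map or spherical-harmonic lighting coefficients, but the input/output contract (image in, lighting parameters out) is the same.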
- the light source position, lighting color and other information when rendering the target clothing model are determined according to the lighting information of the first image, thereby obtaining the second image.
- the target clothing model in the embodiment of the present disclosure refers to the three-dimensional model of the virtual clothing that the target object wants to try on, and the target clothing model can be pre-built by the developer.
- the target clothing model can be a three-dimensional model of clothes, shoes, bags, jewelry, scarves and other clothing.
- the method provided by the embodiment of the present disclosure also needs to determine the target clothing model before the above step S13.
- the target clothing model may be determined based on the selection operation input by the user; and the process of determining the target clothing model based on the selection operation input by the user may include the following steps 1) to 3):
- the clothing selection interface displays at least one clothing model.
- the two-dimensional images of multiple clothing models can be displayed on the clothing selection interface to facilitate user selection.
- Step 2) Receive the selection operation input on the clothing selection interface.
- the selection operation may be a mouse operation, a touch click operation, or a voice command.
- Step 3) Determine the clothing model that received the selection operation as the target clothing model.
- since the second image is obtained by rendering the target clothing model based on the lighting information of the first image, the second image includes the virtual clothing corresponding to the target clothing model.
- the embodiments of the present disclosure do not limit the image fusion algorithm used when fusing the first image and the second image, as long as the first image and the second image can be fused to obtain the effect image.
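- One common fusion choice, sketched below under the assumption that the renderer also outputs a per-pixel coverage mask for the garment, is straightforward alpha compositing of the rendered clothing over the person image (the patent does not fix a particular fusion algorithm):

```python
import numpy as np

def fuse(first: np.ndarray, second: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Alpha-composite the rendered clothing (second) over the person (first).

    `alpha` is a per-pixel coverage mask in [0, 1], e.g. the renderer's
    alpha channel: 1 where the virtual clothing covers the target object,
    0 elsewhere.
    """
    alpha = alpha[..., None]  # broadcast mask over the RGB channels
    return alpha * second + (1.0 - alpha) * first

person = np.zeros((2, 2, 3)); person[..., 0] = 1.0    # red "person" image
clothes = np.zeros((2, 2, 3)); clothes[..., 2] = 1.0  # blue rendered garment
mask = np.array([[1.0, 0.0], [0.0, 0.0]])             # garment covers one pixel
effect = fuse(person, clothes, mask)
print(effect[0, 0], effect[1, 1])  # blue where masked, red elsewhere
```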
- the image generation method provided by the embodiments of the present disclosure first obtains the first image including the target object, then performs illumination estimation on the first image to obtain the illumination information of the first image, renders the target clothing model according to the illumination information of the first image to generate a second image, and fuses the first image and the second image to generate an effect image of the target object wearing the virtual clothing corresponding to the target clothing model. Since the second image is generated by rendering the target clothing model according to the lighting information of the first image, the embodiments of the present disclosure can alleviate the mismatch between the light source information of the first image including the target object and that of the second image including the virtual clothing, thereby improving the realism of the effect image.
- the image generation method includes the following steps S201 to S211:
- S202 Perform illumination estimation on the first image and obtain illumination information of the first image.
- the light map in the embodiment of the present disclosure may be a high dynamic range (High Dynamic Range, HDR) light map.
- step S204 (constructing the first model corresponding to the target object based on the first image) includes the following steps 1 to 3:
- Step 1 Perform key point detection on the target object to obtain position information of multiple key points of the target object.
- different key point detection algorithms can be used to perform key point detection on the target object according to different target objects.
- the limb key point detection algorithm can be used to detect key points such as the head, hands, feet, elbow joints, shoulder joints, and knee joints, and thereby obtain the position information of these key points.
- Step 2 Obtain the body shape and/or posture of the target object according to the position information of the multiple key points.
- the body shape and/or posture of the target object can be obtained according to the relative positions between the multiple key points.
- for example, when the multiple key points include the head, hands, feet, elbow joints, shoulder joints, and knee joints: the relative position between the head key point and the foot key points determines the height of the target object; the relative position between the hand key points and the elbow joint key points determines the arm posture of the target object; and the relative position between the left and right shoulder joint key points determines the shoulder width of the target object.
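- The height and shoulder-width computations above can be sketched as follows; the keypoint names, coordinates, and pixel units are hypothetical, and real pipelines would convert pixel measurements to physical dimensions using a known reference:

```python
import math

# hypothetical keypoint positions in image coordinates (x, y), y grows downward
keypoints = {
    "head": (100.0, 20.0),
    "left_foot": (90.0, 320.0),
    "right_foot": (110.0, 320.0),
    "left_shoulder": (70.0, 80.0),
    "right_shoulder": (130.0, 80.0),
}

def pixel_height(kp):
    """Height in pixels: vertical span from the head to the lower foot."""
    foot_y = max(kp["left_foot"][1], kp["right_foot"][1])
    return foot_y - kp["head"][1]

def shoulder_width(kp):
    """Shoulder width: distance between the two shoulder key points."""
    (lx, ly), (rx, ry) = kp["left_shoulder"], kp["right_shoulder"]
    return math.hypot(rx - lx, ry - ly)

print(pixel_height(keypoints), shoulder_width(keypoints))  # 300.0 60.0
```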
- Step 3 Construct the first model according to the body shape and/or posture of the target object.
- the first model 32 is constructed according to the body shape and/or posture of the target object 31, so the first model 32 is consistent with the body shape and/or posture of the target object 31.
- the above embodiment further receives a correction operation on the body shape and/or posture of the first model, and corrects the body shape and/or posture of the first model in response to the correction operation; therefore, the above embodiment can make the first model better match the body shape and/or posture of the target object.
- the above step S206 (determining the clothing status of the target clothing model corresponding to the target object according to the first model) includes the following steps a and b:
- Step a Construct a second model corresponding to the target object according to the initial state of the target clothing model.
- a model of the target object suitable for the initial state of the target clothing model is constructed as the second model.
- since the second model 42 is constructed according to the initial state of the target clothing model 41, the second model 42 matches the initial state of the target clothing model 41.
- Step b Determine the clothing status of the target clothing model corresponding to the target object according to the first model and the second model.
- step b (determining the clothing status of the target clothing model corresponding to the target object based on the first model and the second model) includes the following steps b1 to b4:
- Step b1 Generate a model sequence according to the first model and the second model.
- the model sequence includes multiple models, and the multiple models gradually change from the second model to the first model.
- for example, if the only difference between the first model 32 and the second model 42 is that the left arm of the first model 32 hangs down naturally while the left arm of the second model 42 is horizontal, and the other parts are the same, then the model sequence generated from the first model 32 and the second model 42 may be as shown in Figure 5: it includes multiple models that gradually change from the second model 42 to the first model 32.
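- A minimal sketch of generating such a model sequence, assuming each model's pose is represented as a flat parameter vector (real systems would interpolate joint rotations, e.g. with quaternions, rather than raw angles):

```python
import numpy as np

def model_sequence(second_pose: np.ndarray, first_pose: np.ndarray, steps: int):
    """Linearly interpolate pose parameters from the second model to the first.

    Returns `steps` poses, the first equal to the second model's pose and
    the last equal to the first model's pose.
    """
    ts = np.linspace(0.0, 1.0, steps)
    return [(1.0 - t) * second_pose + t * first_pose for t in ts]

# left arm: horizontal (90 degrees) in the second model, hanging (0) in the first
second = np.array([90.0])
first = np.array([0.0])
for pose in model_sequence(second, first, 4):
    print(pose)  # 90 -> 60 -> 30 -> 0 degrees
```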
- Step b2 Perform simulation on the target clothing model in the initial state based on the first model in the model sequence, and obtain the clothing state corresponding to the first model.
- Step b3 Based on the nth model in the model sequence, simulate the target clothing model in the clothing state corresponding to the (n-1)th model, and obtain the clothing state corresponding to the nth model.
- n is an integer greater than 1.
- the clothing state of the target clothing model gradually transforms from an initial state (a state matching the second model 42 ) to a state matching the first model 32 .
- simulating the target clothing model based on a model not only includes adapting the target clothing model to the model's body shape and posture, but also includes simulating the wrinkles, drape, and other details of the target clothing model.
- Step b4 Determine the clothing state corresponding to the last model in the model sequence as the clothing state of the target clothing model corresponding to the target object.
- the last model in the model sequence is the first model, so the above step b4 is to determine the clothing state corresponding to the first model as the clothing state of the target clothing model corresponding to the target object.
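- Steps b1 to b4 can be sketched as the loop below. The `simulate_step` function is a toy stand-in for a real cloth simulator (it merely relaxes garment vertices toward positions dictated by the current body model); the essential structure is that each model in the sequence simulates from the previous model's clothing state:

```python
import numpy as np

def simulate_step(cloth: np.ndarray, body_target: np.ndarray) -> np.ndarray:
    """Toy cloth-simulation pass: relax garment vertices halfway toward
    the positions implied by the current body model."""
    return cloth + 0.5 * (body_target - cloth)

def drape_over_sequence(initial_cloth, body_sequence):
    """Carry the clothing state through the model sequence (steps b1-b4):
    each model simulates from the previous model's clothing state, and the
    state after the last model is the clothing state for the first model."""
    state = initial_cloth
    for body in body_sequence:  # second model -> ... -> first model
        state = simulate_step(state, body)
    return state

cloth0 = np.zeros(3)                                 # initial clothing state
bodies = [np.full(3, v) for v in (1.0, 2.0, 3.0)]    # gradually changing models
print(drape_over_sequence(cloth0, bodies))
```

Because each step starts from the previous step's state, no single change is large, which is exactly the abnormality-avoidance argument made in the surrounding text.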
- when the body shape and/or posture of the target object differs significantly from the initial state of the target clothing model, directly transforming the target clothing model from its initial state into the clothing state corresponding to the target object based on the first model would change the clothing state too much at once, causing abnormalities in the target clothing model.
- in the above embodiment, a model sequence that gradually changes from the second model to the first model is generated based on the first model and the second model, and the clothing state of the target clothing model is changed gradually through the models in the sequence; this limits the magnitude of each change and thus avoids the abnormalities caused by excessive changes in the clothing state of the target clothing model.
- the implementation of obtaining the material information of the virtual clothing may include: determining the preset material information as the material information of the virtual clothing.
- the implementation of obtaining the material information of the virtual clothing may include: outputting prompt information for prompting the user to select a material, receiving a selection operation input by the user, and determining the material information of the virtual clothing in response to the user's selection operation.
- S209 Render the target clothing model according to the clothing state of the target clothing model corresponding to the target object, the light map, and the material information, and generate the second image.
- the implementation of outputting the effect image includes: displaying the effect image through a display.
- the implementation of outputting the effect image includes: sending the effect image to a designated device so that the corresponding user can view the effect image.
- an embodiment of the present disclosure also provides an image generation device.
- This embodiment corresponds to the foregoing method embodiment.
- for brevity, this embodiment does not repeat the details of the foregoing method embodiments one by one, but it should be clear that the image generation device in this embodiment can correspondingly implement all the contents of the foregoing method embodiments.
- FIG. 7 is a schematic structural diagram of the image generation device. As shown in Figure 7, the image generation device 700 includes:
- Acquisition unit 71 used to acquire the first image including the target object
- the processing unit 72 is configured to perform illumination estimation on the first image and obtain illumination information of the first image
- Rendering unit 73 configured to render the target clothing model according to the lighting information and generate a second image
- the fusion unit 74 is configured to fuse the first image and the second image to obtain an effect image after the target object wears the virtual clothing corresponding to the target clothing model.
- the processing unit 72 is further configured to construct a first model corresponding to the target object according to the first image; and determine the clothing status of the target clothing model corresponding to the target object according to the first model;
- the rendering unit 73 is specifically configured to render the target clothing model according to the clothing state of the target clothing model corresponding to the target object and the lighting information, and generate a second image.
- the processing unit 72 is specifically configured to perform key point detection on the target object to obtain position information of multiple key points of the target object; obtain the body shape and/or posture of the target object based on the position information of the multiple key points; and construct the first model according to the body shape and/or posture of the target object.
- referring to FIG. 8, the image generation device 800 further includes:
- the model correction unit 75, configured to receive a correction operation on the body shape and/or posture of the first model, and, in response to the correction operation, correct the body shape and/or posture of the first model.
- the processing unit 72 is specifically configured to construct a second model corresponding to the target object according to the initial state of the target clothing model, and determine the clothing state of the target clothing model corresponding to the target object according to the first model and the second model.
- the processing unit 72 is specifically configured to generate a model sequence according to the first model and the second model, the model sequence including multiple models that gradually change, in order, from the second model to the first model; simulate the target clothing model in the initial state based on the first model in the model sequence to obtain the clothing state corresponding to the first model; simulate, based on the n-th model in the model sequence, the target clothing model in the clothing state corresponding to the (n-1)-th model to obtain the clothing state corresponding to the n-th model, n being an integer greater than 1; and determine the clothing state corresponding to the last model in the model sequence as the clothing state of the target clothing model corresponding to the target object.
- the rendering unit 73 is specifically configured to generate a light map corresponding to the first image according to the lighting information, and render the target clothing model according to the light map to generate the second image.
- the rendering unit 73 is specifically configured to obtain material information of the virtual clothing, and render the target clothing model according to the clothing state of the target clothing model corresponding to the target object, the lighting information, and the material information of the virtual clothing to generate the second image.
- the processing unit 72 is also configured to display a clothing selection interface before the target clothing model is rendered according to the lighting information.
- the clothing selection interface displays at least one clothing model; a selection operation input on the clothing selection interface is received; and the clothing model receiving the selection operation is determined as the target clothing model.
- referring to FIG. 8, the image generation device 800 further includes:
- the effect correction unit 76, configured to receive a correction operation on the effect image, and, in response to the correction operation, correct the effect image.
- the image generation device provided in this embodiment can execute the image generation method provided in the above method embodiment. Its implementation principles and technical effects are similar and will not be described again here.
- FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- as shown in Figure 9, the electronic device provided in this embodiment includes: a memory 901 and a processor 902.
- the memory 901 is used to store a computer program; the processor 902 is used to execute, when running the computer program, the image generation method provided by the above embodiments.
- embodiments of the present disclosure also provide a computer-readable storage medium.
- the computer-readable storage medium stores a computer program.
- when the computer program is executed by a computing device, the computing device implements the image generation method provided by the above embodiments.
- embodiments of the present disclosure also provide a computer program product.
- when the computer program product runs on a computer, the computer implements the image generation method provided by the above embodiments.
- embodiments of the present disclosure may be provided as methods, systems, or computer program products. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment that combines software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein.
- the processor can be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- a general-purpose processor may be a microprocessor, or any conventional processor, etc.
- Memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
- Computer-readable media include permanent and non-permanent, removable and non-removable storage media.
- Storage media can implement information storage by any method or technology, and the information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
- As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Finance (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Computer Graphics (AREA)
- Human Computer Interaction (AREA)
- Economics (AREA)
- Marketing (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
Abstract
Embodiments of the present disclosure provide an image generation method and device, relating to the technical field of image processing. The method includes: acquiring a first image including a target object; performing illumination estimation on the first image to obtain illumination information of the first image; rendering a target clothing model according to the illumination information to generate a second image; and fusing the first image and the second image to obtain an effect image of the target object wearing virtual clothing corresponding to the target clothing model.
Description
Cross-reference to related applications
This application is based on, and claims priority to, Chinese application No. 202210476306.9 filed on April 29, 2022, the disclosure of which is incorporated herein by reference in its entirety.
The present disclosure relates to the technical field of image processing, and in particular to an image generation method and device.
Virtual try-on refers to outputting, by virtual technical means, an effect image of a try-on subject after putting on new clothing. Because virtual try-on technology lets users see the effect of new clothing without actually taking off or putting on clothes, it greatly improves try-on efficiency, and therefore has very broad application prospects.
Summary
Embodiments of the present disclosure provide the following technical solutions:
In a first aspect, an embodiment of the present disclosure provides an image generation method, including:
acquiring a first image including a target object;
performing illumination estimation on the first image to obtain illumination information of the first image;
rendering a target clothing model according to the illumination information to generate a second image;
fusing the first image and the second image to obtain an effect image of the target object wearing virtual clothing corresponding to the target clothing model.
As an optional implementation of the embodiment of the present disclosure, the method further includes:
constructing a first model corresponding to the target object according to the first image;
determining, according to the first model, the clothing state of the target clothing model corresponding to the target object;
wherein rendering the target clothing model according to the illumination information to generate the second image includes: rendering the target clothing model according to the clothing state of the target clothing model corresponding to the target object and the illumination information to generate the second image.
As an optional implementation of the embodiment of the present disclosure, constructing the first model corresponding to the target object according to the first image includes:
performing key point detection on the target object to obtain position information of multiple key points of the target object;
obtaining the body shape and/or posture of the target object according to the position information of the multiple key points;
constructing the first model according to the body shape and/or posture of the target object.
As an optional implementation of the embodiment of the present disclosure, the method further includes:
receiving a correction operation on the body shape and/or posture of the first model;
in response to the correction operation on the body shape and/or posture of the first model, correcting the body shape and/or posture of the first model.
As an optional implementation of the embodiment of the present disclosure, determining, according to the first model, the clothing state of the target clothing model corresponding to the target object includes:
constructing a second model corresponding to the target object according to the initial state of the target clothing model;
determining the clothing state of the target clothing model corresponding to the target object according to the first model and the second model.
As an optional implementation of the embodiment of the present disclosure, determining the clothing state of the target clothing model corresponding to the target object according to the first model and the second model includes:
generating a model sequence according to the first model and the second model, the model sequence including multiple models that gradually change, in order, from the second model to the first model;
simulating the target clothing model in the initial state based on the first model in the model sequence to obtain the clothing state corresponding to the first model;
simulating, based on the n-th model in the model sequence, the target clothing model in the clothing state corresponding to the (n-1)-th model to obtain the clothing state corresponding to the n-th model, where n is an integer greater than 1;
determining the clothing state corresponding to the last model in the model sequence as the clothing state of the target clothing model corresponding to the target object.
As an optional implementation of the embodiment of the present disclosure, rendering the target clothing model according to the illumination information to generate the second image includes:
generating a light map corresponding to the first image according to the illumination information;
rendering the target clothing model according to the light map to generate the second image.
As an optional implementation of the embodiment of the present disclosure, rendering the target clothing model according to the clothing state of the target clothing model corresponding to the target object and the illumination information to generate the second image includes:
obtaining material information of the virtual clothing;
rendering the target clothing model according to the clothing state of the target clothing model corresponding to the target object, the illumination information, and the material information of the virtual clothing to generate the second image.
As an optional implementation of the embodiment of the present disclosure, before rendering the target clothing model according to the illumination information, the method further includes:
displaying a clothing selection interface on which at least one clothing model is displayed;
receiving a selection operation input on the clothing selection interface;
determining the clothing model receiving the selection operation as the target clothing model.
As an optional implementation of the embodiment of the present disclosure, the method further includes:
receiving a correction operation on the effect image;
in response to the correction operation on the effect image, correcting the effect image.
In a second aspect, an embodiment of the present disclosure provides an image generation device, including:
an acquisition unit configured to acquire a first image including a target object;
a processing unit configured to perform illumination estimation on the first image to obtain illumination information of the first image;
a rendering unit configured to render a target clothing model according to the illumination information to generate a second image;
a fusion unit configured to fuse the first image and the second image to obtain an effect image of the target object wearing virtual clothing corresponding to the target clothing model.
As an optional implementation of the embodiment of the present disclosure,
the processing unit is further configured to construct a first model corresponding to the target object according to the first image, and to determine, according to the first model, the clothing state of the target clothing model corresponding to the target object;
the rendering unit is specifically configured to render the target clothing model according to the clothing state of the target clothing model corresponding to the target object and the illumination information to generate the second image.
As an optional implementation of the embodiment of the present disclosure,
the processing unit is specifically configured to perform key point detection on the target object to obtain position information of multiple key points of the target object; obtain the body shape and/or posture of the target object according to the position information of the multiple key points; and construct the first model according to the body shape and/or posture of the target object.
As an optional implementation of the embodiment of the present disclosure, the image generation device further includes:
a model correction unit configured to receive a correction operation on the body shape and/or posture of the first model, and to correct the body shape and/or posture of the first model in response to the correction operation.
As an optional implementation of the embodiment of the present disclosure, the processing unit is specifically configured to construct a second model corresponding to the target object according to the initial state of the target clothing model, and to determine the clothing state of the target clothing model corresponding to the target object according to the first model and the second model.
As an optional implementation of the embodiment of the present disclosure, the processing unit is specifically configured to generate a model sequence according to the first model and the second model, the model sequence including multiple models that gradually change, in order, from the second model to the first model; simulate the target clothing model in the initial state based on the first model in the model sequence to obtain the clothing state corresponding to the first model; simulate, based on the n-th model in the model sequence, the target clothing model in the clothing state corresponding to the (n-1)-th model to obtain the clothing state corresponding to the n-th model, where n is an integer greater than 1; and determine the clothing state corresponding to the last model in the model sequence as the clothing state of the target clothing model corresponding to the target object.
As an optional implementation of the embodiment of the present disclosure, the rendering unit is specifically configured to generate a light map corresponding to the first image according to the illumination information, and to render the target clothing model according to the light map to generate the second image.
As an optional implementation of the embodiment of the present disclosure, the rendering unit is specifically configured to obtain material information of the virtual clothing, and to render the target clothing model according to the clothing state of the target clothing model corresponding to the target object, the illumination information, and the material information of the virtual clothing to generate the second image.
As an optional implementation of the embodiment of the present disclosure, the processing unit is further configured to display, before the target clothing model is rendered according to the illumination information, a clothing selection interface on which at least one clothing model is displayed; receive a selection operation input on the clothing selection interface; and determine the clothing model receiving the selection operation as the target clothing model.
As an optional implementation of the embodiment of the present disclosure, the image generation device further includes an effect correction unit configured to receive a correction operation on the effect image, and to correct the effect image in response to the correction operation.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a memory and a processor, the memory being configured to store a computer program, and the processor being configured, when executing the computer program, to cause the electronic device to implement the image generation method of any one of the above implementations.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program; when the computer program is executed by a computing device, the computing device implements the image generation method of any one of the above implementations.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product; when the computer program product runs on a computer, the computer implements the image generation method of any one of the above implementations.
The image generation method provided by the embodiments of the present disclosure first acquires a first image including a target object, then performs illumination estimation on the first image to obtain illumination information of the first image, renders a target clothing model according to the illumination information of the first image to generate a second image, and fuses the first image and the second image to generate an effect image of the target object wearing the virtual clothing corresponding to the target clothing model.
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
To explain the technical solutions of the embodiments of the present disclosure or the related art more clearly, the drawings needed in the description of the embodiments or the related art are briefly introduced below; obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a first flowchart of the steps of an image generation method provided by an embodiment of the present disclosure;
FIG. 2 is a second flowchart of the steps of an image generation method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a first model provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a second model provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a model sequence provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of gradually changing clothing states provided by an embodiment of the present disclosure;
FIG. 7 is a first schematic structural diagram of an image generation device provided by an embodiment of the present disclosure;
FIG. 8 is a second schematic structural diagram of an image generation device provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present disclosure.
To make the above objects, features, and advantages of the present disclosure clearer, the solutions of the present disclosure are further described below. It should be noted that, where there is no conflict, the embodiments of the present disclosure and the features therein may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure may also be implemented in ways other than those described herein; obviously, the embodiments in the specification are only a part of the embodiments of the present disclosure, not all of them.
In the embodiments of the present disclosure, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present disclosure should not be construed as preferable to or more advantageous than other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present the relevant concepts in a concrete manner. In addition, in the description of the embodiments of the present disclosure, unless otherwise specified, "multiple" means two or more.
At present, the commonly adopted virtual try-on scheme is: capture an image of the try-on subject, then fuse the image of the try-on subject with an image of the virtual clothing to obtain an effect image of the try-on subject wearing the virtual clothing. However, because the light source information of the image including the try-on subject comes from the real environment while the light source information of the virtual clothing image is manually configured by developers, a mismatch between the real light source information and the manually configured light source information often causes problems such as disordered light source positions and uneven illumination in the effect image, severely impairing the realism of the try-on effect image.
In view of this, embodiments of the present disclosure provide an image generation method and device to solve the problem that a mismatch between real light source information and manually configured light source information impairs the realism of the effect image.
The usage scenarios of the image generation method provided by the embodiments of the present disclosure are described below.
Scenario 1: The embodiments of the present disclosure are applicable to clothing try-on on online platforms such as e-commerce, special-effects, and short-video platforms. The implementation may include: a user selects the clothing to try on in an application or web page and uploads an image containing the target object; then, through the image generation method provided by the embodiments of the present disclosure and a pre-built clothing model, an effect image of the target object wearing the desired clothing is generated and output.
Scenario 2: The embodiments of the present disclosure are applicable to clothing try-on on offline platforms such as shopping malls and supermarkets. The implementation may include: when the target object wants to try on a physical garment, an image acquisition device captures an image containing the target object; then, through the image generation method provided by the embodiments of the present disclosure and a pre-built clothing model, an effect image of the target object wearing the physical garment to be tried on is generated and output.
An embodiment of the present disclosure provides an image generation method. Referring to FIG. 1, the image generation method includes the following steps S11 to S14:
S11. Acquire a first image including a target object.
In some embodiments, acquiring the first image may include: capturing an image of the target object with an image acquisition device to obtain the first image including the target object.
In some embodiments, acquiring the first image may include: receiving a first image, including the target object, uploaded or imported by a user.
It should be noted that the target object in the embodiments of the present disclosure may be any physical object, for example a person, a pet, or a humanoid mannequin; the embodiments of the present disclosure do not limit this.
S12. Perform illumination estimation on the first image to obtain illumination information of the first image.
Exemplarily, illumination estimation may be performed on the first image by illumination estimation algorithms such as Gardner's algorithm, a dominant-light algorithm, or a multi-illumination algorithm, thereby obtaining the illumination information of the first image. The embodiments of the present disclosure do not limit the illumination estimation algorithm used for the first image, as long as the light map corresponding to the first image can be obtained.
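As a rough illustration of what "illumination information" can look like, the following sketch derives an ambient colour, a dominant light colour, and a 2D light-position proxy from a single RGB image. It is a simplistic heuristic, not Gardner's algorithm or any learned dominant-light or multi-illumination estimator; all names here are illustrative.

```python
import numpy as np

def estimate_lighting(image: np.ndarray):
    """Crude lighting estimate from an RGB image with values in [0, 1].

    Returns an ambient colour (mean of the darker half of the pixels),
    a dominant light colour (mean of the brightest decile), and a
    brightness-weighted centroid as a 2D proxy for the light position.
    """
    luma = image @ np.array([0.299, 0.587, 0.114])        # per-pixel luminance
    ambient = image[luma < np.median(luma)].mean(axis=0)  # darker-half mean
    bright = luma >= np.quantile(luma, 0.9)               # brightest 10%
    light_color = image[bright].mean(axis=0)
    ys, xs = np.nonzero(bright)
    w = luma[bright]
    centroid = (float((xs * w).sum() / w.sum()), float((ys * w).sum() / w.sum()))
    return ambient, light_color, centroid

# Example: a synthetic image lit from the upper-left corner.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
img = np.repeat((1.0 - (xx + yy) / (2 * (w - 1)))[..., None], 3, axis=2)
ambient, color, pos = estimate_lighting(img)
```

With this gradient image, the estimated light position lands in the upper-left quadrant and the dominant light colour is brighter than the ambient colour, which is the kind of information a renderer could consume in step S13.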
S13. Render a target clothing model according to the illumination information to generate a second image.
That is, information such as the light source position and light color used when rendering the target clothing model is determined according to the illumination information of the first image, thereby obtaining the second image.
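To illustrate how an estimated light direction and colour could drive garment shading, here is a minimal Lambertian (cosine-law) shading sketch. It stands in for a full renderer; the function and parameter names are illustrative, not part of the disclosed method.

```python
import numpy as np

def shade_lambertian(normals, albedo, light_dir, light_color, ambient):
    """Shade per-pixel surface normals with one directional light.

    normals: (H, W, 3) unit normals of the rendered garment surface.
    albedo:  (H, W, 3) garment base colour (material information).
    Returns an (H, W, 3) image: ambient term plus Lambert's cosine-law
    diffuse term, clipped to [0, 1].
    """
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    n_dot_l = np.clip(normals @ l, 0.0, None)[..., None]  # cosine factor
    return np.clip(albedo * (ambient + n_dot_l * light_color), 0.0, 1.0)

# A flat patch facing the camera, lit head-on versus edge-on.
normals = np.tile(np.array([0.0, 0.0, 1.0]), (4, 4, 1))
albedo = np.full((4, 4, 3), 0.8)
lit = shade_lambertian(normals, albedo, [0, 0, 1], np.ones(3), 0.1)
grazing = shade_lambertian(normals, albedo, [1, 0, 0], np.ones(3), 0.1)
```

The head-on patch comes out bright while the edge-on patch receives only the ambient term, which is the basic mechanism by which a light position estimated from the first image changes the appearance of the second image.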
The target clothing model in the embodiments of the present disclosure refers to the three-dimensional model of the virtual clothing that the target object wants to try on; the target clothing model may be pre-built by developers. Specifically, the target clothing model may be a three-dimensional model of clothing such as garments, shoes, bags, jewelry, or scarves.
Since the target clothing model needs to be rendered in step S13, the method provided by the embodiments of the present disclosure also needs to determine the target clothing model before step S13. In some embodiments, the target clothing model may be determined based on a selection operation input by a user, and the process of determining the target clothing model based on the user's selection operation may include the following steps 1) to 3):
Step 1). Display a clothing selection interface.
The clothing selection interface displays at least one clothing model.
Specifically, two-dimensional images of multiple clothing models may be displayed on the clothing selection interface for the user to choose from.
Step 2). Receive a selection operation input on the clothing selection interface.
Exemplarily, the selection operation may be a mouse operation, a touch/tap operation, or a voice instruction.
Step 3). Determine the clothing model receiving the selection operation as the target clothing model.
It should be noted that, since the second image is obtained by rendering the target clothing model according to the illumination information of the first image, the second image includes the virtual clothing corresponding to the target clothing model.
S14. Fuse the first image and the second image to obtain an effect image of the target object wearing the virtual clothing corresponding to the target clothing model.
The embodiments of the present disclosure do not limit the image fusion algorithm used to fuse the first image and the second image, as long as the first image and the second image can be fused to obtain the effect image.
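Since the disclosure leaves the fusion algorithm open, alpha compositing with a garment mask is one minimal sketch of such a fusion. The mask input is an assumption for illustration, not something the method above mandates.

```python
import numpy as np

def fuse(first_image, second_image, garment_mask):
    """Overlay the rendered garment (second image) onto the photo.

    garment_mask: (H, W) values in [0, 1]; 1 where the virtual garment
    covers the target object, 0 elsewhere. Soft edges blend smoothly.
    """
    a = garment_mask[..., None]
    return a * second_image + (1.0 - a) * first_image

photo = np.zeros((2, 2, 3))      # stand-in for the first image
garment = np.ones((2, 2, 3))     # stand-in for the rendered second image
mask = np.array([[1.0, 0.0], [0.5, 0.0]])
effect = fuse(photo, garment, mask)
```

Pixels where the mask is 1 show the rendered garment, pixels where it is 0 keep the original photo, and fractional mask values blend the two, giving a soft garment boundary in the effect image.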
The image generation method provided by the embodiments of the present disclosure first acquires a first image including a target object, then performs illumination estimation on the first image to obtain illumination information of the first image, renders a target clothing model according to the illumination information of the first image to generate a second image, and fuses the first image and the second image to generate an effect image of the target object wearing the virtual clothing corresponding to the target clothing model. Since the second image in the image generation method provided by the embodiments of the present disclosure is generated by rendering the target clothing model according to the illumination information of the first image, the embodiments of the present disclosure can alleviate the mismatch between the light source information of the first image including the target object and that of the second image including the virtual clothing, which would otherwise impair the realism of the effect image.
As an extension and refinement of the above embodiment, an embodiment of the present disclosure provides another image generation method. Referring to FIG. 2, the image generation method includes the following steps S201 to S213:
S201. Acquire a first image including a target object.
S202. Perform illumination estimation on the first image to obtain illumination information of the first image.
S203. Generate a light map corresponding to the first image according to the illumination information.
As an optional implementation of the embodiment of the present disclosure, the light map in the embodiments of the present disclosure may be a high dynamic range (HDR) light map.
S204. Construct a first model corresponding to the target object according to the first image.
As an optional implementation of the embodiment of the present disclosure, step S204 (constructing the first model corresponding to the target object according to the first image) includes the following steps 1 to 3:
Step 1. Perform key point detection on the target object to obtain position information of multiple key points of the target object.
Specifically, in the embodiments of the present disclosure, different key point detection algorithms may be used for different target objects. For example, when the target object is a person, a body key point detection algorithm may be used to detect key points such as the head, hands, feet, elbow joints, shoulder joints, and knee joints, and thereby obtain the position information of these key points.
Step 2. Obtain the body shape and/or posture of the target object according to the position information of the multiple key points.
Specifically, the body shape and/or posture of the target object may be obtained according to the relative positions of the multiple key points. For example, when the target object is a person and the multiple key points include the head, hands, feet, elbow joints, shoulder joints, and knee joints, the height of the target object may be determined from the relative position of the head key point and the foot key points, the arm posture of the target object may be determined from the relative position of the hand key points and the elbow joint key points, and the shoulder width of the target object may be determined from the relative position of the left shoulder joint key point and the right shoulder joint key point.
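The relative-position reasoning above can be sketched as follows. The keypoint names and coordinates are hypothetical stand-ins for a detector's output, and the derived measures are in image pixels rather than real-world units.

```python
import numpy as np

# Hypothetical 2D keypoint positions (x, y) in image coordinates,
# as a body-keypoint detector might produce (names are illustrative).
keypoints = {
    "head": (100.0, 20.0),
    "left_shoulder": (70.0, 60.0),
    "right_shoulder": (130.0, 60.0),
    "left_foot": (90.0, 380.0),
    "right_foot": (110.0, 380.0),
}

def body_shape(kp):
    """Derive coarse shape measures from relative keypoint positions."""
    foot_y = (kp["left_foot"][1] + kp["right_foot"][1]) / 2
    height = foot_y - kp["head"][1]            # head-to-foot vertical span
    shoulder_width = np.hypot(                 # left-to-right shoulder span
        kp["right_shoulder"][0] - kp["left_shoulder"][0],
        kp["right_shoulder"][1] - kp["left_shoulder"][1],
    )
    return {"height_px": height, "shoulder_width_px": shoulder_width}

shape = body_shape(keypoints)
```

These pixel-space measures (here a 360-pixel height and a 60-pixel shoulder width) are the kind of body-shape parameters from which a first model could then be constructed.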
Step 3. Construct the first model according to the body shape and/or posture of the target object.
Exemplarily, referring to FIG. 3, since the first model 32 is constructed according to the body shape and/or posture of the target object 31, the first model 32 has the same body shape and/or posture as the target object 31.
S205. Receive a correction operation on the body shape and/or posture of the first model.
S206. In response to the correction operation on the body shape and/or posture of the first model, correct the body shape and/or posture of the first model.
Since the above embodiment further receives a correction operation input on the body shape and/or posture of the first model and corrects the body shape and/or posture of the first model in response to the correction operation, it can make the first model match the body shape and/or posture of the target object more closely.
S207. Determine, according to the first model, the clothing state of the target clothing model corresponding to the target object.
As an optional implementation of the embodiment of the present disclosure, step S207 (determining, according to the first model, the clothing state of the target clothing model corresponding to the target object) includes the following steps a and b:
Step a. Construct a second model corresponding to the target object according to the initial state of the target clothing model.
That is, a model of the target object adapted to the initial state of the target clothing model is constructed as the second model.
Exemplarily, referring to FIG. 4, since the second model 42 is constructed according to the initial state of the target clothing model 41, the second model 42 matches the initial state of the target clothing model 41.
Step b. Determine the clothing state of the target clothing model corresponding to the target object according to the first model and the second model.
As an optional implementation of the embodiment of the present disclosure, step b (determining the clothing state of the target clothing model corresponding to the target object according to the first model and the second model) includes the following steps b1 to b4:
Step b1. Generate a model sequence according to the first model and the second model.
The model sequence includes multiple models, and the multiple models gradually change, in order, from the second model to the first model.
Following the above example, the first model 32 differs from the second model 42 only in that the left arm of the first model 32 hangs naturally while the left arm of the second model 42 is held horizontally, with all other parts the same. Therefore, the model sequence generated from the first model 32 and the second model 42 may be as shown in FIG. 5, including multiple models that gradually change, in order, from the second model 42 to the first model 32.
Step b2. Simulate the target clothing model in the initial state based on the first model in the model sequence to obtain the clothing state corresponding to the first model.
Step b3. Simulate, based on the n-th model in the model sequence, the target clothing model in the clothing state corresponding to the (n-1)-th model to obtain the clothing state corresponding to the n-th model.
Here, n is an integer greater than 1.
That is, as shown in FIG. 6, the clothing state of the target clothing model gradually changes from the initial state (the state matching the second model 42) to the state matching the first model 32.
It should be noted that, in the embodiments of the present disclosure, simulating the target clothing model based on a model includes not only adapting the target clothing model to the body shape and posture of that model but also simulating characteristics of the target clothing model such as wrinkles and drape.
Step b4. Determine the clothing state corresponding to the last model in the model sequence as the clothing state of the target clothing model corresponding to the target object.
That is, since the last model in the model sequence is the first model, step b4 amounts to determining the clothing state corresponding to the first model as the clothing state of the target clothing model corresponding to the target object.
When the body shape and/or posture of the target object differs greatly from that of the target clothing model in its initial state, directly transforming the target clothing model from the initial state to the clothing state corresponding to the target object according to the first model would make the clothing state change so drastically that abnormalities appear in the target clothing model. In the above embodiment, a model sequence that gradually changes from the second model to the first model is generated from the first model and the second model, and the clothing state of the target clothing model is changed step by step through the models in the sequence, so that the change at each step remains small; the above embodiment can therefore alleviate abnormalities caused by excessively large changes in the clothing state of the target clothing model.
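The gradual sequence of steps b1 to b4 can be sketched with pose parameters treated as plain vectors. The linear interpolation and the one-line "simulation" step are illustrative placeholders for a real parametric body model and cloth solver, not the disclosed implementation.

```python
import numpy as np

def make_model_sequence(second_model, first_model, steps=5):
    """Interpolate pose/shape parameters from the second model (matching
    the garment's initial state) to the first model (matching the target
    object), yielding the gradually changing model sequence."""
    ts = np.linspace(0.0, 1.0, steps + 1)[1:]  # exclude the start pose
    return [(1 - t) * second_model + t * first_model for t in ts]

def simulate_step(cloth_state, body_model):
    """Placeholder for one cloth-simulation pass: the garment state
    simply relaxes halfway toward the current body model."""
    return cloth_state + 0.5 * (body_model - cloth_state)

# Pose parameters as flat vectors (e.g. joint angles); values illustrative.
second = np.array([90.0, 0.0])   # left arm held horizontally
first = np.array([0.0, 0.0])     # left arm hanging naturally
cloth = second.copy()            # garment initially fits the second model
for body in make_model_sequence(second, first):
    cloth = simulate_step(cloth, body)   # steps b2 and b3
final_state = cloth                      # step b4: last model's state
```

Because each intermediate body model moves only a fraction of the way toward the first model, the garment state never has to absorb the full 90-degree arm change in a single simulation pass, which is the point of steps b1 to b4.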
S208. Obtain material information of the virtual clothing.
In some embodiments, obtaining the material information of the virtual clothing may include: determining preset material information as the material information of the virtual clothing.
In some embodiments, obtaining the material information of the virtual clothing may include: outputting prompt information prompting the user to select a material, receiving a selection operation input by the user, and determining the material information of the virtual clothing in response to the user's selection operation.
S209. Render the target clothing model according to the clothing state of the target clothing model corresponding to the target object, the light map, and the material information to generate the second image.
S210. Fuse the first image and the second image to obtain an effect image of the target object wearing the virtual clothing corresponding to the target clothing model.
S211. Receive a correction operation on the effect image.
S212. In response to the correction operation on the effect image, correct the effect image.
S213. Output the effect image.
In some embodiments, outputting the effect image includes: displaying the effect image on a display.
In some embodiments, outputting the effect image includes: sending the effect image to a designated device so that the corresponding user can view the effect image.
Based on the same inventive concept, as an implementation of the above methods, an embodiment of the present disclosure further provides an image generation device. This embodiment corresponds to the foregoing method embodiments; for ease of reading, the details of the foregoing method embodiments are not repeated one by one here, but it should be clear that the image generation device in this embodiment can correspondingly implement all the contents of the foregoing method embodiments.
An embodiment of the present disclosure provides an image generation device. FIG. 7 is a schematic structural diagram of the image generation device. As shown in FIG. 7, the image generation device 700 includes:
an acquisition unit 71, configured to acquire a first image including a target object;
a processing unit 72, configured to perform illumination estimation on the first image to obtain illumination information of the first image;
a rendering unit 73, configured to render a target clothing model according to the illumination information to generate a second image;
a fusion unit 74, configured to fuse the first image and the second image to obtain an effect image of the target object wearing the virtual clothing corresponding to the target clothing model.
As an optional implementation of the embodiment of the present disclosure,
the processing unit 72 is further configured to construct a first model corresponding to the target object according to the first image, and to determine, according to the first model, the clothing state of the target clothing model corresponding to the target object;
the rendering unit 73 is specifically configured to render the target clothing model according to the clothing state of the target clothing model corresponding to the target object and the illumination information to generate the second image.
As an optional implementation of the embodiment of the present disclosure,
the processing unit 72 is specifically configured to perform key point detection on the target object to obtain position information of multiple key points of the target object; obtain the body shape and/or posture of the target object according to the position information of the multiple key points; and construct the first model according to the body shape and/or posture of the target object.
As an optional implementation of the embodiment of the present disclosure, referring to FIG. 8, the image generation device 800 further includes:
a model correction unit 75, configured to receive a correction operation on the body shape and/or posture of the first model, and to correct the body shape and/or posture of the first model in response to the correction operation.
As an optional implementation of the embodiment of the present disclosure, the processing unit 72 is specifically configured to construct a second model corresponding to the target object according to the initial state of the target clothing model, and to determine the clothing state of the target clothing model corresponding to the target object according to the first model and the second model.
As an optional implementation of the embodiment of the present disclosure, the processing unit 72 is specifically configured to generate a model sequence according to the first model and the second model, the model sequence including multiple models that gradually change, in order, from the second model to the first model; simulate the target clothing model in the initial state based on the first model in the model sequence to obtain the clothing state corresponding to the first model; simulate, based on the n-th model in the model sequence, the target clothing model in the clothing state corresponding to the (n-1)-th model to obtain the clothing state corresponding to the n-th model, where n is an integer greater than 1; and determine the clothing state corresponding to the last model in the model sequence as the clothing state of the target clothing model corresponding to the target object.
As an optional implementation of the embodiment of the present disclosure, the rendering unit 73 is specifically configured to generate a light map corresponding to the first image according to the illumination information, and to render the target clothing model according to the light map to generate the second image.
As an optional implementation of the embodiment of the present disclosure, the rendering unit 73 is specifically configured to obtain material information of the virtual clothing, and to render the target clothing model according to the clothing state of the target clothing model corresponding to the target object, the illumination information, and the material information of the virtual clothing to generate the second image.
As an optional implementation of the embodiment of the present disclosure, the processing unit 72 is further configured to display, before the target clothing model is rendered according to the illumination information, a clothing selection interface on which at least one clothing model is displayed; receive a selection operation input on the clothing selection interface; and determine the clothing model receiving the selection operation as the target clothing model.
As an optional implementation of the embodiment of the present disclosure, referring to FIG. 8, the image generation device 800 further includes:
an effect correction unit 76, configured to receive a correction operation on the effect image, and to correct the effect image in response to the correction operation.
The image generation device provided in this embodiment can execute the image generation method provided in the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
Based on the same inventive concept, an embodiment of the present disclosure further provides an electronic device. FIG. 9 is a schematic structural diagram of the electronic device provided by an embodiment of the present disclosure. As shown in FIG. 9, the electronic device provided by this embodiment includes a memory 901 and a processor 902; the memory 901 is configured to store a computer program, and the processor 902 is configured to execute, when running the computer program, the image generation method provided by the above embodiments.
Based on the same inventive concept, an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a computing device, the computing device implements the image generation method provided by the above embodiments.
Based on the same inventive concept, an embodiment of the present disclosure further provides a computer program product; when the computer program product runs on a computer, the computer implements the image generation method provided by the above embodiments.
Those skilled in the art should understand that embodiments of the present disclosure may be provided as methods, systems, or computer program products. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media containing computer-usable program code.
The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable storage media. Storage media may implement information storage by any method or technology, and the information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present disclosure, not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some or all of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present disclosure.
Claims (15)
- An image generation method, comprising: acquiring a first image including a target object; performing illumination estimation on the first image to obtain illumination information of the first image; rendering a target clothing model according to the illumination information to generate a second image; and fusing the first image and the second image to obtain an effect image of the target object wearing virtual clothing corresponding to the target clothing model.
- The method according to claim 1, wherein the method further comprises: constructing a first model corresponding to the target object according to the first image; and determining, according to the first model, a clothing state of the target clothing model corresponding to the target object; wherein rendering the target clothing model according to the illumination information to generate the second image comprises: rendering the target clothing model according to the clothing state of the target clothing model corresponding to the target object and the illumination information to generate the second image.
- The method according to claim 2, wherein constructing the first model corresponding to the target object according to the first image comprises: performing key point detection on the target object to obtain position information of multiple key points of the target object; obtaining at least one of a body shape and a posture of the target object according to the position information of the multiple key points; and constructing the first model according to the at least one of the body shape and the posture of the target object.
- The method according to claim 3, wherein the method further comprises: receiving a correction operation on the at least one of the body shape and the posture of the first model; and in response to the correction operation, correcting the at least one of the body shape and the posture of the first model.
- The method according to any one of claims 3-4, wherein determining, according to the first model, the clothing state of the target clothing model corresponding to the target object comprises: constructing a second model corresponding to the target object according to an initial state of the target clothing model; and determining the clothing state of the target clothing model corresponding to the target object according to the first model and the second model.
- The method according to claim 5, wherein determining the clothing state of the target clothing model corresponding to the target object according to the first model and the second model comprises: generating a model sequence according to the first model and the second model, the model sequence comprising multiple models that gradually change, in order, from the second model to the first model; simulating the target clothing model in the initial state based on the first model in the model sequence to obtain a clothing state corresponding to the first model; simulating, based on an n-th model in the model sequence, the target clothing model in a clothing state corresponding to an (n-1)-th model to obtain a clothing state corresponding to the n-th model, n being an integer greater than 1; and determining the clothing state corresponding to the last model in the model sequence as the clothing state of the target clothing model corresponding to the target object.
- The method according to any one of claims 1-6, wherein rendering the target clothing model according to the illumination information to generate the second image comprises: generating a light map corresponding to the first image according to the illumination information; and rendering the target clothing model according to the light map to generate the second image.
- The method according to any one of claims 2-7, wherein rendering the target clothing model according to the clothing state of the target clothing model corresponding to the target object and the illumination information to generate the second image comprises: obtaining material information of the virtual clothing; and rendering the target clothing model according to the clothing state of the target clothing model corresponding to the target object, the illumination information, and the material information of the virtual clothing to generate the second image.
- The method according to any one of claims 1-8, wherein before rendering the target clothing model according to the illumination information, the method further comprises: displaying a clothing selection interface on which at least one clothing model is displayed; receiving a selection operation input on the clothing selection interface; and determining the clothing model receiving the selection operation as the target clothing model.
- The method according to any one of claims 1-9, wherein the method further comprises: receiving a correction operation on the effect image; and in response to the correction operation on the effect image, correcting the effect image.
- An image generation device, comprising: an acquisition unit configured to acquire a first image including a target object; a processing unit configured to perform illumination estimation on the first image to obtain illumination information of the first image; a rendering unit configured to render a target clothing model according to the illumination information to generate a second image; and a fusion unit configured to fuse the first image and the second image to obtain an effect image of the target object wearing virtual clothing corresponding to the target clothing model.
- An electronic device, comprising a memory and a processor, the memory being configured to store a computer program, and the processor being configured, when executing the computer program, to cause the electronic device to implement the image generation method according to any one of claims 1-10.
- A computer-readable storage medium storing a computer program which, when executed by a computing device, causes the computing device to implement the image generation method according to any one of claims 1-10.
- A computer program product which, when run on a computer, causes the computer to implement the image generation method according to any one of claims 1-10.
- A computer program comprising instructions which, when executed by a computing device, cause the computing device to perform the method according to any one of claims 1-10.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210476306.9A CN117035894A (zh) | 2022-04-29 | 2022-04-29 | Image generation method and device |
CN202210476306.9 | 2022-04-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023207500A1 true WO2023207500A1 (zh) | 2023-11-02 |
Family
ID=88517356
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/085006 WO2023207500A1 (zh) | 2023-03-30 | Image generation method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN117035894A (zh) |
WO (1) | WO2023207500A1 (zh) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109615683A (zh) * | 2018-08-30 | 2019-04-12 | Guangzhou Duowei Magic Mirror High-Tech Co., Ltd. | 3D game animation model production method based on a 3D clothing model |
CN113191843A (zh) * | 2021-04-28 | 2021-07-30 | Beijing SenseTime Technology Development Co., Ltd. | Simulated clothing try-on method and apparatus, electronic device, and storage medium |
CN114202630A (zh) * | 2020-08-27 | 2022-03-18 | Beijing Momo Information Technology Co., Ltd. | Illumination-matched virtual try-on method, device, and storage medium |
- 2022-04-29: Chinese application CN202210476306.9A filed; published as CN117035894A (pending)
- 2023-03-30: PCT application PCT/CN2023/085006 filed; published as WO2023207500A1
Also Published As
Publication number | Publication date |
---|---|
CN117035894A (zh) | 2023-11-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11662829B2 (en) | Modification of three-dimensional garments using gestures | |
JP7337104B2 (ja) | 拡張現実によるモデル動画多平面インタラクション方法、装置、デバイス及び記憶媒体 | |
US20120086783A1 (en) | System and method for body scanning and avatar creation | |
CN109584377B (zh) | 一种用于呈现增强现实内容的方法与设备 | |
CN109493431B (zh) | 3d模型数据处理方法、装置及系统 | |
US10242498B1 (en) | Physics based garment simulation systems and methods | |
WO2024109522A1 (zh) | 图像处理方法、装置及设备 | |
US10373373B2 (en) | Systems and methods for reducing the stimulation time of physics based garment simulations | |
WO2023207500A1 (zh) | 一种图像生成方法及装置 | |
Chu et al. | A cloud service framework for virtual try-on of footwear in augmented reality | |
Fondevilla et al. | Fashion transfer: Dressing 3d characters from stylized fashion sketches | |
CN114445271B (zh) | 一种生成虚拟试穿3d图像的方法 | |
CN109669541B (zh) | 一种用于配置增强现实内容的方法与设备 | |
CN114452646A (zh) | 虚拟对象透视处理方法、装置及计算机设备 | |
WO2023179341A1 (zh) | 在视频中放置虚拟对象的方法及相关设备 | |
US20220222887A1 (en) | System and method for rendering clothing on a two-dimensional image | |
CN114596412B (zh) | 一种生成虚拟试穿3d图像的方法 | |
US11410361B2 (en) | Digital content editing using a procedural model | |
CN109242941A (zh) | 三维对象合成通过使用视觉引导作为二维数字图像的一部分 | |
Kolivand et al. | Livephantom: Retrieving virtual world light data to real environments | |
KR102541262B1 (ko) | Vr 컨텐츠의 오브젝트 적용 방법, 장치 및 컴퓨터-판독 가능 기록 매체 | |
JP5413188B2 (ja) | 三次元画像処理装置、三次元画像処理方法および三次元画像処理プログラムを記録した媒体 | |
CN118096963A (zh) | 一种图像生成方法及装置 | |
Garg et al. | Augmented Reality in E-Commerce: Unveiling the Future of Online Shopping | |
Chaudhuri et al. | View-dependent character animation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23794935 Country of ref document: EP Kind code of ref document: A1 |