WO2020040567A2 - Method and system for generating a real-time 3D avatar for virtual fitting - Google Patents

Method and system for generating a real-time 3D avatar for virtual fitting

Info

Publication number
WO2020040567A2
WO2020040567A2 (application PCT/KR2019/010694)
Authority
WO
WIPO (PCT)
Prior art keywords
face
avatar
video
texture
unit
Prior art date
Application number
PCT/KR2019/010694
Other languages
English (en)
Korean (ko)
Other versions
WO2020040567A3 (fr)
Inventor
나경건
Original Assignee
주식회사 에프엑스기어
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 에프엑스기어 filed Critical 주식회사 에프엑스기어
Publication of WO2020040567A2 publication Critical patent/WO2020040567A2/fr
Publication of WO2020040567A3 publication Critical patent/WO2020040567A3/fr

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Definitions

  • the present invention relates to a method and system for generating a real-time 3D avatar for virtual fitting.
  • the present invention provides a method and system for generating a real-time 3D avatar for a virtual fitting to maximize the matching ratio between the user and the 3D avatar, and to eliminate fitting inaccuracies that reduce the fit of the virtual clothes.
  • a video input unit for receiving a video, a 3D avatar shape determination unit for detecting the user's face in the video and determining the face shape of the 3D avatar based on the detected face, and
  • a texture mapping unit configured to texture-map the detected photorealistic image of the face to the face of the 3D avatar in which the shape is determined.
  • the 3D avatar shape determiner is characterized in that, among the detected facial expressions, only the expressions causing a large change in the facial shape are reflected in the shape of the face of the 3D avatar.
  • the texture mapping unit may texture map the photorealistic image of the face based on a position mapping relationship between the detected face and the shape of the 3D avatar.
  • the face detection, the determination of the shape of the face of the 3D avatar, and the live-action image texture mapping are performed in real time.
  • the video is a live video capturing the user in real time, and the system further comprises a 3D rendering unit for rendering the 3D avatar into a 2D video in real time and an image output unit for outputting the 2D video generated by the 3D rendering unit in real time.
  • the 3D avatar shape determiner may determine the appearance of the face of the 3D avatar in real time based on the detected face.
  • the 3D avatar shape determiner may determine the head pose and expression of the 3D avatar in real time based on the detected face.
  • the texture mapping may be performed for each frame of the video.
  • the 3D avatar shape determiner may include a 2D landmark tracking unit that detects the positions of a plurality of feature points of the face in the video, and may determine the face shape of the 3D avatar based on the positions of the feature points.
  • the texture mapping unit may texture map the photorealistic image of the face based on positions of the plurality of feature points.
  • the 3D avatar shape determiner may include a head pose estimation unit that determines the head pose of the 3D avatar based on the positions of those feature points, among the plurality of feature points, that are less affected by facial expression, and a facial expression tracking unit that determines the facial expression of the 3D face model based on the positions of the plurality of feature points and the determined head pose.
  • the facial expression tracking unit may determine the appearance of the 3D face model in real time based on the position of the plurality of feature points and the determined head pose.
  • the texture mapping unit may include a face texture color adjuster configured to adjust the color of the photorealistic image of the face to be texture-mapped onto the face of the 3D avatar according to a predefined skin color of the 3D avatar.
  • the texture mapping unit may further include a texture UV calculator configured to calculate the U and V values of the portions of the photorealistic image of the face to be texture-mapped to the face of the 3D avatar, and the face texture color adjuster may adjust the color of the photorealistic image of the face based on the calculated U and V values.
  • the face texture color adjustment unit is characterized by adjusting the color of the live-action image of the face by multiplying each pixel value of the live-action image by the skin color value of the 3D avatar and dividing by the low-pass-filtered pixel value of the live-action image.
  • the 3D avatar shape determiner is characterized in that the shape of the portion of the 3D avatar other than the face is generated in real time separately from the video.
  • the texture mapping unit may map a separate texture distinguished from the video in real time to a portion other than a portion of the 3D avatar to which the photo-realistic image of the face is mapped.
  • the 3D rendering unit may further determine lighting information from the photorealistic image of the face and apply a 3D lighting effect, based on the determined lighting information, to the portion of the 3D avatar other than the portion to which the photorealistic image of the face is mapped.
  • the apparatus may further include a virtual fitting unit configured to detect the user's body in the video and to generate the body and clothes of the 3D avatar in real time based on the detected body.
  • the present invention also provides a method in which the video input unit receives a video, the 3D avatar shape determination unit detects the user's face in the video and determines the shape of the face of the 3D avatar based on the detected face, and the texture mapping unit texture-maps the photorealistic image of the detected face onto the face of the 3D avatar in which the shape is determined.
  • the present invention includes a computer readable recording medium having recorded thereon a program for executing the method according to an embodiment of the present invention on a computer.
  • an interactive 3D avatar in which the user's facial expressions and body motions are delicately reflected in real time can be provided, offering a virtual fitting experience with a high matching rate with the user, a realistic sense of wearing the virtual clothes, and a strong feeling of immersion.
  • the face similarity between the 3D avatar and the user can be dramatically increased without a prior face generation process, the full fineness of the facial expression can be expressed, and all effects that can be realized through a 3D face model become possible.
  • FIG. 1 is a view schematically showing the configuration of a 3D avatar generation system according to an embodiment of the present invention.
  • FIG. 2 is a flowchart schematically illustrating a flow of a 3D avatar generating method according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating in more detail the configuration of the 3D avatar generation system shown in FIG. 1.
  • FIG. 4 is a diagram illustrating an example of 68 feature points of a user's face detected by a 2D landmark tracking unit in a video according to an embodiment of the present invention.
  • FIG. 5 is a diagram illustrating an example of an operation of a head pose estimator according to an exemplary embodiment of the present invention.
  • FIG. 6 is a view showing an example of the operation of the facial expression tracking unit according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating an example of an operation of a texture UV calculator according to an exemplary embodiment of the present invention.
  • FIG. 8 is a view showing an example of the operation of the facial texture color adjustment unit according to an embodiment of the present invention.
  • FIG. 9 is a diagram illustrating an example of a face model of a 3D avatar generated in real time by the 3D avatar generating system according to an embodiment of the present invention.
  • FIG. 10 illustrates an example of a virtual fitting in which a face model of a 3D avatar generated in real time by the 3D avatar generating system according to an embodiment of the present invention is reflected.
  • the 3D avatar generation system 100 includes a video input unit 110, a 3D avatar shape determiner 120, and a texture mapping unit 130.
  • the video input unit 110 receives a video photographing the user (S210).
  • the video may be a live video photographing a user in real time.
  • the virtual fitting system typically captures the user's appearance in real time, and outputs an image in which the virtual clothes are reflected in the captured user's appearance and movement to the user in real time, thereby obtaining an effect as if the user is standing in front of a mirror.
  • 'live video shooting a user in real time' is a concept that is distinguished from a pre-recorded video, and means a video recording a current state of a user in real time.
  • the video is not traditional 3D scanned data but a conventional 2D video. However, the video may include a stereo 3D video including two 2D videos.
  • the 3D avatar shape determiner 120 detects the user's face (face) in the video and determines the spatial features of the face of the 3D avatar based on the detected face (S220).
  • the spatial features of the face may include the shape of the face and the pose of the face.
  • the shape of the face can be determined by the user's own appearance and expression.
  • the pose of the face can be determined by the spatial arrangement of the user's head, ie the position and orientation of the user's head. This spatial arrangement of the user's head is called a head pose.
  • the spatial features of the face including the shape and pose of the face will be referred to as a face form.
  • the texture mapping unit 130 performs texture mapping on the detected real image of the face to the face of the 3D avatar in which the shape is determined (S230).
  • the texture mapping unit 130 may texture-map the real-life image of the face only onto the main parts of the face of the 3D avatar that have strong features or large expression changes, such as the eyes, nose, and mouth.
  • the part onto which the photorealistic image of the face is texture-mapped is called a mask area.
  • the live-action image of the face is a 2D image.
  • the face used to determine the shape of the face of the 3D avatar and the texture-mapped photorealistic image of the face may come from the same time point in the video. That is, the 3D avatar shape determiner 120 detects the face of the user at a first time point of the video and determines the shape of the face of the 3D avatar based on it, and the texture mapping unit 130 texture-maps the live-action image of the face at that same first time point onto the face of the 3D avatar.
  • likewise, the live-action image of the face and the face used to determine the shape of the face of the 3D avatar may come from the same frame of the video.
  • the facial 3D modeling of the 3D avatar, including the face detection described above, the determination of the shape of the face of the 3D avatar, and the photorealistic image texture mapping, may be performed continuously as the input video progresses, and may be performed for every frame of the video.
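  • as an illustration, the per-frame flow described above can be sketched in Python as follows. This is a minimal, hypothetical skeleton, not the patented implementation; every function name (detect_landmarks, estimate_head_pose, and so on) is a placeholder standing in for the corresponding unit described in this document.

```python
# Hypothetical per-frame pipeline; all helpers are placeholders for the
# units described in this document, not an actual implementation.
def process_frame(frame, avatar):
    landmarks = detect_landmarks(frame)               # 2D landmark tracking unit 310
    pose = estimate_head_pose(landmarks)              # head pose estimation unit 320
    weights = track_expression(landmarks, pose)       # expression tracking unit 330
    avatar.set_face_shape(pose, weights)              # 3D avatar shape determiner 120
    uv = compute_texture_uv(avatar, pose)             # texture UV calculation unit 340
    tex = adjust_face_color(frame, avatar.skin_color) # face texture color adjuster 350
    avatar.map_face_texture(tex, uv)                  # texture mapping unit 130
    return render_to_2d(avatar)                       # 3D rendering unit
```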
  • the present invention can obtain a 3D avatar in which the appearance and expression of the actual user's face are very accurately reflected by mapping the texture onto the face of the 3D avatar. For example, subtly sad eyes can be represented accurately in 3D avatars.
  • the appearance and expression of the user's face do not need to be reflected in the shape of the 3D model up to the minute part.
  • accordingly, the amount of computation required for the 3D avatar shape determiner 120 to determine the shape of the face of the 3D avatar is very low, making real-time processing easy.
  • the 3D avatar shape determiner 120 may determine the appearance of the face of the 3D avatar in real time based on the detected face from the input video. In addition, the 3D avatar shape determiner 120 may determine the head pose and facial expression of the 3D avatar in real time based on a face detected from the input video.
  • the texture mapping unit 130 may perform the photorealistic image texture mapping in real time.
  • since the live image is used as the texture for the main part of the user's face, it is not necessary to perform separate texture modeling for that part.
  • the 3D avatar generation system 100 can perform facial 3D modeling of a 3D avatar very easily in real time.
  • the 3D avatar generation system 100 may further include a 3D rendering unit that renders the generated 3D avatar into a 2D video in real time and an image output unit that outputs the 2D video generated by the 3D rendering unit in real time.
  • the output 2D video may include a stereo 3D video including two 2D videos.
  • since the most complex parts of the 3D avatar's model, the facial features and facial expressions, are mainly represented by the real-life image texture, a 3D avatar that accurately reflects the actual user's appearance and facial expression can be obtained, and because the calculation amount is very low, everything from modeling to rendering can easily be processed in real time.
  • 'processing 3D modeling and rendering in real time' may mean that 3D modeling and rendering is performed immediately as a video is input.
  • 'real-time processing' means that the 3D modeling and rendering are completed within the time interval of the corresponding part of the input video, that is, the time required to perform the 3D modeling and rendering is shorter than or equal to the time interval of the corresponding video frames. It may also mean satisfying a frame rate at which the motion of the image appears natural (e.g., 24 fps), matching the frame rate of the input video, or staying within an interactive delay that the user cannot perceive (e.g., 10 ms).
  • this real-time performance can be achieved within limited hardware resources (e.g., an i7 CPU, 8 GB RAM, and a GTX 750 graphics card).
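  • one way to check this criterion in practice is to time the per-frame processing against the frame interval; a minimal sketch, assuming the hypothetical process_frame above and a 24 fps target:

```python
import time

TARGET_FPS = 24
budget = 1.0 / TARGET_FPS  # seconds available per frame at 24 fps

# given a captured frame and the current avatar state
t0 = time.perf_counter()
output = process_frame(frame, avatar)  # hypothetical pipeline from above
elapsed = time.perf_counter() - t0
assert elapsed <= budget, f"frame took {elapsed * 1000:.1f} ms, over budget"
```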
  • the texture mapping unit 130 may texture-map the photorealistic image of the face based on the position mapping relationship between the face detected in the video and the shape of the 3D avatar. Specifically, the 3D avatar shape determiner 120 determines the head pose and facial expression of the 3D avatar based on the face detected in the video, and the texture mapping unit 130 may texture-map the real-life image of the face based on the mapping relationship between the detected face and the determined head pose and facial expression of the 3D avatar. A more specific example will be described with reference to FIG. 3.
  • FIG. 3 is a diagram illustrating in more detail the configuration of the 3D avatar generation system shown in FIG. 1.
  • the 3D avatar shape determiner 120 may include a 2D landmark tracking unit 310, a head pose estimation unit 320, and an expression tracking unit 330, and the texture mapping unit 130 may include a texture UV calculation unit 340 and a face texture color adjusting unit 350.
  • the 2D landmark tracking unit 310 may detect the positions of the plurality of feature points of the face of the user in real time. For example, the 2D landmark tracking unit 310 may track 68 feature points of a user's face using a supervised descent method (SDM) regression method.
  • FIG. 4 is a diagram illustrating an example of 68 feature points of a user's face detected by the 2D landmark tracking unit 310 according to an exemplary embodiment of the present invention.
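  • the patent tracks the 68 feature points with an SDM regression; as a stand-in for illustration, the widely available dlib 68-point predictor (which uses an ensemble of regression trees, not SDM) produces the same landmark layout as FIG. 4. A minimal sketch, assuming the pretrained shape_predictor_68_face_landmarks.dat model file is present:

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for face in detector(gray):
    shape = predictor(gray, face)
    # 68 (x, y) feature points in the conventional ordering shown in FIG. 4
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```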
  • the 3D avatar shape determiner 120 determines the shape of the face of the 3D avatar in real time based on the detected positions of the plurality of feature points, and the texture mapping unit 130 may perform texture mapping in real time, based on the positions of the plurality of feature points, on the face of the 3D avatar whose shape is determined by the 3D avatar shape determiner 120.
  • the texture UV calculator 340 may determine the positions on the face of the 3D avatar that correspond to the plurality of feature points of the photorealistic image of the user's face (i.e., a plurality of feature points on the face of the 3D avatar).
  • the texture UV calculator 340 may specify a portion (mask area) to be texture-mapped to the face of the 3D avatar among the photorealistic images of the face. That is, the texture UV calculator 340 may calculate the U and V values of the mask area in the photorealistic image of the face.
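  • under one natural reading, this U, V computation amounts to projecting the posed avatar vertices into the video frame and normalizing the pixel positions to texture coordinates; the exact mapping used by the texture UV calculation unit 340 is not spelled out here, so the following sketch is an assumption:

```python
import numpy as np

def compute_texture_uv(P, M, vertices, frame_w, frame_h):
    """Project posed 3D avatar vertices (Nx3) into the video frame with
    camera matrix P (3x4) and pose M (4x4), then normalize to U, V."""
    Xh = np.hstack([vertices, np.ones((len(vertices), 1))])
    x = (P @ (M @ Xh.T)).T
    px = x[:, :2] / x[:, 2:3]                 # pixel coordinates
    uv = px / np.array([frame_w, frame_h])    # normalize to texture space
    return np.clip(uv, 0.0, 1.0)
```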
  • the 3D avatar shape determiner 120 determines the shape of the face of the 3D avatar in real time based on the detected positions of the plurality of feature points.
  • the head pose estimation unit 320 determines the head pose of the 3D avatar in real time based on the positions of the feature points less affected by facial expression, and the facial expression tracking unit 330 determines the expression of the 3D face model in real time based on the positions of the plurality of feature points and the determined head pose.
  • the expression tracking unit 330 may determine the expression of the 3D face model in real time while simultaneously determining the appearance of the 3D face model.
  • the head pose estimator 320 performs a rigid head pose alignment between the face model of the 3D avatar and the photorealistic image of the user's face; an example of its operation according to an embodiment of the present invention is shown in FIG. 5. Referring to FIG. 5, the twelve feature points shown in red among the 68 feature points are eye and nose feature points that are less affected by facial expressions, and the head pose estimator 320 may calculate the head pose using them. The head pose estimator 320 may calculate the head pose M (translation and rotation) in real time by using an optimization technique as shown in Equation 1 below.
  • Equation 1: $M = \arg\min_{M} \sum_{k=1}^{n} \left\| \mathrm{Proj}\left( M \cdot B_{v(k)}^{\mathrm{Neutral}} \right) - L_{k} \right\|^{2}$, where
  • Proj is the camera projection matrix
  • n is the number of feature points less affected by facial expression (e.g., 12)
  • $L_k$ is the 2D position of the k-th feature point among the n feature points
  • $B_{v(k)}^{\mathrm{Neutral}}$ is the 3D vertex position corresponding to the k-th feature point of the standard neutral 3D face
  • M is the translation × rotation matrix
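  • a minimal sketch of the Equation 1 optimization using scipy.optimize.least_squares, with the rotation parameterized as a Rodrigues vector; the solver and parameterization are assumptions, since the patent does not specify them:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(P, X):
    """Pinhole projection: P is a 3x4 camera matrix, X is Nx3 points."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

def pose_residuals(params, P, B_neutral, L):
    """Reprojection error of the n rigid feature points (Equation 1)."""
    rvec, tvec = params[:3], params[3:]
    R = Rotation.from_rotvec(rvec).as_matrix()
    X = B_neutral @ R.T + tvec            # apply M = translation * rotation
    return (project(P, X) - L).ravel()

# B_neutral: (n, 3) neutral-face vertices for the rigid landmarks (e.g., n = 12)
# L: (n, 2) tracked 2D landmark positions
def estimate_head_pose(P, B_neutral, L, init=np.zeros(6)):
    sol = least_squares(pose_residuals, init, args=(P, B_neutral, L))
    return sol.x  # rotation vector (3,) and translation (3,)
```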
  • the expression tracking unit 330 performs a non-rigid expression alignment between the face model of the 3D avatar and the photorealistic image of the user's face.
  • FIG. 6 illustrates an example of the operation of the expression tracking unit 330 according to an embodiment of the present invention. Referring to FIG. 6, it can be seen that the jaw of the face model of the 3D avatar is stretched downward because the user's mouth is wide open.
  • the expression tracking unit 330 may calculate the blend shape parameter W in real time using an optimization technique as in Equation 2.
  • Equation 2: $W = \arg\min_{W} \sum_{k=1}^{n} \left\| \mathrm{Proj}\left( M \cdot \sum_{j=1}^{m} w_{j} B_{v(k)}^{j} \right) - L_{k} \right\|^{2}$, with $W = (w_1, \ldots, w_m)$, where
  • Proj is the camera projection matrix
  • n is the number of feature points (e.g., 68)
  • $L_k$ is the 2D position of the k-th feature point among the n feature points
  • m is the number of blendshape models
  • $B_{v(k)}^{j}$ is the 3D vertex position corresponding to the k-th feature point of the j-th blendshape model
  • M is the translation × rotation matrix calculated by the head pose estimator 320
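  • with M fixed from the head pose step, a minimal sketch of the Equation 2 fit, again using least_squares; the box constraint keeping each blendshape weight in [0, 1] is an assumption, not something the text states:

```python
import numpy as np
from scipy.optimize import least_squares

def expression_residuals(W, P, M, B, L):
    """B: (m, n, 3) landmark vertices of the m blendshape models; the face
    is the weighted combination sum_j w_j * B_j (Equation 2), transformed
    by the fixed head pose M (4x4) and projected with P (3x4)."""
    X = np.tensordot(W, B, axes=1)          # (n, 3) combined landmark positions
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ (M @ Xh.T)).T
    return (x[:, :2] / x[:, 2:3] - L).ravel()

def track_expression(P, M, B, L, W0):
    sol = least_squares(expression_residuals, W0, args=(P, M, B, L),
                        bounds=(0.0, 1.0))  # assumed weight range
    return sol.x                            # blendshape parameters W
```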
  • the blendshapes may be regarded as a database of various expressions and various facial features, and the facial shape of the 3D avatar may be updated in real time according to the blendshape modeling.
  • Facial 3D modeling of the 3D avatar according to the present invention may be basically performed separately for each frame, but the head pose determination and the facial expression determination may use calculation results for previous frames in order to remove shaking. In the above head pose determination and facial expression determination process, depth information about the user's face is not used.
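  • the anti-shaking reuse of previous-frame results mentioned above can be as simple as exponential smoothing of the pose and expression parameters; the actual filtering scheme is not specified, so this sketch is only an assumption:

```python
def smooth(prev, current, alpha=0.6):
    """Blend the current frame's parameters (head pose vector or blendshape
    weights) with the previous frame's values to suppress jitter."""
    return current if prev is None else alpha * current + (1.0 - alpha) * prev
```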
  • in other words, the 3D avatar shape determiner 120 determines the pose and shape of the 3D avatar face so that, when projected, the positions of its feature points are as close as possible to the positions of the plurality of feature points of the user's face in the live-action image.
  • the texture UV calculator 340 may calculate the U and V values in the live-action image for the 3D avatar face, for example the U and V values in the live-action image corresponding to the vertices of the 3D avatar.
  • the 3D avatar shape determiner 120 may determine the shape of the face of the 3D avatar based on the face detected in the video, and generate the shape of the rest of the 3D avatar separately from the video. For example, the head, hair, and body of the 3D avatar may be generated separately from the video. In particular, since the body of the 3D avatar is generated separately from the video, a 3D model can be created independently of the clothes the user is actually wearing. Of course, the posture of the 3D avatar may be determined by body motion tracking of the video, and the shape of the rest of the 3D avatar may also be determined using information from the video.
  • the 3D avatar generation system 100 may further include a virtual fitting unit that detects the user's body and its movement in the video and generates the body and clothes of the 3D avatar in real time based on the detected body and its movement. That is, according to the present invention, the facial modeling of the above-described 3D avatar and the virtual fitting are interlocked to accurately reflect the user's facial features and facial expressions, thereby providing a virtual fitting experience with a high degree of realism and immersion.
  • the portions of the 3D avatar other than the mask area may be surface-treated (texture, contrast, shadow, etc.) separately from the video.
  • texture mapping may be performed so that unnecessary parts, such as hair covering the forehead, are not reflected in the 3D avatar.
  • to match the skin color of the mask area (that is, the skin color of the live-action image) with the skin color of the remaining parts, there are two approaches: determining the skin color of the remaining parts according to the skin color of the live-action image, or first determining the skin color of the remaining parts and then adjusting the skin color of the live-action image accordingly.
  • when the skin color of the remaining parts is determined according to the skin color of the live-action video, the 3D avatar has a skin color similar to the actual user's; however, because the skin color of the user captured in the video is sensitive to lighting changes in each frame, the skin color of the 3D avatar may change severely. With the second approach, the 3D avatar has a constant skin color, so a skin whitening effect or the user's desired skin color can be expressed.
  • the face texture color adjusting unit 350 may adjust the color of the photorealistic image of the face to be texture mapped to the face of the 3D avatar according to the skin color of the predefined 3D avatar.
  • the skin color of the 3D avatar may be determined by various methods, such as using any basic color, separately measuring and reflecting the skin color of the user, or allowing the user to select the skin color.
  • the facial texture color adjustment unit 350 may adjust the color of the real image of the face by Equation 3 below.
  • Equation 3: adjustedFaceColor = faceColor × modelColor / LPF(faceColor)
  • in Equation 3, faceColor is the color of each pixel of the live-action image
  • modelColor is the predefined skin color of the 3D avatar
  • LPF is a low-pass filter
  • adjustedFaceColor is the adjusted color of each pixel of the live-action image
  • when the camera shooting the video is an RGB camera, Equation 3 may be calculated for each RGB channel as shown in Equation 4.
  • adjustedFaceColor.r = faceColor.r × modelColor.r / LPF(faceColor.r)
  • adjustedFaceColor.g = faceColor.g × modelColor.g / LPF(faceColor.g)
  • adjustedFaceColor.b = faceColor.b × modelColor.b / LPF(faceColor.b)
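  • a minimal sketch of Equations 3 and 4, using a Gaussian blur as the low-pass filter; the filter type and kernel size are assumptions, as the text only specifies low-pass filtering:

```python
import cv2
import numpy as np

def adjust_face_color(face_img, model_color, ksize=51):
    """Per-channel adjustedFaceColor = faceColor * modelColor / LPF(faceColor)
    (Equations 3 and 4). model_color is the avatar skin color, BGR in [0, 1]."""
    face = face_img.astype(np.float32) / 255.0
    lpf = cv2.GaussianBlur(face, (ksize, ksize), 0)  # assumed low-pass filter
    adjusted = face * np.asarray(model_color, np.float32) / np.maximum(lpf, 1e-6)
    return (np.clip(adjusted, 0.0, 1.0) * 255.0).astype(np.uint8)
```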
  • the texture UV calculator 340 may calculate the U and V values of the portion (mask area) of the photorealistic image of the face to be texture-mapped to the face of the 3D avatar, and the face texture color adjuster 350 may adjust the color of the photorealistic image of the face based on the calculated U and V values.
  • the face texture color adjusting unit 350 may adjust the color by applying Equation 3 only to the mask area of the photorealistic image of the face.
  • an example of the operation of the texture UV calculation unit 340 according to an embodiment of the present invention is shown in FIG. 7, and an example of the operation of the face texture color adjustment unit 350 according to an embodiment of the present invention is shown in FIG. 8.
  • the 3D rendering unit may match lighting effects of portions of the 3D avatar other than the mask region to lighting information of the real-life image of the face corresponding to the mask region. That is, the 3D rendering unit may determine lighting information of the photorealistic image of the face, and apply a 3D lighting effect to a portion of the 3D avatar other than a portion to which the photorealistic image of the face is mapped according to the determined lighting information.
  • methods by which the 3D rendering unit determines the lighting information of the photorealistic image of the face include, for example, analyzing the information of the photorealistic image itself, and receiving information such as the position and brightness of the lights around the camera that captures the video.
  • the 3D renderer may smooth the transition between the two regions by blending at the boundary between the mask region, onto which the live-action image is texture-mapped, and the portion of the 3D avatar outside the mask region.
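  • this boundary blending can be sketched as alpha compositing with a feathered mask; feathering by Gaussian blur is an assumption, since the text only says the boundary is blended:

```python
import cv2
import numpy as np

def blend_mask_boundary(face_tex, rendered, mask, feather=31):
    """Feather the binary mask (1 inside the mask area, 0 outside) and
    alpha-composite the texture-mapped face over the rendered avatar so
    the boundary between the two regions transitions smoothly."""
    alpha = cv2.GaussianBlur(mask.astype(np.float32), (feather, feather), 0)
    alpha = alpha[..., None]  # broadcast the alpha over the color channels
    return (alpha * face_tex + (1.0 - alpha) * rendered).astype(rendered.dtype)
```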
  • FIG. 9 is a diagram illustrating an example of a face model of a 3D avatar generated in real time by the 3D avatar generation system 100 according to an exemplary embodiment.
  • FIG. 10 is a diagram illustrating an example of a virtual fitting in which a face model of a 3D avatar generated in real time by the 3D avatar generating system 100 according to an embodiment of the present invention is reflected.
  • the facial model of the 3D avatar according to the present invention can be applied to all effects that can be realized through the existing 3D facial model. For example, face shape deformation (shaping), face texture effects (such as beautification, filters, and makeup), hairstyle changes, accessories such as glasses, and background replacement may be performed.
  • the face model of the 3D avatar according to the present invention can also be applied to other applications besides virtual fitting.
  • the present invention can also be embodied as computer readable code on a computer readable recording medium.
  • Computer-readable recording media include all storage media such as magnetic storage media and optical reading media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Hardware Design (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Computer Graphics (AREA)
  • Strategic Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A system for generating a 3D avatar according to the present invention, comprising: a video input unit for receiving a video; a 3D avatar shape determination unit for detecting a user's face in the video and determining the shape of the face of a 3D avatar based on the detected face; and a texture mapping unit for texture-mapping a photorealistic image of the detected face onto the face of the 3D avatar whose shape has been determined.
PCT/KR2019/010694 2018-08-23 2019-08-22 Method and system for generating a real-time 3D avatar for virtual fitting WO2020040567A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020180098840A KR102131923B1 (ko) 2018-08-23 2018-08-23 Method and system for generating a real-time 3D avatar for virtual fitting
KR10-2018-0098840 2018-08-23

Publications (2)

Publication Number Publication Date
WO2020040567A2 true WO2020040567A2 (fr) 2020-02-27
WO2020040567A3 WO2020040567A3 (fr) 2020-04-16

Family

ID=69592644

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/010694 WO2020040567A2 (fr) 2018-08-23 2019-08-22 Method and system for generating a real-time 3D avatar for virtual fitting

Country Status (2)

Country Link
KR (1) KR102131923B1 (fr)
WO (1) WO2020040567A2 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113551772A (zh) * 2020-04-07 2021-10-26 武汉高德智感科技有限公司 Infrared temperature measurement method, infrared temperature measurement system, and storage medium
EP3970818A1 (fr) * 2020-09-18 2022-03-23 XRSpace CO., LTD. Method and system for adjusting the skin tone of an avatar

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102593760B1 (ko) * 2023-05-23 2023-10-26 주식회사 스푼라디오 Server, method, and program for generating a customized DJ-specific virtual avatar based on a digital service

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110021330A (ko) * 2009-08-26 2011-03-04 삼성전자주식회사 Apparatus and method for generating a 3D avatar
KR101165017B1 (ko) * 2011-10-31 2012-07-13 (주) 어펙트로닉스 System and method for generating a 3D avatar
KR101747898B1 (ko) * 2012-12-14 2017-06-16 한국전자통신연구원 Image processing method and image processing apparatus using the same
KR101710521B1 (ko) * 2015-11-18 2017-02-27 (주)에프엑스기어 Simulation apparatus and method for virtual fitting with CG representation of the user's body, and computer program therefor
KR20180082170A (ko) * 2017-01-10 2018-07-18 트라이큐빅스 인크. Method and system for acquiring a 3D face model

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113551772A (zh) * 2020-04-07 2021-10-26 武汉高德智感科技有限公司 Infrared temperature measurement method, infrared temperature measurement system, and storage medium
CN113551772B (zh) * 2020-04-07 2023-09-15 武汉高德智感科技有限公司 Infrared temperature measurement method, infrared temperature measurement system, and storage medium
EP3970818A1 (fr) * 2020-09-18 2022-03-23 XRSpace CO., LTD. Method and system for adjusting the skin tone of an avatar

Also Published As

Publication number Publication date
KR20200022778A (ko) 2020-03-04
KR102131923B1 (ko) 2020-07-09
WO2020040567A3 (fr) 2020-04-16

Similar Documents

Publication Publication Date Title
US11423556B2 (en) Methods and systems to modify two dimensional facial images in a video to generate, in real-time, facial images that appear three dimensional
US11756223B2 (en) Depth-aware photo editing
WO2020040567A2 (fr) 2020-02-27 Method and system for generating a real-time 3D avatar for virtual fitting
WO2017026839A1 (fr) 2017-02-16 Method and device for obtaining a 3D face model using a portable camera
JP4794678B1 (ja) 2011-10-19 Video processing device, video processing method, and video communication system
JP3779570B2 (ja) 2006-05-24 Makeup simulation device, makeup simulation control method, and computer-readable recording medium storing a makeup simulation program
WO2017010695A1 (fr) 2017-01-19 Three-dimensional content generation apparatus and three-dimensional content generation method therefor
US20230206531A1 (en) Avatar display device, avatar generating device, and program
Hillman et al. Alpha channel estimation in high resolution images and image sequences
WO2014105646A1 (fr) 2014-07-03 Low-latency merging of color image data
KR20100026240A (ko) 2010-03-10 Method and apparatus for 3D hairstyle simulation using augmented reality
CN108762508A (zh) 2018-11-06 Human body and virtual human compositing system and method based on a VR experience cabin
WO2015166107A1 (fr) 2015-11-05 Systems, methods, apparatuses, and computer-readable storage media for collecting color information about an object undergoing a 3D scan
Malleson et al. Rapid one-shot acquisition of dynamic VR avatars
CN110177287A (zh) 2019-08-27 Image processing and live-streaming method, apparatus, device, and storage medium
JP2022133133A (ja) 2022-09-13 Generation device, generation method, system, and program
Zhu et al. SAVE: shared augmented virtual environment for real-time mixed reality applications
US12020363B2 (en) Surface texturing from multiple cameras
JP2002083286A (ja) 2002-03-22 Avatar generation method and apparatus, and recording medium storing the program
WO2022022260A1 (fr) 2022-02-03 Image style transfer method and related apparatus
US20230306676A1 (en) Image generation device and image generation method
WO2019216688A1 (fr) 2019-11-14 Light estimation method for augmented reality and electronic device therefor
WO2024106565A1 (fr) 2024-05-23 System and method for generating a 3D face model from a 2D facial image
Woodward et al. An interactive 3D video system for human facial reconstruction and expression modeling
JP7322235B2 (ja) 2023-08-07 Image processing device, image processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19851551; Country of ref document: EP; Kind code of ref document: A2)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19851551; Country of ref document: EP; Kind code of ref document: A2)