CN110363867B - Virtual decorating system, method, device and medium


Publication number
CN110363867B
CN110363867B (application CN201910640937.8A)
Authority
CN
China
Prior art keywords
clothing, human body, key points, virtual, visual data
Prior art date
Legal status
Active
Application number
CN201910640937.8A
Other languages
Chinese (zh)
Other versions
CN110363867A
Inventor
陈一鸣
朱海超
Current Assignee
Yutou Technology Hangzhou Co Ltd
Original Assignee
Yutou Technology Hangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Yutou Technology Hangzhou Co Ltd
Priority to CN201910640937.8A
Publication of CN110363867A
Application granted
Publication of CN110363867B
Legal status: Active

Classifications

    • G02B27/017 Head-up displays; head mounted
    • G02B2027/0178 Head-up displays; head mounted; eyeglass type
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/006 Mixed reality
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands

Abstract

The invention relates to a virtual dressing system, method, device and medium. The system comprises glasses that include a memory and a processor and are configured to: provide a clothing selection interface and receive the clothing selected by a user; collect visual data including a human body image reflected by a mirror; obtain the current human body pose from the visual data; acquire a clothing model of the selected clothing together with matching information between the clothing key points of the selected clothing and the human body key points; obtain, from the current human body pose, the clothing model of the selected clothing and the matching information, a clothing model of the selected clothing matched to the current human body pose; render the matched clothing model to obtain a clothing rendering result; and display the clothing rendering result. With the virtual dressing system, method, device and medium, a user only needs to wear the augmented reality glasses and use any ordinary nearby mirror to enjoy a virtual dressing experience, which is flexible, convenient and gives a good user experience.

Description

Virtual decorating system, method, device and medium
Technical Field
The present invention relates to the field of augmented reality technologies, and in particular to a virtual dressing system, method, device and medium.
Background
Virtual dressing, also known as virtual fitting, refers to a user selecting one item, several items or several types of fashion apparel and superimposing a two-dimensional or three-dimensional image of the apparel on the user, so that the user can see how the apparel looks when worn.
There are several methods of implementing virtual dressing. Among them, some use augmented reality and motion tracking to generate a video in which the user appears to wear the virtual apparel. Such methods require a specially designed virtual fitting mirror that includes a screen, a camera for capturing the user's pose, and a processing unit for rendering the virtual apparel, which is then presented on the user via the screen. From the user's point of view, such systems are not flexible enough and the user experience is poor: if a user installs such a system at home, it cannot be used elsewhere.
Disclosure of Invention
The invention aims to provide a novel virtual dressing system, method, device and medium with which, in use, a user only needs to wear augmented reality glasses and use any ordinary nearby mirror to enjoy a virtual dressing experience; the solution is flexible, convenient and gives a good user experience.
The purpose of the invention is realized by the following technical solution. A virtual dressing system according to the present invention comprises glasses, the glasses comprising a memory and a processor, and the following modules:
a clothing selection module for providing a clothing selection interface and receiving the clothing selected by a user as the selected clothing;
a visual data acquisition module for collecting visual data, the visual data comprising a human body image reflected by a mirror;
a gesture recognition and positioning module for obtaining, from the visual data, the current human body pose of the body reflected by the mirror, the current human body pose comprising current pose information of a plurality of human body key points;
a clothing data acquisition module for acquiring a clothing model of the selected clothing, the clothing model comprising a plurality of clothing key points, and for acquiring matching information between the clothing key points of the selected clothing and the human body key points;
a clothing and human body pose fusion module for determining the current pose information of the clothing key points of the selected clothing from the current pose information of the human body key points, the clothing model of the selected clothing and the matching information, so as to obtain a clothing model of the selected clothing matched to the current human body pose;
a rendering module for rendering the clothing model matched to the current human body pose to obtain a clothing rendering result; and
a clothing display module for displaying the clothing rendering result by superimposing it on the human body image seen by the user, thereby showing the dressing effect in augmented reality form.
One or more of the clothing selection module, the visual data acquisition module, the gesture recognition and positioning module, the clothing data acquisition module, the clothing and human body pose fusion module, the rendering module and the clothing display module are stored in the memory of the glasses.
The object of the invention can be further achieved by the following technical measures.
In the above virtual dressing system, the gesture recognition and positioning module includes a mirror recognition and positioning unit and a gesture recognition and positioning unit; the mirror recognition and positioning unit is used for segmenting the mirror region in the visual data to obtain in-mirror visual data, the in-mirror visual data comprising the human body image; the gesture recognition and positioning unit is used for obtaining the current human body pose from the in-mirror visual data.
In the above virtual dressing system, the visual data acquisition module is specifically configured to collect two-dimensional visual data; the gesture recognition and positioning module is specifically configured to: estimate a Skinned Multi-Person Linear (SMPL) model from the two-dimensional visual data using DensePose, as the current human body pose, thereby obtaining the current pose information of a plurality of three-dimensional human body key points in the SMPL model.
In the above virtual dressing system, the plurality of human body key points are the head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot and right foot.
In the above virtual dressing system, the clothing data acquisition module is specifically configured to: acquire a three-dimensional clothing model of the selected clothing, and acquire matching information between the three-dimensional clothing key points of the selected clothing and the three-dimensional human body key points; the rendering module is specifically configured to: render the three-dimensional clothing model matched to the current human body pose to obtain a two-dimensional clothing image of the selected clothing; the clothing display module is specifically configured to: display the two-dimensional clothing image by superimposing it on the human body image seen by the user.
In an embodiment, the above virtual dressing system further includes a lighting simulation unit configured to: acquire the current illumination condition; and dynamically set the light source of the virtual world according to the illumination condition when rendering the clothing model matched to the current human body pose.
The above virtual dressing system further comprises a calibration module for calibrating the to-be-displayed position of the clothing rendering result using the face information as a calibration marker; the clothing display module is specifically configured to: superimpose the calibrated clothing rendering result on the human body image seen by the user according to the calibrated to-be-displayed position.
In the above virtual dressing system, the calibration module includes: a face key point recognition unit for recognizing face key points of the human body image; a standard face display unit for displaying a preset standard face image using the glasses and prompting the user to move so as to align the user's face with the standard face image; a face alignment judging unit for judging, from the recognized face key points, whether the user's face is aligned with the standard face image; and a calibration unit for determining the coordinate mapping relation between the camera coordinate system and the display coordinate system when the user's face is aligned with the standard face image, and calibrating the to-be-displayed position of the clothing rendering result according to the coordinate mapping relation.
The above virtual dressing system further comprises a database for storing the clothing models of a plurality of selectable clothes and matching information between one or more clothing key points in those clothing models and one or more human body key points; the clothing data acquisition module is specifically configured to: retrieve from the database the clothing model of the selected clothing and the matching information between the clothing key points of the selected clothing and the human body key points; the system further includes a database entry module configured to: receive the clothing models of the selectable clothes in advance, match one or more clothing key points in each clothing model with the human body key points to obtain the matching information, and enter the matching information into the database.
The above virtual dressing system further comprises a dressing effect memo module configured to: at every preset time interval, repeat, using the visual data acquisition module, the gesture recognition and positioning module, the clothing data acquisition module, the clothing and human body pose fusion module, the rendering module and the clothing display module, the steps from collecting the visual data to displaying the clothing rendering result superimposed on the human body image seen by the user, so as to show the dressing effect in real time; and record, during each repetition, one or more of the visual data, the matching information between the clothing key points of the selected clothing and the human body key points, and the clothing model matched to the current human body pose, so as to generate a historical dressing record.
The above virtual dressing system further comprises a display mode judging module for judging the display mode according to the current online or offline state and/or according to the user's selection; the display mode comprises one or more of a first display mode, a second display mode and a third display mode; the clothing display module comprises one or more of a first display unit, a second display unit and a third display unit; the first display unit is configured to: in the first display mode, superimpose the clothing rendering result on the human body image reflected by the mirror when displaying it, so that the user sees his or her virtually dressed self in the mirror; the second display unit is configured to: in the second display mode, superimpose the clothing rendering result on the human body image in the visual data of the historical dressing record when displaying it, so as to show recorded visual data with the virtual dressing effect superimposed; the third display unit is configured to: in the third display mode, superimpose the clothing rendering result on the human body image in visual data collected in real time when displaying it, so as to show live visual data with the virtual dressing effect superimposed.
The object of the present invention is also achieved by the following technical solution. A virtual dressing method according to the present invention comprises the following steps:
providing a clothing selection interface and receiving the clothing selected by a user as the selected clothing;
collecting visual data, the visual data comprising a human body image reflected by a mirror;
obtaining, from the visual data, the current human body pose of the body reflected by the mirror, the current human body pose comprising current pose information of a plurality of human body key points;
acquiring a clothing model of the selected clothing, the clothing model comprising a plurality of clothing key points, and acquiring matching information between the clothing key points of the selected clothing and the human body key points;
determining the current pose information of the clothing key points of the selected clothing from the current pose information of the human body key points, the clothing model of the selected clothing and the matching information, so as to obtain a clothing model of the selected clothing matched to the current human body pose;
rendering the clothing model matched to the current human body pose to obtain a clothing rendering result; and
displaying the clothing rendering result by superimposing it on the human body image seen by the user, so as to show the dressing effect in augmented reality form.
The object of the invention can be further achieved by the following technical measures.
In the above virtual dressing method, obtaining the current human body pose of the body reflected by the mirror from the visual data includes: segmenting the mirror region in the visual data to obtain in-mirror visual data, the in-mirror visual data comprising the human body image; and obtaining the current human body pose from the in-mirror visual data.
In the above virtual dressing method, collecting the visual data includes collecting two-dimensional visual data; obtaining the current human body pose of the body reflected by the mirror from the visual data comprises: estimating a Skinned Multi-Person Linear (SMPL) model from the two-dimensional visual data using DensePose, as the current human body pose, thereby obtaining the current pose information of a plurality of three-dimensional human body key points in the SMPL model.
In the above virtual dressing method, the plurality of human body key points are the head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot and right foot.
In the above virtual dressing method, acquiring the clothing model of the selected clothing includes: acquiring a three-dimensional clothing model of the selected clothing; acquiring the matching information between the clothing key points of the selected clothing and the human body key points includes: acquiring matching information between the three-dimensional clothing key points of the selected clothing and the three-dimensional human body key points; rendering the clothing model matched to the current human body pose to obtain a clothing rendering result includes: rendering the three-dimensional clothing model matched to the current human body pose to obtain a two-dimensional clothing image of the selected clothing; and displaying the clothing rendering result superimposed on the human body image seen by the user includes: superimposing the two-dimensional clothing image on the human body image seen by the user.
In the above virtual dressing method, rendering the clothing model matched to the current human body pose to obtain a clothing rendering result includes: acquiring the current illumination condition; and dynamically setting the light source of the virtual world according to the illumination condition when rendering the clothing model matched to the current human body pose.
The above virtual dressing method further includes, before displaying the clothing rendering result superimposed on the human body image seen by the user: calibrating the to-be-displayed position of the clothing rendering result using the face information as a calibration marker; displaying the clothing rendering result superimposed on the human body image seen by the user then comprises: superimposing the calibrated clothing rendering result on the human body image seen by the user according to the calibrated to-be-displayed position.
In the above virtual dressing method, calibrating the to-be-displayed position of the clothing rendering result using the face information as a calibration marker includes: recognizing face key points of the human body image; displaying a preset standard face image and prompting the user to move so as to align the user's face with the standard face image; judging, from the recognized face key points, whether the user's face is aligned with the standard face image; and, when the user's face is aligned with the standard face image, determining the coordinate mapping relation between the camera coordinate system and the display coordinate system and calibrating the to-be-displayed position of the clothing rendering result according to that mapping relation.
The above virtual dressing method further includes: repeating, at every preset time interval, the steps from collecting the visual data to displaying the clothing rendering result superimposed on the human body image seen by the user, so as to show the dressing effect in real time; and recording, during each repetition, one or more of the visual data, the matching information between the clothing key points of the selected clothing and the human body key points, and the clothing model matched to the current human body pose, so as to generate a historical dressing record.
The above virtual dressing method further includes: judging the display mode according to the current online or offline state and/or according to the user's selection, the display mode comprising one or more of a first display mode, a second display mode and a third display mode; displaying the clothing rendering result superimposed on the human body image seen by the user then comprises one or more of the following: in the first display mode, superimposing the clothing rendering result on the human body image reflected by the mirror, so that the user sees his or her virtually dressed self in the mirror; in the second display mode, superimposing the clothing rendering result on the human body image in the visual data of the historical dressing record, so as to show recorded visual data with the virtual dressing effect superimposed; in the third display mode, superimposing the clothing rendering result on the human body image in visual data collected in real time, so as to show live visual data with the virtual dressing effect superimposed.
The object of the present invention is also achieved by the following technical solution. According to the present invention, a device is proposed, comprising: a memory for storing non-transitory computer-readable instructions; and a processor for executing the computer-readable instructions such that, when executed by the processor, they implement the steps of the aforementioned virtual dressing method.
The object of the present invention is also achieved by the following technical solution. According to the present invention, a computer-readable storage medium is proposed for storing a computer program which, when executed by a computer or a processor, implements the steps of the aforementioned virtual dressing method.
Compared with the prior art, the invention has obvious advantages and beneficial effects. Through the above technical solution, the virtual dressing system, method, device and medium provided by the invention have at least the following advantages:
(1) The invention realizes a virtual dressing experience using augmented reality glasses and an ordinary mirror. In use, the user only needs to wear the augmented reality glasses, and any ordinary mirror near the user becomes a "virtual fitting mirror", without being limited to a specially designed large screen; the experience is flexible, convenient and good;
(2) The invention first segments the mirror region in the visual data and then performs human body pose recognition on the resulting in-mirror visual data, which removes interference from people in the non-mirror region and helps identify the user more accurately;
(3) The invention estimates an SMPL model using DensePose to obtain the current human body pose, so the current human body pose data can be determined accurately;
(4) The invention places 18 human body key points (head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot and right foot) in the human body model, which increases the speed of human body pose recognition while still allowing accurate virtual dressing;
(5) The invention models the three-dimensional clothing as a deformable object, so the clothing deforms as the human body pose changes, producing a better experience;
(6) The invention dynamically sets the light source of the virtual world according to the illumination condition, so the rendered clothing reflects its color in the real scene, producing a realistic rendering effect and helping the user match colors;
(7) The invention performs calibration using face information, so the virtual clothing can be superimposed on the user more accurately when displayed;
(8) The invention provides multiple display modes, so the virtual dressing effect can be shown in different ways according to the online or offline state or the user's selection, which is flexible, convenient and gives a good user experience.
The foregoing is only an overview of the technical solution of the present invention. To make the technical means of the present invention clearer and implementable according to this description, and to make the above and other objects, features and advantages of the present invention easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic structural diagram of a virtual dressing system according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a virtual dressing system according to another embodiment of the present invention;
fig. 3 is a flow block diagram of a virtual dressing method according to an embodiment of the present invention;
fig. 4 is a block diagram of the structure of an apparatus of one embodiment of the present invention.
Detailed Description
To further explain the technical means adopted by the present invention to achieve its intended objects, and their effects, specific embodiments, structures, features and effects of the virtual dressing system, method, device and medium according to the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
Fig. 1 is a schematic configuration diagram of one embodiment of a virtual dressing system 100 according to the present invention, and fig. 2 is a schematic configuration diagram of another embodiment of the virtual dressing system 100. Referring to fig. 1 or fig. 2, the virtual dressing system 100 of this example mainly includes glasses 110. The glasses 110 include one or more of a clothing selection module 111, a visual data acquisition module 112, a gesture recognition and positioning module 113, a clothing data acquisition module 114, a clothing and human body pose fusion module 115, a rendering module 116, and a clothing display module 117. In some examples, the glasses 110 are smart glasses. Optionally, the glasses 110 are augmented reality glasses (also referred to as AR glasses). The glasses 110 comprise a memory and a processor, and one or more of the aforementioned modules are stored in the memory of the glasses 110. During use, the user wears the glasses 110 and stands in front of a mirror. The mirror comprises a reflective mirror surface and may generally be an ordinary mirror. A mirror is needed because the camera on the AR glasses cannot capture an image of the user directly; with the mirror, the camera on the AR glasses can capture the user's pose.
The clothing selection module 111 is configured to: provide a clothing selection interface, receive one item, several items or several types of clothing to be tried on, selected by the user, as the selected clothing, and output the selected clothing. The type of the clothing is not limited; it may be a bag, a jacket, a skirt, etc. In some examples, the clothing selection interface is provided using the screen of the glasses 110.
The visual data acquisition module 112 is configured to: collect visual data and output it. The visual data comprises the human body image reflected by the mirror. In some embodiments, the visual data comprises an RGB image containing the RGB information of the user's body reflected by the mirror, and the visual data acquisition module 112 includes an RGB image capturer for collecting such images. Optionally, the visual data acquisition module 112 includes a camera disposed on the glasses 110.
The gesture recognition and positioning module 113 is configured to: receive the visual data, obtain from it the current human body pose of the body reflected by the mirror, and output the current human body pose. The current human body pose comprises current pose information of a plurality of human body key points. The human body key points may also be called human body part key points or key human body parts. Generally, the body reflected by the mirror is the user himself or herself.
The clothing data acquisition module 114 is configured to: receive the selected clothing; acquire a clothing model of the selected clothing, the clothing model comprising a plurality of clothing key points; and acquire matching information between the clothing key points of the selected clothing and the human body key points. In one optional example, the clothing model is stored on a server in advance, and the clothing data acquisition module 114 of the glasses 110 obtains the clothing model of the selected clothing from the server; in another optional example, the clothing model is stored in a storage unit of the glasses 110 in advance, and the clothing data acquisition module 114 obtains the clothing model of the selected clothing by reading the storage unit.
The clothing and human body pose fusion module 115 is configured to: receive the current human body pose, the clothing model of the selected clothing and the matching information, and determine the current pose information of the clothing key points of the selected clothing from the current pose information of the human body key points, the clothing model of the selected clothing and the matching information, so as to obtain a clothing model of the selected clothing matched to the current human body pose.
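As an illustration of the fusion step just described, the following Python sketch rigidly attaches each clothing key point to its matched human body key point. The data layout (a rotation and translation per body key point, a fixed local offset per clothing key point) is an assumption for the example, not the patent's prescribed representation.

```python
import numpy as np

def fuse_garment_with_pose(body_pose, matching_info):
    """body_pose: {body_kp: (R, t)}, R a 3x3 rotation matrix and t a 3-vector
    giving the current pose of that body part.
    matching_info: {garment_kp: (body_kp, local_offset)}, binding each clothing
    key point to one body key point with a fixed offset in that part's frame.
    Returns the current 3D position of every clothing key point."""
    posed = {}
    for garment_kp, (body_kp, offset) in matching_info.items():
        R, t = body_pose[body_kp]
        # Rigid attachment: the garment point follows its matched body part.
        posed[garment_kp] = R @ np.asarray(offset, float) + np.asarray(t, float)
    return posed
```

A production system would additionally deform the garment mesh between key points; this sketch only shows how the matching information ties garment pose to body pose.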
The rendering module 116 is configured to: render the clothing model matched to the current human body pose to obtain a clothing rendering result. Optionally, the clothing rendering result includes a clothing image matched to the current human body pose.
The clothing display module 117 is configured to: display the clothing rendering result. Specifically, the clothing rendering result is displayed superimposed on the human body image seen by the user, so as to show the dressing effect in augmented reality form. The human body image reflected by the mirror may also be called a virtual image. In some examples, the clothing rendering result is presented on the screen of the glasses 110, superimposed on the human body image reflected by the mirror.
The virtual dressing system 100 of the embodiment of the invention realizes a virtual dressing experience using augmented reality glasses and an ordinary mirror. In use, the user only needs to wear the augmented reality glasses, and every ordinary mirror around the user can become a "virtual fitting mirror", which is flexible, convenient and gives a good user experience.
In some embodiments, the gesture recognition and positioning module 113 includes a mirror recognition and positioning unit and a gesture recognition and positioning unit. The mirror recognition and positioning unit is configured to: receive the visual data and segment the mirror region in the visual data using a mirror recognition and positioning model to obtain in-mirror visual data. The in-mirror visual data comprises the human body image reflected by the mirror. The gesture recognition and positioning unit is configured to: obtain the current human body pose from the in-mirror visual data. With the virtual dressing system 100 of this embodiment, interference from people in the non-mirror region of the visual data can be removed, which helps identify the user more accurately.
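A minimal sketch of that segmentation step, assuming some trained mirror segmenter is available (the segmenter itself and its output format are assumptions):

```python
import numpy as np

def extract_in_mirror_data(frame, mirror_segmenter):
    """frame: HxWx3 RGB image; mirror_segmenter: any callable returning an HxW
    boolean mask that is True on the mirror surface (a hypothetical model)."""
    mask = mirror_segmenter(frame)
    in_mirror = frame.copy()
    in_mirror[~mask] = 0                      # blank out everything off-mirror
    ys, xs = np.nonzero(mask)
    # Crop to the mirror's bounding box so people outside the mirror region
    # cannot disturb the subsequent pose estimation.
    return in_mirror[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```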
In some embodiments, the collected visual data is a two-dimensional image, such as a two-dimensional RGB image, and a three-dimensional human body pose is recognized from it. Optionally, the visual data acquisition module 112 is specifically configured to collect two-dimensional visual data, and the gesture recognition and positioning module 113 is specifically configured to: estimate a Skinned Multi-Person Linear model (SMPL model) from the two-dimensional visual data using DensePose, as the current human body pose. Optionally, obtaining the SMPL model includes: obtaining the current pose information of a plurality of three-dimensional human body key points in the SMPL model.
DensePose is a human body pose estimation technique that maps the human pixels of a two-dimensional image onto a three-dimensional body surface, processing dense coordinates at multiple frames per second to achieve accurate localization and pose estimation of moving people. The SMPL model is a parameterized human body model containing various parameters describing the human body: parameters characterizing an individual's height, weight, head-to-body ratio and the like, and parameters characterizing the body's overall motion pose, such as the relative angles of its 24 human body key points.
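The pose estimation stage can then be sketched as follows. Both callables are stand-ins (the patent does not name a concrete DensePose or SMPL implementation), and the 72/10 parameter split is the common SMPL convention of 24 joints with 3 axis-angle values each plus 10 shape coefficients:

```python
def estimate_current_pose(rgb_frame, densepose_net, smpl_fitter):
    """rgb_frame: image of the user reflected in the mirror (in-mirror data).
    densepose_net: hypothetical callable returning per-pixel body-surface
    (IUV) correspondences; smpl_fitter: hypothetical routine fitting SMPL
    parameters to those correspondences."""
    iuv = densepose_net(rgb_frame)        # dense 2D-to-surface correspondences
    pose, shape = smpl_fitter(iuv)        # pose: (72,) axis-angle, shape: (10,)
    return {"pose": pose, "shape": shape}
```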
By estimating an SMPL model using DensePose, the invention can accurately obtain the current human body pose data.
It should be noted that the invention is not limited to using DensePose for pose recognition, nor to using the SMPL human body model for human body characterization; the virtual dressing system 100 of the invention may also be implemented with other pose recognition methods or other human body models.
It should be noted that the invention does not limit the type of the pose information of the human body key points; the pose information may be represented in multiple ways, for example as position coordinates in a planar rectangular coordinate system, or as relative angles and relative distances between multiple key points.
In some embodiments, instead of the 24 human body key points commonly used in SMPL models, fewer human body key points are used to increase the speed of human body pose estimation with DensePose. At the same time, the characteristics of virtual dressing must be considered: the key points cannot be reduced so far that the dressing effect suffers. Specifically, the human body key points of the invention include: head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot, right foot. By providing these 18 human body key points in the human body model, the virtual dressing system 100 of this example increases the speed of pose recognition with the DensePose model while still allowing accurate virtual dressing.
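For concreteness, the 18 key points listed above can be enumerated as follows (the identifier names are illustrative):

```python
from enum import Enum, auto

class BodyKeyPoint(Enum):
    HEAD = auto(); NECK = auto()
    LEFT_SHOULDER = auto(); RIGHT_SHOULDER = auto()
    LEFT_UPPER_ARM = auto(); RIGHT_UPPER_ARM = auto()
    LEFT_FOREARM = auto(); RIGHT_FOREARM = auto()
    LEFT_HAND = auto(); RIGHT_HAND = auto()
    CHEST = auto(); ABDOMEN = auto()
    LEFT_THIGH = auto(); RIGHT_THIGH = auto()
    LEFT_CALF = auto(); RIGHT_CALF = auto()
    LEFT_FOOT = auto(); RIGHT_FOOT = auto()

assert len(BodyKeyPoint) == 18  # the reduced set, down from SMPL's usual 24
```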
It should be noted that the aforementioned 18-key-point scheme proposed by the invention is not limited to the SMPL model; it can also be applied in embodiments of the virtual dressing system 100 that use another human body model.
In some embodiments, the clothing data acquisition module 114 is specifically configured to: acquire a three-dimensional clothing model of the selected clothing, and acquire matching information between the three-dimensional clothing key points of the selected clothing and the three-dimensional human body key points. Accordingly, the clothing model generated by the clothing and human body pose fusion module 115 and matched to the current human body pose is also three-dimensional. The rendering module 116 is specifically configured to: render the three-dimensional clothing model matched to the current human body pose to obtain a two-dimensional clothing image of the selected clothing. The clothing display module 117 is specifically configured to: display the two-dimensional clothing image by superimposing it on the virtual image of the user. The virtual dressing system 100 of this example models the three-dimensional garment as a deformable object, so that the garment deforms as the human body pose changes, producing a better experience.
In some embodiments, the rendering module 116 includes a lighting simulation unit. The lighting simulation unit is configured to: acquire the current illumination condition, and dynamically set the light source of the virtual world according to the illumination condition when rendering the clothing model matched to the current human body pose, so as to reflect the color of the clothing in the real scene, produce a realistic rendering effect, and help the user match colors. Note that the current illumination condition may be acquired in various ways. In one embodiment, it is collected in real time using a sensor; in another embodiment, the current time is obtained and the illumination condition is determined from a preset correspondence between time and illumination; in yet another embodiment, illumination conditions obtained in multiple ways are considered together to determine the current illumination condition.
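A sketch of the lighting simulation unit under the embodiments above; the lux values, the sensor interface and the scene API are all assumptions:

```python
import datetime

# Preset time-of-day -> ambient light table (values are illustrative).
TIME_TO_LUX = {range(0, 7): 50, range(7, 18): 800, range(18, 24): 150}

def current_illumination(light_sensor=None):
    if light_sensor is not None:
        return light_sensor.read_lux()       # real-time sensor embodiment
    hour = datetime.datetime.now().hour      # preset time->illumination embodiment
    return next(lux for hours, lux in TIME_TO_LUX.items() if hour in hours)

def set_virtual_light(scene, lux, nominal_indoor_lux=800.0):
    # Drive the render engine's light source so the rendered garment colors
    # match how the clothes would look in the user's actual room.
    scene.main_light.intensity = lux / nominal_indoor_lux
```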
In some embodiments, the virtual dressing system 100 of the invention further comprises a calibration module 118. The calibration module 118 calibrates the two-dimensional clothing image generated by the rendering module 116 so that the clothing display module 117 can display the calibrated image. Calibration here specifically means unifying the user's real-world coordinate system with the camera coordinate system of the glasses 110. A typical calibration method uses a 2D marker.
Further, in some embodiments, the calibration module 118 is specifically configured to calibrate using face information, taking the face information as the calibration marker for the to-be-displayed position of the clothing rendering result. The clothing display module 117 is then specifically configured to: superimpose the calibrated clothing rendering result on the human body image seen by the user, according to the calibrated to-be-displayed position.
As an optional specific embodiment, the calibration module 118 includes the following units:
a face key point recognition unit for recognizing face key points of the human body image in the visual data, such as the eyebrows, nose and mouth;
a standard face display unit for displaying a preset standard face image using the glasses 110 (for example, on the AR glasses screen) and prompting the user to move so as to align the user's face (i.e., the face seen in the glasses) with the standard face image;
a face alignment judging unit for judging, from the recognized face key points, whether the user's face is aligned with the standard face image; note that the alignment need not be exact, only within a preset error threshold;
and a calibration unit for determining the coordinate mapping relation between the camera coordinate system and the display coordinate system when the user's face is aligned with the standard face image, and calibrating the to-be-displayed position of the clothing rendering result according to the coordinate mapping relation, so that the clothing rendering result on the screen of the glasses 110 can be accurately superimposed on the human body image reflected by the mirror.
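One plausible form of the coordinate mapping relation is a planar homography estimated from matched face key points; this is an assumption (the patent only speaks of a "coordinate mapping relation"), shown here with OpenCV:

```python
import numpy as np
import cv2

def calibrate(camera_face_kps, display_face_kps):
    """Both arguments: Nx2 arrays of matched face key points (eyebrows, nose,
    mouth, ...) in camera coordinates and display coordinates respectively,
    captured when the user's face is aligned with the standard face image."""
    H, _ = cv2.findHomography(np.float32(camera_face_kps),
                              np.float32(display_face_kps), cv2.RANSAC)
    return H

def to_display(H, camera_points):
    """Map points (e.g. the to-be-displayed garment outline) from the camera
    coordinate system into the display coordinate system."""
    pts = np.float32(camera_points).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```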
The virtual dressing system 100 of this example, which performs calibration using face information, can accurately superimpose the virtual information on the screen of the glasses 110 onto the real world seen by the user, giving the user an AR experience.
In some embodiments, the virtual grooming system 100 of the present example also includes a database 120, with the database 120 being utilized to obtain apparel data. Specifically, the database 120 is used for storing a plurality of clothing models of selectable clothing available for the user to select and matching information of one or more clothing key points and one or more human body key points in the clothing models of the selectable clothing. Apparel data acquisition module 114 is specifically configured to: the clothing model of the selected clothing and the matching information of the clothing key points and the human body key points of the selected clothing are called out from the database 120.
It is noted that the database 120 may be implemented by the memory provided in the glasses 110. Alternatively, as shown in fig. 2, the database 120 may be disposed not in the glasses 110 but on a server, with the clothing data acquired through interaction between the clothing data acquisition module 114 and the server.
Further, the virtual dressing system 100 of this example also includes a database entry module 121. The database entry module 121 is configured to: receive in advance the clothing models of the selectable clothes, match one or more clothing key points in each clothing model with one or more human body key points to obtain the matching information, and enter the matching information into the database 120.
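An illustrative record layout for the database and its entry module (field names are assumptions; any mapping-like store would do):

```python
from dataclasses import dataclass, field

@dataclass
class GarmentRecord:
    garment_id: str
    model_uri: str                 # where the garment's 3D model is stored
    # garment key point -> (body key point, fixed offset on that body part)
    matching_info: dict = field(default_factory=dict)

def enter_garment(db, record):
    """Entry happens ahead of time, as described: the model is received and its
    key points are matched against the human body key points before use."""
    db[record.garment_id] = record

def fetch_garment(db, garment_id):
    record = db[garment_id]
    return record.model_uri, record.matching_info
```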
In some embodiments, the virtual dressing system 100 of this example also includes a dressing effect memo module 119. The dressing effect memo module 119 is configured to: at every preset time interval, invoke the aforementioned modules and units of the virtual dressing system 100 (the visual data acquisition module 112, the gesture recognition and positioning module 113, the clothing data acquisition module 114, the clothing and human body pose fusion module 115, the rendering module 116 and the clothing display module 117) to repeat the steps from collecting visual data to displaying the clothing rendering result, so as to show the dressing effect in real time.
Further, in some embodiments, the dressing effect memo module 119 is also configured to: record the virtual dressing information during each repetition to generate the user's historical dressing record. The virtual dressing information includes one or more of the visual data, the matching information between the clothing key points of the selected clothing and the human body key points, and the clothing model matched to the current human body pose. The user can thus compare different dressing effects afterwards.
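The memo behaviour can be sketched as a timed loop; `run_pipeline_once` is a stand-in for the capture-through-display steps above, and the record fields mirror the ones listed in the text:

```python
import time

def memo_loop(run_pipeline_once, interval_s=0.1, history=None):
    """Refresh the try-on every preset interval and append what each pass used,
    so the user can later compare different dressing effects."""
    history = [] if history is None else history
    while True:
        visual_data, matching_info, posed_model = run_pipeline_once()
        history.append({
            "visual_data": visual_data,          # the captured frame(s)
            "matching_info": matching_info,      # garment/body key point matches
            "posed_garment_model": posed_model,  # model matched to current pose
        })
        time.sleep(interval_s)
```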
In some embodiments, the virtual dressing system 100 of this example further comprises a display mode judging module (not shown in the figures) for judging the display mode according to the current online or offline state and/or according to the user's selection. The display mode includes one or more of a first display mode, a second display mode and a third display mode. The clothing display module 117 includes one or more of a first display unit, a second display unit and a third display unit.
The first display unit is configured to: in the first display mode, superimpose the clothing rendering result on the human body image reflected by the mirror when displaying it, so that the user sees his or her virtually dressed self in the mirror.
The second display unit is configured to: in the second display mode, superimpose the clothing rendering result on the human body image in the visual data of the historical dressing record when displaying it, so as to show recorded visual data with the virtual dressing effect superimposed.
The third display unit is configured to: in the third display mode, superimpose the clothing rendering result on the human body image in visual data collected in real time when displaying it, so as to show live visual data with the virtual dressing effect superimposed.
As an optional specific example, the display mode judging module is specifically configured to: judge whether to use the first display mode or the second display mode according to the current online or offline state; the clothing display module 117 then includes the aforementioned first display unit and second display unit. The first display unit is configured to: in the online state, adopt the first display mode and superimpose the clothing rendering result on the human body image reflected by the mirror. In the online experience the user sees: himself or herself in the mirror, plus the rendered apparel displayed by the glasses 110. The second display unit is configured to: in the offline state, adopt the second display mode and superimpose the clothing rendering result on the human body image in the visual data of the historical dressing record. In the offline experience the user sees: video of himself or herself trying on the garment, taken by the camera in the glasses 110, plus the rendered apparel displayed by the glasses 110.
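The mode decision and dispatch might look like this; the mode numbering follows the text above, while the `overlay` call is a stand-in for the glasses' device-specific compositing API:

```python
def choose_display_mode(online, user_choice=None):
    """Online -> mode 1 (overlay on the mirror reflection); offline -> mode 2
    (overlay on recorded footage); an explicit user choice overrides both."""
    if user_choice in (1, 2, 3):
        return user_choice
    return 1 if online else 2

def overlay(render_result, background):
    # Stand-in for the glasses' compositing/display call (device-specific).
    pass

def show(render_result, mode, mirror_view=None, history_frame=None, live_frame=None):
    if mode == 1:
        overlay(render_result, mirror_view)     # on the mirror reflection
    elif mode == 2:
        overlay(render_result, history_frame)   # on historical visual data
    else:
        overlay(render_result, live_frame)      # on real-time visual data
```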
Note that in some embodiments the glasses 110 need not include all of the clothing selection module 111, the visual data acquisition module 112, the gesture recognition and positioning module 113, the clothing data acquisition module 114, the clothing and human body pose fusion module 115, the rendering module 116 and the clothing display module 117, but may include only some of them, with the remaining modules located on a server or in another device. For example, the virtual dressing system 100 of this example may also include the aforementioned mirror as a smart mirror containing a memory and a processor, in which the remaining modules are disposed.
Fig. 3 is a schematic flow chart of an embodiment of the virtual dressing method of the present invention. Referring to fig. 3, the virtual dressing method of the present invention mainly includes the following steps:
and S11, providing a clothing selection interface, and receiving one, more or multiple types of clothing to be tried on selected by the user as the selected clothing. It should be noted that the type of the apparel is not limited, and may be a bag, a jacket, a skirt, etc.
And S12, collecting visual data. Wherein the visual data comprises an image of the human body reflected by the mirror. The mirror comprises a mirror surface capable of reflecting an object. Generally, the mirror may be a general mirror. Optionally, the visual data comprises an RGB image containing RGB information of the user's body reflected off a mirror.
And S13, obtaining the current human body posture of the human body reflected by the mirror according to the visual data. The current human body posture comprises current posture information of a plurality of human body key points. The human body key points can also be called human body part key points or human body key parts.
And S14, obtaining a clothing model of the selected clothing, wherein the clothing model comprises a plurality of clothing key points. And acquiring matching information of the clothing key points of the selected clothing and the human body key points.
Step S15, determining the current posture information of the clothing key points of the selected clothing according to the clothing model of the selected clothing, the matching information and the current posture information of the human body key points, so as to obtain the clothing model of the selected clothing, which is matched with the current human body posture.
And S16, rendering according to the clothing model matched with the current human body posture to obtain a clothing rendering result. Optionally, the garment rendering result includes a garment image that has matched the current human pose.
And S17, displaying a clothing rendering result. Specifically, the dress rendering result is displayed by overlaying the dress rendering result on the human body image seen by the user so as to display the dress effect in an augmented reality mode.
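Putting steps S11-S17 together, one frame of the method can be sketched as below; `stages` bundles hypothetical callables for each step, since the patent leaves the concrete implementations open:

```python
def virtual_dressing_frame(stages, selected_garment):
    """Run S12-S17 once for an already selected garment (S11)."""
    frame = stages.capture()                                  # S12: visual data
    pose = stages.estimate_pose(frame)                        # S13: current body pose
    model, matching = stages.fetch_garment(selected_garment)  # S14: model + matching info
    posed_model = stages.fuse(pose, model, matching)          # S15: pose-matched model
    rendering = stages.render(posed_model)                    # S16: clothing rendering
    stages.display(rendering)                                 # S17: AR overlay
```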
In some embodiments, the aforementioned step S13 specifically includes: segmenting the mirror region in the visual data using a mirror recognition and positioning model to obtain in-mirror visual data, the in-mirror visual data comprising the human body image reflected by the mirror; and obtaining the current human body pose from the in-mirror visual data. With the virtual dressing method of the invention, interference from people in the non-mirror region of the visual data can be removed, which helps identify the user more accurately.
In some embodiments, the collected visual data is a two-dimensional image, such as a two-dimensional RGB image, and a three-dimensional human body pose is recognized from it. Optionally, the aforementioned step S12 includes: collecting two-dimensional visual data. The aforementioned step S13 includes: estimating a Skinned Multi-Person Linear model (SMPL model) from the two-dimensional visual data using DensePose, as the current human body pose. Optionally, obtaining the SMPL model includes: obtaining the current pose information of a plurality of three-dimensional human body key points in the SMPL model.
DensePose is a human body pose estimation technique that maps the human pixels of a two-dimensional image onto a three-dimensional body surface, processing dense coordinates at multiple frames per second to achieve accurate localization and pose estimation of moving people. The SMPL model is a parameterized human body model containing various parameters describing the human body: parameters characterizing an individual's height, weight, head-to-body ratio and the like, and parameters characterizing the body's overall motion pose, such as the relative angles of its 24 human body key points.
It should be noted that the invention is not limited to using DensePose for pose recognition, nor to using the SMPL human body model for human body characterization; other pose recognition methods or other human body models may also be used to carry out the virtual dressing method proposed by the invention.
In some embodiments, instead of the 24 human body key points commonly used in SMPL models, fewer human body key points are used to increase the speed of human body pose estimation with DensePose. At the same time, the characteristics of virtual dressing must be considered: the key points cannot be reduced so far that the dressing effect suffers. Specifically, the human body key points of the invention include: head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot, right foot. By providing these 18 human body key points in the human body model, the virtual dressing method of this example increases the speed of pose recognition with the DensePose model while still allowing accurate virtual dressing.
It should be noted that the 18-key-point scheme proposed by the invention is not limited to the SMPL model; it can also be applied in embodiments of the virtual dressing method that use another human body model.
In some embodiments, acquiring the clothing model of the selected clothing in step S14 specifically includes: acquiring a three-dimensional clothing model of the selected clothing; and acquiring the matching information between the clothing key points of the selected clothing and the human body key points specifically includes: acquiring matching information between the three-dimensional clothing key points of the selected clothing and the three-dimensional human body key points. Accordingly, the clothing model generated in step S15 and matched to the current human body pose is also three-dimensional. Step S16 then specifically includes: rendering the three-dimensional clothing model matched to the current human body pose to obtain a two-dimensional clothing image of the selected clothing. Further, step S17 specifically includes: displaying the two-dimensional clothing image superimposed on the human body image seen by the user. With the virtual dressing method of the invention, the three-dimensional garment is modeled as a deformable object, so the garment deforms as the human body pose changes, producing a better experience.
In some embodiments, the foregoing step S16 specifically comprises: acquiring the current illumination condition; and, when rendering the clothing model matched to the current human body posture, dynamically setting the light source of the virtual world according to that illumination condition, so that the rendered garment reflects the colors it would have in the real scene, producing a realistic rendering and making it easier for the user to match colors. The current illumination condition may be acquired in a variety of ways: in one embodiment it is collected in real time by a sensor; in another, the current time is obtained and the illumination condition is looked up from a preset correspondence between time and illumination; in yet another, several of these sources are combined to determine the current illumination condition.
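As one possible realization of acquiring the illumination condition from a camera frame, the sketch below estimates an ambient color and intensity from mean pixel brightness and feeds it to the virtual light source; the mean-brightness heuristic and the renderer's set_ambient_light call are assumptions, since the embodiment leaves both open:

import numpy as np

def estimate_lighting(frame_rgb: np.ndarray):
    """Estimate an ambient light colour and intensity from a camera frame.

    frame_rgb: H x W x 3 array with values in [0, 255].
    Returns (colour, intensity), both normalised to [0, 1].
    """
    mean_rgb = frame_rgb.reshape(-1, 3).mean(axis=0) / 255.0
    intensity = float(mean_rgb.mean())
    colour = tuple(mean_rgb / max(mean_rgb.max(), 1e-6))  # dominant tint of the scene
    return colour, intensity

def set_virtual_light(renderer, frame_rgb):
    # Dynamically drive the virtual world's light source from the real scene,
    # so rendered garment colours match what the user sees in the mirror.
    colour, intensity = estimate_lighting(frame_rgb)
    renderer.set_ambient_light(colour=colour, intensity=intensity)  # hypothetical renderer API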
In some embodiments, before the foregoing step S17, the method further comprises calibration using face information: the position at which the clothing rendering result is to be displayed is calibrated using the face information as a calibration marker. Step S17 then specifically comprises superimposing the calibrated clothing rendering result onto the human body image seen by the user, at the calibrated display position.
As an optional specific embodiment, calibrating the position at which the clothing rendering result is to be displayed, using the face information as a calibration marker, specifically comprises the following steps (see the sketch after these steps):
identifying the face key points of the human body image in the visual data;
displaying a preset standard face image and prompting the user to move so as to align his or her face with the standard face image;
judging, according to the identified face key points, whether the user's face is aligned with the standard face image;
and, when the user's face is aligned with the standard face image, determining the coordinate mapping relation between the camera coordinate system and the display coordinate system, and calibrating the display position of the clothing rendering result according to that mapping relation.
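Once the user's face coincides with the standard face image, the coordinate mapping relation can be recovered from the matched key points, for example with a least-squares affine fit. The sketch below assumes two-dimensional key points and an affine mapping; the embodiment does not fix the mathematical form of the mapping:

import numpy as np

def fit_affine(camera_pts, display_pts):
    """Least-squares affine map from camera coordinates to display coordinates.

    camera_pts, display_pts: N x 2 arrays of matched face key points,
    e.g. the detected key points vs. the standard face image's key points.
    Returns a 2 x 3 matrix A such that display ~= A @ [x, y, 1].
    """
    camera_pts = np.asarray(camera_pts, dtype=float)
    display_pts = np.asarray(display_pts, dtype=float)
    ones = np.ones((len(camera_pts), 1))
    X = np.hstack([camera_pts, ones])            # N x 3 design matrix
    A, *_ = np.linalg.lstsq(X, display_pts, rcond=None)
    return A.T                                   # 2 x 3 affine matrix

def camera_to_display(A, point):
    x, y = point
    return tuple(A @ np.array([x, y, 1.0]))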
In some embodiments, the apparel data is obtained from a database. Specifically, step S14 comprises calling up, from the database, the clothing model of the selected clothing and the matching information between its clothing key points and the human body key points. The database records the clothing models of a plurality of selectable garments for the user to choose from, together with matching information between one or more clothing key points in each of those models and one or more human body key points.
Further, the virtual decorating method of the present invention also comprises a database entry step: receiving in advance the clothing model of a selectable garment, matching one or more clothing key points in that model with one or more human body key points to obtain the matching information, and entering both into the database.
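A minimal sketch of such a database, using SQLite with the matching information stored as JSON; the table layout and field names are illustrative rather than a prescribed schema:

import sqlite3
import json

conn = sqlite3.connect(":memory:")  # illustrative; a real system would persist this
conn.execute("""
    CREATE TABLE apparel (
        apparel_id TEXT PRIMARY KEY,
        model_path TEXT NOT NULL,   -- path to the 3D clothing model
        matching   TEXT NOT NULL    -- JSON: clothing key point -> body key point
    )""")

def enter_apparel(apparel_id, model_path, matching_info):
    """Database entry step: store a selectable garment and its matching info."""
    conn.execute("INSERT INTO apparel VALUES (?, ?, ?)",
                 (apparel_id, model_path, json.dumps(matching_info)))

def fetch_apparel(apparel_id):
    """Step S14: call up the clothing model and matching info of the selected garment."""
    model_path, matching = conn.execute(
        "SELECT model_path, matching FROM apparel WHERE apparel_id = ?",
        (apparel_id,)).fetchone()
    return model_path, json.loads(matching)

enter_apparel("shirt-01", "models/shirt01.glb",
              {"collar": "neck", "left_sleeve_end": "left_forearm"})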
In some embodiments, the virtual decorating method of the present invention further comprises repeating steps S12 through S17 at a preset time interval, so as to display the decorating effect in real time.
Further, in some embodiments, the virtual decorating method of the present examples also comprises: during the repetition of steps S12 through S17, recording the virtual decorating information of each iteration to generate the user's historical dressing record. The virtual decorating information comprises one or more of the visual data, the matching information between the clothing key points of the selected clothing and the human body key points, and the clothing model already matched to the current human body posture. This allows the user to later compare different dressing effects side by side.
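The repetition and the history record can be pictured as a timed loop over steps S12 to S17 — a sketch only, in which the steps object bundling the per-step callables is our abstraction, not an interface defined by the present invention:

import time

def virtual_dressing_loop(steps, interval_s=0.033, iterations=10):
    """Repeat steps S12-S17 at a preset interval and keep a dressing history.

    steps: object bundling the per-step callables (an assumption of this
    sketch): capture, estimate_pose, fetch_apparel, fit, render, display.
    """
    history = []
    for _ in range(iterations):
        frame = steps.capture()                    # S12: collect visual data
        pose = steps.estimate_pose(frame)          # S13: current human body posture
        model, matching = steps.fetch_apparel()    # S14: model + matching info
        fitted = steps.fit(model, matching, pose)  # S15: match model to the pose
        rendering = steps.render(fitted)           # S16: clothing rendering result
        steps.display(rendering)                   # S17: superimpose and display
        history.append({"visual_data": frame,      # record for later comparison
                        "matching_info": matching,
                        "fitted_model": fitted})
        time.sleep(interval_s)                     # preset time interval
    return history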
In some embodiments, the virtual decorating method of the present examples further comprises determining a display mode according to whether the device is currently in an online state or an offline state, and/or according to the user's selection. The display mode comprises one or more of a first display mode, a second display mode and a third display mode, and the aforementioned step S17 comprises one or more of the following steps:
if the display mode is the first display mode, the clothing rendering result is superimposed, when displayed, onto the human body image reflected by the mirror, so that the user sees his or her virtually dressed self in the mirror;
if the display mode is the second display mode, the clothing rendering result is superimposed onto the human body image in the visual data of the historical dressing record, so as to show the recorded visual data with the virtual decorating effect superimposed;
if the display mode is the third display mode, the clothing rendering result is superimposed onto the human body image in the visual data collected in real time, so as to show the live visual data with the virtual decorating effect superimposed.
As an optional specific example, the step of determining the display mode comprises choosing between the first and second display modes according to whether the device is currently online or offline. Step S17 then comprises: if online, adopting the first display mode and superimposing the clothing rendering result onto the human body image reflected by the mirror; if offline, adopting the second display mode and superimposing the clothing rendering result onto the human body image in the visual data of the historical dressing record. In the online experience, the user sees himself or herself in the mirror together with the rendered apparel displayed by the AR glasses; in the offline experience, the user sees video of the fitting taken by the camera in the AR glasses, together with the rendered apparel in the glasses' display.
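The online/offline decision can be expressed as a small dispatch over the three display modes — a sketch; how the device detects connectivity is left open, and the mode names are illustrative:

from enum import Enum
from typing import Optional

class DisplayMode(Enum):
    MIRROR_OVERLAY = 1      # first display mode: overlay on the mirror image
    HISTORY_OVERLAY = 2     # second display mode: overlay on recorded visual data
    LIVE_VIDEO_OVERLAY = 3  # third display mode: overlay on live camera video

def choose_display_mode(online: bool,
                        user_choice: Optional[DisplayMode] = None) -> DisplayMode:
    """Pick a display mode from the connectivity state and/or the user's selection."""
    if user_choice is not None:  # an explicit user selection takes precedence
        return user_choice
    # Online: let the user look at the mirror; offline: replay the history record.
    return DisplayMode.MIRROR_OVERLAY if online else DisplayMode.HISTORY_OVERLAY

assert choose_display_mode(online=False) is DisplayMode.HISTORY_OVERLAY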
Fig. 4 is a hardware block diagram illustrating an apparatus according to one embodiment of the invention. As shown in fig. 4, the apparatus 200 according to an embodiment of the present invention includes a memory 201 and a processor 202. The various components in the device 200 are interconnected by a bus system and/or other form of connection mechanism (not shown). The device 200 of the present invention may be implemented in various forms including, but not limited to, mobile terminal devices such as augmented reality glasses (or AR glasses, smart glasses) or other augmented reality devices (or AR devices), virtual reality devices (or VR devices), smart watches, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation apparatuses, in-vehicle terminal devices, in-vehicle display terminals, in-vehicle electronic rear view mirrors, etc., and fixed terminal devices such as digital TVs, desktop computers, etc.
The memory 201 is used to store non-transitory computer-readable instructions. In particular, the memory 201 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory; the non-volatile memory may include, for example, read-only memory (ROM), hard disks and flash memory.
The processor 202 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the device 200 to perform desired functions. In an embodiment of the present invention, the processor 202 is configured to execute the computer readable instructions stored in the memory 201, so that the apparatus 200 performs all or part of the aforementioned steps of the virtual decorating method according to the embodiments of the present invention.
In some embodiments, device 200 of embodiments of the present invention is augmented reality glasses.
An embodiment of the present invention further provides a computer-readable storage medium for storing a computer program which, when executed by a computer or a processor, implements the steps of the virtual decorating method.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (23)

1. A virtual decorating system, characterized in that the system comprises:
glasses comprising a memory and a processor;
the clothing selection module is used for providing a clothing selection interface and receiving clothing selected by a user as selected clothing;
the visual data acquisition module is used for acquiring visual data, and the visual data comprises a human body image reflected by the mirror;
the gesture recognition and positioning module is used for obtaining, according to the visual data, the current human body posture of the human body reflected by the mirror, wherein the current human body posture comprises current posture information of a plurality of human body key points;
the clothing data acquisition module is used for acquiring a clothing model of the selected clothing, wherein the clothing model comprises a plurality of clothing key points, and for acquiring matching information between the clothing key points of the selected clothing and the human body key points;
the dress and human body posture fusion module is used for determining the current posture information of the dress key points of the selected dress according to the current posture information of the human body key points, the dress model of the selected dress and the matching information so as to obtain the dress model of the selected dress matched with the current human body posture;
the rendering module is used for rendering according to the clothing model matched with the current human body posture to obtain a clothing rendering result;
the clothing display module is used for displaying the clothing rendering result by superposing the clothing rendering result on the human body image seen by the user and displaying the dressing effect in an augmented reality form;
the system comprises a clothing selection module, a visual data acquisition module, a gesture recognition and positioning module, a clothing data acquisition module, a clothing and human body posture fusion module, a rendering module and a clothing display module, wherein one or more of the clothing selection module, the visual data acquisition module, the gesture recognition and positioning module, the clothing data acquisition module, the clothing and human body posture fusion module, the rendering module and the clothing display module are arranged on a memory of the glasses.
2. The virtual decorating system according to claim 1, wherein
the gesture recognition and positioning module comprises a mirror recognition and positioning unit and a gesture recognition and positioning unit;
the mirror identification and positioning unit is used for segmenting the mirror region in the visual data to obtain in-mirror visual data, wherein the in-mirror visual data comprises the human body image;
the gesture recognition and positioning unit is used for obtaining the current human body posture according to the in-mirror visual data.
3. The virtual decorating system according to claim 1, wherein
the visual data acquisition module is specifically used for acquiring two-dimensional visual data;
the gesture recognition and positioning module is specifically configured to: estimate, from the two-dimensional visual data and using the DensePose approach, a skinned multi-person linear model as the current human body posture, whereby the current posture information of a plurality of three-dimensional human body key points in the skinned multi-person linear model is obtained.
4. The virtual decorating system according to claim 1, wherein the plurality of human body key points are the head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot and right foot.
5. The virtual decorating system according to claim 3, wherein:
the clothing data acquisition module is specifically used for: acquiring a clothing three-dimensional model of the selected clothing, and acquiring matching information of the three-dimensional clothing key points and the three-dimensional human body key points of the selected clothing;
the rendering module is specifically configured to: rendering according to the three-dimensional clothing model matched with the current human body posture to obtain a clothing two-dimensional image of the selected clothing;
the clothing display module is specifically used for: displaying the clothing two-dimensional image by overlaying the clothing two-dimensional image onto the human body image seen by a user.
6. The virtual decorating system according to claim 1, wherein the rendering module comprises a lighting simulation unit for:
acquiring a current illumination condition;
and dynamically setting a light source of the virtual world according to the illumination condition when the clothing model matched with the current human body posture is rendered.
7. The virtual decorating system according to claim 1, wherein
the system also comprises a calibration module used for calibrating the position to be displayed of the clothes rendering result by taking the face information as a calibration mark;
the clothing display module is specifically used for: and according to the calibrated position to be displayed, overlapping the calibrated clothes rendering result to the human body image seen by the user.
8. The virtual decorating system according to claim 7, wherein the calibration module comprises:
the face key point identification unit is used for identifying face key points of the human body image;
the standard face display unit is used for displaying a preset standard face image by using the glasses and prompting a user to move so as to align the face of the user with the standard face image;
the face alignment judging unit is used for judging whether the face of the user is aligned with the standard face image or not according to the identified face key points;
and the calibration unit is used for determining the coordinate mapping relation between a camera coordinate system and a display coordinate system when the face of the user is aligned with the standard face image, and calibrating the position to be displayed of the clothing rendering result according to the coordinate mapping relation.
9. The virtual decorating system according to claim 1, wherein
the system further comprises a database for storing the clothing models of a plurality of selectable garments and matching information between one or more clothing key points in the clothing models of the selectable garments and one or more human body key points;
the clothing data acquisition module is specifically used for: calling up, from the database, the clothing model of the selected clothing and the matching information between the clothing key points of the selected clothing and the human body key points;
the system further comprises a database entry module for: receiving the clothing model of a selectable garment in advance, matching one or more clothing key points in that clothing model with the human body key points to obtain the matching information, and entering it into the database.
10. The virtual decorating system according to any one of claims 1 to 9, further comprising an apparel effect memo module for:
repeatedly performing, at every preset time interval and by means of the clothing selection module, the visual data acquisition module, the gesture recognition and positioning module, the clothing data acquisition module, the clothing and human body posture fusion module, the rendering module and the clothing display module, the steps from collecting the visual data through displaying the clothing rendering result superimposed on the human body image seen by the user, so as to display the dressing effect in real time;
recording, during each repetition, one or more of the visual data, the matching information between the clothing key points of the selected clothing and the human body key points, and the clothing model already matched to the current human body posture, so as to generate a historical dressing record.
11. The virtual decorating system according to claim 10, wherein:
the system further comprises a display mode judging module for determining the display mode according to whether the system is currently in an online state or an offline state, and/or according to the user's selection; the display modes comprise one or more of a first display mode, a second display mode and a third display mode;
the clothing display module comprises one or more of a first display unit, a second display unit and a third display unit;
the first display unit is configured to: if the display mode is the first display mode, superimpose the clothing rendering result, when displaying it, onto the human body image reflected by the mirror, so that the user sees his or her virtually decorated self in the mirror;
the second display unit is configured to: if the display mode is the second display mode, superimpose the clothing rendering result onto the human body image in the visual data of the historical dressing record, so as to show the recorded visual data with the virtual decorating effect superimposed;
the third display unit is configured to: if the display mode is the third display mode, superimpose the clothing rendering result onto the human body image in the visual data collected in real time, so as to show the live visual data with the virtual decorating effect superimposed.
12. A virtual decorating method, characterized in that it comprises the steps of:
providing a clothing selection interface, and receiving clothing selected by a user as selected clothing;
collecting visual data, wherein the visual data comprises human body images reflected by a mirror;
obtaining the current human body posture of the human body reflected by the mirror according to the visual data; the current human body posture comprises current posture information of a plurality of human body key points;
acquiring a clothing model of the selected clothing, wherein the clothing model comprises a plurality of clothing key points, and acquiring matching information of the clothing key points and the human body key points of the selected clothing;
determining current pose information of the clothing key points of the selected clothing according to the current pose information of the human body key points, the clothing model of the selected clothing and the matching information so as to obtain a clothing model of the selected clothing, which is matched with the current human body pose;
rendering according to the clothing model matched with the current human body posture to obtain a clothing rendering result;
displaying the clothes rendering result by overlaying the clothes rendering result on the human body image seen by the user for displaying the dressing effect in an augmented reality form.
13. The virtual decorating method according to claim 12, wherein the obtaining, according to the visual data, of the current human body posture of the human body reflected by the mirror comprises:
segmenting the mirror region in the visual data to obtain in-mirror visual data, wherein the in-mirror visual data comprises the human body image;
and obtaining the current human body posture according to the in-mirror visual data.
14. The virtual decorating method according to claim 12, wherein
the acquiring visual data comprises acquiring two-dimensional visual data;
the obtaining of the current human body posture of the human body reflected by the mirror from the visual data comprises: estimating, from the two-dimensional visual data and using the DensePose approach, a skinned multi-person linear model as the current human body posture, whereby the current posture information of a plurality of three-dimensional human body key points in the skinned multi-person linear model is obtained.
15. The virtual decorating method according to claim 12, wherein the plurality of human body key points are the head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot and right foot.
16. The virtual decorating method according to claim 14, wherein
the obtaining of the apparel model for the selected apparel includes: acquiring a three-dimensional model of the selected clothes;
the obtaining of matching information of the clothing key points and the human body key points of the selected clothing comprises: acquiring matching information of the three-dimensional clothing key points and the three-dimensional human body key points of the selected clothing;
the step of rendering according to the clothing model matched with the current human body posture to obtain a clothing rendering result comprises the following steps: rendering according to the three-dimensional clothing model matched with the current human body posture to obtain a clothing two-dimensional image of the selected clothing;
the displaying the clothing rendering result by overlaying the clothing rendering result on the human body image seen by the user comprises: and overlaying the clothing two-dimensional image on the human body image seen by the user to display the clothing two-dimensional image.
17. The virtual decorating method according to claim 12, wherein the rendering according to the clothing model matched with the current human body posture to obtain a clothing rendering result comprises:
acquiring a current illumination condition;
and dynamically setting the light source of the virtual world according to the illumination condition when the clothing model matched with the current human body posture is rendered.
18. The virtual decorating method according to claim 12, wherein
before the step of displaying the clothes rendering result by superimposing the clothes rendering result on the human body image seen by the user, further comprising: calibrating the position to be displayed of the clothing rendering result by taking the face information as a calibration mark;
the displaying the dress rendering result by overlaying the dress rendering result onto the human body image seen by the user comprises: and according to the calibrated position to be displayed, overlaying the calibrated clothing rendering result to the human body image seen by the user.
19. The virtual decorating method according to claim 18, wherein the calibrating of the position to be displayed of the clothing rendering result by using the face information as a calibration marker comprises:
identifying key points of the face of the human body image;
displaying a preset standard face image, and prompting a user to move so as to align the face of the user with the standard face image;
judging whether the face of the user is aligned with the standard face image or not according to the identified face key points;
and when the user face is aligned with the standard face image, determining a coordinate mapping relation between a camera coordinate system and a display coordinate system, and calibrating the position to be displayed of the clothing rendering result according to the coordinate mapping relation.
20. The virtual decorating method according to any one of claims 12 to 19, further comprising:
repeating, at every preset time interval, the steps from collecting the visual data through displaying the clothing rendering result superimposed on the human body image seen by the user, so as to display the dressing effect in real time;
recording, in each repetition, one or more of the visual data, the matching information between the clothing key points of the selected clothing and the human body key points, and the clothing model matched to the current human body posture, so as to generate a historical dressing record.
21. The virtual decorating method according to claim 20, wherein
the method further comprises the following steps: judging display modes according to the current online state or offline state and/or according to the selection of a user, wherein the display modes comprise one or more of a first display mode, a second display mode and a third display mode;
the displaying the clothing rendering result by overlaying the clothing rendering result on the human body image seen by the user comprises one or more of the following steps:
if the display mode is the first display mode, superimposing the clothing rendering result, when displaying it, onto the human body image reflected by the mirror, so that the user sees his or her virtually dressed self in the mirror;
if the display mode is the second display mode, superimposing the clothing rendering result onto the human body image in the visual data of the historical dressing record, so as to show the recorded visual data with the virtual decorating effect superimposed;
if the display mode is the third display mode, superimposing the clothing rendering result onto the human body image in the visual data collected in real time, so as to show the live visual data with the virtual decorating effect superimposed.
22. An apparatus, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions such that the computer readable instructions, when executed by the processor, implement the virtual decorating method of any one of claims 12 to 21.
23. A computer-readable storage medium storing a computer program, characterized in that the program realizes the steps of the virtual decorating method according to any one of claims 12 to 21 when executed by a computer or processor.
CN201910640937.8A 2019-07-16 2019-07-16 Virtual decorating system, method, device and medium Active CN110363867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910640937.8A CN110363867B (en) 2019-07-16 2019-07-16 Virtual decorating system, method, device and medium

Publications (2)

Publication Number Publication Date
CN110363867A CN110363867A (en) 2019-10-22
CN110363867B true CN110363867B (en) 2022-11-29

Family

ID=68219602

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant