CN116246041A - AR-based mobile phone virtual fitting system and method - Google Patents

AR-based mobile phone virtual fitting system and method

Info

Publication number
CN116246041A
CN116246041A (application CN202111494265.8A)
Authority
CN
China
Prior art keywords
human body
model
human
clothes
mobile phone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111494265.8A
Other languages
Chinese (zh)
Inventor
魏磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202111494265.8A
Publication of CN116246041A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2016 Rotation, translation, scaling
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses an AR-based mobile phone virtual fitting system and method, comprising the following steps: acquiring color and depth images through the mobile phone depth camera; identifying the human body and extracting a human body Mask; extracting human body joint point data; building a virtual human body model with the NURBS method; calculating rotation matrices between bones to synchronize the human animation; and simulating the cloth with a spring mass point model. The invention uses the mobile phone depth camera to identify the human body joint points and combines human body model construction, real-time synchronization of human skeleton animation and garment cloth simulation, which expands the application range of the virtual fitting system, improves the garment display effect and increases the interaction with the real person. The invention can synchronize the picture to a large screen to realize a virtual fitting mirror platform, can shoot AR video through the mobile phone, and can be used to enhance the immersive experience of AR/VR games.

Description

AR-based mobile phone virtual fitting system and method
Technical Field
The invention relates to the field of computer image processing and computer graphics, in particular to the combination and application of technologies such as AR-based human body virtual modeling on the mobile phone, human body animation synchronization and cloth simulation.
Background
In recent years, with the development of virtual reality and augmented reality technologies, more VR/AR devices and applications have entered people's lives, and the extension of VR/AR technology to the mobile phone in particular has made industry applications increasingly popular.
Existing virtual fitting platforms based on AR technology rely in hardware on an external motion-sensing camera such as the Kinect, which is inconvenient to use, must be purchased separately, and requires the support of a computer host, a high-end television or similar equipment. In software, the simulation of the clothes is not realistic enough and the clothes do not fit the human body well, so the user cannot obtain a realistic fitting experience.
Disclosure of Invention
To overcome the defects in the prior art, the invention provides an AR-based mobile phone virtual fitting system and method that uses the depth camera of a mobile phone.
The technical scheme adopted by the invention is as follows. The AR-based mobile phone virtual fitting system and method comprises the following steps:
S1, acquiring a color image and a depth image in real time through the mobile phone depth camera;
S2, extracting a human body image from the color image and the depth image;
S3, acquiring human skeleton joint point data from the human body image;
S4, adjusting the bone coordinates and rotation of the human body model using the identified skeleton joint point data, and determining the boundaries of the different parts of the human body (head, trunk and limbs);
S5, fitting skin meshes of the different parts of the human body (head, trunk and limbs) from the depth image by the NURBS modeling method;
S6, extracting characteristic data of the human body from the skeleton joint points and the human body mesh data, and carrying out structural modeling of the human body;
S7, automatically generating collision bodies (head, trunk and limbs) whose sizes match the human body mesh, according to the characteristic data of the human body and the positions of the skeleton nodes;
S8, the user selects a clothes model to try on, the system automatically matches the position of the clothes model, and the clothes model is scaled according to the characteristic data of the human body;
S9, automatically generating the corresponding spring mass point models from the fixed nodes and non-fixed nodes of the clothes model, and simulating the dynamic effect of the cloth;
S10, synchronizing the changes of the human skeleton nodes in real time, and simulating the collision and swing effects of the clothes under different postures of the human body;
S11, taking the color image as the background and rendering the clothes model in front of it, realizing the effect of combining the real and virtual scenes;
S12, operating the interface through different gestures of the human body, such as sliding an arm left or right to change clothes and sliding up or down to select the clothes type;
S13, connecting the mobile phone to a large-screen device and projecting the real-time picture onto it, realizing a realistic virtual fitting mirror experience.
Preferably, in the steps S1 to S3, the depth camera hardware of the mobile phone is relied on, which specifically provides the depth image, skeleton tracking and the human body Mask.
Preferably, in the step S4, the different human body parts are divided according to the bone nodes, and the method specifically includes the following steps.
a) First, the human body posture is recognized, taking the upright posture with both arms extended outward as the reference.
b) The space coordinates of the skeleton nodes are converted into the plane coordinates of the depth image, and the human body image is divided using the skeleton nodes.
c) The trunk region takes the horizontal line through the neck joint as its upper boundary, the horizontal line through the hip joint point as its lower boundary, the gap between the left side of the trunk and the left arm and the vertical line through the left shoulder joint point as its left boundary, and the gap between the right side of the trunk and the right arm and the vertical line through the right shoulder joint point as its right boundary.
d) The left arm region takes the horizontal line through the neck joint as its upper boundary, the horizontal line through the left hand joint point as its lower boundary, the leftmost edge of the screen as its left boundary, and the gap between the left side of the trunk and the left arm and the vertical line through the left shoulder joint point as its right boundary; the right arm region takes the horizontal line through the neck joint as its upper boundary, the horizontal line through the right hand joint point as its lower boundary, the gap between the right side of the trunk and the right arm and the vertical line through the right shoulder joint point as its left boundary, and the rightmost edge of the screen as its right boundary.
e) The left leg region takes the horizontal line through the hip joint point as its upper boundary, the horizontal line through the left foot joint point as its lower boundary, the leftmost edge of the screen as its left boundary, and the vertical line through the hip joint point as its right boundary; the right leg region takes the horizontal line through the hip joint point as its upper boundary, the horizontal line through the right foot joint point as its lower boundary, the vertical line through the hip joint point as its left boundary, and the rightmost edge of the screen as its right boundary.
f) The head region takes the horizontal line through the head joint as its upper boundary, the horizontal line through the neck joint point as its lower boundary, the leftmost edge of the screen as its left boundary, and the rightmost edge of the screen as its right boundary.
Preferably, in the step S5, the human body model is built by the NURBS method based on the division of the human body parts, and the method specifically includes the following steps.
a) For the screen point coordinates of each part of the human body, a transverse average interval U and a longitudinal average interval V are set.
b) The coordinates of the value points of each part are obtained by traversing transversely and longitudinally.
c) Taking the hip center joint as the reference, the value points of the back of the human body are obtained by front-back symmetry.
d) The curves are optimized with NURBS, finally generating a three-dimensional mesh model with 2 x U x V patches.
Preferably, in the step S6, the characteristic data of the human body are extracted, including features of the whole human body and the front widths and side thicknesses of the different parts, and specifically the following steps are included.
a) Taking the natural sagging of the arms as the reference, the shoulder width is the length of the segment in which the horizontal line through the left and right shoulders intersects the human body; the trunk height is the length of the line connecting the neck joint point and the hip joint.
b) The hip width is the width of the bottommost cross section of the trunk; the waist width is the width of the cross section 3/10 of the way up from the bottom of the trunk; the chest width is the width of the cross section 6/10 of the way up; the upper chest width is the width of the cross section 8/10 of the way up.
c) The width at a limb joint point is the width of the cross section of the limb at that joint point; for example, the knee width is the width of the cross section of the leg at the knee joint point.
d) The head width is the width of the cross section through the midpoint between the head joint point and the neck joint point.
e) With the side of the human body facing the lens and the arms hanging vertically, steps b) to d) are repeated to obtain the side thicknesses of the different parts of the trunk.
Preferably, in the step S7, collision bodies of the different parts of the human body are generated from the characteristic data of the human body and the positions of the bone nodes, and the method specifically includes the following steps.
a) For the trunk and the head, capsule collision bodies are generated. For example, the waist capsule collision body is oriented horizontally, its height is the waist width, its radius is half the waist thickness, and it is centered on the waist joint point.
b) For the limbs, cone-shaped collision bodies capped by two spheres are generated. For example, the two ends of the calf collision body are the knee and the ankle, the two sphere radii are half the knee width and half the ankle width, and the center points are the knee joint point and the ankle joint point.
Preferably, in the step S8, the position of the clothes model is automatically matched and the model size is scaled, which specifically includes the following steps.
a) The clothes model skeleton is bound to the human body model skeleton, with bones matched one to one by bone name.
b) The ratio of a characteristic width of the human body model to the corresponding characteristic width of the clothes model is calculated, and the width and thickness of the clothes model are scaled by the same proportion.
c) The scaling reference is chosen according to the type of clothes. For example, for a shirt, the ratio of the human shoulder width to the shirt shoulder width is used to scale the shirt model; for trousers, the ratio of the human waist width to the trouser waist width is used to scale the trousers model.
Preferably, in the step S9, the clothes model vertices are divided into fixed nodes and non-fixed nodes, a spring mass point model is generated, and the dynamic effect of the cloth is simulated, which specifically includes the following steps.
a) Depending on whether each position of the clothes model needs to be fixed, the corresponding cloth vertices are set as fixed vertices, and the remaining cloth vertices as non-fixed vertices; in addition, vertices that do not need to exhibit a cloth effect may also be set as fixed nodes. For example, for a coat the shoulder vertices are set as fixed vertices; for a skirt the waist vertices are set as fixed vertices.
b) The distance that each cloth vertex is allowed to travel from the position of its skin mesh vertex is set: zero for fixed nodes, non-zero for non-fixed nodes (the value can be adjusted according to the actual effect).
Preferably, in the step S10, the changes of the human skeleton nodes are synchronized in real time and the collision effect between the cloth and the human body is achieved, which specifically includes the following steps.
a) When the human body is recognized for the first time, the bone positions and rotations are initialized.
b) The bone positions are re-acquired, the rotation matrix of each bone relative to its initial state is calculated, and the bone rotation angles are updated.
c) Collision handling is performed between the cloth and the collision bodies of the different parts of the human body, simulating physical interaction with the real person.
Preferably, in the step S11, the human body depth image and the clothes model are generated, the clothes model is superimposed on the human body background image, the coordinates of the model in the 3D scene are determined, the depth of the human body depth image is compared with the depth of the clothes model material, and, combined with the display of the color image background, the effect of the clothes and the human body color image mutually occluding each other is finally achieved.
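As an illustration of this depth comparison only, a minimal Python sketch follows; the array names and the availability of a rendered garment depth buffer and mask are assumptions about how the renderer exposes its data, not part of the claimed method.

```python
import numpy as np

def composite(color_bg, garment_rgb, garment_depth, body_depth, garment_mask):
    """Per-pixel occlusion between the rendered garment and the human body.

    color_bg: (H, W, 3) camera color image used as the background.
    garment_rgb / garment_depth: rendered garment colors and depth buffer.
    body_depth: (H, W) depth image of the human body (e.g. metres, inf where
    there is no body).  garment_mask: (H, W) bool, True where the garment
    was rendered.  The garment is drawn only where it is both rendered and
    closer to the camera than the body, so body and clothes occlude each other.
    """
    garment_in_front = garment_mask & (garment_depth < body_depth)
    out = color_bg.copy()
    out[garment_in_front] = garment_rgb[garment_in_front]
    return out
```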
Preferably, in the step S12, the corresponding gesture type is identified by analyzing the positional relationship of the skeletal nodes, so as to implement gesture control.
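A minimal sketch of one possible gesture test (a horizontal arm swipe) follows; the threshold value and the use of a short history of hand x-coordinates are illustrative assumptions, not prescribed by the patent.

```python
def detect_swipe(hand_x_history, shoulder_x, threshold=0.25):
    """Classify a horizontal arm swipe from the hand's x trajectory.

    hand_x_history: recent hand joint x positions in normalized screen
    coordinates (oldest first).  A swipe is reported only if the hand moved
    more than `threshold` and finished on the far side of the shoulder.
    """
    if len(hand_x_history) < 2:
        return None
    delta = hand_x_history[-1] - hand_x_history[0]
    if abs(delta) < threshold:
        return None
    if delta > 0 and hand_x_history[-1] > shoulder_x:
        return "swipe_right"   # e.g. switch to the next garment
    if delta < 0 and hand_x_history[-1] < shoulder_x:
        return "swipe_left"    # e.g. switch to the previous garment
    return None

# Example: a hand moving from left of the shoulder to well past it.
print(detect_swipe([0.30, 0.42, 0.58, 0.66], shoulder_x=0.45))  # swipe_right
```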
Preferably, in the step S13, the dynamic effect of the clothes can be displayed more clearly by synchronizing the picture to the large-screen device.
Drawings
Fig. 1 is a system framework diagram of the AR-based mobile phone virtual fitting system and method.
Fig. 2 is a program flow diagram of the AR-based mobile phone virtual fitting system and method.
Fig. 3 is a block diagram of the AR-based mobile phone virtual fitting system and method.
Figs. 4-7 are program interface screenshots of the AR-based mobile phone virtual fitting system and method.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art without creative effort, based on the embodiments of the present invention, fall within the scope of the present invention.
Embodiment of Fig. 1: the framework of the AR-based mobile phone virtual fitting system and method is described as follows:
S1, the image acquisition part acquires a color image and a depth image in real time through the mobile phone depth camera;
S2, the human body information extraction part extracts skeleton points and the human body Mask through processing inside the mobile phone;
S3, the human body modeling part combines the human body depth data with the skeleton points, divides the human body into different parts according to semantics, and models each part with the NURBS method;
S4, human body characteristic parameters are extracted for generating the human body collision bodies and adjusting the sizes of the clothes models;
S5, the clothes simulation part divides the clothes according to clothes type and automatically fits clothes at different positions;
S6, the cloth of the clothes is simulated, dynamically showing collision and swing effects as the posture changes, and the mutually overlapping and occluding parts of the clothes are handled;
S7, the AR display part superimposes the virtual 3D model on the color image, realizing the effect of combining the virtual and the real;
S8, in addition to display on the mobile phone screen, the picture can be further synchronized to a large-screen device, or a clearer and more vivid effect can be achieved through VR/AR glasses.
Embodiment of Fig. 2: the AR-based mobile phone virtual fitting system and method comprises the following steps:
S1, acquiring a color image and a depth image in real time through the mobile phone depth camera;
S2, extracting a human body image from the color image and the depth image;
If no human body is identified, step S2 is repeated; if a human body is identified, step S3 is executed;
S3, acquiring human skeleton joint point data from the human body image;
When acquiring the human body joint point data, it is judged whether the human body is in the standing posture (i.e. standing upright with the arms hanging naturally); if not, step S3 is repeated; if so, step S4 is executed;
S4, adjusting the bone coordinates and rotation of the human body model using the identified skeleton joint point data, and determining the boundaries of the different parts of the human body (head, trunk and limbs);
S5, fitting skin meshes of the different parts of the human body (head, trunk and limbs) from the depth image by the NURBS modeling method;
S6, extracting characteristic data of the human body from the skeleton joint points and the human body mesh data, and carrying out structural modeling of the human body;
S7, automatically generating collision bodies (head, trunk and limbs) whose sizes match the human body mesh, according to the characteristic data of the human body and the positions of the skeleton nodes;
S8, the user selects a clothes model to try on, the system automatically matches the position of the clothes model, and the clothes model is scaled according to the characteristic data of the human body;
S9, automatically generating the corresponding spring mass point models from the fixed nodes and non-fixed nodes of the clothes model, and simulating the dynamic effect of the cloth;
S10, synchronizing the changes of the human skeleton nodes in real time, and simulating the collision and swing effects of the clothes under different postures of the human body;
S11, taking the color image as the background and rendering the clothes model in front of it, realizing the effect of combining the real and virtual scenes;
S12, operating the interface through different gestures of the human body, such as sliding an arm left or right to change clothes and sliding up or down to select the clothes type;
S13, connecting the mobile phone to a large-screen device and projecting the real-time picture onto it, realizing a realistic virtual fitting mirror experience.
Specifically, in the step S3, human body joint point data are acquired. The returned joint point data are based on two coordinate systems: one is the world coordinate system with the camera as the origin, and the other is a local coordinate system with the human body center as the origin. For the latter, the local coordinate system must be converted into the world coordinate system, which specifically includes the following steps.
a) In the camera view cone, for two known joint points P1 and P2, the connecting line L has world-coordinate length LW and screen-coordinate length LS; the view cone height H has world-coordinate length HW and screen-coordinate length HS. From the conversion relation between world coordinates and screen coordinates, formula (1) is obtained:
LW / LS = HW / HS    (1)
b) Assuming that the distance between the center of the human body and the camera is D and the view angle of the camera is θ, the formula can be obtained:
HW = 2 x D x tan(θ / 2)    (2)
c) LW and LS are obtained from the local coordinates of P1 and P2, and HS is the height of the screen, so the value of HW is obtained from formula (1); substituting HW into formula (2), the distance D between the center of the human body and the camera can be calculated;
d) Some of the calculated data are shown in the following table:

Joint connecting line LS | 0.32 | 0.38 | 0.43 | 0.5  | 0.59
Distance D from camera   | 2.64 | 2.35 | 2.12 | 1.93 | 1.75
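By way of illustration only, the following Python sketch reproduces the calculation of formulas (1) and (2); the joint coordinates, normalized screen height and field-of-view value in the example are assumed values for illustration, not data taken from the patent.

```python
import math

def camera_distance(p1_local, p2_local, p1_screen, p2_screen,
                    screen_height, fov_deg):
    """Estimate the distance D between the body centre and the camera.

    p1_local/p2_local: joint positions in the body-local (world-scale) frame.
    p1_screen/p2_screen: the same joints in normalized screen coordinates.
    screen_height: screen height HS in the same normalized units.
    fov_deg: vertical field-of-view angle of the camera (theta).
    """
    # LW: world-coordinate length of the joint connecting line L.
    lw = math.dist(p1_local, p2_local)
    # LS: screen-coordinate length of the same line.
    ls = math.dist(p1_screen, p2_screen)
    # Formula (1): LW / LS = HW / HS  ->  HW = LW * HS / LS.
    hw = lw * screen_height / ls
    # Formula (2): HW = 2 * D * tan(theta / 2)  ->  D = HW / (2 * tan(theta / 2)).
    return hw / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# Example with made-up numbers: a 0.5 m joint line spanning 0.32 screen units.
print(round(camera_distance((0, 0, 0), (0, 0.5, 0),
                            (0.5, 0.30), (0.5, 0.62), 1.0, 60.0), 2))
```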
Specifically, in the step S4, the different human body parts are divided according to the skeletal nodes, as follows.
a) First, the human body posture is recognized, taking the upright posture with both arms extended outward as the reference.
b) The space coordinates of the skeleton nodes are converted into the plane coordinates of the depth image, and the human body image is divided using the skeleton nodes.
c) The trunk region takes the horizontal line through the neck joint as its upper boundary, the horizontal line through the hip joint point as its lower boundary, the gap between the left side of the trunk and the left arm and the vertical line through the left shoulder joint point as its left boundary, and the gap between the right side of the trunk and the right arm and the vertical line through the right shoulder joint point as its right boundary.
d) The left arm region takes the horizontal line through the neck joint as its upper boundary, the horizontal line through the left hand joint point as its lower boundary, the leftmost edge of the screen as its left boundary, and the gap between the left side of the trunk and the left arm and the vertical line through the left shoulder joint point as its right boundary; the right arm region takes the horizontal line through the neck joint as its upper boundary, the horizontal line through the right hand joint point as its lower boundary, the gap between the right side of the trunk and the right arm and the vertical line through the right shoulder joint point as its left boundary, and the rightmost edge of the screen as its right boundary.
e) The left leg region takes the horizontal line through the hip joint point as its upper boundary, the horizontal line through the left foot joint point as its lower boundary, the leftmost edge of the screen as its left boundary, and the vertical line through the hip joint point as its right boundary; the right leg region takes the horizontal line through the hip joint point as its upper boundary, the horizontal line through the right foot joint point as its lower boundary, the vertical line through the hip joint point as its left boundary, and the rightmost edge of the screen as its right boundary.
f) The head region takes the horizontal line through the head joint as its upper boundary, the horizontal line through the neck joint point as its lower boundary, the leftmost edge of the screen as its left boundary, and the rightmost edge of the screen as its right boundary.
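The segmentation above can be illustrated with a small Python sketch; the joint names and the assumption that the person's left side appears on the left of the image are illustrative choices, not specified by the patent.

```python
def body_regions(j, width):
    """Bounding boxes of the body parts in depth-image plane coordinates.

    j: dict mapping assumed joint names ("head", "neck", "hip",
    "left_shoulder", "left_hand", "left_foot", ...) to (x, y) pixel
    coordinates, with y increasing downwards and the person's left side
    appearing on the image's left.  Returns {part: (left, top, right, bottom)}.
    """
    return {
        # Torso: neck line to hip line, between the two shoulder verticals.
        "torso": (j["left_shoulder"][0], j["neck"][1],
                  j["right_shoulder"][0], j["hip"][1]),
        # Arms: neck line down to the hand line, out to the screen edge.
        "left_arm": (0, j["neck"][1], j["left_shoulder"][0], j["left_hand"][1]),
        "right_arm": (j["right_shoulder"][0], j["neck"][1], width, j["right_hand"][1]),
        # Legs: hip line down to the foot line, split at the hip vertical.
        "left_leg": (0, j["hip"][1], j["hip"][0], j["left_foot"][1]),
        "right_leg": (j["hip"][0], j["hip"][1], width, j["right_foot"][1]),
        # Head: head line down to the neck line, full screen width.
        "head": (0, j["head"][1], width, j["neck"][1]),
    }
```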
Specifically, in the step S5, the human body model is built by the NURBS method based on the division of the human body parts, as follows.
a) For the screen point coordinates of each part of the human body, the transverse average interval is set to 10 and the longitudinal average interval is set to 10.
b) The coordinates of the value points of each part are obtained by traversing transversely and longitudinally.
c) Taking the hip center joint as the reference, the value points of the back of the human body are obtained by front-back symmetry.
d) The curves are optimized with NURBS, finally generating a three-dimensional mesh model with 2 x 10 x 10 patches.
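A minimal Python sketch of the value-point sampling of steps a) to c) follows; the array layout and function names are assumptions for illustration, and the NURBS surface fitting of step d) is left to a NURBS library and is not shown.

```python
import numpy as np

def sample_value_points(depth, region, u=10, v=10):
    """Sample a u x v grid of value points from one body-part region.

    depth: 2-D numpy array of depth values (metres), background masked out.
    region: (left, top, right, bottom) inclusive pixel bounds of the part.
    Returns an array of shape (v, u, 3) of (x, y, z) front-surface points.
    """
    left, top, right, bottom = region
    xs = np.linspace(left, right, u)
    ys = np.linspace(top, bottom, v)
    pts = np.empty((v, u, 3))
    for i, y in enumerate(ys):
        for k, x in enumerate(xs):
            pts[i, k] = (x, y, depth[int(y), int(x)])
    return pts

def mirror_back_points(front_pts, hip_center_z):
    """Mirror the front value points about the hip-centre depth plane to
    approximate the back of the body, as in step c)."""
    back = front_pts.copy()
    back[..., 2] = 2.0 * hip_center_z - back[..., 2]
    return back
```

The front and back grids of value points are then fitted with a NURBS surface, giving the 2 x 10 x 10 patches mentioned in step d).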
Specifically, in the step S6, the characteristic data of the human body are extracted, including features of the whole human body and the front widths and side thicknesses of the different parts, as follows.
a) Taking the standing posture with both arms extended outward as the reference, the shoulder width is the distance between the left and right shoulder joint points plus the width at the shoulder joint points; the trunk height is the length of the line connecting the neck joint point and the hip joint.
b) The hip width is the width of the bottommost cross section of the trunk; the waist width is the width of the cross section 3/10 of the way up from the bottom of the trunk; the chest width is the width of the cross section 6/10 of the way up; the upper chest width is the width of the cross section 8/10 of the way up.
c) The widths at the limb joint points are the widths of the corresponding cross sections of the human body, namely at the shoulders, elbows, wrists, hips, knees and ankles.
d) The head width is the width of the cross section through the midpoint between the head joint point and the neck joint point.
e) With the side of the human body facing the lens and the arms hanging vertically downward, steps b) to d) are repeated to obtain the side thicknesses of the hips, waist, chest, upper chest and head.
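The cross-section measurements of steps b) to d) can be sketched as follows, assuming the human Mask is available as a binary image; the function names and the pixel-to-metre factor are illustrative assumptions.

```python
import numpy as np

def section_width(mask, y, px_to_m):
    """Width of one body cross-section: extent of the human Mask along row y,
    converted from pixels to metres."""
    row = np.flatnonzero(mask[y])
    return 0.0 if row.size == 0 else (row[-1] - row[0] + 1) * px_to_m

def torso_widths(mask, hip_y, neck_y, px_to_m):
    """Hip, waist, chest and upper-chest widths measured at 0, 3/10, 6/10 and
    8/10 of the torso height above the torso bottom (the hip line)."""
    heights = {"hip": 0.0, "waist": 0.3, "chest": 0.6, "upper_chest": 0.8}
    return {name: section_width(mask, int(hip_y - f * (hip_y - neck_y)), px_to_m)
            for name, f in heights.items()}
```

Running the same measurements on the side view yields the corresponding thicknesses, as in step e).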
Specifically, in the step S7, collision bodies of the different parts of the human body are generated from the characteristic data of the human body and the positions of the skeletal nodes, as follows.
a) For the trunk and the head, capsule collision bodies are generated. The waist capsule collision body is oriented horizontally, its height is the waist width, its radius is half the waist thickness, and it is centered on the waist joint point.
b) For the limbs, cone-shaped collision bodies capped by two spheres are generated. The two ends of the calf collision body are the knee and the ankle, the two sphere radii are half the knee width and half the ankle width, and the center points are the knee joint point and the ankle joint point.
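An illustrative Python sketch of the collider parameters described above follows; the data structures and field names are assumptions, since the patent does not prescribe a particular representation.

```python
from dataclasses import dataclass

@dataclass
class Capsule:
    center: tuple      # joint point the collider is centred on
    axis: tuple        # unit direction (horizontal for the waist capsule)
    height: float      # e.g. the waist width
    radius: float      # e.g. half the waist thickness

@dataclass
class SphereCappedCone:
    end_a: tuple       # e.g. the knee joint point
    end_b: tuple       # e.g. the ankle joint point
    radius_a: float    # e.g. half the knee width
    radius_b: float    # e.g. half the ankle width

def waist_collider(waist_joint, waist_width, waist_thickness):
    """Waist capsule as in step a): horizontal, height = waist width,
    radius = waist thickness / 2, centred on the waist joint point."""
    return Capsule(waist_joint, (1.0, 0.0, 0.0), waist_width, waist_thickness / 2)

def calf_collider(knee_joint, ankle_joint, knee_width, ankle_width):
    """Calf collider as in step b): a sphere-capped cone between knee and ankle."""
    return SphereCappedCone(knee_joint, ankle_joint, knee_width / 2, ankle_width / 2)
```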
Specifically, in the step S8, the position of the clothes model is automatically matched and the model size is scaled, as follows.
a) The clothes model skeleton is bound to the human body model skeleton, with bones matched one to one by bone name.
b) The ratio of a characteristic width of the human body model to the corresponding characteristic width of the clothes model is calculated, and the width and thickness of the clothes model are scaled by the same proportion.
c) The scaling reference is chosen according to the type of clothes. For a shirt, the ratio of the human shoulder width to the shirt shoulder width is used to scale the shirt model; for trousers, the ratio of the human waist width to the trouser waist width is used to scale the trousers model.
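A minimal sketch of the scaling rule in steps b) and c); the feature names and the example measurements are assumptions for illustration only.

```python
def garment_scale(garment_type, body, garment):
    """Uniform width/thickness scale factor for a garment model.

    body / garment: dicts of assumed feature names ("shoulder_width",
    "waist_width") measured on the human body model and on the unscaled
    garment model respectively.
    """
    key = {"shirt": "shoulder_width", "trousers": "waist_width"}[garment_type]
    return body[key] / garment[key]

# Example: a shirt whose shoulders are 0.38 m wide fitted to a 0.44 m wide body.
scale = garment_scale("shirt", {"shoulder_width": 0.44}, {"shoulder_width": 0.38})
# The garment mesh is then scaled by `scale` in width and thickness, and its
# skeleton is bound one to one to the body skeleton by bone name (step a).
```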
Specifically, in the step S9, the clothes model vertices are divided into fixed nodes and non-fixed nodes, a spring mass point model is generated, and the dynamic effect of the cloth is simulated.
a) Depending on whether each position of the clothes model needs to be fixed, the corresponding cloth vertices are set as fixed vertices, and the remaining cloth vertices as non-fixed vertices; in addition, vertices that do not need to exhibit a cloth effect may also be set as fixed nodes. For example, for a coat the shoulder vertices are set as fixed vertices; for a skirt the waist vertices are set as fixed vertices.
b) The distance that each cloth vertex is allowed to travel from the position of its skin mesh vertex is set: zero for fixed nodes, non-zero for non-fixed nodes (the value can be adjusted according to the actual effect).
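The spring mass point model can be sketched as a simple position-based step in Python; this is one possible implementation under assumed data layouts (not the patent's exact solver), but it shows the role of the per-vertex travel limit from step b).

```python
import numpy as np

def cloth_step(pos, prev_pos, skin_pos, max_dist, springs, rest_len,
               dt=1 / 60, gravity=(0.0, -9.81, 0.0), iterations=4):
    """One step of a simple spring-mass cloth with per-vertex travel limits.

    pos, prev_pos, skin_pos: (N, 3) arrays of current, previous and skin-mesh
    vertex positions.  max_dist: (N,) allowed travel distance from the skin
    vertex (0 for fixed vertices).  springs: (M, 2) vertex index pairs with
    rest lengths rest_len: (M,).  Returns (new positions, new previous).
    """
    # Verlet-style integration under gravity.
    new_pos = pos + (pos - prev_pos) + np.asarray(gravity) * dt * dt
    for _ in range(iterations):
        # Satisfy the spring (distance) constraints between connected vertices.
        for (i, j), L in zip(springs, rest_len):
            d = new_pos[j] - new_pos[i]
            n = np.linalg.norm(d)
            if n > 1e-9:
                corr = 0.5 * (n - L) / n * d
                new_pos[i] += corr
                new_pos[j] -= corr
        # Clamp every vertex to within max_dist of its skin-mesh vertex;
        # fixed vertices (max_dist = 0) snap exactly onto the skin mesh.
        off = new_pos - skin_pos
        dist = np.linalg.norm(off, axis=1, keepdims=True)
        too_far = dist[:, 0] > max_dist
        scale = np.where(dist[too_far] > 1e-9,
                         max_dist[too_far, None] / dist[too_far], 0.0)
        new_pos[too_far] = skin_pos[too_far] + off[too_far] * scale
    return new_pos, pos
```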
Specifically, in the step S10, the changes of the human skeletal nodes are synchronized in real time and the collision effect between the cloth and the human body is achieved.
a) When the human body is recognized for the first time, the bone positions and rotations are initialized.
b) The bone positions are re-acquired, the rotation matrix of each bone relative to its initial state is calculated, and the bone rotation angles are updated.
c) Collision handling between the cloth and the collision bodies of the different parts of the human body is added, simulating physical interaction with the real person.
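Step b) can be illustrated with the following sketch, which builds the rotation matrix between a bone's initial and current direction using Rodrigues' formula; the choice of formula is an assumption for illustration, as the patent does not specify how the rotation matrix is computed.

```python
import numpy as np

def bone_rotation(initial_dir, current_dir):
    """Rotation matrix taking a bone's initial direction to its current one.

    Both inputs are 3-vectors from a parent joint to a child joint."""
    a = initial_dir / np.linalg.norm(initial_dir)
    b = current_dir / np.linalg.norm(current_dir)
    axis = np.cross(a, b)
    s = np.linalg.norm(axis)            # sine of the angle between the bones
    c = float(np.dot(a, b))             # cosine of the angle between the bones
    if s < 1e-9:
        if c > 0:
            return np.eye(3)            # the directions already coincide
        # Opposite directions: rotate 180 degrees about any perpendicular axis.
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-9:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        s, c = 0.0, -1.0
    k = axis / np.linalg.norm(axis)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    # Rodrigues' formula: R = I + sin(t) K + (1 - cos(t)) K^2.
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)
```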
Embodiment of Fig. 3: the AR-based mobile phone virtual fitting system comprises the following modules.
The image acquisition module acquires a color image and a depth image with the mobile phone depth camera.
The human body identification module identifies the human body, generates the human body Mask, and separates the human body from the background to obtain human body contour data.
The bone identification and tracking module identifies the bone coordinates of the human body and tracks their changes in real time; the bone coordinate data are smoothed to eliminate jitter caused by inaccurate device identification.
The human body modeling module uses the depth image acquired by the mobile phone depth camera, combined with the human body Mask, to construct the human body model with the NURBS method.
The gesture recognition control module recognizes the human body posture and controls the system interface through different gestures.
The clothes model management module loads the clothes model files and reads the parameters of the clothes models.
The clothes model adaptation module scales the clothes model and adjusts its position so as to match the size and position of the human body model.
The collision processing module generates the collision bodies of the different parts of the human body and simulates the collision effect between the clothes model and the human body.
The cloth simulation module simulates the cloth effect of the clothes model so that the clothes look more realistic.
The AR image superposition module superimposes the virtual model on the real color background image to produce the virtual-real combined effect.
Embodiment of Figs. 4-7: screenshots of the AR-based mobile phone virtual fitting software, specifically as follows.
As shown in Fig. 4, the initial screen displays a real-time image of the user and the surroundings.
As shown in Fig. 5, the user makes the selection gesture and enters the clothes selection interface; a left-slide gesture selects a garment, and the fitting gesture is then made to try on the selected garment.
As shown in Fig. 6, the user can strike various poses to view the try-on effect of the clothes.
As shown in Fig. 7, through the cloth simulation, the wrinkling and swinging effects of the clothes can be exhibited.
The invention uses the mobile phone depth camera to identify the human joint points and combines NURBS human body modeling, real-time synchronization of human skeleton animation and garment cloth simulation, which expands the application range of the virtual fitting system, improves the clothing display effect and increases the interaction with the real person. The invention can synchronize the picture to a large screen to realize a virtual fitting mirror platform, can shoot AR video through the mobile phone, and can be used to enhance the immersive experience of AR/VR games.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the invention; such changes and modifications are also intended to fall within the scope of the invention.

Claims (12)

1. An AR-based mobile phone virtual fitting system and method, comprising the following steps:
S1, acquiring a color image and a depth image in real time through the mobile phone depth camera;
S2, extracting a human body image from the color image and the depth image;
S3, acquiring human skeleton joint point data from the human body image;
S4, adjusting the bone coordinates and rotation of the human body model using the identified skeleton joint point data, and determining the boundaries of the different parts of the human body (head, trunk and limbs);
S5, fitting skin meshes of the different parts of the human body (head, trunk and limbs) from the depth image by the NURBS modeling method;
S6, extracting characteristic data of the human body from the skeleton joint points and the human body mesh data, and carrying out structural modeling of the human body;
S7, automatically generating collision bodies (head, trunk and limbs) whose sizes match the human body mesh, according to the characteristic data of the human body and the positions of the skeleton nodes;
S8, the user selects a clothes model to try on, the system automatically matches the position of the clothes model, and the clothes model is scaled according to the characteristic data of the human body;
S9, automatically generating the corresponding spring mass point models from the fixed nodes and non-fixed nodes of the clothes model, and simulating the dynamic effect of the cloth;
S10, synchronizing the changes of the human skeleton nodes in real time, and simulating the collision and swing effects of the clothes under different postures of the human body;
S11, taking the color image as the background and rendering the clothes model in front of it, realizing the effect of combining the real and virtual scenes;
S12, operating the interface through different gestures of the human body, such as sliding an arm left or right to change clothes and sliding up or down to select the clothes type;
S13, connecting the mobile phone to a large-screen device and synchronizing the real-time picture to it, realizing a realistic virtual fitting mirror experience.
2. The method according to claim 1, wherein in steps S1 to S3, the depth camera hardware of the mobile phone is relied on, which specifically provides the depth image, skeleton tracking and the human body Mask.
3. The method according to claim 1, wherein in step S4, taking the upright posture with both arms extended outward as the reference, the space coordinates of the skeleton nodes are converted into the plane coordinates of the depth image; the trunk region takes the horizontal line through the neck joint as its upper boundary, the horizontal line through the hip joint point as its lower boundary, the gap between the left side of the trunk and the left arm and the vertical line through the left shoulder joint point as its left boundary, and the gap between the right side of the trunk and the right arm and the vertical line through the right shoulder joint point as its right boundary.
4. The method according to claim 1, wherein in step S5, using the different human body parts divided in claim 3, the value point coordinates are obtained from the corresponding depth image, taking U value points at the transverse average interval and V value points at the longitudinal average interval; taking the hip center joint as the reference, the value points of the back of the human body are obtained by front-back symmetry, and a three-dimensional mesh model with 2 x U x V patches is generated.
5. The method according to claim 1, wherein in step S6, the width and thickness of the body part corresponding to each joint point (waist width, hip width, elbow width, etc.), together with the overall shoulder width and trunk height, are obtained from the positions of the human body joint points and the human body mesh data.
6. The method according to claim 1, wherein in step S7, capsule collision bodies (trunk, head) and cone-shaped collision bodies (limbs) corresponding to the joint points are generated from the human body characteristic parameters.
7. The method according to claim 1, wherein in step S8, the skeletal nodes of the clothes are automatically matched to the corresponding skeletal nodes of the human body; according to the characteristic parameters of the clothes (shoulder width, length, etc.), the clothes model is scaled in proportion to the characteristic parameters of the human body model so as to match the size of the human body model.
8. The method according to claim 1, wherein in step S9, corresponding fixed nodes (attached to the model surface, no cloth effect) and non-fixed nodes (cloth effect, able to collide with the collision bodies) are set for the different clothes classifications; for an upper garment, the part above the shoulders is generally set as fixed nodes and the rest as non-fixed nodes; for a lower garment, the part above the waist is generally set as fixed nodes and the rest as non-fixed nodes.
9. The method according to claim 1, wherein in step S10, when the human body is recognized for the first time, the positions of the skeletal nodes are initialized and the size of the virtual model is determined; the spatial positions of the human skeletal nodes are then acquired, and the rotation matrices of the skeletal nodes are calculated and updated, so that the virtual model is synchronized with the posture of the human body.
10. The method according to claim 1, wherein in step S11, the human body depth image and the clothes model are generated, the clothes model is superimposed on the human body background image, the coordinates of the model in the 3D scene are determined, the depth of the human body depth image is compared with the depth of the clothes model material, and, combined with the display of the color image background, the effect of the clothes and the human body color image mutually occluding each other is finally achieved.
11. The method according to claim 1, wherein in step S12, gesture control is implemented by analyzing the positional relationship of the skeletal nodes and identifying the corresponding gesture type.
12. The method according to claim 1, wherein in step S13, the dynamic effect of the clothes can be displayed more clearly by synchronizing the picture to the large-screen device.
CN202111494265.8A 2021-12-08 2021-12-08 AR-based mobile phone virtual fitting system and method Pending CN116246041A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111494265.8A CN116246041A (en) 2021-12-08 2021-12-08 AR-based mobile phone virtual fitting system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111494265.8A CN116246041A (en) 2021-12-08 2021-12-08 AR-based mobile phone virtual fitting system and method

Publications (1)

Publication Number Publication Date
CN116246041A true CN116246041A (en) 2023-06-09

Family

ID=86633652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111494265.8A Pending CN116246041A (en) 2021-12-08 2021-12-08 AR-based mobile phone virtual fitting system and method

Country Status (1)

Country Link
CN (1) CN116246041A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117292097A (en) * 2023-11-23 2023-12-26 南昌世弘泛亚科技股份有限公司 AR try-on interactive experience method and system
CN117292097B (en) * 2023-11-23 2024-02-02 南昌世弘泛亚科技股份有限公司 AR try-on interactive experience method and system


Legal Events

Date Code Title Description
PB01 Publication