CN105404392B - Virtual wearing method and system based on a monocular camera - Google Patents
- Publication number
- CN105404392B CN105404392B CN201510737831.1A CN201510737831A CN105404392B CN 105404392 B CN105404392 B CN 105404392B CN 201510737831 A CN201510737831 A CN 201510737831A CN 105404392 B CN105404392 B CN 105404392B
- Authority
- CN
- China
- Prior art keywords
- face
- model
- human
- monocular camera
- human body
- Prior art date
- 2015-11-03
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
Abstract
The present invention proposes a virtual wearing method and system based on a monocular camera. The method includes: capturing a scene depth image with a monocular camera, performing human skeleton positioning on the scene depth image, and obtaining the head position information of the user; comparing human facial features according to the obtained head position information to judge whether a face is present and, if so, determining the face region by combining a preset face shape model and a preset face texture model, and extracting a face feature vector within the face region; performing real-time 3D rendering on the face feature vector and drawing the corresponding 3D wearing model; and displaying to the user the virtual-real composite image formed by superimposing the human body wearing the 3D model with the virtual scene. The present invention collects depth scene information including the human body image with a monocular camera and achieves cross-platform real-time rendering through human skeleton positioning, face recognition, and 3D rendering techniques.
Description
Technical field
The present invention relates to the field of image processing and virtual wearing technology, and in particular to a virtual wearing method and system based on a monocular camera.
Background technology
At present, domestic technology companies working on virtual fitting all implement PC (personal computer) virtual try-on technology based on Microsoft's Kinect. However, people use mobile terminals ever more frequently, and no virtual wearing technology suitable for mobile terminals has yet appeared.
In addition, the face recognition techniques used in existing virtual wearing rely on conventional facial landmark fitting algorithms such as ASM (Active Shape Model), which solve for an optimal solution over the global landmark set, so some local landmarks are positioned inaccurately or wrongly. Concretely, the rough locations of the facial features can be identified, but the judgement of some facial boundaries deviates from the input. Traditional methods usually improve the precision of face-edge fitting by increasing the number of landmarks, which has two problems: first, the improvement this brings is limited; second, it slows down recognition and raises training cost.
Moreover, the traditional 2D-to-3D cutting method renders the 3D model to a texture, i.e. rasterizes the 3D model into a 2D grid to form a 2D texture, and then performs an alpha test between this texture and a test texture formed by a facial-structure algorithm to achieve the cutting. This conventional approach is complex in design and inefficient, requires two or more render passes, and produces texture aliasing of varying severity.
Summary of the invention
The present invention aims to solve at least one of the technical deficiencies described above.
To this end, an object of the present invention is to propose a virtual wearing method and system based on a monocular camera that can collect depth scene information including the human body image with a monocular camera and achieve cross-platform real-time rendering through human skeleton positioning, face recognition, and 3D rendering techniques.
To achieve these goals, an embodiment of one aspect of the present invention provides a virtual wearing method based on a monocular camera, including the following steps:
Step S1: capture a scene depth image with a monocular camera, perform human skeleton positioning on the scene depth image, and obtain the head position information of the user;
Step S2: according to the obtained head position information, compare human facial features to judge whether a face is present and, if so, determine the face region by combining a preset face shape model and a preset face texture model, and extract a face feature vector within the face region;
Step S3: perform real-time 3D rendering on the face feature vector and draw the corresponding 3D wearing model;
Step S4: display to the user the virtual-real composite image formed by superimposing the human body wearing the 3D model with the virtual scene.
Further, in step S1, performing human skeleton positioning on the scene depth image includes the following steps:
extract a single-frame depth image from the scene depth image, match the single-frame depth image against a preset human skeleton feature set and, upon a successful match, compute the human skeleton features and the body center vector;
obtain the head position information of the user from the human skeleton features and the body center vector.
Further, in step S2, the human facial features include skin color and facial geometric features.
Further, in step S2, determining the face region includes the following steps: match the face image against the preset face shape model and face texture model, and obtain facial landmark points to determine the face region;
compute from the face region the three-dimensional vector of displacement, rotation, and scale change of the facial landmark points across pose changes as the face feature vector.
Further, in step S3, performing real-time 3D rendering on the face feature vector includes the following steps:
according to the obtained face region, adjust in sequence the world coordinate system, the view matrix, and the orthographic projection matrix, and draw a fully transparent occlusion model;
adjust the alpha blending factors and draw the 3D wearing model.
The present invention also proposes a virtual wearing system based on a monocular camera, including a monocular camera, a control device, and a display device. The monocular camera captures a scene depth image and sends the scene depth image to the control device. The control device performs human skeleton positioning on the scene depth image to obtain the head position information of the user, compares human facial features according to the head position information to judge whether a face is present and, if so, determines the face region by combining a preset face shape model and face texture model, extracts a face feature vector within the face region, then performs real-time 3D rendering on the face feature vector and draws the corresponding 3D wearing model. The display device displays to the user the virtual-real composite image formed by superimposing the human body wearing the 3D model with the virtual scene.
Further, the control device extracts a single-frame depth image from the scene depth image, matches the single-frame depth image against a preset human skeleton feature set and, upon a successful match, computes the human skeleton features and the body center vector, from which it obtains the head position information of the user.
Further, the human facial features include skin color and facial geometric features.
Further, the control device matches the face image against the preset face shape model and face texture model, obtains facial landmark points to determine the face region, and computes from the face region the three-dimensional vector of displacement, rotation, and scale change of the facial landmark points across pose changes as the face feature vector.
Further, the control device adjusts in sequence, according to the obtained face region, the world coordinate system, the view matrix, and the orthographic projection matrix, draws a fully transparent occlusion model, adjusts the alpha blending factors, and draws the 3D wearing model.
The virtual wearing method and system based on a monocular camera according to the embodiments of the present invention collect depth scene information including the human body image with a monocular camera and achieve cross-platform real-time rendering through human skeleton positioning, face recognition, and 3D rendering techniques. The human body and the virtual scene are superimposed to form a virtual-real composite image, improving the user's experience when shopping for apparel online: when choosing apparel products online, the user can try on the wearing effect as if looking into a mirror in reality and can clearly see whether the color and style suit them, solving the problem in online shopping of not being able to see how the goods look on oneself. The two problems of the conventional landmark-fitting approach described in the background, limited improvement in accuracy and slower recognition with higher training cost, are also avoided. Moreover, the present invention cuts the 3D model with the 2D image to fully realize self-occlusion when displaying the wearing effect; the whole process is simple and efficient, images in a single pass, and produces no texture aliasing.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become apparent from the description or be learned by practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of a virtual wearing method based on a monocular camera according to an embodiment of the present invention;
Fig. 2 is a flowchart of human body recognition according to an embodiment of the present invention;
Fig. 3 is a flowchart of human skeleton screening and face positioning according to an embodiment of the present invention;
Fig. 4 is a flowchart of face recognition according to an embodiment of the present invention;
Fig. 5 is a flowchart of precise face fitting according to an embodiment of the present invention;
Fig. 6 is a flowchart of cutting a 3D model with a 2D image according to an embodiment of the present invention;
Fig. 7 is a flowchart of 3D simulated attitude control and rendering according to an embodiment of the present invention;
Fig. 8 is a flowchart of virtual try-on under a monocular camera according to an embodiment of the present invention;
Fig. 9 is a detailed flowchart of the virtual wearing method based on a monocular camera according to one embodiment of the present invention;
Fig. 10 is a detailed flowchart of the virtual wearing method based on a monocular camera according to another embodiment of the present invention;
Fig. 11 is a structural diagram of a virtual wearing system based on a monocular camera according to an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended to explain the present invention, and are not to be construed as limiting the present invention.
As shown in Fig. 1, the virtual wearing method based on a monocular camera of the embodiment of the present invention includes the following steps.
Step S1: capture a scene depth image with a monocular camera, perform human skeleton positioning on the scene depth image, and obtain the head position information of the user.
In an embodiment of the present invention, human skeleton positioning on the scene depth image includes the following steps: extract a single-frame depth image from the scene depth image, match it against a preset human skeleton feature set and, upon a successful match, compute the human skeleton features and the body center vector; then obtain the head position information of the user from the human skeleton features and the body center vector.
Fig. 2 is a flowchart of human body recognition according to an embodiment of the present invention.
Step S201: start recording.
Step S202: capture the depth image information. The scene depth image is obtained from the footage recorded by the monocular camera.
Step S203: stop recording.
Step S204: obtain the depth features. A single-frame depth image is extracted from the scene depth image and its depth features are obtained.
Step S205: input the depth feature data set, which is the human skeleton feature set. Specifically, several sets of depth-image local gradient feature tables for different bodies and postures are predefined; in formulating these tables, each of the 20 key skeleton points of the human body is described by a 28-dimensional feature vector.
Step S206: start feature matching. The depth features are matched against the skeleton features (a sketch follows this list).
Step S207: finish feature matching.
Step S208: judge whether the match succeeded; if so, perform steps S209 and S210, otherwise return to step S204.
Step S209: obtain the body center vector.
Step S210: obtain the skeleton points. The extracted single-frame depth image is looked up against subsets of the tables, thereby computing the human skeleton features, including the head, and the body center vector.
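The patent does not disclose the matching algorithm itself. Purely as an illustration of steps S204 to S210, the following Python sketch scores one frame's 20 per-joint, 28-dimensional gradient descriptors against a preset pose table by cosine similarity; the array shapes, the threshold, and all names are assumptions, not the patented implementation.

```python
import numpy as np

# Assumed layout: per stored pose, a 28-dimensional local gradient
# descriptor for each of the 20 skeleton key points described above.
NUM_JOINTS, FEATURE_DIM = 20, 28

def match_skeleton(frame_features, feature_set, threshold=0.8):
    """Match one frame's joint descriptors against the preset pose set.

    frame_features: (20, 28) array extracted from a single depth frame.
    feature_set:    (P, 20, 28) array of P predefined poses.
    Returns (best_pose_index, per_joint_similarity), or None on failure.
    """
    f = frame_features / np.linalg.norm(frame_features, axis=1, keepdims=True)
    s = feature_set / np.linalg.norm(feature_set, axis=2, keepdims=True)
    sim = np.einsum('jd,pjd->pj', f, s)   # (P, 20) per-joint cosine similarity
    score = sim.mean(axis=1)              # average over the 20 joints
    best = int(np.argmax(score))
    if score[best] < threshold:           # no pose matched well enough
        return None
    return best, sim[best]

def body_center_vector(joint_positions):
    """Body center vector, taken here simply as the centroid of the joints."""
    return joint_positions.mean(axis=0)
```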
Fig. 3 is a flowchart of human skeleton screening and face positioning according to an embodiment of the present invention.
Step S301: obtain successive frames, i.e. consecutive frames of the images captured by the monocular camera.
Step S302: obtain the background frame.
Step S303: compute the inter-frame difference between the successive frames and the background frame.
Step S304: binarization.
Step S305: shape (morphological) filtering.
Step S306: perform connectivity analysis.
Step S307: perform body classification.
Step S308: obtain the head region (see the sketch below).
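A minimal OpenCV sketch of this Fig. 3 pipeline, assuming illustrative thresholds and a crude top-of-box head heuristic that the patent does not specify:

```python
import cv2

def locate_body_and_head(frame, background, diff_thresh=25, min_area=2000):
    """Frame difference -> binarization -> shape filtering -> connectivity
    analysis, following steps S301 to S308. Returns (body_box, head_box)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    bg = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, bg)                           # inter-frame difference
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # shape filtering
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    bodies = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > min_area]
    if not bodies:                                         # no body-sized blob
        return None
    i = max(bodies, key=lambda k: stats[k, cv2.CC_STAT_AREA])
    x, y, w, h = stats[i, :4]
    head = (x, y, w, h // 5)    # assumed heuristic: top fifth of the body box
    return (x, y, w, h), head
```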
Since an external video capture device (the monocular camera used in the present invention) itself takes a certain amount of time to capture and transfer, and since the product must offer a good user experience, high-performance algorithms are required to minimize the latency introduced by computation. In face positioning, moreover, the face may occupy only a very small area of the captured source, so traditional methods, whether neural-network pattern recognition or skin-color models that detect by the relative distribution rules of skin color in a color space, all incur a large computational cost.
Because human skeleton features, compared with face data, cover a relatively large sample area and are simpler to detect, the present invention computes the head position accurately by detecting the human skeleton, which greatly increases detection speed and simplifies the detection process. With the above method, on a host with an Intel i5 2.3 GHz CPU and an integrated graphics card with 256 MB of video memory, face positioning takes 5 ms to 8 ms, i.e. 125 to 200 still frames per second, versus the 60 ms to 160 ms of current mainstream detection methods, so detection efficiency is higher.
Step S2: according to the obtained head position information, compare human facial features to judge whether a face is present and, if so, determine the face region by combining the preset face shape model and face texture model, and extract a face feature vector within the face region.
In one embodiment of the present invention, the human facial features include skin color and facial geometric features.
Fig. 4 is a flowchart of face recognition according to an embodiment of the present invention.
Step S401: start recording.
Step S402: obtain RGB pictures.
Step S403: stop recording.
Step S404: perform skeleton recognition and obtain the predetermined head position.
Step S405: obtain the features at the predetermined position.
Step S406: judge whether the predetermined position contains facial information. If the head position information was obtained during skeleton positioning, the human facial features are compared directly; otherwise several sets of preprocessed human face templates are established and computed against the single-frame bitmap captured from the ordinary camera, and the matching degree is tested to determine whether a face, i.e. facial information, is present.
Step S407: input the facial feature database.
Step S408: input the face shape model.
Step S409: input the face texture model.
Step S410: perform model matching.
Step S411: obtain the facial landmark points.
Specifically, determining the face region includes the following steps. First, the face image is matched against the preset face shape model and face texture model, and facial landmark points are obtained to determine the face region. Then the result is further confirmed against the preset human facial features, the computed facial features are extracted to generate the corresponding facial structure rules, and from these rules the three-dimensional vector of displacement, rotation, and scale change of the facial structure key points across pose changes is computed as the face feature vector.
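The patent defines the face feature vector as the displacement, rotation, and scale change of the landmarks but does not give the computation. A sketch of one standard way to recover such a triple from corresponding landmark sets is a least-squares similarity (Procrustes) fit; the in-plane 2D simplification and the function names below are assumptions.

```python
import numpy as np

def face_pose_vector(landmarks, reference):
    """Fit landmarks ~ scale * R @ reference + t in the least-squares sense.

    landmarks, reference: (N, 2) arrays of corresponding landmark points.
    Returns (displacement, roll_angle, scale)."""
    mu_l, mu_r = landmarks.mean(axis=0), reference.mean(axis=0)
    L, R = landmarks - mu_l, reference - mu_r
    U, S, Vt = np.linalg.svd(L.T @ R)         # 2x2 cross-covariance
    rot = U @ Vt                              # optimal in-plane rotation
    # (reflection correction via det(rot) omitted for brevity)
    scale = S.sum() / (R ** 2).sum()          # least-squares scale factor
    angle = np.arctan2(rot[1, 0], rot[0, 0])  # roll angle in radians
    displacement = mu_l - mu_r                # centroid translation in pixels
    return displacement, angle, scale
```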
Fig. 5 is a flowchart of precisely fitting the facial features by iterating the local landmark points according to an embodiment of the present invention.
Step S501: obtain the recognized facial landmark points.
Step S502: extract the target facial landmark points.
Step S503: obtain the gradient features near each landmark point.
Step S504: obtain the boundary line between the facial features and the skin.
Step S505: offset the landmark point slightly toward the boundary line.
Step S506: judge the magnitude of the landmark point's position change after the iteration; if it is greater than a threshold, return to step S503, otherwise perform step S507.
Step S507: obtain the landmark point positions after iteration.
By performing a second iteration over the global landmark points of the face region, the present invention greatly improves the precision of face boundary fitting. And since the second iteration runs in the same thread as the global feature point recognition, it hardly affects recognition efficiency. During the second iteration, starting from the points corresponding to the target face landmarks, the iteration keeps searching until the set of datum points representing the face boundary reaches the peaks of the input image's gradient map while the global shape still satisfies the shape constraint.
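A sketch of this second-pass refinement under the stated flow: sample the gradient magnitude along each landmark's normal, treat the peak as the feature/skin boundary, shift the landmark a small step toward it, and stop once the largest per-iteration shift drops below a threshold. The normals, step size, and search range are assumptions, and the global shape constraint mentioned above is omitted for brevity.

```python
import cv2
import numpy as np

def refine_landmarks(gray, points, normals, search=10, step=0.5,
                     eps=0.5, max_iter=20):
    """Iteratively drift landmarks toward the strongest nearby gradient.

    gray: grayscale image; points, normals: (N, 2) float arrays."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)                   # gradient magnitude map
    pts = points.astype(np.float32).copy()
    for _ in range(max_iter):
        moved = 0.0
        for i, (p, n) in enumerate(zip(pts, normals)):
            ts = np.arange(-search, search + 1)   # offsets along the normal
            samples = p[None, :] + ts[:, None] * n[None, :]
            xs = np.clip(samples[:, 0].astype(int), 0, gray.shape[1] - 1)
            ys = np.clip(samples[:, 1].astype(int), 0, gray.shape[0] - 1)
            t_best = ts[np.argmax(mag[ys, xs])]   # boundary = gradient peak
            shift = step * t_best                 # small offset only
            pts[i] = p + shift * n
            moved = max(moved, abs(shift))
        if moved < eps:                           # change below threshold
            break
    return pts
```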
Fig. 8 is a flowchart of virtual try-on under a monocular camera according to an embodiment of the present invention.
Step S801: judge whether a face is currently recognized; if so, perform step S802, otherwise perform step S803.
Step S802: obtain the image features near the landmark points.
Step S803: initialize the current face shape position.
Step S804: update the recognized face contours.
Step S805: input the shape generator.
Step S806: compute the matching degree of the current landmark points.
Step S807: judge the difference from the previous matching degree; if the difference is too large, return to step S802, otherwise perform step S808.
Step S808: obtain the gradient features near each landmark point.
Step S809: obtain the boundary line between the facial features and the skin.
Step S810: offset the landmark point slightly toward the boundary line.
Step S811: judge the magnitude of the landmark point's position change after the iteration; if it is greater than a threshold, perform step S808, otherwise perform step S812.
Step S812: obtain the facial landmark points and the shape set.
Step S813: try on and render.
Step S3: perform real-time 3D rendering on the obtained face feature vector and draw the corresponding 3D wearing model.
Specifically, according to the obtained face region, the world coordinate system, the view matrix, and the orthographic projection matrix are first adjusted in sequence, and a fully transparent occlusion model is drawn. Then the alpha blending factors are adjusted and the 3D wearing model is drawn.
Since the human body or the human face is output by direct imaging of the RGB camera, it has no occlusion capability reflecting actual occlusion relations. In a practical application scene, for example when the user wears a garment, suppose the back collar sits far above the front neckline: the 3D garment model by itself cannot realize self-occlusion, so the 3D model floats on the human body image and the illusion breaks, i.e. the back collar appears in front of the person's neck, and the overall effect looks very fake.
Likewise, when the user tries on glasses and turns the head to the left by a certain angle, the temple of the 3D glasses model on the left side should be hidden by the face and not be drawn; in the actual drawing process, however, that temple would still exist and float over the face, which does not meet the actual design requirement and application effect.
Fig. 6 is a flowchart of cutting a 3D model with a 2D image according to an embodiment of the present invention. The method of cutting the 3D model with the 2D image fully realizes self-occlusion when displaying the wearing effect.
Step S601: according to the preset head model and material texture model, enable alpha blending operations.
Step S602: transform the three-dimensional matrices.
Step S603: draw the head model.
Step S604: draw the glasses model.
Step S605: disable alpha blending.
The present invention employs a synchronously controlled, preset fully transparent 3D head model. This model is not used for actual drawing; it serves only the alpha blending operations for the effect presented by the model. The model is driven by the head motion pose and supports synchronized scaling up and down, rotation, flexion-extension, and displacement. This method is simple in design and efficient, images in a single pass, and produces no texture aliasing.
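One way to obtain a depth-only transparent occluder with ordinary alpha blending, sketched with PyOpenGL (an assumed binding; the patent names no graphics API): blend factors GL_ZERO/GL_ONE leave the framebuffer color untouched while the head proxy still writes depth, so the glasses drawn afterwards are clipped by the tracked head in a single pass.

```python
from OpenGL.GL import (glEnable, glDisable, glBlendFunc, GL_BLEND,
                       GL_DEPTH_TEST, GL_ZERO, GL_ONE,
                       GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)

def draw_with_self_occlusion(draw_head_proxy, draw_glasses):
    """Assumes a current GL context; the two callables issue the geometry."""
    glEnable(GL_DEPTH_TEST)
    glEnable(GL_BLEND)
    glBlendFunc(GL_ZERO, GL_ONE)   # fully transparent: color kept, depth written
    draw_head_proxy()              # occluder follows the tracked head pose
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
    draw_glasses()                 # fragments behind the head now fail depth
    glDisable(GL_BLEND)            # restore state, matching step S605
```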
Fig. 7 is a flowchart of 3D simulated attitude control and rendering according to an embodiment of the present invention.
Step S701: initialize rendering.
Step S702: draw the base map.
Step S703: adjust the world coordinate system.
Step S704: adjust the view matrix.
Step S705: adjust the orthographic projection matrix.
Step S706: draw the fully transparent occlusion model.
Step S707: adjust the alpha blending factors and method, and draw the 3D wearing model.
According to the three-dimensional transformation vector of the facial structure key points obtained by the face recognition part, the 3D wearing model and the models having occlusion relations are controlled to perform displacement, rotation, and zoom based on the world coordinate system, and the projection of the 3D model onto the screen is computed accordingly with the orthographic projection matrix. Then the fully transparent occlusion model is drawn, the blending factors for cutting are set, and the 3D wearing model is drawn.
Fig. 9 is a detailed flowchart of the virtual wearing method based on a monocular camera according to one embodiment of the present invention.
Step S901: start the depth camera (Kinect/Xtion).
Step S902: obtain the depth data.
Step S903: position the person's skeleton.
Step S904: obtain the head position.
Step S905: start the RGB camera.
Step S906: obtain the color imaging data.
Step S907: perform Gabor wavelet convolution on the color imaging data (see the sketch after this list).
Step S908: compare against the mask.
Step S909: input the skin color and facial geometric features.
Step S910: determine the face region.
Step S911: initialize rendering.
Step S912: draw the base map.
Step S913: adjust the world coordinate system.
Step S914: adjust the view matrix.
Step S915: adjust the orthographic projection matrix.
Step S916: draw the fully transparent occlusion model.
Step S917: adjust the alpha blending factors and method.
Step S918: draw the wearing model.
Step S919: output the result.
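Step S907's Gabor wavelet convolution could look like the following OpenCV sketch, producing a bank of texture responses for the subsequent mask comparison; the scales, orientations, and remaining filter parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def gabor_face_features(gray, scales=(7, 11, 15), orientations=4):
    """Convolve a grayscale image with a small Gabor filter bank.

    Returns a (len(scales) * orientations, H, W) stack of responses."""
    responses = []
    for k in scales:
        for i in range(orientations):
            theta = i * np.pi / orientations       # filter orientation
            kern = cv2.getGaborKernel((k, k), sigma=k / 3.0, theta=theta,
                                      lambd=k / 2.0, gamma=0.5, psi=0)
            responses.append(cv2.filter2D(gray, cv2.CV_32F, kern))
    return np.stack(responses, axis=0)
```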
Step S4: display to the user the virtual-real composite image formed by superimposing the human body wearing the 3D model with the virtual scene.
The virtual scene may be a clothing scene, an accessories scene such as glasses, a hairstyle, a beauty scene, and the like. By superimposing these virtual scenes with the human body, a virtual-real composite image is formed and shown to the user on a display screen, so that the user can intuitively see how the goods look on themselves; this can be applied to online shopping occasions.
Fig. 10 is a detailed flowchart of the virtual wearing method based on a monocular camera according to another embodiment of the present invention.
Step S1001: open the camera.
Step S1002: obtain RGB/depth maps.
Step S1003: close the camera.
Step S1004: capture the body image/motion.
Step S1005: human skeleton recognition.
Step S1006: obtain the face/eye region position.
Step S1007: extract texture features based on skin color.
Step S1008: extract facial morphological features based on the landmark points.
Step S1009: the face feature vector.
Step S1010: real-time 3D rendering.
Step S1011: draw the wearing model.
Step S1012: display the virtual-real composite image.
The present invention captures the body image and motion with a monocular camera, performs human skeleton recognition, determines the face and eye regions, extracts the face feature vector using skin color or the human facial geometric model, performs real-time 3D rendering, draws the wearing model, and finally displays on a display screen the virtual-real composite image of the human body plus the virtual scene. With this system, the user can freely try on various glasses, clothes, and the like in the virtual world, finally achieving an effect similar to looking into a mirror.
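Tying the steps together, a skeletal capture-and-display loop in the spirit of Fig. 10; the camera index, window handling, and the commented processing hooks are assumptions, and the depth branch is elided because a plain webcam yields only RGB frames.

```python
import cv2

def try_on_loop(camera_index=0):
    """Open the camera, process each frame, display the composite image."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # 1. locate the face region (skin color / geometric comparison)
            # 2. fit and refine landmarks -> face feature vector
            # 3. update world/view/orthographic matrices, draw the occluder
            #    and the wearing model over the frame
            cv2.imshow('virtual try-on', frame)
            if cv2.waitKey(1) & 0xFF == 27:   # ESC quits
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```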
As shown in Fig. 11, the virtual wearing system based on a monocular camera of the embodiment of the present invention includes: a monocular camera 1, a control device 2, and a display device 3.
Specifically, the monocular camera 1 captures a scene depth image and sends the scene depth image to the control device 2.
The control device 2 performs human skeleton positioning on the scene depth image to obtain the head position information of the user. Specifically, the control device 2 extracts a single-frame depth image from the scene depth image, matches the single-frame depth image against the preset human skeleton feature set and, upon a successful match, computes the human skeleton features and the body center vector, from which it obtains the head position information of the user.
The control device 2 compares human facial features according to the head position information to judge whether a face is present and, if so, determines the face region by combining the preset face shape model and face texture model.
In one embodiment of the present invention, the human facial features include skin color and facial geometric features.
Specifically, the control device 2 matches the face image against the preset face shape model and face texture model, and obtains facial landmark points to determine the face region. From the face region it computes the three-dimensional vector of displacement, rotation, and scale change of the facial landmark points across pose changes as the face feature vector.
A face feature vector is extracted within the face region; real-time 3D rendering is then performed on the face feature vector and the corresponding 3D wearing model is drawn. Specifically, the control device 2 adjusts in sequence, according to the obtained face region, the world coordinate system, the view matrix, and the orthographic projection matrix, draws the fully transparent occlusion model, adjusts the alpha blending factors, and draws the 3D wearing model.
The display device 3 displays to the user the virtual-real composite image formed by superimposing the human body wearing the 3D model with the virtual scene.
The virtual wearing method and system based on a monocular camera according to the embodiments of the present invention collect depth scene information including the human body image with a monocular camera and achieve cross-platform real-time rendering through human skeleton positioning, face recognition, and 3D rendering techniques. The human body and the virtual scene are superimposed to form a virtual-real composite image, improving the user's experience when shopping for apparel online: when choosing apparel products online, the user can try on the wearing effect as if looking into a mirror in reality and can clearly see whether the color and style suit them, solving the problem in online shopping of not being able to see how the goods look on oneself. The two problems of the conventional landmark-fitting approach described in the background, limited improvement in accuracy and slower recognition with higher training cost, are also avoided. Moreover, the present invention cuts the 3D model with the 2D image to fully realize self-occlusion when displaying the wearing effect; the whole process is simple and efficient, images in a single pass, and produces no texture aliasing.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and cannot be construed as limiting the present invention; those of ordinary skill in the art may change, modify, replace, and vary the above embodiments within the scope of the present invention without departing from the principle and purpose of the present invention. The scope of the present invention is defined by the appended claims and their equivalents.
Claims (10)
1. A virtual wearing method based on a monocular camera, characterized by including the following steps:
step S1: capturing a scene depth image with a monocular camera, performing human skeleton positioning on the scene depth image, and obtaining head position information of a user;
step S2: according to the obtained head position information, comparing human facial features to judge whether a face is present and, if so, determining a face region by combining a preset face shape model and a preset face texture model, and extracting a face feature vector within the face region;
step S3: performing real-time 3D rendering on the face feature vector and drawing a corresponding 3D wearing model;
step S4: displaying to the user a virtual-real composite image formed by superimposing the human body wearing the 3D model with a virtual scene.
2. The virtual wearing method based on a monocular camera as claimed in claim 1, characterized in that, in step S1, performing human skeleton positioning on the scene depth image includes the following steps:
extracting a single-frame depth image from the scene depth image, matching the single-frame depth image against a preset human skeleton feature set and, upon a successful match, computing human skeleton features and a body center vector;
obtaining the head position information of the user from the human skeleton features and the body center vector.
3. The virtual wearing method based on a monocular camera as claimed in claim 1, characterized in that, in step S2, the human facial features include skin color and facial geometric features.
4. The virtual wearing method based on a monocular camera as claimed in claim 1, characterized in that, in step S2, determining the face region includes the following steps: matching a face image against the preset face shape model and face texture model, and obtaining facial landmark points to determine the face region;
computing from the face region the three-dimensional vector of displacement, rotation, and scale change of the facial landmark points across pose changes as the face feature vector.
5. The virtual wearing method based on a monocular camera as claimed in claim 1, characterized in that, in step S3, performing real-time 3D rendering on the face feature vector includes the following steps:
according to the obtained face region, adjusting in sequence a world coordinate system, a view matrix, and an orthographic projection matrix, and drawing a fully transparent occlusion model;
adjusting alpha blending factors and drawing the 3D wearing model.
6. A virtual wearing system based on a monocular camera, characterized by including: a monocular camera, a control device, and a display device, wherein
the monocular camera is configured to capture a scene depth image and send the scene depth image to the control device;
the control device is configured to perform human skeleton positioning on the scene depth image to obtain head position information of a user, compare human facial features according to the head position information to judge whether a face is present and, if so, determine a face region by combining a preset face shape model and a preset face texture model, extract a face feature vector within the face region, then perform real-time 3D rendering on the face feature vector and draw a corresponding 3D wearing model;
the display device is configured to display to the user a virtual-real composite image formed by superimposing the human body wearing the 3D model with a virtual scene.
7. The virtual wearing system based on a monocular camera as claimed in claim 6, characterized in that the control device extracts a single-frame depth image from the scene depth image, matches the single-frame depth image against a preset human skeleton feature set and, upon a successful match, computes human skeleton features and a body center vector, from which it obtains the head position information of the user.
8. The virtual wearing system based on a monocular camera as claimed in claim 6, characterized in that the human facial features include skin color and facial geometric features.
9. The virtual wearing system based on a monocular camera as claimed in claim 6, characterized in that the control device is configured to match a face image against the preset face shape model and face texture model, obtain facial landmark points to determine the face region, and compute from the face region the three-dimensional vector of displacement, rotation, and scale change of the facial landmark points across pose changes as the face feature vector.
10. The virtual wearing system based on a monocular camera as claimed in claim 6, characterized in that the control device is configured to adjust in sequence, according to the obtained face region, a world coordinate system, a view matrix, and an orthographic projection matrix, draw a fully transparent occlusion model, adjust alpha blending factors, and draw the 3D wearing model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510737831.1A (CN105404392B) | 2015-11-03 | 2015-11-03 | Virtual wearing method and system based on a monocular camera
Publications (2)
Publication Number | Publication Date |
---|---|
CN105404392A CN105404392A (en) | 2016-03-16 |
CN105404392B (en) | 2018-04-20
Family
ID=55469917
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510737831.1A (CN105404392B, active) | Virtual wearing method and system based on a monocular camera | 2015-11-03 | 2015-11-03
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105404392B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102156810A (en) * | 2011-03-30 | 2011-08-17 | 北京触角科技有限公司 | Augmented reality real-time virtual fitting system and method thereof |
CN103106604A (en) * | 2013-01-23 | 2013-05-15 | 东华大学 | Three dimensional (3D) virtual fitting method based on somatosensory technology |
CN104898832A (en) * | 2015-05-13 | 2015-09-09 | 深圳彼爱其视觉科技有限公司 | Intelligent terminal based 3D real-time glass fitting method |
Non-Patent Citations (1)
Title |
---|
Li Jun, Zhang Minmin, Pan Zhigeng. Virtual fitting with person replacement mode. Journal of Computer-Aided Design & Computer Graphics, vol. 27, no. 9, Sep. 30, 2015 (full text). *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |