CN103489219A - 3D hair style effect simulation system based on depth image analysis - Google Patents

3D hair style effect simulation system based on depth image analysis

Info

Publication number
CN103489219A
CN103489219A (application CN201310430905.8A; granted as CN103489219B)
Authority
CN
China
Prior art keywords
hair style
face
rectangle
model
image analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310430905.8A
Other languages
Chinese (zh)
Other versions
CN103489219B (en)
Inventor
黄翰 (Huang Han)
李斌 (Li Bin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201310430905.8A priority Critical patent/CN103489219B/en
Publication of CN103489219A publication Critical patent/CN103489219A/en
Application granted granted Critical
Publication of CN103489219B publication Critical patent/CN103489219B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides a 3D hairstyle effect simulation system based on depth image analysis. According to the hairstyle and color selected by a user, a 3D hairstyle simulation method based on depth image analysis performs the simulation, and the user can view the result in real time. Because the system uses a 3D model, it achieves a stronger sense of realism than traditional hairstyle simulation systems. In addition, the system acquires the orientation of the user's face in real time, so the simulated hairstyle follows the face in real time.

Description

3D hairstyle effect simulation system based on depth image analysis
Technical field
The present invention relates to the field of computer augmented reality, and in particular to a 3D hairstyle effect simulation method based on depth image analysis.
Background technology
When a hair salon designs a hairstyle for a client, the most important practical task is to understand the client's wishes accurately. Identifying those wishes quickly and precisely, however, is not easy. For the client, trying a new hairstyle is like placing a bet: the result is known only after the haircut is finished.
To address these problems, hairstyle design systems have been developed both domestically and abroad. By simulating hairstyle combinations on a computer, such systems digitize the client's requirements, helping the hairdresser understand them clearly so that both parties can reach agreement. However, most current systems restyle the client by overlaying 2D pictures. They commonly suffer from poor matching between face and hairstyle, poor operability, limited variety, and a weak sense of realism, so they cannot satisfy clients' requirements and have considerable limitations in practice.
Compared with conventional techniques, the present method has the following advantages:
1. It uses 3D hairstyle models, which give a stronger sense of realism.
2. The same hairstyle can be rendered in many different colors.
3. The simulated hairstyle is displayed in real time.
4. The effect can be viewed from multiple angles.
Summary of the invention
To address the deficiencies of current hairstyle design systems, the present invention provides a 3D hairstyle effect simulation system based on depth image analysis. The object of the invention is to display the simulated hairstyle in real time and realistically, letting the user observe the effect of a new hairstyle from multiple angles, as if looking into a mirror. The specific technical scheme is as follows.
The 3D hairstyle effect simulation system based on depth image analysis comprises the following steps:
(a) according to the hairstyle icon selected by the user, read the corresponding hairstyle model file in IVE format from the local disk;
(b) obtain the RGB color value entered by the user and apply the hairstyle color conversion;
(c) obtain the input of the Kinect camera and analyze the size, position, and orientation of the face in the video frame;
(d) in three-dimensional space, translate, rotate, and scale the hairstyle model according to the face position obtained in step (c), and display it superimposed on the video stream obtained from the camera, realizing the hairstyle simulation effect. In this three-dimensional space, the positive x axis points horizontally to the right, the positive y axis points vertically upward, and the positive z axis is normal to the screen, pointing toward the user.
In the above 3D hairstyle effect simulation method based on depth image analysis, step (d) comprises the following steps:
(d-1) attach each frame of the video stream, as a texture, to a rectangle in the rendered three-dimensional space, and adjust the position and angle of the viewpoint so that the rectangle exactly fills the display window;
(d-2) according to the face orientation information obtained in step (c), scale, translate, and rotate the hairstyle model so that it is superimposed on the rectangle of step (d-1) with appropriate size, position, and orientation;
(d-3) according to the RGB color value selected by the user in step (b), add a light of the corresponding color to the three-dimensional space containing the rectangle and the model, then render the scene and display it in the rendering window.
In the above 3D hairstyle effect simulation method based on depth image analysis, to avoid model jitter when step (c) fails to capture the face position in some frame, first examine the coordinates of the upper-left corner of the face rectangle: if they equal a preset value, move the model according to the face position captured in the previous frame; otherwise, move it according to the face position captured in the current frame.
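The previous-frame fallback described above can be sketched as a small helper that keeps the last successfully captured position. This is an illustrative reconstruction, not code from the patent; the sentinel value `INVALID` and all names are assumptions:

```python
# Illustrative sketch of the previous-frame fallback described above.
# The sentinel value and the names are assumptions, not from the patent.

INVALID = (-1, -1)  # hypothetical sentinel meaning "face not captured this frame"

class FaceSmoother:
    """Keeps the last good face position to avoid model jitter."""

    def __init__(self):
        self.last_good = None

    def update(self, upper_left):
        # If this frame's upper-left corner equals the sentinel, the
        # tracker failed: reuse the previous frame's position.
        if upper_left == INVALID:
            return self.last_good
        self.last_good = upper_left
        return upper_left

s = FaceSmoother()
assert s.update((120, 80)) == (120, 80)   # good frame: passed through
assert s.update(INVALID) == (120, 80)     # failed frame: previous position reused
assert s.update((125, 82)) == (125, 82)   # tracking recovered
```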
In the above method, the translation of the model in the X direction is computed as Tx = [formula reproduced only as an image in the original filing], where Tx is the translation of the model in the X direction, La is the length of the rectangle described in step (d-1), Fx is the x coordinate of the upper-left corner of the face rectangle obtained in step (c), Fw is the width of the face rectangle, and Vx is the number of pixels of the video frame in the X direction.
In the above method, the translation of the model in the Y direction is computed as Ty = [formula reproduced only as an image in the original filing], where Ty is the translation of the model in the Y direction, Lb is the width of the rectangle described in step (d-1), Fy is the y coordinate of the upper-left corner of the face rectangle obtained in step (c), Fh is the height of the face rectangle, and Vy is the number of pixels of the video frame in the Y direction.
In the above method, the scaling of the model is computed as s = [formula reproduced only as an image in the original filing], where s is the scaling factor, Fa is the length of the face obtained in step (c), Fb is the width of the face obtained in step (c), and C is an empirical constant obtained through experiment.
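The three formulas above survive only as image placeholders in this text. A plausible reconstruction, inferred from the variable definitions under the assumption that the rectangle is centered at the origin and the face-center pixel is mapped linearly onto it (these are inferences, not the patent's verified formulas):

```latex
% Plausible reconstruction, inferred from the variable definitions;
% the originals appear only as images in the source document.
T_x = L_a \cdot \frac{F_x + F_w/2}{V_x} - \frac{L_a}{2}, \qquad
T_y = \frac{L_b}{2} - L_b \cdot \frac{F_y + F_h/2}{V_y}, \qquad
s = C \cdot \sqrt{F_a \cdot F_b}
```

The sign flip in the Y formula reflects that pixel coordinates grow downward while the scene's y axis points upward.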
Compared with the prior art, the present invention has the following advantages and technical effects:
1. Most current hairstyle design systems restyle the client by overlaying 2D pictures, giving a weak sense of realism. The present invention matches a 3D model to the user's face, achieving a strong sense of realism.
2. Prior systems generally take a photograph and return a result image only after background processing, which introduces a delay. The present invention superimposes the hairstyle model on the video stream at the correct position in real time, achieving an instantaneous restyling effect.
3. Prior systems generally support only frontal-face replacement and cannot show the result from multiple angles. The present invention correctly captures the face orientation even when the user turns sideways, superimposing the rotated model at the appropriate position, so the user can inspect the restyling effect from different angles.
Brief description of the drawings
Fig. 1 is the flowchart of the 3D hairstyle effect simulation method based on depth image analysis in the embodiment.
Fig. 2 is the detailed flowchart of step 4 in Fig. 1.
Detailed description of the embodiments
Embodiments of the present invention are described further below with reference to the accompanying drawings, but the implementation of the invention is not limited to them.
As shown in Fig. 1, the main flow of the 3D hairstyle effect simulation method based on depth image analysis comprises the following steps:
1. according to the hairstyle icon clicked by the user, read the corresponding hairstyle model file in IVE format from the local disk;
2. obtain the RGB color value entered by the user;
3. analyze the size, position, and orientation of the face in the video using the freely licensed Microsoft Face Tracking SDK;
4. in the three-dimensional space created with the OpenSceneGraph rendering engine, superimpose the translated, rotated, and scaled hairstyle model on the rectangle textured with the video frame, realizing the hairstyle simulation effect.
Step 1 lets the user select a hairstyle as the target; the system then reads the corresponding IVE-format model file from the local hard drive.
Step 2 lets the user enter an RGB color value. The selected color is passed to the scene created with OpenSceneGraph, and the system changes the color of the scene light, producing the hairstyle color-change effect. OpenSceneGraph is an open-source graphics rendering API that provides convenient scene management and rendering interfaces. The preview window is also rendered with OpenSceneGraph; unlike the effect scene described below, the preview scene contains only the hairstyle model and a light, whereas the effect scene contains a rectangle, the hairstyle model, and a light.
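Scene-graph lights such as osg::Light take color components normalized to [0, 1], while users typically enter RGB values in 0 to 255, so a small conversion is needed when passing the user's color to the scene. A minimal sketch (the function name is an assumption):

```python
def rgb255_to_light_color(r, g, b, alpha=1.0):
    """Convert a user-entered 0-255 RGB value to the 0.0-1.0 RGBA
    components expected by a scene-graph light (e.g. osg::Light)."""
    for c in (r, g, b):
        if not 0 <= c <= 255:
            raise ValueError("RGB components must be in 0..255")
    return (r / 255.0, g / 255.0, b / 255.0, alpha)

assert rgb255_to_light_color(255, 0, 0) == (1.0, 0.0, 0.0, 1.0)   # pure red light
assert rgb255_to_light_color(51, 102, 204) == (0.2, 0.4, 0.8, 1.0)
```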
Step 3 uses the relevant functions of the Face Tracking SDK provided by Microsoft to track, in real time, the relative size, position, and orientation of the face in the video. The implementation is as follows: after the input color image and depth image are prepared, face tracking starts. During tracking, the face position obtained in the previous frame is used as a hint, and the search is restricted to its neighborhood. If tracking fails in some frame, for example because the user moves too fast or the lighting changes suddenly, the next frame is scanned completely to relocate the face. When the scene is redrawn, the tracking result (the face position) is submitted, and the next step translates, rotates, and scales the 3D hairstyle model accordingly. The face information obtained comprises the coordinates of the upper-left corner of the face rectangle, the height and width of the face rectangle, and the rotation angles of the face about the x, y, and z axes.
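The tracking strategy described above (search near the previous frame's result, and rescan the whole frame after a failure) can be sketched in a backend-agnostic way. `detect_full_frame` and `track_near` below are hypothetical stand-ins for the SDK's detection and tracking calls, not actual Face Tracking SDK functions:

```python
# Control-flow sketch of the tracking strategy described above.
# `detect_full_frame` and `track_near` are hypothetical stand-ins
# for the SDK's detection and incremental-tracking calls.

def run_tracking(frames, detect_full_frame, track_near):
    """Yield one face result (or None) per frame."""
    last = None
    for frame in frames:
        if last is None:
            last = detect_full_frame(frame)       # full scan: no prior hint
        else:
            result = track_near(frame, last)      # search near the previous face
            last = result if result is not None else detect_full_frame(frame)
        yield last

# Frame 2 simulates a tracking failure, which triggers a full rescan:
detect = lambda f: ("full", f)
near = lambda f, prev: None if f == 2 else ("near", f)
out = list(run_tracking([1, 2, 3], detect, near))
assert out == [("full", 1), ("full", 2), ("near", 3)]
```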
As shown in Fig. 2, step 4 comprises the following steps:
4-1. attach each frame of the video stream, as a texture, to a rectangle in the rendered three-dimensional space, and adjust the position and angle of the viewpoint so that the rectangle exactly fills the display window;
4-2. according to the face orientation information obtained in step 3, scale, translate, and rotate the hairstyle model so that it is superimposed on the rectangle of step 4-1 with appropriate size, position, and orientation;
4-3. according to the RGB color value selected by the user in step 2, add a light of the corresponding color to the three-dimensional space containing the rectangle and the model, then render the scene and display it in the rendering window.
In step 4-1, the rectangle is the carrier on which the video obtained from the Kinect camera is played; making the rectangle exactly fill the display window produces the video playback effect. To achieve this, the size of the rectangle and the position of the viewpoint must be adjusted accordingly. For example, in the OpenSceneGraph coordinate system, with a rectangle of size 104*78 centered at (0, 0, 0) and the observer placed at (0, 0, -150), the rectangle exactly fills the display window. In this OpenSceneGraph coordinate system, the positive x axis points horizontally to the right, the positive y axis points vertically upward, and the positive z axis is normal to the screen, pointing toward the user.
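The example numbers can be sanity-checked: a rectangle of height h centered on the view axis at distance d exactly fills a perspective viewport whose vertical field of view is 2*atan(h/(2d)) and whose aspect ratio matches the rectangle's. The helper below is illustrative only; the implied field of view is inferred from the 104*78 / 150 example and is not a figure stated in the patent.

```python
import math

def implied_vertical_fov(height, distance):
    """Vertical field of view (degrees) at which a rectangle of the given
    height, centered on the view axis at the given distance, exactly
    fills the viewport."""
    return math.degrees(2 * math.atan(height / (2 * distance)))

La, Lb, d = 104, 78, 150              # rectangle size and viewing distance from the text
assert abs(La / Lb - 4 / 3) < 1e-12   # 4:3 aspect, matching e.g. a 640x480 video stream
fov = implied_vertical_fov(Lb, d)
assert 29.0 < fov < 29.3              # roughly 29.1 degrees of vertical FOV
```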
In step 4-2, to move the hairstyle model to the correct position, a calculation is performed using the coordinates of the face rectangle obtained in step 3. The translation of the model in the X direction is computed as Tx = [formula reproduced only as an image in the original filing], where Tx is the translation in the X direction, La is the length of the rectangle of step 4-1, Fx is the x coordinate of the upper-left corner of the face rectangle obtained in step 3, Fw is the width of the face rectangle, and Vx is the number of pixels of the video frame in the X direction. The translation in the Y direction is computed as Ty = [formula reproduced only as an image in the original filing], where Ty is the translation in the Y direction, Lb is the width of the rectangle of step 4-1, Fy is the y coordinate of the upper-left corner of the face rectangle obtained in step 3, Fh is the height of the face rectangle, and Vy is the number of pixels of the video frame in the Y direction. To scale the hairstyle model to a suitable size, a calculation is performed using the height and width of the face rectangle: the scaling factor is computed as s = [formula reproduced only as an image in the original filing], where s is the scaling factor, Fa is the length of the face obtained in step 3, Fb is the width of the face obtained in step 3, and C is an empirical constant that can be set by the user. To rotate the hairstyle model to the correct orientation, the model is rotated according to the rotation amounts of the face about the x, y, and z axes obtained in step (c).
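Since the translation and scaling formulas survive only as image placeholders in this text, the sketch below uses a plausible reconstruction inferred from the variable definitions: the face-center pixel is mapped linearly onto the rectangle (centered at the origin), and the scale grows with the face size times the empirical constant C. It should be read as an illustration of the mapping, not as the patent's exact formulas:

```python
import math

def model_transform(face, rect, video, C=1.0):
    """Compute (tx, ty, s) for the hairstyle model.

    The patent's formulas survive only as images, so these expressions
    are a plausible reconstruction from the variable definitions: the
    face-centre pixel is mapped linearly onto a rectangle centred at
    the origin, and the scale grows with the face size.  C is the
    empirical constant mentioned in the text; the face width/height
    stand in for the Fa/Fb face dimensions.
    """
    Fx, Fy, Fw, Fh = face          # face rectangle: upper-left corner + size (pixels)
    La, Lb = rect                  # 3D rectangle length (x) and width (y)
    Vx, Vy = video                 # video frame size in pixels
    tx = La * (Fx + Fw / 2) / Vx - La / 2   # pixel x mapped to rectangle x
    ty = Lb / 2 - Lb * (Fy + Fh / 2) / Vy   # pixel y grows downward, scene y upward
    s = C * math.sqrt(Fw * Fh)              # assumed scale law
    return tx, ty, s

# A face centred in a 640x480 frame maps to the rectangle centre:
tx, ty, s = model_transform((280, 200, 80, 80), (104, 78), (640, 480), C=0.05)
assert abs(tx) < 1e-9 and abs(ty) < 1e-9
assert abs(s - 4.0) < 1e-9   # 0.05 * sqrt(80*80) = 4.0
```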
Steps 3 and 4 form the main loop; they run continuously unless the user selects a different hairstyle or stops the program.
In summary, according to the user's selection the present invention reads the IVE-format hairstyle model file from the local disk and obtains the RGB color value entered by the user. Each frame of the color video input from the Kinect camera is then attached, as a texture, to the rectangle in the created three-dimensional space; the size, position, and orientation of the face in the video stream are computed; the hairstyle model is superimposed at the appropriate position in that space by translation, rotation, and scaling; a light of the color entered by the user is added to the scene; and the result is rendered and displayed. The user can view the restyling effect in real time.

Claims (7)

1. A 3D hairstyle effect simulation method based on depth image analysis, characterized by comprising the following steps:
(a) according to the hairstyle icon selected by the user, reading the corresponding hairstyle model file in IVE format from the local disk;
(b) obtaining the RGB color value entered by the user and applying the hairstyle color conversion;
(c) obtaining the input of the Kinect camera and analyzing the size, position, and orientation of the face in the video frame;
(d) in three-dimensional space, translating, rotating, and scaling the hairstyle model according to the face position obtained in step (c), and displaying it superimposed on the video stream obtained from the camera, realizing the hairstyle simulation effect; in the three-dimensional space, the positive x axis points horizontally to the right, the positive y axis points vertically upward, and the positive z axis is normal to the screen, pointing toward the user.
2. The 3D hairstyle effect simulation method based on depth image analysis according to claim 1, characterized in that, in step (b), the color selected by the user is passed to the created scene and the system changes the color of the scene light, realizing the hairstyle color-change effect; the color value is the RGB color value entered by the user.
3. The 3D hairstyle effect simulation method based on depth image analysis according to claim 1, characterized in that step (d) comprises the following steps:
(d-1) attaching each frame of the video stream, as a texture, to a rectangle in the created three-dimensional space, and adjusting the position and angle of the viewpoint so that the rectangle exactly fills the display window;
(d-2) according to the face orientation information obtained in step (c), scaling, translating, and rotating the hairstyle model so that it is superimposed on the rectangle of step (d-1) with appropriate size, position, and orientation;
(d-3) according to the RGB color value selected by the user in step (b), adding a light of the corresponding color to the three-dimensional space containing the rectangle and the hairstyle model, then rendering the scene and displaying it in the rendering window.
4. The 3D hairstyle effect simulation method based on depth image analysis according to claim 3, characterized in that, in step (d-2): the coordinates of the upper-left corner of the face position are examined first; if they equal the set value, the model is moved according to the face position captured in the previous frame; otherwise, it is moved according to the face position captured in the current frame.
5. The 3D hairstyle effect simulation method based on depth image analysis according to claim 3, characterized in that, in step (d-2), the translation of the model in the X direction is computed as Tx = [formula reproduced only as an image in the original filing], where Tx is the translation of the model in the X direction, La is the length of the rectangle described in step (d-1), Fx is the x coordinate of the upper-left corner of the face rectangle obtained in step (c), Fw is the width of the face rectangle, and Vx is the number of pixels of the video frame in the X direction.
6. The 3D hairstyle effect simulation method based on depth image analysis according to claim 3, characterized in that, in step (d-2), the translation of the model in the Y direction is computed as Ty = [formula reproduced only as an image in the original filing], where Ty is the translation of the model in the Y direction, Lb is the width of the rectangle described in step (d-1), Fy is the y coordinate of the upper-left corner of the face rectangle obtained in step (c), Fh is the height of the face rectangle, and Vy is the number of pixels of the video frame in the Y direction.
7. The 3D hairstyle effect simulation method based on depth image analysis according to claim 3, characterized in that, in step (d-2), the scaling of the model is computed as s = [formula reproduced only as an image in the original filing], where s is the scaling factor, Fa is the length of the face obtained in step (c), Fb is the width of the face obtained in step (c), and C is a constant.
CN201310430905.8A 2013-09-18 2013-09-18 3D hair style effect simulation system based on depth image analysis Active CN103489219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310430905.8A CN103489219B (en) 2013-09-18 2013-09-18 3D hair style effect simulation system based on depth image analysis


Publications (2)

Publication Number Publication Date
CN103489219A true CN103489219A (en) 2014-01-01
CN103489219B CN103489219B (en) 2017-02-01

Family

ID=49829414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310430905.8A Active CN103489219B (en) 2013-09-18 2013-09-18 3D hair style effect simulation system based on depth image analysis

Country Status (1)

Country Link
CN (1) CN103489219B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103854303A * 2014-03-06 2014-06-11 寇懿 Three-dimensional hair style design system and method based on somatosensory sensor
CN107452034A * 2017-07-31 2017-12-08 广东欧珀移动通信有限公司 Image processing method and its device
CN108182588A * 2017-11-29 2018-06-19 深圳中科蓝海创新科技有限公司 A kind of hair style design and clipping device, system and method, equipment and medium
CN111182350A * 2019-12-31 2020-05-19 广州华多网络科技有限公司 Image processing method, image processing device, terminal equipment and storage medium
CN111510769A * 2020-05-21 2020-08-07 广州华多网络科技有限公司 Video image processing method and device and electronic equipment
CN112015934A * 2020-08-27 2020-12-01 华南理工大学 Intelligent hair style recommendation method, device and system based on neural network and Unity
CN112015934B * 2020-08-27 2022-07-26 华南理工大学 Intelligent hair style recommendation method, device and system based on neural network and Unity
CN112084983A * 2020-09-15 2020-12-15 华南理工大学 ResNet-based hair style recommendation method and application thereof
CN112084983B * 2020-09-15 2022-07-26 华南理工大学 Hair style recommendation method based on ResNet and application thereof
CN113628350A * 2021-09-10 2021-11-09 广州帕克西软件开发有限公司 Intelligent hair dyeing and testing method and device
CN114880057A * 2022-04-22 2022-08-09 北京三快在线科技有限公司 Image display method, image display device, terminal, server, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010024486A1 (en) * 2008-08-29 2010-03-04 Sang Guk Kim 3d hair style simulation system and method using augmented reality
CN102737235A (en) * 2012-06-28 2012-10-17 中国科学院自动化研究所 Head posture estimation method based on depth information and color image
CN103065360A (en) * 2013-01-16 2013-04-24 重庆绿色智能技术研究院 Generation method and generation system of hair style effect pictures


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
姚俊峰 (Yao Junfeng) et al., "3D technology and its application in the field of hairstyle design", Journal of System Simulation (《系统仿真学报》) *
魏尚 (Wei Shang), "Research on 3D face recognition technology based on Kinect depth images", China Master's Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库信息科技辑》) *


Also Published As

Publication number Publication date
CN103489219B (en) 2017-02-01

Similar Documents

Publication Publication Date Title
CN103489219A (en) 3D hair style effect simulation system based on depth image analysis
US11263823B2 (en) Employing three-dimensional (3D) data predicted from two-dimensional (2D) images using neural networks for 3D modeling applications and other applications
WO2019223463A1 (en) Image processing method and apparatus, storage medium, and computer device
CN112243583B (en) Multi-endpoint mixed reality conference
WO2020069049A1 (en) Employing three-dimensional data predicted from two-dimensional images using neural networks for 3d modeling applications
DE112016004216T5 (en) General Spherical Observation Techniques
CN103226830A (en) Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment
JP2014504384A (en) Generation of 3D virtual tour from 2D images
CN103543827B (en) Based on the implementation method of the immersion outdoor activities interaction platform of single camera
CN103136793A (en) Live-action fusion method based on augmented reality and device using the same
WO2022002181A1 (en) Free viewpoint video reconstruction method and playing processing method, and device and storage medium
CN113223130B (en) Path roaming method, terminal equipment and computer storage medium
CN108133454B (en) Space geometric model image switching method, device and system and interaction equipment
US9025007B1 (en) Configuring stereo cameras
KR20150106879A (en) Method and apparatus for adding annotations to a plenoptic light field
Inoue et al. Tracking Robustness and Green View Index Estimation of Augmented and Diminished Reality for Environmental Design
DuVall et al. Compositing light field video using multiplane images
US9881419B1 (en) Technique for providing an initial pose for a 3-D model
Plopski et al. Efficient in-situ creation of augmented reality tutorials
JP2017016166A (en) Image processing apparatus and image processing method
Kim et al. 3-d virtual studio for natural inter-“acting”
WO2022022260A1 (en) Image style transfer method and apparatus therefor
CN108510433B (en) Space display method and device and terminal
JP2024512447A (en) Data generation method, device and electronic equipment
CN103309444A (en) Kinect-based intelligent panoramic display method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant